IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i
Nick Harris
Hernando Bedoya
Amit Dave
Ingo Dimmer
Jana Jamsek
David Painter
Veerendra Para
Sanjay Patel
Stu Preacher
Ario Wicaksono
ibm.com/redbooks
International Technical Support Organization
September 2008
SG24-7120-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 New features with Version 6 Release 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 System i storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 IBM System Storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 System i integrated storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Managing System i availability with IBM System Storage. . . . . . . . . . . . . . . . . . . . 6
1.2.4 Copy Services and i5/OS Fibre Channel Load Source . . . . . . . . . . . . . . . . . . . . . . 7
1.2.5 Using Backup Recovery and Media Services with FlashCopy . . . . . . . . . . . . . . . . 7
1.2.6 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
5.2.6 Sizing for multipath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.2.7 Sizing for applications in an IASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.2.8 Sizing for space efficient FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.3 Sizing tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.1 Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.2 IBM Systems Workload Estimator for i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.3 IBM System Storage Productivity Center for Disk. . . . . . . . . . . . . . . . . . . . . . . . 133
5.4 Gathering information for sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.4.1 Typical workloads in i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.4.2 Identifying peak periods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.4.3 i5/OS Performance Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.5 Sizing examples with Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.5.1 Sizing the System i5 with DS8000 for a customer with iSeries model 8xx and internal
disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.5.2 Sharing DS8100 ranks between two i5/OS systems (partitions). . . . . . . . . . . . . 163
5.5.3 Modeling System i5 and DS8100 for a batch job currently running
Model 8xx and ESS 800 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.5.4 Using IBM Systems Workload Estimator connection to Disk Magic:
Modeling DS6000 and System i for an existing workload. . . . . . . . . . . . . . . . . . 189
7.3.1 RAID protected internal LSU migrating to external mirrored or
multipath LSU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.3.2 Internal LSU mirrored to internal LSU migrating to external LSU . . . . . . . . . . . . 321
7.3.3 Internal LSU mirrored to internal remote LSU migrating to external LSU . . . . . . 339
7.3.4 Internal LSU mirrored to external remote LSU migrating to external LSU . . . . . 358
7.3.5 Unprotected internal LSU migrating to external LSU . . . . . . . . . . . . . . . . . . . . . 367
7.3.6 Migrating to external LSU from iSeries 8xx or 5xx with 8 Gb LSU . . . . . . . . . . . 386
7.3.7 SAN to SAN storage migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Chapter 10. Installing the IBM System Storage DS6000 storage system . . . . . . . . . 519
10.1 Preparing the site and verifying the ship group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.1.1 Pre-installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.1.2 Ship group verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.2 Installing the DS6000 in a rack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
10.2.1 Installing storage and server enclosures in a rack . . . . . . . . . . . . . . . . . . . . . . 521
10.2.2 Attaching IBM System i host systems to the DS6000 . . . . . . . . . . . . . . . . . . . . 521
10.3 Cabling the DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
10.3.1 Connecting IBM System i hosts to the DS6000 processor cards . . . . . . . . . . . 523
10.3.2 Connecting the DS6000 to the customer network. . . . . . . . . . . . . . . . . . . . . . . 523
10.3.3 Connecting optional storage enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
10.3.4 Turning on the DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
10.4 Setting the DS6000 IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
10.4.1 Setting the DS6000 server enclosure processor card IP addresses. . . . . . . . . 526
Chapter 11. Usage considerations for Copy Services with i5/OS . . . . . . . . . . . . . . . 537
11.1 Usage considerations for Copy Services with boot from SAN . . . . . . . . . . . . . . . . . 538
11.2 Copying the entire DASD space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
11.2.1 Creating a clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
11.2.2 System backups using FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
11.2.3 Using a copy for Disaster Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
11.2.4 Considerations when copying the entire DASD space . . . . . . . . . . . . . . . . . . . 544
11.3 Using IASPs and Copy Services for System i high availability . . . . . . . . . . . . . . . . . 545
11.3.1 System architecture for System i availability. . . . . . . . . . . . . . . . . . . . . . . . . . . 545
11.3.2 Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
11.3.3 Providing both backup and DR capabilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Question 9. Do I need to upgrade any software on my HMC?. . . . . . . . . . . . . . . . . . . . . . 597
Question 10. What are the minimum software requirements to support #2847? . . . . . . . . 598
Question 11. Will the #2847 IOP work with iSeries models? . . . . . . . . . . . . . . . . . . . . . . . 598
Question 12. Do I need to upgrade my system firmware on a System i5 server? . . . . . . . 598
Question 13. What changes do I need to make to an IBM System Storage DS8000, DS6000,
or ESS model 800 series to support boot from SAN? . . . . . . . . . . . . . . . . . . . . . . . . 598
Question 14. Will I have to define the load source LUN as a “protected” or as an “unprotected”
LUN? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Question 15. Will the Fibre Channel load source require direct connectivity to my SAN storage
device, or can I go through a SAN fabric? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Question 16. Do I have to replace all of my #2844 IOPs with #2847? . . . . . . . . . . . . . . . . 599
Question 17. Can I share #2847 across multiple LPARs on the same system? . . . . . . . . 599
Question 18. Is the #2847 IOP supported in Linux or AIX partitions on System i? . . . . . . 600
Question 19. Where can I get additional information about #2847 IOP? . . . . . . . . . . . . . . 600
Question 20. Is the #2847 customer set up? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Question 21. Will my system come preloaded with i5/OS when I order #2847? . . . . . . . . 600
Question 22. What is the difference between V5R3M5 and V5R3M0 for LIC? . . . . . . . . . 600
Question 23. Can I continue to use both internal and external storage even though I have
ordered the #2847 IOP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Question 24. Could I install #2847 on my iSeries model 8xx system, or in one of the LPARs on
this system? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Question 25. Will the #2847 IOP work with V5R3M0 Licensed Internal Code? . . . . . . . . . 601
Question 26. What happens to my system name and network attributes when I perform a point
in time FlashCopy operation?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Question 27. Prior to i5/OS V6R1, multipath I/O is not supported on #2847 for SAN Load
Source. Does this mean that the LUNs attached to the Fibre Channel I/O adapter are
unprotected by multipath I/O? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Question 28. Will the base IOP that is installed in every system unit be replaced with the new
#2847 IOP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
Question 29. Why does it take a long time to ship the #2847 IOP? . . . . . . . . . . . . . . . . . . 602
Question 30. Do I need to complete the questionnaire that I got
after I ordered the #2847 IOP?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
Question 31. Where do I obtain information about the #2847 IOP in Information Center? 602
Question 32. How many Fibre Channel adapters are supported by the #2847 IOP? . . . . 602
Question 33. Can I use the #2847 IOP to attach my tape Fibre Channel I/O adapter and also
to boot from it? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Question 34. How many card slots does the #2847 IOP require? Can I install the IOP in 32-bit
slot, or does it need to be in a 64-bit slot? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™, AIX®, AS/400®, BladeCenter®, Calibrated Vectored Cooling™, DB2®, DFSMSdss™, Domino®,
DS4000™, DS6000™, DS8000™, Enterprise Storage Server®, ESCON®, eServer™, Express™, FICON®,
FlashCopy®, i5/OS®, IBM®, iSeries®, Lotus®, OS/390®, OS/400®, PartnerWorld®, POWER5™, POWER5+™,
POWER6™, PowerHA™, PowerPC®, Predictive Failure Analysis®, Redbooks®, Redbooks (logo)®, RETAIN®,
System i®, System i5®, System p®, System Storage™, System Storage DS®, System x™, System z®, Tivoli®,
TotalStorage®, Virtualization Engine™, WebSphere®, Workplace™, xSeries®, z/OS®, z/VM®, zSeries®
InfiniBand, and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade
Association.
Disk Magic, IntelliMagic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other
countries, or both.
Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States
and other countries.
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Java, JDK, JRE, JVM, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc.
in the United States, other countries, or both.
Internet Explorer, Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
Intel, Pentium, Pentium 4, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication provides a broad discussion of the new architecture of the
IBM System Storage™ DS6000™ and DS8000™ and explains how these products relate to System i
servers. The book includes information for both planning and implementing IBM System i®
with the IBM System Storage DS6000 or DS8000 series where you intend to externalize the
i5/OS® load source disk unit using boot from SAN. It also covers migration from System i
internal disks to IBM System Storage DS6000 and DS8000.
This book is intended for IBMers, IBM Business Partners, and customers who are involved in
planning and implementing external disk attachment to System i servers.
The newest release of this book accounts for the following new functions of IBM System i
POWER6™, i5/OS V6R1, and IBM System Storage DS8000 Release 3:
System i POWER6 IOP-less Fibre Channel
i5/OS V6R1 multipath load source support
i5/OS V6R1 quiesce for Copy Services
i5/OS V6R1 High Availability Solutions Manager (HASM)
i5/OS V6R1 SMI-S support
i5/OS V6R1 multipath resetter HSM function
System i HMC V7
DS8000 R3 space efficient FlashCopy®
DS8000 R3 storage pool striping
DS8000 R3 System Storage Productivity Center (SSPC)
DS8000 R3 Storage Manager GUI
Nick Harris is a Consulting IT Specialist for the System i and has spent the last eight years in
the ITSO, Rochester Center. He specializes in LPAR, System i hardware and software,
external disk, Integrated xSeries® Server for iSeries®, and Linux®. He writes IBM Redbooks
publications and teaches IBM classes worldwide on all these subjects and how they are
related to system design and server consolidation. He spent 13 years in the U.K. AS/400®
Business and has experience in S/36, S/38, AS/400, and System i servers. You can contact
him by sending e-mail to [email protected].
Ingo Dimmer is an IBM Advisory IT Specialist for System i and a PMI Project Management
Professional working in the IBM STG Europe storage support organization in Mainz,
Germany. He has eight years of experience in enterprise storage support from working in IBM
post-sales and pre-sales support. He holds a degree in Electrical Engineering from the
Gerhard-Mercator University, Duisburg. His areas of expertise include System i external disk
storage solutions, I/O performance, and tape encryption for which he has been an author of
several whitepapers and IBM Redbooks publications. You can contact him by sending e-mail
to [email protected].
Jana Jamsek is an IT specialist in IBM Slovenia. She works in Storage Advanced Technical
Support for Europe as a specialist for IBM System Storage and i5/OS systems. Jana has
eight years of experience in the iSeries and AS/400 area and five years of experience in
storage. She holds a master's degree in computer science and a degree in mathematics from
the University of Ljubljana, Slovenia. She has authored several IBM Redbooks publications.
You can contact Jana by sending e-mail to [email protected].
David Painter is a System i Technical Manager with Morse Group in the U.K. He studied
Electronic Physics at University of London and is also an IBM Certified Solutions Expert. He
provides both pre- and post-sales technical support to customers across Europe. David has
20 years of experience with the iSeries and System/38 product line and currently holds
numerous IBM certifications. You can contact him by sending e-mail to
[email protected].
Veerendra Para is an advisory IT Specialist for System i in IBM Bangalore, India. His job
responsibility includes planning, implementation, and support for all the System i platforms.
He has nine years of experience in the IT field. He has over six years of experience in AS/400
installations, networking, transition management, problem determination and resolution, and
implementations at customer sites. He has worked for IBM Global Services and IBM SWG.
He holds a diploma in Electronics and Communications. You can contact him by sending
e-mail to [email protected] or [email protected].
Sanjay Patel is a staff software engineer within the IBM Systems and Technology group at
Rochester, Minnesota. He has 10 years of experience with the System i platform, having
worked in Backup Recovery and Media Services (BRMS) development at IBM since 2001.
Currently, Sanjay is a Technical Leader for the BRMS product and is involved with product design
and development. You can contact him by sending e-mail to [email protected].
Stu Preacher is a Consulting IT Specialist from IBM U.K. Stu has extensive experience in
System i availability and external storage. Stu now works for IBM System Storage specializing
in attachment to System i servers. You can contact him by sending e-mail to
[email protected].
Ario Wicaksono is an IT Specialist for System i at IBM Indonesia. He has two years of
experience in Global Technology Services as System i support. His areas of expertise are
System i hardware and software, external storage for System i, Hardware Management
Console, and LPAR. He holds a degree in Electrical Engineering from the University of
Indonesia. You can contact him by sending e-mail to [email protected].
Thanks to the following people for their contributions to this project:
Ginny McCright
Mike Petrich
Curt Schemmel
Clark Anderson
Joe Writz
Scott Helt
Jeff Palm
Henry May
Tom Crowley
Andy Kulich
Jim Lembke
Lee La Frese
Kevin Gibble
Diane E. Olson
Jenny Dervin
Adam Aslakson
Steven Finnes
Selwyn Dickey
John Stroh
Tim Klubertanz
Dave Owen
Ron Devroy
Scott Maxson
Dawn May
Sergrey Zhiganov
Gerhard Pieper
IBM Rochester development lab
Thanks also to the following people who shared written material from IBM System Storage
DS8000: Copy Services in Open Environments, SG24-6788:
Jana Jamsek
Bertrand Dufrasne
International Technical Support Organization, San Jose, California
Become a published author
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Part 1. Introduction
This book is divided into multiple sections. This part introduces the new architecture of the
IBM System Storage DS6000 and DS8000 and how these products relate to System i
servers.
Chapter 1. Introduction
This chapter provides an introduction to System i and External Storage.
The IBM System i platform is one of the most complete and secure integrated business
systems, designed to run thousands of the world’s most popular business applications. It
provides faster, more reliable, and highly secure ways to help you simplify your IT
environment by reducing the number of servers and the associated staff they require, helping
you save money and reinvest in growing your business.
1.1 New features with Version 6 Release 1
The System i POWER6 IOP-less Fibre Channel technology takes full advantage of the
performance potential of IBM System Storage and eliminates the previously required I/O
processor (IOP), which can become a bottleneck for I/O performance. The dual-port IOP-less
Fibre Channel cards now supporting up to 64 LUNs per port significantly reduce the required
number of PCI slots. IOP-less Fibre Channel also introduces new D-mode IPL boot from tape
support for Fibre Channel attached tapes.
i5/OS V6R1 enhances the multipath function to support multipath external load source, which
brings enhanced functionality to boot from SAN support that was introduced previously with
i5/OS V5R3M5. For FlashCopy solutions, the i5/OS V6R1 quiesce for Copy Services function
can help to improve system availability by eliminating the need to turn off the server before
taking a FlashCopy system image. The new i5/OS V6R1 licensed program 5761-HAS High
Availability Solutions Manager (HASM) or PowerHA™ for i integrates the management of
disaster recovery or high availability solutions with IBM System Storage Copy Services into
i5/OS. From the system management GUI perspective, a new IBM Systems Director
Navigator for i5/OS, which is a Web browser based GUI, replaces the iSeries Navigator. The
SMI-S protocol support of i5/OS V6R1 helps to integrate i5/OS systems with IBM Systems
Director management software solutions.
On the storage side, the IBM System Storage DS8000 Release 3 in particular enhances
storage virtualization and integration.
The DS8000 R3 space efficient FlashCopy function eliminates the need to fully provision the
physical capacity for the FlashCopy target volume. In addition, the System Storage
Productivity Center that ships with new DS8000 R3 systems improves functionality and
usability by consolidating the management consoles for DS8000 and IBM SAN Volume
Controller (SVC) and by providing a pre-installed System Storage Productivity Center storage
management solution. The enabled System Storage Productivity Center Basic Edition
provides basic storage, SAN, and data management capabilities that you can upgrade easily
to enhanced functionality by applying licenses for System Storage Productivity Center for
Disk, System Storage Productivity Center for Data, or System Storage Productivity Center for
Fabric. A new cache algorithm, Adaptive Multi-Stream Prefetching (AMP), is implemented in
the DS8000 to optimize cache efficiency, especially for sequential workloads that are
constrained to a single rank.
1.2.1 IBM System Storage solutions
IBM System Storage solutions bring new advanced storage capabilities to System i by
allowing more storage consolidation and flexibility in an enterprise environment, such as
multiserver connectivity, fully redundant hardware (including NVS cache, RAID-5, or RAID-10
protection), Copy Services solutions, and advanced storage allocation capabilities. Many
customer environments employ a strategic direction where all servers utilize SAN-based
storage, and the DS8000 is designed to meet those needs.
Figure 1-1 Virtual I/O hosting by i5/OS
i5/OS integrated storage management can eliminate the need for specialized storage skills,
such as storage subsystem, SAN, and fix management.
Internal storage does not allow you to migrate storage capacity easily to other systems and
must stay within the limits of SCSI, High Speed Loop (HSL), or 12X InfiniBand® boundaries.
Alternatively, SAN offers greater flexibility when it comes to sharing storage
among multiple servers or to attaching storage across distances using Fibre Channel.
Data replication solutions with internal storage operating on a logical object level typically
increase the System i CPU usage because of the server processing the data replication and
might require additional administrative effort to keep the redundant System i servers
synchronized.
When combined with i5/OS availability solutions, including journaling and commitment
control, High Availability Business Partner (HABP) logical replication solutions, Domino online
backup, and System i clusters, customers have many options for meeting their business
continuity objectives. These solutions are not mutually exclusive, but they are not always
interchangeable either. Each solution has its own benefits and considerations. For a good
overview of these data resiliency solutions, see Data Resilience Solutions for IBM i5/OS High
Availability Clusters, REDP-0888.
1.2.4 Copy Services and i5/OS Fibre Channel Load Source
The introduction of i5/OS boot from SAN further expands i5/OS availability options by
exploiting solutions such as FlashCopy, which is provided through IBM System Storage Copy
Services functions. With boot from SAN, you no longer have to use remote load source
mirroring to mirror your internal load source to a SAN attached load source. Instead, you can
now place the load source directly inside a SAN attached storage subsystem and with i5/OS
V6R1 even use multipath attachment to the external load source for redundancy.
Boot from SAN enables easier bring up of a system environment that has been copied using
Copy Services functions such as FlashCopy or PPRC. During the restart of a cloned
environment, you no longer have to perform the Recover Remote Load Source Disk Unit
through Dedicated Service Tools (DST), thus reducing the time and overall steps required to
bring up a point-in-time system image after FlashCopy and PPRC functions have been
completed.
Since June of 2005, you can manage Copy Services functions on eServer i5 systems through
a command-line interface (CLI), called IBM System Storage DS® CLI, and a Web-based
interface called IBM System Storage DS Storage Manager. The DS Storage Manager allows
you to set up and manage point-in-time copy. This feature includes FlashCopy and enables
you to create full volume copies of data using source and target volumes that span logical
subsystems within a single storage unit. After the FlashCopy function completes, you can
access the target point-in-time system image immediately by associating it with another
System i server or a logical partition.
Note: Using the new i5/OS V6R1 quiesce for Copy Services function effective for both
SYSBAS and IASPs, you no longer need to turn off the source system prior to initiating the
FlashCopy function. Similar to a system shutdown, the quiesce function ensures that
contents of main storage (memory) are written to disks.
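The following sketch shows how such a quiesce-protected FlashCopy might look from the command line. The storage image ID, volume pairs, and timeout value are illustrative assumptions only, not values from this book. The CHGASPACT commands are entered on the i5/OS command line, and the mkflash command is entered in a DS CLI session:

   CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(300)
   mkflash -dev IBM.2107-75ABCDE 1000:1100 1001:1101
   CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)

The first command suspends database transactions and flushes changed pages from main storage to disk (or times out after the specified number of seconds), the mkflash command establishes the point-in-time relationships for the source:target volume pairs, and the final command resumes normal activity on the source system.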
Utilities such as the Clear Pool (CLRPOOL) command can help clear the contents of a
memory pool but do not necessarily clear all of the contents of main storage. Before i5/OS
V6R1, the requirement to shut down the source system completely, so that all contents of
main storage were written to the disks prior to initiating FlashCopy, could be avoided by
combining Copy Services functions with an IASP. Placing your data and applications in an
IASP enables you to perform a vary off on the source system, which ensures that all in-flight
transactions in system memory are committed to disk before the disk units are varied off.
FlashCopy can then be initiated, and the disks can be varied on immediately after the
FlashCopy task has completed. You can also automate this entire process using the IASP
Copy Services Toolkit. Refer to IBM System Storage Copy Services and IBM i: A Guide to
Planning and Implementation, SG24-7103 for more information.
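A minimal sketch of this IASP-based sequence, with an assumed IASP device name and illustrative volume IDs, is:

   VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*OFF)
   mkflash -dev IBM.2107-75ABCDE 1200:1300 1201:1301
   VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*ON)

The vary off forces in-flight transactions in the IASP to disk, the DS CLI mkflash command establishes the point-in-time copy of the IASP volumes, and the vary on makes the IASP available again on the source system. The IASP Copy Services Toolkit automates steps of this kind.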
The cloning of a System i source system using BRMS means that an identical disk image is
created, including system unique attributes such as system name, local location name,
TCP/IP configuration, and relational database directory entries. Because BRMS relies on
managing its common tape and media inventory based on unique system names, a change
had to be made to accommodate the use of BRMS with FlashCopy. The enhancement
enables you to continue to use BRMS as your backup choice and to keep using the recovery
options and recovery reports to restore the source system, should it be required, even when
the backups were completed using the point-in-time copy of your entire disk storage attached
to a target system or a logical partition.
For additional planning considerations and enabling BRMS with FlashCopy, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.
1.2.6 Summary
The first release of this book in 2005 focused on the new boot from SAN function with its
capability to place the load source disk unit directly inside an IBM ESS 800, DS6000, or
DS8000 series. Boot from SAN enables additional availability options, such as creating a
point-in-time instance with FlashCopy for backup and recovery as well as disaster recovery
needs. This capability, combined with the enhancements to BRMS, enables you to better
manage point-in-time disk images and reduces the overall steps that are required to start a
System i server or a logical partition utilizing the cloned disk image.
The new System i POWER6 IOP-less support takes advantage of the full performance
potential of IBM System Storage and reduces the cost of deploying a SAN storage solution on
System i by requiring significantly fewer PCI slots and adapters.
Tighter integration of i5/OS with IBM System Storage, through the new i5/OS V6R1 quiesce
for Copy Services availability feature and the integrated High Availability Solutions Manager,
eases the deployment of external storage with System i.
If you use System i internal storage, now is the time for a paradigm shift toward the external
storage strategy that is now available for System i.
Chapter 4, “i5/OS planning for external storage” on page 75 discusses important planning
considerations that are required to exploit the external storage capabilities with System i.
Because the mirror solutions are compatible between the ESS and the DS8000 series, it is
possible to set up a disaster recovery solution with the high-performance DS8000 at the
primary site and the ESS at the secondary site, where the same performance is not required.
2.2.2 DS8000 compared to DS6000
DS6000 and DS8000 now offer an enterprise continuum of storage solutions. All copy
functions (with the exception of z/OS Global Mirror, which is available only on the DS8000) are
available on both systems. You can do Metro Mirror, Global Mirror, and Global Copy between
the series. The CLI commands and the GUI look the same for both systems.
Obviously, the DS8000 can deliver a higher throughput and scales higher than the DS6000,
but not all customers need this high throughput and capacity. You can choose the system that
fits your needs. Both systems support the same SAN infrastructure and the same host
systems.
So it is very easy to have a mixed environment with DS8000 and DS6000 systems to optimize
the cost effectiveness of your storage solution, while providing the cost efficiencies of
common skills and management functions.
Logical partitioning, which is available with some DS8000 models, is not available on the
DS6000. For more information about the DS6000, refer to IBM System Storage DS6000
Series: Architecture and Implementation, SG24-6781.
If you want to keep your ESS and if it is a model 800 or 750 with Fibre Channel adapters, you
can use your existing ESS, for example, as a secondary for remote copy. With the ESS at the
appropriate LIC level, scripts or CLI commands written for Copy Services work for both the
ESS and the DS6800.
For most environments, the DS6800 performs better than an ESS. You might even replace
two ESS 800s with one DS6800. The sequential performance of the DS6800 is excellent.
However, when you plan to replace an ESS with a large cache (for example, more than
16 GB) with a DS6800 (which comes with 4 GB cache) and you currently get the benefit of a
high cache hit rate, your cache hit rate on the DS6800 will be lower. This lower cache hit rate is
caused by the smaller cache. z/OS benefits from large cache, so for transaction-oriented
workloads with high read cache hits, careful planning is required.
Logical partitioning with some DS8000 models is not available on the DS6000. For more
information about the DS8000, refer to IBM System Storage DS8000 Series: Architecture and
Implementation, SG24-6786.
Both product families have about the same size and capacity but their functions differ. With
respect to performance, the DS4000 series range is below the DS6000 series.
The DS4000 series products allow you to grow with a granularity of a single disk drive, while
with the DS6000 series you have to order at least four drives. Currently the DS4000 series
also is more flexible with respect to changing RAID arrays on the fly and changing LUN sizes.
The implementation of FlashCopy on the DS4000 series is different when compared to the
DS6000 series. On a DS4000 series, space is needed only for the changed data; however,
you need the full LUN size for the copy LUN on a DS6000 series. Although the target LUN on
a DS4000 series cannot be used for production, it can be used for production on the DS6000
series. If you need a real copy of a LUN on a DS4000 series, you can do a volume copy.
However, this process can take a long time before the copy is available for use. On a DS6000
series, the copy is available for production after a few seconds.
While the DS4000 series also offers remote copy solutions, these functions are not
compatible with the DS6000 series.
With the implementation of the POWER5™ Server Technology in the DS8000 it is possible to
create storage system logical partitions (LPARs) that can be used for completely separate
production, test, or other unique storage environments.
The DS8000 is a flexible and extendable disk storage subsystem because it is designed to
add and adapt to new technologies as they become available. The new packaging also
includes new management tools, such as the DS Storage Manager and the DS command-line
interface (CLI), which allow for the management and configuration of the DS8000 series as
well as the DS6000 series.
The DS8000 series is designed for 24x7 environments in terms of availability while still
providing the industry leading remote mirror and copy functions to ensure business continuity.
Figure 2-1 DS8000 base frame
POWER5+ technology
The DS8000 series exploits the IBM System p® POWER5+™ technology, which is the
foundation of the storage system LPARs. The DS8100 Model 931 utilizes a dual 2-way
processor complex of 64-bit microprocessors, and the DS8300 Models 932 and 9B2 use a
dual 4-way processor complex. Within the POWER5+ servers, the DS8000 series
offers up to 256 GB of cache, which is up to four times as much cache as the previous ESS
models.
Internal fabric
DS8000 comes with a high bandwidth, fault tolerant internal interconnection, which is also
used in the IBM System p server, called RIO-2 (Remote I/O). RIO-2 can operate at speeds up
to 1 GHz and offers a 2 GBps sustained bandwidth per link. On System i and System p
servers, it is also known as High Speed Link (HSL).
Host adapters
The DS8000 offers enhanced connectivity with the availability of four-port Fibre
Channel/FICON® host adapters. The 4 Gbps Fibre Channel/FICON host adapters, which are
offered in longwave and shortwave, can also auto-negotiate to 1 Gbps and 2 Gbps link
speeds. This flexibility enables immediate exploitation of the benefits offered by the higher
performance, 4 Gbps SAN-based solutions, while also maintaining compatibility with existing
1 Gbps and 2 Gbps infrastructures. In addition, the four ports on the adapter can be
configured with an intermix of Fibre Channel Protocol (FCP) and FICON, which can help
protect your investment in fibre adapters and increase the ability to migrate to new servers.
The DS8000 also offers 2-port ESCON® adapters. A DS8000 can support up to a maximum
of 32 host adapters, which provide up to 128 Fibre Channel/FICON ports.
With DS8000 Release 3, new machines are shipped with a System Storage Productivity
Center (SSPC) console. The SSPC helps to centralize data center storage management by
supporting heterogeneous SMI-S conforming systems and storage devices. The first SSPC
release supports management of IBM System Storage DS8000 and SAN Volume Controller
(SVC). The SSPC is an external (that is, not rack-mounted) System x server running Windows
Server 2003 and System Storage Productivity Center Basic Edition. With pre-installed
System Storage Productivity Center for Disk, Data and Fabric and optional System Storage
Productivity Center for Replication installation, an easy upgrade path for advanced
end-to-end storage management functionality is available as shown in Figure 2-2.
Figure 2-2 SSPC upgrade path: System Storage Productivity Center for Disk (disk and
virtualization performance management, administration, and operations), System Storage
Productivity Center for Data (asset and capacity reporting and monitoring; file systems and
database management), and System Storage Productivity Center for Fabric (SAN
administration, operations, and performance management), pre-installed and unlocked by a
priced license file
With SSPC the DS8000 Storage Manager GUI front-end has been moved from the S-HMC to
the SSPC where it can be accessed directly from the TPC Element Manager or using a Web
browser pointing to the SSPC (see 9.1.2, “Installing DS8000 Storage Manager” on page 465).
For additional flexibility, feature conversions are available to exchange existing disk drive sets
when purchasing new disk drive sets with higher capacity, or higher speed disk drives.
In the first frame, there is space for a maximum of 128 disk drive modules (DDMs) and every
expansion frame can contain 256 DDMs. Thus there is, at the moment, a maximum limit of
640 DDMs, which in combination with the 300 GB drives gives a maximum capacity of
192 TB.
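For reference, the raw capacity quoted here follows directly from the drive counts (decimal units, before spares and RAID overhead are subtracted):

   128 + (2 x 256) = 640 DDMs
   640 x 300 GB = 192 000 GB = 192 TB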
Price, performance, and capacity can further be optimized to help meet specific application
and business requirements through the intermix of different DDM sizes and speeds.
Note: The intermix of DDMs is not supported within the same disk enclosure or disk
enclosure pair (front/back enclosure of the same rack position).
For more information about capacity planning, see 4.2.6, “Planning for capacity” on page 93.
The first application of the IBM Virtualization Engine™ technology in the DS8000 partitions
the subsystem into two virtual storage system images. The processors, memory, adapters,
and disk drives are split between the images. There is a robust isolation between the two
images through hardware and the POWER5 Hypervisor firmware.
With these separate resources, each storage system LPAR can run the same or different
versions of microcode and can be used for completely separate production, test, or other
unique storage environments within this single physical system. This capability can enable
storage consolidations, where separate storage subsystems were required previously and
can help to increase management efficiency and cost effectiveness.
attached environments, can help support storage consolidation requirements and dynamic,
changing environments.
The DS8000 supports a rich set of Copy Service functions and management tools that you
can use to build solutions to help meet business continuance requirements. These include
IBM System Storage Resiliency family point-in-time copy and Remote Mirror and Copy
solutions that are supported currently by the ESS.
Note: Remote Mirror and Copy was referred to as Peer-to-Peer Remote Copy (PPRC) in
earlier documentation for the IBM System Storage Enterprise Storage Server.
You can manage Copy Services functions through the DS command-line interface (CLI),
called the IBM System Storage DS CLI, and the Web-based interface, called the IBM System
Storage DS Storage Manager. The DS Storage Manager allows you to set up and manage
data copy features from anywhere that network access is available.
IBM System Storage z/OS Global Mirror (Extended Remote Copy XRC)
z/OS Global Mirror is a remote data mirroring function available for the z/OS and OS/390®
operating systems. It maintains a copy of the data asynchronously at a remote location over
unlimited distances. z/OS Global Mirror is well suited for large zSeries® server workloads and
can be used for business continuance solutions, workload movement, and data migration.
For further information about the function of the IBM System Storage Copy Services features,
refer to 2.7, “Copy Services overview” on page 46.
2.3.6 Interoperability
As we mentioned before, the DS8000 supports a broad range of server environments. But
there is another big advantage regarding interoperability. The DS8000 Remote Mirror and
Copy functions can interoperate between the DS8000, the DS6000, and ESS Models
800/750. This offers dramatically increased flexibility in developing mirroring and remote
copy solutions and also the opportunity to deploy business continuity solutions at lower cost
than was previously available.
For maintenance and service operations, the Storage Hardware Management Console
(S-HMC) is the focal point. The management console is a dedicated workstation that is
physically located (installed) inside the DS8000 subsystem and can monitor the state of the
system automatically, notifying you and IBM when service is required.
The S-HMC is also the interface for remote services (call home and call back). You can
configure remote connections to meet customer requirements. It is possible to allow one or
more of the following:
Call on error (machine detected)
Connection for a few days (customer initiated)
Remote error investigation (service initiated)
The remote connection between the management console and the IBM service organization
is done using a virtual private network (VPN) point-to-point connection over the internet or
modem.
The DS8000 comes with a four-year warranty on both hardware and software. This type of
warranty is outstanding in the industry and shows the confidence that IBM has in this product.
In addition, this warranty helps make the DS8000 a product with a low total cost of ownership
(TCO).
The common functions for storage management include the IBM System Storage DS Storage
Manager, which is the Web-based graphical user interface, the IBM System Storage DS
command-line interface (CLI), and the IBM System Storage DS open application
programming interface (API).
FlashCopy, Metro Mirror, Global Copy, and Global Mirror are the common functions regarding
the Advanced Copy Services. In addition, the DS6000/DS8000 series mirroring solutions are
also compatible with the IBM System Storage ESS 800 and ESS 750, which offers new
flexibility and cost effectiveness in designing business continuity solutions.
The following list highlights a few of the specific types of functions that you can perform with
the DS command-line interface:
Check and verify your storage unit configuration
Check the current Copy Services configuration that is used by the storage unit
Create new logical storage and Copy Services configuration settings
Modify or delete logical storage and Copy Services configuration settings
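As an illustration of the tasks in this list, the following DS CLI commands (with an assumed storage image ID and illustrative volume IDs) query and change a configuration:

   lsarraysite -dev IBM.2107-75ABCDE
   lsextpool -dev IBM.2107-75ABCDE
   lsfbvol -dev IBM.2107-75ABCDE
   lsflash -dev IBM.2107-75ABCDE 1000-100F
   rmflash -dev IBM.2107-75ABCDE 1000:1100

The first three commands verify the installed array sites, the configured extent pools, and the defined fixed block volumes; lsflash shows the current FlashCopy relationships for a range of source volumes; and rmflash deletes one source:target relationship.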
We describe the DS CLI in detail in Chapter 9, “Using DS GUI with System i” on page 439.
With the DS8000 series there are various choices of base and expansion models, so it is
possible to configure the storage units to meet your particular performance and configuration
needs. The DS8100 (latest model 931) features a dual 2-way processor complex and support
for one expansion frame. The DS8300 (latest models 932 and 9B2) features a dual 4-way
processor complex and support for one or two expansion frames, which can be extended
through an RPQ to as many as four expansion frames supporting up to 1024 DDMs. The Model 9B2
supports two IBM System Storage System LPARs (Logical Partitions) in one physical
DS8000.
The DS8100 offers up to 128 GB of processor memory and the DS8300 offers up to 256 GB
of processor memory. In addition, the Non-Volatile Storage (NVS) scales to the processor
memory size selected, which can also help optimize performance.
The access to LUNs by the host systems is controlled through volume groups. Hosts or disks
in the same volume group share access to data. This is the new form of LUN masking.
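A sketch of this form of LUN masking with the DS CLI follows; the volume group name, volume IDs, worldwide port name, and host connection name are illustrative assumptions. The volume group ID (V11 in this example) is assigned by the storage unit when the group is created:

   mkvolgrp -dev IBM.2107-75ABCDE -type os400mask -volume 1000-100F ITSO_i5_vg
   mkhostconnect -dev IBM.2107-75ABCDE -wwname 10000000C9481234 -hosttype iSeries -volgrp V11 ITSO_i5_port0
   showvolgrp -dev IBM.2107-75ABCDE V11

Only the host port defined by the mkhostconnect command can then access the volumes in volume group V11.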
These are all features that can react to changing workload and performance requirements,
showing the enormous flexibility of the DS8000 series.
In a small 3U footprint, the new storage subsystem provides performance and functions for
business continuity, disaster recovery and resiliency, previously only available in expensive
high-end storage subsystems. The DS6000 series is compatible regarding Copy Services
with the previous Enterprise Storage Server (ESS) Models 800 and 750, as well as with the
new DS8000 series.
The DS6000 series opens an entirely new era in price, performance, and scalability. Now, for
the first time, zSeries and iSeries customers have the option of a midrange-priced storage
subsystem with all the features and functions of an enterprise storage subsystem.
Some clients do not like to put large amounts of storage behind one storage controller. In
particular, the controller part of a high-end storage system makes it expensive. Now you have
a choice. You can build very cost-efficient storage systems by adding expansion
enclosures to the DS6800 controller, but because the DS6800 controller is not really
expensive, you can also grow horizontally by adding other DS6800 controllers. You also have
the option to grow into the DS8000 series easily by adding DS8000 systems to your
environment or by replacing DS6000 systems (see Figure 2-4).
Figure 2-4 Scaling options of the DS6000 and DS8000 series
The processors
The DS6800 utilizes two 64-bit PowerPC 750GX 1 GHz processors for the storage server and
the host adapters, respectively, and another PowerPC 750FX 500 MHz processor for the
device adapter on each controller card. The DS6800 is equipped with 2 GB memory in each
controller card, adding up to 4 GB. Some part of the memory is used for the operating system
and another part in each controller card acts as nonvolatile storage (NVS), but most of the
memory is used as cache. This design to use processor memory makes cache accesses very
fast.
When data is written to the DS6800, it is placed in cache and a copy of the write data is also
copied to the NVS of the other controller card, so there are always two copies of write data
until the updates have been destaged to the disks. On System z, this mirroring of write data
can be disabled by application programs, for example, when writing temporary data (Cache
Fast Write). The NVS is battery backed up and the battery can keep the data for at least 72
hours if power is lost.
The DS6000 series controller’s Licensed Internal Code (LIC) is based on the DS8000 series
software, a greatly enhanced extension of the ESS software. Because 97% of the functional
code of the DS6000 is identical to the DS8000 series, the DS6000 has a very good base to
be a stable system.
Dense packaging
Calibrated Vectored Cooling™ technology used in System x and BladeCenter® to achieve
dense space saving packaging is also used in the DS6800. The DS6800 weighs only 49.6 kg
(109 lbs.) with 16 drives. It connects to normal power outlets with its two power supplies in
each DS6800 or DS6000 expansion enclosure. All this provides savings in space, cooling,
and power consumption.
Host adapters
The DS6800 has eight 2 Gbps Fibre Channel ports that can be equipped with from two up to
eight shortwave or longwave Small Form-factor Pluggable (SFP) transceivers. You order SFPs in pairs. The
2 Gbps Fibre Channel host ports (when equipped with SFPs) can also auto-negotiate to
1 Gbps for existing SAN components that support only 1 Gbps. Each port can be configured
individually to operate in Fibre Channel or FICON mode, but you should always have pairs.
Host servers should have paths to each of the two RAID controllers of the DS6800.
There are four paths from the DS6800 controllers to each disk drive to provide greater data
availability in the event of multiple failures along the data path. The DS6000 series systems
provide preferred path I/O steering and can automatically switch the data path used to
improve overall performance.
Aside from the drives, the DS6000 expansion enclosure contains two Fibre Channel switches
to connect to the drives and two power supplies with integrated fans.
According to your performance needs you can select from three different disk drive types: fast
73 GB drives rotating at 15 000 RPM, good performing and cost efficient 146 GB drives
operating at 10 000 or 15 000 RPM, and high capacity 300 GB drives running at 10 000 or
15 000 RPM.
The minimum storage capability with eight 73 GB DDMs is 584 GB. The maximum storage
capability with 16 300 GB DDMs for the DS6800 controller enclosure is 4.8 TB. If you want to
connect more than 16 disks, you can use the optional DS6000 expansion enclosures that
allow a maximum of 224 DDMs per storage system and provide a maximum storage
capability of 67.2 TB.
Every four or eight drives form a RAID array, and you can choose between RAID-5 and
RAID-10. The configuration process enforces that at least two spare drives are defined on
each loop. In case of a disk drive failure or even when the DS6000’s predictive failure analysis
comes to the conclusion that a disk drive might fail soon, the data of the failing disk is
reconstructed on the spare disk. More spare drives might be assigned if you have drives of
mixed capacity and speed. The mix of different capacities and speeds will not be available at
general availability, but at a later time.
2.5.3 DS management console
The DS management console consists of the DS Storage Manager software, shipped with
every DS6000 series system, and a computer system on which the software can run. The
DS6000 management console running the DS Storage Manager software is used to
configure and manage DS6000 series systems. The software runs on a Windows or Linux
system that the client can provide.
The DS6000 series’ Express™ Configuration Wizards guide you through the configuration
process and help get the system operational in minimal time. The DS Storage Manager’s GUI
is intuitive and very easy to understand.
A few of the specific types of functions that you can perform with the DS CLI include the following (see the example sketch after this list):
Checking and verifying storage unit configuration
Checking the current Copy Services configuration that is used by the storage unit
Creating new logical storage and Copy Services configuration settings
Modifying or deleting logical storage and Copy Services configuration settings
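As a simple illustration, the following hedged DS CLI sketch shows how a few of these checks might look from the command line. The storage image ID (IBM.1750-1300247) and the volume ID range are hypothetical placeholders and must be replaced with the values for your own unit:

lssi                                       # list the storage images known to this DS CLI session
lsfbvol -dev IBM.1750-1300247              # verify the current fixed block volume configuration
lsflash -dev IBM.1750-1300247 0000-00FF    # check existing FlashCopy relationships
lspprc -dev IBM.1750-1300247 0000-00FF     # check existing Remote Mirror and Copy relationships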
Particularly for System z and System i customers, the DS6000 series is an exciting product,
because for the first time they have the choice of a midrange priced storage system for their
environment with a performance that is similar to or exceeds that of an IBM ESS.
As data becomes more and more important for an enterprise, losing data or access to data,
even for only a few days, might be fatal for the business. Therefore, many customers,
particularly those with high-end systems like the ESS and the DS8000 series, have
implemented Remote Mirroring and Copy techniques, previously called Peer-to-Peer Remote
Copy (PPRC) and now called Metro Mirror, Global Mirror, or Global Copy. These functions are
also available on the DS6800 and are fully interoperable with ESS 800 and 750 models and
the DS8000 series.
The benefits of FlashCopy are that the point-in-time copy is immediately available for use for
backups and the source volume is immediately released so that applications can be
restarted, with minimal application downtime. The target volume is available for read and write
processing so it can be used for testing or backup purposes. You can choose to leave the
copy as a logical copy or to physically copy the data. If you choose to physically copy the data,
a background process copies tracks from the source volume to the target volume.
FlashCopy is a separately charged feature. You have to order the Point-in-time Copy feature,
which includes FlashCopy, and then follow a procedure to obtain the license key from the
Internet and install it on your DS6800.
To make a FlashCopy of a LUN or a z/OS CKD volume you need a target LUN or z/OS CKD
volume of the same size as the source within the same DS6000 system (some operating
systems also support a copy to a larger volume). z/OS customers can even do FlashCopy on
a data set level basis when using DFSMSdss™. The DS6000 also supports Concurrent Copy.
The DS Storage Manager’s GUI provides an easy way to set up FlashCopy or Remote Mirror
and Copy functions. Not all functions are available through the GUI. Instead, we recommend
that you use the new DS command-line interface (DS CLI), which is much more flexible.
Full volume FlashCopy
Full volume FlashCopy allows a FlashCopy of a logical volume either by copying all source
tracks to the target (background copy) or by copying only those tracks that are about to be modified (no background copy).
Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a
FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to
bring the target current to the source's newly established point-in-time is copied. This
reduces the load on the back end, so the disk drives are less busy and can handle more production I/O.
The direction of the refresh can also be reversed, in which case the LUN or volume previously
defined as the target becomes the source for the LUN or volume previously defined as the
source (and now the target).
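As a hedged sketch of how an incremental relationship might be set up and refreshed with the DS CLI (the storage image ID and the 0100:0200 source:target pair are hypothetical, and exact options can vary by code level):

mkflash -dev IBM.1750-1300247 -record -persist 0100:0200      # initial FlashCopy with change recording enabled
resyncflash -dev IBM.1750-1300247 -record -persist 0100:0200   # later refresh copies only the changed tracks

Reversing the copy direction, as described above, is done with the corresponding reverse command (reverseflash); check the exact syntax for your DS CLI level.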
Previously, ESS clients faced the restriction that a FlashCopy onto a volume that was being
mirrored was not possible. This restriction particularly affected z/OS clients using data set
level FlashCopy for copy operations within a mirrored pool of production volumes.
Metro Mirror
Metro Mirror was previously called Synchronous Peer-to-Peer Remote Copy (PPRC) on the
ESS. It provides a synchronous copy of LUNs or zSeries CKD volumes. A write I/O to the
source volume is not complete until it is acknowledged by the remote system. Metro Mirror
supports distances of up to 300 km.
Global Copy
This is a non-synchronous long distance copy option for data migration and backup. Global
Copy was previously called PPRC-XD on the ESS. It is an asynchronous copy of LUNs or
System z CKD volumes. An I/O is signaled complete to the server as soon as the data is in
cache and mirrored to the other controller cache. The data is then sent to the remote storage
system. Global Copy allows for copying data to far away remote sites. However, if you have
more than one volume, there is no mechanism that guarantees that the data of different
volumes at the remote site is consistent in time.
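As an illustration of the difference between the two functions, the following hedged DS CLI sketch establishes one synchronous and one non-synchronous pair. The device IDs and volume pairs are hypothetical, and it assumes that PPRC paths between the two units have already been established (for example, with mkpprcpath):

mkpprc -dev IBM.1750-1300247 -remotedev IBM.1750-1300810 -type mmir 0100:0100   # Metro Mirror (synchronous)
mkpprc -dev IBM.1750-1300247 -remotedev IBM.1750-1300810 -type gcp 0101:0101    # Global Copy (non-synchronous)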
Global Mirror
Global Mirror is similar to Global Copy but it provides data consistency.
Global Mirror is a long distance remote copy solution across two sites using asynchronous
technology. It is designed to provide the following:
Support for virtually unlimited distances between the local and remote sites, with the
distance typically limited only by the capabilities of the network and channel extension
technology being used. This can better enable you to choose your remote site location
based on business needs and enables site separation to add protection from localized
disasters.
A consistent and restartable copy of the data at the remote site, created with little impact
to applications at the local site.
Data currency, where for many environments the remote site lags behind the local site an
average of three to five seconds, helps to minimize the amount of data exposure in the
event of an unplanned outage. The actual lag in data currency experienced will depend
upon a number of factors, including specific workload characteristics and bandwidth
between the local and remote sites.
Efficient synchronization of the local and remote sites, with support for failover and failback
modes, which helps to reduce the time required to switch back to the local site after a
planned or unplanned outage.
large zSeries resiliency requirements. The DS6000 series systems can only be used as a
target system in z/OS Global Mirror operations.
2.5.6 Resiliency
The DS6000 series has built-in resiliency features that are not generally found in small
storage devices. The DS6000 series is designed and implemented with component
redundancy to help reduce and avoid many potential single points of failure.
Within a DS6000 series controller unit, there are redundant RAID controller cards, power
supplies, fans, Fibre Channel switches, and Battery Backup Units (BBUs).
There are four paths to each disk drive. Using Predictive Failure Analysis®, the DS6000 can
identify a failing drive and replace it with a spare drive without customer interaction.
Spare drives
The configuration process when forming RAID-5 or RAID-10 arrays will require that two global
spares are defined in the DS6800 controller enclosure. If you have expansion enclosures, the
first enclosure will have another two global spares. More spares could be assigned when
drive groups with larger capacity drives are added.
2.5.7 Interoperability
The DS6800 features unsurpassed enterprise interoperability for a modular storage
subsystem because it uses the same software as the DS8000 series, which is an extension of
the proven IBM ESS code. This allows for cross-DS6000/8000 management and common
software function interoperability, for example, Metro Mirror between a DS6000 and an ESS
Model 800, while maintaining a Global Mirror between the same DS6000 and a DS8000 for
some other volumes.
Light Path Diagnostics and controls are available for easy failure determination, component
identification, and repair if a failure does occur. The DS6000 series can also be remotely
configured and maintained when it is installed in a remote location.
The DS6800 consists of only five types of customer replaceable units (CRU). Light Path
indicators will tell you when you can replace a failing unit without having to shut down your
whole environment. If concurrent maintenance is not possible, which might be the case for
some double failures, the DS Storage Manager’s GUI will guide you on what to do. Of course,
a customer can also sign a service contract with IBM or an IBM Business Partner for
extended service.
The DS6800 can be configured for a call home in the event of a failure and it can do event
notification messaging. In this case an Ethernet connection to the external network is
necessary. The DS6800 can use this link to place a call to IBM or to another service provider
when it requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.
At regular intervals the DS6000 sends out a heartbeat. The service provider uses this report
to monitor the health of the call home process.
Configuration changes like adding disk drives or expansion enclosures are a nondisruptive
process. Most maintenance actions are nondisruptive, including downloading and activating
new Licensed Internal Code.
The DS6000 model 511 comes with a four-year warranty, while model 522 has a one-year IBM
onsite repair warranty that can be extended to three years through IBM Global Services
offerings. This type of warranty is outstanding in the industry and shows IBM's confidence
in this product. In addition, this warranty helps give the DS6800 a low total cost of
ownership (TCO).
LUN and volume creation and deletion is nondisruptive. When you delete a LUN or volume,
the capacity can be reused, for example, to form a LUN of a different size.
The maximum volume size has also been increased for CKD volumes. Volumes with up to
65520 cylinders can now be defined, which corresponds to about 55.6 GB. This can greatly
reduce the number of volumes that have to be managed.
Flexible LUN to LSS association
A Logical Subsystem (LSS) is constructed to address up to 256 devices.
There is no predefined association of arrays to LSSs on the DS6000 series. Clients are free
to put LUNs or CKD volumes into LSSs and make the best use of the 256 address range of
an LSS.
This new LUN masking process simplifies storage management because you no longer have
to deal with individual Host Bus Adapters (HBAs) and volumes, but instead with groups.
Summary
In summary, the DS6000 series allows for:
Up to 32 logical subsystems
Up to 8192 logical volumes
Up to 1040 volume groups
Up to 2 TB LUNs
Large z/OS volumes with up to 65520 cylinders
DS8000 architecture
DS8000 consists of a base frame and up to four expansion frames. The base frame contains:
Two processor complexes (System p model 570 servers).
I/O enclosures, which contain up to 16 host adapters to connect to host servers, and up to
eight device adapters to connect to disk drives. Device adapters are arranged in up to four
device adapter pairs.
Figure 2-6 shows the DS8000 base frame and expansion frames.
Figure 2-6 DS8000 base frame and expansion frames, showing the battery backup units and I/O enclosures 0 through 7
The DS8000 storage system consists of two processor complexes. Each processor complex
has access to multiple host adapters to connect to host systems or use for Copy Services.
Each processor complex uses several Fibre Channel Arbitrated Loop (FC-AL) device
adapters to connect to disk enclosures, and a DS8000 can have up to 16 device adapters,
arranged in up to eight device adapter pairs. Each device adapter connects the processor
complex to two switched Fibre Channel networks, and each switched network attaches storage
enclosures that contain disk drives.
The DS8000 contains Fibre Channel disk drives that reside in disk enclosures. Each
enclosure can contain up to two array sets, and each array set can contain up to 16 disk
drives. In each enclosure, there are two Fibre Channel switches, both of which are connected to all
disk drives in the enclosure.
Each device adapter connects a processor complex to disk drives in up to two disk
enclosures and connects to the disk drives in each enclosure through both switches. Disk drives from
each enclosure are connected to two device adapters in a pair. Each device adapter has
access to any disk drive through two Fibre Channel networks.
Device adapters and host adapters operate on a high-bandwidth interconnect called Remote
Input Output-G (RIO-G).
Table 2-1 shows the possible amounts of processor memory (PM) and write cache (or none) for
each model. Note that in this table, SF means storage facility image, of which there are two for
the 9B2 LPAR machine model.
Table 2-1 Processor memory and write cache for the various models
For detailed information about the DS8000 architecture, refer to IBM System Storage DS8000
Series: Architecture and Implementation, SG24-6786, which is available at:
http://www.redbooks.ibm.com/abstracts/sg246786.html?Open
DS6000 architecture
The DS6000 contains disk drives, controller cards, and power supplies in a chassis, which is
called a server enclosure. Up to seven expansion enclosures with disk drives can be
connected to the server enclosure.
Two controller cards are in the server enclosure; each of them contains a 4-port host adapter
to connect to host servers or to use for Copy Services. Each controller card also has an
integrated 4-port FC-AL device adapter that connects it to two separate Fibre Channel loops,
each of which attaches disk enclosures.
The DS6000 contains Fibre Channel disk drives. Each server enclosure or expansion enclosure
can contain up to 16 disk drives and two Fibre Channel switches through which the drives are
connected. Four ports in each switch are used to connect to other enclosures.
A device adapter on each controller card connects disk drives in the server enclosure and the
first expansion enclosure using both switches in the enclosure. Figure 2-8 shows the next
expansion enclosures connected to controller cards in FC loops.
Figure 2-8 Expansion enclosures connected to the controller cards in FC loops: each server or expansion enclosure holds up to 16 DDMs and two FC switches, the enclosures are cabled together on Loop 0 and Loop 1, and a maximum of seven enclosures can be attached per loop
Figure 2-9 DS6000 controller architecture: each controller card (Server 0 and Server 1) contains a PowerPC chipset with volatile and persistent memory and a device adapter connecting to the FC switches in the server enclosure and the first expansion enclosure (if present), with host ports attaching to the SAN fabric
In this section we describe each of these layers briefly. Figure 2-10 illustrates the
virtualization layers.
Figure 2-10 The virtualization layers: DDMs are grouped into array sites and arrays (with parity and spare capacity), arrays are formed into ranks divided into 1 GB FB extents, extents are collected into extent pools with affinity to Server 0 or Server 1, logical volumes are created from the extents, and the volumes are grouped into LSSs (for example LSS X'27') within address groups (for example X'2x' FB and X'3x' CKD, each with 4096 addresses)
Array sites
An array site in DS8000 is a group of eight disk drives, sometimes also referred to as Disk
Drive Modules (DDMs). An array site in DS8000 is pre-determined by IBM manufacturing and
is made of DDMs from the same disk enclosure. All DDMs in an array site are of the same
type and capacity.
An array site in DS6000 is a group of four DDMs. An array site in DS6000 is pre-determined
and is made of DDMs from the same disk enclosure. All DDMs in an array site are of the
same type and capacity.
Figure 2-11 Array sites formed from a drive set (16-pack)
Arrays
In DS8000 an array is created from one array site. In DS6000 an array is created from one or
two array sites. Forming an array means defining it for a specific RAID type; the supported
RAID types are RAID-5 and RAID-10.
In DS8000, depending on the sparing rules, some RAID-5 arrays contain a spare disk drive
and have parity data distributed across seven disk drives. Such an array is referred to as a
6+P+S array. In a RAID-5 array that does not contain a spare disk drive, parity data is
distributed across eight disk drives; such an array is referred to as a 7+P array.
Similarly, in DS8000, depending on sparing rules, some RAID-10 arrays contain two spare
drives and are referred to as 3+3+2 arrays. A RAID-10 array without spare disks is referred to
as a 4+4 array.
In DS6000, a RAID-5 array can contain eight DDMs (such an array is referred to as an 8-array)
or four DDMs (such an array is referred to as a 4-array). Depending on sparing rules, in
DS6000 some arrays contain a spare disk drive. In an 8-array with a spare disk drive, the
parity data is spread across seven disk drives; such an array is referred to as a 6+P+S array.
In an 8-array that does not contain a spare disk drive, parity data is distributed across eight
disk drives; such an array is referred to as a 7+P array. In a 4-array with a spare disk drive,
parity data is spread across three disk drives. In a 4-array with no spare disk drive, parity
data is spread across four DDMs.
In DS6000, some RAID-10 8-arrays or RAID-10 4-arrays contain two spare disks, depending
on the sparing rules of DS6000.
For information about sparing rules in DS8000 and DS6000, refer to 4.2.6, “Planning for
capacity” on page 93.
Figure 2-12 shows 8-arrays with RAID-5 protection. Note that in reality, parity data is spread
across seven or eight disk drives, but because it takes up the capacity of one disk drive, we
usually show it as a single disk drive.
Figure 2-12 8-arrays with RAID-5 protection (a 6+P+S array and a 7+P array; S denotes the spare and P the parity capacity)
Ranks
A rank is logically contiguous storage space made up from one array. When formatting a rank,
you decide whether the rank is formatted as fixed block or CKD. If you specify that a rank is
fixed block, the corresponding array is defined for fixed block data and can be used by open
systems. i5/OS uses fixed block arrays. If you specify that a rank is CKD, the corresponding
array is defined for CKD data and can be used by System z hosts.
When forming a rank, the capacity of the corresponding array is divided into equal extents.
The extent size of a fixed block rank is 1 GB, where GB means 2^30 bytes (also called a
binary GB). On DS storage systems, the strip size, which is the piece of data of a RAID stripe
on each physical disk, is 256 KB. So, each extent is made of 4096 strips.
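As a quick check of this relationship, using binary units:

1 GB / 256 KB = 2^30 bytes / 2^18 bytes = 2^12 = 4096 strips per extent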
Figure 2-13 Ranks formed from 73 GB arrays: after space for metadata, a 6+P+S rank provides 388 extents and a 7+P rank provides 452 extents
Extent pools
An extent pool is a group of extents of the same type (either FB or CKD) that belong to one or
more ranks of the same rankgroup. Logical volumes (LUNs) are created from extent pools.
Although it is possible that an extent pool contains extents from ranks with different
characteristics such as RAID types and DDM speeds or capacities, it is recommended that all
ranks that belong to an extent pool have the same characteristics and, consequently, that all
extents in the extent pool have homogenous characteristics. An extent pool can be created
only from ranks with the same extent type, either fixed block or CKD.
Note: We recommend that you define one extent pool for each single rank to better track
the location and performance of LUNs and to ensure that LUNs are evenly spread between
the two processors.
When you define an extent pool, you decide to which processor the ranks in the extent pool
have an affinity; the rankgroup parameter determines which of the two processors handles
their I/O processing. If an extent pool is defined for rankgroup 0, the ranks belonging to it
have an affinity to processor 0. Likewise, if the extent pool is defined for rankgroup 1, the
ranks in it have an affinity to processor 1.
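As a hedged DS CLI sketch of how single-rank extent pools with alternating processor affinity might be created (the storage image ID, pool names, and rank IDs are hypothetical):

mkextpool -dev IBM.2107-7512345 -rankgrp 0 -stgtype fb iSeries_P0   # ranks in this pool have affinity to processor 0
mkextpool -dev IBM.2107-7512345 -rankgrp 1 -stgtype fb iSeries_P1   # ranks in this pool have affinity to processor 1
chrank -dev IBM.2107-7512345 -extpool P0 R0                         # assign one rank to each single-rank extent pool
chrank -dev IBM.2107-7512345 -extpool P1 R1

Alternating a system's LUNs between pools defined for rankgroup 0 and rankgroup 1 spreads the I/O load across both processors.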
Figure 2-14 shows an example of extent pools.
Logical volumes
A standard logical volume is a SCSI logical unit (LUN) or CKD volume that is made of a set of
real extents from one extent pool. The capacity allocated to a LUN is always a multiple of
1 GB extent, so any LUN size that is not an exact multiple of 1 GB leaves some space in the
last extent that is allocated to the LUN unused. For more information about i5/OS LUNs in an
extent pool, refer to 4.2.6, “Planning for capacity” on page 93.
DS8000 Release 3 includes the virtualization concept of a space efficient logical volume,
which is a thinly provisioned volume that is made of virtual extents from a repository volume
within the same extent pool. The repository volume is the backstore for providing real storage
space to all space efficient volumes within the same extent pool. Space from the repository
volume is allocated to a space efficient volume proportional to the host I/O write activity on
track level (64 KB) granularity. A space efficient volume reports to the host with its full virtual
capacity, although in reality only a user-defined smaller piece of its storage space, such as
20% of its virtual capacity, is available physically from the repository volume.
The intended usage case for space efficient volumes is space efficient FlashCopy where a
space efficient volume is used as the FlashCopy target volume in a “short-lived” FlashCopy
relationship, for example for a nightly system backup where the FlashCopy source volume is
changed merely for the duration of the backup. For more information, see 2.7.8, “Space
efficient FlashCopy” on page 52.
For an i5/OS workload, it is important that enough physical disk arms are available for the
i5/OS system. A LUN can be spread over DDMs from only one rank, so it can be served by
only six or seven disk arms, depending on the type of rank. Thus, with the use of extent
pools on DS, the question arises of whether a LUN can contain extents from multiple ranks
and thus use disk arms from more than one rank.
Normally, extents for a LUN are taken from only one rank, even if there are many ranks in the
extent pool. LUNs are defined so that they use extents of the first free rank until it is full, then
use extents from the next rank in the extent pool, and so forth. Therefore, a LUN usually uses
disk arms from only one rank. However, a LUN uses disk arms from two ranks when it is
created in a multi-rank extent pool and when the first rank does not have enough free extents
so that some first extents from the next rank are used also.
As we mentioned in “Extent pools” on page 42, we recommend that you use only one rank
per extent pool, so that any single LUN always uses extents from only one single rank.
The DS8000 Release 3 microcode supports storage pool striping, also known as extent
rotation, which allows you to create LUNs in a multi-rank extent pool so that, with the same
LUN, extents are used from multiple ranks.
Note: We generally do not recommend that you use storage pool striping to create LUNs
for i5/OS. We recommend that you create single-rank extent pools only. System i storage
management balances the I/O as best as possible across all available LUNs, and DS8000
storage pool striping simply introduces another virtualization layer that is not required for
i5/OS.
DS8000 Release 3 also includes the Dynamic Volume Expansion function, which provides the
ability to increase the size of a logical volume when it is online to a host system, with the
restriction that it can be used only for logical volumes that are not in a Copy Services relation.
Logical subsystems
A logical subsystem (LSS) is a logical construct that groups up to 256 logical volumes. Each
volume is assigned a unique volume identifier (ID) that consists of the LSS number and the
volume number. The volume ID is represented in hexadecimal format by a 2-digit LSS
number, followed by a 2-digit volume number. For example, volume 0x1023 belongs to LSS
0x10 and has volume number 0x23. When you create a LUN, you determine to which LSS it
belongs with the first two digits of the specified volume ID.
On ESS, there was a fixed association between the LSS and device adapters (and associated
ranks). On DS, there is no fixed binding between an LSS and any rank. The LSSs that are
associated with volumes on ranks that belong to rankgroup 0 have even numbers, and the
LSSs that are associated with volumes on ranks belonging to rankgroup 1 have odd numbers.
For open systems and i5/OS, LSSs are important in two aspects:
To determine to which processor a LUN has affinity
To allocate LUNs correctly for Copy Services
For more information about planning LSSs, refer to 4.2.6, “Planning for capacity” on page 93.
An address group is a group of fixed block or CKD LSSs that can contain up to 16 LSSs. An
address group is created automatically when the first LSS that is associated with that address
group is created. When you create a logical volume, you determine the address group of that
volume by the first digit of the volume’s identifier. For example, when you create a LUN and
specify its ID as 1200, you determine that the LUN is in address group 1.
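The following hedged DS CLI sketch illustrates how the volume ID encodes the address group and LSS when i5/OS LUNs are created; the storage image ID, extent pools, volume model, and names are hypothetical:

# Volume 1200: address group 1, LSS 0x12 (even, so a rankgroup 0 extent pool), volume number 0x00
mkfbvol -dev IBM.2107-7512345 -extpool P0 -os400 A05 -name SYS1_0 1200
# Volume 1300: address group 1, LSS 0x13 (odd, so a rankgroup 1 extent pool), volume number 0x00
mkfbvol -dev IBM.2107-7512345 -extpool P1 -os400 A05 -name SYS1_1 1300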
Note: Do not use address group 0 (that is, LSSs 0x00 to 0x0F) for creating open systems
host volumes, because for reasons of consistency this address group was limited to CKD
volumes in the past.
A volume group is a group of logical volumes (LUNs) that is attached to a host adapter. Two
types of volume groups are used with open system hosts. These volume groups determine
how the logical volume number is converted to the host-addressable LUN_ID on the Fibre
Channel SCSI interface as follows:
A map volume group is used in conjunction with FC SCSI host types that poll for LUNs by
walking the address range on the SCSI interface.
A mask volume group type is used in conjunction with FC SCSI host types that use the
SCSI Report LUN command to determine the LUN_IDs that are accessible.
i5/OS uses the Report LUN command to determine the LUN_ID. Therefore, mask is the
correct volume group type for i5/OS.
When associating a host attachment to the volume group, the host attachment contains
attributes that define the logical blocksize and the address discovery method that the host
adapter uses. These attributes must be consistent with the type of volume group that is
assigned to that host attachment.
i5/OS LUNs use 520 bytes per sector. From these 520 bytes, 8 bytes are the header
metadata that is used by System i storage management. The remaining 512 bytes are for
user data, such as in LUNs for other open system platforms. So, for an i5/OS host
attachment, 520 is the correct blocksize to define when creating LUNs. The correct address
discovery method is Report LUN.
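A hedged DS CLI sketch of an i5/OS host attachment might therefore look as follows; the volume group and host connection names, the volume range, and the WWPN are hypothetical:

mkvolgrp -dev IBM.2107-7512345 -type os400mask -volume 1200-1207 SYS1_VG   # mask-type volume group for 520-byte i5/OS LUNs
mkhostconnect -dev IBM.2107-7512345 -wwname 10000000C9123456 -hosttype iSeries -volgrp V0 SYS1_FC0

The iSeries host type is intended to set the blocksize and Report LUN address discovery attributes consistently with the os400mask volume group.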
This section introduces the main features of Copy Services for the open systems
environment. We limit the discussion to those functions that are supported on LIC level 2.4.0
and above for ESS and for all DS6000 and DS8000 code levels as follows:
Metro Mirror (previously known as Synchronous Peer-to-Peer Remote Copy)
Global Mirror (previously known as Asynchronous Peer-to-Peer Remote Copy)
FlashCopy
Incremental FlashCopy
Inband FlashCopy
Multiple Relationship FlashCopy (V2)
FlashCopy consistency groups
Space efficient FlashCopy
For information about implementing IBM System Storage Copy Services solutions with i5/OS,
refer to IBM System Storage Copy Services and IBM i: A Guide to Planning and
Implementation, SG24-7103.
Metro Mirror is operating system and application independent. Because the copying function
occurs at the disk subsystem level, it is transparent to the host and its applications.
Figure 2-17 shows an example of a Metro Mirror configuration. From a host perspective, the
I/O flows between the host and primary disk subsystem as though there is no data replication.
Figure 2-17 Metro Mirror architecture: (1) write to the primary volume (cache/NVS), (2) write to the secondary volume (cache/NVS), (3) acknowledgment from the secondary, (4) post I/O complete (DE) to the host
The synchronous protocol guarantees that the secondary copy is up-to-date and consistent
by ensuring that the write to the primary copy is committed only after the primary receives
acknowledgment that the secondary copy has been written.
Important: Metro Mirror provides a remote copy that is synchronized with the source copy,
providing a Recovery Point Objective (RPO) of zero. In other words, the disaster recovery
system is at the point of failure when it is recovered. However, this recovery does not take
into account any further recovery actions that are necessary to bring the applications to a
clean recovery point, such as applying or removing journal entries. The additional actions
happen at the database recovery stage. Thus, take this into account when considering
your Recovery Time Objective (RTO). This recovery process is much less time consuming
and complicated than recovering the system from tape.
Important: Global Mirror provides a remote copy that is some time behind the source copy,
which gives an RPO of between a few seconds and a few minutes, depending on write I/O
activity and the available communication bandwidth. In other words, the disaster recovery
system is a few seconds or minutes behind the production system point of failure when it is
recovered. This recovery does not take into account any further recovery actions that are
necessary to bring the applications to a clean recovery point, such as applying or removing
journal entries. This happens at the database recovery stage. Thus, take this into account
when considering your Recovery Time Objective (RTO). This recovery process is much
less time consuming than recovering the system from tape.
Figure 2-18 shows the functions that are provided with Global Mirror.
Figure 2-18 Global Mirror: Global Copy (PPRC-XD) asynchronously replicates the primary ‘A’ volumes at the local site to the secondary ‘B’ volumes at the remote site, where FlashCopy to the ‘C’ target volumes preserves each consistency group; change recording and out-of-sync bitmaps track the updates that still have to be transmitted
The primary disk subsystems provide functionality to coordinate the formation of data
consistency groups. Fibre Channel protocol links provide low latency connections between
disk subsystems, ensuring that this process involves negligible impact to the production
applications. The consistency group information is held in bitmaps rather than requiring the
data updates themselves to be maintained in cache.
These consistency groups are sent to the secondary location using asynchronous Global
Copy (previously known as PPRC-XD). Using Global Copy means that duplicate updates
within the consistency group are not sent and, if the data sent is still in the cache on the
primary disk subsystems, that only the changed blocks are sent.
When the complete consistency group is sent to the secondary location, this consistent
image of the primary data is saved using incremental FlashCopy and the Global Mirror
consistency group process starts over again. This ensures that there is always a consistent
image of the primary data at the secondary location.
Using the data freeze concept, consistency is obtained by temporarily inhibiting write I/O to
the devices and then performing the actions required to create consistency. When all devices
have performed the required actions the write I/O is allowed to resume. This might be
suspending devices in a Metro Mirror environment or performing a FlashCopy when using
consistent FlashCopy. With Global Mirror, this action is the creation of the bitmaps for the
consistency group, and the consistent point can be created in approximately 1 to 3
milliseconds. If you consider that this consistency group creation is done, for example,
every 3 seconds, the host I/O performance impact of Global Mirror freezing the write I/O for
1 to 3 milliseconds is negligible.
Figure 2-19 shows the relationship between the source volume on the left and the target
volume to the right. During establishment of a FlashCopy relationship, a bitmap (indicated by
the grid in the diagram) is created in the storage system’s cache for tracking which tracks are
copied to the target volume and which are not.
You can use FlashCopy with either the COPY or NOCOPY option. In both cases, the target is
available as soon as the FlashCopy command is processed, usually a few milliseconds after
the command runs. When you use the COPY option, a background task runs to copy the data
sequentially track-by-track from source to target. With NOCOPY, the data is copied only on
write, meaning that only those tracks that are written to on either the source or target are
actually written to the target volume. The shaded segments in the second grid illustrate this
NOCOPY option and indicate the tracks that have been updated on the target volume.
When the source volume is accessed for read, data is simply read from the source volume. If
a write operation occurs to the source, it is applied immediately if the track is copied to the
target already, either because of a background COPY or because a copy on write has occurred. If the
source track to be written is not already copied, the unchanged track is copied to the target
volume first to secure the point-in-time copy state, and then the source is updated.
When the target volume is accessed for read, the bitmap shows which tracks are copied
already. If the track has been copied, either by a background COPY or by a copy on write, the
read is done from the target volume. If the track has not been copied, either because the
background copy has not reached that track yet or because it has not been updated on either
the source or target volume, the data is read from the source volume.
Thus, both source and target volumes are totally independent. If you use the COPY option,
the relationship between source and target ends automatically when all target tracks are
written. With NOCOPY, the relationship is maintained until it is ended explicitly or until all
tracks are copied. Although Figure 2-19 on page 50 shows only one source and one target
volume, the same concept applies for all volumes in a relationship. Because System i storage
management stripes its data across all available volumes in an ASP (System, User, or
Independent), FlashCopy, as any other storage-based copying solution, must treat all
volumes in the ASP as one entity.
Typically, you use the point-in-time copy that FlashCopy creates when you need to produce a
copy of production data with minimal application downtime. You can use the point-in-time
copy for online backup, testing of new applications, or copying a database for data mining
purposes. The copy looks exactly like the original source volume and is an instantly available,
binary copy.
For copies that are usually accessed only once, such as for creating a point-in-time backup, you
normally use the NOCOPY option. When the target is required for regular access (for
example, for populating a Data Warehouse or creating a cloned environment), we
recommend that you use the COPY option.
Note: The i5/OS V6R1 quiesce for Copy Services function (CHGASPACT CL command)
allows you to suspend or resume ASP I/O activity so that you can take an online FlashCopy
without needing to vary off the IASP or turn off the system.
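A minimal sketch of such a quiesce around a FlashCopy, assuming the V6R1 CHGASPACT command with a hypothetical 300-second suspend timeout (an IASP device name can be used in place of *SYSBAS):

CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(300)  /* suspend database I/O before the FlashCopy */
/* ... take the FlashCopy of the corresponding LUNs, for example with the DS CLI mkflash command ... */
CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)                /* resume normal I/O activity */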
When this option is selected, only the tracks that have been changed on the source are
copied again to the target. The direction of the refresh can also be reversed, copying the
changes made to the new source (originally the target volume) to the new target volume
(originally the source volume).
Note: Because i5/OS must either be shut down or have its database I/O quiesced before a
FlashCopy is taken, using FlashCopy consistency groups with i5/OS has no real benefit,
although it does no harm either.
Space efficient FlashCopy is implemented using a space efficient logical volume, that is, a
non-provisioned volume that has no physical storage allocated, as a target volume for a
FlashCopy no-background copy relationship. The actual physical storage for all space
efficient volumes in an extent pool is derived from a single shared repository volume that
needs to be created within the same extent pool.
For host write I/Os to either the fully-provisioned FlashCopy source or the non-provisioned
target causing a destage to the target volume, space is allocated to the space efficient target
from the repository volume on track-level granularity (64 KB). The repository manager within
DS8000 microcode uses a mapping table to keep track of the allocation of virtual track IDs of
the space efficient volumes to physical track IDs from the repository volume. Space efficient
FlashCopy causes a slight performance penalty compared to regular (that is, fully
provisioned) FlashCopy, less because of the mapping table look-up than because of the initial
space allocation on destage writes and the non-sequential stage or destage activity on the
shared repository volume.
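The following hedged DS CLI sketch shows how such a configuration might be built; the storage image ID, extent pool, capacities, volume IDs, and names are hypothetical, and exact options depend on the microcode level:

mksestg -dev IBM.2107-7512345 -extpool P2 -repcap 100 -vircap 500                       # shared repository: 100 GB real, 500 GB virtual capacity
mkfbvol -dev IBM.2107-7512345 -extpool P2 -os400 A05 -sam tse -name SYS1_SE 1400-1407   # space efficient target volumes
mkflash -dev IBM.2107-7512345 -nocp -tgtse 1200:1400                                    # no-background-copy FlashCopy onto a space efficient target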
Figure 2-20 shows the track mapping for space efficient FlashCopy.
Figure 2-20 Track mapping for space efficient FlashCopy: non-provisioned space efficient volumes (with no space ever allocated up front) map their tracks onto a shared repository volume that is over-provisioned (for example, 500 GB virtual and 100 GB real capacity)
Correct sizing of the repository volume space becomes very important to prevent out-of-space
situations, which cause the FlashCopy relationship to fail and make the target volume, which
contains only the updated data, useless. For more information, see 5.2.8, “Sizing
for space efficient FlashCopy” on page 130.
Restriction: The intended usage of space efficient FlashCopy is for short-lived FlashCopy
no-background copy relationships with only limited host write I/O activity such as for low
workload period system backups. If much more than 20% of the data is changed, using
regular fully-provisioned FlashCopy is recommended.
For further information about implementing space efficient FlashCopy with System i, refer to
IBM System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.
Figure 3-1 shows the simplest example. The i5/OS load source is a logical unit number (LUN)
in the DS model. To avoid single points of failure for the storage attachment, i5/OS
multipathing should be implemented to the LUNs in the external storage server.
Note: With i5/OS V6R1 and later, multipathing is also supported for the external load
source unit.
Prior to i5/OS V6R1, the external load source unit should be mirrored to another LUN on the
external storage system to provide path protection for the load source. The System i model is
connected to the DS model with Fibre Channel (FC) cables through a storage area network
(SAN).
Figure 3-1 System i5® model and all disk storage in external storage
The FC connections through the SAN switched network are either through direct FC local
connections or through dark fiber, providing up to 10 km of distance. Figure 3-2 is the same
simple example, but the System i platform is divided into logical partitions (LPAR). Each LPAR
has its own mirrored pair of LUNs in the DS model.
Figure 3-2 LPAR System i5 environment and all disk storage in external storage
Unless you are using switchable independent ASPs, boot from SAN helps to significantly reduce
the recovery time in case of a system failure by eliminating the requirement for a manual D-type
IPL with remote load source recovery.
Figure 3-3 External disk with the System i5 internal load source drive
3.1.3 System i model with mixed internal and external storage
Examples of selected environments where the internal disk is retained in the System i model
and additional disk is located in the external storage server include:
A solution where the internal drives support *SYSBAS storage and the external storage
supports the IASP, which is similar to the example in “Metro Mirror with switchable IASP
replication” on page 66.
A solution where the internal drives are one half of the mirrored environment and the
external storage LUNs are the other half, giving mirrored protection and distance
capability.
A solution that requires a considerable amount of space for archiving.
In Figure 3-4, the external disk is used typically for a user auxiliary storage pool (ASP) or
an independent ASP (IASP). This ASP disk space can house the archive data, and this
storage is fairly independent of the production environment.
It is possible to mix internal and external drives in the same ASP, but we do not recommend
this mixing because performance management becomes difficult.
Figure 3-5 Migration from internal RAID protected disk drives to external storage
One such technique is to add additional I/O hardware to the existing System i model to
support the new external disk environment. This hardware can be an expansion tower, I/O
loops (HSL or 12X), #2847 IOP-based or POWER6 IOP-less Fibre Channel IOAs for external
load source support, and other #2844 IOP-based or IOP-less FC adapters for the non-load
source volumes.
The movement of data from internal to external storage is achieved by the Disk Migration
While Active function (see Figure 3-5). Not all data is removed from the disk. Certain object
types, such as temporary storage, journals and receivers, and integrated file system objects,
are not moved. These objects are not removed until the disk is removed from the i5/OS
configuration. The removal of disk drives is disruptive, because it has to be done from DST.
The time to remove them depends on the amount of residual data left on the drive.
8. For further details about this function, refer to IBM eServer iSeries Migration: System
Migration and Upgrades at V5R1 and V5R2, SG24-6055.
9. Perform a manual IPL to DST, and remove the disks that have had the data drained from
the i5/OS configuration.
10. Stop device parity protection for the load source RAID set.
11. Migrate the load source drive by copying the load source unit data.
12. Physically remove the old internal load source unit.
13. Change the I/O tagging to the new external load source.
14. Restart device parity protection.
For detailed information about migrating an internal load source to boot from SAN, refer to
IBM i and IBM System Storage: A Guide to Implementing External Disk on IBM i, SG24-7120.
Attention: The Disk Migrate While Active function starts a job for every disk migration.
These jobs can impact performance if many are started. If data is migrated from a disk and
the disk is not removed from the configuration, a job is started. Do not start data moves on
more drives than you can support without impacting your existing workload. Schedule the
data movement outside normal business hours.
This technique provides protection for the internal load source. The System i load source
drive should always be protected either by RAID or mirroring.
To migrate from a remote mirrored load source to external mirrored load source (Figure 3-6):
1. Increase the size of your existing load source to 17 GB or greater.
2. Load the new i5/OS V5R3M5 or later operating system support for boot from SAN.
3. Create the new mirrored load source pair in the external storage server.
4. Turn off System i and change the load source I/O tagging to the remote external load
source.
5. Remove the internal load source.
6. Perform a manual IPL to DST.
7. Use the replace configured unit function to replace the internal suspended load source
with the new external load source.
8. Perform an IPL on the new external mirrored load source.
For detailed information about migrating an internal load source to boot from SAN refer to IBM
i and IBM System Storage: A Guide to Implementing External Disk on IBM i, SG24-7120.
Boot from SAN enables you to take advantage of some of the advanced features that are
available with the DS8000 and DS6000 family, such as FlashCopy. It allows you to perform a
point-in-time instantaneous copy of the data held on a LUN or group of LUNs. Therefore,
when you have a system that has only SAN LUNs with no internal drives, you can create a
clone of your system.
Important: When we refer to a clone, we are referring to a copy of a system that only uses
SAN LUNs. Therefore, boot from SAN is a prerequisite.
To obtain a full system backup of i5/OS with FlashCopy, either a system shutdown or, since
i5/OS V6R1, a quiesce is required to flush modified data from memory to disk. FlashCopy
copies only the data on the disk. A significant amount of data is held in memory, so extended
database recovery is required if the FlashCopy is taken while the system is running and has
not been quiesced.
Note: The new i5/OS V6R1 quiesce for Copy Services function (CHGASPACT) allows you
to suspend all database I/O activity for *SYSBAS and IASP devices before taking a
FlashCopy system image, eliminating the requirement to power down your system. (For
more information, refer to IBM System Storage Copy Services and IBM i: A Guide to
Planning and Implementation, SG24-7103.)
An alternative method to perform offline backups without a shutdown and IPL of your
production system is using FlashCopy with IASPs, as shown in Figure 3-7. You might
consider using an IASP FlashCopy backup solution for an environment that has no boot from
SAN implementation or that is already using IASPs for high availability. Because the
production data is located in the IASP, the IASP can be varied off or, since i5/OS V6R1,
quiesced before taking a FlashCopy, without shutting down the whole i5/OS system. It also
has the advantage that no load source recovery is required.
Note: Temporary space includes QTEMP libraries, index build space, and so on. There is a
statement of direction to allow spooled files in an IASP in the future.
Planning considerations
Keep in mind the following considerations:
You must vary off or quiesce the IASP before the FlashCopy can be taken. Customer
application data must be in an IASP environment in order to use FlashCopy. Using
storage-based replication of IASPs requires using the System i Copy Services Toolkit or
the new i5/OS V6R1 High Availability Solutions Manager (HASM).
Disk sizing for the system ASP is important: it requires the fastest disk on the
system, because this is where memory paging, index builds, and so on happen.
Figure 3-8 shows a System i model with internal drives that are one half of the mirror to an
external storage server that is at a distance with a remote load source mirror and a set of
LUNs that are mirrored to the internal drives.
If the production site has a disk hardware failure, the system can continue off the remote
mirrored pairs. If a disaster occurs that causes the production site to be unavailable, it is
possible to IPL your recovery System i server from the attached remote LUNs. If your
production system is running i5/OS V5R3M5 or later and your recovery system is configured
for boot from SAN, it can directly IPL from the remote load source even without requiring a
remote load source recovery.
Restriction: If using i5/OS mirroring for disaster recovery as we describe, your production
system must not use boot from SAN because, at failback from your recovery to your
production site, you cannot control which mirror side you want to be the active one.
The main consideration with this solution is distance. The solution is limited by the distance
between the two sites. Synchronous replication needs sufficient bandwidth to prevent latency
in the I/O between the two sites. I/O latency can cause application performance problems.
Testing is necessary to ensure that this solution is viable depending on a particular
application’s design and business throughput.
When you recover in the event of a failure, the IPL of your recovery system will always be an
abnormal IPL of i5/OS on the remote site.
Note: Using i5/OS journaling for Metro Mirror or Global Mirror replication solutions is highly
recommended to ensure transaction consistency and faster recovery.
Note: Replicating switchable independent ASPs to a remote site provides both disaster
recovery and high availability and is supported only with either using the System i Copy
Services Toolkit or i5/OS V6R1 High Availability Solutions Manager (HASM).
Figure 3-10 Metro Mirror IASP replication
Using switchable IASPs with Copy Services requires either the System i Copy Services
Toolkit or the new i5/OS V6R1 High Availability Solutions Manager (HASM) for managing the
failover or switchover. If there is a failure at the production site, i5/OS cluster management
detects the failure and switches the IASP to the backup system. In this environment, we
normally have only one copy of the IASP, but we are using Copy Services technology to
create a second copy of the IASP at the remote site and provide distance.
The switchover and the recovery to the backup system are a relatively simple operation,
which is a combination of i5/OS cluster services commands and DS command-line interface
(CLI) commands. The IASP switch is cluster services passing the management over to the
backup system. The backup IASP is then varied on to the active backup system. During a
disaster, journal recovery attempts to recover or roll out any damaged objects. After the vary
on action completes, the application is available. These functions are automated with the
System i Copy Services Toolkit (see IBM System Storage Copy Services and IBM i: A Guide
to Planning and Implementation, SG24-7103).
All the data on the production system is asynchronously transmitted to the remote DS model.
Asynchronous replication through Global Copy alone does not guarantee the order of the
writes, so the remote copy would quickly lose consistency. To guarantee data consistency,
Global Mirror creates consistency groups at regular intervals, by default as fast as the
environment and the available bandwidth allow. FlashCopy is used at the remote site to save
these consistency groups, ensuring that a consistent set of data, which is only a few seconds
behind the production site, is available at the remote site. That is, with Global Mirror, a
recovery point objective (RPO) of only a few seconds can normally be achieved without any
performance impact to the production site.
This is an attractive solution because of the extreme distances that can be achieved with
Global Mirror. However, it requires a proper sizing of the replication link bandwidth to ensure
the RPO targets can be achieved, and testing should be performed to ensure the resulting
image is usable.
Global Mirror and switchable IASP replication
Global Mirror and switchable IASPs offer a new and exciting opportunity for a highly available
environment. It enables customers to replicate their environment over an extremely long
distance without the use of traditional i5/OS replication software. This environment comes in
two types, asymmetrical and symmetrical.
While Global Mirror can entail a fairly complex setup, the operation of this environment is
simplified for i5/OS with the use of the System i Copy Services Toolkit, which automates the
switchover and failover of the IASP from production to backup.
Asymmetrical replication
The configuration shown in Figure 3-12 provides availability switching between the
production system and the backup system. It also provides disaster recovery between either
the production system or the backup system, depending on which system has control when
the disaster occurs, and the disaster recovery system. With the asymmetrical configuration,
only one consistency group is set up, and it resides at the remote site. This means that you
cannot do regular role swaps and reverse the I/O direction (disaster recovery to production).
In a normal operation, the IASP holds the application data and runs varied on to the
production system. I/O is asynchronously replicated through Global Copy to the backup DS
model maintaining a copy of the IASP. At regular intervals, FlashCopy is used to save the
consistency groups created at repeated intervals by the Global Mirror algorithm. The
consistency groups can be only a few seconds behind the production system, offering the
opportunity for a fast recovery.
Two primary operations can occur in this environment: switchover from production to backup
and failover to backup. Switchover from production to backup does not involve the DS models
in the previous example. It is simply a matter of running the System i Copy Services Toolkit
switchover functions.
The failover to backup configuration change occurs after a failure. In this case, you run the
failover PPRC command (failoverpprc) on the backup system. Running this command allows
the disaster recovery system to take over the production role, vary on the copy IASP as
though it were the original, and restart the application. During vary on processing, journal
recovery occurs. If the application does not use journaling, the vary on process is considerably
longer, because the recovery process can fail due to damaged and unrecoverable objects. You
can restore these objects from backup tapes, but some data integrity analysis needs to occur,
which can delay the point at which users are allowed to access the application. This is similar
to a disaster crash on a single system, where the same recovery process needs to occur.
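A hedged sketch of the DS CLI part of such a failover, run against the disaster recovery DS unit (the device IDs and volume pair are hypothetical, and the toolkit or HASM normally drives these steps for you):

failoverpprc -dev IBM.2107-7598765 -remotedev IBM.2107-7512345 -type gcp 1300:1300   # make the remote copy usable at the DR site
# then vary on the IASP copy on the recovery system and restart the application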
Symmetrical replication
In this configuration, an additional FlashCopy consistency group is created on the source
production DS model. It provides all the capabilities of asymmetrical replication, but adds the
ability to do regular role swaps between the production and disaster recovery sites. When the
role swaps occur with a configuration as shown in Figure 3-13, the backup system does not
provide any planned switch capability for the disaster recovery site.
In this configuration, there are multiple capabilities: local planned availability switching
between the production and backup systems, and role swap or disaster recovery between the
production and disaster recovery sites. The planned availability switch between production
and backup is the same as described in “Asymmetrical replication” on page 69, which does
not involve the DS models.
If you are going to do a role swap between the production system and the disaster recovery
site, you must also work with the DS models. Role swap involves the reversal of the flow of
data between production DS and disaster recovery DS. While this is more complex, the tasks
can be simply run from DS CLI and scripts. Either the System i Copy Services Toolkit or the
i5/OS V6R1 High Availability Solutions Manager (HASM) is required for this solution. For
more information about these System i Copy Services management tools, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.
Figure 3-14 shows the internal drive solution for XSM. The replication between the source
and target system is TCP/IP based, so considerable distance is achievable. Figure 3-14 also
shows a local backup server, which enables an administrative (planned) switchover to occur if
the primary system should need to be made unavailable for maintenance.
If the load source and system base are located in the external storage system, it is possible to
have all disks within the external storage system. Separation of the *SYSBAS LUNs, the IASP
LUNs, and the switchable tower is done at the expansion tower level.
Figure 3-15 Geographic mirroring with a mix of internal and external drives
Part 2
Good planning is essential for the successful setup and use of your server and storage
subsystems. It ensures that you have met all of the prerequisites for your server and storage
subsystems and everything you need to gain advantage from best practices for functionality,
redundancy, performance, and availability.
Continue to use and customize the planning and implementation considerations based on
your hardware setup and as recommended through the IBM Information Center
documentation that is provided. Do not use the contents in this chapter as a substitute for
completing your initial server setup (IBM System i or IBM System p with i5/OS logical
partitions), IBM System Storage subsystems, and configuration of the Hardware
Management Console (HMC).
For example, you might want to use DS6000 or DS8000 storage for i5/OS, AIX, and Linux,
which reside on your System i servers. You might want to also implement a disaster recovery
solution with Remote Mirror and Copy features, such as IBM System Storage Metro Mirror or
Global Mirror, as well as plan to implement FlashCopy to minimize the backup window.
The flowchart in Figure 4-1 can assist you with the important planning steps that you need to
consider based on your solution requirement. We strongly recommend that you evaluate the
flow in this diagram and create the appropriate planning checklists for each of the solutions.
(Figure 4-1 planning flow: the customer's aim, whether disaster recovery, external disk consolidation, or minimizing the backup window, leads to a solution choice such as cloning, i5/OS Copy Services of an IASP, or FlashCopy, possibly combined with workload from other servers, followed by capacity planning, performance expectations and sizing, multipath, and SAN planning.)
When planning for external storage solutions, review the following planning considerations:
Evaluate the supported hardware configurations.
Understand the minimum software and firmware requirements for i5/OS, HMC, system
firmware, and microcode for the ESS Model 800, DS6000, and DS8000 series.
Understand additional implementation considerations, such as multipath I/O, redundancy,
and port setup on the storage subsystem.
Note that boot from SAN is required only if you are planning to externalize your i5/OS load
source completely and to place all disk volumes that belong to that system or LPAR in the
IBM System Storage subsystem. You might not need boot from SAN if you plan to use
independent auxiliary storage pools (IASPs) with external storage, where the system objects
(*SYSBAS) could remain on System i integrated internal storage.
The new System i POWER6 IOP-less Fibre Channel cards 5749 or 5774 support boot from
SAN for Fibre Channel attached IBM System Storage DS8000 models and tape drives. Refer
to Table 4-1 and Table 4-2 for the minimum hardware and software requirements for IOP-less
Fibre Channel and to 4.2.2, “Planning considerations for i5/OS multipath Fibre Channel
attachment” on page 81 for further configuration planning information.
The 2847 I/O processor (IOP) introduced with the i5/OS V5R3M5 IOP-based Fibre Channel
boot from SAN support is intended only to support boot capability for the disk unit of the FC
i5/OS load source and up to 31 additional LUNs, in addition to the load source, attached using
a 2766, 2787, or 5760 FC disk adapter. This IOP cannot be used as an alternate IPL device
for booting from any other devices, such as a DVD-ROM, CD-ROM, or integrated internal load
source. Also, the 2847 IOP cannot be used as a substitute for 2843 or 2844 IOP to drive
non-FC storage, LAN, or any other System i adapters.
Important: The IBM Manufacturing Plant does not preload i5/OS and licensed programs
on new orders or upgrades to existing System i models when the 2847 IOP is selected.
You must install the system or partitions using the media that is supplied with the order,
after you complete the setup of the ESS 800, DS6000, or DS8000 series.
For information about more resources to assist with planning and implementation tasks, see
“Related publications” on page 629.
DS8000 microcode V2.4.3. However, we strongly recommend that you install the
latest level of FBM code available at the time of installation. Contact your IBM System
Storage specialist for additional information.
2847 IOP for each server instance that requires a load source or for each LPAR that is
enabled to boot i5/OS from Fibre Channel load sourcea
When using i5/OS prior to V6R1, we recommend that the FC i5/OS load source be
mirrored using i5/OS mirroring at an IOP level, with the remaining LUNs protected with
i5/OS multipath I/O capabilities. For IOP-level redundancy, you need at least two 2847
IOPs and two FC adapters for each system image or LPAR.
System i POWER5 or POWER6 model; for POWER5, I/O slots in the system unit,
expansion drawers, or towers to support the IOP and I/O adapter (IOA) requirements;
for POWER6, IOPs are supported only in supported HSL-loop-attached expansion
drawers or towers
System p models for i5/OS in an LPAR (9411-100) with I/O slots in expansion drawers
or towers to support the IOP and IOA requirements
2766, 2787, or 5760 Fibre Channel Disk Adapter (IOA) for attaching i5/OS storage to
ESS 800, DS6000, or DS8000 seriesb
IBM System Storage DS8000, DS6000 or Enterprise Storage Server (ESS) 800 series
The PC must be in the same subnet as the DS6000. The PC configuration must
have a minimum of 700 MB of disk space, 512 MB of memory, and an Intel Pentium® 4
1.4 GHz or faster processor.
Requirement Complete
DS8000 microcode. We strongly recommend that you install the latest level of FBM
code available at the time of installation. Contact your IBM System Storage specialist
for additional information.
ESS 800: 2.4.3.35 or later
Important: With i5/OS V6R1, multipath is now supported also for an external load source
disk unit for both the older 2847 IOP-based and the new IOP-less Fibre Channel adapters.
The new multipath function with i5/OS V6R1 eliminates the need, which existed with previous
i5/OS V5R3M5 or V5R4 versions, to mirror the external load source merely to achieve path
redundancy (see 6.10, “Protecting the external load source unit” on page 240).
Multipath support for System i external disks was originally added in V5R3 of i5/OS. Other
platforms require a specific software component, such as the Subsystem Device Driver (SDD);
on System i, multipath is part of the base operating system. With V5R3 and later, you can define up to
eight connections from multiple I/O adapters on an iSeries or System i server to a single
logical volume in the DS8000, DS6000 or ESS. Each connection for a multipath disk unit
functions independently. Several connections provide redundancy by allowing disk storage to
be used even if a single path fails.
Multipath is important for the System i platform because it provides greater resilience to
storage area network (SAN) failures, which can be critical to i5/OS due to the single-level
storage architecture. Multipath is not available for System i internal disk units, but the
likelihood of path failure is much less with internal drives because there are fewer interference
points. There is an increased likelihood of issues in a SAN-based I/O path because there are
more potential points of failure, such as long fiber cables and SAN switches. There is also an
increased possibility of human error occurring when performing such tasks as configuring
switches, configuring external storage, or applying concurrent maintenance on DS6000 or
ESS, which might make some I/O paths temporarily unavailable.
Many System i customers still have their entire environment on the system or user auxiliary
storage pools (ASPs). Loss of access to any disk causes the system to enter a freeze state
until the disk access problem gets resolved. Even a loss of a user ASP disk will eventually
cause the system to stop. Independent ASPs (IASPs) provide isolation so that loss of disks in
the IASP only affect users who access that IASP while the remainder of the system is
unaffected. However, with multipath, even loss of a path to disk in an IASP will not cause an
outage.
Prior to multipath, some customers used i5/OS mirroring to two sets of disks, either in the
same or different external disk subsystems. This mirroring provided implicit dual path as long
as the mirrored copy was connected to a different I/O processor (IOP) or I/O adapter (IOA),
bus, or I/O tower. However, this mirroring also required twice as much capacity for two copies
of data. Because disk failure protection is already provided by RAID-5 or RAID-10 in the
external disk subsystem, this was sometimes considered unnecessary.
With the combination of multipath and RAID-5 or RAID-10 protection in DS8000, DS6000, or
ESS, you can provide full protection of the data paths and the data itself without the
requirement for additional disks.
(Elements in a SAN I/O path, as numbered in the figure, each of which is a potential point of failure: 1. I/O frame, 2. bus, 3. IOP, 4. IOA, 5. cable, 6. port, 7. switch, 8. port, 9. ISL, 10. port, 11. switch, 12. port, 13. cable, 14. host adapter, 15. I/O drawer.)
Unlike other systems that might support only two paths (dual-path), i5/OS V5R3 supports up
to eight paths to the same logical volumes. At a minimum, you should use two paths, although
some small performance benefits might be experienced with more paths. However, because
i5/OS multipath spreads I/O across all available paths in a round-robin manner, there is no
load balancing, only load sharing.
Configuration planning
The System i platform has three IOP-based Fibre Channel I/O adapters that support DS8000,
DS6000, and ESS model 800:
FC 5760 / CCIN 280E 4 Gigabit Fibre Channel Disk Controller PCI-X
FC 2787 / CCIN 2787 2 Gigabit Fibre Channel Disk Controller PCI-X (withdrawn from
marketing)
FC 2766 / CCIN 2766 2 Gigabit Fibre Channel Disk Controller PCI (withdrawn from
marketing)
The following new System i POWER6 IOP-less Fibre Channel I/O adapters support DS8000
as external disk storage only:
FC 5749 / CCIN 576B 4 Gigabit Dual-Port IOP-less Fibre Channel Controller PCI-X (see
Figure 4-4)
FC 5774 / CCIN 5774 4 Gigabit Dual-Port IOP-less Fibre Channel Controller PCIe (see
Figure 4-5)
Note: The 5749/5774 IOP-less FC adapters are supported with System i POWER6 and
i5/OS V6R1 or later only. They support both Fibre Channel attached disk and tape devices
on the same adapter but not on the same port. As a new feature, these adapters support
D-mode IPL boot from a tape drive, which should be either direct attached or, by proper
SAN zoning, the only tape drive seen by the adapter. Otherwise, with multiple tape drives
seen by the adapter, it uses only the first drive that reported in and is loaded, and if that
drive contains no valid IPL source, the IPL fails.
Figure 4-5 New 5774 PCIe IOP-less Fibre Channel Disk Controller
Important: For direct attachment, that is point-to-point topology connections using no SAN
switch, the IOP-less Fibre Channel adapters support only the Fibre Channel arbitrated loop
(FC-AL) protocol. This is different from the previous 2847 IOP-based FC adapters,
which supported only the Fibre Channel switched-fabric (FC-SW) protocol, whether direct-
or switch-connected, although other 2843 or 2844 IOP-based FC adapters support either
FC-SW or FC-AL.
All these System i Fibre Channel I/O adapters can be used for multipath.
Important: Though there is no requirement for all paths of a multipath disk unit group to
use the same type of adapter, we strongly recommend avoiding a mix of IOP-based and
IOP-less FC I/O adapters within the same multipath group. In a multipath group with mixed
IOP-based and IOP-less adapters, the IOP-less adapter performance would be throttled by
the lower-performance IOP-based adapter because the I/O is distributed by a round-robin
algorithm across all paths of a multipath group (see the sketch that follows this note).
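As a rough illustration of why a mixed group is throttled, the following Python sketch models round-robin distribution in a simplified way: every path receives an equal share of the I/O, so the sustainable rate of the whole group is bounded by its slowest path. This is a simplified model for illustration only, not the actual SLIC algorithm; the per-path figures are the illustrative per-port values at 70% utilization from Table 5-2 later in this book.

def group_capability(path_ops_per_sec):
    # With round-robin, each path carries 1/N of the I/O, so the group rate
    # is limited to N times the capability of the slowest path.
    return len(path_ops_per_sec) * min(path_ops_per_sec)

print(group_capability([10500, 10500]))  # two IOP-less paths: 21000 I/O per second
print(group_capability([10500, 2555]))   # IOP-less mixed with IOP-based: only 5110

In the mixed case the fast adapter spends much of its time idle, waiting for its turn while the slower adapter works through its share of the round-robin rotation.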
The IOP-based single-port adapters can address up to 32 logical units (LUNs) while the
dual-port IOP-less adapters support up to 64 LUNs per port.
Table 4-5 summarizes the key differences between IOP-based and IOP-less Fibre Channel.
Table 4-5 Key differences between IOP-based versus IOP-less Fibre Channel
Function IOP-based IOP-less
A / B mode IPL (boot from SAN) Yes (with #2847 IOP only) Yes
The System i i5/OS multipath implementation requires each path of a multipath group to be
connected to a separate System i I/O adapter to be utilized as an active path. Attaching a
single System i I/O adapter to a switch and going from the switch to two different storage
subsystem ports results in only one of the two paths between the switch and the storage
subsystem being used; the second path is used only if the first one fails. This configuration is
sometimes referred to as a backup link and used to be a solution for higher redundancy with
ESS external storage before i5/OS multipathing became available.
It is important to plan for multipath so that the two or more paths to the same set of LUNs use
different hardware elements of connection, such as storage subsystem host adapters, SAN
switches, System i I/O towers, and high-speed link (HSL) or 12X loops.
When deciding how many I/O adapters to use, your first priority should be to consider
performance throughput of the IOA because this limit can be reached before the maximum
number of logical units. See Chapter 5, “Sizing external storage for i5/OS” on page 115, for
more information about sizing and performance guidelines.
For more information about implementing multipath, see Chapter 6, “Implementing external
storage with i5/OS” on page 207.
System i single-level storage requires you to adhere to the following rules when you use
multipath disk units in a multiple-system environment:
If you move an IOP with a multipath connection to a different LPAR, you must also move all
other IOPs with connections to the same disk unit to the same LPAR.
When you make an expansion unit switchable, make sure that all multipath connections to
a disk unit switch with the expansion unit.
When you configure a switchable independent disk pool, make sure that all of the required
IOPs for multipath disk units switch with the independent disk pool.
If a multipath configuration rule is violated, the system issues warnings or errors to alert you
of the condition. It is important to pay attention when disk unit connections are reported
missing. You want to prevent a situation where a node might overwrite data on a LUN that
belongs to another node.
Disk unit connections might be missing for a variety of reasons, but especially if one of the
preceding rules has been violated. If a connection for a multipath disk unit in any disk pool is
found to be missing during an IPL or vary on, a message is sent to the QSYSOPR message
queue.
If a connection is missing, and you confirm that the connection has been removed, you can
update Hardware Service Manager to remove that resource. Hardware Service Manager is a
tool to display and work with system hardware from both a logical and a packaging viewpoint,
an aid for debugging IOPs and devices, and for fixing failing and missing hardware. You can
access Hardware Service Manager in System Service Tools (SST) and Dedicated Service
Tools (DST) by selecting the option to start a service tool.
4.2.3 Planning considerations for Copy Services
In this section, we discuss important planning considerations for implementing IBM System
Storage Copy Services solutions for i5/OS.
Note: The first release of space efficient FlashCopy with DS8000 R3 does not allow
you to increase the repository capacity dynamically. That is, to increase the capacity,
you will need to delete the repository storage space and re-create it with more physical
capacity.
For better space efficient FlashCopy write performance, you might consider using RAID-10
for the target volumes, as the writes to the shared repository volumes always have a random
I/O character (see 5.2.8, “Sizing for space efficient FlashCopy” on page 130).
By using FlashCopy to create a duplicate i5/OS system image of your production system
and IPLing another i5/OS LPAR from it to run the backup to tape, you can increase the
availability of your production system by reducing or eliminating downtime for system saves.
FlashCopy can also provide a backup image of your entire system configuration to which
you can roll back easily in the event of a failure during a release migration or a major
application upgrade.
Keep in mind that an i5/OS image created through FlashCopy is a point-in-time instance and
thus should be used only as a full backup image for recovery of the production system.
Many of the objects, such as history logs, journal receivers, and journals, have different data
history reflected in them and must not be restored to the production system.
You must not attach any copied LUNs to the original parent system unless they have been
used on another partition first or initialized within the IBM System Storage subsystems.
Failure to observe this restriction will have unpredictable results and can lead to loss of data.
This is due to the fact that the copied LUNs are perfect copies of LUNs that are on the parent
system. As such, the system would not be able to tell the difference between the original and
the cloned LUN if they were attached to the same system.
As soon as you copy an i5/OS image, attach it to a separate partition that will own the LUNs
that are associated with the copied image. By doing this, you make them safe to be reused
again on the parent partition.
When planning to implement FlashCopy or Remote Mirror and Copy functions, such as Metro
Mirror and Global Mirror, for copying an i5/OS system, consider the following points:
Storage system licenses for use of Copy Services functions are required.
Have a sizing exercise completed to ensure that your system and storage configuration is
capable of handling the additional I/O requests. You also need to account for additional
memory, I/O, and disk storage requirements in the storage subsystem in addition to
hardware resources at the system side.
Ensure that the recovery system or backup partition is configured for boot from SAN to IPL
from the copied i5/OS load source.
Sufficient capacity (processor, memory, I/O towers, IOPs, IOAs, and storage) is reserved
to bring up the target environment, either in an LPAR or on a separate system that is
locally available in the same data center complex.
When restarting the environment after attaching the copied LUNs, it is important to
understand that, because these are identical copies of the LUNs in your production
environment, all of the attributes that are unique to your production environment are also
copied, such as network attributes, resource configuration, and system names. It is
important that you perform a manual IPL when you first start the system or partition so that
you can change the configuration attributes before the system fully starts. Examples of the
changes that you need to perform are:
– System Name, Local Location Name, and Default Location Name
You need to change these attributes before you restart SNA or APPC communications,
or prior to using BRMS.
Tip: You might want to create a “backup” startup program that you invoke during the
restart of a cloned i5/OS image so that you can automate many of the configuration
attribute changes that otherwise need manual intervention.
– TCP/IP network attributes
You need to reassign a new IP address for the new system and reconfigure any related
attributes before the cloned image is added to the network, either for performing a full
system save or for performing any read-only operations such as database queries or
report printing.
– System name in the relational database directory entry
You might need to update this entry using the WRKRDBDIRE command before you
start any database activities that rely on these attributes.
The hardware resource configuration will not match what is on the production system and
needs to be updated prior to starting any network or tape connectivity.
Remember that any jobs in the job queue will still be there, and any scheduled entries in
the job scheduler will also be there. You might want to clear job queues or hold the job
scheduler on the backup server to avoid any updates to the files, enabling you to have a
true point-in-time instance of your production server.
You must understand the usage of BRMS when saving from a FlashCopy image of your
production system (see “Using Backup Recovery and Media Services with FlashCopy” on
page 89.)
Important: If you have to restore your application data or libraries back on the
production system, do not restore any journal receivers that are associated with that
library. Use the OMTOBJ parameter during the restore library operation.
BRMS for V5R3 has been enhanced to support FlashCopy by adding more options that
can be initiated prior to starting the FlashCopy operation.
For more information about using BRMS with FlashCopy, see 1.2.5, “Using Backup Recovery
and Media Services with FlashCopy” on page 7.
Note the following considerations when planning for Remote Mirror and Copy:
Determine the recovery point objective (RPO) for your business and clearly understand
the differences between synchronous storage-based data replication with Metro Mirror
and asynchronous replication with Global Mirror, and Global Copy.
When planning for a synchronous Metro Mirror solution, be aware of the maximum
supported distance of 300 km and expect a delay of your write I/O of around 1 ms per 100
km of distance (see the sketch after this list).
Have a sizing exercise completed to ensure that your system and storage configuration is
capable of handling additional I/O requests, that your I/O performance expectations are
met and that your network bandwidth supports your data replication traffic to meet your
recovery point objective (RPO) targets.
Acquire storage system licenses for the Copy Services functions to be implemented.
Unless you are replicating IASPs only, configure your System i production system and
target system with boot from SAN for faster recovery times.
Sufficient capacity (processor, memory, I/O towers, IOPs, IOAs, and storage) is reserved
to bring up the target environment, either in an LPAR or on a separate system that is
locally available in the same data center complex.
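As a minimal sketch of the Metro Mirror distance consideration mentioned in the list above, the following Python snippet estimates the extra write service time using the rule of thumb of roughly 1 ms per 100 km; the function name and exact scaling are illustrative assumptions, not a precise latency model.

def metro_mirror_write_delay_ms(distance_km, ms_per_100_km=1.0):
    # Rule of thumb quoted above: about 1 ms of added write service time
    # per 100 km of synchronous replication distance.
    return distance_km / 100.0 * ms_per_100_km

for km in (50, 100, 300):  # 300 km is the maximum supported distance
    print(km, "km adds about", metro_mirror_write_delay_ms(km), "ms per write I/O")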
These tools provide a set of functions to combine PPRC, IASP, and i5/OS cluster services for
coordinated switchover and failover processing through a cluster resource group (CRG) which
is not provided by stand-alone Copy Services management tools such as TPC-R or DS CLI.
This solution provides the benefit of the Remote Copy function and coordinated switching of
operations, which gives you good data resiliency capability if the replication is done
synchronously.
Figure 4-6 Enhanced functionality of integrated System i Copy Services Management Tools (the System i Copy Services Toolkit: a fully automated solution for i5/OS with clustering and Copy Services management and easy-to-use Copy Services setup scripts)
One of the biggest advantages of using IASP is that you do not need to shut down the
production server for switching over to your recovery system. A vary off of the IASP ensures
that data is written to the disk prior to initiating a switchover. HASM or the toolkit enables you
to attach the second copy to a backup server without an IPL. Replication of IASPs only
instead of your whole System i storage space can also help you to reduce your network
bandwidth requirements for data replication by excluding write I/O to temporary objects in
*SYSBAS. You also have the ability to combine this solution with other functions, such as
FlashCopy, for additional benefits such as save window reduction.
Note the following considerations when planning for Remote Mirror and Copy of IASP:
Complete the feasibility study for enabling your applications to take advantage of IASP. For
the latest information about high availability and resources on IASP, refer to the System i
high availability Web site:
http://www.ibm.com/eserver/iseries/ha
Ensure that you have i5/OS 5722-SS1 option 41 - Switchable resources installed on your
system and that you have set up an IASP environment.
Keep in mind that journaling your database files is still required, even when your data is
residing in an IASP.
Objects that reside in *SYSBAS, that is, the disk space that is not in an IASP, must be
maintained at equal levels on both the production and backup systems. You can do this by
using the software solutions offered by one of the High Availability Business Partners
(HABPs).
Set up IASPs and install your applications in IASP. After the application is prepared to run
in an IASP and is tested, implement HASM or the System i Copy Services Toolkit, which is
provided as a service offering from IBM STG lab services.
The toolkit contains the code and services needed to implement a disaster recovery solution.
For more information about the toolkit, contact the High/Continuous Availability and Cluster
Note: Avoid putting more than one storage subsystem host port into a switch zone with
System i FC adapters. At any given time, a System i FC adapter uses only one of the
available storage ports in the switch zone, whichever reports in first. A careless
configuration of the SAN switch with multiple System i FC adapters having access to
multiple storage ports can result in performance degradation, because an excessive
number of System i FC adapters might accidentally share the same link to the storage port.
Refer to Chapter 5, “Sizing external storage for i5/OS” on page 115 for recommendations
on the numbers of FC adapters per host ports.
If the IBM System Storage disk subsystem is connected remotely to a System i host, or if
local and remote storage subsystems are connected using SAN, plan for enough FC links
to meet the I/O requirement of your workload.
If extenders or dense wavelength division multiplexing (DWDMs) are used for remote
connection, take into account their expected latency when planning for performance.
If FC over IP is planned for remote connection, carefully plan for the IP bandwidth.
4.2.6 Planning for capacity
When planning the capacity of external disk storage for System i environments, ensure that
you understand the difference between the following three capacity terms of the DS8000 and
DS6000 series:
Raw capacity
Effective capacity
Capacity usable for i5/OS
In this section, we explain these capacity terms and highlight the differences between them.
Raw capacity
Raw capacity of a DS, also referred to as physical capacity, is the capacity of all physical disk
drives in a DS system including the spare drives. When calculating raw capacity, we do not
take into account any capacity that is needed for parity information of RAID protection. We
simply multiply the number of disk drive modules (DDMs) by their capacity. Consider the
example where a DS8000 has five disk drive enclosures (each enclosure has 16 disk drives)
of 73 GB. Thus, the DDMs have 5.84 TB of raw capacity based on the following equation:
5 x 16 x 73 GB = 5840 GB, which is 5.84 TB of raw capacity
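The same calculation can be expressed as a small Python sketch; the function name is illustrative, and the inputs are the enclosure count, drives per enclosure, and DDM size from the example above.

def raw_capacity_gb(enclosures, ddms_per_enclosure, ddm_gb):
    # Raw (physical) capacity is simply DDM count times DDM size;
    # spares and RAID parity are deliberately not subtracted.
    return enclosures * ddms_per_enclosure * ddm_gb

print(raw_capacity_gb(5, 16, 73))  # 5840 GB, that is 5.84 TB, as in the example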
Note: Figure 4-7 shows only front disk enclosures. However, there are actually as many
back enclosures, that is up to eight enclosures per base frame and up to 16 enclosures per
expansion frame.
In DS8300 (4-way processors), there can be up to eight DA pairs. The pairs are connected in
the following order: 2, 0, 4, 6, 7, 5, 3, and 1. They are connected to arrays in the same way as
described for DS8100. All the DA pairs are filled with arrays until eight arrays per DA pair are
reached. DA pair 0 and 2 are used for more than eight arrays if needed.
Note: Figure 4-8 shows only the front disk enclosures. However, there are actually as
many back enclosures, that is up to eight enclosures per base frame and up to 16
enclosures per expansion frame.
Spares in DS8000
In DS8000, a minimum of one spare is required for each array site (or array) until the following
conditions are met:
Minimum of four spares per DA pair
Minimum of four spares of the largest capacity array site on the DA pair
Minimum of two spares of capacity and an RPM greater than or equal to the fastest array
site of any given capacity on the DA pair
Knowing the rule of how DA pairs are used, we can determine the number of spares that are
needed in a DS configuration and which RAID arrays will have a spare. If there are DDMs of a
different size, more work is needed to calculate which arrays will have spares.
Consider the same example for which we calculate raw capacity in “Raw capacity” on
page 93, and now calculate the spares. Consider a DS8100 with 10 array sites (10 arrays) of 73 GB
DDMs, where all of the arrays are RAID-5. Eight arrays are connected to DA pair 2. The first four
arrays have a spare (6+P arrays) to fulfill the rule of a minimum of four spares per DA pair. The
next four arrays on this DA pair are without a spare (7+P arrays). Two arrays are connected to
DA pair 0, and each of them has a spare (6+P arrays).
Figure 4-9 illustrates this example, which is a result of the DS CLI command lsarray.
Array State Data RAIDtype arsite Rank DA Pair DDMcap (Decimal GB)
======================================================================
A0 Assigned Normal 5 (6+P) S1 R0 0 73.0
A1 Assigned Normal 5 (6+P) S2 R1 0 73.0
A2 Assigned Normal 5 (6+P) S3 R2 2 73.0
A3 Assigned Normal 5 (6+P) S4 R3 2 73.0
A4 Assigned Normal 5 (6+P) S5 R4 2 73.0
A5 Assigned Normal 5 (6+P) S6 R5 2 73.0
A6 Assigned Normal 5 (7+P) S7 R6 2 73.0
A7 Assigned Normal 5 (7+P) S8 R7 2 73.0
A8 Assigned Normal 5 (7+P) S9 R8 2 73.0
A9 Assigned Normal 5 (7+P) S10 R9 2 73.0
Figure 4-9 Sparing rule for DS8000
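The pattern in Figure 4-9 can be approximated with a short Python sketch. This is a simplified model of the single-DDM-size case only (the function and input names are illustrative): each DA pair keeps a minimum of four spares, so the first four RAID-5 arrays placed on a DA pair are built as 6+P with a spare and any further arrays on that DA pair are 7+P. Mixed DDM sizes and RPMs are not covered by this sketch.

def array_types(arrays_per_da_pair):
    # For each DA pair, the first four arrays get a spare (6+P), the rest are 7+P.
    result = []
    for da_pair, count in arrays_per_da_pair.items():
        for i in range(count):
            result.append((da_pair, "6+P (with spare)" if i < 4 else "7+P"))
    return result

# Example from the text: eight arrays on DA pair 2 and two arrays on DA pair 0.
for da_pair, rank_type in array_types({2: 8, 0: 2}):
    print("DA pair", da_pair, rank_type)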
Spares in DS6000
DS6000 has two device adapters or one device adapter pair that is used to connect disk
drives in two FC loops, as shown in Figure 4-10 and Figure 4-11. In DS6000, a minimum of
one spare is required for each array site until the following conditions are met:
Minimum of two spares on each FC loop
Minimum of two spares of the largest capacity array site on the FC loop
Minimum of two spares of capacity and rpm greater than or equal to the fastest array site
of any given capacity on the DA pair
Therefore, if only a single RAID-5 array is configured, then one spare is in the server
enclosure. If two RAID-5 arrays are configured, two spares are present in the enclosure as
shown in Figure 4-10. This figure shows the first expansion enclosure and its location on the
second FC loop, which is separate from the server enclosure FC loop. Therefore the same
sparing rules apply. That is, if the expansion enclosure has only one RAID-5 array, there is
one spare. If two RAID arrays are configured in the expansion enclosure, then two spares are
present.
Figure 4-10 DS6000 spares for RAID-5
Effective capacity
Effective capacity of a DS system is the amount of storage capacity that is available for the
host system after the logical configuration of DS has been completed. However, the actual
capacity that is visible by i5/OS is smaller than the effective capacity. Therefore, we discuss
the actual usable capacity for i5/OS in “i5/OS LUNs and usable capacity for i5/OS” on
page 99.
Effective capacity of a rank depends on the number of spare disks in the corresponding array
and on the type of RAID protection of the array. When calculating the effective capacity of a rank,
we take into account the capacity of the spare disk, the capacity needed for RAID parity, and
the capacity needed for metadata, which internally describes the logical to physical volume
mapping. Also, the effective capacity of a rank depends on the type of rank, either CKD or fixed
block. Because i5/OS uses fixed block ranks, we limit our discussion to these ranks.
Table 4-7 shows the effective capacity of fixed block 8-width RAID ranks in DS6000 in decimal
GB and binary GB. It also shows the number of extents.
RAID type   DDM size   Rank type   Extents   Effective capacity (binary GB)   Effective capacity (decimal GB)
RAID-5      73 GB      6+P+S       382       382                              410.17
RAID-5      73 GB      7+P         445       445                              477.81
RAID-5      146 GB     6+P+S       773       773                              830.00
RAID-5      146 GB     7+P         902       902                              968.51
RAID-5      300 GB     6+P+S       1576      1576                             1692.21
RAID-5      300 GB     7+P         1837      1837                             1972.46
RAID-10     73 GB      3+3+2S      190       190                              204.01
RAID-10     73 GB      4+4         254       254                              272.73
RAID-10     146 GB     3+3+2S      386       386                              414.46
RAID-10     146 GB     4+4         515       515                              552.97
RAID-10     300 GB     3+3+2S      787       787                              845.03
RAID-10     300 GB     4+4         1050      1050                             1127.42
Table 4-8 shows the effective capacities of 4-width RAID ranks in DS6000.
RAID type   DDM size   Rank type   Extents   Effective capacity (binary GB)   Effective capacity (decimal GB)
RAID-5      73 GB      2+P+S       127       127                              136.36
RAID-5      73 GB      3+P         190       190                              204.01
RAID-5      146 GB     2+P+S       256       256                              274.87
RAID-5      146 GB     3+P         386       386                              414.46
RAID-5      300 GB     2+P+S       524       524                              562.64
RAID-5      300 GB     3+P         787       787                              845.03
RAID-10     73 GB      1+1+2S      62        62                               66.57
RAID-10     73 GB      2+2         127       127                              136.36
RAID-10     146 GB     1+1+2S      127       127                              136.36
RAID-10     146 GB     2+2         256       256                              274.87
RAID-10     300 GB     1+1+2S      261       261                              280.24
RAID-10     300 GB     2+2         524       524                              562.64
As an example, we calculate the effective capacity for the same DS configuration as we use in
“Raw capacity” on page 93, and “Spare disk drives” on page 93. For a DS8100 with 10
RAID-5 ranks of 73 GB DDMs, six ranks are 6+P+S and four ranks are 7+P. The effective
capacity is:
(6 x 414.46 GB) + (4 x 483.18 GB) = 4419.48 GB
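The same arithmetic is shown in the following Python sketch; the rank counts and per-rank effective capacities are the values quoted in the example above, and the variable names are illustrative only.

ranks = {"6+P+S": (6, 414.46), "7+P": (4, 483.18)}  # rank type: (count, decimal GB per rank)
effective_gb = sum(count * gb for count, gb in ranks.values())
print(round(effective_gb, 2))  # 4419.48 GB, matching the example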
A LUN on DS8000 and DS6000 is formed of so-called extents with a size of 1 binary GB.
Because i5/OS LUN sizes expressed in binary GB are not whole multiples of 1 GB, part of
the space of an assigned extent is not used, but it also cannot be used for other LUNs.
Table 4-9 shows the models of i5/OS LUNs, their sizes in decimal GB, the number of extents
they use, and the percentage of usable space (not wasted) in decimal GB for each LUN.
When defining a LUN for i5/OS, it is possible to specify whether the LUN is seen by i5/OS as
RAID protected or as unprotected. You achieve this by specifying the correct model of i5/OS
LUN. Models A0x are seen by i5/OS as protected, while models A8x are seen as unprotected.
Here, x stands for 1, 2, 4, 5, 6, or 7.
The general recommendation is to define LUNs as protected models. However, take into
account that whenever a LUN is to be mirrored by i5/OS mirroring, you must define it as
unprotected. Whenever there will be mirrored and non-mirrored LUNs in the same ASP,
define the LUNs that are not to be mirrored as protected. When mirroring is started on an ASP,
only the unprotected LUNs in that ASP are mirrored; all the protected ones are left out
of mirroring. Consider this, for example, when using i5/OS prior to V6R1, where the load
source used to be mirrored between an internal disk and a LUN or between two LUNs to
provide path redundancy because multipathing was not yet supported for the load source unit.
LUNs are created in DS8000 or DS6000 storage from an extent pool which can contain one
or more RAID ranks. For information about the number of available extents from a certain
type of DS rank, see Table 4-6 on page 98, Table 4-7 on page 98, and Table 4-8 on page 99.
Note: We generally recommend configuring DS8000 or DS6000 storage with only a
single rank per extent pool for System i host attachment. This ensures that the storage space
for a LUN is allocated from a single rank only, which helps to better isolate potential
performance problems. It also supports the recommendation to use dedicated ranks for
System i servers or LPARs, not shared with other platform servers.
This implies that we also generally do not recommend using the DS8000 Release 3
function of storage pool striping (also known as extent rotation) for System i host
attachment. System i storage management already distributes its I/O as well as possible
across the available LUNs in an auxiliary storage pool, so using extent rotation to
distribute the storage space of a single LUN across multiple ranks would rather be
over-virtualization.
An i5/OS LUN uses a fixed number of extents. After a certain number of LUNs are created
from an extent pool, usually some space is left. Usually, we define as many LUNs as possible
of one size from an extent pool and optionally define LUNs of the next smaller size from the
space remaining in the extent pool. We try to define LUNs of as equal a size as possible in
order to have a balanced I/O rate and consequently better performance.
Table 4-10 and Table 4-11 show possibilities for defining i5/OS LUNs in an extent pool.
Table 4-10 LUNs from a 6+P+S rank of 73 GB DDMs (386 extents, 414.46 GB)
70 GB LUNs   35 GB LUNs   17 GB LUNs   8 GB LUNs   Used extents   Used decimal GB
5 1 0 0 381 387.96
0 11 1 0 380 404.3
0 10 3 0 381 404.22
0 9 5 0 382 404.14
0 8 7 0 383 404.06
0 0 22 1 382 394.47
0 0 21 3 381 394.11
0 0 20 5 380 393.75
0 0 19 7 379 393.39
0 0 18 10 386 401.62
0 0 17 12 385 401.26
0 0 16 14 384 400.9
Table 4-11 LUNs from a 7+P rank of 73 GB DDMs (450 extents, 483.18 GB)
70 GB LUNs   35 GB LUNs   17 GB LUNs   8 GB LUNs   Used extents   Used decimal GB
6 1 0 0 429 458.52
0 13 1 0 446 474.62
0 12 3 0 447 474.54
0 11 5 0 448 474.46
0 10 7 0 449 474.38
0 0 26 1 450 464.63
0 0 25 3 449 464.27
0 0 24 5 448 463.91
0 0 23 7 447 463.55
0 0 22 9 446 463.19
0 0 21 11 446 462.83
0 0 20 13 444 462.47
Use the following equation to determine the number of LUNs of a given size that one extent
pool can contain:
number of extents in extent pool - (number of LUNs x number of extents in a LUN) =
residual
Optionally, repeat the same operation to define smaller LUNs from the residual.
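The following Python sketch walks through this fill-and-repeat approach for the 386-extent 6+P+S pool shown in Table 4-10. It assumes that the number of extents for a LUN is its decimal size rounded up to whole binary GB, which reproduces the 33 and 17 extent counts implied by Table 4-10 for the 35.16 GB and 17.54 GB LUN models; the exact values come from the DS LUN model tables, and the helper name is illustrative.

import math

def extents_for_lun(lun_decimal_gb):
    # One extent is 1 binary GB, so round the decimal LUN size up to whole extents.
    return math.ceil(lun_decimal_gb * 1e9 / 2**30)

pool_extents = 386                    # 6+P+S rank of 73 GB DDMs (Table 4-10)
big = extents_for_lun(35.16)          # 33 extents for a 35.16 GB LUN
small = extents_for_lun(17.54)        # 17 extents for a 17.54 GB LUN

big_luns = pool_extents // big                    # 11 LUNs of 35 GB
residual = pool_extents - big_luns * big          # 23 extents left over
small_luns = residual // small                    # 1 LUN of 17 GB from the residual
print(big_luns, small_luns, residual - small_luns * small)  # 11 1 6

This reproduces the Table 4-10 row with eleven 35 GB LUNs and one 17 GB LUN, leaving six extents unused in the pool.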
Capacity Magic
Capacity Magic, from IntelliMagic™ (Netherlands), is a Windows-based tool that calculates
the raw and effective capacity of DS8000, DS6000 or ESS model 800 based on the input of
the number of ranks, type of DDMs, and RAID type. The input parameters can be entered
through a graphical user interface. The output of Capacity Magic is a detailed report and a
graphical representation of capacity.
Example of using Capacity Magic
In this example, we plan a DS8100 with 9 TB of effective capacity in RAID-5. We use
Capacity Magic to calculate the needed raw capacity and to present the structure of spares
and parity disks. The process that we use is as follows:
1. Launch Capacity Magic.
2. In the Welcome to Capacity Magic for Windows window (Figure 4-12), specify the type of
planned storage system and the desired way to create a Capacity Magic project. In our
example, we select DS6000 and DS8000 Configuration Wizard and select OK to guide
us through the Capacity Magic configuration.
4. Select the way in which you plan to define the extent pools. For System i attachment, we
define 1 Extent Pool for each RAID rank (see Figure 4-14). Click Next.
6. Specify the type of DDMs and the type of RAID protection. As shown in Figure 4-16,
observe that 73 GB DDMs and RAID-5 are already inserted as the default. In our example,
we leave the default values. Click Next.
8. Next, review the selected configuration and click Finish to continue, as shown in
Figure 4-18.
A detailed report displays the needed drive sets (megapacks), including disk enclosure
fillers, number of extents, raw capacity, effective capacity, and so on. Figure 4-20 shows a
part of this report.
It is equally important to ensure that the sizing requirements for your SAN configuration also
take into account the additional resources required when enabling advanced Copy Services
functions such as FlashCopy or PPRC. This is particularly important if you are planning to
enable synchronous Metro Mirror storage-based replication or space efficient FlashCopy.
Attention: You must correctly size the Copy Services functions that are enabled at the
system level to account for additional I/O resources, bandwidth, memory, and storage
capacity. The use of these functions, either synchronously or asynchronously, can impact
the overall performance of your system. To reduce overhead by not duplicating the
temporary objects that are created in the system libraries, such as QTEMP, consider using
IASPs with Copy Services functions.
We recommend that you obtain i5/OS performance reports from data that is collected during
critical workload periods and size the DS8000 or DS6000 accordingly, for every System i
environment or i5/OS LPAR that you want to attach to a SAN configuration. For information
about how to size IBM System Storage external disk subsystems for i5/OS workloads see
Chapter 5, “Sizing external storage for i5/OS” on page 115.
With PCI-X, the maximum bus speed is increased to 133 MHz from a PCI maximum of
66 MHz. PCI-X is backward compatible and can run at slower speeds, which means that you
can plug a PCI-X adapter into a PCI slot and it runs at the PCI speed, not the PCI-X speed.
This can result in a more efficient use of card slots but potentially for the tradeoff of less
performance.
Attention: If the configuration rules and restrictions are not fully understood and followed,
it is possible to create a hardware configuration that does not work, marginally works, or
quits working when a system is upgraded to future software releases.
Follow these plugging rules for the #5760, #2787, and #2766 Fibre Channel Disk Controllers:
Each of these adapters requires a dedicated IOP. No other IOAs are allowed on that IOP.
For best performance, place these 64-bit adapters in 64-bit PCI-X slots. They can be
plugged into 32-bit or 64-bit PCI slots but the performance might not be optimized.
If these adapters are heavily used, we recommend that you have only one per
Multi-Adapter Bridge (MAB) boundary.
In general, spread Fibre Channel disk controller IOAs as evenly as possible among the
attached I/O towers, and spread I/O towers as evenly as possible among the I/O loops.
Refer to the recommendations in Table 4-12 for limiting the number of FC adapters per
System i I/O half-loop to prevent performance degradation due to congestion on the loop.
I/O half-loop   Maximum number of #2766/#2787 (transaction/sequential workload)   Maximum number of #5760 (transaction/sequential workload)   Maximum number of IOP-less adapters (transaction/sequential workload)
Our sizing is based on an I/O half-loop concept because, as shown in Figure 4-21, a
physically closed I/O loop with one or more I/O towers is actually used by the system as two
I/O half-loops. There is an exception to this, though, only for older iSeries hardware prior to
POWER5, where a single I/O tower per loop configuration resulted in only one half-loop being
actively used. As can be seen, with three I/O towers in a loop, one half-loop gets two I/O
towers and the other half-loop gets one I/O tower. The PHYP bringup code determines which
half-loop gets the extra I/O tower.
With the System i POWER6 12X loop technology, the parallel bus data width is increased
from the previous 8 bits used by HSL-1 and HSL-2 to 12 bits, which is where the name 12X
comes from, referring to the number of wires used for data transfer. In addition, with 12X the
clock rate is increased to 2.5 GHz compared to 2.0 GHz of the previous HSL-2 technology.
When using System i POWER6 with 12X loops for external storage attachment, plan to use
GX slot P1-C9 (the right slot when viewed from behind) in the CEC, which, in contrast to its
neighbor GX slot P1-C8, does not need to share bandwidth with the CEC’s internal slots.
Fully understanding your customer’s i5/OS workload I/O characteristics and then using
specific recommended analysis and sizing techniques to configure a DS and System i
solution is key to meeting the customer’s storage performance and capacity expectations. A
properly sized and configured DS system on a System i model provides the customer with an
optimized solution for their storage requirements. However, configurations that are drawn up
without proper planning or an understanding of workload requirements can result in poor
performance and even customer-impacting events.
In this chapter, we describe how to size a DS system for the System i platform. We present
the rules of thumb and describe several tools to help with the sizing tasks.
For good performance of a DS system with i5/OS workload, it is important to provide enough
resources, such as disk arms and FC adapters. Therefore, we recommend that you follow the
general sizing guidelines or rules of thumb even before you use the Disk Magic™ tool for
modeling performance of a DS system with the System i5 platform.
(Sizing flow: workload characteristics and statistics, other requirements such as HA and BC, and workload from other servers feed the rules of thumb to produce a proposed configuration, including the SAN fabric; the configuration is then modeled with Disk Magic and adjusted based on the Disk Magic modeling until the requirements and expectations are met.)
Figure 5-2 illustrates the concept of single-level storage.
When the application performs an I/O operation, the portion of the program that contains read
or write instructions is first brought into main memory where the instructions are then
executed.
With the read request, the virtual addresses of the needed record are resolved, and for each
needed page, storage management first looks to see if it is in the main memory. If the page is
there, it is used to resolve the read request. However, if the corresponding page is not in main
memory, a page fault is encountered and it must be retrieved from disk. When a page is
retrieved, it replaces another page in memory that recently was not used; the replaced page
is paged out (destaged) to disk.
Similarly writing a new record or updating an existing record is done in main memory, and the
affected pages are marked as changed. A changed page normally remains in main memory
until it is written to disk as a result of a page fault. Pages are also written to disk when a file is
closed or when write-to-disk is forced by a user through commands and parameters. Also,
database journals are written to the disk.
When a page must be retrieved from disk or a page is written to disk, System Licensed
Internal Code (SLIC) storage management translates the virtual address to a real address of
a disk location and builds an I/O request to disk. The amount of data that is transferred to disk
at one I/O request is called a blocksize or transfer size. From the way reads and writes are
performed in single-level storage, you would expect that the amount of transferred data is
always one page or 4 KB. In fact, data is usually blocked by the i5/OS database to minimize
disk I/O requests and transferred in blocks that are larger than 4 KB. The blocking of
transferred data is done based on the attributes of database files, the amount that a file
extends, user commands, the usage of expert cache, and so on.
An I/O request to disk is created by the IOA device driver (DD) which for System i POWER6
now resides in SLIC instead of inside the I/O processor (IOP). It proceeds through the RIO
bus to the Fibre Channel I/O adapter (IOA) which is used to connect to the external storage
subsystem. Each IOA accesses a set of logical volumes, logical unit numbers (LUNs), in a DS
system; each LUN is seen by i5/OS as a disk unit. Therefore, the I/O request for a certain
System i disk (LUN) goes to an IOA to which a particular LUN is assigned; I/O requests for a
LUN are queued in IOA. From IOA, the request proceeds through an FC connection to a host
adapter in the DS system. The FC connection topology between IOAs and storage system
host adapters can be point-to-point or can be done using switches.
In a DS system, an I/O request is received by the host adapter. From the host adapter, a
message is sent to the DS processor that is requesting access to a disk track that is specified
for that I/O operation. The following actions are then performed for a read or write operation:
Read operation: A directory lookup is performed to determine whether the requested track is
in cache. If the requested track is not found in the cache, the corresponding disk track is staged to cache.
The setup of the address translation is performed to map the cache image to the host
adapter PCI space. The data is then transferred from cache to host adapter and further to
the host connection, and a message is sent indicating that the transfer is completed.
Write operation: A directory lookup is performed to determine whether the requested track is
in cache. If the requested track is not in cache, segments in the write cache are allocated for the track
image. Setup of the address translation is performed to map the write cache image pages
to the host adapter PCI space. The data is then transferred through DMA from the host
adapter memory to the two redundant write cache instances, and a message is sent
indicating that the transfer is completed.
Figure 5-4 shows the described I/O flow between System i POWER6 and a DS8000 storage
system without the previous IOP.
(Figure 5-4 depicts the path from main memory and the SLIC IOA device driver in the i5/OS LPAR on System i POWER6, across the RIO bus to PCI-X IOAs, over Fibre Channel connections through SAN switches to the DS8000 host adapters, processors with cache/NVS, and device adapters.)
Performance measurements were done in IBM Rochester that show how disk response time
relates to throughput. These measurements show the number of transactions per second for
a database workload. This workload is used as an approximation for an i5/OS transaction
workload. The measurements were performed for different configurations of DS6000
connected to the System i platform and different workloads. The graphs in Figure 5-5 show
disk response time at workloads for 25, 50, 75, 100, and 125 database users.
(Figure 5-5, three graphs of throughput in ops/sec versus disk response time in ms: throughput grows from roughly 5,600 to 6,300 ops/sec, from 9,000 to 10,800 ops/sec, and, for the largest configuration labeled 4* (2 Fbr 7 LUNs), from 12,600 to 14,700 ops/sec as disk response time increases from about 2 ms to 9 ms.)
From the three graphs, notice that as we increase the number of FC adapters and LUNs, we
gain more throughput. If we merely increase the throughput for a given configuration, the
disk response time grows sharply.
Following these guidelines helps ensure that basic performance requirements are met and
eliminates future performance bottlenecks as much as possible.
When a page or a block of data is written to disk space, storage management spreads it over
multiple disks. By spreading data over multiple disks, it is achieved that multiple disk arms
work in parallel for any request to this piece of data, so writes and reads are done faster.
When using external storage with i5/OS, what SLIC storage management sees as a “physical”
disk unit is actually a logical unit (LUN) composed of multiple stripes of a RAID rank in the
IBM DS storage subsystem (see Figure 5-6). A LUN uses multiple disk arms in parallel,
depending on the width of the RAID rank used. For example, the LUNs configured on a single
DS8000 RAID-5 rank use six or seven disk arms in parallel, while evenly distributing these
LUNs over two ranks uses twice as many disk arms.
Typically, the number of physical disk arms that should be made available for a
performance-critical i5/OS transaction workload prevails over the capacity requirements.
Important: Determining the number of RAID ranks for a System i external storage solution
by looking at how many ranks of a given physical DDM size and RAID protection level
would be required for the desired storage capacity typically does not satisfy the
performance requirements of System i workload.
Note: We generally do not recommend using lower speed 10K RPM drives for i5/OS
workload.
The calculation for the recommended number of RAID ranks is as follows, providing that
reads per second and writes per second of an i5/OS workload are known:
A RAID-5 rank of 8 x 15K RPM DDMs without a spare disk (7+P rank) is capable of a
maximum of 1700 disk operations per second at 100% utilization without cache hits. This is
valid for both DS8000 and DS6000.
We take into account a recommended 40% utilization of a rank, so the rank can handle
40% of 1700 = 680 disk operations per second. From the same measurement, we can
calculate the maximum number of disk operations per second for other RAID ranks by
calculating the disk operations per second for one disk drive and then multiplying them by the
number of active drives in a rank. For example, a RAID-5 rank with a spare disk (6+P+S
rank) can handle a maximum of 1700 / 7 * 6 = 1458 disk operations per second. At the
recommended 40% utilization, it can handle 583 disk operations per second.
We calculate the disk operations of the i5/OS workload so that we take into account the
percentage of read cache hits, the percentage of write cache hits, and the fact that each write
operation in RAID-5 results in 4 disk operations (RAID-5 write penalty). If cache hits are not
known, we make a safe assumption of 20% read cache hits and 30% write cache hits. We use
the following formula:
disk operations = (reads/sec - read cache hits) + 4 * (writes/sec - write cache hits)
As an example, a workload of 1000 reads per second and 700 writes per second results
in:
(1000 - 20% of 1000) + 4 * (700 - 30% of 700) = 2760 disk operations/sec
To obtain the needed number of ranks, we divide disk operations per second of i5/OS
workload by the maximum I/O rate one rank can handle at 40% utilization.
As an example, for the workload with the previously calculated 2760 disk operations per
second, we need the following number of 7+P RAID-5 ranks:
2760 / 680 ≈ 4
So, we recommend using 4 ranks in the DS for this workload.
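The rule of thumb just described can be captured in a short Python sketch. It uses the 1700 ops/sec capability of a 7+P rank, the 40% rank utilization target, the RAID-5 write penalty of 4, and the 20%/30% cache-hit defaults from the text; the function name and parameter names are illustrative only.

def required_raid5_ranks(reads_per_sec, writes_per_sec,
                         read_cache_hit=0.20, write_cache_hit=0.30,
                         rank_max_ops=1700, target_utilization=0.40):
    # Disk operations seen by the ranks, including the RAID-5 write penalty of 4.
    disk_ops = (reads_per_sec * (1 - read_cache_hit)
                + 4 * writes_per_sec * (1 - write_cache_hit))
    # Divide by what one rank can sustain at the recommended utilization.
    return disk_ops / (rank_max_ops * target_utilization)

print(required_raid5_ranks(1000, 700))  # about 4.06, so plan for roughly four ranks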
A handy reference for determining the recommended number of RAID ranks for a known
System i workload is provided in Table 5-1 on page 123, which shows the I/O
capabilities of different RAID-5 and RAID-10 rank configurations. The I/O capability numbers
in the two columns for the host I/O workload examples of 70/30 and 50/50 read/write ratios
imply no cache hits and 40% rank utilization. If the System i workload is similar to one of the
two listed read/write ratios a rough estimate for the number of recommended RAID ranks can
simply be determined by dividing the total System i I/O workload by the listed I/O capability for
the corresponding RAID rank configuration.
Similar to the number of ranks, to avoid potential I/O performance bottleneck due to
undersized configurations it is also important to properly size the number of Fibre Channel
adapters used for System i external storage attachment. To better understand this sizing, we
present a short description of the data flow through IOPs and the FC adapter (IOA).
A block of data in main memory consists of an 8-byte header and actual data that is 512 bytes
long. When the block of data is written from main memory to external storage or read into main
memory from external storage, requests are first sent to the IOA device driver, which converts
the requests to generate a corresponding SCSI command understood by the disk unit or
storage system. The IOA device driver either resides within the IOP for IOP-based IOAs or
within SLIC for IOP-less IOAs. In addition, data descriptor lists (DDLs) tell the IOA where in
system memory the data and headers reside. See Figure 5-7.
With IOP-less Fibre Channel, architectural changes in the process of getting the eight headers
for a 4 KB page out of or into main memory, by packing them into just one DMA request,
reduce the latency for disk I/O operations and put less burden on the PCI-X bus.
You need to size the number of FC adapters carefully for the throughput capability of an
adapter. Here, you must also take into account the capability of the IOP and the PCI
connection between the adapter and IOP.
We performed several measurements in the testing for this book, from which we can size the
capability of an adapter in terms of maximum I/O per second at different block sizes or maximum
MBps. Table 5-2 shows the results of measuring the maximum I/O per second for different System
i Fibre Channel adapters and the I/O capability at 70% utilization, which is relevant for sizing
the number of required System i FC adapters for a known transactional I/O workload.
Table 5-2 Maximal I/O per second per Fibre Channel IOA
IOP/IOA                  Maximal I/O per second per port   I/O per second per port at 70% utilization
IOP-less 5749 or 5774    15000                             10500
2844 IOP / 5760 IOA      3900                              3200
2844 IOP / 2787 IOA      3650                              2555
Table 5-3 shows the maximum throughput for System i Fibre Channel adapters based on
measurement of large 256 KB block sequential transfers and typical transaction workload with
rather small 14 KB block transfers.
When using IOP-based FC adapters there is another reason why the number of FC adapters
is important for performance. With IOP-based FC adapters only one I/O operation per path to
a LUN can be done at a time, so I/O requests could queue up in each LUN queue in the IOP
resulting in undesired I/O wait time. SLIC storage management allows a maximum of six I/O
requests in an IOP queue per LUN and path. By using more FC adapters for adding paths to
a LUN the number of active I/O and the number of available IOP LUN I/O queues can be
increased.
Note: For IOP-based Fibre Channel, using more FC adapters for multipath, and thereby
adding more paths to a LUN, can significantly reduce the disk I/O wait time.
With IOP-less Fibre Channel support, the limit of one active I/O per LUN per path has been
removed, and up to six active I/Os per path and LUN are now supported. This inherently
provides six times better I/O concurrency compared to the previous IOP-based Fibre Channel
technology and makes multipath for IOP-less primarily a function for redundancy, with less
potential performance benefit than with IOP-based Fibre Channel technology.
When a System i customer plans for external storage, the customer usually decides first how
much disk capacity is needed and then asks how many FC adapters will be necessary to
handle the planned capacity. It is useful to have a rule of thumb to determine how much disk
capacity to plan per FC adapter. We calculate this by using the access density of an i5/OS
workload. The access density of a workload is the number of I/O per second per GB and
denotes how “dense” I/O operations are on available disk space.
To calculate the capacity per FC adapter, we take the maximum I/O per second that an
adapter can handle at 70% utilization (see Table 5-2). We divide this maximum number of I/O
per second by the access density to get the capacity per FC adapter. We recommend that LUN
utilization does not exceed 40%. Therefore, we apply 40% to the calculated capacity.
Consider this example. An i5/OS workload has an access density of 1.4 I/O per second per
GB. Adapter 5760 at IOP 2844 is capable of a maximum of 3200 I/O per second at 70%
utilization. Therefore, it can handle a capacity of 2285 GB, that is:
3200 / 1.4 = 2285 GB
After applying 40% for LUN utilization, the sized capacity per adapter is 40% of 2285 GB,
that is:
2285 * 40% = 914 GB
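The same rule of thumb can be expressed as a short calculation. This sketch takes the Table 5-2 adapter capability at 70% utilization as input; the function name and defaults are illustrative only.

def capacity_per_fc_adapter_gb(adapter_ios_at_70pct, access_density_io_per_gb,
                               lun_utilization=0.40):
    """Rule-of-thumb disk capacity (GB) to plan per Fibre Channel adapter.

    adapter_ios_at_70pct is the adapter capability at 70% utilization from
    Table 5-2; access density is the workload's I/O per second per GB.
    The result is reduced to the recommended 40% LUN utilization.
    """
    return adapter_ios_at_70pct / access_density_io_per_gb * lun_utilization


# Example from the text: 5760 IOA on a 2844 IOP, access density 1.4 I/O per second per GB
print(round(capacity_per_fc_adapter_gb(3200, 1.4)))    # about 914 GB per adapter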
In addition to properly sizing the number of FC adapters to use for external storage
attachment, also follow the guidelines for placing IOPs and FC adapters (IOAs) in the
System i platform (see 4.2.7, “Planning considerations for performance” on page 111).
With IOP-based Fibre Channel, LUN size considerations are very important from a System i
perspective because of the limit of one active I/O per path per LUN. (We discuss this limitation
in 5.2.2, “Number of Fibre Channel adapters” on page 123 and mention multipath for
IOP-based Fibre Channel as a solution that can reduce the wait time, because each additional path
to a LUN enables one more active I/O to this LUN.) For the same reason of increasing the
amount of active System i I/O with IOP-based Fibre Channel, we recommend using
more, smaller LUNs rather than fewer, larger LUNs.
Note: As a rule of thumb for IOP-based Fibre Channel, we recommend choosing the LUN
size so that at least two LUNs fit within the capacity of one DDM.
With a 73 GB DDM capacity in the DS system, a customer can define 35.16 GB LUNs. For even
better performance, 17.54 GB LUNs can be considered.
IOP-less Fibre Channel supports up to six active I/Os for each path to a LUN, so compared to
IOP-based Fibre Channel, there is no longer a stringent requirement to use small LUN sizes
for better performance.
Note: With IOP-less Fibre Channel, we generally recommend using a LUN size of
70.56 GB, that is, protected or unprotected volume model A04/A84, when configuring
LUNs on external storage.
Currently, the only exception, for which we recommend using LUN sizes larger than 70.56 GB,
is when the customer anticipates low capacity usage within the System i auxiliary storage
pool (ASP). For low ASP capacity usage, using larger LUNs can provide better performance
by reducing data fragmentation on the disk subsystem’s RAID array, resulting in fewer disk
arm movements, as illustrated in Figure 5-8.
Figure 5-8 RAID array with low capacity usage: one “large” LUN (LUN 0) compared to “regular” size LUNs (LUN 0 through LUN 3)
When allocating the LUNs for i5/OS, consider the following guidelines for better performance:
Balance the activity between the two DS processors, referred to as cluster0 and cluster1,
as much as possible. Because each cluster has separate memory buses and cache, this
maximizes the use of those resources.
In the DS system, an extent pool has an affinity to either cluster0 or cluster1. We define this
by specifying a rank group for a particular extent pool, with rank group 0 served by cluster0
and rank group 1 served by cluster1. Therefore, define the same number of extent pools in
rank group 0 as in rank group 1 for the i5/OS workload and allocate the LUNs evenly
among them.
Recommendation: We recommend that you define one extent pool from one rank to
keep better track of LUN placement and to ensure that LUNs are spread evenly between the
two processors.
Balance the activity of a critical application among the device adapters in the DS system.
When choosing extent pools (ranks) for a critical application, make sure that they are
served as evenly as possible by the device adapters.
In the DS system, we define a volume group, which is a group of LUNs that are assigned to one
System i FC adapter or to multiple FC adapters in a multipath configuration. Create a volume
group so that it contains LUNs from the same rank group, that is, do not mix even logical subsystem
(LSS) LUNs served by cluster0 and odd LSS LUNs served by cluster1 on the same System i
host adapter. This configuration helps to optimize sequential read performance by
making the most efficient use of the available DS8000 RIO loop bandwidth.
A heavy workload might hold the disk arms in a rank and the cache almost all of the time, so
another workload rarely has a chance to use them. Similarly, if two heavy critical
workloads share a rank, they can prevent each other from using the disk arms and cache at the
times when both are busy.
Therefore, we recommend that you dedicate ranks to a heavy critical i5/OS workload such as
SAP® or banking applications. If the other workload does not exceed 10% of the
workload from your critical i5/OS application, consider sharing the ranks.
Consider sharing ranks among multiple i5/OS systems or among i5/OS and open systems
when the workloads are less important and not I/O intensive. For example, test and
development systems, mail servers, and so on can share ranks with other systems.
With DS8000, we recommend that you size up to four 2 Gb FC adapters per one 2 Gb DS
port. In DS6000, consider sizing two System i FC adapters for one DS port. Figure 5-9 shows
an example of SAN switch zoning for four System i FC adapters accessing one DS8000 host
port.
Figure 5-9 SAN switch zoning example: 2 Gb System i FC IOAs zoned four per DS8000 host port
Consider the following guidelines for connecting System i 4 Gb FC IOAs to 4 Gb adapters in
the DS8000:
Connect one 4 Gb IOA port to one port on DS8000, provided that all four ports of the
DS8000 adapter card are used.
Connect two 4 Gb IOA ports to one port in DS8000, provided that only two ports of the
DS8000 adapter card are used.
Figure 5-10 shows the disk response time measurements of the same database workload
running in a single path and dual path at different I/O rates. The blue line represents a single
path, and the yellow line represents dual path.
Figure 5-10 Response Time (ms) versus Throughput (IO/s) for i5 single-path and i5 dual-path
The response time with a single path starts to increase drastically at about 1200 I/Os
per second. With two paths, it starts to increase at about 1800 I/Os per second. From this, we
can derive a rough rule of thumb that, for IOP-based Fibre Channel, multipath with two paths is
capable of 50% more I/Os than a single path and provides significantly shorter wait time than
a single path. Disk response time consists of service time and wait time. Multipath improves only
the wait time; it does not influence the service time. With IOP-less Fibre
Channel, which allows six times as many active I/Os as IOP-based Fibre Channel, the performance
improvement from using multipath is of minor importance, and multipath is used primarily for
redundancy.
For more information about how to plan for multipath, refer to 4.2.2, “Planning considerations
for i5/OS multipath Fibre Channel attachment” on page 81.
The i5/OS performance reports (Resource report - Disk utilization and System report - Disk
utilization) show the average number of I/Os per second for both the IASP and *SYSBAS. To see
how many I/Os per second actually go to an IASP, we recommend that you look at the System
report - Resource utilization. This report shows the database reads per second and writes
per second for each application job, as shown in Figure 5-12.
Add the database reads per second (synchronous DBR and asynchronous DBR) and the
database writes per second (synchronous DBW and asynchronous DBW) of all application jobs
in the IASP. This gives you the reads per second and writes per second of the IASP. Calculate the
number of reads per second and writes per second for *SYSBAS by subtracting the
reads per second of the IASP from the overall reads per second and the writes per
second of the IASP from the overall writes per second.
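If you script this split, the calculation is a pair of sums and subtractions. The following sketch is purely illustrative; the job figures are hypothetical and would come from the System report - Resource utilization.

def split_iasp_sysbas(total_reads, total_writes, iasp_jobs):
    """Split the overall reads/writes per second into IASP and *SYSBAS parts.

    iasp_jobs is a list of (reads_per_sec, writes_per_sec) tuples, one per
    application job in the IASP, where reads = synchronous DBR + asynchronous
    DBR and writes = synchronous DBW + asynchronous DBW.
    """
    iasp_reads = sum(r for r, _ in iasp_jobs)
    iasp_writes = sum(w for _, w in iasp_jobs)
    return {"IASP": (iasp_reads, iasp_writes),
            "*SYSBAS": (total_reads - iasp_reads, total_writes - iasp_writes)}


# Hypothetical example: overall 900 reads/s and 600 writes/s, three IASP jobs
print(split_iasp_sysbas(900, 600, [(200, 150), (300, 100), (100, 50)]))
# {'IASP': (600, 300), '*SYSBAS': (300, 300)}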
To allocate LUNs for the IASP and *SYSBAS, we recommend that you first create the LUNs for
the IASP and spread them across the available ranks in the DS system. From the remaining free space
on each rank, define (smaller) LUNs to use as *SYSBAS disk units. The reasoning for this
approach is that the first LUNs created on a RAID rank are created on the outer cylinders of
the disk drives, which provide a higher data rate than the inner cylinders.
The following sizing approach can help you prevent this undesired situation:
1. Use i5/OS Performance Tools to collect a resource report for disk utilization from the
production system, which accesses the FlashCopy source volumes, and the backup
system, which accesses the FlashCopy target volumes (see 5.4.3, “i5/OS Performance
Tools” on page 137).
2. Determine the amount of write I/O activity from the production and backup system for the
expected duration of the FlashCopy relationship, that is the duration of the system save to
tape.
3. Assuming that one track (64 KB) is moved to the repository for each write I/O and that 33% of
all writes are re-writes to the same track, and adding a 50% contingency, calculate the recommended
repository capacity as follows:
Recommended repository capacity [GB] = write IO/s x 67% x FlashCopy active time
[s] x 64 KB/IO / (1048576 KB/GB) x 150%
For example, let us assume an i5/OS partition with a total disk space of 1.125 TB, a
system save duration of 3 hours, and a given System i workload of 300 write I/O per
second.
The recommended repository size is then as follows:
300 IO/s x 67% x 10800 s x 64 KB/IO / (1048576 KB/GB) x 150% = 199 GB
So, the repository capacity needs to be only about 18% of the 1.125 TB virtual capacity for the copy
of the production system space.
To calculate the recommended number of physical disk arms for the repository volume space
depending on your write I/O workload in tracks per second (at 50% disk utilization), refer to
Table 5-4.
RAID level   Drive speed   Tracks per second per disk arm (at 50% utilization)
RAID-5       15K RPM       25
RAID-5       10K RPM       18
RAID-10      15K RPM       50
RAID-10      10K RPM       36
For example, if you are using RAID-5 with 15K RPM drives and your production host peak
write I/O throughput during the active time of the space efficient FlashCopy relationship is
600 write I/Os per second, then applying 67% (accounting for 33% re-writes) corresponds to
402 tracks per second and results in the following recommended number of disk arms:
402 tracks per second / (25 tracks per second per disk arm) = 16 disk arms of 15K RPM
disks with RAID-5
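Both the repository capacity formula and the disk arm guideline can be combined into a small helper. This is a sketch of the calculations above, not an IBM-provided tool; the dictionary keys and the rounding behavior are illustrative.

# Tracks per second one repository disk arm can absorb at 50% utilization (Table 5-4)
TRACKS_PER_SEC_PER_ARM = {("RAID-5", "15K"): 25, ("RAID-5", "10K"): 18,
                          ("RAID-10", "15K"): 50, ("RAID-10", "10K"): 36}


def repository_capacity_gb(write_io_per_sec, active_time_sec,
                           rewrite_fraction=0.33, track_kb=64, contingency=1.5):
    """Recommended space efficient FlashCopy repository capacity in GB."""
    tracks_per_sec = write_io_per_sec * (1 - rewrite_fraction)
    return tracks_per_sec * active_time_sec * track_kb / 1048576 * contingency


def repository_disk_arms(write_io_per_sec, raid="RAID-5", rpm="15K",
                         rewrite_fraction=0.33):
    """Recommended number of physical disk arms behind the repository volumes."""
    tracks_per_sec = write_io_per_sec * (1 - rewrite_fraction)
    return round(tracks_per_sec / TRACKS_PER_SEC_PER_ARM[(raid, rpm)])


# Examples from the text
print(round(repository_capacity_gb(300, 3 * 3600)))    # about 199 GB
print(repository_disk_arms(600, "RAID-5", "15K"))      # 16 disk arms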
To use Disk Magic for sizing the System i platform with the DS system, you need the following
i5/OS Performance Tools reports:
Resource report: Disk utilization section
System report: Disk utilization section
Optional: System report: Storage utilization section
Component report: Disk Activity section
For instructions on how to use Disk Magic to size System i with a DS system, refer to 5.5,
“Sizing examples with Disk Magic” on page 139, which presents several examples of using
Disk Magic for the System i platform. The Disk Magic Web site also provides a Disk Magic
Learning Guide that you can download, which contains a few step-by-step examples for using
Disk Magic for modelling external storage performance.
To use WLE, you select one or more workloads from an existing selection list and answer a
series of questions about each workload. Based on the answers, WLE generates a
recommendation and shows the predicted processor utilization.
WLE also provides the capability to model external storage for recommended System i
hardware. When the recommended System i models are shown in WLE, you can choose to
directly invoke Disk Magic and model external storage for this workload. Therefore, you can
obtain both recommendations for System i hardware and recommendations for external
storage in the same run of WLE combined with Disk Magic.
For an example of how to use WLE with Disk Magic, see 5.5.4, “Using IBM Systems
Workload Estimator connection to Disk Magic: Modeling DS6000 and System i for an existing
workload” on page 189.
IBM System Storage Productivity Center for Disk enables the device configuration and
management of SAN-attached devices from a single console. In addition, it includes
performance capabilities to monitor and manage the performance of the disks.
The functions of System Storage Productivity Center for Disk performance include:
Collect and store performance data and provide alerts
Provide graphical performance reports
Help optimize storage allocation
Provide volume contention analysis
When using System Storage Productivity Center for Disk to monitor a System i workload on
DS8000 or DS6000, we recommend that you inspect the following information:
Read I/O Rate (sequential)
Read I/O Rate (overall)
Write I/O Rate (normal)
Read Cache Hit Percentage (overall)
Write Response Time
Overall Response Time
Read Transfer Size
Write Transfer Size
Cache to Disk Transfer Rate
Write-cache Delay Percentage
Write-cache Delay I/O (I/O delayed due to NVS overflow)
Backend Read Response Time
Port Send Data Rate
Port Receive Data Rate
Total Port Data Rate (should be balanced among ports)
Port Receive Response Time
I/O per rank
Response time per rank
Response time per volume
Figure 5-13 shows the cache hit percentage in a System Storage Productivity Center graph.
Other applications tend to follow the same patterns as the System i benchmark compute
intensive workload (CIW). These applications typically have fewer jobs running transactions
that spend a substantial amount of time in the application itself. An example of such a
workload is Lotus® Domino Mail and Calendar.
In general, System i batch workloads can be I/O or compute intensive. For I/O intensive batch
applications, the overall batch performance is dependent on the speed of the disk subsystem.
For compute-intensive batch jobs, the run time likely depends on the processor power of the
System i platform. For many customers, batch workloads run with large block sizes.
Typically, batch jobs run during the night. For some environments, it is important that these
jobs finish on time to enable the daily transaction application to start on schedule. The period
of time available for the batch jobs to complete is called the batch window.
In many cases, you know when the peak periods or the most critical periods occur. If you
know when these times are, collect performance data during these periods. In some cases,
you might not know when the peak periods occur. In such a case, we recommend that you
collect performance data during a 24-hour period and in different time periods, for example,
during end-of-week and end-of-month jobs.
After the data is collected, produce a Resource report with a disk utilization section and use
the following guidelines to identify peak periods:
Look for the one hour with the most I/Os per second. You can insert the report into a
spreadsheet, calculate the hourly average of I/Os per second, and look for the maximum of
the hourly averages. Figure 5-15 shows part of such a spreadsheet.
For many customers, performance data shows patterns in block sizes, with significantly
different block sizes in different periods of time. If this is so, calculate the hourly average of
the block sizes and use the hour with the maximum block sizes as the second peak.
If you identified two peak periods, size the DS system so that both are accommodated.
Figure 5-15 Identifying the peak period for the System Storage Productivity Center
Performance Tools menu (excerpt): 5. Performance utilities, 6. Configure and manage tools,
7. Display performance data, 8. System activity, 9. Performance graphics, 10. Advisor
Selection or command
===> 2
13.On the Select Section for Report panel, select Disk Activity, and then select Time
Interval. Then select all intervals or just the intervals of the peak period. Press Enter to
start the job for report.
14.On the Print Performance report - Sample Data panel, for member, select 5. Resource
report.
15.On the Select Section for Report panel, select Disk Utilization and then select Time
Interval. Then select all intervals or just the intervals of the peak period. Press Enter to
start the job for the report.
16.To insert the reports into Disk Magic, transfer the reports from the spooled file to a PC
using iSeries Navigator.
17.In iSeries Navigator, expand the i5/OS system on which the reports are located. Expand
Basic Operations and double-click Printer output.
18.Performance reports in the spooled file are shown on the right side of the panel. Copy and
paste the necessary reports to your PC.
5.5.1 Sizing the System i5 with DS8000 for a customer with iSeries model 8xx
and internal disks
In this example, DS8000 is sized for a customer’s production workload. The customer is
currently running the workload on internal disks; performance reports from a peak period
are available. For instructions on how to produce performance reports, refer to 5.4.3, “i5/OS
Performance Tools” on page 137.
2. In the Open window (Figure 5-18), choose the directory that contains the performance
reports, select all of the corresponding System, Resource, and Component report files,
and click Open.
You can also concatenate all necessary iSeries performance reports into one file and
insert it into Disk Magic. In this example, both System report - Storage pool utilization and
System report - Disk utilization are concatenated into one System report file.
If you want to model your external storage solution with a system I/O workload aggregated
from all ASPs or if you want to continue using potentially configured i5/OS mirroring with
external storage:
a. Click Edit Properties.
b. Click Discern ASP level.
c. Select Keep mirroring, if applicable.
d. Click OK, as shown in Figure 5-20.
Otherwise, click Process All Files (in Figure 5-19) to continue.
While inserting reports, Disk Magic might show a warning message about inconsistent
interval start and stop times (see Figure 5-21).
One cause for inconsistent start and stop times might be that the customer gives you
performance reports for 24 hours, and you select a one-hour peak period from them. Then
the customer produces reports again and selects only the interval of the peak period from
the collected data. In such reports, the start and stop time of the collection does not match
the start and stop time of the produced reports. The reports are correct, and you can ignore
this warning. However, there can be other instances where inconsistent reports are
5. In the TreeView panel in Disk Magic, observe the following two icons (Figure 5-23):
– Example1 denotes a workload.
– iSeries1 denotes a disk subsystem for this workload.
Double-click iSeries1.
6. The Disk Subsystem - iSeries1 panel displays, which contains data about the current
workload on the internal disks. The General tab shows the current type of disks
(Figure 5-24).
The iSeries Workload tab on the same panel (Figure 5-26) shows the characteristics of
the iSeries workload. These include reads per sec, writes per sec, block size, and reported
current disk service time and wait time.
a. Click the Cache Statistics button.
b. You can observe the current percentage of cache read hits and write efficiency as
shown in Figure 5-27. Click OK to return to the iSeries Workload tab.
c. Click Base to save the current disk subsystem as a base for Disk Magic modeling.
7. Insert the planned DS configuration in the disk subsystem model by inserting the relevant
values on each tab, as shown in the next steps. In this example, we insert the following
planned configuration:
– DS8100 with 32 GB cache
– 12 FC adapters in System i5 in multipath, two paths for each set of LUNs
– Six FC ports in DS8100
– Eight ranks of 73 GB DDMs used for the System i5 workload
– 182 LUNs of size 17.54 GB
To insert the planned DS configuration information:
a. On the General tab in the Disk Subsystem - iSeries 1 window, choose the type of
planned DS for Hardware Type (Figure 5-29).
Notice that the General tab interface changes as shown in Figure 5-30. If you use
multipath, select Multipath with iSeries. In our example, we use multipath, so we
select this box. Notice that the Interfaces tab is added as soon as you select DS8000
as a disk subsystem.
Figure 5-30 Disk Magic: Selecting the hardware and specifying multipath
b. Click the Hardware Details button. In the Hardware Details window (Figure 5-31), for
System Memory, choose the planned amount of cache, and for Fibre Host Adapters,
enter the planned number of host adapters, and click OK.
d. In the Edit Interfaces for Disk Subsystem window (Figure 5-33), for Count, enter the
planned number of DS ports, and click OK.
Figure 5-33 Inserting the DS host ports: Edit Interfaces for Disk Subsystem
e. Back on the Interfaces tab (Figure 5-32), select the From Servers tab, and click Edit. In
the Edit Interfaces window (Figure 5-34), enter the number of planned System i5 FC
adapters. Click OK.
f. Next, in the Disk Subsystem - iSeries1 window, select the iSeries Disk tab, as shown in
Figure 5-35. Notice that Disk Magic uses the reported capacity on internal disks as the
default capacity on DS. Click Edit.
Figure 5-36 Inserting the capacity for the planned number of ranks
After you insert the capacity for the planned number of ranks, the iSeries Disk tab
shows the correct number of planned ranks (see Figure 5-37).
i. In the Cache Statistics for Host window (see Figure 5-39), notice that Disk Magic
models cache usage on DS8000 automatically based on the reported current cache
usage on internal disks. Click OK.
8. After you enter the planned values of the DS configuration, in the Disk Subsystem -
iSeries1 panel (Figure 5-38), click Solve.
9. A Disk Magic message displays, indicating that the model of the planned scenario was
successfully solved (Figure 5-40). Click OK to solve the model of the iSeries or i5/OS
workload on the DS system.
11.In the Utilizations IBM DS8100 window (Figure 5-42), observe the modeled utilization of
physical disk drives or hard disk drives (HDDs), DS device adapters, LUNs, FC ports in
DS, and so on.
In our example, none of the utilization values exceeds the recommended maximal value.
However, the HDD utilization of 32% approaches the recommended threshold of 40%.
Thus, you need to consider additional ranks if you intend to grow the workload. Click OK.
12.On iSeries Workload tab (Figure 5-41), click Cache Statistics. In the Cache Statistics for
Host window (Figure 5-43), notice the modeled cache values on DS. In our example, the
modeled read cache percentage is higher than the current read cache percentage with
internal disks, but modeled write cache efficiency on DS is about the same as current
rather high write cache percentage. Notice also that the modeled disk seek percentage
dropped to almost half of the reported seek percentage on internal disks.
iSeries Server I/O Transfer Serv Wait Read Read Write Write LUN LUN
Rate Size (KB) Time Time Perc Hit% Hit% Eff % Cnt Util%
Average 4926 9.0 3.8 0.0 60 41 100 74 182 10
Example1 4926 9.0 3.8 0.0 60 41 100 74 182 10
Figure 5-44 Modeled values in the Disk Magic log
13.You can use Disk Magic to model the critical values for the planned growth of a customer’s
workload, predicting the point at which the current DS configuration will no longer meet
performance requirements and the customer must consider additional ranks,
FC adapters, and so on. To model the DS system for growth of the workload:
a. In the Disk Subsystem - iSeries1 window, click Graph. In the Graph Options window
(Figure 5-45), select the following options:
• For Graph Data, choose Response Time in ms.
• For Graph Type, select Line.
• For Range Type, select I/O Rate.
Observe that the values for the range of I/O rates are already filled with default values,
starting from the current I/O rate. In our example, we predict growth to about three times
the current I/O rate, increasing by 1000 I/Os per second at a time. Therefore,
we insert 14800 in the To field and 1000 in the By field.
b. Click Plot.
Figure 5-46 Disk response time at I/O growth (Response Time in ms versus Total I/O Rate in I/Os per second, DS8100 / 16 GB)
14.Next, produce the graph of HDD utilizations at workload growth.
a. In the Disk Subsystem - iSeries1 window, on the iSeries Workload tab, click Graph. In
the Graph Options window (Figure 5-47):
• For Graph Data, select Highest HDD Utilization (%).
• For Graph Type, select Line.
• For Range Type, select I/O Rate and select the appropriate range values. In our
example, we use the same I/O rate values as for disk response time.
b. Click Plot.
[Graph: Highest HDD utilization (%) versus Total I/O Rate (I/Os per second) for DS8100 / 16 GB at workload growth]
After installing the System i5 platform and DS8100, the customer initially used six ranks
and 10 FC adapters in multipath for the production workload. Because a System i5 model
replaced the iSeries model 825, the I/O characteristics of the production workload changed,
owing to the higher processor power and larger memory pool of the System i5 model. The
production workload produces 230 reads per second and 1523 writes per second. Also, the
actual service times and wait times do not exceed one millisecond.
5.5.2 Sharing DS8100 ranks between two i5/OS systems (partitions)
In this example, we use Disk Magic to model two i5/OS workloads that share the same extent
pool in DS8000. To model this scenario with Disk Magic:
1. Insert the reports of the first workload into Disk Magic, as described in 5.5.1, “Sizing the
System i5 with DS8000 for a customer with iSeries model 8xx and internal disks” on
page 139.
2. After reports of the first i5/OS system are inserted, add the reports for the other system. In
the Disk Magic TreeView panel, right-click the disk subsystem icon, and select Add
Reports as shown in Figure 5-49.
3. In the Open window (Figure 5-50), select the reports of another workload to insert, and
click Open.
4. After the reports of the second system are inserted, observe that the models for both
workloads are present in TreeView panel as shown in Figure 5-51. Double-click the
iSeries disk subsystem.
5. In the Disk Subsystem - iSeries1 window (Figure 5-52), select the iSeries Disk tab. Notice
the two subtabs on the iSeries Disk tab; each shows the current capacity of
the internal disks of one workload.
a. Click the Example2-1 tab, and observe the current capacity for the first i5/OS workload.
Figure 5-52 Current capacity of the first i5/OS system
6. Select the iSeries Workload tab, and click Cache Statistics. The Cache Statistics for Host
window opens and shows the current cache usage. Figure 5-54 shows the cache usage of
the second i5/OS system. Click OK.
7. In the Disk Subsystem - iSeries1 window, click Base to save the current configuration of
both i5/OS systems as a base for further modeling.
8. After the base is saved, model the external disk subsystem for both workloads:
a. In the Disk subsystem - iSeries1 window, select the General tab. For Hardware type,
select the desired disk system. In our example, we select DS8100 and Multipath with
iSeries, as shown in Figure 5-55.
In our example, we plan the following configurations for each i5/OS workload:
• Workload Example2-1: 12 LUNs of size 17 GB and 2 System i5 FC adapters in
multipath
• Workload Example2-2: 22 LUNs of size 17 GB and 2 System i5 FC adapters in
multipath
The four System i5 FC adapters are connected to two DS host ports using switches.
c. In the Edit Interfaces window (Figure 5-57), change the number of interfaces as
planned, and click OK.
d. To model the number of DS host ports, select the Interfaces tab, and then select the
From Disk Subsystem tab. You see the interfaces from DS8100. Click Edit, and insert
the planned number of DS host ports. Click OK.
e. In the Disk Subsystem - iSeries1 window, select the iSeries Disk tab. Notice that Disk
Magic creates an extent pool for each i5/OS system automatically. Each extent pool
contains the same capacity that is reported for internal disks. See Figure 5-58.
In our example, we plan to share two ranks between the two i5/OS systems, so we do
not want a separate extent pool for each i5/OS system. Instead, we want one extent
pool for both systems.
f. On the iSeries Disk tab, click the Add button. In the Add a Disk Type window
(Figure 5-59), in the Capacity (GB) field, enter the needed capacity of the new extent
pool. For Extent Pool, select Add New.
Figure 5-59 Creating an extent pool to share between the two workloads
h. The iSeries Disk tab shows the new extent pool along with the two previous extent
pools (Figure 5-61). Select each of the two previous extent pools, and click Delete.
After you delete both of the previous extent pools, only the new extent pool named
Shared is shown on the iSeries Disk tab, as shown in Figure 5-62.
j. Select the tab with the name of the second i5/OS workload, which in this case is
Example2-2 (Figure 5-64). Then, complete the following information:
• For Extent Pool, select the extent pool named Shared.
• For LUN count, enter the planned number of LUNs.
• For Used Capacity, enter the amount of usable capacity.
k. In the Disk Subsystem - iSeries1 window, click Solve to solve the modeled DS
configuration.
Figure 5-65 Modeled service time and wait time for the first workload
m. Click the tab with the name of second workload, which in this case is Example2-2.
Notice the modeled disk service time and wait time, as shown in Figure 5-66.
n. Select the Average tab, and then click Utilizations.
Figure 5-66 Modeled service time and wait time for the second workload
5.5.3 Modeling System i5 and DS8100 for a batch job currently running on a
Model 8xx and ESS 800
In this example, we describe the sizing of DS8100 for a batch job that currently runs on
iSeries Model 825 with ESS 800. The needed performance reports are available, except for
System report - Storage pool utilization, which is optional for modeling with Disk Magic.
2. After you insert the performance reports, Disk Magic creates one disk subsystem for the
I/O rate and capacity part of the iSeries workload that runs on ESS 800, and one disk
subsystem for the part of the workload that runs on internal disks, as shown in
Figure 5-68.
Figure 5-68 Disk Magic model for iSeries with external disk
3. Double-click iSeries1.
4. In the Disk Subsystem - iSeries1 window (Figure 5-69), select the iSeries Disk tab.
5. Select the iSeries Workload tab. Notice the I/O rate on the internal disks as shown in
Figure 5-71. In our example, a low I/O rate is used for the internal disks.
8. Adjust the model for the currently used ESS 800 so that it reflects the correct number of
ranks, size of DDMs, and FC adapters, as described in the steps that follow. In our
example, the existing ESS 800 contains 8 GB cache, 12 ranks of 73 GB 15K rpm DDMs,
and four FC adapters with feature number 2766, so we enter these values for disk
subsystem ESS1. To adjust the model:
a. Select the General tab, and click Hardware Details.
b. The ESS Configuration Details window (Figure 5-73 on page 181) opens. Replace the
default values with the correct values for the existing ESS. In our example, we use four
FC adapters and 8 GB of cache, so we do not change the default values. Click OK.
Figure 5-73 Hardware details for the existing ESS 800
c. Select the Interfaces tab, and click the From Disk Subsystem subtab. Click Edit.
d. The Edit Interfaces for Disk Subsystem window (Figure 5-74) opens. Enter the correct
values for the current ESS 800. In our example, the customer uses four host ports from
ESS, so we do not change the default value of 4. However, we change the type of
adapters for Server side to Fibre 1 Gb to reflect the existing iSeries adapter 2766.
Click OK.
e. On the Interfaces tab, click the From Servers subtab and click Edit.
f. In the Edit Interfaces window (Figure 5-75), enter the current number and type of
iSeries FC adapters. In our example, we use four iSeries 2766 adapters, so we leave
the default value of 4. However, for Server side, we change the type of adapters to
Fibre 1 Gb to reflect the current adapters 2766.
h. Select the iSeries Workload tab. Notice that the current I/O rate and block sizes are
inserted by Disk Magic as shown in Figure 5-77.
i. On the iSeries Workload tab, click Cache Statistics. In the Cache Statistics for Host
window (Figure 5-78), notice the currently used cache percentages. Click OK.
j. In the Disk Subsystem - ESS1 window, click Base to save the current model of ESS.
b. Click Hardware Details. In the Hardware Details IBM DS8100 window (Figure 5-80),
enter the values for the planned DS system. In our example, the customer uses four
DS FC host ports, so we enter 4 for Fibre Host Adapters. Click OK.
c. Select the Interfaces tab. Select the From Disk Subsystem tab and click Edit.
d. The Edit Interfaces for Disk Subsystem window (Figure 5-81) opens. Enter the planned
number and type of DS host ports. In our example, the customer plans on four DS
ports and four adapters with feature number 2787 in the System i5 model. Therefore,
we leave the default value for Count. However, for Server side, we change the type to
Fibre 2 Gb. Click OK.
e. On the Interfaces tab, select the From Servers tab and click Edit. The Edit Interfaces
window (Figure 5-82) opens. Enter the planned number and type of System i5 FC
adapters. In our example, the customer plans for four FC adapters 2787, so we leave
the default value of 4 for Count. However, for Server side, we select Fibre 2 Gb. Click
OK.
g. In the Edit a Disk Type panel (Figure 5-84), for Capacity, enter the capacity that corresponds
to the desired number of ranks. Observe that 73 GB 15K rpm ranks are
already inserted as the default for HDD Type.
In our example, the customer plans nine ranks. The available capacity of one RAID-5
73 GB rank with spare (6+P+S rank) is 414.46 GB. We enter a capacity of 3730 (9 x
414.46 GB = 3730 GB), and click OK.
h. Select the iSeries Workload tab. Enter the planned number of LUNs and the amount of
capacity that is used by the System i5 model. Notice that the extent pool for the i5/OS
workload is already specified for Extent Pool.
In our example, the customer plans for 113 LUNs of 17.54 GB, so we enter 113 for LUN
count. We also enter 1982 (using the equation 113 x 17.54 = 1982 GB) for Used
Capacity. See Figure 5-85.
i. On the iSeries Workload tab, click Cache Statistics. In the Cache Statistics for Host
window (Figure 5-86), notice that the Automatic Cache Modeling box is selected. This
indicates that Disk Magic will model the cache percentages automatically for DS8100
based on the values reported in the performance reports for the currently used ESS
800. Note that the write cache efficiency reported in the performance reports is not correct for
ESS 800, so Disk Magic uses a default value of 30%.
k. On the iSeries Workload tab, click Utilizations. Notice the modeled utilization of HDDs,
DS FC ports, LUNs, and so on, as shown in Figure 5-88. In our example, the modeled
utilizations are rather low so the customer can grow the workload to a certain extent
without needing additional hardware in the DS system.
In our example, the customer migration from iSeries model 825 to a System i5 model was
performed at the same time as the installation of DS8100. Therefore, the number of I/Os per
second and the cache values differ from the ones that were used by Disk Magic. The actual
disk response times were lower than the modeled ones. The actual reported disk service time
is 2.2 ms, and disk wait time is 1.4 ms.
5. In the Workload Selection panel (Figure 5-90), for Add Workload, select Existing and click
Go.
7. You return to the initial panel, which contains the Existing #1 workload (see Figure 5-92).
Click Continue.
8. In the Existing #1 - Existing System Workload Definition panel (Figure 5-93), enter the
hardware and characteristics of the existing workload as described in the next steps.
b. Next to Processor model, select the corresponding model and features (see
Figure 5-95).
c. Obtain the total CPU utilization and Interactive CPU utilization data from the System
report - Workload (see Figure 5-96).
d. Obtain memory data from the System report in the Main Storage field (see Figure 5-94
on page 194).
e. Insert these values into the Total CPU Utilization, Interactive Utilization, and Memory
(MB) fields. If the workload runs in a partition, specify the number of processors for this
partition and select Yes for Represent a Logical partition. See Figure 5-97.
g. Obtain the current IOA feature and RAID protection used from the iSeries
configuration. Obtain the Drive Type and number of disk units from the System report -
Disk Utilization (Figure 5-99).
Unit Size IOP IOP Dsk CPU ASP Rsc ASP --Percent-- Op Per K Per - Average Time Per I/O --
Unit Name Type (M) Util Name Util Name ID Full Util Second I/O Service Wait Response
---- ---------- ---- ------- ---- ---------- ------- ---------- --- ---- ---- -------- --------- ------- ------ --------
0001 DD004 4326 30.769 0,7 CMB01 0,6 1 59,0 1,8 14,98 9,7 .0012 .0002 .0014
0002 DD003 4326 26.373 0,7 CMB01 0,6 1 59,0 1,6 13,72 10,0 .0011 .0002 .0013
0003 DD011 4326 30.769 0,7 CMB01 0,6 1 59,0 1,6 11,83 11,7 .0013 .0003 .0016
0004 DD005 4326 30.769 0,7 CMB01 0,6 1 59,0 1,7 16,49 8,2 .0010 .0000 .0010
0005 DD009 4326 30.769 0,7 CMB01 0,6 1 59,0 1,5 15,17 9,5 .0009 .0002 .0011
0006 DD010 4326 26.373 0,7 CMB01 0,6 1 59,0 1,3 15,90 9,3 .0008 .0001 .0009
0007 DD007 4326 26.373 0,7 CMB01 0,6 1 59,0 1,2 11,42 10,2 .0010 .0001 .0011
0008 DD012 4326 30.769 0,7 CMB01 0,6 1 59,0 1,5 10,22 10,8 .0014 .0003 .0017
0009 DD008 4326 30.769 0,7 CMB01 0,6 1 59,0 1,5 15,67 9,0 .0009 .0001 .0010
0010 DD001 4326 26.373 0,7 CMB01 0,6 1 59,0 1,5 15,20 8,7 .0009 .0002 .0011
0011 DD006 4326 30.769 0,7 CMB01 0,6 1 59,0 1,7 21,17 8,3 .0008 .0000 .0008
h. In the Storage (GB) field, insert the number of disk units multiplied by the size of a unit.
In our example, we have 24 disk units of feature 4326, which is a 15K RPM, 35.16 GB
internal disk drive. They are connected through IOA 2780.
i. In the Storage field, insert the total current disk capacity by multiplying the capacity of
one disk unit by the number of disks. In our example, there are 24 x 35.16 GB disk
units, so we insert 24 x 35.16 GB = 844 GB in the Storage field (see Figure 5-98).
You can also click the WLE Help/Tutorials tab for instructions on how to obtain the
necessary values to enter in the WLE.
j. Obtain the Read Ops Per Second value from the Resource report - Disk utilization (see
Figure 5-100).
k. If the workload is small or if WebFacing or HATS is used, specify the values in the
Additional Characteristics and WebFacing or HATS Support fields. Refer to the WLE Help
for more information about these fields.
l. The System reports show one block size (size of operation) for both reads and
writes, so insert this size for both operations. Click Continue (see Figure 5-98).
9. The Selected System - Choose Base System panel displays as shown in Figure 5-101.
Here you can limit your selection to an existing system, or you can use WLE to size any
system for the inserted workload. In our example, we use WLE to size any system. We
click the two Select buttons.
11.The Selected System - External Storage Sizing Information panel displays as shown in
Figure 5-103. For Which system, select either Immediate or Growth for the system for
which you want to size external storage. In our example, we select Immediate to size our
external storage. Then click Download Now.
12.The File Download window opens. You can choose to start Disk Magic immediately for the
sized workload (by clicking Open), or you can choose to save the Disk Magic command
file and use it later (by clicking Save). In our example, we want to start Disk Magic
immediately, so we click Open.
Important: At this point, to start Disk Magic, you must have Disk Magic installed.
13.Disk Magic starts with the workload modeled with WLE (see Figure 5-104). Observe that
the workload Existing #1 is already shown under TreeView. Double-click dss1.
14.The Disk Subsystem - dss1 window (Figure 5-105) opens, displaying the General tab.
Follow these steps:
a. To size DS6800 for the Existing #1 workload, from Hardware Type, select DS6800. We
highly recommend that you use multipath with DS6800. To model multipath, select
Multipath with iSeries.
c. Click the From Disk Subsystem tab. Notice that four interfaces from DS6000 are
configured as the default. In our example, we use two DS6000 host ports for
connecting to the System i5 platform, so we change the number of interfaces. Click
Edit to open the Edit Interfaces for Disk Subsystem window. In the Count field, enter
the number of planned DS6000 ports. Click OK. In our example, we insert two ports as
shown in Figure 5-107.
d. In the Disk Subsystem - dss1 window, click the iSeries Disk tab (Figure 5-108).
Observe that an extent pool is already configured for the Existing #1 workload. Its
capacity is equal to the capacity that you specified in the Storage field of the WLE.
e. In the Disk Subsystem - dss1 window, select the iSeries Workload tab. Notice that the
number of reads per second and writes per second, the number of LUNs, and the
capacity are specified based on values that you inserted in WLE. You might want to
check the modeled expert cache size, by comparing it to the sum of all expert cache
storage pools in the System report (Figure 5-109).
Pool Expert Size Act CPU Number Average ------ DB ------ ---- Non-DB ---- Act-
ID Cache (KB) Lvl Util Tns Response Fault Pages Fault Pages Wait
---- ------- ----------- ----- ----- ----------- -------- ------- ------- ------- ------- --------
01 0 808.300 0 28,5 0 0,00 0,0 0,0 0,3 1,0 257 0
*02 3 1.812.504 147 15,7 825 0,31 3,8 17,9 32,7 138,8 624 0
*03 3 1.299.488 48 9,6 4.674 0,56 2,4 13,0 28,1 107,0 198 0
04 3 121.244 5 0,0 0 0,00 0,0 0,0 0,0 0,0 0 0
Total 4.041.536 53,9 5.499 6,3 31,0 61,2 246,9 1.080 0
g. The Cache Statistics for Host Existing #1 window (Figure 5-111) opens. Notice that the
cache statistics are already specified in the Disk Magic model. For more conservative
sizing, you might want to change them to lower values, such as 20% read cache and
30% write cache. Then, click OK.
h. On the Disk Subsystem - dss1 window, click Base to save the current model as base.
After the base is saved successfully, notice the modeled disk service time and wait
time, as shown in Figure 5-112.
i. On the iSeries Workload tab, click Utilizations. The Utilizations IBM DS6800 window
(Figure 5-113) opens. Observe the modeled utilizations for the existing workload. In
our example, the modeled hard disk drive (HDD) utilization and LUN utilization are far
below the limits that are recommended for good performance. There is room for growth
in the modeled DS configuration.
6.1.1 Hardware
DS8000, DS6000, and ESS model 800 are supported on all System i models that support
Fibre Channel (FC) attachment for external storage. Fibre Channel was supported on all
iSeries 8xx models and later. AS/400 models 7xx and earlier supported only SCSI
attachment for external storage, so they cannot support DS8000 or DS6000.
The following IOP-based FC adapters for System i support DS8000 and DS6000:
2766 2 Gb Fibre Channel Disk Controller PCI
2787 2 Gb Fibre Channel Disk Controller PCI-X
5760 4 Gb Fibre Channel Disk Controller PCI-X
With System i POWER6, new IOP-less FC adapters are available that support only IBM
System Storage DS8000 at LIC level 2.4.3 or later for external disk storage attachment:
5749 IOP-less 4 Gb dual-port Fibre Channel Disk Controller PCI-X
5774 IOP-less 4 Gb dual-port Fibre Channel Disk Controller PCIe
For further planning information with these System i FC adapters, refer to 4.2, “Solution
implementation considerations” on page 78.
For information about current hardware requirements, including support for switches, refer to:
http://www-1.ibm.com/servers/eserver/iseries/storage/storage_hw.html
To support boot from SAN with the load source unit on external storage, either the #2847 I/O
processor (IOP) or an IOP-less FC adapter is required.
Restriction: Prior to i5/OS V6R1 the #2847 IOP for SAN load source does not support
multipath for the load source unit but does support multipath for all other logical unit
numbers (LUNs) attached to this I/O processor (IOP). See 6.10, “Protecting the external
load source unit” on page 240 for more information.
6.1.2 Software
The iSeries or System i environment must be running V5R3, V5R4, or V6R1 of i5/OS. In
addition, the following PTFs are required:
V5R3
– MF33328
– MF33845
– MF33437
– MF33303
– SI14690
– SI14755
– SI14550
V5R3M5 and later
– Load source must be at least 17.54 GB
Important:
The #2847 PCI-X IOP for SAN load source requires i5/OS V5R3M5 or later.
The #5760 FC I/O adapter (IOA) requires V5R3M0 resave RSI or V5R3M5 RSB with
C6045530 or later (ref. #5761 APAR II14169) and, for System i5, firmware level
SF235_160 or later.
The #5749/#5774 IOP-less FC IOA is supported on System i POWER6 models only
Prior to attaching a DS8000, DS6000, or ESS model 800 system to a System i model, check
for the latest PTFs, which probably have superseded the minimum requirements listed
previously.
Note: We generally recommend installing one of the latest i5/OS cumulative PTFs
(cumPTFs) before attaching IBM System Storage external disk storage subsystems to
System i.
Table 6-1 indicates the number of extents that are required for different System i volume
sizes. The value xxxx represents 1750 for DS6000 and 2107 for DS8000.
When creating the logical volumes for use with i5/OS, in almost every case, the i5/OS device
size does not match a whole number of extents, so some space remains unused. Use the
values in Table 6-1 in conjunction with extent pools to see how much space will be wasted for
your specific configuration. Also, note that the #2766, #2787, and #5760 Fibre Channel Disk
For more information about sizing guidelines for i5/OS, refer to Chapter 5, “Sizing external
storage for i5/OS” on page 115.
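Returning to the extent arithmetic behind Table 6-1: the table itself is not reproduced here, but the following sketch illustrates how the extent count and unused space arise. It assumes, as for DS8000 and DS6000 fixed block storage, that one extent is 1 GiB (2^30 bytes) and that the DS system stores each 512-byte i5/OS data sector together with its 8-byte header in a 520-byte block; the nominal decimal capacities used as input are an approximation, so compare the results against the exact values in Table 6-1.

import math

EXTENT_BYTES = 2**30          # one DS8000/DS6000 fixed block extent (1 GiB)
SECTOR_DATA, SECTOR_RAW = 512, 520   # i5/OS sector: 512 data bytes + 8-byte header


def extents_for_volume(volume_gb):
    """Approximate extents allocated for an i5/OS volume and the unused space.

    volume_gb is the nominal decimal capacity of the data portion, for
    example 17.54 for the 17.54 GB volume model. Unused space is in GiB.
    """
    sectors = volume_gb * 10**9 / SECTOR_DATA
    allocated_bytes = sectors * SECTOR_RAW
    extents = math.ceil(allocated_bytes / EXTENT_BYTES)
    unused_gib = (extents * EXTENT_BYTES - allocated_bytes) / EXTENT_BYTES
    return extents, round(unused_gib, 2)


for size_gb in (8.59, 17.54, 35.16, 70.56):
    print(size_gb, extents_for_volume(size_gb))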
Under some circumstances, you might want to mirror the i5/OS internal load source unit to a
LUN in the DS8000 or DS6000 storage system. In this case, define only one LUN as
unprotected. Otherwise, when mirroring is started to mirror the load source unit to the
DS6000 or DS8000 LUN, i5/OS attempts to mirror all unprotected volumes.
Important: Prior to i5/OS V6R1, if you use an external load source unit, we strongly
recommend that you use i5/OS mirroring to another LUN in the external storage system to
provide path protection for the external load source unit (see 6.10, “Protecting the external
load source unit” on page 240).
Although it is possible to change a volume from protected to unprotected (or vice versa) using
the DS command-line interface (CLI) chfbvol command, you need to be extremely careful
when changing LUN protection.
Attention: Changing the LUN protection of a System i volume is only supported for
non-configured volumes, that is, volumes that are not part of the System i auxiliary storage
pool configuration.
If the volume is configured, that is, within an auxiliary storage pool (ASP) configuration, do not
change the protection. If you want to change the protection in this case, you must first remove the
volume from the ASP configuration and add it back later, after having changed its
protection mode. This behavior differs from ESS models E20, F20, and 800, where no dynamic
change of the LUN protection mode is supported on the storage side, so the logical volume
would have to be deleted, requiring the entire array that contains the logical volume to be
reformatted, and then created again with the other desired volume protection mode.
6.4 Setting up an external load source unit
The new #5749 and #5774 IOP-less Fibre Channel IOAs for System i POWER6 allow the
system to perform an IPL from a LUN in the IBM System Storage DS8000 series.
The #2847 PCI-X IOP for SAN load source allows a System i to perform an IPL from a LUN in
a DS6000, DS8000, or ESS model 800. This IOP supports only a single FC IOA. No other
IOAs are supported.
Restrictions:
The new IOP-less Fibre Channel IOAs #5749 and #5774 support only the FC-AL protocol
for direct attachment.
For the #2847 IOP-driven IOAs, Point-to-Point (also known as FC-SW and SCSI-FCP) is
the only supported protocol. You must not define the host connection (DS CLI) or the Host
Attachment (Storage Manager GUI) as FC-AL, because this prevents you from using the
system.
Creating a new load source unit on external storage is similar to creating one on an internal
drive. However, instead of tagging a RAID disk controller for the internal load source unit, you
must tag your load source IOA for the SAN load source.
Note: With System i SLIC V5R4M5 and later, all buses and IOPs are booted in the D-mode
IPL environment, and if no existing load source disk unit is found, a list of eligible disk units
(of the correct capacity) displays for the user to select which disk to use as the load source
disk.
For previous SLIC versions, we recommend that you assign only your designated load source
LUN to your load source IOA first to make sure that this is the LUN chosen by the system for
your load source at SLIC install. Then, assign the other LUNs to your load source IOA
afterwards.
Note: For HMC versions before V7, right-click the partition profile name and select Properties.
3. In the Logical Partition Profile Properties window (Figure 6-3), select the Tagged I/O tab.
4. On the Tagged I/O tab, click the Select button that corresponds to the load source as
shown in Figure 6-4.
Note: For HMC versions before V7, right-click the partition name and select Properties.
b. In the Partition Properties window (Figure 6-7), select the Settings tab.
c. On the Settings tab, for Keylock position, select Manual as shown in Figure 6-8.
Note: For HMC versions before V7, right-click the partition, select Properties, and click
Activate.
2. In the Activate Logical Partition window (Figure 6-10), select the partition profile to be
used and click OK.
In the HMC, a status window displays, which closes when the task is complete and the
partition is activated. Wait for the Dedicated Service Tools (DST) panel to open.
3. After the system has done an IPL to DST, select 3. Use Dedicated Service Tools (DST).
4. On the OS/400 logo panel (Figure 6-11), enter the language feature code.
5. On the Confirm Language Group panel (Figure 6-12), press Enter to confirm the language
code.
Figure 6-12 Confirming the language feature
6. In the Install Licensed Internal Code panel (Figure 6-13), select 1. Install Licensed Internal Code.
Figure 6-13 Install Licensed Internal Code panel
7. The next panel shows the volume that is selected as the external load source unit and a
list of options for installing the Licensed Internal Code (see Figure 6-14). Select 2. Install
Licensed Internal Code and Initialize System.
Selection
2
F3=Exit F12=Cancel
Figure 6-14 Install Licensed Internal Code options
8. On the Confirmation panel, read the warning message that displays (as shown in
Figure 6-15) and press F10=Continue when you are sure that you want to proceed.
Warning:
All data on this system will be destroyed and the Licensed
Internal Code will be written to the selected disk if you
choose to continue the initialize and install.
9. The Initialize the Disk - Status panel displays for a short time (see Figure 6-16). Unlike internal drives, formatting external LUNs on DS8000 and DS6000 is a task that the storage system runs in the background, so the task might complete faster than you expect.
Figure 6-16 Initialize the Disk Status panel
Figure 6-17 Install Licensed Internal Code status
When the Install Licensed Internal Code process is complete, the system does another IPL to
DST automatically. You have now built an external load source unit.
Adding disk units to the configuration can be done either by using the 5250 interface with
Dedicated Service Tools (DST) or System Service Tools (SST) or with iSeries Navigator.
6.5.1 Adding logical volumes using the 5250 interface
To add a logical volume in the DS8000 or DS6000 to the system ASP, using System Service
Tools (SST), follow these steps:
1. Enter the command STRSST and sign on System Service Tools.
2. In the System Service Tools (SST) panel (Figure 6-18), select 3. Work with disk units.
3. In the Work with Disk Units panel (Figure 6-19), select 2. Work with disk configuration.
Figure 6-19 Work with Disk Units panel
Figure 6-20 Work with Disk Configuration panel
5. In the Specify ASPs to Add Units to panel (Figure 6-21), specify the ASP number next to
the desired units. Here, we specify 1 for ASP, which is the System ASP. Press Enter.
6. In the Confirm Add Units panel (Figure 6-22), review the information and verify that
everything is correct. If the information is correct, press Enter to continue. Depending on
the number of units that you are adding, this step can take some time to complete.
Add will take several minutes for each unit. The system will
have the displayed protection after the unit(s) are added.
Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 02-89058 6717 074 DD004 Device Parity
2 68-0CA4E32 6717 074 DD003 Device Parity
3 68-0C9F8CA 6717 074 DD002 Device Parity
4 68-0CA5D96 6717 074 DD001 Device Parity
5 75-1118707 2107 A85 DD006 Unprotected
7. After the units are added, view your disk configuration to verify the capacity and data
protection.
6.5.2 Adding volumes to an independent auxiliary storage pool
2. Expand the iSeries server to which you want to add the logical volume and sign on to that server. Then expand Configuration and Service → Hardware → Disk Units (see Figure 6-24).
3. Sign on to SST. Enter your Service tools ID and password and then click OK.
5. The New Disk Pool wizard opens. Figure 6-26 shows the Welcome window. Click Next.
6. In the New Disk Pool window (Figure 6-27):
a. For Type of disk pool, select Primary.
b. For Disk pool, type the new disk pool name.
c. Leave Database set to the default of Generated by the system.
d. Ensure that the disk protection method matches the type of logical volume that you are
adding. If you leave it deselected, you will see all available disks.
e. Select OK to continue.
7. The New Disk Pool - Select Disk Pool window (Figure 6-28) summarizes the disk pool
configuration. Review the configuration and click Next.
9. The Disk Pool - Add Disks window lists the non-configured units. Highlight the disk or
disks that you want to add to the disk pool, and click Add, as shown in Figure 6-30.
10.The next window confirms the selection (see Figure 6-31). Click Next to continue.
11.In the New Disk Pool - Summary window, review the summary of the configuration. Click
Finish to add the disks to the disk pool, as shown in Figure 6-32.
14.In iSeries Navigator, you can see the new disk pool under Disk Pools (see Figure 6-35).
15.To see the logical volume, expand Configuration and Service → Hardware → Disk
Pools and select the disk pool that you just created. See Figure 6-36.
Note: For multipath volumes, only one path is shown. For the additional paths, see 6.8,
“Managing multipath volumes using iSeries Navigator” on page 236.
5. On the Confirm Add Units panel (Figure 6-38), check the configuration details. If the
details are correct, press Enter.
Add will take several minutes for each unit. The system will
have the displayed protection after the unit(s) are added.
Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 02-89058 6717 074 DD004 Device Parity
2 68-0CA4E32 6717 074 DD003 Device Parity
3 68-0C9F8CA 6717 074 DD002 Device Parity
4 68-0CA5D96 6717 074 DD001 Device Parity
5 75-1118707 2107 A85 DMP135 Unprotected
6.7 Adding volumes to System i using iSeries Navigator
You can use iSeries Navigator to add volumes to the system ASP, user ASPs, or IASPs. In
this example, we add a multipath logical volume to a private (non-switchable) IASP. The same
principles apply when adding multipath volumes to the system ASP or user ASPs.
1. Follow the steps in 6.5.2, “Adding volumes to an independent auxiliary storage pool” on
page 224. When you reach the point where you select the volumes to add, a panel similar
to the panel that is shown in Figure 6-39 displays. Multipath volumes appear as DMPxxx.
Highlight the disk or disks that you want to add to the disk pool and click Add.
Note: For multipath volumes, only one path is shown. To see the additional paths, see
6.8, “Managing multipath volumes using iSeries Navigator” on page 236.
3. To see the logical volume, expand Configuration and Service → Hardware → Disk
Units → Disk Pools, and click the disk pool that you just created as shown in Figure 6-41.
6.8 Managing multipath volumes using iSeries Navigator
When using the standard disk panels in iSeries Navigator, only a single path, the initial path, is shown. To see the additional paths, follow these steps:
1. To see the number of paths available for a logical unit, open iSeries Navigator and expand
Configuration and Service → Hardware → Disk Units. As shown in Figure 6-42, the
number of paths for each unit is in the Number of Connections column (far right side of the
panel). In this example, there are eight connections for each of the multipath units.
2. To see the other connections to a logical unit, right-click a unit, and select Properties, as
shown in Figure 6-43.
To see the other paths to this unit, click the Connections tab, where the other seven
connections for this logical unit are displayed, as shown in Figure 6-45.
Figure 6-46 shows an example where 48 logical volumes are configured in the DS8000. The first 24 of these, which are in one DS volume group, are assigned through a host adapter in the top left I/O drawer in the DS8000 to a Fibre Channel (FC) I/O adapter in the first iSeries I/O tower or rack. The next 24 logical volumes, within another DS volume group, are assigned through a host adapter in the lower left I/O drawer in the DS8000 to an FC I/O adapter on a different bus in the first iSeries I/O tower or rack. This is a valid single path configuration.
To implement multipath, the first group of 24 logical volumes is also assigned to an iSeries FC
I/O adapter in the second iSeries I/O tower or rack through a host adapter in the lower right
I/O drawer in the DS8000. The second group of 24 logical volumes is also assigned to an FC
I/O adapter on a different bus in the second iSeries I/O tower or rack through a host adapter in
the upper right I/O drawer.
Figure 6-46 Example multipath configuration: DS8000 I/O drawers and host adapters connected to FC IOAs in two iSeries I/O towers/racks
Note: For the remainder of this section, we focus on implementing load source mirroring
for an #2847 IOP-based SAN load source prior to i5/OS V6R1.
Prior to i5/OS V6R1, the #2847 PCI-X IOP for SAN load source did not support multipath for
the external load source unit. To provide path protection for the external load source unit prior
to V6R1 it has to be mirrored using i5/OS mirroring. Therefore, the two LUNs used for
mirroring the external load source across two #2847 IOP-based Fibre Channel adapters
(ideally in different I/O towers to provide highly redundant path protection) are created as
unprotected LUN models.
To mirror the load source unit, unless you are using SLIC V5R4M5 or later (see 6.4, “Setting up an external load source unit” on page 211), initially assign only one LUN to the IOA that is tagged as the load source unit IOA. Assign the other LUNs, including the “mirror mate” for the load source unit, to another #2847 IOP-based IOA as shown in Figure 6-47.
The simplest way to do this is to create two volume groups on the DS8000 or DS6000. The
first volume group (shown on the left) contains only the load source unit and is assigned to the
#2847 tagged as the load source IOA. The second volume group (shown on the right)
contains the load source unit mirror mate plus the remaining LUNs, which eventually will have
multipaths. This volume group is assigned to the second #2847 IOP-based IOA.
Figure 6-47 Load source LUN and its mirror mate in separate volume groups, each assigned to a #2847 IOP-based Fibre Channel IOA in a different I/O tower
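As a rough illustration of this two volume group approach, the following DS CLI sketch creates one volume group that contains only the load source LUN and a second volume group for the mirror mate and the remaining LUNs. The volume IDs (1000 and 1001-1010) and the volume group names are hypothetical, and the exact mkvolgrp parameters should be verified against your DS CLI release:
dscli> mkvolgrp -type os400mask -volume 1000 LSU_VG1
dscli> mkvolgrp -type os400mask -volume 1001-1010 LSU_VG2
Each volume group is then assigned to the host connection of its #2847 IOP-based Fibre Channel IOA.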
If you have more LUNs that require more IOPs and IOAs, you can assign these to volume groups that already use a multipath configuration, as shown in Figure 6-49. It is important to ensure that your load source unit initially is the only volume assigned to the #2847 IOP-based IOA that is tagged in the Hardware Management Console (HMC) as the load source IOA. Our example, which includes SAN switches, shows a configuration with two redundant SAN switches to avoid a single point of failure.
Figure 6-49 Mirrored external load source with additional multipath LUNs attached through two redundant SAN switches
6.10.1 Setting up load source mirroring
After you create the LUN to be set up as the remote load source unit pair, this LUN and any
other LUNs are identified by SLIC and displayed under non-configured units in DST and SST.
To set up load source mirroring on the System i5 platform, you must use DST:
1. From the DST menu (Figure 6-51), select 4. Work with disk units.
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 6-51 Using Dedicated Service Tools panel
2. From the Work with Disk Units menu (Figure 6-52), select 1. Work with disk
configuration.
Figure 6-52 Working with Disk Units panel
3. From the Work with Disk Configuration menu (Figure 6-53), select 4. Work with mirrored protection.
Figure 6-53 Work with Disk Configuration panel
4. From the Work with mirrored protection menu (Figure 6-54), select 4. Enable remote load
source mirroring. This option does not perform the remote load source mirroring but tells
the system that you want to mirror the load source when mirroring is started.
Selection
4
F3=Exit F12=Cancel
Figure 6-54 Setting up remote load source mirroring
5. In the Enable Remote Load Source Mirroring confirmation panel (Figure 6-55), press
Enter to confirm that you want to enable remote load source mirroring.
Remote load source mirroring will allow you to place the two
units that make up a mirrored load source disk unit (unit 1) on
two different IOPs. This may allow for higher availability
if there is a failure on the multifunction IOP.
Note: When there is only one load source disk unit attached to
the multifunction IOP, the system will not be able to IPL if
that unit should fail.
6. In the Work with mirrored protection panel, you see a message at the bottom of the panel,
indicating that remote load source mirroring is enabled (Figure 6-56). Select 2. Start
mirrored protection, for the load source unit.
Selection
2
F3=Exit F12=Cancel
Remote load source mirroring enabled successfully.
Figure 6-56 Confirmation that remote load source mirroring is enabled
Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 30-1000000 1750 A85 DD001 Active
1 30-1100000 1750 A85 DD004 Active
2 30-1001000 1750 A05 DMP002 DPY/Active
3 30-1002000 1750 A05 DMP004 DPY/Active
5 30-1101000 1750 A05 DMP006 DPY/Active
6 30-1102000 1750 A05 DMP008 DPY/Active
8. When the remote load source mirroring task is finished, perform an IPL on the system to
start mirroring the data from the source unit to the target. This process is done during the
database recovery phase of the IPL.
To migrate from a mirrored external load source unit to a multipath load source unit, follow
these steps:
1. Enter STRSST to start System Service Tools from the i5/OS command line.
2. Select 3. Work with disk units.
3. Select 2. Work with disk configuration.
4. Select 1. Display disk configuration.
5. Select 1. Display disk configuration status to look at your currently mirrored external
load source LUNs. Take note of the two serial numbers for your mirrored load source unit 1
(105E951 and 1060951 in our example) because you will need these numbers later for
changing the DS storage system configuration to a multipath setup.
6. Press F12 to exit from the Display Disk Configuration Status panel, as shown in
Figure 6-58.
Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-105E951 2107 A85 DD001 Active
1 50-1060951 2107 A85 DD002 Active
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active
Figure 6-58 Displaying mirrored disks
7. Select 6. Disable remote load source mirroring to turn off the remote load source
mirroring function as shown in Figure 6-59.
Note: Turning off the remote load source mirroring function does not stop the mirrored protection. However, disabling this function is required before you can stop mirroring in a later step.
Figure 6-59 Disable remote load source mirroring
Figure 6-60 Disable Remote Load Source Mirroring confirmation panel
Remote load source mirroring disabled successfully.
Figure 6-61 Message after disabling the remote load source mirroring
10.To stop mirror protection, set your system to B-type manual mode IPL, and re-IPL the
system. When you get to the Dedicated Service Tools (DST) panel, continue with these
steps.
11. Select 4. Work with disk units as shown in Figure 6-62.
System: RCHLTTN1
Select one of the following:
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 6-62 Work with disk units
12. Select 1. Work with disk configuration, as shown in Figure 6-63.
Figure 6-63 Work with disk units
13. Select 4. Work with mirrored protection, as shown in Figure 6-64.
Figure 6-64 Work with mirrored protection
14. Select 3. Stop mirrored protection, as shown in Figure 6-65.
Figure 6-65 Stop mirrored protection
15.Enter 1 to select ASP 1, as shown in Figure 6-66.
F3=Exit F12=Cancel
Figure 6-66 Selecting ASP to stop mirror
16.On the Confirm Stop Mirrored Protection panel, confirm that ASP 1 is selected, as shown
in Figure 6-67, and then press Enter to proceed.
Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 50-105E951 2107 A85 DD001 Unprotected
2 50-1061951 2107 A05 DMP003 RAID 5
3 50-105F951 2107 A05 DMP001 RAID 5
17.When the stop for mirrored protection completes, a confirmation panel displays as shown
in Figure 6-68.
Information
Serial Resource
Number Type Model Name Capacity Status
50-1060951 2107 A85 DD002 35165 Non-configured
19.Now, you can exit from the DST panels to continue the manual mode IPL. At the Add All
Disk Units to the System panel, select 1. Perform any disk configuration at SST as
shown in Figure 6-70.
Selection
1
Figure 6-70 Message to add disks
20.You have stopped mirrored protection for the load source unit and re-IPLed the system
successfully. Now, use the DS CLI to identify the volume groups that contain the two LUNs
of your previously mirrored load source unit by entering the showfbvol volumeID command
for the previously mirrored load source unit (for volumeID use the four digits from the disk
unit serial number noted down in step 5) as shown in Figure 6-71.
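For example, for the load source serial numbers noted earlier (105E951 and 1060951), the commands would look similar to the following; among other attributes, the output includes the volume group to which each volume belongs:
dscli> showfbvol 105E
dscli> showfbvol 1060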
21.Enter showvolgrp volumegroup_ID for the two volume groups that contain the previously
mirrored load source unit LUNs, as shown in Figure 6-72 and Figure 6-73.
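Using the volume group IDs from our example (V13 and V22), the commands would be similar to the following. If the mirror mate LUN still resides only in the second volume group, one way to bring both LUNs into the same volume group is the chvolgrp command; the -action add syntax shown here is an assumption that you should verify against your DS CLI release:
dscli> showvolgrp V13
dscli> showvolgrp V22
dscli> chvolgrp -action add -volume 1060 V13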
23.To finish the multipath setup, make sure that the current load source unit LUN (LUN 105E
in our example) is also assigned to both System i IOAs. You assign the load source unit
LUN to the second IOA by assigning the volume group (V13 in our example) that now
contains both previously mirrored load source unit LUNs to both IOAs. To obtain the IOAs
host connection ID on the DS storage system for changing the volume group assignment,
enter the lshostconnect command as shown in Figure 6-75. Note the ID for the lines that
show the two load source IOA volume groups determined previously.
dscli> lshostconnect
Date/Time: November 7, 2007 3:30:36 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
Name ID WWPN HostType Profile portgrp volgrpID ESSIOpo
===============================================================================================
RedBookTN1LS 0010 10000000C94C45CE iSeries IBM iSeries - OS/400 0 V13 all
24. Change the volume group assignment of the IOA host connection that does not yet have access to the current load source. (In our example, volume group V22 does not contain the current load source unit LUN, so we have to assign volume group V13, which contains both previous load source units, to host connection 001B.) Use the chhostconnect -volgrp volumegroupID hostconnectID command as shown in Figure 6-76.
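With the example values above (volume group V13 and host connection ID 001B), the command would be similar to:
dscli> chhostconnect -volgrp V13 001B
You can run lshostconnect again afterward to verify that both load source IOA host connections now reference the same volume group.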
Now, we describe how to change two previously mirrored unprotected disk units to protected
ones.
Important: Changing the LUN protection status of a configured LUN, that is, a LUN that is part of an ASP configuration, is not supported. To convert the unprotected load source disk unit to a protected model, follow steps 12 to 18 in the process that follows.
Serial Resource
ASP Unit Number Type Model Name Status
1 Unprotected
1 50-105E951 2107 A85 DMP007 Configured
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active
Serial Resource
Number Type Model Name Capacity Status
50-1060951 2107 A85 DMP005 35165 Non-configured
2. On the storage system, use the DS CLI lsfbvol command to display the previously mirrored load source LUNs. The data type FB 520U indicates unprotected volumes, as shown in Figure 6-78.
dscli> lsfbvol
Date/Time: November 7, 2007 3:25:51 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks
==================================================================================================================
TN1ls 105E Online Normal Normal 2107-A85 FB 520U P0 32.8 35.2 6868172
TN1Vol1 105F Online Normal Normal 2107-A05 FB 520P P0 32.8 35.2 6868172
TN1mm 1060 Online Normal Normal 2107-A85 FB 520U P4 32.8 35.2 6868172
TN1Vol2 1061 Online Normal Normal 2107-A05 FB 520P P4 32.8 35.2 6868172
3. Change the protection mode of the unconfigured previous load source volume on the storage system (in our example, volume 1060 is changed from unprotected model A85 to protected model A05). Running lsfbvol again shows a data type of FB 520P for this volume:
TN1ls 105E Online Normal Normal 2107-A85 FB 520U P0 32.8 35.2 6868172
TN1Vol1 105F Online Normal Normal 2107-A05 FB 520P P0 32.8 35.2 6868172
TN1mm 1060 Online Normal Normal 2107-A05 FB 520P P4 32.8 35.2 6868172
TN1Vol2 1061 Online Normal Normal 2107-A05 FB 520P P4 32.8 35.2 6868172
4. Perform an IOP reset for the IOA that is attached to the unconfigured previous load source
volume on which you changed the protection mode on the storage system in the previous
step.
Note: This IOP reset is required so that the System i rediscovers its devices and recognizes the changed LUN protection mode.
Resource
Opt Description Type-Model Status Name
Bus Expansion Adapter 28E7- Operational BCC10
System Bus 28B7- Operational LB09
Multi-adapter Bridge 28B7- Operational PCI11D
6 Combined Function IOP 2847-001 Operational CMB03
HSL I/O Bridge 28E7- Operational BC05
Bus Expansion Adapter 28E7- Operational BCC05
System Bus 28B7- Operational LB04
More...
F3=Exit F5=Refresh F6=Print F8=Include non-reporting resources
F9=Failed resources F10=Non-reporting resources
F11=Display serial/part numbers F12=Cancel
Figure 6-80 Selecting IOP for reset
5. Select 3. Reset I/O processor to reset the IOP as shown in Figure 6-81.
Selection
3
F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Figure 6-81 Reset IOP option
Figure 6-82 Confirming IOP reset
6. Press Enter to confirm the IOP reset. When the reset completes, a confirmation message displays, as shown in Figure 6-83.
Reset of IOP was successful.
Figure 6-83 IOP reset confirmation message
7. Now, select 4. IPL I/O processor in the Select IOP Debug Function menu to IPL the IOP, as shown in Figure 6-84. Press Enter to confirm your selection.
Selection
4
F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Figure 6-84 IPL I/O
After a successful IPL, a confirmation message displays, as shown in Figure 6-85.
Selection
F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Re-IPL of IOP was successful.
Figure 6-85 I/O IPL confirmation message
8. Next, check the changed protection status for the unconfigured previous load source LUN
in the SST Display non-configured units menu as shown in Figure 6-86.
Serial Resource
Number Type Model Name Capacity Status
50-1060951 2107 A05 DMP006 35165 Non-configured
Now, we explain the remaining steps to change the unprotected load source unit to a
protected load source. To look at the current unprotected load source unit, we choose the
DST menu function Display disk configuration status as shown in Figure 6-87.
Serial Resource
ASP Unit Number Type Model Name Status
1 Unprotected
1 50-105E951 2107 A85 DMP007 Configured
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 6-88 DST: Main menu
Selection
2
F3=Exit F12=Cancel
Figure 6-89 Work with Disk Units
11.Select 9. Copy disk unit data as shown in Figure 6-90.
Selection
9
12.Select the current unprotected load source unit 1 as the disk unit from which to copy, as
shown in Figure 6-91.
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 50-105E951 2107 A85 DMP007 Configured
2 1 50-1061951 2107 A05 DMP003 RAID 5/Active
3 1 50-105F951 2107 A05 DMP001 RAID 5/Active
Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-105E951 2107 A85 DMP007 Configured
1=Select
Serial Resource
Option Number Type Model Name Status
1 50-1060951 2107 A05 DMP006 Non-configured
Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-105E951 2107 A85 DMP007 Configured
Serial Resource
Number Type Model Name Status
50-1060951 2107 A05 DMP006 Non-configured
F12=Cancel
Figure 6-93 Confirm Copy Disk Unit Data
During the copy process, the system displays the Copy Disk Unit Data Status panel, as
shown in Figure 6-94.
Phase Status
15. After the copy process completes successfully, the system IPLs automatically. During the IPL, a message displays, as shown in Figure 6-95, because the system found the previous load source as an unconfigured unit. You can continue by selecting 1. Keep the current disk configuration, as shown in Figure 6-95.
Selection
1
16.Next, look at the protected load source unit using the Display Disk Configuration Status
menu, as shown in Figure 6-96.
Serial Resource
ASP Unit Number Type Model Name Status
1 Unprotected
1 50-1060951 2107 A05 DMP006 RAID 5/Active
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active
17.Then, look at the previous load source unit with its unprotected status using the Display
Non-Configured Units menu as shown in Figure 6-97.
Serial Resource
Number Type Model Name Capacity Status
50-105E951 2107 A85 DMP007 35165 Non-configured
Note: Carefully plan and size your IOP-less Fibre Channel adapter card placement in your
System i server and its attachment to your storage system to avoid potential I/O loop or FC
port performance bottlenecks with the increased IOP-less I/O performance. Refer to
Chapter 4, “i5/OS planning for external storage” on page 75 and Chapter 5, “Sizing
external storage for i5/OS” on page 115 for further information.
Important: Do not try to work around the migration procedures that we discuss in this section by concurrently replacing the IOP/IOA pair for one mirror side or one path after the other. Concurrent hardware replacement is supported only for like-to-like replacement using the same feature codes.
Because the migration procedures are straightforward, we only outline the required steps for the different configurations.
Internally for each multipath group, this process creates a new multipath connection. Some
time later, you need to remove the obsolete connection using the multipath reset function (see
6.13, “Resetting a lost multipath configuration” on page 267).
6.12.3 IOP-less migration in a configuration without path redundancy
If you do not use multipath or mirroring, you need to follow these steps to migrate to IOP-less
Fibre Channel:
1. Turn off the System i server.
2. Replace the Fibre Channel IOP/IOA cards with IOP-less cards.
3. Change the host connections on the DS storage system to reflect the new WWPNs.
Note: An IPL might be required so that the System i recognizes the missing paths.
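One way to update the host connections with DS CLI is to remove the old connections and create new ones for the IOP-less adapters' WWPNs while keeping the existing volume group assignments. The WWPN, host connection ID, and connection name below are hypothetical, and the parameters should be checked against your DS CLI release:
dscli> lshostconnect
dscli> rmhostconnect 0010
dscli> mkhostconnect -wwname 10000000C9AABB01 -hosttype iSeries -volgrp V13 IOPless_IOA1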
6.13 Resetting a lost multipath configuration
WARNING: This service function should be run only under the direction of
the IBM Hardware Service.
You have selected to reset the number of paths on a multipath unit to
equal
the number of paths currently enlisted.
Press F10 to reset the paths to the following multipath disk units.
8. When the operation is complete, a confirmation panel displays, as shown in Figure 6-101.
WARNING: This service function should be run only under the direction of
the IBM Hardware Service.
You have selected to reset the number of paths on a multipath unit to equal
the number of paths currently enlisted.
Press F10 to reset the paths to the following multipath disk units.
See help for more details
Note: The DMPxxx resource name is not reset to DDxxx when multipathing is stopped.
1. Display/Alter/Dump
2. Licensed Internal Code log
3. Trace Licensed Internal code
4. Hardware service manager
5. Main storage dump manager
6. Product activity log
7. Operator panel functions
8. Performance data collector
Selection
1
F3=Exit F12=Cancel
Figure 6-102 Starting a Service Tool panel
3. In the Display/Alter/Dump Output Device panel, select 1. Display/Alter/Dump, as shown
in Figure 6-103.
Attention: Use extreme caution when using the Display/Alter/Dump function, because you can end up damaging your system configuration. Ideally, when performing these tasks for the first time, do so only after consulting IBM Support.
1. Display/Alter storage
2. Dump to printer
4. Dump to media
4. In the Select Data panel, select 2. Licensed Internal Code (LIC) data, as shown in
Figure 6-104.
5. In the next panel, select 14. Advanced analysis, as shown in Figure 6-105.
Figure 6-105 Selecting Advanced analysis
6. In the Select Advanced Analysis Command panel, scroll down the page, and select 1 to
run the MULTIPATHRESETTER macro, as shown in Figure 6-106.
Option Command
JAVALOCKINFO
LICLOG
LLHISTORYLOG
LOCKINFO
MASOCONTROLINFO
MASOWAITERINFO
MESSAGEQUEUE
MODINFO
MPLINFO
1 MULTIPATHRESETTER
MUTEXDEADLOCKINFO
MUTEXINFO
More...
F3=Exit F12=Cancel
Figure 6-106 Select Advanced Analysis Command panel
7. The multipath resetter macro has various options, which are displayed in the Specify
Advanced Analysis Options panel (Figure 6-107). For Options, enter -RESTMP -ALL.
Command . . . . : MULTIPATHRESETTER
This service function should be run only under the direction of the
IBM Hardware Service Support. You have selected to reset the
number of paths on a multipath unit to equal the number of paths
that have currently enlisted.
More...
F2=Find F3=Exit F4=Top F5=Bottom F10=Right F12=Cancel
Figure 6-108 Multipath reset confirmation
8. Press Enter to return to the Specify Advanced Analysis Options panel (Figure 6-109). For
Options, enter -CONFIRM -ALL.
Command . . . . : MULTIPATHRESETTER
9. In the Display Formatted Data panel (Figure 6-110), press F3 to return to the Specify
Advanced Analysis Options panel (Figure 6-107 on page 273).
*********************************************************************
***CONFIRM RESET MULTIPATH UNIT PATHS TO NUMBER CURRENTLY ENLISTED***
*********************************************************************
This service function should be run only under the direction of the
IBM Hardware Service Support.
More...
F2=Find F3=Exit F4=Top F5=Bottom F10=Right F12=Cancel
Figure 6-110 Multipath reset results
10.In the Specify Advanced Analysis Options panel (Figure 6-109 on page 274), repeat the
confirmation process to ensure that the path reset is performed. Retain the setting for the
Option parameter as -CONFIRM -ALL, and press Enter again.
*********************************************************************
***CONFIRM RESET MULTIPATH UNIT PATHS TO NUMBER CURRENTLY ENLISTED***
*********************************************************************
This service function should be run only under the direction of the
IBM Hardware Service Support.
Could not find any disk units with paths which need to be reset.
Bottom
F2=Find F3=Exit F4=Top F5=Bottom F10=Right F12=Cancel
Figure 6-111 No disks have to be reset
Note: The DMPxxx resource name is not reset to DDxxx when multipathing is stopped.
Chapter 7
Important: Before you start with any migration, as with any form of upgrade or system
re-configuration, it is important that you be able to recover the system in the event of a
failure. Therefore, we strongly recommend that you have two copies of a current full system
backup before attempting any migration of your systems.
Important: The migration procedures that we describe require that you use tools from DST
and that you perform removals of disk units. If you are uncomfortable performing these
tasks, consult an IBM Sales Representative or your IBM Business Partner.
7.2 Migration prerequisites
You must ensure that your system has the correct level of i5/OS, service processor, and HMC code, as well as a minimum of one, but preferably two, #2847 IOPs or IOP-less Fibre Channel cards for SAN load before attempting any of the migration procedures that we describe in this chapter.
For the scenarios that we describe, we assume that you have already configured the storage system and have performed any other disk migration work. Depending on your migration scenario, you need either one protected or two unprotected LUNs of at least 17 GB for use as the new LSU. If you have two LUNs for mirroring the external load source, they need to be on separate #2847 IOP-based or IOP-less Fibre Channel adapters.
Important: Make sure that the DS host port that is used for IOP-less direct storage attachment is configured for the FC-AL protocol. In all other cases, such as SAN switch-attached IOP-less adapters or #2847 IOP-based Fibre Channel adapters, make sure that the DS host port is configured for FC-SW (SCSI-FCP).
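With DS CLI, the I/O port topology can typically be displayed and changed with the lsioport and setioport commands. The port ID I0101 is hypothetical, and the topology keywords should be verified against your DS CLI release:
dscli> lsioport
dscli> setioport -topology fc-al I0101
dscli> setioport -topology scsi-fcp I0101
The first setioport example configures a port for IOP-less direct attachment (FC-AL); the second configures a port for switch-attached IOP-less or #2847 IOP-based attachment (SCSI-FCP).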
These scenarios cover only the migration of the LSU to the external storage subsystem. If you
want to perform any further configuration, review the discussion about implementing external
storage in Chapter 6, “Implementing external storage with i5/OS” on page 207.
You need to know which of your internal disks is currently your LSU.
7.2.1 Pre-migration checklist
Table 7-1 includes a checklist that can help you gather information about your system that you
need before you begin the migration process. You need to complete the information in this
table before you start any procedures to migrate your LSU. You can determine the locations of
the #2847 IOPs from the Hardware Configuration Listing. We describe the procedure for
determining the current load source protection in 7.3, “Migration scenarios” on page 280.
Table 7-1 Pre-migration checklist (excerpt: SP code level, i5/OS level)
7.3 Migration scenarios
To determine which scenario is appropriate for your environment, you first need to identify your current environment as follows:
1. Issue the STRSST command. A panel similar to that shown in Figure 7-1 opens.
2. Enter your user ID and password. In the System Service Tools (SST) main menu, select
3. Work with disk units, as shown in Figure 7-2.
3. In the Work with Disk Units panel (Figure 7-3), select 1. Display disk configuration.
Figure 7-3 Work with Disk Units
4. In the Display Disk Configuration panel, select 3. Display disk configuration protection,
as shown in Figure 7-4.
Selection
3
F3=Exit F12=Cancel
Figure 7-4 Display Disk Configuration
Serial Resource
ASP Unit Number Type Model Name Protection
1 Mirrored
1 68-0D0BC12 6718 050 DD007 I/O Bus
1 68-0D0A0DA 6718 050 DD002 I/O Bus
2 68-0D0A6AE 6718 050 DD010 Bus
2 68-0D09EA2 6718 050 DD006 Bus
3 68-0D0A722 6718 050 DD011 Bus
3 68-0D0A773 6718 050 DD001 Bus
4 68-0D0A733 6718 050 DD012 Bus
4 68-0D09F9C 6718 050 DD008 Bus
5 68-0D0AB08 6718 050 DD009 Bus
5 68-0D0BBB0 6718 050 DD003 Bus
6 68-0D0A51B 6718 050 DD005 I/O Bus
6 68-0D0BB13 6718 050 DD004 I/O Bus
Based on the protection that is displayed, determine which migration scenario applies:
If you have a single load source unit and the protection shows Unprotected, you have an
unprotected load source unit. Follow the procedure that we describe in 7.3.5, “Unprotected
internal LSU migrating to external LSU” on page 367.
If you have a single load source unit and the protection shows Device Parity, you have a
RAID protected load source unit. Follow the procedure that we describe in 7.3.1, “RAID
protected internal LSU migrating to external mirrored or multipath LSU” on page 285.
If you have dual load source units and the protection shows Controller, you are mirrored
to an internal load source unit. Follow the procedure that we describe in 7.3.2, “Internal
LSU mirrored to internal LSU migrating to external LSU” on page 321.
If you have dual load source units and the protection shows anything other than
Controller, then you have enabled remote load source mirroring. If one of the unit 1 disk
units shows a type of 2105, 1750, or 2107, then you have an external remote load source
mirror. Follow the procedure that we describe in 7.3.4, “Internal LSU mirrored to external
remote LSU migrating to external LSU” on page 358.
If your devices are any other type, then you have a remotely mirrored internal load source
unit. Follow the procedure that we describe in 7.3.3, “Internal LSU mirrored to internal
remote LSU migrating to external LSU” on page 339.
7.3.1 RAID protected internal LSU migrating to external mirrored or
multipath LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN (see 7.2, “Migration prerequisites” on page 278). In this section, we describe the steps to migrate the system from using an internal LSU that is device parity protected to using the boot from SAN function.
Attention: Do not turn off the system while any disk unit data function is running.
Unpredictable errors can occur if the system is turned off in the middle of the load source
migration function.
Important: This procedure requires that you stop RAID protection on the LSU set, which leaves your system unprotected, and also that you remove a disk unit. If you are not comfortable with this task, engage services from your IBM Sales Representative or IBM Business Partner.
When migrating from an internal load source to external, you need to decide on your
protection strategy:
For i5/OS V6R1 and later
We suggest that you retain a RAID protected load source and that you provide path
redundancy using load source multipathing. To enable this protection, create the LUN in
the DS system as protected and include it in one DS volume group, which is assigned to
two Fibre Channel IOAs to be used for boot from SAN (see Figure 7-6).
Figure 7-6 Boot from SAN Migration for parity protected internal LSU for i5/OS V6R1 and later
Figure 7-7 Boot from SAN Migration for parity protected internal LSU prior to i5/OS V6R1
In our discussion, we assume that you follow these recommendations for a protection
strategy.
Before you begin, use either DS CLI or the DS Storage Manager GUI to configure the load source unit LUN or LUNs on the external storage system. For further information, refer to Chapter 8, “Using DS CLI with System i” on page 391 if you are using DS CLI, or refer to 9.2, “Configuring DS Storage Manager logical storage” on page 474 if you are using the GUI.
If you plan to use a mirrored external load source, add only one of the two unprotected LUNs
to the system ASP at this time.
Attention: At this time, you should have one LUN that is non-configured: a protected model A0x if you are using the i5/OS V6R1 multipath load source, or an unprotected model A8x if you are going to mirror your external load source LUN. A second unprotected LUN should be added to your system ASP only if you are going to use load source mirroring. Make sure to note the serial number of your external load source LUN or LUNs for further reference.
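As an illustration, load source LUNs of the kind described above might be created with DS CLI commands similar to the following. The extent pools, volume IDs, and names are hypothetical, and the -os400 model values that your storage system and DS CLI release support should be verified:
dscli> mkfbvol -extpool P0 -os400 A05 -name ITSO_LS 1100
dscli> mkfbvol -extpool P0 -os400 A85 -name ITSO_LSM1 1200
dscli> mkfbvol -extpool P1 -os400 A85 -name ITSO_LSM2 1201
The first command creates a single protected (A05) LUN for an i5/OS V6R1 multipath load source; alternatively, the last two commands create a pair of unprotected (A85) LUNs in different extent pools for a mirrored external load source.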
Begin the migration process by accessing the Hardware Management Console (HMC) and
changing the partition settings to do a manual IPL as follows:
1. From the Systems Management → Servers navigation tree, select your managed server.
Select the partition with which you are working. Then, click Tasks → Properties as shown
in Figure 7-8.
Note: For HMC versions prior to V7, right-click the partition name and select Properties.
2. In the Partition Properties window, shown in Figure 7-9, select the Settings tab.
1. From the IPL or Install the System panel (Figure 7-11), select 3. Use Dedicated Service Tools (DST).
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-11 IPL or Install the System
2. Log on with your DST user ID and password, as shown in Figure 7-12.
3. In the Use Dedicated Service Tools (DST) panel, select 4. Work with disk units, as shown in Figure 7-13.
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-13 Use Dedicated Service Tools
4. Turn off the device parity protection on the load source to prepare for the migration by
selecting 1. Work with disk configuration (Figure 7-14).
Figure 7-14 Work with Disk Units
5. In the Work with Disk Configuration panel (Figure 7-15), select 5. Work with device parity protection.
Figure 7-15 Work with Disk Configuration
6. In the Work with Device Parity Protection panel, select 1. Display device parity status
(Figure 7-16).
Selection
1
F3=Exit F12=Cancel
Figure 7-16 Work with Device Parity Status
7. Press F12 to return to the Work with Device Parity Protection panel.
8. In the Work with Device Parity Protection panel, select 3. Stop device parity protection
(Figure 7-18).
Selection
3
F3=Exit F12=Cancel
Figure 7-18 Stop device parity protection
Attention: After this step, the system is running unprotected. Thus, make sure that you
know the location of backups before you continue.
F3=Exit F12=Cancel
Figure 7-19 Stop Device Parity Protection
11.In the Confirm Stop Device Parity Protection panel, shown in Figure 7-20, take a moment
to confirm that the load source unit is listed. When you are sure you have selected the
correct parity set, press Enter.
F12=Cancel
Figure 7-20 Confirm Stop Device Parity Protection
Operation Status
Prepare to stop . . . . . . . . . . . . : Completed
Stop device parity protection . . . . . : 83 %
Wait for next display or press F16 for DST main menu
Figure 7-21 Stop Device Parity Protection Status
Copying the LSU data
Now that the LSU is no longer RAID protected, you can copy it as follows:
1. Press F12 to return to the Work with Disk Configuration panel.
2. Press F12 again to return to the Work with Disk Units panel.
3. In the Work with Disk Units panel, select 2. Work with disk unit recovery as shown in
Figure 7-22.
Selection
2
F3=Exit F12=Cancel
Figure 7-22 Work with Disk Units
Important: Take care when using the following options, because using the incorrect
option can result in loss of data.
4. In the Work with Disk Unit Recovery panel, select 9. Copy disk unit data.
5. Select your existing internal load source (disk unit 1) as the copy from unit (Figure 7-24).
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Configured
2 1 50-1106741 2107 A05 DD014 DPY/Active
3 1 50-1003741 2107 A05 DD015 DPY/Active
4 1 50-1105741 2107 A05 DD016 DPY/Active
5 1 50-1103741 2107 A05 DD018 DPY/Active
6 1 50-1009741 2107 A05 DD019 DPY/Active
7 1 50-1108741 2107 A05 DD021 DPY/Active
8 1 50-110A741 2107 A05 DD022 DPY/Active
9 1 50-1102741 2107 A05 DD023 DPY/Active
10 1 50-1104741 2107 A05 DD024 DPY/Active
11 1 50-1002741 2107 A05 DD025 DPY/Active
12 1 50-1001741 2107 A05 DD026 DPY/Active
13 1 50-1005741 2107 A05 DD027 DPY/Active
14 1 50-1006741 2107 A05 DD028 DPY/Active
More...
F3=Exit F5=Refresh F11=Display non-configured units F12=Cancel
Figure 7-24 Select Copy from Disk Unit
6. Select the non-configured LUN that you want to use as the new external load source as the copy to disk unit (Figure 7-25).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 072 DD007 DPY/Active
1=Select
Serial Resource
Option Number Type Model Name Status
1 50-1000741 2107 A85 DD013 Non-configured
50-1103741 2107 A05 DD014 Non-configured
50-1007741 2107 A05 DD023 Non-configured
50-1006741 2107 A05 DD020 Non-configured
50-110A741 2107 A05 DD019 Non-configured
50-1109741 2107 A05 DD024 Non-configured
50-1004741 2107 A05 DD025 Non-configured
50-1002741 2107 A05 DD029 Non-configured
50-1005741 2107 A05 DD030 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-25 Select Copy to Disk Unit
7. When you are certain that you have selected the correct from and to units, press Enter. You might see the panel that is shown in Figure 7-26 if the LUN was attached previously to a system. If you see this panel and you are sure that it is the correct LUN, press F10 to ignore the problem report and continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 072 DD007 DPY/Active
Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
F12=Cancel
Figure 7-27 Confirm Copy of Disk Unit
9. Press Enter to copy the existing LSU to the new LUN. A panel displays that indicates the
progress of the copy, as shown in Figure 7-28.
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-28 Copy Disk Unit Data Status
The copy process can take from 10 to 60 minutes. After the copy, you are returned to the
Work with Disk Unit Recovery panel.
1. Display/Alter/Dump
2. Licensed Internal Code log
3. Trace Licensed Internal code
4. Hardware service manager
5. Main storage dump manager
6. Product activity log
7. Operator panel functions
8. Performance data collector
Selection
F3=Exit F12=Cancel
Figure 7-29 Start a Service Tool
12.In the Operator Panel Functions panel, ensure that the IPL source is set to 1 or 2 and that
IPL mode is set to 1, as shown in Figure 7-30. Then, press F10 to turn off the system.
F3=Exit F12=Cancel
Figure 7-31 Confirm System Power Down
Important: Now, you are required to remove a disk unit from the system. If you are unsure
about how to remove a disk unit, contact your local customer engineer. This service is a
chargeable service. Tell the customer engineer that you are migrating to a Boot from SAN
configuration.
After the system is shut down, refer to the checklist that you created previously (using
Table 7-1 on page 279). Then, physically remove the first LSU from the machine.
Important: Do not proceed beyond this point until you are sure that you have removed the
correct disk unit.
Changing the tagged LSU
Now, go to your HMC, and change the tagged LSU from the RAID Controller Card for the
internal LSU to the Fibre Channel Disk Controller that is controlling the new SAN LSU by
following these steps:
1. Select the partition name with which you are working, and choose Tasks →
Configuration → Manage profiles, as shown in Figure 7-32.
Note: For HMC versions prior to V7, right-click the partition name and select Properties.
2. In the Managed Profiles window, select Actions → Edit, as shown in Figure 7-33.
5. Select the IOA that is assigned to your new LSU, as shown in Figure 7-36. Then, click OK.
6. Click OK.
7. Activate the partition by selecting Tasks → Operations → Activate, as shown in
Figure 7-37.
Note: For HMC versions prior to V7, right-click the partition name and select Activate.
9. The HMC displays a status dialog box that closes when the task is complete and when the
partition is activated. Then, wait for the DST panel to display.
10.When the system has IPLed to DST, select 3. Use Dedicated Service Tools (DST), as
shown in Figure 7-39.
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-39 Use Dedicated Service Tools (DST)
Note: If you are migrating to a RAID protected load source for using i5/OS V6R1
multipathing for the load source LUN in the DS system, skip to “Re-enabling parity
protection” on page 316.
Mirroring of the LSU
If you are migrating to a mirrored external load source, you next enable remote load source
mirroring because the mirror is on a separate IOA. Follow these steps:
1. In the Use Dedicated Service Tools (DST) panel, select 4. Work with disk units, as
shown in Figure 7-40.
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-40 Work with disk units
Selection
1
F3=Exit F12=Cancel
Figure 7-41 Work with Disk Units menu
3. In the Work with Disk Configuration panel, select 4. Work with mirrored protection
(Figure 7-42).
Selection
4
F3=Exit F12=Cancel
Figure 7-42 Work with Disk Configuration
4. With the Work with Mirrored Protection panel, select 4. Enable remote load source
mirroring (Figure 7-43).
Selection
4
F3=Exit F12=Cancel
Figure 7-43 Work with Mirrored Protection
5. The confirmation panel shown in Figure 7-44 displays. This panel only enables remote
mirroring. It does not actually start it. Press Enter to enable remote load source mirroring.
Remote load source mirroring will allow you to place the two
units that make up a mirrored load source disk unit (unit 1) on
two different IOPs. This may allow for higher availability
if there is a failure on the MFIOP.
Note: When there is only one load source disk unit attached to
the multifunction IOP, the system will not be able to IPL if
that unit should fail.
F3=Exit F12=Cancel
Figure 7-44 Confirm Enable Remote Load Source Mirroring
F3=Exit F12=Cancel
Figure 7-45 Select ASP to Start Mirrored Protection
After the system IPLs, you are returned to the DST, and mirroring is activated. The next IPL
fully starts the mirror by synchronizing the LUNs.
Alternatively, you can choose to remove the unprotected internal drives if you do not intend to
use them further. For removing the drives, perform normal service actions. Then, restart the
system from the Dedicated Service Tools Menu, and choose 1. Perform an IPL.
Re-enabling parity protection
If you want to continue using the internal drives from your previous load source parity set, restart device parity protection as follows:
1. From the IPL or Install the System panel, select 3. Use Dedicated Service Tools (DST),
as shown in Figure 7-46.
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-46 Use Dedicated Service Tools (DST)
2. Sign on and select 4. Work with disk units (Figure 7-47).
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-47 Dedicated Service Tools menu
3. In the Work with Disk Units panel, select 1. Work with disk configuration, as shown in
Figure 7-48.
Selection
1
F3=Exit F12=Cancel
Figure 7-48 Work with Disk Units panel
4. In the Work with Disk Configuration panel (Figure 7-49), select 5. Work with device parity protection.
Figure 7-49 Work with Disk Configuration panel
5. In the Work with Device Parity Protection panel, select 2. Start device parity protection,
as shown in Figure 7-50.
Selection
2
F3=Exit F12=Cancel
Figure 7-50 Work with Device Parity Protection panel
6. You are presented with a panel similar to that shown in Figure 7-51, which indicates how
many parity sets can be started. Select each set that you want to start by selecting
1=Start device parity protection.
F3=Exit F12=Cancel
Figure 7-51 Start Device Parity Protection
You might receive the warning panel that is shown in Figure 7-52. This warning tells you
that one or more of the non-configured disks was used on an i5/OS system previously.
Press F10 to ignore this warning and continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
F12=Cancel
Figure 7-53 Confirm Starting Device Parity Protection
Phase Status
Prepare to start . . . . . . . . . . . . : 4 %
Start device parity protection . . . . . :
Wait for next display or press F16 for DST main menu
Figure 7-54 Start Device Parity Protection Status
7.3.2 Internal LSU mirrored to internal LSU migrating to external LSU
For this scenario, we assume that your systems meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this section, we describe how to migrate a
system using an internal LSU that is protected currently by i5/OS mirroring to boot from SAN
with an external mirrored load source (see Figure 7-55).
Figure 7-55 Boot from SAN Migration from internal mirrored LSU to external mirrored LSU
Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.
Important: This procedure requires that you suspend mirroring on the load source unit.
While mirroring is suspended, the remaining load source is a single point of failure. Thus,
ensure that you have the necessary backups so that you can recover your system in the
event of failure. You are also required to remove the internal load source unit. If you are not
comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.
When migrating from an internal load source to an external one, you first need to configure two
unprotected load source unit LUNs in separate DS volume groups. Assign each volume group
to separate #2847 IOP-based or IOP-less Fibre Channel adapters in your System i server.
For further information, refer to Chapter 8, “Using DS CLI with System i” on page 391 if you
use the DS CLI or 9.2, “Configuring DS Storage Manager logical storage” on page 474 if you
use the GUI. Add one of the two unprotected LUNs to your system ASP.
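For illustration only, the following DS CLI sketch (issued from the DS CLI interactive console) shows one way that the two unprotected load source LUNs might be created and assigned. The storage image ID, extent pools, volume IDs, WWPNs, and names are hypothetical placeholders, and the volume group IDs (V1, V2) used on mkhostconnect are the IDs that the mkvolgrp commands report in your environment:
mkfbvol -dev IBM.2107-7580741 -extpool P0 -os400 A85 -name LSU_1 1000
mkfbvol -dev IBM.2107-7580741 -extpool P1 -os400 A85 -name LSU_2 1100
mkvolgrp -dev IBM.2107-7580741 -type os400mask -volume 1000 LSU_VG1
mkvolgrp -dev IBM.2107-7580741 -type os400mask -volume 1100 LSU_VG2
mkhostconnect -dev IBM.2107-7580741 -wwname 10000000C9123451 -hosttype iSeries -volgrp V1 FC_IOA_1
mkhostconnect -dev IBM.2107-7580741 -wwname 10000000C9123452 -hosttype iSeries -volgrp V2 FC_IOA_2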
Attention: At this time, you should have one unprotected LUN that is non-configured and
one that is in the system ASP.
Now that your two new load source LUNs are attached to your system, you can use SST to
check that the disks are reporting correctly. You should see at least one non-configured
device (model A8x) to be used for your new external load source LUN.
Serial Resource
Number Type Model Name Capacity Status
50-1000741 2107 A85 DD013 35165 Non-configured
50-1105741 2107 A05 DD027 35165 Non-configured
50-110A741 2107 A05 DD019 35165 Non-configured
50-1107741 2107 A05 DD017 35165 Non-configured
50-1101741 2107 A05 DD033 35165 Non-configured
50-1108741 2107 A05 DD032 35165 Non-configured
50-1005741 2107 A05 DD030 35165 Non-configured
50-1003741 2107 A05 DD031 35165 Non-configured
50-1104741 2107 A05 DD026 35165 Non-configured
50-1004741 2107 A05 DD025 35165 Non-configured
50-1102741 2107 A05 DD028 35165 Non-configured
50-1109741 2107 A05 DD024 35165 Non-configured
50-1002741 2107 A05 DD029 35165 Non-configured
50-100A741 2107 A05 DD015 35165 Non-configured
More...
Press Enter to continue.
Access the Hardware Management Console (HMC) and change the partition settings to do a
manual IPL as follows:
1. From the Systems Management → Servers navigation tree, select your managed server.
Select the partition with which you are working. Then, click Tasks → Properties, as
shown in Figure 7-57.
Note: For HMC versions prior to V7, right-click the partition name and select Properties.
2. In the Partition Properties window, select the Settings tab (Figure 7-58).
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-60 IPL or Install the System panel
2. Sign on, and select 4. Work with disk units (Figure 7-61).
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-61 Use Dedicated Service Tools (DST)
3. In the Work with Disk Units panel, select 2. Work with disk unit recovery (Figure 7-62).
Selection
1
F3=Exit F12=Cancel
Figure 7-62 Work with Disk Units
Important: Exercise care when using the following options, because using the incorrect
option can result in loss of data.
4. In the Work with Disk Unit Recovery panel, select 7. Suspend mirrored protection
(Figure 7-63).
Selection
7
5. A list of the LUNs that you can suspend displays (Figure 7-64). Select unit 1. Only one
unit 1 is listed, because you can select only the LSU mirror, not the actual primary LSU.
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Active
3 1 68-0D0A0DA 6718 050 DD002 Active
3 1 68-0D0A51B 6718 050 DD005 Active
6 1 68-0D09EA2 6718 050 DD006 Active
6 1 68-0D0A6AE 6718 050 DD012 Active
9 1 68-0D0AB08 6718 050 DD009 Active
9 1 68-0D0BBB0 6718 050 DD003 Active
10 1 68-0D0A733 6718 050 DD010 Active
10 1 68-0D0BB13 6718 050 DD004 Active
11 1 68-0D0A722 6718 050 DD011 Active
11 1 68-0D09F9C 6718 050 DD008 Active
Copying the LSU data
To copy the LSU data, follow these steps:
1. From the Work with Disk Unit Recovery panel select 9. Copy disk unit data (Figure 7-65).
Selection
9
2. Select the existing internal load source (disk unit 1) as the copy from unit (Figure 7-66).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active
1=Select
Serial Resource
Option Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
50-1104741 2107 A05 DD032 Non-configured
50-1009741 2107 A05 DD026 Non-configured
50-1007741 2107 A05 DD019 Non-configured
50-1006741 2107 A05 DD023 Non-configured
50-1005741 2107 A05 DD034 Non-configured
50-1002741 2107 A05 DD022 Non-configured
50-1100741 2107 A85 DD033 Non-configured
50-110A741 2107 A05 DD014 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-67 Select Copy to Disk Unit
4. You might see the panel shown in Figure 7-68 if the LUN was attached to a system
previously. If so, and you are sure it is the correct LUN, press F10 to ignore the warning
and continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
Other sub-unit will become missing
5. Confirm your choice to prevent any chance of accidentally copying a disk unit
(Figure 7-69).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active
Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
F12=Cancel
Figure 7-69 Confirm Copy of Disk Unit
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-70 Copy Disk Unit Data Status
The copy process can take from 10 to 60 minutes. After the copy, you return to the Work
with Disk Unit Recovery panel.
7. Shut down the system by pressing F12 to return to the Work with Disk Units panel. Then,
press F12 again to return to the Dedicated Service Tools main menu. Select 7. Start a
service tool to get to the Service Tools menu shown in Figure 7-29 on page 306.
8. Select 7. Operator Panel Functions. Make sure that the IPL source is set to 1 or 2 and
that the IPL mode is set to 1, as shown in Figure 7-71. Press F10 to shut down the system.
9. Next, you need to confirm the restart request, as shown in Figure 7-72.
F3=Exit F12=Cancel
Figure 7-72 Confirm System Reset
Important: You are required to remove a disk unit from the system. If you are unsure about
this process, contact your local customer engineer. This service is a chargeable service,
and you need to tell them you are migrating to a Boot from SAN configuration.
After the system is shut down, refer to the checklist that you created previously (Table 7-1 on
page 279). Then, physically remove the first Load Source Unit from the machine. Do not
proceed beyond this point until you are sure that you have removed the correct disk unit.
Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.
2. In the Managed Profiles window, select Actions → Edit as shown in Figure 7-74.
3. In the Logical Partition Profile Properties window, choose the Tagged I/O tab, as shown in
Figure 7-75.
4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-76.
Attention: You must shut down the partition fully and then reactivate it from the HMC
so that the new load source tagging takes effect.
6. After shutting down your partition, activate it from the HMC by selecting Tasks →
Operations and then selecting Activate (Figure 7-78).
Note: For HMC versions prior to V7, right-click the partition, select Properties, and click Activate.
7. In the Activate Logical Partition window, select the profile to be used for activating the
partition (Figure 7-79).
The HMC then displays a status dialog box that closes when the task is complete and
when the partition is activated. Then, wait for the DST panel to display.
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Option Number Type Model Name Status
50-1100741 2107 A85 DD033 Non-configured
F3=Exit F12=Cancel
Figure 7-81 Select Replacement Unit
4. You might receive a problem report similar to that shown in Figure 7-82. If so, look at the
errors and check that you have chosen the correct LUN and that the configuration of the
LUN is correct. If everything is correct, then press F10 to continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
Lower level of mirrored protection
5. You then receive a confirmation panel similar to that shown in Figure 7-83. Press Enter to
continue.
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-1100741 2107 A85 DD033 Resuming
F12=Cancel
Figure 7-83 Confirm Replace of Configured Unit
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-84 Replace Disk Unit Data Status
Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-1000741 2107 A85 DD013 Active
1 50-1100741 2107 A85 DD033 Resuming
3 68-0D0A0DA 6718 050 DD002 Active
3 68-0D0A51B 6718 050 DD005 Active
6 68-0D09EA2 6718 050 DD006 Active
6 68-0D0A6AE 6718 050 DD012 Active
9 68-0D0AB08 6718 050 DD009 Active
9 68-0D0BBB0 6718 050 DD003 Active
10 68-0D0A733 6718 050 DD010 Active
10 68-0D0BB13 6718 050 DD004 Active
11 68-0D0A722 6718 050 DD011 Active
11 68-0D09F9C 6718 050 DD008 Active
The external LSU disks are now mirrored, and the mirror completely resumes during IPL.
7.3.3 Internal LSU mirrored to internal remote LSU migrating to external LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this section, we describe the steps to
migrate a system from using an internal load source unit that is currently protected by a
remote load source mirror on another internal disk unit to using the boot from SAN
function (see Figure 7-86).
Figure 7-86 Boot from SAN migration from internal mirrored to internal remote LSU to external LSU
Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.
Important: This procedure requires that you suspend mirroring on the load source unit.
While mirroring is suspended, the remaining load source is a single point of failure. Thus,
ensure that you have the necessary backups so that you can recover your system in the
event of failure. You are also required to remove the internal load source unit. If you are not
comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.
When migrating from an internal remotely mirrored load source to an external mirrored load
source, you first need to configure two unprotected load source unit LUNs in separate DS
volume groups. Assign each of these to separate System i host #2847 IOP-based or IOP-less
Fibre Channel adapters. For further information, refer to Chapter 8, “Using DS CLI with
System i” on page 391 if you are using DS CLI or 9.2, “Configuring DS Storage Manager
logical storage” on page 474 if you are using the GUI. Add one of the two unprotected LUNs
to your system ASP now.
Attention: At this time, you should have one unprotected LUN that is non-configured and
one that is in the system ASP.
Serial Resource
Number Type Model Name Capacity Status
50-1000741 2107 A85 DD013 35165 Non-configured
50-1105741 2107 A05 DD027 35165 Non-configured
50-110A741 2107 A05 DD019 35165 Non-configured
50-1107741 2107 A05 DD017 35165 Non-configured
50-1101741 2107 A05 DD033 35165 Non-configured
50-1108741 2107 A05 DD032 35165 Non-configured
50-1005741 2107 A05 DD030 35165 Non-configured
50-1003741 2107 A05 DD031 35165 Non-configured
50-1104741 2107 A05 DD026 35165 Non-configured
50-1004741 2107 A05 DD025 35165 Non-configured
50-1102741 2107 A05 DD028 35165 Non-configured
50-1109741 2107 A05 DD024 35165 Non-configured
50-1002741 2107 A05 DD029 35165 Non-configured
50-100A741 2107 A05 DD015 35165 Non-configured
More...
Press Enter to continue.
Proceed with the migration process by performing a manual IPL to DST as follows:
1. Select the partition name with which you are working. Then, click Tasks → Properties
(see Figure 7-88).
Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.
2. In the Partition Properties window, select the Settings tab (Figure 7-89).
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-91 IPL or Install the System
2. Sign on, and then in the Use Dedicated Service Tools (DST) panel, select 4. Work with
disk units, as shown in Figure 7-92.
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-92 Use Dedicated Service Tools (DST)
3. On the Work with Disk Units panel, select 2. Work with disk unit recovery, as shown in
Figure 7-93.
Selection
1
F3=Exit F12=Cancel
Figure 7-93 Work with Disk Units
Important: Exercise care when using these options, because using the incorrect option
can result in loss of data.
4. In the Work with Disk Unit Recovery panel, select 7. Suspend mirrored protection
(Figure 7-94).
Selection
7
5. A list of the LUNs that you can suspend displays, as shown in Figure 7-95. Select unit 1.
Note that there is only one of these units, because you can select only the LSU mirror and
not the actual primary LSU.
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Active
3 1 68-0D0A0DA 6718 050 DD002 Active
3 1 68-0D0A51B 6718 050 DD005 Active
6 1 68-0D09EA2 6718 050 DD006 Active
6 1 68-0D0A6AE 6718 050 DD012 Active
9 1 68-0D0AB08 6718 050 DD009 Active
9 1 68-0D0BBB0 6718 050 DD003 Active
10 1 68-0D0A733 6718 050 DD010 Active
10 1 68-0D0BB13 6718 050 DD004 Active
11 1 68-0D0A722 6718 050 DD011 Active
11 1 68-0D09F9C 6718 050 DD008 Active
Copying the LSU data
To copy the LSU data, follow these steps:
1. From the Work with Disk Unit Recovery panel, select 9. Copy disk unit data (Figure 7-96).
Selection
9
2. Select the existing internal load source (disk unit 1) as the copy from unit (Figure 7-97).
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0BC12 6718 050 DD007 Active
3. Select the designated unprotected external load source LUN (model A8x) for which you
noted the serial number previously as the copy to unit (Figure 7-98).
In a mirrored environment, you probably only see the single load source unit in the list,
because you are unable to copy a unit that is part of a live mirrored pair.
When you are certain you have selected the correct from and to units, press Enter.
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active
1=Select
Serial Resource
Option Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
50-1104741 2107 A05 DD032 Non-configured
50-1009741 2107 A05 DD026 Non-configured
50-1007741 2107 A05 DD019 Non-configured
50-1006741 2107 A05 DD023 Non-configured
50-1005741 2107 A05 DD034 Non-configured
50-1002741 2107 A05 DD022 Non-configured
50-1100741 2107 A85 DD033 Non-configured
50-110A741 2107 A05 DD014 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-98 Select Copy to Disk Unit
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
Other sub-unit will become missing
5. In the Confirm Copy Disk Unit Data panel, review your choice carefully to prevent copying
a wrong disk unit accidentally. Then, press Enter (Figure 7-100).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active
Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
F12=Cancel
Figure 7-100 Confirm Copy of Disk Unit
6. The system proceeds to copy the existing suspended internal LSU to the new LUN. A
panel displays that indicates the progress of the copy, as shown in Figure 7-101.
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-101 Copy Disk Unit Data Status
The process can take 10 to 60 minutes. Then, you return to the Work with Disk Unit Recovery
panel. You now have migrated the load source unit, but you need to IPL the system to use it.
Follow these steps:
1. Press F12 twice to get to the main DST panel.
2. Select 7. Start a service tool.
3. Select 7. Operator panel functions.
F3=Exit F12=Cancel
Figure 7-103 Confirm System Reset
Important: Now, you need to remove a disk unit from the system. If you are unsure how to
perform this process, contact your local customer engineer. This service is a chargeable
service, and you should tell them that you are migrating to a Boot from SAN configuration.
After the system is shut down, refer to the checklist that you created previously (Table 7-1 on
page 279). Next, you need to physically remove the first internal Load Source Unit from the
machine.
Changing the tagged LSU
Go to the Hardware Management Console (HMC), and change the tagged load source unit
from the RAID Controller Card for the internal LSU to the Fibre Channel IOA that is controlling
your new LSU on external storage. Follow these steps on the HMC:
1. Select the partition name with which you are working. Go to Tasks → Configuration →
Manage Profiles, as shown in Figure 7-104.
Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.
4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-107.
5. Now, select the Fibre Channel IOA that is assigned to the new external LSU, as shown in
Figure 7-108. Click OK to proceed.
Attention: You must fully shut down the system and then reactivate it from the HMC so
that the new load source tag takes effect.
After the system has shut down, you must activate it again from the HMC:
1. From the drop-down menu, select Tasks → Operations → Activate (Figure 7-109).
Note: For HMC versions prior to V7, right-click the partition, then select Properties and click
Activate.
The HMC then displays a status dialog box that closes when the task is complete and when
the partition is activated. Wait for the DST panel to display.
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Suspended
3. Select the new external load source mirror from the list provided (Figure 7-112) and press
Enter.
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Option Number Type Model Name Status
50-1100741 2107 A85 DD033 Non-configured
F3=Exit F12=Cancel
Figure 7-112 Select replacement unit
4. You might receive a problem report (Figure 7-113). If so, look at the errors, and check that
you have chosen the correct LUN and that the configuration of the LUN is correct. If you
determine that the LUN and its configuration are correct, then press F10 to continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
Lower level of mirrored protection
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-1100741 2107 A85 DD033 Resuming
F12=Cancel
Figure 7-114 Confirm Replace of Configured Unit
The progress panel is very similar to one that you saw previously (Figure 7-115).
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-115 Replace Disk Unit Data Status
6. After the replacement process completes, the Work with Disk Units recovery panel
displays again. Press F12, and then select:
a. 1. Work with disk configuration
b. 1. Display disk configuration
c. 1. Display disk configuration status
A panel similar to that shown in Figure 7-116 displays.
Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-1000741 2107 A85 DD013 Active
1 50-1100741 2107 A85 DD033 Resuming
3 68-0D0A0DA 6718 050 DD002 Active
3 68-0D0A51B 6718 050 DD005 Active
6 68-0D09EA2 6718 050 DD006 Active
6 68-0D0A6AE 6718 050 DD012 Active
9 68-0D0AB08 6718 050 DD009 Active
9 68-0D0BBB0 6718 050 DD003 Active
10 68-0D0A733 6718 050 DD010 Active
10 68-0D0BB13 6718 050 DD004 Active
11 68-0D0A722 6718 050 DD011 Active
11 68-0D09F9C 6718 050 DD008 Active
Your external LSU disks are now mirrored, and the mirror resumes completely during IPL.
Figure 7-117 Boot from SAN Migration from remote mirrored LS to external mirrored LS
Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.
Important: This procedure requires you to remove your internal load source unit. If you are
not comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.
Migrating from an internal load source that already has a remote mirrored load source on
external storage is probably the simplest of all the migration scenarios, because most of the
hard work is already done: a copy of the load source unit data already exists on external storage.
First, you need to configure another unprotected load source unit LUN as the load source
mirror mate in a separate DS volume group than your existing remote external load source.
Each DS volume group with a load source unit LUN should be assigned to separate System i
host #2847 IOP-based or IOP-less Fibre Channel adapters. For further information, refer to
Chapter 8, “Using DS CLI with System i” on page 391 if you are using DS CLI or 9.2,
“Configuring DS Storage Manager logical storage” on page 474 if you are using the GUI.
Attention: At this time, you should have one unprotected LUN that is non-configured.
To start the migration to boot from SAN, shut down the system by entering the following
command:
PWRDWNSYS OPTION(*IMMED) RESTART(*NO)
Note: The partition must be fully deactivated and not just restarted, because you are going
to change the load source tagging afterwards.
Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.
2. In the Managed Profiles window, select Actions → Edit as shown in Figure 7-119.
4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-121.
5. Now, select the Fibre Channel IOA that is assigned to your new LSU, as shown in
Figure 7-122. Click OK to proceed.
Next, you need to change the partition settings to do a manual IPL as follows:
1. Click Tasks → Properties in the drop-down menu, as shown in Figure 7-123.
Note: For HMC versions prior to V7, right-click the partition name and select Properties.
3. On the Settings tab, change the Keylock Position to Manual, and click OK (Figure 7-125).
Important: Now, you need to remove a disk unit from the system. If you are unsure how to
remove a disk unit, contact your local customer engineer. This service is a chargeable
service, and you need to explain that you are migrating to a Boot from SAN configuration.
Now, you need to physically remove the old internal LSU. You noted the location of the LSU
earlier (see Table 7-1 on page 279). After you have removed the old internal LSU, activate the
system again from the HMC as follows:
1. Select Tasks → Operations → Activate, as shown in Figure 7-126.
Note: For HMC versions prior to V7, right-click the partition, select Properties, and click
Activate.
2. Select the profile that you want to use for activating the partition, and click OK
(Figure 7-127).
The HMC then displays a status dialog box that closes when the task is complete and
when the partition is activated. Then, wait for the DST panel to display.
Opt Problem
Missing mirror protected units in the configuration
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Suspended
3. Select the new external load source mirror from the list that is provided (Figure 7-130).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Option Number Type Model Name Status
50-1100741 2107 A82 DD033 Non-configured
F3=Exit F12=Cancel
Figure 7-130 Select Replacement Unit
4. You might receive a problem report, as shown in Figure 7-131. If so, look at the errors, and
check that you have chosen the correct LUN and that the configuration of the LUN is
correct. If the LUN and LUN configuration are correct, then press F10 to continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
Lower level of mirrored protection
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended
Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-1100741 2107 A82 DD033 Resuming
F12=Cancel
Figure 7-132 Confirm Replace of Configured Unit
The progress panel is very similar to one that displayed previously (Figure 7-133).
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-133 Replace Disk Unit Data Status
6. After the replacement process completes, you return to the Work with Disk Units recovery
panel. Press F12, and select the following options:
a. 1. Work with disk configuration
b. 1. Display disk configuration
c. 1. Display disk configuration status
A panel similar to that shown in Figure 7-134 displays.
Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-1000741 2107 A82 DD013 Active
1 50-1100741 2107 A82 DD033 Resuming
3 68-0D0A0DA 6718 050 DD002 Active
3 68-0D0A51B 6718 050 DD005 Active
6 68-0D09EA2 6718 050 DD006 Active
6 68-0D0A6AE 6718 050 DD012 Active
9 68-0D0AB08 6718 050 DD009 Active
9 68-0D0BBB0 6718 050 DD003 Active
10 68-0D0A733 6718 050 DD010 Active
10 68-0D0BB13 6718 050 DD004 Active
11 68-0D0A722 6718 050 DD011 Active
11 68-0D09F9C 6718 050 DD008 Active
Your external LSU disks are now mirrored, and the mirror resumes completely during IPL.
Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.
Figure 7-135 Boot from SAN Migration for unprotected internal LSU for i5/OS V6R1 and later
Prior to i5/OS V6R1
Because multipathing for the load source is not supported prior to i5/OS V6R1, you need
to mirror the external load source across two #2847 IOP/IOA pairs to provide path redundancy.
For this purpose, you must create two LUNs in the DS system as unprotected and assign
them to two different DS volume groups, which allows you to attach other LUNs to two
#2847 IOP-based FC adapters in a multipath configuration (see Figure 7-136).
Figure 7-136 Boot from SAN Migration for unprotected internal LSU prior to i5/OS V6R1
In this section, we assume that you follow these recommendations for protection.
First, you need to configure your load source unit LUN or LUNs on the external storage
system using either DS CLI or the DS Storage Manager GUI. For further information, refer to
Chapter 8, “Using DS CLI with System i” on page 391 if you are using DS CLI or 9.2,
“Configuring DS Storage Manager logical storage” on page 474 if you are using the GUI.
If you are planning to use a mirrored external load source, add only one of the two
unprotected LUNs to your system ASP at this time.
Attention: At this time, you should have one LUN that is non-configured. It should be a
protected model A0x if you are using the i5/OS V6R1 multipath load source and an
unprotected model A8x if you are going to mirror the external load source LUN. Only if you
are going to use load source mirroring do you now add another unprotected LUN to your
system ASP. Make sure to note the serial number of the external load source LUN or LUNs
for further reference.
Important: You are required to remove the internal load source unit. If you are not
comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.
Now that the two new load source LUNs are attached to your system, you can use SST to
check that the disks are reporting correctly. When you display non-configured disk devices,
you should see one unprotected LUN (model A8x) for your new load source.
Serial Resource
Number Type Model Name Capacity Status
50-1000741 2107 A85 DD013 35165 Non-configured
50-1105741 2107 A05 DD027 35165 Non-configured
50-110A741 2107 A05 DD019 35165 Non-configured
50-1107741 2107 A05 DD017 35165 Non-configured
50-1101741 2107 A05 DD033 35165 Non-configured
50-1108741 2107 A05 DD032 35165 Non-configured
50-1005741 2107 A05 DD030 35165 Non-configured
50-1003741 2107 A05 DD031 35165 Non-configured
50-1104741 2107 A05 DD026 35165 Non-configured
50-1004741 2107 A05 DD025 35165 Non-configured
50-1102741 2107 A05 DD028 35165 Non-configured
50-1109741 2107 A05 DD024 35165 Non-configured
50-1002741 2107 A05 DD029 35165 Non-configured
50-100A741 2107 A05 DD015 35165 Non-configured
More...
Press Enter to continue.
Changing the tagged LSU
Now, go to the Hardware Management Console (HMC) and change the tagged LSU from the
RAID Controller Card for the internal LSU to the Fibre Channel IOA that is controlling your
new SAN LSU. Follow these steps on the HMC:
1. Select the partition name with which you are working. Choose Tasks → Configuration →
Manage Profiles, as shown in Figure 7-138.
2. In the Managed Profiles window, click Actions → Edit, as shown in Figure 7-139.
4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-141.
5. Now, select the IOA that is assigned to the new LSU, as shown in Figure 7-142. Click OK
to proceed.
Note: For HMC versions prior to V7, right-click the partition name and select Properties.
3. On the Settings panel, change the Keylock Position to Manual, as shown in Figure 7-145.
Attention: You must fully shut down the system and then reactivate it from the HMC so
that the new load source tag takes effect.
After the system has shut down, activate it again from the HMC. Follow these steps:
1. Select Tasks → Operations → Activate, as shown in Figure 7-146.
2. Select the profile that you want to use to activate the partition (Figure 7-147).
The HMC then displays a status dialog box that closes when the task is complete and when
the partition is activated. Then, wait for the DST panel to display.
1. From the IPL or Install the System panel, select 3. Use Dedicated Service Tools (DST)
(Figure 7-148).
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-148 Use Dedicated Service Tools (DST)
2. After signing on to DST, select 4. Work with disk units from the Use Dedicated Service
Tools (DST) panel (Figure 7-149).
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-149 Selecting - Work with disk units
3. At the Work with disk units panel, select 2. Work with disk unit recovery (Figure 7-150).
Work with Disk Units
Selection
1
F3=Exit F12=Cancel
Figure 7-150 Work with Disk Units
Important: Exercise care when using the options that we describe here, because using
the incorrect option can result in loss of data.
4. On the Work with Disk Unit Recovery panel, select 9. Copy disk unit data (Figure 7-151).
Selection
9
5. Select the existing internal load source (disk unit 1) as the copy from unit (Figure 7-152).
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0BC12 6718 050 DD007 Configured
2 1 68-0D09F9C 6718 050 DD008 Configured
3 1 68-0D0A773 6718 050 DD001 Configured
4 1 68-0D0A0DA 6718 050 DD002 Configured
5 1 68-0D0A51B 6718 050 DD005 Configured
6 1 68-0D09EA2 6718 050 DD006 Configured
7 1 68-0D0BBB0 6718 050 DD003 Configured
8 1 68-0D0BB13 6718 050 DD004 Configured
9 1 68-0D0AB08 6718 050 DD009 Configured
10 1 68-0D0A733 6718 050 DD010 Configured
11 1 68-0D0A722 6718 050 DD011 Configured
12 1 68-0D0A6AE 6718 050 DD012 Configured
6. Select the designated unprotected external load source LUN (model A8x), for which you
noted the serial number previously, as the copy to unit (Figure 7-153).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Configured
1=Select
Serial Resource
Option Number Type Model Name Status
1 50-1000741 2107 A85 DD013 Non-configured
50-1103741 2107 A05 DD014 Non-configured
50-1007741 2107 A05 DD023 Non-configured
50-1006741 2107 A05 DD020 Non-configured
50-110A741 2107 A05 DD019 Non-configured
50-1109741 2107 A05 DD024 Non-configured
50-1004741 2107 A05 DD025 Non-configured
50-1002741 2107 A05 DD029 Non-configured
50-1005741 2107 A05 DD030 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-153 Select Copy to Disk Unit
7. When you are certain that you have selected the correct from and to units, press Enter.
If the LUN was previously attached to a system, you might see the panel shown in
Figure 7-154. If so and if you are sure it is the correct LUN, press F10 to ignore the
problem report and continue.
Problem Report
Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.
OPT Problem
Unit possibly configured for Power PC AS
8. On the Confirm Copy Disk Unit Data panel, review your choice again to prevent
accidentally copying the wrong disk unit, and press Enter to confirm (Figure 7-155).
Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Configured
Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
F12=Cancel
Figure 7-155 Confirm Copy of Disk Unit
Phase Status
Wait for next display or press F16 for DST main menu
Figure 7-156 Copy Disk Unit Data Status
It can take 10 to 60 minutes for the process to complete. When it is complete, you return to
the Work with Disk Unit Recovery panel. You now have migrated the LSU, but to use it, you
need to IPL your partition as follows:
1. Press F12 twice to return to the main DST panel.
2. Select 7. Start a service tool.
3. Select 7. Operator panel functions.
4. Make sure that the IPL source is set to 1 or 2 and that the IPL mode is set to 1 as shown in
Figure 7-157. Then, press F10 to shut down the system.
F3=Exit F12=Cancel
Figure 7-158 Confirm System Reset
Important: You now need to remove a disk unit from the system. If you are unsure how to
perform this task, contact a local customer engineer. This service is a chargeable service,
and you need to tell them that you are migrating to a Boot from SAN configuration.
2. Select the profile that you want to use to activate the partition (Figure 7-160).
The HMC then displays a status dialog box that closes when the task is complete and the
partition is activated. Then, wait for the DST window to open.
Note: For releases prior to i5/OS V6R1, we recommend that you mirror the new external load
source for path protection. However, if you have other unprotected volumes in your system
ASP and do not want to start mirrored protection for ASP 1 yet, then your boot from SAN
migration procedure ends here, and you can perform an IPL of your system from the IPL or
Install the System panel.
Mirroring of the LSU
To mirror the LSU, follow these steps:
1. When the system has IPLed to DST, select 3. Use Dedicated Service Tools (DST) from
the IPL or Install the System panel (Figure 7-161).
1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code
Selection
3
Figure 7-161 Use Dedicated Service Tools (DST)
2. After signing on to DST, select 4. Work with disk units from the Use Dedicated Service
Tools (DST) panel (Figure 7-162).
1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support
Selection
4
F3=Exit F12=Cancel
Figure 7-162 Selecting - Work with disk units
3. Because the mirror is on a separate IOA, you must first enable remote load source
mirroring from the Work with Disk Configuration panel. Select 1. Work with disk
configuration (Figure 7-163).
Selection
1
F3=Exit F12=Cancel
Figure 7-163 Work with disk units
4. In the Work with Disk Configuration panel, select 4. Work with mirrored protection
(Figure 7-164).
Selection
4
F3=Exit F12=Cancel
Figure 7-164 Work with mirrored protection
5. Select 4. Enable remote load source mirroring (Figure 7-165).
Selection
4
F3=Exit F12=Cancel
Figure 7-165 Work with Mirrored Protection
6. The confirmation panel shown in Figure 7-166 displays, explaining that this action only
enables remote load source mirroring and does not actually start it. Press Enter.
Remote load source mirroring will allow you to place the two
units that make up a mirrored load source disk unit (unit 1) on
two different IOPs. This may allow for higher availability
if there is a failure on the MFIOP.
Note: When there is only one load source disk unit attached to
the multifunction IOP, the system will not be able to IPL if
that unit should fail.
F3=Exit F12=Cancel
Figure 7-166 Enable Remote Load Source Mirroring Confirmation panel
F3=Exit F12=Cancel
Figure 7-167 Select ASP to Start Mirrored Protection
When the system has IPLed, you are returned to the DST and mirroring is activated. The next
IPL fully starts the mirror by synchronizing the LUNs.
You can now either add the old LSU back into the ASP configuration or proceed to migrate the
remaining internal drives to the SAN.
7.3.6 Migrating to external LSU from iSeries 8xx or 5xx with 8 GB LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this scenario, we describe the steps to
migrate an older system that already has all of its storage except the LSU on external storage
but that you cannot upgrade to V5R3M5. The system might be running a release that is too
old to support boot from SAN, or it might have an 8 GB internal load source unit that cannot
accommodate V5R3M5 and later, which require at least a 17 GB LSU. In either case, you
have an existing system that is not capable of running V5R3M5 and later. Thus, you use this
process to migrate to a larger external load source housed in a SAN and to use the boot
from SAN functionality.
Important: If you have already loaded V5R3M5 or later onto your system, you cannot use
the process that we describe here.
Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.
This process actually allows you to deal with two issues at the same time—increasing the
LSU size and migrating to V5R3M5 or later.
You can perform this procedure with any release that can support the new SAN connection
and an upgrade to the target release, in this case V5R3M5 or later.
You basically follow the same procedure that we describe in 7.3, “Migration scenarios” on
page 280 with the following changes:
1. If you are not yet running i5/OS V5R3M0 and, for this boot from SAN migration, plan to
upgrade to a release that is higher than i5/OS V5R3M5 (such as V5R4 or V6R1), make
sure that you meet all the i5/OS upgrade requirements, such as preparing for the
installation of PTFs. Refer to the i5/OS Information Center for the corresponding i5/OS
target release before proceeding with the migration.
Note: You cannot IPL an i5 system from a previous iSeries 8xx or 5xx load source to
migrate to SAN unless it is running V5R3M5 or higher.
2. The opening steps for this boot from SAN migration remain the same, in that you have to
suspend any mirrored pair for your load source. However, at this time, the Fibre Disk
Controller IOA that is intended to connect to the unprotected LUN for the load source is
still driven by an older IOP #2843 or 2844.
Note: If you currently have your internal load source mirrored to SAN external storage,
you must suspend the SAN unit.
3. When the internal load source is a single unit, you can then use the copy disk unit data
procedure to migrate the load source to a new LUN in the SAN that is 17 GB or larger.
Attention: At this point the migration procedure deviates from the other methods that
we have described previously.
4. You now have your load source in the SAN on a 17 GB or larger LUN. The next step is to
shut down the system by using DST option 7. Operator panel functions from the Start a
Service Tool panel.
5. If you are going to connect a new System i POWER5 or POWER6 system, detach the
SAN storage from the old iSeries model 8xx and connect it to your new System i server.
6. Before you IPL your System i server, ensure that the load source LUN is now driven by a
#2847 IOP-based or IOP-less Fibre Channel IOA for boot from SAN. Also, ensure that the
HMC load source tagging is set correctly to the Fibre Channel IOA that is attached to your
new SAN LSU. If you are not going to i5/OS V6R1 load source multipathing and plan to
mirror your external load source instead, also ensure that a second unprotected LUN is
attached to a second boot from SAN Fibre Channel IOA and is added to the system ASP.
7. Make sure that your i5/OS target release (V5R3M5 or higher) SLIC CD I_Base_01 is
loaded into the CD/DVD drive, change the partition settings to a D-type manual mode IPL,
and then activate the partition.
8. When the system has IPLed to DST, perform an i5/OS software upgrade. Refer to the
information about installing, upgrading, or deleting i5/OS and related software in the
i5/OS Information Center for your corresponding target release.
For documentation about this method, see Backup and Recovery, SC41-5304, which is
available online at:
http://publib.boulder.ibm.com/iseries/
You achieve the migration by adding all your new LUNs into the system from system service
tools (SST). This might require that you add Fibre Disk Controllers to your system to
accommodate the additional LUNs, or you might have sufficient capacity on your existing
adapters to accommodate the migration.
Next, instruct the system not to write any new data to the LUNs that are in the old SAN by
using the command STRASPBAL TYPE(*ENDALC) UNIT(a b c d), where a, b, c, and d are
the disk unit numbers of the LUNs in the old SAN.
Now, tell the system to move all of the permanent data from the old SAN to the new SAN by
using the command STRASPBAL TYPE(*MOVDTA) TIMLMT(t), where t is the time in
minutes for which you allow the function to run, or *NOMAX.
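As an illustration, assuming (hypothetically) that the old SAN LUNs are disk units 10 through 13, the sequence of commands is as follows:
STRASPBAL TYPE(*ENDALC) UNIT(10 11 12 13)
STRASPBAL TYPE(*MOVDTA) TIMLMT(*NOMAX)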
At this stage, you have only temporary data remaining on the old SAN. So, at a convenient
time, you can perform a manual IPL to DST, where you can then remove the old SAN LUNs
from the ASPs.
If your load source is also in the SAN, you also migrate the load source by using the DST copy
disk unit data function. If it is an internal drive, you leave it unchanged.
When the old SAN LUNs are removed, turn off the system, and disconnect the old SAN.
You start by setting up PPRC between the old and new SANs to build and maintain an
identical disk set within the new SAN. When the image is fully established, you turn off the
system and deconfigure the host link in the old SAN, while at the same time configuring
the link in the new SAN. Finally, IPL the system and, provided that the links are correct, the
system is back and operational.
You can use DS CLI to perform storage management and Copy Services functions on the
DS6000 and DS8000. You can also use it to perform Copy Services functions on ESS 800
and ESS 750 with microcode levels 2.4.2.x and above. For example, with DS CLI, you can
format arrays and ranks on DS6000 and DS8000, create and delete LUNs, connect LUNs to
host systems, create Copy Services relationships, and so forth.
Before DS CLI became available, you managed ESS storage and Copy Services by using the
ESS Command Line Interface (CLI) from open servers. The primary difference between the
two interfaces is that with DS CLI, you can invoke a Copy Services relationship directly. With
ESS CLI, you must first create a Copy Services task with the ESS Copy Services GUI and
then invoke it from ESS CLI using the rsExecuteTask command.
Also, ESS CLI was not available for i5/OS. So, customers who wanted to invoke Copy
Services tasks for i5/OS automatically had to use ESS CLI on a Windows server and then
trigger Copy Services using remote commands from i5/OS.
You can use the DS CLI to invoke the following Copy Services functions:
FlashCopy
Metro Mirror, also known as synchronous Peer-to-Peer Remote Copy (PPRC)
Global Copy, also known as PPRC-XD
Global Mirror, also known as asynchronous PPRC
A storage unit is a physical unit that consists of a storage server and its integrated storage devices.
A storage image is a partition of a storage unit that provides emulation of a storage server and
devices.
When used with the DS8000, DS CLI connects to the HMC and can communicate with any
DS8000 systems that are connected to that HMC. DS CLI sends commands to the HMC,
where the ESSNI server directs them to the appropriate DS8000, based on the Machine Type
and Machine Serial (MTMS) supplied. The commands are executed on the DS8000 against
the microcode. Then, the response data is gathered and sent back to the ESSNI server on
the HMC and back to the DS CLI client.
Figure 8-1 shows the DS CLI command flow between the DS CLI client and the HMC or SMC.
Command modes
You can use the following command modes to invoke DS CLI commands:
Single-shot
Script
Interactive
Typically, you use the DS CLI single-shot command mode when you want to issue only an
occasional command. In this mode, you enter dscli followed by the command.
An example of the lssi command, which lists the storage image configuration, in single-shot
command mode in Windows is as follows:
dscli -hmc1 9.5.17.156 -user admin -password itso4all lssi
The DS CLI script mode is useful when you want to issue a sequence of DS CLI commands
repeatedly. To use DS CLI script mode, create a file that contains the DS CLI commands that
you want to run, and then start DS CLI with the -script parameter, specifying the name of
that file.
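As a hedged illustration, a script file (here a hypothetical file named c:\scripts\lsconfig.cli) might contain the following commands, with # marking comment lines:
# List the storage image, arrays, and ranks
lssi
lsarray -dev IBM.2107-7580741
lsrank -dev IBM.2107-7580741
You would then run the script in a manner similar to the single-shot example, for instance:
dscli -hmc1 9.5.17.156 -user admin -password itso4all -script c:\scripts\lsconfig.cli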
You use the DS CLI interactive command mode when you need to perform multiple
commands that you cannot incorporate into a script, for example, when you perform initial
storage setup configuration tasks. You invoke interactive mode by typing dscli without a
command; you can specify, or are prompted for, the IP address of the HMC or SMC, the
user ID, and the password.
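For example, after you start the console and respond to the prompts, you enter commands at the dscli> prompt without the dscli prefix:
dscli
dscli> lssi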
Command syntax
A DS CLI command consists of the following components:
The command name specifies the task that DS CLI is to perform.
One or more flags, each followed by flag parameters if required. Flags provide additional
information that directs DS CLI to perform a command task in a specific way.
The command parameter, which provides the basic operations that are necessary to
perform the command task.
The following example shows a DS CLI command that lists DS ranks R1, R2, and R3:
lsrank -dev IBM.2107-7580741 R1 R2 R3
In this example, lsrank is the command, -dev is a flag, IBM.2107-7580741 is a flag parameter,
and R1, R2, and R3 are command parameters.
In this section, we describe how to configure the DS storage using DS CLI from Windows. We
describe how to use DS CLI commands to set up a DS6000 or DS8000 for System i external
storage. The commands that we show were performed on a DS6000. To set up a DS8000 for
System i external storage, you can use the same commands, with the exception of array
creation: a DS6000 array site contains 4 DDMs, so you can make a 4 DDM array from one
array site or an 8 DDM array from two array sites, whereas a DS8000 array site contains
8 DDMs, so an 8 DDM array is created from a single array site.
For a description of DS CLI commands, refer to IBM System Storage DS6000 Command-Line
Interface User’s Guide, GC26-7922, or IBM System Storage DS8000 Command-Line
Interface User’s Guide, SC26-7916.
In our test environment, we used DS CLI on a Windows server and a LAN connection to the
SMC of a DS6000. You can issue the same commands from DS CLI on i5/OS, but we performed
them from Windows because customers who use an external load source have to set up the
DS system before the IPL of i5/OS. These customers must use DS CLI on another server.
Perform the following steps to enter the DS CLI interactive command mode:
1. In a Windows command prompt, type dscli to start the DS CLI console.
2. Enter the IP address of your primary management console.
3. Enter the IP address of the secondary management console if you established a second
HMC or SMC configuration to serve as a backup for the first. Otherwise, press Enter.
4. Enter a user ID and a password. The initial user ID for DS CLI or GUI is admin, and the
initial password is admin.
The dscli> prompt in your Windows command prompt window indicates that you are now in
the DS CLI console. You can verify your connection to the HMC or SMC ESSNI server by
entering the lssi command. If the connection is successful, this command lists the available
storage images.
When you enter the DS CLI for the first time after you create a user, it prompts you to change
the password. Use the chuser command to change the password, as shown in Example 8-2.
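A hedged sketch of such a password change, assuming a user named itso01 and a hypothetical new password:
dscli> chuser -pw n3wpassw0rd itso01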
We recommend that you create a password file for the user ID to contain an encrypted user
ID password. After you specify the name of the password file in the DS CLI profile, you do not
need to insert a password every time you use the DS CLI command framework. We describe
the DS CLI profile in “Setting up a DS CLI profile” on page 396.
To create a password file, use the managepwfile command. As shown in Example 8-3, a
password file is created in the current working directory, and a DS CLI message indicates
the directory in which the password file is created.
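The exact command in Example 8-3 is not reproduced here; as an assumption-labeled sketch, the invocation takes a form similar to the following, with the user name and password as placeholders:
dscli> managepwfile -action add -name itso01 -pw itso4all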
After you install DS CLI, you can find a default profile with the name dscli.profile in the
C:\Program Files\IBM\dscli\profile directory. Insert the corresponding values for the following
parameters:
hmc1 The IP address of the primary management console.
hmc2 The IP address of the secondary management console, if available
pwfile If you created a password file, insert the path and name of the file. You have
to specify \\ to qualify the directory path separator.
username Insert the user ID for DS CLI. If you set up the password file, use the same
user ID that you specified in the password file.
password Insert the password for the specified user ID. This value is not required if you
use the password file.
devid Insert the storage image ID of the DS storage system, which consists of the
manufacturer, type, and serial number. For DS8000, use IBM.2107-xxxxx,
for DS6000, use IBM.1750-xxxxx, where xxxxx denotes the serial number of
DS. You can find the serial number of a DS8000 on the operator panel on
the DS base frame. When you insert the storage image ID in the profile,
replace the last digit 0 with 1 or 2, depending on which storage image with
which you want to work when using this profile.
The DS6000 serial number is on the label in the lower right corner of the
front side of the base enclosure. You can also find it using the DS6000 GUI
or DS8000 GUI. Go to My Work, expand Real Time Manager, expand
Manage hardware, and click Storage Images. The serial number of the
storage image is displayed in the frame, Storage images.
remotedevid If you use Copy Services, specify the target storage image ID.
Example 8-4 shows part of a DS CLI profile with customized parameter values.
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1: 9.5.17.171
#hmc2:
# Password filename
# The password file can be generated using mkuser command.
#
pwfile: c:\\Program files\\ibm\\dscli\\itso01pw
username: itso01
#
# Default target Storage Image ID
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storeage_image_ID" command options,
respectively.
devid: IBM.1750-13abvda
#remotedevid:IBM.1750-13abvda
#
# locale
# Default locale is based on user environment.
#locale: en
After you obtain the signature, you can download the LIC activation keys from:
http://www.ibm.com/storage/dsfa
Creating arrays
List the available array sites using the lsarraysite command. Example 8-6 shows the DS
CLI response, which shows four unassigned array sites. In the displayed array sites, you can
also observe through which device adapter (DA) pair they are attached.
You create RAID-5 arrays by using the mkarray command. You might decide to have RAID-5
or RAID-10 arrays. Also, you can have 8 DDMs in an array or 4 DDMs in an array. For
performance reasons, we created arrays with 8 DDMs for our examples.
Example 8-7 shows the mkarray command, which creates an 8 DDM RAID-5 array from two array sites. It also shows the DS CLI response.
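An illustrative form of the command, assuming the storage image ID and array sites that are used elsewhere in this chapter (adjust the -dev and -arsite values to your configuration):
DS CLI> mkarray -dev IBM.1750-13ABVDA -raidtype 5 -arsite S1,S2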
After the arrays are created, you can use the command lsarray to list them, as shown in
Example 8-8.
Creating ranks
From each array, create a fixed block rank using the mkrank command, as shown in
Example 8-9.
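An illustrative form of the command for the first array; the -dev value is the storage image ID from our profile:
DS CLI> mkrank -dev IBM.1750-13ABVDA -array A0 -stgtype fb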
After you create the ranks, use the lsrank command to list them, as shown in Example 8-10.
Note: We recommend that you create an extent pool for each rank (see “Extent pools” on
page 42).
Use the mkextpool command to create an extent pool. After you create the extent pool, assign
a rank to it with the chrank command, as shown in Example 8-13 on page 400. Alternatively,
you can also create extent pools before you create ranks and assign each rank to an extent
pool with the extpool parameter in the mkrank command.
Create as many extent pools as there are ranks so that you can assign each rank to one
extent pool, as recommended. In the mkextpool command, determine which processor (cluster) this particular extent pool will use by specifying cluster number 0 or 1 for the rankgrp parameter.
Example 8-11 shows how to create an extent pool that is assigned to processor 0. We
recommend that you assign extent pools evenly between the two processors. Observe that a
created extent pool has an ID that is assigned to it automatically, which is different from the
name of the extent pool that you specify in the mkextpool command. In our example, the
name of the extent pool is extpool-01, but the ID is P0. When you use commands later that
refer to this extent pool, such as change extent pool or show extent pool, you refer to it by its
ID, not by the name.
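An illustrative form of the command, assuming the extent pool name and rank group used in this example:
DS CLI> mkextpool -dev IBM.1750-13ABVDA -rankgrp 0 -stgtype fb extpool-01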
If you create a large number of extent pools, you can consider using a DS CLI script.
After you create the extent pools, assign each rank to an extent pool using the chrank
command, as shown in Example 8-13.
After you assign ranks to extent pools, use the lsrank command to observe to which extent
pool and cluster a particular rank is assigned, as shown in Example 8-14.
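Illustrative forms of these commands, assuming that rank R0 is assigned to extent pool P0:
DS CLI> chrank -dev IBM.1750-13ABVDA -extpool P0 R0
DS CLI> lsrank -dev IBM.1750-13ABVDA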
Creating logical volumes
When creating logical volumes, take into account the following rules:
Volumes that belong to an even rankgroup (cluster) must be in an even LSS. Volumes
that belong to an odd rankgroup (cluster) must be in an odd LSS. You determine to which
cluster a volume will belong by specifying the extent pool in which a volume will be
created. Volumes that are created in an extent pool with rankgroup 0 belong to cluster 0,
and volumes in an extent pool with rankgroup 1 belong to cluster 1. So, the volumes in an
extent pool with rankgroup 0 must have an even LSS number, and volumes in an extent
pool with rankgroup 1 must have an odd LSS number.
LSS number FF is reserved for internal use. Do not use it for volume IDs.
Avoid LSS number 00 for the following reasons:
– If the DS is attached by ESCON connection, then you must use LSS 00 for ESCON
connected volumes.
– If the i5/OS external Load Source is on the volume with ID 0000 and the serial number of the DS happens to be 0000000, i5/OS will not recognize the Load Source. The disk serial number in i5/OS is composed of the volume ID and the serial number of the DS. Thus, if the serial number of a disk is 0000000, the disk cannot be a Load Source.
If you use DS Copy Services, plan which volumes are the source and target volumes and
make decisions for LSSs of volumes accordingly.
Note: We generally recommend that you use one LSS for volumes from the same rank to keep track of your DS volume layout more easily.
i5/OS volumes have fixed sizes, such as 8.59 GB, 17.54 GB, 35.16 GB, and so on. You can
define each volume as protected or unprotected. (For more information about sizes and
protection of volumes, refer to Chapter 6, “Implementing external storage with i5/OS” on
page 207.) You determine the size and protection of a volume by specifying the volume model
in the mkfbvol command parameter -os400. For example, model A85 designates an
unprotected volume of size 35.16 GB.
You can use the mkfbvol command to create multiple volumes at the same time by specifying
a range of volume IDs. The volume ID is a four-digit hexadecimal number of the form XYZZ, where XY is the logical subsystem (LSS) number and ZZ is the volume number within that logical subsystem.
When creating a volume or a range of volumes and specifying #h as part of the volume name,
it is replaced by the volume ID of each volume. Observe the difference between volume name
and volume ID. When you use commands later that refer to this volume, such as change
volume or show volume, you refer to it by its ID, not by its name.
Example 8-15 shows the creation of one protected 35.16 GB volume and a range of protected
35.16 GB volumes.
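As an illustration, commands similar to the following create one protected volume and a range of protected volumes in extent pool P0. We assume model A05 for a protected 35.16 GB volume; verify the model number for your volume size and protection in Chapter 6:
DS CLI> mkfbvol -dev IBM.1750-13ABVDA -extpool P0 -os400 A05 -name i5_prot_#h 1000
DS CLI> mkfbvol -dev IBM.1750-13ABVDA -extpool P0 -os400 A05 -name i5_prot_#h 1001-1007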
Use the showfbvol command to show specifications and status of a particular volume, as
shown in Example 8-17.
8.2.3 Configuring volume groups, I/O ports, and host connections
To assign the volumes to System i Fibre Channel adapters, you have to create volume groups
as a container entity for your logical volumes and assign a volume group to a host connection
that you create for each of your System i Fibre Channel IOAs.
When the partition is IPLed from the external Load Source, we add more volumes to this
volume group so that i5/OS recognizes them and can use them. For more information about
implementing external Load Source, refer to 6.4, “Setting up an external load source unit” on
page 211.
When creating a volume group for i5/OS, specify the -type os400mask parameter. Note that
i5/OS uses a blocksize of 520 bytes per sector. By specifying -type os400mask, you denote
that the volumes are formatted with 520 bytes per sector. Observe that a created volume
group has an ID that is assigned to it automatically, which is different from the name of the volume
group that you specify in the mkvolgrp command. When you use commands later that refer to
this volume group, such as change volume group or show volume group, you refer to it by its
ID, not by its name.
In Example 8-18, the name of the volume group is blue, but the ID is V14. The volume group is created with the logical volume 1000 assigned to it.
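An illustrative form of the command that creates this volume group (volume 1000 is the external load source volume in our example):
DS CLI> mkvolgrp -type os400mask -volume 1000 blue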
If you will not use external Load Source, you might want to create a volume group that
contains all volumes that are assigned to an i5 FC adapter using the following command:
DS CLI> mkvolgrp -type os400mask -volume 1000-1002 volgrp10
Important: Remember that the #2847 I/O processor supports the FC-SW (SCSI-FCP)
protocol only, while direct-attached IOP-less Fibre Channel IOAs require the FC-AL
protocol (see 4.2.2, “Planning considerations for i5/OS multipath Fibre Channel
attachment” on page 81).
For each System i FC adapter, create a host connection and specify which volume group is
assigned to it. Then, you can assign volumes from one volume group to connect to a System i
FC adapter. You create a host connection with the mkhostconnect command. When creating a
host connection, specify the following parameters:
Specify -wwname as the world-wide port name (WWPN) of your System i FC adapter.
Specify -hosttype iSeries. With this parameter, you implicitly determine the correct
blocksize of 520 bytes per sector and address discovery method (Report LUNs) which is
used by i5/OS.
Specify -volgrp as the volume group that is assigned to this host connection.
Example 8-20 shows the creation of a host connection for a System i Fibre Channel I/O adapter (IOA) that is assigned volume group V14. Observe that the name chosen for the host connection is adapter0, but its ID is 0002.
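An illustrative form of the command, using the WWPN, volume group, and host connection name from this example:
DS CLI> mkhostconnect -dev IBM.1750-13ABVDA -wwname 10000000C942BA4D -hosttype iSeries -volgrp V14 adapter0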
The following objects are created when you install DS CLI in i5/OS:
Library QDSCLI
The library QDSCLI contains the code for DS CLI commands.
Files in IFS directory /ibm/dscli
The IFS directory /ibm/dscli contains sample files, instructions, and necessary Java .jar
files to run the DS CLI commands.
When you invoke a DS CLI command, it initiates a Java process and a Java Virtual Machine
(JVM™). It uses JDK™ APIs to communicate to HMC or SMC through Java Secure Socket
Layer (SSL).
If you use DS CLI on i5/OS in command-line mode, the DS CLI commands are issued in the JVM, and the responses are displayed in the JVM as well.
With single-shot commands, JVM is initiated, and JVM takes input from the command,
executes it, and displays a response. After the response displays, JVM ends.
When using DS CLI scripts, JVM takes input from the script file, executes it in JVM, creates a
response file with a name that is specified in the DS CLI command, and returns a response to
this file.
The initialization of the JVM, which loads the Java .jar files from the directory /ibm/dscli, is shown in Figure 8-16 on page 416.
8.3.1 Prerequisites
Before you install DS CLI to i5/OS, check that the following prerequisites are installed on
i5/OS:
The latest Java group PTF
i5/OS 5722-SS1 option 34 - Digital certificate manager
Licensed product 5722-AC3 option *base - Crypto Access Provider 128 bit (before V5R4
only)
Licensed product 5722-DG1 option *base - IBM HTTP Server for iSeries
Licensed product 5722-JV1 option 6 - Java Developer Kit 1.4
The latest cumulative PTF (CUM) package
1. From a Windows command prompt, change to the cd_image directory of the DS CLI installation image, and start the DS CLI installation for i5/OS by running the setupwin32 command with the -os400 option:
C:\Residency\DSCLI-Windows-Install-Imager>cd cd_image
C:\Residency\DSCLI-Windows-Install-Imager\cd_image>setupwin32 -os400
2. Enter the IP address or the DNS name of the i5/OS server, the i5/OS user ID, and the password for the i5/OS user ID, as shown in Figure 8-3.
3. The wizard initializes, as shown in Figure 8-4. Click Next to continue.
4. The Welcome panel displays, as shown in Figure 8-5. Click Next to continue.
6. Specify the IFS directory where Java is installed. Observe that the directory QOpenSys is inserted by default (Figure 8-7). If Java is installed in the specified directory,
click Next. Otherwise click Browse, select the IFS directory where Java is installed, and
click Next.
The wizard continues to install DS CLI to the specified IFS directory in i5/OS, as shown in
Figure 8-9.
8. A message that indicates a successful installation displays, as shown in Figure 8-10. Click
Next to continue.
8.3.3 Invoking DS CLI from i5/OS
Before you invoke DS CLI from i5/OS, add the library QDSCLI to the i5/OS library list using
the addlible qdscli command, as shown in Figure 8-12.
1. User tasks
2. Office tasks
3. General system tasks
4. Files, libraries, and folders
5. Programming
6. Communications
7. Define or change the system
8. Problem handling
9. Display a menu
10. Information Assistant options
11. iSeries Access tasks
Selection or command
===> addlible qdscli
1. User tasks
2. Office tasks
3. General system tasks
4. Files, libraries, and folders
5. Programming
6. Communications
7. Define or change the system
8. Problem handling
9. Display a menu
10. Information Assistant options
11. iSeries Access tasks
Selection or command
===> dscli
DS CLI displays the panel where you specify whether you are using a DS CLI script for DS
CLI commands and which DS CLI profile you are using. In our example, we did not use a
script, so we specify *none.
DS CLI on i5/OS comes with a default profile in the file dscli.profile in the IFS directory /ibm/dscli/profile. If you use the default profile, leave the value *DEFAULT in the Profile
field, but if you use another file as a profile, specify the name and path of this file in the
Profile field.
In our example, we use the default profile. However, at this point, we have not set up the
profile with values for DS. So, we leave the *DEFAULT value in the Profile field
(Figure 8-14). The profile does not have any impact on our use of DS CLI yet.
After you specify the values on this panel, press Enter.
Script . . . . . . . . . . . . . *none
Profile . . . . . . . . . . . . *DEFAULT
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-14 Script and profile with DS CLI on i5/OS
2. In the next panel, specify the following values in the fields, as shown in Figure 8-15 on
page 416.
HMC1 Specify the IP address of the primary management console of DS.
HMC2 Insert the IP address of the secondary management console of DS. If the
secondary management console is not used, you can leave the field
specified as *PROFILE.
For a description of the primary and secondary management consoles,
refer to 8.4, “Using DS CLI on i5/OS” on page 422.
User Insert the user ID for accessing DS. In our example, we use the initial user
ID admin.
Password Insert the initial password of user admin for accessing DS.
Install Path Insert the IFS directory in which the DS CLI stream files are installed.
DS CLI CMD Insert the DS CLI command. Alternatively, if you use the DS CLI command
frame, insert *int to start the command frame. In our example, we use the
DS CLI command frame, so we insert *int.
After you insert the values, press Enter.
Profile . . . . . . . . . . . . *DEFAULT
HMC1 . . . . . . . . . . . . . . 9.5.17.171
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . . admin
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . . *int
The first time that you invoke DS CLI on i5/OS, you see informational messages, as shown in Figure 8-16. This informational panel is not shown on subsequent DS CLI invocations.
===>
You are presented the panel with the DS CLI command frame, where you can enter DS CLI
commands to the command line of the interface (Java shell), as shown in Figure 8-17.
Date/Time: July 8, 2005 2:55:20 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA
dscli>
===>
Use the DS CLI lssi command to list available storage images, as shown in Figure 8-18.
Date/Time: July 8, 2005 2:55:20 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA
dscli>
> lssi
Date/Time: July 8, 2005 2:56:14 PM CDT IBM DSCLI Version: 5.0.4.32
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.1750-13ABVDA IBM.1750-13ABVDA 511 500507630EFE0154 Online Enabled
dscli>
===>
We recommend that you create a password file that contains an encrypted user ID and
password. After you create a password file and insert its name in the DS CLI profile, you do
not need to insert the password every time that you invoke DS CLI from i5/OS. Use the DS
CLI managepwfile command to create a password file, as shown in Figure 8-19. If you specify
an unqualified name of the password file, it is created in the home IFS directory.
Date/Time: July 18, 2005 3:27:14 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA
dscli>
> managepwfile -action add -pwfile itso02pw -name itso02 -pw itso4all
Date/Time: July 18, 2005 3:28:12 PM CDT IBM DSCLI Version: 5.0.4.32
CMUC00205I managepwfile: Password file /itso02pw successfully created.
CMUC00206I managepwfile: Record 9.5.17.171/itso02 successfully added to password file /itso02pw.
dscli>
===>
We recommend that you create a DS CLI Profile that contains values such as the IP address
of the DS management console, the Storage image ID, the name of the password file, and so
forth. After you have stored these values in the DS CLI profile, you do not need to specify them every time that you invoke DS CLI or enter a command within the DS CLI framework.
After installing DS CLI to i5/OS, you see a sample stream file dscli.profile in the IFS directory /ibm/dscli/profile. You might want to change this file so that it contains values for your installation, or you can copy it to another file and change the copy.
The default name of the DS CLI profile is dscli.profile, and its default location is the IFS directory /ibm/dscli/profile. If your DS CLI profile is the default file, leave the Profile parameter as
*DEFAULT when invoking DS CLI from i5/OS, as shown in Figure 8-20.
Profile . . . . . . . . . . . . *DEFAULT
HMC1 . . . . . . . . . . . . . . *PROFILE
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . . itso02
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . . *int
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-20 Using default DS CLI profile
After the values are stored in the profile, you can specify *PROFILE for them when invoking DS CLI from i5/OS, as shown in Figure 8-21.
HMC1 . . . . . . . . . . . . . . *PROFILE
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . . itso02
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . . *int
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
************Beginning of data**************
#
# DS CLI Profile
#
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1: 9.5.17.171
#
pwfile: /itso02pw
#username:
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storeage_image_ID" command opt
devid: IBM.1750-13ABVDA
#remotedevid: IBM.2107-AZ12341
End the DS CLI command framework on i5/OS by typing exit, as shown in Figure 8-23.
Date/Time: July 18, 2005 4:18:32 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA
dscli>
===> exit
A message displays to inform you that the Java program has completed, as shown in
Figure 8-24.
Date/Time: July 18, 2005 4:18:32 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA
dscli>
> exit
Java program completed
===>
Script . . . . . . . . . . . . .
Profile . . . . . . . . . . . . *DEFAULT
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-25 Prompt when invoking DS CLI
5. Insert *none for the Script parameter, as shown in Figure 8-26, and press Enter.
Script . . . . . . . . . . . . . *none
Profile . . . . . . . . . . . . *DEFAULT
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Profile . . . . . . . . . . . . *DEFAULT
HMC1 . . . . . . . . . . . . . . *PROFILE
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . .
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . .
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-27 Invoking DS CLI single-shot command
JVM is initiated and the response to the DS CLI command displays immediately, as shown
in Figure 8-29.
Date/Time: July 20, 2005 3:54:07 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.1750-13ABVDA
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
=================================================================
A0 Assigned Normal 5 (6+P) S1,S2 R0 0 146.0
A1 Assigned Normal 5 (6+P) S3,S4 R1 0 146.0
Java program completed
===>
To use DS CLI scripts from i5/OS, perform the following steps:
1. Create a stream file in IFS to contain the DS CLI script using the edtf command, as
shown in Figure 8-30.
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-30 Create script file
CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
************Beginning of data**************
************End of Data********************
Selection . . . . . . . . . . . .
5. Stream file EOL option . . . . *CRLF *CR, *LF, *CRLF, *LFCR, *USRDFN
User defined. . . . . . . . . Hexadecimal value
F3=Exit F12=Cancel
Figure 8-32 EDTF options
3. On the EDTF Options panel, enter 3 and enter 00819 as the Job CCSID, as shown in
Figure 8-33. Press Enter to perform the change. Then, press F3 to exit.
Selection . . . . . . . . . . . . 3
5. Stream file EOL option . . . . *CRLF *CR, *LF, *CRLF, *LFCR, *USRDFN
User defined. . . . . . . . . Hexadecimal value
F3=Exit F12=Cancel
With this change, the CCSID of the script file is 819, which is required for DS CLI scripts on i5/OS.
4. Insert the DS CLI commands in the script file, as shown in Figure 8-34. Then, press F3 to save the file and exit.
CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
************Beginning of data**************
lsarray -dev ibm.1750-13abvda
************End of Data********************
5. To invoke the DS CLI, enter the DSCLI command, and press F4 to open the command prompt panel. Insert the qualified name of the script file in the Script parameter, as shown in Figure 8-35, and press Enter.
Profile . . . . . . . . . . . . *DEFAULT
User . . . . . . . . . . . . . . admin
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
Output . . . . . . . . . . . . . /out1
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-35 Using DSCLI on i5/OS
In our example, the production partition has all of its disk units, including the load source, on LUNs from a DS8000. The LUNs of the production partition are in LSS 0x14 and belong to volume group V4. You can show them by using the showvolgrp command from DS CLI on i5/OS, as shown in Figure 8-36.
dscli>
> showvolgrp v4
Date/Time: July 19, 2005 5:34:01 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.2107-7580741
Name Volume Group 5
ID V4
Type OS400 Mask
Vols 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 140A
dscli>
===>
The LUNs that belong to the backup partition are in LSS 0x16 of the same DS8000 and are
contained in volume group V6. You can look at them using the showvolgrp command from DS
CLI on i5/OS, as shown in Figure 8-37.
dscli>
> showvolgrp v6
Date/Time: July 19, 2005 5:35:04 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.2107-7580741
Name Volgrp7
ID V6
Type OS400 Mask
Vols 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 160A
dscli>
===>
After the production partition is shut down, perform FlashCopy by using a DS CLI script. For instructions about how to use DS CLI scripts, refer to 8.4, “Using DS CLI on i5/OS” on page 422.
Figure 8-38 shows the script that you use to invoke FlashCopy. Observe that in this example,
we use the FlashCopy nocopy option. Therefore, we specify the -nocp parameter.
Browse : /flash.script
Record : 1 of 1 by 14 Column : 1 59 by 79
Control :
....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
************Beginning of data**************
mkflash -dev IBM.2107-7580741 -nocp 1400-140a:1600-160a
************End of Data********************
Profile . . . . . . . . . . . . *DEFAULT
User . . . . . . . . . . . . . . admin
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
Output . . . . . . . . . . . . . /out3
Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
After the script completes successfully, observe the output in the IFS file that you specified when invoking the script. In our example, the file is /out3. Figure 8-40 shows the contents of the file.
Browse : /out3
Record : 1 of 12 by 14 Column : 1 88 by 79
Control :
....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
************Beginning of data**************
Date/Time: July 20, 2005 3:27:19 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.2107
CMUC00137I mkflash: FlashCopy pair 1400:1600 successfully created.
CMUC00137I mkflash: FlashCopy pair 1401:1601 successfully created.
CMUC00137I mkflash: FlashCopy pair 1402:1602 successfully created.
CMUC00137I mkflash: FlashCopy pair 1403:1603 successfully created.
CMUC00137I mkflash: FlashCopy pair 1404:1604 successfully created.
CMUC00137I mkflash: FlashCopy pair 1405:1605 successfully created.
CMUC00137I mkflash: FlashCopy pair 1406:1606 successfully created.
CMUC00137I mkflash: FlashCopy pair 1407:1607 successfully created.
CMUC00137I mkflash: FlashCopy pair 1408:1608 successfully created.
CMUC00137I mkflash: FlashCopy pair 1409:1609 successfully created.
CMUC00137I mkflash: FlashCopy pair 140A:160A successfully created.
************End of Data********************
When the mkflash command completes and the FlashCopy pairs are created successfully,
you can re-IPL or resume the Production partition and begin working. At the same time, the
Backup partition can IPL to bring up the clone of the production partition.
The LUNs belonging to the production partition are on the DS6000. Two unprotected LUNs
are the external Load Source and its mirror on the DS6000. The other LUNs are protected
and connected in multipath.
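As an illustration, an unprotected load source LUN such as the one shown in Figure 8-41 can be created with a command similar to the following; model A85 denotes an unprotected 35.16 GB volume, and the extent pool and name match this example:
DS CLI> mkfbvol -dev IBM.1750-13ABVDA -extpool P0 -os400 A85 -name i5_unprot_#h 1000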
Use the DS CLI showfbvol command to observe a particular LUN. Figure 8-41 shows the
showfbvol command. The LUN shown is the external Load Source. Observe the model A85
and datatype FB 520U, which denote an unprotected LUN.
dscli>
> showfbvol 1000
Date/Time: July 18, 2005 7:26:50 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.1750-13ABVDA
Name i5_unprot_1000
ID 1000
accstate Online
datastate Normal
configstate Normal
deviceMTM 1750-A85
datatype FB 520U
addrgrp 1
extpool P0
exts 33
captype iSeries
cap (2^30B) 32.8
cap (10^9B) 35.2
cap (blocks) 68681728
volgrp V30,V14
dscli>
===>
===>
dscli>
> showvolgrp v15
Date/Time: July 18, 2005 7:09:50 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.1750-13ABVDA
Name orange
ID V15
Type OS400 Mask
Vols 1001 1002 1100 1101 1102
dscli>
> showvolgrp v14
Date/Time: July 18, 2005 7:13:49 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.1750-13ABVDA
===>
To see the adapters to which volume groups are assigned, use the lshostconnect command,
as shown in Figure 8-44.
> lshostconnect
Date/Time: July 18, 2005 7:21:45 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.1750-13ABVDA
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
===========================================================================================
adapter1 0000 10000000C942ED3E iSeries IBM iSeries - OS/400 0 V15 I0100
adapter2 0001 10000000C928D12A iSeries IBM iSeries - OS/400 0 V13 I0001
adapter0 0002 10000000C942BA4D iSeries IBM iSeries - OS/400 0 V14 I0001
dscli>
===>
===>
To establish PPRC paths, use the mkpprcpath command, as shown in Figure 8-46, to establish two PPRC paths: one from source LSS 0x10 to target LSS 0x12 and one from source LSS 0x11 to target LSS 0x13.
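An illustrative sketch of the path creation, with one command per LSS pair; the remote WWNN and the source and target I/O port IDs shown in angle brackets are placeholders that you replace with the values for your configuration:
DS CLI> mkpprcpath -dev IBM.1750-13ABVDA -remotedev IBM.2107-7580741 -remotewwnn <remote_wwnn> -srclss 10 -tgtlss 12 <source_port>:<target_port>
DS CLI> mkpprcpath -dev IBM.1750-13ABVDA -remotedev IBM.2107-7580741 -remotewwnn <remote_wwnn> -srclss 11 -tgtlss 13 <source_port>:<target_port>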
Then, create the Remote Mirror and Copy relationships for the volume pairs by using the mkpprc command.
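An illustrative sketch of the command, assuming Metro Mirror (-type mmir) and the volume pairs used in this example; the DS CLI messages that follow show the pairs being created successfully:
DS CLI> mkpprc -dev IBM.1750-13ABVDA -remotedev IBM.2107-7580741 -type mmir 1000-1002:1200-1202 1100-1102:1300-1302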
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:1200
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:1201
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1002:1202
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:1300
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:1301
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1102:1302
successfully created.
dscli>
===>
In case a failure occurs on the production partition, terminate Remote Mirror using the rmpprc
command, as shown in Figure 8-48.
Date/Time: July 20, 2005 1:47:40 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA
dscli>
> rmpprc -remotedev ibm.2107-7580741 1000-1002:1200-1202 1100-1102:1300-1302
Date/Time: July 20, 2005 1:48:44 PM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.1750-13ABVDA
CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy
volume pair relationship 1000-1002:1200-1202:? [y/n]:
> y
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1000:1200 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1001:1201 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1002:1202 relationship
successfully withdrawn.
CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy
volume pair relationship 1100-1102:1300-1302:? [y/n]:
> y
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1100:1300 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1101:1301 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1102:1302 relationship
successfully withdrawn.
dscli>
Figure 8-48 Terminating Remote Mirror using rmpprc
===>
On the target DS, create a host connection using the mkhostconnect command and associate
it with the volume group containing the target volumes, as shown in Figure 8-50.
===>
Now, perform an IPL of the recovery partition that is connected to Remote Mirror targets.
9.1.1 Installing DS6000 Storage Manager
This section describes the DS6000 GUI installation, which includes the management and
network servers.
Prerequisites
This section describes the prerequisites for the hardware, browsers, and operating system.
Your system must meet these prerequisites for installation of the DS6000 Storage Manager
on a Windows PC serving as the Storage Management Console (SMC). The DS6000 Storage
Manager requires that the system that is used as the management console be available continuously for customer operation, configuration, and problem management.
Table 9-1 lists the minimum hardware resources that are required on the PC that serves as
the management console.
Disk 1 GB
Memory 1 GB RAM
The management console runs through a browser. Table 9-2 lists the supported browsers.
A number of Windows operating system versions support the management console, as listed
in Table 9-3. For our examples, we used Windows XP Pro SP2.
In the supported browsers, you need to make certain configuration changes to display the
progress information correctly:
Internet Explorer 6.x
Netscape 6.2
Netscape 7.x
Note: To properly display the installation progress bars, animations need to be turned on in
the browser as follows:
Internet Explorer
a. Select Tools → Internet Options.
b. Select the Advanced tab and scroll down to the Multimedia section.
c. Ensure that Play animation in web pages is enabled.
Netscape
a. Select Edit → Preferences.
b. Double-click Privacy and Security.
c. Select Images, and in the Animated image should loop section, select As many times as the image specifies.
Appropriate browser security settings are also needed to open the DS Storage Manager in a
browser.
Installation procedure
The following steps describe the installation procedure on a Windows XP Pro SP2 PC:
1. Locate the Management GUI installation CD that came with the DS6000 product. Visit the
following IBM System Storage support Web site and check whether there are any updates
that are available or required for the installation of the Management GUI:
http://www.ibm.com/servers/storage/support/disk/ds6800/downloading.html
2. Review the installation guide that comes with the product.
3. Log on to the Windows environment that will be used for the installation of the DS Storage
Manager with Windows Administrator authority.
4. Insert the IBM System Storage DS Storage Manager CD into the CD-ROM drive. The
LaunchPad used for installation starts automatically within 15 to 30 seconds if autorun
mode is enabled for the CD-ROM drive under Windows. You can also start the LaunchPad
manually using Windows Explorer and browsing to the root of the CD. Then double-click
the LaunchPad.bat file.
5. When the DS Storage Manager panel opens, select Installation wizard to start the DS
Storage Manager installation.
6. The Welcome panel instructs you to view the readme file and installation guide and warns
that you should not be running other applications during the install. Select Next to
continue the installation (see Figure 9-2).
7. Read the license agreement, and select I accept the terms in the license agreement to
accept the license agreement. Otherwise, click Cancel. Select Next to continue the
installation (see Figure 9-3).
9. Select Next to accept the default DS Storage Manager Server host name and TCP/IP
ports (optionally choose different ports) and proceed to the SSL configuration window (see
Figure 9-5).
10.Select Generate the self-signed certificates during installation, and enter a password and its confirmation for both the key file and the trust file in the corresponding fields. Select Next to proceed to the certificate window. Record these passwords as part of your recovery and management documentation (see Figure 9-6).
Figure 9-7 DS6000 Storage Manager Installer Generate Self-Signed Certificate window
12.Select Install to confirm your settings and to start the installation (optionally, select Back to review or change any settings), as shown in Figure 9-8.
13.The installation wizard installs and updates the following components without intervention:
– DS Storage Manager Server (see Figure 9-9)
– DS Network Interface Server (see Figure 9-10)
– DS Storage Manager product components (see Figure 9-11)
14.The installation wizard shows the DS Storage Manager Installer Finish window after the product installation completes (see Figure 9-12).
15.You need to reboot Windows after the installation completes successfully. Select Finish to
reboot the Windows system now (optionally select to reboot the system at a later time), as
shown in Figure 9-13.
To start the DS Storage Manager GUI locally on the SMC, select Start → Programs → IBM
System Storage DS6000 Storage Manager → Open DS Storage Manager.
Alternatively, you can start the DS6000 Storage Manager GUI from a network client by opening a Web browser and entering a Web address that includes SMC_IP_address, where SMC_IP_address is the external IP address of the DS6000 Storage Management Console.
Note: The DS Storage Manager Web addresses shown here are case sensitive.
Enter the default user name admin and password admin in the DS6000 Storage Manager
Sign On panel shown in Figure 9-15. You need to change the password immediately and
make a record of the new admin password. If you lose the admin password, you will have to
delete the current Storage Management GUI and reload it to recover.
Assigning a storage unit to a storage complex
To start the process of assigning a DS6000 storage unit (which corresponds to a physical
DS6000 system) to a storage complex (Storage Manager administrative entity), perform the
following steps (see Figure 9-16):
1. Log in to the DS6000 Storage Manager (see “Starting DS6000 Storage Manager” on
page 450).
2. Select Real-time manager → Manage hardware → Storage complexes and choose
Assign Storage Unit from the Select Action drop-down menu.
Note: At any time during the installation, you can select Help through the ? button in the
information bar.
The style of the GUI is typically that you must select the Select Action pull-down menu
from the action bar and then highlight the task that you want to perform. The pull-down
menu closes, leaving the option that you selected displayed. Nothing happens unless
you complete the action by clicking Go.
The options available in the pull-down menu change as you progress through the
installation process.
4. Enter the following Network settings information (see Figure 9-18):
– A Gateway address
– A Subnet mask
– An optional Primary/Alternate domain name server
– An optional Maximum transmission unit information
Select Next to continue.
Figure 9-18 DS6000 Storage Manager Assign Storage Unit Network settings panel
Figure 9-19 DS6000 Storage Manager Assign Storage Unit Verification panel
3. Enter the date, time, and time zone information, and click OK to submit the changes (see
Figure 9-21).
Figure 9-21 DS6000 Storage Manager Configure Storage Unit Date and time zone panel
Figure 9-22 DS6000 Storage Manager Customer Contact Shipping information panel
d. Complete the following contact information for a system administrator, which is used by
IBM service representatives to contact you with remote assistance (Figure 9-23):
• Name
• Telephone information
• E-mail address
Select OK to continue.
Important: If an SMTP server is not specified as we describe here, the DS6000 cannot call home with e-mail alert messages. Call home enables rapid IBM support for remote assistance to resolve issues.
b. Ensure that Enable Call Home is selected, and enter the SMTP server name, its IP
address, and Server port (Figure 9-25). Then, click OK. Optionally, you can select
Apply and Test Call Home to send a connection test and to generate a test problem
log entry (error code BE810081). To complete this test, the SMC PC must be
connected to the Internet.
Figure 9-25 DS6000 Storage Manager Configure Notifications Define Call Home panel
Figure 9-26 DS6000 Storage Manager Configure Notifications Define SNMP connection panel
Registering for IBM MySupport
IBM MySupport provides proactive e-mail notification of DS6000 microcode updates and of how to obtain them (an example is shown in Figure 9-27). We highly recommend that you register for MySupport to stay current on new DS6000 microcode fixes and enhancements.
At this point, we recommend that you verify that the Storage Management GUI that you are installing is current. We also recommend that you test the MySupport function to ensure that you are familiar with it before going live with the DS6000.
2. Enter your existing IBM ID and Password, and select Submit to sign on.
Note: If you have not registered with IBM MySupport, select register now and
complete the required information in the My IBM Registration forms. Then, select
Submit to sign on with your new IBM ID and password. You need the IBM Customer
number that is associated with the DS6000 that you are installing.
3. Select the Edit profile tab and make the following selections under the Products section:
– Storage
– Computer Storage
– Disk Storage Systems
– System Storage DS6000 series
– System Storage DS6800
Select Add products to continue (see Figure 9-29).
Figure 9-29 IBM MySupport Web Site Edit profile Add products panel
Figure 9-30 IBM MySupport Web Site Add products confirmation panel
6. Select Sign out in the Welcome panel to end your MySupport DS6000 registration
(Figure 9-32).
Figure 9-32 IBM MySupport Web site Subscribe to e-mail update confirmation panel
For optional separate installation of the DS8000 Storage Manager simulated (offline)
component on a customer client machine, refer to IBM System Storage DS8000 User’s
Guide, SC26-7915, which is available at:
http://www-1.ibm.com/support/docview.wss?rs=1113&context=HW2B2&dc=DA400&q1=ssg1*&uid=ssg1S7001163&loc=en_US&cs=utf-8&lang=en
Compared to the DS6000, the DS8000 requires no further post-installation configuration tasks by the IBM service representative, except applying the storage unit activation keys, which we describe in 9.1.3, “Applying storage unit activation keys” on page 469.
Note: The DS Storage Manager Web addresses that we show here are case sensitive.
For DS8000 systems with SSPC installed (see 2.3.1, “Hardware overview” on page 13), you
access the DS Storage Manager GUI remotely through a Web browser that points to the
SSPC (Figure 9-34):
1. Access the SSPC through a Web browser at the following address:
http://SSPC_IP_address:9550/ITSRM/app/en_US/index.html
2. Click TPC GUI (Java Web Start) to launch the TPC GUI.
Note: The TPC GUI requires an IBM 1.4.2 JRE™. Select one of the IBM 1.4.2 JRE
links on the Web page to download and install it based on your OS platform.
3. The TPC GUI window displays (Figure 9-35). Enter the user name, password, and the
SSPC server. Click OK to continue.
6. The DS8000 Storage Manager Welcome panel displays as shown in Figure 9-38.
9.1.3 Applying storage unit activation keys
This section describes the application of the licensed internal code feature activation keys for
both the IBM System Storage DS6000 and DS8000 products.
After you have completed the DS6000 post-installation tasks (see “DS6000 Storage Manager post-installation configuration tasks” on page 449), or after the physical installation of the IBM System Storage DS8000 storage unit has been completed by the IBM service representative, you can begin configuring logical storage. For both the DS6000 and the DS8000, you first apply the licensed internal code feature activation keys as follows:
1. Use a Web browser to connect to the IBM Disk Storage Feature Activation (DSFA) Web site:
http://www.ibm.com/storage/dsfa
2. Depending on your machine type, select either IBM System Storage DS8000 series or
IBM System Storage DS6000 series (see Figure 9-39).
c. Note the Machine signature information from DS Storage Manager Storage Unit
Properties panel (see Figure 9-41).
4. Next, go back to the browser that is connected to the DSFA Web site (the DS6000/8000
series machine) to display the required information. Enter the Model, Serial number, and
Machine signature information, and select Submit to continue retrieving the DS licensed
internal code feature activation keys (Figure 9-42). You can note the keys manually or you
can export them to a diskette to be applied when using the DS Storage Manager.
Note: In case the DSFA Web Site application cannot locate the 2244 license
authorization record because it is not attached to the DS serial number record, assign it
to the DS record in the DSFA application using the 2244 serial number that is provided
with the License Function Authorization document.
b. Enter the DS8000 license internal code feature activation keys that you retrieved from
the DSFA Web site and select OK to continue (Figure 9-44).
Figure 9-44 DS8000 Storage Manager Apply activation codes panel
Figure 9-46 DS6000 Storage Manager Configure Storage Unit Activation codes panel
Note: To see the capacity and storage type that is associated with the successful
application of the activation codes, repeat step 4.
The steps that are involved in the logical storage configuration process include:
Configuring arrays and ranks
Creating extent pools
Creating logical volumes
Configuring I/O ports
Creating volume groups
Creating host systems
Note: This section includes only example configurations. The configuration steps apply to
both DS8000 and DS6000. For the figures in this section, we include screen captures from
the DS8000 Storage Manager Release 3 GUI. However, where applicable, we include
comments on any differences from the DS6000 and releases prior to R3 of DS8000
Storage Manager. We chose the order of configuration steps to start with the storage
configuration itself before defining the host systems and host ports. Optionally, you can
define the host systems and host ports before starting the logical storage configuration.
The Offline (Simulated) DS Storage Manager GUI function is designed such that you can
do the logical configuration on a separate PC and then implement it later through
Customer Technical Support or IBM or IBM Business Partner services.
Note: Normally, you do not need to control the array assignment to the available array
sites. We recommend that you use the automatic option instead of the manual array
creation option.
If you choose to use the manual array creation option on a DS6000, the two array sites for an 8 DDM RAID-5 array need to be chosen deliberately so that only one of the two array sites contains a spare DDM. If the array sites are not chosen this way, the rank creation fails.
4. For the available DDM types, select the quantity of arrays to be created and the RAID
type, which is either RAID 5 or RAID 10 from the corresponding menus. (For DS6000,
select Create an 8 disk array.) Select Next to continue (Figure 9-49).
On the DS6000, we strongly recommend that you create only 8 disk arrays for IBM System i storage.
Note: If the “Add these arrays to ranks” option is not selected as the default, you will need to create the ranks separately and associate them with an array. The current 1750/2107 product design has a one-to-one relationship between ranks and arrays. To save time, we recommend using this option to add these arrays to ranks.
6. The Verification panel shows the resulting array configuration. Verify that this information
is correct, and select Finish to start the array creation process. Optionally, you can step
back and change the array creation settings if needed (Figure 9-51).
Figure 9-52 DS8000 Storage Manager Create Array Long Running Task Properties panel
8. Check the State and Status information in the Long Running Task Summary window to see if the task completed successfully (Figure 9-53).
Note: You can also access the summary for long running configuration tasks by
selecting Real-time manager → Monitor system → Long running task summary.
Note: The total rank capacity is shown in binary GB, that is, 1 GB = 1024 x 1024 x 1024 bytes.
9.2.2 Creating extent pools
Before you create logical volumes from rank extents, you need to assign the ranks to extent
pools. This section describes how to create extent pools on DS8000 Release 3 or later and
on DS6000 and releases prior to R3.
Figure 9-57 DS8000 Storage Manager Create New Extent Pools panel (upper)
5. Scroll down to see the lower portion of the panel (Figure 9-58):
a. For number of extent pools, select Single extent pool.
b. For the first extent pool, enter the pool name prefix of Extent Pool 0, for the second
extent pool, use Extent Pool 1, and so forth.
c. Enter 100 for the Storage Threshold percentage and 0 for the Storage Reserved
percentage.
Note: You can use the option to reserve storage from an extent pool to reserve storage
for a later project so that it is currently not made available for configuration. This
reserved storage does not become available until you explicitly change the amount of
reserved storage by modifying the extent pool properties.
d. Select Server 0 for the server assignment of Extent Pool 0, 2, 4, and so forth. Select
Server 1 for the server assignment of Extent Pool 1, 3, 5, and so forth. Then, select
Add Another Pool to continue creating other extent pools.
Important: For DS8000 server resource affinity conventions and a performance
balanced configuration, we strongly recommend that you ensure that the nickname for
the extent pool is chosen such that even numbered extent pools are associated with
DS8000 Server 0 and odd numbered extent pools are associated with DS8000
Server 1. This association implies that there are equal numbers of even and odd extent pools so that each of the two DS8000 servers is assigned exactly half of the available extent pools.
Figure 9-58 DS8000 Storage Manager Create New Extent Pools panel (lower)
Note: Repeat the previous steps to create Extent Pool 1, 2, and 3. When creating
Extent Pool 3, select OK to create the Extent Pool 3 and then close the Create New
Extent Pools panel.
6. Click OK when the task shows Finished and Success status (Figure 9-59).
DS6000 and releases prior to R3 of DS8000
Create the extent pools as follows:
1. Select Real-time manager → Configure storage → Extent pools.
2. For an LPAR DS8000 model, select the storage unit from the Select storage unit menu.
3. Then, choose Create New Extent Pools from the Select action menu, as shown in
Figure 9-61.
For DS6000 and releases prior to R3 of DS8000, select Create (Figure 9-62).
Figure 9-62 Previous release of DS8000 Storage Manager Extent pools panel
Figure 9-63 Storage Manager Create Extent Pool Definition method panel
5. In the Define properties panel (Figure 9-64):
a. Enter Extent Pool 0 as the nickname for the first extent pool, Extent Pool 1 for the
second extent pool, and so forth for each extent pool that you are creating.
b. Select FB for the Storage Type.
c. Select the RAID type according to the RAID protection chosen for the arrays that you
created previously.
d. Select 0 for the server for Extent Pool 0, 2, 4, and so forth. Select 1 for the server for
Extent Pool 1, 3, 5, and so forth.
e. Select Next to continue.
Figure 9-64 Storage Manager Create Extent Pool Define properties panel
Note: For DS6000 and DS8000 server resource affinity conventions, we recommend
that you ensure that even numbered extent pools are associated with even numbered
ranks and that odd numbered extent pools are associated with odd numbered ranks.
For consistency reasons, the extent pool number needs to match the rank number.
Figure 9-65 Storage Manager Create Extent Pool Select ranks panel
7. In the Reserve storage panel, enter 0 for the percentage of storage to reserve in the extent
pool, and click Next to continue (Figure 9-66).
Figure 9-66 Storage Manager Create Extent Pool Reserve storage panel
8. Review the attributes for the extent pool, and select Finish to create the extent pool
(Figure 9-67). Optionally, you can step back and change the extent pool creation settings if
desired.
9. A long running task window displays the extent pool creation progress and shows the Finished state and Success status after the extent pool is created successfully. Select
Close and View Summary. Then, repeat steps 4 through 8 for each extent pool that you
are creating for each of the remaining configured ranks.
Note: Beginning with V6R1 i5/OS, multipathing is supported on an external LSU. If you are
using i5/OS prior to V6R1, refer to 6.10, “Protecting the external load source unit” on
page 240 for information about how to set up external storage configuration to use
mirroring for your external load source to provide path protection.
Figure 9-69 Attachment of two IBM System i partitions with external load source to DS8000
For workload separation and availability reasons, in this example we assign the volumes for each partition to different extent pools, which, following our recommendation to create one extent pool for each rank, places them on different array sites.
Figure 9-71 DS8000 Storage Manager Create Volume Select extent pool panel
4. Create the protected LUNs for the LSU and all other LUNs for the two System i partitions.
In the Define volume characteristics panel, select iSeries - Protected as the Volume type
and Rotate volumes as the Extent allocation method (only available in DS8000 Release 3
GUI), as shown in Figure 9-72. Then, select Next to continue.
Note: The Extent allocation method menu is used to specify the extent allocation
method, which can be either the recommended default option of Rotate volumes or the
Rotate extents option (storage pool striping). We do not recommend the Rotate extents method for use with System i (see “Logical volumes” on page 43).
Figure 9-72 DS8000 Storage Manager Create Volume Define volume characteristics panel
Note: The System i volume protection types of either unprotected or protected refer
only to the volume model type in the SCSI vital product data that is reported by the
DS6000 or DS8000 to the System i server. Both System i volume protection types have
the same DS6000 and DS8000 internal RAID protection. The two different types allow
System i customers to choose the type of protection even on external storage. The
unprotected type is required for i5/OS mirroring, for example if you want to mirror the
load source or want to have LUNs mirrored between two external storage servers using
i5/OS mirroring. These external storage servers can be at different sites for disaster
recovery.
The DS6000 logical volume creation attribute, “Enable write cache with mirroring,” is
always enabled for System i volume types to ensure DASD fast write data protection.
Note: We explicitly select an LSS for the volume to be created, because in our
example we prefer to use a different LSS for each extent pool and array site. We
recommend that array sites, extent pools, and ranks have a unique one-to-one
relationship. This type of relationship helps with the association of a DS8000 volume
serial number on the System i server partition with the physical storage location for
the volume on the DS8000.
c. Click Calculate max quantity to use the complete remaining space in the extent pool
to create volumes of this same type. Verify that the Quantity field updates with the
resulting maximum number of volumes to create. Then, select Next to continue.
Figure 9-73 DS8000 Storage Manager Create Volume Define volume properties panel
6. In the Create volume nicknames panel, clear Generate a sequence of nicknames based
on the following, and select Next to continue (Figure 9-74). Optionally, you can select the
option to generate nicknames and specify a nickname for the volume by completing the
Prefix and Suffix entry fields.
Note: If no volume prefix and suffix is specified, the nickname is created from the
volume ID. In this case, we still recommend that you change the nickname in the
volume properties for the volumes that become the external load source LUNs, so that
you can identify them easily.
Figure 9-74 DS8000 Storage Manager Create volume nicknames panel
7. Select Finish to start the volume creation. Optionally, you can step back and change the volume creation settings if needed (Figure 9-75).
Figure 9-76 DS8000 Storage Manager Create Volume long running task panel
9. Repeat these steps until you have created all 46 protected LUNs for the System i server partitions. Select Real-time manager → Configure storage → Open systems → Volumes - Open systems, and choose Select secondary filter → All. Then, select Refresh (Figure 9-77).
Attention: When configuring DS8000 and DS6000 storage I/O ports, adhere to the following restrictions:
The #2847 IOP requires the Fibre Channel switched-fabric (SCSI-FCP) protocol, regardless of whether the IOAs are direct or switch attached to the DS8000 and DS6000. The Fibre Channel arbitrated loop (FC-AL) protocol is not supported by the IBM System i #2847 load source IOP.
IOP-less Fibre Channel cards #5749 or #5774 direct-attached to the DS8000 support FC-AL only.
In our example, we set up the System i server partitions to boot from the DS8000 and
DS6000 using a #2847 I/O processor. Thus, we configure the I/O ports to Fibre Channel
Switched-Fabric protocol by choosing Change to FcSf from the Select Action menu
(Figure 9-80).
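The same topology change can also be made from the DS CLI with the setioport command. In this sketch, the port ID and the topology keyword are illustrative assumptions; check the lsioport output and the DS CLI reference for the values that apply to your configuration:
dscli> lsioport -dev IBM.2107-75ABCDE
dscli> setioport -dev IBM.2107-75ABCDE -topology scsi-fcp I0000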
Figure 9-81 DS8000 Storage Manager Configure I/O Ports confirmation message panel
Important for DS6000 or releases prior to R3 of DS8000: You must create the host connections first so that you can select the host connections to which to attach the newly created volume group during the volume group creation process that we describe in this section.
DS8000 Release 3 or higher
To create volume groups, follow these steps:
1. Select Real-time manager → Configure storage → Open systems → Volume groups.
2. Select the storage image from the Select storage image menu.
3. Choose Create from the Select Action menu (Figure 9-83).
Hint: We assigned different LSSs for the extent pools. To assign volumes into a volume
group, choose the LSS and scroll up or down the Volumes window to select the
volumes.
Figure 9-84 DS8000 Storage Manager Create New Volume Group panel
DS6000 and releases prior to R3 of DS8000
To create volume groups, follow these steps:
1. Select Real-time manager → Configure storage → Open systems → Volume groups.
2. Select the storage unit from the Select storage unit menu.
3. Choose Create from the Select Action menu (Figure 9-83).
4. In the Define volume group properties panel, accept the default volume group nickname or
enter a different nickname if desired. Then, select iSeries in the “Accessed by host types”
list (Figure 9-85). Select Next to continue.
6. In the Select volumes for group panel, choose the volumes to include in the volume group
from the “Select volumes” list. Then, select Next to continue (Figure 9-87).
Hint: Use the Next/Previous Page button with the arrow icon to page through the
volume list pages.
Figure 9-87 Storage Manager Create Volume Group Select volumes for group panel
7. Select Finish to start the volume group creation process. Optionally, you can step back
and change the volume group creation settings if needed.
8. Repeat these steps until you have created all the volume groups. (In our example, we created only one volume group for each partition.)
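A volume group for System i hosts can also be created from the DS CLI. The sketch below assumes an OS/400 mask type group with illustrative volume IDs and name; verify the parameters against the DS CLI reference:
dscli> mkvolgrp -dev IBM.2107-75ABCDE -type os400mask -volume 1000-1016 Mickey_VG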
Note for DS6000 and releases prior to R3 of DS8000: This ends the example of logical storage configuration using the DS Storage Manager GUI. Now, you are ready to attach the IBM System i partitions to the DS6000 or DS8000 external storage system.
Important: Because only one volume group can be associated with a host system, we
define a host system entity for each System i server Fibre Channel I/O adapter (IOA)
instead of one host system entity with several connection ports for each System i server
partition (see Figure 9-69 on page 493).
3. In the Define Host Ports panel:
a. Enter a nickname that is associated with the System i server partition and its Fibre
Channel IOA that is connected to the DS8000. In our example, we choose the
nickname Mickey_0, where Mickey is the System i server host name and _0 denotes the first IOA (Figure 9-89).
b. Select Fibre Channel Point to Point/Switched (FcSf) as the Port Type.
c. Choose IBM iSeries and AS/400 Servers - OS/400(iSeries) as the Host Type.
d. Enter the 16-digit world-wide-port-name (WWPN) in the Host WWPN field, and click
Add. The WWPN that you enter displays in the table.
e. Click Next to continue.
4. In the Map Host Ports to a Volume Group panel, select Map to an existing volume
group, and select the volume group that is associated with the host connections. In our
example, the associated volume group is VolumeGroup 1 for host connection Mickey_0
(Figure 9-90).
Figure 9-90 DS8000 Storage Manager Map Host Ports to a Volume Group
Note: We recommend that you do not restrict the I/O port usage for the host system by
selecting specific storage I/O ports to which the host system can log in. For separation
of System i host systems from other host systems in the SAN, for DS8000 and DS6000
maintenance, and for reconfiguration, we recommend that you use the more flexible
zoning solutions offered by the Fibre Channel switch vendors.
6. In the Verification panel, select Finish to start the host system creation process
(Figure 9-92). Optionally, you can step back and change the host connections creation
settings if desired.
Figure 9-92 DS8000 Storage Manager Create Host System Verification panel
Note for DS8000 Release 3 or higher: This completes our example of logical storage
configuration using the DS Storage Manager GUI. Now, you are ready to attach the IBM
System i partitions to the DS8000 external storage system.
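As with the previous steps, the host connection and its volume group mapping can also be defined from the DS CLI instead of the GUI. The WWPN, volume group ID, and nickname in this minimal sketch are illustrative assumptions:
dscli> mkhostconnect -dev IBM.2107-75ABCDE -wwname 10000000C9123456 -hosttype iSeries -volgrp V1 Mickey_0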
DS6000 and releases prior to R3 of DS8000
To create a host system, follow these steps:
1. Select Real-time manager → Manage hardware → Host systems. Then, select the storage complex.
2. Select Create from the Select Action menu.
3. In the General host information panel (Figure 9-94):
a. Choose IBM iSeries and AS/400 Servers - OS/400(iSeries) as the Type.
b. Enter a nickname to be associated with the System i server partition and its Fibre
Channel IOA that is connected to the DS8000.
c. Select Next to continue.
In our example, we choose the nickname Mickey_0, where Mickey is the System i server host name and _0 denotes the first IOA.
Figure 9-94 DS8000 Storage Manager Create Host System General host information panel
5. Enter the world-wide-port-name (WWPN) in the text field under the Port 1 entry. Then,
select OK to continue (Figure 9-96).
6. In the Select storage images panel, choose the storage facility image (for DS6000, choose
the storage unit) to which the host system connects from the Available storage images
menu (for DS6000, select Available storage units from the menu). Select Add, which
moves the selection to the Selected storage images list (for DS6000, to the Selected
storage units list), as shown in Figure 9-97. Select Next to continue.
7. In the Specify storage image parameters panel, ensure that any valid storage image I/O
port is selected for the “This host attachment can login” option (for DS6000, select any
valid storage unit I/O port).
Note: To support DS6000 concurrent code load and to achieve the highest DS6000 I/O path failure protection, the IBM System i attachment to the DS6000 requires a redundant Fibre Channel path to each of the two DS6000 server processor cards.
Select Apply assignment to update the storage image allocation in the list, which
changes from 0 to 1. Then, click OK to continue (Figure 9-98).
If so, make sure that the attachment port type that you selected corresponds to the DS8000 and DS6000 I/O port topology configuration: select Cancel and Back to review the Attachment port type. If you need to correct it, select the host port to be removed from the Defined host ports list, and select Remove before beginning again in the Define Host Ports panel. If the problem persists, go back to 9.2.4, “Configuring I/O ports” on page 499.
8. Select Finish on the Verification panel to start the host system creation process.
Optionally, you can step back and change the host connections creation settings if
desired.
9. Repeat these steps until you have created a host system for each of the two Fibre
Channel IOAs in each of the two System i server partitions. Afterwards the host system
configuration looks as shown in Figure 9-99.
10.Return to 9.2.5, “Creating volume groups” on page 502 to create volume groups and to
complete the logical storage configuration for DS6000 and releases prior to R3 of
DS8000.
If any items are missing or damaged, contact your IBM customer support before proceeding.
Each I/O adapter requires its own dedicated #2844 PCI I/O processor (IOP) or for external
boot the #2847 PCI IOP. For more information, see IBM TotalStorage DS6000 Host Systems
Attachment Guide, GC26-7680.
External boot support through #2847 IOP requires i5/OS V5R3M5 or later.
For further details, refer to Chapter 6, “Implementing external storage with i5/OS” on
page 207.
10.3 Cabling the DS6000
In this section, we describe how to route the cables for a basic DS6000 installation with up to
one storage enclosure (see Figure 10-1).
10.3.1 Connecting IBM System i hosts to the DS6000 processor cards
To connect the System i server I/O adapter (IOA) to the DS6000 511 processor cards, follow
these steps:
1. Install a small form-factor pluggable (SFP) in a host port on the 511 processor card.
Note: For direct System i server attachment, a Fibre Channel shortwave SFP (feature code #1310) is required. For attachment of the DS6000 to a storage area network (SAN), you can use either a longwave SFP (feature code #1315) or a shortwave SFP, depending on the SFP type of the SAN node to be connected to the DS6000.
2. Connect all the Fibre Channel cables from the System i server Fibre Channel IOAs or switch to the 511 processor host ports (numbers 10, 11, 12, and 13 in Figure 10-1).
Important: For availability reasons and for DS6000 concurrent code load support, we strongly recommend that you use i5/OS native multipathing, connecting each path to a host port on a different 511 processor card, to allow continuous I/O even if one 511 processor card is unavailable due to DS6000 microcode updates or maintenance.
To connect a second storage enclosure to a new DS6000 server enclosure, use two Fibre Channel cables to make two connections from the OUT ports on the lower processor card of the first storage enclosure to the IN ports on the lower processor card of the second storage enclosure.
For attaching a third or further storage enclosure, refer to IBM System Storage DS6000
Introduction, Installation, and Recovery Guide, GC26-7678.
Figure 10-2 DS6000 Server enclosure cabling with up to two storage enclosures
6. Turn on any attached IBM System i host system that is not already running.
Note: After the initial process to turn on the DS6000 is complete and all storage enclosure hardware has been detected, the power to the attached storage enclosures can be turned off or on automatically in conjunction with turning off or on the DS6000 server enclosure.
7. Check for the correct DS6000 status after the initial power on by verifying that the LED status indicators appear as shown in Figure 10-3.
If the LEDs do not show the correct state, refer to Chapter 13, “Troubleshooting i5/OS with external storage” on page 569 to help you diagnose the problem.
10.4 Setting the DS6000 IP addresses
In this section, we discuss the setup of IP addressing for the DS6000 enclosure.
3. Create a new connection by entering DS6000 in the Name field, and selecting OK to
continue (see Figure 10-5).
4. Choose the COM serial port to which you connected the cable for the DS6000 from the
“Connect using” menu, and select OK to continue (see Figure 10-6).
Tip: If you are unsure which COMx resource to select, access the Windows Device
Manager by right-clicking the My Computer icon on the Windows desktop. Then, select
Properties, and on the Hardware tab, select Device Manager. The COM resource that
is associated with the PC serial port to which you connected the DS6000 cable is
listed in the Device Manager Ports (COM & LPT) section.
5. Enter the port settings as listed in Table 10-1, and select OK to establish the connection
(Figure 10-7).
6. Enter the default user ID (guest) and password (guest) to access the DS6000 processor
card (Figure 10-8).
7. At the initial setting of the DS6000 processor card IP addresses, change the default guest
password to one of your choice as follows:
a. Choose 2. Change “guest” password from the ncnetconf Main Menu options
b. Enter the current password (guest).
c. Enter the new password. A confirmation message states that the password was
changed successfully.
8. Select 1. Configure network parameters from the ncnetconf Main Menu options
(Figure 10-9).
9. Set the IP addresses for both DS6000 processor cards as follows:
a. Choose 1. Use static IP address from the Network configuration menu options
(Figure 10-10).
b. Change the IP address for the current DS6000 processor by choosing 1. IP address
for this node from the Static IP addresses configuration menu options.
c. When the IP Address? prompt displays, enter the desired IP address, and press Enter.
d. Change the IP address for the other DS6000 processor by choosing 2. IP address for
other node from the Static IP addresses configuration menu options.
e. When the IP Address? prompt displays, enter the desired IP address, and press Enter
f. Select 7. Back to Network Configuration to return to the Network configuration menu
(Figure 10-11).
Figure 10-11 Windows HyperTerminal Static IP addresses Configuration window
g. Select 3. Advanced configuration options to set the domain name server and the
gateway settings for both DS6000 processor cards (Figure 10-12).
h. Select 7. Back to Network Configuration.
i. Select 7. Back to Main Menu to return to the ncnetconf Main Menu.
j. Select 8. Apply network changes and exit from the options in the main menu to save
your changes and to exit the application.
2. From the Network Connections window, select Advanced → Advanced Settings
(Figure 10-14).
3. In the Adapters and Bindings tab Connections view, ensure that the first network adapter
listed is the one that is on the same subnet as the DS6000 server enclosure processor
cards. If this is not the case, select the network adapter from the list that is on the same
network as the DS6000 server, and select the up arrow button to move this adapter to the
top of the list (Figure 10-15). Then, click OK.
11
Figure 11-1 shows the use of FlashCopy, where the LSU is on an internal drive and is
mirrored by i5/OS to a remote load source pair in the external disk subsystem, and then
FlashCopy is used to create an instant point-in-time copy for offline backup.
When the FlashCopy is complete, a second system or LPAR is attached as a backup system
to the external storage subsystem and the FlashCopy image. This second system has a
single internal LSU. Because it was not possible to IPL from this remote copy of the LSU, it
was necessary to perform a D-mode IPL and use the Dedicated Service Tools (DST) function
to perform a remote load source recovery from the LSU copy in the external storage
subsystem to the backup system’s internal drive. This process initializes the internal LSU
before doing the copy and takes a considerable amount of time.
Figure 11-2 shows a similar arrangement when using Metro or Global Mirror to replicate data
to another, remote external disk subsystem.
Although using a mirrored LSU can be acceptable under DR circumstances where invoking the DR copy is infrequent, it is not practical to mirror the LSU on a daily basis if you use FlashCopy to assist with offline backups. Thus, we do not recommend using this approach, especially for FlashCopy.
Now, with boot from SAN through either #2847 IOP-based or IOP-less Fibre Channel, it is possible to have the entire disk space, including the LSU, contained in the external storage subsystem. This means that it is much easier to replicate the entire storage space.
System i external storage-based replication solutions using Copy Services with switchable independent ASPs (IASPs), managed either by the existing System i Copy Services Toolkit or the newly introduced i5/OS V6R1 High Availability Solutions Manager (HASM) licensed product, separate the application and its data into IASPs and replicate them with either FlashCopy for backups (perhaps for populating a data warehouse or development environment) or Metro Mirror or Global Mirror for DR purposes. When using such switchable IASP replication solutions, the production system and the backup system have their own LSU and *SYSBAS, and implementing boot from SAN typically does not help to reduce the recovery time.
For simple environments, or for those applications that are not supported in an IASP, having
everything contained in *SYSBAS (system ASP plus user ASPs 2-32) is the easiest
environment to implement Copy Services. However, you must bear in mind that *SYSBAS
includes all work areas and swap space. Replicating these requires greater bandwidth and
can introduce other operational complexities that must be managed (for example, the target
will be an exact replica of the source system and will have exactly the same network attributes
as the source system). If used for disaster recovery purposes, these operational complexities
will not cause a problem. However, if you want to test the DR environment or if you want to
use FlashCopy for daily backups, you need to change the network attributes so that both
systems can be in the network at the same time.
By using the capability that we describe in this section, you can create a complete copy of
your entire system in moments. You can then use this copy in any way you want. For example,
you can use the copy to minimize windows during backup, to protect yourself from a failure
during an upgrade, or to provide a backup or test system. You can accomplish each of these
options by copying the entire DASD space with minimal impact to production operations.
Although both have similar characteristics, we differentiate these two environments in the
sections that follow.
the system shutdown and perform the upgrade on the original copy. If problems or delays
occur, you can continue with the upgrade until just prior to the time the service is needed
to be available for the users. If the maintenance is not completed, you can abort the
maintenance and re-attach the target copy (or perhaps do a “fast reverse/restore” to the
original source LUNs) and do a normal IPL, rather than having to do a full system restore.
Historically, a major part of the time to upgrade your operating system or application
software has been related to the need to obtain a full and reliable backup. This would be
achieved by your usual backups, and in many cases a full system save as well. This takes
an appreciable amount of time. You would then perform your upgrade and take another
backup, as you would not want to have to do the upgrade again. At this point you have
spent many hours just taking the backups.
By cloning, you are able to eliminate the vast majority of this lost time. Before starting your
upgrade, you would shut down your system, then you would make a clone of the whole
system with a full copy of the data. You would then restart your system and start on the
upgrade. While you are doing this, you could concurrently be attaching the clone to a
second partition where you could then be taking a full backup for archive purposes. In the
event that your upgrade fails or you get into a situation that necessitates that you revert to
the original system, you would simply shut down the system, change the production
system to use the clone image and resume production in a matter of minutes. You could
also attach the original image to another partition and examine the cause of the failure in
preparation for the next attempt.
Creating a test environment:
Imagine that you have a new release of the operating system or a new version of a major
application that you have heavily customized and for which you need to test the upgrades
prior to putting them into production.
Traditionally, you have three options:
– Load a backup from your production system onto a test partition or system, and create
an environment that is identical to the production system. Then, go through the
upgrade procedure. If you need to start again, you go back to the beginning of the
process and load the backup. This method usually takes several hours just to prepare
the environment to start testing.
– Load the upgrade to your test partition or system, which can be difficult to achieve
because not all upgrades coexist with the existing level of software. If you need to back
out, then you need to reload from a backup. Again, this method can be time
consuming.
– Load the upgrade straight to the production system, which is a good method if it is
successful. You always have backups before you need them and, in the event of a
failure, you then take several more hours to recover the system.
All of these methods have risks associated with them, not to mention the time that is
required to obtain the backups as well as load them.
Using cloning, you can take a near instant copy of a working system. This system can then
be attached to another separate system or partition, allowing you to be up and running in a
totally independent environment in a matter of minutes.
For a major upgrade, a clone means that you can revert to the starting position very
quickly. The clone is available for use immediately. It is just a matter of reassigning the
disks that are used by the partition and you are up and running. This process takes
minutes rather than the hours that a reload can take.
You also have an exact copy of the production system and all its data, which means that
you perform the upgrade exactly as you need to on the production system and discover
any problems.
Creating a replica
You might have the need to create a new partition quite frequently. By creating a single
disk system on the external storage subsystem, you can simply clone the master, add the
additional resources required (additional LUNs and other I/O resources) and you have an
operational system in minutes without having to restore SLIC, i5/OS and Licensed
Program Products.
These are just some of the more common examples of when you can use an ad hoc clone of
the disks. Such functions are only possible when your entire System i disk space is on external storage.
Save-while-active (SWA) is built into i5/OS and does not require an IPL or any substantial restrictions on your users. It achieves this by making a checkpoint of the objects so that it can track any changes that are made while the save is running. On very busy systems, SWA can take some time to reach the checkpoint, and because it locks objects for the save, there might be application conflicts. Using the new “SWA Partial Transaction” function introduced in V5R3 allows the save activity to continue without holding extended locks on objects, thereby speeding up the checkpoint acquisition.
Customers considering SWA might find it easier to restrict the system so that the checkpoint is obtained more quickly and then restart the applications. The actual backup itself is done concurrently with normal operations. This save operation consumes additional resources, so you need to ensure that there is sufficient capacity to use SWA.
Standard i5/OS system save commands, while not requiring an IPL or a power down, do
require that the system be in a restricted state for the whole duration of the save. This causes
a substantial downtime requirement for the application. Recovery from your backups will be
simpler than for save-while-active, and journaling is not a requirement. When the save is finished, you have to restart your system. In Table 11-1 on page 543, we compare the standard Save Library command (SAVLIB) with the non-system objects save parameter (*NONSYS) against SWA and FlashCopy.
To understand more about i5/OS’s built-in backup and recovery techniques, visit the System i
Information Center at:
http://publib.boulder.ibm.com/iseries/
Prior to i5/OS V6R1, taking a copy of the entire System i disk space required that you shut
down the system to ensure that all of the modified data in main memory is flushed to disk.
The new i5/OS V6R1 quiesce for Copy Services function allows you to quiesce I/O activity for
*SYSBAS and IASPs by suspending all database I/O operations and thus eliminating the
requirement to shut down your production system before taking a FlashCopy.
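As an illustration, a minimal CL sequence for such a quiesce might look like the following; the ASP device, timeout value, and exact parameter set are assumptions to be checked against the CHGASPACT command help on your V6R1 system:
CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(300)
/* Take the FlashCopy of the corresponding LUNs with the DS CLI while I/O is suspended */
CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)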
Note: The DS8000 Release 3 space-efficient FlashCopy virtualization function (see 2.7.8, “Space efficient FlashCopy” on page 52) allows you to lower the amount of physical storage for the FlashCopy target volumes significantly by thinly provisioning the target space proportional to the amount of write activity from the host. This fits very well for system backup scenarios with saving to tape.
For further information about the new i5/OS quiesce for Copy Services function, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.
When you have shut down or quiesced your system, the actual copy for the clone is done in
seconds, after which you are able to IPL or resume I/O on your production system and return
it to service, while you perform your backup on a second system, or more likely, partition.
Unlike controlled point-in-time copies taken with FlashCopy after a controlled quiesce or power-down, Metro Mirror and Global Mirror constantly update the target copy, so you cannot be assured of having a clean starting point in a disaster scenario where the copy process is suddenly interrupted. There is no chance to preempt a disaster event with a power down of the source system to flush objects from main storage. This applies to all environments, regardless of whether IASPs are used. With both Metro Mirror and Global Mirror, you have a restartable copy, but the restart point is the same point that the original system would be at if an IPL were performed after the failure. The result is that all recovery on the target system includes abnormal IPL recovery. It is critical that application availability techniques such as journaling are employed to accelerate and assist the recovery.
With Metro Mirror, the recovery point is the same as the point at which the production system failed; that is, a recovery point objective of zero (last transaction) is achieved. With Global Mirror, the recovery point is where the last consistency group was formed. By default, Global Mirror consistency groups are formed continuously, as often as the environment allows, depending on the bandwidth and the write I/O rate.
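For reference, a Metro Mirror (synchronous PPRC) relationship is established with the DS CLI mkpprc command. The storage image IDs and volume pairs in this sketch are illustrative assumptions; a Global Copy relationship uses a different -type value:
dscli> mkpprc -dev IBM.2107-75AAAAA -remotedev IBM.2107-75BBBBB -type mmir 1000-1016:1000-1016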
Note: When using synchronous mirroring solutions, distance and bandwidth can have an impact on the production system’s performance, because write I/Os must wait until the write update for the remote copy has been acknowledged back to the host. This can cause significant impact if there is a lot of write I/O for system components such as swap space, temporary work areas, and temporary index builds.
Although the IBM Metro Mirror algorithm is very efficient compared with synchronous mirroring techniques from other vendors, we recommend that for performance-critical System i workloads, especially those with a high amount of write I/O activity, you at least have PPRC performance modeling done with tools such as Disk Magic or perform a benchmark at an IBM benchmark center. If this proves to be too much of a performance impact, consider Global Mirror, which is designed not to impact production server performance, or split the application into IASPs (see “System architecture for System i availability” on page 545 for more details).
You should be extremely careful when you activate a partition that has been built from a
complete copy of the DASD space. In particular, you have to ensure that it does not
automatically connect to the network because this can cause substantial problems within both
the copy and its parent system.
You must ensure that your copy environment is correctly customized before attaching it to a
network. Remember that booting from a SAN and copying the entire DASD space is not a
high-availability solution, because it involves a large amount of subsequent work to make sure
that the copy works in the environment where it is used.
Note: Using independent ASPs with Copy Services is supported only when using either the System i Copy Services Toolkit or HASM, and a pre-sale and pre-installation Solution Assurance Review is highly recommended or required.
For further information about HASM and the System i Copy Services Toolkit, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.
In the following sections, we describe some disaster recovery, backup, and high availability
scenarios using the System i switchable IASP architecture.
This architecture uses IASPs to isolate the applications and data from the system
components. The middle system (with yellow *SYSBAS disks) can be regarded as the
production server. Local availability is provided by the upper system with orange *SYSBAS.
The dark green IASP can be switched between these two local servers in the event of
planned or unplanned outages on the servers.
The system continues to run if errors occur in the IASP disk subsystem, no matter which disk
technology is used. With multiple IASPs, perhaps each holding different applications or
environments, failure in one IASP will not affect the others. Although disaster recovery (DR)
can be provided without IASPs, the level of availability can be increased using IASPs.
Disaster Recovery functions can be provided at a remote site by the server with gray
*SYSBAS. The light green IASP in the lower site could be replicated using either Cross Site
Mirroring (XSM) or an HABP solution.
Figure 11-3 System i Availability
Further variations of this model could include another server in the remote site to provide a
more symmetric solution. Multiple systems can also assist with workload balancing although
this is not an automatic function of IASPs.
By simply replacing the internal disks making up the IASPs with external storage, we can
introduce another replication method - storage-based data replication by Metro or Global
Mirror for offsite disaster recovery, or FlashCopy for onsite backups, as shown in Figure 11-4.
11.3.2 Backups
As well as providing DR capabilities as shown above, the System i Copy Services Toolkit or
HASM also support FlashCopy. This can be a great benefit for customers who want to
minimize the downtime associated with doing daily backups. By separating the application
data from the system using IASPs, it is possible to create a point-in-time copy of the
application data and to attach this copy to another server (or more likely partition) to perform
the backups, as shown in Figure 11-5.
The upper server with the yellow *SYSBAS is the production server. It is attached to the dark
green IASP in the external storage subsystem. When backups are to be done, the application
should either be quiesced for a short period of time allowing the IASP to be varied off or the
IASP should be quiesced using the new i5/OS V6R1 quiesce for Copy Services function.
Either way, modified objects for the IASP are flushed from memory to disk, where they can be copied using FlashCopy.
As soon as the FlashCopy command has completed (a matter of minutes), the dark green
IASP can be varied on again or resumed on the production server and the light green IASP’
can be attached and varied on to the backup server or partition where the backups can be
performed. Under most circumstances, we would anticipate that a partition would be used
allowing resources (memory and processor) to be re-allocated to the backup partition for the
duration of the backups. The System i Copy Services Toolkit manages the whole process.
IBM does not support this capability without using the Toolkit.
Important: Although it is possible to perform a FlashCopy without varying off the IASP, we
would normally advise customers to vary off the IASP or use the new i5/OS V6R1 quiesce
function (CHGASPACT). If this is not possible, it is imperative that journaling is used, with
the journal receivers being in the IASP, so that journal changes can be applied to the
database when the FlashCopy target IASP is varied on to the backup server.
If you have more than one production partition or server attached to the external storage subsystem, you can use a single partition or server to perform the backups for all of your different production IASPs, as shown in Figure 11-6.
Figure 11-6 Using a single backup partition or server for multiple production environments
In this case, all three environments (two production and one backup) are in the same cluster
and device domain, so all three nodes know about the two production IASPs (dark green and
dark blue). This allows each of the IASP copies to be attached and varied on to the single
backup server or partition. As the System i Copy Services Toolkit manages each IASP
separately, it requires separate Fibre Channel attachments on the backup server for each
IASP.
Two further variations extend this model: the production IASP is replicated synchronously (Metro Mirror) to a D/R DS or ESS as IASP', and a FlashCopy copy (IASP'') is attached to a backup System i or LPAR for tape backups, taken either from the D/R copy at the remote site or from the production copy at the local site.
These examples are just some of the possibilities that are enabled by using IASPs and Copy
Services. There can be many more, but the same principles apply. Use IASPs to separate the
application data and code from the system. This allows much more flexibility and resilience in
designing your availability solutions.
For more information about System i high availability and disaster recovery solutions, System
i Copy Services Toolkit, HASM and implementing Copy Services refer to IBM System Storage
Copy Services and IBM i: A Guide to Planning and Implementation, SG24-7103.
12
The new boot from SAN support enables you to take advantage of some of the advanced
features available with the DS6000, DS8000 series and Copy Services functions. One of
these functions is known as FlashCopy; this function allows you to perform a near
instantaneous copy of the data held on a LUN or group of LUNs. Therefore, when you have a
system that only has external LUNs with no internal drives, you are able to create a clone of
your system.
Important: When we refer to a clone, we are referring to a copy of a system that only uses
external LUNs. Boot (or IPL) from SAN is, therefore, a prerequisite for this.
You need to have enough free storage space on your external storage server to
accommodate the clone. Additionally, remember that using FlashCopy with the full-copy option (that is, copying all tracks from source to target) to create a clone is very resource intensive, primarily for the external storage disk units involved. Running such FlashCopy background copy tasks during normal business operating hours can cause performance impacts.
You should not attach a clone to your network until you have resolved any potential
conflicts that the clone has with the parent system.
By using the cloning capability that we describe in this chapter, you can create a complete
copy of your entire system in moments. You can then use this copy in any way you want. For
example, you can potentially use it to minimize windows during backup or to protect yourself
from a failure during an upgrade. You can even use it as a fast way to provide a backup or test
system. You can accomplish all of these tasks with minimal impact to your production
operations.
If the restriction on system downtime is related to having multiple applications on a single system, you could consider migrating some applications to another partition. Alternatively, if you are able to shut down the application but not the system, then you should consider other tools that use independent ASPs to quickly switch your applications to another system, such as the System i Copy Services Toolkit service offering from IBM STG Lab Services, developed by IBM Rochester. For further information about the toolkit offering refer to:
http://www-03.ibm.com/systems/services/labservices/labservices_i.html
A significant improvement in System i availability in a FlashCopy environment is the new i5/OS V6R1 quiesce for Copy Services function:
Tip: The new i5/OS V6R1 quiesce for Copy Services function, the CHGASPACT CL command, allows you to suspend all database I/O activity and thereby eliminates the requirement for a system shutdown to ensure a consistent database state before taking a FlashCopy to create a clone of your production system.
For further information about this new quiesce function refer to IBM System Storage Copy
Services and IBM i: A Guide to Planning and Implementation, SG24-7103.
Because cloning creates a copy of the whole of the source system, you need to remember
the following considerations when you create a clone:
A clone is an exact copy of the original source system in every respect.
The system name and network attributes are identical.
The TCP/IP settings are identical.
The BRMS network information is identical.
The Netserver settings are identical.
User profiles and passwords are identical.
The Job schedule entries are identical.
Relational database entries are identical.
You need to take extreme care when you activate a clone. In particular, you have to ensure that it does not connect to the network automatically, because doing so can cause substantial problems within both the clone and its parent system.
Imagine that you are in the process of creating a clone and that your network has a problem
in a router. Your network is effectively split in two. You finish your clone and connect it to a new
partition ready for use. When you IPL your clone, it might see itself plugged into the network
and working correctly. The job scheduler kicks in and starts updating some external systems
that it can see. While this is happening, your live production system is updating those other
systems that it can see. The result can be catastrophic.
Important: You should not attach a clone to your network until you have resolved any
potential conflicts that the clone has with the parent system.
You need to ensure that you have checked that your clone system is customized properly
before you attach it to a network.
Important: While cloning is a highly effective means of backing up a system for disaster
recovery, always remember it does not make sense to back up all objects on the clone
unless the backup is part of a full backup for disaster recovery. In particular, if you bring
journals and associated receivers or system logs back from the clone to the production
system, the data content will not be relevant, because the systems would in fact have a
different data history reflected in the journals. This inconsistency will lead to unpredictable
results if attempted.
This restriction is because the clone LUNs are perfect copies of LUNs that are on the
parent system, and as such, the system would not be able to tell the difference between
the original and the clone if they were attached to the same system.
As soon as you use the clone LUNs on a separate partition, they become owned by that
partition, which then makes them safe to be reused on the original partition.
The actual creation of the clone is very straightforward. Follow these steps:
1. Turn off your system using the PWRDWNSYS command.
2. Use DS CLI or the DS Storage Manager to create a FlashCopy of the LUNs that are
currently assigned to the partition.
For a clone, you typically perform FlashCopy using the full-copy option to physically copy
all source volumes to the targets because you normally intend to use the clone copy for a
longer time without performance implications to the production system.
3. After you have created the FlashCopy bitmap and established the copy, you can turn on
the production system again.
4. Next, attach the clone LUNs to a second partition, which is probably already configured for you within the SAN.
5. Ensure that the clone partition is not connected to a network.
6. Activate the clone system.
7. Modify any settings that will cause clashes with the parent system.
8. Perform the backup or any other functions that you want on the clone.
The examples in this section assume that you have already installed ssh and have it working
with your HMC.
The SSH interface to the HMC is a very powerful interface. Thus, some of the command
strings that are required can seem, at first glance, to be incomprehensible; however, when
you break down the command strings, they are in reality very simple to understand.
Important: It is extremely important when you are working with scripting that you make sure that your spelling and selection of partition names is accurate. For example, if you attempt to delete a partition and enter the wrong name, you might well delete the wrong partition very quickly, which can have very predictable results: the loss of data.
You will also find that some of the parameters require double quotation marks around them. Because the parameter string is usually enclosed in single quotation marks, you need to use a \ (backslash) before the double quotation marks.
Creating a partition
Example 12-1 shows an example of a script to create a partition.
You might notice in the example that the third parameter is quite a long string, but again we
can break down this string into smaller pieces. The command consists of the command name
and its associated parameters. Appendix B, “HMC CLI command definitions” on page 605
includes a full definition of all of the possible parameters.
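To give a feel for the shape of such a call, here is a minimal sketch of an ssh invocation of the HMC mksyscfg command; the HMC user, managed system name, partition name, and resource values are illustrative assumptions, and Appendix B describes the full parameter set:
ssh hscroot@hmc01 'mksyscfg -r lpar -m Server-9406-570-SN10ABCDE -i "name=CLONE1,profile_name=default,lpar_env=os400,min_mem=1024,desired_mem=4096,max_mem=4096,proc_mode=ded,min_procs=1,desired_procs=1,max_procs=1,\"virtual_eth_adapters=2/0/1//0/1\",virtual_opti_pool_id=0,hsl_pool_id=0,conn_monitoring=0,auto_start=0"'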
\"virtual_eth_adapters=2/0/1//0/1\"
The slot information for any virtual Ethernet adaptors
virtual_opti_pool_id=0
The pool ID for the virtual Opti Connect
hsl_pool_id=0
The pool ID for HSL opticonnect
conn_monitoring=0
Is connection monitoring enabled
auto_start=0
Should the partition start automatically when the system starts
If you are familiar with the HMC GUI or WebSM interface into the HMC, you should recognize
most of these parameters from creating a partition with the GUI, because both the GUI and
the CLI are built over the underlying HMC functions. In fact, the CLI can be faster in many
aspects, while the GUI trades performance for ease of use.
Deleting a partition
Example 12-2 is a sample script for deleting a partition. The basic layout is the same as for the partition creation script, but the command is much simpler.
In this example, the command we use is rmsyscfg, the full definition of which is included in
Example B-2 on page 614. The parameters that we use are:
-r lpar
The level at which we want the command to operate, in this case the partition level
-m <machine_name>
The name of the machine on which we want to operate
-n <partition_name>
The name of the partition on which we want to operate
Activating a partition
In the case of cloning, it is most likely that you have created your partition but have left it in a
Not Activated status until it is needed. You, therefore, are more likely to need to activate or
deactivate the partition. To activate a partition, use a script as shown in Example 12-3.
Deactivating a partition
You might also need to deactivate a partition automatically as well. You can use the
chsysstate command with the -o shutdown option, as shown in Example 12-4.
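A minimal sketch of the two calls, reusing the illustrative HMC, managed system, partition, and profile names from the creation sketch, might look like this:
ssh hscroot@hmc01 'chsysstate -r lpar -m Server-9406-570-SN10ABCDE -o on -n CLONE1 -f default'
ssh hscroot@hmc01 'chsysstate -r lpar -m Server-9406-570-SN10ABCDE -o shutdown --immed -n CLONE1'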
Use the lshwres command to show the resources associated with the machine:
-r io
The type of hardware resources to list, io means physical I/O
--rsubtype slot
What level of detail to be provided
-m <machine_name>
The machine on which you want to operate
-F lpar_name,unit_phys_loc,bus_id,phys_loc,vpd_type,description
The data fields that you want included
Example B-4 on page 622 provides full detail about the lshwres command.
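Assembled from the parameters just described (only the HMC and managed system names are illustrative), the complete call looks like this:
ssh hscroot@hmc01 'lshwres -r io --rsubtype slot -m Server-9406-570-SN10ABCDE -F lpar_name,unit_phys_loc,bus_id,phys_loc,vpd_type,description'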
Example 12-6 Output from lshwres
$
> lstrcl
ITSOMIGRTEST1,U5074.007.01D87DE,40,C01,2843,PCI I/O Processor
ITSOMIGRTEST1,U5074.007.01D87DE,40,C02,2757,PCI-X Ultra RAID Disk Controller
ITSOMIGRTEST1,U5074.007.01D87DE,40,C03,5708,SCSI bus controller
ITSOMIGRTEST1,U5074.007.01D87DE,40,C04,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C05,5703,PCI RAID Disk Unit Controller
ITSOMIGRTEST1,U5074.007.01D87DE,41,C06,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C07,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C09,5703,PCI RAID Disk Unit Controller
ITSOMIGRTEST1,U5074.007.01D87DE,41,C10,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C11,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C12,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C13,5706,PCI 10/100/1000Mbps Ethernet UTP
2-port
ITSOMIGRTEST1,U5074.007.01D87DE,41,C14,2847,I/O Processor
ITSOMIGRTEST1,U5074.007.01D87DE,41,C15,2766,PCI Fibre Channel Disk Controller
null,U5294.001.105867B,13,C11,null,I/O Processor
null,U5294.001.105867B,13,C12,null,PCI Ultra4 SCSI Disk Controller
null,U5294.001.105867B,13,C13,null,Empty slot
null,U5294.001.105867B,13,C14,null,Empty slot
null,U5294.001.105867B,13,C15,null,Empty slot
ITSO5804A,U5294.001.105867B,14,C01,2847,I/O Processor
ITSO5804A,U5294.001.105867B,14,C02,2787,PCI Fibre Channel Disk Controller
ITSO5804A,U5294.001.105867B,14,C03,2844,PCI I/O Processor
ITSO5804A,U5294.001.105867B,14,C04,2849,PCI 100/10Mbps Ethernet
null,U5294.001.105867B,15,C05,null,I/O Processor
null,U5294.001.105867B,15,C06,null,PCI Ultra4 SCSI Disk Controller
null,U5294.001.105867B,15,C07,null,Empty slot
null,U5294.001.105867B,15,C08,null,Empty slot
null,U5294.001.105867B,15,C09,null,Empty slot
Martin-Brower,U5294.001.105869B,19,C11,2844,PCI I/O Processor
Martin-Brower,U5294.001.105869B,19,C12,2780,PCI Ultra4 SCSI Disk Controller
Martin-Brower,U5294.001.105869B,19,C13,null,Empty slot
Martin-Brower,U5294.001.105869B,19,C14,null,Empty slot
Martin-Brower,U5294.001.105869B,19,C15,null,Empty slot
Martin-Brower,U5294.001.105869B,20,C01,2844,PCI I/O Processor
Martin-Brower,U5294.001.105869B,20,C02,2780,PCI Ultra4 SCSI Disk Controller
Martin-Brower,U5294.001.105869B,20,C03,2749,PCI Ultra Magnetic Media Controller
Martin-Brower,U5294.001.105869B,20,C04,2838,PCI 100/10Mbps Ethernet
Martin-Brower,U5294.001.105869B,21,C05,2844,PCI I/O Processor
Martin-Brower,U5294.001.105869B,21,C06,2780,PCI Ultra4 SCSI Disk Controller
Martin-Brower,U5294.001.105869B,21,C07,null,Empty slot
Martin-Brower,U5294.001.105869B,21,C08,null,Empty slot
Martin-Brower,U5294.001.105869B,21,C09,null,Empty slot
null,U5294.002.105868B,16,C11,null,Empty slot
null,U5294.002.105868B,16,C12,null,PCI Ultra4 SCSI Disk Controller
ITSOFCALTEST,U5294.002.105868B,16,C13,2847,I/O Processor
ITSOFCALTEST,U5294.002.105868B,16,C14,2766,PCI Fibre Channel Disk Controller
null,U5294.002.105868B,16,C15,null,Empty slot
ITSO5804A,U5294.002.105868B,17,C01,2847,I/O Processor
ITSO5804A,U5294.002.105868B,17,C02,2787,PCI Fibre Channel Disk Controller
ITSO5804A,U5294.002.105868B,17,C03,2844,PCI I/O Processor
ITSO5804A,U5294.002.105868B,17,C04,5703,PCI RAID Disk Unit Controller
For further details about the HMC commands, see Appendix B, “HMC CLI command
definitions” on page 605.
Note: If you are not using the new i5/OS V6R1 quiesce for Copy Services function, you
need to run DS CLI to invoke FlashCopy from another server, because your System i
server, in fact, is turned off.
For a full description of the DS CLI and its use, refer to Chapter 8, “Using DS CLI with System i” on page 391.
Example 12-7 creates the FlashCopy volume pairs from the Windows DS CLI using the
mkflash command. You can also use PPRC to create a copy in a different SAN environment if
you want. In addition to the command, the example also identifies the storage unit on which to
perform the action and the LUN pairs for which to create a FlashCopy relationship.
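A mkflash invocation of roughly the following form, naming the storage unit and the source:target volume pairs, produces output like that shown in the example; the range pair syntax used here is an assumption to verify against the DS CLI reference:
dscli> mkflash -dev IBM.2107-7580741 1400-140A:1600-160A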
Date/Time: July 15, 2005 10:07:05 AM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.
2107-7580741
CMUC00137I mkflash: FlashCopy pair 1400:1600 successfully created.
CMUC00137I mkflash: FlashCopy pair 1401:1601 successfully created.
CMUC00137I mkflash: FlashCopy pair 1402:1602 successfully created.
CMUC00137I mkflash: FlashCopy pair 1403:1603 successfully created.
CMUC00137I mkflash: FlashCopy pair 1404:1604 successfully created.
CMUC00137I mkflash: FlashCopy pair 1405:1605 successfully created.
CMUC00137I mkflash: FlashCopy pair 1406:1606 successfully created.
CMUC00137I mkflash: FlashCopy pair 1407:1607 successfully created.
CMUC00137I mkflash: FlashCopy pair 1408:1608 successfully created.
CMUC00137I mkflash: FlashCopy pair 1409:1609 successfully created.
CMUC00137I mkflash: FlashCopy pair 140A:160A successfully created.
dscli>
In this section, we provide some sample code to use in this situation, which consists of two
ILE Control Language programs and a data area.
The first program DJPCLONE (Example 12-8 on page 564) sets the production system so
that it is ready for cloning by capturing the information that it requires to identify the clone. It
then takes over the i5/OS QSTRUPPGM system value and points it to the second program.
The second program DJPSTRUP (Example 12-9 on page 565) is a replacement startup
program to be called from the QSTRUPPGM system value. It performs the necessary checks
to restrict the startup on the clone system and also ensures that the production system IPLs
correctly with its original startup program and system values.
The data area DJPPRTN is used to store the following information about the production
system:
System name
Serial number
Startup program
Partition ID
When the partition is then either in a Not Activated or suspended state, you initiate the
FlashCopy process to make the clone.
Important: Remember the considerations for activating a clone and preventing conflict
with the parent copy. See 12.2, “Considerations when cloning i5/OS systems” on page 554
for more information.
When the FlashCopy pairs are active, you can then re-IPL or resume your production system
and start your clone partition.
When the partitions IPL, the DJPSTRUP program runs as the startup program and takes
control of the systems at startup.
On the production system, it simply resets to the normal production values and runs the regular
startup program. On the clone system, it stops the startup and leaves the system in a
safe, minimal state with only the console operational.
Important: This code is provided on an as-is basis. You need to include any additional
checks that you might deem necessary to prevent inappropriate use of these programs.
They are provided for educational purposes only.
/* Excerpt from DJPCLONE and DJPSTRUP: both programs retrieve the   */
/* current system name, serial number, and startup program;         */
/* DJPCLONE saves them in the DJPPRTN data area and DJPSTRUP        */
/* compares them with the saved values                              */
RTVNETA SYSNAME(&CURSYSNAM)
RTVSYSVAL SYSVAL(QSRLNBR) RTNVAR(&CURSRLNBR)
RTVSYSVAL SYSVAL(QSTRUPPGM) RTNVAR(&CURSTRUP)
RTVNETA SYSNAME(&CURSYSNAM)
RTVSYSVAL SYSVAL(QSRLNBR) RTNVAR(&CURSRLNBR)
RTVSYSVAL SYSVAL(QSTRUPPGM) RTNVAR(&CURSTRUP)
/* Partition Changed */
/* At this point you should include any code that you wish to run,  */
/* remembering that TCP/IP, Network Attributes, Relational Database */
/* entries, NetServer settings and the System Name are all copies   */
/* of the production system values                                  */
RETURN
ENDDO
/* On the production system, call the original startup program */
CALL PGM(&WKSTRUPLIB/&WKSTRUPPGM)
MONMSG MSGID(CPF0000) EXEC(DO)
ENDPGM
/* Compile instructions: create both ILE CL programs, binding them */
/* to the QPMLPMGT service program                                  */
CRTPGM PGM(DPAINTER/DJPCLONE) +
       MODULE(DPAINTER/DJPCLONE) BNDSRVPGM(QPMLPMGT)
CRTPGM PGM(DPAINTER/DJPSTRUP) +
       MODULE(DPAINTER/DJPSTRUP) BNDSRVPGM(QPMLPMGT)
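Because CRTPGM binds from modules, the CL source members must first be compiled with CRTCLMOD. This is a minimal sketch; the source file QCLSRC in library DPAINTER is an assumption:
CRTCLMOD MODULE(DPAINTER/DJPCLONE) SRCFILE(DPAINTER/QCLSRC) SRCMBR(DJPCLONE)
CRTCLMOD MODULE(DPAINTER/DJPSTRUP) SRCFILE(DPAINTER/QCLSRC) SRCMBR(DJPSTRUP)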
For further information about System i and DS8000 recovery, refer to IBM System i & System
Storage DS8000 Recovery Handbook at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101099
4. Select the Reference Code for which you want to see the details (Figure 13-3).
Note: You cannot get to your partition console in this state. Thus, the only way to further
debug this issue from the i5 server side is a D-mode IPL to DST. We recommend this
method only when there is clear evidence of failed i5 hardware or as a last resort after
verifying that both the DS6000 or DS8000 and the SAN environment are OK.
Error symptom: SRC B2003200 on front panel; no additional information available in ASM.
Scenario: Fatal hardware error (during IPL).
Recommended action: Perform a D-mode IPL. Check the I/O processor (IOP) and I/O adapter (IOA) status in DST → HSM.

Error symptom: SRC B2003200 on front panel; ASM informational error log entry B7005122 with a LS Not Found code value of C6000001.
Scenario: No valid IOA found (during IPL).
Recommended action: Verify the following in your HMC i5 partition configuration: a #2847 IOP with a #2766 or #2787 IOA is assigned, and the tagged I/O load source is set to a #2766 or #2787 IOA controlled by a #2847 IOP.

Error symptom: SRC B2003200 on front panel; ASM informational error log entry B7005122 with a LS Not Found code value of C6000002.
Scenario: The Fibre Channel IOA is operational but the Fibre Channel link out of the adapter is not (during IPL).
Recommended action: Verify that the DS6000/DS8000 host adapter port for the load source IOA is configured as switched-fabric (FcSf). If direct attached to a DS8000 (DS6000): verify that the FC cable is fine, and verify that the DS host adapter port is operational by using a wrap-plug on the DS host adapter, resulting in a solid green and flashing yellow LED (bottom green). If SAN attached to a DS6000/DS8000: verify that the FC cable between the i5 and the SAN switch is fine, and verify that the SAN switch port is operational (refer to your switch vendor's documentation).

Error symptom: SRC B2003200 on front panel; ASM informational error log entry B7005122 with a LS Not Found code value of C6xxyy00, where xx is the number of devices (each DS6000/DS8000 counts as one device even if connected to multiple ports on the device) and yy is the total number of LUNs under all devices. Example C6000000: no DS6000/DS8000 system found on the fabric. Example C6010300: found one DS6000/DS8000 system with three LUNs, but the requested LS was not found.
Scenario: The Fibre Channel IOA and the link out of the IOA are operational, but the load source unit was not found (during IPL).
Recommended action: For code C6000000: verify that your SAN switch zoning is correct and that there is no DS6000/DS8000 port login restriction configured. For other codes C6xx0000, xx > 00: verify that the DS6000/DS8000 host system configuration for the boot IOA is configured with the correct WWPN, that there is a volume group with at least one volume attached to the DS6000/DS8000 host system configuration for the boot IOA, and that the DS6000/DS8000 volumes assigned to the IOA are in normal status. For other codes C6xxyy00, xx > 00: verify that the DS6000/DS8000 volume group assigned to the i5 boot IOA contains the volume ID of the load source, that the DS6000/DS8000 volume that is the load source is in normal status, and, for new installs, that the DS6000/DS8000 volume that is supposed to be the load source was installed correctly with SLIC V5R3M5 or higher.

Error symptom: SRC A600255/A600266 on front panel.
Scenario: Contact was lost with the device indicated (during normal operation).
Recommended action: Perform LIC problem isolation procedure LICIP13 (see the IBM Systems Hardware Information Center at http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp).
Tip: You can download the Web-based System Manager (WebSM) remote client for
remote HMC management as a self-extracting Windows install file directly from the
HMC at:
http://hmc_ip_address/setup.exe
3. Select OK on the Manage Serviceable Events — Select Serviceable Events window that
opens (Figure 13-5).
Figure 13-5 DS8000 WebSM Manage Serviceable Events: Select Serviceable Events window
4. Review any open problems listed in the Manage Serviceable Events - Serviceable Event
Overview (Figure 13-6).
If you have indications of a potential DS8000 storage-related problem from your System i
server side, we especially recommend that you check the “Serviceable event text” and
“First/Last reported time” information to see whether there are time-matched problem log
error indications on the DS8000. For a DS8000 machine that is registered properly in the
IBM Remote Technical Assistance and Information Network (RETAIN®), a Problem
Management Hardware (PMH) ticket number is associated with each problem that the
DS8000 called home about.
For further information about DS8000 serviceable events, refer to the IBM System Storage
DS8000 Service Information Center at:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000sv/index.jsp
A successful reset of the DS Storage Manager administrative password back to its default
password admin is indicated by the message shown in Figure 13-8.
13.3 DS6000 actions
The following tasks might help you to diagnose System i external storage problems from
the DS6000 side.
3. Check that the state of all physical resources in each enclosure is Normal (green). If any
resource is in an Attention or Alert status, click its status field to get more information
about its non-normal resource status.
3. If there are no log entries in Status Open, the procedure to check for open problems ends
here. Otherwise proceed to the next step.
4. View details of any log entry in Status Open, starting from the oldest to the most recent
entry by clicking the Message ID or by marking the Select radio button that corresponds
with the log entry. Select View Details from the Select Action menu, and select Go to
continue (Figure 13-11).
Figure 13-11 DS6000 Storage Manager Logs View Details selection window
6. If you are successful in correcting the problem, close the problem entry by marking the
Select radio button and selecting Close from the Select Action menu. A confirmation
message, CMUR00003W, is displayed and asks whether you really want to close the problem.
Select Continue (Figure 13-13).
If IBM support does not contact you, for example because the DS6000 call home was not
successful, and if you need further assistance on resolving the problem, you can contact
IBM support as described in the following sections.
13.3.3 Contacting IBM support
To contact IBM DS6000 technical support and open a problem ticket on the IBM Electronic
Services Web site, follow these steps:
1. Contact IBM support by selecting Real-time Manager → Monitor system → Contact
IBM and by clicking Contact IBM (Figure 13-14).
2. On the IBM Electronic Services Web site, select your country in the “Send request to”
menu, select Hardware in the “Select type and submit” menu, and select All Hardware
products (the choices are based on your country choice) in the “Select product and
submit” menu. Then, select Submit to submit a service request to IBM support
(Figure 13-15).
4. On the Electronic Service Call Web page, select Place a request, select Hardware
repair activities from the Request type menu, and select Repair/Fix hardware product
from the Sub-request type menu. Complete the remaining information that is requested in
the form, preferably also entering the error code in the Problem section, before selecting
Submit to have IBM support contact you about the opened service request (Figure 13-17).
Figure 13-17 IBM Electronic Service Call Place a request Web site
Figure 13-18 IBM Electronic Service Call Place a request confirmation Web site
6. Wait for IBM support to contact you for the newly opened problem ticket.
To send DS6000 problem determination data to IBM support, follow these steps:
1. Log in to the DS6000 Storage Manager (see Chapter 9, “Using DS GUI with System i” on
page 439).
2. Select Real-time Manager → Manage hardware → Storage units. Select the storage
unit, choose Copy and Send Problem Determination Data in the Select Action menu,
and then select Go to continue (Figure 13-20).
3. Choose Copy new data for Select a data type, mark both Traces and Dumps for Select a
file type, and select Next to continue (Figure 13-21).
Note: After selecting Next on the DS6000 Storage Manager Copy problem
determination data window, a process to collect trace (PE-package) and dump
(statesave) data runs automatically in the background. The panel does not refresh until
this process actually ends, which can take a few minutes.
Usability improvements in this area are planned for future DS6000 Storage Manager releases.
Figure 13-21 DS6000 Storage Manager Copy problem determination data window
Figure 13-22 DS6000 Storage Manager Download problem determination data window
5. In the Send problem determination data to IBM panel, ensure that the Send all data
option is selected, and select Next to continue (Figure 13-23).
Figure 13-23 DS6000 Storage Manager Send problem determination data to IBM window
6. Select Finish on the Verification panel to start the automatic process of transferring
problem determination data through FTP port 21 to the testcase.software.ibm.com IBM
server (Figure 13-24).
Figure 13-24 DS6000 Storage Manager Problem determination data Verification window
Figure 13-26 DS6000 Storage Manager Storage Units Activate Remote Support selection window
3. Select OK in the Activate Remote Support panel to start the remote support VPN
connection to IBM (Figure 13-27).
4. Close the long-running task window that opens by selecting Close and View summary.
The established VPN connection is indicated by an IBMVPN network connection item in
the Windows system tray on the SMC (Figure 13-28).
Figure 13-28 DS6000 Storage Manager Activate Remote Support long running task window
5. To stop the remote support VPN connection after IBM remote support has finished its
problem analysis, right-click the IBMVPN network connection item and select Disconnect
(Figure 13-29).
Figure 13-29 Stopping the VPN connection from the IBMVPN connection item on the SMC
Note: Future DS Storage Manager versions will include an enhancement to prevent the
IBM DS Network Interface Server service from being stopped when the administrative
user logs off.
Contact IBM support (see 13.3.3, “Contacting IBM support” on page 581) if you need further
assistance.
This message can occur for many reasons, and they all relate to the Java SSL setup on
i5/OS.
If you receive this message, you need to review the DS CLI client log file at
/home/userprf/dscli/log/niclient.log. This file lists the connectivity attempts to the systems. The
DS CLI always tries to connect to an IBM System Storage DS6000 or DS8000 storage
subsystem first. If it cannot connect, it tries to communicate with an IBM System Storage
Enterprise Storage Server (ESS).
Tip: It is often helpful to delete the niclient.log file before you retry a DS CLI command that
you know to be failing. The DS CLI re-creates the file, and it is easier to see the relevant
connection information.
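From i5/OS, you can delete the log and, after re-running the failing DS CLI command, display the newly created file with standard integrated file system commands. This is a minimal sketch using the same path that is named above:
RMVLNK OBJLNK('/home/userprf/dscli/log/niclient.log')
WRKLNK OBJ('/home/userprf/dscli/log/niclient.log')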
If the following message appears in the DS CLI niclient.log file, then your Java SSL is not set
up correctly:
javax.net.ssl.SSLHandshakeException: No compatible cipher suite available between
SSL end points
The steps to check are:
a. Ensure that the admin HTTP server is started using the following command:
STRTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
b. Use a Web browser to connect to the OS/400 HTTP server admin port:
http://IP_address:2001
c. Sign on as QSECOFR.
d. Click the Digital Certificate Manager option.
If the following message appears in the Web browser, your DCM is not operating correctly:
You must install one of the cryptographic access provider products on your
system before using the Digital Certificate Manager (DCM) functions. Contact
your system administrator.
Use the following command to initialize your DCM only if this message is returned:
CALL QCAP3/QYAC3INAT
Now end the HTTP admin server and restart it:
ENDTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
STRTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
Then, use these steps again to check that the DCM is operational.
4. If the following message appears in the DS CLI niclient.log file, typically your i5 server
system date is set incorrectly:
javax.net.ssl.SSLException: No certificate authorities are registered for this
secure application
The certificate exchange requires that the system date is later than Tue Apr 29 20:55:29
UTC 2003; otherwise, the certificate that the SMC or HMC sends is rejected.
Important: These standard Emulex LED codes are not suitable to debug IBM eServer i5
IOA connection problems, as the IOP/IOA might have been reset as part of error recovery.
However, they still apply for any connection of a DS6000/DS8000 to a SAN switch.
Appendix A. FAQs for boot from SAN with IOP-less and #2847 IOP-based Fibre Channel
This appendix includes frequently asked questions (FAQs) regarding boot from SAN external
storage with the IOP-less and #2847 IOP-based Fibre Channel adapter cards.
Compared to previous remote load source mirroring solutions, where the load source
remained on an internal disk unit mirrored to an external LUN, boot from SAN eliminates the
requirement for a remote load source recovery when the load source mirror mate is to be
used either for recovery purposes or for system cloning. Storage-based data replication
solutions that replicate the whole system space, that is, not only IASPs, take advantage of
boot from SAN by significantly reducing the recovery time needed to IPL another (recovery)
system from the copied boot from SAN external load source. Also, because no manual
intervention for remote load source recovery is required, boot from SAN lays a foundation for
FlashCopy automation solutions, for example, allowing a fully automated system backup
process with FlashCopy.
Question 2. What is new for boot from SAN with IOP-less Fibre
Channel?
The new System i IOP-less Fibre Channel (FC) technology, supported on System i POWER6
models only, provides inherent boot from SAN support. Two new IOP-less dual-port FC
adapters are available, the #5749 as a PCI-X version and the #5774 as a PCI Express (PCIe)
version. These IOP-less adapters support both disk and tape systems but not on the same
adapter port. For disk storage systems, they support only the IBM System Storage DS8000
series models.
For further information, refer to Chapter 4, “i5/OS planning for external storage” on page 75.
Question 4. How is the #2844 IOP different from the #2847 IOP?
The #2847 IOP is designed specifically to support boot capabilities by placing the i5/OS load
source disk unit directly inside one of the supported external storage servers. Compared to
the #2844 IOP, which supports a wide range of IOAs, the #2847 IOP supports only the #2766,
#2787, or #5760 Fibre Channel disk controllers.
Question 5. What System i hardware models and IBM System
Storage subsystems does the #2847 IOP support?
The #2847 IOP requires IBM System i POWER5 or POWER6 systems (Model 515, 520, 525,
550, 570, 595, or 9411-100) along with an IBM System Storage disk storage subsystem. IBM
has tested, and therefore supports, IBM System Storage ESS model 800, DS6000, and
DS8000 series. Other IBM System Storage ESS models or any OEM hardware configurations
are not tested.
Question 6. How many LUNs do the adapters for boot from SAN
support?
The new IOP-less dual-port Fibre Channel adapters #5749 or #5774 support up to 64 LUNs
per port, that is up to 128 LUNs per adapter.
With #2847 IOP-based boot from SAN, the LUNs are connected through a single-port Fibre
Channel adapter #2766, #2787, or #5760 supporting 32 LUNs in total. In addition to the load
source LUN, the #2847 can support up to 31 additional LUNs.
Prior to i5/OS V6R1, multipath was not supported for the boot from SAN LSU, though the
#2847 IOP supported multipath to other non-load source LUNs.
To provide protection for the SAN load source prior to i5/OS V6R1, you had to purchase a
second #2847 IOP instead of a #2844, define an additional unprotected LUN, and use i5/OS
mirroring to protect the load source LUN. The remaining LUNs can take advantage of multipath
I/O.
Question 10. What are the minimum software requirements to
support #2847?
Minimum software requirements for i5/OS, HMC, system firmware, and IBM System Storage
disk storage subsystem include:
Licensed Internal Code: V5R3M5 LIC (level RS 535-A or later)
i5/OS Operating System: V5R3M0 Resave Level (level RS 530-10 or later)
System Firmware: 2.3.5
HMC Firmware: 5.1
Latest microcode levels for IBM ESS model 800, DS6000, or DS8000 series
Latest cumulative PTF package for i5/OS and Licensed Program Products.
Question 11. Will the #2847 IOP work with iSeries models?
No. Only POWER5 processor-based System i models that are managed by an HMC support
the #2847 IOP.
For #2847 IOP-based Fibre Channel boot from SAN, no microcode or hardware changes are
required on the disk storage subsystem. However, during configuration, it is important to
ensure that the ports are configured as FC-SW and not as FC-AL. Failure to configure the
ports as FC-SW results in your system not being able to find the load source disk unit in the
storage subsystem. For more information, see 6.4, “Setting up an external load source unit”
on page 211.
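As an illustration, the port topology can be checked and, if necessary, changed with the DS CLI. In this sketch the storage image ID and port ID I0010 are assumptions, and scsi-fcp is assumed to be the topology value that corresponds to FcSf on your DS CLI level:
dscli> lsioport -dev IBM.2107-75ABC123
dscli> setioport -dev IBM.2107-75ABC123 -topology scsi-fcp I0010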
Question 14. Will I have to define the load source LUN as a
“protected” or as an “unprotected” LUN?
With i5/OS V6R1, multipath is supported for the load source LUN, so i5/OS mirroring is not
required to provide path redundancy for the load source. Unless you are planning to use
i5/OS mirroring for data redundancy, you can define the load source LUN as a protected
volume model on the external storage subsystem.
Prior to i5/OS V6R1, multipath was not supported for the load source LUN. So, to provide
redundancy for the SAN load source, you needed to specify it on the external storage
subsystem as unprotected and mirror the load source LUN using i5/OS mirroring. If you are not
planning to enable redundancy at the IOP level to provide an alternate path to the load
source, you can define the LUN as a protected LUN in the external storage subsystem.
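As an illustration, protected and unprotected i5/OS volume models are selected when the LUN is created with the DS CLI mkfbvol command. In this sketch the storage image ID, extent pool, names, and volume IDs are assumptions, and A05/A85 are used as examples of a protected and an unprotected volume model:
dscli> mkfbvol -dev IBM.2107-75ABC123 -extpool P0 -os400 A05 -name LS_PROT 1000
dscli> mkfbvol -dev IBM.2107-75ABC123 -extpool P0 -os400 A85 -name LS_UNPROT 1001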
Question 15. Will the Fibre Channel load source require direct
connectivity to my SAN storage device, or can I go through a
SAN fabric?
You can either have a direct point-to-point connection from the Fibre Channel adapter to the
Host Bay Adapter (HBA) on the external storage subsystem, or you can use one of the
supported SAN switches or directors. In the latter case, zoning for System i IOAs is highly
recommended to avoid potential performance degradation and to allow for easier problem
isolation. For further information about SAN switch zoning for System i connectivity, refer to
4.2.5, “Planning for SAN connectivity” on page 92.
Question 18. Is the #2847 IOP supported in Linux or AIX
partitions on System i?
No. Linux and AIX partitions do not need IOPs, and none of the IOPs, including the #2847,
are supported in these partitions. The supported Fibre Channel adapters installed in these
partitions support boot capabilities for Linux or AIX kernels loaded in these partitions.
Question 24. Could I install #2847 on my iSeries model 8xx
system, or in one of the LPARs on this system?
The #2847 IOP is not supported on iSeries model 8xx systems. Therefore, you cannot install it
in the system or in any of the LPARs that are defined on these systems.
Question 25. Will the #2847 IOP work with V5R3M0 Licensed
Internal Code?
No. The minimum LIC requirement is V5R3M5 or later.
When enabling the cloned system image, you need to perform a manual IPL to change the
system name and network attributes and to reassign hardware resource names for your
network configuration objects before introducing the cloned partition into your network.
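A minimal sketch of the kind of changes involved follows; the new name CLONE01 and the line description name are assumptions for illustration:
CHGNETA SYSNAME(CLONE01) LCLCPNAME(CLONE01) LCLLOCNAME(CLONE01)
/* Use WRKHDWRSC TYPE(*CMN) to identify the new communications */
/* resource names, then point the line descriptions at them:   */
CHGLINETH LIND(ETHLINE) RSRCNAME(CMN05)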
For example, if you had an existing i5 system or logical partition using multipathed LUNs in
external storage, you would have either a RAID-protected or a mirror-protected load source.
To replace this load source and maintain multipath to all other LUNs, you can use one of the
following methods:
The simplest method of load source migration is to purchase two #2847 IOPs and two FC
IOAs and use these adapters to create a mirrored SAN load source pair plus 31 additional
multipath LUNs. You need to ensure that there is slot capacity in your i5 system for these
additional features.
Alternatively, you can use the existing multipath LUNs by changing the IOPs that drive them.
This is a complex migration scenario, so we overview it only briefly here. You need to plan
whether the existing FC IOAs have the capacity to support the migrated load source (fewer
than 32 LUNs and sufficient throughput). We assume this is a two-IOP multipath set. You
upgrade i5/OS to V5R3M5. Then, you swap the existing pair of #2844 IOPs supporting the
multipath LUNs for a pair of new #2847 IOPs, either with the system turned off or by using
concurrent maintenance. Assuming load source redundancy is required, you need a pair of
unprotected LUNs. Then, you perform the load source migration as described in Chapter 7,
“Migrating to i5/OS boot from SAN” on page 277.
Question 28. Will the base IOP that is installed in every system
unit be replaced with the new #2847 IOP?
No. Among other devices, the base IOP is used to drive the internal DVD or CD-ROM drives,
the ECS communications link, and other base functions. These are still required even when
you plan to attach all of your disk storage using an external SAN disk storage subsystem.
Question 29. Why does it take a long time to ship the #2847
IOP?
To ensure that important planning and implementation considerations are completed prior to
enabling the new #2847 IOP, IBM has deployed a mandatory technical review for all system
orders that have this new IOP. A questionnaire is generated automatically, and you need to
complete and return it to [email protected]. IBM then schedules a Technical Review call,
after which the order is approved for shipment.
You can direct additional questions regarding the Technical Review or the Questionnaire to:
mailto:[email protected]
This questionnaire allows IBM to schedule the required Technical Review prior to processing
your order.
Question 33. Can I use the #2847 IOP to attach my tape Fibre
Channel I/O adapter and also to boot from it?
No. The #2847 IOP is only designed to support SAN i5/OS load source LUN, and up to 31
additional LUNs. Tape adapters are not supported by this IOP.
Boot from Fibre Channel attached tape drives is only supported with the System i POWER6
IOP-less Fibre Channel cards #5749 or #5774.
Question 34. How many card slots does the #2847 IOP require?
Can I install the IOP in 32-bit slot, or does it need to be in a
64-bit slot?
The IOP occupies one card slot and it can be placed in either a 32-bit or a 64-bit slot. It is
highly recommended that the Fibre Channel disk adapter be placed in a 64-bit card slot for
optimum performance.
Appendix B. HMC CLI command definitions
The CLI references are located in PDF files that are organized by HMC release. Be sure
to select the 4.5 Command Line Specification or later.
NAME
mksyscfg - create system resources
SYNOPSIS
mksyscfg -r {lpar | prof | sysprof} -m managed-system
{-f configuration-file | -i "configuration-data"}
[--help]
DESCRIPTION
mksyscfg creates partitions, partition profiles, or system
profiles for the managed-system.
OPTIONS
attribute-name=value,attribute-name=value,...<LF>
"attribute-name=value,value,...",...<LF>
profiles:
[all_resources]
Valid values are:
0 - do not use all the managed system
resources
1 - use all the managed system resources
(this option is not valid for i5/OS
partitions on IBM eServer p5 servers)
min_mem
megabytes
desired_mem
megabytes
max_mem
megabytes
[proc_mode]
[desired_5250_cpw_percent]
Only valid for i5/OS partitions in
managed systems that support the
assignment of 5250 CPW percentages
[max_5250_cpw_percent]
Only valid for i5/OS partitions in
managed systems that support the
assignment of 5250 CPW percentages
[sharing_mode]
Valid values are:
keep_idle_procs - valid with dedicated
processors
share_idle_procs - valid with dedicated
processors
cap - valid with shared processors
uncap - valid with shared processors
[uncap_weight]
[io_slots]
Comma separated list of I/O slots, with
each I/O slot having the following
format:
slot-DRC-index/slot-IO-pool-ID/
is-required
For example:
21030002/3/1 specifies an I/O slot with a
DRC index of 21030002, it is assigned to
I/O pool 3, and it is a required slot.
[lpar_io_pool_ids]
comma separated
load_source_slot
i5/OS only
DRC index of I/O slot, or virtual slot
number
[alt_restart_device_slot]
i5/OS only
DRC index of I/O slot, or virtual slot
number
console_slot
i5/OS only
DRC index of I/O slot, virtual slot
number, or the value hmc
[alt_console_slot]
i5/OS only
DRC index of I/O slot, or virtual slot
number
[op_console_slot]
i5/OS only
DRC index of I/O slot, or virtual slot
number
[auto_start]
Valid values are:
0 - off
1 - on
[boot_mode]
AIX, Linux, and virtual I/O server only
Valid values are:
norm - normal
dd - diagnostic with default boot list
ds - diagnostic with stored boot list
of - Open Firmware OK prompt
sms - System Management Services
[power_ctrl_lpar_ids | power_ctrl_lpar_names]
comma separated
[conn_monitoring]
Valid values are:
0 - off
1 - on
[hsl_pool_id]
i5/OS only
Valid values are:
0 - HSL OptiConnect is disabled
1 - HSL OptiConnect is enabled
[virtual_opti_pool_id]
i5/OS only
For example:
3/1/5/"6,7"/0/1
specifies a virtual ethernet adapter with
a virtual slot number of 3, is IEEE
802.1Q compatible, has a port virtual LAN
ID of 5, additional virtual LAN IDs of
6 and 7, it is not a trunk
adapter, and it is required.
[virtual_scsi_adapters]
Comma separated list of virtual SCSI
adapters, with each adapter having the
following format:
virtual-slot-number/client-or-server/
remote-lpar-ID/remote-lpar-name/
remote-slot-number/is-required
For example:
4/client/2//3/0
specifies a virtual SCSI client adapter
with a virtual slot number of 4, a
remote (server) partition ID of 2, a
remote (server) slot number of 3, and
it is not required.
[virtual_serial_adapters]
Comma separated list of virtual serial
adapters, with each adapter having the
following format:
virtual-slot-number/client-or-server/
supports-HMC/remote-lpar-ID/
remote-lpar-name/remote-slot-number/
is-required
For example:
4/server/0////0
specifies a virtual serial server adapter
with a virtual slot number of 4, it does
not support an HMC connection, any client
adapter is allowed to connect to it, and
it is not required.
[sni_device_ids]
AIX, Linux, and virtual I/O server only
--help Display the help text for this command and exit.
EXAMPLES
Create an AIX or Linux partition:
Create a system profile:
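In the same spirit, a sketch of creating an i5/OS partition with mksyscfg follows. All names and values here are illustrative assumptions only:
mksyscfg -r lpar -m system1 -i "name=i5OSpart1, profile_name=prof1,
lpar_env=os400, min_mem=1024, desired_mem=4096, max_mem=8192,
proc_mode=ded, min_procs=1, desired_procs=1, max_procs=2,
sharing_mode=share_idle_procs, console_slot=hmc,
load_source_slot=21030002, auto_start=1"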
SYNOPSIS
rmsyscfg -r {lpar | prof | sysprof} -m managed-system
[-n resource-name] [-p partition-name]
[--id partition-ID] [--help]
DESCRIPTION
rmsyscfg removes a partition, a partition profile, or a
system profile from the managed-system.
OPTIONS
-r The type of system resource to remove. Valid val-
ues are lpar for a partition, prof for a partition
profile, and sysprof for a system profile.
removing a partition profile.
--help Display the help text for this command and exit.
EXAMPLES
Remove the partition partition5:
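Based on the synopsis above and assuming a managed system named system1 (an assumption), the command would be similar to:
rmsyscfg -r lpar -m system1 -n partition5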
SYNOPSIS
To power on a managed system:
chsysstate -m managed-system -r sys
-o {on | onstandby | onsysprof}
[-f system-profile-name]
[-k keylock-position]
To activate a partition:
chsysstate -m managed-system -r lpar -o on
{-n partition-name | --id partition-ID}
-f partition-profile-name
[-k keylock-position]
[-b boot-mode] [-i IPL-source]
iopreset | iopdump}
{-n partition-name | --id partition-ID}
DESCRIPTION
chsysstate changes the state of a partition, the managed-
system, or the managed-frame.
OPTIONS
-m The name of the managed system on which to perform
the operation. The name may either be the user-
defined name for the managed system, or be in the
form tttt-mmm*ssssssss, where tttt is the machine
type, mmm is the model, and ssssssss is the serial
number of the managed system. The tttt-
mmm*ssssssss form must be used if there are multi-
ple managed systems with the same user-defined
name.
partitions only.
remotedston - enables a remote service session
for the partition (operator panel function
66). This operation is valid for i5/OS
partitions only.
iopreset - resets or reloads the failed IOP
(operator panel function 67). This
operation is valid for i5/OS partitions
only.
iopdump - allows use of the IOP control storage
dump (operator panel function 70). This
operation is valid for i5/OS partitions
only.
unownediooff - powers off all of the unowned
I/O units in a managed frame.
--immed
If this option is specified when powering off a
managed system, a fast power off is performed.
--restart
If this option is specified, the partition or man-
aged system will be restarted.
--continue
If this option is specified when activating a sys-
tem profile, remaining partitions will continue to
be activated after a partition activation failure
occurs.
--help Display the help text for this command and exit.
EXAMPLES
Power on a managed system and auto start partitions:
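A minimal form of this operation, assuming a managed system named system1 (an assumption), is:
chsysstate -m system1 -r sys -o on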
chsysstate -m 9406-570*12345678 -r sys -o rebuild
SYNOPSIS
To list physical I/O resources:
lshwres -r io --rsubtype {unit | bus | slot |
iopool | taggedio} -m managed-system
[--level {pool | sys}] [-R]
[--filter "filter-data"]
[-F [attribute-names] [--header]] [--help]
DESCRIPTION
lshwres lists the hardware resources of the managed-sys-
tem, including physical I/O, virtual I/O, memory, process-
ing, and Switch Network Interface (SNI) adapter resources.
OPTIONS
-r The type of hardware resources to list. Valid val-
ues are io for physical I/O, virtualio for virtual
I/O, mem for memory, proc for processing, and sni
for SNI adapter resources.
--rsubtype
The subtype of hardware resources to list. Valid
physical I/O resource subtypes are unit for I/O
units, bus for I/O buses, slot for I/O slots,
iopool for I/O pools, and taggedio for tagged I/O
resources. Valid virtual I/O resource subtypes are
eth for virtual ethernet, hsl for High Speed Link
(HSL) OptiConnect, virtualopti for virtual OptiCon-
nect, scsi for virtual SCSI, serial for virtual
serial, and slot for virtual slot resources.
--level
The level of information to list. Valid values are
lpar for partition, pool for pool, slot for slot,
and sys for system.
--maxmem
When this option is specified, the required minimum
--procunits
When this option is specified, the range of optimal
5250 CPW percentages for partitions assigned the
quantity of processing units specified is listed.
The quantity of processing units specified can have
up to 2 decimal places.
--filter
The filter(s) to apply to the hardware resources to
be listed. Filters are used to select which hard-
ware resources of the specified type are to be
listed. If no filters are used, then all of the
hardware resources of the specified type will be
listed. For example, all of the physical I/O slots
on a specific I/O unit and bus can be listed by
using a filter to specify the I/O unit and the bus
which has the slots to list. Otherwise, if no fil-
ter is used, then all of the physical I/O slots in
the managed system will be listed.
"filter-name=value,filter-name=value,..."
""filter-name=value,value,...",..."
When a list of values is specified, the filter
name/value pair must be enclosed in double quotes.
Depending on the shell being used, nested double
quote characters may need to be preceded by an
escape character, which is usually a '\' character.
--header
--help Display the help text for this command and exit.
EXAMPLES
List all I/O units on the managed system:
lshwres -r io --rsubtype unit -m system1
List only the DRC index, description, and the owning par-
tition for each physical I/O slot on buses 2 and 3 of I/O
unit U787A.001.0395036:
List all I/O pools and the partitions and slots assigned
to each I/O pool:
List the tagged I/O devices for the i5/OS partition that
has an ID of 1:
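A sketch of this listing, assuming a managed system named system1 and that lpar_ids is the filter name for the partition ID (both assumptions), is:
lshwres -r io --rsubtype taggedio -m system1 --filter "lpar_ids=1"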
Related publications
We consider the publications that we list in this section particularly suitable for a more
detailed discussion of the topics that we cover in this book.
IBM i5, iSeries, and AS/400e System Builder IBM i5/OS Version 5 Release 3, GA24-2155
iSeries in Storage Area Networks A Guide to Implementing FC Disk and Tape with iSeries,
SG24-6220
IBM eServer iSeries Migration: System Migration and Upgrades at V5R1 and V5R2,
SG24-6055
IBM eServer iSeries Migration: A Guide to Upgrades and Migrations to System i5,
SG24-7200
Online resources
The following Web sites and URLs are also relevant as further information sources:
IBM Systems Information Centers
http://publib.boulder.ibm.com/eserver/
IBM TotalStorage Enterprise Server Introduction and Planning Guide
http://www-1.ibm.com/support/docview.wss?rs=503&context=HW26L&dc=DA400&q1=planning&uid=ssg1S7000003&loc=en_US&cs=utf-8&lang=en
IBM System Storage DS6000 Introduction and Planning Guide
http://www-1.ibm.com/support/docview.wss?rs=1112&context=HW2A2&dc=DA400&q1=ssg1*&uid=ssg1S7001072&loc=en_US&cs=utf-8&lang=en
Back cover

IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i

Take advantage of DS8000 and DS6000 with IBM i
Learn about the storage performance and HA enhancements in IBM i 6.1
Understand how to migrate from internal to external disks

This IBM Redbooks publication provides a broad discussion of a new architecture of the IBM System Storage DS6000 and DS8000 and how these products relate to System i servers. The book includes information for both planning and implementing IBM System i with the IBM System Storage DS6000 or DS8000 series where you intend to externalize the i5/OS load source disk unit using boot from SAN. It also covers migration from System i internal disks to IBM System Storage DS6000 and DS8000.

This book is intended for IBMers, IBM Business Partners, and customers in the planning and implementation of external disk attachments to System i servers.

The newest release of this book accounts for the following new functions of IBM System i POWER6, i5/OS V6R1, and IBM System Storage DS8000 Release 3:
System i POWER6 IOP-less Fibre Channel
i5/OS V6R1 multipath load source support
i5/OS V6R1 quiesce for Copy Services
i5/OS V6R1 High Availability Solution Manager (HASM)
i5/OS V6R1 SMI-S support
i5/OS V6R1 multipath resetter HSM function
System i HMC V7
DS8000 R3 space efficient FlashCopy
DS8000 R3 storage pool striping
DS8000 R3 System Storage Productivity Center (SSPC)
DS8000 R3 Storage Manager GUI

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.