Front cover
Course Guide
IBM Storwize V7000 Implementation
Workshop
Course code SSE1G ERC 3.1
August 2016 edition
Notices
This information was developed for products and services offered in the US.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative
for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not
intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate
and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this
document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
United States of America
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein;
these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s)
and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an
endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those
websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other
publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other
claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those
products.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible,
the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to
actual people or business enterprises is entirely coincidental.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many
jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM
trademarks is available on the web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
© Copyright International Business Machines Corporation 2012, 2016.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Array and RAID levels: Drive counts and redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
Storwize V7000 balanced system (chain balanced) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Enclosure 4 (chain 2) drives and array members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Enclosure 3 (chain 1) drives and array members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Array member goals and spare attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Spare drive use attribute assignment by GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Spare selection for array member replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
Traditional RAID 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Traditional RAID 6 reads/writes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
Distributed RAID (DRAID) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Distributed RAID 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
DRAID performance goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34
Distributed RAID considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-35
Drive-Auto Manage/Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
Storwize V7000 Gen2 supports T10DIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Storwize V7000 overview block-level structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Storwize V7000 internal storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
Internal drive attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-41
Internal drive properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
Change internal drive attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-43
Create a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-44
Modifying the default extent size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-45
Adding capacity to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-46
Mdisks by pools view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-47
Advanced custom array creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48
Parent and child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-49
Creating a child pool from an existing mdiskgrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-50
Child pool attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-51
Benefit of a child pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-52
Child pool volumes and host view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-53
Child pool limitations and restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54
Storage provisioning topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-55
Storwize V7000 to back-end storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-56
Backend-storage partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57
Storwize V7000 WWNNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-58
Backend storage system WWNNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-59
Storwize V7000 to DS3500 with more than one WWNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-60
Logical unit number to managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-61
Disk storage management interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-63
DS3K Storage system WWNN and WWPNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-64
DS3K Storwize V7000 host group definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-65
DS3K LUNs assigned to Storwize V7000 host group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-66
External storage system (automatic discovery) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-67
Managed disks are SCSI LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-68
Renaming logical number units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-70
Example of storage system LUN details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-71
Best practice: Rename a storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-72
Rename controller using chcontroller CLI command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-73
Best practice: Rename an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-74
Rename MDisks using chmdisk CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-75
MDisk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-76
Storwize V7000 quorum index indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-77
MDisk properties: Quorum index indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-78
Distribute quorum disks across multiple controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-79
Best practice: Reassign the active quorum disk index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-80
Trademarks
The reader should recognize that the following terms, which appear in the content of this training
document, are official trademarks of IBM or other companies:
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide.
The following are trademarks of International Business Machines Corporation, registered in many
jurisdictions worldwide:
AIX 5L™ AIX® DB2®
developerWorks® DS4000® DS5000™
DS8000® Easy Tier® Express®
FlashCopy® FlashSystem™ GPFS™
HyperSwap® IBM FlashSystem® IBM Flex System®
IBM Spectrum™ IBM Spectrum Accelerate™ IBM Spectrum Archive™
IBM Spectrum Control™ IBM Spectrum Protect™ IBM Spectrum Scale™
IBM Spectrum Storage™ IBM Spectrum Virtualize™ Linear Tape File System™
Notes® Power Systems™ Power®
Real-time Compression™ Redbooks® Redpaper™
Storwize® System Storage DS® System Storage®
Tivoli® XIV®
Intel, Intel Xeon and Xeon are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of
Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware and the VMware "boxes" logo and design, Virtual SMP and VMotion are registered
trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other
jurisdictions.
Other product and service names might be trademarks of IBM or other companies.
Course description
IBM Storwize V7000 Implementation Workshop
Duration: 4 days
Purpose
This course is designed to leverage SAN storage connectivity by integrating a layer of virtualization
intelligence, the IBM Storwize V7000, to make storage application data access independent of
storage management functions and requirements. The focus is on the planning and
implementation tasks associated with integrating the Storwize V7000 into the storage area network.
It also explains how to:
• Centralize storage provisioning to host servers from common storage pools using internal
storage and SAN attached external heterogeneous storage.
• Improve storage utilization effectiveness using Thin Provisioning and Real-Time Compression.
• Implement storage tiering and optimize solid state drives (SSDs) or flash systems usage with
Easy Tier.
• Facilitate the coexistence and migration of data from non-virtualized to virtualized
environments.
• Utilize network-level storage subsystem-independent data replication services to satisfy backup
and disaster recovery requirements.
• This course offering is at the Storwize V7000 V7.6 level.
Important
This course consists of several independent modules. The modules, including the lab exercises,
stand on their own and do not depend on any other content.
Audience
This lecture and exercise-based course is for individuals who are assessing and/or planning to
deploy IBM System Storage networked storage virtualization solutions.
Prerequisites
• Introduction to Storage (SS01G)
• Storage Area Networking Fundamentals (SN71) or equivalent experience
• An understanding of the basic concepts of open systems disk storage systems and I/O
operations.
Objectives
After completing this course, you should be able to:
• Outline the benefits of implementing a Storwize V7000 storage virtualization solution.
• Differentiate between the Storwize V7000 2076-524 control enclosure and the 2076-312/324
expansion enclosure models.
• Outline the physical and logical requirements to integrate the Storwize V7000 system solution.
• Implement the Storwize V7000 GUI and CLI system setup to configure the V7000 systems.
• Summarize the symmetric virtualization process to convert physical storage into virtual storage
resources.
• Implement volume allocations and map volumes to SAN attached host systems.
• Summarize the advanced system management strategies to maintain storage efficiency and
enhance storage performance and reliability.
• Employ data migration strategies to the virtualized Storwize V7000 system environment.
• Implement Copy Services strategies to manage the Storwize V7000 system environment
remotely.
• Employ administration operations to maintain system availability.
Contents
Introduction to IBM Storwize V7000
Storwize V7000 hardware architecture
Storwize V7000 planning and zoning requirements
Storwize V7000 system initialization and user authentication
Storwize V7000 storage provisioning
Storwize V7000 host to volume allocation
Spectrum Virtualize advanced features
Spectrum Virtualize data migration
Spectrum Virtualize Copy Services: FlashCopy
Spectrum Virtualize Copy Services: Remote Copy
Storwize V7000 administration management
Agenda
Note
The following unit and exercise durations are estimates, and might not reflect every class
experience.
Day 1
(00:30) Course Introduction
(00:25) Unit 1: Introduction to IBM Storwize V7000
(01:00) Unit 2: Storwize V7000 hardware architecture
(00:45) Unit 3: Storwize V7000 planning and zoning requirements
(00:25) Unit 4: Storwize V7000 system initialization and user authentication
(00:45) Unit 5: Storwize V7000 storage provisioning
(00:10) Exercise 0: Lab environment overview
(00:15) Exercise 1: Storwize V7000 system initialization
(00:45) Exercise 2: Storwize V7000 system configuration
(00:20) Exercise 3: System user authentication
(00:20) Exercise 4: Provision internal storage
(00:15) Exercise 5: Examine external storage resources
Day 2
(00:20) Review
(01:15) Unit 6: Storwize V7000 host and volume allocation
(01:15) Unit 7: Spectrum Virtualize advanced features
(00:45) Exercise 6: Managing external storage resources
(00:45) Exercise 7: Host definitions and volume allocations
(00:30) Exercise 8: Access storage from Windows and AIX
(01:00) Exercise 9: Hybrid pool and Easy Tier
(00:30) Exercise 10: Access Storwize V7000 through iSCSI host
Day 3
(00:20) Review
(01:30) Unit 8: Spectrum Virtualize data migration
(00:45) Unit 9: Spectrum Virtualize Copy Services: FlashCopy
(00:45) Unit 10: Spectrum Virtualize Copy Services: Remote Copy
(00:25) Exercise 11: Volume dependencies and tier migrations
(00:30) Exercise 12: Reconfigure internal storage: RAID options
(00:30) Exercise 13: Thin provision and volume copy
(00:30) Exercise 14: Real-time Compression
(01:00) Exercise 15: Import Data Migration
Day 4
(00:20) Review
(01:15) Unit 11: Storwize V7000 administration management
(01:00) Exercise 16: Copy Services: FlashCopy and consistency groups
(00:30) Exercise 17: User roles and access
(01:00) Exercise 18: Migrate existing data: Migration Wizard
(00:30) Exercise 19: Easy Tier and STAT analysis
Class Review and Evaluation
Overview
This unit provides a high-level overview of the course deliverables and overall course objectives
that will be discussed in detail in this course.
Course overview
This is a 4-day lecture and exercise-based course for individuals who are
assessing and/or planning to deploy IBM System Storage networked
storage virtualization solutions.
Course prerequisites
• IBM Introduction to Storage (SS01G)
• IBM Storage Area Networking Fundamentals (SN71) or equivalent
experience
• A basic understanding of open systems disk storage systems and I/O operations
Course objectives
After completing this course, you should be able to:
• Outline the benefits of implementing a Storwize V7000 storage virtualization solution
• Differentiate between the Storwize V7000 2076-524 control enclosure model and the
2076-312/324 expansion enclosure models
• Outline the physical and logical requirements to integrate the Storwize V7000 system
solution
• Implement the Storwize V7000 GUI and CLI system setup to configure the V7000
systems
• Summarize the symmetric virtualization process to convert physical storage into virtual
storage resources
• Implement volume allocations and map volumes to SAN attached host systems.
• Summarize the advanced system management strategies to maintain storage efficiency,
enhance storage performance and reliability
• Employ data migration strategies to the virtualized Storwize V7000 system environment
• Implement Copy Services strategies to perform data replication between two virtualized
Storwize V7000 system environments
• Employ administrative operations to maintain system availability
Agenda: Day 1
• Course introduction
• Unit 1: Introduction to IBM Storwize V7000
• Unit 2: Storwize V7000 hardware architecture
• Unit 3: Storwize V7000 planning and zoning requirements
• Unit 4: Storwize V7000 system initialization and user authentication
• Unit 5: Storwize V7000 storage provisioning
▪ Exercise 1: Storwize V7000 system initialization
▪ Exercise 2: Storwize V7000 system configuration
▪ Exercise 3: Configure user authentication
▪ Exercise 4: Provision internal storage
▪ Exercise 5: Examine external storage resources
Agenda: Day 2
• Review
• Unit 6: Storwize V7000 host and volume allocation
• Unit 7: Spectrum Virtualize advanced features
▪ Exercise 6: Managing external storage resources
▪ Exercise 7: Host definitions and volume allocations
▪ Exercise 8: Access storage from Windows and AIX
▪ Exercise 9: Hybrid pools and Easy Tier
▪ Exercise 10: Access Storwize V7000 through iSCSI host
Agenda: Day 3
• Review
• Unit 8: Spectrum Virtualize data migration
• Unit 9: Spectrum Virtualize Copy Services: FlashCopy
• Unit 10: Spectrum Virtualize Copy Services: Remote Copy
▪ Exercise 11: Volume dependencies and tier migration
▪ Exercise 12: Reconfigure internal storage: RAID options
▪ Exercise 13: Thin provisioning and volume mirroring
▪ Exercise 14: Real-time compression
▪ Exercise 15: Migrate existing data: Import Wizard
Agenda: Day 4
• Review
• Unit 11: Storwize V7000 administration management
▪ Exercise 16: Copy Services: FlashCopy and consistency groups
▪ Exercise 17: User roles and access
▪ Exercise 18: Migrate existing data: Migration Wizard
▪ Exercise 19: Easy Tier and STAT analysis
• Class review and evaluation
Introductions
• Name
• Company
• Where you live
• Your job role
• Your current experience with the products and technologies in this
course
• Do you meet the course prerequisites?
• What you expect from this class
Class logistics
• Course environment
• Start and end times
• Lab exercise procedures
• Materials in your student packet
• Topics not on the agenda
• Evaluations
• Breaks and lunch
• Outside business
• For classroom courses:
▪ Lab room availability
▪ Food
▪ Restrooms
▪ Fire exits
▪ Local amenities
Overview
This unit provides an overview for each unit that will be discussed in detail in this course.
Unit objectives
• Summarize the units covered in this course
Figure: IBM Storwize family and related offerings: Storwize V7000 Unified, Storwize V7000,
Storwize V5000, Storwize V3700, and *FlashSystem 840, 900, and V9000.
These IBM market-leading Software Defined Storage solutions offer smarter storage for smarter
computing, with distinct characteristics and value for small and mid-size businesses up to major
enterprises.
IBM System Storage Storwize V7000 systems are virtualizing RAID storage systems that are
designed to store more data with fewer disk drives to reduce space, power and cooling demands,
and reduce operational cost.
IBM SAN Volume Controller, IBM’s first storage virtualization appliance for large enterprises, offers
high-availability and a wide range of sophisticated functions. IBM took this platform software and
shared it across this family of virtualized storage systems to fit businesses of all sizes. The Storwize
family offers a common code base and integrated set of advanced functions like Real-time
Compression and Easy Tier with an easy to used GUI.
Although not part of the Storwize family, IBM FlashSystems are supported on the same enhanced
functions and management tools.
Managed disk (MDisk): A SCSI logical unit (also known as a LUN) built from an internal or external
RAID array.
This table lists storage terminologies that will be used in this unit.
• The Storwize V7000 LFF Expansion Enclosure Model 12F supports up to twelve 3.5-inch
drives, while the Storwize V7000 SFF Expansion Enclosure Model 24F supports up to
twenty-four 2.5-inch drives.
• High-performance disk drives, high-capacity nearline disk drives, and flash (solid state) drives
are supported. Drives of the same form factor can be intermixed within an enclosure and LFF
and SFF expansion enclosures can be intermixed within a Storwize V7000 system.
▪ A Storwize V7000 Model 524 system scales up to 504 drives with the attachment of 20
Storwize V7000 expansion enclosures. Storwize V7000 systems can be clustered to help
deliver greater performance, bandwidth, and scalability. A Storwize V7000 clustered system
can contain up to four Storwize V7000 systems and up to 1,056 drives. Storwize V7000
Model 524 systems can be added into existing clustered systems that include previous
generation Storwize V7000 systems.
Unit 2 will discuss in detail the architecture of the IBM Storwize V7000 Gen2 model.
Physical planning
✓ Rack hardware configuration
✓ Cabling connection requirements
Logical planning
✓ Management IP addressing plan
✓ iSCSI IP addressing plan
✓ SAN zoning and SAN connections
✓ Backend storage subsystem configuration
✓ Storwize V7000 system configuration
In Unit 3, we will review the Storwize V7000 infrastructure physical planning requirements for
installing and cabling the hardware environment. We will also discuss the logical planning
requirements for defining system management access and implementing dual SAN fabric zoning
policies that include external storage devices and host systems, as well as optional zoning
requirements to support remote copy services. In addition, we will highlight best practices to
achieve performance as well as non-disruptive scalability.
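As an illustration only (this is not part of the course lab steps), the dual SAN fabric zoning policies
described above are typically implemented on the fabric switches. The following sketch assumes a
Brocade FOS fabric; the alias, zone, configuration names, and WWPNs are hypothetical placeholders.
# Create aliases for one Storwize V7000 node port and one host port (WWPNs are placeholders)
alicreate "V7K_N1P1", "50:05:07:68:xx:xx:xx:xx"
alicreate "HOST1_P1", "10:00:00:00:xx:xx:xx:xx"
# Zone the host port to the Storwize V7000 port
zonecreate "HOST1_V7K", "HOST1_P1; V7K_N1P1"
# Add the zone to a configuration and activate it (use cfgadd if the configuration already exists)
cfgcreate "PROD_CFG", "HOST1_V7K"
cfgsave
cfgenable "PROD_CFG"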
In Unit 4, we will discuss the procedures for initializing the Storwize V7000 Gen2 system using the
technician port, and for defining system information and configuration parameters using the Storwize
V7000 management GUI System Setup wizard.
Figure: Storage provisioning example showing an external storage system (LUN0 and LUN1 on
RAID 6), internal distributed RAID 6 and RAID 5 arrays, and a hybrid pool (APPLOG) that virtualizes
NL, SAS, and flash capacity of various sizes (10 TiB and 20 TiB).
In Unit 5, we will discuss how the Storwize V7000 manages physical (internal) storage resources
using different RAID levels and optimization strategies, and discuss its ability to consolidate disk
controllers from various vendors into pools of virtualized storage resources.
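For reference, a minimal CLI sketch of this provisioning flow (the pool name, extent size, and drive
IDs below are examples only; the management GUI performs the same steps):
# Create a storage pool with a 1 GB extent size (the extent size shown is an example)
mkmdiskgrp -name Pool0 -ext 1024
# Build a traditional RAID 6 array from six internal drives and add it to the pool
mkarray -level raid6 -drive 0:1:2:3:4:5 Pool0
# (On V7.6 code, a distributed array could be created with mkdistributedarray instead.)
# List the managed disks (internal arrays and external LUNs) that belong to the pool
lsmdisk -filtervalue mdisk_grp_name=Pool0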
With the IBM Storwize V7000, clients can create various host objects to support specific
configurations such as Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI.
In Unit 6, we will discuss each of the supported host interfaces, which include 8 gigabit (Gb)
and 16 Gb FC, 10 Gb Fibre Channel over Ethernet (FCoE), and Internet Small Computer
System Interface (iSCSI). In addition, we will discuss volume allocations, creating host-accessible
storage provisioned from a storage pool.
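As a brief, hedged illustration of the CLI equivalents (the host name, WWPN, pool, and volume
names below are placeholders, not lab values):
# Define an FC host object by its WWPN (an iSCSI host would use -iscsiname with its IQN instead)
mkhost -name AIX_host1 -fcwwpn 10000000C9XXXXXX
# Create a 50 GB volume in an existing pool and map it to the host
mkvdisk -mdiskgrp Pool0 -size 50 -unit gb -name AIX_vol1
mkvdiskhostmap -host AIX_host1 AIX_vol1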
Figure: Storage efficiency features.
• Thin provisioning: Dynamic growth, recycle waste; purchase only the storage you need when you
need it. Without thin provisioning, pre-allocated space is reserved whether the application uses it
or not. With thin provisioning, applications can grow dynamically, but only consume space they
are actually using.
• Real-time Compression (RtC): Store less; reduce data storage ingestion.
• Flash optimized with IBM Easy Tier: Perform economically; meet and exceed business service
levels.
IBM Storwize V7000 offers advanced software features that are based on capabilities in IBM
Spectrum Virtualize software, which has its origins in IBM SAN Volume Controller (SVC), and they
are included in the base price.
Unit 7 introduces the basic concepts of dynamic data relocation and storage optimization features
and how each can be implemented in the IBM Storwize V7000 environment (a brief CLI sketch
follows the list below).
The following functions for storage efficiency will be discussed:
• Thin provisioning
• Real-time compression
• Easy Tier
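A minimal CLI sketch of these storage efficiency features (pool and volume names are examples;
the exact options available can vary by software level):
# Thin-provisioned volume: present 100 GB, initially allocate only 2% and auto-expand
mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name thin_vol1 -rsize 2% -autoexpand
# Compressed volume (requires the Real-time Compression license and compression hardware)
mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name comp_vol1 -rsize 2% -autoexpand -compressed
# Easy Tier operates per pool; check its state in the pool attributes
lsmdiskgrp Pool0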
Figure: Data migration. Moving workload (data extents) from existing storage systems (such as
NetApp N series, EMC, HP, HDS, Sun, and IBM DS3000) to the Storwize family, XIV, DS8000,
Storwize V7000, or FlashSystem 900 in order to:
✓ Balance usage distribution
✓ Move data to a lower-cost storage tier
✓ Expand or convert to new storage systems; decommission old systems
✓ Optimize flash with Easy Tier
Unit 8 of the IBM Storwize V7000 Implementation Workshop covers the data migration functionality
that enables you to seamlessly integrate the Storwize V7000 into existing storage environments,
including the ability to transfer data to and from other storage systems for consolidation or
decommissioning.
In Unit 9 and Unit 10 of the IBM Storwize V7000 Implementation Workshop course, we will
describe the Advanced Copy Services functions that are enabled by IBM Spectrum Virtualize
software running inside IBM Storwize family products. These units include the following topics:
• Volume Mirroring, an alternative method of migrating volumes between storage pools, internally
or externally (two close sites).
• FlashCopy allows the administrator to create copies of data for backup, parallel processing,
testing, and development.
• Metro Mirror is a Synchronous Mirror Copy that ensures updates are committed at both the
primary and the secondary before the application considers the updates complete. Therefore,
the secondary is fully up to date if it is needed in a failover.
• Global Mirror is an Asynchronous Mirror Copy, which means the write is acknowledged to the
application as complete before it is committed at the secondary. Therefore, on a failover
certain updates (data) might be missing at the secondary. Global Mirror works to a Recovery
Point Objective (RPO), which defines the amount of acceptable data loss in the event of a
disaster. It supports up to 250 ms of Global Mirror round-trip latency and distances of up to
20,000 km.
This also includes native IP replication, which supports remote mirroring over IP communication on
IBM Storwize family systems by using Ethernet communication links.
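For orientation only, a hedged CLI sketch of these Copy Services (volume, map, relationship, and
remote system names are placeholders; a remote copy partnership must already be defined):
# FlashCopy: map a source volume to a target and start a point-in-time copy
mkfcmap -source APP_vol1 -target APP_vol1_copy -name fcmap_app1 -copyrate 50
startfcmap -prep fcmap_app1
# Remote Copy: create a Metro Mirror (synchronous) relationship to a partner system
mkrcrelationship -master APP_vol1 -aux APP_vol1_DR -cluster REMOTE_SYS -name rcrel_app1
# (Add the -global flag to mkrcrelationship for Global Mirror, the asynchronous variant.)
startrcrelationship rcrel_app1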
As part of the final unit (Unit 11) for this course, we will discuss the essentials for administration,
maintenance, and serviceability of the IBM Storwize V7000 storage system. This unit reviews how
management GUI events are reported by the system, highlights procedures for troubleshooting and
handling components for service, such as using the Directed Maintenance Procedure (DMP), and
reviews the implementation of concurrent firmware code updates, including management support
from the CLI.
In addition, we will take a look at system maintenance options that can be configured to perform
system backup, Call Home notifications, and how the Service Assistant Tool can be used for
troubleshooting if the flash nodes are inaccessible or when an IBM Support engineer directs you to
use it.
And for those storage administrators that are mobile, we will also highlight the features of the IBM
Storage Mobile Dashboard application, which provides basic monitoring of the health and
performance status of IBM storage systems.
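As a small illustrative example of the CLI side of these administration tasks (this assumes
superuser access on the configuration node; file names and output are not shown):
# Back up the system configuration metadata to the configuration node
svcconfig backup
# List the generated files (the backup produces svc.config.backup.xml and related files)
lsdumps
# Display the event log that the management GUI monitors
lseventlog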
Unit summary
• Summarize the units covered in this course
Overview
This unit introduces the IBM Storwize V7000 2076 hardware architecture, detailing the 2076-524
control enclosure components and features, including the optional Storwize V7000 2076-12F/24F
expansion enclosures. This unit will also discuss the benefits of scaling the Storwize V7000
for both performance and capacity.
References
Implementing the IBM Storwize V7000 Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Identify component features of the IBM Storwize V7000 2076-524
control enclosure model
• Distinguish between the IBM Storwize V7000 2076-12F and 2076-24F
expansion enclosures
• Characterize IBM Storwize V7000 Gen2 scalability requirements to
incrementally increase storage capacity and performance
This topic introduces the Storwize V7000 2076 next generation hardware components of the IBM
Storwize V7000 disk system.
Storwize V7000 is a virtualized, enterprise-class storage system that provides the foundation for
implementing an effective storage infrastructure; providing the latest storage technologies for
unlocking the business value of stored data, including virtualization and Real-time Compression,
and is designed to deliver outstanding efficiency, ease of use and dependability for organizations of
all sizes.
The IBM Storwize V7000 2076 is available in the following models:
• Storwize V7000 SFF Control Enclosure Model 524
• Storwize V7000 LFF Expansion Enclosure Model 12F
• Storwize V7000 SFF Expansion Enclosure Model 24F
All models are delivered in a 2U, 19-inch rack mount enclosure and include a three-year warranty
with customer replaceable unit (CRU) and on-site service. Optional warranty service upgrades are
available for enhanced levels of warranty service.
This unit will discuss features and function of each enclosure option.
Physical link or lane: Within a single SAS cable are four physical links, each capable of 6 Gbps.
This topic introduces the hardware architecture of the Storwize V7000 2076-524 control enclosure
and the Storwize V7000 2076-12F and 24F expansion enclosures.
Figure: Storwize V7000 2076-524 control enclosure front view (2U) with LED indicator panel.
The IBM Storwize V7000 2076-524 Gen2 control enclosure is a midrange virtualization RAID
storage subsystem that employs the IBM Spectrum Virtualize software engine. The Storwize V7000
control enclosure is packaged in a 2U, 19-inch rack-mount enclosure that installs in a standard
equipment rack and contains twenty-four front-load 2.5-inch drive slots.
Components located in the front of the unit are redundant and hot-swappable.
Figure: Storwize V7000 Gen2 enclosure assembly (exploded view): canisters, PSUs, fan cage,
enclosure chassis, midplane, drive cage, and drives.
This diagram illustrates all the moving parts of the Storwize V7000 Gen2 model. As you can see,
the front of the chassis is basically unchanged. The major change applies to the rear of the chassis,
which was redesigned to make room for the more powerful node canisters and power supply units.
Also, you will notice that a separate fan assembly is now part of the structure and is no longer
housed inside the power supply.
Figure: Storwize V7000 Gen2 node canister logical block diagram: a 1.9 GHz Intel Ivy Bridge
(E5-2628L-V2) processor; PLX cross-card communications; high-speed PCIe 3 links (8 lanes, 1 GB
full duplex) to the HBAs (8 Gb/16 Gb FC or 10 GbE); quad 1 GbE ports; a 128 GB boot SSD on DMI;
a TPM; a mezzanine connector; and an SPC SAS expander (12 Gb per phy) driving SAS chain 0
(drives in the control enclosure) plus SAS chains 1 and 2 (4 phys each, to the expansion enclosures).
This visual provides a logical block diagram of the components and data flow of the Storwize V7000
Gen2 hardware (per control canister):
• Same form factor as existing Storwize V7000 control enclosure. However, each canister in
Gen2 now occupies a full 2U inside the 2U chassis.
• Each controller has a modern 8-core 64-bit Intel Ivy Bridge processor
• 32 GB RAM with the ability to scale up to 64 GB for Real-time Compression
• On-board hardware compression engine that is built into the system board via the Coleto
Creek compression accelerator chipset to support the default pass-through adapter enablement
and encryption. The Gen2 system board also supports the attachment of the optional
Compression Accelerator card. Each Intel QuickAssist compression acceleration card is also
based on the Coleto Creek chipset.
• Two 12 Gb/s SAS drive expansion ports per node canister to support the attachment of the
optional Storwize V7000 2076-12F large form factor (LFF) and 2076-24F small form factor (SFF)
expansion enclosures.
• Three PCIe Gen 3 slots (2x Host Interface, 1x for additional hardware compression).
• Two USB ports for debug and emergency access
• One battery (moved from PSU into canister)
The SAS expander provides the drive attachment for the drives within the control enclosure.
The Storwize V7000 Gen2 controller uses SAS Chain 0 to manage the drives within the enclosure. The
controller disk drives connect to the Gen2 chassis through the midplane interconnect for their
power. Also in the control enclosure, the midplane interconnect is used for the internal control and
IO paths to the chassis IO bays.
All optionally attached expansion enclosures are connected using SAS Chain 1 and SAS Chain 2.
Rear connectors
The Storwize V7000 Gen2 controller adopts the processor subsystem of the IBM SAN Volume
Controller DH8 node with specific hardware modifications that match the needs of the Storwize
V7000 Gen2.
The Storwize V7000 Gen2 processor subsystem incorporates dual Socket-R (LGA2011) sockets to
support the Intel Xeon Ivy Bridge eight-core processors, with up to an 8 GT/s QPI link between the
two processors. PCIe Gen3 provides 8 lanes per slot, which gives you about 8 GB of data per second
per slot.
The Storwize V7000 Gen2 control enclosure also combines Intel QuickAssist Technology with an
Intel architecture core, supporting the industry's first hardware compression accelerator. This feature
provides dedicated processing power and greater throughput for compression.
Figure: Storwize V7000 2076-524 rear view: two node canisters (Canister 1 and Canister 2), each
with 3x 1 GbE ports, 2x 12 Gbps SAS expansion ports, and PCIe3 slots (ports 1-3), plus two ac
power supply units (PSUs).
The 2U rack-mount form factor of the 2076-524 allows the Storwize V7000 node to accommodate a
mix of different supported network adapters and compression accelerators. IBM Storwize V7000
Gen2 node canisters were redesigned with increased height, which now allows up to three
half-height PCIe3 slots per canister to be installed for I/O connectivity.
Each Storwize V7000 node canister comes standard with:
• Four 1 Gb on-board Ethernet ports. Ethernet ports 1-3 are used for 1 Gb iSCSI connectivity and
system management. Ports are numbered from left to right beginning with Ethernet port 1.
• The fourth Ethernet port is a dedicated Technician port (T-Port) used to initialize the system or
redirect to the Service Assistant (SA) tool.
• Two 12 Gb SAS ports for Storwize V7000 expansion enclosure attachment
• Two USB ports (management port - not in use)
▪ The USB ports and the Technician port are not used during normal operation. Connect a
device to any of these ports only when you are directed to do so by a service procedure or
by an IBM service representative.
• Two ac power supplies and cooling units
Each Storwize V7000 2076-524 node canister has indicator LEDs that provide status information
about the canister.
Each SAS port has a link LED and a fault LED:
• Link LED OFF: There is no link connection on any phy (lane); the connection is down.
• Link LED ON (green): There is a connection on at least one phy.
• Fault LED OFF: No fault; all four phys have a link connection.
• Fault LED ON: This can indicate a number of different error conditions:
▪ One or more, but not all, of the 4 phys are connected.
▪ Not all 4 phys are at the same speed.
▪ One or more of the connected phys are attached to an address different from the others.
Each canister has a system status LED panel that is the same LED panel indicator located on the
front of the chassis.
Power status LED (green):
• OFF No power is available or power is coming from the battery.
• SLOW BLINK Power is available but the main CPU is not running; called standby mode.
• FAST BLINK In self test.
• ON Power is available and the system code is running.
Status LED (green):
• OFF The system code has not started. The system is off, in standby, or self test.
• BLINK The canister is in candidate or service state. It is not performing I/O. It is safe to remove
the node.
• FAST BLINK The canister is active, able to perform I/O, or starting.
• ON The canister is active, able to perform I/O, or starting. The node is part of a cluster.
Canister fault LED (amber):
• OFF The canister is able to function as an active member of the system. If there is a problem on
the node canister, it is not severe enough to stop the node canister from performing I/O.
• BLINK The canister is being identified. There might or might not be a fault condition.
• ON The node is in service state or an error exists that might be stopping the system code from
starting. The node canister cannot become active in the system until the problem is resolved.
You must determine the cause of the error before replacing the node canister. The error may be
due to insufficient battery charge; in this event, resolving the error simply requires waiting for
the battery to charge.
Each canister also has battery status LEDs that indicate the following:
• Battery status LED (green):
▪ OFF Indicates the battery is not available for use (e.g., battery is missing or there is a fault
in the battery).
▪ FAST BLINK The battery has insufficient charge to perform a fire hose dump.
▪ BLINK The battery has sufficient charge to perform a single fire hose dump.
▪ ON The battery has sufficient charge to perform at least two fire hose dumps.
• Battery fault LED (amber):
▪ OFF No fault. An exception to this would be where a battery has insufficient charge to
complete a single fire hose dump. Refer to the documentation for the Battery status LED.
▪ ON There is a fault in the battery.
Figure: Example identifying the installation locations (slots and ports) for the I/O adapter cards.
IBM Storwize V7000 Gen2 offers enhanced I/O connectivity with the support of two riser card slots
(3 PCIe Gen3 slots). The Storwize V7000 2076-524 model does not ship with any I/O connectivity
cards. However, customers can select multiple add-on adapters for driving host I/O and offloading
compression workloads.
The Storwize V7000 node provides link speeds of 2, 4, 8 and 16 Gb with the following optional
support:
Slot 1 is only used for the on-board compression hardware engine or you can replace it with the
Compression Accelerator card.
Slots 2 and 3 support the following:
• 16 Gb FC four port adapter pair for 16 Gb FC connectivity (two cards each with four 16 Gb FC
ports and shortwave SFP transceivers)
• 16 Gb FC two port adapter pair for 16 Gb FC connectivity (two cards each with two 16 Gb FC
ports and shortwave SFP transceivers)
▪ The quantity of 16 Gb FC adapter features can be two
• 8 Gb FC adapter pair for 8 Gb FC connectivity (two cards each with four 8 Gb FC ports and
shortwave SFP transceivers)
- The quantity of 8 Gb FC adapter features can be two
• 10 Gb Ethernet adapter pair for 10 Gb iSCSI and FCoE connectivity (two cards each with four
10 Gb Ethernet ports and SFP+ transceivers)
▪ The quantity of 10 Gb Ethernet feature cannot exceed one.
A minimum of one I/O adapter feature is required. Effectively, with the optional card in place,
customers would get 2 Gb pipe from 1 Gb Ethernet ports + 32 Gb of pipe from FC adapter (16 Gb
adapters) and additional 20 Gb of pipe from converged network adapter.
The 8 Gbps FC HIC is a high-performance 4-port adapter that features an 8-lane native
PCI-Express Gen-2 link, enabling full-duplex operation simultaneously on all ports. The Tachyon
QE8 is an integrated single chip solution ideal for a variety of high-performance I/O applications.
The 8 Gb FC 4-port HIC supports up to eight ports in a single system configuration, and up to 32
ports in a 4-node cluster.
The 16 Gb FC HIC supports up to eight ports in a single system configuration, and up to 32 ports in
a 4-node cluster. The 16 Gb node hardware requires the Spectrum Virtualize Family Software V7.4
to be installed. The 16 Gb HIC is supported when connected to supported SAN fabrics.
Review the System Storage Interoperation Center (SSIC) for supported 16 Gbps Fibre Channel
configurations, as it can only be supported using Brocade 8 Gb or 16 Gb fabrics and Cisco 16 Gb
fabrics. Direct connections to Brocade 2 and 4 Gbps or Cisco 2, 4 or 8 Gbps Fabrics are currently
not supported. Other configured switches, which are not directly connected to the 16 Gbps Node
hardware can be any supported fabric switch as currently listed in SSIC.
Each FC port can have up to an 8 or 16 Gbps SW SFP transceiver installed. Each transceiver
connects to a host or Fibre Channel switch with an LC-to-LC Fibre Channel cable. Each Fibre
Channel port has two green LED indicators. The link-state LED [2] is above the speed-state LED
[3] for each port. Consider the LEDs as a pair to determine the overall link state, which is listed in
the table.
Storwize V7000 offers clients 10 Gb iSCSI/FCoE connectivity using the 4-port 10 Gb iSCSI/FCoE
host interface adapter, which enables Storwize V7000 connections to servers for host attachment and
to other Storwize V7000 systems, using fibre optic cables to connect them to your 10 Gbps
Ethernet or FCoE SAN.
This type of configuration would require extra IPv4 or extra IPv6 addresses for each of those 10
GbE ports used on each node canister. These IP addresses are independent of the system
configuration IP addresses which allows the IP-based hosts to access Storwize V7000 managed
Fibre Channel SAN-attached disk storage.
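A hedged CLI sketch of assigning such a port IP address (the node name, port ID, and addresses
below are placeholders, not values from the lab environment):
# Assign an iSCSI IP address to Ethernet port 3 of node1 (independent of the system IP address)
cfgportip -node node1 -ip 192.168.70.21 -mask 255.255.255.0 -gw 192.168.70.1 3
# Verify the configured port IP addresses
lsportip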
FC and FCoE connectivity options:
• Hosts can connect to Storwize V7000 using FC ports or FCoE ports.
• Storwize V7000 can connect to storage systems using FC ports or FCoE ports.
• In a Storwize V7000 Gen2 system, communication among I/O groups can use any combination
of FC and FCoE ports.
• For remote mirroring, communication between systems can use any combination of FC and
FCoE ports.
Storwize V7000 Gen2 supports 10 Gb Ethernet Fibre Channel over Ethernet (FCoE) fabric
configuration only if the optional 10 Gb Ethernet host interface card is installed. This 4-port card can
be used simultaneously for both FCoE and iSCSI server attachment. It also supports migration from
Fibre Channel networks.
A Fibre Channel forwarder (FCF) switch has both 10 Gb ports and Fibre Channel (FC) ports. The
terms FCF and FCoE switch are used interchangeably. It provides both Ethernet switching
capability and FC switching capability in a single switch. A pure Fibre Channel over Ethernet
(FCoE) switch has 10 Gb ports and FCoE switching capability.
Storwize V7000 supports FCoE with 10 Gb Ethernet ports; Gen1 models can be upgraded to this
support without disruption.
Green LED / Amber LED states:
• OFF / OFF: The port is not configured in flex hardware, or the port is not active in the current
profile. For example, in the 2 x 16 Gbps profile, two ports are not active.
• OFF / ON: The port is configured, but is not connected or the negotiation of the link failed.
• ON / OFF: The link is up and is running at the configured speed.
• ON / ON: The link is up and is running at less than the configured speed.
The 10 Gbps host interface card has four Ethernet ports, none of which are used for system
management. The ports are named 1, 2, 3 and 4, from top to bottom when installed in a slot. Each
port has two LED indicators, one green and one amber.
The table lists the LED states and their meanings.
The IBM Storwize V7000 Gen2 2076-524 model comes standard with integrated, hardware-assisted
compression acceleration, provided by a compression pass-through adapter that is installed in slot 1.
This special compression pass-through adapter is standard on each node canister. With a
single on-board card, the maximum number of compressed volumes per I/O group is 200.
Enabling compression on the Storwize V7000 Gen2 does not affect non-compressed host-to-disk
I/O performance. You can replace the on-board compression card with an Intel-based “Quick
Assist” Compression Accelerator card. With the addition of a second Quick Assist card, the
maximum number of compressed volumes per I/O group is 512. Real-time Compression workloads
can further benefit using compression with dual RACEs and two acceleration cards for best
performance.
Compressed volumes are a special type of volume where data is compressed as it is written to
disk, saving additional space. To use the compression function outside of the internal use, you must
obtain the IBM Real-time Compression license.
This visual lists the requirements and limitations of I/O card combinations:
• Slot 1 is dedicated to support compression pass-through or compression Accelerator cards.
• Slot 2 and slot 3 can be used to support FC host connectivity using 8 Gb FC HIC or 16 Gb FC
HIC, or 10 Gb Ethernet card for both iSCSI or FCoE connectivity (only one 10 GbE card per
node)
Unlike the Storwize V7000 Gen1 battery pack, which was located in the power supply unit, the
Storwize V7000 Gen2 contains an integrated battery pack within each node canister. Its main task is
to allow the controllers to save the current configuration and the write cache to the internal flash
drive in case of a power failure. This means that the control enclosure now provides battery backup
to support a non-volatile write cache and protect persistent metadata.
For control enclosure power supply units, the battery integrated in the node canister continues to
supply power to the node in the event of a failure.
Storwize V7000 Gen2 expansion canisters do not cache volume data or store state information in
volatile memory. Therefore, expansion canisters do not require battery power. If ac power to both
power supplies in an expansion enclosure fails, the enclosure powers off. When ac power is
restored to at least one power supply, the enclosure restarts without operator intervention.
The battery is maintained in a fully charged state by the battery subsystem. At maximum power, the
battery can save critical data and state information in two firehose dumps (back-to-back power
failures). If power to a node canister is lost, saving critical data starts after a five-second wait (If the
outage is shorter than five seconds, the battery continues to support the node and critical data is
not saved.). During this process, the battery pack powers the processor and memory for a few
minutes while the Storwize code copies the memory contents to the onboard SSD. The node
canister stops handling I/O requests from host applications. The saving of critical data runs to
completion, even if power is restored during this time. The loss of power might be because the input
power to the enclosure is lost, or because the node canister is removed from the enclosure.
When power is restored to the node canister, the system restarts without operator intervention. How
quickly it restarts depends on whether there is a history of previous power failures. The system
restarts only when the battery has sufficient charge for the node canister to save the cache and
state data again. A node canister with multiple power failures might not have sufficient charge to
save critical data. In such a case, the system starts in service state and waits to start I/O operations
until the battery has sufficient charge.
Reconditioning the battery ensures that the system can accurately determine the charge in the
battery.
As a battery ages, it loses capacity. When a battery no longer has capacity to protect against two
power loss events, it reports the battery end of life event and it should be replaced.
A reconditioning cycle is automatically scheduled to occur approximately once every three months,
but reconditioning is rescheduled or canceled if the system loses redundancy. In addition, a
two-day delay is imposed between the recondition cycles of the two batteries in one enclosure.
The Storwize V7000 Gen2 control enclosure has a new component feature called a Fan Module.
The fan modules replaced the fans that were previously housed inside the Gen1 power supplies.
Each Storwize V7000 Gen2 control enclosure contains two fan modules for cooling purposes. Each
fan module contains eight individual fans in four banks of two. The fan module cage has been
strategically placed between the node canisters and the midplane so that it continues to cool the
drives.
The fan modules are designed for ease of servicing, as they can be easily removed using the two
cam levers. It is important that the fan module be reinserted into the Storwize V7000 Gen2 within 3
minutes of removal to maintain adequate system cooling.
The fan module as a whole is a replaceable component, but the individual fans are not.
You can also use the lsenclosurefanmodule command to view a concise or detailed status of
the fan modules that are installed in the Storwize V7000 Gen2.
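For example (the base command is named in the course text; the detailed-view syntax shown here
is an assumption based on similar lsenclosure* commands, and output columns depend on the
installed code level):
# Concise view of all fan modules in the system
lsenclosurefanmodule
# Detailed view of fan module 1 in enclosure 1 (adjust the IDs for your configuration)
lsenclosurefanmodule -fanmodule 1 1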
The Storwize V7000 model 2076-524 control enclosure contains two redundant, 1200-watt,
hot-swappable, 100 - 240 V ac auto-sensing, high-efficiency power supply modules that are located
at the rear of the unit.
A Gen2 power supply has no power switch. A power supply is active when the ac power cord is
connected to the power connector and to a power source. The Storwize V7000 Gen2 integration of
a redundant battery backup system eliminates the need for an external rack-mount uninterruptible
power supply (UPS), optional power switch, and related cabling.
If a power failure should occur, the system can fully operate under one power supply. However, it is
highly recommended that you attach each of the two power supplies in the enclosure to separate
power circuits or to separate uninterruptible power supply (UPS) battery-backed power source.
Remember to replace a failed power supply as soon as possible; it is never advised to run the
system using only one power supply. The Storwize V7000 management GUI and alerting systems
(such as SNMP and event notifications) will report a power supply fault. The failed power supply can
be replaced without software intervention by following the directed maintenance procedure as
instructed from the management GUI.
• Expansion enclosure
▪ Expansion canisters
▪ Supported drive form factors
▪ Power and cooling
• Scalability
This topic introduces the Storwize V7000 2076-12F and 2076-24F expansion enclosure hardware
components.
Storwize V7000 offers both large form factor (LFF) and small form factor (SFF) 12 Gb SAS
expansion enclosure models in a 2U, 19-inch rack mount enclosure. The Storwize V7000 LFF
Expansion Enclosure Model 2076-12F supports up to twelve 3.5-inch drives, while the Storwize
V7000 SFF Expansion Enclosure Model 2076-24F supports up to twenty-four 2.5-inch drives.
High-performance disk drives, high-capacity nearline disk drives, and flash (solid state) drives are
supported. Drives of the same form factor can be intermixed within an enclosure and LFF and SFF
expansion enclosures can be intermixed within a Storwize V7000 system.
The 2076-12F/24F is only supported with the Storwize V7000 2076-524 controller. The 2076 Gen2
expansion enclosures are presented within the GUI as internal drives just like the Storwize V7000
control enclosure.
Rear
2 x hot-swap redundant power supplies
Both Storwize V7000 Gen2 expansion enclosures contain two vertical expansion canisters, two 12
Gb SAS port connectors per canister, and an LED panel located at the rear of the enclosure.
Each expansion enclosure contains two hot-swap redundant 800-watt power supplies. These
redundant power supplies operate in parallel with one continuing to power the canisters if the other
fails. Even though these are hot-swappable components they are intended to be used only when
your system is not active (no I/O operations). Therefore do not remove a power supply unit from an
active enclosure until a replacement power supply unit is ready to be installed. If a power supply
unit is not installed then airflow through the enclosure is reduced and the enclosure can overheat.
Install the replacement power supply unit within 5 minutes of removing the faulty unit.
The two 12 Gb SAS ports on each canister are side by side and are numbered 1 on the left and 2
on the right. Port 1 is used to connect to a SAS expansion port on a node canister or port 2 of
another expansion canister.
The canister is ready with no critical errors when Power is illuminated, Status is illuminated, and
Fault is off.
When both ends of a SAS cable are inserted correctly, the green link LEDs next to the connected
SAS ports are lit.
The purpose of the 12 Gb SAS card is to attach the Storwize V7000 expansion enclosure in order to
expand system capacity. The SAS ports of both units are connected by using SAS connectors.
The 12 Gb SAS port is a third-generation SAS interface that offers double the performance rate
which allows the SAS infrastructure to deliver bandwidth that exceeds that of PCI Express 3.0. The
improved bandwidth backed by I/O processing capabilities to maximize link utilization supports
increased scaling of traditional HDDs as well as improved SSD performance.
Each 12 Gb SAS port as well as the SAS cable contains four physical (PHY) lanes. Each lane uses
multiple links (as the 6 Gb SAS technology) for full duplex transmission to transmit and receive
higher date rates up to 4800 Mb (48 Gb).
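As a rough illustration of the lane aggregation just described, the following minimal Python sketch
(assuming 4 PHY lanes per connector, 12 Gbps per lane, and the common 10 bits-per-byte rule of
thumb) shows how the aggregate figures are derived.

```
# Rough illustration of 12 Gb SAS lane aggregation.
# Assumed values: 4 PHY lanes per connector and 12 Gbps per lane.
LANES_PER_CONNECTOR = 4
GBPS_PER_LANE = 12

aggregate_gbps = LANES_PER_CONNECTOR * GBPS_PER_LANE      # 48 Gbps per connector
approx_mbps = aggregate_gbps * 1000 / 10                  # ~4800 MBps, using the rough
                                                          # 10 bits-per-byte rule of thumb

print(f"{aggregate_gbps} Gbps aggregate, roughly {approx_mbps:.0f} MBps per connector")
```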
Above each port is a green LED that is associated with each PHY (eight LEDs in total). The LEDs
are numbered 1 - 4. Each LED indicates activity on its PHY. For example, when traffic starts it goes
over PHY 1. If that lane is saturated, the next PHY starts working. If all four PHY LEDs are flashing,
the backend is fully saturated.
The 12 Gb SAS also provides investment protection through backward compatibility with 3 Gb and
6 Gb SAS.
2.5-inch Small Form Factor (SFF) drives:
• SSD 6 Gb SAS: 200/400 GB
• SSD 12 Gb SAS: 200/400/800 GB
• Flash 12 Gb SAS: 3.2 TB
• 10K 6 Gb SAS: 300/600/900 GB, 1.2 TB
• 10K 12 Gb SAS: 1.8 TB
• 15K 6 Gb SAS: 146/300 GB
• 15K 12 Gb SAS: 300/600 GB
• 7.2K NL-SAS: 1 TB
3.5-inch Large Form Factor (LFF) drives:
• 7.2K NL-SAS: 2 TB / 3 TB / 4 TB
• 7.2K 12 Gb NL-SAS: 2 TB / 6 TB / 8 TB
No restrictions on mixing of drive types with the same form factor within the same enclosure.
This table lists the available drive options for the Storwize V7000 2076-12F and 2076-24F expansion
enclosures. The drives listed were available at the time of this publication.
Both the Gen2 control enclosure and the expansion enclosures support the same disk drives and
form factors as listed here. Drive options are SAS drives, nearline SAS drives, solid-state drives,
and flash drives. The Gen2 12 Gb SAS expansion enclosures support twelve 3.5-inch large form
factor (LFF) or twenty-four 2.5-inch small form factor (SFF) drives. All drives are dual-ported and
hot-swappable. Drives of the same form factor can be intermixed within an enclosure, and LFF and
SFF expansion enclosures can be intermixed behind the SFF control enclosure.
This table lists the available cable components for the Storwize V7000 2076-12F and 2076-24F
expansion enclosures. The 2076 controller and expansion enclosures are connected using IBM
12 Gb SAS cables (mSAS HD to mSAS HD), available in 0.6 m, 1.5 m, 3 m, and 6 m lengths.
Check the Interoperability Guide for the latest supported options. Cable requirements are discussed
in the installation unit.
The Storwize V7000 models 2076-24F and 2076-12F contain two 800 W power supply units. The
power supply has no power switch. A power supply is active when a power cord is connected to the
power connector and to a power source. The Storwize V7000 Gen2 integration of a redundant
battery backup system eliminates the need for an external rack-mount uninterruptible power supply
(UPS), optional power switch, and related cabling. All expansion enclosures power on
automatically when AC power is applied or restored after an interruption.
• Expansion enclosure
• Scalability
This topic discusses the scalability configurations used to grow Storwize V7000 system capacity for
greater performance.
• Expands to 2x today's V7000 Gen1
• Maximum configuration supports up to 1056 drives
  ▪ All SFF = 44 enclosures, just over 2 racks
  ▪ All LFF = 84 enclosures, 4 racks
• Storwize V7000 can be added into existing clustered systems, including Gen1 systems.
Storwize V7000 model 2076-524 offers scalable growth and the flexibility to start small and grow as
needed. You can use the Storwize V7000 Gen2 model as a stand-alone system with its choice of
12-bay and 24-bay enclosures. The IBM Storwize V7000 solution can scale up to 480 3.5-inch or up
to 504 2.5-inch serial-attached SCSI (SAS) drives with the attachment of twenty expansion
enclosures (intermixing 12-bay and 24-bay expansions, or even intermixing HDDs and SSDs within
an enclosure). For an even greater number of drives, you can scale up to 1,056 drives with four
Storwize V7000 clustered systems. In a clustered system, the Storwize V7000 can provide up to
8 PiB raw capacity (with 6 TB nearline SAS disks), delivering greater performance, bandwidth, and
scalability.
Storwize V7000 Model 524 systems can be added into existing clustered systems that include
previous generation Storwize V7000 systems.
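To make the enclosure and drive arithmetic behind these maximums easier to follow, here is a
minimal Python sketch; the enclosure counts and the 24 drives per SFF enclosure are taken from
the configurations quoted in this unit.

```
# Sketch of the clustered-system drive math quoted in this topic
# (each control enclosure is a 24-bay SFF enclosure).

def max_drives(control_enclosures, expansions, drives_per_expansion):
    """Total drives in a clustered system."""
    return control_enclosures * 24 + expansions * drives_per_expansion

# Four-way clustered system with 40 all-SFF (24-drive) expansion enclosures:
print(max_drives(4, 40, 24))   # 1056 drives, 44 enclosures in total

# Dual clustered system with 40 all-SFF expansion enclosures:
print(max_drives(2, 40, 24))   # 1008 drives
```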
Failure example: Node 1 offline
The IBM Storwize V7000 node canisters in a clustered system operate as a single system and
present a single point of control for system management and service. System management and
error reporting are provided through an Ethernet interface to one of the node canisters in the
system, which is called the configuration node canister.
The configuration node canister is a role that any node canister can take. If the configuration node
fails, the system chooses a new configuration node. This action is called configuration node
failover. The new configuration node takes over the management IP addresses. Thus you can
access the system through the same IP addresses although the original configuration node has
failed. During the failover, there is a short period when you cannot use the command-line tools or
management GUI.
The Storwize V7000 node canisters are also active/active storage controllers: both node canisters
process I/O at any time, and any volume can be accessed through either node canister.
Storwize V7000 2076-524 supports two independent SAS chains to connect the V7000 control
enclosure to the expansion enclosures. This provides a symmetrical, balanced distribution of the
expansion enclosures across both SAS chains for performance and availability. The internal disk
drives of the control enclosure belong to SAS chain 0. Each independent SAS chain (SAS port 1
and SAS port 2) supports a maximum of 10 expansion enclosures.
Listed are examples of the number of Storwize V7000 control enclosures and expansion enclosures
that can be configured in a clustered system.
This is an example of a dual clustered system. Each control enclosure is chained to twenty 24-drive
SFF expansion enclosures (10 expansion enclosures per SAS chain) for a maximum total of 40
expansion enclosures and 1008 SFF drives.
In this example of a four-way clustered system, each control enclosure is chained to twenty 24-drive
SFF expansion enclosures (10 expansion enclosures per SAS chain) for a maximum total of 40
expansion enclosures and 1056 SFF drives.
In this example of a four-way clustered system, each control enclosure is chained to ten 24-drive
SFF expansion enclosures (5 expansion enclosures per SAS chain) for a maximum total of 40
expansion enclosures and 1056 SFF drives.
In the final example of a four-way clustered system, each control enclosure is chained to twenty
12-drive LFF expansion enclosures (10 expansion enclosures per SAS chain) for a maximum total of
80 expansion enclosures and 960 LFF drives.
The Storwize V7000 provides the same interoperability as the SAN Volume Controller because both
are based on the same software.
It is recommended to check the current interoperability matrix, since it changes continuously.
For more information, see:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004946
Host I/O (Gen1 compared to Gen2):
• 1 GbE: 4x (Gen1) / 6x (Gen2)
• 8 Gb FC: 8x (Gen1) / 8x to 16x (Gen2)
• 16 Gb FC: 8x to 16x (Gen2)
• 10 GbE: 4x on some models (Gen1) / 4x to 8x (Gen2)
• Six I/O cards maximum (Gen2)
The following compares the functional differences between the IBM Storwize V7000 Gen2 model
and the Storwize V7000 Gen1 model:
• Dual CPUs supporting the 8-core Ivy Bridge processor (up to 16 cores)
• Cache re-architecture, up to 128 GB cache, better performance
• Up to 16 FC I/O ports using six PCIe cards (4x 16 Gb FC or 8 Gb FC)
• Up to two Compression Accelerator cards supporting 512 compressed volumes
• Up to 1056 drives using 2076-12F/24F expansion enclosures
• Supports 12 Gb HD Mini SAS versus 6 Gb SAS connectors
• Integrated battery pack that resides inside the Storwize V7000 Gen2 control node versus inside
the Gen1 power supply
Listed are a few hardware compatibility guidelines when integrating the Storwize V7000 Gen1
model.
Option 1: FLEXIBLE OPTIONS (Controller / Expansion / External; Base)
Option 2: FULL BUNDLE (Controller / Expansion / External; Base)
Basic software:
• 5639-CB7 (Controller-Based software)
• 5639-XB7 (Expansion-Based software)
• 5639-EB7 (External virtualization)
IBM has simplified the license structure for the IBM Storwize V7000 Gen2, which includes new
features. IBM Storwize V7000 offers two ways of license procurement: fully flexible and bundled
(license packages). The license model is based on the license-per-enclosure concept known from
the first generation of IBM Storwize V7000; however, the second generation offers more flexibility so
that the licensing exactly matches your needs.
The base module is represented by the IBM Spectrum Virtualize family and is mandatory for every
controller, enclosure, or externally managed controller unit. For advanced functions, there is a
choice. The full bundle entitles the user to all advanced functions available on the system and costs
less than the sum of those licenses. This full bundle is the default pre-selection, as the majority of
customers are expected to select it for the value for money it offers.
Almost all customers are expected to use Easy Tier and FlashCopy, and with the new assurance
and performance of Real-time Compression, this too is expected to be sold in all but the most
exceptional situations.
Additional licensed features can be purchased on-demand either as a full software bundle or each
feature separately.
Listed are the benefits of block-level virtualization that is provided by the Storwize V7000:
1. Central point of control: All advanced functions are implemented in the virtualization layer.
Therefore, it enables rapid, flexible provisioning and simple configuration changes thus
increasing storage administrator productivity.
2. Improve capacity utilization: By pooling storage, storage administrators can improve capacity
utilization rates.
3. Disaster recovery: Enables environments to replicate asymmetrically at the DR site.
4. Data migration: Enables non-disruptive movement of virtualized data among tiers of storage,
including Easy Tier.
5. Facilitates common platform for data replication: Improve network utilization for remote
mirroring with innovative replication technology.
6. Application testing: Instead of testing an application against actual production data, use
virtualization to create a replicated data set to safely test with an application.
7. Increases operational flexibility and administrator productivity: Increase application
throughput performance for most critical activities by migrating data from HDD to Flash (SSDs).
8. High availability: By separating an application's storage from the application, virtualization
insulates an application from an application's server failure.
9. Resource sharing between heterogeneous servers: Virtualization helps to ensure servers
running different operating systems can safely coexist on the same SAN.
Keywords
• IBM Storwize V7000 2076-524 Control Enclosure
• IBM Storwize V7000 2076-12F Expansion Enclosure
• IBM Storwize V7000 2076-24F Expansion Enclosure
• IBM Spectrum Virtualize
• Fibre Channel (FC)
• Fibre Channel over Ethernet (FCoE)
• Technician port
• Firehose Dump (FHD)
• Boot drives
• Battery modules
• Serial Attached SCSI (SAS)
• Compression Accelerator card
• Intel QuickAssist technology
• Real-time Compression (RtC)
• Real-time Compression Acceleration card
Review questions (1 of 2)
1. Which of the following components are supported with the
Storwize V7000 Gen2 control enclosure?
a. Two processors with 64 GB of memory
b. Dual batteries backup
c. Up to six PCIe I/O slots
d. Up to two Compression Accelerator cards
e. All of the above
Review answers (1 of 2)
1. Which of the following components are supported with the
Storwize V7000 Gen2 control enclosure?
a. Two processors with 64 GB of memory
b. Dual batteries backup
c. Up to six PCIe I/O slots
d. Up to two Compression Accelerator cards
e. All of the above
The answer is all of the above.
Review questions (2 of 2)
3. True or False: Storwize V7000 Gen2 node I/O slots 2 and 3
are used for both FC and iSCSI/FCoE connections.
Review answers (2 of 2)
3. True or False: Storwize V7000 Gen2 node I/O slots 2 and 3
are used for both FC and iSCSI/FCoE connections.
The answer is true.
Unit summary
• Identify component features of the IBM Storwize V7000 2076-524
control enclosure model
• Distinguish between the IBM Storwize V7000 2076-12F and 2076-24F
expansion enclosure
• Characterize IBM Storwize V7000 Gen2 scalability requirements to
incrementally increase storage capacity and performance
Overview
This unit examines the physical planning guidelines for installing and configuring a Storwize
V7000 environment. This unit also provides best practices on how to logically configure the
Storwize V7000 system management IP addresses and network connections, implement zoned
fabrics, and attach hosts and storage.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Determine planning and implementation requirements that are
associated with the Storwize V7000
• Implement the physical hardware and cable requirements for the
Storwize V7000 Gen2
• Implement the logical configuration of IP addresses, network
connections, zoning fabrics, and storage attachment to the Storwize
V7000 Gen2 nodes
• Integrate the Storwize V7000 Gen2 into an existing SVC environment
• Verify zoned ports between a host and the Storwize V7000, and between the Storwize V7000
and a storage system
This topic provides installation guidance for physically installing and cabling the IBM Storwize
V7000 control enclosures and Storwize V7000 storage enclosures in a rack environment.
• Physical planning
  ▪ Rack hardware configuration
  ▪ Cabling connection requirements
• Logical planning
  ▪ Management IP addressing plan
  ▪ iSCSI IP addressing plan
  ▪ SAN zoning and SAN connections
  ▪ Backend storage subsystem configuration
• System configuration
  ▪ Initial cluster system configuration
IBM Storwize V7000 planning can be categorized into two types: physical planning and logical
planning. To achieve the most benefit from the Storwize V7000, we recommend using a
pre-installation planning check list. The visual lists several important planning steps. These steps
ensure that the V7000 provides the best possible performance, reliability, and ease of management
for application needs. Proper configuration also helps minimize downtime by avoiding changes to
the V7000 and the storage area network (SAN) environment to meet future growth needs.
Logical planning is done in two parts. This topic focuses only on the logical planning of the
management IP configuration, host connections, the SAN, backend storage, and the initial
configuration of the clustered system. Storage pools, volume creation, host mapping, advanced
copy services, and data migration are covered in other units.
Before configuring the IBM Storwize V7000 environment, ensure that you have all required licenses,
that IP addresses are established, and that the fabrics are zoned properly.
This visual shows the rear view of a Storwize V7000 Gen2 2076-524 controller with the two PCIe
adapter slots identified and configured with six Storwize V7000 expansion enclosures.
To install the cables, complete the following steps.
1. Using the supplied SAS cables, connect the control enclosure to expansion enclosure 1.
a. Connect SAS port 1 of the left node canister in the control enclosure to SAS port 1 of the left
expansion canister in the first expansion enclosure.
b. Connect SAS port 1 of the right node canister in the control enclosure to SAS port 1 of the
right expansion canister in the first expansion enclosure.
2. To add a second expansion enclosure to the control enclosure, use the supplied SAS cables to
connect the control enclosure to the expansion enclosure at rack position 2.
a. Connect SAS port 2 of the left node canister in the control enclosure to SAS port 1 of the left
expansion canister in the second expansion enclosure.
b. Connect SAS port 2 of the right node canister in the control enclosure to SAS port 1 of the
right expansion canister in the second expansion enclosure.
3. If additional expansion enclosures are installed, connect each one to the previous expansion
enclosure in a chain, using two Mini SAS HD to Mini SAS HD cables.
4. If additional control enclosures are installed, repeat this cabling procedure on each control
enclosure and its expansion enclosures.
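The cabling pattern in steps 1 through 4 can be summarized as a balanced assignment of expansion
enclosures to the two SAS chains. The following Python sketch is a hypothetical planning helper
(not an IBM tool) that alternates enclosures between SAS ports 1 and 2 and respects the
10-enclosures-per-chain limit.

```
# Hypothetical planning helper: distribute expansion enclosures across the
# two SAS chains (node canister SAS ports 1 and 2), at most 10 per chain.

MAX_PER_CHAIN = 10

def plan_sas_chains(num_expansions):
    if num_expansions > 2 * MAX_PER_CHAIN:
        raise ValueError("A control enclosure supports at most 20 expansion enclosures")
    chains = {1: [], 2: []}
    for enclosure in range(1, num_expansions + 1):
        # Alternate enclosures between chain 1 and chain 2 to keep them balanced.
        chain = 1 if enclosure % 2 else 2
        chains[chain].append(enclosure)
    return chains

for chain, enclosures in plan_sas_chains(6).items():
    print(f"SAS port {chain}: enclosures {enclosures}")
```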
When using the optional 2076-12F/24F expansion enclosures as part of your Storwize V7000 Gen2
cluster implementation, the distance that you can separate the 2076-524 nodes in the I/O group
from their attached 2076-12F/24F enclosures is limited by the maximum length of the 6-meter
serial-attached SCSI (mini SAS) cable used to attach the enclosure to the Storwize V7000 Gen2 units.
Ensure that cables are installed in an orderly way to reduce the risk of cable damage when
replaceable units are removed or inserted. The cables need to be arranged to provide clear access
to Ethernet ports, including the technician port.
The Storwize V7000 control enclosure and the Storwize V7000 expansion enclosure can be
installed in almost any of the IBM rack offerings which are industry-standard 19-inch server
cabinets that are designed for high availability environments.
When cabling power, connect one power supply cable from each Storwize V7000 enclosure to the
left-side internal PDU and the other power supply cable to the right-side internal PDU. The PDUs
are fed by separate power sources. This enables the cabinet's power supplies to be split between
two independent power sources for greater upstream high availability. When adding more V7000s
to the solution, the same power cabling scheme should be continued for each additional enclosure.
Upstream redundancy of the power to your cabinet (power circuit panels and on-floor power
distribution units (PDUs)), within-cabinet power redundancy (dual power strips or in-cabinet PDUs),
and upstream high availability structures (uninterruptible power supply (UPS), generators, and so
on) influence your power cabling decisions.
Mini SAS
connector
The visual lists the available cable components for the Storwize V7000 2076-12F/24F expansion
enclosures. The cables listed were available at the time of this publication. The Storwize V7000
2076 controller and expansion enclosures are connected using IBM 12 Gb SAS cables
(mSAS HD to mSAS HD). Check the Interoperability Guide for the latest supported options. Cable
requirements are discussed in the installation unit.
The following SAS cables can be ordered with the Storwize V7000 expansion enclosure:
• 0.6 m 12 Gb SAS cable (mSAS HD to mSAS HD)
• 1.5 m 12 Gb SAS cable (mSAS HD to mSAS HD)
• 3 m 12 Gb SAS cable (mSAS HD to mSAS HD)
• 6 m 12 Gb SAS cable (mSAS HD to mSAS HD)
Lost in translation
The cable-management arm is an optional feature and is used to efficiently route cables so that you
have proper access to the rear of the system. Cables are routed through the arm channel and
secured with cable ties or hook-and-loop fasteners. Allow slack in the cables to avoid strain in the
cables as the cable management arm moves.
To ensure proper performance and to maintain application high availability in the unlikely event of
an individual node canister failure, Storwize V7000 node canisters are deployed in pairs (I/O
Groups).
The visual illustrates network port connections of the Storwize V7000 node canister to dual SAN
fabrics and the local area network.
It is recommended that all V7000 control enclosures in a clustered system be on dual SAN fabrics,
with each Storwize V7000 node's adapter ports spread evenly across both fabrics. All V7000
control enclosures must also be on the same local area network (LAN) segment, which allows
any node in the clustered system to assume the clustered system management IP address. For a
dual LAN segment, port 1 of every node is connected to the first LAN segment, and port 2 of every
node is connected to the second LAN segment. Therefore, if a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but the configuration is still valid
for the I/O group.
E1 E2
E3 T
At least three management IP addresses are required to manage the Storwize V7000 storage
system through either a graphical user interface (GUI), command-line interface (CLI) accessed
using a Secure Shell connection (SSH), or using an embedded CIMOM that supports the Storage
Management Initiative Specification (SMI-S). The system IP address is also used to access remote
services like authentication servers, NTP, SNMP, SMTP, and syslog systems, if configured. Each
node canister contains a default management IP address that can be changed to allow the
device to be managed on a different address than the IP address assigned to the interface that is
used for data traffic.
The Storwize V7000 system requires the following IP addresses:
• Cluster management IP address: Address used for all normal configuration and service access
to the cluster. There are two management IP ports on each control enclosure. Port 1 is required
to be configured as the port for cluster management. Both Internet Protocol Version 4 (IPv4)
and Internet Protocol Version 6 (IPv6) are supported.
• Service assistant IP address: One address per control enclosure. The cluster operates without
these control enclosure service IP addresses but it is highly recommended that each control
enclosure is assigned an IP address for service-related actions.
• A 10/100/1000 Mb Ethernet connection is required for each cable.
For increased redundancy to the system management interface, connect Ethernet port 2 of each
node canister in the system to a second IP network. The second IP port of the control enclosure
can also be configured and used as an alternate address to manage the cluster.
Ports 1, 2, and 3 on the rear of each node canister can also provide iSCSI connectivity.
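A simple way to keep the required addresses straight is to write the plan down before initialization.
The sketch below uses hypothetical private addresses (echoing the 10.10.1.x and 10.10.2.x example
in the next visual) and the Python ipaddress module to check that the cluster, service, and iSCSI
addresses sit on the intended subnets.

```
# Hypothetical management/iSCSI IP plan for one control enclosure; the
# addresses are placeholders, not defaults of the product.
import ipaddress

ip_plan = {
    "cluster_mgmt":   "10.10.1.10",    # port 1, cluster management
    "service_node1":  "10.10.1.11",    # service assistant, node canister 1
    "service_node2":  "10.10.1.12",    # service assistant, node canister 2
    "iscsi_node1_p2": "10.10.2.11",    # iSCSI on Ethernet port 2
    "iscsi_node2_p2": "10.10.2.12",
}

mgmt_subnet = ipaddress.ip_network("10.10.1.0/24")
iscsi_subnet = ipaddress.ip_network("10.10.2.0/24")

for name, addr in ip_plan.items():
    subnet = iscsi_subnet if name.startswith("iscsi") else mgmt_subnet
    assert ipaddress.ip_address(addr) in subnet, f"{name} is not in {subnet}"
print("IP plan is consistent with the planned subnets")
```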
10.10.1.10 10.10.1.100
ETH1
Config Node 1 ETH2
10.10.2.10 10.10.2.100
10.10.2.1
10.10.1.40 10.10.2.x
Gateway
Node 4 10.10.2.40
The visual illustrates the configuration of a redundant network for Storwize V7000 IPv4
management and iSCSI addresses that share the same subnet. The same setup can be configured
by using the equivalent configuration with only IPv6 addresses.
IBM Storwize V7000 Service Assistant (SA) is a browser-based GUI designed to assist with service
issues. Administrators can access a node's interface by using its Ethernet port 1 service IP address,
through either a web browser or an SSH session. During system initialization, the system
automatically redirects you to the initial configuration of the system through a web browser.
The SA provides a default user ID (superuser) and password (passw0rd with a zero “0” instead of
the letter “o”). Only those with a superuser ID can access the Service Assistant interface. This ID
can be changed if required.
Administrators can use the Service Assistant IP address to access the SA GUI and perform
recovery tasks and other service-related issues.
To access the IBM Storwize V7000 management GUI, direct a web browser to the system
management IP address after the system initialization of the IBM Storwize V7000. To ensure that
you have the latest supported web browser and that the appropriate settings are enabled, visit the
IBM Storwize V7000 Knowledge Center.
Configuration Node
Boss Node
An IBM Storwize V7000 system can contain up to four I/O groups, which is a four-way Storwize
V7000 system configuration. When the initial node is used to create a cluster, it automatically
becomes the configuration node for the Storwize V7000 system. The configuration node responds to
the system IP address and provides the configuration interface to the cluster. All configuration
management and services are performed at the system level. If the configuration node fails, another
node is automatically chosen to be the configuration node, and this node takes over the cluster IP
address. Thus, configuration access to the cluster remains unchanged.
The system state data holds all configuration and internal system data for the V7000 clustered
system (up to 8 nodes). This system state information is held in non-volatile memory of each node.
If the main power supply fails, the battery modules maintain power long enough for the cluster
state information to be stored on the internal disk of each control enclosure. The read/write cache
information is also held in non-volatile memory. If power to a node fails, the cached data is
written to the internal disk.
A control enclosure in the cluster serves as the Boss node. The Boss node ensures
synchronization and controls the updating of the cluster state. When a request is made in a node
that results in a change being made to the cluster state data, that node notifies the boss node of the
change. The boss node then forwards the change to all nodes (including the requesting node) and
all the nodes make the state-change at the same point in time. This ensures that all nodes in the
cluster have the same cluster state data. The Storwize V7000 cluster time can be obtained from an
NTP (Network Time Protocol) server for time synchronization.
This topic discusses the SAN zoning and network requirements for the Storwize V7000 system
environment.
Remote Flash
Fabric 1 System
Host Zones
Hosts see only
the volumes
Remote Copy
Redundancy
Zones
V7000 Intra-cluster
Open zoning
Fabric 2 DS8800
Storage
Zones
External storage
Storwize V7000
Storwize V7000
Local storage
DS3500
SAN zoning configuration is implemented at the switch level. To meet business requirements for
high availability, it is recommended to build a dual fabric network that uses two independent fabrics
or SANs (up to four fabrics are supported).
The switches can be configured into four distinct types of fabric zones:
• Storwize V7000 intracluster open zoning: This requires using the fabric switches to create up to
two zones per fabric, each including a single port per node that is designated for intra-cluster
traffic. No more than four ports per node should be allocated to intra-cluster traffic.
• Host zones: A host zone consists of the V7000 control enclosure and the host.
• Storage zones: A single storage system zone that consists of all the storage systems that are
virtualized by the Storwize V7000 controller.
• Remote copy zones: An optional zone to support Copy Services features for Metro Mirroring
and Global Mirroring operations if the feature is licensed. This zone contains half of the system
ports of the system clusters in partnerships.
The SAN fabric zones allow the Storwize V7000 nodes to see each other and all of the storage that
is attached to the fabrics, and for all hosts to see only the Storwize V7000 controllers. The host
systems should not directly see or operate LUNs on the storage systems that are assigned to the
Storwize V7000 systems.
System zone
Typically, the front-end host HBAs and the back-end storage systems are not in the same zone.
The exception to this is where a split host and split storage system configuration is in use. You need
to create a host zone for every host server that needs access to storage from the V7000 controller.
A single host should not have more than eight paths to an I/O group.
Follow basic zoning recommendations to ensure that each host has at least two network adapters,
that each adapter is on a separate network (or at minimum in a separate zone), and is connected to
both canisters. This setup assures four paths for failover and failback purposes.
Fabric1
1 2 3 4 5 6 7 8 FC Switch1
Fiber Cable
(LC) Fabric2
FC Switch1
The visual illustrates an example of a switch port connection. The two fabrics of a dual fabric
environment should be conceptually identical to one another. The eight ports on the switch are
used to connect to a four-node Storwize V7000 system. Identical switch port numbers are used for
the second fabric of the dual fabric SAN configuration. You can alternate the port attachments
between the two fabrics.
The Storwize V7000 base configuration ships with no host adapters installed. Storwize V7000 Gen2
supports 2 gigabits per second (Gbps), 4 Gbps, 8 Gbps, or 16 Gbps FC fabric connections,
depending on the hardware platform and on the switch where the Storwize V7000 Gen2 is
connected. In an environment where you have a fabric with multiple-speed switches, the preferred
practice is to connect the Storwize V7000 Gen2 and the disk subsystem to the switch operating at
the highest speed. This SFP transceiver provides an auto-negotiating 2, 4, 8, or 16 Gb shortwave
optical connection on the 2-port Fibre Channel adapter.
A shortwave small form-factor pluggable (SFP) transceiver is required for all FC adapters and must
be of the same speed as the adapter to be installed. For example, if a 2-port 16 Gb HIC is installed,
then a 16 Gb SFP transceiver must be installed.
Host communication
• No adapters are shipped with the Storwize V7000 control enclosures.
• The optional features of Storwize V7000 that can be configured for host attachment include:
  ▪ 16 Gb FC four-port adapter pair for 16 Gb FC connectivity
  ▪ 16 Gb FC two-port adapter pair for 16 Gb FC connectivity
  ▪ 8 Gb FC adapter pair (four ports each) for 8 Gb FC connectivity
  ▪ 10 Gb Ethernet adapter pair for 10 Gb iSCSI/FCoE connectivity
    - Requires extra IPv4 or extra IPv6 addresses for each 10 GbE port used on each
node canister
E1 E2
E3
Port 2
Port 3
Port 2
Port 3
The example identifies the installation locations for the host interface cards.
The node ports on each Storwize V7000 system must communicate with each other for the
partnership creation to be performed. Switch zoning is critical to facilitating intercluster
communication. The recommended zoning configuration for fabrics is one port per node, per I/O
group, per fabric that is associated with the host. IBM Storwize V7000 control enclosures are
shipped without any host I/O adapters. The following optional features are available for host
attachment:
• 16 Gb FC four port adapter pair for 16 Gb FC connectivity (two cards each with four 16 Gb FC
ports and shortwave SFP transceivers)
• 16 Gb FC two port adapter pair for 16 Gb FC connectivity (two cards each with two 16 Gb FC
ports and shortwave SFP transceivers)
• 8 Gb FC adapter pair for 8 Gb FC connectivity (two cards each with four 8 Gb FC ports and
shortwave SFP transceivers)
• 10 Gb Ethernet adapter pair for 10 Gb iSCSI/FCoE connectivity (two cards each with four 10 Gb
Ethernet ports and SFP+ transceivers)
▪ This type of configuration uses Fibre Channel cables to connect to your 10Gbps Ethernet or
FCoE SAN. Connect each 10 Gbps port to the network that will provide connectivity to that
port. It would also require extra IPv4 or extra IPv6 addresses for each 10 GbE port used on
each node canister. These IP addresses are independent of the system configuration IP
addresses which allows the IP-based hosts to access Storwize V7000 managed Fibre
Channel SAN-attached disk storage.
Hosts can be connected to the Storwize V7000 Fibre Channel ports directly or through a SAN
fabric. To provide redundant connectivity, connect both node canisters in a control enclosure to the
same networks.
• Each node canister ports 1, 2 and 3 can also be used to provide iSCSI connectivity.
Full duplex
Host Switch Disk
TX
TX TX
Port
RX RX
RX
Fibre Channel Protocol (FCP) is the prevalent technology standard in the storage area network
(SAN) data center environment. Fibre Channel (FC) offers a high speed serial interface for
connecting servers and peripheral devices together into a consolidated, dedicated SAN. Fibre Channel
was also designed to enable redundant and fault-tolerant configurations, which is especially
appropriate in SAN environments where high availability is an important requirement. FC technology
has therefore produced a multitude of FC-based solutions that have paved the way for high
performance, high availability, and the highly efficient transport and management of data.
Fibre Channel communicates in full duplex, allowing data to flow in opposite directions at the same
time. Fibre Channel devices are attached together through the use of light as a carrier at data rates
up to 8 Gb/s (gigabits per second) and up to 16 Gb/s when used on supported switches. FC's
massive bandwidth capability allows high-speed transfer of multiple protocols over the same
physical interface.
Each device in the SAN is identified by a unique world wide name (WWN). The WWN also contains
a vendor identifier field and a vendor-specific information field, which is defined and maintained by
the IEEE.
Ethernet
header IP TCP iSCSI Data CRC
Internet SCSI (iSCSI) is a storage protocol that transports SCSI over TCP/IP, allowing IP-based
SANs to be created using the same networking technologies for both storage and data networks.
iSCSI runs at speeds of 1 Gbps, or at 10 Gbps with the emergence of 10 Gigabit Ethernet adapters
with TCP Offload Engines (TOE). This technology allows block-level storage data to be transported over
widely used IP networks, enabling end users to access the storage network from anywhere in the
enterprise. In addition, iSCSI can be used in conjunction with existing FC fabrics as gateway
medium between the FC initiators and targets, or as a migration from a Fibre Channel SAN to an IP
SAN.
The advantage of an iSCSI SAN solution is that it uses the low-cost Ethernet IP environment for
connectivity and greater distance than allowed when using traditional SCSI ribbon cables
containing multiple copper wires. The disadvantage of an iSCSI SAN environment is that data is still
managed at the volume level, performance is limited to the speed of the Ethernet IP network, and
adding storage to an existing IP network may degrade performance for the systems that were using
the network previously. When not implemented as part of a Fibre Channel configuration, it is widely
recommended to build a separate Ethernet LAN exclusively to support iSCSI data traffic.
Converged
Enhanced SAN Fabric 1
Ethernet (CEE)
network
AC2 with
Hosts with 10Gb
CNAs
SAN Fabric 2
FCoE provides the same target and initiator functions as the Fibre Channel protocol by
encapsulating Fibre Channel frames within Ethernet frames. This allows Fibre Channel to
use 10 Gigabit Ethernet networks (or higher speeds) while preserving the Fibre Channel protocol.
FCoE maps Fibre Channel directly over Ethernet while being independent of the Ethernet
forwarding scheme.
The Storwize V7000 Gen2 node canister supports 10 Gb FCoE port attachment to a converged
Ethernet switch to support FCoE, Fibre Channel, Converged Enhanced Ethernet (CEE), and
traditional Ethernet protocol connectivity for servers and storage.
If optional 4-port 10Gbps Ethernet host interface adapters are installed in the node canisters,
connect each port to the network that will provide connectivity to that port. To provide redundant
connectivity, connect both node canisters in a control enclosure to the same networks.
IBM Storwize V7000 systems supports remote copy over native Internet Protocol (IP)
communication using Ethernet communication links. Native IP replication enables the use of
lower-cost Ethernet connections for remote mirroring as an alternative to using Fibre Channel
configurations.
Native IP replication enables replication between any member of the IBM Spectrum Virtualize family
(running a supported software level) that uses the built-in networking ports of the cluster nodes. IP
replication includes Bridgeworks SANSlide network optimization technology to bridge storage
protocols and accelerate data transfer over long distances. SANSlide is available at no additional
charge.
Native IP replication supports the Copy Services features Metro Mirror, Global Mirror, and Global
Mirror with Change Volumes. Functioning in the same way as traditional FC-based mirroring, native
IP replication is transparent to servers and applications.
IP replication requires 1 Gb or 10 Gb LAN connections. The Storwize V7000 can have only one port
that is configured in an IP partnership, either port 1 or port 2, but not both. If the optional 10 Gb
Ethernet card is installed in a system, ports 3 and 4 are also available. A system may be
partnered with up to three remote systems. A maximum of one of those can be IP and the other two
FC.
A straightforward setup is recommended:
▪ Two active Ethernet links with two port groups to provide link failover capabilities
▪ At least two I/O groups to provide full IP replication bandwidth if one component is offline
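The partnership limits described above lend themselves to a quick sanity check. The following
Python sketch is a simplified model, not product code, that verifies a planned set of partnerships
stays within three remote systems, at most one of them over native IP.

```
# Simplified check of the remote partnership limits described above:
# at most three remote systems, and at most one of them over native IP.

def validate_partnerships(partnerships):
    """partnerships: list of (remote_system_name, transport), transport is 'FC' or 'IP'."""
    if len(partnerships) > 3:
        raise ValueError("A system may be partnered with at most three remote systems")
    ip_count = sum(1 for _, transport in partnerships if transport == "IP")
    if ip_count > 1:
        raise ValueError("At most one partnership can use native IP replication")
    return True

print(validate_partnerships([("siteB", "IP"), ("siteC", "FC"), ("siteD", "FC")]))  # True
```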
Zoning by port
(Domain ID and Port #)
WinA AIX Zoning by
WWPN
Fabric
Switch domain#
LUN
masking
Lw Ls
This illustrates how zoning can be implemented by switch domain ID and port number. When a
cable is moved to another switch or another port, the zoning definition needs to be updated. This is
sometimes referred to as port zoning.
Zoning by WWPN provides granularity at the adapter port level. If the cable is moved to another
port or to a different switch in the fabric, the zoning definition is not affected. However, if the HIC is
replaced and the WWPN changes (this does not apply to the Storwize V7000 WWPNs), the zoning
definition needs to be updated accordingly.
When zoning by switch domain ID, ensure that all switch domain IDs are unique between both
fabrics and that the switch name incorporates the domain ID. Having a unique domain ID makes
troubleshooting much easier in situations where an error message contains the Fibre Channel ID of
the port with a problem. For example, have all domain IDs in the first fabric start at 10 and all
domain IDs in the second fabric start at 20.
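The domain ID convention above is easy to enforce with a small check. The sketch below assumes
a hypothetical dictionary of switch names and domain IDs per fabric and flags duplicates or IDs that
do not follow the 10x/20x convention.

```
# Hypothetical check of the switch domain ID convention: fabric 1 IDs start
# at 10, fabric 2 IDs start at 20, and every ID is unique across both fabrics.

fabrics = {
    "fabric1": {"switch_f1_a": 10, "switch_f1_b": 11},
    "fabric2": {"switch_f2_a": 20, "switch_f2_b": 21},
}

expected_prefix = {"fabric1": 1, "fabric2": 2}

all_ids = []
for fabric, switches in fabrics.items():
    for name, domain_id in switches.items():
        if domain_id // 10 != expected_prefix[fabric]:
            print(f"WARNING: {name} domain ID {domain_id} breaks the {fabric} convention")
        all_ids.append(domain_id)

if len(all_ids) != len(set(all_ids)):
    print("WARNING: duplicate domain IDs found across the fabrics")
else:
    print("Domain IDs are unique and follow the naming convention")
```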
Port name
N_Port ID
Host HBA (node HBA)
Port name
N_Port ID
IBM storage uses a methodology whereby each worldwide port name (WWPN) is a child of the
worldwide node name (WWNN). The unique worldwide name (WWN) is used to identify the Fibre
Channel storage device in a storage area network (SAN). This means that if you know the WWPN
of a port, you can easily identify the vendor and match it to the WWNN of the storage device that
owns that port.
WWN formats (sections shown separated by spaces):
• 10:00 00:00:c9 2f:65:d6 (IEEE Standard format)
• 20:00 00:0e:8b 05:05:04 (IEEE Extended format)
• 50:05:07:6 3:00:c7:01:99 (Registered format)
• 60:05:07:6 3:00:c7:01:99
Each N_Port on a storage device contains a persistent World Wide Port Name (WWPN) of 16
hexadecimal digits (8 bytes).
The first example is an Emulex HBA IEEE Standard format (10) WWN. Section 1 identifies the
WWN as a standard format WWN. Only one of the 4 digits is used; the other three must be zero
filled. Section 2 is called the OUI or "company_id" and identifies the vendor (more on this later).
Section 3 is a unique identifier created by the vendor.
The next example is a QLogic HBA with an IEEE Extended format (20) WWN. Section 1 identifies
the WWN as an extended format WWN. Section 2 is a vendor-specific code and can be used to
identify specific ports on a node or to extend the serial number (Section 4) of the WWN. Section 3
identifies the vendor. Section 4 is the unique vendor-supplied serial number for the device.
The last two examples identify the vendor IEEE Registered Name format of the WWN. This is
referred to as Format 5, which enables vendors to create unique identifiers without having to
maintain a database of serial number codes. IBM owns the 005076 company ID. Section 1 (5)
identifies the registered name WWN. Section 2 (0:05:07:6) identifies the vendor. Section 3
(3:00:c7:01:99) is a vendor-specific generated code, usually based on the serial number of the
device, such as a disk subsystem.
All vendors wishing to create WWNs must register for a company ID or OUI (Organizationally
Unique Identifier). These are maintained and published by IEEE.
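To relate these formats to an actual 16-digit WWN string, here is a small Python sketch that pulls
out the format digit and, for the registered (format 5) layout, the 24-bit company ID; it is a simplified
parser for illustration only.

```
# Simplified illustration: split a 16-hex-digit WWN into its format digit
# and, for the registered format (5), the 24-bit company ID (OUI).

def parse_wwn(wwn):
    digits = wwn.replace(":", "").lower()
    if len(digits) != 16:
        raise ValueError("Expected a 16 hexadecimal digit WWN")
    naa = digits[0]                      # Section 1: format identifier
    if naa == "5":
        company_id = digits[1:7]         # Section 2: 24-bit company ID (OUI)
        vendor_info = digits[7:]         # Sections 3-4: vendor-specific
    else:
        company_id, vendor_info = None, digits[1:]
    return naa, company_id, vendor_info

print(parse_wwn("50:05:07:63:00:c7:01:99"))   # ('5', '005076', '300c70199')
```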
50 05 07 68 0B 22 xx xx 50 05 07 68 0B 34 xx xx
As a Fibre Channel SAN participant, each Storwize V7000 Gen2 has a unique worldwide node
name (WWNN), and each Fibre Channel port on the HICs has a unique worldwide port name
(WWPN). These ports are used to connect the V7000 node canister to the SAN. The Storwize
V7000 nodes use the new 80c product ID, IBM's latest schema for generating WWNNs/WWPNs.
The previous generation of Gen1 nodes' WWNN seed supports only 59 WWPN end points. One of
the important considerations when upgrading the system to Gen2 nodes, or when installing an
additional I/O group based on Gen2 nodes, is the use of the WWPN range. With the support of the
16 Gb FC HIC, Storwize V7000 Gen2 generates six WWPNs per port, far too many ports to fit in the
WWPN range provided by the pre-existing Gen1 nodes.
The visual shows the rear of a Storwize V7000 Gen2 model with two 4-port FC HICs installed in
slots 2 and 3. Ports are physically numbered from top to bottom. Each node port takes the form of
the Gen2 WWN numbering scheme.
For high availability, the ports of a Gen2 node should be spread across the two fabrics in a dual
fabric SAN configuration.
The maximum number of worldwide node names (WWNNs) increased to 1024 allowing up to 1024
back-end storage subsystems to be virtualized.
• Public names
  ▪ Used for the various fake switch components for FC direct-attach
  ▪ Used for SAS initiator in 2076-12F/24F expansion units
  ▪ Public WWNs take the form 500507680b <slot number> <port number> xxxx
• Private names
  ▪ Used by hosts to identify storage
  ▪ Used by backend controllers for LUN masking
Slot, port, and WWPN:
1 2 500507680b12xxxx
1 3 500507680b13xxxx
1 4 500507680b14xxxx
2 1 500507680b21xxxx
2 2 500507680b22xxxx
2 3 500507680b23xxxx
2 4 500507680b24xxxx
3 1 500507680b31xxxx
3 2 500507680b32xxxx
3 3 500507680b33xxxx
3 4 500507680b34xxxx
4 1 500507680b41xxxx
4 3 500507680b43xxxx
4 4 500507680b44xxxx
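Given the 500507680b<slot><port>xxxx pattern listed above, a small helper can compose the
expected public WWPN for a slot and port. The sketch below is illustrative only; the trailing
node-specific digits differ per system, so they are passed in as a parameter rather than invented
here.

```
# Illustrative only: compose a Gen2 public WWPN from the documented pattern
# 500507680b<slot><port><node-specific digits>. The trailing digits vary by
# system, so they are passed in rather than invented here.

PREFIX = "500507680b"

def public_wwpn(slot, port, node_suffix="xxxx"):
    if slot not in range(1, 5) or port not in range(1, 5):
        raise ValueError("slot and port are expected to be 1-4")
    return f"{PREFIX}{slot}{port}{node_suffix}"

print(public_wwpn(2, 2))            # 500507680b22xxxx
print(public_wwpn(3, 4, "1a2b"))    # hypothetical node-specific suffix
```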
The visual lists options that represent optimal configurations based on port assignment to function.
Using the same port assignment but different physical locations will not have any significant
performance impact in most client environments.
This recommendation provides the desired traffic isolation while also simplifying migration from
existing configurations with only 4 ports, or even later migrating from 8-port or 12-port
configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be considered,
but these approaches do not appreciably increase the availability of the solution, because the mean
time between failures (MTBF) of the adapter is not significantly less than that of the non-redundant
node components.
(Figure: node ports P1 to P4 of each node connected to the redundant fabrics)
The visual illustrates connecting the Storwize V7000 Gen2 and the SVC DH8 8-node system to
redundant fabrics using two 4-port 8 Gb FC HICs per node for 8-port fabric connections. Each of the
odd-numbered ports (1 and 3) is connected to the first SAN switch, and the even-numbered ports (2
and 4) are connected to the second SAN switch.
For this example, we are zoning the Storwize V7000 as a back-end storage controller of the SAN
Volume Controller. Therefore, every SAN Volume Controller node must have the same view of the
Storwize V7000; the minimum requirement is at least one port per Storwize canister. For best
performance and availability, it is recommended to zone all the Storwize Gen2 and SAN Volume
Controller DH8 ports together in each fabric. If the SVC nodes see a different set of ports on the
same storage system, operation is degraded and logged as an error.
Cabling is done to facilitate zone definitions that are coded by using either switch domain ID and
port number, or WWPN values. When cabling the Storwize V7000 ports to the switch, adhere to the
following recommendations and objectives:
• Split the attachment of the ports of the V7000 node across both fabrics. This implies that ports
alternate between the two V7000 nodes as they are attached to the switch.
• Enable the paths from the host, with either four-paths or eight-paths to the V7000 I/O group to
be distributed across WWPNs of the V7000 node ports.
• The ports of all nodes in a cluster (even from multiple I/O groups of the same cluster) need to be
zoned together in the system zone. This example shows how it works with the distribution of the
ports through two distinct fabrics. In the case that an intersystem zone is required (that is
planned usage of remote copy function with another Storwize or SVC device), it is required to
create an additional zone. This zone must contain all WWPNs of the nodes from both clusters
(any-to-any). Even if that technically implies that the system zone becomes obsolete, it is still a
support requirement (and best practice) to keep it.
The visual illustrates connecting a Storwize Gen2 8-node system and the FlashSystem 900 to
redundant fabrics using 4-port 8 Gb FC connections. For a Storwize V7000 in a FlashSystem 900
environment, the switch has the same recommendations as for the Storwize V7000. To maximize
the performance that can be achieved when deploying the FlashSystem 900 with the Storwize
V7000, carefully consider the assignment and usage of the FC HBA ports on the Storwize V7000.
Specifically, SAN switch zoning, coupled with port masking, can be used to isolate traffic for various
Storwize V7000 functions, reducing congestion and improving latency.
1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4
4
Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2
Node 1 Node 2 Node 3 Node4 Node 5 Node 6 Node 7 Node 8
I/O Group 0 I/O Group 1 I/O Group 2 I/O Group 3
Cntrl A Cntrl B
Channels Channels
1 and 3 2 and 4
C1 C2 C4
C3
Controller 1 Controller 2
The visual illustrates connecting a Storwize V7000 Gen2 8-node system and the DS3500 to
redundant fabrics using 4-port 8 Gb FC connections. The DS3500 has two ports, and the 4-node
Storwize V7000 cluster has 16 ports. Both systems have their ports evenly split between the two
SAN fabrics.
1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4
4
Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2
Node 1 Node 2 Node3 Node 4 Node 5 Node 6 Node 7 Node 8
I/O Group 0 I/O Group 1 I/O Group 2 I/O Group 3
Ctrl A Ctrl B
Channels Channels
1 and 3 2 and 4
C2 C4
C1 C3
Controller 1 Controller 2
This visual illustrates how each Storwize V7000 Gen2 node, with four ports per I/O group, and the
two-port IBM System Storage DS3500 are connected to redundant fabrics using 8 Gb FC
connections. Both systems' ports are evenly split between the two SAN fabrics.
ID ID
All storage ports and all SVC ports
21 22
Fabric 1 Fabric 2
0 1 2 3 4 5 6 0 1 2 3 4 5 6
ID ID
E3 11 21 14 24 F1 E1 11 E4 13 23 12 22 F2 E2 12
11 12 13 14 F
1 F
2
E
1 E
2
NODE1
VendorX
E
3 E
4
21 22 23 24 DSxK
NODE2
Multiple ports or connections from a given storage system can be defined to provide greater data
bandwidth and more availability. To avoid interaction among storage ports of different storage
system types, multiple back-end storage zones can be defined.
For example, one zone contains all the Storwize V7000 ports and the VendorX port and another
zone contains all the node ports and the DSxK ports. Storage system vendors might have additional
best practice recommendations, such as not mixing ports from different controllers of the same
storage system in the same zone. Storwize V7000 supports and follows the guidelines that are
provided by the storage vendors.
(Figure: dual fabric with FC Switch A and FC Switch B; LUN masking at the storage system.
LUN sharing requires additional software.)
A host system is generally equipped with two HBAs requiring one to be attached to each fabric.
Each storage system also attaches to each fabric with one or more adapter ports. A dual fabric is
also highly recommended when integrating the Storwize V7000 into the SAN infrastructure.
LUN masking is typically implemented in the storage system and in an analogous manner in the
Storwize V7000 to ensure data access integrity across multiple heterogeneous or homogeneous
host servers. Zoning is often deployed to complement LUN masking and ensure resource access
integrity. Issues that are related to LUN or volume sharing across host servers are not changed by
the Storwize V7000 implementation. Additional shared access software, such as clustering
software, is still required if sharing is desired.
Another aspect of zoning is to limit the number of paths among ports across the SAN, and thus
reduce the number of instances in which the same LUN is reported to a host operating system.
FC SwitchA1 FC SwitchA
A Storwize V7000 cluster with multiple nodes might potentially introduce more paths than
necessary between the host HBA ports and the Storwize V7000 FC ports. A given host should have
two HBA ports for availability and no more than four HBA ports. This allows for a minimum of four
paths and a maximum of eight paths between the host and the I/O group.
From the perspective of the host, the eight paths do not necessarily provide a performance
improvement over the four-path environment. However, from a Storwize V7000 perspective, the
host activity is balanced across the four ports of each node. Usually there are multiple hosts
connected to a Storwize V7000 cluster. The eight-path configuration provides for an automatic
load leveling of activity across the four Storwize V7000 node ports as opposed to the manual load
placement approach of the four path configuration. With a manual load placement it is easy to
introduce a skew of activity to a subset of ports on a node. The skew manifests itself as higher
utilization on some ports and therefore longer response times for I/O operations.
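The path arithmetic described above (host HBA ports zoned per fabric times node ports zoned per
fabric, across both fabrics) can be made explicit. The following Python sketch is a simplified model
that flags configurations exceeding the recommended eight paths.

```
# Simplified path-count model: paths per I/O group = (host HBA ports zoned
# per fabric) x (node ports zoned per fabric) x (number of fabrics),
# assuming a symmetric dual-fabric layout.

def paths_per_io_group(host_ports_per_fabric, node_ports_per_fabric, fabrics=2):
    return host_ports_per_fabric * node_ports_per_fabric * fabrics

for host_ports, node_ports in [(1, 2), (2, 2)]:
    paths = paths_per_io_group(host_ports, node_ports)
    note = "OK" if paths <= 8 else "exceeds the recommended 8 paths"
    print(f"{host_ports} host port(s) and {node_ports} node ports per fabric: "
          f"{paths} paths ({note})")
```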
Lz Lh Li cLx La Lb Lc Ly Le L f Lg
RAIDa RAIDb RAIDc
Several data access issues and considerations need to be examined when heterogeneous servers
and storage systems are interconnected in a SAN infrastructure. These include:
• Device driver coexistence on the same host if that host is to be accessing storage from different
types or brands of storage systems.
• Multipath driver when multiple paths are available to access the same LUN or same set of
LUNs. If multiple storage systems of differing types or brands are to be accessed then
coexistence of multipath path drivers needs to be verified.
• LUN sharing among two or more hosts requires shared access capability software (such as
clustering or serialization software) to be installed on each accessing host.
• Each storage system must be managed separately. Changes in the storage configuration might
have a significant impact on application availability.
These storage systems, while intelligent, exist as separate entities. Storage under or over utilization
is managed at the individual storage system level. Data replication capabilities of each storage
system are typically unique to that system and as a rule generally require like-systems as targets of
remote mirroring operations. Also, to decommission older equipment, data movement (or
migration) typically requires scheduled application outages.
When MDisks are created, they are not visible to the host. The host only sees a number of logical
disks, known as virtual disks or volumes, which are presented by the Storwize V7000 I/O groups
through the SAN (FC/FCoE) or LAN (iSCSI) to the servers. The host system accesses virtual disks
as SCSI targets (for example, SCSI disks in Windows, hdisks in AIX). Because host systems are
zoned to access LUNs provided by the Storwize V7000, the only multipath driver needed in these
host systems is the Subsystem Device Driver (SDD).
The Subsystem Device Driver (SDD, or SDDDSM for Windows MPIO environments, SDDPCM for
AIX MPIO environments) is a standard function of the Storwize V7000 and provides multipathing
support for host servers accessing Storwize V7000 provisioned volumes. These devices are often
referred to as Multipath I/O or MPIO. For Windows and AIX only a multipath driver that instructs the
OS which path to pick for a given I/O is required.
In addition to SDD, a wealth of other multipath drivers is supported. Refer to the Storwize V7000
product support website for latest support levels and platforms.
The SDD provides multipath support for certain OS environments that do not have native MPIO
capability. SDD also enhances the functions of the Windows DSM and AIX PCM MPIO frameworks.
For availability, host systems generally have two HBA ports installed; and storage systems typically
have multiple ports as well. The number of instances of the same LUN increases as more ports are
added. In a SAN environment, a host system with multiple Fibre Channel adapter ports that connect
through a switch to multiple storage ports is considered to have multiple paths. Due to these
multiple paths, the same LUN is reported to the host system more than once.
For coexistence and gradual conversion to the Storwize V7000 environment, a storage system
RAID controller might present LUNs to both the Storwize V7000 as well as other hosts attached to
the SAN. Dependent upon some restrictions, a host might be accessing SCSI LUNs surfaced either
directly from the storage system or indirectly as volumes from the Storwize V7000. Besides
adhering to the support matrix for storage system type and model, HBA brand and firmware levels,
device driver levels and multipath driver coexistence, and OS platform and software levels, the
fabric zoning must be implemented to ensure resource access integrity as well as multipathing
support for high availability.
Although attached storage is supported, it is the Storwize V7000 and not the individual host
systems that interacts with these storage systems, their device drivers, and multipath drivers.
DIR 1 SAN Fabric DIR 2 SAN Fabric
1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4
Physical ports HIC 1 HIC 2 HIC 1 HIC 2
Node 1 Node 2
I/O Group 0
By default, the Storwize V7000 GUI assigns ownership of even-numbered volumes to one node of
a caching pair and ownership of odd-numbered volumes to the other node. When a volume is
assigned to a Storwize V7000 node at creation, this node is known as the preferred node through
which the volume is normally accessed. The preferred node is responsible for I/Os to the volume
and coordinates forwarding them to the alternate node.
This illustration shows a 2-node system with dual paths from the host HBAs through the fabric to the
Storwize V7000 I/O group. Each host HBA port is zoned with one port of each V7000 node of an I/O
group in a four-path environment. When the first volume (vdisk1), whose preferred node is NODE 1,
is accessed for I/O, the path selection algorithm load balances across the two preferred paths to
NODE 1. The other two non-preferred paths defined in this zone lead to NODE 2, which is the
alternate node for vdisk1.
The reason for not dedicating one HBA to each node is that, under the preferred node scheme, one
node serves solely as a backup for any specific volume; the load for that particular volume is never
balanced across both nodes. It is therefore better to balance load at the I/O group level by letting
volumes be assigned to nodes automatically.
[Figure: Dual-fabric zoning layout — host, storage, and node ports spread across the switch port groups (Slot 1, Slot 2, and Slot 5) of each director in the two fabrics.]
Multiple fabrics increase the redundancy and resilience of the SAN by duplicating the fabric
infrastructure. With multiple fabrics, the hosts and the resources have simultaneous access to both
fabrics, and have zoning to allow multiple paths over each fabric.
In this example, the host has two HBAs installed, and each port of the HBA is connected to a
separate SAN switch. This allows the host to have multiple paths to its resources. This also means
that the zoning has to be done in each fabric separately. If there is a complete failure in one fabric,
the host can still access the resources through the second fabric.
[Figure: Dual-fabric zoning for volume V1 — AIX host HBA A1 in Fabric 1 and HBA A2 in Fabric 2, each zoned to one port of NODE1 and one port of NODE2 (node ports 11-14 and 21-24); the two preferred paths lead to NODE1 and the two alternate paths to NODE2.]
This example shows how host access to the Storwize V7000 nodes is implemented in a dual SAN
fabric. Each host HBA is connected to a separate SAN switch to allow multiple paths to resources.
For example, the AIX HBA A1 is zoned with one port from NODE1 and one from NODE2. HBA A2 is
also zoned with one port from each Storwize V7000 node. The volume vdisk1 is assigned to NODE1
(its preferred node) with two preferred paths, and has two alternate paths that are zoned to NODE2.
The numbers located in switch IDs 11 and 12 correlate to the Storwize V7000 nodes' HBA ports 1
and 3. Therefore, the AIX host HBA A1 [(21, 1)] port is a member of a single zone whose zone
members [(11,1) and (11,2)] are the list of ports shown in both fabrics.
[Figure: Dual-fabric zoning for volume V2 — the same host zoning as for V1, but V2's preferred node is NODE2, so the preferred paths lead to NODE2 and the alternate paths to NODE1 within the I/O group.]
The preferred node can also be specified by the administrator in situations where you need to assign
ownership of a specific volume at creation. In this example, vdisk2 was assigned to NODE2 as its
preferred node at volume creation. SDDPCM path selection load balances I/Os for vdisk2 across the
paths to its preferred node, NODE2. Access from AIX1 uses the paths assigned by SDDPCM to the
volume's preferred node. If the ownership of volumes is spread across both nodes in the I/O group,
all four paths are used to handle AIX1's I/O requests. The zone members for AIX host HBAs A1 and
A2 are therefore unchanged.
Before the host can be presented to the Storwize V7000, the system needs to know the server's HBA
WWPNs (whether you are using a Fibre Channel switch or plugging in directly). Once a host is created,
you can verify host connectivity from the Storwize V7000 management GUI by selecting Settings >
Network > Fibre Channel in the Network filter list. Change the view connectivity selection to Hosts and
select the host name. Much like the storage system view, this visual displays the connections between
the host and the Storwize V7000 ports.
The host shown is zoned for four-path access to its volumes. The guideline for four-path host
zoning is to zone each host port with one Storwize V7000 port per node. The connectivity data
displayed confirms that each host port (Remote WWPN) has four entries (one per node). The Local
WWPN column lists the specific Storwize V7000 node ports zoned with a given host WWPN.
The lsfabric command can also be used to list the SAN connectivity data between the Storwize
V7000 and its attaching ports. It is, in fact, the command invoked by the GUI Fibre Channel
connectivity view.
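As a sketch of the CLI equivalent (the host object name AIX1 is taken from the examples in this unit; output columns vary by code level), the connectivity data can be narrowed to one host:
# List all Fibre Channel logins known to the clustered system, colon-delimited
lsfabric -delim :
# Restrict the listing to the logins associated with a single host object
lsfabric -host AIX1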
The visual shows details of an AIX host object using both the GUI and the CLI. The Name column
lists the WWPN values of its two ports. The host port is inactive if all the nodes with volume
mappings have a login for the specified WWPN but no nodes have seen any Small Computer
System Interface (SCSI) commands from the WWPN within the last five minutes. The host port
becomes active once all nodes with volume (VDisk) mappings have a login for the specified
worldwide port name (WWPN) and at least one node has received SCSI commands from the
WWPN within the last five minutes.
From the CLI output, the AIX host has an object ID of 1. This AIX host is entitled to access only
volumes owned by I/O group 0.
You can use the lshost command to generate a list with concise information about all the hosts
visible to the clustered system and detailed information about a single host.
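For example (object IDs are environment-specific), the concise and detailed views can be produced as follows:
# Concise list of all host objects defined on the clustered system
lshost
# Detailed view of the host with object ID 1, including each WWPN and its state
lshost 1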
For an AIX host, you can check the availability of the FC host adapters and find the worldwide port
name (WWPN) of each port using AIX commands. The command lsdev -Cc adapter | grep fcs
shows the names of the Fibre Channel adapters in the AIX system. The Network Address field in the
output of lscfg -vl fcs0 and lscfg -vl fcs1 identifies the WWPN of each HBA port. The fscsi0
and fscsi1 devices are protocol conversion devices in AIX; they are child devices of fcs0 and fcs1
respectively.
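A short worked example follows (the adapter names and the WWPN value shown are placeholders; actual output differs per system):
# List the Fibre Channel adapters present in this AIX system
lsdev -Cc adapter | grep fcs
#   fcs0 Available 00-08 FC Adapter
#   fcs1 Available 00-09 FC Adapter
# The Network Address field in the adapter VPD is the WWPN of the port
lscfg -vl fcs0 | grep "Network Address"
#   Network Address.............10000000C9123456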
[Figure: Example zoning worksheet — each row lists a member and the ports it is zoned with in Fabric 1 and Fabric 2; one zone is marked as a non-virtualized zone.
Nodes: 11 21 14 24 | 13 23 12 22
Stgbox1: 11 21 14 24 F1 | 13 23 12 22 F2
Stgbox2: 11 21 14 24 E1 E3 | 13 23 12 22 E2 E4
AIX1: 11 21 A1 | 13 23 A2
W2K: 14 24 W1 | 12 22 W2
Linux: 14 24 L1 | 12 22 L2]
Creating a worksheet that documents which host ports should be zoned to which Storwize V7000
ports can aid in spreading the workload across the Storwize V7000 HBA ports. This might be
particularly helpful when host ports are set up with four paths to the Storwize V7000 I/O group.
Not all OS platforms recommend or support eight (or even four) paths between the host ports and
the I/O group. Consult the Storwize V7000 Information Center for platform-specific host attachment
details.
[Figure: Power-on sequence — expansion enclosures first, then the Storwize V7000 control enclosure, then the host systems.]
The sequence in which to turn on the Storwize V7000 system is important. Bringing up the
enclosures initially for configuration, after installation in a rack, requires no particular power-on
sequence. Once the system is operational, power on all expansion enclosures by connecting both
power supply units of each enclosure to their power sources, using the supplied power cables. The
enclosures do not have power switches. Repeat this step for each expansion enclosure in the
system.
Wait for all expansion canisters to finish powering on. Power on the control enclosure by connecting
both power supply units of the enclosure to their power sources, using the supplied power cables.
Verify the system is operational and power on or restart servers and applications.
The power-off sequence is the reverse of the power-on sequence. However, you first need to stop all
I/O from the servers accessing volumes on the Storwize V7000. Stopping I/O operations on the
external storage virtualized by the system is not required.
Depending on the storage system, powering up the disk enclosures and storage system can be a
single step. Ensure that all the devices to be powered on are in the off position before plugging in
the power cables.
Shutting down the system while it is still connected to the main power ensures that the node’s
batteries are fully charged when the power is restored.
Keywords
• Physical planning
• Logical planning
• Management interface
• Clustered system
• I/O Group
• Configuration node
• Boss node
• SSH client
• SAN zoning
• Management GUI
• Fabric zoning
• Host zoning
• Virtualization
• Worldwide node name (WWNN)
• Worldwide port name (WWPN)
• Lightweight Directory Access Protocol (LDAP)
• Service Assistant Tool
Review questions (1 of 2)
1. To initialize the Storwize V7000 node canisters a PC or
workstation must be connected to (blank) on the rear of a
node canister.
Review answers (1 of 2)
1. To initialize the Storwize V7000 node canisters, a PC or workstation
must be connected to the Technician port (T-Port) on the rear of a node
canister.
The answer is Technician port (T-Port).
2. True or False: To access the Storwize V7000 GUI, a user name with a
password must be defined. To access the Storwize V7000 CLI, a user
name can be defined with a password or SSH key; or both.
The answer is true.
3. Which of the following menu options will allow you to create new
users, delete, change, and remove passwords?
a. Settings
b. Monitoring
c. Access
The answer is Access.
Review questions (2 of 2)
4. Which IP address is not a function of Ethernet port 1 of
each Storwize V7000 node?
a. Cluster management IP
b. Service Assistant IP
c. iSCSI IP
d. Cluster alternate management IP
Review answers (2 of 2)
4. Which IP address is not a function of Ethernet port 1 of
each Storwize V7000 node?
a. Cluster management IP
b. Service Assistant IP
c. iSCSI IP
d. Cluster alternate management IP
The answer is cluster alternate management IP, which is
configured to the Ethernet port 2 of the Storwize V7000
node.
5. True or False: Zoning is used to control the number of paths
between host servers and the Storwize V7000.
The answer is true.
Unit summary
• Determine planning and implementation requirements that are
associated with the Storwize V7000
• Implement the physical hardware and cable requirements for the
Storwize V7000 Gen2
• Implement the logical configuration of IP addresses, network
connections, zoning fabrics, and storage attachment to the Storwize
V7000 Gen2 nodes
• Integrate the Storwize V7000 Gen2 into an existing SVC environment
• Verify zoned ports between a host and the Storwize V7000, and between
the Storwize V7000 and a storage system
Overview
This unit highlights the procedures required to initialize a Storwize V7000 system using the
Technician port (T-port) and the Service Assistant (SA) interface. Users will also review the steps to
set up system resources using the graphical user interface (GUI).
In addition, administrative operations to establish user authentication for local and remote users'
management access to both the GUI and CLI are introduced.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Summarize the concept of using the Storwize V7000 Technician port
and Service Assistant tool to initialize the system
• Identify the basic usage and functionality of IBM Storwize V7000
management interfaces
• Recall administrative operations to create user authentication for local
and remote users access to the Storwize V7000 system
This topic starts by listing the management interfaces for the administration of IBM Storwize
V7000 and highlights the steps required to create a clustered system by initializing the Storwize
V7000 node canisters using the Technician (service) port. We will also review the system's basic
configuration and its access mechanisms.
[Figure: Storwize V7000 management interfaces — a cluster of 2 to 8 nodes accessed over Ethernet: the embedded GUI through a web browser over HTTPS (with password and best-practice presets), the CLI over SSH (with key or password), and the embedded CIMOM with its SMI-S/CIM interface for any CIM-compliant resource manager.]
The Storwize V7000 simplifies storage management by providing a single image for multiple
controllers and a consistent user interface for provisioning heterogeneous storage.
The Storwize V7000 provides cluster management interfaces that include:
• An embedded Storwize V7000 graphical user interface (GUI) that supports a web browser
connection for configuration management and shares a common source code base with the IBM
SAN Volume Controller GUI.
• A command-line interface (CLI) accessed using a Secure Shell (SSH) connection, for example with PuTTY.
• An embedded CIMOM that supports SMI-S, which allows any CIM-compliant resource
manager to communicate with and manage the system cluster.
To access the cluster for management, there are two user authentication methods available:
• Local authentication: Local users are those managed within the cluster, that is, without using
a remote authentication service. Local users are created with a password to access the
management GUI, and/or assigned an SSH key pair (public/private) to access the CLI.
• Remote authentication: Remote users are defined and authenticated by a remote
authentication service. The remote authentication service enables integration of Storwize
V7000 with LDAP (or MS Active Directory) to support single sign-on. We will take a closer look
at the remote authentication method later in this unit.
In order to create a Storwize V7000 clustered system, you must initialize the system using the
Technician service port. The technician port is designed to simplify and ease the initial basic
configuration of the Storwize V7000 storage system. This process requires the administrator to be
physically at the hardware site.
To initialize the V7000 Gen2 nodes, you connect a personal computer (PC) to the Technician port
(Ethernet port 4) on the rear of a node canister; only one node is required. This port is identified
by the letter "T". The node canister uses DHCP to configure the IP and DNS settings of the personal
computer. If your PC is not DHCP enabled, configure its IP addresses as follows:
• Static IPv4: 192.168.0.2
• Subnet Mask: 255.255.255.0
• Gateway: 192.168.0.1
• DNS: 192.168.0.1
After the Ethernet port of the PC is connected to the technician port, open a supported web
browser. If the Storwize V7000 node has Candidate status, you are automatically redirected to the
initialization wizard at 192.168.0.1. Otherwise, the service assistant interface is displayed.
The IBM Storwize V7000 Gen2 does not provide IPv6 IP addresses for the technician port.
The System Initialization wizard provides a few simple steps to initialize a new system, or to expand
an existing system, using an IPv4 or IPv6 management address (you can use DHCP or statically
assign one). The subnet mask and gateway are listed with default values but can be changed, if
required.
The most common cause of configuration errors is the inability to access and communicate with the
system because an incorrect IP address, subnet mask, or default gateway was entered. Remember
to validate the entries before you proceed.
The system will generate the svctask mkcluster command to create the system cluster using the
addresses as specified. The Web Server restarts to complete the initialization. This process might
take several minutes to complete. Once complete, disconnect the Ethernet cable from the Storwize
V7000 Gen2 node’s technician port and connect the same PC to the same network as the system.
The system will now re-direct to the management IP address for completion of the system setup.
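The exact command name and flags that the wizard issues depend on the installed code level; the following is only an assumed, hypothetical illustration of a cluster-creation command with a static IPv4 management address (all address values are placeholders):
# Hypothetical illustration only: create the clustered system from the service CLI
satask mkcluster -clusterip 192.168.10.20 -gw 192.168.10.1 -mask 255.255.255.0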
License agreement
• Open a supported browser (Mozilla Firefox, Microsoft Internet Explorer
(IE) or Google Chrome).
ƒ Enable browser with JavaScript support.
• Point browser to the http://management_IP_address of IBM Storwize
V7000.
ƒ System redirects http access automatically to https.
• Accept License Agreement to continue.
To complete the system basic configuration, open a supported browser to the management IP
address of the IBM Storwize V7000 system. If the web browser fails to launch, you might need to
enable JavaScript support through your web browser.
When launching the management GUI for the first time a License Agreement is presented. This is
different from the previous code releases as the license agreement was part of the System setup
wizard. You must accept the Storwize V7000 product license agreement to continue with the
system configuration.
ƒ Click Login.
Once the user accepts the License Agreement, the login screen is displayed. Storwize V7000
maintains a factory-set default username (superuser) and password (passw0rd, with a zero in place
of the letter "o"). The superuser ID is displayed by default. Once you have entered the default
passw0rd, you are immediately prompted to change the password. It is highly recommended to
maintain IT security policies that enforce the use of password-protected individual user IDs rather
than generic, shared IDs such as superuser, admin, or root.
The Welcome to System Setup page of the System Setup wizard displays a list of required
components and content that needs to be available during this system setup configuration. If you
do not have this information ready or choose not to configure some of these settings during the
installation process you can configure them later through the management GUI.
The system name field displays the cluster name that was created during system initialization
using the Service Assistant interface. You may choose to change the system name, which also
applies the name change to the Service Assistant. In a data center environment, it might be best to
define system names that reflect the system's use or client.
The Licensed Functions page is where you specify the additional licenses required to expand the
base functions of the Storwize V7000. These include an Encryption license sized to the number of
enclosures, plus External Storage Virtualization, FlashCopy, Global and Metro Mirror, and Real-time
Compression. Each license option supports capacity-based licensing that is based on the number of
terabytes (TB).
Administrators are responsible for managing use within the terms of the existing licenses. They are
also responsible for purchasing extra licenses when existing license settings are no longer
sufficient. In addition, the system also creates warning messages if the capacity used for licensed
functions is above 90% of the license settings that are specified on the system.
When the Apply and Next > button is clicked, the GUI generates the necessary CLI commands to
update the license settings.
The date and time can be set manually or with an NTP server IP address. However, it is highly
recommended to configure an NTP server so that you maintain a common time stamp for
troubleshooting and tracking across all of your SAN and storage devices.
You will find that every executed task, such as the Apply and Next option, generates the commands
used to achieve the settings specified in the wizard panels. You can click the View more details
hyperlink to view the specific commands issued.
At this time, you cannot choose to use the 24-hour clock. You can change to the 24-hour clock after
you complete the initial configuration.
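If you need to set or change the NTP server after the initial setup, a hedged CLI sketch (the IP address is a placeholder) is:
# Point the clustered system at an NTP server
chsystem -ntpip 192.168.1.100
# The configured NTP address appears in the lssystem output
lssystem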
IBM Storwize V7000 supports hardware and software encryption. There are no "trial" licenses for
encryption, on the basis that when a trial ran out, access to the data would be lost. For either
hardware or software encryption to be enabled at the system level, you must therefore have
purchased an encryption license before you activate the function.
Activation of the license can be performed in one of two ways, either automatically or manually, and
can be performed during System Setup or at a later time.
As part of the Call Home procedure, you need to specify the address where the system is
physically located. This information is used by IBM service personnel for troubleshooting, on-site
service requirements, and shipment of parts.
Next, enter the contact details for the person who will be contacted in the event call home service is
required.
Enter the SMTP server IP address through which call home and event notifications are routed. You
can click the Ping button to verify that there is network access to the email (SMTP) server.
Ensure that the email server accepts SMTP traffic, because some enterprises do not permit SMTP
traffic, especially if the destination email address is outside the enterprise.
Review the setting values in the Summary panel. You can use the Back button to make any
modifications. Once you click Finish, the system setup of the initial configuration is complete. The
resulting view depends on the initial configuration options chosen.
[Figure: Add Enclosures wizard — numbered steps showing how to select the control enclosure to be added and complete the wizard.]
The System Overview window appears, showing that your system configuration is complete and that
the next step is adding the storage enclosure. Click the empty box in the center of the screen and
the add storage enclosure wizard starts. From the Add Enclosures panel, select the available
enclosures to be added to the system and click Next. Review the summary panel and click Finish to
complete the add enclosure procedure. Click Close when the completion panel opens.
The System Overview window appears, showing that your system configuration is complete. At this
point you are ready to configure and virtualize storage system resources such as pools, hosts, and
volumes.
Software upgrade is reviewed later
Once the system setup is complete, the Storwize V7000 GUI might display a reminder that a
current update is available. This notification indicates the importance of maintaining the latest
software code. The Settings > System > Update System link will redirect you to check for the
latest version.
This topic explores the IBM Storwize V7000 management graphical user interface (GUI) and its
accessing mechanisms. This topic also describes the steps that are required to configure an SSH
(PuTTYGen) connection and create user authentication for access.
With the release of the Spectrum Virtualize V7.4 code, the IBM Storwize V7000 management GUI
welcome screen changed from what was formerly known as the Overview panel to a dynamic
System panel, with enhanced functions available directly from the welcome screen.
These panels group common configuration and administration objects and present individual
administrative objects to GUI users. They provide common, unified procedures to manage all of
these systems in a similar way, allowing administrators to simplify their operational procedures
across all systems.
The dynamic menu icons are grouped to display the associated menu options available. The
dynamic menu is a fixed menu on the left side of the management GUI window and is accessible
from any page inside the GUI. As of the V7.4 release, the dynamic menu offers seven menu options;
there is no longer an Overview menu or a System Details option to choose. To browse by using this
menu, hover the mouse pointer over the various icons and choose the page that you want to display.
You can use the various function icons to examine available storage, create storage pools, create
host objects, create volumes, map volumes to host objects, create user access, update system
software, and configure network settings.
The System panel (which is the default panel) allows you to monitor the entire system capacity
as well as view details of the control and expansion enclosures and the various hardware
components of the system.
The hardware is represented in its physical form, with component indicators providing a
dynamic view of your system.
For systems with multiple expansion enclosures, the number indicates the total of detected
expansion enclosures that are attached to the control enclosure. Components can be selected
individually to view the status and Properties in detail. In addition, you can view individual hardware
components and monitor their operating state.
Select Monitoring > System > Events to track all informational, warning, and error messages that
occur in the system. You can apply various filters to sort them, or export the listed information to an
external comma-separated values (CSV) file.
Monitoring: Performance
• Select Monitoring > System > Performance to view system statistics
in MBps or IOPS.
Select Monitoring > System > Performance to capture various reports of general system
statistics with regard to processor (CPU) utilization, host and internal interfaces, volumes, and
MDisks. You can switch between MBps and IOPS.
Actions menu
• The Actions menu lists actions to rename the system, update the
system or power off the entire system.
ƒ Located in the upper-left corner of the home screen of the Storwize
V7000 GUI.
ƒ The Actions menu can be accessed by right-clicking anywhere in the
home screen GUI blank space.
• Each enclosure also has its own Action menu, which can be
displayed by right-clicking on an enclosure.
The dynamic system view in the middle of the System panel can be rotated by 180° to view the front
and rear of the nodes. When you click a specific component of a node, a pop-up window indicates
details about the unit and the components installed.
You can right click on the Storwize V7000 and select Properties to view general details such as the
product name, status, machine type and model number, serial number and FRU part number.
This context menu also provides additional options to rename a node, power off the node (without
option for remote start), remove the node or enclosure from the system, or list all volumes
associated with the system.
Once a node has been added to the cluster, you can rename it by right-clicking the node and
selecting Rename.
In general, changing an object name is not a concern, as Storwize V7000 processing is done by
object IDs and not object names. One exception is changing the name of a node when the iSCSI
protocol is being used for host data access to that node. Be aware that the Storwize V7000 node
name is part of the node's IQN. Changing the node name therefore requires more planning, as the
iSCSI host connections need to be updated accordingly; otherwise, iSCSI-connected hosts might
lose access to their volumes.
For a quick view of a specific adapter, hover the mouse pointer over any component in the rear of
the node to see its status, WWPN, and speed. For more detailed information, right-click to open the
Properties view.
Select View and choose the option Fibre Channel Ports to see the list and status of available FC
ports with their WWPN. The View option also allows you to display Storwize V7000 expansion
enclosures and configured Ethernet ports.
Modify Memory
• Displays the total amount of memory allocated to the I/O group for
certain service features
The Modify Memory option displays the default memory that is available for Copy Services or
volume (VDisk) mirroring operations. This allocation can be modified to provide sufficient memory
for certain routine and advanced services on the Storwize V7000 system.
The status indicators provide information about capacity usage, performance (bandwidth, IOPS,
and latency), and the health status of the system. The status indicators are visible from all panels in
the GUI. They also show currently running tasks and tasks that were recently completed.
[Figure callouts: Physical capacity — the initial (fixed) amount of physical storage that was allocated; Virtual capacity — the amount of storage that was allocated versus the amount of allocated storage actually used.]
The cylindrical shape that is located around the nodes displays the capacity utilization for the entire
system. This same information is also shown by the left status indicator. You can switch
between views to display information about the overall physical capacity (the initial amount of
storage that was allocated) as well as the virtual capacity (with thin-provisioned storage, volume
usage changes dynamically as data grows or shrinks, but you still see a fixed capacity).
A modified Overview panel is accessible by clicking the Overview hyperlink in the top-right corner of
the System panel. The Overview panel offers a structure similar to that of previous versions,
illustrating a task flow of how storage is provisioned as well as the existing configuration. Resources
managed by the cluster are itemized and updated dynamically. You can click any option to be
redirected to the selected panel.
The Storwize V7000 users and the access level of the users are defined and managed through the
Access menu. The Users panel allows you to specify the name and password of the user and
delete users, change and remove passwords, and add and remove Secure Shell (SSH) keys (if the
SSH key has been generated). The SSH key is not required for CLI access, and you can choose to
use either SSH or a password for CLI authentication.
A Storwize V7000 clustered system maintains an audit log of the commands successfully executed
through the management GUI or the CLI. It also indicates which users performed particular actions
at certain times.
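The audit log can also be inspected from the CLI; as a hedged example (output fields vary by code level):
# Display the five most recent audit log entries
catauditlog -first 5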
• Remote authentication: Access is performed from a remote server; requires validation of the
user's permission to access, for example:
login as: superuser
superuser@<management IP>'s password:
IBM_Storwize:V009B:superuser>lscurrentuser
name superuser
role SecurityAdmin
IBM_Storwize:V009B:superuser>
Administrators can create role-based user groups where any users that are added to the group
adopt the role that is assigned to that group. Roles apply to both local and remote users on the
system and are based on the user group to which the user belongs. A local user can only belong to
a single group; therefore, the role of a local user is defined by the single group to which that user
belongs.
The User Group navigation pane lists the user groups predefined in the system. To create a user
group, you must define its role. Once created, you can see the authentication type and the
number of users assigned to the group.
User does not have the authority to change the state of the cluster
or cluster resources
The user group assigned to a user controls the role or the scope of
operational authority granted to that user.
There are five default user groups and roles. When adding a new user to a group, the user must be
associated with one of the corresponding roles:
• Security Administrator: User has access to all the functions provided by both the management
GUI and CLI including those related to managing users, user groups, and authentication.
• Administrator: User has access to all the functions provided by both the management GUI and
CLI except those related to managing users, user groups, and authentication.
• Copy Operator: The user has the authority to start, modify, change the direction of, and stop
FlashCopy mappings and Remote Copy relationships at the standalone or consistency group level,
but cannot create or delete definitions. The user has access to all the functions associated with the
Monitor role.
• Service: The user has a limited command set related to servicing the cluster. It is designed
primarily for IBM service personnel. The user has access to all the functions associated with the
Monitor role.
• Monitor: The user has access to all information-related panels and commands, can back up
system configuration metadata, manage their own password and SSH key, and issue commands
related to diagnostic data collection. The user does not have the authority to change the state of the
cluster or cluster resources.
When a Storwize V7000 clustered system is created, the authentication settings default to local,
which means that the Storwize V7000 contains a local database of users and their privileges. The
Access > Users option can be used to perform user administration such as create new users,
apply password administration, plus add and remove SSH keys. Authorized access to the GUI is
provided for the default superuser ID which belongs to the SecurityAdmin user group. Therefore,
users can be created on the system using the user accounts they are given by the local superuser
account. With a valid password and username, users are allowed to login into both GUI and CLI
with the defined access level privileges. If a password is not configured, the user will not be able to
log in to the GUI.
SSH keys are not required for CLI access. However, you can choose either to use SSH or a
password for CLI authentication. The CLI can be accessed with a pair of public and private SSH
keys. The public key is stored in the cluster as part of the user create process. In order for the
superuser to have access to the CLI, the SSH public key must be upload. We will discuss
authentication using SSH keys later in the CLI topic.
Create additional users with up to 256 characters
User roles are predefined and cannot be changed or added to. However, a user with the
SecurityAdmin role does have authorization to create new user groups and to assign a predefined
role, including roles with lesser authority, to each group.
To add an additional local user, select the Create User option:
• Enter the Name (user ID) that you want to create and then enter the password twice. A user
name can be up to 256 characters and cannot contain a colon, comma, percent sign, or quotation
marks (double or single).
• Select the user Authentication Mode, then select the access level that you want to assign to the
user.
• Select the User Group to which the user belongs. A local user must be associated with one
and only one user group. The Security Administrator (SecurityAdmin) group is the maximum
access level.
• If a local user requires access to the management GUI or CLI, then a password, an SSH key, or
both are required.
▪ Enter and verify the password.
▪ Select the location from which you want to upload the SSH Public Key file that you created
for this user.
• Click Create.
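The GUI generates a CLI mkuser command from these selections. A hedged sketch of an equivalent CLI sequence (the user name, password, group, and key file path are placeholders):
# Create a local user with a password in the Administrator group
mkuser -name storadmin -usergrp Administrator -password Passw0rd1
# Optionally associate an SSH public key with the user for CLI key authentication
chuser -keyfile /tmp/storadmin_key.pub storadmin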
IBM Storwize V7000 remote authentication using LDAP is supported. This enables authentication
with a domain user name and password instead of a locally defined user name. If the enterprise
has multiple Storwize V7000 clusters, user names no longer need to be defined on each of
these systems; centralized user management is performed at the domain controller level instead of
on the individual Storwize V7000 clusters.
Before configuring authentication for a remote user, you first verify that the remote authentication
service is configured for the SAN management application. You also need to configure remote
authentication before you can create a new user.
To configure the remote authentication service, navigate to the Directory Services panel. Click
Configure Remote Authentication. The supported types of LDAP servers are IBM Tivoli Directory
server, Microsoft Active Directory (MS AD), and Open LDAP (running on a Linux system).
The user that is authenticated remotely by an LDAP server is granted permission on the Storwize
V7000 system according to the role that is assigned to the group of which the user is a member.
That is, the user group must exist with an identical name on the Storwize V7000 and on the LDAP
server for the remote authentication to succeed.
In this example, an MS Active Directory server is located at 10.6.5.30. A user group by the name of
IBM_Storage_Administrators has been defined to contain two users: SpunkyAdmin and
WiskerAdmin. The domain name is reddom.com.
Ensure LDAP is enabled to allow remote user access
IBM_Storwize:V009B:SpunkyAdmin>lscurrentuser
name SpunkyAdmin
role Administrator
IBM_Storwize:V009B:SpunkyAdmin>
The user group IBM_Storage_Administrators is now listed in the User Groups filter list, and
remote access is enabled. User SpunkyAdmin of the IBM_Storage_Administrators group, which is
defined on the MS Active Directory server, is able to log in to both the GUI and the CLI using the
network-defined user name and password. However, because this user group is defined to support
remote authentication, the users of this group are not defined locally in the Storwize V7000 system.
Use the lsuser or lscurrentuser command to view remote users.
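As a hedged illustration of checking this from the CLI (the names shown come from this example; columns vary by code level):
# Local users are listed; remotely authenticated users are managed on the LDAP server
lsuser
# Show the name and role of the user who owns the current CLI session
lscurrentuser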
A user can log in with either the short name or the fully qualified user name. Defining user
credentials at the domain controller enables centralized user management. More efficiency is
realized because additions and removals of user credentials only need to be performed once, on the
LDAP server.
[Figure: SSH key authentication flow — generate a public/private key pair on the workstation, install the public key in the Storwize V7000 cluster, and establish secure communications between the SSH client and the Storwize V7000 using the key pair.]
To use the CLI, the PuTTY program (on any workstation with PuTTY installed) must be set up to
provide the SSH connection to the Storwize V7000 cluster. The command-line interface (CLI)
commands use the Secure Shell (SSH) connection between the SSH client software on the host
system and the SSH server on the system cluster. For Windows environments, the Windows SSH
client program PuTTY can be downloaded.
A configured PuTTY session using a generated Secure Shell (SSH) key pair (Private and Public) is
needed to use the CLI. The key pair is associated with a given user. The user and its key
association are defined using the superuser. The public key is stored in the system cluster as part
of the user definition process. When the client (for example, a workstation) tries to connect and use
the CLI, the private key on the client is used to authenticate with its public key stored in the system
cluster.
The CLI can also be accessed using a password instead of an SSH key. However, when invoking
commands from scripts, using the SSH key interface is recommended because it is more secure.
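For scripted use, a hedged sketch with a generic SSH client (the key path, user name, and cluster address are placeholders):
# Run a single CLI command non-interactively using key authentication (OpenSSH)
ssh -i ~/.ssh/storwize_key storadmin@cluster_mgmt_ip lssystem
# The PuTTY command-line client plink accepts the .PPK private key format
plink -i C:\Keys\PRIVATEKEY.PPK storadmin@cluster_mgmt_ip lssystem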
Since most desktop workstations are Windows-based, we are using PuTTY examples.
To generate a key pair on the local host, you need to specify the key type. PuTTYGen defaults
to SSH-2 RSA, which is recommended because it provides a better security level.
SSH2 is separated into modules and consists of three protocols working together:
• SSH Transport Layer Protocol (SSH-TRANS)
• SSH Authentication Protocol (SSH-AUTH)
• SSH Connection Protocol (SSH-CONN)
The SSH-TRANS protocol is the fundamental building block, providing the initial connection,
packet protocol, server authentication, basic encryption services, and integrity services. PuTTYGen
supports key lengths of up to 4096 bits and defaults to 1024; however, it is recommended to use a
minimum of 2048 bits. Once you have chosen the type of key pair to generate, click Generate. This
procedure generates random characters used to create a unique key.
A helpful tip is to move the cursor over the blank area in the Key Generator window until the
progress bar reaches the far right. Movement of the cursor causes the keys to be generated faster.
The progress bar will move faster with more mouse movement.
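On non-Windows workstations, an equivalent key pair can be generated with OpenSSH; a hedged sketch (the file name and comment are placeholders):
# Generate a 2048-bit RSA key pair; the public key is written to storwize_key.pub
ssh-keygen -t rsa -b 2048 -C "username@hostname" -f ~/.ssh/storwize_key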
[Figure: Saving the PuTTYGen key pair — the public key is saved (for example as \Keys\PUBLICKEY.PUB) and the private key as \Keys\PRIVATEKEY.PPK; the key comment is set to username@hostname, and SSH keys are unique to a user.]
The result of the key generation shows the public key (in the box labeled Public key for pasting into
OpenSSH authorized_keys file).
The Key comment enables you to distinguish between multiple keys. It is therefore generally
recommended to set this to username@hostname for easy identification.
The Key passphrase is an additional way to protect the private key and is never transmitted over
the Internet. If you set a passphrase, you are asked to enter it before any connection is made
via SSH. If you cannot remember the key passphrase, there is no way to recover it.
Save the generated keys using the Save private key and Save public key buttons respectively. You
are prompted for the name and location of the file in which to place each key. The default location is
C:\Support Utils\PuTTY; if another location is chosen, make a record of it for later reference. The
public key can be saved in any format such as *.PUB or *.txt and is stored in the cluster as part of
user management. The private key, however, uses the PuTTY format *.PPK, which is required for
authentication.
The SSH-AUTH protocol defines three authentication methods: public key, host based, and
password. Each SSH-AUTH method is used over the SSH-TRANS connection to authenticate the
client to the server.
Now that we have generated the SSH key pair, if a user requires CLI access for Storwize V7000
management through SSH you must provide the user with a valid SSH public key file. The SSH
public key option can also be configured later after user creation. In this case a password for the
user is required.
To upload the SSH public key for an existing user, right-click the user and select Properties.
From the user pane, click the Browse button, which opens Windows Explorer.
Navigate to the \Keys folder, select the public key file, and click Create. The CLI mkuser
command is generated to define the user with SSH key authentication, allowing a CLI login with no
password required.
This is an optional feature for users and is not compulsory for Storwize V7000 management.
Now that you have stored the public key in the Storwize V7000 cluster, you need to establish a
CLI SSH connection that uses the private key .PPK file. To do so, open the PuTTY client. From
the Category navigation tree, click Session, enter the management IP address or DNS host
name of the cluster, and accept the default port 22 that is used for the SSH protocol. Ensure that SSH
is selected as the connection type.
Next, select Connection > SSH > Auth. Since we are using a generated key pair, we use the private
key that matches the corresponding public key. In the Private key file for authentication field, use
the Browse button to navigate to the location of the generated private .PPK file, or copy and paste the
file path into the field.
Once the session parameters are specified, return to the Session pane and provide a name to
associate with the new session definition in the Saved Sessions field. Click Save to
save the PuTTY session settings; the session can then establish SSH private key authentication over
the CLI SSH connection. PuTTY is a commonly used terminal client.
Provided that SSH key authentication has been enabled, the PuTTY client prompts only for the
user ID. This is the same user ID that was associated with the uploaded public key. Once the login
ID is entered, SSH authentication validates the private key presented from the client against the
public key stored for that user ID in the cluster, in place of a password.
In this example, we have issued the lscurrentuser command, which lists the user name under which
the current terminal session is logged in.
The PuTTY SSH client software is available in portable form and requires no special
setup. For other operating systems, use the default or installed SSH clients.
Once SSH authentication has been established, on the next login using the PuTTY client
you only need to select the saved session name and click Load and then Open to recall the saved
management IP address.
The command-line interface (CLI) enables you to manage the Storwize V7000 by typing
commands. Based on the user's privilege level, commands can be issued to list information and to
perform actions. The commands follow a logically consistent command-line syntax; the syntax of a
command is basically the set of rules for running it. It is important to understand how to read the
syntax notation so that you can use a command properly.
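As a hedged illustration of these conventions (the volume name is a placeholder), most information commands accept filters and a delimiter option:
# Concise listing of all volumes, colon-delimited
lsvdisk -delim :
# Concise listing filtered by volume name
lsvdisk -filtervalue name=vdisk1
# Detailed view of a single volume
lsvdisk vdisk1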
Keywords
• System initialization
• Cluster system
• Service Assistant Tool
• Storwize V7000 GUI
• Secure Shell (SSH) key
• Stretched System
• Event notifications
• Directory Services
• Remote authentication
• Lightweight Directory Access Protocol (LDAP)
• Support package
• Upgrade test utility
• User group
• Remote user
• System audit log entry
Review questions (1 of 2)
1. To initialize the Storwize V7000 node canisters a PC or
workstation must be connected to (blank) on the rear of a
node canister.
Review answers (1 of 2)
1. To initialize the Storwize V7000 node canisters, a PC or
workstation must be connected to the Technician port (T-Port) on the
rear of a node canister.
The answer is Technician port (T-Port).
3. Which of the following menu options will allow you to create new
users and delete, change, and remove passwords?
a. Settings
b. Monitoring
c. Access
The answer is Access.
Review questions (2 of 2)
4. List the two administration management interface options
for IBM Storwize V7000.
5. List the two authentication mechanisms supported by IBM
Storwize V7000.
6. True or False: The CLI interface can only be accessed
using the Service Assistant IP address.
Review answers (2 of 2)
4. List the two administration management interface options
for IBM Storwize V7000.
The answers are web browser-based GUI and SSH
protocol based command-line interface.
Unit summary
• Summarize the concept of using the Storwize V7000 Technician port
and Service Assistant tool to initialize the system
• Identify the basic usage and functionality of IBM Storwize V7000
management interfaces
• Recall administrative operations to create user authentication for local
and remote users access to the Storwize V7000 system
Overview
This unit identifies the provisioning and management of the Storwize V7000 internal storage
resources as well as external storage devices that are part of the SAN fabric.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Summarize the infrastructure of Storwize V7000 block storage
virtualization
• Recall steps to define internal storage resources using GUI
• Identify the characteristics of external storage resources
• Summarize how external storage resources are virtualized for Storwize
V7000 management GUI and CLI operations
• Summarize the benefit of quorum disks allocation
• Recognize how external storage MDisk allocation facilitates I/O load
balancing across zoned storage system ports
• Distinguish between Storwize V7000 hardware and software encryption
• Internal storage
• External storage
• Encryption
This topic examines the Storwize V7000 storage infrastructure and identifies its use of SAN block
aggregation to virtualize its resources.
[Figure: Symmetric, in-band virtualization in the SAN — the virtualization layer sits in the data path, acting as a SCSI target to the hosts and as an initiator to the storage.]
Two major approaches in use today for the implementation of block-level aggregation and
virtualization are Symmetric (In-band Appliance Virtualization) and Asymmetric (Out-of-band or
controller-based virtualization).
IBM Storwize V7000 is implemented by using symmetric virtualization, an in-band SAN (fabric-based
appliance) approach. The Storwize V7000 control enclosure sits in the data path, and all I/O
flows through the device: it acts as a target for I/O requests from the hosts and as an initiator for I/O
requests to the storage. The redirection is performed by issuing new I/O requests to
the storage.
The controller-based (asymmetric) approach offers high functionality, but it fails in terms of
scalability or upgradeability. The device is usually a storage controller that provides an internal
switch for external storage attachment. In this approach, the storage controller intercepts and
redirects I/O requests to the external storage as it does for internal storage. The actual I/O requests
are themselves redirected. Because of the nature of its design, there is no true decoupling with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Only the fabric-based appliance solution offers an independent and scalable virtualization platform
that provides enterprise-class copy services. The fabric-based appliance is open for future
interfaces and protocols, which allows you to choose the disk subsystems that best fit your
requirements, and does not lock you into specific SAN hardware.
With the controller-based approach, there are data migration issues, such as how to reconnect the
servers to the new controller and how to reconnect them online without any effect on your
applications. With this approach, if there is a need to replace a controller, it also indirectly means
replacing the entire virtualization solution.
Storage pools built from storage systems of many vendors: IBM, HDS, HP, EMC, Sun, NetApp, Fujitsu Eternus, Bull Storeway, NEC iStorage, Pillar Data, Texas Memory, Xiotech, Nexsan, Compellent (ongoing).
IBM Storwize V7000 Gen2 is an appliance-based in-band block virtualization process, in which
intelligence, including Spectrum Virtualize advanced storage functions, is migrated from individual
(internal/external) storage devices to the storage network. Therefore, Storwize V7000 is a complete
virtualization solution for flexibility, scalability, and redundancy.
IBM Spectrum Virtualize adds an abstraction layer to the existing SAN infrastructure, enabling
enterprises to centralize storage provisioning with a single point of control. The Spectrum Virtualize
approach is based on a scale-out cluster architecture and lifecycle management tasks. Spectrum
Virtualize allows for non-disruptive replacements of any part in the storage infrastructure, including
the Storwize V7000 devices themselves. It also simplifies compatibility requirements that are associated with heterogeneous server and storage environments. Therefore, all advanced functions are implemented in the virtualization layer, which allows switching storage array vendors without impact. This enables application server storage requirements to be articulated in terms of performance, availability, or cost.
One of the significant benefits of this approach is that the Storwize V7000 control enclosure, a
virtualization engine, provides a common platform for IBM Spectrum Virtualize advanced functions.
The virtualization engine provides one place to perform, administer, and control functions like Copy
Services regardless of the underlying storage.
Figure: The block aggregation model – block aggregation can be implemented at the host layer, the storage network layer, or the storage device (RAID) layer. Storwize V7000 is implemented as a clustered appliance in the storage network layer.
Virtualization at the disk layer is referred to as block-level virtualization, or the block aggregation
layer. The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage controllers), or
in storage devices (intelligent disk arrays). IBM’s implementation of a block aggregation solution is
the Storwize V7000 which is implemented as a clustered appliance in the storage network layer.
The key concept of virtualization is to decouple the storage from the storage functions that are
required in the storage area network (SAN) environment. This means abstracting the physical
location of data from the logical representation of the data. The virtualization engine presents
logical entities to the user and internally manages the process of mapping these entities to the
actual location of the physical storage.
Storwize V7000 block-level virtualization provides a layer of abstraction between the application
servers and the underlying physical storage systems. By having the virtualization layer reside
above the storage controller level, application servers can be configured to use virtual disks while
the physical disks (or disks surfaced by the RAID controllers) are hidden from the application
servers.
Figure: Storage pools (Pool 1, Pool 2, Pool 3) built from managed disks, which can come from up to 256 disk systems. LUNs are assigned to storage pools (max. 128), the extent size is defined per pool (16 MB to 8 GB), and pools can be built from RAID 5, hybrid, or RAID 10 arrays. *Quorum functionality is not supported on flash drives.
The Storwize V7000 Gen2 hardware consists of control enclosures and expansion enclosures,
connected with wide SAS cables (four lanes of 6Gbit/s or 12Gbit/s). Each enclosure houses 2.5" or
3.5" drives. The control enclosure contains two independent control units (nodes) based on SAN
Volume Controller technology, which are clustered via an internal network.
The Storwize V7000 system can support up to four I/O groups (8-node system). Each I/O group
(node pairs) manages assigned volumes. The term I/O Group is used to denote the group of
volumes managed by a specific node-pair. A single I/O group can manage up to 1024 volumes for a
maximum of 4096 total with all four node-pairs.
Each volume can be as small as 16 MB or as large as 2 TB in size, and can be dynamically resized
smaller or larger as needed.
The cluster manages a group of physical volumes called managed disks (MDisks). This is the
foundation of an I/O virtualization in which MDisks are grouped into storage pools. Managed disks
can also be LUNs selected from up to 64 disk subsystems.
The Storwize V7000 uses the storage from the storage pools to create virtual volumes. Volumes can be segregated into managed disk groups for whatever reason. For example, you can separate a RAID5 group from a RAID10 group, separate the EMC disks from the HDS disks, or separate data that belongs to different customers or departments. Typically, like devices are placed into a group.
Therefore, when a volume is defined, you designate which node-pair handles the I/O (which I/O
group it belongs to) and which managed disk group you want as the physical storage of the data.
The mapping for which volumes are stored onto which managed disks is stored inside each node
and mirrored across all nodes so that all nodes know where all data is stored. You also can back up
this mapping externally to handle the unlikely event of losing all nodes.
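As a hedged illustration of backing up that mapping externally, the cluster configuration metadata can be saved from the CLI (copying the resulting file off the system, for example with scp, is an assumption of this sketch rather than a step prescribed by this course):
svcconfig backup
This writes backup files such as svc.config.backup.xml to the /tmp directory of the configuration node, from where they should be copied to a location outside the cluster.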
Figure 5-8. Spectrum Virtualize with Storwize V7000: One complete solution
IBM Spectrum Virtualize supports different tiers of storage from different vendors with different
interfaces and multipathing drivers. But the hosts see only one device type, one multipathing driver
and one management interface regardless of the number of types of storage controllers being
managed by the Storwize V7000.
• External storage
• Encryption
This topic examines the Storwize V7000 storage infrastructure and identifies its use of SAN block
aggregation to virtualize its resources.
Figure: Managed disks (MDisks), such as RAID 5 arrays, grouped into storage pools.
The basic logical building blocks are storage units called managed disks (MDisks), which are added to storage pools to create virtualized storage resources that are presented to hosts.
A Storwize V7000 system can manage a combination of internal and supported external storage
systems. Internal storage is the RAID-protected storage that is directly attached to the system using
the drive slots in the front of the node or with the expansion enclosure. The Storwize V7000
automatically detects the drives that are attached to it and displays them within the GUI as internal
or external storage.
The external storage subsystems are independent back-end disk controllers that are discovered on the same fabric by the Storwize V7000 system for virtualization.
Managed disks
• A managed disk must be protected by RAID to prevent loss of the entire
storage pool.
• MDisk can be either part of a RAID array of internal storage or a logical unit
(LUN) from external storage.
• Each managed disk group can contain up to 128 managed disks and a
maximum of 4096 MDisks per system.
• MDisk is not visible to a host system on the SAN.
Figure: Example arrays built from member disks of different classes – 3.2 TB 12 Gbps flash, 1.8 TB 10K RPM, and 2 TB 7.5K RPM drives.
A managed disk (MDisk) refers to the unit of storage that the Storwize V7000 system virtualizes.
This unit might be a logical volume from an external storage array that is presented to the system, a
RAID array that is created on internal drives, or an external expansion that is managed by the
Storwize V7000 node. The node allocates these MDisks into various storage pools for different
usage or configuration needs.
If zoning has been configured correctly, MDisks are not visible to a host system on the storage area network, because they should be zoned only to the Storwize V7000 system.
Managed disks are grouped by the storage administrator into one or more pools of storage that are known as storage pools, or managed disk groups. The grouping is typically based on performance
and availability characteristics. While a storage pool can span multiple storage systems, for
availability and ease of management it is recommended that a storage pool be populated with
MDisks from the same storage system.
Figure: MDisks (RAID 5 arrays) in the pool Pool_IBMSAS divided into extents (Extent 1 through Extent n).
Once the MDisks are placed into a storage pool they are automatically divided into a number of
extents. The system administrator must determine how many storage pools are to be defined and
the extent size to be used by the pool. Each clustered system can manage up to 1024 storage
pools, 128 parent pools, and 1023 child pools.
The Storwize V7000 management GUI provides a default extent size of 1024 MB. To change the extent size, you must enable this feature using the management GUI preference option. Extent sizes range from 16 to 8192 MB. The choice of extent size affects the total amount of storage that can be managed by a cluster, because the system can address only a fixed number of extents. Once set, the extent size stays constant for the life of the pool.
A 16 MB extent size supports a maximum capacity of 64 TB, and a 32 MB extent size supports up to 128 TB. Increasing capacity by powers of 2, the 8192 MB extent size allows for 32 PB of Storwize managed storage.
For most systems a capacity of 1 to 2 PB is sufficient. A preferred practice is to use 256 MB for larger clustered systems. To avoid wasting storage capacity, the volume size should be allocated as a multiple of the extent size. You can specify different extent sizes for different storage pools; however, you cannot migrate volumes between storage pools with different extent sizes. If possible, create all your storage pools with the same extent size to facilitate easy migration of volume data from one storage pool to another.
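As a minimal sketch of creating a pool with a non-default extent size from the CLI (the pool name is an assumption used only for illustration), the -ext parameter of mkmdiskgrp sets the extent size in MB:
mkmdiskgrp -name Pool_IBMSAS -ext 256
Volumes created later in this pool are then allocated in 256 MB extents.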
Mapping of extents
• Storage pool extents are used to create volumes (VDisks).
• Whenever you create a new volume, you must pick a single storage pool to provide the physical capacity.
• Extents are taken from each MDisk in a storage pool (round robin, from the unallocated extents) to fulfill the required capacity of the specified volume.
• By default, the created volume stripes all of its data across all the managed disks in the storage pool.
Figure: A striped volume (the default) created from a storage pool (with a name and extent size) – extents are taken round robin from each MDisk, in the order Extent 1a, 2a, 3a, 1b, 2b, 3b, and so on.
The extents from a given storage pool are used by the Storwize V7000 to create volumes which are
known as logical disks. A volume also represents the mapping of extents that are contained in one
or more MDisks of a storage pool. When an application server needs a disk capacity of a given
size, a volume of that capacity can be created from a storage pool that contains MDisks with free
space (unallocated extents). Storwize V7000 creates the volume by allocating extents from a given
storage pool. The number of extents that are required is based on the extent size attribute of the
storage pool and the capacity that is requested for the volume. By default, extents are taken from all
MDisks contained in the storage pool in round robin fashion until the capacity of the volume is
fulfilled.
A volume is sourced by extents that are contained in only one storage pool. Traditionally, a storage pool is referred to as an MDisk group and a volume is referred to as a virtual disk (VDisk). These terms are still used, and the CLI command syntax is still based on the traditional terms MDisk group and VDisk.
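As a hedged sketch of creating a striped volume with the traditional CLI syntax (the pool, I/O group, and volume names are assumptions for illustration only):
mkvdisk -mdiskgrp Pool_IBMSAS -iogrp io_grp0 -size 100 -unit gb -name appvol01
Because no volume type is specified, the volume is striped across all MDisks in the pool by default.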
A storage pool provides the pool of storage from which volumes are created. You must ensure that
the MDisks that make up each tier of the storage pool have the same performance and reliability
characteristics to avoid causing performance problems and other issues.
MDisks that are used in a single-tiered storage pool must have the same hardware characteristics,
such as the same RAID type, RAID array size, disk type, and RPMs. Any disk subsystems that are
providing the MDisks must also have similar characteristics, such as maximum input/output
operations per second (IOPS), response time, cache, and throughput. The MDisks that are used should also be the same size, so that each MDisk provides the same number of extents. If that is not feasible, check the distribution of the volumes’ extents in that storage pool.
A multi-tiered storage pool contains a mix of MDisks with more than one type of disk tier attribute. A
multi-tiered storage pool that contains both generic_hdd and generic_ssd or flash MDisks is also
known as a hybrid storage pool. Therefore a multi-tiered storage pool contains MDisks with various
characteristics as opposed to a single-tiered storage pool. However, it is a preferred practice for
each tier to have MDisks of the same size and MDisks that provide the same number of extents.
RAID levels provide various degrees of redundancy and performance, and have various restrictions
regarding the number of members in the array. IBM Storwize V7000 supports RAID levels 0, 1, 5, 6
and 10. With the release of the Spectrum Virtualize V7.6 code, IBM Storwize V7000 also supports Distributed RAID 5 and Distributed RAID 6.
• RAID 5
ƒ Uses four I/Os per logical write
ƒ Often the best overall choice for performance and usable storage
Figure: A storage pool built from RAID 5 array MDisks.
In general, RAID 10 arrays are capable of higher throughput for random write workloads than RAID
5 because RAID 10 requires only two I/Os per logical write compared to four I/Os per logical write
for RAID 5. For random reads and sequential workloads, often no benefit is gained. With certain
workloads, such as sequential writes, RAID 5 often shows a performance advantage.
Selecting RAID 10 for its performance advantage comes at a high cost in usable capacity and in
most cases RAID 5 is the best overall choice.
If you are considering RAID 10, use Disk Magic to determine the difference in I/O service times
between RAID 5 and RAID 10. If the service times are similar then the lower-cost solution makes
the most sense. If RAID 10 shows a service time advantage over RAID 5 then the importance of
that advantage must be weighed against its additional cost.
Figure 5-19. Array and RAID levels: Drive counts and redundancy
An array can be created using the GUI or CLI. Both use the mkarray command. The Storwize
V7000 management GUI offers presets implemented based on best practices guidelines. After the
array is created, it can be used instantly and moved to a pool where volumes can be created.
Volumes can be written immediately after creation and mapping.
Redundancy depends on the type of RAID level that is selected at creation time. To reduce the
calculation of parity information and to improve performance, the cache attempts to combine writes
together into full strides. The usable capacity for RAID 0 is the drive count times the drive size, which is 100% usage but at the cost of no redundancy. A redundancy of 1 means that one drive can fail without failing the array. For RAID 1, the usable capacity is only one drive's size (50%) because the other drive is just a mirror. The supported drive counts for RAID 5, 6, and 10 are higher, but a default size is used in the GUI. Eight is the best practice number of drives for a RAID 5 array, and that is why it is used by the GUI.
Avoid splitting arrays into multiple logical disks at the storage system level. Where possible, create
a single logical disk from the entire capacity of the array. You can intermix different block-size drives
within an array and a storage pool. Performance degradation can occur, however, if you intermix
512 block-size drives and 4096 block-size drives within an array. Depending on the redundancy that is required, create RAID 5 arrays by using 5 - 8 data members plus a parity component (that is, 5 + P, 6 + P, 7 + P, or 8 + P).
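The GUI presets and the CLI both end up issuing mkarray, as noted above. As a hedged sketch (the drive IDs and pool name are assumptions), an eight-drive RAID 5 array could be created and placed into a pool like this:
mkarray -level raid5 -drive 0:1:2:3:4:5:6:7 Pool0
The new array appears as an MDisk in Pool0, and its member drives change from Candidate to Member.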
Do not mix managed disks (MDisks) that greatly vary in performance in the same storage pool tier.
The overall storage pool performance in a tier is limited by the slowest MDisk. Because some
storage systems can sustain much higher I/O bandwidths than others, do not mix MDisks that are
provided by low-end storage systems with those that are provided by high-end storage systems in
the same tier.
Keep in mind that RAID data redundancy is not the same as data backup. You still need to ensure data safety by backing up your data daily to offline or off-site storage.
Figure: A balanced RAID array (Pool ID 0) with MDisk members selected from SAS chain 1 (enclosure 3) and chain 2 (enclosure 4).
A special group is the balanced RAID group where the members of an array are balanced using the
two chains. For RAID10, select half disk drives from the enclosure in the first chain and the second
half of disk drives from the second chain (for better availability). Although drive selection is not a
concern for RAID5 or RAID6, it is still best to ensure that the selected drives that are in the
enclosures are part of the same SAS chain. The result is a balanced array that has 50% of its
drives on chain 1 and the other 50% of its drives on chain 2. The system performs the separation of
both chains in the wizard so that they are balanced well. Therefore, the arrays are in a set of up to
eight mirrored pairs with the data striped across mirrors. They can tolerate the failure of one drive in
each mirror and they allow reading from both drives in a mirror.
This output identifies four of the eight mirrored pairs that are part of chain 2 (enclosure 4), which is half of the drives from each enclosure. Chain 2 also contains the spare drive, which provides protection for the array members balanced across the two enclosure chains.
This output identifies the remaining mirrored pairs that are part of chain 1 (enclosure 3).
• Array member goals are used for hot spare selection and can be displayed with the
lsarraymembergoals command.
• Both the array and its member drives have a property called balanced (suitability in
the GUI) which indicates whether member goals are met.
ƒ Exact: All member goals have been met.
ƒ Yes: All member goals except location have been met.
ƒ No: One or more of the capability goals has not been met.
• For an array and its members, the system dynamically maintains a spare protection
count of non-degrading spares.
When creating an array using the GUI, several parameters are used to select the drives based on different goals. One goal is to have only drives of the same type in one array (for example, flash or SSD). The drive RPM is also a goal; all members should have the same speed. The same is true for the capacity. Another goal is the location goal, which places the members on a specific chain, enclosure, or slot ID.
Storwize V7000 supports hot-spare drives. To select a spare drive, the member goals are used. They can be listed with the lsarraymembergoals command.
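As a hedged example of listing those goals (the MDisk name is an assumption), the command is issued against a specific array MDisk:
lsarraymembergoals mdisk3
The output lists, per member ID, the capacity, speed, and location goals described above, which the system consults when it chooses a replacement spare.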
Figure: A hot-spare drive standing by to replace a failed member in one of the RAID 5 array MDisks.
When a RAID member drive fails, the system automatically replaces the failed member with a
hot-spare drive and resynchronizes the array to restore its redundancy. The management GUI
automatically creates drives that are marked as spare when the internal storage is configured by
the wizards. The rule is to create one spare for every 23 array members. This results in one
enclosure with 24 disks in the following setup: 23 drives have the Candidate state while one has the
state of Spare.
The selection of a spare drive that replaces a failed disk is done by the system. A drive with a lit
fault LED indicates that the drive has been marked as failed and is no longer in use by the system.
When the system detects that such a failed drive is replaced, it reconfigures the replacement drive
to be a spare and the drive that was replaced is automatically removed from the configuration. The
new spare drive is then used to fulfill the array membership goals of the system. The process can
take a few minutes.
If the replaced drive was a failed drive, the system automatically reconfigures the replacement drive
as a spare and the replaced drive is removed from the configuration.
Slide contains animations.
• When the system selects a spare for member replacement, the spare that is the best possible match to the array member goals is chosen based on:
ƒ An exact match of member goal capacity, performance, and location
ƒ A performance match: The spare drive has a capacity that is the same or
larger and has the same or better performance
The goal is always to replace the failed disk with a spare of the same type and properties. If none is available, the system searches for the best alternative. The spare drives are global and can be used by any array; there are no limits on using the spare drives.
Traditional RAID 6
• Double parity improves data
availability by protecting against single
or double drive failure in an array.
• Disadvantage:
ƒ Spare drives are idle and cannot
contribute to performance.
í Particularly an issue with flash drives
ƒ Rebuild is limited by throughput of
single drive.
í Longer rebuild time with larger drives
í Potentially exposes data to risk of dual
failure
Traditional RAID 6 offers double parity, which improves data availability by protecting against single or double drive failure in an array. With traditional RAID (TRAID), the rebuild reads from one or more drives but writes to a single spare drive, so the rebuild time is limited by that spare drive's performance. In addition, the spares sit idle when they are not being used, wasting resources.
In RAID 6, each stripe is made up of data strips (represented by D1, D2, and D3) and two parity strips (P and Q). Two parity strips mean that the array can cope with two simultaneous drive failures.
RAID 6 does not have a performance penalty for read operations, but it does have a performance
penalty on write operations because of the overhead associated with parity calculations.
Performance varies greatly depending on how RAID 6 is implemented.
Distributed RAID arrays are designed to improve the RAID implementation for better performance and availability by offering faster drive rebuilds that use spare capacity reserved on each drive in the array. Therefore, there are no idle drives; all drives contribute to performance.
Distributed RAID 6
• Distribute 3+P+Q over 10 drives with two distributed spares
ƒ Spare capacity is allocated depending on the pack number.
• The number of rows in a pack depends on the number of strips in a stripe, which means the pack size is constant for an array.
• Extent size is irrelevant.
Figure: Drive and row layout of a distributed array; in this instance, five rows make up a pack.
Distributed RAID 6 arrays stripe data over the member drives with two parity strips on every stripe.
These distributed arrays can support 6 - 128 drives. A RAID 6 distributed array can tolerate any two
concurrent member drive failures.
When a distributed array contains a failed drive, the data is recovered by reading from multiple drives. The recovered data is then written to the rebuild areas, which are distributed across all of the drives in the array, and the remaining rebuild areas stay distributed across all drives.
• With host I/O, if the drives are being utilized at up to 50%, the rebuild time will be about 50% slower.
ƒ Approximately three hours, but that is still much faster than the TRAID time of 24 hours for a 4 TB drive.
Because DRAID reads from every drive in the set and can write to every drive during the rebuild, typical rebuild times are under 2 hours, or 2 - 4 hours on average, rather than the 36+ hours seen in worst-case examples with traditional single spare drives. You also gain performance in daily operations, potentially adding 33% more performance to the array.
DRAID arrays are a completely different RAID structure, although they are still created in the same way as traditional RAID, using candidate drives as the building blocks. You cannot convert from traditional RAID to Distributed RAID, and expansion of a distributed array is not currently supported.
Best practice is to maintain the usual rules for storage pools and only mix arrays of the same capability in a single pool. You can use the CLI lsarrayrecommendation command to find suitable candidates.
When creating a new pool for your DRAID, add only the same type of DRAID to a single pool (for example, 8+P+Q with 2 spares). You can have up to 4 spares per DRAID, and up to 128 drives are supported in a single DRAID. Each I/O group can support up to 12 DRAID arrays.
DRAID supports large sets of drives, typically around 60 drives per array, so if you are only considering adding a small number of drives, DRAID may not be appropriate for you. Use RAID 6 with NL-SAS drives for more redundancy, as well as for the larger 10K SAS drives.
Keep in mind that I/O performance for a given number of drives improves with the number of arrays, distributed or non-distributed, especially with SSD drives.
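As a hedged sketch of how a distributed array could be created from the CLI (the drive class ID, drive count, stripe width, and pool name are assumptions for illustration; check the lsarrayrecommendation output on your own system first):
mkdistributedarray -level raid6 -driveclass 0 -drivecount 60 -stripewidth 12 -rebuildareas 2 Pool_NLSAS
This builds one RAID 6 distributed array MDisk from 60 drives of drive class 0, with two distributed rebuild areas, and places it in Pool_NLSAS.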
Drive-Auto Manage/Replacement
• DMP (directed maintenance procedure) step-by-step guidance no
longer required.
• Drive-Auto Manage/Replacement
ƒ Simply swap the old drive for new
í New drive in that slot takes over from the replaced drive
Figure: The old drive in slot 5 is swapped for a new drive in slot 5, which takes over its membership in the RAID 5 array MDisk.
Replacing an old drive is much easier because you no longer have to follow the guidance of the DMP (directed maintenance procedure) to exchange an old drive for a new one. With Drive-Auto Manage, you simply swap the drives, and the new drive in that slot takes over from the replaced drive.
Wait at least 20 seconds before you remove the drive assembly from the enclosure to enable the
drive to spin down and avoid possible damage to the drive. Do not leave a drive slot empty for
extended periods. Do not remove a drive assembly or a blank filler without having a replacement
drive or a blank filler with which to replace it.
The release of the V7.4 code introduced an industry-standard extension at the RAID and SAS level to provide an extra level of data integrity. T10 DIF (data integrity field) support was released in the V7.4 code and is only available on the Storwize V7000 Gen2 model.
T10DIF is a type two protection information (PI) that sits between the internal RAID layer and SAS
drives and appends 8 bytes of integrity metadata while the data is being transferred between the
controller and the PI-formatted disk drives. The 8 byte integrity field contains cyclic redundancy
check (CRC) data and more that provides validation data that can be used to ensure that data
written is valid and is not altered in the SAS network.
• External storage
• Encryption
This topic examines the internal structure by defining the components of the array.
The Storwize V7000 GUI Overview diagram illustrates the path in which storage resources are
configured. The management GUI automatically detects the number of internal storage drives and
the external attached storage systems that are configured within the SAN fabric. These block-level
storage components are used for virtualization. The Overview panel provides a quick view of the
system configuration. Hover the mouse pointer over the icons to view each description. You can click any of the resource options to be redirected to the selected panel.
The Storwize V7000 internal drives are displayed within the management GUI Pools > Internal
Storage panel.
The installed Drive Class Filter column represents the type and size of the internal drives.
The Storwize V7000 GUI automatically detects each drive by its usage roles to include the capacity,
speed, and drive technology. Various types and drive capacities can be supported.
The 2076 -12F/24F enclosures that are attached to the Storwize V7000 nodes are presented as
internal storage. All drives that are detected by the GUI are presented in the usage role of Unused
as they have yet to be configured.
Figure: Drive usage roles – Unused (newly added), Candidate (ready for use), Member (part of an array), Spare (hot spare drive), and Failed (service needed).
The Storwize V7000 assigns several internal drive usage roles. The usage role identifies the status
of an installed drive:
• Unused: The drive is not a member of an MDisk. The GUI offers to change the drive use
attribute if it is selected as part of an array.
• Candidate: The drive is available for use in an array.
• Member: The drive is a member of an MDisk.
• Spare: The drive can be used as a hot spare if required.
• Failed: The drive was either intentionally taken offline or failed due to an error.
You can right-click any physical drive to view specific details of the drive status, UID, and the drive technology characteristics, including the vendor ID, part number, speed, and firmware level of the drive.
All internal drives must have a use attribute of Candidate before they can be used as members of an array. You can change a drive attribute by right-clicking one or more of the unused drives and then selecting Mark as. The GUI issues a chdrive command for each selected drive to change its usage from unused to candidate.
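As a hedged example of making the same change directly from the CLI (the drive ID is an assumption), chdrive sets the use attribute of a single drive:
chdrive -use candidate 5
Repeat the command, or script it, for each drive that should become an array candidate.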
ƒ Administrators can define all storage pools required before adding storage to create array MDisks.
Configuring a storage array and assigning it to a pool directly from Internal Storage > Configure Storage is no longer an option. You must first define the storage pool to which the array will be added for virtualization.
All storage pools are created with a default extent size of 1024 MB. An easytier setting is defined and set to auto, which indicates that the Easy Tier function is automatically enabled if the pool contains more than one tier of storage (flash and HDD technologies). The -guiid value corresponds to the particular GUI icon selected for the pool. The pool is also defined with a -warning value of 80%, which indicates that a warning message is generated when 80% of the pool capacity has been allocated.
The system provides an Add Storage notification as a reminder until storage has been added to the
pool. This feature can be disabled by clicking the Got it button.
The extent size is not really a performance factor; rather, it is a management factor. If you have preexisting storage pools, it is recommended to create a new storage pool with the same extent size as the existing storage pools. If you do not have any other storage pools, you can leave the default extent size of 1 GB.
You can allow user changes to the default extent size using the Settings > GUI Preferences.
Select the General option and click the box next to enable Advanced pool settings. Click the
Save button to save changes.
After creating storage pools, you must assign storage to specific pools. An array can be created using the GUI or CLI; both use the mkarray command. The Spectrum Virtualize software has redesigned this workflow, but it still offers presets implemented based on best practices guidelines. The management GUI provides three options to assign storage based on where the storage is located and its use.
The Quick Internal and External options assign storage based on drive class and RAID level. For
both of these options, the management GUI displays the recommended configuration based on
drive class, RAID level and the width of the array. Use the Internal Custom option to assign storage
that has been added to a system to customize your storage configuration.
By default, the system will recommend and create distributed arrays for most new Quick option
configurations. However, there are some exceptions. If not enough drives are available on the
system (for example, in configurations where there are under two flash drives), you cannot
configure a distributed array. In addition, you can continue to assign new storage to existing pools
in arrays that use previously-configured RAID settings.
The Advanced Internal Custom option allows you to assign storage that has been added to a system to customize your storage configuration.
At any point in time, an MDisk can be a member of only one storage pool, except for image mode volumes. Once a drive becomes part of an array, its Use attribute changes from Candidate to Member, indicating it is now part of an array. If you recall, a distributed array provides reserved capacity on each disk within the array to regenerate data if there is a drive failure; therefore, a dedicated spare drive is not indicated.
After the array is created, it can be used instantly and moved to a pool where volumes can be
created. Volumes can be written immediately after creation and mapping.
Figure: RAID options – RAID types, number of spares, and array width.
The system supports non-distributed and distributed array configurations. In non-distributed arrays,
entire drives are defined as “hot-spares”. Hot-spare drives are idle and do not process I/O for the
system until a drive failure occurs. When a member drive fails, the system automatically replaces
the failed drive with a hot-spare drive. The system then resynchronizes the array to restore its
redundancy. However, all member drives within a distributed array have a rebuild area that is
reserved for drive failures. All the drives in an array can process I/O data and provide faster rebuild
times when a drive fails. The RAID level provides different degrees of redundancy and
performance; it also determines the number of members in the array.
There are often cases where you want to sub-divide a storage pool (or managed disk group) but maintain a larger number of MDisks in that pool. A parent pool is a standard pool that receives its capacity from MDisks, which are divided into extents of a defined size.
Child Pools were introduced in V7.4.0 code release. Instead of being created directly from MDisks,
child pools are created from existing capacity that is allocated to a parent pool.
Child pools are created with fully allocated physical capacity. The capacity of the child pool must be
smaller than the free capacity that is available to the parent pool. The allocated capacity of the child
pool is no longer reported as the free space of its parent pool. Child pools are logically similar to
storage pools, but allow you to specify one or more sub divided child pools.
• To create a child pool using the CLI, issue the following command:
IBM Storwize:V009B:V009B1-admin>mkmdiskgrp -name ChildPool_1 -unit gb
-size 40 -parentmdiskgrp 0
MDisk Group, id [2], successfully created
IBM Storwize:V009B:V009B1-admin>
The same mkmdiskgrp command that is used to create physical storage pools is also used to
create child pools. To view the child pool that is created, right-click the parent pool and select
Child Pools. This view provides only basic information about the child pools.
The following table describes parameters that are used when creating a child pool compared to a parent storage pool.
Parameter   Child pool usage   Storage pool usage
-name       Optional           Optional
Child pools are similar to parent pools, with similar properties, and provide most of the functions that MDisk groups have, such as creating volumes that specifically use the capacity that is allocated to the child pool.
The maximum number of storage pools remains at 128, and each storage pool can have up to 127 child pools. Child pools can be created using both the GUI and CLI; in the GUI they are shown as child pools, with all their differences from parent pools.
Once the child pool has been defined, you can create a child pool volume by using the same procedural steps listed within the Create Volumes wizard, and map the volume directly to a host.
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes such as assigning application/server administrator their own pool of storage to
manage, without allocating entire managed disks.
As with parent pools, you can specify a warning threshold that alerts you when the capacity of the
child pool is reaching its upper limit. Use this threshold to ensure that access is not lost when the
capacity of the child pool is close to its allocated capacity.
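As a hedged sketch, carving a volume out of the child pool created earlier uses the same mkvdisk syntax as a parent pool (the volume name and size here are assumptions):
mkvdisk -mdiskgrp ChildPool_1 -iogrp io_grp0 -size 10 -unit gb -name app_vol01
The volume draws its extents only from the 40 GB allocated to ChildPool_1, so the application administrator cannot exceed that allocation.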
You can view child pool volumes collectively from the Volumes > Volumes view. Child pool
volumes maintain the same Unique Identifier (UID) value as the other volumes of the cluster.
• External storage
ƒ Examine external storage
ƒ Quorum disks allocation
ƒ MDisk multipathing methods
• Encryption
This topic examines the back-end storage and defines how external storage resources are
presented to the Storwize V7000 for management.
Figure: An eight-node Storwize V7000 clustered system (Node 1 through Node 8) attached through dual SAN fabrics to external storage LUNs (LUN0 through LUN4). Each V7000 node has four WWPNs. Best practice: define ALL V7000 ports to the storage system.
In the SAN, storage controllers that are used by the Storwize V7000 Gen2 clustered system must
be connected through SAN switches. Direct connection between the Storwize V7000 Gen2 and the
storage controller is not supported. All Storwize V7000 Gen2 nodes in an Storwize V7000 clustered
system must be able to see the same set of ports from each storage subsystem controller.
Inappropriate zoning and LUN masking can cause the paths to become degraded. You will need to follow the guidelines that apply to the supported disk subsystem regarding which HBA WWPNs a storage partition can be mapped to.
From the perspective of the disk storage system, Storwize V7000 is defined as a SCSI host. Because each node canister has four Fibre Channel ports, and therefore four WWPNs, an eight-node Storwize V7000 clustered system presents a total of 32 WWPNs. Disk storage systems tend to
have different mechanisms or conventions to define hosts. For example, a DS3500 or a DS5000
uses the construct of a host group to define the Storwize V7000 cluster with each node in the
cluster that is identified as a host with four host ports within the host group. For best practice, define
all of the cluster’s WWPNs to the storage system.
Most back-end storage systems support heterogeneous hosts, which enables consolidation in multi-platform environments.
In a SAN fabric, LUN storage is essential to the configuration of the environment and its
performance. A storage device can be directly attached to the host group or connected via storage
networking protocols such as Fibre Channel and iSCSI. This allows LUNs to be mapped to a
defined host group.
Figure: Fibre Channel ports 1 through 4 on each of the two node canisters.
Each of the Storwize V7000 node canisters has its own WWNN, which is based on 50:05:07:68:02:0z:zz:zz, where z:zz:zz is unique for each node canister. It is unrelated to the WWNN of the other node canister (they may or may not be sequential numbers).
The WWPN of each Storwize V7000 Fibre Channel port is based on: 50:05:07:68:02:Yz:zz:zz
where z:zz:zz is unique for each node canister and the Y value is taken from the port position.
The number in each black box (which represents a Fibre Channel port) is the Y value, which is also
the port number. Therefore, the Y value and the port number are the same number. In this
example, port 1, contains a 1, so a WWPN presented by this port would look like:
50:05:07:68:02:1z:zz:zz.
Figure: A storage system presenting 16 WWPNs (one WWNN group) with LUNs L0 through Lf. Examples: various EMC and HDS models.
Some storage systems generate more than 16 WWNNs. In this case, up to 16 WWNNs of the
storage system can be set up as a group. The Storwize V7000 treats each group of 16 WWNNs as
a storage system. Deploy LUN masking so that each LUN is assigned to no more than 16 ports of
these storage systems. The environment of having multiple WWNNs used in certain disk storage
systems is limited only by the maximum of 1024 WWPNs and 1024 WWNNs.
Figure: Storwize V7000 to DS3500 implementation – two controllers (Controller1 with WWNN1, Controller2 with WWNN2), each presenting four WWPNs. LUNs L0, L2, L4, and L6 have preferred node Node1; LUNs L1, L3, L5, and L7 have preferred node Node2.
Figure 5-56. Storwize V7000 to DS3500 with more than one WWNN
As a best practice, assign MDisks in multiples of the number of storage ports zoned with the Storwize V7000 cluster (for example, with 8 WWPNs, use 8 or 16 MDisks).
For the latest information on Storwize V7000 product support, refer to the Storwize V7000 Information Center > Configuration > Configuring and servicing external storage systems for details regarding storage system setup parameters. Maximum configuration limits can be found on the web by searching with the keywords IBM Storwize V7000 maximum configuration limits.
Figure: LUNs (LUN0 through LUN4) assigned to the Storwize V7000 host group (Host_V7K).
Logical Unit Number (LUN) masking is an authorization process that makes a Logical Unit Number
available to some hosts and unavailable to other hosts. LUN masking is mainly implemented at the
host bus adapter (HBA) level.
A volume group is a named construct that defines a set of LUNs. The Storwize V7000 host
attachment can then be associated with a volume group to access its allowed or assigned LUNs.
The LUNs identified in the visual as LUN1 through LUN4 become unmanaged MDisks after
Storwize V7000 performs device discovery on the SAN. These LUNs should be large, similar in
size, and be assigned to all of the Storwize V7000 ports of the cluster. These LUNs must not be
accessible by other host ports or other Storwize V7000 clusters.
LUNs become MDisks to be grouped into storage pools. Create a storage pool by using MDisks
with similar performance and availability characteristics. For ease of management and availability,
do not span the storage pool across storage systems.
The recommendation is to allocate and assign LUNs with large capacities from the storage systems
to the Storwize V7000 ports. These SCSI LUNs or MDisks once under the control of the Storwize
V7000 provide extents from which volumes can be derived.
All storage systems use variations of these approaches to implement LUN masking. Refer to the
Storwize V7000 Information Center > Configuration > Configuring and servicing external
storage systems for more specific information about the numerous heterogeneous storage
systems that are supported by the Storwize V7000.
Within a given disk storage system, best practices for LUN placement have to be adjusted to that disk storage system. For example, an IBM XIV disk storage system does not have an array-based architecture. For an XIV, a large LUN size, even multiples of LUNs per path to the Storwize V7000, and usage of the XIV capacity need to be considered.
For an IBM DS8000, performance optimizing functions such as rotate extent space allocation
techniques negate a LUN per array practice, where in a DS3500 or DS5000 one LUN per array is
indeed best practice, within the context of even multiples of LUNs per path, and same-sized, large
LUNs.
The best practice for LUN/array placement needs to be qualified by the storage system used. There are many documents that relate to LUN masking; going into details for each storage system that is supported by the Storwize V7000 is beyond the scope of this course.
Each storage device has a feature-rich management interface that gives control over the storage device. The Subsystem Management window is used to configure and manage the logical and hardware components within the storage subsystem. The latest upgrade provides a new look and feel, with a more intuitive interface and redefined tabs, which eases the flow of configuration and management tasks. The Summary tab has been modified to depict a broader “at-a-glance” summary view of all component activities within the storage subsystem, including the latest firmware version and the Premium Features installed.
To illustrate the disk storage system management interface, this visual shows the WWPNs and
WWNN of an IBM DS3x00 disk storage system.
The profile of this storage system can be displayed by the DS Storage Manager GUI. Two different
controllers within this DS3400 are displayed (note the Controllers tab). Each controller has its own unique WWPN values, but they share the same WWNN value.
Therefore, the DS3x00 storage system is identified by just one WWNN and each controller port
within the storage system has its own WWPN. This is also the case with other models of the
DS3000, DS4000, and DS5000 series of storage systems.
This is an example of LUN masking using the host group construct on a disk storage system. In the Configure Hosts view of the DS Storage Manager, the Storwize V7000 clustered system is
defined to the DS3K as a host group. Each host (node) is defined with four ports. This means that
the Storwize V7000 is defined as a SCSI initiator (host) to the external storage system.
The host ports are shown in detail in the Configured Hosts: box. The host type is an IBM TS SAN
VCE (IBM TotalStorage SAN Volume Controller Engine). The IBM TS SAN VCE host type uses an
Auto Logical Drive Transfer (ADT) that allows the Storwize V7000 to properly manage SCSI LUN
ownership between controllers using the paths specified by the host.
In the first example of the Configure Hosts view of the DS Storage Manager, the Storwize V7000
clustered system is defined to the DS3K as a host group. This means that the Storwize V7000 is
defined as a SCSI initiator (host) to the external storage system. The IBM TS SAN VCE host type
uses an Auto Logical Drive Transfer (ADT) that allows the Storwize V7000 to properly manage SCSI LUN ownership between controllers using the paths specified by the host.
From the Host-to-Logical Drive Mappings view of the DS Storage Manager, eight LUNs have
been mapped to the host group called Storwize V7000 with their respective LUN numbers which
become the MDisks in the Storwize V7000.
If the LUN numbers or mappings are changed, the MDisk has to be removed from the Storwize
V7000 first. If this is not done, then access to Storwize V7000-surfaced volumes are lost and the
risk of data loss is high (due to human error).
External storage systems can be displayed by selecting the Pools > External Storage menu
option. The External Storage GUI panel display was captured in two images to provide a full view.
A controller entry is listed by a default name of controllerx (where x indicates the ID assigned to the
detected controller). Each controller is associated with an ID number and WWNN which was zoned
with the Storwize V7000 node ports. The Storwize V7000 GUI has performed device logins with the
storage system ports to discover LUNs that have been assigned to the Storwize V7000. The device
type of 17xx FastT indicates the type of storage system attached.
You can click the + (plus) sign of the controller entry to list the LUNs present. LUNs are displayed
as MDisk entries. The LUN number column represents the assigned LUN numbers of these
MDisks. To uniquely identify an MDisk in the Storwize V7000 inventory, it is correlated to a specific
storage system and the specific LUN number that is assigned by that storage system. The Storwize
V7000 assigns to the MDisk a default object name and object ID.
All newly discovered MDisks always appear in unmanaged mode. Each MDisk represents a LUN that has been assigned to the Storwize V7000 host group from the DS3K storage system. You must assign MDisks to a specific pool to be able to manage the allocated capacity.
Traditionally SCSI LUNs surfaced by RAID controllers are presented to application host servers as
physical disks.
With the Storwize V7000 serving as the insulating layer, SCSI LUNs become the foundational
storage resource that is owned by the Storwize V7000. A one-to-one relationship exists between
the SCSI LUNs and the managed disks.
The Storwize V7000 takes advantage of the basic RAID controller features (such as RAID 0, 1, 5, 6,
or 10) but does not depend on large controller cache or host independent copy functions that are
associated with sophisticated storage systems. The Storwize V7000 has its own cache repository
and offers network-based Copy Services.
Managed disks have associated access modes. These modes, which govern how the Storwize
V7000 cluster uses MDisks, are:
• Unmanaged: The default access mode for LUNs discovered from the SAN fabric by the
Storwize V7000. These LUNs have not yet been assigned to a storage pool.
• Managed: The standard access mode for a managed disk that has been assigned to a storage
pool. The process of assigning a discovered SCSI LUN to a storage pool automatically changes
the access mode from unmanaged to managed mode. In managed mode space from the
managed disk can be used to create virtual disks.
• Image: A special access mode that is reserved for SCSI LUNs that contain existing data. Image
mode preserves the existing data when control of this data is turned over to the Storwize
V7000. Image mode is specifically designed to enable existing data to become Storwize
V7000-managed. SCSI LUNs containing existing data must be added to the Storwize V7000 as
image mode.
The LUNs (volumes) surfaced by the disk storage systems become unmanaged MDisks. The LUN name spaces are local to the external storage systems, so it is not possible for the system to determine those names. However, you can use the external storage system WWNNs and LUN IDs to identify each device. This unique ID can be used to associate MDisks on the system with the corresponding LUNs on the external storage system.
The visual continues with the DS3500 and DS8000 examples where the MDisk entries of those
LUNs are renamed to enable easier identification of the storage system and the LUNs within the
storage system.
The storage pool names in this example reflect the storage system and disk device type making it
easier to identify relative performance and perhaps storage tier in an enterprise.
Names of Storwize V7000 objects can be changed without impacting Storwize V7000 processing. If
installation naming standards have been modified then names of Storwize V7000 objects can be
modified accordingly. All Storwize V7000 processing is predicated on object IDs, not object names. Up to 63 characters can be used in an object name.
The DS Storage Manager GUI can be used to display the details of the LUN represented by the
MDisk (LUN). The MDisk UID corresponds to the logical drive ID. The active WWPN from the
Storwize V7000 CLI output matches the Controller A WWPN in the storage system which is the
preferred and current owner of the LUN.
Since the storage system default names are automatically assigned by the system, it can be
difficult to properly identify a system under this naming convention. Therefore, adhering to naming
convention standards to create a more self-documenting environment often saves time for problem
determination.
To change the name of a system:
1. Right-click the entry and select the Rename option from the pop-up list.
2. Enter a new system name in the Rename Storage System pane, and click the Rename button.
When you initiate a task the GUI generates a list of commands that were used to complete the
task. In this example, a chcontroller -name command was used to rename the storage
system.
3. Once task is completed, click the Close button.
Command-line interface (CLI) commands offer a straightforward and simple way to view and configure the Storwize V7000 and the attached storage subsystems. Before you issue a command, it is best practice to first check the status of the system's current configuration.
The lscontroller command, appended with -delim, provides a summary entry for each storage system of the cluster. This output format is also referred to as the concise view, as it provides high-level summary or concise information for each object, namely the object ID, name, and device type.
The lscontroller x output (where x is the object ID of the controller) provides much more detail
about the specific object as a more verbose view. For example, an additional key field that is found
in the verbose view is the WWNN of the controller which enables the association of the controller
entry in the Storwize V7000 inventory with its physical entity counterpart.
Once the correct storage system has been pinpointed, a meaningful name can be assigned. To rename the storage system, merely change the ls prefix of the object command to ch. Thus, the chcontroller command with -name allows a new name to be assigned to the specified object (identified by ID or name).
Again, adhering to the best practice of using meaningful names for objects (instead of staying with
the Storwize V7000 assigned default names) the storage systems (also known as controllers) of the
cluster are renamed.
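As a hedged sketch of that sequence (the new controller name is an assumption), the concise view, the verbose view, and the rename might look like this:
lscontroller -delim :
lscontroller 0
chcontroller -name DS3K_A 0
A subsequent lscontroller then lists controller 0 under its new name; the object ID and WWNN remain unchanged.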
The disk drives that are discovered within the storage device are automatically assigned a default
name and sequentially numbered (MDisk#). The GUI automatically presents a list of all unmanaged
MDisks by object name and ID. However, each drive’s availability can be identified by an access
mode which determines how the cluster uses the MDisk. As a productivity aid the management GUI
offers multi-select for some functions so that the same action can be applied to multiple entries or
objects. The Ctrl or Shift keys can be used to select multiple entries. Storwize V7000 management
GUI supports cut and paste to minimize editing with the same character string.
Changing the name of an MDisk is a fairly simple task: right-click and select the Rename option from the pop-up list.
The GUI generates sequential chmdisk -name commands to change the name of each selected
MDisk.
If you are renaming MDisks using the CLI then use the detectmdisk command to scan the system
for any new LUNs that might have been assigned to the Storwize V7000 from storage systems.
This is analogous to cfgmgr in AIX or Rescan Disks in Windows. Any newly discovered LUNs
become MDisks with an access mode of unmanaged.
The lsmdisk command is filtered to list MDisks from a controller whose name begins with an
asterisk (example: *DS3K). This output displays each MDisk by its ID, access mode, LUN number,
and UID.
To rename multiple MDisks, use the CLI and issue the commands as follows:
lsmdisk -filtervalue name=mdisk* -nohdr | while read id name; do chmdisk -name VB1-DS3K$id $id; done
This command adds a do-loop that selects every MDisk whose name still matches the default mdisk* filter value and renames it to VB1-DS3K followed by its MDisk ID, such as VB1-DS3K0 and VB1-DS3K1.
The renaming of the MDisks correlates to the MDisk ID and the LUN number that is assigned by the
storage system.
MDisk properties
• Right-click an MDisk and select Properties.
ƒ Show Details provides technical parameters such as capacity, interface, rotation
speed, and the drive status (online or offline).
Observe that the WWPNs correlate to the storage system.
Right-click a managed disk to view the properties of a specific drive. Check the View more details
link. IBM Storage uses a methodology whereby each WWPN is a child of the WWNN. This means
that if you know the WWPN of a port then you can easily match it to the WWNN of the storage
device that owns that port.
IBM_2076:Team50A:TeamAdmin>svcinfo lsfreeextents DS3K0
id 0
number_of_extents 230
(The quorum index on this MDisk is using 1 extent.)
The three quorum disks are used to resolve tie-breaking cluster state issues and track cluster
control information or metadata.
Use the lsquorum command to list the quorum disks. Quorums are identified by three quorum index
values - 0, 1, and 2. One quorum index is the active quorum and the others are in stand-by mode.
For this example, the active quorum is index 0 resident on MDisk ID 0.
The quorum size is affected by the number of objects in the cluster and the extent size of the pools.
For this example, the pool extent size is 1 GB and based on the number of free extents available a
quorum disk is deduced to be using one extent or 1 GB (the smallest unit of allocation). This might
help explain the missing capacity in the storage pool capacity value. The remaining extents of a
quorum MDisk are available to be assigned to volumes (VDisks).
A quorum disk is an MDisk or a managed drive that contains a reserved area that is used
exclusively for system management. A clustered system automatically assigns quorum disk
candidates. The three quorum disks are used to resolve tie-breaking cluster state issues and track
cluster control information or metadata.
As seen in the lsquorum output, MDisk ID 0 contains quorum index 0, which is the active quorum.
Only managed mode MDisks can be used as quorum disks. Observe the detailed lsmdisk output
for this MDisk has an entry of quorum index 0.
Each MDisk has a UID value, which is the serial number that is externalized by the owning storage
system for the LUN and appended with many bytes of zeros.
IBM_2076:Team50A:TeamAdmin>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 3 DS3K3 0 TeamA_DS3K yes mdisk no
1 online 1 DS3K4 1 TeamA_DS3K no mdisk no
2 online 10 DS8K1 2 TeamA_DS3K no mdisk no
(Quorum index 0, currently the active quorum, is the index being reassigned in this example.)
IBM_2076:Team50A:TeamAdmin>svcinfo lsfreeextents DS3K0
id 0
number_of_extents 500
IBM_2076:Team50A:TeamAdmin>chquorum -mdisk DS3K0 0
IBM_2076:Team50A:TeamAdmin>lsfreeextents DS3K0
id 0
number_of_extents 499
(Use the chquorum command to change the quorum association.)
IBM_2076:Team50A:TeamAdmin>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 0 DS3K0 0 TeamA_DS3K no mdisk no
1 online 1 DS3K4 1 TeamA_DS3K yes mdisk no
2 online 10 DS8K1 2 TeamA_DS3K no mdisk no
IBM_2076:Team50A:TeamAdmin>
(Figure: Best practice quorum placement, with quorum disks on MDisks DS3K0 (id 0), DS3K4 (id 1), and DS8K1 (id 10), and the active quorum index 0 on DS3K0.)
Quorum disks can be assigned to drives in the control enclosure automatically or manually by using
the chquorum command. This command allows the administrator to select the MDisk for a quorum
index.
In this example, quorum index 0 is being placed on MDisk DS3K0. The three quorum disks have
been placed in three different storage pools that are backed by three different storage systems.
A quorum disk is automatically relocated if there are changes in the cluster configuration affecting
the quorum.
Figure 5-74. Best practice: Reassign the active quorum disk index
Not only is it best practice to spread the quorum disks across storage systems, it is also
recommended that the active quorum be placed in the storage system that is deemed to be the
most robust in the enterprise.
For example, the removal of MDisk ID 3 from a storage pool caused a configuration change that
impacted quorum index 0. Therefore, quorum index 0 was moved to another MDisk.
The -active keyword of the chquorum command allows the specification of the quorum index to be
the active quorum.
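A minimal sketch, assuming the chquorum -active form that takes just the quorum index (the lsquorum values shown are illustrative and reuse the names from the prior example):
IBM_2076:Team50A:TeamAdmin>chquorum -active 0
IBM_2076:Team50A:TeamAdmin>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 0 DS3K0 0 TeamA_DS3K yes mdisk no
1 online 1 DS3K4 1 TeamA_DS3K no mdisk no
2 online 10 DS8K1 2 TeamA_DS3K no mdisk no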
A storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data on it. If these MDisks contain the quorum disks, the Storwize V7000 quorum configuration is affected. For disaster recovery purposes, running a Storwize V7000 system without a quorum disk can seriously affect the operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Therefore, the Storwize V7000 automatically reconfigures the affected quorum disks and moves them from the storage pool MDisks to another eligible managed mode MDisk. As a result, the Health Status indicator turns red.
Once the storage pool MDisks have been restored to online, the Storwize V7000 automatically reconfigures the quorum environment and moves the quorum indexes back to the storage pool MDisks.
There are special considerations concerning the placement of the active quorum disk for a
stretched (split) cluster and Split I/O Group configurations. For more information, refer to the
following website: http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
A single system can manage up to 128 storage pools. The size of a pool can be changed to
maintain storage utilization. You can add MDisks to a pool at any time either to increase the number
of extents that are available for new volume creations or to expand existing volumes. If statistics
show storage utilization rates are low, the pool’s unused space can be reduced by unassigning
MDisks from the pool. Both procedures can be performed using the management GUI without
disruption to the storage pool or volumes being accessed.
You can add only MDisks that are in unmanaged mode. When MDisks are added to a storage pool, their mode changes from unmanaged to managed. When an MDisk is added to a pool that contains existing data, the system automatically balances volume extents between the MDisks to provide the best performance to the volumes. However, if you add an MDisk that contains existing data to a managed disk group, you lose the data that it contains. Image mode is the only mode that preserves its data.
When an MDisk is removed from a storage pool, the system reallocates the removed MDisk's extents to other MDisks in the same pool. If the MDisk is being used as a quorum disk, the system automatically relocates the quorum to another eligible MDisk in the system.
You can delete MDisks from a group under the following conditions:
• Volumes are not using any of the extents that are on the MDisk.
• Enough free extents are available elsewhere in the group to move any extents that are in use
from this MDisk.
More information about deleting MDisks that contain volume data is covered in the volume protection slide.
The addmdisk command is generated to add a new MDisk to an existing pool. If volumes are using the MDisks that you are removing from the storage pool, you must select the option Remove the MDisk from the storage pool even if it has data on it. The rmmdisk command is then generated with the -force parameter, which enables the removal of the MDisk by redistributing its allocated extents to other MDisks in the pool. However, if there is insufficient free space among the remaining MDisks in the pool to receive the reallocated extents, the removal fails; no data can be transferred, and even a forced removal is not allowed. Once an MDisk is removed from a storage pool, its access mode is reset to unmanaged to indicate that it is not being used by the Storwize V7000 cluster.
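A minimal CLI sketch of the equivalent commands (the MDisk and pool names are illustrative, not taken from the lab):
IBM_2076:Team50A:TeamAdmin>addmdisk -mdisk DS3K4 Pool_DS3K
IBM_2076:Team50A:TeamAdmin>rmmdisk -mdisk DS3K4 -force Pool_DS3K
IBM_2076:Team50A:TeamAdmin>lsmdisk -filtervalue mode=unmanaged
The -force parameter is what triggers the migration of allocated extents off the MDisk before it is removed from the pool.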
Back-end storage system MDisks are accessed based on one of four multipathing methods, chosen upon the Storwize V7000's discovery of the storage system model. The objective is to spread I/O activity to MDisks across the available paths (or zoned ports) to the storage system.
The access method for a given storage system model is documented at the Storwize V7000
support website under Controllers > Multipathing of the Supported Hardware page. For example,
MDisks presented by a DS3500 are accessed using the MDisk group balancing method while
MDisks presented by a Storwize V7000 are accessed using the round robin method.
The four multipathing methods or options to access an MDisk of an external storage system are:
• Round robin: I/Os for the MDisk are distributed over multiple ports of the storage system.
• MDisk group balanced: I/Os for the MDisk are sent to one target port of the storage system.
The assignment of ports to MDisks is chosen to spread all the MDisks within the MDisk group
(pool) across all of the active ports as evenly as possible.
• Single port active: All I/Os are sent to a single port of the storage system for all the MDisks of
the system.
• Controller balanced: I/Os are sent to one target port of the storage system for each MDisk.
The assignment of ports to MDisks is chosen to spread all the MDisks (of the given storage
system) across all of the active ports as evenly as possible.
Figure 5-80. Example: Storage system path count DS3K in four-node Storwize V7000 cluster
For storage systems that implement preferred controllers (such as the DS3500 or DS3400), the
Storwize V7000 honors the MDisk’s preferred controller attribute when this information is discerned
from SCSI inquiry data. It is good practice to balance MDisk (LUN) assignments across storage
controllers in the storage system.
For the DS3K system, the Storwize V7000 implements the MDisk group balancing multipathing
access method. This means I/Os for a given DS3K MDisk are sent to one given target port that is
zoned for this storage system. The assignment of ports to MDisks is chosen with the aim to spread
all the MDisks within the storage pool across the zoned ports and with consideration to the
preferred controller of MDisks (LUNs).
A path count value is accounted for on a per MDisk basis. Each node of the Storwize V7000 cluster
has a path to a given MDisk and is counted as one path. In a four-node cluster, the path count
would be 4 per MDisk.
In the lscontroller output detail, the path count is reported at the storage port level and provides
a clue to how many MDisks are accessed through a given storage port (16 = 4 paths x 4 MDisks, 20
= 4 paths x 5 MDisks).
The path count value for a given MDisk is found in the lsmdisk verbose output for the MDisk. For a four-node Storwize V7000 cluster, the path count to the DS3K-VB1 MDisk would be 4. Since the MDisk group balancing access method is used when accessing the MDisks, a given MDisk is accessed only through one assigned storage port. This port is documented as the preferred WWPN and it should also be the active WWPN.
Examine the output detail for the DS3KNAVY0 MDisk to obtain its preferred WWPN and active
WWPN value. Verify that this WWPN is one of the two WWPN values from the lscontroller output
from a prior example. All four nodes of the Storwize V7000 cluster use the active WWPN port for
I/O to this MDisk.
Likewise each of the MDisks in the DS3K is assigned one of the DS3K ports (two ports that are
zoned in this example) based on the preferred controller attribute of the DS3K MDisk.
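A hedged sketch of the relevant fields in the verbose lsmdisk output for DS3KNAVY0 (the ID and WWPN values shown are illustrative):
IBM_2076:Team50A:TeamAdmin>lsmdisk DS3KNAVY0
id 4
name DS3KNAVY0
status online
mode managed
path_count 4
max_path_count 4
preferred_WWPN 201800A0B8110D0F
active_WWPN 201800A0B8110D0F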
Figure 5-82. Example: Storage system path count DS8K in four-node Storwize V7000 cluster
Contrasted with the DS3K, there is a key difference in how the Storwize V7000 paths I/O to the DS8000. The DS8000 LUNs can be reached through any DS8000 port because it is a symmetric device. There is no preferred controller concept.
The Storwize V7000 uses the same round robin multipathing method to access the DS8000
MDisks. Round robin distributes I/Os of an MDisk over all zoned ports of the DS8K.
The path count value is accounted on a per MDisk basis. Each node of the Storwize V7000 cluster
has a path to a given MDisk and is counted as one path. In a four-node cluster, the MDisk would
have a path count of 4. In the lscontroller output detail, the path count is reported by storage port
(16 = 4 paths x 4 MDisks). This output identifies that only four ports of the DS8K are zoned with the Storwize V7000 cluster in this example.
The DS8K MDisks can be accessed through any of its ports without controller preference considerations. There is no controller port allegiance; therefore, the preferred WWPN is blank. Since the MDisk can be accessed through any of the zoned DS8K ports, the active WWPN has a value of many. Remember that each of the four Storwize V7000 nodes designates one of its Storwize V7000 ports to access a given MDisk, for a total of 4 paths. Since the Storwize V7000 access method for the DS8000 is round robin, any of the 4 zoned ports of the DS8K can be used. Thus, the maximum path count to this MDisk is 16 (4 ports x 4 paths of the MDisk).
Figure 5-84. Example: Storage system path count FlashSystem in two-node Storwize V7000 cluster
Similar to the DS8000, the IBM FlashSystem is a symmetric device and there is no preferred
controller concept. LUNs are accessible from any of the four ports of the owning FlashSystem. The
Storwize V7000 also uses the round robin method to access FlashSystem MDisks. Round robin
distributes I/Os of an MDisk over all zoned ports of the FlashSystem.
The path count value is accounted on a per MDisk basis. Therefore, each node of the Storwize
V7000 cluster has a path to a given MDisk and is counted as one path. In a two-node cluster, the
MDisk would have a path count of 2.
In the lscontroller output detail, the path count is reported by storage port (18 = 2 paths x 9
MDisks). This is a two-node Storwize V7000 cluster. All four ports of the FlashSystem are zoned
with the two-node Storwize V7000 cluster in this example.
The FlashSystem MDisks can be accessed through any of its ports without controller preference considerations. There is no controller port allegiance; therefore, the preferred WWPN is blank. Since the MDisk can be accessed through any of the zoned FlashSystem ports, the active WWPN has a value of many.
Each of the two Storwize V7000 nodes designates one of its Storwize V7000 ports to access a given MDisk, for a total of 2 paths. Since the Storwize V7000 access method for the FlashSystem is round robin, any of the 4 ports of the FlashSystem can be used. Thus, the maximum path count to this MDisk is 8 (4 ports x 2 paths of the MDisk).
As a general practice, ensure that the number of MDisks presented from a given storage system is a multiple of the number of its storage ports that are zoned with the Storwize V7000. This approach is particularly useful for storage systems where the round robin method is not implemented for MDisk access.
• External storage
• Encryption
This topic discusses the Storwize V7000 support for hardware and software encryption.
The Storwize V7000 Gen2 supports two levels of encryption: hardware encryption of internal storage and software encryption of external storage. Both methods of encryption protect against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices, and they can also facilitate the warranty return or disposal of hardware. Distributed RAID (DRAID) is not supported by hardware encryption, and software encryption cannot be used on internal storage.
The Storwize V7000 supports software data-at-rest encryption using the AES-NI CPU instruction set and engines: eight cores on the one CPU perform AES 256-XTS encryption, which is a FIPS 140-2 compliant algorithm.
Software encryption maps all I/O buffers into user space (this carries a risk of data scribblers); reads are decrypted using the client-provided buffer, and writes are encrypted into a new pool of buffers. This processing is performed by software in the Storwize V7000 nodes.
Data at rest is also instantly secured, without having to rely on human intervention, which is open to user error and can leave the data vulnerable.
There is no performance penalty for data-at-rest encryption. Encryption of system data and metadata is not required, so system data and metadata are not encrypted.
Data-at-Rest Encryption is an optional feature that requires a purchased license.
(Figure: Storwize V7000 node software stack, showing the SCSI target, forwarding, replication, upper cache, FlashCopy, clustering, peer mirroring, RAID, and SCSI initiator layers together with communications, configuration, and the interface layer over Fibre Channel, iSCSI, FCoE, SAS, and PCIe; software encryption sits in the interface layer, while SAS encryption is performed in hardware.)
Using software encryption in the interface layer of the Storwize V7000 node canister, rather than performing encryption in the SAS hardware layer, allows for greater flexibility in how the Storwize V7000 handles external MDisks and their encryption attributes. In this case, I/O goes into the
Platform interface (PLIF), and gets encrypted there before being passed to a driver.
If you are mixing storage pools with internal RAID encrypted drives/flash drives and externally
virtualized storage, apply the key to the pool and it will only apply the software encryption to the
external storage, letting the SAS hardware separately encrypt the internal storage.
IBM Storwize V7000 uses the Protection Enablement Process (PEP) to transform the system from a state that is not protection-enabled to a state that is protection-enabled. This process establishes what is called a master encryption access key to access the system and a data encryption key to encrypt and decrypt data.
When a system is protection-enabled, the system is both encryption-enabled and access-control-enabled, and an access key is required to unlock the Storwize V7000 so it can transparently perform all required encryption-related functionality.
The encryption access key can be created during the system initialization process by inserting the USB flash drives into the control enclosures. During this process, you need to add a check mark on Encryption to start the encryption wizard.
Activation of the license can be performed in one of two ways, either automatically or manually, and can be performed either during System Setup or later on an initialized system.
To activate encryption automatically using the System Setup menu, the workstation being used to
activate the license must be connected to an external network. Select the Yes option. If the control
enclosure is not highlighted, select the control enclosure you wish to enable the encryption feature
on. Click the Actions menu. From here, select Activate License Automatically. Click Next.
From the pop-up menu, enter the authorization code specific to the enclosure you have selected.
The authorization code can be found within the licensed function authorization documents. This
code is required to obtain keys for each licensed function that you purchased for your system. Once the authorization code has been entered, click the Activate button.
The system will generate the activatefeature command which connects to IBM in order to verify
the authorization code, retrieve license keys and apply them. This procedure can take a few
minutes to complete.
Once the encryption is successfully enabled, a green check mark appears under the Licensed row.
If a problem occurs with the activation procedure, the Storwize V7000 times out after a short time (approximately two and a half minutes).
In this case, check that you have a valid activation (not license) code, that you have access to the Internet, and that there are no other problems with the Storwize V7000 Gen2.
Encryption License hyperlink
From the System > Settings menu, you can use the encryption license hyperlink to manually
activate the encryption feature on a previously initialized system. This procedure displays the same task steps used to complete the System Setup automatic encryption activation.
Once the encryption license feature has been successfully applied, the Storwize V7000
management GUI Suggest Task provides the option to enable data-at-rest encryption for the
system. The suggested task also serves as a reminder that encryption is not enabled. To perform
this task, click Enable Encryption which will re-direct you to the Enable Encryption wizard, or click
Cancel to continue and enable encryption at a later time.
USB flash drives
Before you can enable encryption for the Storwize V7000, you need three USB flash drives to complete the process. The USB drives are used to store copies of the encryption keys. The wizard prompts you to insert two USB flash drives into one V7000 node canister's USB ports; the remaining USB flash drive goes into the second V7000 node canister. The locations of the USB ports are highlighted in this image.
When the system detects the USB flash drives, encryption keys are automatically copied to each USB flash drive.
Although the system has copied encryption keys to the three USB flash drives, encryption is not enabled until you click the Commit button.
Once the system is encrypted, if you wish to access data, perform upgrades, or have a Storwize V7000 system power on or restart automatically, you must have an encryption key available to each control enclosure so that both canisters have access to the encryption key.
It is acceptable to leave one USB flash drive in each node if the environment is secure. However, the standard practice is to always implement secure operations by making extra copies of the encryption keys and locking all USB keys in a secure location to prevent unauthorized access to system data.
You can verify the system encryption status from the management GUI by selecting Settings >
Security > Encryption. If encryption keys are still inserted, this view will also indicate that they are
accessible.
When encryption is enabled, an access key is provided to unlock the Storwize V7000 so that it can perform encryption on user data writes and reads to and from the external storage systems.
This visual illustrates how software encryption encrypts and decrypts user data on an external storage system. During read and write operations, data is automatically encrypted and decrypted as it passes through the platform interface. Hardware encryption is performed by the SAS hardware; therefore, hardware encryption applies only to internal drives.
The encryption process is application-transparent, which means that applications are not aware that encryption and protection are occurring; it is completely transparent to the users.
Data is not encrypted when transferred on SAN interfaces in other circumstances (front end/remote system/inter node):
• Intra-system communication for clustered systems
• Remote mirror
• Server connections
(Figure: An encrypted volume whose extents are provided by encrypted MDisk 1, MDisk 2, and MDisk 3 in an encrypted pool.)
Any pools that are created after encryption is enabled are assigned a key that can be used to encrypt
and decrypt data. However, if encryption was configured after volumes were already assigned to
non-encrypted pools, you can migrate those volumes to an encrypted pool by using child pools.
When you create a child pool after encryption is enabled, an encryption key is created for the child
pool even when the parent pool is not encrypted. You can then use volume mirroring to migrate the
volumes from the non-encrypted parent pool to the encrypted child pool. You can use either the
management GUI or the command-line interface to migrate volumes to an encrypted pool.
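A minimal CLI sketch of that migration path, assuming a parent pool named Pool0, a volume named VOL1, and a 500 GB child pool (all names and sizes here are illustrative, and copy 0 is assumed to be the original copy):
IBM_2076:Team50A:TeamAdmin>mkmdiskgrp -name Pool0_child -parentmdiskgrp Pool0 -size 500 -unit gb -encrypt yes
IBM_2076:Team50A:TeamAdmin>addvdiskcopy -mdiskgrp Pool0_child VOL1
IBM_2076:Team50A:TeamAdmin>lsvdisksyncprogress VOL1
IBM_2076:Team50A:TeamAdmin>rmvdiskcopy -copy 0 VOL1
Once the new copy in the encrypted child pool is fully synchronized, removing the original copy leaves the volume residing only on encrypted storage.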
Visit the Storwize V7000 product support website for the latest list of storage systems and their
corresponding supported software and firmware levels.
Refer to the Storwize V7000 Information Center > Configuration > Configuring and servicing
external storage systems, for detailed descriptions of each supported storage system.
Support for additional devices might be added periodically. The website would have more current
information than this handout.
Keywords
• Candidate disk
• Cluster initialization
• Cluster system
• Command-line interface (CLI)
• Back-end storage
• Encryption
• Extent
• External storage
• Internal storage
• Redundant Array of Independent Disks (RAID)
• SCSI LUNs
• Spare disk
• Storage provisioning
• Storwize V7000 GUI
• Storage pool
• Quorum disks
Review questions (1 of 2)
1. True or False: Storwize V7000 2076-12F/24F expansion
enclosures are displayed in the Storwize V7000 GUI as
external storage resources.
Review answers (1 of 2)
1. True or False: Storwize V7000 2076-12F/24F expansion enclosures are displayed in the Storwize V7000 GUI as external storage resources.
The answer is false. IBM Storwize V7000 2076-12F/24F expansion enclosures are configured as internal storage within the management GUI.
Review questions (2 of 2)
4. List at least three use attributes of drive objects.
Review answers (2 of 2)
4. List at least three use attributes of drive objects.
The answers are unused, failed, candidate, member, and
spare.
Unit summary
• Summarize the infrastructure of Storwize V7000 block storage virtualization
• Recall steps to define internal storage resources using the GUI
• Identify the characteristics of external storage resources
• Summarize how external storage resources are virtualized for Storwize V7000 management GUI and CLI operations
• Summarize the benefits of quorum disk allocation
• Recognize how external storage MDisk allocation facilitates I/O load balancing across zoned storage ports
• Distinguish between Storwize V7000 hardware and software encryption
Overview
This unit provides an overview of IBM Storwize V7000 FC and iSCSI host integration. This unit also identifies striped, sequential, and image volume allocations to supported hosts, including the benefits of I/O load balancing and non-disruptive volume movement between caching I/O groups.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Summarize host system functions in a Storwize V7000 system environment
• Differentiate the configuration procedures required to connect an FCP host versus an iSCSI host
• Recall the configuration procedures required to define volumes to a host
• Differentiate between a volume's caching I/O group and accessible I/O groups
• Identify subsystem device driver (SDD) commands to monitor device path configuration
• Perform non-disruptive volume movement from one caching I/O group to another
This topic discusses the host server configuration in a Storwize V7000 environment.
Host terminology
V7000 terminology: Description
Caching I/O group: The I/O group in the system that performs the cache function for a volume.
Child pool: Child pools are used to control capacity allocation for volumes that are used for specific purposes.
Host mapping: Host mapping refers to the process of controlling which hosts have access to specific volumes within a cluster (it is equivalent to LUN masking).
iSCSI qualified name (IQN): IQN refers to special names that identify both iSCSI initiators and targets.
I/O group: Each pair of V7000 cluster nodes is known as an input/output (I/O) group.
Storage pool (managed disk group): A storage pool is a collection of storage capacity that is made up of MDisks, which provides the pool of storage capacity for a specific set of volumes.
VLAN: Virtual Local Area Network (VLAN) tagging separates network traffic at the layer 2 level for Ethernet transport.
Volume protection: Prevents active volumes or host mappings from being deleted inadvertently.
Write-through mode: A process in which data is written to a storage device at the same time as the data is cached.
Listed are a few host-related terms that are used throughout this unit.
The Storwize V7000 integrates intelligence into the SAN fabric by placing a layer of abstraction
between the host server’s logical view of storage (front-end) and the storage systems’ physical
presentation of storage resources (back-end).
By providing this insulation layer the host servers can be configured to use volumes and be
uncoupled from physical storage systems for data access. This uncoupling allows storage
administrators to make storage infrastructure changes and perform data migration to implement
tiered storage infrastructures transparently without the need to change host server configurations.
Additionally, the virtualization layer provides a central point for management of block storage devices in the SAN by provisioning storage to host servers across multiple storage systems. It also provides a platform for advanced functions such as data migration, Thin
Provisioning, and data replication services.
(Figure: Levels of storage virtualization. With a JBOD, the host sees the physical disks as-is (WYSIWYG); with storage-level virtualization, the RAID controller aggregates physical disks into RAID arrays and presents logical volumes such as La, Lb, and Lc; with host-level virtualization, host software aggregates the disks that it sees into logical volumes.)
Physical storage can be presented to a host system on an as-is basis. This is the case with simple
storage devices packaged as just a bunch of disks (JBOD). There is a one-to-one correspondence
between the JBOD and the disks that are seen by the host, that is, “what you see is what you get”
(WYSIWYG).
As faster processors and microchips become commonplace, storage aggregation and virtualization
can conceivably be done at any layer of the I/O path and introduce negligible latency to the I/O
requests. This results in more abstraction, or separation, between the physical hardware and the
logical entity that is presented to the host operating system.
With RAID storage systems the physical disks are configured into logical volumes in the storage
controller and presented to the host as physical disks. The aggregation and virtualization are
implemented in the storage system, outboard from the host.
Aggregation and virtualization can also be done in the host. The logical volume manager might
group or aggregate multiple physical disks (as seen by the host physical layer) and manage those
disks as one logical volume. Or, a logical volume might comprise partitions striped across multiple
physical disks.
(Figure: Host attachment options, with iSCSI hosts identified by IQNs attaching through the LAN and FC hosts identified by WWPNs attaching through SAN zones.)
Hosts can be connected to the Storwize V7000 Fibre Channel ports directly or through a SAN fabric. For a given host, the recommendation is to attach either through SAN-attached 8 Gbps or 16 Gbps Fibre Channel connections using Fibre Channel WWPNs, or through 1 Gbps iSCSI using its iSCSI qualified name (IQN), but generally not both at the same time.
The Storwize V7000 also supports native attachment of host systems using the optional 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE) Ethernet ports. This enables customers to connect host systems to a Storwize V7000 using higher-performance, lower-cost IP networks, supporting up to 7x per-port throughput over 1 Gb iSCSI. The 10 Gb port cannot be used for system-to-system communication, nor can it be used to attach external storage systems.
Uempty
FC redundant configuration
• Dual fabric is highly recommended.
• Hosts should be connected to all interface nodes.
• Number of paths through the SAN from V7000 nodes to a
host must not exceed eight.
In a SAN, a host system can be connected to a storage device across the network. For Fibre
Channel host connections the V7000 must be connected to either SAN switches or directly
connected to a host port. The V7000 detects Fibre Channel host interface card (HIC) ports that are
connected to the SAN. Worldwide port names (WWPNs) associated with the HIC ports are used to
define host objects.
In a manner analogous to the dual fabric (redundant fabric) Fibre Channel environment, a highly
available environment can be created for iSCSI-attached hosts by using two Ethernet NICs and two independent LANs. Both Ethernet ports in a V7000 node are connected to the two LANs, and in conjunction with the two host NICs a multipathing environment is created for access to volumes.
The number of paths through the SAN from V7000 nodes to a host must not exceed eight. For most
configurations, four paths to an I/O Group (four paths to each volume that is provided by this I/O
Group) are sufficient.
Preparation guidelines
• List of general procedures that pertain to all hosts:
When managing Storwize V7000 storage that is connected to any host, you must follow basic
configuration guidelines. These guidelines pertain to determining the preferred operating system,
driver, firmware, and supported host bus adapters (HBAs) to prevent unanticipated problems due to
untested levels.
Next, determine the number of paths through the fabric that are allocated to the host, the number of host ports to use, and the approach for spreading the hosts across I/O groups. The guidelines also apply to logical unit number (LUN) mapping and the correct size of virtual disks (volumes) to use.
The maximum number of host objects in an 8-node V7000 system cluster is 2,048. A total of 512
distinct, configured host worldwide port names (WWPNs) is supported per I/O Group. If the same
host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
The maximum number of volumes in an 8-node system is 8,192 (having a maximum of 2,048
volumes per I/O Group). The maximum storage capacity supported is 32 PB per system. The
maximum size of a single volume is 256 TB.
To configure more than 256 hosts, you must configure the host to I/O Group mappings on the
Storwize V7000 Gen2. Each I/O Group can contain a maximum of 256 hosts, so it is possible to
create 1024 host objects on an eight-node Storwize V7000 Gen2 clustered system. Volumes can
only be mapped to a host that is associated with the I/O Group to which the volume belongs.
For more information about the maximum configurations that are applicable to the system, I/O
Group, and nodes, visit the IBM Support website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005423.
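As a sketch of how a host object can be restricted to a subset of I/O groups at creation time, which is how configurations with more than 256 hosts per I/O group are managed (the host name and WWPN shown are illustrative):
IBM_2076:Team50A:TeamAdmin>mkhost -name WIN_HOST1 -fcwwpn 2100000E1E30A2F8 -iogrp io_grp0
Host, id [0], successfully created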
This topic discusses the host connection in a Storwize V7000 system environment, beginning with the FC host attachment.
(Figure: The Storwize V7000 appears as a SCSI target to the host SCSI initiators across Fabric 1 and Fabric 2, while its AC2 node canisters act as SCSI initiators toward the back-end storage targets.)
The Storwize V7000 supports IBM and non-IBM storage systems so that you can consolidate
storage capacity and workloads for open-systems hosts into a single storage pool. In environments
where the requirement is to maintain high performance and high availability, hosts are attached
through a storage area network (SAN) by using the Fibre Channel Protocol (FCP). From the
perspective of the SCSI protocol, the Storwize V7000 is no different from any other SCSI device. It
appears as a SCSI target to the host SCSI initiator. The Storwize V7000 nodes behave as a SCSI device to the host objects they service, and in turn act as a SCSI initiator that interfaces with the back-end storage systems.
For high availability, the recommendation for attaching the Storwize V7000 system to a SAN is
consistent with the recommendations of designing a standard SAN network. That is, build a dual
fabric configuration in which if any one single component fails then the connectivity between the
devices within the SAN is still maintained although possibly with degraded performance.
Fibre Channel (FC) is the prevalent technology standard in the storage area network (SAN) data
center environment. This standard has created a multitude of FC-based solutions that have paved
the way for high performance, high availability, and the highly efficient transport and management
of data.
Each device in the SAN is identified by a unique worldwide name (WWN). The WWN also contains
a vendor identifier field and a vendor-specific information field, which is defined and maintained
by the IEEE.
In order for the Storwize V7000 to present volumes to an attached host, the 2145 host attachment support file set must be installed. The hosts must run a multipathing device driver to present the multiple paths back to the host as a single device. The multipathing driver supported and delivered by Storwize V7000 Gen2 is the Subsystem Device Driver (SDD). Native multipath I/O (MPIO) drivers on selected hosts are supported, such as the Subsystem Device Driver Device Specific Module (SDDDSM) for Windows hosts or the Subsystem Device Driver Path Control Module (SDDPCM) for AIX hosts.
The Subsystem Device Driver (SDD) provides multipath support for certain OS environments that
do not have native MPIO capability. Both the SDDDSM and SDDPCM are loadable path control
modules for supported storage devices to supply path management functions and error recovery
algorithms. The host MPIO device driver along with SDD enhances the data availability and I/O
load balancing of Storwize V7000 volumes. The host MPIO device driver automatically discovers,
configures, and makes available all storage device paths. SDDDSM and SDDPCM then manage
these paths to provide the following functions:
• High availability and load balancing of storage I/O
• Automatic path-failover protection
• Concurrent download of supported storage devices’ licensed machine code
• Prevention of a single-point failure
Multi-path I/O (MPIO) is an optional feature in Windows Server 2008 R2, and is not installed by
default. Installing requires a system reboot. After restarting the computer, the computer finalizes the
MPIO installation.
When MPIO is installed, the Microsoft device-specific module (DSM) is also installed, as well as an
MPIO control panel. The control panel can be used to do the following:
• Configure MPIO functionality
• Install additional storage DSMs
• Create MPIO configuration reports
Microsoft ended support for Windows Server 2003 on July 14, 2015. This change affects your
software updates and security options.
Depending on the host adapter, you can use an HBA application such as QLogic's SANSurfer FC HBA Manager, which provides a graphical user interface (GUI) that lets you easily install, configure, and deploy QLogic Fibre Channel HBAs. It also includes diagnostic and troubleshooting capabilities to optimize SAN performance.
On an FC host system, you can verify the host HBA driver settings using Device Manager. Device
Manager is an extension of the Microsoft Management Console that provides a central and
organized view of all the Microsoft Windows recognized hardware installed in a computer. Device
Manager can be used for changing hardware configuration options, managing drivers, disabling
and enabling hardware, identifying conflicts between hardware devices, and much more.
Storwize V7000 WWPN format: 50:01:73:68:NN:NN:RR:MP
• 50:01:73:68: IEEE company ID
• NN:NN: IBM Storwize V7000 serial number (hex)
• RR: Rack ID (01-ff); 0 for the WWNN
• M: Module ID (0-f); 0 for the WWNN
• P: Port ID (0-3); 0 for the WWNN
SAN zoning connectivity of a Storwize V7000 environment can be verified using the management GUI by selecting Settings > Network and then selecting Fibre Channel Connectivity in the Network filter list. The Fibre Channel Connectivity view displays the connectivity between nodes and other storage systems and hosts that are attached through the Fibre Channel network.
The GUI zoning output conforms to the guideline that, for a given storage system, you zone its ports with all the ports of the Storwize V7000 cluster on that fabric. The number of ports dedicated determines the number of ports zoned.
In a dual fabric, Storwize V7000 storage system ports and the additional V7000 storage enclosure
ports as well as those ports for external storage are split between the two SAN fabrics. The WWPN
values are specific to the Storwize V7000 node ports of the same fabric.
A host object can be created by using the GUI Hosts > Hosts menu option. Click the Add Host
button to create either a Fibre Channel or iSCSI host. Before you proceed, make sure you have
knowledge of the host WWPNs or the IQN to verify that it matches back to the selected host.
The Add Host window allows you to specify parameters in which to define an FC host name and
add the port definitions WWPNs that corresponds to your host HBAs.
By default, new hosts are created as the generic host type and assigned to all four I/O groups from which the host can access volumes. You can select the Advanced option to modify the host OS type: for example, select HP_UX for Hewlett-Packard UNIX hosts (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO. You can also restrict which I/O groups the host can access volumes from.
When defining an AIX host object, for various reasons the host WWPNs might not be displayed
within the Host Port (WWPN) panel such as the AIX FC ports might have logged out from the FC
switch, or a new AIX host has been added to the SAN fabric. You will first need to issue the cfgmgr
command to force the ports to perform a Fibre Channel login and sync the ports with the V7000
nodes. You can display the installed AIX host adapters by using the lsdev -Cc adapter |grep
fcs command. The maximum number of FC ports that are supported in a single host (or logical
partition) is four. These ports can be four single-port adapters, two dual-port adapters, or a
combination as long as the maximum number of ports that attach to V7000 does not exceed four.
The fscsi0 and fscsi1 devices are protocol conversion devices in AIX. They are child devices of fcs0
and fcs1 respectively.
Display the WWPN, along with other attributes including the firmware level, by using the lscfg
-vpl fcs* wildcard command or using the adapter number.
Once the AIX host ports are synced with the V7000 system, the WWPN should be available for
selection. Defining an AIX host object can be done in the same manner as the Windows host.
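A short sketch of that AIX sequence (the adapter listings and the WWPN shown are illustrative output):
# cfgmgr
# lsdev -Cc adapter | grep fcs
fcs0 Available 00-00 FC Adapter
fcs1 Available 00-01 FC Adapter
# lscfg -vpl fcs0 | grep "Network Address"
        Network Address.............10000090FA021A34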
The V7000 GUI generates the mkhost command to correlate the selected WWPN values with the
host object defined and assigns a host ID. When a host object is defined the host count is
incremented by one for each I/O group specified.
If required, you can use the host properties option to modify host attributes, such as changing the host name and host type, or to restrict host access to volumes in a particular I/O group. You can also view assigned volumes and add or delete a WWPN.
The lshost <hostname> (or ID) command returns the details that are associated with the specified host object. It displays the values of all the WWPNs defined for the host object. The node_logged_in_count is the number of V7000 nodes that the host WWPN has logged in to.
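A hedged sketch of that output for a two-port FC host in a two-node cluster (the host name and WWPNs are illustrative):
IBM_2076:Team50A:TeamAdmin>lshost WIN_HOST1
id 0
name WIN_HOST1
port_count 2
type generic
iogrp_count 4
status online
WWPN 2100000E1E30A2F8
node_logged_in_count 2
state active
WWPN 2100000E1E30A2F9
node_logged_in_count 2
state active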
A host object can be defined to access fewer I/O groups in order to manage larger environments
where the host object count might exceed the maximum that is supported by an I/O group.
To support more than 256 host objects the rmhostiogrp command is used to remove an I/O group
eligibility from an existing host object.
The host object to I/O group associations only define a host object’s entitlement to access volumes
owned by the I/O groups. Physical access to the volumes requires proper SAN zoning for Fibre
Channel hosts and IP connectivity for iSCSI hosts.
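A minimal sketch, assuming a host named WIN_HOST1 that should keep access only to I/O groups 0 and 1 (lshostiogrp then confirms the remaining associations):
IBM_2076:Team50A:TeamAdmin>rmhostiogrp -iogrp 2:3 WIN_HOST1
IBM_2076:Team50A:TeamAdmin>lshostiogrp WIN_HOST1
id name
0 io_grp0
1 io_grp1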
Use the Fibre Channel panel to display the Fibre Channel connectivity between nodes, storage
systems, and hosts. This example of the connectivity matrix shows FC hosts and system
information with a listing of the node and port number that they are connected to.
This topic discusses the iSCSI host connection in a Storwize V7000 environment.
iSCSI architecture
• Mapping of the SCSI architecture model to IP
  - Storage server (target)
  - Storage client (initiator)
Internet Small Computer System Interface (iSCSI) is an alternative means of attaching hosts to the
Storwize V7000 nodes. All communications with external back-end storage subsystems or other
IBM virtual storage systems must be done through a Fibre Channel or FCoE connection. The
iSCSI function is a software function that is provided by the Storwize V7000 code and not the
hardware.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network that is based on IP routers and Ethernet switches. iSCSI is a block-level protocol that
encapsulates SCSI commands into TCP/IP packets and uses an existing IP network. A pure SCSI
architecture is based on the client/server model.
Before configuring iSCSI host access, you need to identify whether the target (storage node) you plan to use supports MPIO and whether it supports active-active connections. If the target supports MPIO but does not support active-active, you can still make an iSCSI MPIO connection, but the only supported mode will be failover. Failover mode provides the network with redundancy, but it does not provide the performance increase of the other MPIO modes.
Some target manufacturers have their own MPIO DSM (Device Specific Module), therefore, it might
be preferable to use the target specified DSM mode. Consult the Storwize V7000 Support website
for supported iSCSI host platforms and if multipathing support is available for the host OS. If you
are using Windows 2008, MPIO support should be implemented when more than one path or
connection is desired between the host and the Storwize V7000 system.
You will also need to install the iSCSI initiator to initiate a SCSI session which sends a SCSI
command to the iSCSI target. The iSCSI target waits for the initiator’s commands and provides
required input/output data transfers. The iSCSI initiator does not provide the LUN, as it cannot
perform read or write commands. Therefore, it has to rely on the target to provide the initiators one
or more LUNs.
You can attach the Storwize V7000 to iSCSI hosts by using the Ethernet ports of the Storwize
V7000. The Storwize V7000 nodes have two or four Ethernet ports. These ports are either for 1 Gb
support or 10 Gb support, depending on the model. For each Ethernet port a maximum of one IPv4
address and one IPv6 address can be designated for iSCSI I/O.
An iSCSI host connects to the Storwize V7000 through the node-port IP address. If the node fails,
the address becomes unavailable and the host loses communication with the Storwize V7000.
Therefore, you want to ensure that both the primary (configuration) node and the secondary
(failover) node are configured for host access.
The cfgportip command is generated to set the iSCSI IP address for node ID 1 port 1 and node ID 2 port 2.
The lsportip command output displays the iSCSI IP port configuration of the nodes of the cluster.
Use the -filtervalue node_id= keyword to filter the output by node.
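A sketch of those two commands, reusing iSCSI port addresses shown later in this unit and assuming an illustrative subnet mask and gateway:
IBM_2076:Team50A:TeamAdmin>cfgportip -node 1 -ip 10.6.9.201 -mask 255.255.255.0 -gw 10.6.9.1 1
IBM_2076:Team50A:TeamAdmin>cfgportip -node 2 -ip 10.6.9.203 -mask 255.255.255.0 -gw 10.6.9.1 2
IBM_2076:Team50A:TeamAdmin>lsportip -filtervalue node_id=1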
(Figure: iSCSI target IQNs presented at the node iSCSI port IP addresses 10.6.9.211 and 10.6.9.212.)
Each iSCSI initiator and target must have a worldwide unique name which is typically implemented
as an iSCSI qualified name (IQN). In this example, the Windows IQN is shown on the
Configuration tab of the iSCSI Initiator Properties window. The iSCSI initiator host’s IQN is used
to define a host object.
The Storwize V7000 node IQN can be obtained by selecting Settings > Network > iSCSI
Configuration pane of the Storwize V7000 GUI. The verbose format of the lsnode command can
also be used to obtain the Storwize V7000 node IQN.
Once you have defined the iSCSI initiator IQN and iSCSI target IQN, you need to perform an iSCSI initiator discovery from the host server of the target portal. This process must be completed using the installed iSCSI MPIO support. From the iSCSI Initiator Properties window Discovery tab, click the Discover Target Portal button and enter the Storwize V7000 node iSCSI IP port address or DNS name. Port number 3260 is the default (the official TCP/IP port for the iSCSI protocol). Once the portal address has been entered, the available iSCSI targets are displayed.
It is recommended to use the Favorite Targets tab to remove any previously mounted targets; they might obstruct an iSCSI host's discovery of a new target if they try to reconnect.
• Advanced Settings:
  - Set Local adapter to Microsoft iSCSI Initiator
  - Select the source IP address on the iSCSI network
  - Set the destination address
The Targets tab lists each Storwize V7000 node IQN that is automatically discovered by the iSCSI initiator (Windows in this example). The discovered targets have an initial status of inactive. Use the Connect button to connect to the target (Storwize V7000 node). The Connect to Target window provides options to tailor the behavior of the connection; check both the box for persistent connections and the box to enable multipathing access. Use the Advanced button to configure the individual connections by pairing initiator and target ports in the same subnet to the Storwize V7000 node. Set Local adapter to Microsoft iSCSI Initiator, and select one of the two IP addresses on the iSCSI network as the source IP. Select the destination address. Once this process is complete, the initiator is connected to the discovered target (node).
If the target supports multiple sessions then the Add Session option under the Target Properties
panel allows you to create an additional session. You can also disconnect individual sessions that
are listed. Use the Devices button to view more information about devices that are associated with
a selected session.
Multiple Connections per Session (MCS) support is defined in the iSCSI RFC to allow multiple
TCP/IP connections from the initiator to the target for the same iSCSI session. This is iSCSI
protocol specific. This allows I/O to be sent over either TCP/IP connection to the target. If one
connection fails then another connection can continue processing I/O without interrupting the
application. Not all iSCSI targets support MCS. iSCSI targets that support MCS include but are not limited to EMC Celerra, iStor, and Network Appliance.
An iSCSI host is an alternative means of attaching hosts to the Storwize V7000. However,
communications with back-end storage subsystems and with other Storwize V7000 systems can
occur only through FC.
When you are setting up a host server for use as an iSCSI initiator with Storwize V7000 volumes,
the specific steps vary depending on the particular host type and operating system that you use. An
iSCSI host must first be configured with the Storwize V7000 node iSCSI IP port address for access,
assuming that you have performed the necessary host access requirements.
The Add Host procedure for creating an iSCSI host is comparable to setting up Fibre Channel hosts. Instead of entering Fibre Channel ports, it requires you to enter the iSCSI initiator host's IQN that was used to discover and pair with the Storwize V7000 node IQN.
You can identify the host objects and the number of ports (IQN) by using the lshost command or
use the iSCSI Initiator to copy and paste the IQN into the iSCSI Ports field.
When the host is initially configured the default authentication method is set to no authentication
and no Challenge Handshake Authentication Protocol (CHAP) secret is set. You can choose to
enable CHAP authentication which involves sharing a CHAP secret passphrase between the
Storwize V7000 and the host before the Storwize V7000 allows access to volumes.
The Storwize V7000 GUI generates the mkhost command to create the iSCSI host object; it contains the -iscsiname parameter followed by the iSCSI host IQN. The maximum number of iSCSI hosts per I/O group is 256 per Storwize V7000 due to the IQN limits.
If you are using Windows, it logs on to the target as soon as you click Connect. Other platforms
such as Linux RH, log on during device discovery.
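A minimal sketch of the generated command, with an illustrative host name and IQN:
IBM_2076:Team50A:TeamAdmin>mkhost -name WIN_ISCSI1 -iscsiname iqn.1991-05.com.microsoft:win-host1 -iogrp io_grp0
Host, id [1], successfully created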
The SDDDSM netstat command can be used to display the contents of various network-related
data structures for active connections. The –n option displays active TCP connections, however,
addresses and port numbers are expressed numerically and no attempt is made to determine
names. When this flag is not specified, the netstat command interprets addresses where possible
and displays them symbolically. This flag can be used with any of the display formats.
(Figure: Current state of the iSCSI configuration. NODE1 is the config node and NODE2 is the partner node, with management IP addresses 10.6.9.200 and 10.6.9.202 and iSCSI IP addresses 10.6.9.201, 10.6.9.203, and 10.6.9.208. Run the lsnode -delim , command to confirm that the NODE1 iSCSI IP addresses have been transferred to NODE2.)
This visual illustrates an iSCSI IP address failover that might be the result of a node failure or code upgrade. In either case, if a node is no longer available, its partner node inherits the iSCSI IP addresses of the departed node. The partner node port responds to the inherited iSCSI IP address
as well as its original iSCSI IP address. However, if the failed node was the Storwize V7000 cluster
configuration node then the cluster designates another node as the new configuration node. The
cluster management IP addresses are moved automatically to the new configuration node (or
config node).
From the perspective of the iSCSI host, I/O operations proceed as normal. To allow hosts to
maintain access to their data, the node-port IP addresses for the failed node are transferred to the
partner node in the I/O group. The partner node handles requests for both its own node-port IP
addresses and also for node-port IP addresses on the failed node. This process is known as
node-port IP failover. Therefore, the Storwize V7000 node failover activity is totally transparent and
non-disruptive to the attaching hosts.
• Advantage:
ƒ Storwize V7000 node failover activity is transparent and non-disruptive.
ƒ iSCSI host I/O operations proceed as normal.
• Disadvantages:
ƒ Opened CLI sessions are lost when a config node switch occurs.
ƒ Opened GUI sessions might survive the switch.
The Storwize V7000 node failover activity is totally transparent and non-disruptive to the attaching
hosts.
If there is an opened CLI session during the node failover then the session is lost when a config
node switch occurs. Depending on the timing, opened GUI sessions might survive the switch.
(Figure: Node1 returns. The iSCSI IP addresses that were transferred to NODE2 fail back to NODE1, while NODE2 remains the configuration node. Run the lsnode -delim , command to confirm the node iSCSI IP address assignments.)
Once the failed node has been repaired or the code upgrade has completed, it is brought back online. The iSCSI IP addresses previously transferred to NODE2 automatically fail back to NODE1. The configuration node remains intact and does not change nodes. A configuration node switch occurs only if its hosting node is no longer available.
When a failed node re-establishes itself to rejoin the cluster, its attributes do not change (for example, its object name is the same). However, a new node object ID is assigned. For example, NODE1, whose object ID was 1, is now assigned the next sequentially available object ID, such as 5.
(Figure: Node1 has returned. A host port failure reduces the number of paths between the host and the Storwize V7000 cluster from 4 to 2, but host application I/O continues without issues due to host multipath support.)
Practice redundancy to protect against LAN failures, host port failures, or V7000 node port failures.
Configure dual subnets, two host interface ports, and the second IP port on each node. Implied with
defining multiple iSCSI target addresses to the initiator is the need for host multipathing support.
It is also highly recommended that a second cluster management IP address is defined at port 2 so
that a LAN failure does not prevent management access to the V7000 cluster.
If a host port failure occurs, it reduces the number of paths between the host and the V7000 cluster
from four to two. However, host application I/O continues without issues due to host multipath
support. When the failed host port returns then the original pathing infrastructure from the host to
the V7000 volume is restored automatically.
The second or alternate cluster management IP address assignment is discussed in the last unit of
this course.
Redundancy enables a robust LAN configuration
(Figure: Redundant LANs connecting the iSCSI host, the iSNS server, and the email gateway to the Storwize V7000.)
Redundancy enables a robust LAN configuration both for iSCSI attached hosts as well as V7000
configuration management.
Determining the difference between a Fibre Channel host and an iSCSI host that are listed within
the V7000 GUI can be challenging if the hosts are defined with common names. This is one reason
why using a naming convention is important.
To manage host resources, such as modifying host mappings, unmapping hosts, renaming hosts, or creating new hosts, right-click the respective host to view the available options.
Uempty
Host properties
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Properties option provides an overview of the selected host. From this view you have the ability
to modify the host name, change the host type or restrict host access to a particular I/O group.
The Mapped Volumes tab provides a view of the volumes that are mapped to the host.
The Port Definition tab provides a quick status update on the host port definitions. From this view, administrators can also add or delete FC and iSCSI host port definitions.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
This topic discusses the host to Storwize V7000 volume access infrastructure.
Uempty
Volume allocation
• Volumes must be mapped to a particular host object.
• Volumes are accessed through host WWPNs or IQNs.
• Volumes are automatically assigned to an I/O group, using a round-robin algorithm.
[Slide figure: FC host (WWPNs) and iSCSI host (IQNs) objects mapped to volumes V1 and V2 in I/O grp 0; the Storwize V7000 node canister pair (preferred nodes) serves the volumes from MDisk 1, MDisk 2, and MDisk 3 in a storage pool.]
A volume is also known as a VDisk
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The system does not automatically present volumes to the host system. You must map each
volume to a particular host object to enable the volume to be accessed through the WWPNs or
iSCSI names that are associated with the host object. Volumes can be mapped only to a host that
is associated with the I/O Group in which it belongs.
An I/O group contains two Storwize V7000 node canisters and provides access to the volumes
defined for the I/O group. While the Storwize V7000 cluster can have multiple I/O groups the I/O
traffic for a particular volume is handled exclusively by the nodes of a single I/O group. This
facilitates horizontal scalability of the Storwize V7000 cluster.
Upon creation, a volume is automatically associated with one node of the I/O group. By default, when you create a second volume, it is associated with the other node by using a round-robin algorithm. The assigned node is known as the preferred node, which is the node through which the volume is normally accessed. Instead of using the round-robin algorithm, you can explicitly specify a preferred access node, which is the node through which you send I/O to the volume. Similar to LUN masking provided by storage systems, host servers can access only the volumes that have been assigned to them.
Uempty
Volumes are presented by the Storwize V7000 system to a host connected over a Fibre Channel or
Ethernet network. A volume essentially contains pointers to its assigned extents of a given storage
pool. The advantage with storage virtualization is that the host is “decoupled” from the underlying
storage so the virtualization appliance can move the extents around without impacting the host
system.
Volumes are mapped to the application server hosts in the SAN conceptually in the same manner
as SCSI LUNs are mapped to host ports from storage systems or controllers (also known as LUN
masking).
Uempty
Existing data
• Image volume extents are subsequently moved to other MDisks within the storage pool without losing access to data.
Creates a replica of the volume data (same size).
[Slide figure: Image mode volume VW_DATA (800 MB) mapped one-to-one to existing MDisk extents 5a through 5g, including a partial extent.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Image mode volumes are used to virtualize MDisks (existing LUNs) that contain data already. It is a one-to-one mapping. For example, an administrator can virtualize a LUN containing a Microsoft NTFS file system and gain the benefits of advanced functions and improved performance immediately, without any data movement.
Uempty
[Slide figure: Volumes V1-V4 presented from storage pools.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
In an IBM Storwize V7000 environment, which uses an active/active architecture, the I/O handling for a volume can be managed by both nodes of the I/O group. Therefore, servers that are connected through Fibre Channel must use multipath device drivers to be able to handle this capability.
Each node of the Storwize V7000 system has a copy of the system state data. MDisks and storage pools are system-wide resources available to all I/O groups in the system. Volumes, on the other hand, are owned and managed by the nodes within one I/O group. That I/O group is known as the volume's caching I/O group.
Each node canister in the control enclosure caches critical data and holds state information in volatile memory.
If power to a node canister fails, the node canister uses battery power to write the cache and state data to its boot drives, thus hardening the fast write cache data. This method is known as the Fire Hose Dump (FHD).
The V7000 provides 32 GB of memory for cache, with an optional 32 GB upgrade for use with Real-time Compression. To support the larger write cache, the V7000 node writes data from volatile memory across both boot drives, effectively doubling the rate at which data is written to disk. The dual boot drives contain a full installation of the Storwize V7000 code. The boot drives can also help increase reliability while doing a code upgrade on a V7000 node. During an upgrade, when a node shuts
Uempty
down, the hardened data are written to both the internal drives so that the node can survive even if
one of the internal drives fails.
Uempty
[Slide figure: Write I/O flow - the host write (1) goes to the preferred node (V7000 node1), the cache copy is mirrored to the alternative node (V7000 node2) (2), and the data is later destaged (3) to MDisk1, MDisk2, and MDisk3 (200 GB each); each node has a boot disk and cache.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
A volume is accessible through the ports of its caching I/O group. These ports of access can be
Fibre Channel ports for Fibre Channel hosts and Ethernet ports for iSCSI host. By default each host
is entitled to access volumes that are owned by all four I/O groups of a clustered system.
It is the multipath drivers (such as SDDDSM and SDDPCM), or the SCSI specifications for TPGS (Target Port Group Support) and ALUA (Asymmetric Logical Unit Access), that enable host I/Os to be driven to the volume's preferred node.
The visual illustrates how a host sends write I/O to a volume through its assigned preferred node. This is the node through which the volume is normally accessed. However, the distributed cache can be managed by both nodes of the caching I/O group.
1. The write I/O request (1) from a host is accessing volume V1.
2. The preferred node for V1 is Storwize V7000 node1 and SDD drives the I/O to the preferred
node (2). The write data is cached in Storwize V7000 node1 and a copy of the data is cached in
Storwize V7000 node2. A write status completion is then returned to the requesting host.
3. Some time later, cache management in Storwize V7000 node1 (the preferred node) will cause
the cached data to be destaged to the storage system (3) and the other Storwize V7000 node is
notified that the data has been destaged.
Uempty
The Storwize V7000 write cache is partitioned so that, if a back-end storage pool is under-performing, no impact is introduced to the other storage pools managed by the Storwize V7000.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Both Storwize V7000 nodes can act as failover nodes for their respective partner node within the
I/O Group. If a node failure occurs within an I/O group then the other node in the I/O group takes
over the I/O responsibilities of the failed node. Since the write cache is mirrored in both nodes, data
loss is prevented. If only one node exists in an I/O group (due to failures or maintenance) then the
surviving node accelerates the destaging of all modified data in the cache to minimize the exposure
to failure (0).
1. All I/O writes are processed in write-through mode.
2. A write request to volume V1 (1) is driven automatically by SDD to the alternative Storwize
V7000 node in the I/O group using the alternative node path (2).
3. The changed data is written to the cache and the target storage system (3) before the write
request is acknowledged as having completed. Cache can also be used for read operations.
During this time, the V7000 node batteries maintain internal power until the cache and cluster state
information is striped across both boot disk drives of each node. Each drive has half of the cache
contents.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Both Storwize V7000 nodes can act as failover nodes for their respective partner node within the
I/O Group. If a node failure occurs within an I/O group then the other node in the I/O group takes
over the I/O responsibilities of the failed node. Since the write cache is mirrored in both nodes, data
loss is prevented. If only one node exists in an I/O group (due to failures or maintenance) then the
surviving node accelerates the destaging of all modified data in the cache to minimize the exposure
to failure (0). All I/O writes are processed in write-through mode. A write request to volume V1 (1) is
driven automatically by SDD to the alternative Storwize V7000 node in the I/O group using the
alternative node path (2) and the changed data is written to the cache and the target storage
system (3) before the write request is acknowledged as having completed. Cache can also be used
for read operations. During this time, the V7000 node batteries maintain internal power until the
cache and cluster state information is striped across both boot disk drives of each node. Each drive
has half of the cache contents.
Uempty
[Slide callouts: Quick Volume Creation - automatically enabled by default for quick initialization formatting; Advanced Custom - based on user-defined customization.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Storwize V7000 management GUI provides presets, which are templates with supplied default
values that incorporate best practices for creating volumes. The presets are designed to minimize
the necessity of having to specify many parameters during object creation while providing the ability
to override the predefined attributes and values. Each volume preset relates to one or more of the three types of virtualization modes.
Storage pools are predefined by the storage administrator or the system administrator; all volumes are created from the unallocated extents that are available in the pool.
With the v7.6 release, the GUI includes a Quick Volume Creation option that fills fully allocated volumes with zeros as a background task while the volume is online. The Advanced Custom option provides an alternative means of creating volumes based on user-defined customization rather than taking the standard default settings for each of the options under Quick Volume Creation.
In the standard topology, which is a single-site configuration, you can create Basic volumes or Mirrored volumes. These volumes are automatically enabled by default for quick initialization formatting, which means that the fully allocated volume is filled with zeros, so the host reads zeros when it tries to read areas that have not yet been written.
Uempty
HyperSwap volume
The HyperSwap volume is a combination of the Master volume and the Auxiliary volume.
[Slide figure: A HyperSwap volume presenting UID1, composed of a Master volume and an Auxiliary volume that share UID1.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
You can create other forms of volumes, depending on the type of topology that is configured on
your system.
With the HyperSwap topology, which is a three-site high availability configuration, you can create basic volumes or HyperSwap volumes.
HyperSwap volumes create copies on separate sites for systems that are configured with HyperSwap topology. Data that is written to a HyperSwap volume is automatically sent to both copies so that either site can provide access to the volume if the other site becomes unavailable.
The Stretched topology, which is a three-site disaster resilient configuration, creates basic volumes or stretched volumes. This feature is not supported with the IBM Storwize V7000.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
IBM Spectrum Virtualize introduced five new CLI commands for administering volumes, but the GUI also continues to use the legacy commands for all volume administration.
The new volume commands:
• mkvolume
• mkimagevolume
• addvolumecopy
• rmvolumecopy
• rmvolume
The lsvdisk command has also been modified to include "volume_id", "volume_name", and "function" fields to easily identify the individual volumes that make up a HyperSwap volume.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The mkvolume command, as opposed to the mkvdisk command, creates a new empty volume using storage from existing storage pools. The type of volume created is determined by the system topology and the number of storage pools specified. The volume is always formatted (zeroed) when it is created; this includes HyperSwap volumes.
The mkimagevolume command creates a new image mode volume. This command can be used to import a volume, preserving existing data. It is implemented as a separate command to provide greater differentiation between the action of creating a new empty volume and that of creating a volume by importing data on an existing MDisk.
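A minimal sketch of the two commands, with hypothetical pool, MDisk, and volume names (parameter forms may vary by code level):
mkvolume -name basicvol01 -pool Pool0 -size 100 -unit gb     # new, empty, formatted volume
mkimagevolume -name importedvol01 -pool Pool0 -mdisk mdisk7  # import an existing MDisk, preserving its data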
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The addvolumecopy command adds a new copy to an existing volume. The new copy is always synchronized from the existing copy. For stretched and HyperSwap topology systems, this creates a highly available volume. It can be used across all volume topologies.
The rmvolumecopy command removes a copy of a volume, leaving the volume fully intact. It also converts a Mirrored, Stretched, or HyperSwap volume into a basic volume. This command also allows a copy to be identified simply by its site. The rmvolume command deletes the volume. For a HyperSwap volume this includes deleting the active-active relationship and the change volumes.
The -force parameter of rmvdiskcopy is replaced by individual override parameters, making it clearer to the user exactly what protection they are bypassing.
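A hedged sketch of the volume copy commands, using hypothetical object names (exact parameters depend on topology and code level):
addvolumecopy -pool Pool1 basicvol01   # add and synchronize a second copy in another pool
rmvolumecopy -pool Pool1 basicvol01    # remove the copy in Pool1, leaving the volume intact
rmvolume basicvol01                    # delete the volume (and, for HyperSwap, its relationship and change volumes)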
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
A Basic volume is the simplest form of volume. It consists of a single volume copy, made up of extents striped across all MDisks in a storage pool. It services I/O using read/write cache and is classified as fully allocated; therefore, it reports real capacity and virtual capacity as equal.
To create a basic volume, click the Create Volumes option and follow the procedural steps that are listed within the Create Volumes wizard. This simple wizard provides common options with which to create any type of volume specified. Multiple volumes can be created at the same time by using an automatic sequential numbering suffix. However, the wizard does not prompt you for a name for each volume that is created. Instead, the name that you use here becomes the prefix, and a number (starting at zero) is appended to this prefix as each volume is created.
We recommend using an appropriate naming convention for volumes to help you easily identify the associated host or group of hosts. Once all the characteristics of the Basic volume have been defined, it can be created, or created and mapped directly to a host.
The Quick Volume Creation menu also provides Capacity Savings features with the ability to alter the provisioning of a Basic or Mirrored volume into Thin-provisioned or Compressed.
A volume is also accessible through its accessible I/O groups. By default, the system automatically balances the load between the nodes. You can choose a preferred node to handle the caching for the I/O group or leave the default values for Storwize V7000 auto-balance.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Creating a Mirrored volume is similar to creating a Basic volume, except that with this option you are creating a volume with two identical copies. Each volume copy can belong to a different pool, and each copy has the same virtual capacity as the volume.
Mirrored volumes are discussed in detail in a later topic.
Uempty
[Slide figure: Extent distribution of the volume - 6 extents on MDisk1 and 5 extents on MDisk2.]
If you chose the Create option, the GUI proceeds by generating the mkvdisk command that incorporates the volume parameters specified in the previous panels. The volume-to-host mapping can be performed at a later date. Since a volume must be owned by an I/O group, the GUI has selected one by default. The other keywords and values are actually Storwize V7000 defaults that do not need to be explicitly coded. These include using the Storwize V7000 cache for read/write operations, creating one copy (that is, not mirrored, so the sync rate is not relevant), and assigning a virtualization type of striped. All volumes that are created are assigned an object ID based on the order of creation within the volume object category of this cluster.
The lsvdiskextent command requires a volume name or ID. It displays the extent distributions of
the volume across the MDisks providing extents.
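As an illustration (names and sizes hypothetical, parameters as commonly documented), the equivalent CLI actions are:
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 800 -unit mb -name VW_DATA   # striped, single-copy volume
lsvdiskextent VW_DATA
# Example output (abbreviated): one line per MDisk with the number of extents it contributes, for example
# id,number_extents
# 1,6
# 2,5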
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Before a volume can be mapped directly to a specified host system, a host must be predefined on
the system. This is also indicated by the activated Create and Map to Host button. The GUI
generates a mkvdiskhostmap command for each volume being mapped to the specified host. An
alternative way to map volumes to a host is to right-click on the volume(s) and select from the
Actions menu the option to Map to Host.
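For example (host and volume names hypothetical), the generated mapping command has this general form:
mkvdiskhostmap -host VB1-WIN -scsi 0 VW_DATA   # map volume VW_DATA to host VB1-WIN as SCSI ID 0
The -scsi parameter is optional; if it is omitted, the system assigns the next available SCSI ID.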
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Quick Volume Creation volume presets are automatically formatted through the quick
initialization process. This process makes fully allocated volumes available for use immediately.
Quick initialization requires a small amount of I/O to complete and limits the number of volumes that
can be initialized at the same time. Some volume actions such as moving, expanding, shrinking, or
adding a volume copy are disabled when the specified volume is initializing. Those actions are
available after the initialization process completes.
The quick initialization process can be disabled in circumstances where it is not necessary using
the Advanced Custom preset. For example, if the volume is the target of a Copy Services function,
the Copy Services operation formats the volume. The quick initialization process can also be
disabled for performance testing so that the measurements of the raw system capabilities can take
place without waiting for the process to complete.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Advanced Custom volume creation provides an alternative method of defining the Capacity Savings options (that is, Thin-provisioning and/or Compression), but it also expands on the base-level default options available for Basic and Mirrored volumes. A Custom volume can be customized with respect to mirror sync rate, cache mode, and fast-format.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Advanced Custom option can be used to define thin-provisioned volumes or compressed volumes. These volumes are very similar, as both volume types behave as though they were fully allocated. However, thin volumes use a grain size to grow their real capacity; this attribute is not supported on compressed volumes. The Advanced Custom option allows volumes to be tailored to the specifics of the client's environment.
Thin-provisioned and compressed volumes are discussed in detail in later topics.
Uempty
General tab
• A custom volume is enabled by default to be a format volume. This
feature can be disabled by removing the check mark.
• Cache mode indicates if the cache contains changed data for the
volume.
ƒ By default, the cache mode for all volumes are read and write I/O operations.
• OpenVMS UDID (Unit Device Identifier) is used by Open VMS host to
identify the volume.
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Custom volume is enabled by default to format the new volume before use (formatting writes zeros
to the volume before it can be used. That is, it writes zeros to its MDisk extents). This feature can
be disabled by simply removing the check mark.
All read and write I/O operations that are performed by the volume are stored in cache. This is the
default cache mode for all volumes.
OpenVMS user-defined identifier (UDID) requirement applies only for OpenVMS system.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
You can view volumes collectively from the Volumes > Volumes view. Observe the entry for the
newly created volume with an object ID of 0. Each volume is assigned a Unique Identifier (UID)
value. All the volumes from the same Storwize V7000 cluster have the same prefix and only the last
couple of bytes vary from volume to volume. When a volume is mapped to a host the Host
Mappings column confirms mapping.
You can also view the entry that describes the mapping between the host and the volume using the
Hosts > Host Mappings menu option. Observe that the selected host displays the mapped
volumes, its UID, and caching I/O group. You can also view assigned volumes by using the
Volumes by Host menu to select an individual host in the Host Filter list.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Modify Host Mappings window allows you to map the newly created volumes to the selected host or to unmap preexisting volumes. The GUI generates a rmvdiskhostmap command to unmap a volume from a host.
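A minimal sketch (hypothetical names) of the unmap command that the GUI issues:
rmvdiskhostmap -host VB1-WIN VW_DATA   # remove the mapping between host VB1-WIN and volume VW_DATA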
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The volume Actions menu provides similar options with which you can manage volume resources, such as modifying volume mappings, unmapping volumes, renaming volumes, or creating new volumes. In addition, it offers resources to reduce the complexity of moving data transparently to the host.
Uempty
Volume properties
• Volume UID is the
equivalent of a hardware
volume serial number.
• Caching I/O Group:
ƒ Specifies the I/O group to
which the volume belongs
• Accessible I/O Group:
ƒ Specifies the I/O groups the
volume can be moved to
• Preferred Node:
ƒ Specifies the ID of the
preferred node for the volume
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The volume properties option provides an overview of the selected volume as seen within the GUI.
You can expand the view by selecting View more details.
All volumes are created with a Volume ID which is assigned by the Storwize V7000 at volume
creation. The Volume UID is the equivalent of a hardware volume serial number. This UID is
transmitted to the host OS and on some platforms it can be displayed by host-based commands.
Uempty
Column display can be modified for viewing
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The Host Maps tab shows host mapping information including the LUN number or SCSI ID as seen
by the host.
The Member MDisks tab shows the MDisks supplying extents to this volume. The extents are
spread across all the MDisks of the pool.
Uempty
You can also use the CLI lsvdisk command to view the volume details. In this example, the
lsvdisk command is specified with an object name or object ID which provides much of the
detailed information that is displayed for the volume by the GUI.
The formatted option indicates whether, at creation, the entire volume capacity was written with zeros (so that residual data is overwritten). Volume formatting is not invoked by default.
The throttling option limits the amount of I/O that is accepted for the volume either with IOPS or
MB/s. Throttling is not set by default.
Both the format and throttling options are not typically used. This is why the GUI does not provide
an interface for these options.
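As a sketch (output heavily abbreviated, values hypothetical), the detailed view is obtained by naming the volume:
lsvdisk VW_DATA
# Selected fields of interest include, for example:
# preferred_node_id 1
# formatted         no
# throttling        0
# vdisk_UID         600507680...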
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Figure 6-68. View host mappings and pools details using CLI
To view the host details, the lshostvdiskmap command output displays the host objects and the volumes that are mapped to these host objects. It can be filtered to a specific host by specifying the host name or ID.
The CLI tends to favor object IDs instead of object names. The GUI provides both to be more user
friendly.
Extensive filtering (-filtervalue) is available with the CLI as the number of objects within some
categories will grow larger over time. A common usage of the lsmdisk command is to filter by the
pool name (mdisk_grp_name) to obtain a list of MDisks within a given pool.
The -delim parameter reduces the width of the resulting output by replacing blank spaces
between columns with a delimiter (a comma, for example). When the CLI displays a summary list of
objects each entry generally begins with the object ID followed by the object name of the object.
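For example (host, pool, and delimiter choices are illustrative):
lshostvdiskmap -delim , VB1-WIN                     # mappings for one host only
lsmdisk -filtervalue mdisk_grp_name=Pool0 -delim ,  # MDisks belonging to pool Pool0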
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The size of a volume can be expanded to present a larger capacity disk to the host operating
system. You can increase a volume size with a few clicks using management GUI. Increasing the
size of the volume is done without interruptions to the user availability of the system. However, you
must ensure that the host operating system provides support to recognize that a volume has
increased in size.
For example:
• AIX 5L V5.2 and higher, by issuing chvg -g vgname
• Windows Server 2008 and Windows Server 2012, for basic and dynamic disks
• Windows Server 2003, for basic disks, and with Microsoft hot fix (Q327020) for dynamic disks
The command that is generated by the GUI is expandvdisksize with the capacity amount to be
added to the existing volume identified. When a volume is expanded its virtualization type becomes
striped even if it was previously defined as sequential. Image type volumes cannot be expanded.
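For example (volume name and size hypothetical), expanding a volume by 10 GB and then growing the AIX volume group on the host:
expandvdisksize -size 10 -unit gb VW_DATA   # on the Storwize V7000
chvg -g datavg                              # on the AIX host, so the larger disk is recognized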
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The method that Storwize V7000 uses to shrink a volume is to remove the required number of
extents from the end of the volume. Depending on where the data is on the volume, this action can
be data destructive. Therefore, the recommendation is that the volume should not be in-use by a
host. The shrinking of a volume using the Storwize V7000 is similar to expanding volume capacity.
Ensure that the operating system supports shrinking (natively or by using third-party tools) before
you use this function. In addition, it is best practice to always have a consistent backup before you
attempt to shrink volume.
The shrinkvdisksize command that is generated by the GUI decreases the size of the volume by
the specified size. This interface to reduce the size of a volume is not intended for in-use volumes
that are mapped to a host. It is used for volumes whose content will be overlaid after the size
reduction, such as being a FlashCopy target volume where the source volume has an esoteric size
that needs to be matched.
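A minimal sketch (hypothetical volume and size) of the command the GUI generates:
shrinkvdisksize -size 10 -unit gb VW_DATA   # removes extents from the end of the volume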
Uempty
[Slide figure: MDisks virtualized from external storage systems DS3K0 and DS3K1.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
If an MDisk is removed from a storage pool, then all of the allocated extents from the removed MDisk are redistributed to the other MDisks in the pool.
The rmmdisk command that is generated by the GUI contains the -force parameter to remove the
MDisk from its current pool. The -force specification enables the removal of the MDisk by
redistributing the allocated extents of this MDisk to other MDisks in the pool.
Examine the output of the two progressively issued lsvdiskextent 0 commands to view the extent
distribution for a certain volume. The number of extents in use by the volume has decreased. In
order to remove the MDisk, all the extents of this volume need to be migrated from this MDisk to the
remaining MDisks in the pool.
The 22s value in the Running Tasks bubble indicates that the background migration task started
22 seconds ago.
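As an illustration (MDisk and pool names hypothetical):
lsvdiskextent 0                      # note the extents the volume has on the MDisk to be removed
rmmdisk -mdisk mdisk5 -force Pool0   # migrate its extents elsewhere in the pool, then remove it
lsvdiskextent 0                      # re-issue later to watch the extent counts change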
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
To prevent active volumes or host mappings from being deleted inadvertently, the system supports
a global setting that prevents these objects from being deleted if the system detects that they have
had recent I/O activity. When you delete a volume the system checks to verify whether it is part of a
host mapping, FlashCopy mapping, or remote-copy relationship. For this example, the threshold is set to 30 minutes. Therefore, if the volume you want to delete has received any I/O within the preceding 30 minutes, you will not be able to delete the volume or unmap it from the host.
In these cases, the system fails to delete the volume and the user has to use the -force flag to
override that failure. Using the -force parameter can lead to an unintentional deletion of volumes
that are still active. Active means that the system has detected recent I/O activity to the volume
from any host.
When vdisk protection is enabled, you are protected from unintentionally deleting a volume, even with the -force parameter added, within whatever time period you have decided on. You can configure the "idle time" from 15 to 1440 minutes. If the last I/O was within the specified time period,
then the rmvdisk command fails and the user has to either wait until the volume really is
considered idle or disable the system setting, delete/unmap the volume, and re-enable the setting.
That is, the volume has to have been idle for that long before you are allowed to delete it. You can
of course force the deletion by disabling the feature.
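A hedged sketch of enabling volume protection system-wide; the parameter names shown are an assumption and should be verified against the CLI reference for your code level:
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 30   # assumed flags; block deletion of volumes with I/O in the last 30 minutes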
Uempty
[Slide figure: An encrypted volume created from encrypted MDisk 1, MDisk 2, and MDisk 3.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
IBM Data Protection with Encryption enables software encryption for all volumes created on Storwize V7000 Gen2 systems using encrypted internal storage pools or external storage systems, including downstream Storwize systems.
For new volumes, the simplest way to create an encrypted volume is to create it in an encrypted pool. All volumes created in an encrypted pool take on the encrypted attribute. The encryption attribute is independent of the volume class created.
Uempty
Encrypting volumes
• Unencrypted volumes can be converted to encrypted volumes using the Volume Action of Migrate to Another Pool.
ƒ Target pool must be a software encrypted parent pool.
- It is not possible to use the migrate option between parent and child pools.
[Slide callouts: Encryption status column; migrate to the target encrypted pool.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
With encryption enabled, you can display the encryption status of a volume from the GUI Volumes
> Volumes view. In order to do this you must customize the column view to display the encryption
key status.
An unencrypted volume can be encrypted using the GUI's Migrate to Another Pool option. This procedure executes the migratevdisk command if the target pool is encrypted.
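For example (hypothetical volume and pool names), the migration that the GUI drives is equivalent to:
migratevdisk -vdisk VW_DATA -mdiskgrp EncryptedPool0   # data lands on encrypted extents in the target pool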
Uempty
[Slide figure: A parent pool with encrypted MDisk 1, MDisk 2, and MDisk 3 providing an encrypted volume; a child pool can take on different encryption attributes.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Software encrypted pools will have a unique per-pool key, and so may their Child pools. These may
be different to the parent pool, such as:
• If a parent pool has no encryption enabled, a child pool can still enable encryption.
• If a parent and a child pool have a key then the child pool key will be used for child pool
volumes.
Child pools are allocated extents from their parents, but it is possible for the extents in the child pool
to take on different encryption attributes
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
This topic discusses the process by which a host system accesses its assigned storage volumes.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
From the Windows host perspective, the Storwize V7000 volumes are standard SCSI disks. All
volumes that are mapped to either a Windows host or an iSCSI host are discovered and displayed
collectively within the Windows Disk Management interface. Windows presents volumes as
unallocated disks that must be initialized and partitioned with a logical drive of the size you
designate by using the new volume feature selected.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The SDDDSM datapath query device command can be used to correlate Storwize V7000
volumes based on the serial number that is shown for the disk (which is the Storwize V7000 UID for
the volume).
Depending on the host type, path numbers are displayed with each disk or Storwize V7000 volume
which validates the four-path zoning objective. Even though these volumes are owned by different
I/O groups this information is not obvious from the SDD output.
The -l parameter is appended to the datapath query device command and causes SDDDSM to
flag paths to the alternate (non-preferred node) with an asterisk. The 0 and 1 value of the
command identifies the SDD device number range of the Windows disks to be displayed.
After rescanning disks, writing a signature on the disk, and creating a partition, the Storwize V7000
mapped volume is used like any other drive in Windows.
For example, the SDDDSM output for the datapath query device command identifies a host
device name (Disk4). Based on the serial number that is displayed, disk4 can be correlated to the
Storwize V7000 VW_CPVOL (Child Pool volume) volume UID value.
The paths that are displayed by this output can be used to validate zoning (zoned for four paths
between the host and Storwize V7000 ports). SDDDSM manages path selection for I/Os to the
volume and drives I/O to the paths of the volume’s preferred node.
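For example, combining the options described above (the device number range 0-1 is illustrative):
datapath query device 0 1 -l
# The output lists the selected SDD devices with their SERIAL (the volume UID) and one line per path;
# paths to the non-preferred node are flagged with an asterisk.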
Uempty
[Slide figure: Windows Device Manager views - FC-attached Windows volumes and iSCSI volumes shown as 2145 Multi-Path Disk Devices.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The configured paths or sessions between the host and the Storwize V7000 nodes are reflected in
the Windows Device Manager for the device. Based on the four-path zoning, each volume has
been reported four times during host HBA SAN device discovery. This is seen in the Windows
Device Manager view for Disk drives.
The SDDDSM (Windows MPIO multipath driver) recognizes and supports the 2145 device type.
SDDDSM determines that each of the 2145 SCSI Disk Device instances correlates to one Storwize
V7000 volume and thus creates a 2145 Multi-Path Disk Device to represent that volume. SDDDSM
also manages the path selection to the four paths of each volume.
For iSCSI host volumes, Windows Device Manager lists one 2145 SCSI disk device reported by
the four paths between the host and the Storwize V7000 node. The MPIO support on Windows
recognizes that these instances of 2145 SCSI disks actually represent one 2145 LUN and manages
these as one 2145 multi-path device.
Four instances of the Storwize V7000 volume are reported to the host through the four configured
paths. Windows MPIO manages the reported instances as one disk with four paths.
Uempty
[Slide figure: AIX host VB1-AIX mapped to volumes VB1-AIX1 (ID 2, UID ...03, SCSI ID 0) and VB1-AIX2 (ID 6, UID ...07, SCSI ID 1) in io_grp0.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
For AIX hosts, historically, SDD configured vpath devices to provide MPIO to hdisk devices. Therefore, you run lsdev -Cc disk to query the Object Data Manager (ODM), which presents the LUNs as hdisks. When a new LUN has been assigned to the AIX host, the cfgmgr command is executed to pick up the new V7000 disk. The LUNs are represented as hdisk0, hdisk1, and hdisk2. Note that hdisk0 is the root volume group (rootvg) for the AIX OS.
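For example, on the AIX host (device names will differ):
cfgmgr            # discover the newly mapped V7000 LUNs
lsdev -Cc disk    # list the hdisk devices, for example hdisk0 (rootvg), hdisk1, hdisk2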
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Once the hdisks are discovered the mkvg command is used to create a Volume Group (VG) with
the newly configured hdisks. The lspv output shows the existing Physical Volume (PV) hdisks with
the new VG label on each of the hdisks that were included in the VGs. The lsvg output shows the
existing volume group (VG).
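A brief sketch with a hypothetical volume group name:
mkvg -y datavg hdisk1 hdisk2   # create volume group datavg on the new hdisks
lspv                           # physical volumes now show the datavg label
lsvg                           # list the defined volume groups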
Uempty
To confirm that the new disks are discovered and that the paths have been configured correctly, the
SDDPCM pcmpath query device command is used. The output of this command is the same
structure as the SDDDSM. The pcmpath query device command validates the I/O distribution
across the paths of the preferred node of the volume (or hdisk). SDDPCM identifies eight paths for
each hdisk because this host is zoned for eight paths access in the example. Currently, all eight
paths have a state of CLOSE because the host access infrastructure still needs to be defined.
An asterisk on a path indicates it is an alternate path.
The SERIAL: number of the AIX hdisk correlates to the V7000 UID value of the volume.
Uempty
CuPath:
  name = "hdisk1"
  parent = "fscsi0"
  connection = "500507680130f0fb,0"
  alias = ""
  path_status = 1
  path_id = 2
CuPath:
  name = "hdisk1"
  parent = "fscsi1"
  connection = "500507680140f0fb,0"
  alias = ""
  path_status = 1
  path_id = 6
CuPath:
  name = "hdisk1"
  parent = "fscsi0"
  connection = "500507680120f0fb,0"
  alias = ""
  path_status = 1
  path_id = 3
CuPath:
  name = "hdisk1"
  parent = "fscsi1"
  connection = "500507680110f0fb,0"
  alias = ""
  path_status = 1
  path_id = 7
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The AIX Object Data Manager (ODM) contains pathing or connectivity data for the hdisk. The AIX
odmget command can be used to obtain detailed information regarding the paths of a given hdisk.
The WWPN is shown as the connection (remember the Q value in the WWPN). Thus, the connection identifies the V7000 node and port that is represented by this path. The path_id values of 0, 1, 4, and 5 represent the four ports of V7000 node ID 1 (the volume's preferred node).
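For example, to extract only the path stanzas for one hdisk (the object class and query shown match the output above):
odmget -q "name=hdisk1" CuPath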
Uempty
[Slide figure: AIX host VB1-AIX with adapters fscsi0 and fscsi1 (switch ports D20A and D20B) using Path0, Path1, Path4, and Path5 through ports 11-14 to NODE1 (F072) for volume VB1-AIX1 (ID 2, UID ...03, SCSI ID 0) in io_grp0.]
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
From the previous two command output sets, an understanding of the path configuration can be
obtained and host zoning can be validated. Under normal circumstances the SDD distributes I/O
requests across these four paths to the volume’s preferred node.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
This topic discusses the non-disruptive movement of volumes between the I/O groups.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Moving a volume between I/O groups is considered a migration task. Hosts mapped to the volume must support nondisruptive volume movement (NDVM). Modifying the I/O group that services the volume can be done concurrently with I/O operations if the host supports nondisruptive volume move. However, the cached data that is held within the system must first be written to disk before the allocation of the volume can be changed. Since paths to the new I/O group need to be discovered and managed, multipath driver support is critical for nondisruptive volume move between I/O groups. Rescanning at the host level ensures that the multipathing driver is notified that the allocation of the preferred node has changed and that the ports by which the volume is accessed have changed. This can be done in a situation where one pair of nodes has become over-utilized.
If there are any host mappings for the volume, the hosts must be members of the target I/O group
or the migration fails. Keep in mind that the commands and actions on the host vary depending on
the type of host and the connection method used. These steps must be completed on all hosts to
which the selected volumes are currently mapped.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Support information for the Storwize V7000 is based on code level. One easy way to locate its web
page is to perform a web search using the keywords IBM Storwize V7000 V7.6 supported hardware
list.
Within the V7.6 support page, locate and click the link to Non-Disruptive Volume Move (NDVM).
Host system multipath driver support information is also found on this web page.
Uempty
A volume in a Metro/Global Mirror relationship cannot change its caching I/O group.
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The NDVM link provides a list of host environments that support nondisruptively moving a volume between I/O groups. The Multipathing column identifies the multipath driver required. After the move, paths to the prior I/O group might not be deleted until a host reboot occurs.
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
The visual shows some additional notes and a summary on Non-Disruptive Volume Move (NDVM).
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
You can also use the management GUI to move volumes between I/O groups non-disruptively. In
the management GUI, select Volumes > Volumes. On the Volumes panel, select the volume that
you want to move and select Modify I/O Group or select Actions > Modify I/O Group. The wizard
guides you through all the steps that are necessary for moving a volume to another I/O group,
including any changes to hosts that are required.
A volume is owned by a caching I/O group as active I/O data of the volume is cached in the nodes
of this I/O group. If a volume is not assigned to a host, changing its I/O group is simple, as none of
its data is cached yet.
Make sure you create paths to I/O groups on the host system. After the system has successfully
added the new I/O group to the volume's access set and you have moved selected volumes to
another I/O group, detect the new paths to the volumes on the host.
The GUI generates the following commands to move a volume to a new caching I/O group (see the sketch after this list):
• The movevdisk -iogrp command enables the caching I/O group of the volume to be changed.
The -node parameter allows the preferred node of the volume to be explicitly specified.
Otherwise, the system load balances between the two nodes of the specified I/O group.
Uempty
• The addvdiskaccess -iogrp command adds the specified I/O group to the volume’s access
list. The volume is accessible from the ports of both I/O groups. However, the volume’s data is
only cached in its new caching I/O group.
• The rmvdiskaccess -iogrp command removes the access to the volume from the ports of the
specified I/O group. The volume is now only accessible through the ports of its newly assigned
caching I/O group.
The chvdisk -iogrp option is no longer available beginning with v6.4.0.
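A hedged sketch of the three commands, with hypothetical I/O group, node, and volume names (the GUI sequences them, together with host path rediscovery, for you):
movevdisk -iogrp io_grp1 -node node3 VW_DATA   # change the caching I/O group and, optionally, the preferred node
addvdiskaccess -iogrp io_grp1 VW_DATA          # make the volume reachable through the new I/O group's ports
rmvdiskaccess -iogrp io_grp0 VW_DATA           # after host path rediscovery, drop access through the old I/O group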
Uempty
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
With the release of the v7.3 code, changing the preferred node in the I/O group is a simple task. Previously, this task could only be done by using Non-Disruptive Volume Move (NDVM) between I/O groups. Now you can perform the same task by using the CLI movevdisk command to move the preferred node of a volume either within the same caching I/O group or to another caching I/O group.
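For example (node and volume names hypothetical), moving only the preferred node within the same caching I/O group:
movevdisk -node node2 VW_DATA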
Uempty
Keywords
• Storwize V7000 GUI
• Command-line interface (CLI)
• I/O load balancing
• Fabric zoning
• Host object
• FCP Host
• iSCSI host
• SCSI LUNs
• Internal disks
• External storage
• Subsystem Device Driver Device Specific Module (SDDDSM)
• Subsystem Device Driver Path Control Module (SDDPCM)
• Disk Management
• Device Manager
• Virtualization
• Cluster system
• Storage pool
• MDisks
• Extents
• Thin-provisioning
• Volume mirroring
• Worldwide node name (WWNN)
• Worldwide port name (WWPN)
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Review questions (1 of 2)
1. True or False: Zoning is used to control the number of paths
between host servers and the Storwize V7000.
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Review answers (1 of 2)
1. True or False: Zoning is used to control the number of paths
between host servers and the Storwize V7000.
The answer is true.
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Review questions (2 of 2)
3. True or False: A multipath driver is needed when multiple
paths exist between a host server and the Storwize V7000
system.
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Review answers (2 of 2)
3. True or False: A multipath driver is needed when multiple
paths exist between a host server and the Storwize V7000
system.
The answer is true.
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Unit summary
• Summarize host system functions in a Storwize V7000 system environment
• Differentiate the configuration procedures required to connect an FCP host versus an iSCSI host
• Recall the configuration procedures required to define volumes to a host
• Differentiate between a volume’s caching I/O group and accessible I/O
groups
• Identify subsystem device driver (SDD) commands to monitor device
path configuration
• Perform non-disruptive volume movement from one caching I/O group
to another
Storwize V7000 host to volume allocation © Copyright IBM Corporation 2012, 2016
Uempty
Overview
This unit discusses the IBM Spectrum Virtualize advanced software functions designed to deliver storage efficiency and optimize storage asset investments. The topics include Easy Tier optimization of flash storage, volume capacity savings using thin-provisioned virtualization, and storage capacity utilization efficiency achieved with Real-time Compression (RtC).
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Uempty
Unit objectives
• Recognize IBM Storage System Easy Tier settings and statuses at the
storage pool and volume levels
• Differentiate among fully allocated, thin-provisioned, and compressed
volumes in terms of storage capacity allocation and consumption
• Recall steps to create thin-provisioned volumes and monitor volume
capacity utilization of auto expand volumes
• Categorize Storwize V7000 hardware resources required for Real-time
Compression (RtC)
Uempty
• Thin provisioning
• Comprestimator utility
Uempty
[Slide figure: Node I/O stack - SCSI Target, Forwarding, Upper Cache, Replication, FlashCopy, Mirroring, Thin Provisioning, Lower Cache, Forwarding. Callouts: thin-provisioned and mirrored volumes are implemented below the Upper Cache, so neither the host nor Copy Services are aware of these special volumes; Compression is implemented within the Thin Provisioning layer of the node I/O stack.]
IBM Storwize V7000 utilizes the same software architecture as the IBM SAN Volume Controller to
implement Volume Mirroring, Thin-Provisioning, and Easy Tier.
Both thin-provisioned and mirrored volumes are implemented in the I/O stack below the upper cache and Copy Services. Neither the host application servers nor the Copy Services functions are aware of these types of special volumes; to the hosts they are seen as normally created volumes.
For compressed volumes, the host servers and Copy Services operate with uncompressed data. Compression occurs on the fly in the Thin Provisioning layer, so physical storage is consumed only by compressed data.
Easy Tier is designed to reduce the I/O latency for hot spots; it does not, however, replace storage cache. Both methods solve a similar access-latency workload problem, but each method weighs differently in its algorithmic construction, which is based on locality of reference, recency, and frequency. Therefore, Easy Tier monitors I/O performance from the device end (after cache), where it picks up the performance issues that cache cannot solve. The placement of these two methods balances the overall storage system performance.
Uempty
• Thin provisioning
• Comprestimator utility
This topic introduces the enhanced features of Easy Tier 3rd Generation.
Uempty
IBM System Storage Easy Tier is third-generation software with a built-in dynamic data relocation feature. Easy Tier automatically and non-disruptively enables automated sub-volume data placement across different storage tiers, or within the same tier, to intelligently align the system with current workload requirements and to optimize the usage of SSDs or flash arrays.
IBM Storage Tier Advisor tool (STAT) is a Windows console application that analyzes heat data files
that are produced by Easy Tier and produces a graphical display of the amount of “hot” data per
volume (with predictions about how additional Flash or SSD capacity could benefit the performance
for the system) and per storage pool. STAT is available at no additional cost.
Uempty
IBM Storwize V7000 implements the Easy Tier enterprise storage functions, which were originally available only on the IBM DS8000 and IBM XIV enterprise-class storage systems. IBM Easy Tier is now available on the Storwize V7000, which allows host-transparent movement of data among the internal and external storage subsystem resources. Easy Tier is a no-charge feature that automates the placement of data among different storage tiers.
Uempty
IBM Easy Tier is a function that responds to the presence of flash drives in a storage pool that also contains hard disk drives (HDDs). The system automatically and non-disruptively moves frequently
accessed data from HDD MDisks to flash drive MDisks, thus placing such data in a faster tier of
storage. With the Easy Tier technology, clients can improve performance at lower costs through
more efficient use of flash.
The concept of Easy Tier is to transparently move data up and down unnoticed from host and user
point of view. Therefore, Easy Tier eliminates manual intervention where you assign highly active
data on volumes to faster responding storage. In this dynamically tiered environment, data
movement is seamless to the host application regardless of the storage tier in which the data
belongs. Manual controls exist so that you can change the default behavior, for example, such as
turning off Easy Tier on pools that have any combinations of the three types of MDisks.
Easy Tier migration can be performed on the internal flash disks within Storwize V7000 storage
enclosure, or to external storage systems that are virtualized by Storwize V7000 control enclosure.
Uempty
• Each of these three types of storage pool utilizes Easy Tier to optimize storage performance:
ƒ Pools with a single tier of storage and with multiple managed disks are optimized so that each managed disk is equally loaded.
ƒ Pools with two or more tiers of storage also ensure that the data is stored on the most appropriate tier of storage.
[Slide table: Automatic storage hierarchy - supported tier combinations of Tier 0 (Flash), Tier 1 (ENT), and Tier 2 (NL), for example NONE/ENT/NL and NONE/NONE/NL.]
Spectrum Virtualize advanced features © Copyright IBM Corporation 2012, 2016
The key benefit of Easy Tier 3rd generation software versus the older versions is that it allows tiering
between three tiers (Flash, Enterprise, and Nearline), which helps increase the performance of the
system. This table shows the naming convention and all supported combinations of storage tiering used by Easy Tier. Tier 0 is identified as Flash, which represents solid-state drives, flash drives, or flash storage that is being virtualized. Hard disk drives are separated into two tiers: Enterprise 15 K and 10 K RPM SAS drives are both classified as Tier 1 (ENT), and Nearline (NL) 7.2 K RPM drives are classified as Tier 2.
Uempty
[Slide figure: Easy Tier modes - Evaluation or measurement only; Automatic Data Placement or extent migration; Storage Pool Balancing; Easy Tier Acceleration; OFF.]
Uempty
[Slide figure: Smart monitoring of application volumes (Exchange, DB2, Warehouse) on a 10 GB volume - four extents identified as hot, candidates for the Flash tier.]
Spectrum Virtualize advanced features © Copyright IBM Corporation 2012, 2016
Easy Tier must be enabled on non-hybrid pools to collect data. When the Easy Tier evaluation mode
is enabled for a storage pool with a single tier of storage, Easy Tier collects usage statistics for all
the volumes in the pool, regardless of the form factor. The Storwize V7000 monitors the storage
use at the volume extent level. Easy Tier constantly gathers and analyzes monitoring statistics to
derive moving averages for the past 24 hours.
Volumes are not monitored when the easytier attribute of a storage pool is set to off or inactive with
a single tier of storage. You can enable Easy Tier evaluation mode for a storage pool with a single
tier of storage by setting the easytier attribute of the storage pool to on.
If you turn on Easy Tier in a Single Tiered Storage pool, it runs in evaluation mode. This means it
measures the I/O activity for all extents. A statistic summary file is created and can be off-loaded
and analyzed with the IBM Storage Tier Advisory Tool (STAT). This will provide an understanding
about the benefits for your workload if you were to add Flash/SSDs to your pool, prior to any
hardware acquisition.
IBM Easy Tier can be enabled on a volume basis to monitor the I/O activity and latency of the extents over a 24-hour period. Because this type of volume data migration works at the extent level, it is often referred to as sub-LUN migration.
Uempty
[Slide figure: Automatic data placement - hot extents migrate up to Flash and cold extents migrate down to HDD, in 1024 MB extents, on a 10 GB volume.]
Spectrum Virtualize advanced features © Copyright IBM Corporation 2012, 2016
Automatic data placement is enabled by default once multiple tiers are placed in a pool. This process allows I/O monitoring to be done for all volumes, whether or not the volume is a candidate for automatic data placement. Once automatic data placement is enabled, and if there is sufficient activity to warrant relocation, extents begin to be relocated within a day after enablement. This sub-volume extent movement is transparent to host servers and applications.
For a single level storage pool and for the volumes within that pool, Easy Tier creates a migration
report every 24 hours on the number of extents it would move if the pool was a multi-tiered pool.
Easy Tier statistics measurement is enabled. Using Easy Tier can make it more appropriate to use
smaller storage pool extent sizes.
A statistic summary file or ‘heat’ file generated by Easy Tier can be offloaded for input to the IBM
Storage Tier Advisor Tool (STAT). This tool produces reports on the amount of extents moved to
Flash/SSD-based MDisks and predictions of performance improvements that could be gained if
more Flash/SSD capacity is available.
Uempty
When growing a storage pool by adding more storage to it, Storwize V7000 software can restripe
the system data in pools of storage without having to implement any manual or scripting steps. This
process is called Automated Storage Pool Balancing. Although Automated Storage Pool Balancing
can work in conjunction with Easy Tier, it operates independently and does not require an Easy Tier
license. This helps grow storage environments with greater ease while retaining the performance
benefits that come from striping the data across the disk systems in a storage pool.
Uempty
Automated Storage Pool Balancing uses XML files that are embedded in the software code. The XML files use stanzas to record the characteristics of the internal drives, such as the RAID levels that are built, the width of the array, and the drive types and sizes used in the array, to determine MDisk thresholds. External virtualized LUNs are assessed based on their controller.
During the Automated Storage Pool Balancing process, the system assesses the extents that are written in the pool and, based on the drive stanzas and their IOPS capabilities, data is automatically restriped equally across all MDisks within the pool. In this case, you can have a single-tier pool, or mix MDisks of different drive types and capacities in the same pool. This is only a performance rebalance, not an extent-count rebalance.
Automated Storage Pool Balancing can be disabled at the pool level.
It is also possible to change two more advanced Easy Tier parameters: Easy Tier acceleration
and MDisk Easy Tier load.
Easy Tier acceleration allows administrators to modify the Easy Tier migration rate. Turning this setting on
makes Easy Tier data migration move extents up to four times faster than the default setting.
In accelerate mode Easy Tier can move up to 48 GiB per 5 minutes, while in normal mode it moves
up to 12 GiB. Enabling Easy Tier acceleration is advised only during periods of low system activity.
The two most probable use cases for acceleration are:
• When adding new capacity to the pool, acceleration lets Easy Tier quickly spread existing
volumes onto the new MDisks.
• When migrating volumes between storage pools and the target storage pool has more tiers than
the source storage pool, acceleration lets Easy Tier quickly promote or demote extents in the target pool.
This is a system-wide setting and is disabled by default. The setting can be changed online, without
any impact on host or data availability. To turn Easy Tier acceleration mode on or off, use the
chsystem command.
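As a hedged illustration only (the parameter name is as documented for recent code levels; confirm it with help chsystem on your system), the acceleration mode could be toggled from the CLI as follows:
  svctask chsystem -easytieracceleration on
  svctask chsystem -easytieracceleration off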
The second setting is called MDisk Easy Tier load. This setting is applied on a per-MDisk basis and indicates
how much load Easy Tier can put on the particular MDisk. There are five different values that can
be set for each MDisk: default, low, medium, high, and very high.
The system uses a default setting based on the storage tier of the presented MDisks: flash,
enterprise, or nearline. For internal disk drives the tier is known, but for external MDisks the
tier should be changed by the user to align it with the underlying storage.
Change the default setting to any other value only when you are certain that a particular MDisk is
underutilized and can handle more load, or that the MDisk is overutilized and the load should be lowered.
Change this setting to very high only for SSD and flash MDisks.
Each of these settings should be used with caution because changing the default values can
impact system performance.
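For illustration, a minimal sketch of changing the load setting from the CLI, where mdisk5 is a hypothetical MDisk name (confirm the accepted values with help chmdisk on your code level):
  svctask chmdisk -easytierload high mdisk5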
When enabled, Easy Tier determines the right storage media for a given extent based on the extent
heat and resource utilization. Easy Tier uses the following extent migration types to perform actions
between the three different storage tiers.
• Promote
▪ Moves the relevant hot extents to a higher performing tier
• Swap
▪ Exchanges a cold extent in an upper tier with a hot extent in a lower tier
• Warm Demote
▪ Prevents performance overload of a tier by demoting a warm extent to the lower tier
▪ This action is based on predefined bandwidth or IOPS overload thresholds. Warm demotes
are triggered when bandwidth or IOPS exceeds those predefined thresholds. This allows
Easy Tier to continuously ensure that the higher-performance tier does not suffer from
saturation or overload conditions that might affect the overall performance in the extent
pool.
• Demote or Cold Demote
▪ Easy Tier Automatic Mode automatically locates and demotes inactive (or cold) extents that
are on a higher performance tier to its adjacent lower-cost tier.
▪ Once cold data is demoted, Easy Tier automatically frees extents on the higher storage tier.
This helps the system to be more responsive to new hot data.
• Expanded Cold Demote
▪ Demotes appropriate sequential workloads to the lowest tier to better utilize Nearline disk
bandwidth
• Storage Pool Balancing
▪ Redistribute extents within a tier to balance utilization across MDisks for maximum
performance
▪ Moves hot extents from high utilized MDisks to low utilized MDisks
▪ Exchanges extents between high utilized MDisks and low utilized MDisks
• It attempts to migrate the most active volume extents up to Flash/SSD first.
• A previous migration plan and any queued extents that are not yet relocated are abandoned.
Extent migration occurs only between adjacent tiers. In a three-tiered storage pool, Easy Tier will not
move extents from Flash/SSD directly to NL-SAS (or vice versa) without first moving them to SAS
drives.
There are a number of reasons why Easy Tier can choose to move data between managed disks.
Easy Tier helps data storage administrators see the potential value of automation with greater
storage efficiency. Easy Tier removes the need for storage administrators to spend hours manually
performing this analysis and migration, eliminates unnecessary investments in high-performance
storage, and improves your bottom line.
IBM Easy Tier can be enabled on a volume basis to monitor the I/O activity and latency of the
extents over a 24 hour period. Because this volume data migration works at the extent level, it is
often referred to as sub-LUN migration.
It supports data movement across three tiers, and therefore, the data can be classified into the
following three categories based on usage:
• Heavily used data (also called hot data)
• Moderately used data (also called warm data)
• Cold data
When new volumes are created, their extents are placed by default on the Enterprise (middle) tier, Tier 1.
If Tier 1 has reached its capacity, the next lower tier, Tier 2, is used. Only if Tier 1 and Tier 2 are
full does the system allocate extents from Tier 0. Easy Tier then automatically starts migrating
those extents (hot or cold) based on the workload. As a result of extent movement, the volume no
longer has all of its data in one tier but rather in two or three tiers.
IBM Easy Tier uses an automated data placement (ADP) plan, which involves scheduling and the
actual movement or migration of the volume’s extents up to, or down from, the highest disk tier. This
involves collecting I/O statistics in five-minute intervals on all volumes within the three-tiered pool. Based
on the performance log after the 24 hour learning period, Easy Tier uses its data migrator (DM) to
create an extent migration plan and dynamically move extents between the three tiers based on the
heat of the extents. High activity or hot extents are moved to a higher disk tier, such as Flash and SSD,
within the same storage pool. Easy Tier also moves extents whose activity has dropped off, or cooled,
from a higher tier MDisk back to a lower tier MDisk.
The extent migration rate is capped at a maximum of 30 MBps, which
equates to approximately 3 TB per day migrated between disk tiers.
There are three different types of analytics which can decide to perform data migration based on
different schedules:
▪ Once per day Easy Tier will analyze the statistics to work out which data should be
promoted or demoted.
▪ Four times per day Easy Tier will analyze the statistics to identify if any data needs to be
rebalanced between managed disks in the same tier
▪ Once every 5 minutes Easy Tier will analyze the statistics to identify if any of the managed
disks is overloaded.
Each of the analysis phases generates a list of migrations that should be executed. The system will
then spend as long as needed executing the migration plan.
▪ Migration will occur at a maximum rate of 12 GB every 5 minutes for the entire system
▪ The system will prioritize the three types of analysis as follows:
- Promote and rebalance get equal priority
- Demote is guaranteed 1 GB every 5 minutes and receives whatever is left
This table provides a summary of Easy Tier settings. The rows highlighted in yellow are the default
settings. Also observe the reference numbers that are annotated in the Volume copy Easy Tier
status:
1. If the volume copy is in image or sequential mode or is being migrated then the volume copy
Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that volume
copy.
3. When the volume copy status is measured, the Easy Tier function collects usage statistics for
the volume but automatic data placement is not active.
4. When the volume copy status is balanced, the Easy Tier function enables performance-based
pool balancing for that volume copy.
5. When the volume copy status is active, the Easy Tier function operates in automatic data
placement mode for that volume.
Volume copy settings (altered only in the CLI):
• Easy Tier setting: On / Off
• Easy Tier status: Inactive / Active / Measured / Balanced
Storage pool settings (altered only in the CLI):
• Easy Tier setting: On / Off / Auto / Measured
• Easy Tier status: Inactive / Active
The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting for a
volume copy is On. For a single tier pool, this means that all Easy Tier functions except pool
performance balancing are disabled, while automatic data placement mode is enabled for all striped
volume copies in a storage pool with two or more tiers.
If the single tier pool Easy Tier setting is changed to On, the pool Easy Tier status becomes
active and the volume copy Easy Tier status becomes measured. This enables Easy Tier
evaluation and analysis of I/O activity for volumes in this pool.
With the default pool Easy Tier setting of auto and the default volume Easy Tier setting of on, a
two-tier or hybrid pool causes both the pool and the volume Easy Tier status to become active.
Easy Tier automatic data placement becomes active automatically.
The Easy Tier heat file is generated and continually updated as long as Easy Tier is active for a
storage pool.
Figure: Typical application data types (DB2, warehouse, backups, user directories, multimedia). Application I/Os with sizes larger than 64 KB are not considered the best use of Easy Tier; this data is not “hot”.
To help manage and improve performance, Easy Tier is designed to identify hot data at the
subvolume or sub-LUN (extent) level, based on ongoing performance monitoring, and then
automatically relocate that data to an appropriate storage device in an extent pool that is managed
by Easy Tier. Easy Tier uses an algorithm to assign heat values to each extent in a storage device.
These heat values determine on what tier the data would best reside, and migration takes place
automatically. Data movement is dynamic and transparent to the host server and to applications
using the data.
The common question is: where should the Flash drives and the Easy Tier function be deployed in your
environment? There are several areas to consider when determining where the Easy Tier
feature and the Flash drives can provide the best value to clients. If the environment has a
significant amount of very small granularity striping, such as Oracle or DB2
tablespace striping, the I/O skew of the workload may be significantly reduced. In these cases
there may be less benefit from smaller amounts of SSDs, and it may not be economical to
implement an Easy Tier solution. Therefore, you should test the application platform before fully
deploying Easy Tier into your Storwize V7000 environment.
Easy Tier supports a two-tiered storage pool hierarchy, where one tier is composed of Flash and
SSD drives and the other tier is composed of HDDs (SATA, SAS, or FC). A pool with only one type of
disk tier attribute is referred to as single-tiered storage; in such a pool, each MDisk should have the
same size and performance characteristics.
Easy Tier can also support mixed disk technology with two different disk tier attributes, where
high performance Flash/SSD disks and generic HDD disks form multi-tier storage.
Figure 7-24. Example: Easy Tier single tier pool and volume copy (three HDD pools: Easy Tier = Auto with Easy Tier status = Active; Easy Tier = On with Easy Tier status = Inactive; Easy Tier = Off with Easy Tier status = Inactive)
This diagram illustrates the Easy Tier setting and status for a single tier pool and volume copy. The
first example shows the setting as Auto and the status as Active; therefore Easy Tier
operates in storage pool balancing mode, performing performance-based pool balancing for that volume
copy to migrate extents within the same (intra) storage tier. For the second example, the
setting On and the status Inactive indicate that Easy Tier collects usage statistics for the volume but
automatic data placement (ADP) is not active. For the third example, the Inactive status means that
Easy Tier is neither collecting statistics nor enabling the ADP functions for that volume.
Figure 7-25. Example: Easy Tier two tier pool and volume copy (will evaluate volume I/O and perform ADP; will evaluate I/O but will not perform ADP; will not evaluate I/O or perform ADP)
This diagram illustrates the Easy Tier setting and status for a two tier pool and volume copy. Easy
Tier automatic mode automatically manages the capacity allocated in a hybrid pool that contains
mixed disk technology (HDD + Flash/SSD). Therefore, Easy Tier will monitor and collect the
volume's I/O statistics, and as warranted, perform ADP functions as required.
If a pool has already been created with both HDD and Flash/SSD-based MDisks but its Easy
Tier setting is measured, the automatic data placement functions are not enabled on
volumes in the pool. For the second example, the Inactive status means that Easy Tier is neither
collecting statistics nor enabling the ADP functions for that volume.
Figure: Pre-existing external storage system LUNs created with SSD drives or flash technology (MDiskX, MDiskY) and assigned to the Storwize V7000; a pre-existing FlashSystem 900 will be managed as externally virtualized storage.
MDisks created from Flash storage in external storage systems are discovered by the Storwize
V7000 on the SAN as unmanaged mode MDisks with a default technology type of hard disk drive.
Since there is no interface for the Storwize V7000 to automatically discern the technology attributes
of the drives behind the MDisks in attached storage systems, an interface is provided through
both the GUI and the CLI to enable the administrator to update the technology tier of these MDisks
to SSD or flash.
In Storwize V7000 terminology, flash is used to denote tier 0 storage. The backing technology could
be SSDs or flash systems (such as the IBM FlashSystem 900 storage system). A pre-existing
FlashSystem 900 will be managed as externally virtualized storage, unlike the Storwize V7000
Storage Enclosures. The Storwize V7000 cannot provide a single point of configuration for an externally
virtualized FlashSystem 900, as it can for its own Storage Enclosure(s).
Storwize V7000 does not automatically detect the type of external MDisks. Instead, all external
MDisks are initially put into the enterprise tier by default. If flash disks are part of the configuration,
the administrator must manually change the tier of those MDisks and add them to storage pools.
To change the tier from the GUI, select Pools > External Storage and click the plus (+) sign next to
the controller that owns the MDisks whose tier you want to change. Then right-click the
desired MDisk and select Modify Tier. This only applies to external MDisks. The change happens
online and has no impact on hosts or on the availability of the volumes.
If you do not see the Tier column, right-click the blue title row and select the Tier check box.
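The same change can be made with the chmdisk command. A hedged sketch follows, where mdisk7 is a hypothetical MDisk name; the accepted tier names vary by code level (for example, ssd, enterprise, and nearline on earlier 7.x releases), so verify them with help chmdisk on your system:
  svctask chmdisk -tier ssd mdisk7
  svcinfo lsmdisk mdisk7   (confirm that the tier field now shows the new value)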
Before Easy Tier 3, the system could overload an MDisk by moving too much hot data onto a single
MDisk. Easy Tier 3 understands the “tipping point” for an MDisk and stops migrating extents, even
if there is spare capacity available on that MDisk.
The Easy Tier overload protection is designed to avoid overloading any type of drive with too much
work. To achieve this, Easy Tier needs an indication of the maximum capability of a
managed disk.
This maximum can be provided in one of two ways:
• For an array made of locally attached drives, the system can calculate the performance of the
managed disk because it is pre-programmed with performance characteristics for different
drives.
• For a SAN attached managed disk, the system cannot calculate the performance capabilities,
so the system has a number of pre-defined levels that can be configured manually for each
managed disk. This is called the Easy Tier load parameter (low, medium, high, very_high).
If you analyze the statistics and find that the system does not appear to be sending enough IOPS to
your SSDs, you can always increase the load parameter.
You can view the Hybrid pool details from the Pools > MDisks by Pools view. Right-click on the
Flash MDisk and select Properties to display summary information about the hybrid pool and the
capacity details.
Figure 7-30. Example of Hybrid pool dependent volumes and volume extents
You can view a pool’s dependent volumes by right-clicking the MDisk and selecting Dependent
Volumes. To view the volume extents, right-click the volume, select the View Mapped Hosts option,
and click the Member MDisks tab.
At the volume level, the Easy Tier setting has a default value of on, which allows a volume to be
automatically managed by Easy Tier once its pool becomes Easy Tier active. These default settings
for both the pool and the volume enable automated storage tiering to be implemented without manual
intervention.
The CLI also displays the volume status information. In addition, the CLI displays the volume’s
Easy Tier setting of on/off. In the same manner as changing the setting on a pool, you have to
use the CLI to change a volume’s Easy Tier setting (on or off).
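As a quick sketch of what this looks like on the CLI, where VOL1 is a hypothetical volume name, the detailed lsvdisk view reports the easy_tier setting and easy_tier_status fields for each volume copy:
  svcinfo lsvdisk VOL1   (look for the easy_tier and easy_tier_status lines in the copy output)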
To obtain a quick review of the Easy Tier status of a list of volumes, use the Volumes > Volumes
view and add the Easy Tier Status column to the display. If desired, the search box allows a more
focused list to be displayed.
• Volume limitations
▪ Compressed volumes: Easy Tier can only optimize for read performance and not write performance.
▪ Volume mirroring: If the volume copy data has different workload characteristics, the extents migrated by Easy Tier might differ for each copy.
Easy Tier works seamlessly with any type of volume. However, for compressed volumes Easy
Tier can only optimize for read performance; it cannot optimize for write performance, due to the
way the compression software stores data on the disk.
Volume mirroring can have different workload characteristics on each copy of the data because
reads are normally directed to the primary copy and writes occur to both copies. Therefore, the
number of extents that Easy Tier migrates between the tiers might differ for each copy.
Internal or external MDisks (LUNs) are likely to have different performance attributes
because of the type of disk or RAID array on which they reside. The MDisks can be created on 15K
revolutions per minute (RPM) Fibre Channel (FC) or serial-attached SCSI (SAS) disks, nearline
SAS or Serial Advanced Technology Attachment (SATA) disks, or even SSDs or flash storage systems.
The following are examples of storage pools populated with different MDisk types:
• A single-tier storage pool should contain MDisks with the same hardware characteristics; for example, the same
RAID type, RAID array size, disk type, disk revolutions per minute (RPM), and controller
performance characteristics.
• A multi-tier storage pool supports a mix of MDisks with more than one type of disk tier attribute,
with each tier following the same hardware characteristics as a single-tier pool; for example, one MDisk
belonging to a Flash array, one to a SAS HDD array, and one to an NL-SAS HDD array.
The configuration is as simple as adding more than one managed disk to a storage pool
• If it is a Storwize array, the tier and capability of the array will be automatically detected.
• If it is a SAN attached managed disk, the user will need to manually configure the tier and the
capability (easy tier load) of the managed disk.
The Easy Tier function can be disabled at the storage pool and volume level. When a
given volume in a multi-tiered pool is excluded from Easy Tier automatic data placement, the volume is set to
Off. Easy Tier then records no statistics for that volume and performs no cross-tier extent migration.
Remember, in this mode only storage pool balancing is active, migrating extents within the same
storage pool.
The Easy Tier setting for storage pools and volumes can be changed only via the command line. Use the
chvdisk command to turn Easy Tier on or off for selected volumes, and the chmdiskgrp command to change
the Easy Tier setting on selected storage pools.
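A hedged example of both commands, where POOL1 and VOL1 are hypothetical object names (confirm the accepted -easytier values with help chmdiskgrp and help chvdisk on your code level):
  svctask chmdiskgrp -easytier auto POOL1
  svctask chvdisk -easytier off VOL1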
You can use the management GUI to delete the Flash RAID array from the hybrid pool. This action
allows you to force the removal of the Flash MDisk array from the configuration even if it contains
data, and it triggers a migration of the extents to the other MDisks in the same pool. This is
a non-destructive and non-disruptive action; the host systems are unaware that data is being
moved around under the covers.
When you use Easy Tier on the IBM Storwize V7000, keep in mind the following limitations:
• When an MDisk is deleted from a storage pool with the -force parameter, extents in use are
migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
• When Easy Tier automatic data placement is enabled for a volume, you cannot use the svctask
migrateexts CLI command on that volume.
• When IBM Storwize V7000 migrates a volume to a new storage pool, Easy Tier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to its
new storage pool, Easy Tier automatic data placement between the generic SSD tier and the
generic HDD tier resumes for the moved volume, if appropriate.
▪ When the IBM Storwize V7000 migrates a volume from one storage pool to another, it
attempts to migrate each extent to an extent in the new storage pool from the same tier as
the original extent. In several cases, such as where a target tier is unavailable, the other tier
is used. For example, the generic SSD tier might be unavailable in the new storage pool.
• Multi-tier storage pools containing image mode and sequential volumes are not candidates for
Easy Tier automatic data placement because all extents for those types of volumes must reside
on one specific MDisk and cannot be moved.
The IBM Storage Tier Advisor Tool (STAT) is a Microsoft Windows application that is used in
conjunction with the Easy Tier function to interpret historical usage information from Storwize
V7000 systems. The STAT utility analyzes heat data files to provide information on how much
value can be derived by placing “hot” data with high I/O density and low response time
requirements on Flash/SSDs, while targeting HDDs for “cooler” data that is accessed more
sequentially and at lower I/O rates.
With the release of the V7.3 software code supporting three-tiered Easy Tier functionality, STAT
can be used to determine the data usage for each of the storage tiers.
The STAT tool can be downloaded from the IBM support website. You can also do a web search on
‘IBM Easy Tier STAT tool’ for a more direct link. Download the STAT tool and install it on a Windows
workstation. The default installation directory is C:\Program Files\IBM\STAT.
The IBM Storage Tier Advisor Tool can be downloaded at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000935
You will need an IBM ID to proceed with the download. A suggestion: use a web search rather than
relying on the URL.
To evaluate the data, the heat map file needs to be downloaded from the Storwize V7000 system using the
management GUI Download option. On the Storwize V7000 the heat data file is located in the
/dumps directory on the configuration node and is named “dpa_heat.node_name.time_stamp.data”.
Heat data files are produced approximately once a day when Easy Tier is active on one or more
storage pools and record the activity per volume since the prior heat data file was produced. This
heat information is added to a running tally that reflects the heat activity to date for the measured
pools. The file must be off-loaded by the user and the Storage Tier Advisor Tool invoked from a
Windows command prompt console with the file specified as a parameter. The user can also
specify the output directory. Any existing heat data file is erased after it has existed for longer
than 7 days.
The heat file can also be off-loaded from the CLI using the PuTTY Secure Copy (PSCP) utility with the heat file
name specified. Ensure the heat file is in the same directory as the STAT program when invoking STAT
from the command line. The Storage Tier Advisor Tool creates a resulting index.html file to view the results
through a supported browser (Firefox 27, Firefox ESR_24, Chrome 33, and IE 10 are
supported). The file is stored in a folder called Data_files in either the current directory or the
directory where STAT is installed, and the index.html file can then be opened with a web
browser.
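As a hedged sketch of the off-load and analysis steps (cluster_ip, node_name, and time_stamp are placeholders; pscp is the PuTTY Secure Copy client and STAT.exe is the installed tool):
  pscp superuser@cluster_ip:/dumps/dpa_heat.node_name.time_stamp.data .
  STAT.exe dpa_heat.node_name.time_stamp.data
The resulting index.html is then opened with one of the supported browsers.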
Iometer creates an iobw.tst file on each volume to which it generates I/Os.
In order to get the desired results, Iometer allows you to specify several parameters before you start:
• In the Topology panel on the left side of the Iometer window, under All Managers, select
your system name.
• Any available mounted drives appear under the Disk Targets tab view. Blue
icons represent physical drives; they are only shown if they have no partitions on them. Yellow
icons represent logical (mounted) drives, which are only shown if they are writable. A yellow
icon with a red slash through it means that the drive needs to be prepared before the test starts.
• Then select the disk(s) to use in the test (use Shift and Ctrl to select multiple disks). The selected
disks are automatically distributed among the manager’s workers (threads).
• From the Access Specifications tab, select the disk name. This tab specifies how the disk will be
accessed. If the disk is not displayed, use the Edit button and Default in the Global Access
Specifications window to set the workload parameters. The default is 2-kilobyte random I/Os
with a mix of 67% reads and 33% writes, which represents a typical database workload. You
can leave it alone or change it.
• The Results Display tab allows you to set the Update Frequency between 1 and 60
seconds. For example, if you set the frequency to 10 seconds, the first test results appear in the
Results Display tab, and they are updated every 10 seconds after that.
Once you have the specifications set, press the Start Tests button (green flag). A standard Save
File dialog appears. Select a file to store the test results (default results.csv). Iometer must run for
24 hours to get accurate results. Press the Stop Test button (stop sign), and the final results are
saved in the results.csv file.
The STAT tool also creates three CSV files in the Data_files folder containing a very large amount of
information on what is going on. The best way to start investigating this data is to use the IBM
Storage Tier Advisor Tool Charting Utility from IBM techdocs:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251
This utility imports all three of these CSV files into Excel and automatically draws the three most
interesting charts. It also contains a tab called Reference, which explains all of the
terms used in the graphs and provides a useful reminder about the different types of data
migration in Easy Tier.
Figure callouts: the total capacity of extents identified as hot from the I/O collected on the monitored volumes; all storage volumes with an Easy Tier status of active or measured, and the total capacity of these volumes.
The output html file from the STAT tool is stored in the same folder as the stat.exe. Open it with a
web browser to review the generated reports.
The System Summary report includes a brief inventory of the volumes and capacity measured, the amount
of detected hot data to be migrated, and an estimated amount of time for the migration to complete.
It also provides a recommendation of the amount of Flash/SSD capacity to add, or to take advantage of
existing SSDs currently not in use, for possible performance improvement.
The Easy Tier V3 STAT tool analyzes the supported tier combinations (such as ENT only and SSD (Flash) + ENT)
managed by the Easy Tier functionality. Each green portion of the Data Management status bar displays
both the capacity and the I/O percentage of the extent pool, following the “Capacity/IO Percentage”
format (the black portion of the bar displays only the capacity of the unallocated data).
You can select any Storage Pool ID to view the performance statistics and improvement
recommendations for that specific pool. Measurement data is reported for pools where Easy Tier is either
activated automatically (two tiers of storage technology detected) or manually (a single tier pool
where Easy Tier was turned on to run in evaluation mode).
This report presents a detailed distribution of each tier that makes up the pool, displaying the
MDisks, the IOPS threshold, the utilization of MDisk IOPS, and the projected utilization of MDisk
IOPS for each MDisk of each tier. There is also a threshold set for the maximum allowed IOPS. The
utilization of MDisk IOPS is the current MDisk IOPS, calculated as a moving average, expressed as a
percentage of the maximum allowed IOPS threshold for the device type (such as SATA and SSD).
The projected utilization of MDisk IOPS is the expected MDisk IOPS, again calculated as a moving
average, after rebalancing operations have been completed, expressed as a percentage of the same
threshold. Observe the utilization of MDisk IOPS and projected utilization of MDisk IOPS color bars,
which denote the percentage of MDisk IOPS utilization in comparison to the average utilization
of MDisk IOPS.
The color codes provide the following representation:
• The blue portion of the bar represents the capacity of cold data in the volume. Data is
considered cold when it is not used heavily or the I/O per second on that data has been
very low.
• The orange portion of the bar represents the capacity of warm data in the volume. Data is
considered warm when it is more heavily used than cold data, or the IOPS on that
data is relatively higher than on cold data.
• The red portion of the bar represents the capacity of hot data in the volume. Data is considered
hot when it is used most heavily or the IOPS on that data has been the highest. (Not shown.)
Figure callouts: recommendations to add SSD to a pool that contains Enterprise and NL MDisks; recommendations to add NL to all pools to migrate the less active data.
With Easy Tier supporting three-tier combinations in this release, there are five kinds of
recommendations: “Recommended SSD Configuration”, “Recommended ENT Configuration”,
“Recommended SATA Configuration”, “Recommended NL Configuration”, and “Recommended
SSD + ENT Configuration”. For each kind of recommendation, the result is
listed in a table format, which contains the recommendation title, selection list, table head, and table
content. The recommended drive configuration is given for each tier combination based on the storage
pool.
STAT can be used to determine which application data can benefit the most from relocation to SSDs,
Enterprise (SAS/FC) drives, or Nearline SAS drives. STAT uses limited storage performance
measurement data from a user's operational environment to model potentially unbalanced workload
(skew) on disk and array resources. It is intended to supplement and support, but not replace,
detailed pre-installation sizing and planning analysis. It is most useful for obtaining a “rule of thumb”
system-wide projection of the cumulative latency reduction on arrays and disks when a
Solid State Drive configuration and the IBM Easy Tier function are used in combination to
handle workload growth or skew management.
Figure: Workload skew chart (percent of workload versus percent of extents). About 5% of the extents account for 58% of the random IOPS and 33% of the MB, while 50% of the extents do only 10% of the MB and virtually no random IOPS.
This illustration shows the skew of the distributed and projected workloads across the system in a
graph, providing a visual of system performance based on the percentage of allocated capacity.
• Workload distribution: the X-axis denotes the top x most intensive data, sorted by small I/O.
The Y-axis denotes the cumulative small I/O percentage distributed over that top x intensive data.
• Projected workloads: the top tier workload displays the projected skew of the secondary
storage device. The X-axis denotes the top x most intensive data, sorted by small write I/O.
The Y-axis denotes the cumulative small I/O percentage distributed over that top x intensive data.
This output can be used to compare the workload distribution curves across tiers, within and across
pools, to help determine the optimal drive mix for current workloads.
The volume heat distribution report for a storage pool shows the VDisk ID, configured size, I/O
percentage of the extent pool, tier, capacity on tier, and heat distribution for each VDisk of that storage
pool. The heat distribution for each VDisk is displayed using a color bar that represents the type
of data on that volume.
This example provides a more vibrant display in the color bar for storage pool P0, which contains
Enterprise MDisks only. Here you can see that the VDisk 10 heat distribution shows more heavily
used data than VDisk 0, as well as a higher I/O density. You can also see that VDisk 7, although
smaller in size, shows a heat distribution for the data in use that is balanced between warm and
hot. Therefore, based on this information, storage pool P0 can benefit from adding SSDs,
as well as from the addition of Nearline MDisks to migrate the less active data between Tier 1 and Tier 2.
In this example, the Volume Heat Distribution report for storage pool P1, a two-tier hybrid (Tier 0
and Tier 1) storage pool, provides a distribution of hot, warm, and cold data (in terms of capacity) for
each volume monitored. Again, the recommendation for this pool would be to add NL to migrate
less active data between Tier 1 and Tier 2 before migrating to Tier 0.
Overall, Easy Tier's maximum value is derived by placing hot data with high I/O density and low
response time requirements on SSDs, while targeting HDDs for cooler data that is accessed more
sequentially and at lower rates.
Tier layout: Flash = Tier 0, ENT = Tier 1, NL = Tier 2.
It is recommended to keep some free extents within a pool in order for Easy Tier to function properly. This
allows Easy Tier to move extents between tiers, as well as to move extents within the same tier
to load-balance the MDisks in that tier, without delays or performance impact.
Easy Tier will work with only one free extent; however, it will not work as efficiently.
The number of free extents to keep can be estimated as one extent times the number of MDisks in the
storage pool, plus 16.
And remember, the Easy Tier heat map used for moves between tiers is updated every 24 hours.
Performance rebalancing within a single tier (even in a hybrid pool) is evaluated and updated much
more often; the system rebalances on an hourly basis.
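To make the estimate concrete with a purely hypothetical pool: a pool built from 8 MDisks would call for roughly (8 x 1) + 16 = 24 free extents; with a 1 GB extent size that is about 24 GB of capacity left unallocated for Easy Tier to work with.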
The key item to note is that Easy Tier can be used for workload measurement. It can be used to assess
the impact of adding Flash/SSDs to the workload before actually investing in Flash/SSD-based
MDisks.
Easy Tier is a licensed feature, except for storage pool balancing, which is a no-charge feature and
is enabled by default. Easy Tier comes as part of the Storwize V7000 code. For Easy Tier to
migrate extents, you must have disk storage available that has different tiers: a mix of Flash/SSD
and HDD.
Figure 7-51. Best practices: Easy Tier data relocation decision criteria
The hot and cold temperature of an extent is dependent upon the measurements of random, small
data transfer I/O operations.
Large sequential transfers are not considered as these tend to perform equally well with
HDD-based MDisks. Thus, Easy Tier only considers extents with I/Os of up to 64 K as migration
candidates.
• Thin provisioning
• Real-time Compression (RtC)
• Comprestimator Utility
Thin-provisioned volumes:
• Extent space-efficient volumes
• Enhance utilization efficiency of physical storage capacity
• Enable just-in-time capacity deployment
• Align application growth with its storage capacity growth
• Facilitate more frequent copies/backups while minimizing capacity consumption
The thin provisioning function extends storage utilization efficiency to all storage systems supported by
the Storwize V7000 by allocating disk storage space in a flexible manner among multiple
users, based on the minimum space required by each user at any given time.
With thin provisioning, storage administrators can also benefit from reduced consumption of
electrical energy because less hardware is required, and more frequent recovery
points of data (point-in-time copies) can be taken without a commensurate increase in storage
capacity consumption.
Volumes can be created either as fully allocated or thin-provisioned. A thin-provisioned volume has two
capacities: virtual and real.
Thin provisioning creates an additional layer of virtualization that gives the
appearance of traditionally provisioned storage, with more physical resources than are
actually available.
Figure: Thin-provisioned volume with virtual capacity (LBA0 to LBAn) presented to the host and real capacity controlled by the rsize parameter; the physical storage used in this example is 25 GB.
For a thin-provisioned volume, only the real capacity is acquired at creation. The real capacity
defines the amount of capacity actually allocated from the disk systems represented by the storage pool.
The virtual capacity is what is presented to attached hosts and to Copy Services such as
FlashCopy and Metro and Global Mirror; they have no awareness to discern between fully
allocated and thin-provisioned volumes. Therefore, the Storwize V7000 implementation of
thin provisioning is totally transparent, including to back-end storage systems.
As a standard feature of the Storwize V7000, thin-provisioned volumes can reside in any
storage pool representing any attached storage system.
Thin-provisioned volumes can benefit from lower cache functions (such as coalescing writes or
prefetching), which greatly improve performance.
A thin-provisioned volume's real capacity is controlled by the rsize (real size) parameter. There might
be times when a thin-provisioned volume demands additional real storage capacity.
There are two operating modes in which to expand a thin-provisioned volume's real capacity: autoexpand and
non-autoexpand. You can switch the mode at any time. If a thin volume uses
the autoexpand mode, the Storwize V7000 automatically adds a fixed amount of additional real
capacity to the thin volume as required. The autoexpand feature attempts to maintain a fixed
amount of contingency (unused) real capacity for the volume. The contingency capacity is initially
set to the real capacity that is assigned when the thin volume is created. Autoexpand mode does
not cause the real capacity to grow much beyond the virtual capacity.
The real capacity can also be manually expanded to more than the maximum that is required by the
current virtual capacity, in which case the contingency capacity is recalculated as the difference between
the used capacity and the real capacity. This is done from the CLI in non-autoexpand
mode.
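A hedged CLI sketch of manually growing the real capacity, where THINVOL1 is a hypothetical volume name (verify that your code level accepts the -rsize parameter with help expandvdisksize):
  svctask expandvdisksize -rsize 10 -unit gb THINVOL1
This adds 10 GB of real capacity to the thin-provisioned volume without changing its virtual size.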
Figure: Thin-provisioned volume with autoexpand = OFF; virtual capacity 50 GB, real capacity rsize = 2 GB, grain size 256 K. Space is allocated in grain-size increments as write activity occurs (for example, LBA 0-511 and LBA 1536-2047).
Space within the allocated real capacity is allocated in grain-sized increments, corresponding to
logical block address (LBA) ranges, as dictated by write activity. An LBA addresses a single block of
information using a LUN identifier and an offset within that LUN. The default grain size is 256 K,
which represents 512 blocks of 512 bytes each. A write operation to an LBA range not previously
allocated causes a new grain-sized space to be allocated.
A directory of metadata is used to map, or track, allocated space to the corresponding LBA ranges
based on write activity. When write activity exceeds the real capacity, the volume goes offline (if
the autoexpand setting is off) and application I/Os fail. Once the real capacity is
expanded, the volume comes back online automatically.
Figure: Thin-provisioned volume with autoexpand = ON; virtual capacity 50 GB, contingency capacity rsize = 2 GB, grain size 64 K. Space is allocated in grain-size increments as write activity occurs (for example, LBA 0-127, LBA 128-255, and LBA 256-383).
With the autoexpand attribute set to ON, the specified rsize value serves as a contingency capacity
that is maintained as write activity occurs. This contingency, or reserved, capacity protects the
volume from going offline in the event the storage pool runs out of extents. The contingency
capacity diminishes to zero when the real capacity reaches the total capacity of the volume.
The combination of autoexpand and the existence of the metadata directory might cause the real
capacity of the volume to become greater than the total capacity seen by host servers and other
Storwize V7000 services. A thin-provisioned volume can be converted to fully allocated by using the
Volume Mirroring function.
The lsmdisklba command returns the logical block address (LBA) of the MDisk that is associated
with a volume LBA. For a mirrored volume, the command lists the MDisk LBA for both the primary
and the copy.
If applicable, the command also lists the range of LBAs on both the volume and the MDisk that are
mapped in the same extent or, for thin-provisioned volumes, in the same grain. If a thin-provisioned
volume is offline and the specified LBA is not allocated, the command displays the volume LBA
range only.
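A minimal sketch of the command, where THINVOL1 is a hypothetical volume name (the parameter is as documented for recent code levels; confirm it with help lsmdisklba):
  svcinfo lsmdisklba -vdisklba 0x0 THINVOL1
The output identifies the MDisk, the MDisk LBA, and the LBA range mapped in the same grain for volume LBA 0.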
Figure: The metadata directory (a B-tree), consuming less than 1% of the volume capacity, is stored within the volume's extents alongside the user data.
The directory of metadata and the user data share the real capacity allotment of the volume. When a
thin volume is initially created, the volume has no real data capacity stored. However, a small
amount of the real capacity is used for metadata, which the system uses to manage space allocation. The
metadata holds information about the extents and volume blocks already allocated in the storage pool. A
write I/O to an LBA that has not previously been written to causes a grain-sized amount of space
to be marked as used within the volume’s allocated real capacity, and the metadata directory is
updated. This metadata used for thin provisioning allows the Storwize V7000 to determine
whether new extents have to be allocated.
Here are a few examples:
• If the volume uses the default grain size of 256 K, then 256 K within the allocated real capacity is marked
as used (the 512 blocks of 512 bytes each spanning the LBA range) in response to the write
I/O request.
• If a subsequent write I/O request is to an LBA within the previously allocated 256 K, the I/O
proceeds as usual since its requested location is within the previously allocated 256 K.
• If a subsequent write I/O request is to an LBA outside the range of a previously allocated 256 K,
then another 256 K within the allocated real capacity is used.
All three of these write examples consult and might update the metadata directory. Read requests
also need to consult the same directory. Consequently, the volume’s directory is highly likely to be
Storwize V7000 cache-resident while I/Os are active on the volume.
A few factors (extent and grain size) limit the virtual capacity of thin-provisioned volumes beyond
the factors that limit the capacity of regular volumes.
The first table shows the maximum thin provisioned volume virtual capacities for an extent size. The
second table shows the maximum thin provisioned volume virtual capacities for a grain size.
Figure: Warning threshold set at 80% of used capacity; threshold alerts are sent to the administrator. If no threshold is set, the volume goes offline when write activity exceeds the real capacity.
To avoid exhausting the real capacity, you can enable a warning threshold on thin-provisioned
volumes to send alerts to an administrator by email or an SNMP trap. The administrator can
then (if warranted) increase the real capacity and/or change the volume attribute to autoexpand so that
the real capacity is increased automatically. You can enable the threshold on the volume, and also on the storage pool
side, especially when you do not use autoexpand mode. Otherwise, the thin volume goes offline
if it runs out of space.
The warning threshold logs a message to the event log when the used capacity exceeds the specified
percentage (the default is 80%). When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
For more information and detailed performance considerations for configuring thin provisioning, see
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance
Guidelines.
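For illustration, a hedged sketch of adjusting the warning threshold on an existing thin-provisioned volume from the CLI, where THINVOL1 is a hypothetical volume name (verify the parameter with help chvdisk):
  svctask chvdisk -warning 75% THINVOL1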
Thin-provision volumes are created using the same procedural steps as any other volume created
within the management GUI Create Volumes wizard.
• Thin provision grain size can be set to 32 KB, 64 KB, 128 KB, or 256 KB; larger grain sizes produce better performance.
The Thin Provisioning tab allows you to modify the default parameters, such as the real capacity, the
warning threshold, and whether autoexpand is enabled or disabled.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the capacity
of the volume that is reported to other IBM Storwize V7000 components (such as FlashCopy or
remote copy) and to the hosts. For example, with the default real capacity of 2% of the virtual size, a
100 GB volume initially gets only 2 GB of real capacity allocated.
You can enable the thin volume’s real capacity to automatically expand without user intervention,
set a warning threshold to notify an administrator when the volume has reached the specified
threshold percentage, and change the grain size. These attributes can be overridden; however, the
default settings are a general best practice.
The Summary statement calculates the real and virtual capacity values of the volume. The virtual
capacity is the size presented to hosts and to Copy Services such as FlashCopy and
Metro/Global Mirror.
The management GUI generates the mkvdisk command. The thin-provisioned volume is created with the
-autoexpand, -grainsize 256, and -rsize 2% parameters, indicating that the real capacity is 2% of the virtual
volume size, together with -warning 80%.
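A hedged sketch of the equivalent CLI command that the GUI would generate, where the pool name, I/O group, size, and volume name are hypothetical values:
  svctask mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name THINVOL1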
Thin volumes can be easily distinguished from other volumes by their hourglass icon. From the
host perspective, a thin-provisioned volume appears as a fully allocated volume. Thin volumes
also support all of the operations that standard volumes do, with the following exceptions:
• You cannot change the segment size of a thin volume
• You cannot enable the pre-read redundancy check for a thin volume
• You cannot use a thin volume as the target volume in a Volume Copy
• You cannot use a thin volume in a snapshot (legacy) operation
• You cannot use a thin volume in a Synchronous Mirroring operation
Figure callouts: volume warning threshold line; 2 GB of real capacity with 768 KB used for the metadata directory; extents allocated.
To view a volume's details, right-click the volume and select Properties. However,
to view a volume's capacity details, such as the thin-provisioned capacity, you need to select Volume Copy
Properties. From this view, the volume is identified as thin-provisioned. Only a tiny amount, 2 GB of
real capacity, is allocated, within which 768 KB is used for the volume metadata directory.
The Member Disks tab displays the extent distribution of this volume as well as the actual number
of extents consumed on each MDisk. Observe that the five extents of 512 MB work out to be more
than 2 GB.
Recall the autoexpand attribute defined for this volume. The real capacity (rsize)
value of 2 GB (or four extents) serves as a contingency buffer maintained for this volume. Because
768 KB of real capacity has been used, the volume was automatically expanded by one extent to
maintain the 2 GB buffer.
Resolution options when the threshold is exceeded: expand the volume size, or convert the volume to fully allocated.
When a warning threshold is exceeded (for a volume or a pool), Spectrum Virtualize generates an
event notification on the Storwize V7000 configuration node. For each event, the
notifications can include an SNMP trap, an email notification, and an entry in the Events log. To see the
threshold warning entry, go to Monitoring > Events and use the messages and alerts view
of the log. Right-click the entry and select Properties for a more readable and detailed description
of the event; in this case, a thin-provisioned volume copy space warning has occurred.
Data or file deletion is managed at the host or OS level. In Windows, a file deletion is just an update
of allocation tables associated with the Windows drive to release allocated space. It is an activity
not known to external storage systems including the Storwize V7000.
Consequently, the real capacity utilization of the volume is not changed nor released even though
the host system indicates more free space for the drive.
Assuming that the application has estimated the volume size correctly, a fully provisioned volume
might be more appropriate. Otherwise, the application might want to reassess the volume virtual
size. A thin-provisioned volume can be converted to a fully allocated volume by using the
management GUI Volume Mirroring function.
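As a hedged sketch of that conversion from the CLI, where THINVOL1 and Pool1 are hypothetical names: add a fully allocated copy with addvdiskcopy, wait for the copies to synchronize, and then remove the thin copy with rmvdiskcopy (assuming the original thin copy is copy 0):
  svctask addvdiskcopy -mdiskgrp Pool1 THINVOL1
  svcinfo lsvdisksyncprogress THINVOL1
  svctask rmvdiskcopy -copy 0 THINVOL1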
Figure: thin-provisioned volume VB1-THIN in the V0B1-DS3KSATA (SATA) pool.
From the host perspective, nothing has changed; it sees only the attributes and usage of the volume.
The volume is still mapped to its respective host for both read and write access, and it still maintains its
assigned object ID and UID number.
The magic of Storwize V7000 virtualization (actually extent pointers) affords the freedom of
changing the back-end storage infrastructure without host impact, and the opportunity to exploit
newer technology to optimize storage efficiency for better returns on storage investments.
As the data ages and becomes less active, it might be worthwhile to migrate it to a lower cost
storage tier, and at the same time release the allocated but unused capacity.
This topic discusses how Real-time Compression works in a Storwize V7000 environment.
Real-time Compression
As industry needs continue to grow, data compression must be fast, reliable, and
scalable, and it must occur without affecting the production use of the data at any time.
In addition, the data compression solution must be easy to implement.
Based on these industry requirements, IBM offers IBM Real-time Compression, a combination of a
lossless data compression algorithm with a real-time compression technology.
IBM Real-time Compression offers innovative, easy-to-use compression that is fully integrated to
support active primary workloads:
• Provides high performance compression of active primary data
▪ Supports workloads off-limits to other alternatives
▪ Expands candidate data types for compression
▪ Derives greater capacity gains due to more eligible data types
• Operates transparently and immediately for ease of management
▪ Eliminates need to schedule post-process compression
▪ Eliminates need to reserve space for uncompressed data pending post-processing
• Enhances and prolongs value of existing storage assets
▪ Increases operational effectiveness and capacity efficiency; optimizing back-end cache and
data transfer efficacy
▪ Delays the need to procure additional storage capacity; deferring additional capacity-based
software licensing
• Supports both internal and externally virtualized storage
▪ Compresses up to 512 volumes per I/O group (v7.3 code)
▪ Exploits the thin-provisioned volume framework
The Compression technology is delivered by the Random Access Compression Engine (RACE),
which is the core of the IBM Spectrum Virtualize Real-time Compression solution. The RACE has
been integrated seamlessly in the Thin Provisioning layer of the node I/O stack, below the Upper
Cache level. At a high level, the RACE component compresses data that is written into the storage
system dynamically. RACE is an in-line compression technology that allows host servers and Copy
Services to operate with uncompressed data. The compression process occurs transparently to the
attached host system (FC or iSCSI) and Copy Services. All of the advanced features of the
RtC-supported system are supported on compressed volumes. You can create, delete, migrate,
map (assign), and unmap (unassign) a compressed volume as though it were a fully allocated
volume. In addition, you can use IBM Real-time Compression along with IBM Easy Tier on the
same volumes.
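Because compressed volumes build on the thin-provisioned volume framework, creating one from the CLI looks very much like creating a thin volume. A hedged sketch, where the pool, size, and name are hypothetical values (confirm the -compressed option with help mkvdisk on your code level):
  svctask mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name COMPVOL1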
Traditional compression:
• Data compression is location-based; the host update sequence (1, 2, 3) results in three separate compression actions based on the physical location of the data.
• Must locate repetitions of bytes within a given chunk of data to be compressed.
• Must detect and calculate the repeated bytes that are stored in the same chunk.
▪ Locating all bytes might yield a lower compression ratio.
IBM RACE offers an innovation leap by incorporating a time-of-data-access dimension into the
compression algorithm called temporal compression. When host writes arrive, multiple compressed
writes are aggregated into a fixed size chunk called a compressed block. These writes are likely to
originate from the same application and same data type, thus more repetitions can usually be
detected by the compression algorithm.
Due to the time-of-access dimension of temporal compression (instead of creating different
compressed chunks each with its unique compression dictionaries) RACE compression causes
related writes to be compressed together using a single dictionary; yielding a higher compression
ratio as well as faster subsequent retrieval access.
The traditional compression approach is probably most familiar to users because of the widespread use of
compression utilities, such as Zip and Gzip. At a high level, these utilities take a file as their input
and parse the data by using a sliding window technique. Repetitions of data are detected within the
sliding window history, most often 32 kilobytes (KB). Repetitions outside of the window cannot be
referenced. Therefore, the file cannot be reduced in size unless data is repeated when the window
“slides” to the next 32 KB slot.
This example shows compression using a sliding window, where the first two repetitions of the
string “ABCDEF” fall within the same compression window and can therefore be compressed using
the same dictionary. However, the third repetition of the string falls outside of this window and
cannot, therefore, be compressed using the same compression dictionary as the first two
repetitions, reducing the overall achieved compression ratio.
As part of its staging, data passes through the compression engine and is then stored in
compressed format in the storage pool. This means that each host write is compressed as
it passes through the RACE engine to the storage disks; therefore, the physical storage consumed
by a compressed volume contains only compressed data.
Writes are acknowledged immediately after being received by the write cache, with
compression occurring as part of the staging to internal or external physical storage.
The Storwize V7000 nodes must have two processors, 64 GB of memory, and the optional
Compression Accelerator cards (two are required) installed in order to use compression. Enabling
compression on AC2 nodes does not affect non-compressed host to disk I/O performance.
I/O group recommendations:
• It is strongly recommended to place the Compression Accelerator cards into their dedicated slots 4
and 6. However, if there is no I/O card installed in slot 5, a compression card can be in any slot
connected to the second processor.
• Up to two Compression Accelerator adapters can be installed per node. Each additional card that is installed
improves the I/O performance, and in particular the maximum bandwidth,
when using compressed volumes.
• With a single Compression Accelerator card in each node, the existing recommendation on
the number of compressed volumes able to be managed per I/O group remains the same at
200 volumes. However, with the addition of a second Compression Accelerator card in each
node (a total of four cards per I/O group), the total number of managed compressed volumes
increases to 512. A cluster with four (4) I/O groups can support as many as 800 compressed
volumes.
Dual RACE: no bottleneck
The release of V7.4 enables two instances of the RACE (Random Access Compression Engine) software, which in short means almost twice the performance for most workloads. The Storwize V7000 nodes run both instances when all of the compression-assist hardware is installed (second CPU, extra cache, and both compression offload cards).
The allocation of volumes is essentially round robin across both instances, so you need at least two compressed volumes to make use of this enhancement. Doing so takes random IOPS performance from approximately 175K up to over 330K when the workload compresses well. The bandwidth has also increased significantly, and almost 4.5 GB/s can be achieved per Storwize node pair.
• CPU allocation between System and RACE per node canister
• Memory allocation between System and RACE per node canister
With the initial release, fixed memory sizes are assigned for RtC use based on how much memory is installed in each node canister. This gives a balanced configuration between general I/O and RtC performance.
▪ The recommendation for serious RtC use is to add the extra 32 GB of memory per node canister.
▪ A second Compression Accelerator is also recommended and requires the extra 32 GB of memory.
Figure 7-81. Storwize V7000 Gen1 versus Gen2: Max performance (one I/O group)
This chart shows the difference in performance between the previous Storwize Gen1 and the Gen2 model.
The first chart shows that performance is almost doubled for regular performance capability.
The second chart is with compression enabled. You can see that on the Gen1 model, compression fluctuates; this is mainly because there was not enough processing power to handle a demanding compression workload. With the Gen2 model, the numbers are significantly better than the Gen1, especially for databases, VMware, and so on.
IBM authorizes existing Storwize V7000 customers to evaluate the potential benefits of the Real-time Compression capability, based on their own specific environment and application workloads, free of charge through the 45-day Free Evaluation Program. However, before you can use the RtC 45-day trial period, you must have Storwize V7000 software version 7.4 or later and two RtC Compression Accelerator cards installed. The 45-day evaluation period begins when you enable the Real-time Compression function. At the end of the evaluation period, you must either purchase the required licenses for Real-time Compression or disable the function.
IBM Storwize V7000 requires a Real-time Compression license. However, the purchase of the base license entitles Storwize V7000 (machine type 2076) to all of the licensed functions, such as Virtualization, FlashCopy, Global Mirror, Metro Mirror, and Real-time Compression. With Storwize V7000, Real-time Compression is licensed by capacity, per terabyte of virtual data.
To apply your Storwize V7000 compression license using the CLI, enter the total number of terabytes of virtual capacity that is licensed for compression. For example, run chlicense -compression 200.
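A minimal CLI sketch, assuming a 200 TB compression entitlement; the prompt is omitted and the exact lslicense field names can vary by code level:
chlicense -compression 200
lslicense
The lslicense output (for example, the license_compression_capacity field) confirms that the capacity value was applied.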
(Figure: a volume with two copies; Copy 0 is a fully allocated or thin-provisioned volume and Copy 1 is a compressed volume.)
Compression is enabled on a volume-copy basis. Compressed volume copies are a special type of
thinly provisioned volume that is also compressed. In addition to compressing data in real time, it is
also possible to compress existing data. Compressed volume copies can be freely mixed with fully
allocated and regular thin-provisioned (that is, not compressed) volume copies. For existing
volumes, the Volume Mirroring function can be used to non-disruptively add a compressed volume
copy. This compression adds a compressed mirrored copy to an existing volume. The original
uncompressed volume copy can then be deleted after the synchronization to the compressed copy
is complete. A compressed volume can also become uncompressed with the same volume copy
functionality provided by Volume Mirroring.
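A hedged CLI sketch of this conversion, using hypothetical volume and pool names (APPVOL, Pool1): add a compressed copy, wait for synchronization, then remove the uncompressed copy.
addvdiskcopy -mdiskgrp Pool1 -rsize 2% -autoexpand -compressed APPVOL
lsvdisksyncprogress APPVOL
rmvdiskcopy -copy 0 APPVOL
Do not remove copy 0 until lsvdisksyncprogress reports 100% for the new copy.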
Compressed volumes are configured in the same manner as the other preset volumes.
The first time you create a compressed volume, a warning message is automatically displayed. To
use compressed volumes without affecting performance of existing non-compressed volumes in a
pre-existing system, ensure that you understand the way that resources are re-allocated when the
first compressed volume is created.
Hardware resources (CPU cores and cache) are reserved when compression is activated for an I/O
group. These resources are freed when the last compressed volume is removed from the I/O
group.
Compressed volumes have the same characteristics as thin-provisioned volumes; the defaults are almost identical. You must specify that the volume is compressed, specify the rsize, enable autoexpand, and specify a warning threshold. If not configured properly, the volume can go offline prematurely. The preferred settings are to set rsize to 2%, enable autoexpand, and set warning to 80%. The only difference between the two presets is the grain size attribute; compressed volumes do not have an externally controlled grain size.
(Figure: a compressed volume mapped to a Windows host.)
The GUI-generated mkvdisk command includes the -compressed parameter, which defines the volume being allocated as a compressed volume.
The -autoexpand, -rsize, and -warning parameters, which are used to define thin-provisioned volumes, are also used to define compressed volumes.
A volume is owned by an I/O group and is assigned a preferred node within the I/O group at volume creation. Unless overridden, the preferred node of a volume is assigned in round-robin fashion by the system.
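A hedged sketch of such a command, with hypothetical names and a 50 GB capacity, supplying the preset values described earlier (rsize 2%, autoexpand, warning 80%):
mkvdisk -mdiskgrp Pool1 -iogrp io_grp0 -size 50 -unit gb -rsize 2% -autoexpand -warning 80% -compressed -name WIN_CVOL1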
The Copy 0 details of the compressed volume capacity bar have the same appearance as those of a thin-provisioned volume. Its capacity details contain statistics related to compression. At initial
creation, only capacity for volume metadata has been allocated. This compressed volume is owned
by io_grp0 and NODE2 has been assigned as its preferred node. Data compression is performed
by the preferred node of the volume. In this case, the total compression savings based on volume
data is about 75%.
Upon the creation of the first compressed volume in a storage pool, the Compression Savings bar
is included in the pool details to display compression statistics at the pool level. The compression statistics for the volume entry are displayed in the Compression Savings column. The
statistics related to compression are dynamically calculated by the GUI. These calculations of
percentages of savings are not available with the CLI.
IBM Easy Tier supports compressed volumes. Only random read operations are monitored for
compressed volumes (versus both reads and writes). Extents with high random reads (64 K or
smaller) of compressed volumes are eligible to be migrated to tier 0 storage.
(Figure: a compressed volume; compression is done by the volume's preferred node, and the host sees the fully allocated 50 GB volume.)
Storwize V7000 can increase the effective capacity of your flash storage up to 5 times using IBM
Real-time Compression. Compression requires dedicated hardware resources within the nodes
which are assigned or de-assigned when compression is enabled or disabled. Compression is
enabled whenever the first compressed volume in an I/O group is created and is disabled when the
last compressed volume is removed from the I/O group.
Each Storwize V7000 node has 16 cores (two 8-core processors) and 64 GB of memory. Without RtC activated, non-compressed volumes use eight cores for system processing. When RtC compression is activated, the second 8-core processor is used only (at this time) to open the PCIe lanes and to schedule traffic into and out of the installed compression accelerator cards.
Compression CPU utilization can be monitored from Monitoring > Performance. Use the
drop-down list to select and view CPU utilization data of the preferred node of the volume.
Behind the scenes, compression is managed by the preferred node of the volume. As data is written, it is compressed on the fly by the preferred node before being written to the storage pool. As with all volumes created by the Storwize V7000 system, Real-time Compression is totally transparent to the host. A compressed volume appears as a standard volume with its full capacity to the attaching host system. Host reads and writes are handled as normal I/O. As write activity occurs, compression statistics are updated for the volume.
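Compression processor usage can also be checked from the CLI. A hedged sketch; the output is abbreviated and the statistic names (such as compression_cpu_pc) are as observed on 7.x code levels:
lssystemstats
  cpu_pc 18
  compression_cpu_pc 12
The lsnodecanisterstats command reports the same statistics per node canister, which is useful for watching the preferred node of a given volume.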
You can use Volume Mirroring (Add Mirrored Copy) to convert between compressed volume copies and other kinds. This process can be used to compress existing data by migrating it to a compressed volume copy, or to move an already compressed volume back to a generic (uncompressed) volume, while the volume is still in use.
Compressed volume copies can be assigned to a different storage pool. To add a volume copy to the selected volume, right-click the volume entry and select Add Mirrored Copy. From the Add Volume Copy pane, select a second pool (optional) or keep both copies in the same pool as the selected volume.
While the synchronization is running, there is no change to system behavior; all reads and writes go to the original uncompressed volume. Once the synchronization is complete, there are two identical copies of the data: one is the original (uncompressed) and the other is compressed. You can change the role of the volume by making Copy 1 the primary copy. Compression is performed by the volume's preferred node. The compression savings of 76.31% for volume copy 1 is within the advertised 5% accuracy range.
(Figure: make the compressed copy (Copy 1) the primary, then delete the uncompressed volume mirror (Copy 0).)
After the compressed volume copy is created (copy 1 in this case) and synchronized, you can make
the compressed volume the primary and then the fully allocated volume copy (copy 0) can be
deleted. Through Volume Mirroring, a volume with existing data has now become a compressed
volume.
• Thin provisioning
• Volume Mirroring
Not all workloads are good candidates for compression. The best candidates are data types that
are not compressed by nature. These data types involve many workloads and applications such as
databases, character/ASCII based data, email systems, server virtualization infrastructures,
CAD/CAM, software development systems, and vector data.
Use the IBM Comprestimator utility to evaluate workloads or data on existing volumes for potential
benefits of compression. Implement compression for data with an expected compression ratio of
45% or higher.
Do not attempt to compress data that is already compressed or that has a low compression ratio. Such data consumes more processor and I/O resources for little capacity savings.
RtC algorithms are optimized for application workloads that are more random in nature. Heavy
sequential read/write application access profiles might not yield optimal compression ratios and
throughput.
Refer to the IBM Redpaper publication Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859, for more information.
http://www-304.ibm.com/webapp/set2/sas/f/comprestimator/home.html
The Comprestimator is a host-based command-line executable available from the IBM support website. The utility and its documentation can also be found by performing a web search using the keywords 'IBM Comprestimator'.
The Comprestimator supports a variety of host platforms. The utility runs on a host that has access to the devices that will be analyzed, and it performs only read operations, so it has no effect whatsoever on the data stored on the device.
Comprestimator version 1.5.2.2 adds support for analyzing expected compression savings for XIV Gen3 storage systems running version 11.6, and for Storwize V7000 and SAN Volume Controller (SVC) storage systems running software version 7.4 or higher.
The Comprestimator Utility is designed to provide a fast estimate of compression rates for block-based volumes that contain existing data. It uses random sampling of non-zero data on the volume and mathematical analysis to estimate the compression ratio of existing data. By default, it runs in less than 60 seconds (regardless of the volume size). Optionally, it can be invoked to run longer and obtain more samples for an even better estimate of the compression ratio.
Because the Comprestimator samples existing data, the estimated compression ratio becomes
more accurate or meaningful for volumes that contain as much relevant active application data as
possible. Previously deleted old data on the volume or empty volumes not initialized with zeros are
subject to sampling and will affect the estimated compression ratio. It employs advanced
mathematical and statistical algorithms to efficiently perform read-only sampling and analysis of
existing data volumes owned by the given host. For each volume analyzed, it reports an estimated compression capacity savings range, within an accuracy of 5 percent.
To execute the Comprestimator Utility, log in to the server using an account with administrator privileges. Open a Command Prompt with administrator rights (Run as Administrator). Run the utility with the comprestimator -n X -p -s SVC command. For Storwize V7000 and Storwize systems, use the SVC storage type.
The Comprestimator output for the volume example shown indicates that the real storage capacity
consumption for this volume would be reduced from 50 GB to 10.6 GB. This represents a saving of
32.4% within an accuracy range of 5.0%. Only 68.5% of the capacity savings would be derived from
Thin-Provisioning.
The guideline for a volume to be considered as a good candidate is a compression savings of 45%
or more.
In addition to the above statements, recall that compression is performed by the volume's preferred node. The preferred node is assigned in round-robin fashion within the I/O group as each volume is created. Over time, as volumes are created and deleted, monitor and maintain the distribution of compressed volumes across both nodes of the I/O group.
In the example scenarios of this unit, compressed volumes and non-compressed volumes share
the same storage pool. For certain configurations and environments, it might be beneficial to
segregate compressed volumes into a separate pool to minimize impact on non-compressed
volumes. Review your environment with your IBM support representative when activating Real-time
Compression.
Keywords
• Easy Tier V3
• Automatic data placement mode
• Evaluation mode
• Easy tier indicators
• Drive use attributes
• Thin provisioning
• Auto expand
• Overallocation
• Volume mirroring
• Real time Compression (RtC)
• Comprestimator utility
• Easy Tier STAT
• Data relocation
• Volume Heat Distribution report
• Storage Pool Recommendation report
• System Summary report
Review questions (1 of 2)
1. What are three tier levels supported with a Storwize V7000
using Easy Tier technology?
Review answers (1 of 2)
1. What are three tier levels supported with a Storwize V7000 using Easy
Tier technology?
The answer is Flash tier, Enterprise tier and Nearline tier.
Review questions (2 of 2)
4. True or False: Each copy of a mirrored volume can be mapped to its
own unique host.
6. True or False: Easy Tier can collect and analyze workload statistics
even if no SSD-based MDisks are available.
Review answers (2 of 2)
4. True or False: Each copy of a mirrored volume can be
mapped to its own unique host.
The answer is false.
Unit summary
• Recognize IBM Storage System Easy Tier settings and statuses at the
storage pool and volume levels
• Differentiate among fully allocated, thin-provisioned, and compressed
volumes in terms of storage capacity allocation and consumption
• Recall steps to create thin-provisioned volumes and monitor volume
capacity utilization of auto expand volumes
• Categorize Storwize V7000 hardware resources required for Real-time
Compression (RtC)
Overview
This unit discusses the data migration concept and examines the data migration options provided
by the IBM Spectrum Virtualize Software to move data across the Storwize V7000 managed
infrastructure.
References
Implementing the IBM Storwize V7000 Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Analyze data migration options available with Storwize V7000
• Implement data migration from one storage pool to another
• Implement data migration of existing data to Storwize V7000 managed
storage using the Import and System Migration Wizards
• Implement the Export migration from a striped type volume to image
type to remove it from Storwize V7000 management
• Differentiate between a volume migration and volume mirroring
In this topic, we review the concept of data migration and look at several options for performing data migration. We begin with the data migration concept.
Data migration
Moving workload (data extents) to:
• Balance usage distribution
• Move data to a lower-cost storage tier
(Figure: workload can be moved across the Storwize family and virtualized external systems such as DS3000, DS4000, DS5000, XIV, N series, NetApp, EMC, and HP storage.)
For volumes managed by the Storwize V7000, the mapping of volume extents to MDisk extents can be dynamically modified without interrupting or affecting a host's access to these volumes. The process of moving the physical location is known as data migration. Most implementations allow this to be done in a non-disruptive manner, that is, concurrently while the host continues to perform I/O to the logical disk (or LUN).
This capability can be used to redistribute workload within a Storwize V7000 cluster across back-end storage, such as:
• Moving workload to rebalance a changed workload.
• Moving workload onto newly installed storage capacity - either new disk drives to expand a
currently installed storage system or a new storage system.
• Moving workload to a lower-cost storage tier.
• Moving workload off older equipment in order to decommission that equipment.
In addition, migration of existing data to Storwize V7000 management takes place without data
conversion and movement. Once under Storwize V7000 management, transparent data migration
allows existing data to gain the benefits and flexibility of data movement without application
disruption.
There are two aspects to data migration. One is to move data from a non-Storwize V7000 environment to a Storwize V7000 environment (and vice versa). The other is to move data within the Storwize V7000 managed environment.
While host-based data migration software solutions are available, the Storwize V7000 import
capability can be used to move large quantities of non-Storwize V7000 managed data under
Storwize V7000 control in a relatively small amount of time.
Moving existing volumes of data to Storwize V7000 control (and vice versa) involves an interruption
of host or application access to the data. Moving data within the Storwize V7000 environment is not
disruptive to the host and the application environment.
(Figure: an existing 800 MB LUN, BLUDATA, is mapped one-to-one to the image mode volume BLUDATV; extents 5a through 5g back the volume, and the last extent is a partial extent.)
Image mode, one of the three virtualization types, facilitates the creation of a one-to-one direct mapping between a volume and an MDisk that contains existing data. Image mode simplifies the transition of existing data from a non-virtualized to a virtualized environment without requiring physical data movement or conversion.
The best practice recommendation is to have a separately defined storage pool set aside to house
SCSI LUNs containing existing data. Use the image type volume attribute to securely bring that
data under Storwize V7000 management. Once under Storwize V7000 management, migration to
the virtualized environment (striped type) is totally transparent from host access.
If desired, run with Storwize V7000 management but without virtualization (image mode). Migration
of the volume from the image virtualization to striped virtualization type can occur either
immediately or at a later point in time.
(Figure: extents (1a through 3c) are migrated from Storage PoolA on Storage SystemA to Storage PoolB on Storage SystemB; chunks are copied in 16 MB units while the extent mappings are updated.)
Since the volume represents the mapping of data extents rather than the data itself, the mapping
can be dynamically updated as data is moved from one extent location to another.
Regardless of the extent size the data is migrated in units of 16 MB. During migration the reads and
writes are directed to the destination for data already copied and to the source for data not yet
copied.
A write to the 16 MB area of the extent that is being copied (most likely due to Storwize V7000
cache destaging) is paused until the data is moved. If contention is detected in the back-end
storage system that might impact the overall performance of the Storwize V7000, the migration is
paused to allow pending writes to proceed.
Once an entire extent has been copied to the destination pool, the extent pointer is updated and the
source extent is freed.
For data to migrate between storage pools, the extent size of the source and destination storage
pools must be identical.
(Figure: storage system migration; SCSI LUNs from two RAID controllers are presented to the Storwize V7000 as MDisks.)
The volume migration (migratevdisk) function of the Storwize V7000 enables all the extents
associated with one volume to be moved to MDisks in another storage pool.
One use for this function is to move all existing data that is mapped by volumes in one storage pool on a legacy storage system to another storage pool on another storage system. The legacy storage system can then be decommissioned without impact to accessing applications.
Another example of usage is enabling the implementation of a tiered storage scheme using multiple
storage pools. Lifecycle management is facilitated by migrating aged or inactive volumes to a
lower-cost storage tier in a different storage pool.
(Figure: the 800 MB image type volume BLUDATV is migrated to the striped virtualization type; its extents move from the image mode MDisk to managed mode MDisks BLUE1 through BLUE4, each 300 GB.)
Image type volumes have the special property that their last extent might be a partial extent. The migration function of the Storwize V7000 allows one or more extents of the volume to be moved, and thus changes the volume from the image to the striped virtualization type. Several methods are available to migrate the image type volume to striped:
If the image type volume is mapped to a storage pool that is set aside to map only to image type
volumes then migrate the volume from that storage pool to another storage pool using the volume
migration function.
If the image type volume is mapped to the same storage pool as other MDisks to which the extents
are to be moved then the extent migration function facilitates:
• Migrating one extent of the image type volume. If the last extent is a partial extent then that
extent is automatically selected and migrated to a full extent.
• Migrating multiple extents.
• Migrating all the extents off the image mode MDisk.
Migrate to image
(Figure: the striped 800 MB volume is exported back to a single MDisk, which is then placed in unmanaged mode.)
An export option (migratetoimage) is available to reverse the migration from the virtualized realm
back to non-virtualized. Data extents associated with a striped type volume are collocated to an
empty or unmanaged destination MDisk. The volume is returned to the image virtualization type
with its destination MDisk placed in image access mode.
The image volume can then be deleted from Storwize V7000 management causing its related
MDisk or SCSI LUN to be removed from the storage pool and set in unmanaged access mode. The
SCSI LUN can then be unassigned from the Storwize V7000 ports and assigned directly to the
original owning host using the storage system’s management interfaces.
The migrate to image function also allows an image type volume backed with extents of an MDisk
in one storage pool to be backed by another MDisk in the same or different storage pool while
retaining the image virtualization type.
In essence, the volume virtualization type is not relevant to the migrate to image function. The
outcome is one MDisk containing all the data extents for the corresponding volume.
The extent migration (migrateexts) function is used to move data of a volume from extents associated with one MDisk to another MDisk within the same storage pool, without impacting host application data access.
When the Easy Tier function causes extents of volumes to move from HDD-based MDisks to SSD-based MDisks of a pool, migrateexts is the interface used for the extent movement.
When an MDisk is to be removed from a storage pool and that MDisk contains allocated extents, a forced removal of the MDisk causes data associated with those extents to be implicitly migrated to free extents among the remaining MDisks within the same storage pool.
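A hedged sketch of a manual extent move (MDisk and volume names are hypothetical); the number of extents, the source MDisk, and the target MDisk are given explicitly:
migrateexts -source mdisk1 -target mdisk2 -exts 4 -vdisk BLUDATV -threads 2
The command returns immediately; the extents are moved in the background.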
Data migration options:
• Import Wizard: create an image volume from existing data, then migrate it to striped.
• Export to image mode: migrate a striped volume to an image mode MDisk for export.
• System Migration Wizard: import multiple volumes with existing data and map them to hosts.
A wealth of data migration options is provided by the Storwize V7000. We will examine each of these options as we explore data migration.
You can move a volume to a different storage pool only if the destination pool has free space equal to or greater than the size of the volume. If the pool does not have enough space, the volume cannot be moved. Before performing a volume-to-pool migration, ensure that the volumes are in good status. You can issue the lsvdisk command to view the current status of a volume.
Migrating a volume to another pool also means its extents (the data that belongs to this volume) are
moved (actually copied) to another pool. The volume itself remains unchanged from the host’s
perspective.
Migrating a volume to another pool can be invoked by clicking Volumes > Volumes by Host and selecting the desired host in the Host Filter list. Right-click the volume entry, and then select Migrate to Another Pool from the menu list.
A list of storage pools eligible to receive the volume copy extents is displayed. The GUI only
displays target pools with the same extent size as the source pool and only if these pools have
enough free capacity needed for the incoming extents. Once you have selected a target pool, the
management GUI generates the migratevdisk command which causes the extents of the volume
copy to be migrated to the selected target storage pool.
A volume might potentially have two sets of extents, typically residing in different pools. The granularity of volume migration is at the volume copy level; therefore, the more technically precise terminology for a volume is actually a volume copy. Data migration occurs at the volume copy level and migrates all the extents associated with one volume copy of the volume.
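A hedged sketch of the underlying command, using names from this example (Basic-WIN1, Hybrid pool; the pool name is rendered without a space for the CLI); progress can be checked with lsmigrate:
migratevdisk -vdisk Basic-WIN1 -mdiskgrp Hybrid_Pool
lsmigrate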
(Figure: extents of the AIX_CHIPSV and Basic-WIN1 volumes in the Hybrid pool.)
As an extent is copied from the source pool to the destination pool the extent pointer for the volume
is updated to that of the destination pool. The extent in the source pool becomes free space eligible
to be reassigned to another volume.
Due to the Storwize V7000 implementation of volume extent pointers the volume migration is totally
transparent to the host. Nothing has changed from a host perspective. The fact that the copied
volume extents are now sourced by another pool is totally transparent and inconsequential to the
attaching host. I/O operations proceed as normal during the data migration.
Once the volume copy’s last extent has been moved to the target pool then the volume’s pool name
is updated to that of the target pool.
This topic reviews how to use the Import Wizard to bring a volume that contains existing data under
Storwize V7000 control as an image type volume.
Finally, we will examine the list of procedures to be completed once the volume has been migrated
to its new pool.
(Figure: a storage box on the SAN presenting the APPLUN LUN.)
An image mode volume provides a direct block-for-block translation from the managed disk (MDisk)
to the volume with no virtualization. This mode is intended to provide virtualization of MDisks that
already contain data that was written directly, not through a Storwize V7000. Image mode volumes
have a minimum size of 1 block (512 bytes) and always occupy at least one extent.
To preserve LUN data that are hosted on external storage systems, the LUN must be imported into
IBM Storwize V7000 environment as an image-mode volume using the Import option. Hosts that
were previously directly attached to those external storage systems can continue to use their
storage that is now presented by the IBM Storwize V7000 instead.
Do not leave volumes in image mode. Only use image mode to import or export existing data into or
out of the IBM Storwize V7000. Migrate such data from image mode MDisks to other storage pools
to benefit from storage virtualization.
If you need to preserve existing data on the unmanaged MDisks, do not assign them to the pools
because this action deletes the data.
(Figure: the APPLUN LUN is detected as an unmanaged mode MDisk, brought in as image type volume APP3VOL, and then migrated to the striped type.)
Importing LUNs with existing data from an external storage system into the Storwize V7000 environment involves the following:
1. The LUN being imported to the Storwize V7000 has to be unassigned from the host in the storage box. The application that had been using that LUN obviously has to take an outage. The LUN then needs to be reassigned to the Storwize V7000.
2. Detect the MDisk (LUN) on the Storwize V7000, where it becomes an unmanaged mode MDisk.
3. Rename the unmanaged MDisk to correlate it to the LUN's application.
4. Import the unmanaged MDisk; the GUI Import Wizard performs the following:
i. Defines a migration pool. If the Import option is used and no existing storage pool is chosen, a temporary migration pool is created to hold the new image-mode volume.
ii. Creates an image volume and MDisk pair in the migration pool. All existing data volumes brought under Storwize V7000 management with the wizard have an image type copy initially. The option to add a striped volume copy is then offered as part of the import process. Subsequent writes to both volume copies are maintained by the Volume Mirroring function of the Storwize V7000 until the image volume copy is deleted.
iii. Migrates the image volume to a striped volume. Do not leave volumes in image mode; only use image mode to import or export existing data into or out of the IBM Storwize V7000.
Some additional host housekeeping is typically involved. For example, in the UNIX environment this generally entails unmounting the file system, then varying off and exporting the volume group. There might be host OS-unique activities, such as removing the hdisk devices associated with the volume group in AIX. Analogous activity for Windows might involve the removal of the drive letter to take the drive (application) offline.
Depending on the OS and the storage systems previously deployed, the multipathing driver software might need to be replaced, which generally requires a host reboot.
Supports multi-LUN migration concurrently
The LUN needs to be imported to the Storwize V7000 for management. If the external controller is not listed in the Pools > External Storage view, you will need to perform a SAN device discovery by selecting Actions > Discover storage. The GUI issues the detectmdisk command to cause the Storwize V7000 to perform Fibre Channel SAN device discovery.
The newly detected LUN is treated as an unmanaged MDisk with an assigned default name and object ID. There is no interface for the Storwize V7000 to discern whether this MDisk contains free space or existing data. You will need to confirm that the correct external storage volume has been discovered by examining the details of the MDisk. Right-click the MDisk and select Properties.
It is a best practice to rename the MDisk to clarify its identity or to match the name of the LUN being imported from the external storage system. To do so, right-click the MDisk and select the Rename option from the menu. Specify a name and click the Rename button.
To start the import to image mode process, right-click an unmanaged MDisk that correlates to the external storage LUN and select Import from the drop-down menu. The Import Wizard guides you through a quick import process to bring the volume's existing data under Storwize V7000 management.
There are two methods by which you can import existing data:
• The Import to temporary pool as image-mode volume option allows you to virtualize existing data from the external storage system without migrating the data from the source MDisk (LUN), and then present it to the host as an image mode volume. This data becomes accessible through the IBM Storwize V7000 system while still residing on the original LUN of the back-end storage system.
• The Migrate to existing pool option allows you to create an image mode volume and start migrating the data to the selected storage pool. After the migration process completes, the data is removed from the original MDisk and placed on the MDisks in the target storage pool.
(Figure: the DS3K MDisk is placed into MigrationPool_1024, which has a 1024 MB extent size, and becomes the image mode MDisk behind the new volume.)
For this example, we chose to Import to temporary pool as image-mode volume. During this
process, the MDisk transitions from unmanaged mode to image mode. Immediately after, the
image type volume is migrated to the MigrationPool to become virtualized. Migrating the image
volume to striped is to be performed later and outside the control of the Import Wizard.
The MigrationPool_1024 is normally used as a vehicle to migrate data from existing external
LUNs into storage pools, either located internally or externally, on the IBM Storwize V7000. You
should not use image-mode volumes as a long-term solution for reasons of performance and
reliability.
(Figure: the image mode volume and its DS3K MDisk form a pair in MigrationPool_1024.)
The Wizard generates several tasks. It first creates a storage pool called MigrationPool_1024 using
the same extent size (-ext 1024) as the intended target storage pool.
The mkvdisk command is used to concurrently perform two functions. It places the DS3K MDisk
into the MigrationPool_1024 and at the same time creates an image type volume based on this
MDisk. At this point, there is a one-to-one relationship between the MDisk and the volume. This
volume’s extents are all sourced from this MDisk. The MDisk has an access mode of image and the
volume has a virtualization type of image. You will notice there is no reference to the volume’s
capacity as it is implicitly derived from the capacity of its MDisk.
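A hedged sketch of the kind of commands the wizard generates (the MDisk and object names are illustrative); note that no -size is given because an image volume inherits the capacity of its MDisk:
mkmdiskgrp -name MigrationPool_1024 -ext 1024
mkvdisk -mdiskgrp MigrationPool_1024 -iogrp io_grp0 -vtype image -mdisk mdisk10 -name DS3K_LUN10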
(Figure: the APPLUN MDisk in image mode is paired with its image mode volume in the migration pool.)
In terms of host access to the existing data, as soon as the mkvdisk command completes, the
volume can be mapped to the host object that was previously using the data that the MDisk now
contains. The GUI issues the mkvdiskhostmap command to create a new mapping between a
volume and a host. This makes the image mode volume accessible for I/O operations to the host.
After the volume is mapped to a host object, the volume is detected as a disk drive with which the
host can perform I/O operations.
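A hedged sketch of the mapping command; the host name, SCSI ID, and volume name are illustrative:
mkvdiskhostmap -host NAVYWIN1 -scsi 0 DS3K_LUN10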
(Figure: APP3VOL after migration to the striped type.)
To virtualize the storage on an image mode volume, the volume needs to be transformed into a striped volume. This process migrates the data on the image mode volume to managed mode MDisks in another storage pool. Issue the migratevdisk command to migrate an entire image mode volume from one storage pool to another storage pool.
(Figure: the input LUN is no longer used; rename the volume to correlate it to the external storage LUN name.)
The migration is now complete. The MigrationPool_1024 pool no longer contains the volume or its allocated capacity; even though the volume count for this pool is zero, the managed mode MDisk is still in this pool.
The default name given to the volume created by the Import Wizard is a concatenation of the storage system name followed by the MDisk LUN number; the volume can now be renamed. As a best practice, use the Rename option to rename the volume to a more descriptive name, typically to identify it as being used by its assigned host.
(Figure: the volume's extents are distributed across MDisk01 (5 extents) and MDisk02 (6 extents) in the Hybrid pool, which has a 1024 MB extent size.)
Click the Member MDisks tab of the volume details panel to display the MDisks supplying extents
to this volume. By default, the Storwize V7000 attempts to distribute extents of the volume across
all MDisks of the pool.
All extents of the MDisk have been freed.
Remember that the MDisk's access mode became managed when the migratevdisk process
began.
Delete the MigrationPool
• The volume and MigrationPool_1024 can now be deleted.
• You have the option to keep the MigrationPool for subsequent imports.
Having migrated the volume data from the original LUN to a new storage pool, the MDisk and the temporary MigrationPool_1024 storage pool are no longer needed.
To finalize the import migration, the image type volume is deleted, and its corresponding MDisk is automatically removed from the storage pool. The empty MigrationPool_1024 can either be deleted or kept for subsequent imports. The data migration to the IBM Storwize V7000 is done.
Additional steps need to be performed to unassign the LUNs in the storage system from the Storwize V7000 cluster.
The MDisk is now unmanaged and no longer being used by the Storwize V7000.
Return to the external storage system and unassign the LUNs from the Storwize V7000 host group (Storwize V7000 cluster). This prevents the LUN from being detected the next time the Storwize V7000 performs SAN device discovery. Consequently, the Storwize V7000 removes the MDisk entries from its inventory.
If the external storage device is scheduled for decommissioning, the SAN zoning needs to be updated so that the Storwize V7000 can no longer see its FC ports.
Volumes have been migrated to their destination pools. Due to the virtualization provided by the
Storwize V7000, the storage infrastructure changes can be made without impact to the host
applications. Therefore, nothing has changed from a host perspective. Host I/O operations proceed
as normal.
This topic discusses the Export to image mode option to remove a striped type volume from Storwize V7000 management. It also highlights the steps to reassign the volume data to the host directly from a storage system.
(Figure: the extents of a volume copy, spread across MDisk1 through MDisk4 in the RAID10 pool, are migrated to a single unmanaged MDisk with the same capacity as the volume or bigger, in a pool with the same 1024 MB extent size.)
The process to export or revert a striped type volume back to image is transparent to the host. The
migratetoimage function is used to relocate all extents of a volume to one MDisk of a storage
system and to recreate the image mode pair. The image type volume is then deleted and the
unmanaged MDisk can then be removed from Storwize V7000 management and reassigned to the
host from the owning storage system. The deletion of the Storwize V7000 volume and the
reassignment of the storage system LUN to the host is disruptive to the host and its applications.
In the example, the Storwize V7000 export volume function (migratetoimage) enables all the
extents associated with a volume copy to be relocated to just one destination MDisk. The access
mode of the MDisk must be unmanaged for it to be selected as the destination MDisk. The capacity
of this MDisk must be either identical to or larger than the capacity of the volume.
As a result of the export process, the volume copy’s virtualization type changes to image and its
extents are sourced sequentially from the destination MDisk.
The image volume and MDisk pair can reside in any pool as long as the resident pool has the same
extent size as the pool that contained the volume copy initially.
Issue the following command to create an export pool using the same extent size:
mkmdiskgrp -ext 1024 -name ExportPool_1024
IBM Storwize:V009B:V009B1-admin>lsmdiskgrp 5
id 5
name ExportPool_1024
status online
mdisk_count 0
vdisk_count 0
capacity 0
extent_size 1024
free_capacity 0
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
As a general practice, image mode pairs should be kept in a designated migration pool instead of being intermingled in a pool with striped volumes. A preparatory step needed prior to exporting the volume copy is to have a storage pool with the same extent size as the volume's storage pool.
In this case, you will need to determine the extent size of the pool in which the volume to be exported resides. Based on the pool's extent size of 1024, create an ExportPool_1024 with the same extent size.
The subsequent lsmdiskgrp 5 command shows the details for the storage pool just created and confirms the extent size of 1024 MB for the empty pool.
Image mode provides a direct block-for-block translation from the MDisk to the volume with no virtualization. An image mode MDisk is associated with exactly one volume. This feature can be used to export a volume to a non-virtualized disk and to remove the volume from storage virtualization, for example, to map it directly from the external storage system to the host. If you have two copies of a volume, you can choose one to export to image mode. To export a volume copy from striped to image mode, right-click the volume and select Export to Image Mode from the menu list.
From the Export to Image Mode window, select an unmanaged MDisk that is the size of the volume
copy or larger to export the volume’s extents. In this example, we selected the APP3VOL MDisk of
the same capacity that is still in the Storwize V7000 inventory as an eligible destination MDisk. Click
Next.
Select the ExportPool_1024 storage pool for the new image mode volume and click Finish.
Since the target storage pool has to have the same extent size as the source storage pool, this pool was pre-defined for that purpose. The target storage pool may be an empty pool, in which case the selected MDisk will be the target pool's only member at the end of the migration procedure. However, the target storage pool does not have to be empty; it can have other image mode or striped MDisks. If you have image and striped MDisks in the same pool, volumes created in this pool will only use striped MDisks, because MDisks that are in image mode already have an image mode volume created on top of them and cannot be used as an extent source for other volumes.
The GUI generates the migratetoimage command to identify the destination MDisk to use for the
volume copy and the pool to contain the image mode pair.
The Running Task status indicates one migration task is now in progress.
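A hedged sketch of the generated command; the names follow this example, where the destination MDisk was renamed APP3VOL to match the volume:
migratetoimage -vdisk APP3VOL -mdisk APP3VOL -mdiskgrp ExportPool_1024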
(Figure: the four extents of APP3VOL volume copy 1 move from the RAID10 pool to the single APP3VOL MDisk in ExportPool_1024.)
The migratetoimage command causes the data extents for the selected volume to be migrated
from its current MDisks to one destination MDisk.
The Member MDisks tab of the volume detail shows the redistribution snapshot of the volume’s
extents from the RAID10 storage pool to the one MDisk of the ExportPool_1024 pool.
At the completion of the export process, the ExportPool_1024 pool contains one image access mode MDisk with no free space. All the extents of this MDisk have been assigned to the volume being exported.
The Volumes > Volumes by Host view confirms that the APP3VOL volume is still mapped to the host. This volume now has the ExportPool_1024 pool as backing storage. Nothing has changed from a host perspective. During the migration to image mode, I/O operations continue to proceed as normal.
Right-click the selected volume to view the volume details. This panel confirms that the volume is
an image virtualization type with all of its extents from the APP3VOL MDisk.
Remove the volume and MDisk from Storwize V7000 control and present the former MDisk as a
DS3K LUN to the Windows host.
First, stop application activity. Either remove the drive letter in Windows to take the drive offline, or
shut down the Windows host.
From the Storwize V7000 GUI select the host system and right-click the volume you want to delete.
Next, select the Delete option from the menu list. Since the volume copy is the image type, the
MDisk backing the image type volume is removed from the storage pool and becomes unmanaged.
Ensure the correct volume is selected. If the volume has a striped type then the data extents
typically span multiple MDisks and these extents are freed. Like most if not all storage systems,
there is no volume undelete function.
The GUI requires a confirmation that the correct volume has been selected for deletion. To delete
the host mapping (since this volume had been mapped to NAVYWIN1) verify the correct volume is
listed. Check the box to Delete the volume even if it has host mappings or is used in
FlashCopy mappings or remote copy relationships and click the Delete button.
The GUI-generated rmvdisk command contains the -force parameter so that the host mapping is deleted along with the volume.
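A hedged sketch of the equivalent CLI for this example; -force removes the host mapping along with the volume:
rmvdisk -force APP3VOL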
Since the storage system LUN is to be directly assigned to the host, you will need to update host
SAN zoning to enable access to the storage system. Also verify the appropriate device drivers have
been installed on the host.
From the storage system, the example shows how to reassign the unmanaged mode MDisk from
the Storwize V7000 to the Windows host. For this process, you need to ensure that the correct LUN
is chosen by verifying the LUN number of the unmanaged mode MDisk and that the correct storage
system is being updated.
You will need to reboot the Windows server, particularly if different drivers need to be installed. Windows recognizes the volume label and will attempt to reassign the same drive letter if it is available.
This topic discusses the procedures to migrate existing data on external storage systems using the IBM Spectrum Virtualize storage system migration wizard.
(Figure: a storage box on the SAN presenting several APPLUN LUNs.)
IBM Spectrum Virtualize Storage System Migration is a wizard-based tool that is designed to simplify the migration task. The wizard features easy-to-follow panes that guide you through the entire migration process.
System Migration uses Volume Mirroring instead of the migratevdisk command to migrate existing data into the virtualized environment. As with the Import Wizard, this step is optional.
(Figure: for a given LUN, the image mode APPLUN MDisk and its image type volume copy (copy 0) reside in MigrationPool_8192, which enables the import of large capacity LUNs, while the striped copy (copy 1) of APPVOL is built on MDisks in Other_Pool_1024, a pool of any extent size.)
To start the Migration Wizard, click Pools in the navigation tree and select System Migration from the quick navigation drop-down list. Then click the Start New Migration button.
The Migration Wizard generates step-by-step commands to:
a. Define a migration pool with an extent size of 8192. The large extent size enables the Migration Wizard to support the import of extremely large capacity LUNs to Storwize V7000 management.
b. Create an image volume and MDisk pair in the migration pool. All existing data volumes brought under Storwize V7000 management with the Migration Wizard have an image type copy initially. The option to add a striped volume copy is then offered as part of the import process. Subsequent writes to both volume copies are maintained by the Volume Mirroring function of the Storwize V7000 until the image volume copy is deleted.
c. If a host has not been defined yet, provide additional guidance as part of the import process to create the host object.
d. Map the image volumes to the host object.
e. Add a mirrored copy to each image volume to mirror the volume data to an appropriate storage pool.
f. Finalize: remove the image copy of each volume.
Before you begin migrating external storage, confirm that the restrictions and prerequisites are met. The Storwize V7000 system supports migrating data from an external storage system using direct serial-attached SCSI (SAS) connections, Fibre Channel, or Fibre Channel over Ethernet connections. The list of excluded environments is not built into the guided Migration Wizard procedure.
• Cable this system into the SAN of the external storage that you want to migrate.
Ensure that your system is cabled into the same storage area network (SAN) as the external
storage system that you are migrating. If you are using Fibre Channel, connect the Fibre
Channel cables to the Fibre Channel ports in both canisters of your system, and then to the
Fibre Channel network. If you are using Fibre Channel over Ethernet, connect Ethernet cables
to the 10 Gbps Ethernet ports.
• Change VMWare ESX host settings, or do not run VMWare ESX.
If you have VMware ESX server hosts, you must change settings on the VMWare host so
copies of the volumes can be recognized by the system after the migration is completed. To
enable volume copies to be recognized by the system for VMWare ESX hosts, you must
complete one of the following actions:
▪ Enable the EnableResignature setting.
▪ Disable the DisallowSnapshotLUN setting.
To learn more about these settings, consult the documentation for the VMWare ESX host.
The following are required to prepare external storage systems and IBM Storwize V7000 for data
migration.
• In order for the IBM Storwize V7000 to virtualize external storage, a per-enclosure external
virtualization license is required. You can temporarily set the license without any charge only
during the migration process. Configuring the external license setting prevents messages from
being sent that indicate that you are in violation of the license agreement. When the migration is
complete, the external virtualization license must be reset to its original limit.
• I/O operations to the LUNs must be stopped and changes made to the mapping of the storage
system LUNs and to the SAN fabric zoning. The LUNs must then be presented to the Storwize
V7000 and not to the hosts.
• The hosts must have the existing storage system multipath device drivers removed, and then be configured for Storwize V7000 attachment. This might require further zoning changes to be made for host-to-V7000 SAN connections.
• The Storwize V7000 discovers the external LUNs as unmanaged MDisks.
To ensure that data is not corrupted during the migration process, all I/O operations on the host side must be stopped. In addition, SAN zoning needs to be modified to remove the zones between the old external storage system and the host, and to add zones between the Storwize V7000 and the old external storage system.
Before migrating storage, the administrator should record the hosts and their WWPNs for each volume that is being migrated, and the SCSI LUN ID with which it is mapped to this system.
You can use the external storage DS Storage Manager Client interface to verify the mapping of host LUNs to the Storwize V7000 host group. This remapping of LUNs to the Storwize V7000 host group can be performed either prior to invoking the Migration Wizard or before the next step in the Migration Wizard.
The LUN number assigned to the logical drives can be any LUN number. In this example, by default the DS3400 storage unit uses the next available LUN numbers for the target host or host group. The LUN number is assigned as LUN 1 for APP1DB. The logical drive ID of the LUN should match the worldwide unique LUN names reported by the QLogic HBA management interface.
Right-click to rename each MDisk to correlate it to the LUNs on the external storage system.
The Storwize V7000 management GUI issues the svctask detectmdisk command to scan the environment and detect the available LUNs that have been mapped to the Storwize V7000 host group. The lsdiscoverystatus command can be used to verify that the discovery has completed, and the newly detected LUNs appear as unmanaged MDisks to be assigned to the V7000. If the MDisks were not renamed during the GUI external system discovery, you can right-click each MDisk to rename it to correspond to the LUN from the external storage system.
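A hedged CLI sketch of the discovery step; filtering lsmdisk by mode is one way to list only the newly detected unmanaged MDisks:
detectmdisk
lsdiscoverystatus
lsmdisk -filtervalue mode=unmanaged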
Right-click an MDisk and select Properties to verify that the MDisk UID matches the LUN UID.
The Migration Wizard supports concurrently importing multiple unmanaged MDisks. The LUNs are presented as unmanaged mode MDisks. The LUN numbers are in the 0 to 255 range and are surfaced to the Storwize V7000 as a 64-bit number with the low-order byte containing the external storage assigned LUN number in hexadecimal format. The MDisk properties provide additional confirmation that includes the storage system name and UID.
(Figure: unmanaged MDisks mdisk1, mdisk2, and mdisk3, corresponding to LUNs 10, 13, and 16, are placed into MigrationPool_8192.)
For each of the selected MDisks, an svctask mkvdisk command is generated to create the one-to-one volume pair with a virtualization type of image. The image mode means that the volume is an exact image of the LUN that is on the external storage system, with its data completely unchanged. Therefore, the Storwize V7000 is simply presenting an active image of the external storage LUN.
The svctask mkmdiskgrp command is used to create a MigrationPool whose extent size is 8192 MB. Using the largest extent size possible for this pool enables MDisk addressability when importing extremely large capacity LUNs.
The unmanaged MDisks are moved into the migration pool with an access mode of image, and a corresponding image type volume is created for each with all of its extents pointing to that MDisk. The name assigned to each volume follows the format of the storage system name concatenated with the storage system assigned LUN number for the MDisk.
As with all Storwize V7000 objects, an object ID is assigned to each newly created volume. As a preferred practice, map the volume to the host with the same SCSI ID it had before the migration.
Before you proceed to map image volumes to a host, you need to verify that the potential host system has the supported drivers installed and is properly zoned within the Storwize V7000 SAN fabric.
If a host object has not been defined to the Storwize V7000 yet, click the Add Host option. Configuring host objects using the System Migration Wizard is optional, as it can be performed after volumes have been migrated to a specified pool.
The System Migration Map Volumes to Hosts (optional) pane presents the image volumes under default names that contain the name of the external storage system along with the corresponding MDisk name. The columns within the wizard can be customized to display additional information, such as the volume object IDs.
From this pane, the selected image volumes can now be mapped to the desired host. This task can be completed using the Map to Host option, or by selecting Map to Host from the Actions menu.
With today's SAN-aware operating systems and applications, a change in the SCSI ID (LUN number) of a LUN presented to the host is not usually an issue. Windows behavior is consistent: it is not a problem for a disk to be removed from the system and then re-presented with a different ID/LUN number. Windows will typically reassign the same drive letter if it is still available.
Once the image volumes are mapped to the host object, host device discovery can be performed. It
might be appropriate to reboot the server as part of the host device discovery effort.
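If you prefer the CLI, a hedged example of mapping an image volume to an existing host object follows; the host name, volume name, and the SCSI ID of 1 are assumptions for illustration:

svctask mkvdiskhostmap -host WIN1 -scsi 1 DS3400_APP1DB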
Migrating image volumes to a selected pool is optional. If you want to migrate these volumes to the virtualized environment (virtualization type of striped), select a target pool.
Unlike the Import Wizard, the Migration Wizard uses the Storwize V7000 Volume Mirroring function (instead of migratevdisk) to implement the migration to the striped virtualization type. The GUI generates one addvdiskcopy command to create a second volume copy (copy 1) for each volume. Because Volume Mirroring is used, the target pool extent size does not need to match the migration pool extent size.
If no target pool is selected at this step, the volumes and their corresponding MDisks are left as image-mode pairs. The System Migration Wizard can be invoked at a later point in time to complete the migration to the virtualized environment.
The GUI starts volume synchronization on each volume copy. This part of the System Migration Wizard is complete. However, the end of the Storage Migration Wizard is not the end of the data migration process. Click the Finish button.
Because the Storwize V7000 environment is virtualized and the volumes have been successfully mapped to the host, the host has continuous access to the volume data while the migration occurs in the background. The application can be restarted, and the host has no awareness of the migration process.
After Volume Mirroring synchronization has reached 100%, you can finalize the migration process. The image copy (copy 0) is no longer needed because the data has been migrated into the Storwize V7000 virtualized environment.
From the System Migration pane, select the Finalize option. The finalization deletes each copy 0 image volume copy from the Storwize V7000 inventory.
• MigrationPool_8192 can either be deleted or kept for subsequent imports.
• The migrated volumes can be renamed to correlate to the original LUN data.
When the finalization completes, the image-type volume copies are deleted and their corresponding MDisks are automatically removed from the storage pool. The empty MigrationPool_8192 can either be deleted or kept for subsequent imports. The data migration to the IBM Storwize V7000 is done.
Additional steps still need to be performed to unassign the LUNs in the old storage system from the Storwize V7000 cluster and then to unzone and remove the external storage system from the Storwize V7000 SAN fabric.
This topic discusses how volume mirroring can be used to migrate data from one pool to another
pool.
Volume Mirroring is a function where Spectrum Virtualize software stores two copies of a volume
and maintains those two copies in synchronization. Volume mirroring is a simple RAID 1-type
function that allows a volume to remain online even when the storage pool backing it becomes
inaccessible.
Volume mirroring is designed to protect the volume from storage infrastructure failures that might impact the availability of critical data or applications, by seamlessly mirroring between storage pools. Accordingly, Volume Mirroring is a local high availability function and is not intended to be used as a disaster recovery function.
Starting with V7.3 and the introduction of the new cache architecture, mirrored volume performance has been significantly improved. The lower cache is now beneath the volume mirroring layer, which means each copy has its own cache. This approach helps when the copies are of different types, for example generic and compressed, because each copy now uses its own independent cache and performs its own read prefetch. Destaging of the cache can also be done independently for each copy, so one copy does not affect the performance of the other copy.
Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the destaging process for each copy independently, depending on MDisk type and utilization.
• You can use volume mirroring to migrate data to a different storage pool: after the new copy is synchronized, you can delete the original copy that is in the source storage pool. During the synchronization process, the volume remains available even if there is a problem with the destination storage pool.
• In addition, you can use volume mirroring to mirror the host data between two independent storage systems (primary and secondary) at separate sites.
The ability to create a volume copy affords additional management flexibility. When a mirrored volume is created, both copies use the same virtualization policy and can be created as striped, sequential, or image volumes. Volume mirroring also offers non-disruptive conversions between fully allocated volumes and thin-provisioned volumes.
A volume copy can also be added to an existing volume. In this case, the two copies do not have to
share the same virtualization policy. When a volume copy is added, the Spectrum Virtualize
software automatically synchronizes the new copy so that it contains the same data as the existing
copy.
Each volume copy has its own:
• Storage pool
• Extent size (for example, 512 MB in Pool1 and 1024 MB in Pool2)
• Virtualization type
• Fully allocated or thin-provisioned allocation
A volume can be migrated from one storage pool to another and acquire a different extent size. The
original volume copy can either be deleted or you can split the volume into two separate volumes –
breaking the synchronization. The process of moving any volume between storage pools is
non-disruptive to host access. This option is a quicker version of the “Volume Mirroring and Split
into New Volume” option. You might use this option if you want to move volumes in a single step or
you do not have a volume mirror copy already.
• Enhancements
ƒ Support for different tiers in the mirror.
By default, volume copy 0 is assigned as the primary copy of the volume. From an I/O processing point of view, under normal conditions reads and writes always go through the primary copy. Writes are also sent to volume copy 1 so that synchronization is maintained between the two volume copies. The location of the primary copy can also be changed by the user, either to account for load balancing or for possibly different performance characteristics of the storage backing each copy.
If the primary copy is unavailable - for example, volume copy 0's pool became unavailable because its storage system was taken offline - the volume remains accessible to assigned servers. Reads and writes are handled with volume copy 1. The Storwize V7000 tracks the changed blocks of volume copy 1 and resynchronizes these blocks with volume copy 0 when it becomes available. Reads and writes then revert back to volume copy 0. It is also possible to set volume copy 1 as the primary copy if desired.
One of the simplest ways to create a volume copy is to right-click a particular volume and select Add Volume Copy from the menu. This task creates a mirrored volume with two copies of its extents. This procedure allows you to place the mirrored volume copies in a single pool, or to specify a primary and a secondary pool to migrate data between two storage pools.
The summary statement calculates the real and virtual capacity values of the volume. The virtual capacity is the size presented to hosts and to Copy Services such as FlashCopy and Metro/Global Mirror.
The addvdiskcopy command adds a copy to an existing volume, which changes a non-mirrored
volume into a mirrored volume. Use the -copies parameter to specify the number of copies to add
to the volume; this is currently limited to the default value of 1 copy. Use the -mdiskgrp parameter
to specify the managed disk group that will provide storage for the copy; the lsmdiskgrp CLI
command lists the available managed disk groups and the amount of available storage in each
group.
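As a hedged example (the pool and volume names are illustrative), adding a copy to an existing volume from the CLI might look like this:

lsmdiskgrp                                    # check available capacity in the candidate pools
svctask addvdiskcopy -mdiskgrp Pool2 APPVOL   # add copy 1 of volume APPVOL in Pool2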
You can also create mirrored volumes using the GUI Create Volumes Mirrored and Custom preset options. The Mirrored preset creates mirrored volumes with predefined parameters such as the volume format and the default sync rate.
The Custom preset allows you to modify specific parameters, such as the sync rate, which specifies the rate at which the volume copies resynchronize after a loss of synchronization.
The new mirrored volume entry displays two copies. By default, the asterisk associated with volume copy 0 identifies the primary copy. This copy is used by the Storwize V7000 storage system for reads and writes. The addvdiskcopy request added copy 1 for this volume. Copy 1 is used for writes only.
The two volume copies need to be synchronized. Spectrum Virtualize Volume Mirroring automatically copies the data of copy 0 to copy 1 while supporting concurrent application reads and writes.
The Running Tasks status bubble indicates that one volume synchronization task is running in the
background. You can click within the display to view the task progress.
When a server writes to a mirrored volume, the system writes the data to both copies. If the primary volume copy is available and synchronized, any reads from the volume are directed to it. However, if the primary copy is unavailable, the system uses the secondary copy for reads. Volume Mirroring supports two possible values for the I/O time-out configuration (attribute mirror_write_priority):
• Latency (default value): short time-out prioritizing low host latency. This option indicates a copy
that is slow to respond to a write I/O goes out of sync if the other copy successfully writes the
data.
• Redundancy: long time-out prioritizing redundancy. This option indicates a copy that is slow to
respond to a write I/O may use the full Error Recovery Procedure (ERP) time. The response to
the I/O is delayed until it completes to keep the copy in sync if possible.
Volume Mirroring ceases to use the slow copy for a period of 4 to 6 minutes, so subsequent I/O data is not affected by the slow copy. Synchronization is suspended during this period. After the copy suspension completes, Volume Mirroring resumes I/O and synchronization operations to the slow copy, which typically completes the synchronization shortly afterward.
The volume property details confirm that the two volume copies are identical but assigned to different storage pools. The capacity bar for the volume copies indicates that both copies are fully allocated volumes, with writes performed on both copies. The synchronization or background copy rate defaults to 50 (depending on the method used to create the mirrored volume), which corresponds to 2 MBps. You can change the synchronization rate to one of the specified rates to increase the background copy rate. You can issue a chvdisk -syncrate command to change the synchronization rate using the CLI.
The background synchronization rate can be monitored from the Monitoring > Performance view.
The default synchronization rate is typically too low for Flash drive mirrored volumes. Instead, set
the synchronization rate to 80 or above.
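As a minimal sketch (the volume name and the rate value of 80 are illustrative), the CLI change would look like the following:

svctask chvdisk -syncrate 80 APPVOL   # raise the mirror synchronization rate for volume APPVOL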
Volume Mirroring processing is independent of the storage pool extent size. When a volume copy is created, the volume has one set of extents (copy 0) and a second set of extents created for the secondary volume copy (copy 1). The two sets of extents, or volume copies, can reside in the same or different storage pools.
Using volume mirroring instead of volume migration is beneficial because, with volume mirroring, the storage pools do not need to have the same extent size, as is the case with volume migration. Volume mirroring also eliminates the impact to volume availability if one or more MDisks, or the entire storage pool, fails. If one of the mirrored volume copies becomes unavailable, updates to the volume are logged by the Storwize V7000, allowing for the resynchronization of the volume copies when the mirror is reestablished. The resynchronization between both copies is incremental and is started by the Storwize V7000 automatically. Therefore, volume mirroring provides higher availability to applications at the local site and reduces or minimizes the requirement to implement host-based mirroring solutions.
The primary copy is used by the Storwize V7000 for both reads and writes. You can change volume copy 1 to be the primary copy by right-clicking its entry and selecting Make Primary from the menu list.
The GUI generates the chvdisk -primary command to designate volume copy 1 as the primary copy for the selected volume ID.
A use case for designating volume copy 1 as the primary copy is the migration of a volume to a new storage system. For a test period, it might be desirable to have both the read and write I/Os directed at the new storage system while still maintaining a copy in the storage system scheduled for removal.
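A hedged CLI equivalent (the volume name is an assumption) would be:

svctask chvdisk -primary 1 APPVOL   # make copy 1 the primary copy of volume APPVOL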
You can convert a mirrored volume into a non-mirrored volume by deleting one copy or by splitting one copy to create a new non-mirrored volume. During the deletion process for one of the volume copies, the management GUI issues an rmvdiskcopy command with the -copy parameter specifying the copy ID (in this example, copy 0). Once the process is complete, only volume copy 1 of the volume remains. If volume copy 1 was a thin-provisioned volume, it is automatically converted to a fully allocated copy. The volume can now be managed independently by Easy Tier, based on the activity associated with the extents of the individual volume copy.
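A minimal CLI sketch of the same operation (the volume name is an assumption):

svctask rmvdiskcopy -copy 0 APPVOL   # delete copy 0; the volume keeps running on copy 1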
The two copies created by volume mirroring can be split apart, and either of the copies can be retained to support the active volume. The remaining copy is available as a static version of the data. This capability can be used to migrate a volume between storage pools (managed disk groups) with different extent sizes.
Volume mirroring does not create a second volume before you split copies. Volume mirroring adds
a second copy of the data under the same volume so you end up having one volume presented to
the host with two copies of data connected to this volume. Only splitting copies creates another
volume and then both volumes have only one copy of the data.
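For illustration (the volume names are hypothetical), splitting a synchronized copy into a new volume from the CLI could be done as follows:

svctask splitvdiskcopy -copy 1 -name APPVOL_static APPVOL   # copy 1 becomes the new volume APPVOL_static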
Although the two volume copies are identical, they appear to the host as one volume. If one of the
mirrored volume copies is temporarily unavailable, for example, because the storage system that
provides the storage pool is unavailable, the volume remains accessible to servers. The system
remembers which areas of the volume are written and resynchronizes these areas when both
copies are available. The secondary can service read I/O when the primary is offline without user
intervention. All volume migration activities occur within the Storwize V7000 and are totally transparent to attached servers and user applications.
To protect against mirrored volumes being taken offline, and to ensure the high availability of the system, follow the guidelines for setting up quorum disks, where multiple quorum candidate disks are allocated on different storage systems.
The Storwize V7000 system maintains quorum disks, which contain a reserved area that is used exclusively for system management to record a backup of the system configuration data to be used in
the event of a disaster. Volume mirroring maintains some state data on the quorum disks. If a
quorum disk is not accessible and volume mirroring is unable to update the state information, a
mirrored volume might need to be taken offline to maintain data integrity.
Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs
because synchronization status for mirrored volumes is recorded on the quorum disk.
• When creating a mirrored volume, you can have a maximum of two copies. Both
copies will be created with the same virtualization policy. The first Storage Pool specified will
contain the primary copy.
▪ To have a volume mirrored using different policies, you need to add a volume copy with a
different policy to a volume that has only one copy.
▪ Both copies can be located in different Storage Pools.
▪ It is not possible to create a volume with two copies when specifying a set of MDisks.
• You can add a volume copy to an existing volume. Each volume copy can have a different
space allocation policy. However, the two existing volumes with one copy each cannot be
merged into a single mirrored volume with two copies.
• You can remove a volume copy from a mirrored volume; only one copy then remains.
• You can split a volume copy from a mirrored volume and create a new volume with the split
copy. This function can only be performed when the volume copies are synchronized;
otherwise, use the -force parameter.
▪ Volume copies cannot be recombined after they have been split.
▪ Adding and splitting in one workflow enables migrations that are not currently allowed.
▪ The split volume copy can be used as a means for creating a point-in-time copy (clone).
• You can expand or shrink both of the volume copies at once.
▪ All volume copies always have the same size.
▪ All copies must be synchronized before expanding or shrinking them.
• When a volume gets deleted, all copies get deleted.
This storage system replacement approach is the one that most likely takes the shortest elapsed time. It might be the most appropriate for time-sensitive situations, such as the impending lease termination of an old storage system where lease extensions might be too costly. Two steps are involved, and a CLI sketch follows the list:
• The add MDisks step: After the LUNs from the new storage system have been discovered as unmanaged MDisks, they are added to the existing pool that represents the system being replaced. The storage pool temporarily contains MDisks from both storage systems.
• The remove MDisks step: Remove, at the same time, all the MDisks representing the departing
storage system. The removal causes the allocated extents of all volumes in the pool to be
migrated from these MDisks to the newly added MDisks. The removed MDisks become
unmanaged. The storage system can then be removed from Storwize V7000 management.
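As referenced above, a hedged CLI sketch of the two steps; the pool and MDisk names are purely illustrative:

svctask addmdisk -mdisk mdisk12:mdisk13:mdisk14:mdisk15 Pool_DS3K   # add the new system's MDisks to the pool
svctask rmmdisk -mdisk mdisk2:mdisk3:mdisk4 -force Pool_DS3K        # remove the old MDisks; extents migrate automatically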
• New volume (existing data): Import Wizard (create image volume, migrate to striped)
• Volume: Export to image mode (migrate striped volume to image MDisk for export)
• New volume (existing data): Migration Wizard (import multiple volumes; map to host)
You should now be aware that the only time that data migration is disruptive to applications is when
a storage system LUN is moved to or from Storwize V7000 control. In all other cases, Storwize
V7000 managed data movement is totally transparent. Applications proceed blissfully unaware of
changes being made in the storage infrastructure.
Keywords
• Non-virtualized image type
• Virtualized striped type
• Multipathing
• Zoning
• Striped mode
• Image mode
• Sequential mode
• Volume copy
• Destination pool
• Extent size
• Import Wizard
• System migration
• MDisks
• Volume
Review questions (1 of 2)
1. The three virtualization types for volumes are:
Review answers (1 of 2)
1. The three virtualization types for volumes are:
The answers are striped, sequential, and image.
Review questions (2 of 2)
4. Which of the following is not performed by the Import Wizard
when a volume from an external storage system is being
migrated to the Storwize V7000:
a. Create a migration pool with the proper extent size
b. Unzone and unmap the volume from the external storage system
c. Create an image type volume to point to storage on the MDisk being
imported
d. Migrate the volume from image to striped type
Review answers (2 of 2)
4. Which of the following is not performed by the Import Wizard when
a volume from an external storage system is being migrated to the
Storwize V7000:
a. Create a migration pool with the proper extent size
b. Unzone and unmap the volume from the external storage system
c. Create an image type volume to point to storage on the MDisk being
imported
d. Migrate the volume from image to striped type
The answer is unzone and unmap the volume from the external
storage system.
Unit summary
• Analyze data migration options available with Storwize V7000
• Implement data migration from one storage pool to another
• Implement data migration of existing data to Storwize V7000 managed
storage using the Import and System Migration Wizards
• Implement the Export migration from a striped type volume to image
type to remove it from Storwize V7000 management
• Differentiate between a volume migration and volume mirroring
Overview
Spectrum Virtualize provides data replication services for mission-critical data using FlashCopy (point-in-time copy).
This unit examines the functions provided by FlashCopy and illustrates their usage with example scenarios.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Identify I/O access to source and target volumes during a FlashCopy
operation
• Classify the purpose of consistency groups for both FlashCopy and
Remote Copy operations
• Summarize FlashCopy use cases and correlate to GUI provided
FlashCopy presets
• Recognize usage scenarios for incremental FlashCopy and reverse
FlashCopy
• Discuss host system considerations to enable usage of a FlashCopy
target volume and the Mirroring auxiliary volume
• Recognize the bitmap space needed for Copy Services and Volume
Mirroring
This illustrates the Spectrum Virtualize software architecture and the placement of the Replication
(Copy Services) function below the Upper Cache. With the latest and previous software code
release, the FlashCopy function is implemented above the Upper Cache (fast-write cache) in the
I/O stack. The I/O stack cache rearchitecture improves the processing of FlashCopy operations
with:
• Near instant prepare (versus minutes) same for Global Mirror with Change Volumes
• Multiple snapshots of golden image share cache data (instead of N copies)
• Full stride write for FlashCopy volumes no matter what the grain size
• You can now configure 255 FlashCopy consistency groups - up from 127 previously
With the new cache architecture, you now have a two-layer cache: upper cache and lower cache. Notice that cache now sits above FlashCopy as well as below it. In this design, before you can take a FlashCopy, anything in the upper cache must be transferred to the lower cache before the pointer table can be taken. The pointer table can thus be taken without having to destage the cache all the way to disk beforehand.
The Storwize V7000 offers a network-based, SAN-wide FlashCopy (point-in-time copy) capability
obviating the need to use copy service functions on a storage system-by-storage system basis.
The FlashCopy function is designed to create copies for backup, parallel processing, testing, and
development, and have the copies available almost immediately. As part of the Storwize V7000
Copy Services function, you can create a point-in-time copy (PiT) of one or more volumes for any
storage being virtualized. Volumes can remain online and active while you create consistent copies
of the data sets. Because the copy is performed at the block level, it operates below the host
operating system and cache and is therefore not apparent to the host. FlashCopy is accomplished through the use of a bitmap (or bit array) that tracks changes to the data after the FlashCopy is initiated, and an indirection layer, which allows data to be read from the source volume transparently.
This function is included with the base IBM Spectrum Virtualize license.
FlashCopy functions
• Full / incremental copy
ƒ Copies only the changes from either the source or target data since the last
FlashCopy operation
• Multi-target FlashCopy
ƒ Supports copying of up to 256 target volumes from a single source volume
• Cascaded FlashCopy
ƒ Creates copies of copies and supports full, incremental, or nocopy operations
• Reverse FlashCopy
ƒ Allows data from an earlier point-in-time copy to be restored with minimal
disruption to the host
• FlashCopy nocopy with thin provisioning
ƒ Provides a combination of using thin-provisioned volumes and FlashCopy
together to help reduce disk space requirements when making copies
• Consistency groups
ƒ Addresses issue where application data is on multiple volumes
The following FlashCopy functions are included in the IBM Spectrum Virtualize software license.
• In an incremental FlashCopy, the initial mapping copies all of the data from the source volume
to the target volume. Subsequent FlashCopy mappings only copy data that has been modified
since the initial FlashCopy mapping. This reduces the amount of time that it takes to re-create
an independent FlashCopy image. You can define a FlashCopy mapping as incremental only
when you create the FlashCopy mapping.
• Multiple target FlashCopy mappings allows up to 256 target volumes to be copied from a
single source volume. Each relationship between a source and target volume is managed by a
unique mapping such that a single volume can be the source volume in up to 256 mappings.
Each of the mappings from a single source can be started and stopped independently. If
multiple mappings from the same source are active (in the copying or stopping states), a
dependency exists between these mappings.
• The Cascaded FlashCopy function allows a FlashCopy target volume to be the source volume
of another FlashCopy mapping.
• With the Reverse FlashCopy function, only the data that is required to bring the target volume current is copied. If no updates have been made to the target since the last refresh, the direction change can be used to restore the source to the previous point-in-time state.
• When using a FlashCopy nocopy with thin provisioning function, there are two variations of
this option to consider:
▪ Space-efficient source and target with background copy: Copies only the allocated space.
▪ Space-efficient target with no background copy: Copies only the space that is used for
changes between the source and target and is referred to as “snapshots”.
This function can be used with multi-target, cascaded, and incremental FlashCopy.
• A consistency group is a container for FlashCopy mappings, Global Mirror relationships, and
Metro Mirror relationships. You can add many mappings or relationships to a consistency group,
however FlashCopy mappings, Global Mirror relationships, and Metro Mirror relationships
cannot appear in the same consistency group.
FlashCopy implementation
• Storwize V7000 system: The target volume must be in the same system as the source volume, and the source volume must be in the same system as the target volume.
• Storage pool: The target volume does not need to be in the same storage pool as the source volume, and vice versa.
• Size: The target volume must be the same size as the source volume. The size of the source and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is defined.
Listed are several guidelines to consider before implementing a FlashCopy in your Storwize V7000
storage environment.
• The source and target volumes must be in the same Storwize V7000 cluster and volumes must
be the same “virtual” size.
• The Spectrum Virtualize capabilities enable SAN-wide copy. The target volume can reside in a
storage pool backed by a different storage system from the source volume, enabling more
flexibility than traditional storage systems based point-in-time copy solutions.
• The source and target volumes do not need to be in the same I/O Group or storage pool.
However, they can be within the same storage pool, across storage pools, and across I/O
groups.
• The storage pool extent sizes can differ between the source and target.
• The I/O group ownership of volumes affects only the cache and the layers above the cache in
the Storwize V7000 I/O stack. Below the cache layer the volumes are available for I/O on all
nodes within the Storwize V7000 cluster.
• FlashCopy operations perform in direct proportion to the performance of the source and target
disks. If you have a fast source disk and slow target disk, the performance of the source disk is
reduced because it must wait for the write operation to occur at the target before it can write to the source.
This applies only if the original block has not already been copied to the target (by a background copy).
FlashCopy attributes
• Source volumes can have up to 256 target volumes (Multiple Target
FlashCopy).
• Target volumes can be the source volumes for other FlashCopy
relationships (cascaded FlashCopy).
• Consistency groups are supported to enable FlashCopy across multiple volumes at the same time.
• Up to 255 FlashCopy consistency groups are supported per system.
• Up to 512 FlashCopy mappings can be placed in one consistency
group.
• Target volume can be updated independently of the source volume.
• Maximum number of supported FlashCopy mappings is 4096 per
Storwize V7000 system.
• Size of the source and target volumes cannot be altered (increased or
decreased) while a FlashCopy mapping is defined.
The FlashCopy function in the Storwize V7000 has the following attributes:
• Up to 256 FlashCopy mappings can exist with the same source volume.
• Up to 4096 FlashCopy mappings can exist per system.
• The maximum number of FlashCopy consistency groups per system is 255, which is an arbitrary limit that is policed by the software.
• There is a maximum limit of 512 FlashCopy mappings per consistency group. This limit is based on the time that is taken to prepare a consistency group with many mappings.
FlashCopy process
This diagram illustrates the general process of how FlashCopy works while the full image copy is being completed in the background. It also shows how host I/O written to the source volume is handled with respect to the T0 point in time, while the target volume is held true to T0.
To create an instant copy of a volume, you must first create a mapping between the source volume
(the disk that is copied) and the target volume (the disk that receives the copy). The source and
target volumes must be of equal size. The volumes do not have to be in the same I/O group or
storage pool. When a FlashCopy operation starts, a checkpoint is made of the source volume. No
data is actually copied at the time a start operation occurs. Instead, the checkpoint creates a bitmap
that indicates that no part of the source volume has been copied. Each bit in the bitmap represents
one region of the source volume. Each region is called a grain.
When data is copied from the source volume to the target volume it is copied in units known as
grains. The default grain size is 256 KB. To facilitate copy granularity for incremental copy the grain
size can be set to 64 KB at initial mapping definition. If a compressed volume is in a FlashCopy
mapping then the default grain size is 64 KB instead of 256 KB.
The priority of the background copy process is controlled by the background copy rate. A rate of
zero indicates that only data being changed on the source should have the original content copied
to the target (also known as copy-on-write or COW). Unchanged data is read from the source. This
option is designed primarily for backup applications where a point-in-time version of the source is
only needed temporarily.
A background copy rate of 1 to 100 indicates that the entire source volume is to be copied to the
target volume.
The priority of the background copy process is controlled by the background copy rate. A rate of
zero indicates that only data being changed on the source should have the original content copied
to the target (also known as copy-on-write or COW). Unchanged data is read from the source. This
option is designed primarily for backup applications where a point-in-time version of the source is
only needed temporarily.
A background copy rate of 1 to 100 indicates that the entire source volume is to be copied to the
target volume. The rate value specified corresponds to an attempted bandwidth during the copy
operation:
• 01 to 10 - 128 KBps
• 11 to 20 - 256 KBps
• 21 to 30 - 512 KBps
• 31 to 40 - 1 MBps
• 41 to 50 - 2 MBps (the default)
• 51 to 60 - 4 MBps
• 61 to 70 - 8 MBps
• 71 to 80 - 16 MBps
• 81 to 90 - 32 MBps
• 91 to 100 - 64 MBps
The background copy rate can be changed dynamically during the background copy operation.
The background copy is performed by one of the nodes of the I/O group in which the source volume
resides. This responsibility is failed over to the other node in the I/O group in the event of a failure of
the node performing the background copy.
The background copy is performed backwards. That is, it starts with the grain containing the highest
logical block addresses (LBAs) and works backwards towards the grain containing LBA 0. This is
done to avoid any unwanted interactions with sequential I/O streams from the using application.
After the FlashCopy operation has started, both source and target volumes can be accessed for
read and write operations:
• Source reads: Business as usual.
• Target reads: Consult its bitmap. If data has been copied then read from target. If not, read from
the source.
• Source writes: Consult its bitmap. If data has not been copied yet then copy source to target
first before allowing the write (copy on write or COW). Update bitmap.
• Target writes: Consult its bitmap. If data has not been copied yet then copy source to target first
before the write (copy on demand). Update bitmap. One exception to copying the source is if
the entire grain is to be written to the target then copying the source is not necessary.
For a copyrate=0 FlashCopy invocation the background copy is not performed. The target is often
referred to as a snapshot of the source. After the FlashCopy operation has started, both source and
target volumes can be accessed for read and write operations.
Write activity occurs on the target when:
• Write activity has occurred on the source and the point-in-time data has not been copied to the
target yet. The original source data (based on grain size) must be copied to the target before
the write to the source is permitted. This is known as copy-on-write.
• Write activity has occurred on the target to a subset of the blocks managed by a grain where the
point-in-time data has not been copied to the target yet. The original source data (based on
grain size) has to be copied to the target first.
• Read activity to the target is redirected to the source if the data does not reside on the target.
Since no background copy is performed, using a Thin-Provisioned target often minimizes the disk
capacity required.
2. Prepare: Flush write cache for source, discard cache for target, place source volume in write-through mode (svctask prestartfcconsistgrp or svctask prestartfcmap); the mapping moves from the Preparing to the Prepared state.
3. Start: Set metadata, allow I/O, start copy (svctask startfcconsistgrp or svctask startfcmap); the mapping enters the Copying state and ends in Idle_or_copied.
4. Delete: Discard the FlashCopy mapping.
*The Prepare step can be embedded with the Start step (manual or automatic).
Upon completion of the preparing event, the mapping is said to be in the prepared state, ready for
the copy operation to be triggered. The source volume is in write-through mode. The target volume
is placed in a not accessible state in anticipation of the FlashCopy start event.
The prepare function can be optionally integrated with the start function.
Start: Once the mappings in a consistency group are in the prepared state, the FlashCopy
relationship can be started or triggered. The optional -prepare parameter allows the prepare and
start functions to be performed together (that is, the FlashCopy is triggered as soon as the prepare
event is completed). During the start:
• I/O is briefly paused on the source volumes to ensure ongoing reads and writes below the
cache layer have been completed.
• Internal metadata are set to allow FlashCopy.
• I/O is then resumed on the source volumes.
• The target volumes are made accessible.
• Read and write caching is enabled for both the source and target volumes. Each mapping is
now in the copying state.
Unless a zero copy rate is specified, the background copy operation copies the source to target
until every grain has been copied. At this point, the mapping progresses from the copying state to
the Idle_or_copied state.
Delete: A FlashCopy mapping is persistent by default (not automatically deleted after the source
has been copied to the target). It can be reactivated by preparing and starting again. The delete
event is used to destroy the mapping relationship. If desired, the mapping can be automatically
deleted at the completion of the background copy if the -autodelete parameter is coded when the
mapping is defined with mkfcmap or changed with chfcmap commands.
FlashCopy can be invoked using the CLI or GUI. Scripting using the CLI is also supported. Refer to
the Redbooks Implementing the IBM System Storage Storwize V7000 V7.6 (SG24-7938) for
guidance regarding scripting.
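To tie the states to concrete commands, here is a hedged CLI sketch of one mapping's lifecycle; the mapping and volume names are assumptions for illustration, and the comments are annotations only:

svctask mkfcmap -source APPVOL -target APPVOL_copy -name fcmap_app -copyrate 50
svctask prestartfcmap fcmap_app      # optional explicit prepare
svctask startfcmap -prep fcmap_app   # or start with an embedded prepare
svctask rmfcmap fcmap_app            # delete the mapping when it is no longer needed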
During the prepare event, writes to the source volume experience additional latency because the
cache is operating in write-through mode while the mapping progresses from preparing to
prepared mode. The target volume is online but not accessible.
The two mechanisms by which a mapping can be stopped are by I/O errors or by command. The
target volume is set offline. Any useful data is lost. To regain access to the target volume start the
mapping again.
If access to the bitmap and metadata has been lost (such as if access to both nodes in an I/O group
has been lost) the FlashCopy mapping is placed in suspended state. In this case, both source and
target volumes are placed offline. When access to metadata becomes available again then the
mapping will return to the copying state, both volumes will become accessible, and the background copy will resume.
The stopping state indicates that the mapping is in the process of transferring data to a dependent
mapping. The behavior of the target volume depends on whether the background copy process had
completed while the mapping was in the copying state. If the copy process had completed then the
target volume remains online while the stopping copy process completes. If the copy process had
not completed then data in the cache is discarded for the target volume. The target volume is taken
offline and the stopping copy process runs. When the data has been copied then a stop complete
asynchronous event is notified. The mapping transitions to the idle_or_copied state if the
background copy has completed, or to the stopped state if it has not. The source volume remains
accessible for I/O.
Stopped: The FlashCopy was stopped either by user command or by an I/O error. When a
FlashCopy mapping is stopped, any useful data in the target volume is lost. Because of this, while
the FlashCopy mapping is in this state, the target volume is in the Offline state. In order to regain
access to the target the mapping must be started again (the previous FlashCopy will be lost) or the
FlashCopy mapping must be deleted. While in the Stopped state any data which was written to the
target volume and was not flushed to disk before the mapping was stopped is pinned in the cache.
It cannot be accessed but does consume resource. This data will be destaged after a subsequent
delete command or discarded during a subsequent prepare command. The source volume is
accessible and read and write caching is enabled for the source.
Suspended: The target has been point-in-time copied from the source, and was in the copying
state. Access to the metadata has been lost, and as a consequence, both source and target
volumes are offline. The background copy process has been halted. When the metadata becomes
available again, the FlashCopy mapping will return to the copying state, access to the source and
target volumes will be restored, and the background copy process resumed. Unflushed data which
was written to the source or target before the FlashCopy was suspended is pinned in the cache,
consuming resources, until the FlashCopy mapping leaves the suspended state.
The Storwize V7000 management GUI supports the FlashCopy functionality with three menu
options within the Copy Services menu option:
• The FlashCopy menu option is designed to be a fast path with extensive use of pre-defined
automatic actions embedded in the FlashCopy presets to create target volumes, mappings, and
consistency groups.
• The Consistency Groups menu option is designed to create, display, and manage related
mappings that need to reside in the same consistency group.
• The FlashCopy Mappings menu option is designed to create, display, and manage the
individual mappings. If mappings reside in a consistency group then this information is also
identified.
FlashCopy mappings can be defined from all three menu options but the process is much more
automatic (or less user control) from the FlashCopy menu.
The ensuing examples are designed to illustrate the FlashCopy functions provided by the Storwize
V7000 as well as the productivity aids added with the GUI.
For fast path FlashCopy processing, select Copy Services > FlashCopy to view a volume list.
Select a volume entry and right-click to select the desired FlashCopy preset.
The Storwize V7000 management GUI provides three FlashCopy presets to support the three
common use case examples for point-in-time copy deployments.
These presets are templates that implement best practices as defaults to enhance administrative productivity. With the FlashCopy presets, the target volumes can be automatically created and the FlashCopy mappings defined. If multiple volumes are involved, a consistency group to contain the related mappings is automatically defined as well. Each preset has unique attributes.
Typical FlashCopy usage examples include:
• Create a target volume such that it is a snapshot of the source (that is, the target contains only
copy-on-write blocks or COW). If deployed with Thin Provisioning technology then the snapshot
might only consume a minimal amount of storage capacity. Use cases for snapshot targets
include:
▪ Backing up source volume to tape media where a full copy of the source on disk is not
needed.
▪ Exploiting Thin Provisioning technology by taking more frequent snapshots of the source
volume and hence facilitate more recovery points for application data.
• Create a target volume that is a full copy, or a clone, of the source where subsequent
resynchronization with the source is expected to be either another full copy or is not needed.
Use cases for clone targets include:
▪ Testing applications with pervasive read/write activities.
▪ Performing what-if modeling or reports generation where using static data is sufficient and
separation of these I/Os from the production environment is paramount.
▪ Obtaining a clone of a corrupted source volume for subsequent troubleshooting or
diagnosis.
• Create a target volume that is to be used as a backup of the source where periodic
resynchronization is expected to be frequent and hence incremental updates of the target would
be more cost effective. Use cases for backup targets include:
▪ Maintaining a consistent standby copy of the source volume on disk to minimize recovery
time.
▪ Implementing business analytics where extensive exploration and investigation of business
data for decision support requires the generated intensive I/O activities to be segregated
from production data while the data store needs to be periodically refreshed.
Both the snapshot and backup use cases address data recovery. The recovery point objective
(RPO) denotes at what point (in terms of time) should the application data be recovered or what
amount of data loss is acceptable. After the application becomes unavailable, the recovery time
objective (RTO) indicates how quickly it is needed to be back online or how much down time is
acceptable.
The unit of measure for both RPO and RTO is time with values ranging from seconds to days to
weeks. The closer an application's RPO and RTO values are to zero, the greater the organization's dependence on that particular process, and consequently the higher the priority when recovering the systems after a disaster.
FlashCopy events that complete asynchronously are logged and can be used to generate SNMP
traps for notification purposes.
PREPARE_COMPLETED is logged when the FlashCopy mapping or consistency group has
entered the prepared state as a result of a user request to prepare. The user is now able to start (or
stop) the mapping/group.
COPY_COMPLETED is logged when the FlashCopy mapping or consistency group has entered
the idle_or_copied state when it was previously in the copying state. This indicates that the target
volume now contains a complete copy and is no longer dependent on the source volume.
STOP_COMPLETED is logged when the FlashCopy mapping or consistency group has entered
the stopped state as a result of a user request to stop. It is distinct from the error that is logged
when a mapping or group enters the stopped state as a result of an IO error.
The snapshot creates a point-in-time backup of production data. The snapshot is not intended to be
an independent copy. Instead, it is used to maintain a view of the production data at the time that
the snapshot is created. Therefore, the snapshot holds only the data from regions of the production
volume that changed since the snapshot was created. Because the snapshot preset uses thin
provisioning, only the capacity that is required for the changes is used.
To create and start a snapshot, from the Copy Services > FlashCopy window, right-click on the
volume that you want to create a snapshot of or click Actions > Create Snapshot. Upon selection
of the Create Snapshot option, the GUI automatically:
• Creates a volume using a name based on the source volume name with a suffix of _01
appended for easy identification. The real capacity size starts out as 0% of the virtual volume
size and will automatically expand as write activity occurs.
The Storwize V7000 GUI defines a FlashCopy mapping using the mkfcmap command with a
background copy rate of 0.
• Starts the mapping using the startfcmap -prep 4 command where 4 is the object ID of the
mapping, and -prep embeds the FlashCopy prepare process with the start process.
The target volume is now available to be mapped to host objects for host I/O. With a FlashCopy (or
Snapshot in the GUI), it uses disk space only when updates are made to the source or target data
and not for the entire capacity of a volume copy.
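A hedged sketch of what the Create Snapshot preset generates follows; the pool name, volume size, and mapping ID are assumptions for illustration (the source and target names match the earlier example):

svctask mkvdisk -name Basic-WIN1_01 -iogrp 0 -mdiskgrp Pool1 -size 50 -unit gb -rsize 0% -autoexpand
svctask mkfcmap -source Basic-WIN1 -target Basic-WIN1_01 -copyrate 0
svctask startfcmap -prep 4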
The target volume is now available to be mapped to host objects for host I/O. The Snapshot
Thin-provisioned volume uses disk space only when updates are made to the source or target data
and not for the entire capacity of a volume copy.
A running FlashCopy mapping can be modified while the task is running by right-clicking the mapping in the Running Tasks menu. The user can modify the background copy rate as desired by dragging the slider bar.
IBM_Storwize:V009B:V009B1-admin>lsfcmap fcmap3
id 4
name fcmap3
source_vdisk_id 18
source_vdisk_name Basic-WIN1
target_vdisk_id 25
target_vdisk_name Basic-WIN1_01
group_id Target_01
group_name
status copying
progress 14
copy_rate 0 0% COWs
start_time 160610131930
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 0
incremental off
difference 100
grain_size 256
…………….
restore_progress 0
fc_controlled no COW = copy-on-write
IBM_Storwize:V009B:V009B1-admin>
All FlashCopy mappings are displayed from the Copy Services > FlashCopy Mappings view. Observe the default mapping name of fcmap3 assigned to the mapping for the source volume, and note the current copy progress shown in the mapping entry. Since this mapping has a copy rate set to 0, the copy progress represents the copy-on-write (COW) activity.
Use the CLI lsfcmap command with either the object name or ID of the mapping to view detailed
information about a mapping. The mapping grain size can be found in this more verbose output.
The grain size for a FlashCopy mapping bitmap defaults to 256 KB for all but the compressed
volume type; which has a default grain size of 64 KB. The default size value can be overridden if
the CLI is used to define the mapping. However, the best practice recommendation is to use the
default values.
The example shows the status of the source and target volume with no write activity.
IBM_Storwize:V009B:V009B1-admin>lsfcmap fcmap3
id 4
name fcmap3
source_vdisk_id 18
source_vdisk_name Basic-WIN1
target_vdisk_id 25
target_vdisk_name Basic-WIN1_01
group_id Target_01
group_name
status copying
progress 7
copy_rate 0 7% COWs
start_time 160610131930
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 0
incremental off
difference 100
grain_size 256
…………….
restore_progress 0
fc_controlled no COW = copy-on-write
IBM_Storwize:V009B:V009B1-admin>
The example shows the status of the source and target volume with writes in progress. Since the background copy rate is set to 0, the progress value reflects the writes made to the source, which caused the corresponding copy-on-write activity to the target. The amount of target storage in use matches the amount of data that was changed on the source after the mapping was started.
When subsequent writes occur on the source volume, the content of the blocks being changed
(written to) is copied to the target volume in order to preserve the point-in-time snapshot target
copy. These blocks are referred to as copy-on-write (COW) blocks; the ‘before’ version of the
content of these blocks is copied as a result of incoming writes to the source. This write activity
caused the real capacity of the Thin-Provisioned target volume to automatically expand: Matching
the quantity of data being written.
It might be worthwhile to emphasize that the FlashCopy operation is based on block copies
controlled by grains of the owning bitmaps. Storwize V7000 is a block level solution so, by design
(and actually per industry standards), the copy operation has no knowledge of OS logical file
structures. The same information is available by using the CLI with the help of the lsfcmap command.
This topic examines the ability to create a consistency group by selecting multiple mappings to be managed as a single entity.
Consistency groups
• FlashCopy consistency groups are used to group multiple copy
operations together that have a need to be controlled at the same time.
ƒ Group can be controlled by starting or stopping with a single operation.
ƒ Ensures that when stopped for any reason, the I/Os to all group members have
all stopped at the same point in time.
í Ensures time consistency across volumes.
Consistency Groups address the requirement to preserve point-in-time data consistency across
multiple volumes for applications that include related data that spans multiple volumes. For these
volumes, Consistency Groups maintain the integrity of the FlashCopy by
ensuring that “dependent writes” are run in the application’s intended sequence.
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs multiple copy operations on all FlashCopy mappings that are
contained within the Consistency Group at the same time. This allows the administrator to perform operations such as starting, stopping, and so on, with a single operation. Therefore, if the copying
has to be stopped for any reason, the I/Os to all group members are stopped at the same
“point-in-time” in terms of the host writes to the primary volumes, ensuring time consistency across
volumes.
After an individual FlashCopy mapping is added to a Consistency Group, it can be managed as part
of the group only. Operations, such as prepare, start, and stop, are no longer allowed on the
individual mapping.
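As a minimal CLI sketch (the group and mapping names here are hypothetical, and the mappings are assumed to already exist and be idle), a consistency group can be created, populated, and controlled as a single entity:
IBM_Storwize:V009B:V009B1-admin>mkfcconsistgrp -name fccg_app1
IBM_Storwize:V009B:V009B1-admin>chfcmap -consistgrp fccg_app1 fcmap0
IBM_Storwize:V009B:V009B1-admin>chfcmap -consistgrp fccg_app1 fcmap1
IBM_Storwize:V009B:V009B1-admin>startfcconsistgrp -prep fccg_app1
IBM_Storwize:V009B:V009B1-admin>stopfcconsistgrp fccg_app1
The start and stop commands act on every mapping in the group at once, which is what provides the common point in time across the volumes.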
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
When multiple volumes are selected from the Copy Services > FlashCopy menu, the GUI presets
operate at the consistency group level (instead of mapping level). Besides automatically creating
targets and mappings, a consistency group is also defined to allow multiple mappings to be
managed as a single entity. The copy is automatically started at the consistency group level.
Consistency groups can also be established for FlashCopy mappings of application data that spans multiple volumes. This allows the FlashCopy operation on multiple volumes to take place as an atomic operation.
Some installations using non-IBM storage systems are used to waiting for the storage system to mirror copies of LUNs, which then need to be split away from the original LUN before the cloned LUN can be used by a host. This is a time-consuming process, with the time depending on the size of the LUN. With IBM FlashCopy, the targets, regardless of size, can be used immediately after Start processing completes (seconds).
Consistency groups can also be created, modified, and deleted with concise, direct CLI commands.
Uempty
The commands issued by the management GUI for this Clone preset invocation example have been extracted and highlighted (see the CLI sketch after this list):
• A consistency group is created with the -autodelete parameter, which causes the Storwize V7000 to automatically delete the consistency group when the background copy completes.
• Two fully allocated target volumes are created. The name and size of each target volume derive from the source volume, following the GUI naming convention for FlashCopy. Two FlashCopy mappings are defined, each with the default copy rate of 50 (2 MBps).
• Once the volumes have been created and the FlashCopy mappings have been established, the consistency group is automatically started with the startfcconsistgrp command, which includes an embedded prepare.
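A sketch of the command sequence the GUI issues for this Clone preset might look like the following; the volume and pool names are placeholders, and the exact parameters generated by the GUI can differ (a second mkvdisk, not shown, creates the other target):
IBM_Storwize:V009B:V009B1-admin>mkfcconsistgrp -autodelete
IBM_Storwize:V009B:V009B1-admin>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 30 -unit gb -name VolA_01
IBM_Storwize:V009B:V009B1-admin>mkfcmap -source VolA -target VolA_01 -consistgrp fccstgrp0 -copyrate 50 -autodelete
IBM_Storwize:V009B:V009B1-admin>mkfcmap -source VolB -target VolB_01 -consistgrp fccstgrp0 -copyrate 50 -autodelete
IBM_Storwize:V009B:V009B1-admin>startfcconsistgrp -prep fccstgrp0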
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
The Copy Services > FlashCopy view displays the two defined individual mappings that are both
associated with the fccstgrp0 consistency group.
The progress bar for each FlashCopy mapping provides a direct view of the progress of each background copy. This progress data is also provided through the Running Tasks interface, which is accessible from any GUI view.
The background copy has a default copy rate of 50, which can be changed dynamically.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
To change the background copy rate, right-click an fcmap mapping entry and select Edit Properties. Drag the Background Copy Rate slider in the Edit FlashCopy Mapping box all the way to the right to increase the value to 100, then click Save.
The generated Storwize V7000 task runs chfcmap -copyrate to increase the copy rate to 100 for the specified mapping ID. The value of 100 causes the background copy rate to increase from the default 2 MBps to 64 MBps.
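For reference, the copy rate value scales the background bandwidth in powers of two (1-10 = 128 KBps, 41-50 = 2 MBps, 91-100 = 64 MBps). A sketch of the equivalent CLI, assuming the mapping ID is 0:
IBM_Storwize:V009B:V009B1-admin>chfcmap -copyrate 100 0
The change takes effect immediately on the running background copy.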
Uempty
• Once the copy operation between the source volume and target volume is
complete, the consistency group and mappings are deleted automatically.
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
From the Copy Services > Consistency Group view, you can see the changes in the fcmp1 target volume now that the background copy rate has been increased. The consistency group has a status of copying as long as one of its mappings is in the copying status. Once the copy operation from the source volume to the target volume is complete, the -autodelete specification takes effect.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
This topic reviews the incremental and "no copy" options used to create and refresh a point-in-time copy of a database implemented directly on the Storwize V7000 system.
Uempty
Incremental FlashCopy
• Incremental FlashCopy can substantially reduce the time that is required to re-create an independent image.
ƒ Copies only the parts of the source or target volumes that changed since the last copy.
ƒ Reduces the completion time of the copy operation.
ƒ The first copy process copies all of the data from the source volume to the target volume.
(Figure: the incremental FlashCopy is started and data is copied as normal; later, some data is changed by applications; starting the incremental FlashCopy again copies only the changed data)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
The FlashCopy Incremental Copy makes it possible to perform a background copy between the
source volumes and target volumes without having to copy all of the tracks in the process.
Therefore, Incremental copy reduces the amount of data that needs to be copied subsequent to the
initial invocation of a FlashCopy mapping. The first copy process copies all of the data from the
source volume to the target volume. Rather than copying the entire volume again, only the portions
of the source volume that have been updated at either the source or target are copied. The quantity
of data that needs copying is affected by the grain size. The 64 KB grain size provides more copy
granularity at the expense of using more bits (larger bitmaps) than the 256 KB grain size. To be able to monitor the difference between source and target, a "difference" value is maintained in the FlashCopy mapping details.
Uempty
VB1-NEW
VB1-NEW _TGT
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
If less automation or more administrator control is desired, a FlashCopy mapping can be manually
defined from the Copy Services > FlashCopy Mappings panel by clicking the Create FlashCopy
Mapping button. This path expects the target volume to have been created already.
From the Create FlashCopy Mapping dialog box, specify the source volume and target volume. The
GUI automatically determines the list of eligible targets. An eligible target volume must be the same
size of the source and must not be serving as a target in other FlashCopy mappings. After a target
has been Added, the GUI confirms the source and target pairing.
From the volume entries, the UIDs of the source and target volumes; also observe that they reside
in different storage pools representing different storage systems.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
The intent is to use incremental FlashCopy; therefore, the Backup preset is selected.
The Advanced Settings pane allows you to change or override mapping attributes, such as the copy rate, as the mapping is being defined. You also have the option to add the mapping to a consistency group. Since this is a one-volume, one-mapping example, a consistency group is not necessary.
The GUI-generated mkfcmap command contains the -incremental parameter. The incremental copy option can only be specified at mapping definition. In other words, after a mapping has been created, there is no way to change it to an incremental copy without deleting the mapping and redefining it.
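A sketch of the generated command, using the volume names from this example (the copy rate shown is an assumption; the Backup preset may choose a different value):
IBM_Storwize:V009B:V009B1-admin>mkfcmap -source VB1-NEW -target VB1-NEW_TGT -copyrate 50 -incremental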
Uempty
VB1-NEW
VB1-NEW _TGT
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
To start the FlashCopy mapping, right-click the mapping entry and select Start from the menu list.
The management GUI generates the startfcmap command with an embedded prepare to start the
mapping.
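As a sketch, assuming the mapping ID is 0, the same action from the CLI followed by a quick progress check would be:
IBM_Storwize:V009B:V009B1-admin>startfcmap -prep 0
IBM_Storwize:V009B:V009B1-admin>lsfcmap 0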
Uempty
• If a FlashCopy mapping has a status of Idle or Copied, the source and target
volumes can act as independent volumes even if a mapping exists between
the two
Both volumes in the mapping relationship have read and write caching enabled
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
From Copy Services > FlashCopy Mappings, you can view the background copying progress. The FlashCopy mapping has a status of Copying as long as the background copy is in progress. The background copy is performed by both nodes of the I/O group in which the source volume is found.
When the status changes to Idle or Copied, the source and target volumes can act as independent volumes even if a mapping exists between the two. Both volumes in the mapping relationship have read and write caching enabled.
If the mapping is incremental and the background copy is complete, the mapping records only the differences between the source and target volumes. If the connection to both nodes in the I/O group that the mapping is assigned to is lost, the source and target volumes go offline.
Uempty
(Figure: more writes occur on the source volume VB1-NEW)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
As data is added over a period of time to the source volume, host I/O activity continues while the background copy is in progress. Incremental FlashCopy copies all of the data when you first start the FlashCopy mapping, and then copies only the changes when you stop and start the mapping again.
The target volume contains the point-in-time content of the source volume. Even though subsequent write activity has occurred on the source volume, it is not reflected on the target volume.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
The CLI lsfcmap command is used in this example to view the FlashCopy mapping details. The copy_rate had been updated to 100. The background copy has completed, hence the status of this mapping is idle_or_copied. Recall that the Backup preset was selected, causing this mapping to be defined with autodelete off and incremental on.
Since this mapping is defined with incremental copy, bitmaps are used to track changes to both the
source and target (recall reads/writes are supported for both source and target volumes). The
difference value indicates the percentage of grains that have changed between the source and
target volumes.
This difference percentage represents the amount of grains that need to be copied from the source
to the target with the next background copy. The value of 22 percent in this example is the result of
data having been added or written to the source volume.
Uempty
In this example, we are using the CLI startfcmap -prep 0 command to start the mapping. The command simply returns after successfully submitting a long-running asynchronous job, in this case the background incremental copy.
Since it is an incremental copy, only those blocks related to the changed grains (the 22%) are copied to the target. The concise output of an lsfcmap command submitted immediately afterward already displays a status of copying and a progress of 77%.
A short time later, the lsfcmap 0 verbose output shows the completion of the background copy: progress 100 and difference 0.
After the incremental copy completes, the content of the target volume has been updated. At this point, the content of the two volumes is identical. Subsequent changes to both source and target volumes are now tracked anew by Storwize V7000 FlashCopy.
Uempty
(Figure, steps 3 to 5: the VB1_NEW_Ale and VB1_NEW_Stout volume pairs are refreshed by an incremental copy in step 4; in step 5 the source data is corrupted)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
This example illustrates the incremental copy option of FlashCopy. At some point along the way, data corruption occurs on the source: because of subsequent write activity, it is now deemed that a logical data corruption has occurred, perhaps due to a programming bug.
Uempty
VB1_NEW_DEBUG
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without
breaking the FlashCopy relationship and without having to wait for the original copy operation to
complete. FlashCopy provides the option to take a point-in-time copy of the corrupted volume data
for debugging purposes. It supports multiple targets (up to 256) and therefore multiple rollback
points.
You also have the ability to create an optional copy of the source volume to be made before the
reverse copy operation starts. This ability to restore back to the original source data can be useful
for diagnostic purposes.
A key advantage of the Storwize V7000 Multiple Target Reverse FlashCopy function is that the
reverse FlashCopy does not destroy the original target, which allows processes by using the target,
such as a tape backup, to continue uninterrupted.
This image illustrates that the corrupted volume image is to be captured for future problem
determination (step 6A).
Then the reverse copy feature of FlashCopy is used to restore the source volume from the target
volume (step 6B).
Uempty
VB1_NEW VB1-NEW_01
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
To obtain a volume copy of the corrupted source volume for later debugging, the fast path Copy
Services > FlashCopy menu is used.
Right-click the source volume entry and select Create Clone. The Clone preset will automatically
generate commands to create the target volume, define the source to target FlashCopy mapping,
and start the background copy.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
To restore the source volume to its prior point-in-time copy, a reverse FlashCopy mapping is defined. This procedure is similar to the one used to create the initial FlashCopy mappings, except that in this procedure you reverse the path by creating a new mapping that identifies the target volume as the source, with the original source volume becoming the target.
A Warning dialog is displayed by the GUI to caution that the target volume is also a source volume in another mapping. This is normal for a restore and, for this example, it is by design.
It is important to know that the source-to-target background copy does not have to be complete when the reverse mapping is started. Because the reverse copy is a one-time use case, the Clone preset is selected so that the GUI generates a mapping with the automatic-deletion-upon-copy-completion attribute.
Uempty
(Figure: start the reverse mapping from VB1-NEW_TGT to VB1-NEW, and rename the debug volume to VB1-NEW_DEBUG)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Since the reverse mapping was defined using the Copy Services > FlashCopy Mappings menu, its status is Idle. The administrator (instead of the GUI) controls when to start the mapping.
The FlashCopy target volume, target_01, which contains the source volume image with corrupted data, should have a more descriptive name than the default name assigned by the fast path FlashCopy GUI. You can use the Edit interface from the volume details panel for target_01 to rename the volume to "target name_DEBUG".
Uempty
(Figure: reverse FlashCopy mapping from VB1-NEW_TGT to VB1-NEW)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
To restore the content of the source volume from the target volume, right-click the new
source_TGT volume entry in the reverse FlashCopy mapping and select Start from the pop-up
menu.
Observe that the startfcmap command generated by the GUI contains the -restore parameter. The -restore parameter allows the mapping to be started even if the target volume is being used as a source in another active FlashCopy mapping.
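A sketch of the reverse-restore sequence on the CLI, with the reverse mapping ID assumed to be 1 and volume names taken from this example (the original forward mapping from VB1-NEW to VB1-NEW_TGT remains defined):
IBM_Storwize:V009B:V009B1-admin>mkfcmap -source VB1-NEW_TGT -target VB1-NEW -copyrate 50 -autodelete
IBM_Storwize:V009B:V009B1-admin>startfcmap -prep -restore 1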
Uempty
(Figure: FlashCopy mappings among the ROOT_BEERS, ROOT_BEERS_TGT, and ROOT_BEERS_DEBUG volumes)
IBM_Storwize V009B:V009B1-admin> lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,grou
p_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name
,restoring,start_time,rc_controlled
0,fcmap1,2,BEERS,7,BEERS_TGT,,,idle_or_copied,100,100,100,on,,,no, 160620112404,no
1,fcmap0,2,BEERS,8,BEERS_DEBUG,,,copying,84,100,100,off,,,no, 160620111623,no
IBM_Storwize V009B:V009B1-admin> lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,grou
p_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name
,restoring,start_time,rc_controlled
0,fcmap1,2,VB1-NEW,7,VB1-NEW_TGT,,,idle_or_copied,100,100,100,on,,,no, 160620112404,no
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
VB1-NEW
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
As with any FlashCopy mapping, after background copy has started, both the source and target
volumes are available for read/write access.
For recovery situations, it might be best to shut down the host and reboot the server before using
the restored volume.
This view shows the original source volume has been restored to the content level of the target
volume.
Based on the SDD reported disk serial number, the target_DEBUG volume has been assigned to
the host as drive letter E. It contains the corrupted content of the source volume. Reverse
FlashCopy enables FlashCopy targets to become restore points for the source without breaking the
FlashCopy relationship and without having to wait for the original copy operation to complete.
Uempty
(Figure: reverse FlashCopy with source volume X and target volume Y on the SAN - 1. an optional copy of the original source can be made to another volume before 2. the reverse FlashCopy operation restores the source from a target volume)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
The multi-target FlashCopy operation allows several targets for the same source. This can be used
for backup to tape later. Even if the backup is not finished, the user can create an additional target
for the next backup cycle and so on.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without
breaking the FlashCopy relationship and without having to wait for the original copy operation to
complete. It supports multiple targets (up to 256) and thus multiple rollback points.
A key advantage of the IBM Spectrum Virtualize Multiple Target Reverse FlashCopy function is that
the reverse FlashCopy does not destroy the original target, which allows processes by using the
target, such as a tape backup, to continue uninterrupted.
IBM Spectrum Virtualize also provides the ability to create an optional copy of the source volume to
be made before the reverse copy operation starts. This ability to restore back to the original source
data can be useful for diagnostic purposes.
In this example, an error or virus has corrupted the source of the multi-target FlashCopy operation. Therefore, the administrator needs to reverse the FlashCopy so that the snapshot data on target1 or target2 can be flashed back to the source. This process is incremental and thus very fast. The host can then work with the clean data. If a root cause analysis of the original source is required, the corrupted data can be stored for later analysis.
Uempty
Reverse FlashCopy:
• Does not require the original FC copies to have been completed.
• Does not destroy the original target content (for example, does not disrupt tape backups
underway).
• Does allow an optional copy of the corrupted source to be made (for example, for diagnostics)
before starting the reverse copy.
• Does allow any target of the multi-target chain to be used as the restore or reversal point.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
If your company's data must maintain consistency across multiple disk volumes at a backup location and be available 24 hours a day, having eight hours of downtime is unacceptable. Using the FlashCopy service as part of the backup process can help reduce the time needed for the backup. When the FlashCopy process is started, your application stops for just a moment, and then immediately resumes.
Using FlashCopy for backup can also help you optimize the use of thin provisioning, which occurs
when the virtual storage of a volume exceeds its real storage. For example, you can use the
FlashCopy service to map a fully allocated source volume to a thin-provisioned target volume. The
thin-provisioned target volume serves as a consistent snapshot copy that you can use to back up
your data to tape. Because this type of target volume uses less real storage than the source
volume, it can help you reduce costs in power, cooling, and space.
FlashCopy consistency groups ensure data consistency across multiple volumes by putting
dependent volumes in an extended long busy state and then performing the backup. This is
supposed to guarantee integrity of the data in the dependent volumes at the physical level and not
the logical database level.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
In this topic, we review how FlashCopy uses bitmaps to track grains in FlashCopy mappings or mirroring relationships. In addition, we review the functions of Tivoli Storage FlashCopy Manager.
Uempty
• The FlashCopy indirection layer sits between the upper and lower cache layers.
ƒ The FlashCopy indirection layer isolates the additional latency created by COW.
í COW latency is handled by the internal cache operations and not by the active application.
ƒ The bitmap governs the I/O redirection between both nodes of the Storwize V7000.
• This prime location allows FlashCopy to benefit from read prefetching and coalescing writes to backend storage.
ƒ Preparing is much faster because upper cache write data destages directly to the lower cache.
(Figure: host reads and writes pass through the upper cache, the FlashCopy indirection layer with its FlashCopy bitmap, and the lower cache before I/Os go to the storage controllers; data is copied from the source volume to the target volume)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Starting with V7.3 the entire cache subsystem was redesigned and changed accordingly. Cache
has been divided into upper and lower cache. Upper cache serves mostly as write cache and hides
the write latency from the hosts and application. Lower cache is a read/write cache and optimizes
I/O to and from disks.
This copy-on-write process introduces significant latency into write operations. To isolate the active
application from this additional latency, the FlashCopy indirection layer is placed logically between
upper and lower cache. Therefore, the additional latency that is introduced by the copy-on-write
process is encountered only by the internal cache operations and not by the application.
The two level cache design provides additional performance improvements to FlashCopy
mechanism. Because now the FlashCopy layer is above lower cache in the IBM Spectrum
Virtualize software stack, it can benefit from read prefetching and coalescing writes to backend
storage. Also, preparing FlashCopy is much faster because upper cache write data does not have
to go directly to backend storage but to lower cache layer. Additionally, in the multi-target
FlashCopy the target volumes of the same image share cache data. This design is opposite to
previous IBM Spectrum Virtualize code versions where each volume had its own copy of cached
data.
The bitmap governs the I/O redirection (I/O indirection layer) which is maintained in both nodes of
the IBM Storwize V7000 I/O group to prevent a single point of failure. For the FlashCopy volume capacity per I/O group, there is a maximum limit on the quantity of FlashCopy mappings that can use bitmap space from this I/O group. This maximum configuration uses all 4 GiB of bitmap space for the I/O group and allows no Metro or Global Mirror bitmap space. The default is 40 TiB.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Bitmaps are internal Storwize V7000 data structures used to track which grains in FlashCopy
mappings or mirroring relationships, have been copied from the source volume to the target
volume; or from one copy of a volume to another for Volume Mirroring.
Bitmaps consume bitmap space in each I/O group’s node cache. The maximum amount of cache
used for bitmap space is 552 MiB per I/O Group, which is shared among FlashCopy bitmaps,
Remote Copy (Metro/Global Mirroring) bitmaps, Volume Mirroring, and RAID processing bitmaps.
When a Storwize V7000 cluster is initially created, the default bitmap space assigned is 20 MiB each for FlashCopy, Remote Copy, and Volume Mirroring, and 40 MiB for RAID metadata.
The verbose lsiogrp command output displays, for a given I/O group, the amount of bitmap space
allocated and currently available for each given bitmap space category.
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Figure 9-52. Bitmap space and copy capacity (per I/O group)
By default, each I/O group has allotted 20 MB of bitmap space each for FlashCopy, Remote Copy,
and Volume Mirroring.
For FlashCopy, the default 20 MB of bitmap space provides a copy capacity to track 40 TB of target
volume space if the default grain size of 256 KB is used. The 64 KB grain size means four times as
many bits are needed to track the same amount of space; this increased granularity decreases the
total copy capacity to 10 TB, one fourth of that available with the 256 KB grain size. The tradeoff is a
potential decrease in the amount of data that needs to be incrementally copied, which in turn,
reduces copy time and Storwize V7000 CPU utilization.
Incremental FlashCopy requires tracking changes for both the source and target volumes, thus two
bitmaps are needed for each FlashCopy mapping. Consequently for the default grain size of 256
KB, the total copy capacity is reduced from 40 TB to 20 TB. If the 64 KB grain size is selected, the
total copy capacity is reduced from 10 TB to 5 TB.
For Remote Copy (Metro and Global mirroring), the default 20 MB of bitmap space provides a total
capacity of 40 TB per I/O group; likewise for Volume Mirroring.
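As a rough rule of thumb derived from the defaults above (an estimate only, not a sizing guideline), the copy capacity scales linearly with the bitmap space and the grain size, and is halved for incremental mappings:
copy capacity ~ 2 TB per 1 MB of FlashCopy bitmap space at the 256 KB grain size
20 MB bitmap, 256 KB grain, non-incremental: 20 x 2 TB = 40 TB
20 MB bitmap, 64 KB grain, non-incremental: 40 TB / 4 = 10 TB
20 MB bitmap, 256 KB grain, incremental (two bitmaps): 40 TB / 2 = 20 TB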
Uempty
IBM_Storwize:V009B:V009B1-admin>lsiogrp 0
id 0
name io_grp0
node_count 2
vdisk_count 8
host_count 4
flash_copy_total_memory 30.0MB
flash_copy_free_memory 29.9MB
remote_copy_total_memory 10.0MB
remote_copy_free_memory 10.0MB
mirroring_total_memory 25.0MB
mirroring_free_memory 25.0MB
raid_total_memory 40.0MB
raid_free_memory 40.0MB
maintenance no
compression_active no
accessible_vdisk_count 8
compression_supported yes
(Callout: update bitmap space for FlashCopy, Remote Copy (MM/GM), and Volume Mirroring)
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
The chiogrp command is used to control the amount of bitmap space to be set aside for each IO
group.
Use the chiogrp command to release the default allotted cache space if the corresponding function
is not licensed. For example, if Metro/Global Mirror is not licensed, change the bitmap space to 0 to
regain the I/O group cache for other use.
By the same token, if more copy capacity is required, use the chiogrp command to increase the
amount of memory set aside for bitmap space. A maximum of 552 MB, shared among FlashCopy,
Remote Copy, Volume Mirroring, and RAID functions, can be specified per IO group.
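A sketch of adjusting the allotments for io_grp0 (the sizes shown are arbitrary examples, chosen to match the output above and to release the unlicensed Remote Copy allotment):
IBM_Storwize:V009B:V009B1-admin>chiogrp -feature flash -size 30 io_grp0
IBM_Storwize:V009B:V009B1-admin>chiogrp -feature remote -size 0 io_grp0
IBM_Storwize:V009B:V009B1-admin>chiogrp -feature mirror -size 25 io_grp0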
Uempty
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
The management of many large FlashCopy relationships and Consistency Groups is a complex
task without a form of automation for assistance.
IBM Spectrum Virtualize FlashCopy Manager provides fast application-aware backups and restores
leveraging advanced point-in-time image technologies available with the IBM Storwize V7000. In
addition, it provides an optional integration with IBM Tivoli Storage Manager, for long-term storage
of snapshots.
This example shows the integration of Tivoli Storage Manager and FlashCopy Manager from a
conceptual level. Tivoli Storage Manager can be supported on SAN Volume Controller, Storwize
V7000, DS8000, DS3400, DS3500 and DS5000, plus others.
Uempty
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced
Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy
Manager, you can coordinate and automate host preparation steps before you issue FlashCopy
start commands to ensure that a consistent backup of the application is made. You can put
databases into hot backup mode and flush the file system cache before starting the FlashCopy.
FlashCopy Manager also allows for easier management of on-disk backups that use FlashCopy,
and provides a simple interface to perform the “reverse” operation.
This example shows the FlashCopy Manager feature.
Uempty
Keywords
• FlashCopy
• Full background copy
• No background copy
• Consistency groups
• GUI presets
• Event notifications
• Thin provisioned target
• Target
• Source
• Copy rate
• Clone
• Incremental FlashCopy
• Bitmap space
• Tivoli Storage FlashCopy Manager
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
Review questions (1 of 2)
1. True or False: Both the source and target volumes of a
FlashCopy mapping are available for read/write I/O
operations while the background copy is in progress.
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
Review answers (1 of 2)
1. True or False: Both the source and target volumes of a
FlashCopy mapping are available for read/write I/O
operations while the background copy is in progress.
The answer is true.
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
Review questions (2 of 2)
3. True or False: Incremental FlashCopy assumes an initial full
background copy so that subsequent background copies
only need to copy the changed blocks to resynchronize the
target.
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
Review answers (2 of 2)
3. True or False: Incremental FlashCopy assumes an initial full
background copy so that subsequent background copies
only need to copy the changed blocks to resynchronize the
target.
The answer is true.
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
Unit summary
• Identify I/O access to source and target volumes during a FlashCopy
operation
• Classify the purpose of consistency groups for both FlashCopy and
Remote Copy operations
• Summarize FlashCopy use cases and correlate to GUI provided
FlashCopy presets
• Recognize usage scenarios for incremental FlashCopy and reverse
FlashCopy
• Discuss host system considerations to enable usage of a FlashCopy
target volume and the Mirroring auxiliary volume
• Recognize the bitmap space needed for Copy Services and Volume
Mirroring
Spectrum Virtualize Copy Services: FlashCopy © Copyright IBM Corporation 2012, 2016
Uempty
Overview
Spectrum Virtualize provides data replication services for mission-critical data using Remote Copy of volumes: Metro Mirror (synchronous copy) and Global Mirror (asynchronous copy).
This unit examines the functions provided by the Remote Copy features of the Storwize V7000 and
illustrates their usage with example scenarios.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Uempty
Unit objectives
• Summarize the use of the GUI/CLI to establish a cluster partnership,
create a relationship, start remote mirroring, monitor progress, and
switch the copy direction
• Differentiate among the functions provided with Metro Mirror, Global
Mirror, and Global Mirror with change volumes
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
This topic examines the functions of Remote Copy Services Metro Mirror and Global Mirror.
Uempty
(Figure: data replicated from the primary Site 1 to the secondary Site 2)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Today, businesses are often required to be operational 24x7x365, and potential disasters due to weather, power outages, fire, water, or even terrorism pose numerous threats; real-time disaster recovery and business continuance have become absolutely necessary for many businesses. Some disasters happen suddenly, stopping all processing at a single point in time; others interrupt operations in stages that occur over several seconds or even minutes, which is often referred to as a rolling disaster. Therefore, it is a business-critical requirement to plan for recovery from potential disasters that cause system failures, whether they are immediate, intermittent, or gradual.
Uempty
(Figure: Metro Mirror and Global Mirror replicate a primary volume to a secondary volume; Global Mirror with change volumes adds a FlashCopy mapping to a change volume at each site, lowering the bandwidth requirement at the expense of higher RPOs)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
IBM Remote Copy services offer several data replication methods: a synchronous remote copy called Metro Mirror (MM), an asynchronous remote copy called Global Mirror (GM), and Global Mirror with Change Volumes.
Each method is discussed in detail.
Uempty
(Figure: volumes presented through the SAN at each of the two sites)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Remote Copy services provide a single point of control when remote copy is enabled in your network (regardless of the disk subsystems that are used), provided those disk subsystems are supported by the IBM Storwize V7000.
Synchronous and asynchronous transmissions are two different methods of transmission synchronization. Synchronous transmissions are synchronized by an external clock, while asynchronous transmissions are synchronized by special signals along the transmission medium.
The general application of Remote Copy services is to maintain two real-time synchronized copies of a volume, known as remote mirroring. The typical requirement for remote mirroring is over distance. In this case, intercluster copy is used across two Storwize V7000 clusters connected by a Fibre Channel interswitch link (ISL) or an alternative SAN distance extension solution.
Often, the two copies are geographically dispersed between two IBM Storwize V7000 systems, although it is possible to use MM or GM within a single system. Intracluster copy is supported within the same I/O group (that is, both source and target volumes must be in the same I/O group). If the master copy fails, you can enable the auxiliary copy for I/O operations.
Uempty
• Remote Copy is an advanced, independent, network-based (SAN-wide) storage system copy service.
• It is implemented in the replication layer of the I/O stack.
(Figure: software stack with Replication above the Upper Cache, FlashCopy, Mirroring, and Thin Provisioning layers)
IBM Remote Copy is an advanced independent networked-based, SAN-wide, storage system copy
service provided by the Storwize V7000.
For both synchronous and asynchronous replication, the storage array on the primary site will send
the transaction acknowledgment to the host on the primary site. The difference between the two
replication technologies is the order of events that take place after the host sends the transaction to
the local storage array.
Therefore, Remote Copy is implemented near the top of the Storwize V7000 I/O stack so that host write data can be forwarded to the remote secondary site as soon as it arrives from the host, to facilitate parallelism and minimize latency.
In parallel to forwarding, the data is also being sent to fast-write cache for local processing.
Because Remote Copy replication sits above the cache layer, it binds to an I/O group.
Metro Mirror and Global Mirror are optional features of the Storwize V7000. The idea is to provide
storage system independent, or outside the box copy capability. The customer does not have to
use or license the Copy Services functions on a box by box basis.
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Metro Mirror supports copy operations between volumes that are separated by distances up to
300 km. Synchronous mode provides a consistent and continuous copy, which ensures that
updates are committed at both the primary and the secondary sites before the application
considers the updates complete. The host application writes data to the primary site volume but
does not receive the status on the write operation until that write operation is in the Storwize V7000
cache at the secondary site. Therefore, the volume at the secondary site is fully up to date and an
exact match of the volume at the primary site if it is needed in a failover.
Metro Mirror provides the simplest way to maintain an identical copy on both the primary and
secondary volumes.
Uempty
Auxiliary
volume
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Synchronous mirroring is the most direct method of keeping both copies identical: host write operations to the master volume are mirrored to the cache of the auxiliary volume before an acknowledgment of the write is sent back to the host that issued the write. This process ensures that the auxiliary is synchronized in real time, if it is needed in a failover situation.
Therefore, both storage volumes process the transaction before an acknowledgment is sent to the host, meaning the two copies are always synchronized.
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
A Global Mirror relationship allows the host application to receive confirmation of I/O completion
without waiting for updates to have been committed to the secondary site. In asynchronous mode,
Global Mirror enables the distance between two Storwize V7000 clusters to be extended while
reducing latency by posting the completion of local write operations independent from the
corresponding write activity at the secondary site.
Global Mirror provides an asynchronous copy, which means that the secondary volume is not an
exact match of the primary volume at every point in time. The Global Mirror function provides the
same function as Metro Mirror Remote Copy without requiring the hosts to wait for the full round-trip
delay of the long-distance link; however, some delay can be seen on the hosts in congested or
overloaded environments. This asynchronous copy process reduces the latency to the host
application and facilitates longer distance between the two sites. The secondary volume is
generally less than one second behind the primary volume to minimize the amount of data that
must be recovered in the event of a failure. However this requires a link with peak write bandwidth
be provisioned between the two sites.
Make sure that you closely monitor and understand your workload. The distance of Global Mirror
replication is limited primarily by the latency of the WAN Link provided.
Previously, Global Mirror supported up to 80 ms round-trip time for the GM links to send data to the remote location. With the release of V7.4, it now supports up to 250 ms round-trip latency, and distances of up to 20,000 km are supported. Combined with the performance improvements in the previous software release, these changes and enhancements have greatly improved reliability and performance, even over poor links.
Uempty
In an asynchronous Global Mirror operation, as the host sends write operations to the master volume, the transaction is processed by cache and an acknowledgment is immediately sent back to the host issuing the write, before the write operation is mirrored to the cache of the auxiliary volume. The update for this write operation is sent to the secondary site at a later stage, which provides the capability to perform Remote Copy over distances exceeding the limitations of synchronous Remote Copy.
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Traditional Global Mirror operates without cycling; write operations are transmitted to the auxiliary or secondary volume on a continuous basis, triggered by write activity. The secondary volume is
generally within seconds behind the primary volume for all relationships. This achieves a low
recovery point objective (RPO) to minimize the amount of data that must be recovered.
This requires a network to support peak write workloads as well as minimal resource contentions at
both sites. Insufficient resources or network congestion might result in error code 1920 and stopped
GM relationships.
Uempty
A Global Mirror relationship with cycling and change volumes leverages FlashCopy functionality to mitigate peak bandwidth requirements by addressing average instead of peak throughput, at the expense of higher recovery point objectives (RPOs).
Replication communication is made possible because all updates to the primary volume are
tracked and where needed, copied to intermediate change volumes. A delta of changed blocks
(known as grains) since the last cycle is transmitted to the secondary periodically. The secondary
volume is much further behind than the primary volume (an older recover point), thus more data
must be recovered in the event of a failover. Because the transmission of changed data can be
smoothed over a longer time period, a lower bandwidth option (hence lower cost) can be deployed.
Change volumes enable background replication of point-in-time images based on cycling periods
(default is every 300 seconds). Same blocks updated repeatedly within a cycle only need to be sent
once thus reducing some amount of the transmission traffic load.
If the background copy does not complete within the cycling period, the next cycle isn’t started until
the prior copy load completes. This would lead to increased or higher RPOs. It also enables the
recovery point objectives to be configurable at the individual relationship level.
A freeze time value is maintained in the GMCV relationship entry. It reflects the time of the last consistent image being present at the auxiliary site. The RPO might be up to two cycles of time if the background copy completes within the cycle time. If the background copy does not complete within the cycle time, the RPO (current time minus freeze time) can exceed two cycles.
Benefits of Global Mirror with Change Volumes (a CLI sketch for enabling cycling mode follows this list):
• Bursts of host workload are smoothed over time, so much lower link bandwidths can be used.
• In the future, acceptance of higher latency on the link can lead to support for distances greater than 8,000 km.
• Almost zero impact; I/O pauses only briefly when triggering the next change volume (due to near-instant prepare).
• Less impact to source volumes during prepare, because prepare is bound by normal destage rather than a forced flush.
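A minimal CLI sketch of converting an existing, stopped Global Mirror relationship to cycling mode; the relationship name rc_db and the pre-created change volumes db_master_cv and db_aux_cv are hypothetical, and the -auxchange command is issued on the system that owns the auxiliary volume:
IBM_Storwize:V009B:V009B1-admin>chrcrelationship -masterchange db_master_cv rc_db
IBM_Storwize:V009B:V009B1-admin>chrcrelationship -auxchange db_aux_cv rc_db
IBM_Storwize:V009B:V009B1-admin>chrcrelationship -cycleperiodseconds 300 rc_db
IBM_Storwize:V009B:V009B1-admin>chrcrelationship -cyclingmode multi rc_db
IBM_Storwize:V009B:V009B1-admin>startrcrelationship rc_db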
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Since change volumes are space-efficient volumes, they are the same size as the primary and auxiliary volumes.
When Global Mirror operates in cycling mode, after the initial background copy, changes are tracked and the data is captured on the master volume, and the changed data is copied to intermediate change volumes using FlashCopy point-in-time copy technology. This process does not require the change volume to copy the entire content of the master volume; instead, it only has to store data for regions of the master volume that change until the next capture step.
The primary change volume is then replicated to the secondary Global Mirror volume at the target
site periodically, which is then captured in another change volume on the target site. This provides
an always consistent image at the target site and protects your data from being inconsistent during
resynchronization.
The mapping between the two sites is updated on the cycling period (60 seconds to 1 day). This
means that the secondary volumes are much further behind the primary volume, and more data
must be recovered in the event of a failover. Because the data transfer can be smoothed over a
longer time period, however, lower bandwidth is required to provide an effective solution.
The data stored on the change volume is the original data from the point that FlashCopy captured
the master volume, and allows the system to piece together the whole master volume state from
that time.
Uempty
The data captured here uses FlashCopy to provide data consistency; using this consistent, point-in-time copy, the changes can be streamed to the DR auxiliary site during the next copy step. FlashCopy pauses I/Os to the master volume while generating the consistent point-in-time image. This is visible as a single spike in read and write response time. The spike can be a few tens of milliseconds for volumes being individually replicated, or up to a second or more if volumes are being replicated as part of a large (100-volume or more) application. More on this in a bit.
Simultaneously, the process captures the DR volume's data onto a change volume on the DR site
using FlashCopy. This consistently captures the current state of the DR copy, ensuring we can
revert to a known good copy if connectivity is lost during the next copy step. The data stored on the
change volume on the DR site will be regions changed on the DR copy during the next copy step,
and will consist of the previous data for each region, allowing the reversion of the whole DR copy
back to the state at this capture.
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
There are advantages and disadvantages between synchronous and asynchronous remote copy operations:
• All synchronous copies over remote distances can have a performance impact on host applications. This impact is related to the distance between the primary and secondary volumes and, depending on application requirements, its use might be limited based on the distance between sites. The distance between the two sites is limited by the latency and bandwidth of the communication link, along with the latency the host application can tolerate. Applications are fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance.
• In asynchronous copying, if a failover occurs, certain updates (data) might be missing at the secondary. The application must have an external mechanism for recovering the missing updates, if possible. This mechanism can involve user intervention.
Recovery on the secondary site involves starting the application on this recent backup and then rolling forward or backward to the most recent commit point.
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
This topic examines the functions of creating a Metro Mirror or Global Mirror relationship and partnership.
Uempty
Metro Mirror and Global Mirror partnerships define an association between a local cluster and a
remote cluster. Each cluster can maintain up to three partnerships, and each partnership can be
with a single remote cluster. Up to four clusters can be directly associated with each other.
Clusters also become indirectly associated with each other through partnerships. If two clusters
each have a partnership with a third cluster, those two clusters are indirectly associated. A
maximum of four clusters can be directly or indirectly associated.
Multi-cluster mirroring enables the implementation of a consolidated remote site for disaster
recovery. It also can be used in migration scenarios with the objective of consolidating data centers.
A volume can be in only one Metro or Global Mirror relationship - which defines the relationship to
be at most with two clusters. Up to 8192 relationships (mix of Metro and Global) are supported per
cluster.
Uempty
(Figure: partnership topologies - a star topology in which system A can be a central DR site for the three other locations; a fully connected mesh (for example A → B, A → C, and B → C); a daisy chain of A, B, C, and D one after the other; and combinations such as A → B, A → C, A → D, B → D, and C → D)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Multiple system mirroring allows for various partnership topologies. Each Storwize V7000 system
can maintain up to three partner system relationships, which allows as many as four systems to be
directly associated with each other. This Storwize V7000 partnership capability enables the
implementation of disaster recovery (DR) solutions.
By using a star topology, you can migrate applications by using a process, such as the process that
is described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
In a fully connected mesh, every system has a partnership with each of the three other systems. This topology allows volumes to be replicated between any pair of systems, for example: A → B, A → C, and B → C.
All of the preceding topologies are valid for the intermix of the IBM SAN Volume Controller with the
Storwize V7000 if the Storwize V7000 is set to the replication layer and running IBM Spectrum
Virtualize code 6.3.0 or later.
Uempty
(Figure: Remote Copy partnerships among four Storwize V7000 systems, each presenting volumes - system A has layer = replication, while systems B, C, and D have layer = storage)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
A Storwize V7000 system can be configured with one and only one other Storwize V7000 system to
form a partnership relationship for intercluster mirroring. The partnership is defined on both systems.
To facilitate replication partnership of volumes between two Storwize V7000 systems, the system
has to have a layer attribute value of either replication or storage. In addition, the layer attribute is
also used to enable one Storwize system to virtualize and manage another Storwize system.
The rules of usage for the layer attribute are:
• The Storwize V7000 only operates with a layer value of replication; its layer value cannot be
changed.
• The Storwize system has a default layer value of storage.
• A Remote Copy partnership can only be formed between two partners with the same layer
value. A partnership between a Storwize V7000 and a Storwize system requires the Storwize
system to have a layer value of replication.
• A Storwize V7000 cluster can virtualize a Storwize system only if the Storwize system has a
layer value of storage.
• A Storwize system with a layer value of replication can virtualize another Storwize system with a
layer value of storage.
Uempty
• If the connection is broken between the IBM Storwize V7000 systems that are in a partnership,
all (intercluster) MM/GM relationships enter a Disconnected state.
Uempty
partnership
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Use the lspartnershipcandidate command to list the systems that are available for setting up a
two-system partnership. This command is a prerequisite for creating MM/GM relationships. This
command is not supported on IP partnerships. Use mkippartnership for IP
connections.
To create an IBM Storwize V7000 system partnership, use the mkfcpartnership command for
traditional Fibre Channel (FC or FCoE) connections or mkippartnership for IP-based
connections. To establish a fully functional MM/GM partnership, you must issue this command on
both systems. This step is a prerequisite for creating MM/GM relationships between volumes on the
IBM Storwize V7000 systems.
When the partnership is created, you can specify the bandwidth to be used by the background copy
process between the local and remote IBM Storwize V7000 system. If it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to
the bandwidth that can be sustained by the intercluster link.
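A sketch of the sequence, assuming a remote system named V009C and a Fibre Channel partnership; the bandwidth and copy rate values are examples only, and the mkfcpartnership command must also be repeated on the remote system:
IBM_Storwize:V009B:V009B1-admin>lspartnershipcandidate
IBM_Storwize:V009B:V009B1-admin>mkfcpartnership -linkbandwidthmbits 400 -backgroundcopyrate 50 V009C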
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Use the lssystem command to list the layer value setting of a Storwize system. The layer value defaults to storage.
Typically changing the layer value is done at initial system setup or as part of a major
reconfiguration event. In order to change the layer value, the Storwize system must not be zoned
with ports of other Storwize V7000 or Storwize systems. This might require changes to the SAN
zoning configuration.
Use the chsystem -layer command to change the layer setting to either replication or storage.
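A minimal sketch of checking and then changing the layer value (recall that the system must not be zoned with other Storwize or Storwize V7000 systems when the change is made):
IBM_Storwize:V009B:V009B1-admin>lssystem
...
layer storage
...
IBM_Storwize:V009B:V009B1-admin>chsystem -layer replication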
Uempty
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
When two volumes are paired using FlashCopy they are said to be in a mapping. When two
volumes are paired using Metro Mirror or Global Mirror, they are known to be in a relationship.
Use the mkrcrelationship command to create a new MM/GM relationship. This relationship
persists until it is deleted. If you do not use the -global optional parameter, a Metro Mirror
relationship is created instead of a Global Mirror relationship.
You can use the lsrcrelationshipcandidate command to list the volumes that are eligible to
form an MM/GM relationship.
When the command is issued, you can specify the master volume name and auxiliary system to list
the candidates that comply with the prerequisites to create a MM/GM relationship. If the command
is issued with no parameters all of the volumes that are not disallowed by another configuration
state, such as being a FlashCopy target, are listed.
An MM/GM relationship pairs two volumes (a master volume and an auxiliary volume) so that updates made by an application to one volume are mirrored on the other volume. The master volume
contains the production data for application access and the auxiliary volume is a duplicate copy to
be used in disaster recovery scenarios. For the duration of the relationship, the master and auxiliary
attributes never change. The volumes can be in the same Storwize V7000 clustered system or on
two separate Storwize V7000 systems. The two volumes must be the same size.
Uempty
For intracluster copy, they must be in the same I/O group. The master and auxiliary volume cannot
be in an existing relationship and they cannot be the target of a FlashCopy mapping. This
command returns the new relationship (relationship_id) when successful.
When a relationship is initially created, the master volume is assigned the role of the primary
volume, containing a valid copy of data for application read/write access; the auxiliary volume is
assigned the role of the secondary volume, containing a valid copy of data, but it is not available for
application write operations. The copy direction is from primary to secondary. The copy direction
can be manipulated.
A relationship that is not part of a consistency group is called a standalone relationship.
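A sketch using hypothetical volume names DATA and LOG and a remote system V009C; omitting -global creates a Metro Mirror relationship, while adding it creates a Global Mirror relationship:
IBM_Storwize:V009B:V009B1-admin>lsrcrelationshipcandidate -master DATA -aux V009C
IBM_Storwize:V009B:V009B1-admin>mkrcrelationship -master DATA -aux DATA -cluster V009C -name rc_data
IBM_Storwize:V009B:V009B1-admin>mkrcrelationship -master LOG -aux LOG -cluster V009C -global -name rc_log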
Uempty
(Figure: a consistency group containing a DATA relationship between two 30 GB volumes and a LOG relationship between two 1 GB volumes - an atomic copy of multiple volumes)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
Similar to FlashCopy, a consistency group enables the grouping of one or more relationships so
that they are manipulated in unison.
A consistency group can contain zero or more relationships. All relationships in the group must
have matching master and auxiliary clusters and the same copy direction.
Use the mkrcconsistgrp command to create an empty MM/GM Consistency Group.
The MM/GM consistency group name must be unique across all consistency groups that are known
to the systems owning this consistency group. If the consistency group involves two systems, the
systems must be in communication throughout the creation process.
The new consistency group does not contain any relationships and is in the Empty state.
You can add MM/GM relationships to the group (upon creation or afterward) by using the chrcrelationship command, or a relationship can remain a stand-alone MM/GM relationship if no consistency group is specified.
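A sketch of grouping the two relationships from the earlier hypothetical example and starting them together:
IBM_Storwize:V009B:V009B1-admin>mkrcconsistgrp -cluster V009C -name rccg_db
IBM_Storwize:V009B:V009B1-admin>chrcrelationship -consistgrp rccg_db rc_data
IBM_Storwize:V009B:V009B1-admin>chrcrelationship -consistgrp rccg_db rc_log
IBM_Storwize:V009B:V009B1-admin>startrcconsistgrp -primary master rccg_db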
Uempty
• Supported:
ƒ Partnerships between SVC systems
ƒ Partnerships between Storwize V7000 systems
ƒ Partnerships between an SVC and a Storwize V7000 require both systems to be running software code 6.3.0 or later
• Not supported: combinations outside the compatibility table
(Table: supported and unsupported software-level combinations)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
This IBM support document provides a compatibility table for Metro Mirror and Global Mirror
relationships between SAN Volume Controller and Storwize family system software versions. A
partnership can be formed across Storwize V7000 clusters and Storwize systems (with layer set to
replication) running differing code levels.
An up-to-date code level compatibility matrix for intercluster Remote Copy is maintained by
Storwize V7000 support personnel.
There are some limitations that apply. Note 1 references that Global Mirror with Change Volumes
(GMCV) relationships are not supported between 7.1.0 and earlier levels and 7.5.0 and later levels.
This restriction does not affect Metro Mirror or Global Mirror (cycling mode set to 'none')
relationships. A concurrent upgrade path is available by upgrading the down level system to 7.2.0
or later first.
When planning an upgrade, refer to the concurrent compatibility references and release notes for
the new software level for any additional restrictions that may apply.
Uempty
(Figure: three Storwize V7000 clusters - direct partnerships between A and B and between B and C, with no direct partnership between A and cluster C)
Spectrum Virtualize Copy Services: Remote Copy © Copyright IBM Corporation 2012, 2016
In a network of connected systems, the code level of each system must be supported for interoperability with the code level of every other system in the network. This applies even if there is no direct partnership in place between systems. For example, in the figure, even though system A has no direct partnership with system C, the code levels of A and C must be compatible, as must those of the directly partnered pairs A-B and B-C.
• Stop the relationship or the write synchronization between the two volumes. If the auxiliary
volume was granted write access with the stop command, it transitions to the idling state;
otherwise it is placed in the consistent_stopped state.
Switch:
• Reverse the direction of the copy so that the auxiliary volume becomes the primary volume for
the copy process.
• Remote mirroring is also referred to as Remote Copy (rc); hence, all of the commands reflect the "rc" acronym (a summary sketch of the rc command family appears below).
• Similar to FlashCopy, SNMP traps can be generated on state change events.
When the two clusters can communicate, the clusters and the relationships spanning them are
described as connected. When they cannot communicate, the clusters and relationships spanning
them are described as disconnected.
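As referenced above, the following is a brief, non-exhaustive sketch of the rc command family used throughout this unit; the relationship name rc_rel_example is a placeholder:
startrcrelationship rc_rel_example                (start or restart mirroring)
stoprcrelationship -access rc_rel_example         (stop mirroring; -access grants write access to the auxiliary volume)
switchrcrelationship -primary aux rc_rel_example  (reverse the copy direction)
lsrcrelationship rc_rel_example                   (display relationship details)
lsrcrelationshipprogress rc_rel_example           (monitor background copy progress)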
This topic examines the functions of the Remote Copy services Metro Mirror and Global Mirror.
Figure: Recommended Remote Copy zoning - hosts, controllers, and the node ports of both clusters (NODE 1 through NODE 4) zoned across dual-redundant fabrics.
Designed for disaster recovery, Remote Copy allows volumes at a primary site to be mirrored to a secondary site. By using the Metro Mirror and Global Mirror Copy Services features, you can set up a relationship between two volumes so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same Storwize V7000 clustered system or on two separate Storwize V7000 systems.
The graphic illustrates the recommended zoning guidelines for Remote Copy:
• For each node that is to be zoned to a node in the partner system, zone exactly two Fibre
Channel ports.
• For a dual-redundant fabric, split the two ports from each node between the dual fabric so that
exactly one port from each node is zoned with the partner nodes.
Figure: Intercluster connectivity - a Storwize V7000 cluster at each site virtualizes its managed disks (MDisks) from back-end storage such as DS3000, DS4000, DS5000, DS8000, XIV, and flash systems, with the two SAN fabrics joined by an interswitch link (ISL). Transmission options include FCP, FCIP, and TCP/IP with WAN acceleration.
The SAN fabrics at the two sites are connected with interswitch links (ISL) or SAN distance
extension solutions. For testing or continuous data protection purposes, intracluster mirroring
operations are also supported.
Implicit with connecting the two clusters with ISLs is that the two fabrics of the two clusters must
merge (excluding non-fabric merge solutions from SAN vendors), that is, no switch domain ID
conflicts, no conflicting switch operating parameters, and no conflicting zone definitions. A zone
definition containing all the Storwize V7000 ports of both clusters must be added to enable the two
Storwize V7000 cluster nodes to communicate with one another.
The ISL is also referred to as the intercluster link. It is used to control state changes and coordinate
updates.
The maximum bandwidth for the background copy processes between the clusters must be specified. Set this value to less than the bandwidth that can be sustained by the intercluster link. If the parameter is set to a higher value than the link can sustain, the background copy processes use only the actual available bandwidth.
A mirroring relationship defines a pairing of two volumes with the expectation that these volumes
will contain exactly the same data through the mirroring process.
Figure: FCIP distance extension - the ISL between the two Storwize V7000 sites is tunneled through an IP WAN. The maximum round-trip latency for Global Mirror is 250 ms.
The FCIP protocol extends the distance between SANs by enabling two Fibre Channel switches to
be connected across an IP network. The IP network span is transparent to the FC connection. The
two SANs merge as one fabric as FCIP implements virtual E_Ports or a stretched ISL between the
two ends of the connection. Fibre Channel frames are encapsulated and tunneled through the IP
connection. The UltraNet Edge Router is an example of a product that implements FCIP where the
two edge fabrics merge as one.
SAN extended distance solutions where the SAN fabrics do not merge are also supported. Visit the
Storwize V7000 support page (http://www.ibm.com/storage/support/2145) for more information
regarding:
• The Cisco MDS implementation of InterVSAN Routing (IVR).
• The Brocade SAN Multiprotocol Router implementation of logical SANs (LSANs).
Distance extension using extended distance SFPs is also supported.
The term intercluster link is used generically to include the various SAN distance extension options that enable two Storwize V7000 clusters to be connected and form a partnership.
This topic shows examples of Remote Copy services Metro Mirror and Global Mirror configurations.
The Storwize V7000 management GUI supports the Remote Copy functionality with two menu
options within the Copy Services function icon:
• The Remote Copy menu option is designed to create, display, and manage consistency groups and relationships for Metro Mirror and Global Mirror.
• The Partnership menu option is designed to define, display, and manage partnerships between
Storwize V7000 clusters.
Figure: Scenario overview - define three cluster partnerships, then define one standalone Global Mirror relationship (WISKEE_GM on PINK_SWV7K) and change it to Global Mirror with Change Volumes.
To implement mirroring between two volumes from two different Storwize V7000 clusters, a cluster
partnership must be defined first. Cluster partnerships must be defined by each of the two Storwize
V7000 clusters forming the partnership.
After SAN connectivity between two Storwize V7000 clusters has been established, define the
partnership from one cluster to another using Copy Services > Partnerships, then click the New
Partnership button.
In this example, after SAN zoning has been updated, NAVY_Storwize V7000 is able to detect the
OLIVE_Storwize V7000 as a system eligible to form a partnership. The Create button is clicked to
establish the partnership from the perspective of the NAVY_Storwize V7000.
The Bandwidth value at the partnership level defines the maximum background copy rate (in MBps)
that Storwize V7000 Remote Copy would allow as the sum of background copy synchronization
activity for all relationships from the direction of this cluster to its partner. The background copy
bandwidth for a given pair of volumes is set to a maximum of 25 MBps by default. Both of these
bandwidth rates might be modified dynamically.
The GUI generates the mkpartnership command to establish a partnership with the
OLIVE_Storwize V7000. Each cluster has a cluster name and a hexadecimal cluster ID. The GUI
generated commands tend to refer to a cluster by its cluster ID instead of its name.
The partnership is now partially configured because the partnership must also be formed from the partner-to-be.
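A minimal CLI sketch of forming the fully configured partnership, assuming the -bandwidth parameter (in MBps) and run once on each cluster pointing at the other; the 100 MBps value is illustrative:
On NAVY_Storwize V7000:  mkpartnership -bandwidth 100 <OLIVE cluster name or ID>
On OLIVE_Storwize V7000: mkpartnership -bandwidth 100 <NAVY cluster name or ID>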
Repeat the same steps to establish the partnership from the OLIVE_Storwize V7000 to the
NAVY_Storwize V7000. Once completed, a fully configured partnership is said to exist between the
two clusters.
The Bandwidth value at the partnership level again defines the maximum background copy rate (in MBps) that Storwize V7000 Remote Copy allows as the sum of background copy synchronization activity for all relationships copying from this cluster to its partner. The bandwidth value does not have to be identical between the two partner clusters.
In this scenario, the OLIVE_Storwize V7000 is running with the Storwize V7000 v6.4.1.4 software level and the NAVY_Storwize V7000 is at Storwize V7000 v7.1.0.2. Partnerships have been defined from both clusters, and the partnership between the two clusters has transitioned from partially to fully configured.
The Storwize V7000 cluster name is always displayed at the root of the breadcrumb path beginning with v6.2 of the GUI. When more than one Storwize V7000 cluster is being managed, it is important to be aware of the cluster to which the GUI is connected so that any configuration manipulation is performed on the correct cluster.
Figure: The three systems - NAVY_Storwize V7000, OLIVE_Storwize V7000, and PINK_SWV7K - each in fully configured partnerships with the other two.
At this point in the example scenario, each cluster is in fully configured partnerships with its two partners.
Scenario Steps:
1. Start Relationship, write on master
2. Stop with write access, writes from both sites
3. Start Relationship, primary = aux with force
4. Stop at remote site with write access
5. Start at remote site, primary = aux with force
6. Switch mirroring direction from local site
This example illustrates a standalone Metro Mirror relationship where the NAVY_Storwize V7000 is
the local site and the OLIVE_Storwize V7000 is the remote site.
The application being mirrored is initially operating at the local site. It is then moved to the remote site (due to either planned or unplanned events). Eventually, operations for the application are returned to the local site.
Metro Mirror is a synchronous copy environment which provides for a recovery point objective of
zero (no data loss). This example illustrates the Storwize V7000 implementation of remote mirroring
along with its terminology.
Figure: The scenario steps applied to a standalone Metro Mirror relationship between WINES_M on the local cluster and WINES_A on the remote cluster, with application writes moving between sites.
The WINES_M volume on the NAVY_Storwize V7000 (local) is to be in a Metro Mirror relationship with the WINES_A volume on the OLIVE_Storwize V7000 (remote). The copy direction of the mirroring relationship is controlled by where the relationship is defined; thus, this relationship must be defined from the NAVY_Storwize V7000.
To define a relationship with the GUI from the local cluster, click Copy Services > Remote Copy >
New Relationship to open the New Relationship dialog.
Select the type of relationship desired (Metro Mirror, Global Mirror, or Global Mirror with Change
Volumes), and specify whether this relationship is an intracluster or intercluster relationship. For an
intercluster relationship, the remote cluster name needs to be chosen.
The local cluster is referred to as the master cluster and the local volume is called the master
volume. The remote cluster is referred to as the auxiliary cluster and the remote volume is called
the auxiliary volume.
From the local NAVY_Storwize V7000, the WINES_M volume is identified as the master volume.
Communication between the two clusters caused a list of eligible auxiliary volumes to be sent from
the auxiliary OLIVE_Storwize V7000 to the master NAVY_Storwize V7000.
An eligible auxiliary volume is defined to be the same size as the master volume and must not be
the auxiliary in another Remote Copy relationship.
Even though the relationship was defined from the NAVY_Storwize V7000, the relationship entry
exists in both clusters.
In the NAVY_Storwize V7000 > Copy Services > Remote Copy view, a relationship with the
default name of rcrel0 is shown with an ID of 10. This ID value is derived from the object ID of the
master volume WINES_M.
In the OLIVE_Storwize V7000 > Copy Services > Remote Copy view, a relationship with the
default name of rcrel0 is shown with an ID of 3. This ID value is derived from the object ID of the
auxiliary volume WINES_A.
Based on the choices made when the relationship was defined, the current state of the relationship is inconsistent_stopped. The content of the two volumes is not consistent and the relationship has not been started yet.
Note the value of the Primary Volume field: It indicates the current copy direction of the
relationship. The volume name listed is the ‘copy from’ volume.
The verbose format of the lsrcrelationship command provides a bit more information than the
GUI in a more compact format.
Observe the cluster and volumes are identified by either master or auxiliary. Verify the object IDs of
each volume and their correlation to the relationship object IDs between the two clusters.
The primary entry has a value of master - meaning the copy direction is from the master volume to
the auxiliary volume. The GUI, being more user friendly, plugs in the name of the master volume in
its display. Either way, the copy direction is identified by the value contained in the primary entry.
At this point, application writes to the master volume are likely occurring. The relationship copy
direction has been set but mirroring has not been started yet.
Below the dotted line, the detailed information for the volumes of the relationship is obtained using the lsvdisk command with the ID or name of the volume.
If a volume is defined in a Remote Copy relationship, the relationship name and ID are maintained
in the volume entry. Compare the two volume entries: Their RC_name value is the same, but the
RC_id value matches the ID of the individual volumes.
Note the UID of each volume for later reference.
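A minimal sketch of the commands referenced above, using the names from this example:
lsrcrelationship rcrel0   (verbose relationship details, including the primary field)
lsvdisk WINES_M           (volume details, including the RC_name and RC_id fields)
lsvdisk WINES_A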
The master volume, WINES_M, is used by a Windows application. Its current content is shown.
Generally, the auxiliary volume is not assigned to a host until the mirroring operation has been stopped. The auxiliary volume is not available for write operations and should not be made available for read operations either.
Again it should be emphasized that Remote Copy is based on block copies controlled by grains of
the owning bitmaps. The copy operation has no knowledge of the OS logical file structures. Folders
used in these examples facilitate easier before/after comparisons and are for illustrative purposes
only.
A standalone mirroring relationship can be started using the CLI startrcrelationship command.
This command can be issued from either cluster.
After starting, the lsrcrelationship command is issued on each cluster to confirm that
synchronization or background copy has begun. Until the content of the master volume has been
copied to the auxiliary volume, the relationship is in the state of inconsistent_copying.
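As a sketch of those steps, using this example's relationship name:
startrcrelationship rcrel0
lsrcrelationship rcrel0   (the state remains inconsistent_copying until the initial copy completes)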
The background copy rate for a given pair of volumes is set to 25 MBps by default and is subject to
the maximum ‘copy from’ partnership level bandwidth value since multiple background copies might
be in progress concurrently. The 25 MBps default rate can be changed with the chsystem
-relationshipbandwidthlimit parameter. Be aware that this value is controlled at the cluster
level. The changed copy bandwidth value is applicable to the background copy rate of all
relationships.
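A hedged sketch of raising that cluster-level limit (the 50 MBps value is illustrative only):
chsystem -relationshipbandwidthlimit 50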
Progress of the background copy operation can be monitored from either cluster with the
lsrcrelationshipprogress command using either the relationship name or object ID. Its output
displays the copy progress as a percentage to completion. When the progress reaches 100% or
copy completion, the command output displays a null value for the relationship object ID.
While the background copy is in progress, the status of the auxiliary volume is offline. After the copy is completed, the volume status transitions to online automatically.
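A sketch of that monitoring step, with illustrative output values:
lsrcrelationshipprogress rcrel0
id progress
10 42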
The two Remote Desktop sessions show that the content of the two volumes is identical.
The master volume might still be processing application read/write I/Os.
Since the relationship was stopped with write access, read/write I/Os are allowed on the auxiliary
volume as well.
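A sketch of the stop that was issued, using this example's relationship name; the -access parameter is what grants write access to the auxiliary volume:
stoprcrelationship -access rcrel0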
This drift in the data content of the volumes is illustrated by the different folders being written from each host on its own drive/volume.
To resume mirroring of the two volumes, a decision has to be made as to which host is deemed to
have the current data.
In the CLI commands listed above the dotted line, the volumes are out of sync. Any write activity on
either volume causes the relationship to become no longer synchronized.
Since the copy direction is ambiguous when the relationship is in the idling state, at restart, the
copy direction must be specified with the -primary parameter.
If write activity has occurred, on either or both volumes, then the two volumes need to be
resynchronized. The -force keyword must be coded to acknowledge the out of synchronization
status. Background copy is invoked to return the relationship to the consistent and synchronized
state again.
In this example, the WINES_A volume is deemed to contain the valid data and the mirroring
direction is to be reversed. The startrcrelationship command is coded with -primary aux and
-force so that the relationship can be returned to the consistent and synchronized state.
Note the primary aux value in the verbose lsrcrelationship command output - the copy
direction is now from the auxiliary volume to the master volume. The auxiliary volume is now
functioning in the primary role.
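A sketch of the restart described above, reversing the copy direction and acknowledging the out-of-sync state:
startrcrelationship -primary aux -force rcrel0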
Again, the relationship has been stopped with -access so that the content of the volumes can be
examined.
Because of the change in copy direction (primary=aux), the content of the auxiliary volume
(WINES_A) has been propagated to the master volume (WINES_M).
Another way to look at this - the WINES_A volume is now functioning in the primary role (copy
direction is primary=aux), and its content is being mirrored to the WINES_M volume (now
functioning in the secondary role).
The relationship is restarted with -force since application write activity is ongoing.
When the relationship state is consistent and synchronized, the copy direction can be changed
dynamically with the switchrcrelationship command.
In this example, the -primary master resets the copy direction to the original value at the start of
this example. Write access to the auxiliary volume, WINES_A, is removed, and reverted to the
master volume, WINES_M. The master volume is functioning in the primary role again.
Because of the change in write access capability between the two volumes in the relationship, it is
crucial that no outstanding application I/O is in progress when the switch direction command is
issued. Typically the host application would be shut down and restarted for every direction change.
The scenario illustrates that it is very easy to transfer application workloads from one site to
another, for example to serve as disaster recovery tests.
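A sketch of the direction switch described above, returning the master volume to the primary role:
switchrcrelationship -primary master rcrel0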
The direction of the copy is reset back to the original value - primary master. To summarize:
The direction of the copy operation can be reversed, perhaps as part of a graceful failover, when
the relationship is in a synchronized state. The switchrcrelationship command will only succeed
when the relationship is in one of the following states:
• Consistent_synchronized
• Consistent_stopped and in sync
• Idling and in sync
If the relationship is not synchronized, the startrcrelationship command can be used with the
-primary and -force parameters to manage the copy direction.
Define - One standalone GM relationship; change to GMCV
IBM_2145:PINK_SWV7K:PINKadmin>mkrcrelationship -master WISKEE_GM -aux WISKEE_GA -cluster NAVY_SVC -name WISK_Rel1 -global
RC Relationship, id [2], successfully created
This existing Global Mirror relationship will now be updated to cycling mode.
Figure: GUI steps to create the master change volume and add it, along with its FlashCopy mappings, to the relationship at the master system.
Use the GUI to allocate the master change volume by right-clicking the relationship entry at the
master system site (PINK_SWV7K). Select Global Mirror Change Volumes > Create New to
cause the GUI to generate the appropriate mkvdisk command to create a Thin-Provisioned change
volume, based on the size and pool of the master volume. The chrcrelationship command is then used to add the newly created master change volume to the relationship.
The relationship can be active (does not need to be idle or stopped) to add a change volume.
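A hedged CLI sketch of the equivalent steps, assuming the -masterchange parameter of chrcrelationship; the pool name and size are placeholders that would match the master volume:
mkvdisk -mdiskgrp <master_volume_pool> -size <master_volume_size> -unit gb -rsize 2% -autoexpand -name WISKEE_GMFC -iogrp 0
chrcrelationship -masterchange WISKEE_GMFC WISK_rel1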
The newly created change volume can be seen in the Master Change Volume column of the
relationship entry - notice its default name.
Right-click the relationship entry to select Global Mirror Change Volumes > Properties (Master)
to view details of the change volume. Notice it is already defined in two FlashCopy mappings.
The volume Edit interface can be used to change the default name of the change volume to one
that is more relevant to the specific Global Mirror relationship.
Figure: GUI steps to create the auxiliary change volume and add it, along with its FlashCopy mappings, to the relationship at the auxiliary system.
A change volume is also needed at the auxiliary cluster site (NAVY_SVC) of the relationship.
Right-click the relationship entry then select Global Mirror Change Volumes > Create New to
cause the GUI to generate the appropriate mkvdisk command to create a Thin-Provisioned change volume of the same size and in the same pool as the auxiliary volume. The chrcrelationship command is then used to add the newly created auxiliary change volume to the relationship.
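The equivalent CLI sketch for the auxiliary site, assuming the -auxchange parameter of chrcrelationship and run against the auxiliary cluster:
chrcrelationship -auxchange WISKEE_GAFC WISK_rel1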
The newly created change volume can be displayed in the Auxiliary Change Volume column of
the relationship entry.
Right-click the relationship entry to select Global Mirror Change Volumes > Properties (Auxiliary)
to view details of this change volume.
Again a more appropriate name can be assigned to the newly created auxiliary change volume.
Notice this volume is already defined in two FlashCopy mappings as well.
Go to the Copy Services > FlashCopy Mappings view of each partnership system to view the two
FlashCopy mappings automatically defined by SVC Remote Copy as a result of adding change
volumes to the Global Mirror relationship.
Examine the background copy rates of the two mappings at each site:
• Mapping with a background copy rate of 0: For each cycle, this mapping provides a snapshot or consistent point-in-time image of the source volume (either the master or the auxiliary volume).
• Mapping with a background copy rate of 50: For each cycle, this mapping provides a means to recover the source volume (master or auxiliary volume) to the prior recovery point if needed. Under normal circumstances this mapping is not started.
Examine the details of both FlashCopy mappings associated with the master volume and note that
they are controlled by SVC Remote Copy (rc_controlled, yes).
When a cycle is automatically started, the delta of changed blocks (grains) of the master volume is
identified for transmission from the master cluster to the auxiliary cluster.
On-going application writes might be occurring on the master volume during transmission. The
snapshot FlashCopy mapping (from the master volume to its change volume) is automatically
started at the beginning of the cycle so that incoming writes to the master volume cause
copy-on-write blocks (COWs) to be written to its change volume.
The changed blocks (grains) being sent are read through the master change volume. The actual data is obtained from the master volume (if no subsequent writes occurred) or from its change volume (if the blocks have been updated on the master volume after the cycle began).
The FlashCopy mapping from the master change volume to the master volume is only used if a
recovery situation is detected. The point-in-time snapshot change volume can be used to recover
the content of the master volume.
When a cycle is automatically started, a signal is sent to the auxiliary site to prepare for the
incoming changed blocks (grains) to be written to the auxiliary volume.
Because the auxiliary volume will be updated with new blocks of data, the snapshot FlashCopy
mapping (from the auxiliary volume to its change volume) is automatically started at the beginning
of the cycle to provide a consistent recovery point for the auxiliary volume. Incoming writes to the
auxiliary volume cause copy-on-write blocks (COWs) to be written to its change volume.
The FlashCopy mapping from the auxiliary change volume to the auxiliary volume is only used if a
recovery situation is detected. The point-in-time snapshot change volume content can be used to
recover the content of the auxiliary volume to its prior recovery point.
To change the cycling mode from the default of none to multi, the relationship needs to be in the idling or stopped state.
The cycling period defaults to 300 seconds, with a valid range of 60 to 86400 seconds (86400 being 24 hours).
This example decreases the cycling period to 180 seconds to have more frequent recovery points, and to illustrate what happens if the copy time required exceeds the cycle interval.
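A sketch of that change, assuming the -cyclingmode and -cycleperiodseconds parameters of chrcrelationship:
chrcrelationship -cyclingmode multi -cycleperiodseconds 180 WISK_rel1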
The updated cycling mode parameters are reflected in the relationship entry of both the master and
auxiliary clusters.
The relationship is now in cycling mode. Grains associated with updated blocks on the master
volume will be identified and transmitted to the auxiliary cluster automatically every 180 seconds.
In addition to the cycle period and cycling mode data in the verbose output of the relationship, examine the progress value. Quite a bit of write activity has transpired on the master volume, and these changed blocks need to be copied to the auxiliary volume once the relationship is started.
The relationship now contains a freeze time. For cycling mode, freeze time is the time of the last
consistent image on the auxiliary volume.
IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationshipprogress 4
id progress
4 93
IBM_2145:NAVY_SVC:NAVYadmin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring,start_time,rc_controlled
1,fcmap2,4,WISKEE_GA,16,WISKEE_GAFC,,,copying,22,0,100,off,2,fcmap0,no,130808200155,yes
2,fcmap0,16,WISKEE_GAFC,4,WISKEE_GA,,,idle_or_copied,0,50,100,off,1,fcmap2,no,,yes
IBM_2145:NAVY_SVC:NAVYadmin>lssevdiskcopy -delim ,
vdisk_id,vdisk_name,copy_id,mdisk_grp_id,mdisk_grp_name,capacity,used_capacity,real_capacity,free_capacity,overallocation,autoexpand,warning,grainsize,se_copy,compressed_copy,uncompressed_used_capacity
16,WISKEE_GAFC,0,1,DS3K_SATApool,15.00GB,3.94GB,4.25GB,319.20MB,352,on,80,256,yes,no,3.94GB
IBM_2145:NAVY_SVC:NAVYadmin>svqueryclock
Thu Aug 8 20:04:53 CDT 2013
IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationshipprogress 4
id progress
4 96
Once the relationship has been started, grains representing changed blocks from the master
cluster are transmitted to the auxiliary cluster automatically for each cycle.
At the master cluster, the master volume to its change volume FlashCopy mapping is started. It is in
the copying state so that subsequent writes to the master volume cause COW blocks to be copied
to the master change volume. The mapping start time provides an indication of the start time of the
cycling period.
At the auxiliary cluster, the auxiliary volume to its change volume FlashCopy mapping is in the
copying state as well. Before changed blocks are written to the auxiliary volume, its COW blocks
are first copied to the auxiliary change volume.
Recall that the GUI-created change volumes are Thin-Provisioned. As writes occur, the capacity of the Thin-Provisioned target automatically expands.
In this example, the amount of changed blocks to be copied is taking longer than the cycling period
of 180 seconds.
IBM_2076:PINK_SWV7K:PINKadmin>lsrcrelationship 2
id 2
name WISK_rel1
master_cluster_name PINK_SWV7K
master_vdisk_id 2
master_vdisk_name WISKEE_GM
aux_cluster_name NAVY_SVC
aux_vdisk_id 4
aux_vdisk_name WISKEE_GA
primary master
state consistent_copying
progress 95
freeze_time 2013/08/08/20/01/55
status online
sync
copy_type global
cycle_period_seconds 180
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name WISKEE_GMFC
aux_change_vdisk_id 16
aux_change_vdisk_name WISKEE_GAFC
When the time needed to complete the background copy is greater than the cycling period, the copy is allowed to complete. The next cycle is started immediately after the completion of the previous cycle.
At copy completion, the freeze time is updated with the start time of the just completed cycle. Recall
the freeze time for cycling mode is the time of the last consistent image on the auxiliary volume; or
the recovery point.
The implication of not being able to complete the copy of the changed blocks within the cycling period (due to either too much changed data or not enough bandwidth) is that the freeze time is not updated, and thus the auxiliary volume content is at a previous, older recovery point.
Review the relationship details again - the freeze time, or recovery point, has been updated with the start time of the just completed cycle.
The state of a started relationship in cycling mode is always consistent_copying, even when the relationship progress is at 100%.
In between background copy cycles, the SVC Remote Copy function performs housekeeping that
readies the environment for the next background copy cycle.
For a given relationship, the snapshot FlashCopy mapping at each site is stopped. The allocated
capacity of the target change volumes from the previous cycle is freed.
As with the previous Metro Mirror example, the lsrcrelationship command output from both clusters confirms that this Global Mirror relationship has been defined with the volumes not yet synchronized.
The copy direction is from master - which is from the WISKEE_GM volume on the PINK_SWV7K to
the WISKEE_GA volume on the NAVY_SVC.
To start a relationship from the GUI, go to Copy Services > Remote Copy, right-click the desired
relationship entry and select Start from the pop-up list.
In the Metro Mirror example, the relationship was defined with the GUI and started with the CLI. So
with this example, we are showing the opposite, the relationship was defined with the CLI, and we
are using the GUI to start it.
The Remote Copy background copy or synchronization progress can be monitored from either
cluster.
The Monitoring > Performance view provides real-time I/O statistics. Given no other activity
occurring on the PINK_SWV7K system, the MDisks read bandwidth of 25 MBps is consistent with
the background copy relationship bandwidth limit. Data is read from the extents of the master
volume on the PINK_SWV7K and sent to the partner cluster.
Likewise, given no other activity occurring on the NAVY_SVC cluster, the MDisks write bandwidth of 25 MBps again confirms the background copy relationship bandwidth limit. Data is written to the extents of the auxiliary volume on NAVY_SVC.
Figure 10-76. Background copy completed: Stop relationship with write access
The relationship can be stopped from either cluster. Right-click the relationship entry and select
Stop from the pop-up list. As discussed previously, the write access can be selected so that the
content of the auxiliary volume can be verified.
Traditional Global Mirror (without change volumes) implements asynchronous continuous copy to maintain a consistent image on the auxiliary volume that is within seconds of the master volume, providing a low recovery point objective (RPO).
This requires a network that supports peak write workloads as well as minimal resource contention at both sites. Insufficient resources or network congestion might result in error code 1920 and thus stopped Global Mirror relationships.
The link tolerance function represents the number of seconds that the primary cluster tolerates slow response time from its partner. The default is 300 seconds. When the poor response extends past the specified tolerance, a 1920 error code is logged and one or more Global Mirror relationships are stopped.
Global Mirror with change volumes (also referred to as cycling mode) uses FlashCopy as a means
to mitigate peak bandwidth requirements, but at the expense of higher recovery point objectives
(RPOs). It does enable the RPO to be configurable at the individual relationship level.
Figure: Copy-type conversions between PINK_SWV7K and NAVY_SVC:
• v6.3 - Change between GM and GMCV without incurring an initial copy
• v7 - Change between MM and GM without incurring an initial copy
While a new Global Mirror relationship with change volumes can be created, existing Global Mirror
relationships can also be changed to cycling mode to avoid the overhead of resynchronizing
volumes that already contain consistent and synchronized data.
The change to cycling mode is permitted as long as the relationship state is idling or stopped.
Changing from cycling mode to non-cycling (or traditional) is also supported.
Beginning with v7, changing between Global Mirror and Metro Mirror is also available.
These options to change the copy type of existing relationships provide operational flexibility, reduce complexity, and ensure continued availability.
To change between MM and GM, both partners must be at v7.
IBM_2145:NAVY_SVC:NAVYadmin>lspartnership OLIVE_SVC
. . .
code_level 6.4.1.4 (build 75.3.1303080000)
. . .
Note that changing the relationship type between Metro Mirror and Global Mirror (or vice versa) is a v7 enhancement and requires both partners to be at a minimum v7 code level.
Figure: The intercluster links fail and the partner clusters are unable to communicate. The ensuing pages examine the WINES_M/WINES_A relationship during the outage.
If all the intercluster links fail between two clusters, then communication is no longer possible
between the two clusters in the partnership. This section examines the state of relationships when a
pair of SVC clusters is disconnected and no longer able to communicate.
To minimize the potential of link failures, it is best practice to have more than one physical link
between sites. These links need to have a different physical routing infrastructure such that the
failure of one link does not affect the other links.
IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
3,rcrel0,0000020063617C80,NAVY_SVC,10,WINE_M,0000020062C17C56,OLIVE_SVC,3,WINES_A,master,,,consistent_synchronized,50,,metro,none,
We will focus on the Metro Mirror relationship between the NAVY_SVC and OLIVE_SVC to study the relationship behavior when the clusters of a partnership can no longer communicate.
The Metro Mirror relationship, rcrel0, exists between the master volume WINES_M and the
auxiliary volume WINES_A. The copy direction is from the master volume to the auxiliary volume
(primary=master).
Prior to the connectivity failure between the two clusters, the relationship is in the
consistent_synchronized state when viewed from both clusters.
A total link outage or connectivity failure between the NAVY_SVC and OLIVE_SVC causes the
cluster partnership to change from fully_configured to not_present. This is shown in the
lspartnership output for both clusters after connectivity was lost.
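A hedged sketch of what that check might look like (columns abbreviated; the exact layout varies by code level):
IBM_2145:NAVY_SVC:NAVYadmin>lspartnership
id               name      location partnership
0000020063617C80 NAVY_SVC  local
0000020062C17C56 OLIVE_SVC remote   not_present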
IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim ,
id,name,master_cluster_id,master_cluster_name,master_vdisk_id,master_vdisk_name,aux_cluster_id,aux_cluster_name,aux_vdisk_id,aux_vdisk_name,primary,consistency_group_id,consistency_group_name,state,bg_copy_priority,progress,copy_type,cycling_mode,freeze_time
3,rcrel0,0000020063617C80,NAVY_SVC,10,WINE_M,0000020062C17C56,OLIVE_SVC,3,WINES_A,master,,,consistent_disconnected,50,,metro,none,2013/08/09/11/00/29
After a total link failure between the two clusters, the copy direction of the relationship does not
change (primary=master) but changed data of the master volume can no longer be sent to the
auxiliary volume.
Examine the rcrel0 relationship state on each cluster:
• On the NAVY_SVC it is in the idling_disconnected state. Mirroring activity for the volumes is
no longer active because changes can no longer be sent to the auxiliary volume.
• On the OLIVE_SVC it is in the consistent_disconnected state. At the time of the disconnect,
the auxiliary volume was consistent but it is no longer able to receive updates.
Even though updates are no longer being sent to the auxiliary volume, the changes are tracked by
the mirroring relationship bitmap so that the two volumes can be resynchronized once the
connectivity between the clusters is recovered.
After the link outage, the host using the WINES_M volume continues to operate normally, reading
and writing data on the volume.
Remote copy is an SVC internal function and is totally transparent to the host application.
The WINES_A auxiliary volume is no longer able to obtain updates occurring on the WINES_M
master volume.
The relationship on the OLIVE_SVC captures the date and time of the connectivity failure as
freeze_time when its state changed to the consistent_disconnected state. The freeze time is the
recovery point of the WINES_A volume content - it is the last known time when data was consistent
with the master volume.
The progress value is unknown from this cluster. It has no indication as to how much write activity has occurred on the master volume.
It is recommended that at this time (or some time prior to restarting the relationship), a FlashCopy of the auxiliary volume be taken to a Thin-Provisioned target volume with a copy rate of zero, and kept until the relationship state is consistent_synchronized again. This approach avoids a "rolling disaster" where a second outage during the resynchronization would cause the auxiliary volume to be in a corrupted state.
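A hedged sketch of taking that protective FlashCopy at the OLIVE_SVC site before resynchronization; WINES_A_PROTECT is a hypothetical Thin-Provisioned target volume created for this purpose:
mkfcmap -source WINES_A -target WINES_A_PROTECT -copyrate 0 -name protect_map
startfcmap -prep protect_map
The mapping and target can be discarded once the relationship returns to the consistent_synchronized state.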
Examine the relationship detail after connectivity between the two clusters has been restored. From the master cluster, note that:
• The relationship is in a consistent_stopped state. It is not automatically restarted.
• The progress of 61 percent indicates 39 percent of the grains on the WINES_M master volume need to be copied to the WINES_A auxiliary volume.
• The freeze time of the auxiliary volume's consistent_disconnected time has been obtained from the relationship on the auxiliary cluster (since connectivity of the clusters has been restored, enabling this freeze time value to be transmitted).
• The relationship is out of sync.
Examine the relationship detail after connectivity between the two clusters has been restored. From the auxiliary cluster, note that:
• The relationship is in a consistent_stopped state. It is not automatically restarted.
• The progress of 61 percent indicates 39 percent of the grains on the WINES_M master volume need to be copied to the WINES_A auxiliary volume.
• The relationship stopped state allows a FlashCopy to be used at the OLIVE_SVC site to capture the data on the WINES_A volume as of the freeze time, before restarting the mirroring relationship.
IBM_2145:NAVY_SVC:NAVYadmin>lsrcrelationship -delim , 10
id,10
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,inconsistent_copying
bg_copy_priority,50
progress,62
freeze_time,
status,online
sync,
........
IBM_2145:OLIVE_SVC:OLIVEadmin>lsrcrelationship -delim , 3
id,3
name,rcrel0
master_cluster_id,0000020063617C80
master_cluster_name,NAVY_SVC
master_vdisk_id,10
master_vdisk_name,WINE_M
aux_cluster_id,0000020062C17C56
aux_cluster_name,OLIVE_SVC
aux_vdisk_id,3
aux_vdisk_name,WINES_A
primary,master
consistency_group_id,
consistency_group_name,
state,inconsistent_copying
bg_copy_priority,50
progress,64
freeze_time,
status,online
sync,
........
Because I/O activity has occurred, the relationship is out of sync and must be started with -force. Restarting the relationship causes the changed content (39% of the grains) of the primary/master volume to be copied to the auxiliary volume.
During this background copy, the relationship is in the inconsistent_copying state. Recall the
auxiliary volume is set offline during this state and will not be brought online until the copy
completes and the relationship returns to the consistent_synchronized state.
Once the state of the volumes is consistent_synchronized, any FlashCopy taken of the
WINES_A volume can now be discarded.
To verify the content of the auxiliary volume, the relationship is stopped again with write access.
The WINES_A volume is mapped to the OLIVEWIN1 host and an inspection of the drive content
confirms the data is identical to the WINES_M master volume.
It might be worthwhile to reiterate that all SVC background copy operations (FlashCopy, Remote
Copy, and Volume Mirroring) are based on block copies controlled by the grains of the owning
bitmaps. SVC is a block level solution, so by design (and actually per industry standards) these
copy operations have no knowledge of OS logical file structures. Folders used in these examples
facilitate easier before/after comparisons and are for illustrative purposes only.
Figure: Interaction of virtualization features - in FlashCopy, a volume acts as a source or target; in Mirroring (Metro/Global Mirror), a volume acts as a primary or secondary.
As shown in the figure above, a FlashCopy target volume can also participate in a Metro/Global Mirror relationship. The constraints on how these functions can be used together are:
• A FlashCopy mapping cannot be manipulated to change the contents of the target volume of
that mapping when the target volume is the primary volume of a Metro Mirror or Global Mirror
relationship that is actively mirroring.
• A FlashCopy mapping must be in the idle_copied state when its target volume is the
secondary volume of a Metro Mirror or Global Mirror relationship.
• The two volumes of a given FlashCopy mapping must be in the same I/O group; when the
target volume is also participating in a Metro/Global Mirror relationship.
For details refer to Storwize V7000 InfoCenter > Product overview > Technical overview >
Copy Services features.
Listed here are the most up-to-date (as of this publication) Copy Services configuration limits. The configuration limits and restrictions specific to IBM Spectrum Virtualize software version 7.6 code are available by way of the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004369.
Keywords
• FlashCopy
• FlashCopy mapping
• Consistency groups
• Copy rate
• Master
• Auxiliary
• Partnership
• Remote Copy
• Relationship
• Metro Mirror
• Synchronous
• Global Mirror
• Asynchronous
• Global Mirror without cycling
• Global Mirror with cycling and change volume
• Synchronization
• Freeze time
• Cycle process
Review questions (1 of 2)
1. True or False: Upon the restart of a Remote Copy
relationship, a 100% background copy is performed to
ensure the master and auxiliary volumes contain the same
content.
Review answers (1 of 2)
1. True or False: Upon the restart of a Remote Copy
relationship, a 100% background copy is performed to
ensure the master and auxiliary volumes contain the same
content.
The answer is false.
Review questions (2 of 2)
3. True or False: Metro Mirror is a synchronous copy
environment which provides for a recovery point objective of
zero.
Review answers (2 of 2)
3. True or False: Metro Mirror is a synchronous copy
environment which provides for a recovery point objective of
zero.
The answer is true.
Unit summary
• Summarize the use of the GUI/CLI to establish a cluster partnership,
create a relationship, start remote mirroring, monitor progress, and
switch the copy direction
• Differentiate among the functions provided with Metro Mirror, Global
Mirror, and Global Mirror with change volumes
Overview
This unit examines administrative management options that assist you in monitoring, troubleshooting, and servicing a Storwize V7000 environment. This unit also highlights the importance of the Service Assistant Tool and introduces IBM Spectrum Storage offerings.
References
IBM Storwize V7000 Implementation Gen2
http://www.redbooks.ibm.com/abstracts/sg248244.html
Unit objectives
• Recognize system monitoring features to help maintain node and component availability
• Evaluate and filter administrative task command entries that are captured in the audit log
• Employ system configuration backup and extract the backup files from the system using the CLI or GUI
• Summarize the benefits of an SNMP, syslog, and email server for forwarding alerts and events
• Recall procedures to upgrade the system software and drive microcode firmware to a higher code level
• Identify the functions of the Service Assistant tool for management access
• List the benefits of IBM Spectrum Storage offerings
• Access
• Settings
• Service Assistant (SA)
• IBM Spectrum Storage
This topic discusses the system monitoring, event log detection, and performance monitoring of the Storwize V7000 environment.
System Details
• The System Details option has been removed from the Monitoring menu.
  - Modified information is still available directly from the System (dynamic) panel.
• Monitoring > System
  - Determine system status
  - Dynamic view of system capacity and operating state
    - Monitor individual nodes and attached enclosures
    - Monitor individual hardware components
Although the System Detail option is no longer part of the latest Storwize V7000 management GUI
software code, you can still view modified information on control and expansion enclosures and
various hardware components of the system through its dynamic display. From the System panel,
you can monitor capacity and view nodes details to determine whether the nodes in your system
are online. In addition, you can view individual hardware components and monitor their operating
state.
To monitor nodes in the management GUI, select Monitoring > System. Select the node that you want to monitor to view its status. For systems with multiple expansion enclosures, the number indicates the total number of detected expansion enclosures that are attached to the control enclosure. Select the expansion enclosure to display the entire rack view of these enclosures.
When an issue or warning occurs on the system, the management GUI Health Status indicator (rightmost area of the control panel) changes color. The health status indicator can be green (healthy), yellow (degraded or warning), or red (critical). Depending on the type of event that occurred, a status alert provides message information or alerts about internal and external system events, or remote partnerships.
If there is a critical system error, the Health Status bar turns red and alerts the system administrator
for immediate action. The Health Status indicator view does not change for non-critical errors. A
status alert in the form of an X widget icon can appear next to the Health Status. The status alert
provides a time stamp and brief description of the event that occurred. Each alert is a hyperlink and
redirects you to the Monitoring > Event panel for actions.
The IBM Storwize V7000 dynamic view not only allows you to monitor capacity and view component details, it also provides a visible indication of whether the system is operating in a healthy state or an issue has occurred in its operating state. The system reports all informational messages, warnings, and errors related to any changes detected by the system to the event log.
Events added to the log are classified as either alerts or messages based on the following criteria:
• An alert is logged when the event requires an action. These errors can include hardware errors
in the system itself as well as errors about other components of the entire system. Certain alerts
have an associated error code, which defines the service action that is required. The service
actions are automated through the fix procedures. If configured, a call home to IBM by way of
email is generated to request assistance or replacement parts. Messages are fixed when you
acknowledge reading them and mark them as fixed. If the alert does not have an error code, the
alert represents an unexpected change in the state. This situation must be investigated to
determine whether this unexpected change represents a failure. Investigate the cause of an
alert and resolve it as soon as it is reported.
• A message is logged when a change that is expected is reported, for instance, when an array
build completes.
Each event recorded in the event log includes fields with information that can be used to diagnose
problems. Each event has a time stamp that indicates when the action occurred or the command
was submitted on the system.
When logs are displayed in the command-line interface, the time stamps for the logs in CLI are the
system time. However, when logs are displayed in the management GUI, the time stamps are
translated to the local time where the web browser is running.
Events can be filtered to sort them according to the need or export them to the external
comma-separated values (CSV) file.
The primary debug tool for Storwize V7000 is the event log, which can be accessed from the management GUI (Monitoring > Events) or by using the CLI lseventlog command.
Like the other menu options, the Events window allows you to filter the view and add many other event-related parameters.
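A minimal sketch of the CLI side, assuming lseventlog accepts a sequence number for a detailed view (the value 120 is illustrative):
lseventlog          (list the event log entries)
lseventlog 120      (display the detailed view of one entry)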
Maintenance mode
• Maintenance mode is a mechanism for preventing unnecessary messages from being sent to IBM and administrators.
  - Maintenance mode is designed to be used by the directed maintenance procedures (DMPs) rather than the administrator directly.
• The DMPs might direct the administrator to perform hardware actions which will look like an error to the system.
  - For example, removing a drive from an enclosure.
• Under these scenarios it is not helpful to send Call Home emails to IBM and event notifications to administrators.
  - To address this issue, Storwize V7000 has the concept of a maintenance mode which can be set by modifying the I/O group properties.
    - svctask chiogrp -maintenance yes
  - Maintenance mode only applies to errors in the SAS domain and the Storwize V7000 hardware.
  - The DMPs will control maintenance mode without any need for administrator action.
Many events or problems that occur in your Storwize V7000 system environment require little to no user action. Maintenance mode is a mechanism used by the directed maintenance procedures (DMPs) to prevent unnecessary messages, such as Call Home emails to IBM and event notifications to administrators, from being sent while service is in progress. Users can indicate which I/O group is to be placed in maintenance mode while carrying out service procedures on a storage enclosure by issuing the svctask chiogrp -maintenance yes command.
Once maintenance mode is entered, it remains in effect until otherwise specified. The mode can be switched off by issuing the same command with no, to ensure that legitimate events and problems are still reported.
The DMPs control maintenance mode without any need for administrator action. In any case, maintenance mode is switched off automatically after 30 minutes.
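As a minimal sketch (io_grp0 is an example I/O group name), the mode can be toggled from the CLI:
svctask chiogrp -maintenance yes io_grp0
svctask chiogrp -maintenance no io_grp0
The first command suppresses notifications for SAS-domain and enclosure hardware events in that I/O group while service is performed; the second restores normal event notification.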
Uempty
You can right-click an event and select Properties to view more specific information. In this example, an error code 1690 is generated with the alert message, which indicates that the Flash_Mdisk_02 RAID array is not protected by sufficient spares. This type of error generates a Recommended Action that requires attention and has an associated fix procedure. Alerts are listed in priority order and should be fixed sequentially by using the available fix procedures.
Uempty
Errors with an error code might direct you to carry out certain service procedures to replace a hardware component using the directed maintenance procedure (DMP) step-by-step guidance, while ensuring that sufficient redundancy is maintained in the system environment.
A Run Fix procedure is a wizard that helps you troubleshoot and correct the cause of an error.
Certain fix procedures will reconfigure the system, based on your responses; ensure that actions
are carried out in the correct sequence; and, prevent or mitigate the loss of data. For this reason,
you must always run the fix procedure to fix an error, even if the fix might seem obvious. The fix
procedure might bring the system out of a Degraded state and into a Healthy state.
During normal daily administration of the Storwize V7000, you are unlikely to see error events. However, because event messages and alerts are displayed as they occur, there might be a continuing flow of informational messages. Therefore, the Events panel typically displays only the recommended actions.
Uempty
Example (1 of 3):
Scenario of a Directed Maintenance Procedure
• Troubleshooting scenario:
Ambient temperature is greater
than warning threshold.
í System will overheat and
eventually shut down if the error
situation is not fixed.
To address the event, use the
status alert link or select
Monitoring > Event.
Select the event message and
run the Recommended Action:
Run Fix procedure.
In this scenario, a status alert indicates an unresolved event caused by a room temperature that is too high, which might cause the system to overheat and eventually shut down if the error situation is not fixed.
To run the fix procedure for the error with the highest priority, click Recommended Action at the top
of the Event page and click Run Fix Procedure. When you fix higher priority events first, the system
can often automatically mark lower priority events as fixed.
While the Recommended Actions filter is active, the event list shows only alerts for errors that have
not been fixed, sorted in order of priority. The first event in this list is the same as the event
displayed in the Recommended Action panel at the top of the Event page of the management GUI.
If it is necessary to fix errors in a different order, select an error alert in the event log and then click
Action > Run Fix Procedure.
Selecting Run Fix Procedure opens the first window of the DMP, which shows the first step of the procedure. In this example, the system reports that drive 2 (flash module 2) in slot 5 is measuring a temperature that is too high. In addition, the system has verified and reports that all four fans in both canisters are operational and online.
Uempty
Example (2 of 3):
Scenario of a Directed Maintenance Procedure
• The next step in the DMP procedure is for the administrator:
Measure the room temperature.
Make sure the ambient temperature is within the system specifications.
In the next phase of the DMP procedure, the user is asked to verify the reported event and provide a few more related inputs. In this case, it is the room temperature that needs verification.
Suggestions are provided that could be probable indications or solutions to the event. Overheating
might be caused by blocked air vents, incorrectly mounted blank carriers in a flash module slot, or a
room temperature that is too high.
Uempty
Example (3 of 3):
Scenario of a Directed Maintenance Procedure
• In this DMP procedure step, the storage system:
Checks whether the error condition is resolved.
Verifies that all events of the same type are marked as fixed, if possible.
Once the error is fixed, the system returns to a healthy status from the earlier degraded status. The event log is also updated.
Uempty
You can use fix procedures to diagnose and resolve the event error code alerts. Fix procedures
help simplify these tasks by automating as many of the tasks as possible. One or more panels
might be displayed with instructions for you to replace parts or perform other repair activity. When
the last repair action is completed, the procedures might attempt to restore failed devices to the
system. After you complete the fix, you see the statement Click OK to mark the error as fixed. Click
OK. This action marks the error as fixed in the event log and prevents this instance of the error from
being listed again.
When fixing hardware faults, the fix procedures might direct you to perform hardware actions that
look like an error to the system, for example, replacing a drive. In these situations, the fix
procedures enter maintenance mode automatically. New events are entered into the event log
when they occur. However, a specific set of events are not notified unless they are still unfixed
when exiting maintenance mode. The events that were recorded in maintenance mode are fixed
automatically when the issue is resolved. Maintenance mode prevents unnecessary messages
from being sent.
Uempty
In this example, the system event log has captured the threshold information of a thin-provisioned
volume. If a solution is not applied the storage devices might run out of physical space. Therefore,
the administrator needs to verify that the storage device has physical storage space available, and
add more physical storage as needed.
Uempty
• Settings
• Service Assistant (SA)
• IBM Spectrum Storage
This topic discusses the requirements and procedures to reset the system password.
Uempty
Logs executed action commands
The system maintains an audit log of successfully executed commands, indicating which users
performed particular actions at certain times. An audit log tracks actions that are issued through the
management GUI or the CLI. You can view the audit log entries by selecting Access > Audit Log in the GUI, or by issuing the CLI catauditlog command.
The audit log entries can be customized to display the following types of information:
• Time and date when the action or command was issued on the system
• Name of the user who performed the action or command
• IP address of the system where the action or command was issued
• Parameters that were issued with the command
• Results of the command or action (the return code of the action command)
• Sequence number
• Object identifier that is associated with the command or action
The GUI provides the advantage of filtering or searching among the audit log entries to reduce the quantity of output.
Uempty
The in-memory portion of the audit log has a capacity of 1 MB and can store about 6000 commands on
average (affected by the length of commands and parameters issued). When the in-memory log is full, its
content is automatically written to a local file on the configuration node in the /dumps/audit directory.
The catauditlog CLI command, when used with the -first parameter, returns the requested number of most recent entries in the CLI. In this example, the command returns a list of five in-memory audit log entries.
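For illustration, the two commands described here can be used as follows:
catauditlog -first 5
lsdumps -prefix /dumps/audit
The first command returns the five most recent in-memory audit log entries; the second lists the audit log files that have already been written to disk on the configuration node.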
The lsdumps command with -prefix /dumps/audit is used to list the files on disk. These files
can be downloaded from the system for later analysis should it be required by problem
determination. The file entries are in readable text format. The following commands are not
recorded in the audit log:
• All commands that failed
• dumpconfig
• cpdumps
• cleardumps
• finderr
• dumperrlog
• dumpinternallog
• svcservicetask dumperrlog
• svcservicetask finderr
Uempty
This topic discusses system monitoring and event log data collection for the Storwize V7000 environment.
Uempty
A problem determination process might require additional information from the Storwize V7000 for
analysis by IBM Support personnel. This data collection can be performed using the svc_snap
command or the GUI. The GUI provides for a simpler download of the support information.
Click Settings > Support, then click Download Support Package, select the type of support
package advised by IBM Support personnel to download and click Download.
An alternative to the management GUI is to use the Service Assistant GUI to download support
information. This path might be necessary if, due to an error condition, the management GUI is
unavailable.
Uempty
ƒ MDisk dumps
í Information about all bad blocks on a managed disk. These include migrated
medium errors and RAID kill sectors
ƒ Enclosure dumps
í A collection of debug information relevant to the hardware in the enclosure.
There are some additional commands that trigger special dumps that are more relevant to the V7000. These dumps are created using trigger commands such as triggerdrivedump or triggerenclosuredump, and the resulting files can be found in the dumps folder. IBM Support guides you on how to create these files and where to find them if required.
Uempty
ƒ The svcconfig backup command creates these files in the system /tmp directory:
í svc.config.backup.sh: Contains the names of the commands that were issued to create the backup of the system.
í svc.config.backup.log: Contains details about the backup, including any error information that might have been reported.
The Storwize V7000 system configuration data is stored on all nodes in the system and is internally
hardened so that in normal circumstances the Storwize V7000 should never lose its configuration
settings. However, in exceptional circumstances this metadata might become corrupted or lost.
You can use the CLI to trigger a configuration backup either manually on an ad hoc basis or regularly by an automated process. The svcconfig backup command generates a new backup file.
Triggering a backup using the GUI is not possible, but you can download the backup files from the GUI.
The CLI command svcconfig backup backs up the system configuration metadata in the
configuration node /tmp directory. These files are typically downloaded or copied from the system
for safekeeping. It might be a good practice to first issue the svcconfig clear -all command to
delete existing copies of the backup files and then perform the configuration backup.
The application user data is not backed up as part of this process.
The IBM Support Center should be consulted before any configuration data restore activity is
attempted.
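As a minimal command sequence for an ad hoc backup, following the practice described above:
svcconfig clear -all
svcconfig backup
The first command deletes any existing backup files from the /tmp directory on the configuration node; the second generates a new set of svc.config.backup.* files there, ready to be copied off the system.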
Uempty
Figure 11-22. Download config backup file from system using the GUI
In addition to the /tmp directory, a copy of the config.backup.xml from the svcconfig backup command is also kept in the /dumps directory.
The system also creates its own set of configuration metadata backup files automatically each day at 1 a.m. local time. These files are created in the /dumps directory and contain ‘cron’ in the file names.
Right-clicking the file entry provides another method to download backup files.
The content of the configuration backup files can be viewed using a web browser or a text
processing tool such as WordPad.
This output is from the copy of the backup file extracted from the /dumps directory using the GUI. It
contains the same data as the file in the /tmp directory.
Uempty
Figure 11-23. Example of CLI: PSCP Storwize V7000 config backup file
The backup files can be downloaded from the system using pscp and archived in concert with
installation asset protection procedures.
Run the configuration backup and download the files for archiving on a regularly scheduled basis, or at a minimum after each major change to the Storwize V7000 configuration (such as defining or changing volumes, storage pools, or host object mappings).
The content of the configuration backup files can be viewed using a web browser or text processing
tool such as WordPad.
This output is from the backup file extracted from the /tmp directory. It contains a listing of all the
objects defined in the system.
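As a sketch of the download step (the IP address and the Windows target folder are examples only):
pscp -unsafe superuser@9.100.100.100:/tmp/svc.config.backup.* C:\V7000backup\
The -unsafe option permits the wildcard copy of all backup files; you are prompted for the superuser password.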
Uempty
Figure 11-24. Network: Managing system, service IP addresses, ports and connectivity
Use the Network panel to manage the management IP addresses for the system, service IP
addresses for the nodes, and iSCSI and Fibre Channel configurations. The system must support
Fibre Channel or Fibre Channel over Ethernet connections to your storage area network (SAN).
• Management IP addresses can be defined for the system. The system supports one to four IP
addresses. You can assign these addresses to two Ethernet ports and their backup ports.
Multiple ports and IP addresses provide redundancy for the system in the event of connection
interruptions.
• The service IP addresses are used to access the service assistant tool, which you can use to
complete service-related actions on the node. All nodes in the system have different service
addresses. A node that is operating in service state does not operate as a member of the
system.
• Use the Ethernet ports panel to display and change how Ethernet ports on the system are being
used.
• From the iSCSI panel, you can configure settings for the system to attach to iSCSI-attached
hosts.
• You can use the Fibre Channel ports panel in addition to SAN fabric zoning to restrict node-to-node communication. You can specify specific ports to prevent communication between nodes in the local system or between nodes in a remote-copy partnership. This port specification is called Fibre Channel port masking.
Uempty
Storwize V7000 system management occurs across Ethernet connections using the system
management IP address owned by the configuration node. Each node has two Ethernet ports and
both can be used for system management. Ethernet port 1 must be configured. Ethernet port 2 is optional and can be used as an alternate system management interface.
The configuration node is the only node that activates the system management IP address and the
only node that receives system management requests. If the configuration node fails, another node
in the system becomes the configuration node automatically and the system management IP
addresses are transferred during configuration node failover.
If the Ethernet link to the configuration node fails (or some other component failure related to the Ethernet network occurs), the event is unknown to the Storwize V7000, so no configuration node failover is triggered. Therefore, configuring Ethernet port 2 as a management interface allows access to the system using an alternate IP address.
Use Settings > Network > Management IP Addresses to configure port 2 as the backup system IP management address. You can use the alternate IP address to access the Storwize V7000 management GUI and CLI.
The chsystemip command is used to set or change the IP address of either Ethernet port. Most of the commands that contained ‘cluster’ have been renamed to use ‘system’; for example, the earlier command to list the cluster IP addresses has been replaced with the lssystemip command to list the system IP addresses.
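For example, port 2 can be given an alternate management address as follows (example addressing shown):
chsystemip -clusterip 9.100.100.102 -gw 9.100.100.1 -mask 255.255.255.0 -port 2
lssystemip
The lssystemip command then confirms the management IP addresses that are defined on both ports.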
Uempty
Notifications
• Configure the Storwize V7000 to alert the user and IBM when new events are added to the system.
• Can choose to receive alerts about:
ƒ Errors (for example, hardware faults inside the system)
ƒ Warnings (errors detected in the environment)
ƒ Info (for example, asynchronous progress messages)
ƒ Inventory (email only)
• Alerting methods are:
ƒ SNMP traps
ƒ Syslog messages
ƒ Email call home
• Call Home to IBM is performed using email.
ƒ Will send Errors and Inventory back to an IBM email address to automatically open PMRs.
ƒ IBM will call the customer.
The Storwize V7000 uses Simple Network Management Protocol (SNMP) traps, syslog messages,
and Call Home email to notify you and the IBM Support Center when significant events are
detected. Any combination of these notification methods can be used simultaneously.
Notifications are normally sent immediately after an event is raised. However, there are events that
can occur because of service actions that are being performed. If a recommended service action is
active then these events are notified only if they are still unfixed when the service action completes.
Uempty
Notifications: Email
• Call Home support is
initiated for the following
reasons or types of data:
ƒ Problem or event
notification: Data is sent
when there is a problem or
event that might require the
attention of IBM service
personnel.
ƒ Inventory information: A
notification is sent to provide
the necessary status and
hardware information to IBM
service personnel.
The Call Home feature enables the electronic transmission of operational and error-related data to IBM and other users through a Simple Mail Transfer Protocol (SMTP) server connection, in the form of an event notification email. Call home automatically notifies IBM service
personnel when errors occur in the hardware components of the system or sends data for error
analysis and resolution. Configuring call home reduces the response time for IBM Support to
address the issues.
Configure an SMTP server to be able to send e-mails. The SMTP server must allow the relaying of
e-mails from the Storwize V7000 system IP address.
Click Settings > Notifications, then select Email and Enable Notifications to configure the email
settings, including contact information and email recipients. A test function can be invoked to verify
communication infrastructure.
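The same configuration can also be done from the CLI. The following is only a hedged sketch with placeholder values (the SMTP address, contact details, and Call Home destination are all examples; verify the exact parameters and the correct IBM destination address in the CLI guide for your code level):
mkemailserver -ip 9.100.100.50
chemail -reply storageadmin@example.com -contact "Storage Admin" -primary 5551234 -location "Lab 1"
mkemailuser -address callhome@example.com -error on -inventory on
startemail
The startemail command activates the email notification function once the server, contact, and recipient details are defined.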
Uempty
Notifications: SNMP
• Standard protocol for managing networks and exchanging messages.
• Identify servers or managers using Settings > Notifications > SNMP.
í Up to six SNMP servers can be configured.
í The SNMP server can be configured to receive all or a subset of event types.
• Use the MIB (Management Information Base) to read and interpret these Storwize V7000 events.
í Available from the Storwize V7000 support website.
The Simple Network Management Protocol (SNMP) is a standard protocol for managing networks
and exchanging messages. The system can send SNMP messages that notify personnel about an
event. You can use an SNMP manager to view the SNMP messages that the system sends. Up to
six SNMP servers can be configured.
You can also use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the system. This file can be
used with SNMP messages from all versions of the software to read and interpret these Storwize
V7000 events.
To configure the SNMP server, identify the management server IP address, remote server port number, and community name so that the SNMP messages generated by the Storwize V7000 can be viewed from the identified SNMP server. Each event detected by the Storwize V7000 is assigned a notification type of either error, warning, or information. The SNMP server can be configured to receive all or a subset of these types of events.
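As a hedged CLI sketch (example address and community string; verify the parameters for your code level), an SNMP server could be defined as follows:
mksnmpserver -ip 9.100.100.60 -community public -error on -warning on -info off
lssnmpserver
The lssnmpserver command lists the SNMP servers that are currently defined.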
Uempty
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver
on an IP network. Click Settings > Notifications, then select Syslog to identify a syslog server.
The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify
personnel about an event. Syslog error event logging is available to enable the integration of Storwize V7000 events with an enterprise’s central management repository.
The system can transmit syslog messages in either expanded or concise format. You can use a
syslog manager to view the syslog messages that the system sends. The system uses the User
Datagram Protocol (UDP) to transmit the syslog message. You can specify up to a maximum of six
syslog servers.
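Similarly, a syslog server could be defined from the CLI (example address; verify the parameters for your code level):
mksyslogserver -ip 9.100.100.61 -error on -warning on -info on
lssyslogserver
The lssyslogserver command lists the syslog servers that are currently defined.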
Uempty
Use the Security panel to configure and manage remote authentication services and encryption
settings on the system.
The system supports two methods of enhanced security for the system. With remote authentication
services, an external authentication server can be used to authenticate users to system data and
resources. User credentials are managed externally through various supported authentication
services, such as LDAP.
When you configure remote authentication, you do not need to configure users on the system or
assign additional passwords. Instead you can use your existing passwords and user groups that
are defined on the remote service to simplify user management and access, to enforce password
policies more efficiently, and to separate user management from storage management.
For availability, multiple LDAP servers can be defined. These LDAP servers must all be the same
type (for example MS AD). Authentication requests are routed to those LDAP servers marked as
Preferred unless the connection fails or a user name isn’t found. Requests are distributed across all
the defined preferred LDAP servers in round robin fashion for load balancing.
Additionally the system supports encryption of data stored on drives that are attached to the
system. To use encryption, you must obtain an encryption license and configure encryption to be
used on the system. Only Storwize V7000 Gen2 systems support encryption.
Uempty
The system provides optional encryption of data at rest, which protects against the potential
exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen
storage devices. Encryption can only be enabled and configured on enclosures that support
encryption. Encryption of system data and system metadata is not required, so system data and
metadata are not encrypted.
Uempty
Click Settings > System > Licensing to update Storwize V7000 licensed capacities. The base
software license provided with your system includes the use of its basic functions; however, the
following additional licenses can be purchased to expand the capabilities of your system.
Administrators are responsible for purchasing additional licenses and configuring the systems
within the license agreements, which includes configuring the settings of each licensed function on
the system.
The system supports capacity-based licensing that grants you a number of terabytes (TB) for
additional licensed functions. Administrators are responsible for managing use within the terms of
the existing licenses and for purchasing additional licenses when existing license settings are no
longer sufficient for the needs of their organization. In addition, the system also issues warning
messages if the capacity used for licensed functions is above 90% of the license settings that are
specified on the system.
Uempty
You can upgrade the system to the latest software and internal drives firmware levels by using the
management GUI System Status panel, Actions button. Select the code fix pack as well as the
software Upgrade Test utility. In addition to the code software, additional documentation such as release notes, flashes, and hints/tips can be accessed from this page.
The download information links to the code compatibility cross reference (or use web search engine
to directly access the page).
Review the cross reference as upgrading from older code levels to v7 might require an intermediate
upgrade to v6 first.
The site also provides links to information that is valuable for planning the upgrade. Read the
information carefully and act accordingly. It is recommended to perform the upgrade during off peak
load times. The updated node is unavailable during the upgrade and therefore the other node must
handle the complete load. All this information can be used to create a good plan for the upgrade.
It is recommended that you perform upgrades at the lowest utilization of the system, such as over the weekend, which most installations already do. While a node is being upgraded, its I/O group operates in cache write-through mode, which means that writes are not cached and go directly to the disk drives. Be aware that there could be some impact on performance while the cache is not available.
Uempty
Basically, the upgrade proceeds one I/O group at a time. However, once the system is halfway through the process, the cache is turned off for the remainder of the upgrade.
Uempty
Best practice: To minimize potential issues that might arise during the installation of the upgrade package, resolve all
unfixed errors in the Storwize V7000 system event log prior to the upgrade activity.
The Storwize V7000 Upgrade Test utility tests for known issues that might prevent a software upgrade from completing successfully. The utility supports the Storwize V7000, Storwize V5000, Storwize V3500, Storwize V3700, and IBM Flex System V7000 for software upgrade.
The installation and usage of this utility is non-disruptive and does not require any nodes to be
restarted, so there is no interruption to host I/O. The utility will only be installed on the current
configuration node. Download the utility along with the system upgrade code. The utility is installed
on the system in the same manner as the system upgrade package. After installation of the utility
the Storwize V7000 GUI automatically runs the utility and displays its output.
The utility can be run as many times as necessary on the same system to perform a readiness
check in preparation for a software upgrade. We strongly recommend running this utility for a final
time immediately prior to applying the upgrade, making sure that there have not been any new
releases of the utility since it was previously downloaded.
To download the Software Upgrade Test Utility, navigate to Fix Central, choose your specific
product and download from the “Select fixes” page for your product.
Ensure that you have no unfixed errors in the log and that the system date and time are correctly
set. Start the fix procedures, and ensure that you fix any outstanding errors before you attempt to
concurrently update the code.
For systems running Storwize V7000 code levels prior to v6 the CLI is used to invoke the utility.
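On such systems the utility is invoked with the svcupgradetest command; for example (the target level shown is illustrative only):
svcupgradetest -v 7.6.1.0
The -v parameter names the software level that you intend to install so that the utility can check readiness against that target.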
Uempty
Automatic versus Manual
• Automatic (preferred method):
ƒ Upgrades each node in the system systematically.
ƒ The configuration node is updated last.
ƒ As each node is restarted, there might be some degradation in the maximum I/O rate during the update.
• Manual (provides more flexibility):
ƒ Remove the node from the system.
ƒ Upgrade the software on the node.
ƒ Return the node to the system.
ƒ The configuration node is updated last.
There are two update methods in which to upgrade the system software code: Automatic or
Manual.
The automatic method is the preferred procedure for upgrading software on nodes. During the
automatic upgrade process, the system will upgrade each node in the system one at a time, and
the new code is staged on the nodes, before upgrading the configuration node.
While each node restarts, there might be some degradation in the maximum I/O rate that can be
sustained by the system. After all the nodes in the system are successfully restarted with the new
software level, the new software level is automatically committed.
To provide more flexibility in the upgrade process, you can also upgrade each node manually.
During this manual procedure, the upgrade is prepared, you remove a node from the system,
upgrade the software on the node, and return the node to the system. As with the automatic upgrade, you must still upgrade all the nodes in the clustered system. Repeat all the steps in this procedure for each node that you upgrade that is not the configuration node.
Uempty
To view the current firmware level running, use the Settings menu and select System > Upgrade
Software. This action can also be performed from the Monitoring > System view, select Action
and select Update System.
The Update System software window also displays fetched information about the latest software version available for update. The displayed version might not always be the recommended version, so always refer to IBM Fix Central for the latest tested version available.
To initiate the upgrade, click the Update button and browse to the location of the downloaded software update package. Once you have selected both files (the update package and the test utility), click the Update option to proceed with the system installation.
Select the method in which to perform the update: automatic or manual. The process begins with
uploading the software code package to the system.
Uempty
Once the Update Test Utility has been installed, the Update System wizard prompts for the code
version to be checked. It will then generate an svcupgradetest command that invokes the utility to
assess the current system environment.
The purpose of running the test utility is to verify that no errors or warnings of potential issues are reported and that the system is ready to update. If any issue is discovered by the test utility, the firmware update stops and a Read more link is provided to help address the issue detected. The Update Test Utility can be run from the CLI as many times as necessary on the same system to perform a readiness check in preparation for a software upgrade. We suggest that you run this utility a final time immediately prior to applying the upgrade. The Update Test Utility is not supported to run as an individual tool using the management GUI.
After the Upgrade Test utility output is reviewed and if necessary, all issues have been resolved,
you can click Resume to continue with the update.
Uempty
During the update process, the first node in the IO group is taken offline. Once the code level is
verified by the system, the GUI generates the applysoftware command to apply the system
software code to the updating node. The system Health Status pod will also flag the condition as a node status alert. Although Storwize V7000 supports Concurrent Code Load, you can expect performance degradation as each node is taken offline in turn while the software is being installed.
You can issue the CLI lsnode command to verify which node is the configuration node. In a four-node system, the configuration node is typically not upgraded until after half of the nodes of the system have been upgraded.
The update process can take some time to complete. Once the node that was being updated has been restarted with the upgraded software, it is placed back online with an updated code level. The next node in line then repeats the process and is taken offline for the software upgrade.
If you are updating multiple systems, the software upgrade should be allowed to complete on one
system before it is started on the other system. Do not upgrade both systems concurrently.
The administrator can also issue the svqueryclock command to view a duration time reference for
the particular upgrade in process.
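For reference, a hedged sketch of the CLI equivalents involved in this stage (the package file name is hypothetical):
applysoftware -file IBM2076_INSTALL_7.6.1.0
lssoftwareupgradestatus
lsnode
The lssoftwareupgradestatus command reports the overall state of the update, and lsnode shows which node currently holds the configuration node role.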
Uempty
(Figure: alternate paths to NODE2 are used while the other node in the I/O group is being updated; the configuration node is also indicated.)
A node being updated can no longer participate in IO activity in the IO group. While the node being
upgraded is offline, the other node in the IO group operates in write-through mode. As a result, all
IO activity for the volumes in the IO group is directed to the other node in the IO group by the host
multipathing software. Ensure that hosts with IO activity have access to all configured paths (use
multipath driver interfaces such as the SDD datapath query device command for verification).
From the Windows host perspective, the SDD datapath query device command can be used to monitor path status. In this example, NODE1 is offline. All paths to NODE1 are expected to be
unavailable, therefore any IO activities associated with these unavailable paths would be failed
over to other paths of the device. The SDD software automatically routes IO using paths to the
alternate node NODE2. SDD balances IO across all paths to the alternate node when there are no
paths to the preferred node.
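For illustration, the following SDD commands can be issued on the Windows host before and during the update to confirm path status:
datapath query adapter
datapath query device
Paths to the node that is currently offline are expected to show a non-open state until that node rejoins the system.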
Uempty
During the upgrade, the GUI might go offline temporarily as the V7000 Controllers
are restarted during the upgrade. Refresh your browser to reconnect.
There is a thirty-minute delay or time-out built in between node upgrades. This delay allows time for
the host multipathing software to rediscover paths to the nodes that are upgraded, so that there is
no loss of access when another node in the IO group is upgraded.
You cannot interrupt the upgrade and switch to installing a different software level. Each node in the
system can take from 15 to 60 minutes to upgrade (typically 30 minutes). You cannot invoke the
new functions of the upgraded code until all member nodes are upgraded and the upgrade is
committed. The system takes a conservative approach to ensure that paths are stabilized before
proceeding.
After completing the update to the last node, a system upgrade process is performed.
Uempty
During the upgrade process, entries are made in the event log that indicate the node status during the upgrade and any failures that might have occurred during the upgrade process. The log also indicates the discovery of IO ports.
Uempty
The new system code level is displayed in the Upgrade System view. The status will either indicate that the system is running the most up-to-date code level, or that a new software update is available.
You can reissue the CLI lsnode command to view the new configuration node of the system.
Because of the operational limitations that occur during the update process, the code update is a
user task. If you have problems with an update and must call for support, see the topic about how to
get information, help, and technical assistance.
Uempty
CLI command:
ƒ Multiple drives can be upgraded per invocation.
lsdependentvdisks -drive drive_id
applydrivesoftware -file name -type firmware -drive drive_id
Depending on the version of software you are running on your system, you can upgrade a
solid-state drive (SSD) by downloading and applying firmware updates using the management GUI
or using the CLI.
The management GUI allows you to update individual drives or update all drives that have available
updates.
Depending on the number of drives and the size of the system, drive updates can take up to 10
hours to complete.
You can monitor the progress of the update, using the management GUI Running Tasks icon and
then click Drive Update Operations. You can also use the Monitoring > Events panel to view any
completion or error messages that are related to the update.
There are some code levels at which the drive upgrade procedure is supported only by using the CLI.
Using scp or pscp, copy the firmware upgrade file and the Software Upgrade Test Utility package to
the /home/admin/upgrade directory by using the management IP address. Next, run the
applydrivesoftware command. You must specify the firmware upgrade file, the firmware type,
and the drive ID. To apply the upgrade even if it causes one or more volumes to go offline, specify
the -force option.
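A worked sketch of that CLI sequence (drive ID 4 and the firmware file name are hypothetical examples):
lsdependentvdisks -drive 4
applydrivesoftware -file IBM2076_DRIVE_20160801 -type firmware -drive 4
Run lsdependentvdisks first; if it returns any volumes, resolve the dependency before applying the firmware, because the -force option overrides the check and can take those volumes offline.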
Uempty
Manufacturers sometimes need to release firmware updates down the road to address technical
issues and bugs that are revealed once the SSDs are sold into the market. However, a firmware
update can offer performance enhancements along with better host system compatibility and drive
reliability.
All drives being updated must be in good standing. If you purchased drives some time ago, chances are you will need to update the shipped firmware version to a newer one, provided the drive is not excluded by the listed conditions.
Uempty
Here is an example of the lsdrive command that displays ssd drive attributes.
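A hedged sketch of the commands behind such an example (drive ID 5 is illustrative):
lsdrive
lsdrive 5
The concise view lists all drives; the detailed view of a single drive includes attributes such as its technology type and current firmware level.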
Uempty
Without animation
With animation
• Create a user message to be displayed at the time of login to the GUI or CLI.
A new code level typically includes GUI enhancements, and sometimes the paths used to display some of the objects are reorganized.
You can enable an animated navigation menu with larger selections. This change requires the user to log in again.
Create a login message to be displayed to anyone logging in to the GUI or starting a CLI session.
Uempty
While not required, it might be worthwhile to refresh GUI objects using the General section to
change settings that are related to how information is displayed on the GUI.
• Use the Clear button to restore all default preferences.
• Change the default system timeout.
• Your management GUI provides, as a default, the web address of a locally installed version of the information center, which provides in-depth documentation on Storwize V7000 system support and capabilities.
• The Refresh GUI Cache option synchronizes the GUI with the system and triggers an automatic browser refresh.
▪ If you do not prefer the dynamic menu icons, which can be a little challenging at times, you can enable the management GUI to operate in low graphics mode. This mode provides a non-dynamic version of the management GUI for slower connections and menu selection. Before the management GUI can perform in low graphics mode, you must log in to the management GUI again.
▪ When creating a storage pool, the ability to change the extent size is disabled by default. You must click the Advanced pool settings option to enable this feature.
Uempty
VVols is a new feature introduced in IBM Spectrum Virtualize 7.6. This new functionality allows users to create volumes on IBM Spectrum Virtualize directly from the VMware vCenter server.
Hosts must be running ESXi version 6.0 or higher to use VVols functionality. In addition, the host must be added to the storage system with the host-type field set to VVol. You can also enable VVols on existing hosts by changing the host type to VVol.
The Settings > System > VVols section allows you to enable or disable the functionality.
Download the latest publication, Implementing VVOLS on the SVC and Storwize Family, to learn more about the supported features.
Uempty
To access the management GUI, you must ensure that your web browser is supported and has the
appropriate settings enabled. IBM supports higher versions of the browsers if the vendors do not
remove or disable function that the product relies upon. For browser levels higher than the versions
that are certified with the product, customer support accepts usage-related and defect-related
service requests. If the support center cannot re-create the issue, support might request the client
to re-create the problem on a certified browser version. Defects are not accepted for cosmetic
differences between browsers or browser versions that do not affect the functional behavior of the
product. If a problem is identified in the product, defects are accepted. If a problem is identified with
the browser, IBM might investigate potential solutions or work-arounds that the client can
implement until a permanent solution becomes available.
Uempty
There are some situations where refreshing GUI objects does not work because, after a reload, the web page might still be using old files from the browser cache. Therefore, you need to clear your cache first. Your browser has a folder in which certain downloaded items are stored for future use. Graphic images (such as buttons and icons), photos, and even entire web pages are examples of items that are saved, or cached. Clearing the cache not only refreshes GUI objects, but also clears the web browser history.
Uempty
This topic discusses the requirements and procedures to reset the system password.
Uempty
The primary use of the Service Assistant (SA) interface is to perform service-related tasks when a node is in service state or is not yet a member of a system. You should complete service actions on node
canisters only when directed to do so by the fix procedures.
The storage system management GUI operates only when there is an online system. Use the
service assistant if you are unable to create a system or if both node canisters in a control
enclosure are in service state. The node canister might also be in a service state because it has a
hardware issue, has corrupted data, or has lost its configuration data.
The service assistant does not provide any facilities to help you service expansion enclosures.
Always service the expansion enclosures by using the management GUI.
If used inappropriately, the service actions that are available through the service assistant can
cause loss of access to data or even data loss.
Uempty
SA can be accessed using the node service IP address with the superuser ID. The Service
Assistant can also be reached using the system IP address with /service appended.
An alternative to using the system GUI to set the service IP address is using the Change Service
IP option of the Service Assistant navigation tree.
To start the application, complete the following steps.
• Start a supported web browser and point your web browser to serviceaddress/service for the
node that you want to work on.
• Log on to the service assistant using the superuser password. If you are accessing a new node canister, the default password is passw0rd. If the node canister is or has been a member of a system, use that system's superuser password.
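The service IP address itself can also be set from the CLI with the satask command; the following is a sketch with example addressing (the node that is affected depends on where the command is run and on any node identifier supplied):
satask chserviceip -serviceip 9.100.100.71 -gw 9.100.100.1 -mask 255.255.255.0
This sets the service address used to reach the service assistant for that node canister.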
Uempty
The Node Detail section displays data that is associated with the selected node:
• The Node tab shows general information about the node canister that includes the node state
and whether it is a configuration node.
• The Hardware tab shows information about the hardware.
• The Access tab shows the management IP addresses and the service addresses for this node.
• The Location tab identifies the enclosure in which the node canister is located.
• The Ports tab shows information about the I/O ports.
Uempty
SA service-related actions
• Collect logs to create and download a package of files to send to
support personnel.
• Remove the data for the system from a node.
• Recover a system if it fails.
• Install a code package from the support site or rescue the code from
another node.
• Upgrade code on node canisters manually versus performing a
standard upgrade procedure.
• Configure a control enclosure chassis after replacement.
• Change the service IP address that is assigned to Ethernet port 1 for
the current node canister.
• Install a temporary SSH key if a key is not installed and CLI access is
required.
• Restart the services used by the system.
Listed are a number of service-related actions that can be performed using the Service Assistant
interface. A number of tasks that are performed by the service assistant cause the node canister to
restart. It is not possible to maintain the service assistant connection to the node canister when it
restarts. If the current node canister on which the tasks are performed is also the node canister that
the browser is connected to and you lose your connection, reconnect and log on to the service
assistant again after running the tasks.
Uempty
If system data has been lost from all nodes, an administrator might be directed to remove the system data from the node canisters and perform a system recovery through the Service Assistant tool. The procedure to recover the entire system is known as T3 recovery. This
procedure assumes that the system reported a system error code 550 or error code 578. To
address the issue, perform a service action to place each node in a service state.
For a complete list of prerequisites and conditions for recovering the system, see the following
information:
• “Recover System Procedure” in the Troubleshooting, Recovery, and Maintenance Guide
• Recover System Procedure in the Information Center
Uempty
• Subscribe to IBM My Notifications for Storwize storage products to stay current with product support information.
• Click the links in the notification email to download the latest code directly.
Visit the IBM Support website http://www.ibm.com/storage/support/2076 to view the latest storage
information. The Downloads page of the Storwize V7000 support website contains documentation
as well as the software code.
Tools, such as Easy Tier STAT and the IBM Comprestimator are available from this site. Third-party
host software integration such as device driver for VMware VAAI and Microsoft Windows Volume
Shadow Copy Service are also available for download from this site.
You can also subscribe to IBM Notifications for the IBM storage products to stay current with
product support information.
An email containing the URL to the code package website is sent automatically to subscribers
when a new code level is released.
Uempty
To proactively prevent problems, stay informed through IBM's technical support resources for all IBM products and services:
• To receive technical notifications about new code releases, technical updates, or possible problems, it is highly recommended to use My Notifications on the IBM website. You can register your IBM devices and get the latest information about them.
• To allow a fast reaction to problems, we recommend configuring the Call Home function to send emails to IBM Support, and also event, information, inventory, and other mails to the administrators, as described in this unit.
Uempty
This topic provides an overview of the IBM Spectrum Storage family and its offerings.
Uempty
The IBM Spectrum Storage family is the industry’s first software family based on proven
technologies and designed specifically to simplify storage management, scale to keep up with data
growth, and optimize data economics. It represents a new, more agile way of storing data, and
helps organizations prepare themselves for new storage demands and workloads. The software
defined storage solutions included in the IBM Spectrum Storage family can help organizations
simplify their storage infrastructures, cut costs, and start gaining more business value from their
data.
IBM Spectrum Storage provides the following benefits:
▪ Simplify and integrate storage management and data protection across traditional and new
applications
▪ Deliver elastic scalability with high performance for analytics, big data, social, and mobile
▪ Unify siloed storage to deliver data without borders with built-in hybrid cloud support
▪ Optimize data economics with intelligent data tiering from flash to tape and cloud
▪ Build on open architectures that support industry standards that include OpenStack and
Hadoop
Uempty
IBM Spectrum Protect: Tivoli Storage Manager (TSM)
IBM Spectrum Archive: Linear Tape File System (LTFS)
IBM Spectrum Accelerate: Software from XIV System
IBM Spectrum Scale: Elastic Storage (GPFS)
(Figure: the IBM Spectrum Storage family spans control, protect, and archive functions across any storage, Flash Systems, and private, public, or hybrid cloud.)
Uempty
IBM Spectrum Virtualize
IBM Spectrum Virtualize is an industry-leading storage virtualization product that enhances existing
storage to improve resource utilization and productivity to achieve a simpler, more
scalable and cost-efficient IT infrastructure.
The functionality of IBM Spectrum Virtualize is provided by IBM SAN Volume Controller.
IBM Spectrum Accelerate
IBM Spectrum Accelerate is a software defined storage solution born of the proven XIV integrated storage offering, which is designed to help speed delivery of data across the organization and add extreme flexibility to cloud deployments.
IBM Spectrum Accelerate delivers hotspot-free performance, easy management scaling, and
proven enterprise functionality such as advanced mirroring and flash caching to different
deployment platforms.
IBM Spectrum Scale
IBM Spectrum Scale is a proven high-performance data and file management solution that can
manage over one billion petabytes of unstructured data. Spectrum Scale redefines the economics
of data storage using policy-driven automation: as time passes and organizational needs change,
data can be moved back and forth between flash, disk and tape storage tiers without manual
intervention.
IBM Spectrum Scale is delivered by IBM General Parallel File System, or GPFS (code name Elastic Storage).
Uempty
Website: Directory of worldwide contacts – http://www.ibm.com/planetwide
(Some listed resources require an IBM user ID.)
The table lists websites where you can find help, technical assistance, and more information about
IBM products. IBM maintains pages on the web where you can get information about IBM products
and fee services, product implementation and usage assistance, break and fix service support, and
the latest technical information.
Uempty
PDF publications
This table lists PDF publications that are also available in the information center. Click the number
in the “Order number” column to be redirected.
• IBM Storwize V7000 Gen2 Quick Installation Guide: This guide provides instructions for
unpacking your shipping order and installing your system. The first of three chapters describes
verifying your order, becoming familiar with the hardware components, and meeting
environmental requirements. The second chapter describes installing the hardware and
attaching data cables and power cords. The last chapter describes accessing the management
GUI to initially configure your system.
• IBM Storwize V7000 Quick Installation Guide: This guide provides detailed instructions for
unpacking your shipping order and installing your system. The first of three chapters describes
verifying your order, becoming familiar with the hardware components, and meeting
environmental requirements. The second chapter describes installing the hardware and
attaching data cables and power cords. The last chapter describes accessing the management
GUI to initially configure your system.
• IBM Storwize V7000 Expansion Enclosure Installation Guide, Machine type 2076: This guide
provides instructions for unpacking your shipping order and installing the 2076 expansion
enclosure for the Storwize V7000 system.
Uempty
• IBM Storwize V7000 Troubleshooting, Recovery, and Maintenance Guide: This guide describes
how to service, maintain, and troubleshoot the Storwize V7000 system.
• Storwize V7000 Gen2 Installation Poster: The installation poster provides an illustrated
sequence of steps for installing the enclosure in a rack and beginning the setup process.
• IBM Systems Safety Notices: This guide contains translated caution and danger statements.
Each caution and danger statement in the Storwize V7000 documentation has a number that
you can use to locate the corresponding statement in your language in the IBM Systems Safety
Notices document.
• IBM Storwize V7000 Read First Flyer: This document introduces the major components of the
Storwize V7000 system and describes how to get started with the IBM Storwize V7000 Quick
Installation Guide.
• IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line
Interface User's Guide: This guide describes the commands that you can use from the Storwize
V7000 command-line interface (CLI).
• IBM Statement of Limited Warranty (2145 and 2076): This multilingual document provides
information about the IBM warranty for machine types 2145 and 2076.
• IBM License Agreement for Machine Code: This multilingual guide contains the License
Agreement for Machine Code for the Storwize V7000 product.
Uempty
Keywords
• Node hardware replacement • Service assistant IP address
• Worldwide name (WWN) • User group
• Event notifications • Remote user
• Directory Services • System audit log entry
• Email • IBM Spectrum Control
• SNMP • IBM Spectrum Protect
• Syslog • IBM Spectrum Archive
• Remote authentication • IBM Spectrum Virtualize
• Support package • IBM Spectrum Accelerate
• Upgrade test utility • IBM Spectrum Scale
Uempty
Review questions (1 of 3)
1. True or False: The system audit log contains both
information and action commands issued for the system.
Uempty
Review answers (1 of 3)
1. True or False: The system audit log contains both
information and action commands issued for the system.
The answer is false.
Uempty
Review questions (2 of 3)
5. True or False: The Storwize V7000 system IP address can
be accessed from either Ethernet port 1 or port 2 for system
management.
Uempty
Review answers (2 of 3)
5. True or False: The Storwize V7000 system IP address can
be accessed from either Ethernet port 1 or port 2 for system
management.
The answer is false.
Uempty
Review questions (3 of 3)
9. Which IBM Spectrum Storage offerings can scale out and
support Yottabytes of data?
Uempty
Review answers (3 of 3)
9. Which IBM Spectrum Storage offerings can scale out and
support Yottabytes of data?
The answer is IBM Spectrum Scale.
Uempty
Unit summary
• Recognize system monitoring features to help maintain nodes and
components availability
• Evaluate and filter administrative task commands entries that are
captured in the audit log
• Employ system configuration backup and extract the backup files from
the system using the CLI or GUI
• Summarize the benefits of an SNMP, syslog, and email server for
forwarding alerts and events
• Recall procedures to upgrade the system software and drive microcode
firmware to a higher code level
• Identify the functions of Service Assistant tool for management access
• List the benefits of IBM Spectrum storage offerings