IBM System Storage SAN Volume Controller
Software Installation and Configuration Guide
Version 6.4.1
GC27-2286-04
Note
Before using this information and the product it supports, read the information in “Notices” on page 343.
This edition applies to IBM System Storage SAN Volume Controller, Version 6.4.1, and to all subsequent releases
and modifications until otherwise indicated in new editions.
This edition replaces GC27-2286-03.
© Copyright IBM Corporation 2003, 2012.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
This publication also describes the configuration tools, both command-line and
web-based, that you can use to define, expand, and maintain the storage of the
SAN Volume Controller.
Accessibility
IBM has a long-standing commitment to people with disabilities. In keeping with
that commitment to accessibility, IBM strongly supports the U.S. Federal
government's use of accessibility as a criterion in the procurement of Electronic
Information Technology (EIT).
IBM strives to provide products with usable access for everyone, regardless of age
or ability.
For more information, see “Accessibility features for IBM SAN Volume Controller,”
on page 341.
Before using the SAN Volume Controller, you should have an understanding of
storage area networks (SANs), the storage requirements of your enterprise, and the
capabilities of your storage units.
Summary of changes
This summary of changes describes new functions that have been added to this
release. Technical changes or additions to the text and illustrations are indicated by
a vertical line to the left of the change. This document also contains terminology,
maintenance, and editorial changes.
Emphasis
Different typefaces are used in this guide to show emphasis.
The IBM System Storage SAN Volume Controller Information Center contains all of
the information that is required to install, configure, and manage the SAN Volume
Controller. The information center is updated between SAN Volume Controller
product releases to provide the most current documentation. The information
center is available at the following website:
publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
Unless otherwise noted, the publications in the SAN Volume Controller library are
available in Adobe portable document format (PDF) from the following website:
www.ibm.com/storage/support/2145
Table 2 on page xvii lists IBM publications that contain information related to the
SAN Volume Controller.
Table 3 lists websites that provide publications and other information about the
SAN Volume Controller or related products or technologies.
Table 3. IBM documentation and related websites
Support for SAN Volume Controller (2145): www.ibm.com/storage/support/2145
Support for IBM System Storage and IBM TotalStorage products: www.ibm.com/storage/support/
IBM Publications Center: www.ibm.com/e-business/linkweb/publications/servlet/pbi.wss
IBM Redbooks® publications: www.redbooks.ibm.com/
To view a PDF file, you need Adobe Acrobat Reader, which can be downloaded
from the Adobe website.
The IBM Publications Center offers customized search functions to help you find
the publications that you need. Some publications are available for you to view or
download at no charge. You can also order publications. The publications center
displays prices in your local currency. You can access the IBM Publications Center
through the following website:
www.ibm.com/e-business/linkweb/publications/servlet/pbi.wss
To submit any comments about this book or any other SAN Volume Controller
documentation:
v Go to the feedback page on the website for the SAN Volume Controller
Information Center at publib.boulder.ibm.com/infocenter/svc/ic/
index.jsp?topic=/com.ibm.storage.svc.console.doc/feedback.htm. There you can
use the feedback page to enter and submit comments or browse to the topic and
use the feedback link in the running footer of that page to identify the topic for
which you have a comment.
v Send your comments by email to [email protected]. Include the following
information for this publication or use suitable replacements for the publication
title and form number for the publication on which you are commenting:
– Publication title: IBM System Storage SAN Volume Controller Software Installation
and Configuration Guide
– Publication form number: GC27-2286-04
– Page, table, or illustration numbers that you are commenting on
– A detailed description of any information that should be changed
A SAN is a high-speed Fibre Channel network that connects host systems and
storage devices. In a SAN, a host system can be connected to a storage device
across the network. The connections are made through units such as routers and
switches. The area of the network that contains these units is known as the fabric of
the network.
The SAN Volume Controller software performs the following functions for the host
systems that attach to SAN Volume Controller:
v Creates a single pool of storage
v Provides logical unit virtualization
v Manages logical volumes
v Mirrors logical volumes
The SAN Volume Controller system also provides the following functions:
v Large scalable cache
v Copy Services
– IBM FlashCopy® (point-in-time copy) function, including thin-provisioned
FlashCopy to make multiple targets affordable
– Metro Mirror (synchronous copy)
– Global Mirror (asynchronous copy)
– Data migration
v Space management
– IBM System Storage Easy Tier® to migrate the most frequently used data to
higher performing storage
– Metering of service quality when combined with IBM Tivoli® Storage
Productivity Center
– Thin-provisioned logical volumes
– Compressed volumes to consolidate storage
Figure 1 on page 2 shows hosts, SAN Volume Controller nodes, and RAID storage
systems connected to a SAN fabric. The redundant SAN fabric comprises a
fault-tolerant arrangement of two or more counterpart SANs that provide alternate
paths for each SAN-attached device.
Figure 1. Hosts, SAN Volume Controller nodes, and RAID storage systems connected to a
redundant SAN fabric, with a host zone and a storage system zone
Volumes
A system of SAN Volume Controller nodes presents volumes to the hosts. Most of
the advanced functions that SAN Volume Controller provides are defined on
volumes. These volumes are created from managed disks (MDisks) that are
presented by the RAID storage systems. All data transfer occurs through the SAN
Volume Controller nodes, which is described as symmetric virtualization.
Figure 2. Data transfer: I/O from the hosts is sent to the nodes through the redundant SAN
fabric, and the nodes send I/O to the managed disks on the RAID storage systems
The nodes in a system are arranged into pairs known as I/O groups. A single pair is
responsible for serving I/O on a given volume. Because a volume is served by two
nodes, there is no loss of availability if one node fails or is taken offline.
System management
The SAN Volume Controller nodes in a clustered system operate as a single system
and present a single point of control for system management and service. System
management and error reporting are provided through an Ethernet interface to one
of the nodes in the system, which is called the configuration node. The configuration
node runs a web server and provides a command-line interface (CLI). The
configuration node is a role that any node can take. If the current configuration
node fails, a new configuration node is selected from the remaining nodes. Each
node also provides a command-line interface and web interface for performing
hardware service actions.
Fabric types
I/O operations between hosts and SAN Volume Controller nodes and between
SAN Volume Controller nodes and RAID storage systems are performed by using
the SCSI standard. The SAN Volume Controller nodes communicate with each
other by using private SCSI commands.
Table 4 on page 4 shows the fabric type that can be used for communicating
between hosts, nodes, and RAID storage systems. These fabric types can be used at
the same time.
Solid-state drives
Some SAN Volume Controller nodes contain solid-state drives (SSDs). These
internal SSDs can be used to create RAID-managed disks (MDisks) that in turn can
be used to create volumes. SSDs provide host servers with a pool of
high-performance storage for critical applications.
Figure 3 shows this configuration. Internal SSD MDisks can also be placed in a
storage pool with MDisks from regular RAID storage systems, and IBM System
Storage Easy Tier performs automatic data placement within that storage pool by
moving high-activity data onto better performing storage.
Figure 3. SAN Volume Controller nodes with internal SSDs in a redundant SAN fabric
The nodes are always installed in pairs, with a minimum of one and a maximum
of four pairs of nodes constituting a system. Each pair of nodes is known as an I/O
group. All I/O operations that are managed by the nodes in an I/O group are
cached on both nodes.
I/O groups take the storage that is presented to the SAN by the storage systems as
MDisks and translate the storage into logical disks (volumes) that are used by
applications on the hosts.
You can access the management GUI by opening any supported web browser and
entering the management IP addresses. You can connect from any workstation that
can communicate with the system. In addition to simple setup, configuration, and
management functions, the management GUI provides several additional functions
that help filter and sort data on the system:
Filtering and sorting objects
On panels that include columns, you can sort each column by clicking the
column heading. You can use the filtering feature to display items that
include only text that you specify.
Selecting multiple objects
You can use the Ctrl key to select multiple items and choose actions on
those objects by right-clicking them to display the actions menu. You can
right-click any column heading to add or remove columns from the table.
Using preset options
The management GUI includes several preestablished configuration
options to help you save time during the configuration process. For
example, you can select from several preset options when creating a new
volume. These preset options incorporate commonly used parameters.
For a complete description of the management GUI, launch the e-Learning module
by selecting Tutorials > A Tour of the Management GUI.
See the management GUI support information on the following website for the
supported operating systems and web browsers:
www.ibm.com/storage/support/2145
Presets
Presets are available for creating volumes and FlashCopy mappings and for setting
up RAID configuration.
Volume presets
In the management GUI, FlashCopy mappings include presets that can be used for
test environments and backup solutions.
RAID configuration presets are used to configure all available drives based on
recommended values for the RAID level and drive class. The system detects the
installed hardware and then recommends a configuration that uses all the drives to
build arrays that are protected with the appropriate number of spare drives. Each
preset has a specific goal for the number of drives per array, the number of spare
drives to maintain redundancy, and whether the drives in the array are balanced
across enclosure chains, thus protecting the array from enclosure failures.
Table 7 on page 9 describes the presets that are used for solid-state drives (SSDs)
for the SAN Volume Controller.
Virtualization
Virtualization is a concept that applies to many areas of the information technology
industry.
For data storage, virtualization includes the creation of a pool of storage that
contains several disk systems. These systems can be supplied from various
vendors. The pool can be split into volumes that are visible to the host systems
that use them. Therefore, volumes can use mixed back-end storage and provide a
common way to manage a storage area network (SAN).
Historically, the term virtual storage has described the virtual memory techniques
that have been used in operating systems. The term storage virtualization, however,
describes the shift from managing physical volumes of data to logical volumes of
data. This shift can be made on several levels of the components of storage
networks. Virtualization separates the representation of storage between the
operating system and its users from the actual physical storage components. This
technique has been used in mainframe computers for many years through methods
such as system-managed storage and products like the IBM Data Facility Storage
Management Subsystem (DFSMS). Virtualization can be applied at the following
four main levels:
At the server level
Manages volumes on the operating systems of the servers.
Types of virtualization
Virtualization at any level provides benefits. When several levels are combined, the
benefits of those levels can also be combined. For example, you can combine
benefits by attaching a RAID controller to a virtualization engine that provides
virtual volumes for a virtual file system.
Figure 4. Levels of virtualization: the server level, the SAN fabric level (with a metadata
server), and the storage device level
Symmetric virtualization
The SAN Volume Controller provides symmetric virtualization.
Virtualization splits the storage that is presented by the storage systems into
smaller chunks that are known as extents. These extents are then concatenated,
using various policies, to make volumes. With symmetric virtualization, host
systems can be isolated from the physical storage. Advanced functions, such as
data migration, can run without the need to reconfigure the host. With symmetric
virtualization, the virtualization engine is the central configuration point for the
SAN.
Figure 5 on page 12 shows that the storage is pooled under the control of the
virtualization engine, because the separation of the control from the data occurs in
the data path. The virtualization engine performs the logical-to-physical mapping.
Figure 5. Symmetric virtualization: the virtualizer sits in the I/O path in the SAN fabric
between the hosts and the storage
The virtualization engine directly controls access to the storage and to the data that
is written to the storage. As a result, locking functions that provide data integrity
and advanced functions, such as cache and Copy Services, can be run in the
virtualization engine itself. Therefore, the virtualization engine is a central point of
control for device and advanced function management. Symmetric virtualization
can be used to build a firewall in the storage network. Only the virtualization
engine can grant access through the firewall.
Symmetric virtualization can cause some problems. The main problem that is
associated with symmetric virtualization is scalability. Scalability can cause poor
performance because all input/output (I/O) must flow through the virtualization
engine. To solve this problem, you can use an n-way cluster of virtualization
engines that has failover capacity. You can scale the additional processor power,
cache memory, and adapter bandwidth to achieve the required level of
performance. Additional memory and processing power are needed to run
advanced services such as Copy Services and caching.
Object overview
The SAN Volume Controller solution is based on a group of virtualization
concepts. Before setting up your SAN Volume Controller environment, you should
understand the concepts and the objects in the environment.
Each SAN Volume Controller is a single processing unit called a node. Nodes are
deployed in pairs to make up a clustered system. A system can consist of one to
four pairs of nodes. Each pair of nodes is known as an I/O group and each node
can be in only one I/O group.
Volumes are logical disks that are presented by the systems. Each volume is
associated with a particular I/O group. The nodes in the I/O group provide access
to the volumes in the I/O group. When an application server performs I/O to a
volume, it can access the volume with either of the nodes in the I/O group.
Because each I/O group has only two nodes, the distributed cache is only
two-way.
The nodes in a system see the storage that is presented by back-end storage
systems as a number of disks, known as managed disks (MDisks).
Each MDisk is divided into a number of extents, which are numbered sequentially
from 0 to the end of the MDisk. MDisks are collected into groups, known as
storage pools.
Each volume is made up of one or two volume copies. Each volume copy is an
independent physical copy of the data that is stored on the volume. A volume with
two copies is known as a mirrored volume. Volume copies are made out of MDisk
extents. All the MDisks that contribute to a particular volume copy must belong to
the same storage pool.
A volume can be thin-provisioned. This means that the capacity of the volume as
seen by host systems, called the virtual capacity, can be different from the amount
of storage that is allocated to the volume from MDisks, called the real capacity.
Thin-provisioned volumes can be configured to automatically expand their real
capacity by allocating new extents.
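For reference, a thin-provisioned volume with automatic expansion can be created from
the command-line interface (CLI). The following sketch uses hypothetical pool, I/O group,
and size values; confirm the parameters against the CLI reference for your code level:
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 20% -autoexpand -warning 80%
In this sketch, -size sets the virtual capacity that hosts see, -rsize sets the initial real
capacity, -autoexpand allows the real capacity to grow as new extents are allocated, and
-warning is intended to generate a warning event as the volume fills.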
At any one time, a single node in a system can manage configuration activity. This
node is known as the configuration node and manages a cache of the information
that describes the system configuration and provides a focal point for
configuration.
For a SCSI over Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE)
connection, the nodes detect the FC or FCoE ports that are connected to the SAN.
These correspond to the worldwide port names (WWPNs) of the FC or FCoE host
bus adapters (HBAs) that are present in the application servers. You can create
logical host objects that group WWPNs that belong to a single application server or
to a set of them.
For a SCSI over Ethernet connection, the iSCSI qualified name (IQN) identifies the
iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.
SAN Volume Controller hosts are virtual representations of the physical host
systems and application servers that are authorized to access the system volumes.
Each SAN Volume Controller host definition specifies the connection method (SCSI
over Fibre Channel or SCSI over Ethernet), the Fibre Channel port or IQN, and the
volumes that the host applications can access.
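As an illustration only, host objects can be created from the CLI and mapped to volumes.
The WWPNs, IQN, host names, and volume name in this sketch are hypothetical:
mkhost -name aix_host01 -hbawwpn 210100E08B251DD4:210100E08B251DD5
mkhost -name linux_iscsi01 -iscsiname iqn.1994-05.com.redhat:host01
mkvdiskhostmap -host aix_host01 database_vol01
The first command groups two Fibre Channel WWPNs into one host object, the second
defines an iSCSI-attached host by its IQN, and the third maps a volume to a host so that
the host applications can access it.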
The system provides block-level aggregation and volume management for disk
storage within the SAN. The system manages a number of back-end storage
systems and maps the physical storage within those storage systems into logical
disk images that can be seen by application servers and workstations in the SAN.
The SAN is configured in such a way that the application servers cannot see the
back-end physical storage. This prevents any possible conflict between the system
and the application servers both trying to manage the back-end storage.
Choose a meaningful name when you create an object. If you do not choose a
name for the object, the system generates one for you. A well-chosen name serves
not only as a label for an object, but also as a tool for tracking and managing the
object. Choosing a meaningful name is particularly important if you decide to use
configuration backup and restore.
Naming rules
When you choose a name for an object, the following rules apply:
v Names must begin with a letter.
Attention: Do not start names by using an underscore even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.
v The first character cannot be numeric.
v The name can be a maximum of 63 characters with the following exceptions:
– The name can be a maximum of 15 characters for Remote Copy relationships
and groups.
– The lsfabric command displays long object names that are truncated to 15
characters for nodes and systems.
– Version 5.1.0 systems display truncated volume names when they are
partnered with a version 6.1.0 or later system that has volumes with long
object names (lsrcrelationshipcandidate or lsrcrelationship commands).
– The front panel displays the first 15 characters of object names.
v Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 -
9), underscore (_), period (.), hyphen (-), and space.
v Names must not begin or end with a space.
v Object names must be unique within the object type. For example, you can have
a volume named ABC and an MDisk called ABC, but you cannot have two
volumes called ABC.
v The default object name is valid (object prefix with an integer).
v Objects can be renamed to their current names.
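For example, a system-generated volume name can be replaced with a meaningful one
from the CLI. The object names in this sketch are hypothetical:
chvdisk -name payroll_db_vol vdisk7
Comparable rename parameters exist for other object types, for example chmdiskgrp -name
for storage pools and chhost -name for hosts.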
Systems
All your configuration, monitoring, and service tasks are performed at the system
level. Therefore, after configuring your system, you can take advantage of the
virtualization and the advanced features of the SAN Volume Controller system.
A system can consist of between two and eight SAN Volume Controller nodes.
All configuration settings are replicated across all nodes in the system. Because
configuration is performed at the system level, management IP addresses are
assigned to the system. Each interface accesses the system remotely through the
Ethernet system-management address.
System management
A clustered system is managed using a command-line session or the management
GUI over an Ethernet connection.
Each SAN Volume Controller system can have zero to four management IP
addresses. You can assign up to two Internet Protocol Version 4 (IPv4) addresses
and up to two Internet Protocol Version 6 (IPv6) addresses.
Depending on the configuration, each SAN Volume Controller node has one or two
management IP addresses and up to six Internet Small Computer System Interface
over Internet Protocol (iSCSI IP) addresses per node.
Note: Management IP addresses that are assigned to a system are different from
iSCSI IP addresses and are used for different purposes. If iSCSI is used, iSCSI
addresses are assigned to node ports. On the configuration node, a port has
multiple IP addresses active at the same time.
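If iSCSI is used, the node-port iSCSI addresses are typically assigned with the cfgportip
command. The addresses in this sketch are hypothetical, and it assumes Ethernet port 1 on
node 1; confirm the syntax against the CLI reference for your code level:
cfgportip -node 1 -ip 192.168.70.121 -mask 255.255.255.0 -gw 192.168.70.1 1
The trailing 1 identifies the Ethernet port on the node that receives the iSCSI IP address.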
In addition to these IP addresses, you can optionally add one service IP address
per node to provide access to the service assistant.
Management IP failover
If the configuration node fails, the IP addresses for the clustered system are
transferred to a new node. The system services are used to manage the transfer of
the management IP addresses from the failed configuration node to the new
configuration node.
Note: Some Ethernet devices might not forward ARP packets. If the ARP
packets are not forwarded, connectivity to the new configuration node cannot be
established automatically. To avoid this problem, configure all Ethernet devices to
pass unsolicited ARP packets.
If the Ethernet link to the SAN Volume Controller system fails because of an event
unrelated to the SAN Volume Controller, such as a cable being disconnected or an
Ethernet router failure, the SAN Volume Controller does not attempt to fail over
the configuration node to restore management IP access. SAN Volume Controller
provides the option for two Ethernet ports, each with its own management IP
address, to protect against this type of failure. If you cannot connect through one
IP address, attempt to access the system through the alternate IP address.
Note: IP addresses that are used by hosts to access the system over an Ethernet
connection are different from management IP addresses.
SAN Volume Controller supports the following protocols that make outbound
connections from the system:
v Email
v Simple Network Management Protocol (SNMP)
v Syslog
v Network Time Protocol (NTP)
These protocols operate only on a port configured with a management IP address.
When making outbound connections, the SAN Volume Controller uses the
following routing decisions:
v If the destination IP address is in the same subnet as one of the management IP
addresses, the SAN Volume Controller system sends the packet immediately.
v If the destination IP address is not in the same subnet as either of the
management IP addresses, the system sends the packet to the default gateway
for Ethernet port 1.
v If the destination IP address is not in the same subnet as either of the
management IP addresses and Ethernet port 1 is not connected to the Ethernet
network, the system sends the packet to the default gateway for Ethernet port 2.
When configuring any of these protocols for event notifications, use these routing
decisions to ensure that error notification works correctly in the event of a network
failure.
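For reference, notification targets are typically defined with CLI commands such as the
following sketch. The IP addresses are hypothetical, and the available parameters should be
verified against the CLI reference for your code level:
mksnmpserver -ip 192.168.1.20 -community public
mksyslogserver -ip 192.168.1.21
mkemailserver -ip 192.168.1.22
When you define these targets, ensure that each address is reachable through the default
gateway of Ethernet port 1, or through Ethernet port 2 if port 1 is not connected, in line
with the routing decisions that are described above.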
Note: The system can continue to run without loss of access to data as long as one
node from each I/O group is available.
In this situation, some nodes must stop operating and processing I/O requests
from hosts to preserve data integrity while maintaining data access. If a group
contains less than half the nodes that were active in the system, the nodes in that
group stop operating and processing I/O requests from hosts.
It is possible for a system to split into two groups with each group containing half
the original number of nodes in the system. A quorum disk determines which
group of nodes stops operating and processing I/O requests. In this tie-break
situation, the first group of nodes that accesses the quorum disk is marked as the
owner of the quorum disk and as a result continues to operate as the system,
handling all I/O requests. If the other group of nodes cannot access the quorum
disk or finds the quorum disk is owned by another group of nodes, it stops
operating as the system and does not handle I/O requests.
System state
The state of the clustered system holds all of the configuration and internal data.
The system state information is held in nonvolatile memory. If the mainline power
fails, the node maintains the internal power long enough for the system state
information to be stored on the internal disk drive of each node, and the write
cache data and configuration information that is held in memory is stored on the
internal disk drive of the node. If the partner node is still online, it attempts to
flush the cache and continue operation with the write cache disabled.
Figure 6 on page 18 shows an example of a system that contains four nodes. The
system state shown in the shaded box does not actually exist. Instead, each node in
the system maintains an identical copy of the system state. When a change is made
to the configuration or internal system data, the same change is applied to all
nodes.
The system contains a single node that is elected as the configuration node. The
configuration node can be thought of as the node that controls the updating of
system state. For example, a user request is made (1), that results in a change being
made to the configuration. The configuration node controls updates to the system
(2). The configuration node then forwards the change to all nodes (including Node
1), and they all make the state change at the same point in time (3). Using this
state-driven model of clustering ensures that all nodes in the system know the
exact system state at any one time. If the configuration node fails, the system can
elect a new node to take over its responsibilities.
Figure 6. Clustered system, nodes, and system state
Only the data that describes the system configuration is backed up. You must back
up your application data using the appropriate backup methods.
For complete disaster recovery, regularly back up the business data that is stored
on volumes at the application server level or the host level.
Nodes
Each SAN Volume Controller node is a single processing unit within a SAN Volume
Controller system.
For redundancy, nodes are deployed in pairs to make up a system. A system can
have one to four pairs of nodes. Each pair of nodes is known as an I/O group.
Each node can be in only one I/O group. A maximum of four I/O groups each
containing two nodes is supported.
At any one time, a single node in the system manages configuration activity. This
configuration node manages a cache of the configuration information that describes
the system configuration and provides a focal point for configuration commands. If
the configuration node fails, another node in the system takes over its
responsibilities.
Configuration node
A configuration node is a single node that manages configuration activity of the
system.
If the configuration node fails, the system chooses a new configuration node. This
action is called configuration node failover. The new configuration node takes over
the management IP addresses. Thus, you can access the system through the same
IP addresses.
Figure 7 shows an example clustered system that contains four nodes. Node 1 has
been designated the configuration node. User requests (1) are handled by node 1.
Figure 7. Configuration node with its IP interface handling user requests
Volumes are logical disks that are presented to the SAN by SAN Volume Controller
nodes. Volumes are also associated with an I/O group. The SAN Volume
Controller does not contain any internal battery backup units and therefore must
be connected to an uninterruptible power supply to provide data integrity in the
event of a system-wide power failure.
I/O groups
Each pair of nodes is known as an input/output (I/O) group. An I/O group is defined
during the system configuration process.
Volumes are logical disks that are presented to the SAN by SAN Volume Controller
nodes. Volumes are also associated with the I/O group.
When an application server performs I/O to a volume, it can access the volume
with either of the nodes in the I/O group. When you create a volume, you can
specify a preferred node. Many of the multipathing driver implementations that
SAN Volume Controller supports use this information to direct I/O to the
preferred node. The other node in the I/O group is used only if the preferred node
is not accessible.
If you do not specify a preferred node for a volume, the node in the I/O group
that has the fewest volumes is selected by SAN Volume Controller to be the
preferred node.
After the preferred node is chosen, it can be changed only when the volume is
moved to a different I/O group.
Attention: Prior to 6.4, moving a volume to a different I/O group was disruptive
to host I/O.
Read I/O is processed by referencing the cache in the node that receives the I/O. If
the data is not found, it is read from the disk into the cache. The read cache can
provide better performance if the same node is chosen to service I/O for a
particular volume.
I/O traffic for a particular volume is, at any one time, managed exclusively by the
nodes in a single I/O group. Thus, although a clustered system can have eight
nodes within it, the nodes manage I/O in independent pairs. This means that the
I/O capability of the SAN Volume Controller scales well, because additional
throughput can be obtained by adding additional I/O groups.
Figure 8 shows a write operation from a host (1), that is targeted for volume A.
This write is targeted at the preferred node, Node 1 (2). The write operation is
cached and a copy of the data is made in the partner node, the cache for Node 2
(3). The host views the write as complete. At some later time, the data is written,
or de-staged, to storage (4).
Figure 8. I/O group with volumes A and B: the write is sent over the preferred node path to
Node 1, and the cached data is copied to the partner node, Node 2
When a node fails within an I/O group, the other node in the I/O group assumes
the I/O responsibilities of the failed node. Data loss during a node failure is
prevented by mirroring the I/O read and write data cache between the two nodes
in an I/O group.
If only one node is assigned to an I/O group or if a node has failed in an I/O
group, the cache is flushed to the disk and then goes into write-through mode.
Therefore, any writes for the volumes that are assigned to this I/O group are not
cached; they are sent directly to the storage device.
When a volume is created, the I/O group to provide access to the volume must be
specified. However, volumes can be created and added to I/O groups that contain
offline nodes. I/O access is not possible until at least one of the nodes in the I/O
group is online.
2145 UPS-1U
A 2145 UPS-1U is used exclusively to maintain data that is held in the SAN
Volume Controller dynamic random access memory (DRAM) in the event of an
unexpected loss of external power. This use differs from the traditional
uninterruptible power supply that enables continued operation of the device that it
supplies when power is lost.
With a 2145 UPS-1U, data is saved to the internal disk of the SAN Volume
Controller node. The uninterruptible power supply units are required to power the
SAN Volume Controller nodes even when the input power source is considered
uninterruptible.
Each SAN Volume Controller node monitors the operational state of the
uninterruptible power supply to which it is attached.
If the 2145 UPS-1U reports a loss of input power, the SAN Volume Controller node
stops all I/O operations and dumps the contents of its dynamic random access
memory (DRAM) to the internal disk drive. When input power to the 2145 UPS-1U
is restored, the SAN Volume Controller node restarts and restores the original
contents of the DRAM from the data saved on the disk drive.
A SAN Volume Controller node is not fully operational until the 2145 UPS-1U
battery state indicates that it has sufficient charge to power the SAN Volume
Controller node long enough to save all of its memory to the disk drive. In the
event of a power loss, the 2145 UPS-1U has sufficient capacity for the SAN Volume
Controller to save all its memory to disk at least twice. For a fully charged 2145
UPS-1U, even after battery charge has been used to power the SAN Volume
Controller node while it saves dynamic random access memory (DRAM) data,
sufficient battery charge remains so that the SAN Volume Controller node can
become fully operational as soon as input power is restored.
Important: Do not shut down a 2145 UPS-1U without first shutting down the SAN
Volume Controller node that it supports. Data integrity can be compromised by
pushing the 2145 UPS-1U on/off button when the node is still operating. However,
in the case of an emergency, you can manually shut down the 2145 UPS-1U by
pushing the 2145 UPS-1U on/off button when the node is still operating. Service
actions must then be performed before the node can resume normal operations. If
multiple uninterruptible power supply units are shut down before the nodes they
support, data can be corrupted.
Internal storage
The SAN Volume Controller 2145-CF8 and the SAN Volume Controller 2145-CG8
have a number of drives attached to them. These drives are used to create
Redundant Array of Independent Disks (RAID) arrays, which are presented as
managed disks (MDisks) in the system.
External storage
The SAN Volume Controller can detect logical units (LUs) on an external storage
system that is attached through Fibre Channel connections. These LUs are detected
as managed disks (MDisks) in the system and must be protected from drive
failures by using RAID technology on an external storage system.
Storage systems provide the storage that the SAN Volume Controller system
detects as one or more MDisks.
SAN Volume Controller supports storage systems that implement the use of RAID
technology and also those that do not use RAID technology. RAID implementation
provides redundancy at the disk level, which prevents a single physical disk
failure from causing a RAID managed disk (MDisk), storage pool, or associated
volume failure. Physical capacity for storage systems can be configured as RAID 1,
RAID 0+1, RAID 5, RAID 6, or RAID 10.
Storage systems divide array storage into many Small Computer System Interface
(SCSI) logical units (LUs) that are presented on the SAN. Ensure that you assign an
entire array to an MDisk as you create the MDisk, to present the array as a single
SCSI LU that is recognized by SAN Volume Controller as a single RAID MDisk.
You can then use the virtualization features of SAN Volume Controller to create
volumes from the MDisk.
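A minimal CLI sketch of this flow, using hypothetical object names and sizes, might look
like the following; confirm the exact syntax against the CLI reference for your code level:
detectmdisk
mkmdiskgrp -name Pool0 -ext 256
addmdisk -mdisk mdisk4 Pool0
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 500 -unit gb -name app_vol01
The detectmdisk command rescans the Fibre Channel network for new logical units, the
storage pool is created with a 256 MB extent size, the unmanaged MDisk becomes managed
when it is added to the pool, and the volume is then created from extents in that pool.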
The exported storage devices are detected by the system and reported by the user
interfaces. The system can also determine which MDisks each storage system is
presenting and can provide a view of MDisks that is filtered by the storage system.
Therefore, you can associate the MDisks with the RAID that the system exports.
The storage system can have a local name for the RAID or single disks that it is
providing. However, it is not possible for the nodes in the system to determine this
name, because the namespace is local to the storage system. The storage system
makes the storage devices visible with a unique ID, called the logical unit number
(LUN). This ID, along with the storage system serial number or numbers (there can
be more than one controller in a storage system), can be used to associate the
MDisks in the system with the RAID exported by the system.
Attention: If you delete a RAID that is being used by SAN Volume Controller,
the storage pool goes offline and the data in that group is lost.
MDisks
A managed disk (MDisk) is a logical unit of physical storage. MDisks are either
arrays (RAID) from internal storage or volumes from external storage systems.
MDisks are not visible to host systems.
An MDisk might consist of multiple physical disks that are presented as a single
logical disk to the storage area network (SAN). An MDisk always provides usable
blocks of physical storage to the system even if it does not have a one-to-one
correspondence with a physical disk.
Each MDisk is divided into a number of extents, which are numbered sequentially
from 0 to the end of the MDisk. The extent size is a property of storage pools.
When an MDisk is added to a storage pool, the size of the extents that the MDisk
is divided into depends on the attribute of the storage pool to which it has been
added.
Access modes
The access mode determines how the clustered system uses the MDisk. The
following list describes the types of possible access modes:
Unmanaged
The MDisk is not used by the system.
Managed
The MDisk is assigned to a storage pool and provides extents that volumes
can use.
Image
The MDisk is assigned directly to a volume with a one-to-one mapping of
extents between the MDisk and the volume.
Array
The MDisk represents a set of drives in a RAID from internal storage.
Attention: If you add an MDisk that contains existing data to a storage pool
while the MDisk is in unmanaged or managed mode, you lose the data that it
contains. The image mode is the only mode that preserves this data.
Figure: physical disks on the storage system are presented as logical disks, which are the
managed disks as seen by the 2145 or 2076
Attention: If you have observed intermittent breaks in links or if you have been
replacing cables or connections in the SAN fabric or LAN configuration, you might
have one or more MDisks in degraded status. If an I/O operation is attempted
when a link is broken and the I/O operation fails several times, the system
partially excludes the MDisk and it changes the status of the MDisk to degraded.
You must include the MDisk to resolve the problem.
You can include the MDisk by either selecting Physical Storage > MDisks: Action
> Include Excluded MDisk in the management GUI, or by issuing the following
command in the command-line interface (CLI):
includemdisk mdiskname/id
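A typical sequence is to list the MDisks, identify the one that is reported as degraded, and
then include it. The MDisk name in this example is hypothetical:
lsmdisk
includemdisk mdisk5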
Extents
Each MDisk is divided into chunks of equal size called extents. Extents are a unit of
mapping that provide the logical connection between MDisks and volume copies.
MDisk path
Each MDisk from external storage has an online path count, which is the number
of nodes that have access to that MDisk; this represents a summary of the I/O
path status between the system nodes and the storage device. The maximum path
count is the maximum number of paths that have been detected by the system at
any point in the past. If the current path count is not equal to the maximum path
count, the MDisk might be degraded.
RAID properties
A Redundant Array of Independent Disks (RAID) is a method of configuring
drives for high availability and high performance. The information in this topic
applies only to SAN Volume Controller solid-state drives (SSDs) that provide
high-speed managed-disk (MDisk) capability for SAN Volume Controller 2145-CF8
and SAN Volume Controller 2145-CG8 nodes.
SAN Volume Controller supports hot-spare drives. When a RAID member drive
fails, the system automatically replaces the failed member with a hot-spare drive
and resynchronizes the array to restore its redundancy.
Array initialization
When an array is created, the array members are synchronized with each other by
a background initialization process. The array is available for I/O during this
process: Initialization has no impact on availability due to member drive failures.
If an array has the necessary redundancy, a drive is removed from the array if it
fails or access to it is lost. If a suitable spare drive is available, it is taken into the
array, and the drive then starts to synchronize.
Each array has a set of goals that describe the preferred location and performance
of each array member. If you lose access to a node, you lose access to all the drives
in the node. Drives that are configured as members of the array are not removed
from the array. Once the node is available, the system copies the data that was
modified while the node was offline from the good drive to the out-of-date drive.
You can manually start an exchange, and the array goals can also be updated to
facilitate configuration changes.
RAID can be configured through the Easy Setup wizard when you first install your
system, or later through the Configure Internal Storage wizard. You can either use
the recommended configuration, which is the fully automatic configuration, or you
can set up a different configuration.
If you select the recommended configuration, all available drives are configured
based on recommended values for the RAID level and drive class. The
recommended configuration uses all the drives to build arrays that are protected
with the appropriate amount of spare drives.
The management GUI also provides a set of presets to help you configure for
different RAID types. You can tune RAID configurations slightly based on best
practices. The presets vary according to how the drives are configured. Selections
include the drive class, the preset from the list that is shown, whether to configure
spares, whether to optimize for performance, whether to optimize for capacity,
and the number of drives to provision.
For greatest control and flexibility, you can use the mkarray command-line interface
(CLI) command to configure RAID on your system.
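For example, a RAID 10 array might be created from internal drives with a command
similar to the following sketch. The drive IDs and pool name are hypothetical, and the
supported RAID levels and parameters should be confirmed against the CLI reference for
your code level:
mkarray -level raid10 -drive 0:1:2:3 Pool_SSD
The resulting array appears as an array-mode MDisk in the specified storage pool.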
If your system has both solid-state drives (SSDs) and traditional hard disk drives,
you can use the Easy Tier function to migrate the most frequently used data to
higher performing storage.
Each array member is protected by a set of spare drives that are valid matches.
Some of these spare drives are more suitable than other spare drives. For example,
some spare drives could degrade the array performance, availability, or both. For a
given array member, a good spare drive is online and is in the same node. A good
spare drive has either one of the following characteristics:
v An exact match of member goal capacity, performance, and location.
v A performance match: the spare drive has a capacity that is the same or larger
and has the same or better performance.
A good spare also has either of these characteristics:
v A drive with a use of spare.
v A concurrent-exchange old drive that is destined to become a hot-spare drive
when the exchange completes.
The array attribute spare_goal is the number of good spares that are needed to
protect each array member. This attribute is set when the array is created and can
be changed with the charray command.
If the number of good spares that an array member is protected by is below the
array spare goal, you receive event error 084300.
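For example, assuming an array MDisk named mdisk3, the spare goal could be adjusted
and then checked with commands similar to the following; confirm the parameter name
against the CLI reference for your code level:
charray -sparegoal 2 mdisk3
lsarray mdisk3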
Figure 11. Storage pool
All MDisks in a pool are split into extents of the same size. Volumes are created
from the extents that are available in the pool. You can add MDisks to a storage
pool at any time either to increase the number of extents that are available for new
volume copies or to expand existing volume copies.
You can specify a warning capacity for a storage pool. A warning event is
generated when the amount of space that is used in the storage pool exceeds the
warning capacity. This is especially useful in conjunction with thin-provisioned
volumes that have been configured to automatically consume space from the
storage pool.
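For example, a warning threshold can be set on an existing pool with a command similar
to this sketch; the pool name and threshold are hypothetical:
chmdiskgrp -warning 80% Pool0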
You can add only MDisks that are in unmanaged mode. When MDisks are added
to a storage pool, their mode changes from unmanaged to managed.
You can delete MDisks from a group under the following conditions:
v Volumes are not using any of the extents that are on the MDisk.
v Enough free extents are available elsewhere in the group to move any extents
that are in use from this MDisk.
Extents
To track the space that is available on an MDisk, the SAN Volume Controller
divides each MDisk into chunks of equal size. These chunks are called extents and
are indexed internally. Extent sizes can be 16, 32, 64, 128, 256, 512, 1024, 2048, 4096,
or 8192 MB. The choice of extent size affects the total amount of storage that is
managed by the system.
You specify the extent size when you create a new storage pool. You cannot change
the extent size later; it must remain constant throughout the lifetime of the storage
pool.
You cannot migrate a volume between storage pools that have different extent
sizes. Instead, use volume mirroring to add a copy of the volume in the destination
storage pool. After the copies are synchronized, you can free up extents by deleting
the copy of the data in the source storage pool. The FlashCopy function and Metro
Mirror can also be used to create a copy of a volume in a different storage pool.
A system can manage 2^22 extents. For example, with a 16 MB extent size, the
system can manage up to 16 MB x 4,194,304 = 64 TB of storage.
When you choose an extent size, consider your future needs. For example, if you
currently have 40 TB of storage and you specify an extent size of 16 MB for all
storage pools, the capacity of the system is limited to 64 TB of storage in the
future. If you select an extent size of 64 MB for all storage pools, the capacity of
the system can grow to 256 TB.
Using a larger extent size can waste storage. When a volume is created, the storage
capacity for the volume is rounded to a whole number of extents. If you configure
the system to have a large number of small volumes and you use a large extent
size, this can cause storage to be wasted at the end of each volume.
Information about the maximum volume, MDisk, and system capacities for each
extent size is included in the Configuration Limits and Restrictions document on
the product support website:
www.ibm.com/storage/support/2145
Easy Tier eliminates manual intervention when assigning highly active data on
volumes to faster responding storage. In this dynamically tiered environment, data
movement is seamless to the host application regardless of the storage tier in
which the data resides. Manual controls exist so that you can change the default
behavior, for example, by turning off Easy Tier on storage pools that have both
types of MDisks.
All MDisks belong to one tier or the other, which includes MDisks that are not yet
part of a storage pool.
SAN Volume Controller supports solid-state drives (SSDs) that offer a number of
potential benefits over magnetic hard disk drives (HDDs), such as faster data
access and throughput, better performance, and less power consumption.
SSDs are, however, much more expensive than HDDs. To optimize SSD
performance and help provide a cost-effective contribution to the overall system,
Easy Tier can cause infrequently accessed data to reside on lower cost HDDs and
frequently accessed data to reside on SSDs.
Determining the amount of data activity in an extent and when to move the extent
to the proper storage tier is usually too complex a task to manage manually.
Easy Tier evaluation mode collects usage statistics for each storage extent for a
storage pool where the capability of moving data from one tier to the other tier is
not possible or is disabled. An example of such a storage pool is a pool of
homogeneous MDisks, where all MDisks are typically HDDs. A summary file is
created in the /dumps directory on the configuration node
(dpa_heat.node_name.date.time.data), which can be offloaded and viewed by
using the IBM Storage Tier Advisor Tool.
Easy Tier automatic data placement also measures the amount of data access, but
then acts on the measurements to automatically place the data into the appropriate
tier of a storage pool that contains both MDisk tiers.
Dynamic data movement is transparent to the host server and application users of
the data, other than providing improved performance.
For a storage pool and volume to be automatically managed by Easy Tier, ensure
that the following conditions are met:
v The volume must be striped.
v The storage pool must contain both MDisks that belong to the generic_ssd tier
and MDisks that belong to the generic_hdd tier.
Volumes that are added to storage pools use extents from generic_hdd MDisks
initially, if available. Easy Tier then collects usage statistics to determine which
extents to move to generic_ssd MDisks.
When IBM System Storage Easy Tier evaluation mode is enabled for a storage pool
with a single tier of storage, Easy Tier collects usage statistics for all the volumes in
the pool.
Volumes are not monitored when the easytier attribute of a storage pool is set to
off or auto with a single tier of storage. You can enable Easy Tier evaluation mode
for a storage pool with a single tier of storage by setting the easytier attribute of
the storage pool to on.
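For example, the following command is one possible way to enable evaluation mode on a single-tier storage pool. The parameter is based on the easytier attribute described above, and the pool name is illustrative; verify the exact syntax in the chmdiskgrp CLI reference:
chmdiskgrp -easytier on Pool0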
You can control or view data placement settings by using the following
command-line interface (CLI) commands:
chmdiskgrp
Modifies the properties of the storage pool. Use this command to turn on
evaluation mode on a storage pool with a single tier of storage and to turn
off Easy Tier functions on a storage pool with more than one tier of
storage.
lsmdiskgrp
Lists storage pool information.
lsvdisk
Lists volume information.
lsvdiskcopy
Lists volume copy information.
mkmdiskgrp
Creates a new storage pool.
Other MDisk commands such as addmdisk, chmdisk, and lsmdisk can be used to
view or set the tier an MDisk belongs to.
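For example, the following illustrative commands change the tier of an MDisk and then list the storage pool to confirm its tier capacities. The tier names match the generic_ssd and generic_hdd tiers described in this section, but the MDisk and pool names are examples and the exact parameter syntax should be confirmed in the CLI reference:
chmdisk -tier generic_ssd mdisk5
lsmdiskgrp Pool0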
When IBM System Storage Easy Tier on SAN Volume Controller automatic data
placement is active, Easy Tier measures the host access activity to the data on each
storage extent, provides a mapping that identifies high activity extents, and then
moves the high-activity data according to its relocation plan algorithms.
To automatically relocate the data, Easy Tier performs the following actions:
1. Monitors volumes for host access to collect average usage statistics for each
extent over a rolling 24-hour period of I/O activity.
2. Analyzes the amount of I/O activity for each extent to determine if the extent
is a candidate for migrating to or from the higher performing solid-state drive
(SSD) tier.
3. Develops an extent relocation plan for each storage pool to determine exact
data relocations within the storage pool. Easy Tier then automatically relocates
the data according to the plan.
Automatic data placement is enabled by default for storage pools with more than
one tier of storage. When you enable automatic data placement, by default all
striped volumes in those storage pools are candidates for automatic data placement.
If you want to disable automatic data placement for a volume or storage pool, set
the easytier attribute to off.
Extracting and viewing performance data with the IBM Storage Tier Advisor
Tool:
You can use the IBM Storage Tier Advisor Tool, hereafter referred to as advisor
tool, to view performance data that is collected by IBM System Storage Easy Tier
over a 24-hour operational cycle. The advisor tool is the application that creates a
Hypertext Markup Language (HTML) file that you use to view the data when you
point your browser to the file.
To download the Storage Tier Advisor Tool, click Downloads at this website:
www.ibm.com/storage/support/2145
To extract the summary performance data, follow these steps using the
command-line interface (CLI):
Procedure
1. Find the most recent dpa_heat.node_name.date.time.data file in the clustered
system by entering the following command-line interface (CLI) command:
lsdumps node_id | node_name
where node_id | node_name is the node ID or name to list the available dumps
for.
2. If necessary, copy the most recent summary performance data file to the current
configuration node. Enter the following command:
cpdumps -prefix /dumps/dpa_heat.node_name.date.time.data node_id | node_name
3. Use PuTTY scp (pscp) to copy the summary performance data in a binary
format from the configuration node to a local directory.
4. From a Microsoft Windows command prompt, use the advisor tool to transform
the binary file in your local directory into an HTML file in the local directory.
5. Point your browser to the HTML file in your local directory.
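The following sequence is an illustrative example of steps 1 through 4. The file name, node name, system IP address, user ID, local directory, and the STAT executable name of the advisor tool are all examples rather than required values:
lsdumps node1
cpdumps -prefix /dumps/dpa_heat.node1.120603.131530.data node1
pscp -unsafe superuser@192.168.1.50:/dumps/dpa_heat.node1.120603.131530.data c:\temp\
STAT c:\temp\dpa_heat.node1.120603.131530.data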
Results
What to do next
You can view this information to analyze workload statistics and evaluate which
logical volumes might be candidates for Easy Tier management. If you have not
enabled the Easy Tier function, you can use the usage statistics gathered by the
monitoring process to help you determine whether to use Easy Tier to enable
potential performance improvements in your storage environment.
Some limitations exist when using the IBM System Storage Easy Tier function on
SAN Volume Controller.
v The Easy Tier function supports the following tiered storage configurations:
– Local (internal) Serial Attached SCSI (SAS) solid-state drives (SSDs) in a
storage pool with Fibre Channel-attached hard disk drives (HDDs).
– External Fibre Channel-attached SSDs in a storage pool with Fibre
Channel-attached hard disk drives (HDDs).
v To avoid unpredictable performance results, do not use the Easy Tier function to
migrate between SAS drives and Serial Advanced Technology Attachment
(SATA) drives.
v To ensure optimal performance, all MDisks in a storage pool tier must have the
same technology and performance characteristics.
v Easy Tier automatic data placement is not supported on volume copies that are
image mode or sequential. I/O monitoring for such volumes is supported, but
you cannot migrate extents on such volumes unless you convert image or
sequential volume copies to striped volumes.
v Automatic data placement and extent I/O activity monitors are supported on
each copy of a mirrored volume. The Easy Tier function works with each copy
independently of the other copy. For example, you can enable or disable Easy
Tier automatic data placement for each copy independently of the other copy.
v SAN Volume Controller creates new volumes or volume expansions using
extents from MDisks from the HDD tier, if possible, but uses extents from
MDisks from the SSD tier if necessary.
v When a volume is migrated out of a storage pool that is managed with the Easy
Tier function, Easy Tier automatic data placement mode is no longer active on
that volume. Automatic data placement is also turned off while a volume is
being migrated even if it is between pools that both have Easy Tier automatic
data placement enabled. Automatic data placement for the volume is re-enabled
when the migration is complete.
When an MDisk is deleted from a storage pool with the force parameter, extents
in use are migrated to MDisks in the same tier as the MDisk being removed, if
possible. If insufficient extents exist in that tier, extents from the other tier are used.
When Easy Tier automatic data placement is enabled for a volume, the
migrateexts command-line interface (CLI) command cannot be used on that
volume.
When SAN Volume Controller migrates a volume to a new storage pool, Easy Tier
automatic data placement between the generic SSD tier and the generic HDD tier
is temporarily suspended. After the volume is migrated to its new storage pool,
Easy Tier automatic data placement between the generic SSD tier and the generic
HDD tier resumes for the newly moved volume, if appropriate.
When SAN Volume Controller migrates a volume from one storage pool to
another, it attempts to migrate each extent to an extent in the new storage pool
from the same tier as the original extent. In some cases, such as a target tier being
unavailable, the other tier is used. For example, the generic SSD tier might be
unavailable in the new storage pool.
If automatic data placement is enabled in the new storage pool, pending Easy
Tier status changes are assigned after the volume completes its move to the new
storage pool. Although the status changes are based on volume use in the old
storage pool, the new status is honored in the new storage pool.
Easy Tier automatic data placement does not support image mode. No automatic
data placement occurs in this situation. When a volume with Easy Tier automatic
data placement mode active is migrated to image mode, Easy Tier automatic data
placement mode is no longer active on that volume.
The Easy Tier function does support evaluation mode for image mode volumes.
Volumes
A volume is a logical disk that the system presents to the hosts.
Types
If you are unsure if there is sufficient free space to create a striped volume
copy, select one of the following options:
v Check the free space on each MDisk in the storage pool using the
lsfreeextents command, as shown in the example after this list.
v Let the system automatically create the volume copy by not supplying a
specific stripe set.
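For example, the following command lists the number of free extents on one MDisk; repeat it for each MDisk in the storage pool. The MDisk name is an example:
lsfreeextents mdisk1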
Figure 12 shows an example of a storage pool that contains three MDisks.
This figure also shows a striped volume copy that is created from the
extents that are available in the storage pool.
Sequential
When extents are selected, they are allocated sequentially on one MDisk to
create the volume copy if enough consecutive free extents are available on
the chosen MDisk.
Image Image-mode volumes are special volumes that have a direct relationship
with one MDisk. If you have an MDisk that contains data that you want to
merge into the clustered system, you can create an image-mode volume.
When you create an image-mode volume, a direct mapping is made
between extents that are on the MDisk and extents that are on the volume.
The MDisk is not virtualized. The logical block address (LBA) x on the
MDisk is the same as LBA x on the volume.
When you create an image-mode volume copy, you must assign it to a
storage pool. An image-mode volume copy must be at least one extent in
size. The minimum size of an image-mode volume copy is the extent size
of the storage pool to which it is assigned.
The extents are managed in the same way as other volume copies. When
the extents have been created, you can move the data onto other MDisks
that are in the storage pool without losing access to the data. After you
move one or more extents, the volume copy becomes a virtualized disk,
and the mode of the MDisk changes from image to managed.
You can use more sophisticated extent allocation policies to create volume copies.
When you create a striped volume, you can specify the same MDisk more than
once in the list of MDisks that are used as the stripe set. This is useful if you have
a storage pool in which not all the MDisks are of the same capacity. For example,
if you have a storage pool that has two 18 GB MDisks and two 36 GB MDisks, you
can create a striped volume copy by specifying each of the 36 GB MDisks twice in
the stripe set so that two-thirds of the storage is allocated from the 36 GB disks.
If you delete a volume, you destroy access to the data that is on the volume. The
extents that were used in the volume are returned to the pool of free extents that is
in the storage pool. The deletion might fail if the volume is still mapped to hosts.
The deletion might also fail if the volume is still part of a FlashCopy, Metro Mirror,
or Global Mirror mapping. If the deletion fails, you can specify the force-delete flag
to delete both the volume and the associated mappings to hosts. Forcing the
deletion deletes the Copy Services relationship and mappings.
States
A volume can be in one of three states: online, offline, and degraded. Table 12
describes the different states of a volume.
Table 12. Volume states

Online
    At least one synchronized copy of the volume is online and available if
    both nodes in the I/O group can access the volume. A single node can
    only access a volume if it can access all the MDisks in the storage pool
    that are associated with the volume.

Offline
    The volume is offline and unavailable if both nodes in the I/O group are
    missing, or if none of the nodes in the I/O group that are present can
    access any synchronized copy of the volume. The volume can also be
    offline if the volume is the secondary of a Metro Mirror or Global
    Mirror relationship that is not synchronized. A thin-provisioned volume
    goes offline if a user attempts to write an amount of data that exceeds
    the available disk space.

Degraded
    The status of the volume is degraded if one node in the I/O group is
    online and the other node is either missing or cannot access any
    synchronized copy of the volume.
    Note: If you have a degraded volume and all of the associated nodes and
    MDisks are online, call the IBM Support Center for assistance.
You can select to have read and write operations stored in cache by specifying a
cache mode. You can specify the cache mode when you create the volume. After
the volume is created, you can change the cache mode.
Compressed volumes:
Like thin-provisioned volumes, compressed volumes have virtual, real, and used
capacities:
v Real capacity is the extent space that is allocated from the storage pool. The real
capacity is also set when the volume is created and, like thin-provisioned
volumes, can be expanded or shrunk down to the used capacity.
v Virtual capacity is available to hosts. The virtual capacity is set when the volume
is created and can be changed afterward.
v Used capacity is the amount of real capacity used to store customer data and
metadata after compression.
v Capacity before compression is the amount of customer data that has been
written to the volume and then compressed.
Note: The capacity before compression does not include regions where zero
data is written to unallocated space.
You can also monitor information on compression usage to determine the savings
to your storage capacity when volumes are compressed. To monitor system-wide
compression savings and capacity, select Monitoring > System and select either
the system name or the Compression View. You can compare the amount of
capacity used before compression is applied to the capacity that is used for all
compressed volumes. You can also view the total percentage of capacity savings
when compression is used on the system, and you can monitor compression
savings across individual pools and volumes. For volumes, you can use these
compression values to determine which volumes have achieved the highest
compression savings.
Benefits of compression
When you use compression, monitor overall performance and CPU utilization to
ensure that other system functions have adequate bandwidth. If compression is
used excessively, overall bandwidth for the system might be impacted. To view
performance statistics that are related to compression, select Monitoring >
Performance and then select Compression % on the CPU Utilization graph.
Compression can be used to consolidate storage in both block storage and file
system environments. Compressing data reduces the amount of capacity that is
needed for volumes and directories. Compression can be used to minimize storage
utilization of logged data. Many applications, such as lab test results, require
constant recording of application or user status. Logs are typically represented as
text files or binary files that contain a high repetition of the same data patterns.
By using volume mirroring, you can convert an existing fully allocated volume to a
compressed volume without disrupting access to the original volume content. The
management GUI contains specific directions on converting a generic volume to a
compressed volume.
Before implementing compressed volumes on your system, assess the current types
of data and volumes that are used on your system. Do not compress data that is
already compressed as part of its normal workload. Data such as video, compressed
file formats (for example, .zip files), or compressed user productivity file formats
does not benefit from additional compression.
There are various configuration items that affect the performance of compression
on the system. To attain high compression ratios and performance on your system,
ensure that the following guidelines have been met:
v If you have only a small number (between 10 and 20) of compressed volumes,
configure them on one I/O group and do not split compressed volumes between
different I/O groups.
v For larger numbers of compressed volumes on systems with more than one I/O
group, distribute compressed volumes across I/O groups to ensure that access to
these volumes is evenly distributed among the I/O groups.
v Identify and use compressible data only. Different data types have different
compression ratios, and it is important to determine the compressible data
currently on your system. You can use tools that estimate the compressible data
or use commonly known ratios for common applications and data types. Storing
these data types on compressed volumes saves disk capacity and improves the
benefit of using compression on your system. The following table shows the
compression ratio for common applications and data types:
Table 14. Compression ratio for data types. Table 14 describes the compression ratio of
common data types and applications that provide high compression ratios.

Data types/applications                              Compression ratio
Oracle and DB2                                       Up to 80%
Microsoft Office 2003                                Up to 60%
Microsoft Office 2007                                Up to 20%
Computer-Aided Design and Computer-Aided
Manufacturing (CAD/CAM)                              Up to 70%
Oil/Gas                                              Up to 50%
v Ensure that you have an extra 10% of capacity in the storage pools that are used
for compressed volumes for the additional metadata and to provide an error
margin in the compression ratio.
v Use compression on homogeneous volumes.
v Avoid using any client, file-system, or application-based compression with the
system compression.
v Do not compress encrypted data.
Compression requires dedicated hardware resources within the nodes which are
assigned or de-assigned when compression is enabled or disabled. Compression is
enabled whenever the first compressed volume in an I/O group is created and is
disabled when the last compressed volume is removed from the I/O group.
Use Monitoring > Performance in the management GUI during periods of high
host workload to measure CPU utilization.
Table 15. CPU utilization of nodes

                          SAN Volume Controller   SAN Volume Controller        SAN Volume Controller
Per node                  2145-CF8                2145-CG8 (4 CPU cores) (1)   2145-CG8 (6 CPU cores) (1)
CPU already close to
or above:                 25%                     25%                          50%

(1) To determine whether your 2145-CG8 nodes contain 4 or 6 CPU cores, select
Monitoring > System to view VPD information related to the processor. The
version entry for 2145-CG8 nodes contains one of the following two values:
v Intel Xeon CPU E5630 - 4 cores
v Intel Xeon CPU E5645 - 6 cores
For more detailed planning and implementation information, see the Redpaper,
"IBM Real-time Compression in SAN Volume Controller and Storwize V7000."
Mirrored volumes:
By using volume mirroring, a volume can have two physical copies. Each volume
copy can belong to a different storage pool, and each copy has the same virtual
capacity as the volume. In the management GUI, an asterisk (*) indicates the
primary copy of the mirrored volume. The primary copy indicates the preferred
volume for read requests.
When a server writes to a mirrored volume, the system writes the data to both
copies. When a server reads a mirrored volume, the system picks one of the copies
to read. If one of the mirrored volume copies is temporarily unavailable (for
example, because the storage system that provides the storage pool is unavailable),
the volume remains accessible to servers. The system remembers which areas of
the volume are written and resynchronizes these areas when both copies are
available.
You can create a volume with one or two copies, and you can convert a
non-mirrored volume into a mirrored volume by adding a copy. When a copy is
added in this way, the SAN Volume Controller clustered system synchronizes the
new copy so that it is the same as the existing volume. Servers can access the
volume during this synchronization process.
The volume copy can be any type: image, striped, sequential, and either
thin-provisioned or fully allocated. The two copies can be of completely different
types.
When you use volume mirroring, consider how quorum candidate disks are
allocated. Volume mirroring maintains some state data on the quorum disks. If a
quorum disk is not accessible and volume mirroring is unable to update the state
information, a mirrored volume might need to be taken offline to maintain data
integrity. To ensure the high availability of the system, ensure that multiple
quorum candidate disks, allocated on different storage systems, are configured.
When a mirrored volume uses disk extents on a solid-state drive (SSD) that is on a
SAN Volume Controller node, synchronization is lost if one of the nodes goes
offline either during a concurrent code upgrade or because of maintenance. During
code upgrade, the synchronization must be restored within 30 minutes or the
upgrade stalls. Unlike volume copies from external storage systems, access to the
volume during the period that the SSD volume copies are not synchronized
depends on the single node that contains the SSD storage associated with the
synchronized volume copy. The default synchronization rate is typically too low
for SSD mirrored volumes. Instead, set the synchronization rate to 80 or above.
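For example, the following command is one possible way to raise the synchronization rate of a mirrored volume that uses SSD copies. The -syncrate parameter name and the volume name are assumptions to be verified against the chvdisk CLI reference:
chvdisk -syncrate 80 volume1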
With a fast failover, during normal processing of host write I/O the system
submits writes (with a timeout of ten seconds) to both copies. If one write succeeds
and the other write takes longer than ten seconds then the slow write times out
and is aborted. The duration of the slow copy I/O abort sequence depends on the
backend that the mirror copy is configured from. For example if the I/O is
performed over the Fibre Channel network then the I/O abort sequence normally
takes around ten to twenty seconds, or in rare cases, longer than twenty seconds.
When the I/O abort sequence completes, the volume mirror configuration is
updated to record that the slow copy is no longer synchronized. The volume mirror
then ceases to use the slow copy for a period of between four and six minutes, and
subsequent I/O is satisfied by the remaining synchronized copy. During this
four-to-six minute period, synchronization is suspended.
Additionally, the volume's synchronization progress shows less than 100% and
decreases if the volume receives additional host writes. After the copy suspension
completes, volume mirroring synchronization resumes, and the slow copy starts
synchronizing.
Image mode MDisks are members of a storage pool, but they do not contribute to
free extents. Image mode volumes are not affected by the state of the storage pool
because the storage pool controls image mode volumes through the association of
the volume to an MDisk. Therefore, if an MDisk that is associated with an image
mode volume is online and the storage pool of which they are members goes
offline, the image mode volume remains online. Conversely, the state of a storage
pool is not affected by the state of the image mode volumes in the storage pool.
An image mode volume behaves just as a managed mode volume in terms of the
Metro Mirror, Global Mirror, and FlashCopy Copy Services. Image mode volumes
are different from managed mode in two ways:
v Migration. An image mode volume can be migrated to another image mode
volume. It becomes managed while the migration is ongoing, but returns to
image mode when the migration is complete.
v Quorum disks. Image mode volumes cannot be quorum disks. This means that a
clustered system with only image mode volumes does not have a quorum disk.
Several methods can be used to migrate image mode volumes into managed mode
volumes.
To perform any type of migration activity on an image mode volume, the image
mode volume must first be converted into a managed mode volume. The volume
is automatically converted into a managed mode volume whenever any kind of
migration activity is attempted. After the image-mode-to-managed-mode migration
operation has occurred, the volume becomes a managed mode volume and is
treated the same way as any other managed mode volume.
If the image mode disk has a partial last extent, this last extent in the image mode
volume must be the first to be migrated. This migration is processed as a special
case. After this special migration operation has occurred, the volume becomes a
managed mode volume and is treated in the same way as any other managed
mode volume. If the image mode disk does not have a partial last extent, no
special processing is performed. The image mode volume is changed into a
managed mode volume and is treated the same way as any other managed mode
volume.
An image mode disk can also be migrated to another image mode disk. The image
mode disk becomes managed while the migration is ongoing, but returns to image
mode when the migration is complete.
Procedure
1. Dedicate one storage pool to image mode volumes.
2. Dedicate one storage pool to managed mode volumes.
3. Use the migrate volume function to move the volumes.
Thin-provisioned volumes:
Virtual capacity is the volume storage capacity that is available to a host. Real
capacity is the storage capacity that is allocated to a volume copy from a storage
pool. In a fully allocated volume, the virtual capacity and real capacity are the
same. In a thin-provisioned volume, however, the virtual capacity can be much
larger than the real capacity.
SAN Volume Controller must maintain extra metadata that describes the contents
of thin-provisioned volumes. This means the I/O rates that are obtained from
thin-provisioned volumes are slower than those obtained from fully allocated
volumes that are allocated on the same MDisks.
When you configure a thin-provisioned volume, you can use the warning level
attribute to generate a warning event when the used real capacity exceeds a
specified amount or percentage of the total real capacity. You can also use the
warning event to trigger other actions, such as taking low-priority applications
offline or migrating data into other storage pools.
If a thin-provisioned volume does not have enough real capacity for a write
operation, the volume is taken offline and an error is logged (error code 1865,
event ID 060001). Access to the thin-provisioned volume is restored by either
increasing the real capacity of the volume or increasing the size of the storage pool
that it is allocated on.
When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KB, 64 KB, 128 KB, or 256 KB chunks. The grain size that
you select affects the maximum virtual capacity for the thin-provisioned volume.
The default grain size is 256 KB, and is the strongly recommended option. If you
select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The
grain size cannot be changed after the thin-provisioned volume has been created.
Generally, smaller grain sizes save space but require more metadata access, which
can adversely impact performance. If you are not going to use the thin-provisioned
volume as a FlashCopy source or target volume, use 256 KB to maximize
performance. If you are going to use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the
FlashCopy function.
When you create a thin-provisioned volume, set the cache mode to readwrite to
maximize performance. If the cache mode is set to none, the SAN Volume
Controller system cannot cache the thin-provisioned metadata, which decreases
performance.
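For example, the following command sketches the creation of a thin-provisioned volume with the recommended 256 KB grain size and readwrite cache mode. The pool name, I/O group, sizes, warning threshold, and volume name are illustrative, and the exact parameter syntax should be confirmed in the mkvdisk CLI reference:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -cache readwrite -name thinvol1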
When you create an image mode volume, you can designate it as thin-provisioned.
An image mode thin-provisioned volume has a virtual capacity and a real capacity.
You can use an image mode volume to move a thin-provisioned volume between
two SAN Volume Controller clustered systems by using the following procedure.
The procedure is similar to that used for fully allocated volumes, but has an extra
step during the import process to specify the existing thin-provisioned metadata,
rather than to create a new, empty volume.
1. If the volume is not already in image mode, migrate the volume to image mode
and wait for the migration to complete.
2. Delete the volume from the exporting system.
The import option is valid only for SAN Volume Controller thin-provisioned
volumes. If you use this method to import a thin-provisioned volume that is
created by RAID storage systems into a clustered system, SAN Volume Controller
cannot detect it as a thin-provisioned volume. However, you can use the volume
mirroring feature to convert an image-mode fully allocated volume to a
thin-provisioned volume.
Procedure
1. Start with a single copy, fully allocated volume.
2. Add a thin-provisioned copy to the volume. Use a small real capacity and the
autoexpand feature.
3. Wait while the volume mirroring feature synchronizes the copies.
4. Remove the fully allocated copy from the thin-provisioned volume.
Results
Any grains of the fully allocated volume that contain all zeros do not cause any
real capacity to be allocated on the thin-provisioned copy. Before you create the
mirrored copy, you can fill the free capacity on the volume with a file that contains
all zeros.
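The following commands are an illustrative sketch of this procedure for a volume named vdisk8. The pool name, real-capacity value, and copy ID are examples; use lsvdiskcopy to confirm which copy ID is the original fully allocated copy before you remove it, and confirm the exact syntax in the addvdiskcopy and rmvdiskcopy CLI references:
addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand vdisk8
lsvdisksyncprogress vdisk8
lsvdiskcopy vdisk8
rmvdiskcopy -copy 0 vdisk8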
I/O governing:
You can set the maximum amount of I/O activity that a host sends to a volume.
This amount is known as the I/O governing rate. The governing rate can be
expressed in I/Os per second or MB per second.
I/O governing does not affect FlashCopy and data migration I/O rates.
I/O governing on a Metro Mirror and Global Mirror secondary volume does not
affect the rate of data copy from the primary volume.
Host objects
A host system is a computer that is connected to SAN Volume Controller through
either a Fibre Channel interface or an IP network.
A host object is a logical object in SAN Volume Controller that represents a list of
worldwide port names (WWPNs) and a list of iSCSI names that identify the
interfaces that the host system uses to communicate with SAN Volume Controller.
iSCSI names can be either iSCSI qualified names (IQNs) or extended unique
identifiers (EUIs).
A typical configuration has one host object for each host system that is attached to
SAN Volume Controller. If a cluster of hosts accesses the same storage, you can
add host bus adapter (HBA) ports from several hosts to one host object to make a
simpler configuration. A host object can have both WWPNs and iSCSI names. In
addition, hosts can be connected to the system using Fibre Channel over Ethernet,
where hosts are identified with WWPNs but accessed through an IP network.
The system does not automatically present volumes to the host system. You must
map each volume to a particular host object to enable the volume to be accessed
through the WWPNs or iSCSI names that are associated with the host object. For
Fibre Channel hosts, the system reports the node login count, which is the number
of nodes that can detect each WWPN. If the count is less than expected for the
current configuration, you might have a connectivity problem. For iSCSI-attached
hosts, the number of logged-in nodes refers to iSCSI sessions that are created
between hosts and nodes, and might be greater than the current number of nodes
on the system.
When you create a new host object, the configuration interfaces provide a list of
unconfigured WWPNs. These represent the WWPNs that the system has detected.
Candidate iSCSI names are not available and must be entered manually.
The system can detect only WWPNs that have connected to the system through the
Fibre Channel network or through any IP network. Some Fibre Channel HBA
device drivers do not let the ports remain logged in if no disks are detected on the
fabric or IP network. This can prevent some WWPNs from appearing in the list of
candidate WWPNs. The configuration interface provides a method to manually
type the port names.
Note: You must not include a WWPN or an iSCSI name that belongs to a SAN
Volume Controller node in a host object.
Port masks
You can use the port-mask property of the host object to control the Fibre Channel
ports on each SAN Volume Controller node that a host can access. The port mask
applies to logins from the host initiator ports that are associated with the host
object.
For each login between a host Fibre Channel port and node Fibre Channel port, the
node examines the port mask for the associated host object and determines if
access is allowed or denied. If access is denied, the node responds to SCSI
commands as if the HBA WWPN is unknown.
The port mask is four binary bits. Valid mask values range from 0000 (no ports
enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1
and port 2. The default value is 1111.
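For example, the following command is one possible way to set the port mask of an existing host object so that only ports 1 and 2 are enabled. The -mask parameter name and the host name are assumptions to be verified against the chhost CLI reference:
chhost -mask 0011 host1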
When you create a host mapping to a Fibre Channel attached host, the host ports
that are associated with the host object can view the LUN that represents the
volume on up to eight Fibre Channel ports. Nodes follow the American National
Standards Institute (ANSI) Fibre Channel (FC) standards for SCSI LUs that are
accessed through multiple node ports. All nodes within a single I/O group present
a consistent set of SCSI LUs across all ports on those nodes.
Similarly, all nodes within a single I/O group present a consistent set of SCSI LUs
across all iSCSI ports on those nodes.
Host mapping
Host mapping is the process of controlling which hosts have access to specific
volumes within the system.
The act of mapping a volume to a host makes the volume accessible to the
WWPNs or iSCSI names such as iSCSI qualified names (IQNs) or extended-unique
identifiers (EUIs) that are configured in the host object.
Each host mapping associates a volume with a host object and provides a way for
all WWPNs and iSCSI names in the host object to access the volume. You can map
a volume to multiple host objects. When a mapping is created, multiple paths
might exist across the SAN fabric or Ethernet network from the hosts to the nodes
that are presenting the volume. Without a multipathing device driver, most
operating systems present each path to a volume as a separate storage device. The
multipathing software manages the many paths that are available to the volume
and presents a single storage device to the operating system. If there are multiple
paths, the SAN Volume Controller requires that the multipathing software run on
the host.
Note: The iSCSI names and associated IP addresses for the SAN Volume Controller
nodes can fail over between nodes in the I/O group, which negates the need for
multipathing drivers in some configurations. Multipathing drivers are still
recommended, however, to provide the highest availability.
Figure 13 and Figure 14 show two volumes, and the mappings that exist between
the host objects and these volumes.
Figure 14. Hosts, WWPNs, IQNs or EUIs, volumes, and SCSI mappings
LUN masking is usually implemented in the device driver software on each host.
The host has visibility of more LUNs than it is intended to use, and device driver
software masks the LUNs that are not to be used by this host. After the masking is
complete, only some disks are visible to the operating system. The SAN Volume
Controller can support this type of configuration by mapping all volumes to every
host object and by using operating system-specific LUN masking technology. The
default, and recommended, SAN Volume Controller behavior, however, is to map to
a host object only the volumes that the host requires access to.
This prevents accidental data corruption that is caused when a server overwrites
data on another server. The Reserve and Persistent Reserve commands are often
used by clustered-system software to control access to SAN Volume Controller
volumes.
If a server is not shut down or removed from the server system in a controlled
way, the server's standard and persistent reserves are maintained. This prevents
other servers from accessing data that is no longer in use by the server that holds
the reservation. In this situation, you might want to release the reservation and
allow a new server to access the volume.
When possible, you should have the server that holds the reservation explicitly
release the reservation to ensure that the server cache is flushed and that the server
software is aware that access to the volume has been lost. In circumstances where
this is not possible, you can use operating system specific tools to remove
reservations. Consult the operating system documentation for details.
When you use the rmvdiskhostmap CLI command or the management GUI to
remove host mappings, SAN Volume Controller nodes with a software level of
4.1.0 or later can remove the server's standard reservations and persistent
reservations that the host has on the volume.
Maximum configurations
Ensure that you are familiar with the maximum configurations of the SAN Volume
Controller.
See the following website for the latest maximum configuration support:
www.ibm.com/storage/support/2145
Each I/O group within a system consists of a pair of nodes. If a node fails within
an I/O group, the other node in the I/O group assumes the I/O responsibilities of
the failed node. If the node contains solid-state drives (SSDs), the connection from
a node to its SSD can be a single point of failure in the event of an outage to the
node itself. Use RAID 10 or RAID 1 to remove this single point of failure.
If a system of SAN Volume Controller nodes is split into two partitions (for
example due to a SAN fabric fault), the partition with most nodes continues to
process I/O operations. If a system is split into two equal-sized partitions, a
quorum disk is accessed to determine which half of the system continues to read
and write data.
Each SAN Volume Controller node has four Fibre Channel ports, which can be
used to attach the node to multiple SAN fabrics. For high availability, attach the
node to more than one SAN fabric. For the latest supported hardware information,
see the following website:
www.ibm.com/systems/support
The SAN Volume Controller Volume Mirroring feature can be used to mirror data
across storage systems. This feature provides protection against a storage system
failure.
The SAN Volume Controller Metro Mirror and Global Mirror features can be used
to mirror data between systems at different physical locations for disaster recovery.
Figure 15. Overview of the IBM System Storage Productivity Center
For more information on SSPC, see the IBM System Storage Productivity Center
Introduction and Planning Guide.
The IBM Assist On-site tool is a remote desktop-sharing solution that is offered
through the IBM website. With it, the IBM service representative can remotely view
your system to troubleshoot a problem. You can maintain a chat session with the
IBM service representative so that you can monitor the activity and either
understand how to fix the problem yourself or allow the representative to fix it for
you.
www.ibm.com/support/assistonsite/
When you access the website, you sign in and enter a code that the IBM service
representative provides to you. This code is unique to each IBM Assist On-site
session. A plug-in is downloaded onto your management workstation to connect
you and your IBM service representative to the remote service session. The IBM
Assist On-site tool contains several layers of security to protect your applications
and your computers. You can also use security features to restrict access by the
IBM service representative.
Your IBM service representative can provide you with more detailed instructions
for using the tool.
Event notifications
The SAN Volume Controller product can use Simple Network Management
Protocol (SNMP) traps, syslog messages, and Call Home email to notify you and
the IBM Support Center when significant events are detected. Any combination of
these notification methods can be used simultaneously. Notifications are normally
sent immediately after an event is raised. However, some events might occur
because of service actions that are being performed. If a recommended service
action is active, notifications for these events are sent only if the events are still
unfixed when the service action completes.
Each event that SAN Volume Controller detects is assigned a notification type of
Error, Warning, or Information. When you configure notifications, you specify
where the notifications should be sent and which notification types are sent to that
recipient.
Events with notification type Error or Warning are shown as alerts in the event log.
Events with notification type Information are shown as messages.
SNMP traps
You can use the Management Information Base (MIB) file for SNMP to configure a
network management program to receive SNMP messages that are sent by the
system. This file can be used with SNMP messages from all versions of the
software. More information about the MIB file for SNMP is available at this
website:
www.ibm.com/storage/support/2145
Syslog messages
The syslog protocol is a standard protocol for forwarding log messages from a
sender to a receiver on an IP network. The IP network can be either IPv4 or IPv6.
The system can send syslog messages that notify personnel about an event. The
system can transmit syslog messages in either expanded or concise format. You can
use a syslog manager to view the syslog messages that the system sends. The
system uses the User Datagram Protocol (UDP) to transmit the syslog message.
You can specify a maximum of six syslog servers. You can use the
management GUI or the SAN Volume Controller command-line interface to
configure and modify your syslog settings.
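For example, the following command is an illustrative way to define a syslog server that receives full-format messages with message origin identifier 0. The IP address is an example, and the parameter names should be verified against the mksyslogserver CLI reference:
mksyslogserver -ip 192.168.1.100 -facility 0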
Table 17 on page 60 shows how SAN Volume Controller notification codes map to
syslog security-level codes.
Table 18 shows how SAN Volume Controller values of user-defined message origin
identifiers map to syslog facility codes.
Table 18. SAN Volume Controller values of user-defined message origin identifiers and
syslog facility codes
SAN Volume Controller value   Syslog value   Syslog facility code   Message format
0 16 LOG_LOCAL0 Full
1 17 LOG_LOCAL1 Full
2 18 LOG_LOCAL2 Full
3 19 LOG_LOCAL3 Full
4 20 LOG_LOCAL4 Concise
5 21 LOG_LOCAL5 Concise
6 22 LOG_LOCAL6 Concise
7 23 LOG_LOCAL7 Concise
The Call Home feature transmits operational and event-related data to you and
IBM through a Simple Mail Transfer Protocol (SMTP) server connection in the form
of an event notification email. When configured, this function alerts IBM service
personnel about hardware failures and potentially serious configuration or
environmental issues.
To send email, you must configure at least one SMTP server. You can specify as
many as five additional SMTP servers for backup purposes. The SMTP server must
accept the relaying of email from the SAN Volume Controller management IP
address. You can then use the management GUI or the SAN Volume Controller
command-line interface to configure the email settings, including contact
information and email recipients. Set the reply address to a valid email address.
Send a test email to check that all connections and infrastructure are set up
correctly. You can disable the Call Home function at any time using the
management GUI or the SAN Volume Controller command-line interface.
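The following command sequence is an illustrative sketch of a minimal Call Home setup from the CLI. The SMTP server address, contact details, and parameter names are examples and should be checked against the CLI reference for your software level; the support email address is the one listed later in this section:
mkemailserver -ip 192.168.1.25 -port 25
chemail -reply storage.admin@example.com -contact "Jane Doe" -primary 5550100 -location "Data center 1"
mkemailuser -address callhome1@de.ibm.com -usertype support
startemail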
Notifications can be sent using email, SNMP, or syslog. The data sent for each type
of notification is the same. It includes:
v Record type
v Machine type
v Machine serial number
v Error ID
v Error code
v Software version
v FRU part number
v Cluster (system) name
v Node ID
v Error sequence number
v Time stamp
v Object type
v Object ID
v Problem data
Emails contain the following additional information that allows the Support Center
to contact you:
v Contact names for first and second contacts
v Contact phone numbers for first and second contacts
v Alternate contact numbers for first and second contacts
v Offshift phone number
v Contact email address
v Machine location
To send data and notifications to IBM service personnel, use one of the following
email addresses:
v For SAN Volume Controller nodes located in North America, Latin America,
South America or the Caribbean Islands, use [email protected]
v For SAN Volume Controller nodes located anywhere else in the world, use
[email protected]
Because inventory information is sent using the Call Home email function, you
must meet the Call Home function requirements and enable the Call Home email
function before you can attempt to send inventory information email. You can
adjust the contact information, adjust the frequency of inventory email, or
manually send an inventory email using the management GUI or the SAN Volume
Controller command-line interface.
Performance statistics
Real-time performance statistics provide short-term status information for the SAN
Volume Controller system. The statistics are shown as graphs in the management
GUI.
You can use system statistics to monitor the bandwidth of all the volumes,
interfaces, and MDisks that are being used on your system. You can also monitor
the overall CPU utilization for the system. These statistics summarize the overall
performance health of the system and can be used to monitor trends in bandwidth
and CPU utilization. You can monitor changes to stable values or differences
between related statistics, such as the latency between volumes and MDisks. These
differences then can be further evaluated by performance diagnostic tools.
You can also select node-level statistics, which can help you determine the
performance impact of a specific node. As with system statistics, node statistics
help you to evaluate whether the node is operating within normal performance
metrics.
The CPU utilization graph shows the current percentage of CPU usage and specific
data points on the graph that show peaks in utilization. If compression is being
used, you can monitor the amount of CPU resources being used for compression
and the amount available to the rest of the system.
The Interfaces graph displays data points for serial-attached SCSI (SAS), Fibre
Channel, and iSCSI interfaces. You can use this information to help determine
connectivity issues that might impact performance.
The Volumes and MDisks graphs on the Performance panel show four metrics:
Read, Write, Read latency, and Write latency. You can use these metrics to help
User roles
Each user of the management GUI must provide a user name and a password to
sign on. Each user also has an associated role such as monitor, copy operator,
service, administrator, or security administrator. These roles are defined at the
system level. For example, a user may perform the administrator role for one
system and perform the service role for another system.
Monitor
Users with this role can view objects and system configuration but cannot
configure, modify, or manage the system or its resources.
Copy Operator
Users with this role have monitor-role privileges and can change and
manage all Copy Services functions.
Service
Users with this role have monitor-role privileges and can view the system
information, begin the disk-discovery process, and include disks that have
been excluded. This role is used by service personnel.
Administrator
Users with this role can access all functions on the system except those that
deal with managing users, user groups, and authentication.
Security Administrator (SecurityAdmin role name)
Users with this role can access all functions on the system, including
managing users, user groups, and user authentication.
You can create two types of users who access the system. These types are based on
how the users are authenticated to the system. Local users must provide either a
password, a Secure Shell (SSH) key, or both. Local users are authenticated through
the authentication methods that are located on the SAN Volume Controller system.
If the local user needs access to the management GUI, a password is needed for
the user. If the user requires access to the command-line interface (CLI) through
SSH, either a password or a valid SSH key file is necessary. Local users must be
part of a user group that is defined on the system. User groups define roles that
authorize the users within that group to a specific set of operations on the system.
To manage users and user groups on the system using the management GUI, select
User Management > Users. To configure remote authentication with Tivoli
Integrated Portal or Lightweight Directory Access Protocol, select Settings >
Directory Services.
The following Copy Services features are available for all supported hosts that are
connected to SAN Volume Controller:
FlashCopy
Makes an instant, point-in-time copy from a source volume to a target
volume.
Metro Mirror
Provides a consistent copy of a source volume on a target volume. Data is
written to the target volume synchronously after it is written to the source
volume, so that the copy is continuously updated.
Global Mirror
Provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously, so that the copy is
continuously updated, but the copy might not contain the most recent
updates in the event that a disaster recovery operation is performed.
FlashCopy function
The FlashCopy function is a Copy Services feature that is available with the SAN
Volume Controller system.
In its basic mode, the IBM FlashCopy function copies the contents of a source
volume to a target volume. Any data that existed on the target volume is lost and
is replaced by the copied data. After the copy operation has completed, the target
volumes contain the contents of the source volumes as they existed at a single
point in time unless target writes have been performed. The FlashCopy function is
sometimes described as an instance of a time-zero copy (T 0) or point-in-time copy
technology. Although the FlashCopy operation takes some time to complete, the
resulting data on the target volume is presented so that the copy appears to have
occurred immediately.
To create consistent backups of dynamic data, use the FlashCopy feature to capture
the data at a particular time. The resulting image of the data can be backed up, for
example, to a tape device. When the copied data is on tape, the data on the
FlashCopy target disks becomes redundant and can now be discarded. Usually in
this backup condition, the target data can be managed as read-only.
It is often very important to test a new version of an application with real business
data before the existing production version of the application is updated or
replaced. This testing reduces the risk that the updated application fails because it
is not compatible with the actual business data that is in use at the time of the
update. Such an application test might require write access to the target data.
You can also use the FlashCopy feature to create restart points for long running
batch jobs. This means that if a batch job fails several days into its run, it might be
possible to restart the job from a saved copy of its data rather than rerunning the
entire multiday job.
After the mapping is started, all of the data that is stored on the source volume
can be accessed through the target volume. This includes any operating system
control information, application data, and metadata that was stored on the source
volume. Because of this, some operating systems do not allow a source volume
and a target volume to be addressable on the same host.
To ensure the integrity of the copy that is made, it is necessary to completely flush
the host cache of any outstanding reads or writes before you proceed with the
FlashCopy operation. You can flush the host cache by unmounting the source
volumes from the source host before you start the FlashCopy operation.
Because the target volumes are overwritten with a complete image of the source
volumes, it is important that any data held on the host operating system (or
application) caches for the target volumes is discarded before the FlashCopy
mappings are started. The easiest way to ensure that no data is held in these
caches is to unmount the target volumes before starting the FlashCopy operation.
Some operating systems and applications provide facilities to stop I/O operations
and ensure that all data is flushed from caches on the host. If these facilities are
available, they can be used to prepare and start a FlashCopy operation. See your
host and application documentation for details.
Some operating systems are unable to use a copy of a volume without synthesis.
Synthesis performs a transformation of the operating system metadata on the
target volume to enable the operating system to use the disk. See your host
documentation on how to detect and mount the copied volumes.
Perform the following steps to flush data from your host volumes and start a
FlashCopy operation:
Procedure
1. If you are using UNIX or Linux operating systems, perform the following steps:
a. Quiesce all applications to the source volumes that you want to copy.
b. Use the unmount command to unmount the designated drives.
c. Prepare and start the FlashCopy operation for those unmounted drives.
d. Remount your volumes with the mount command and resume your
applications.
2. If you are using the Microsoft Windows operating system using drive letter
changes, perform the following steps:
a. Quiesce all applications to the source volumes that you want to copy.
b. Go into your disk management window and remove the drive letter on
each drive that you want to copy. This unmounts the drive.
c. Prepare and start the FlashCopy operation for those unmounted drives.
d. Remount your volumes by restoring the drive letters and resume your
applications.
3. If you are using the chkdsk command, perform the following steps:
a. Quiesce all applications to the source volumes that you want to copy.
b. Issue the chkdsk /x command on each drive you want to copy. The /x
option unmounts, scans, and remounts the volume.
c. Ensure that all applications to the source volumes are still quiesced.
d. Prepare and start the FlashCopy operation for those unmounted drives.
Note: If you can ensure that no reads and writes are issued to the source
volumes after you unmount the drives, you can immediately remount and then
start the FlashCopy operation.
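For example, on a Linux host with a stand-alone mapping, the sequence in step 1 might look similar to the following commands. The mount point, device path, and mapping name are hypothetical; the prestartfcmap and startfcmap commands are described later in this chapter.

   # Quiesce the application, then unmount the file system on the source volume
   umount /mnt/appdata
   # Prepare and start the stand-alone FlashCopy mapping
   svctask prestartfcmap fcmap0
   svctask startfcmap fcmap0
   # Remount the file system and resume the application
   mount /dev/mapper/mpatha1 /mnt/appdata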
FlashCopy mappings
A FlashCopy mapping defines the relationship between a source volume and a
target volume.
The FlashCopy feature makes an instant copy of a volume at the time that it is
started. To create an instant copy of a volume, you must first create a mapping
between the source volume (the disk that is copied) and the target volume (the
disk that receives the copy). The source and target volumes must be of equal size.
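As an illustration only, a mapping between two equal-sized volumes might be created from the CLI as follows. The volume names are hypothetical, and the rates shown are the defaults that are described later in this section.

   # Create a FlashCopy mapping between two equal-sized volumes (names are examples)
   svctask mkfcmap -source vdisk_src -target vdisk_tgt -copyrate 50 -cleanrate 50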
A mapping can be created between any two volumes in a system. The volumes do
not have to be in the same I/O group or storage pool. When a FlashCopy
operation starts, a checkpoint is made of the source volume. No data is actually
copied at the time a start operation occurs. Instead, the checkpoint creates a bitmap
that indicates that no part of the source volume has been copied. Each bit in the
bitmap represents one region of the source volume. Each region is called a grain.
During a read operation to the target volume, the bitmap is used to determine if
the grain has been copied. If the grain has been copied, the data is read from the
target volume. If the grain has not been copied, the data is read from the source
volume.
You can copy up to 256 target volumes from a single source volume. Each
relationship between a source and target volume is managed by a unique mapping
such that a single volume can be the source volume in up to 256 mappings.
Each of the mappings from a single source can be started and stopped
independently. If multiple mappings from the same source are active (in the
copying or stopping states), a dependency exists between these mappings.
Note: If two such mappings are in the same consistency group and are therefore
started at the same time, the order of dependency is decided internally when the
consistency group is started.
When you create a mapping, you specify a clean rate. The clean rate controls the
rate at which data is copied from the target volume of the mapping to the target
volume of another mapping that is either the latest copy of the target volume or
the next oldest copy of the source volume. The clean rate is used in the following
situations:
v The mapping is in the stopping state
v The mapping is in the copying state and has a copy rate of zero
v The mapping is in the copying state and the background copy has completed
You can use the clean rate to minimize the amount of time that a mapping is in the
stopping state. If the mapping has not completed, the target volume is offline
while the mapping is stopping. The target volume remains offline until the
mapping is restarted.
You also specify a copy rate when you create a mapping. When the mapping is in
the copying state, the copy rate determines the priority that is given to the
background copy process. If you want a copy of the whole source volume so that
the mapping can be deleted and the copy can still be accessed from the target
volume, you must copy all of the data that is on the source volume to the target
volume.
The default value for both the clean rate and the copy rate is 50.
When a mapping is started and the copy rate is greater than zero, the unchanged
data is copied to the target volume, and the bitmap is updated to show that the
copy has occurred. After a time, the length of which depends on the priority that
was determined by the copy rate and the size of the volume, the whole volume is
copied to the target. The mapping returns to the idle_or_copied state and you can
now restart the mapping at any time to create a new copy at the target.
While the mapping is in the copying state, you can set the copy rate to zero and
the clean rate to a value other than zero to minimize the amount of time a
mapping is in the stopping state.
If you use multiple target mappings, the mapping can stay in the copying state
after all of the source data is copied to the target (the progress is 100%). This
situation can occur if mappings that were started earlier and use the same source
disk are not yet 100% copied.
You can stop the mapping at any time after it has been started. Unless the target
volume already contains a complete copy of the source volume, this action makes
the target inconsistent and the target volume is taken offline. The target volume
remains offline until the mapping is restarted.
You can also set the autodelete attribute. If this attribute is set to on, the mapping
is automatically deleted when the mapping reaches the idle_or_copied state and
the progress is 100%.
Notes:
1. If a FlashCopy source volume goes offline, any FlashCopy target volumes that
depend on that volume also go offline.
2. If a FlashCopy target volume goes offline, any FlashCopy target volumes that
depend on that volume also go offline. The source volume remains online.
Before you start the mapping, you must prepare it. Preparing the mapping ensures
that the data in the cache is de-staged to disk and a consistent copy of the source
exists on disk. At this time, the cache goes into write-through mode. Data that is
written to the source is not cached in the SAN Volume Controller nodes; it passes
straight through to the MDisks. The prepare operation for the mapping might take
some time to complete; the actual length of time depends on the size of the source
volume. You must coordinate the prepare operation with the operating system.
Depending on the type of data that is on the source volume, the operating system
or application software might also cache data write operations. You must flush, or
synchronize, the file system and application program before you prepare and start
the mapping.
Note: The startfcmap and startfcconsistgrp commands can take some time to
process.
If you do not want to use consistency groups, the SAN Volume Controller allows a
mapping to be treated as an independent entity. In this case, the mapping is
known as a stand-alone mapping. For mappings that have been configured in this
way, use the prestartfcmap and startfcmap commands rather than the
prestartfcconsistgrp and startfcconsistgrp commands.
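For example, a stand-alone mapping and a consistency group might be prepared and started as follows; the mapping and consistency group names are hypothetical.

   # Prepare and start a stand-alone mapping
   svctask prestartfcmap fcmap0
   svctask startfcmap fcmap0
   # Prepare and start all mappings in a consistency group
   svctask prestartfcconsistgrp fccstgrp0
   svctask startfcconsistgrp fccstgrp0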
You can start a mapping with a target volume that is the source volume of another
active mapping that is in the idle_or_copied, stopped, or copying state. If the
mapping is in the copying state, the restore parameter is required for the
startfcmap and prestartfcmap commands. You can restore the contents of a
FlashCopy source volume by using the target of the same FlashCopy mapping or a
different FlashCopy mapping.
For FlashCopy target volumes, the SAN Volume Controller sets a bit in the inquiry
data for those mapping states where the target volume could be an exact image of
the source volume. Setting this bit enables the Veritas Volume Manager to
distinguish between the source and target volumes and provide independent
access to both.
Attention: The prepare command can corrupt any data that previously
resided on the target volume because cached writes are discarded. Even if
the FlashCopy mapping is never started, the data from the target might
have logically changed during the act of preparing to start the FlashCopy
mapping.
Flush done: The FlashCopy mapping automatically moves from the preparing state to
the prepared state after all cached data for the source is flushed and all
cached data for the target is no longer valid.
Start: To preserve the cross-volume consistency group, the start of all of the
FlashCopy mappings in the consistency group must be synchronized
correctly with respect to I/Os that are directed at the volumes. This is
achieved during the start command.
As part of the start command, read and write caching is enabled for both
the source and target volumes.
Modify: The following FlashCopy mapping properties can be modified (a CLI sketch
follows this list):
v FlashCopy mapping name
v Clean rate
v Consistency group
v Copy rate (for background copy or stopping copy priority)
v Automatic deletion of the mapping when the background copy is complete
Stop: There are two separate mechanisms by which a FlashCopy mapping can
be stopped:
v You have issued a command
v An I/O error has occurred
Delete: This command requests that the specified FlashCopy mapping is deleted.
If the FlashCopy mapping is in the stopped state, the force flag must be
used.
Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy
mapping enters the stopped state.
Copy complete: After all of the source data has been copied to the target and there are no
dependent mappings, the state is set to copied. If the option to
automatically delete the mapping after the background copy completes is
specified, the FlashCopy mapping is automatically deleted. If this option
is not specified, the FlashCopy mapping is not automatically deleted and
can be reactivated by preparing and starting again.
Bitmap online/offline: The node has failed.
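As a sketch of the Modify event in the list above, the modifiable properties are typically changed with the chfcmap command. The mapping name is hypothetical, and the exact parameter names should be verified in the CLI reference.

   # Change the copy rate and clean rate of an existing mapping
   svctask chfcmap -copyrate 60 -cleanrate 40 fcmap0
   # Enable automatic deletion when the background copy completes
   svctask chfcmap -autodelete on fcmap0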
Thin-provisioned FlashCopy
You can have a mix of thin-provisioned and fully allocated volumes in FlashCopy
mappings. One common combination is a fully allocated source with a
thin-provisioned target, which enables the target to consume a smaller amount of
real storage than the source.
Consider the following information when you create your FlashCopy mappings:
v If you are using a fully allocated source with a thin-provisioned target, disable
background copy and cleaning mode on the FlashCopy map by setting both the
background copy rate and cleaning rate to zero (see the sketch after this list).
Otherwise, if these features are enabled, all of the source is copied onto the
target volume, which causes the thin-provisioned volume either to go offline or
to grow as large as the source.
v If you are using a thin-provisioned source, only the space that is used on the
source volume is copied to the target volume. For example, if the source volume
has a virtual size of 800 GB and a real size of 100 GB, of which 50 GB have been
used, only the used 50 GB are copied.
v A FlashCopy bitmap contains one bit for every grain on a volume. For example,
if you have a thin-provisioned volume with 1 TB virtual size (100 MB real
capacity), you must have a FlashCopy bitmap to cover the 1 TB virtual size even
though only 100 MB of real capacity is allocated.
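The first consideration in the list above might be implemented as in the following sketch, where the volume names are hypothetical; setting both rates to zero disables background copy and cleaning for the mapping.

   # Fully allocated source, thin-provisioned target: disable background copy and cleaning
   svctask mkfcmap -source vdisk_full -target vdisk_thin -copyrate 0 -cleanrate 0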
The consistency group is specified when the mapping is created. You can also
change the consistency group later. When you use a consistency group, you
prepare and start that group instead of the individual mappings. This process
ensures that a consistent copy is made of all the source volumes. Mappings that
you want to control at an individual level are known as stand-alone mappings. Do
not place stand-alone mappings into a consistency group, because they then
become controlled as part of that consistency group.
When you copy data from one volume to another, the data might not include all
that you need to use the copy. Many applications have data that spans multiple
volumes and requires that data integrity is preserved across volumes. For example,
the logs for a particular database usually reside on a different volume than the
volume that contains the data.
Consistency groups address the problem of applications having related data that
spans multiple volumes. In this situation, IBM FlashCopy operations must be
performed in a way that preserves data integrity across the multiple volumes. One
requirement for preserving the integrity of data being written is to ensure that
dependent writes are run in the intended sequence of the application.
You can set the autodelete attribute for FlashCopy consistency groups. If this
attribute is set to on, the consistency group is automatically deleted when the last
mapping in the group is deleted or moved out of the consistency group.
Individual FlashCopy mappings that are not in a consistency group have no
consistency-group state.
The following list is a typical sequence of write operations for a database update
transaction:
1. A write operation updates the database log so that it indicates that a database
update is about to take place.
2. A second write operation updates the database.
3. A third write operation updates the database log so that it indicates that the
database update has completed successfully.
The database ensures correct ordering of these writes by waiting for each step to
complete before starting the next. The database log is often placed on a different
volume than the database. In this case, ensure that FlashCopy operations are
performed without changing the order of these write operations. For example,
consider the possibility that the database (update 2) is copied slightly earlier than
the database log (update 1 and 3), which means the copy on the target volume will
contain updates (1) and (3) but not (2). In this case, if the database is restarted
from a backup made from the FlashCopy target disks, the database log indicates
that the transaction has completed successfully when, in fact, it has not. The
transaction is lost and the integrity of the database is compromised.
See the following website for the latest maximum configuration support:
www.ibm.com/storage/support/2145
The grain size is 64 KB or 256 KB. The FlashCopy bitmap contains one bit for each
grain. The bit records whether the associated grain has been split by copying the
grain from the source to the target.
A write to the newest target volume must consider the state of the grain for its
own mapping and the grain of the next oldest mapping.
v If the grain of the intermediate mapping or the next oldest mapping has not
been copied, it must be copied before the write is allowed to proceed. This is
done to preserve the contents of the next oldest mapping. The data written to
the next oldest mapping can come from a target or source.
v If the grain of the target that is being written has not been copied, the grain is
copied from the oldest already copied grain in the mappings that are newer than
the target (or the source if no targets are already copied). After the copy is
complete, the write can be applied to the target.
If the grain that is being read has been split, the read returns data from the target
that is being read. If the read is to an uncopied grain on an intermediate target
volume, each of the newer mappings are examined to determine if the grain has
been split. The read is surfaced from the first split grain found or from the source
volume if none of the newer mappings have a split grain.
If NOCOPY is specified, background copy is disabled. You can specify NOCOPY for
short-lived FlashCopy mappings that are only used for backups, for example.
Because the source data set is not expected to significantly change during the
lifetime of the FlashCopy mapping, it is more efficient in terms of managed disk
(MDisk) I/Os to not perform a background copy.
Note: For the command-line interface (CLI), the value NOCOPY is equivalent to
setting the copy rate to 0 (zero).
Table 21 provides the relationship of the copy and cleaning rate values to the
attempted number of grains to be split per second. A grain is the unit of data
represented by a single bit.
Table 21. Relationship between the rate, data rate, and grains per second values

User-specified rate
attribute value      Data copied/sec    256 KB grains/sec    64 KB grains/sec
1 - 10               128 KB             0.5                  2
11 - 20              256 KB             1                    4
21 - 30              512 KB             2                    8
31 - 40              1 MB               4                    16
The data copied/sec and the grains/sec numbers represent standards that the SAN
Volume Controller tries to achieve. The SAN Volume Controller is unable to
achieve these standards if insufficient bandwidth is available from the nodes to the
physical disks that make up the managed disks (MDisks) after taking into account
the requirements of foreground I/O. If this situation occurs, background copy I/O
contends for resources on an equal basis with I/O that arrives from hosts. Both
tend to see increased latency and a consequent reduction in throughput compared
with a situation in which the bandwidth is not limited. Background copy,
stopping copy, and foreground I/O continue to make forward progress and do not
stop, hang, or cause the node to fail.
The background copy is performed by one of the nodes that belong to the I/O
group in which the source volume resides. This responsibility is moved to the
other node in the I/O group in the event of the failure of the node that performs
the background and stopping copy.
The background copy starts with the grain that contains the highest logical block
numbers (LBAs) and works in reverse towards the grain that contains LBA 0. The
background copy is performed in reverse to avoid any unwanted interactions with
sequential write streams from the application.
The stopping copy operation copies every grain that is split on the stopping map
to the next map (if one exists) which is dependent on that grain. The operation
starts searching with the grain that contains the highest LBAs and works in reverse
towards the grain that contains LBA 0. Only those grains that other maps are
dependent upon are copied.
Cleaning mode
When you create or modify a FlashCopy mapping, you can specify a cleaning rate
for the FlashCopy mapping that is independent of the background copy rate. The
cleaning rates shown in Table 21 on page 78 control the rate at which the cleaning
process operates. The cleaning process copies data from the target volume of a
mapping to the target volumes of other mappings that are dependent on this data.
The cleaning process must complete before the FlashCopy mapping can go to the
stopped state.
Cleaning mode allows you to activate the cleaning process while the FlashCopy
mapping is in the copying state. This keeps your target volume accessible while
the cleaning process is running. When operating in this mode, it is possible that
host I/O operations can prevent the cleaning process from reaching 100%.
Cleaning mode is active if the background copy progress has reached 100% and
the mapping is in the copying state, or if the background copy rate is set to 0.
Although the application only writes to a single volume, the system maintains two
copies of the data. If the copies are separated by a significant distance, the Metro
Mirror and Global Mirror copies can be used as a backup for disaster recovery. A
prerequisite for Metro Mirror and Global Mirror operations between systems is
that the SAN fabric to which they are attached provides adequate bandwidth
between the systems.
For both Metro Mirror and Global Mirror copy types, one volume is designated as
the primary and the other volume is designated as the secondary. Host
applications write data to the primary volume, and updates to the primary volume
are copied to the secondary volume. Normally, host applications do not perform
I/O operations to the secondary volume.
The Metro Mirror feature provides a synchronous-copy process. When a host writes
to the primary volume, it does not receive confirmation of I/O completion until
the write operation has completed for the copy on both the primary volume and
the secondary volume. This ensures that the secondary volume is always
up-to-date with the primary volume in the event that a failover operation must be
performed. However, the host is limited to the latency and bandwidth limitations
of the communication link to the secondary volume.
Global Mirror can operate with or without cycling. When operating without
cycling, write operations are applied to the secondary volume as soon as possible
after they are applied to the primary volume. The secondary volume is generally
less than 1 second behind the primary volume, which minimizes the amount of
data that must be recovered in the event of a failover. This requires that a
high-bandwidth link be provisioned between the two sites, however.
When Global Mirror operates with cycling mode, changes are tracked and, where
needed, copied to intermediate change volumes. Changes are transmitted to the
secondary site periodically. The secondary volumes are much further behind the
primary volume, and more data must be recovered in the event of a failover.
Because the data transfer can be smoothed over a longer time period, however,
lower bandwidth is required to provide an effective solution.
Note: A system can participate in active Metro Mirror and Global Mirror
relationships with itself and up to three other systems.
v Intersystem and intrasystem Metro Mirror and Global Mirror relationships can
be used concurrently within a system.
v The intersystem link is bidirectional. This means that it can copy data from
system A to system B for one pair of volumes while copying data from system B
to system A for a different pair of volumes.
v The copy direction can be reversed for a consistent relationship.
v Consistency groups are supported to manage a group of relationships that must
be kept synchronized for the same application. This also simplifies
administration, because a single command that is issued to the consistency
group is applied to all the relationships in that group.
v SAN Volume Controller supports a maximum of 8192 Metro Mirror and Global
Mirror relationships per clustered system.
Typically, the master volume contains the production copy of the data and is the
volume that the application normally accesses. The auxiliary volume typically
contains a backup copy of the data and is used for disaster recovery.
Global Mirror with cycling also makes use of change volumes, which hold earlier
consistent revisions of data when changes are made. A change volume can be
created for both the master volume and the auxiliary volume of the relationship.
The master and auxiliary volumes are defined when the relationship is created,
and these attributes never change. However, either volume can operate in the
primary or secondary role as necessary. The primary volume contains a valid copy
of the application data and receives updates from the host application, analogous
to a source volume. The secondary volume receives a copy of any updates to the
primary volume, because these updates are all transmitted across the mirror link.
Therefore, the secondary volume is analogous to a continuously updated target
volume. When a relationship is created, the master volume is assigned the role of
primary volume and the auxiliary volume is assigned the role of secondary
volume. Therefore, the initial copying direction is from master to auxiliary. When
the relationship is in a consistent state, you can reverse the copy direction.
The two volumes in a relationship must be the same size. When the two volumes
are in the same system, they must be in the same I/O group.
If change volumes are defined, they must be the same size and in the same I/O
group as the associated master volume or auxiliary volume.
Copy types
A Metro Mirror copy ensures that updates are committed to both the primary and
secondary volumes before sending confirmation of I/O completion to the host
application. This ensures that the secondary volume is synchronized with the
primary volume in the event that a failover operation is performed.
A Global Mirror copy allows the host application to receive confirmation of I/O
completion before the updates are committed to the secondary volume. If a
failover operation is performed, the host application must recover and apply any
updates that were not committed to the secondary volume.
States
When a Metro Mirror or Global Mirror relationship is created with two volumes in
different clustered systems, the distinction between the connected and
disconnected states is important. These states apply to both systems, the
relationships, and the consistency groups. The following relationship states are
possible:
InconsistentStopped
The primary volume is accessible for read and write I/O operations, but
the secondary volume is not accessible for either operation. A copy process
must be started to make the secondary volume consistent.
InconsistentCopying
The primary volume is accessible for read and write I/O operations, but
the secondary volume is not accessible for either operation. This state is
entered after a startrcrelationship command is issued to a consistency
group in the InconsistentStopped state. This state is also entered when a
startrcrelationship command is issued, with the force option, to a
consistency group in the Idling or ConsistentStopped state.
ConsistentStopped
The secondary volume contains a consistent image, but it might be out of
date with respect to the primary volume. This state can occur when a
relationship was in the ConsistentSynchronized state and experiences an
error that forces a freeze of the consistency group. This state can also occur
when a relationship is created with the CreateConsistentFlag parameter set
to TRUE.
ConsistentSynchronized
The primary volume is accessible for read and write I/O operations. The
secondary volume is accessible for read-only I/O operations.
Metro Mirror and Global Mirror relationships manage heavy workloads differently:
v Metro Mirror typically maintains the relationships that are in the copying or
synchronized states, which causes the primary host applications to see degraded
performance.
v Noncycling Global Mirror requires a higher level of write performance to
primary host applications. If the link performance is severely degraded, the link
tolerance feature automatically stops noncycling Global Mirror relationships
when the link tolerance threshold is exceeded. As a result, noncycling Global
Mirror write operations can suffer degraded performance if Metro Mirror
relationships use most of the capability of the intersystem link.
v Multiple-cycling Global Mirror relationships do not degrade performance in
heavy workload situations. Global Mirror relationships instead allow the
secondary volume to trail further behind the primary volume until the workload
has lessened.
You can create new Metro Mirror and Global Mirror partnerships between systems
with different software levels. If the partnerships are between a SAN Volume
Controller version 6.3.0 system and a system that is at 4.3.1, each system can
participate in a single partnership with another system. If the systems are all either
SAN Volume Controller version 5.1.0 or later, each system can participate in up to
three system partnerships. A maximum of four systems are permitted in the same
connected set. A partnership cannot be formed between a SAN Volume Controller
version 6.3.0 and one that is running a version that is earlier than 4.3.1.
Systems also become indirectly associated with each other through partnerships. If
two systems each have a partnership with a third system, those two systems are
indirectly associated. A maximum of four systems can be directly or indirectly
associated.
The nodes within the system must know not only about the relationship between
the two volumes but also about an association among systems.
The following examples show possible partnerships that can be established among
SAN Volume Controller clustered systems.
Figure 20. Four systems in a partnership. System A might be a disaster recovery site.
Figure 21. Three systems in a migration situation. Data Center B is migrating to C. System A
is host production, and System B and System C are disaster recovery.
Figure 22. Systems in a fully connected mesh configuration. Every system has a partnership
to each of the three other systems.
Figure 24. An unsupported system configuration
To establish a Metro Mirror and Global Mirror partnership between two systems,
you must run the mkpartnership command from both systems. For example, to
establish a partnership between system A and system B, you must run the
mkpartnership command from system A and specify system B as the remote
system. At this point the partnership is partially configured and is sometimes
described as one-way communication. Next, you must run the mkpartnership
command from system B and specify system A as the remote system. When this
command completes, the partnership is fully configured for two-way
communication between the systems. You can also use the management GUI to
create Metro Mirror and Global Mirror partnerships.
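For example, a fully configured partnership between two systems might be created as follows. The system names and the bandwidth value are illustrative; the bandwidth parameter is described later in this section.

   # Run on system A, specifying system B as the remote system
   svctask mkpartnership -bandwidth 200 systemB
   # Then run on system B, specifying system A as the remote system
   svctask mkpartnership -bandwidth 200 systemA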
The state of the partnership helps determine whether the partnership operates as
expected. In addition to being fully configured, a system partnership can have the
following states:
Partially Configured
Indicates that only one system partner is defined from a local or remote
system to the displayed system and is started. For the displayed system to
be configured fully and to complete the partnership, you must define the
system partnership from the system that is displayed to the corresponding
local or remote system. You can do this by issuing the mkpartnership
command on the local and remote systems that are in the partnership, or
by using the management GUI to create a partnership on both the local
and remote systems.
Fully Configured
Indicates that the partnership is defined on the local and remote systems
and is started.
Remote Not Present
Indicates that the remote system is not present to the partnership.
Partially Configured (Local Stopped)
Indicates that the local system is only defined to the remote system and the
local system is stopped.
Fully Configured (Local Stopped)
Indicates that a partnership is defined on both the local and remote
systems. The remote system is present, but the local system is stopped.
Fully Configured (Remote Stopped)
Indicates that a partnership is defined on both the local and remote
systems. The remote system is present, but the remote system is stopped.
Fully Configured (Local Excluded)
Indicates that a partnership is defined between a local and remote system;
however, the local system has been excluded. Usually this state occurs
when the fabric link between the two systems has been compromised by
too many fabric errors or slow response times of the system partnership.
To resolve these errors, check the event log for 1720 errors by selecting
Service and Maintenance > Analyze Event Log.
To change Metro Mirror and Global Mirror partnerships, use the chpartnership
command. To delete Metro Mirror and Global Mirror partnerships, use the
rmpartnership command.
Attention: Before you run the rmpartnership command, you must remove all
relationships and groups that are defined between the two systems. To display
system relationships and groups, run the lsrcrelationship and lsrcconsistgrp
commands. To remove the relationships and groups that are defined between the
two systems, run the rmrcrelationship and rmrcconsistgrp commands.
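As an illustration, the removal sequence might look like the following commands, where the relationship, consistency group, and system names are hypothetical.

   # List and then remove the relationships and groups between the two systems
   svcinfo lsrcrelationship
   svcinfo lsrcconsistgrp
   svctask rmrcrelationship rcrel0
   svctask rmrcconsistgrp rccstgrp0
   # After all relationships and groups are removed, delete the partnership
   svctask rmpartnership systemB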
You can control the rate at which the initial background copy from the local system
to the remote system is performed. The bandwidth parameter specifies this rate in
whole megabytes per second.
You can create partnerships with SAN Volume Controller and Storwize V7000
systems to allow Metro Mirror and Global Mirror to operate between the two
systems. To be able to create these partnerships, both clustered systems must be at
version 6.3.0 or later.
Figure 25. Example configuration for replication between SAN Volume Controller and a
Storwize V7000 system
Clustered systems can be configured into partnerships only with other systems in
the same layer. Specifically, this means the following configurations:
v SAN Volume Controller systems can be in partnerships with other SAN Volume
Controller systems.
v With default settings, a Storwize V7000 system can be in a partnership with
other Storwize V7000 systems.
v A Storwize V7000 system can be in a partnership with a SAN Volume Controller
system if the Storwize V7000 system is switched to the replication layer.
v A replication-layer Storwize V7000 system can be in a partnership with another
replication-layer Storwize V7000 system. A replication-layer Storwize V7000
cannot be in a partnership with a storage-layer Storwize V7000 system.
v A Storwize V7000 system can present storage to SAN Volume Controller only if
the Storwize V7000 system is in the storage layer.
To view the current layer of a clustered system, enter the lssystem command-line
interface (CLI) command.
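For example, you might check the layer as follows. The -layer parameter of the chsystem command is shown here as an assumption; it applies to Storwize V7000 systems only, so verify the exact syntax in the CLI reference for your system.

   # Display the system properties, including the current layer
   svcinfo lssystem
   # On a Storwize V7000 system, switch to the replication layer (assumed syntax)
   svctask chsystem -layer replication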
All components in the SAN must be capable of sustaining the workload that is
generated by application hosts and the Global Mirror background copy process. If
all of the components in the SAN cannot sustain the workload, the Global Mirror
relationships are automatically stopped to protect your application hosts from
increased response times.
You can use Fibre Channel extenders or SAN routers to increase the distance
between two systems. Fibre Channel extenders transmit Fibre Channel packets
across long links without changing the contents of the packets. SAN routers
provide virtual nPorts on two or more SANs to extend the scope of the SAN. The
SAN router distributes the traffic from one virtual nPort to the other virtual nPort.
The two Fibre Channel fabrics are independent of each other. Therefore, nPorts on
each of the fabrics cannot directly log in to each other. See the following website
for specific firmware levels and the latest supported hardware:
www.ibm.com/storage/support/2145
If you use Fibre Channel extenders or SAN routers, you must meet the following
requirements:
v For SAN Volume Controller software level 4.1.1 or later, the round-trip latency
between sites cannot exceed 80 ms for either Fibre Channel extenders or SAN
routers.
v The configuration must be tested with the expected peak workloads.
v Metro Mirror and Global Mirror require a specific amount of bandwidth for
intersystem heartbeat traffic. The amount of traffic depends on the number of
nodes that are in both the local system and the remote system. Table 22 lists the
intersystem heartbeat traffic for the primary system and the secondary system.
These numbers represent the total traffic between two systems when there are
no I/O operations running on the copied volumes. Half of the data is sent by
the primary system and half of the data is sent by the secondary system so that
traffic is evenly divided between all of the available intersystem links. If you
have two redundant links, half of the traffic is sent over each link.
Table 22. Intersystem heartbeat traffic in Mbps

                        System 2
System 1     2 nodes    4 nodes    6 nodes    8 nodes
2 nodes      2.6        4.0        5.4        6.7
4 nodes      4.0        5.5        7.1        8.6
There is no limit on the Fibre Channel optical distance between SAN Volume
Controller nodes and host servers. You can attach a server to an edge switch in a
core-edge configuration with the SAN Volume Controller system at the core. SAN
Volume Controller systems support up to three ISL hops in the fabric. Therefore,
the host server and the SAN Volume Controller system can be separated by up to
five Fibre Channel links. If you use longwave small form-factor pluggable (SFP)
transceivers, four of the Fibre Channel links can be up to 10 km long.
In this scenario, the hosts in the local system also exchange heartbeats with the
hosts that are in the remote system. Because the intersystem link is being used for
multiple purposes, you must have sufficient bandwidth to support the following
sources of load:
v Global Mirror or Metro Mirror data transfers and the SAN Volume Controller
system heartbeat traffic.
v Local host to remote volume I/O traffic or remote host to local volume I/O
traffic.
Metro Mirror or Global Mirror relationships can only belong to one consistency
group; however, they do not have to belong to a consistency group. Relationships
that are not part of a consistency group are called stand-alone relationships. A
consistency group can contain zero or more relationships. All relationships in a
consistency group must have matching primary and secondary systems, which are
sometimes referred to as master and auxiliary systems. All relationships in a
consistency group must also have the same copy direction and state.
Metro Mirror and Global Mirror relationships cannot belong to the same
consistency group. A copy type is automatically assigned to a consistency group
when the first relationship is added to the consistency group. After the consistency
group is assigned a copy type, only relationships of that copy type can be added to
the consistency group. Global Mirror relationships with different cycling modes
cannot belong to the same consistency group. The type and direction of the
relationships in a consistency group must be the same.
Metro Mirror and Global Mirror consistency groups can be in one of the following
states.
InconsistentStopped
The primary volumes are accessible for read and write I/O operations, but the
secondary volumes are not accessible for either operation. A copy process must be
started to make the secondary volumes consistent.
Note: Where two management GUI icons are shown for a state, the first icon
indicates the synchronous-copy Metro Mirror state and the second icon indicates
the asynchronous-copy Global Mirror state.
Note: Volume copies are synchronized when their contents are consistent. If write
operations take place on either the primary or secondary volume after a consistent
(stopped) or idling state occurs, they might no longer be synchronized.
The background copy bandwidth can affect foreground I/O latency in one of three
ways:
v If the background copy bandwidth is set too high for the intersystem link
capacity, the following results can occur:
– The intersystem link is not able to process the background copy I/Os fast
enough, and the I/Os can back up (accumulate).
– For Metro Mirror, there is a delay in the synchronous secondary write
operations of foreground I/Os.
– For Global Mirror, the work is backlogged, which delays the processing of
write operations and causes the relationship to stop. For Global Mirror in
multiple-cycling mode, a backlog in the intersystem link can congest the local
fabric and cause delays to data transfers.
– The foreground I/O latency increases as detected by applications.
v If the background copy bandwidth is set too high for the storage at the primary
site, background copy read I/Os overload the primary storage and delay
foreground I/Os.
v If the background copy bandwidth is set too high for the storage at the secondary
site, background copy write operations at the secondary overload the secondary
storage and again delay the synchronous secondary write operations of
foreground I/Os.
– For Global Mirror without cycling mode, the work is backlogged and again
the relationship is stopped.
The provisioning for optimal bandwidth for the background copy can also be
calculated by determining how much background copy can be allowed before
performance of host I/O becomes unacceptable. The background copy bandwidth
can be decreased slightly to accommodate peaks in workload and provide a safety
margin for host I/O.
Example
If the bandwidth setting at the primary site for the secondary clustered system is
set to 200 MBps (megabytes per second) and the relationships are not
synchronized, the SAN Volume Controller attempts to resynchronize the
relationships at a maximum rate of 200 MBps with a 25 MBps restriction for each
individual relationship. The SAN Volume Controller cannot resynchronize the
relationships at this rate if the throughput is restricted. The following factors can
restrict throughput:
v The read response time of back-end storage at the primary system
v The write response time of the back-end storage at the secondary site
v Intersystem link latency
In this scenario, you have the ability to stop I/O operations to the secondary
volume during the migration process.
To stop I/O operations to the secondary volume while migrating a Metro Mirror
relationship to a Global Mirror relationship, you must specify the synchronized
option when you create the Global Mirror relationship. Perform the following steps:
1. Stop all host I/O operations to the primary volume.
2. Verify that the Metro Mirror relationship is consistent.
After the Global Mirror relationship is created, you can start the relationship and
resume host I/O operations.
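A hedged CLI sketch of this scenario follows. The volume, relationship, and system names are hypothetical, and the -sync parameter is assumed to correspond to the synchronized option.

   # With host I/O stopped and the Metro Mirror relationship verified as consistent:
   svctask rmrcrelationship rcrel_mm
   # Re-create the relationship between the same two volumes as Global Mirror, marked synchronized
   svctask mkrcrelationship -master vdisk_m -aux vdisk_a -cluster systemB -global -sync -name rcrel_gm
   # Start the relationship, then resume host I/O operations
   svctask startrcrelationship rcrel_gm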
In this scenario, you do not have the ability to stop I/O operations to the
secondary volume during the migration process.
If I/O operations to the secondary volume cannot be stopped, the data on the
secondary volume becomes out-of-date. When the Global Mirror relationship is
started, the secondary volume is inconsistent until all of the recent updates are
copied to the remote site.
If you do not require a consistent copy of the volume at the secondary site,
perform the following steps to migrate from a Metro Mirror relationship to a
Global Mirror relationship:
Important: The data on the secondary volume is not usable until the
synchronization process is complete. Depending on your link capabilities and the
amount of data that is being copied, this process can take an extended period of
time. You must set the background copy bandwidth for the intersystem
partnerships to a value that does not overload the intersystem link.
1. Delete the Metro Mirror relationship.
2. Create and start the Global Mirror relationship between the same two volumes.
If you require a consistent copy of the volume at the secondary site, perform the
following steps to migrate from a Metro Mirror relationship to a Global Mirror
relationship:
1. Delete the Metro Mirror relationship.
2. Create a Global Mirror relationship between volumes that were not used for the
Metro Mirror relationship. This preserves the existing Metro Mirror secondary
volume so that you can use it if you require a consistent copy at a later time.
Alternatively, you can use the FlashCopy feature to maintain a consistent copy.
Perform the following steps to use the FlashCopy feature to maintain a consistent
copy:
1. Start a FlashCopy operation for the Metro Mirror volume.
2. Wait for the FlashCopy operation to complete.
3. Create and start the Global Mirror relationship between the same two volumes.
The FlashCopy volume is now your consistent copy.
The SVCTools package that is available on the IBM alphaWorks® website provides
an example script that demonstrates how to manage the FlashCopy process. See
the copymanager script that is available in the SVCTools package. You can
download the SVCTools package from the following website:
www.alphaworks.ibm.com/tech/svctools/download
If the poor response extends past the specified tolerance, a 1920 error is logged and
one or more Global Mirror relationships are automatically stopped. This protects
the application hosts at the primary site. During normal operation, application
hosts see a minimal impact to response times because the Global Mirror feature
uses asynchronous replication. However, if Global Mirror operations experience
degraded response times from the secondary system for an extended period of
time, I/O operations begin to queue at the primary system. This results in an
extended response time to application hosts. In this situation, the gmlinktolerance
feature stops Global Mirror relationships and the application hosts response time
returns to normal. After a 1920 error has occurred, the Global Mirror auxiliary
volumes are no longer in the consistent_synchronized state until you fix the cause
of the error and restart your Global Mirror relationships. For this reason, ensure
that you monitor the system to track when this occurs.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to
0 (zero); a command sketch follows the list below. However, the gmlinktolerance
feature cannot protect applications from extended response times while it is
disabled. It might be appropriate to disable the gmlinktolerance feature in the
following circumstances:
v During SAN maintenance windows where degraded performance is expected
from SAN components and application hosts can withstand extended response
times from Global Mirror volumes.
v During periods when application hosts can tolerate extended response times and
it is expected that the gmlinktolerance feature might stop the Global Mirror
relationships. For example, if you are testing using an I/O generator which is
configured to stress the backend storage, the gmlinktolerance feature might
detect the high latency and stop the Global Mirror relationships. Disabling
gmlinktolerance prevents this at the risk of exposing the test host to extended
response times.
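The following sketch assumes that the gmlinktolerance value is changed with the chsystem command; verify the exact parameter name in the CLI reference for your software level.

   # Disable the gmlinktolerance feature
   svctask chsystem -gmlinktolerance 0
   # Restore protection afterward by setting a nonzero value, for example 300 seconds
   svctask chsystem -gmlinktolerance 300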
A 1920 error indicates that one or more of the SAN components are unable to
provide the performance that is required by the application hosts. This can be
temporary (for example, a result of maintenance activity) or permanent (for
example, a result of a hardware failure or unexpected host I/O workload). If you
are experiencing 1920 errors, set up a SAN performance analysis tool, such as the
IBM Tivoli Storage Productivity Center, and make sure that it is correctly
configured and monitoring statistics when the problem occurs. Set your SAN
performance analysis tool to the minimum available statistics collection interval.
For the IBM Tivoli Storage Productivity Center, the minimum interval is five
minutes. If several 1920 errors have occurred, diagnose the cause of the earliest
error first. The following questions can help you determine the cause of the error:
v Was maintenance occurring at the time of the error? This might include
replacing a storage system physical disk, upgrading the firmware of the storage
system, or performing a code upgrade on one of the SAN Volume Controller
systems. You must wait until the maintenance procedure is complete and then
restart the Global Mirror relationships in noncycling mode.
In the host zone, the host systems can identify and address the SAN Volume
Controller nodes. You can have more than one host zone and more than one disk
zone. Unless you are using a dual-core fabric design, the system zone contains all
ports from all SAN Volume Controller nodes in the system. Create one zone for
each host Fibre Channel port. In a disk zone, the SAN Volume Controller nodes
identify the storage systems. Generally, create one zone for each external storage
system. If you are using the Metro Mirror and Global Mirror feature, create a zone
with at least one port from each node in each system; up to four systems are
supported.
Note: Some operating systems cannot tolerate other operating systems in the same
host zone, although you might have more than one host type in the SAN fabric.
For example, you can have a SAN that contains one host that runs on an IBM AIX®
operating system and another host that runs on a Microsoft Windows operating
system.
Configuration details
Storage area network (SAN) configurations that contain SAN Volume Controller
nodes must be configured correctly.
A SAN configuration that contains SAN Volume Controller nodes must follow
configuration rules for the following components:
v Storage systems
v Nodes
v Fibre Channel host bus adapters (HBAs)
Note: If the system has an FC adapter fitted, some host systems can be directly
attached without using a SAN switch. Check the support pages on the product
website for the current details of supported host OS / driver / HBA types.
v Converged network adapters (CNAs)
A path is a logical connection between two Fibre Channel ports. The path can exist
only if both of the two Fibre Channel ports are in the same zone.
A core switch is the switch that contains the SAN Volume Controller ports. Because
most SAN fabric traffic might flow through the system, put the SAN Volume
Controller in the core of the fabric. Some configurations have a core switch that
contains only interswitch links (ISLs) and a storage edge switch that contains the
SAN Volume Controller ports. In this rules summary, a storage edge switch is the
same as a core switch.
You must connect the two production sites by Fibre Channel links. These Fibre
Channel links provide paths for SAN Volume Controller node-to-node
communication as well as for host access to SAN Volume Controller nodes.
SAN Volume Controller split-site system supports two different approaches for
node-to-node intrasystem communication between production sites:
v Attach each SAN Volume Controller node to the Fibre Channel switches in the
local and the remote production sites directly. Thus, all node-to-node traffic can
be done without passing intersite ISLs. This is called a split-site system
configuration without ISLs between SAN Volume Controller nodes.
v Attach each SAN Volume Controller node only to the local Fibre Channel
switches, and configure ISLs between the production sites for SAN Volume
Controller node-to-node traffic. This is referred to as a split-site system
configuration with ISLs between SAN Volume Controller nodes.
Note: SAN Volume Controller version 5.1.0 or later is required for support.
See the Split-site system configuration using interswitch links and Split-site system
configuration without using interswitch links topics for the rules that apply to
these types of configurations.
SAN Volume Controller supports any SAN fabric configuration that is supported
by the SAN vendors.
Host connectivity:
v Paths between hosts and SAN Volume Controller can support up to three ISL
hops between hosts and SAN Volume Controller nodes.
v SAN Volume Controller supports SAN routing technologies (including FCIP
links) between the SAN Volume Controller and hosts. The use of long-distance
FCIP connections, however, might degrade the performance of any servers that
are attached through this technology.
v Hosts can be connected to the Storwize V7000 Fibre Channel ports directly or
through a SAN fabric.
Intersystem connectivity:
v SAN Volume Controller supports SAN routing technology (including FCIP links)
for intersystem connections that use Metro Mirror or Global Mirror.
Zoning rules
Notes:
1. Apply these rules to each fabric that contains SAN Volume Controller ports.
2. If the edge devices contain more stringent zoning requirements, follow the
storage system rules to further restrict the SAN Volume Controller zoning rules.
For example, IBM System Storage DS4000® does not support a storage system
A and storage system B in the same zone.
Host zoning:
v SAN Volume Controller requires single-initiator zoning for all large
configurations that contain more than 64 host objects. Each server Fibre Channel
port must be in its own zone, which contains the Fibre Channel port and SAN
Volume Controller ports.
A storage-system logical unit (LU) must not be shared between the SAN Volume
Controller and a host.
You can configure certain storage systems to safely share resources between the
SAN Volume Controller system and direct-attached hosts. This type of
configuration is described as a split storage system. In all cases, it is critical that
you configure the storage system and SAN so that the SAN Volume Controller
system cannot access logical units (LUs) that a host or another SAN Volume
Controller system can also access. This split storage system configuration can be
arranged by storage system logical unit number (LUN) mapping and masking. If
the split storage system configuration is not guaranteed, data corruption can occur.
When a storage system is detected on the SAN, the SAN Volume Controller
attempts to recognize it using its Inquiry data. If the device is not supported, the
SAN Volume Controller configures the device as a generic device. A generic device
might not function correctly when it is addressed by a SAN Volume Controller
system, especially under failure scenarios. However, the SAN Volume Controller
system does not regard accessing a generic device as an error condition and does
not log an error. Managed disks (MDisks) that are presented by generic devices are
not eligible to be used as quorum disks.
The SAN Volume Controller system is configured to manage LUs that are exported
only by RAID storage systems. Non-RAID storage systems are not supported. If
you are using SAN Volume Controller to manage solid-state drive (SSD) or other
JBOD (just a bunch of disks) LUs that are presented by non-RAID storage systems,
the SAN Volume Controller system itself does not provide RAID functions.
Consequently these LUs are exposed to data loss in the event of a disk failure.
If a single RAID storage system presents multiple LUs, either by having multiple
RAID configured or by partitioning one or more RAID into multiple LUs, each LU
Note: A connection coming from a host can be either a Fibre Channel or an iSCSI
connection.
Figure 27. Storage system shared between SAN Volume Controller node and a host
It is also possible to split a host so that it accesses some of its LUNs through the
SAN Volume Controller system and some directly. In this case, the multipathing
software that is used by the storage system must be compatible with the SAN
Volume Controller multipathing software. Figure 28 on page 110 shows a
supported configuration.
Figure 28. IBM System Storage DS8000 LUs accessed directly with a SAN Volume Controller
node
In the case where the RAID storage system uses multipathing software that is
compatible with SAN Volume Controller multipathing software (see Figure 29 on
page 111), it is possible to configure a system where some LUNs are mapped
directly to the host and others are accessed through the SAN Volume Controller.
An IBM TotalStorage Enterprise Storage Server® (ESS) that uses the same
multipathing driver as a SAN Volume Controller node is one example. Another
example with IBM System Storage DS5000 is shown in Figure 29 on page 111.
Figure 29. IBM DS5000 direct connection with a SAN Volume Controller node on one host
The SAN Volume Controller system must be configured to export volumes only to
host Fibre Channel ports that are on the list of supported HBAs. See the Support
for SAN Volume Controller (2145) website for specific firmware levels and the
latest supported hardware:
www.ibm.com/storage/support/2145
Operation with other HBAs is not supported. You can attach hosts to the SAN
Volume Controller Fibre Channel ports directly or through a SAN fabric. For
specific HBA-supported connection methods, see www.ibm.com/storage/support/
2145.
The SAN Volume Controller system does not specify the number of host Fibre
Channel ports or HBAs that a host or a partition of a host can have. The number
of host Fibre Channel ports or HBAs is specified by the host multipathing device
driver. The SAN Volume Controller system supports this number; however, it is
subject to the configuration rules for SAN Volume Controller. To obtain optimal
performance and to prevent overloading, the workload to each SAN Volume
Controller port must be equal. You can achieve an even workload by zoning
approximately the same number of host Fibre Channel ports to each SAN Volume
Controller Fibre Channel port.
The following Linux distributions are supported by SAN Volume Controller for
FCoE attachment:
v Red Hat Enterprise Linux
v SuSe Linux Enterprise Server
For current interoperability information about supported software levels, see the
following website:
www.ibm.com/storage/support/2145
Ensure that all hosts running the Linux operating system use the correct host bus
adapters (HBAs) and host software.
For current interoperability information about HBAs and platform levels, see this
website:
www.ibm.com/storage/support/2145
Ensure that all hosts have the correct HBA device drivers and firmware levels.
www.ibm.com/storage/support/2145
The SAN Volume Controller system must be configured to export volumes only to
host CNAs that are on the list of supported CNAs. See the Support for SAN
Volume Controller (2145) website for specific firmware levels and the latest
supported hardware:
www.ibm.com/storage/support/2145
The SAN Volume Controller system does not specify the number of host CNA
ports or CNAs that a host or a partition of a host can have. The number of host
CNA ports or CNAs is specified by the host multipathing device driver. The SAN
Volume Controller system supports this number; however, it is subject to the
configuration rules for SAN Volume Controller. To obtain optimal performance and
to prevent overloading, the workload to each SAN Volume Controller port must be
equal. You can achieve an even workload by zoning approximately the same
number of host CNA ports to each SAN Volume Controller Fibre Channel port or
FCoE port.
The SAN Volume Controller supports configurations that use N-port virtualization
in the converged network adapter, host bus adapter, or SAN switch (FC/FCF).
You can attach the SAN Volume Controller to Internet Small Computer System Interface (iSCSI) hosts by using the Ethernet ports of the SAN Volume Controller.
Note: SAN Volume Controller supports SAN devices that bridge iSCSI connections
into a Fibre Channel network.
iSCSI connections route from hosts to the SAN Volume Controller over the LAN.
You must follow the SAN Volume Controller configuration rules for iSCSI host
connections:
v SAN Volume Controller supports up to 256 iSCSI sessions per node
v SAN Volume Controller currently supports one iSCSI connection per session
v SAN Volume Controller port limits are now shared between Fibre Channel
WWPNs and iSCSI names
SAN Volume Controller nodes have two or four Ethernet ports. These ports are
either for 1 Gbps support or 10 Gbps support, depending on the model.
For each Ethernet port, a maximum of one IPv4 address and one IPv6 address can
be designated for iSCSI I/O.
iSCSI hosts connect to the SAN Volume Controller through the node-port IP
address. If the node fails, the address becomes unavailable and the host loses
communication with SAN Volume Controller. To allow hosts to maintain access to
data, the node-port IP addresses for the failed node are transferred to the partner
node in the I/O group. The partner node handles requests for both its own node-port IP addresses and the node-port IP addresses of the failed node. This
process is known as node-port IP failover. In addition to node-port IP addresses,
the iSCSI name and iSCSI alias for the failed node are also transferred to the
partner node. After the failed node recovers, the node-port IP address and the
iSCSI name and alias are returned to the original node.
iSCSI IP requirements: Node iSCSI IP addresses are used for host iSCSI I/O
access to volumes. Node iSCSI IP addresses are also used to access a remote
Internet Storage Name Service (iSNS) server, if configured.
v For each node Ethernet port, a maximum of one IPv4 address and one IPv6
address can be designated for iSCSI I/O. This is in addition to any system
addresses configured on the port.
v Each node Ethernet port can be configured on the same subnet with the same
gateway, or you can have each Ethernet port on separate subnets and use
different gateways.
v If configuring a system to use node Ethernet ports 1 and 2 for iSCSI I/O, ensure
that the overall configuration also meets the system IP requirements listed
above.
v To ensure iSCSI IP failover operations, nodes in the same I/O group must be
connected to the same set of subnets on the same node ports. However, you can
configure node Ethernet ports in different I/O groups to use different subnets
and different gateways.
v IP addresses configured for system management and service access must not be
used for iSCSI I/O.
Common IP requirements:
v Every IP address must be unique within the system and within the networks the
system is attached to.
v If node Ethernet ports are connected to different isolated networks, then a
different subnet must be used for each network.
A SAN Volume Controller volume can be mapped in the same way to a Fibre Channel host, to an iSCSI host, or to both.
For the latest maximum configuration support information, see the IBM System
Storage SAN Volume Controller website:
www.ibm.com/storage/support/2145
A clustered Ethernet port consists of one Ethernet port from each node in the
clustered system that is connected to the same Ethernet switch. Ethernet
configuration commands can be used for clustered Ethernet ports or node Ethernet
ports. SAN Volume Controller systems can be configured with redundant Ethernet
networks.
To assign an IP address to each node Ethernet port for iSCSI I/O, use the
cfgportip command. The MTU parameter of this command specifies the maximum transmission unit (MTU); a larger MTU can improve iSCSI performance.
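For example, the following command assigns an iSCSI IPv4 address to Ethernet port 2 of a node; the node name, addresses, and MTU value are illustrative only:
svctask cfgportip -node node1 -ip 192.168.70.121 -mask 255.255.255.0 -gw 192.168.70.1 -mtu 1500 2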
Attention: With the iSCSI initiator, you can set two passwords: one for discovery and another for iSCSI session I/O. However, SAN Volume Controller requires that both passwords for each type of authentication be the same: two identical passwords for one-way CHAP, and two identical passwords for two-way CHAP that are different from the one-way CHAP passwords.
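For illustration, assuming a host object named linuxhost1, a one-way CHAP secret can be set on the host object and a system-wide CHAP secret can be set for two-way authentication; the secret values shown are examples only:
svctask chhost -chapsecret oneWaySecret linuxhost1
svctask chsystem -iscsiauthmethod chap -chapsecret twoWaySecret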
When using an iSCSI connection, you must consider the iSCSI protocol limitations:
v There is no SLP support for discovery.
v Header and data digest support is provided only if the initiator is configured to
negotiate.
v Only one connection per session is supported.
v A maximum of 256 iSCSI sessions per SAN Volume Controller iSCSI target is
supported.
v Only ErrorRecoveryLevel 0 (session restart) is supported.
v The behavior of a host that supports both Fibre Channel and iSCSI connections
and accesses a single volume can be unpredictable and depends on the
multipathing software.
SAN Volume Controller 2145-8F2 nodes contain two 2-port host bus adapters
(HBAs). If one HBA fails, the node operates in degraded mode. If an HBA is
physically removed, the configuration is not supported.
SAN Volume Controller 2145-CG8, SAN Volume Controller 2145-CF8, SAN Volume
Controller 2145-8F4, SAN Volume Controller 2145-8G4, and SAN Volume
Controller 2145-8A4 nodes contain one 4-port HBA.
SAN Volume Controller 2145-CG8 contains one additional 2-port Fibre Channel
over Ethernet (FCoE) converged network adapter (CNA).
Volumes
Each node presents a volume to the SAN through four Fibre Channel ports or two
FCoE ports. Each volume is accessible from the two nodes in an I/O group. Each
HBA port can recognize up to eight paths to each logical unit (LU) that is
presented by the clustered system. The hosts must run a multipathing device
driver before the multiple paths can resolve to a single device. You can use fabric
zoning to reduce the number of paths to a volume that are visible by the host.
The number of paths through the network from an I/O group to a host must not
exceed eight; configurations that exceed eight paths are not supported. Each node
has four 8 Gbps Fibre Channel ports and two 10 Gbps FCoE ports, and each I/O group
has two nodes. Therefore, without any zoning, the number of paths to a volume is
12 multiplied by the number of host ports.
SAN Volume Controller supports more than four Fibre Channel and FCoE ports
per node with the following restrictions:
v Systems with a combined total of more than four Fibre Channel and FCoE ports
on a node must be running version 6.4.0 or later.
v A system with a total of more than four FC and FCoE ports cannot establish a
remote copy partnership to any other system running a version earlier than
6.4.0.
v A system running 6.4.0 or later that has a remote copy partnership to another
system that is running an earlier version cannot add another node with a
combined total of more than four FC and FCoE ports. Activating additional
ports by enabling FCoE or installing new hardware on existing nodes in the
system is also not allowed.
To resolve these limitations, you must upgrade the software on the remote system
to 6.4.0 or later or disable the additional hardware by using the chnodehw -legacy
CLI command.
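For example, to deactivate the additional ports so that a node can interoperate with a remote system that is running version 6.3.0, a command of the following form can be used; the node name is illustrative:
svctask chnodehw -legacy 6.3.0 node5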
Optical connections
Valid optical connections are based on the fabric rules that the manufacturers
impose for the following connection methods:
v Host to a switch
v Back end to a switch
v Interswitch links (ISLs)
Optical fiber connections can be used between a node and its switches.
Systems that use the intersystem Metro Mirror and Global Mirror feature can use
optical fiber connections between the switches, or they can use distance-extender
technology that is supported by the switch manufacturer.
Ethernet connection
To avoid communication between nodes that are being routed across interswitch
links (ISLs), connect all SAN Volume Controller nodes to the same Fibre Channel
or FCF switches.
No ISL hops are permitted among the SAN Volume Controller nodes within the
same I/O group. However, one ISL hop is permitted among SAN Volume
Controller nodes that are in the same system but in different I/O groups. If your configuration requires more than one ISL, contact your IBM service representative.
To avoid communication between nodes and storage systems that are being routed
across ISLs, connect all storage systems to the same Fibre Channel or FCF switches
as the SAN Volume Controller nodes. One ISL hop between the SAN Volume
Controller nodes and the storage systems is permitted. If your configuration
requires more than one ISL, contact your IBM service representative.
In larger configurations, it is common to have ISLs between host systems and the
SAN Volume Controller nodes.
Port speed
The Fibre Channel ports on SAN Volume Controller 2145-CF8 and SAN Volume
Controller 2145-CG8 nodes can operate at 2 Gbps, 4 Gbps, or 8 Gbps. The FCoE
ports on SAN Volume Controller 2145-CG8 nodes can operate at 10 Gbps. The
Fibre Channel ports on SAN Volume Controller 2145-8F4, SAN Volume Controller
2145-8G4 and SAN Volume Controller 2145-8A4 nodes can operate at 1 Gbps, 2
Gbps, or 4 Gbps. The Fibre Channel and FCoE ports on all these node types
autonegotiate the link speed that is used with the FC switch. The ports normally
operate at the maximum speed that is supported by both the SAN Volume
Controller port and the switch. However, if a large number of link errors occur, the
ports might operate at a lower speed than what could be supported.
Optional solid-state drives (SSDs) provide high-speed MDisk capability for SAN
Volume Controller 2145-CF8 and SAN Volume Controller 2145-CG8 nodes. Each
node supports up to four SSDs. SSDs are local drives that are not accessible over
the SAN fabric.
Note: These details do not apply to solid-state drive (SSD) storage within
SAN-attached storage systems such as the IBM System Storage DS8000. In these
situations, you can use either MDisks in a high-performance storage pool or the
Easy Tier function to configure your storage.
Follow these SAN Volume Controller SSD configuration details for nodes, I/O
groups, and systems:
v Nodes that contain SSDs can coexist in a single SAN Volume Controller system
with any other supported nodes.
v Quorum functionality is not supported on SSDs within SAN Volume Controller
nodes.
The following SAN Volume Controller SSD configuration details are recommended practices.
For optimal performance, use only SSDs from a single I/O group in a single
storage pool.
Volumes:
The following details are not recommended but are similar to SSD configuration
processes from an earlier release.
Note: If required, you can create more than one array and storage pool per
node.
Volumes:
v Volumes must be mirrored in one of the following two ways:
For Fibre Channel connections, the SAN Volume Controller nodes must always be
connected to SAN switches only. Each node must be connected to each of the
counterpart SANs that are in the redundant fabric. Any Fibre Channel
configuration that uses a direct physical connection between a host and a SAN
Volume Controller node is not supported. When attaching iSCSI hosts to SAN
Volume Controller nodes, Ethernet switches must be used.
All back-end storage systems must always be connected to SAN switches only.
Multiple connections are permitted from redundant storage systems to improve
data bandwidth performance. A connection between each redundant storage
system and each counterpart SAN is not required. For example, in an IBM System Storage DS4000 configuration in which the IBM DS4000 contains two redundant controllers, each controller can be connected to only one of the counterpart SANs.
When you attach a node to a SAN fabric that contains core directors and edge
switches, connect the node ports to the core directors and connect the host ports to
the edge switches. In this type of fabric, the next priority for connection to the core
directors is the storage systems, leaving the host ports connected to the edge
switches.
A SAN Volume Controller SAN must follow all switch manufacturer configuration
rules, which might place restrictions on the configuration. Any configuration that
does not follow switch manufacturer configuration rules is not supported.
Within an individual SAN fabric, only mix switches from different vendors if the
configuration is supported by the switch vendors. When using this option for FCF switch to FC switch connectivity, review and plan as documented in
“ISL oversubscription” on page 124.
With ISLs between nodes in the same system, the inter-switch links (ISLs) are
considered a single point of failure. Figure 30 illustrates this example.
If ISL 1 or ISL 2 fails, the communication between Node A and Node B fails for a
period of time, and the node is not recognized, even though there is still a
connection between the nodes.
To ensure that a Fibre Channel link failure does not cause nodes to fail when there
are ISLs between nodes, it is necessary to use a redundant configuration.
With a redundant configuration, if any one of the links fails, communication on the
system does not fail.
FCoE servers and SAN Volume Controller systems can be connected in several
different ways. The following examples show the various supported
configurations.
Figure 32. Fibre Channel forwarder linked to existing Fibre Channel SAN
The second example, Figure 33, is almost the same as the first example but without
an existing Fibre Channel SAN. It shows a SAN Volume Controller system
connected to a Fibre Channel forwarder switch along with any FCoE hosts and
FCoE storage systems. The connections are 10 Gbps Ethernet.
Figure 33. Fibre Channel forwarder linked to hosts and storage systems without an existing Fibre Channel SAN
In the third example, Figure 34 on page 124, a Fibre Channel host connects into the
Fibre Channel ports on the Fibre Channel forwarder. The SAN Volume Controller
system is connected to a Fibre Channel forwarder switch along with any FCoE
storage systems. The connections are 10 Gbps Ethernet. The Fibre Channel forwarder
is linked to the existing Fibre Channel SAN by using Fibre Channel ISLs. Any
Fibre Channel hosts or storage systems remain on the existing Fibre Channel SAN.
The FCoE host connects to a 10 Gbps Ethernet switch (transit switch) that is
connected to the Fibre Channel forwarder.
Figure 34. Fibre Channel host connects into the Fibre Channel ports on the Fibre Channel forwarder
The fourth example, Figure 35, is almost the same as the previous example but
without an existing Fibre Channel SAN. The Fibre Channel hosts connect to Fibre
Channel ports on the Fibre Channel forwarder.
Figure 35. Fibre Channel host connects into the Fibre Channel ports on the Fibre Channel forwarder without an existing Fibre Channel SAN.
ISL oversubscription
Perform a thorough SAN design analysis to avoid ISL congestion. Do not configure
the SAN to use SAN Volume Controller to SAN Volume Controller traffic or SAN
Volume Controller to storage system traffic across ISLs that are oversubscribed. For
host to SAN Volume Controller traffic, do not use an ISL oversubscription ratio
that is greater than 7 to 1. Congestion on the ISLs can result in severe SAN Volume
Controller performance degradation and I/O errors on the host.
Note: The SAN Volume Controller port speed is not used in the oversubscription
calculation.
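As an illustration, if 14 host ports, each running at 8 Gbps, reach the SAN Volume Controller nodes through a single 8 Gbps ISL, the ISL oversubscription ratio is 14 to 1, which exceeds the limit; trunking a second 8 Gbps ISL with the first reduces the ratio to 7 to 1.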
You can use director class switches within the SAN to connect large numbers of
RAID controllers and hosts to a SAN Volume Controller system. Because director
class switches provide internal redundancy, one director class switch can replace a
SAN that uses multiple switches. However, the director class switch provides only
network redundancy; it does not protect against physical damage (for example,
flood or fire), which might destroy the entire function. A tiered network of smaller
switches or a core-edge topology with multiple switches in the core can provide
comprehensive redundancy and more protection against physical damage for a
network in a wide area. Do not use a single director class switch to provide more
than one counterpart SAN because this does not constitute true redundancy.
Figure 36 illustrates a small SAN configuration. Two Fibre Channel switches are
used to provide redundancy. Each host system, SAN Volume Controller node, and
storage system is connected to both Fibre Channel switches.
Figure 37 on page 126 illustrates a medium-sized fabric that consists of three Fibre
Channel switches. These switches are interconnected with interswitch links (ISLs).
For redundancy, use two fabrics, with each host system, SAN Volume Controller node, and storage system connected to both fabrics. Since a clustered storage ITE
may have SCEs in different chassis and require the use of ISL to connect the
chassis, ISLs may be used between nodes in the same cluster. The example fabric
attaches the SAN Volume Controller nodes and the storage systems to the core
switch. There are no ISL hops between SAN Volume Controller nodes or between
nodes and the storage systems.
Figure 38 illustrates a large fabric that consists of two core Fibre Channel switches
and edge switches that are interconnected with ISLs. For redundancy, use two fabrics, with each host system, SAN Volume Controller node, and storage system connected to both fabrics. Both fabrics attach the SAN Volume Controller nodes to the core switches and distribute the storage systems between the two core switches.
This ensures that no ISL hops exist between SAN Volume Controller nodes or
between nodes and the storage systems.
Figure 39 on page 127 illustrates a fabric where the host systems are located at two
different sites. A long-wave optical link is used to interconnect switches at the
different sites. For redundancy, use two fabrics and at least two separate
long-distance links. If a large number of host systems are at the remote site, use
ISL trunking to increase the available bandwidth between the two sites.
To provide protection against failures that affect an entire location, such as a power
failure, you can use a configuration that splits a single system across three physical
locations.
Figure 40. A split-site system with a quorum disk located at a third site
In Figure 40, the storage system that hosts the third-site quorum disk is attached
directly to a switch at both the primary and secondary sites using longwave Fibre
Channel connections. If either the primary site or the secondary site fails, you must
ensure that the remaining site has retained direct access to the storage system that
hosts the quorum disks.
A split-site configuration is supported only when the storage system that hosts the
quorum disks supports extended quorum. Although SAN Volume Controller can
use other types of storage systems for providing quorum disks, access to these
quorum disks is always through a single path.
For quorum disk configuration requirements, see the Guidance for Identifying and
Changing Managed Disks Assigned as Quorum Disk Candidates technote at the
following website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
The SAN Volume Controller split-site system cannot guarantee that it can operate
after the failure of two failure domains. If this situation occurs, use Metro Mirror
or Global Mirror on a second SAN Volume Controller system for extended disaster
recovery. You configure and manage Metro Mirror or Global Mirror partnerships
that include a split-site system in the same way as other remote copy relationships.
SAN Volume Controller supports SAN routing technology (including FCIP links)
for intersystem connections that use Metro Mirror or Global Mirror.
The partner SAN Volume Controller split-site system must not be located in a
production site of the SAN Volume Controller split-site system. However, it can be
collocated with the storage system that provides the active quorum disk for the
split-site system.
Configure the SAN Volume Controller split-site system that does not include interswitch links (ISLs) according to the following rules:
v The minimal SAN configuration consists of one Fibre Channel switch per production site as two separate fabrics. For highest reliability, two switches per production site are recommended. Single-fabric configurations are not supported for split I/O group systems.
v As with every SAN Volume Controller clustered system, you can use ISLs for host-to-node access (with up to 3 hops) or for node-to-storage access (at most 1 hop). However, configure the SAN zones so that ISLs are not used in paths between SAN Volume Controller nodes.
v Attach two ports of each SAN Volume Controller node to the Fibre Channel switches in the production site where the node resides.
v Attach the remaining two ports of each SAN Volume Controller node to the Fibre Channel switches in the other production site.
v Connect each storage system at the production sites to Fibre Channel switches in the site where the storage system resides.
v Connect the storage system with the active quorum disks to Fibre Channel switches in both production sites.
v To avoid fabric topology changes in case of IP errors, it is a good practice to configure FCIP links so that they do not carry ISLs.
It is strictly required that the links from both production sites to the quorum site are independent and do not share any long-distance equipment.
Note: You do not need to UPS-protect FCIP routers or active WDM devices that are used only for the node-to-quorum communication.
A split-site configuration is supported only when the storage system that hosts the
quorum disks supports extended quorum. Although SAN Volume Controller can
use other types of storage systems for providing quorum disks, access to these
quorum disks is always through a single path.
However, the SAN Volume Controller system does not guarantee that it can
survive the failure of two sites.
v For every storage system, create one zone that contains SAN Volume Controller ports from every node and all storage system ports, unless otherwise stated by the zoning guidelines for that storage system. However, do not connect a storage system in one site directly to a switch fabric in the other site. Instead, connect each storage system only to switched fabrics in the local site. (In split-site system configurations with ISLs in node-to-node paths, these fabrics belong to the public SAN.)
v Each SAN Volume Controller node must have two direct Fibre Channel
connections to one or more SAN fabrics at both locations that contain SAN
Volume Controller nodes.
v Ethernet port 1 on every SAN Volume Controller node must be connected to the
same subnet or subnets. Ethernet port 2 (if used) of every node must be connected to the same subnet (this may be a different subnet from port 1). The same principle applies to other Ethernet ports.
v A SAN Volume Controller node must be located in the same rack as the 2145 UPS or 2145 UPS-1U that supplies its power.
v You can have powered components between the SAN Volume Controller and the switches in a split-site configuration. For example, you can use powered dense
wavelength division multiplexing (DWDM) Fibre Channel extenders.
v You might be required to provide and replace longwave SFP transceivers.
Each SAN consists of at least one fabric that spans both production sites. At least one fabric of the public SAN also includes the quorum site. You can configure private and public SANs by using different approaches:
v Use dedicated Fibre Channel switches for each SAN
To implement private and public SANs with dedicated switches, any combination of supported switches can be used. For the list of supported switches and for supported switch partitioning and virtual fabric options, see the SAN Volume Controller interoperability website:
http://www.ibm.com/storage/support/2145
As with every managed disk, all SAN Volume Controller nodes need access to the quorum disk through the same storage system ports. If a storage system with active/passive controllers (such as IBM DS3000, DS4000, and DS5000, or IBM FAStT) is attached to a fabric, then the storage system must be connected with both internal controllers to this fabric. This is illustrated in Figure 41 on page 133.
Figure 41. Split-site system nodes with ISLs, DS3000, DS4000, and DS5000 connected to both fabrics
By using FCIP, passive WDM, or active WDM for quorum site connectivity, you can extend the distance to the quorum site. The connections must be reliable. It is strictly required that the links from both production sites to the quorum site are independent and do not share any long-distance equipment.
Note: It is not required to UPS-protect FCIP routers or active WDM devices that are used only for the node-to-quorum communication.
A split-site configuration is supported only when the storage system that hosts the
quorum disks supports extended quorum. Although SAN Volume Controller can
use other types of storage systems for providing quorum disks, access to these
quorum disks is always through a single path.
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
For example, a SAN Volume Controller split-site system (no matter how many nodes) with a total of 4 gigabits per second of split-site system bandwidth on the private SANs is a valid configuration as long as the peak host-write I/O workload does not exceed 200 megabytes per second.
A system can have only one active quorum disk used for a tie-break situation.
However the system uses three quorum disks to record a backup of system
configuration data to be used in the event of a disaster. The system automatically
selects one active quorum disk from these three disks. The active quorum disk can
be specified by using the chquorum command-line interface (CLI) command with
the active parameter. To view the current quorum disk status, use the lsquorum
command. In the management GUI, select Pools > MDisks by Pools or Pools >
External Storage.
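For example, the following commands list the quorum disk candidates and then make the candidate with quorum index 2 the active quorum disk; the index is illustrative only:
svcinfo lsquorum
svctask chquorum -active 2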
The other quorum disk candidates provide redundancy if the active quorum disk
fails before a system is partitioned. To avoid the possibility of losing all the
quorum disk candidates with a single failure, assign quorum disk candidates on
multiple storage systems.
When you change the managed disks that are assigned as quorum candidate disks,
follow these general guidelines:
v When possible, aim to distribute the quorum candidate disks so that each MDisk
is provided by a different storage system. For information about which storage
systems are supported for quorum disk use, refer to the supported hardware list.
v Before you change a quorum candidate disk, ensure that the status of the
managed disk that is being assigned as a quorum candidate disk is reported as
online and that it has a capacity of 512 MB or larger.
v When you are using a split-site configuration, use the override yes parameter
because this parameter disables the mechanism that moves quorum disks when
they become degraded.
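For example, in a split-site configuration you might explicitly assign quorum index 0 to an MDisk at the third site and disable the automatic quorum relocation; the MDisk ID and index are illustrative only:
svctask chquorum -override yes -mdisk 7 0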
To provide protection against failures that affect an entire location (for example, a
power failure), you can use volume mirroring with a configuration that splits a
single clustered system between two physical locations. For further information,
see the split-site configuration information. For detailed guidance about split-site
configuration for high-availability purposes, contact your IBM regional advanced
technical specialist.
Generally, when the nodes in a system have been split among sites, configure the
system this way:
v Site 1: Half of system nodes + one quorum disk candidate
v Site 2: Half of system nodes + one quorum disk candidate
v Site 3: Active quorum disk
This configuration ensures that a quorum disk is always available, even after a
single-site failure.
The following scenarios describe examples that result in changes to the active
quorum disk:
v Scenario 1:
1. Site 3 is either powered off or connectivity to the site is broken.
2. The system selects a quorum disk candidate at site 2 to become the active
quorum disk.
3. Site 3 is either powered on or connectivity to the site is restored.
4. Assuming that the system was correctly configured initially, SAN Volume
Controller automatically recovers the configuration when the power is
restored.
v Scenario 2:
1. The storage system that is hosting the preferred quorum disk at site 3 is
removed from the configuration.
2. If possible, the system automatically configures a new quorum disk
candidate at site 1 or 2.
3. The system selects a quorum disk candidate at site 1 or 2 to become the
active quorum disk.
4. A new storage system is added to site 3.
Fibre Channel over IP (FCIP) routers can be used for quorum disk connections
under the following circumstances:
v The FCIP router device is supported for SAN Volume Controller remote
mirroring (Metro Mirror or Global Mirror).
v The maximum round-trip delay must not exceed 80 ms, which means 40 ms
each direction.
v A minimum bandwidth of 2 megabytes per second is guaranteed for
node-to-quorum traffic.
Note:
1. To avoid fabric topology changes in case of IP errors, it is a good practice to
configure FCIP links so that they do not carry ISLs.
2. Connections using iSCSI are not supported.
Passive wavelength division multiplexing (WDM) devices can be used for quorum
disk connections. These connections rely on SFP transceivers with different
wavelengths (referred to as colored SFP transceivers) for fiber sharing. The
following requirements apply when using these type of connections:
v The WDM vendor must support the colored SFP transceivers for usage in the
WDM device.
v The Fibre Channel switch vendor must support the colored SFP transceivers for
ISL.
v IBM supports the WDM device for SAN Volume Controller Metro Mirror or
Global Mirror.
v The SFP transceivers must comply with the SFP/SFP+ power and heat
specifications.
Note: To purchase colored SFP transceivers for passive WDM, contact your WDM
vendor.
The maximum distance between the system and host or the system and the storage
system is 300 m for shortwave optical connections and 10 km for longwave optical
connections. Longer distances are supported between systems that use the
intersystem Metro Mirror or Global Mirror feature.
When you use longwave optical fiber connections, follow these guidelines:
Note: Do not split system operation over a long optical distance; otherwise, you
can use only asymmetric disaster recovery with substantially reduced performance.
Instead, use two system configurations for all production disaster-recovery
systems.
FlashCopy: 5 TB of incremental FlashCopy source volume capacity
Volume mirroring: 20 MB of bitmap space by default, 512 MB maximum, providing 40 TB of mirrored volumes
RAID: 80 TB of array capacity in a three-disk RAID 5 array
The following tables describe the amount of bitmap space necessary to configure
the various copy services functions and RAID.
Before you specify the configuration changes, consider the following factors.
v For FlashCopy relationships, only the source volume allocates space in the
bitmap table.
v For Metro Mirror or Global Mirror relationships, two bitmaps exist. One is used
for the master clustered system and one is used for the auxiliary system because
the direction of the relationship can be reversed.
v The smallest possible bitmap is 4 KB; therefore, a 512 byte volume requires 4 KB
of bitmap space.
To manage the bitmap memory from the management GUI, select the I/O group in
Home > System Status and then select the Manage tab. You can also use the
lsiogrp and chiogrp command-line interface (CLI) commands to modify the
settings.
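For example, the following commands display the current settings for I/O group 0 and then increase the bitmap memory that is available to volume mirroring; the feature keyword and the size shown are illustrative:
svcinfo lsiogrp 0
svctask chiogrp -feature mirror -size 40 0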
Zoning details
Ensure that you are familiar with these zoning details. These details explain zoning
for external storage system zones and host zones. More details are included in the
SAN configuration, zoning, and split-site system rules summary.
Paths to hosts
The number of paths through the network from the SAN Volume Controller nodes
to a host must not exceed eight. Configurations in which this number is exceeded
are not supported.
v Each CG8 Node (In SAN Volume Controller, this is model 300) has four FC ports
and two FCoE ports, and each I/O Group has two nodes. Therefore, with no
zoning in a dual-SAN environment, the number of paths to a volume is six
multiplied by the number of host ports. For other nodes with no FCoE ports,
with no zoning in a dual-SAN environment, the number of paths to a volume is
four multiplied by the number of host ports.
v This rule exists to limit the number of paths that must be resolved by the
multipathing device driver.
If you want to restrict the number of paths to a host, zone the switches so that
each host bus adapter (HBA) port is zoned with one SAN Volume Controller port
for each node in the clustered system. If a host has multiple HBA ports, zone each
port to a different set of SAN Volume Controller ports to maximize performance
and redundancy. This also applies to a host with a Converged Network Adapter
(CNA) card.
Switch zones that contain storage system ports must not have more than 40 ports.
A configuration that exceeds 40 ports is not supported.
The switch fabric must be zoned so that the SAN Volume Controller nodes can
detect the back-end storage systems and the front-end host HBAs. Typically, the
front-end host HBAs and the back-end storage systems are not in the same zone.
The exception to this is where split host and split storage system configuration is
in use.
All nodes in a system must be able to detect the same ports on each back-end
storage system. Operation in a mode where two nodes detect a different set of
ports on the same storage system is degraded, and the system logs errors that
request a repair action. This can occur if inappropriate zoning is applied to the
fabric or if inappropriate LUN masking is used. This rule has important
implications for back-end storage, such as IBM DS4000 storage systems, which
impose exclusive rules for mappings between HBA worldwide node names
(WWNNs) and storage partitions.
Each SAN Volume Controller port must be zoned so that it can be used for
internode communications. When configuring switch zoning, you can zone some
SAN Volume Controller node ports to a host or to back-end storage systems.
When configuring zones for communication between nodes in the same system,
the minimum configuration requires that all Fibre Channel ports on a node detect
at least one Fibre Channel port on each other node in the same system. You cannot
reduce the configuration in this environment.
It is critical that you configure storage systems and the SAN so that a system
cannot access logical units (LUs) that a host or another system can also access. You
can achieve this configuration with storage system logical unit number (LUN)
mapping and masking.
If a node can detect a storage system through multiple paths, use zoning to restrict
communication to those paths that do not travel over ISLs.
With Metro Mirror and Global Mirror configurations, additional zones are required
that contain only the local nodes and the remote nodes. It is valid for the local
hosts to see the remote nodes or for the remote hosts to see the local nodes. Any
zone that contains the local and the remote back-end storage systems and local
nodes or remote nodes, or both, is not valid.
For systems that are running SAN Volume Controller version 5.1 or later: For
best results in Metro Mirror and Global Mirror configurations, zone each node so
that it can communicate with at least one Fibre Channel port on each node in each
remote system. This configuration maintains redundancy of the fault tolerance of
port and node failures within local and remote systems. For communications
between multiple SAN Volume Controller version 5.1 systems, this also achieves
optimal performance from the nodes and the intersystem links.
The minimum configuration requirement is to zone both nodes in one I/O group
to both nodes in one I/O group at the secondary site. The I/O group maintains
fault tolerance of a node or port failure at either the local or remote site location. It
does not matter which I/O groups at either site are zoned because I/O traffic can
be routed through other nodes to get to the destination. However, if an I/O group
that is doing the routing contains the nodes that are servicing the host I/O, there is
no additional burden or latency for those I/O groups because the I/O group nodes
are directly connected to the remote system.
For systems that are running SAN Volume Controller version 4.3.1 or earlier: The
minimum configuration requirement is that all nodes must detect at least one Fibre
Channel port on each node in the remote system. You cannot reduce the
configuration in this environment.
In configurations with a version 5.1 system that is partnered with a system that is
running a SAN Volume Controller version 4.3.1 or earlier, the minimum
configuration requirements of the version 4.3.1 or earlier system apply.
If only a subset of the I/O groups within a system are using Metro Mirror and
Global Mirror, you can restrict the zoning so that only those nodes can
communicate with nodes in remote systems. You can have nodes that are not
members of any system zoned to detect all the systems. You can then add a node
to the system in case you must replace a node.
Host zones
The configuration rules for host zones are different depending upon the number of
hosts that will access the system. For configurations of fewer than 64 hosts per
system, SAN Volume Controller supports a simple set of zoning rules that enable a
small set of host zones to be created for different environments. For configurations
of more than 64 hosts per system, SAN Volume Controller supports a more
restrictive set of host zoning rules. These rules apply for both Fibre Channel (FC)
and Fibre Channel over Ethernet (FCoE) connectivity.
To obtain the best overall performance of the system and to prevent overloading,
the workload to each SAN Volume Controller port must be equal. This can
typically involve zoning approximately the same number of host Fibre Channel
ports to each SAN Volume Controller Fibre Channel port.
For systems with fewer than 64 hosts attached, zones that contain host HBAs must
contain no more than 40 initiators including the SAN Volume Controller ports that
act as initiators. A configuration that exceeds 40 initiators is not supported. A valid
zone can be 32 host ports plus 8 SAN Volume Controller ports. When it is possible,
place each HBA port in a host that connects to a node into a separate zone. Include
exactly one port from each node in the I/O groups that are associated with this
host. This type of host zoning is not mandatory, but is preferred for smaller
configurations.
Note: If the switch vendor recommends fewer ports per zone for a particular SAN,
the rules that are imposed by the vendor take precedence over SAN Volume
Controller rules.
To obtain the best performance from a host with multiple Fibre Channel ports, the
zoning must ensure that each Fibre Channel port of a host is zoned with a
different group of SAN Volume Controller ports.
Each HBA port must be in a separate zone and each zone must contain exactly one
port from each SAN Volume Controller node in each I/O group that the host
accesses.
Note: A host can be associated with more than one I/O group and therefore access
volumes from different I/O groups in a SAN. However, this reduces the maximum
number of hosts that can be used in the SAN. For example, if the same host uses
volumes in two different I/O groups, this consumes one of the 256 hosts in each
I/O group. If each host accesses volumes in every I/O group, there can be only
256 hosts in the configuration.
Zoning examples
These zoning examples describe ways of zoning a switch. In the examples, a list of port names inside brackets ([]) represents a single zone whose zone members are the ports listed.
Example 1
Figure 43. An example of a storage system zone
8. Follow the same steps 5 on page 144 through 7 to create the following list of
zones for switch Y:
One zone per host port:
[A2, B2, P1]
[A3, B3, Q1]
Storage zone:
Example 2
The following example describes a SAN environment that is like the previous
example except for the addition of four hosts that have two ports each.
v Two nodes called A and B
v Nodes A and B have four ports each
– Node A has ports A0, A1, A2, and A3
– Node B has ports B0, B1, B2, and B3
v Six hosts called P, Q, R, S, T, and U
v Four hosts have four ports each and the other two hosts have two ports each as
described in Table 30.
Table 30. Six hosts and their ports
P Q R S T U
P0 Q0 R0 S0 T0 U0
P1 Q1 R1 S1 T1 U1
P2 Q2 R2 S2
P3 Q3 R3 S3
Attention: Hosts T and U (T0 and U0) and (T1 and U1) are zoned to different
SAN Volume Controller ports so that each SAN Volume Controller port is
zoned to the same number of host ports.
6. Create one storage zone per storage system on switch X:
[A0, A1, B0, B1, I0, I1]
[A0, A1, B0, B1, J0]
[A0, A1, B0, B1, K0, K1, K2, K3]
7. Create one internode zone on switch X:
[A0, A1, B0, B1]
8. Follow the same steps 5 on page 147 through 7 to create the following list of
zones for switch Y:
One zone per host port:
[A2, B2, P2]
[A3, B3, P3]
[A2, B2, Q2]
[A3, B3, Q3]
[A2, B2, R2]
[A3, B3, R3]
[A2, B2, S2]
[A3, B3, S3]
[A2, B2, T1]
[A3, B3, U1]
Storage zone:
[A2, A3, B2, B3, I2, I3]
[A2, A3, B2, B3, J1]
[A2, A3, B2, B3, K4, K5, K6, K7]
One internode zone:
[A2, A3, B2, B3]
SAN configurations that use intrasystem Metro Mirror and Global Mirror
relationships do not require additional switch zones.
For intersystem Metro Mirror and Global Mirror relationships, you must perform
the following steps to create the additional zones that are required:
1. Configure your SAN so that Fibre Channel traffic can be passed between the
two clustered systems. To configure the SAN this way, you can connect the
systems to the same SAN, merge the SANs, or use routing technologies.
2. Optional: Configure zoning to enable all nodes in the local fabric to
communicate with all nodes in the remote fabric.
Note: For systems that are running SAN Volume Controller version 4.3 or
later: For Metro Mirror and Global Mirror configurations, zone two Fibre Channel ports on each node in the local system to two Fibre Channel ports on
each node in the remote system. If dual-redundant fabrics are available, zone
one port from each node across on each fabric to provide the greatest fault
tolerance. For each system, two ports on each node should have no remote
zones, only local zones.
Note: If you are using McData Eclipse routers, model 1620, only 64 port pairs
are supported, regardless of the number of iFCP links that are used.
3. Optional: As an alternative to step 2, choose a subset of nodes in the local
system to be zoned to the nodes in the remote system. Minimally, you must
ensure that one whole I/O group in the local system has connectivity to one
whole I/O group in the remote system. I/O between the nodes in each system
is then routed to find a path that is permitted by the configured zoning.
Reducing the number of nodes that are zoned together can reduce the
complexity of the intersystem zoning and might reduce the cost of the routing
hardware that is required for large installations. Reducing the number of nodes
also means that I/O must make extra hops between the nodes in the system,
which increases the load on the intermediate nodes and can increase the
performance impact; in particular, for Metro Mirror.
4. Optional: Modify the zoning so that the hosts that are visible to the local
system can recognize the remote system. This enables a host to examine data in
both the local and remote system.
5. Verify that system A cannot recognize any of the back-end storage that is
owned by system B. A system cannot access logical units (LUs) that a host or
another system can also access.
If you are setting up long-distance links, consult the documentation from your
switch vendor to ensure that you set them up correctly.
The first phase to create a system is performed from the front panel of the SAN
Volume Controller. The second phase is performed from a web browser by
accessing the management GUI.
To access the CLI, you must use the PuTTY client to generate Secure Shell (SSH)
key pairs that secure data flow between the SAN Volume Controller system
configuration node and a client.
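As a minimal sketch, assuming the command-line versions of the PuTTY tools, you might generate a key pair, export the public key (which must then be associated with a SAN Volume Controller user), and open a CLI session; the user name, file names, and IP address are examples only:
puttygen -t rsa -b 2048 -o svcadmin.ppk
puttygen svcadmin.ppk -O public-openssh -o svcadmin.pub
plink -i svcadmin.ppk admin@192.168.70.120 svcinfo lssystem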
Before you create a system, ensure that all SAN Volume Controller nodes are
correctly installed, cabled, and powered on.
When you create the system, you must specify either an IPv4 or an IPv6 system
address for port 1. After the system is created, you can specify additional IP
addresses for port 1 and port 2 until both ports have an IPv4 address and an IPv6
address.
If you choose to have the IBM service representative or IBM Business Partner
initially create the system, you must provide the following information before
configuring the system:
v For a system with an IPv4 address:
– Management IPv4 address
– Subnet mask
– Gateway IPv4 address
v For a system with an IPv6 address:
– Management IPv6 address
– IPv6 prefix
– Gateway IPv6 address
Define these addresses on the Configuration Data Table planning chart, which is
used when installing a clustered system.
Attention: The management IPv4 address and the IPv6 address must not be the
same as the address of any other device that is accessible on the network.
In the following figure, bold lines indicate the select button was pressed. Lighter
lines indicate the navigational path (up or down and left or right). The circled X
indicates that if the select button is pressed, an action occurs using the data that is
entered.
Use the front panel and follow these steps to create and configure the system:
Procedure
1. Choose a node that you want to make a member of the system that you are
creating.
Note: You add nodes using a different process after you have successfully
created and initialized the system.
2. Press and release the up or down button until Action? is displayed.
3. Press and release the select button.
4. Depending on whether you are creating a system with an IPv4 address or an
IPv6 address, press and release the up or down button until either New Cluster
IP4? or New Cluster IP6? is displayed.
5. Press and release the select button.
6. Press and release the left or right button until either IP4 Address: or IP6
Address: is displayed.
Figure 45. New Cluster IP4? and New Cluster IP6? options on the front-panel display
The following steps provide the information to complete the task for creating a
system with an IPv4 address.
1. You might need to press the select button to enter edit mode. The first IPv4
address number is shown.
2. Press the up button if you want to increase the value that is displayed; press
the down button if you want to decrease that value. If you want to quickly
increase the highlighted value, hold the up button. If you want to quickly
decrease the highlighted value, hold the down button.
Note: To change the address scrolling speed, see the note at the end of this
topic.
3. Press the right or left buttons to move to the number field that you want to
update. Use the right button to move to the next field and use the up or down
button to change the value of this field.
4. Repeat step 3 for each of the remaining fields of the IPv4 Address.
5. After you have changed the last field of the IPv4 Address, press the select
button to leave edit mode. Press the right button to move to the next stage.
IP4 Subnet: is displayed.
6. Press the select button to enter edit mode.
7. Use the up or down button to increase or decrease the value of the first field
of the IPv4 Subnet to the value that you have chosen.
8. Use the right button to move to the next field and use the up or down buttons
to change the value of this field.
9. Repeat step 8 for each of the remaining fields of the IPv4 Subnet.
10. After you have changed the last field of IPv4 Subnet, press the select button to
leave edit mode. Press the right button to move to the next stage.
11. Press the select button to enter edit mode. Press the right button. IP4 Gateway:
is displayed.
12. Press the up button if you want to increase the value that is displayed; press
the down button if you want to decrease that value. If you want to quickly
increase the highlighted value, hold the up button. If you want to quickly
decrease the highlighted value, hold the down button.
13. Use the right button to move to the next field and use the up or down button
to change the value of this field.
14. Repeat step 13 for each of the remaining fields of the IPv4 Gateway.
15. Press and release the right button until Confirm Create? is displayed.
After you complete this task, the following information is displayed on the service
display screen:
v Cluster: is displayed on line 1.
v A temporary, system-assigned clustered system name that is based on the IP
address is displayed on line 2.
Note: To disable the fast increase and decrease address scrolling speed function
using the front panel, press and hold the down arrow button, press and release the
select button, and then release the down arrow button. The disabling of the fast
increase and decrease function lasts until system creation is completed or until the
feature is enabled again. If you press and hold the up or down arrow button while
the function is disabled, the value increases or decreases once every 2 seconds. To
enable the fast increase and decrease function again, press and hold the up arrow
button, press and release the select button, and then release the up arrow button.
What to do next
After you have created the clustered system on the front panel with the correct IP
address format, you can finish the system configuration by accessing the
management GUI, completing the creation of the system, and adding nodes to the
system.
Before you access the management GUI, you must ensure that your web browser is
supported and has the appropriate settings enabled.
To access the management GUI, point your supported browser to the management
IP address.
www.ibm.com/storage/support/2145
For settings requirements, see the information about checking your web browser
settings for the management GUI.
The following steps provide the information to complete the task for creating a
system with an IPv6 address:
1. You might need to press the select button to enter edit mode. The first IPv6
address number is shown.
2. Press the up button if you want to increase the value that is displayed; press
the down button if you want to decrease that value. If you want to quickly
increase the highlighted value, hold the up button. If you want to quickly
decrease the highlighted value, hold the down button.
Note: To change the address scrolling speed, see the note at the end of this
topic.
3. Press the right button or left button to move to the number field that you
want to update. Use the right button to move to the next field and use the up
or down button to change the value of this field.
4. Repeat step 3 for each of the remaining fields of the IPv6 Address.
5. After you have changed the last field of the IPv6 Address, press the select
button to leave edit mode. Press the right button to move to the next stage.
IP6 Prefix: is displayed.
6. Press the select button to enter edit mode.
7. Use the up or down button to increase or decrease the value of the first field
of the IPv6 Prefix to the value that you have chosen.
8. Use the right button to move to the next field and use the up or down button
to change the value of this field.
9. Repeat step 8 for each of the remaining fields of the IPv6 Prefix.
10. After you have changed the last field of IPv6 Prefix, press the select button to
leave edit mode. Press the right button to move to the next stage.
11. Press the select button to enter edit mode. Press the right button. IP6 Gateway:
is displayed.
12. Use the up or down button to quickly increase or decrease the value of the
first field of the IPv6 Gateway to the value that you have chosen.
13. Use the right button to move to the next field and use the up or down button
to change the value of this field.
14. Repeat step 13 for each of the remaining fields of the IPv6 Gateway.
15. Press and release the right button until Confirm Create? is displayed.
16. Press the select button to complete this task.
After you complete this task, the following information is displayed on the service
display screen:
v Cluster: is displayed on line 1.
v A temporary, system-assigned clustered system name that is based on the IP
address is displayed on line 2.
Note: To disable the fast increase and decrease address scrolling speed function
using the front panel, press and hold the down arrow button, press and release the
select button, and then release the down arrow button. The disabling of the fast
increase and decrease function lasts until system creation is completed or until the
feature is enabled again. If you press and hold the up or down arrow button while
the function is disabled, the value increases or decreases once every 2 seconds. To
enable the fast increase and decrease function again, press and hold the up arrow
button, press and release the select button, and then release the up arrow button.
After you have created the clustered system on the front panel with the correct IP
address format, you can finish the system configuration by accessing the
management GUI, completing the creation of the system, and adding nodes to the
system.
Before you access the management GUI, you must ensure that your web browser is
supported and has the appropriate settings enabled.
To access the management GUI, point your supported browser to the management
IP address.
www.ibm.com/storage/support/2145
For settings requirements, see the information about checking your web browser
settings for the management GUI.
www.ibm.com/storage/support/2145
Firmware and software for the SAN Volume Controller and its attached adapters
are tested and released as a single package. The package number increases each
time a new release is made.
Some code levels support upgrades only from specific previous levels, or the code
can be installed only on certain hardware types. If you upgrade to more than one
level above your current level, you might be required to install an intermediate
level. For example, if you are upgrading from level 1 to level 3, you might need to
install level 2 before you can install level 3. For information about the prerequisites
for each code level, see the website:
www.ibm.com/storage/support/2145
Note: Once the system software upgrade has completed, the Fibre Channel over
Ethernet (FCoE) functionality can be enabled on each node by following the fix
procedures for these events using the management GUI. Note that the FCoE
activation procedure involves a node reboot and it is therefore recommended to
allow time for host multipathing to recover between activation of different nodes
in the same I/O group.
During the automatic upgrade process, each node in a system is upgraded one at a
time, and the new code is staged on the nodes. While each node restarts, there
might be some degradation in the maximum I/O rate that can be sustained by the
system. After all the nodes in the system are successfully restarted with the new
code level, the new level is automatically committed.
The upgrade can normally be performed concurrently with normal user I/O
operations. However, there is a possibility that performance could be impacted. If
any restrictions apply to the operations that can be performed during the upgrade,
these restrictions are documented on the SAN Volume Controller website that you
use to download the upgrade packages. During the upgrade procedure, the
majority of configuration commands are not available. Only the following SAN
Volume Controller commands are operational from the time the upgrade process
starts to the time that the new code level is committed, or until the process has
been backed out:
v All information commands
v The rmnode command
To determine when your upgrade process has completed, you are notified through
the management GUI. If you are using the command-line interface, issue the
lssoftwareupgradestatus command to display the status of the upgrade.
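For example, from a command-line session you can check the progress periodically:
lssoftwareupgradestatus
The command returns a status field that indicates whether an upgrade is currently in
progress; the exact status values that are reported depend on the code level.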
Because of the operational limitations that occur during the upgrade process, the
code upgrade is a user task.
Multipathing driver
Before you upgrade, ensure that the multipathing driver is fully redundant, with
every path available and online. During the upgrade you might see errors that are
related to paths failing over and the error count increasing. When the paths to an
upgraded node are restored, the system returns to a fully redundant state. After a
30-minute delay, the next node is upgraded and its paths go down in turn.
If you are using the IBM Subsystem Device Driver Path Control Module (SDDPCM) as
the multipathing software on the host, use the pcmpath query device or pcmpath
query adapter commands to monitor the state of the multipathing software; these
commands might display increased I/O error counts during the upgrade.
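For example, you might capture a baseline of the path states before the upgrade
starts and then repeat the same queries while each node restarts (the device and
adapter numbers in the output depend on your host configuration):
pcmpath query device
pcmpath query adapter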
Note: To increase the amount of time between the upgrades of the two nodes that
contain volume copies, and to prevent the copies from going offline during the
upgrade process, consider manually upgrading the code.
Table 33 defines the synchronization rates.
Table 33. Resynchronization rates of volume copies
Synchronization rate setting          Data copied per second
1-10 128 KB
11-20 256 KB
21-30 512 KB
31-40 1 MB
41-50 2 MB
51-60 4 MB
61-70 8 MB
71-80 16 MB
81-90 32 MB
91-100 64 MB
Attention: To increase the amount of time between the two nodes going
offline during the upgrade process, consider manually upgrading the code.
You can create new Metro Mirror and Global Mirror partnerships between systems
with different software levels. If the partnerships are between a SAN Volume
Controller version 6.3.0 system and a system that is at 4.3.1, each system can
participate in a single partnership with another system. If the systems are all at
SAN Volume Controller version 5.1.0 or later, each system can participate in up to
three system partnerships. A maximum of four systems are permitted in the same
connected set. A partnership cannot be formed between a SAN Volume Controller
version 6.3.0 and one that is running a version that is earlier than 4.3.1.
With SAN Volume Controller version 6.4.0 or later, support for four Fibre Channel
and two Fibre Channel over Ethernet (FCoE) ports has been enabled. If a clustered
system contains these software versions, it will not be possible to establish a
remote copy partnership with another system running a software version earlier
than 6.4.0. If a system running 6.4.0 or later has an existing remote copy
partnership with another system running an earlier software version, you will not
be able to add a node with a combined total of more than four Fibre Channel and
FCoE ports. You also will not be able to activate additional ports (either by
enabling FCoE or installing new hardware) on existing nodes in the system. To
resolve these problems, you have two options:
v Upgrade the software on the remote system to 6.4.0 or later, or
v Use the chnodehw -legacy CLI command to disable the additional hardware on
nodes in the system that has the 6.4.0 or later software version installed
The -legacy parameter of the chnodehw CLI command controls activating and
deactivating the FCoE ports. The command has the following format:
chnodehw -legacy software_level node_name | node_id
Where software_level indicates the level of software that the node must interoperate
with. If the value is less than 6.4.0, the node configures its hardware to
support a maximum of four Fibre Channel/FCoE ports. node_name |
node_id (required) specifies the node to be modified. The variable that follows the
parameter is either:
v The node name that you assigned when you added the node to the system
v The node ID that is assigned to the node (not the worldwide node name)
With support for six ports (four Fibre Channel and two FCoE ports) on each node
with 6.4.0 code, there are rules that govern how to set up a partnership with a
pre-6.4.0 system.
v A 6.4.0 system cannot form a partnership with a pre-6.4.0 system with more than
4 FC/FCoE I/O ports enabled.
For example, consider a multi-cluster partnership configuration between three
systems, A, B, and C:
A <-> B <-> C
System A has pre-6.4.0 software installed, and systems B and C have 6.4.0 installed.
Remote copy services are possible in this configuration only if system B does
not have FCoE ports enabled.
The partnership between systems A and B is not affected by activated
FCoE ports on nodes in system C.
v If a 6.4.0 system has an already established partnership with a pre-6.4.0 system
and if additional hardware (four Fibre Channel and two FCoE ports) is enabled
while the partnership is stopped, then the partnership cannot be started again
until the remote system has been upgraded or the extra hardware is disabled
using the chnodehw -legacy command.
v A node with a legacy hardware configuration (including a system that has been
upgraded from 6.3.0 to 6.4.0 that has 10Gb Ethernet adapters) will generate
event logs indicating that new hardware (the FCoE function) is available and
should be enabled with the chnodehw command. If you want to continue to
operate remote copy partnerships with systems running older levels of software,
you will need to leave this event log unfixed.
When a node is added to a system, the system checks for (started) partnerships
and determines the lowest software level of the partnered systems. This software
level is passed to the node that is being added to the system. The node performs
the equivalent of a chnodehw -legacy software_level command as it joins the system.
If you are upgrading a system from a release previous to version 6.4.0 on systems
that contain 10 Gbps Ethernet cards, the upgrade process shows an alert event.
Each node logs this alert event with error code 1199, Detected hardware needs
activation.
www.ibm.com/storage/support/2145
The code is installed directly on the SAN Volume Controller system. System code
upgrades can only be performed in a strict order. The rules for upgrading from
any given version to the latest version are also provided on the website.
This procedure is for upgrading from SAN Volume Controller version 6.1.0 or later.
To upgrade from version 5.1.x or earlier, see the relevant information center or
publications that are available at this website:
www.ibm.com/storage/support/2145
Before you upgrade your software, review the conceptual information in the topic
Upgrading the system to understand how the upgrade process works. Allow
adequate time, in some cases up to a week, to check for potential problems or known
issues. Use the Software Upgrade Test Utility to help you find these
problems. You can download the most current version of this tool at the following
problems. You can download the most current version of this tool at the following
website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000585
If you want to upgrade without host I/O, shut down all hosts before you start the
upgrade.
When you are ready to upgrade, click Settings > General > Upgrade Software in
the management GUI and follow the instructions.
Monitor the upgrade information in the management GUI to determine when the
upgrade is complete.
Note: The drive upgrade procedure is currently available only by using the CLI.
Procedure
1. Run the following command for the drive that you are upgrading.
lsdependentvdisks -drive drive_id
If any volumes are returned, continuing with this procedure takes the volumes
offline. To avoid losing access to data, resolve any redundancy errors to remove
this problem before you continue with the upgrade procedure.
2. Locate the firmware upgrade file at the following website:
www.ibm.com/storage/support/2145
This website also provides a link to the Software Upgrade Test Utility. This
utility indicates if any of your drives are not running at the latest level of
firmware.
3. Using scp or pscp, copy the firmware upgrade file and the Software Upgrade
Test Utility package to the /home/admin/upgrade directory by using the
management IP address.
4. Run the applydrivesoftware command. You must specify the firmware
upgrade file, the firmware type, and the drive ID:
applydrivesoftware -file name -type firmware -drive drive_id
Attention: Do not use the -type fpga option, which upgrades Field
Programmable Gate Array (FPGA) firmware, unless directed to do so by an
IBM service representative.
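For example, assume a hypothetical firmware package file named drive_firmware.pkg,
a management IP address of 192.168.1.50, and that you authenticate as the superuser
account. You might copy the package to the system with pscp (or scp) and then apply
it to drive ID 3, after listing the drive IDs with the lsdrive command:
pscp drive_firmware.pkg superuser@192.168.1.50:/home/admin/upgrade
lsdrive
applydrivesoftware -file drive_firmware.pkg -type firmware -drive 3
All of the file names, addresses, and IDs in this example are illustrative; substitute
the values for your own environment.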
During this manual procedure, the upgrade is prepared, you remove a node from
the system, upgrade the code on the node, and return the node to the system. You
repeat this process for each remaining node, and you upgrade the configuration
node last.
Prerequisites
Before you begin to upgrade nodes manually, ensure that the following
requirements are met:
v The system software must be at version 6.1.0 or higher. To manually upgrade
from version 4.3.1.1 or 5.1.x software, see the User-paced Software Upgrade
Procedure - Errata that is included with the IBM System Storage SAN Volume
Controller Software Installation and Configuration Guide at this website:
www.ibm.com/storage/support/2145
v The latest SAN Volume Controller upgrade package has been downloaded to
your management workstation.
v Each I/O group has two nodes.
v Errors in the system event log are addressed and marked as fixed.
v There are no volumes, MDisks, or storage systems with Degraded or Offline
status.
v The service assistant IP address is configured on every node in the system.
v The system superuser password is known.
v The SAN Volume Controller configuration has been backed up and saved.
v The latest version of the SAN Volume Controller Software Upgrade Test Utility
is downloaded, is installed, and has been run to verify that there are no issues
with the current system environment. You can download the most current
version of this tool at the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000585
v You have physical access to the hardware.
If you want to upgrade without host I/O, shut down all hosts before you start the
upgrade.
What to do next
Procedure
After you verify that the prerequisites for a manual upgrade are met, follow these
steps:
1. Use the management GUI to display the nodes in the system and record this
information. For all the nodes in the system, verify the following information:
v Confirm that all nodes are online.
v Record the name of the configuration node. This node must be upgraded
last.
v Record the names and I/O groups that are assigned to each node.
v Record the service IP address for each node.
2. If you are using the management GUI, view the External Storage panel to
ensure that everything is online and also verify that internal storage is present.
3. If you are using the command-line interface, issue this command for each
storage system:
lscontroller controller_name_or_controller_id
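For example, to list all storage systems and then view the details of one of them
(the ID 0 is illustrative):
lscontroller
lscontroller 0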
What to do next
Next: “Upgrading all nodes except the configuration node” on page 166
Procedure
Note: When the configuration node is removed from the system, the SSH
connection to the system closes.
4. Open a web browser and type http://service_assistant_ip in the address
field. The service assistant IP address is the IP address for the service assistant
on the node that was just deleted.
5. On the service assistant home page, click Exit service state and press Go. Use
the management GUI to add the node to the system. The node will then be
upgraded before joining the system and will remain in the adding state for
some time.
This action automatically upgrades the code on this last node, which was the
configuration node.
What to do next
Procedure
1. Verify that the system is running at the correct software version and that no
other errors in the system need to be resolved.
To verify the new version number for the software in the management GUI,
select Monitoring > System. The software version is listed under the graphical
representation of the system. Check for new alerts in the Monitoring > Events
panel.
2. Verify that all the nodes are online. In the management GUI, select Monitoring
> System. Ensure that all nodes are present and online.
Results
These procedures are nondisruptive because changes to your SAN environment are
not required. The replacement (new) node uses the same worldwide node name
(WWNN) as the node that you are replacing. An alternative to this procedure is to
replace nodes disruptively either by moving volumes to a new I/O group or by
rezoning the SAN. The disruptive procedures, however, require additional work on
the hosts.
Note: For nodes that contain solid-state drives (SSDs): if the existing SSDs are
being moved to the new node, the new node must contain the necessary
serial-attached SCSI (SAS) adapter to support SSDs.
v All nodes that are configured in the system are present and online.
v All errors in the system event log are addressed and marked as fixed.
v There are no volumes, managed disks (MDisks), or external storage systems
with a status of degraded or offline.
v The replacement node is not powered on.
v The replacement node is not connected to the SAN.
v You have a 2145 UPS-1U unit (feature code 8115) for each new SAN Volume
Controller 2145-CG8, SAN Volume Controller 2145-CF8, or SAN Volume
Controller 2145-8A4 node.
v You have backed up the system configuration and saved the
svc.config.backup.xml file.
v The replacement node must be able to operate at the Fibre Channel or Ethernet
connection speed of the node it is replacing.
v If the node being replaced contains solid-state drives (SSDs), transfer all SSDs
and SAS adapters to the new node if it supports the drives. To prevent losing
access to the data, if the new node does not support the existing SSDs, transfer
the data from the SSDs before replacing the node.
Important:
Tip: You can change the WWNN of the node you are replacing to the factory
default WWNN of the replacement node to ensure that the number is unique.
5. The node ID and possibly the node name change during this task. After the
system assigns the node ID, the ID cannot be changed. However, you can
change the node name after this task is complete.
Procedure
1. (If the system software version is at 5.1 or later, complete this step.)
Confirm that no hosts have dependencies on the node.
When shutting down a node that is part of a system or when deleting the
node from a system, you can use either the management GUI or a
command-line interface (CLI) command. In the management GUI, select
Monitoring > System > Manage. Click Show Dependent Volumes to display
all the volumes that are dependent on a node. You can also use the node
parameter with the lsdependentvdisks CLI command to view dependent
volumes.
If dependent volumes exist, determine if the volumes are being used. If the
volumes are being used, either restore the redundant configuration or suspend
the host application. If a dependent quorum disk is reported, repair the access
to the quorum disk or modify the quorum disk configuration.
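For example, to list the volumes that depend on a node named node1 (the node
name is illustrative), issue:
lsdependentvdisks -node node1
If the command returns no volumes, the node can be shut down or removed without
taking volumes offline.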
2. Use these steps to determine the system configuration node, and the ID,
name, I/O group ID, and I/O group name for the node that you want to
replace. If you already know the physical location of the node that you want
to replace, you can skip this step and proceed to step 3 on page 171.
Tip: If one of the nodes that you want to replace is the system configuration
node, replace it last.
a. Issue this command from the command-line interface (CLI):
lsnode -delim :
This output is an example of the output that is displayed for this
command:
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:
config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias
3:dvt113294:100089J137:5005076801005A07:online:0:io_grp0:yes:
20400002096810C7:8A4:iqn.1986-03.com.ibm:2145.ldcluster-80.dvt113294:
14:des113004:10006BR010:5005076801004F0F:online:0:io_grp0:no:
2040000192880040:8G4:iqn.1986-03.com.ibm:2145.ldcluster-80.des113004:
Important:
a. Record and mark the order of the Fibre Channel or Ethernet cables with
the node port number (port 1 to 4 for Fibre Channel, or port 1 to 2 for
Ethernet) before you remove the cables from the back of the node. The
Fibre Channel ports on the back of the node are numbered 1 to 4 from left
to right. You must reconnect the cables in the exact order on the
replacement node to avoid issues when the replacement node is added to
the system. If the cables are not connected in the same order, the port IDs
can change, which impacts the ability of the host to access volumes. See
the hardware documentation specific to your model to determine how the
ports are numbered.
b. Do not connect the replacement node to different ports on the switch or
director. The SAN Volume Controller can have 4 Gbps or 8 Gbps HBAs.
However, do not move them to faster switch or director ports at this time
to avoid issues when the replacement node is added to the system. This
task is separate and must be planned independently of replacing nodes in
a system.
5. Issue this CLI command to delete this node from the system and I/O group:
rmnode node_name | node_id
Where node_name | node_id is the name or ID of the node that you want to
delete. You can use the CLI to verify that the deletion process has completed.
6. Issue this CLI command to ensure that the node is no longer a member of the
system:
lsnode
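For example, if the node that you are replacing is named node2 (an illustrative
name), you might delete it and confirm that it is gone as follows:
rmnode node2
lsnode -delim :
The deleted node no longer appears in the lsnode output when the deletion has
completed.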
Important: Do not connect the Fibre Channel or Ethernet cables during this
step.
9. If you are removing SSDs from an old node and inserting them into a new
node, see the IBM System Storage SAN Volume Controller Hardware Maintenance
Guide for specific instructions.
10. Power on the replacement node.
11. Record the WWNN of the replacement node. You can use this name if you
plan to reuse the node that you are replacing.
12. Perform these steps to change the WWNN name of the replacement node to
match the name that you recorded in step 3 on page 171:
For SAN Volume Controller V6.1.0 or later:
a. With the Cluster panel displayed, press the up or down button until the
Actions option is displayed.
b. Press and release the select button.
c. Press the up or down button until Change WWNN? is displayed.
d. Press and release the select button to display the current WWNN.
e. Press the select button to switch into edit mode. The Edit WWNN? panel is
displayed.
f. Change the WWNN to the numbers that you recorded in step 3 on page
171.
g. Press and release the select button to exit edit mode.
h. Press the right button to confirm your selection. The Confirm WWNN? panel
is displayed.
i. Press the select button to confirm.
Wait one minute. If Cluster: is displayed on the front panel, this indicates
that the node is ready to be added to the system. If Cluster: is not displayed,
Important: If the WWNN is not what you recorded in step 3 on page 171,
you must repeat step 12 on page 172.
15. Issue this CLI command to add the node to the system and ensure that the
node has the same name as the original node and is in the same I/O group as
the original node. See the addnode CLI command documentation for more
information.
addnode -wwnodename WWNN -iogrp iogroupname/id
WWNN and iogroupname/id are the values that you recorded for the original
node.
The SAN Volume Controller V5.1 and later automatically reassigns the node
with the name that was used originally. For versions before V5.1, use the name
parameter with the svctask addnode command to assign a name. If the
original name of the node was automatically assigned by SAN Volume
Controller, it is not possible to reuse the same name. The name was automatically
assigned if it starts with node. In this case, either specify a different
name that does not start with node or do not use the name parameter so that
SAN Volume Controller automatically assigns a new name to the node.
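For example, using the WWNN 5005076801005A07 from the earlier lsnode output
and an illustrative I/O group and node name, the command might look like this:
addnode -wwnodename 5005076801005A07 -iogrp io_grp0 -name node2
Omit the name parameter if you want SAN Volume Controller to assign a name
automatically.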
If necessary, the new node is updated to the same SAN Volume Controller
software version as the system. This update can take up to 20 minutes.
Important:
a. Both nodes in the I/O group cache data; however, the cache sizes are
asymmetric. The replacement node is limited by the cache size of the
partner node in the I/O group. Therefore, it is possible that the
replacement node does not use the full cache size until you replace the
other node in the I/O group.
b. You do not have to reconfigure the host multipathing device drivers
because the replacement node uses the same WWNN and WWPN as the
previous node. The multipathing device drivers should detect the recovery
of paths that are available to the replacement node.
c. The host multipathing device drivers take approximately 30 minutes to
recover the paths. Do not upgrade the other node in the I/O group until
at least 30 minutes after you have successfully upgraded the first node
in the I/O group. If you have other nodes in different I/O groups to
upgrade, you can perform those upgrades while you wait.
16. Query paths to ensure that all paths have been recovered before proceeding to
the next step. If you are using the IBM System Storage Multipath Subsystem
Device Driver (SDD), the command to query paths is datapath query device.
Documentation that is provided with your multipathing device driver shows
how to query paths.
17. Repair the faulty node.
If you want to use the repaired node as a spare node, perform these steps.
For SAN Volume Controller V6.1.0 or later:
Table 34 lists the models and software version requirements for nodes.
Table 34. Node model names and software version requirements
Node model                        Required SAN Volume Controller system software version
SAN Volume Controller 2145-CG8    6.2.0 or later
SAN Volume Controller 2145-CF8    5.1.0 or later
SAN Volume Controller 2145-8A4    4.3.1 or later
SAN Volume Controller 2145-8G4    4.3.x or later
SAN Volume Controller 2145-8F4    4.3.x or later
SAN Volume Controller 2145-8F2    4.3.x or later
Procedure
1. Install the SAN Volume Controller nodes and the uninterruptible power supply
units in the rack.
2. Connect the SAN Volume Controller nodes to the LAN.
What to do next
For specific instructions about adding a new node or adding a replacement node to
a clustered system, see the information about adding nodes to a clustered system
in the IBM System Storage SAN Volume Controller Troubleshooting Guide.
Before you attempt to replace a faulty node with a spare node, you must ensure
that you meet the following requirements:
v You know the name of the system that contains the faulty node.
v A spare node is installed in the same rack as the system that contains the faulty
node.
v You must make a record of the last five characters of the original worldwide
node name (WWNN) of the spare node. If you repair the faulty node and want
to make it the new spare node, you can assign the recorded WWNN to it. Because
each WWNN must be unique, reusing the recorded WWNN avoids duplicating a
WWNN and makes it easier to swap the node into a system later.
If a node fails, the system continues to operate with degraded performance until
the faulty node is repaired. If the repair operation takes an unacceptable amount of
time, it is useful to replace the faulty node with a spare node. However, the
appropriate procedures must be followed and precautions must be taken so you do
not interrupt I/O operations and compromise the integrity of your data.
In particular, ensure that the partner node in the I/O group is online.
v If the other node in the I/O group is offline, start the fix procedures to
determine the fault.
v If you have been directed here by the fix procedures, and subsequently the
partner node in the I/O group has failed, see the procedure for recovering from
offline volumes after a node or an I/O group failed.
v If you are replacing the node for other reasons, determine the node you want to
replace and ensure that the partner node in the I/O group is online.
v If the partner node is offline, you will lose access to the volumes that belong to
this I/O group. Start the fix procedures and fix the other node before proceeding
to the next step.
The following table describes the changes that are made to your configuration
when you replace a faulty node in a clustered system.
If you choose to assign your own names, you must type the node
name on the Adding a node to a cluster panel. You cannot
manually assign a name that matches the naming convention
used for names assigned automatically by SAN Volume
Controller. If you are using scripts to perform management tasks
on the system and those scripts use the node name, you can
avoid the need to make changes to the scripts by assigning the
original name of the node to a spare node. This name might
change during this procedure.
Go to the procedure “Replacing nodes nondisruptively” on page 169 for the specific steps
to replace a faulty node in a system.
The serial numbers can be viewed on your storage system. If the serial numbers
are not displayed, the worldwide node name (WWNN) or worldwide port name
(WWPN) is displayed. The WWNN or WWPN can be used to identify the different
storage systems.
External storage systems reside on the SAN fabric and are addressable by one or
more worldwide port names (WWPNs). An external storage system might contain
one or more logical units (LUs), each identified by a different logical unit number
(LUN). External storage systems that are managed by SAN Volume Controller
typically contain multiple LUs.
Use one of the following techniques to control the access to the storage systems
and devices:
Logical units (LUs) or managed disks (MDisks) should be made accessible to all
ports on all the SAN Volume Controller nodes for a clustered system.
Attention: SAN Volume Controller does not take any action to prevent two
systems from accessing the same MDisks. If two systems are configured so that
they can detect the same MDisks, data corruption is likely to occur.
General guidelines
You must follow these general guidelines when configuring your storage systems.
v Avoid splitting arrays into multiple logical disks at the storage system level.
Where possible, create a single logical disk from the entire capacity of the array.
v Depending on the redundancy that is required, create RAID-5 (RAID 5) arrays
using between 5 and 8 data disks plus a parity component (that is, 5 + P, 6 + P, 7 +
P, or 8 + P).
v Do not mix managed disks (MDisks) that greatly vary in performance in the
same storage pool tier. The overall storage pool performance in a tier is limited
by the slowest MDisk. Because some storage systems can sustain much higher
I/O bandwidths than others, do not mix MDisks that are provided by low-end
storage systems with those that are provided by high-end storage systems in the
same tier. You must consider the following factors:
– The underlying RAID type that the storage system is using to implement the
MDisk.
– The number of physical disks in the array and the physical disk type (for
example: 10,000 or 15,000 rpm, Fibre Channel or SATA).
v When possible, include similarly sized MDisks in a storage pool tier. This makes
it easier to balance the MDisks in the storage pool tier. If the MDisks in a
storage pool tier are significantly different sizes, you can balance the proportion
of space that is allocated on each MDisk by including the larger MDisk multiple
times in the MDisk list. This is specified when you create a new volume. For
example, if you have two 400 MB disks and one 800 MB disk that are identified
as MDisk 0, 1, and 2, you can create the striped volume with the MDisk IDs of
0:1:2:2. This doubles the number of extents on the 800 MB drive, which
accommodates it being double the size of the other MDisks.
v Perform the appropriate calculations to ensure that your storage systems are
configured correctly.
v If any storage system that is associated with an MDisk has the allowquorum
parameter set to no, the chquorum command will fail for that MDisk. Before
setting the allowquorum parameter to yes on any storage system, check the
following website for storage system configuration requirements:
www.ibm.com/storage/support/2145
In this scenario, you have two RAID-5 arrays and both contain 5 + P components.
Array A has a single logical disk that is presented to the SAN Volume Controller
clustered system. This logical disk is seen by the system as mdisk0. Array B has
three logical disks that are presented to the system. These logical disks are seen by
the system as mdisk1, mdisk2, and mdisk3. All four MDisks are assigned to the
same storage pool that is named mdisk_grp0. When a volume is created by
striping across this storage pool, array A presents the first extent and array B
presents the next three extents. As a result, when the system reads and writes to
the volume, the loading is split 25% on the disks in array A and 75% on the disks
in array B. The performance of the volume is about one third of what array B can
sustain.
A failure of a storage device can affect a larger amount of storage that is presented
to the hosts. To provide redundancy, storage devices can be configured as arrays
that use either mirroring or parity to protect against single failures.
When creating arrays with parity protection (for example, RAID-5 arrays), consider
how many component disks you want to use in each array. If you use a large
number of disks in each array, fewer disks are required to provide availability
for the same total capacity (one parity disk per array). However, more disks mean
that it takes a longer time to rebuild onto a replacement disk after a disk failure,
and during this period a second disk failure causes a loss of all array data. With a
larger number of member disks, more data is affected by a disk failure: performance
is reduced while you rebuild onto a hot spare (a redundant disk), and more data is
exposed if a second disk fails before the rebuild operation is complete. With a
smaller number of disks, it is more likely that write operations span an entire stripe
(the stripe unit size multiplied by one less than the number of member disks), in
which case write performance is improved. However, if arrays are too small, the
number of disk drives that are required to provide availability can be unacceptable.
Notes:
1. For optimal performance, use arrays with between 6 and 8 member disks.
2. When creating arrays with mirroring, the number of component disks in each
array does not affect redundancy or performance.
Notes:
1. The performance of a storage pool is generally governed by the slowest MDisk
in the storage pool.
2. The reliability of a storage pool is generally governed by the weakest MDisk in
the storage pool.
3. If a single MDisk in a group fails, access to the entire group is lost.
Use the following guidelines when you group similar disks:
v Group equally performing MDisks in a single tier of a pool.
v Group similar arrays in a single tier. For example, configure all 6 + P RAID-5
arrays in one tier of a pool.
v Group MDisks from the same type of storage system in a single tier of a pool.
v Group MDisks that use the same type of underlying physical disk in a single tier
of a pool. For example, group MDisks by Fibre Channel or SATA.
v Do not use single disks. Single disks do not provide redundancy. Failure of a
single disk results in total data loss of the storage pool to which it is assigned.
Under one scenario, you could have two storage systems that are attached behind
your SAN Volume Controller. One device is an IBM TotalStorage Enterprise
Storage Server (ESS), which contains ten 6 + P RAID-5 arrays and MDisks 0
through 9. The other device is an IBM System Storage DS5000, which contains a
single RAID-1 array, MDisk10, one single JBOD, MDisk11, and a large 15 + P
RAID-5 array, MDisk12.
If you assigned MDisks 0 through 9 and MDisk11 into a single storage pool, and
the JBOD MDisk11 fails, you lose access to all of the IBM ESS arrays, even though
they are online. The performance is limited to the performance of the JBOD in the
IBM DS5000 storage system, therefore slowing down the IBM ESS arrays.
To fix this problem, you can create three groups. The first group must contain the
IBM ESS arrays, MDisks 0 through 9, the second group must contain the RAID 1
array, and the third group must contain the large RAID 5 array.
Because of this overhead, consider the type of I/O that your application performs
during a FlashCopy operation. Ensure that you do not overload the storage. The
calculations contain a heavy weighting when the FlashCopy feature is active. The
weighting depends on the type of I/O that is performed. Random writes have a
much higher overhead than sequential writes: a small random write causes an
entire 256 KB grain to be copied, whereas a sequential write would have copied the
entire 256 KB anyway.
You can spread the FlashCopy source volumes and the FlashCopy target volumes
between as many managed disk (MDisk) groups as possible. This limits the
potential bottlenecking of a single storage system (assuming that the storage
pools contain MDisks from different storage systems). However, this can still result
in potential bottlenecks if you want to maintain all your target volumes on a single
storage system. You must ensure that you add the appropriate weighting to your
calculations.
Ensure that you follow the guidelines for using image mode volumes. This might
be difficult because a configuration of logical disks and arrays that performs well
in a direct SAN-attached environment can contain hot spots or hot component
disks when they are connected through the clustered system.
If the existing storage systems do not follow the configuration guidelines, consider
completing the data migration away from the image mode volume before
resuming I/O operations on the host systems. If I/O operations are continued and
the storage system does not follow the guidelines, I/O operations can fail at the
hosts and ultimately loss of access to the data can occur.
The procedure for importing managed disks (MDisks) that contain existing data
depends on the amount of free capacity that you have in the system. You must
have the same amount of free space in the system as the size of the data that you
want to migrate into the system. If you do not have this amount of available
capacity, the migration causes the storage pool to have an uneven distribution of
data because some MDisks are more heavily loaded than others. Further migration
operations are required to ensure an even distribution of data and subsequent I/O
loading.
When importing an image mode volume that has a certain amount of gigabytes
and your system has at least that amount in a single storage pool, follow the Start
New Migration wizard in the management GUI at Physical Storage > Migration to
import the image mode volumes and to provide an even distribution of data.
When importing an image mode volume that has a certain amount of gigabytes
and your system does not have at least that amount of free capacity in a single
storage pool, follow the Start New Migration wizard in the management GUI at
Physical Storage > Migration to import the image mode volumes. Do not select
the destination pool at the end of the wizard. This will cause the system to create
the image mode volumes but does not migrate the data away from the image
mode volumes. Use volume mirroring or migration to move the data around as
you want.
The virtualization features of the SAN Volume Controller enable you to choose
how your storage is divided and presented to hosts. While virtualization provides
you with a great deal of flexibility, it also offers the potential to set up an
overloaded storage system. A storage system is overloaded if the quantity of I/O
transactions that are issued by the host systems exceeds the capability of the
storage to process those transactions. If a storage system is overloaded, it causes
delays in the host systems and might cause I/O transactions to time out in the
host. If I/O transactions time out, the host logs errors and I/Os fail to the
applications.
Under this scenario, you have used the SAN Volume Controller system to
virtualize a single array and to divide the storage across 64 host systems. If all host
systems attempt to access the storage at the same time, the single array is
overloaded.
Procedure
1. Use Table 35 on page 185 to calculate the I/O rate for each RAID array in the storage
system.
Note: The actual number of I/O operations per second that can be processed
depends on the location and length of each I/O, whether the I/O is a read or a
write operation and on the specifications of the component disks of the array.
For example, a RAID-5 array with eight component disks has an approximate
I/O rate of 150×7=1050.
For each volume that is the source or target of an active FlashCopy mapping,
consider the type of application that you want to use the volume for, and record
the additional weighting for the volume.
Example
For example, a FlashCopy mapping is used to provide point-in-time backups.
During the FlashCopy process, a host application generates an I/O workload of
random read and write operations to the source volume. A second host
7. Interpret the result. If the I/O rate that is generated by the application exceeds
the I/O rate per volume that you calculated, you might be overloading your
storage system. You must carefully monitor the storage system to determine if
the backend storage limits the overall performance of the storage system. It is
also possible that the previous calculation is too simplistic to model your
storage use after. For example, the calculation assumes that your applications
generate the same I/O workload to all volumes, which might not be the case.
You can use the I/O statistics facilities that are provided by the SAN Volume
Controller to measure the I/O rate of your MDisks. You can also use the
performance and I/O statistics facilities that are provided by your storage
systems.
If your storage system is overloaded there are several actions that you can take to
resolve the problem:
v Add more backend storage to the system to increase the quantity of I/O that
can be processed by the storage system. The SAN Volume Controller provides
virtualization and data migration facilities to redistribute the I/O workload of
volumes across a greater number of MDisks without having to take the storage
offline.
v Stop unnecessary FlashCopy mappings to reduce the amount of I/O operations
that are submitted to the backend storage. If you perform FlashCopy operations
in parallel, consider reducing the amount of FlashCopy mappings that start in
parallel.
v Adjust the queue depth to limit the I/O workload that is generated by a host.
Depending on the type of host and type of host bus adapters (HBAs), it might
be possible to limit the queue depth per volume or limit the queue depth per
HBA, or both. The SAN Volume Controller also provides I/O governing features
that can limit the I/O workload that is generated by hosts.
Note: Although these actions can be used to avoid I/O time-outs, performance of
your storage system is still limited by the amount of storage that you have.
Your setup must meet the following requirements to maximize the amount of I/O
operations that applications can run on Global Mirror volumes:
v The Global Mirror volumes at the remote system must be in dedicated storage
pools that only contain other Global Mirror volumes.
v Configure storage systems to support the Global Mirror workload that is
required of them. The following guidelines can be used to fulfill this
requirement:
– Dedicate storage systems to only Global Mirror volumes
– Configure the storage system to guarantee sufficient quality of service for the
disks that are being used by Global Mirror operations
– Ensure that physical disks are not shared between Global Mirror volumes and
other I/O operations. For example, do not split an individual array.
v For Global Mirror storage pools, use MDisks with the same characteristics. For
example, use MDisks that have the same RAID level, physical disk count, and
disk speed. This requirement is important to maintain performance when you
use the Global Mirror feature.
You must provision the storage systems that are attached to the remote system to
accommodate the following items:
v The peak application workload to the Global Mirror volumes
v The specified background copy level
v All I/O operations that run on the remote system
The FlashCopy, volume mirroring, and thin-provisioned volume functions can all
have a negative impact on system performance. The impact depends on the type of
I/O taking place, and is estimated using a weighting factor from Table 38.
To calculate the average I/O rate per volume, use the following equation:
I/O rate = (I/O capacity) / ( V + weighting factor for FlashCopy +
weighting factor for volume mirroring + weighting factor for thin-provisioned)
If the average I/O rate to the volumes in the example exceeds 94.6, the system
would be overloaded. As approximate guidelines, a heavy I/O rate is 200, a
medium I/O rate is 80, and a low I/O rate is 10.
With volume mirroring, a single volume can have multiple copies in different
storage pools. The I/O rate for such a volume is the minimum I/O rate calculated
from each of its MDisk Groups.
If system storage is overloaded, you can migrate some of the volumes to storage
pools with available capacity.
Note: Solid-state drives (SSDs) are exempt from these calculations, with the
exception of overall node throughput, which increases substantially for each
additional SSD in the node.
The discovery process systematically recognizes all visible ports on the SAN for
devices that identify themselves as storage systems and the number of logical units
(LUs) that they export. The LUs can contain new storage or a new path for
previously discovered storage. The set of LUs forms the SAN Volume Controller
managed disk (MDisk) view.
The discovery process runs when ports are added to or deleted from the SAN and
when certain error conditions occur. You can also manually run the discovery
process using the detectmdisk command-line interface (CLI) command or the
Discover MDisks function from the management GUI. The detectmdisk command
and the Discover MDisks function have the clustered system rescan the Fibre
Channel network. The rescan discovers any new MDisks that might have been
added to the system and rebalances MDisk access across the available
storage-system device ports.
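For example, after presenting new LUs to the system, you might run the following
commands to rescan the fabric, check that the discovery has finished, and list the
newly discovered unmanaged MDisks:
detectmdisk
lsdiscoverystatus
lsmdisk -filtervalue mode=unmanaged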
Note: Some storage systems do not automatically export LUs to the SAN Volume
Controller.
Ensure that you are familiar with the following guidelines for exporting LUs to the
SAN Volume Controller system:
v When you define the SAN Volume Controller as a host object to the storage
systems, you must include all ports on all nodes and candidate nodes.
v When you first create an LU, you must wait until it is initialized before you
export it to the SAN Volume Controller.
Attention: Failure to wait for the LUs to initialize can result in excessive
discovery times and an unstable view of the SAN.
v Do not present new LUs to the SAN Volume Controller until the array
initialization and format is complete. If you add a LUN to a storage pool before
Important: The LU must be identified by the same logical unit number (LUN)
on all ports.
Some storage systems enable you to expand the size of a logical unit (LU) using
vendor-specific disk-configuration software that is provided. The steps in this
procedure are required for the SAN Volume Controller to use extra capacity that is
provided in this way.
To ensure that this additional capacity is available to the SAN Volume Controller,
follow these steps:
Procedure
1. Issue the rmmdisk CLI command to remove the managed disk (MDisk) from the
storage pool. Use the force parameter to migrate data on the specified MDisk
to other MDisks in the storage pool. The command completes asynchronously
if -force is specified. You can check the progress of active migrations by
running the lsmigrate command.
2. Use the vendor-specific, disk-configuration software to expand the size of the
logical unit on the storage system.
3. Issue the detectmdisk CLI command to rescan the Fibre Channel network. The
rescan process discovers any changes to existing MDisks and any new MDisks
that have been added to the clustered system. This command completes
asynchronously and might take a few minutes. To determine whether a
discovery operation is still in progress, use the lsdiscoverystatus command.
4. Issue the lsmdisk CLI command to display the additional capacity that has
been expanded.
5. Issue the addmdisk CLI command to add the MDisk back to the group.
Results
The extra capacity is available for use by the SAN Volume Controller system.
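As an illustration of the complete sequence, assume a hypothetical MDisk named
mdisk5 in a storage pool named pool0; the logical unit itself is expanded on the
storage system between the rmmdisk and detectmdisk steps:
rmmdisk -mdisk mdisk5 -force pool0
lsmigrate
detectmdisk
lsdiscoverystatus
lsmdisk mdisk5
addmdisk -mdisk mdisk5 pool0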
Note:
v The volume becomes a striped volume, not an image mode volume.
v All data that is stored on this MDisk is migrated to the other MDisks in
the storage pool.
v This CLI command can fail if there are not enough free extents in the
storage pool.
b. If the MDisk is in image mode and you do not want to convert the volume
to a striped volume, stop all I/O to the image mode volume.
c. Issue the following CLI command to remove the host mapping and any
SCSI reservation that the host has on the volume:
rmvdiskhostmap -host host_name virtual_disk_name
Where host_name is the name of the host for which you want to remove the
volume mapping and virtual_disk_name is the name of the volume for which
you want to remove the mapping.
d. Issue the following command to delete the volume:
rmvdisk virtual_disk_name
Where virtual_disk_name is the name of the volume that you want to delete.
2. Remove the LU mapping on the storage system so that the LUN is not visible
to the SAN Volume Controller system.
3. Issue the following CLI command to clear all error counters on the MDisk:
includemdisk MDisk_number
Where MDisk_number is the number of the MDisk that you want to modify.
4. Issue the following CLI command to rescan the Fibre Channel network and
detect that the LU is no longer there:
detectmdisk
The MDisk is removed from the configuration.
5. Issue the following CLI command to verify that the MDisk is removed:
lsmdisk MDisk_number
Where MDisk_number is the number of the MDisk that you want to verify.
v If the MDisk is still displayed, repeat steps 3 and 4.
6. Configure the mapping of the LU to the new LUN on the storage system.
7. Issue the following CLI command:
detectmdisk
8. Issue the following CLI command to check that the MDisk now has the correct
LUN:
lsmdisk
When the SAN Volume Controller system can access an LU through multiple
storage systems ports, the system uses the following criteria to determine the
accessibility of these ports:
v The SAN Volume Controller node is a member of a clustered system.
v The SAN Volume Controller node has Fibre Channel connections to the storage
systems port.
v The SAN Volume Controller node has successfully discovered the LU.
v Slandering has not caused the SAN Volume Controller node to exclude access to
the MDisk through the storage systems port.
An MDisk path is presented to the clustered system for all SAN Volume Controller
nodes that meet these criteria.
When an MDisk is created, SAN Volume Controller selects one of the storage
system ports to access the MDisk.
Table 39 describes the algorithm that SAN Volume Controller uses to select the
storage system port.
Table 39. Storage system port selection algorithm
Criteria          Description
Accessibility     Creates an initial set of candidate storage-system ports. The set of
                  candidate storage-system ports includes the ports that are accessible
                  by the highest number of nodes.
Slandering        Reduces the set of candidate storage-system ports to those that are
                  slandered (excluded) by the lowest number of nodes.
Preference        Reduces the set of candidate storage-system ports to those that the
                  storage system uses as preferred ports.
Load balance      Selects the port from the set of candidate storage-system ports that
                  has the lowest MDisk access count.
After the initial device port selection is made for an MDisk, the following events
can cause the selection algorithm to rerun:
v A new node joins the system and has a different view of the storage system than
the other nodes in the system.
v The detectmdisk command-line interface (CLI) command is run or the Discover
MDisks management GUI function is used. The detectmdisk CLI command and
the Discover MDisks function have the system rescan the Fibre Channel network.
Procedure
1. Issue the following CLI command to list the storage system:
lscontroller
2. Record the name or identification for the storage system that you want to
determine.
3. Issue the following CLI command:
lscontroller controllername/identification
Procedure
1. Issue the following CLI command to rename the storage system:
chcontroller -name new_name controller_id
where new_name is the new name that you want to assign to the storage system
and controller_id is the ID of the storage system that you want to rename.
Perform the following steps to delete existing LUs and replace them with new
LUs:
Procedure
1. Issue the following CLI command to delete the managed disks (MDisks) that
are associated with the LUs from their storage pools:
rmmdisk -mdisk MDisk_name1:MDisk_name2 -force MDisk_group_name
Where MDisk_name1:MDisk_name2 are the names of the MDisks to delete and
MDisk_group_name is the name of the storage pool that contains them.
2. Delete the existing LUs using the configuration software of the storage system.
3. Issue the following command to delete the associated MDisks from the
clustered system:
detectmdisk
4. Configure the new LUs using the configuration software of the storage system.
5. Issue the following command to add the new LUs to the system:
detectmdisk
You must follow the zoning guidelines for your switch and also ensure that the
storage system (controller) is set up correctly for use with the SAN Volume
Controller.
You must create one or more arrays on the new storage system.
If your storage system provides array partitioning, create a single partition from
the entire capacity available in the array. You must record the LUN number that
you assign to each partition. You must also follow the mapping guidelines (if your
storage system requires LUN mapping) to map the partitions or arrays to the SAN
Volume Controller ports. You can determine the SAN Volume Controller ports by
following the procedure for determining WWPNs.
Procedure
1. Issue this CLI command to ensure that the clustered system has detected the
new storage (MDisks):
detectmdisk
2. Determine the storage-system name to validate that this is the correct storage
system. The storage system is automatically assigned a default name.
v If you are unsure which storage system is presenting the MDisks, issue this
command to list the storage systems:
lscontroller
3. Find the new storage system in the list. The new storage system has the
highest-numbered default name.
4. Record the name of the storage system and follow the instructions in the
section about determining a storage-system name.
5. Issue this command to change the storage-system name to something that you
can easily use to identify it:
chcontroller -name newname oldname
where newname is the new name that you want to assign to the storage system and
oldname is the current name of the storage system.
6. Issue this command to list the unmanaged MDisks:
lsmdisk -filtervalue mode=unmanaged:controller_name=new_name
These MDisks should correspond with the arrays or partitions that you have
created.
7. Record the field controller LUN number. This number corresponds with the
LUN number that you assigned to each of the arrays or partitions.
8. Create a new MDisk group (storage pool) and add only the arrays that belong
to the new storage system to this MDisk group. To avoid mixing RAID types,
create a new MDisk group for each set of array types (for example, RAID-5,
RAID-1). Give each MDisk group that you create a descriptive name. For
example, if your storage system is named FAST650-fred, and the MDisk group
contains RAID-5 arrays, name the MDisk Group F600-fred-R5.
mkmdiskgrp -ext 16 -name mdisk_grp_name -mdisk mdiskx:mdisky:mdiskz...
Where mdiskx:mdisky:mdiskz... is the colon-separated list of the RAID-x MDisks
that were returned in step 6.
An alternative to following this procedure is to migrate all of the volumes that are
using storage in this storage pool to another storage pool. Using this method, you
can consolidate the volumes in a single or new group. However, you can only
migrate one volume at a time. The procedure outlined below migrates all the data
through a single command.
You can also use this procedure to remove or replace a single MDisk in a group. If
an MDisk experiences a partial failure, such as a degraded array, and you can still
read the data from the disk but cannot write to it, you can replace just that MDisk.
Procedure
1. Add the new storage system to your clustered-system configuration.
2. Issue the following command to add the new MDisks to the storage pool:
addmdisk -mdisk mdiskx:mdisky:mdiskz... mdisk_grp_name
Where mdiskx:mdisky:mdiskz... are the new MDisks that you want to add and
mdisk_grp_name is the name of the storage pool.
3. Issue the following command to delete the old MDisks from the storage pool
and migrate their data to the remaining MDisks:
rmmdisk -mdisk mdiskx:mdisky:mdiskz... -force mdisk_grp_name
Where mdiskx:mdisky:mdiskz... are the old MDisks that you want to delete and
mdisk_grp_name is the name of the storage pool that contains the MDisks that
you want to delete. Depending upon the number and size of the MDisks, and
the number and size of the volumes that are using these MDisks, this
operation takes some time to complete, even though the command returns
immediately.
5. Check the progress of the migration process by issuing the following
command:
lsmigrate
6. When all the migration tasks are complete, for example, the command in step
5 returns no output, verify that the MDisks are unmanaged.
7. Access the storage system and unmap the LUNs from the SAN Volume
Controller ports.
Note: You can delete the LUNs if you no longer want to preserve the data
that is on the LUNs.
8. Issue the following CLI command:
detectmdisk
9. Verify that there are no MDisks for the storage system that you want to
decommission.
10. Remove the storage system from the SAN so that the SAN Volume Controller
ports can no longer access the storage system.
When you remove LUs from your storage system, the managed disks (MDisks)
that represent those LUs might still exist in the system. However, the system
cannot access these MDisks because the LUs that these MDisks represent have
been unconfigured or removed from the storage system. You must remove these
MDisks.
Procedure
1. Run the includemdisk CLI command on all the affected MDisks.
2. Run the rmmdisk CLI command on all affected MDisks. This puts the MDisks
into the unmanaged mode.
Results
All of the MDisks that represent unconfigured LUs are removed from the system.
The system uses a quorum disk to manage a SAN fault that splits the system
exactly in half. One half of the system continues to operate, and the other half
stops until the SAN connectivity is restored.
During quorum disk discovery, the system assesses each logical unit (LU) to
determine its potential use as a quorum disk. From the set of eligible LUs, the
system nominates three quorum candidate disks.
If possible, the quorum disk candidates are presented by different devices. After
the quorum candidate disks are selected, the system selects one of the candidate
quorum disks to become the active quorum disk, which means it is used first to
break a tie in the event of a system partition. After the active quorum disk is
selected, the system does not attempt to ensure that the candidate quorum disks
are presented by different devices. However, you can also manually select the
active quorum disk if you want to ensure the active quorum disk is presented by a
different device. Selecting the active quorum disk is useful in split-site system
configurations and ensures that the most highly available quorum disk is used. To
view a list of current quorum disk candidates, use the lsquorum command. You can
set the active parameter on the chquorum command to set a disk as an active
quorum disk. The quorum disk candidates can be updated by configuration
activity if other eligible LUs are available. To change a quorum candidate disk in
the management GUI, select Pools > MDisks by Pools or Pools > External
Storage.
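For example, to list the current quorum disk candidates and then make the
candidate with quorum index 1 the active quorum disk (the index value is
illustrative; see the chquorum command documentation for the exact syntax at
your code level):
lsquorum
chquorum -active 1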
If no quorum disk candidates are found after the discovery, one of the following
situations has occurred:
v No LUs exist in managed space mode. An error is logged when this situation
occurs.
v LUs exist in managed space mode, but they do not meet the eligibility criteria.
An error is logged when this situation occurs.
You must issue the detectmdisk command-line interface (CLI) command or use the
Discover MDisks function from the management GUI to have the clustered system
rescan the Fibre Channel network. The rescan process discovers any new MDisks
that might have been added to the system and rebalances MDisk access across the
available storage system ports.
The following categories represent the types of service actions for storage systems:
v Controller code upgrade
v Field replaceable unit (FRU) replacement
Ensure that you are familiar with the following guidelines for upgrading controller
code:
v Check to see if the SAN Volume Controller supports concurrent maintenance for
your storage system.
v Allow the storage system to coordinate the entire upgrade process.
v If it is not possible to allow the storage system to coordinate the entire upgrade
process, perform the following steps:
1. Reduce the storage system workload by 50%.
2. Use the configuration tools for the storage system to manually failover all
logical units (LUs) from the controller that you want to upgrade.
3. Upgrade the controller code.
4. Restart the controller.
5. Manually failback the LUs to their original controller.
6. Repeat for all controllers.
FRU replacement
Ensure that you are familiar with the following guidelines for replacing FRUs:
v If the component that you want to replace is directly in the host-side data path
(for example, cable, Fibre Channel port, or controller), disable the external data
paths to prepare for upgrade. To disable external data paths, disconnect or
disable the appropriate ports on the fabric switch. The SAN Volume Controller
ERPs reroute access over the alternate path.
v If the component that you want to replace is in the internal data path (for
example, cache, or drive) and did not completely fail, ensure that the data is
backed up before you attempt to replace the component.
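For example, before and after the replacement you can confirm from the clustered
system that the storage system is still visible and that no MDisks are degraded;
the following commands are one possible check (the filter syntax assumes the
standard -filtervalue option of the ls commands):
lscontroller
lsmdisk -filtervalue status=degraded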
Partnerships among the replication and storage layer systems are limited by these
rules:
v A SAN Volume Controller is always in a replication layer. This cannot be
changed.
v Storage layer systems can be used as external storage only by replication layer
systems.
v Replication layer systems can participate in Metro Mirror or Global Mirror
partnerships only with other replication layer systems.
v Storage layer systems can participate in Metro Mirror or Global Mirror
partnerships only with other storage layer systems.
The storage layer systems support quorum disks. Clustered systems that include a
storage layer system can choose MDisks that are presented by a storage layer
system as quorum disks.
Replication layer systems can use storage that is presented by the storage layer
system, but Metro Mirror and Global Mirror cannot interoperate between the two
systems. Replication layer systems can participate in Metro Mirror and Global
Mirror partnerships only with other replication layer systems, and storage layer
systems can participate in Metro Mirror and Global Mirror partnerships only with
other storage layer systems.
See the Metro Mirror and Global Mirror partnerships topic for more information.
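For reference only, the layer of a system is shown in the output of the lssystem
command. On a Storwize family system (not on a SAN Volume Controller, whose
layer is fixed), the layer can typically be changed with a command similar to the
following example; verify the parameter against the CLI reference for that
product and code level:
lssystem
chsystem -layer replication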
Volumes that are defined on a storage layer system can be used by the replication
layer system as a source or target for advanced copy functions such as FlashCopy.
Sharing the storage layer system between a host and the SAN
Volume Controller
A Flex System V7000 Storage Node, Storwize V7000, or Storwize V7000 Unified
system can present some volumes to a SAN Volume Controller and other volumes
to hosts on the SAN. However, an individual volume cannot be presented to both
a SAN Volume Controller and a host simultaneously.
For Storwize V7000 and Storwize V7000 Unified support information, see the
following websites:
www.ibm.com/storage/support/storwize/v7000
www.ibm.com/storage/support/storwize/v7000/unified
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Method      Description
Port Mode   Allows access to logical units that you want to define on a per
            storage-system port basis. SAN Volume Controller visibility (through
            switch zoning, physical cabling, and so on) must allow the SAN Volume
            Controller system to have the same access from all nodes, and the
            accessible storage system ports must have been assigned the same set
            of logical units with the same logical unit numbers. This method of
            access control is not recommended for SAN Volume Controller
            connection.
WWN Mode    Allows access to logical units by using the WWPN of each of the
            ports of an accessing host device. All WWPNs of all the SAN Volume
            Controller nodes in the same system must be added to the list of
            linked paths in the storage system configuration. This becomes the
            list of host (SAN Volume Controller) ports for an LD Set or group of
            logical units. This method of access control allows sharing because
            different logical units can be accessed by other hosts.
The Compellent system must use a firmware level that is supported by the SAN
Volume Controller. For specific firmware levels and the latest supported hardware,
see the following website:
www.ibm.com/storage/support/2145
The Compellent Storage Center GUI (graphical user interface) is used to manage
the storage center. Compellent provides access to your Compellent system from
any standard Internet browser or from any host computer via a local area network
(LAN) or a wide area network (WAN).
Before you create, delete, or migrate logical units, you must read the storage
configuration guidelines that are specified in the Compellent documentation.
Figure 46 shows the suggested cabling for attachment of the Compellent storage
system to SAN Volume Controller.
Figure 46. Suggested cabling to attach the Compellent storage system. The
storage-system ports (1-8), alternating between active (A) and reserve (R)
roles, are distributed across Fabric 1 and Fabric 2.
To assign storage to the SAN Volume Controller, you must create a server object
that represents each storage node in a SAN Volume Controller clustered system.
When you create a server object for a SAN Volume Controller storage node, select
Other > Other MultiPath as the operating system. After you create all your
storage nodes as servers, it is recommended that you create a server cluster and
add all related nodes to it.
Migrating volumes
You can use the standard migration procedure to migrate volumes from the
Compellent system to the SAN Volume Controller system.
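A minimal sketch of that procedure, assuming the Compellent LU is detected as an
unmanaged MDisk named mdisk10, that Pool_IMG and Pool_TGT are existing storage
pools, and that io_grp0 is the target I/O group (all example names), is to
create an image mode volume and then migrate it:
mkvdisk -mdiskgrp Pool_IMG -iogrp io_grp0 -vtype image -mdisk mdisk10 -name legacy_vol
migratevdisk -mdiskgrp Pool_TGT -vdisk legacy_vol
See the volume migration documentation for the complete procedure and its
prerequisites.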
You can configure your environment so that other hosts can communicate with the
Compellent system for storage requirements that fall outside of the SAN Volume
Controller. You can also configure hosts that communicate with the SAN Volume
Controller directly for storage to also communicate directly with the Compellent
Storage Center for storage. Ensure that you carefully plan and have suitable
documentation before you follow either of these scenarios.
The SAN Volume Controller can use logical units (LUs) that are exported by the
Compellent system as quorum disks.
Compellent advanced functions are not supported with SAN Volume Controller.
Access Logix
Access Logix is an optional feature of the firmware code that provides the
functionality that is known as LUN Mapping or LUN Virtualization.
You can use the software tab in the storage systems properties page of the EMC
Navisphere GUI to determine if Access Logix is installed.
After Access Logix is installed it can be disabled but not removed. The following
are the two modes of operation for Access Logix:
v Access Logix not installed: In this mode of operation, all LUNs are accessible
from all target ports by any host. Therefore, the SAN fabric must be zoned to
ensure that only the SAN Volume Controller can access the target ports.
v Access Logix enabled: In this mode of operation, a storage group can be formed
from a set of LUNs. Only the hosts that are assigned to the storage group are
allowed to access these LUNs.
The following prerequisites must be met before you can configure an EMC
CLARiiON controller with Access Logix installed:
v The EMC CLARiiON controller is not connected to the SAN Volume Controller
v You have a RAID controller with LUs and you have identified which LUs you
want to present to the SAN Volume Controller
You must complete the following tasks to configure an EMC CLARiiON controller
with Access Logix installed:
v Register the SAN Volume Controller ports with the EMC CLARiiON
v Configure storage groups
The association between the SAN Volume Controller and the LU is formed when
you create a storage group that contains both the LU and the SAN Volume
Controller.
The following prerequisites must be met before you can register the SAN Volume
Controller ports with an EMC CLARiiON controller that has Access Logix
installed:
v The EMC CLARiiON controller is not connected to the SAN Volume Controller
v You have a RAID controller with LUs and you have identified which LUs you
want to present to the SAN Volume Controller
Each initiator port [worldwide port name (WWPN)] must be registered against a
host name and against a target port to which access is granted. If a host has
multiple initiator ports, multiple table entries with the same host name are listed. If
a host is allowed access using multiple target ports, multiple table entries are
listed. For SAN Volume Controller hosts, all WWPN entries should carry the same
host name.
Procedure
1. Connect the Fibre Channel and zone the fabric as required.
2. Issue the detectmdisk command-line interface (CLI) command.
3. Right-click on the storage system from the Enterprise Storage window.
4. Select Connectivity Status. The Connectivity Status window is displayed.
5. Click New. The Create Initiator Record window is displayed.
6. Wait for the list of SAN Volume Controller ports to appear in the dialog box.
Use the WWPNs to identify them. This can take several minutes.
7. Click Group Edit.
8. Select all instances of all the SAN Volume Controller ports in the Available
dialog box.
9. Click the right arrow to move them to the selected box.
10. Fill in the HBA WWN field. You must know the following information:
v WWNN of each SAN Volume Controller in the clustered system
v WWPN of each port ID for each node on the system
The HBA WWN field is made up of the WWNN and the WWPN for the SAN
Volume Controller port. The following is an example of the output:
50:05:07:68:01:00:8B:D8:50:05:07:68:01:20:8B:D8
Results
All your WWPNs are registered against the host name that you specified.
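If you need to confirm the WWNN and port WWPNs that make up each HBA WWN entry,
you can list them from the SAN Volume Controller CLI; node1 is an example node
name, and the detailed view includes the node WWNN and the WWPNs of its Fibre
Channel ports:
lsnode
lsnode node1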
Notes:
1. A subset of logical units (LUs) can form a storage group.
2. An LU can be in multiple storage groups.
3. A host can be added to a storage group. This host has access to all LUs in the
storage group.
4. A host cannot be added to a second storage group.
Procedure
1. Right-click on the storage system from the Enterprise Storage window.
2. Select Create Storage Group. The Create Storage Group window is displayed.
3. Enter a name for your storage group in the Storage Group Name field.
4. If available, select Dedicated in the Sharing State field.
5. Click OK. The storage group is created.
6. Right-click the storage group in the Enterprise Storage window.
7. Select Properties. The Storage Group Properties window is displayed.
8. Perform the following steps from the Storage Group Properties window:
a. Select the LUNs tab.
b. Select the LUNs that you want the SAN Volume Controller to manage in
the Available LUNs table.
Attention: Ensure that the LUs that you have selected are not used by
another storage group.
c. Click the forward arrow button.
d. Click Apply. A Confirmation window is displayed.
e. Click Yes to continue. A Success window is displayed.
f. Click OK.
g. Select the Hosts tab.
h. Select the host that you created when you registered the SAN Volume
Controller ports with the EMC CLARiiON.
Attention: Ensure that only SAN Volume Controller hosts (initiator ports)
are in the storage group.
i. Click the forward arrow button.
j. Click OK. The Confirmation window is displayed.
k. Click Yes to continue. A Success window is displayed.
l. Click OK.
Procedure
Configure the switch zoning such that no hosts can access these LUs.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
The EMC CLARiiON FC series and the SAN Volume Controller clustered system
allow concurrent replacement of the following components:
v Disk drives
v Controller fans (fans must be replaced within 2 minutes or controllers are shut
down)
v Disk enclosure fans (fans must be replaced within 2 minutes or controllers are
shut down)
v Controller (service processor: you must first disable cache)
v Fibre Channel Bypass cards (LCC)
v Power supplies (you must first remove fans)
v Uninterruptible power supply battery (SPS)
EMC CLARiiON FC devices require that the I/O is quiesced during code upgrade.
Consequently, the SAN Volume Controller system does not support concurrent
upgrade of the FC controller code.
The EMC CLARiiON CX series and the SAN Volume Controller system allow
concurrent replacement of the following components:
v Disk drives
v Controller (service processor or drawer controller)
v Power/cooling modules (modules must be replaced within 2 minutes or
controllers are shut down)
v Uninterruptible power supply battery (SPS)
The SAN Volume Controller system and EMC CLARiiON CX devices support
concurrent code upgrade of the CX controllers.
Note:
v EMC CLARiiON procedures for concurrent upgrade must be followed in all
cases.
v The CX Series also has a feature called Data In Place Upgrade which allows you
to upgrade from one model to another (for example, from the CX200 to the
CX600) with no data loss or migration required. This is not a concurrent
operation.
Navisphere or Navicli
The following user interface applications are available with EMC CLARiiON
systems:
v Navisphere is the web-based application that can be accessed from any web
browser.
v Navicli is the command-line interface (CLI) application.
Note: Some options and features are only accessible through the CLI.
Communication with the EMC CLARiiON in both cases is out-of-band. Therefore,
the host does not need to be connected to the storage over Fibre Channel and
cannot be connected without Access Logix.
The EMC CLARiiON FC4500 and CX200 systems limit the number of initiator
HBAs to only allow 15 connections for each storage system port. This limit is less
than the 16 initiator ports that are required to connect to an 8-node clustered
system in a dual fabric configuration. To use EMC CLARiiON FC4500 and CX200
systems with an 8-node system, you must zone the system to use one SAN Volume
Controller port for each node in each fabric. This reduces the initiator HBA count
to eight.
EMC CLARiiON FC4700 and CX400 systems provide 4 target ports and allow 64
connections. Using a single SAN fabric, a 4-node system requires 64 connections (4
× 4 × 4), which is equal to the number of connections that are allowed. If split
support with other hosts is required, this can cause issues. You can reduce either
the number of initiator ports or target ports so that only 32 of the available 64
connections are used.
CX600 models
EMC CLARiiON CX600 systems provide 8 target ports and allow 128 connections.
A 4-node system consumes all 128 connections (4 × 4 × 8). An 8-node system
exceeds the connection limit and no reduction methods can be used.
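For example, an 8-node system with four ports per node would require
8 × 4 × 8 = 256 connections, which is twice the 128-connection limit.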
A SAN Volume Controller configuration that only includes the EMC CLARiiON is
permitted.
Advanced copy functions for EMC CLARiiON, for example, SnapView, MirrorView
and SANcopy, are not supported for disks that are managed by the SAN Volume
Controller because the copy function does not extend to the SAN Volume
Controller cache.
MetaLUN
MetaLUN allows a logical unit (LU) to be expanded using LUs in other RAID
groups. The SAN Volume Controller only supports MetaLUN for the migration of
image mode volumes.
The following settings and options are supported by the SAN Volume Controller:
v System
v Port
v Logical unit
Table 40 lists the global settings that are supported by the SAN Volume Controller.
Table 40. EMC CLARiiON global settings supported by the SAN Volume Controller
Option                                     EMC CLARiiON default setting   SAN Volume Controller required setting
Access Controls (Access Logix installed)   Not installed                  Either Installed or Not Installed
Subsystem Package Type                     3                              3
Table 41 lists the options that can be set by the EMC CLARiiON.
Table 41. EMC CLARiiON controller settings supported by the SAN Volume Controller
Option               EMC CLARiiON default setting   SAN Volume Controller required setting
Read Cache Enabled   Enable                         Enable
Read Cache Size      200 MB                         Default recommended
Statistics Logging   Disable                        Either Enable or Disable
Note: The SAN Volume Controller cannot obtain or change the configuration
options that are listed above. You must configure the options that are listed above.
Table 42 lists port settings, the EMC CLARiiON defaults, and the required settings
for SAN Volume Controller clustered systems.
Table 42. EMC CLARiiON port settings
Option       EMC CLARiiON default setting   SAN Volume Controller required setting
Port speed   Depends on the model           Any
Table 43 lists the options that must be set for each LU that is accessed by the SAN
Volume Controller. LUs that are accessed by hosts can be configured differently.
Table 43. EMC CLARiiON LU settings supported by the SAN Volume Controller
Option                  EMC CLARiiON default setting   SAN Volume Controller required setting
LU ID                   Auto                           N/A
RAID Type               5                              Any RAID Group
RAID Group              Any available RAID Group       Any available RAID Group
Offset                  0                              Any setting
LU Size                 ALL LBAs in RAID Group         Any setting
Placement               Best Fit                       Either Best Fit or First Fit
UID                     N/A                            N/A
Default Owner           Auto                           N/A
Auto Assignment         Disabled                       Disabled
Verify Priority         ASAP                           N/A
Rebuild Priority        ASAP                           N/A
Strip Element Size      128                            N/A
Read Cache Enabled      Enabled                        Enabled
Write Cache Enabled     Enabled                        Enabled
Idle Threshold          0–254                          0–254
Max Prefetch Blocks     0–2048                         0–2048
Maximum Prefetch IO     0–100                          0–100
Minimum Prefetch Size   0–65534                        0–65534
Prefetch Type           0, 1, or 2                     0, 1, or 2
Prefetch Multiplier     0 to 2048 or 0 to 324          0 to 2048 or 0 to 324
Retain prefetch         Enabled or Disabled            Enabled or Disabled
Prefetch Segment Size   0 to 2048 or 0 to 32           0 to 2048 or 0 to 32
Idle Delay Time         0 to 254                       0 to 254
Verify Priority         ASAP, High, Medium, or Low     Low
Write Aside             16 to 65534                    16 to 65534
Note: The SAN Volume Controller cannot obtain or change the configuration
options that are listed above. You must configure the options that are listed above.
On some versions of Symmetrix and Symmetrix DMX, the setting of SPC-2 can be
configured. SPC-2 is set either on a per-port basis or on a per-initiator basis. LUs
that are mapped to SAN Volume Controller must be configured with SPC-2
disabled.
Note: Changing the value of the SPC-2 setting on a live system can cause errors. If
you have a live system running with SPC-2 enabled on LUs mapped to SAN
Volume Controller, contact the IBM Support Center for guidance on how to
proceed. Do not disable SPC-2 on a live system before taking guidance from the
IBM Support Center.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
The EMC Symmetrix and Symmetrix DMX are Enterprise class devices that
support nondisruptive replacement of the following components:
v Channel Director
v Disk Director
v Cache card
v Disk drive
v Cooling fan
v Comms card
You can configure and control the exported storage as described below.
You can use the EMC Control Center to manage and monitor the EMC Symmetrix
and Symmetrix DMX systems.
You can use Volume Logix for volume configuration management. Volume Logix
allows you to control access rights to the storage when multiple hosts share target
ports.
SYMCLI
The EMC Symmetrix Command Line Interface (SYMCLI) allows the server to
monitor and control the EMC Symmetrix and Symmetrix DMX.
An EMC Symmetrix or Symmetrix DMX system can be shared between a host and
a SAN Volume Controller under the following conditions:
v When possible, avoid sharing target ports between the SAN Volume Controller
system and other hosts. If this cannot be avoided, you must regularly check the
combined I/O workload that is generated by the SAN Volume Controller system
and the other hosts. The performance of either the SAN Volume Controller
system or the hosts is impacted if the workload exceeds the target port
capabilities.
v A single host must not be connected to a SAN Volume Controller and an EMC
Symmetrix or Symmetrix DMX because the multipathing drivers (for example,
subsystem device driver (SDD) and PowerPath) cannot coexist.
Switch zoning
The SAN Volume Controller switch zone must include at least one target port on
two or more Fibre Channel adapters to avoid a single point of failure.
The EMC Symmetrix and Symmetrix DMX must be configured to present logical
units (LUs) to all SAN Volume Controller initiator ports that are in the fabric zone.
Only SAN Volume Controller initiator ports that are LUN masked on the EMC
Symmetrix or Symmetrix DMX controller should be present in the fabric zone.
Note: The EMC Symmetrix and Symmetrix DMX systems present themselves to a
SAN Volume Controller clustered system as separate controllers for each port
zoned to the SAN Volume Controller. For example, if one of these storage systems
has 4 ports zoned to the SAN Volume Controller, each port appears as a separate
controller rather than one controller with 4 WWPNs. In addition, a given logical
unit (LU) must be mapped to the SAN Volume Controller through all controller
ports zoned to the SAN Volume Controller using the same logical unit number
(LUN).
The SAN Volume Controller uses a logical unit (LU) that is presented by an EMC
Symmetrix or Symmetrix DMX as a quorum disk. The SAN Volume Controller
provides a quorum disk even if the connection is through a single port.
Symmetrix device
Meta device
Meta device is an EMC term for a concatenated chain of EMC Symmetrix devices.
This enables the EMC Symmetrix to provide LUs that are larger than a hyper. Up
to 255 hypers can be concatenated to form a single meta device. Meta devices can
be created using the form meta and add dev commands from the SYMCLI. This
allows an extremely large LU to be created; however, if it is exported to the SAN
Volume Controller, only the first 1 PB is used.
Do not extend or reduce meta devices that are used for managed disks (MDisks).
Reconfiguration of a meta device that is used for an MDisk causes unrecoverable
data-corruption.
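For illustration only, the following statements show how a concatenated meta
device might be described in a symconfigure command file, built from example
devices 00A0 through 00A3; the exact syntax and device numbers depend on your
Enginuity and SYMCLI levels, so verify them against the EMC documentation before
use:
form meta from dev 00A0, config=concatenated;
add dev 00A1:00A3 to meta 00A0;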
You can specify EMC Symmetrix and Symmetrix DMX settings with the set
Symmetrix command from the Symmetrix Command Line Interface (SYMCLI). The
settings can be viewed using the symconfigure command from the SYMCLI.
Table 44 lists the EMC Symmetrix global settings that can be used with SAN
Volume Controller clustered systems.
Table 44. EMC Symmetrix and Symmetrix DMX global settings
Option                   EMC Symmetrix and Symmetrix DMX default setting   SAN Volume Controller required setting
max_hypers_per_disk      -                                                 Any
dynamic_rdf              Disable                                           Any
fba_multi_access_cache   Disable                                           N/A
Raid_s_support           Disable                                           Enable or Disable
The target port characteristics can be viewed using the symcfg command from the
SYMCLI.
Table 45 lists the EMC Symmetrix and Symmetrix DMX port settings that can be
used with the SAN Volume Controller clustered system.
Table 45. EMC Symmetrix and Symmetrix DMX port settings that can be used with the SAN
Volume Controller
Option                   EMC Symmetrix and Symmetrix DMX default setting   SAN Volume Controller required setting
Disk_Array               Enabled                                           Disabled
Volume_Set_Addressing    Enabled                                           Disabled
Hard_Addressing          Enabled                                           Enabled
Non_Participating        Disabled                                          Disabled
Global_3rdParty_Logout   Enabled                                           Enabled
Tagged_Commands          Enabled                                           Enabled
Common_Serial_Number     -                                                 Enabled
Disable_Q_Reset_on_UA    Disabled                                          Disabled
Return_busy_for_abort    Disabled                                          Disabled
SCSI-3                   Disabled                                          Disabled or Enabled
Environ_Set              Disabled                                          Disabled
Unique_WWN               Enabled                                           Enabled
Point_to_Point           Disabled                                          Enabled
VCM_State                Disabled                                          Disabled or Enabled
OpenVMS                  Disabled                                          Disabled
Note: If your Symmetrix or Symmetrix DMX has SPC-2 enabled, do not disable it.
Contact the IBM Support Center for guidance on how to proceed.
Logical unit settings for the EMC Symmetrix and Symmetrix DMX
Logical unit (LU) settings are configurable at the LU level.
LU characteristics can be set using the set device command from the Symmetrix
Command Line Interface (SYMCLI).
Table 46 lists the options that must be set for each LU that is accessed by the SAN
Volume Controller.
Table 46. EMC Symmetrix and Symmetrix DMX LU settings supported by the SAN Volume
Controller
Option      EMC Symmetrix and Symmetrix DMX default setting   SAN Volume Controller required setting
emulation   -                                                 FBA
attribute   -                                                 Set all attributes to disabled.
Table 47 lists the EMC Symmetrix and Symmetrix DMX Initiator settings supported
by the SAN Volume Controller.
Table 47. EMC Symmetrix and Symmetrix DMX initiator settings supported by the SAN
Volume Controller
Option   EMC Symmetrix and Symmetrix DMX default setting   SAN Volume Controller required setting
SPC-2    Disabled                                          Disabled
Note: If your Symmetrix or Symmetrix DMX has SPC-2 enabled for SAN Volume
Controller initiators, do not disable it. Contact the IBM Support Center for
guidance on how to proceed.
LUs can be mapped to a particular director or target port using the map dev
command from the Symmetrix Command Line Interface (SYMCLI). LUs can be
unmapped using the unmap dev command from the SYMCLI.
Volume Logix and masking
Volume Logix allows you to restrict access to particular WWPNs on the fabric for
Symmetrix Volumes.
This function can be switched on and off by changing the VCM_State port setting.
The SAN Volume Controller requires that you do not share target ports between a
host and a SAN Volume Controller. However, you can still use Volume Logix to
protect the system from errors that can occur if the SAN is not correctly
configured.
To mask a volume to the SAN Volume Controller, you must first identify the SAN
Volume Controller ports that are connected to each system. This can be done using
the EMC Symmetrix symmask command.
The SAN Volume Controller automatically logs in to any EMC Symmetrix system
that it sees on the fabric. You can use the SAN Volume Controller lsnode CLI
command to find the correct port identifiers.
After you have identified the ports, you can map each volume on each port to
each WWPN. The EMC Symmetrix stores the LUN masking in a database, so you
must apply the changes that you make and refresh the contents of the database
before the changes are visible.
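The following commands are a hedged sketch only, using example values (Symmetrix
ID 1234, director 7A port 0, device 00A0, and the example SAN Volume Controller
WWPN 50050768012008D8); confirm the exact symmask syntax for your SYMCLI version
in the EMC documentation:
symmask -sid 1234 list logins -dir 7A -p 0
symmask -sid 1234 -wwn 50050768012008D8 -dir 7A -p 0 add devs 00A0
symmask -sid 1234 refresh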
Note: The VMAX settings that are provided in this section must be applied before
you configure SAN Volume Controller LUNs.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Note: The minimum supported SAN Volume Controller level for the attachment of
EMC VMAX is 4.3.1.
The SAN Volume Controller and EMC VMAX support concurrent upgrade of the
EMC VMAX firmware.
You can configure and control the exported storage as described in the following
sections.
You can use the EMC Control Center to manage and monitor the EMC VMAX
systems.
You can use Volume Logix for volume configuration management. With Volume
Logix, you can control access rights to the storage when multiple hosts share target
ports.
SYMCLI
The EMC Symmetrix Command Line Interface (SYMCLI) is used by the server to
monitor and control the EMC VMAX.
Switch zoning
The SAN Volume Controller switch zone must include at least one target port on
two or more Fibre Channel adapters to avoid a single point of failure.
The EMC VMAX must be configured to present logical units (LUs) to all SAN
Volume Controller initiator ports that are in the fabric zone.
Only SAN Volume Controller initiator ports that are LUN-masked on the EMC
VMAX controller should be present in the fabric zone.
Note: An EMC VMAX system presents itself to a SAN Volume Controller clustered
system as one WWNN with a minimum of two and a maximum of 16 WWPNs
supported.
You can connect a maximum of 16 EMC VMAX ports to the SAN Volume
Controller system. There are no further special zoning requirements.
Configurations that are set up to adhere to the requirements that are described in
previous SAN Volume Controller releases are also supported, but should not be
followed for new installations.
The SAN Volume Controller uses a logical unit (LU) that is presented by an EMC
VMAX as a quorum disk. The SAN Volume Controller provides a quorum disk
even if the connection is through a single port.
VMAX device
VMAX device is an EMC term for an LU that is hosted by an EMC VMAX. These
are all emulated devices and have exactly the same characteristics. The following
are the characteristics of a VMAX device:
v N cylinders
v 15 tracks per cylinder
v 64 logical blocks per track
v 512 bytes per logical block
VMAX devices can be created using the create dev command from the EMC
Symmetrix Command Line Interface (SYMCLI). The configuration of an LU can be
changed using the convert dev command from the SYMCLI. Each physical storage
device in an EMC VMAX is partitioned into 1 to 128 hyper-volumes (hypers). Each
hyper can be up to 16 GB. A VMAX device maps to one or more hypers,
depending on how it is configured. The following configurations are examples of
hyper configurations:
v Hypers can be mirrored (2-way, 3-way, 4-way).
v Hypers can be formed into RAID-S groups.
Meta device
Meta device is an EMC term for a concatenated chain of EMC VMAX devices. The
EMC VMAX uses a meta device to provide LUs that are larger than a hyper. Up to
255 hypers can be concatenated to form a single meta device. Using the form meta
and add dev commands from the SYMCLI, you can create meta devices, which
produce an extremely large LU. However, if it is exported to the SAN Volume
Controller, only the first 1 PB is used.
Attention: Do not extend or reduce meta devices that are used for managed disks
(MDisks). Reconfiguring a meta device that is used for an MDisk causes
unrecoverable data corruption.
You can specify EMC VMAX settings with the set Symmetrix command from the
Symmetrix Command Line Interface (SYMCLI). The settings can be viewed using
the symconfigure command from the SYMCLI.
Table 48 lists the EMC VMAX global settings that must be set for the SAN Volume
Controller.
Table 48. EMC VMAX global settings
Option                                  EMC VMAX default setting   SAN Volume Controller required setting
Maximum number of hypers per disk       512                        Any
Switched RDF Configuration state        Disabled                   Default
Concurrent RDF Configuration state      Enabled                    Default
Dynamic RDF Configuration state         Enabled                    Any
Concurrent Dynamic RDF Configuration    Enabled                    Default
RDF Data Mobility Configuration State   Disabled                   Default
Access Control Configuration State      Enabled                    Default
Device Masking (ACLX) Config State      Enabled                    Default
Multi LRU Device Assignment             None                       Default
Disk Group Assignments                  In Use                     Default
Hot Swap Policy                         Permanent                  Default
Symmetrix Disk Library                  Disabled                   Default
FBA Geometry Emulation                  Native                     Default
3 Dynamic Mirrors                       Enabled                    Default
PAV Mode                                DynamicStandardPAV         Default
PAV Alias Limit                         31                         Default
The target port characteristics can be viewed using the symcfg command from the
SYMCLI.
Table 49 on page 224 lists the options that must be used with the SAN Volume
Controller.
LU characteristics can be set using the set device command from the Symmetrix
Command Line Interface (SYMCLI).
Table 50 lists the options that must be set for each LU that is accessed by the SAN
Volume Controller.
Table 50. EMC VMAX LU settings supported by the SAN Volume Controller
Option      EMC VMAX default setting   SAN Volume Controller required setting
emulation   -                          FBA
attribute   -                          Set all attributes to disabled.
Table 51 lists the fibre-specific flag settings that must be set for the SAN Volume
Controller.
Table 51. EMC VMAX fibre-specific flag settings supported by the SAN Volume Controller
Option                     EMC VMAX default setting   SAN Volume Controller required setting
Volume_Set_Addressing(V)   Disabled                   Default
Non_Participating(NP)      Disabled                   Default
LUs can be mapped to a particular director or target port using the map dev
command from the Symmetrix Command Line Interface (SYMCLI). LUs can be
unmapped using the unmap dev command from the SYMCLI.
This function can be switched on and off by changing the VCM_State port setting.
The SAN Volume Controller requires that you do not share target ports between a
host and a SAN Volume Controller. However, you can still use Volume Logix to
protect the system from errors that can occur if the SAN is not correctly
configured.
To mask a volume to the SAN Volume Controller, you must first identify the SAN
Volume Controller ports that are connected to each system. You can identify these
ports using the EMC Symmetrix symmask command.
The SAN Volume Controller automatically logs in to any EMC VMAX system that
it sees on the fabric. You can use the SAN Volume Controller lsnode CLI command
to find the correct port identifiers.
After you have identified the ports, you can map each volume on each port to
each WWPN. The EMC VMAX stores the LUN masking in a database, so you must
apply the changes that you make and refresh the contents of the database before
the changes are visible.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
You can use the ETERNUSmgr web-based configuration utility. See the
documentation that is provided with the Fujitsu ETERNUS system for more
information.
Use the following sequence of steps to configure the Fujitsu ETERNUS system:
1. Configure the SAN Volume Controller host response pattern.
2. Register the host world wide names (WWNs) and associate them with the host
response pattern.
3. Set up the affinity group for SAN Volume Controller volumes or set up LUN
mapping.
4. Create or reassign storage to the SAN Volume Controller.
For all other settings and procedures, consider the SAN Volume Controller a host.
See the documentation that is provided with the Fujitsu ETERNUS system.
CA parameters
The following table lists the port settings that are required. See the documentation
that is provided with your Fujitsu ETERNUS system for more information because
some options are only available on certain models.
The SAN Volume Controller requires that a new host response pattern is created. If
the Host Affinity/Host Table Settings Mode is used, this host response pattern
must be associated with each WWN. If the Host Affinity/Host Table Settings Mode
is not used, this host response pattern must be associated with the target port.
The following table lists the settings that are required. See the documentation that
is provided with your Fujitsu ETERNUS system for more information because
some options are only available on certain models.
Host WWNs
After the SAN Volume Controller is zoned on the fabric to see the Fujitsu
ETERNUS, the system might not initially appear in the list of controllers when you
issue the lscontroller CLI command. This is normal and expected behavior.
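For example, after the WWNs are registered on the Fujitsu ETERNUS system, you
can force a rescan and confirm that the storage system is now listed:
detectmdisk
lscontroller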
See the documentation that is provided with the Fujitsu ETERNUS system to add
all SAN Volume Controller WWPNs as host WWNs. The following restrictions
apply:
v The SAN Volume Controller WWNs must be associated with a host response
pattern. The host response pattern must be defined prior to registration. If you
use an incorrect or default host response pattern, you can lose access to data.
v All SAN Volume Controller WWNs must be registered on all Fujitsu ETERNUS
ports on the same fabric. If the WWNs are not registered, you can lose access to
data.
Affinity groups/zones
Use the affinity groups/zones mode to protect the SAN Volume Controller LUs if
the SAN is incorrectly configured. The affinity group mode is set up in the CA
configuration. See the documentation that is provided with your Fujitsu ETERNUS
system for more information about using the affinity groups/zones mode. The
following restrictions apply:
v Each SAN Volume Controller must have exactly one affinity group/zone.
v The SAN Volume Controller affinity group/zone must be associated with all
SAN Volume Controller WWNs.
LUN mapping
You can use the LUN mapping mode (also called the zone settings mode for some
models) with the following restrictions:
v The SAN zoning must only allow a single SAN Volume Controller access to this
target port.
v The host response pattern must be set in CA configuration using the required
SAN Volume Controller settings.
Note: If you use the LUN mapping mode, you cannot use the host affinity mode.
The host affinity mode is set to OFF.
Ensure that you understand all SAN Volume Controller and Fujitsu ETERNUS
restrictions before you assign storage to the SAN Volume Controller. See the
documentation that is provided with the Fujitsu ETERNUS system for more
information.
See the documentation that is provided with the Fujitsu ETERNUS system for
more information.
Procedure
1. Enter the IP address of the IBM ESS in a web browser to access the ESS
Specialist.
Important: If you are adding SAN Volume Controller ports to a volume that
is already assigned to other SAN Volume Controller ports, you must select the
Use same ID/LUN in source and target check box.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Web server
A web server runs on each of the controllers on the system. During normal
operation, the user interface application provides only basic monitoring of the
system and displays an event log. If you press the reset button on the controller to
put the controller into diagnostic mode, the user interface application allows
firmware upgrades and system configuration resets.
Sharing the IBM ESS between a host and the SAN Volume
Controller
The IBM Enterprise Storage Server (ESS) can be shared between a host and a SAN
Volume Controller.
The following restrictions apply when you share the IBM ESS between a host and
a SAN Volume Controller:
v If an IBM ESS port is in the same zone as a SAN Volume Controller port, that
same IBM ESS port should not be in the same zone as another host.
v A single host can have both IBM ESS direct-attached and SAN Volume
Controller virtualized disks configured to it.
v If a LUN is managed by the SAN Volume Controller, it cannot be mapped to
another host.
To avoid a single point of failure on the IBM ESS, you must have a minimum of
two SAN connections from two separate adapter bays. The maximum number of
IBM ESS SAN connections in the SAN Volume Controller switch zone is 16.
Before you delete or unmap a logical unit (LU) from the SAN Volume Controller,
remove the LU from the managed disk (MDisk) group. The following is supported:
v LU size of 1 GB to 1 PB.
v RAID 5 and RAID 10 LUs.
v LUs can be added dynamically.
IBM System Storage DS5000, IBM DS4000, and IBM DS3000 are similar systems.
The concepts in this section apply generally to all three systems; however, some
options might not be available. See the documentation that is provided with your
system for specific information.
The following steps describe the supported options and their impact on the SAN
Volume Controller system:
Procedure
1. Set the host type for SAN Volume Controller to IBM TS SAN VCE. For higher
security, create a storage partition for every host that will have access to the
storage system. If you set a default host group and add another host other than
SAN Volume Controller to the default group, the new host automatically has
full read and write access to all LUNs on the storage system.
2. See the following website for the scripts that are available to change the setup
of the IBM System Storage DS5000, IBM DS4000, or IBM System Storage
DS3000 system:
www.ibm.com/storage/support/
What to do next
The following limitation applies to IBM DS5000, IBM DS4000, or IBM DS3000 Copy
Services:
v Do not use IBM DS5000, IBM DS4000, or IBM System Storage DS3000 Copy
Services when the SAN Volume Controller system is attached to an IBM DS5000,
IBM DS4000, or IBM DS3000 system.
v You can use partitioning to allow IBM DS5000, IBM DS4000, or IBM DS3000
Copy Services usage for other hosts.
The following information applies to the access LUN (also known as the Universal
Transport Mechanism (UTM) LUN):
v The access/UTM LUN is a special LUN that allows an IBM DS5000, IBM DS4000,
or IBM DS3000 system to be configured through software over the Fibre Channel
connection.
v The access/UTM LUN does not have to be in the partition that contains the
SAN Volume Controller ports because the access/UTM LUN is not required by
the SAN Volume Controller system. No errors are generated if the access/UTM
LUN is not in the partition.
v If the access/UTM LUN is included in the SAN Volume Controller partition, the
access/UTM LUN must not be configured as logical unit number 0. If the SAN
Volume Controller partition (the host group) has been created with multiple
hosts, the access LUN must be present in all hosts and must be the same logical
unit number.
The storage manager for IBM System Storage DS5000, IBM DS4000 and IBM
DS3000 systems has several options and actions that you can use.
The controller disable data transfer option is not supported when a SAN Volume
Controller is attached to IBM System Storage DS5000, IBM DS4000, or IBM DS3000
systems.
Do not set an array offline because you can lose access to the storage pool.
The array increase capacity option is supported, but the new capacity is not usable
until the MDisk is removed from the storage pool and re-added to the storage
pool. You might have to migrate data to increase the capacity.
You can redistribute logical drives or change ownership of the preferred path;
however, these options might not take effect until a discovery is started on the
SAN Volume Controller clustered system. You can use the detectmdisk
command-line interface (CLI) command to restart a system discovery process. The
discovery process rescans the Fibre Channel network to discover any new MDisks
that might have been added to the system and to rebalance MDisk access across
the available storage system ports.
You must only use the controller reset option if you are directed to do so by IBM
Service and the alternate controller is functional and available to the SAN. The
SAN Volume Controller system recovers automatically from the controller reset.
Check your MDisks to ensure that they have not been set to the degraded state
during the controller reset process. You can issue the includemdisk CLI command
to repair degraded MDisks.
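For example, the following commands list any degraded MDisks and then include an
example MDisk named mdisk7 (substitute your own MDisk names or IDs):
lsmdisk -filtervalue status=degraded
includemdisk mdisk7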
Note: Some older levels of IBM System Storage DS4000 microcode support a
maximum of 32 LUNs per host partition. Newer firmware versions allow from 256 to
2048 LUNs per host partition.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
The website includes the maximum number of LUNs per partition that are
supported by the firmware level.
See your IBM System Storage DS5000, IBM DS4000, or IBM DS3000 series
documentation for information about concurrent maintenance.
Note: The SAN Volume Controller partition must contain all of the host ports of
the SAN Volume Controller system that are connected to the SAN or that are zoned
to have access to the storage system ports. For example, configure the zoning so
that each SAN Volume Controller host bus adapter (HBA) port can see at least one
port on storage system A and one port on storage system B.
Note: The FAStT series 200 does not support quorum disks.
You can introduce the SAN Volume Controller into an existing SAN environment,
which gives you the option of using image mode LUNs to import the existing data
into the virtualization environment without requiring a backup and restore cycle.
and restore cycle. Each partition can only access a unique set of HBA ports, as
defined by the worldwide port names (WWPNs). For a single host to access
multiple partitions, unique host fibre ports (WWPNs) must be assigned to each
partition. All LUNs within a partition are identified to the assigned host fibre ports
(no subpartition LUN mapping).
To allow Host A to access the LUNs in partition B, you must remove one of the
HBAs (for example, A1) from the access list for partition 0 and add it to partition
1. A1 cannot be on the access list for more than one partition.
To add a SAN Volume Controller into this configuration without backup and
restore cycles requires a set of unique SAN Volume Controller HBA port WWPNs
for each partition. This allows the IBM System Storage DS5000, IBM DS4000, or
IBM DS3000 system to make the LUNs known to the SAN Volume Controller,
which then configures these LUNs as image-mode LUNs and identifies them to the
required hosts. This violates the requirement that all SAN Volume Controller
nodes must be able to see all back-end storage.
Scenario: the SAN Volume Controller nodes cannot see all back-end
storage
The IBM DS4000 series has eight partitions with 30 LUNs in each.
Perform the following steps to allow the SAN Volume Controller nodes to see all
back-end storage:
1. Change the mappings for the first four partitions on the IBM DS4000 system
such that each partition is mapped to one port on each node. This maintains
redundancy across the system.
2. Create a new partition on the system that is mapped to all four ports on all the
nodes.
3. Gradually migrate the data into the managed disks (MDisks) in the target
partition. As storage is freed from the source partitions, it can be reused as new
storage in the target partition. As partitions are deleted, new partitions that
must be migrated can be mapped and migrated in the same way. The host side
data access and integrity is maintained throughout this process.
Some IBM System Storage DS5000, IBM DS4000, and IBM DS3000 storage systems
are supported for use with a SAN Volume Controller clustered system.
To create a logical disk, set the host type for SAN Volume Controller to IBM TS SAN
VCE.
The access LUN, also known as the Universal Transport Mechanism (UTM) LUN,
is the configuration interface for IBM System Storage DS5000, IBM DS4000, and
IBM System Storage DS3000 systems.
The access LUN does not have to be in a partition that contains the SAN Volume
Controller ports because it is not required by the SAN Volume Controller clustered
system. The UTM LUN is a special LUN that allows IBM System Storage DS5000,
IBM DS4000, and IBM System Storage DS3000 systems to be configured through
suitable software over the Fibre Channel connection. Because the SAN Volume
Controller does not require the UTM LUN, no errors are generated whether or not
the UTM LUN is in the SAN Volume Controller partition.
IBM System Storage DS5000, IBM DS4000, and IBM System Storage DS3000
systems must not have the Access UTM LUN that is presented as LUN 0 (zero).
It is possible to use in-band (over Fibre Channel) and out-of-band (over Ethernet)
to allow the configuration software to communicate with more than one IBM
System Storage DS5000, IBM DS4000, or IBM System Storage DS3000 system. If
you are using in-band configuration, the Access UTM LUN must be configured in
a partition that does not include any logical units that are accessed by the SAN
Volume Controller system.
Note: In-band access to the UTM LUN is not supported while the LUN is in the SAN
Volume Controller partition.
You must configure the following settings for IBM System Storage DS5000, IBM
DS4000, and IBM DS3000 systems:
v Set the host type for SAN Volume Controller to IBM TS SAN VCE.
v Set the system so that both storage systems have the same worldwide node
name (WWNN). See the following website for the scripts that are available to
change the setup for IBM System Storage DS5000, IBM DS4000, and IBM DS3000
systems:
www.ibm.com/storage/support/
v Ensure that the AVT option is enabled. The host type selection should have
already enabled the AVT option. View the storage system profile data to confirm
that you have the AVT option enabled. This storage profile is presented as a text
view in a separate window. See the following website for the scripts that are
available to enable the AVT option:
www.ibm.com/storage/support/
v You must have the following options enabled on any logical units that are
mapped to IBM System Storage DS5000, IBM DS4000, and IBM DS3000 systems:
– read caching
– write caching
– write cache mirroring
v You must not have caching without batteries enabled.
Table 52 lists the global settings that can be used with SAN Volume Controller
clustered systems.
Table 52. IBM System Storage DS5000, DS4000, and IBM DS3000 system global options
and settings
Option Setting
Start flushing 50%
Stop flushing 50%
Cache block size 4 KB (for systems running 06.x or earlier)
Attention: See the IBM DS5000, IBM DS4000, or IBM DS3000 documentation for
details on how to modify the settings.
Use a host type for SAN Volume Controller of IBM TS SAN VCE to establish the
correct global settings for the SAN Volume Controller system.
Use the following option settings for a LUN that will be attached to SAN Volume
Controller clustered system.
Table 53. Option settings for a LUN
Parameter                                           Setting
Segment size                                        256 KB
Capacity reserved for future segment size changes   Yes
Maximum future segment size                         2,048 KB
Modification priority                               High
Read cache                                          Enabled
Write cache                                         Enabled
Write cache without batteries                       Disabled
Write cache with mirroring                          Enabled
Flush write cache after (in seconds)                10.00
Dynamic cache read prefetch                         Enabled
Set the host type for SAN Volume Controller to IBM TS SAN VCE when you create a
new LU.
See the documentation that is provided with your system for information about
other settings.
After you have defined at least one storage complex, storage unit, and I/O port,
you can define the SAN Volume Controller as a host and create host connections. If
you have not defined all of these required storage elements, use the IBM System
Storage DS6000 Storage Manager or the IBM DS6000 command-line interface (CLI)
to define these elements and return to this topic after they are configured.
This task assumes that you have already launched the IBM System Storage DS6000
Storage Manager.
Procedure
1. Click Real-time manager > Manage hardware > Host systems.
2. Select Create from the Select Action list. The Create Host System wizard is
displayed.
3. Perform the following steps to select a host type:
a. Select IBM SAN Volume Controller (SVC) from the Host Type list.
Note: You must add all of the SAN Volume Controller node ports.
b. Select FC Switch fabric (P-P) from the Attachment Port Type list.
c. Click Add.
d. Select Group ports to share a common set of volumes.
e. Click Next. The Define host WWPN panel is displayed.
5. Specify a WWPN for each SAN Volume Controller node port that you are
configuring. After you have defined all SAN Volume Controller node port
WWPNs, click Next.
6. Perform the following steps in the Specify storage units panel:
a. Select all the available storage units that use the ports that you defined in
step 5.
b. Click Add to move the selected storage units to the Selected storage units
field.
c. Click Next. The Specify storage units parameters panel is displayed.
7. Perform the following steps in the Specify storage units parameters panel:
a. Select a host attachment identifier from the table.
b. Click the following specific storage unit I/O ports in the This host
attachment can login to field. The available ports are displayed in the
Available storage unit I/O ports table.
c. Select each port in the Available storage unit I/O ports table.
Note: The Type for each port should be FcSf. If the listed type is not FcSf,
click Configure I/O Ports. The Configure I/O Ports panel is displayed.
Click the port that you want to configure and select Change to FcSf from
the Select Action list.
d. Click Apply assignment.
e. Click OK. The Verification panel is displayed.
8. Verify that the attributes and values that are displayed in the table are correct.
9. Click Finish if the values that are displayed in the table are correct. Otherwise,
click Back to return to the previous panels and change the values that are not
correct.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Web server
You can manage, configure, and monitor the IBM DS6000 through the IBM System
Storage DS6000 Storage Manager.
CLI
You can also manage, configure, and monitor the IBM DS6000 through the IBM
System Storage DS command-line interface.
After you have defined at least one storage complex, storage unit, and I/O port,
you can define the SAN Volume Controller as a host and create host connections. If
you have not defined all of these required storage elements, use the IBM System
Storage DS8000 Storage Manager or the IBM System Storage DS® command-line
interface to define these elements and return to this topic after they are configured.
This task assumes that you have already launched the IBM System Storage DS8000
Storage Manager.
Procedure
1. Click Real-time manager > Manage hardware > Host connections.
2. Select Create new host connection from the Task list. The Create Host System
wizard begins.
3. Perform the following steps on the Define Host Ports panel:
a. Enter a unique name of up to 12 characters for each port in the Host
Connection Nickname field. The value is used to automatically assign
nicknames for the host ports as they are added to the Host WWPN table.
This is a required field.
b. Select Fibre Channel Point-to-Point/Switched (FcSf) for the port type.
c. Select IBM SAN Volume Controller (SVC) from the Host Type list.
d. In the Host WWPN field, enter the 16-digit worldwide port name (WWPN)
manually, or select the WWPN from the list. Click Add.
e. Click Next. The Map Host Ports to a Volume Group panel is displayed.
4. Perform the following steps in the Map Host Ports to a Volume Group panel:
a. You can choose to either map the ports to an existing volume group or
create a new one.
b. After completing that task, click Next. The Define I/O Ports panel is
displayed.
5. Perform the following steps in the Define I/O Ports panel:
a. Select either Automatic (any valid I/O port) or Manual selection of I/O
ports to assign I/O ports.
b. Click Next. The Verification panel is displayed.
6. Verify that the attributes and values that are displayed in the table are correct.
7. Click Finish if the values that are displayed in the table are correct. Otherwise,
click Back to return to the previous panels and change the incorrect values.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Web server
You can manage, configure, and monitor the IBM DS8000 through the IBM System
Storage DS8000 Storage Manager.
CLI
You can also manage, configure, and monitor the IBM DS8000 through the IBM
System Storage DS command-line interface.
The information in this section also applies to the supported models of the Sun
StorEdge series and the HP XP series.
Supported firmware levels for HDS Lightning
The SAN Volume Controller supports the HDS Lightning.
See the following website for specific HDS Lightning firmware levels and the latest
supported hardware:
www.ibm.com/storage/support/2145
Note: Concurrent upgrade of the controller firmware is supported with the SAN
Volume Controller.
HDS Lightning has a laptop in the controller frame. The laptop runs the Service
Processor (SVP) as the primary configuration user interface. You can use SVP to
perform most configuration tasks and to monitor the controller.
HiCommand
The HiCommand is a graphical user interface that allows basic creation of storage
and system monitoring. The HiCommand communicates with HDS Lightning
through Ethernet.
Sharing the HDS Lightning 99xxV between a host and the SAN
Volume Controller
There are restrictions for sharing an HDS Lightning 99xxV between a host and a
SAN Volume Controller clustered system.
Sharing ports
The HDS Lightning 99xxV can be shared between a host and a SAN Volume
Controller system under the following conditions:
v The same host cannot be connected to both a SAN Volume Controller system
and an HDS Lightning at the same time because the Hitachi HiCommand
Dynamic Link Manager (HDLM) and the subsystem device driver (SDD) cannot
coexist.
v A controller port cannot be shared between a host and a SAN Volume Controller
system. If a controller port is used by a SAN Volume Controller system, it must
not be present in a switch zone that allows a host to access the port.
v Logical units (LUs) cannot be shared between a host and a SAN Volume
Controller system.
You can connect the SAN Volume Controller system to the HDS Lightning under
the following conditions:
v For SAN Volume Controller software version 4.2.1 and later, you can connect a
maximum of 16 HDS Lightning ports to the SAN Volume Controller system
without any special zoning requirements.
v For SAN Volume Controller software version 4.2.0, the following conditions
apply:
– Logical Unit Size Expansion (LUSE) and Virtual LVI/LUN operations cannot
be run on a disk that is managed by the SAN Volume Controller system.
LUNs that are created using LUSE and Virtual LVI/LUN can be mapped to
the system after they are created.
– Only disks with open emulation can be mapped to the SAN Volume
Controller system.
– IBM S/390® disks cannot be used with the SAN Volume Controller system.
– Only Fibre Channel connections can connect the SAN Volume Controller
system to the HDS Lightning.
Switch zoning
Advanced copy functions for HDS Lightning (for example, ShadowImage, Remote
Copy, and Data Migration) are not supported for disks that are managed by the
SAN Volume Controller, because the copy function does not extend to the SAN
Volume Controller cache.
The HDS Lightning 99xxV supports Logical Unit Expansion (LUSE). LUSE is not a
concurrent operation. LUSE is accomplished by concatenating between 2 and 26 LUs.
Attention: LUSE destroys all data that exists on the LU, except on a Windows
system.
TrueCopy
Virtual LVI/LUNs
The HDS Lightning 99xxV supports Virtual LVI/LUNs. Virtual LVI/LUNs is not a
concurrent operation. Virtual LVI/LUNs allows you to divide LUNs into several
smaller virtual LUNs for use by the HDS Lightning. You must first convert existing
LUNs into free space and then define new LUNs using that free space.
Virtual LVI/LUNs must not be managed or mapped to a SAN Volume Controller.
LUNs that are set up using either LUSE or Virtual LVI/LUNs appear as normal
LUNs after they are created. Therefore, LUNs that are set up using LUSE or Virtual
LVI/LUNs can be used by the SAN Volume Controller after they are created.
Write protect
The HDS Lightning system can have up to 8192 LUs defined; however, only 256
LUs can be mapped to a single port. Report LUNs is supported by LUN 0, so the
SAN Volume Controller can detect all LUNs.
In the event that a LUN 0 is not configured, the HDS Lightning system presents a
pseudo-LUN at LUN 0. The inquiry data for this pseudo-LUN slightly differs from
the inquiry data of normal LUNs. The difference allows the SAN Volume
Controller to recognize the pseudo-LUN and exclude it from I/O. The pseudo
LUN can accept the report LUNs command.
The HDS Lightning system supports both open-mode attachment and S/390
attachment. The emulation mode is set when the LU is defined. All LUNs that are
presented to a SAN Volume Controller must use open emulation. All LUNs with
open emulation use a standard 512 byte block size.
The HDS Lightning system can only have certain sized LUs that are defined. These
LUs can be expanded by merging 2 - 36 of these LUs using the Logical Unit Size Expansion (LUSE) feature.
Special LUs
When an LU is mapped to a host, you have the option to make it a command LUN.
Command LUNs support in-band configuration commands, but not I/O.
Therefore, you cannot map command LUNs to the SAN Volume Controller.
Table 55 on page 249 lists the HDS Lightning controller settings that are supported
by the SAN Volume Controller.
Table 56 lists the HDS Lightning port settings that are supported by the SAN
Volume Controller.
Table 56. HDS Lightning port settings supported by the SAN Volume Controller

Option            HDS Lightning default setting    SAN Volume Controller required setting
Address           AL/PA                            AL/PA
Fabric            On                               On
Connection        Point-to-Point                   Point-to-Point
Security switch   On                               On or off
Host type         Default                          Windows
Note: These settings only apply to LUs that are accessible by the SAN Volume
Controller.
Configuring HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000,
and HDS TagmaStore WMS systems
You can attach Hitachi Data Systems (HDS) Thunder, Hitachi AMS 200, AMS 500,
and AMS 1000, and HDS TagmaStore Workgroup Modular Storage (WMS) systems
to a SAN Volume Controller clustered system.
Supported HDS Thunder, Hitachi AMS 200, AMS 500, and AMS
1000, and HDS TagmaStore WMS models
You can attach certain HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000,
and HDS TagmaStore Workgroup Modular Storage (WMS) models to SAN Volume
Controller clustered systems.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
In-band configuration
Disable the system command LUN when you use the user interface applications.
The Storage Navigator Modular (SNM) is the primary user interface application for
configuring HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS
TagmaStore WMS systems. Use SNM to upgrade firmware, change settings, and to
create and monitor storage.
HiCommand
HiCommand is another configuration user interface that is available for the HDS
Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS TagmaStore WMS
systems. You must have access to SNM to use HiCommand to configure settings.
HiCommand only allows basic creation of storage and provides some monitoring
features.
Web server
A web server runs on each of the controllers on the system. During normal
operation, the user interface only provides basic monitoring of the system and
displays an event log. If you put a controller into diagnostic mode by pressing the
reset button on the controller, the user interface provides firmware upgrades and
system configuration resets.
Sharing the HDS Thunder, Hitachi AMS 200, AMS 500, and
AMS 1000, or HDS TagmaStore WMS between a host and the
SAN Volume Controller
You can share the HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and
HDS TagmaStore Workgroup Modular Storage (WMS) systems between a host and
a SAN Volume Controller clustered system, with certain restrictions.
The HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, or HDS TagmaStore
WMS systems present themselves to a SAN Volume Controller clustered system as
separate storage systems for each port zoned to the SAN Volume Controller. For
example, if one of these storage systems has four ports zoned to the SAN Volume
Controller, each port appears as a separate storage system rather than one storage
system with four WWPNs. In addition, a given logical unit (LU) must be mapped
to the SAN Volume Controller through all storage system ports that are zoned to
the SAN Volume Controller using the same logical unit number (LUN).
Supported topologies
You can connect a maximum of 16 HDS Thunder ports to the SAN Volume
Controller system without any special zoning requirements.
You can use the chquorum CLI command or the management GUI to select
quorum disks.
Host type for HDS Thunder, Hitachi AMS 200, AMS 500, and
AMS 1000, and HDS TagmaStore WMS
When the HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS
TagmaStore Workgroup Modular Storage (WMS) systems are attached to a SAN
Volume Controller clustered system, set the host mode attribute to the Microsoft
Windows option that is available on each storage system.
For example, when using HDS TagmaStore WMS, select Windows, or when using
the Hitachi AMS 200, AMS 500, and AMS 1000, select Windows 2003.
Advanced copy functions for HDS Thunder, Hitachi AMS 200, AMS 500, and AMS
1000, and HDS TagmaStore WMS systems are not supported for disks that are
managed by the SAN Volume Controller systems because the copy function does
not extend to the SAN Volume Controller cache. For example, ShadowImage,
TrueCopy, and HiCopy are not supported.
LUN Security enables LUN masking by the worldwide node name (WWNN) of the
initiator port. This function is not supported for logical units (LUs) that are used
by SAN Volume Controller systems.
Partitioning
Partitioning splits a RAID into up to 128 smaller LUs, each of which serves as an
independent disk-like entity. The SAN Volume Controller system and HDS
Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS TagmaStore WMS
systems support the partitioning function.
The HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS
TagmaStore WMS systems allow the last LU that is defined in a RAID group to be
expanded. This function is not supported when these storage systems are attached
to a SAN Volume Controller system. Do not perform dynamic array expansion on
LUs that are in use by a SAN Volume Controller system.
Note: Use in this context means that the LU has a LUN number that is associated
with a Fibre Channel port, and this Fibre Channel port is contained in a switch
zone that also contains SAN Volume Controller Fibre Channel ports.
The HDS Thunder 95xxV, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS
TagmaStore WMS systems support host storage domains (HSD) and virtual Fibre
Channel ports. Each Fibre Channel port can support multiple HSDs. Each host in a
given HSD is presented with a virtual target port and a unique set of LUNs.
The Thunder 9200 does not support HSD and virtual Fibre Channel ports.
For example, the Storage Navigator Modular GUI enables you to create LUN A,
delete LUN A, and then create LUN B with the same unique ID as LUN A. If a
SAN Volume Controller clustered system is attached, data corruption can occur
because the system might not realize that LUN B is different than LUN A.
Attention: Before you use the Storage Navigator Modular GUI to delete a LUN,
remove the LUN from the storage pool that contains it.
To prevent the existing LUNs from rejecting I/O operations during the dynamic
addition of LUNs, perform the following procedure to add LUNs:
1. Create the new LUNs using the Storage Navigator Modular GUI.
2. Quiesce all I/O operations.
3. Perform either an offline format or an online format of all new LUNs on the
controller using the Storage Navigator Modular GUI. Wait for the format to
complete.
4. Go into the LUN mapping function of the Storage Navigator Modular GUI.
Add mapping for the new LUN to all of the controller ports that are available
to the SAN Volume Controller system on the fabric.
5. Restart the controller. (Model 9200 only)
6. After the controller has restarted, restart I/O operations.
If LUN mapping is used as described in the LUN mapping topic, you must restart
the controller to pick up the new LUN mapping configuration. For each storage
pool that contains an MDisk that is supported by an LU on the system, all
volumes in those storage pools go offline.
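After the controller restarts and I/O operations resume, you can rediscover the new LUNs from the SAN Volume Controller command-line interface. The following is a minimal sketch; both commands are issued without parameters and simply rescan the Fibre Channel network and then list the managed disks:
detectmdisk
lsmdisk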
Global settings for the HDS Thunder, Hitachi AMS 200, AMS 500,
and AMS 1000, and HDS TagmaStore WMS systems
Global settings apply across HDS Thunder, Hitachi AMS 200, AMS 500, and AMS
1000, and HDS TagmaStore WMS systems.
Controller settings for HDS Thunder, Hitachi AMS 200, AMS 500,
and AMS 1000, and HDS TagmaStore WMS systems
Controller settings apply across the entire HDS Thunder, Hitachi AMS 200, AMS
500, and AMS 1000, and HDS TagmaStore WMS systems. Options are not available
within the scope of a single controller.
Port settings for the HDS Thunder, Hitachi AMS 200, AMS 500,
and AMS 1000, and HDS TagmaStore WMS systems
Port settings are configurable at the port level.
The settings listed in Table 59 apply to disk controllers that are in a switch zone
that contains SAN Volume Controller nodes. If the system is shared between a
SAN Volume Controller clustered system and another host, you can configure
different settings from those shown if both of the following conditions are true:
v The ports are included in switch zones.
v The switch zones only present the ports directly to the hosts and not to a SAN
Volume Controller system.
Logical unit settings for the HDS Thunder, Hitachi AMS 200, AMS
500, and AMS 1000, and HDS TagmaStore WMS systems
Logical unit (LU) settings apply to individual LUs that are configured in the HDS
Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS TagmaStore WMS
systems.
You must configure the system's LUs as described in Table 60 if the logical unit
number (LUN) is associated with ports in a switch zone that is accessible to the
SAN Volume Controller clustered system.
Table 60. HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and HDS TagmaStore WMS systems LU settings for the SAN Volume Controller

Option                   Required values                 Default setting
LUN default controller   Controller 0 or Controller 1    Any
Note: These settings only apply to LUs that are accessible by the SAN Volume
Controller system.
Scenario 1: The configuration application enables you to change the serial number
for an LU. Changing the serial number also changes the unique user identifier
(UID) for the LU. Because the serial number is also used to determine the WWPN
of the controller ports, two LUNs cannot have the same unique ID on the same
SAN because two controllers cannot have the same WWPN on the same SAN.
Scenario 2: The serial number is also used to determine the WWPN of the
controller ports. Therefore, two LUNs must not have the same ID on the same
SAN because this results in two controllers having the same WWPN on the same
SAN. This is not a valid configuration.
Attention: Do not change the serial number for an LU that is managed by a SAN
Volume Controller system because this can result in data loss or undetected data
corruption.
The SAN Volume Controller supports the S-TID M-LUN and M-TID M-LUN
modes on Thunder 9200, and Mapping Mode enabled or disabled on Thunder
95xx. You must restart the controllers for changes to LUN mapping to take effect.
Attention: The HDS Thunder, Hitachi AMS 200, AMS 500, and AMS 1000, and
HDS TagmaStore WMS systems do not provide an interface that enables a SAN
Volume Controller clustered system to detect and ensure that the mapping or
masking and virtualization options are set properly. Therefore, you must ensure
that these options are set as described in this topic.
In S-TID M-LUN mode all LUs are accessible through all ports on the system with
the same LUN number on each port. You can use this mode in environments
where the system is not being shared between a host and a SAN Volume
Controller system.
If a system is shared between a host and a SAN Volume Controller system, you
must use M-TID M-LUN mode. Configure the system so that each LU that is
exported to the SAN Volume Controller system can be identified by a unique LUN.
The LUN must be the same on all ports through which the LU can be accessed.
A SAN Volume Controller system can access controller ports x and y. The system
also sees an LU on port x that has LUN number p. In this situation the following
conditions must be met:
v The system must see either the same LU on port y with LUN number p or it
must not see the LU at all on port y.
v The LU cannot appear as any other LUN number on port y.
v The LU must not be mapped to any system port that is zoned for use directly by
a host in a configuration where the system is shared between a host and a
clustered system.
The information in this section also applies to the supported models of the HP XP
and the Sun StorEdge series.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Web server
The HDS USP and NSC use the Storage Navigator as the main configuration GUI.
The Storage Navigator GUI runs on the SVP and is accessed through a web
browser.
Logical units and target ports on the HDS USP and NSC
Logical units (LUs) that are exported by the HDS USP and NSC report
identification descriptors in the vital product data (VPD). The SAN Volume
Controller uses the LUN associated binary type-3 IEEE Registered Extended
descriptor to identify the LU.
The HDS USP and NSC do not use LU groups so all LUs are independent. The LU
access model is active-active and does not use preferred access ports. Each LU can
be accessed from any target port that is mapped to the LU. Each target port has a
unique WWPN and worldwide node name (WWNN). The WWPN matches the
WWNN on each port.
Note: You must wait until the LU is formatted before presenting it to the SAN
Volume Controller.
Special LUs
The HDS USP and NSC can use any logical device (LDEV) as a Command Device.
Command Devices are the target for HDS USP or NSC copy service functions.
Therefore, do not export Command Devices to a SAN Volume Controller.
The SAN Volume Controller can be connected to the HDS USP or NSC with the
following restrictions:
v If an LU is mapped to a SAN Volume Controller port as LUN x, the LU must
appear as LUN x for all mappings to target ports.
v Only Fibre Channel connections can be used to connect a SAN Volume
Controller to the HDS USP or NSC system.
v Because the SAN Volume Controller limits the number of worldwide node
names (WWNNs) for each storage system and the HDS USP and NSC present a
separate WWNN for each port, the number of target ports that the SAN Volume
Controller can resolve as one storage system is limited. Perform the following
steps to provide connections to more target ports:
Note: The HDS USP and NSC systems present themselves to a SAN Volume
Controller clustered system as separate controllers for each port zoned to the SAN
Volume Controller. For example, if one of these storage systems has 4 ports zoned
to the SAN Volume Controller, each port appears as a separate controller rather
than one controller with 4 WWPNs. In addition, a given logical unit (LU) must be
mapped to the SAN Volume Controller through all controller ports zoned to the
SAN Volume Controller using the same logical unit number (LUN).
Controller splitting
You can split the HDS USP or NSC between other hosts and the SAN Volume
Controller under the following conditions:
v A host cannot be simultaneously connected to both an HDS USP or NSC and a
SAN Volume Controller.
v Port security must be enabled for target ports that are shared.
v An LU that is mapped to a SAN Volume Controller cannot be simultaneously
mapped to another host.
Note: Sun StorEdge systems are not supported to host SAN Volume Controller
quorum disks.
The SAN Volume Controller clustered system uses a quorum disk to store
important system configuration data and to break a tie in the event of a SAN
failure. The system automatically chooses three managed disks (MDisks) as
quorum disk candidates. Each disk is assigned an index number: either 0, 1, or 2.
Although a system can be configured to use up to three quorum disks, only one
quorum disk is elected to resolve a tie-break situation. The purpose of the other
quorum disks is to provide redundancy if a quorum disk fails before the system is
partitioned.
To host any of the three quorum disks on these HDS TagmaStore USP, HP
XP10000/12000, or NSC55 storage systems, ensure that each of the following
conditions has been met:
To host any of the three quorum disks on these HDS TagmaStore USPv, USP-VM,
or HP XP20000/24000 systems, ensure that each of the following requirements
has been met:
v Firmware version Main 60-04-01-00/02 or later is running. Contact HDS or HP
support for details on installing and configuring the correct firmware version.
v Host Option 39 is enabled. Contact HDS or HP support for details on Host
Option 39.
Note: This must be applied to the HDS or HP host group that is used for SAN
Volume Controller.
v All SAN Volume Controller ports are configured in a single HDS or HP host
group.
After you have verified these requirements for the appropriate storage system,
complete the following steps on the SAN Volume Controller command-line
interface to set the quorum disks:
1. Issue the chcontroller command:
chcontroller -allowquorum yes controller_id or controller_name
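For example, if the lscontroller command shows the storage system as controller ID 3 (an illustrative value), the command might be entered as follows:
chcontroller -allowquorum yes 3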
Attention: Failure to meet these conditions or to follow these steps can result in
data corruption.
The Support for SAN Volume Controller (2145) website provides current
information about quorum support:
www.ibm.com/storage/support/2145
The following advanced system functions for HDS USP and NSC are not
supported for disks that are managed by the SAN Volume Controller:
v TrueCopy
v ShadowImage
v Extended Copy Manager
v Extended Remote Copy
v NanoCopy
v Data migration
v RapidXchange
v Multiplatform Backup Restore
v Priority Access
v HARBOR File-Level Backup/Restore
v HARBOR File Transfer
v FlashAccess
All advanced SAN Volume Controller functions are supported on logical units (LUs)
that are exported by the HDS USP or NSC system.
LU Expansion
The HDS USP and NSC support Logical Unit Expansion (LUSE). LUSE is not a
concurrent operation. LUSE allows you to create a single LU by concatenating
logical devices (LDEVs). Before LUSE can be performed, the LDEVs must be
unmounted from hosts and paths must be removed.
Attention:
1. LUSE destroys all data that exists on the LDEV.
2. Do not perform LUSE on any LDEV that is used to export an LU to a SAN
Volume Controller.
If data exists on an LDEV and you want to use image mode migration to import
the data to a SAN Volume Controller, do not perform LUSE on the disk before you
import the data.
LUs that are created using LUSE can be exported to a SAN Volume Controller.
Virtual LVI/LUNs
The HDS USP and NSC support Virtual LVI/LUNs (VLL). VLL is not a concurrent
operation. VLL allows you to create several LUs from a single LDEV. You can only
create new LUs from free space on the LDEV.
Attention: Do not perform VLL on disks that are managed by the SAN Volume
Controller.
LUs that are created using VLL can be exported to a SAN Volume Controller.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Important: A Hitachi Data Systems (HDS) Field Engineer must perform all
maintenance operations.
In-band configuration
Disable the system command LUN when you use the user interface applications.
The Storage Navigator Modular (SNM) is the primary user interface application for
configuring Hitachi TagmaStore AMS 2000 family of systems. Use SNM to upgrade
firmware, change settings, and to create and monitor storage.
HiCommand is another configuration user interface that is available for the Hitachi
TagmaStore AMS 2000 family of systems. You must have access to SNM to use
HiCommand to configure settings. HiCommand only allows basic creation of
storage and provides some monitoring features.
Web server
A web server runs on each of the controllers on the system. During normal
operation, the user interface only provides basic monitoring of the system and
displays an event log. If you put a controller into diagnostic mode by pressing the
reset button on the controller, the user interface provides firmware upgrades and
system configuration resets.
Switch zoning
The Hitachi TagmaStore AMS 2000 family of systems present themselves to a SAN
Volume Controller clustered system as separate storage systems for each port
zoned to the SAN Volume Controller. For example, if one of these storage systems
has four ports zoned to the SAN Volume Controller, each port appears as a
separate storage system rather than one storage system with four WWPNs. In
addition, a given logical unit (LU) must be mapped to the SAN Volume Controller
through all storage system ports that are zoned to the SAN Volume Controller
using the same logical unit number (LUN).
Supported topologies
You can connect a maximum of 16 Hitachi TagmaStore AMS 2000 family of
systems ports to the SAN Volume Controller system without any special zoning
requirements.
You can use the chquorum CLI command or the management GUI to select quorum
disks.
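A minimal chquorum sketch follows; the MDisk ID (5) and quorum index (2) are illustrative values, and the parameters that chquorum accepts can vary by software version:
chquorum -mdisk 5 2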
For example, when using Hitachi TagmaStore AMS 2000 family of systems, select
Windows 2003.
Advanced copy functions for Hitachi TagmaStore AMS 2000 family of systems are
not supported for disks that are managed by the SAN Volume Controller systems
because the copy function does not extend to the SAN Volume Controller cache.
For example, ShadowImage, TrueCopy, and HiCopy are not supported.
LUN Security
LUN Security enables LUN masking by the worldwide node name (WWNN) of the
initiator port. This function is not supported for logical units (LUs) that are used
by SAN Volume Controller systems.
Partitioning
Partitioning splits a RAID into up to 128 smaller LUs, each of which serves as an
independent disk-like entity. The SAN Volume Controller system and Hitachi
TagmaStore AMS 2000 family of systems support the partitioning function.
The Hitachi TagmaStore AMS 2000 family of systems allow the last LU that is
defined in a RAID group to be expanded. This function is not supported when
these storage systems are attached to a SAN Volume Controller system. Do not
perform dynamic array expansion on LUs that are in use by a SAN Volume
Controller system.
Note: Use in this context means that the LU has a LUN number that is associated
with a Fibre Channel port, and this Fibre Channel port is contained in a switch
zone that also contains SAN Volume Controller Fibre Channel ports.
The Hitachi TagmaStore AMS 2000 family of systems support host storage domains
(HSD) and virtual Fibre Channel ports. Each Fibre Channel port can support
multiple HSDs. Each host in a given HSD is presented with a virtual target port
and a unique set of LUNs.
For example, the Storage Navigator Modular GUI enables you to create LUN A,
delete LUN A, and then create LUN B with the same unique ID as LUN A. If a
SAN Volume Controller clustered system is attached, data corruption can occur
because the system might not realize that LUN B is different than LUN A.
Attention: Before you use the Storage Navigator Modular GUI to delete a LUN,
remove the LUN from the storage pool that contains it.
To prevent the existing LUNs from rejecting I/O operations during the dynamic
addition of LUNs, perform the following procedure to add LUNs:
1. Create the new LUNs using the Storage Navigator Modular GUI.
2. Quiesce all I/O operations.
3. Perform either an offline format or an online format of all new LUNs on the
controller using the Storage Navigator Modular GUI. Wait for the format to
complete.
4. Go into the LUN mapping function of the Storage Navigator Modular GUI.
Add mapping for the new LUN to all of the storage system ports that are
available to the SAN Volume Controller system on the fabric.
5. Restart the storage system (Model 9200 only).
6. After the storage system has restarted, restart I/O operations.
If LUN mapping is used as described in the LUN mapping topic, you must restart
the controller to pick up the new LUN mapping configuration. For each storage
pool that contains an MDisk that is supported by an LU on the system, all
volumes in those storage pools go offline.
There are no available options within the scope of a single storage system.
Table 62. Hitachi TagmaStore AMS 2000 family of systems port settings supported by the SAN Volume Controller

Option                                      Default setting    SAN Volume Controller required setting

Port settings
Mapping mode                                On                 On
Port type                                   Fibre              Fibre
Reset LIP mode (signal)                     Off                Off
Reset LIP mode (process)                    Off                Off
LIP port all reset mode                     Off                Off

Host group list
Host connection mode 1                                         Windows
HostGroupName                               "G000"             "G000"
Middleware                                  Unsupported        Unsupported

Host system configuration
Platform                                                       Windows
HostGroupName                               "G000"             "G000"
Middleware                                  Unsupported        Unsupported

Host group information settings
HostGroupNumber                             0                  0
HostGroupName                               "G000"             "G000"

Host group options
Host connection mode 1                      Standard mode      Standard mode
Host connection mode 2                      Off                Off
HP-UX mode                                  Off                Off
PSUE read reject mode                       Off                Off
Mode parameters changed notification mode   Off                Off
NACA mode (AIX only)                        Off                Off
Task management isolation mode              Off                Off
Unique reserve mode 1                       Off                Off
Port-ID conversion mode                     Off                Off
Tru cluster mode                            Off                Off
Product serial response mode                Off                Off
Logical unit settings for the Hitachi TagmaStore AMS 2000 family
of systems
Logical unit (LU) settings apply to individual LUs that are configured in the
Hitachi TagmaStore AMS 2000 family of systems.
You must configure the system's LUs as described in Table 63 if the logical unit
number (LUN) is associated with ports in a switch zone that is accessible to the
SAN Volume Controller clustered system.
Table 63. Hitachi TagmaStore AMS 2000 family of systems LU settings for the SAN Volume Controller

Option                                              Default setting    SAN Volume Controller required setting

LUN management information
Security                                            Off                Off
  Note: LUN Security enables LUN masking by the worldwide node name (WWNN) of the
  initiator port. This function is not supported for logical units (LUs) that are used by
  SAN Volume Controller systems.
LU mapping                                          One-to-one         One-to-one

LAN management options
Maintenance port IP address automatic change mode   Off                Off
IPv4 DHCP                                           Off                Off
IPv6 address setting mode                           Auto               Auto
Negotiation                                         Auto               Auto
Note: These settings only apply to LUs that are accessible by the SAN Volume
Controller system.
Scenario 1: The configuration application enables you to change the serial number
for an LU. Changing the serial number also changes the unique user identifier
(UID) for the LU. Because the serial number is also used to determine the WWPN
of the controller ports, two LUNs cannot have the same unique ID on the same
SAN because two controllers cannot have the same WWPN on the same SAN.
Scenario 2: The serial number is also used to determine the WWPN of the
controller ports. Therefore, two LUNs must not have the same ID on the same
SAN because this results in two controllers having the same WWPN on the same
SAN. This is not a valid configuration.
Attention: Do not change the serial number for an LU that is managed by a SAN
Volume Controller system because this can result in data loss or undetected data
corruption.
Attention: The Hitachi TagmaStore AMS 2000 family of systems do not provide
an interface that enables a SAN Volume Controller clustered system to detect and
ensure that the mapping or masking and virtualization options are set properly.
Therefore, you must ensure that these options are set as described in this topic.
In S-TID M-LUN mode all LUs are accessible through all ports on the system with
the same LUN number on each port. You can use this mode in environments
where the system is not being shared between a host and a SAN Volume
Controller system.
If a system is shared between a host and a SAN Volume Controller system, you
must use M-TID M-LUN mode. Configure the system so that each LU that is
exported to the SAN Volume Controller system can be identified by a unique LUN.
The LUN must be the same on all ports through which the LU can be accessed.
Example
A SAN Volume Controller system can access controller ports x and y. The system
also sees an LU on port x that has LUN number p. In this situation the following
conditions must be met:
v The system must see either the same LU on port y with LUN number p or it
must not see the LU at all on port y.
v The LU cannot appear as any other LUN number on port y.
www.ibm.com/storage/support/2145
The HP 3PAR F-Class (Models 200 and 400) and the HP 3PAR T-Class (Models 400
and 800) are supported for use with Storwize V7000. These systems will be
referred to as HP 3PAR storage arrays.
www.ibm.com/storage/support/2145
The management console accesses the array via the IP address of the HP 3PAR
storage array. All configuration and monitoring steps are intuitively available
through this interface.
The CLI may be installed locally on a Microsoft Windows or Linux host. The CLI
is also available through Secure Shell (SSH).
For clarification, partitions in the HP 3PAR storage array are exported as Virtual
Volumes with a Virtual Logical Unit Number (VLUN) either manually or
automatically assigned to the partition.
LUNs
HP 3PAR storage arrays have highly developed thin provisioning capabilities. The
HP 3PAR storage array has a maximum Virtual Volume size of 16TB. A partition
Virtual Volume is referenced by the ID of the VLUN.
HP 3PAR storage arrays can export up to 4096 LUNs to the SAN Volume
Controller (SAN Volume Controller maximum limit). The largest Logical Unit size
supported by SAN Volume Controller under PTF 6.2.0.4 is 2TB. SAN Volume
Controller will not display or exceed this capacity.
LUN IDs
HP 3PAR storage arrays will identify exported Logical Units through SCSI
Identification Descriptor type 3.
The 64-bit IEEE Registered Identifier (NAA=5) for the Logical Unit is in the form
5-OUI-VSID.
The 3PAR IEEE Company ID is 0020ACh; the rest is a vendor-specific ID (for
example, 50002AC000020C3A).
Virtual Volumes (VVs) and their corresponding Logical Units (VLUNs) are created,
modified, or deleted through the provisioning option in the Management Console
or through the CLI commands. VVs are formatted to all zeros upon creation.
LUN presentation
VLUNs are exported through the HP 3PAR storage array’s available FC ports using
the export options on Virtual Volumes. The Ports are designated at setup and
configured separately as either Host or Target (Storage connection), with ports
identified by a node : slot : port representation.
There are no constraints on the ports or hosts to which a logical unit may be addressed.
To apply Export to a logical unit, complete the following steps:
1. Highlight the Virtual Volume that is associated with the Logical Unit.
2. Select Export.
Special LUNs
There are no special considerations for Logical Unit numbering. LUN 0 may be
exported where necessary.
An HP 3PAR storage array may contain dual-ported or quad-ported FC cards, or
both. Each WWPN is identified with the pattern 2N:SP:00:20:AC:MM:MM:MM, where
N is the node (the controller's address), S is the slot, and P is the port number on
the controller. The MM:MM:MM represents the system's serial number.
For example, port 2 in slot 1 of controller 0 would have the World Wide Port Name
(WWPN) 20:12:00:02:AC:00:0C:3A. The last four digits of serial number 1303130 are
3130, which is 0x0C3A in hexadecimal. This system has a World Wide Node Name
(WWNN) of 2F:F7:00:02:AC:00:0C:3A for all ports.
LU access model
All controllers are Active/Active. In all conditions, it is recommended to multipath
across FC controller cards to avoid an outage from controller failure. All HP 3PAR
controllers are equal in priority, so there is no benefit to using an exclusive set for
a specific LU.
LU grouping
There are no preferred access ports on the HP 3PAR storage arrays as all ports are
Active/Active across all controllers.
Detecting Ownership
Fabric zoning
When zoning an HP 3PAR storage array to the SAN Volume Controller backend
ports, be sure that there are multiple zones, or multiple HP 3PAR storage array ports
and SAN Volume Controller ports per zone, to enable multipathing.
The HP 3PAR storage array may support LUN masking to enable multiple servers
to access separate LUNs through a common controller port. There are no issues
with mixing workloads or server types in this setup.
Host splitting
Controller splitting
HP 3PAR storage array LUNs that are mapped to the Storwize V7000 cluster
cannot be mapped to other hosts. LUNs that are not presented to Storwize V7000
may be mapped to other hosts.
The management console enables the intuitive setup of the HP 3PAR storage array
LUNs and export to the Storwize V7000 clustered system.
From the HP 3PAR storage array Management Console, the following dialogs are
involved in setting up Logical Units:
Note: If Tiering is to be utilized, it is not good practice to mix LUNs with different
performance in the same SAN Volume Controller MDiskgrp.
Setup of ports
Setup of host
Host Persona should be: 6 – Generic Legacy. All SAN Volume Controller ports
need to be included.
To export LUNs to SAN Volume Controller, select the host definition that was
created for SAN Volume Controller.
The host option that is required to present the HP 3PAR storage array to SAN
Volume Controller systems is 6 - Legacy Controller.
The Storwize V7000 clustered system selects disks that are presented by the HP
3PAR storage array as quorum disks. To maintain availability with the system,
ideally each quorum disk should reside on a separate disk subsystem.
You cannot use the HP 3PAR storage array to clear SCSI reservations and
registrations on volumes that are managed by SAN Volume Controller. The option
is not available in the GUI.
Note: The setvv -clrsv command should only be used under qualified
supervision.
Note: When you configure a SAN Volume Controller clustered system to work
with an HP MA or EMA, you must not exceed the limit of 96 process logins.
Procedure
1. Verify that the front panel of the SAN Volume Controller is clear of errors.
2. Ensure that the HP StorageWorks Operator Control Panel (OCP) on each
system is clear of errors. The Operator Control Panel consists of seven green
LEDs at the rear of each HSG80 controller.
3. Ensure that you can use an HP StorageWorks command-line interface (CLI) to
configure the HSG80 controllers.
4. Issue the SHOW THIS command and SHOW OTHER command to verify
these items:
a. Ensure that the system firmware is at a supported level. See this website
for the latest firmware support:
www.ibm.com/storage/support/2145.
b. Ensure that the controllers are configured for MULTIBUS FAILOVER with
each other.
Note: To verify, there should be no orange lights on any disks in the system.
7. Issue the SHOW UNITS FULL command to verify these items:
a. Ensure that all LUNs are set to RUN and NOWRITEPROTECT.
b. Ensure that all LUNs are ONLINE to either THIS or OTHER controller.
c. Ensure that all LUNs that are to be made available to the SAN Volume
Controller have ALL access.
d. Ensure that all LUNs do not specify Host Based Logging.
8. Issue the SHOW CONNECTIONS FULL command to verify that you have
enough spare entries for all combinations of SAN Volume Controller ports and
HP MA or EMA ports.
9. Connect up to four Fibre Channel cables between the Fibre Channel switches
and the HP MA or EMA system.
10. Ensure that the Fibre Channel switches are zoned so that the SAN Volume
Controller and the HP MA or EMA system are in a zone.
11. Issue the SHOW THIS command and SHOW OTHER command to verify
that each connected port is running. This example is similar to the output that
is displayed: PORT_1_TOPOLOGY=FABRIC.
12. Issue the SHOW CONNECTIONS FULL command to verify that the new
connections have been created for each SAN Volume Controller port and HP
MA or EMA port combination.
13. Verify that No rejected hosts is displayed at the end of the SHOW
CONNECTIONS output.
14. Perform these steps from the SAN Volume Controller command-line interface
(CLI):
a. Issue the detectmdisk CLI command to discover the storage system.
b. Issue the lscontroller CLI command to verify that the two serial numbers
of each HSG80 controller in the storage system appear under the ctrl_s/n
(controller serial number) column in the output. The serial numbers appear
as a single concatenated string.
c. Issue the lsmdisk CLI command to verify that additional MDisks are displayed
that correspond to the UNITS shown in the HP MA or EMA system. A minimal
example of this command sequence follows.
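No parameters are required for these commands; the sketch simply discovers the storage system, lists the controllers, and lists the MDisks:
detectmdisk
lscontroller
lsmdisk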
Results
You can now use the SAN Volume Controller CLI commands to create a storage
pool. You can also create and map volumes from these storage pools. Check the
front panel of the SAN Volume Controller to ensure that there are no errors. After
the host has reloaded the Fibre Channel driver, you can perform I/O to the
volumes. For more details, see the host attachment information.
Attention: The SAN Volume Controller only supports configurations in which the
HSG80 cache is enabled in writeback mode. Running with only a single controller
results in a single point of data loss.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Note: Concurrent upgrade of the system firmware is not supported with the SAN
Volume Controller.
The configuration and service utility can connect to the system in the following
ways:
v RS232 interface
v In-band over Fibre Channel
v Over TCP/IP to a proxy agent, which then communicates with the system
in-band over Fibre Channel.
For the Command Console to communicate with the HSG80 controllers, the host
that runs the service utility must be able to access the HSG80 ports over the SAN.
This host can therefore also access LUs that are visible to SAN Volume Controller
nodes and cause data corruption. To avoid this, set the UNIT_OFFSET option to
199 for all connections to this host. This ensures that the host is able to recognize
only the CCL.
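For example, for a connection that the SHOW CONNECTIONS FULL command lists as !NEWCON01 (an illustrative connection name), the offset might be set as follows:
SET !NEWCON01 UNIT_OFFSET=199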
Attention: The HP MA and EMA systems are supported with a single HSG80
controller or dual HSG80 controllers. Because the SAN Volume Controller only
supports configurations in which the HSG80 cache is enabled in write-back mode,
running with a single HSG80 controller results in a single point of data loss.
Switch zoning
For SAN Volume Controller clustered systems that have installed software version
1.1.1, a single Fibre Channel port that is attached to the system can be present in a
switch zone that contains SAN Volume Controller Fibre Channel ports.
For SAN Volume Controller systems that have software version 1.2.0 or later
installed, switches can be zoned so that HSG80 controller ports are in the switch
zone that contains all of the ports for each SAN Volume Controller node.
Multiple ports from an HSG80 controller must be physically connected to the Fibre
Channel SAN to enable servicing of the HP MA or EMA system. However, switch
zoning must be used as described in this topic.
Note: If the HP Command Console is not able to access a Fibre Channel port on
each of the HSG80 controllers in a two-controller system, there is a risk of an
undetected single point of failure.
The SAN Volume Controller uses a logical unit (LU) that is presented by an HSG80
controller as a quorum disk. The quorum disk is used even if the connection is by
a single port, although this is not recommended. If you are connecting the HP MA
or EMA system with a single Fibre Channel port, ensure that you have another
system on which to put your quorum disk. You can use the chquorum
command-line interface (CLI) command to move quorum disks to another system.
SAN Volume Controller clustered systems that are attached only to the HSG80
controllers are supported.
Advanced copy functions for HP MA and EMA systems (for example, SnapShot
and RemoteCopy) are not supported for disks that are managed by the SAN
Volume Controller because the copy function does not extend to the SAN Volume
Controller cache.
Partitioning
Write protection of LUNs is not supported for use with the SAN Volume
Controller.
Attention: A JBOD provides no redundancy at the physical disk-drive level. A single
disk failure can result in the loss of an entire storage pool and its associated volumes.

LU type             Members    Maximum capacity
Mirrorset           2 - 6      Smallest member
RAIDset             3 - 14     1.024 terabytes
Stripeset           2 - 24     1.024 terabytes
Striped Mirrorset   2 - 48     1.024 terabytes
Note: LUs can be created and deleted on an HSG80 controller while I/O
operations are performed to other LUs. You do not need to restart the HP MA or
EMA subsystem.
The following table lists the global settings for HP MA and EMA systems:
Table 65. HP MA and EMA global settings supported by the SAN Volume Controller

Option                  HSG80 controller default setting    SAN Volume Controller required setting
DRIVE_ERROR_THRESHOLD   800                                 Default
FAILEDSET               Not defined                         n/a
Table 66 describes the options that can be set by HSG80 controller command-line
interface (CLI) commands for each HSG80 controller.
Table 66. HSG80 controller settings that are supported by the SAN Volume Controller

Option                 HSG80 controller default setting    SAN Volume Controller required setting
ALLOCATION_CLASS       0                                   Any value
CACHE_FLUSH_TIME       10                                  Any value
COMMAND_CONSOLE_LUN    Not defined                         Any value
CONNECTIONS_UNLOCKED   CONNECTIONS_UNLOCKED                CONNECTIONS_UNLOCKED
NOIDENTIFIER           Not defined                         No identifier
MIRRORED_CACHE         Not defined                         Mirrored
MULTIBUS_FAILOVER      Not defined                         MULTIBUS_FAILOVER
NODE_ID                Worldwide name as on the label      Default
PROMPT                 None                                Any value
REMOTE_COPY            Not defined                         Any value
SCSI_VERSION           SCSI-2                              SCSI-3
SMART_ERROR_EJECT      Disabled                            Any value
TERMINAL_PARITY        None                                Any value
TERMINAL_SPEED         9600                                Any value
TIME                   Not defined                         Any value
Restriction: Only one port per HSG80 pair can be used with the SAN Volume
Controller.
Table 67 lists the HSG80 controller port settings that the SAN Volume Controller
supports:
Table 67. HSG80 controller port settings supported by the SAN Volume Controller

Option              HSG80 default setting    SAN Volume Controller required setting
PORT_1/2-AL-PA      71 or 72                 Not applicable
PORT_1/2_TOPOLOGY   Not defined              FABRIC
Note: The HP MA and EMA systems support LUN masking that is configured
with the SET unit number ENABLE_ACCESS_PATH command. When used with a
SAN Volume Controller, the access path must be set to all ("SET unit number
ENABLE_ACCESS_PATH=ALL") and all LUN masking must be handled
exclusively by the SAN Volume Controller. You can use the SHOW
CONNECTIONS FULL command to check access rights.
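For example, for a unit named D1 (an illustrative unit name), the access path might be opened to all connections as follows:
SET D1 ENABLE_ACCESS_PATH=ALL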
Table 68 describes the options that must be set for each LU that is accessed by the
SAN Volume Controller. LUs that are accessed by hosts can be configured
differently.
Table 68. HSG80 controller LU settings supported by the SAN Volume Controller

Option                    HSG80 controller default setting    SAN Volume Controller required setting
TRANSFER_RATE_REQUESTED   20MHZ                               Not applicable
Table 69 lists the default and required HSG80 controller connection settings:
Table 69. HSG80 connection default and required settings

Option              HSG80 controller default setting    HSG80 controller required setting
OPERATING_SYSTEM    Not defined                         WINNT
RESERVATION_STYLE   CONNECTION_BASED                    Not applicable
UNIT_OFFSET         0                                   0 or 199
LUN masking
Note: The SAN Volume Controller ports must not be in the REJECTED_HOSTS
list. This list can be seen with the SHOW CONNECTIONS FULL command.
You cannot use LUN masking to restrict the initiator ports or the target ports that
the SAN Volume Controller uses to access LUs. Configurations that use LUN
masking in this way are not supported. LUN masking can be used to prevent other
initiators on the SAN from accessing LUs that the SAN Volume Controller uses,
but the preferred method for this is to use SAN zoning.
LU virtualization
The HP MA and EMA subsystems also provide LU virtualization by the port and
by the initiator. This is achieved by specifying a UNIT_OFFSET for the connection.
The use of LU virtualization for connections between the HSG80 controller target
ports and SAN Volume Controller initiator ports is not supported.
See the following website for specific HP EVA firmware levels and the latest
supported hardware:
www.ibm.com/storage/support/2145
Fabric zoning
The SAN Volume Controller switch zone must include at least one target port from
each HSV controller in order to have no single point of failure.
EVA VDisks are created and deleted using the Command View EVA utility.
Note: A VDisk is formatted during the creation process; therefore, the capacity of
the VDisk will determine the length of time it takes to be created and formatted.
Ensure that you wait until the VDisk is created before you present it to the SAN
Volume Controller.
A single VDisk can consume the entire disk group capacity or the disk group can
be used for multiple VDisks. The amount of disk group capacity consumed by a
VDisk depends on the VDisk capacity and the selected redundancy level. There are
three redundancy levels:
v Vraid 1 - High redundancy (mirroring)
v Vraid 5 - Moderate redundancy (parity striping)
v Vraid 0 - No redundancy (striping)
Volumes are formatted during creation. The time it takes to format the volumes
depends on the capacity.
Note: All nodes and ports in the SAN Volume Controller clustered system must be
represented as one host to the HP EVA.
Special LUs
The Console LU is a special volume that represents the SCSI target device. It is
presented to all hosts as LUN 0.
The Command View EVA system communicates in-band with the HSV controllers.
Table 70 lists the system options that you can access using the Command View
EVA.
Table 70. HP StorageWorks EVA global options and required settings

Option                   HP EVA default setting    SAN Volume Controller required setting
Console LUN ID           0                         Any
Disk replacement delay   1                         Any
Table 71 describes the options that must be set for each LU that is accessed by the
SAN Volume Controller. LUs that are accessed by other hosts can be configured differently.
Table 71. HP StorageWorks EVA LU options and required settings

Option           HP EVA default setting         SAN Volume Controller required setting
Capacity         None                           Any
Write cache      Write-through or Write-back    Write-back
Read cache       On                             On
Redundancy       Vraid0                         Any
Preferred path   No preference                  No preference
Write protect    Off                            Off
Table 72 on page 292 lists the host options and settings that can be changed using
the Command View EVA.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
You can use the following configuration utilities with HP MSA1000 or MSA1500
systems in a SAN Volume Controller environment:
v The CLI through an out-of-band configuration that is accessed through a host
that is connected to the serial port of the HP MSA1000 or MSA1500.
v The GUI through an in-band configuration that uses the HP Array Configuration
Utility (ACU).
Notes:
1. If the HP ACU is installed in a configuration that HP does not support, some
of its functionality might not be available.
2. If you use an in-band configuration, you must ensure that LUs that are used
by the SAN Volume Controller cannot be accessed by a direct-attached host.
All stripe sizes are supported; however, use a consistent stripe size for the HP
StorageWorks MSA.
Note: If you are using the CLI, use the cache=enabled setting.
Set the Selective Storage Presentation (SSP), also known as ACL, to enabled.
You can use the built-in Linux profile or Default profile to set the host profile
settings. If you use the Default profile, you must issue the following Serial port
CLI command to change the host profile settings:
change mode Default mode number
where mode number is the numeric value for the mode that you want to change.
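For example, if the mode that you want is mode number 2 (an illustrative value; see the HP StorageWorks MSA documentation for the actual mode numbers), enter:
change mode Default 2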
See the documentation that is provided for your HP StorageWorks MSA for
additional information.
You can use the standard migration procedure to migrate logical units from the HP
StorageWorks MSA to the SAN Volume Controller system with the following
restrictions:
v You cannot share the HP StorageWorks MSA between a host and the SAN
Volume Controller system. You must migrate all hosts at the same time.
v The subsystem device driver (SDD) and securepath cannot coexist because they
have different QLogic driver requirements.
v The QLogic driver that is supplied by HP must be removed and the driver that
is supported by IBM must be installed.
The following table lists the global settings for an HP MSA system:
www.ibm.com/storage/support/2145
For SAN Volume Controller version 4.3.1.7, only the MSA2000fc dual-controller
model is supported, configured with each controller module attached to both
fabrics. For details, refer to the HP StorageWorks Modular Model User Guide section
on connecting two data hosts through two switches where all four ports must be
used and cross-connected to both SAN fabrics.
For the supported firmware levels and hardware, see the following website:
www.ibm.com/storage/support/2145
To access the MSA2000 system initially you can go through either a serial interface
or Dynamic Host Configuration Protocol (DHCP). You can also configure user
access and privileges.
The SMU is a web-based GUI that runs on each controller that is accessible
through the IP address of each controller. All management and monitoring tasks
can be completed on each controller.
The CLI is accessible through Secure Shell (SSH), Telnet, and serial port. The CLI
includes all functionality that is available in the GUI.
The controller calls an array a virtual disk (VDisk). SAS and SATA disks cannot be
mixed within a VDisk, and the maximum number of VDisks per controller is 16.
VDisks can be divided into volumes, which are then presented to the host. There
can be up to 128 volumes per controller. The capacity of a volume is between 1 MB
and 16 TB.
LUN IDs
Note: The Expose to all hosts option can cause confusion in multisystem
environments.
v LUN assignments
You can also modify, expand, or delete a volume or VDisk using either the SMU or
the CLI.
Note: Before you delete the LUN on the MSA2000 system, use the rmmdisk
command to remove the corresponding MDisk from its storage pool on the SAN
Volume Controller clustered system.
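A minimal sketch follows; the MDisk name mdisk5 and the storage pool name Pool1 are illustrative:
rmmdisk -mdisk mdisk5 Pool1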
LUN presentation
You can also use the SMU or the CLI to map and unmap MSA2000 LUNs.
To map a logical unit (volume from a VDisk), from the SMU complete these steps:
1. In the Storage Management Utility SMU interface, go to Manage > Volume
Management > VDisk or Volume > Volume Mapping.
2. Under the section Assign Host Access Privileges, select Map Host to Volume.
3. For each SAN Volume Controller WWPN, select SVC WWPN in the HOST
WWN-Name menu.
4. Enter the LUN number to present to the SAN Volume Controller. For example,
use 0 for the first volume, then use 1 for the second, until all volumes are
assigned.
5. Select read-write for the Port 0 Access and Port 1 Access.
6. Click Map it. The resulting mapping is displayed in the Current Host-Volume
Relationships section.
Important: Use this section to verify that the LUN ID is consistent and all SAN
Volume Controller WWPNs have been mapped.
Note: LUNs from controller module A and controller module B can have the same
LUN IDs (0). Controller module A and Controller module B appear on the SAN
Volume Controller system as separate controllers. Managed disks (MDisks) on the
system should be in separate storage pools so that each controller module has its
own separate storage pool for its presented MDisks.
Special LUNs
Volumes can have a LUN ID from 0 to 126 on each controller. LUN 0 on the
MSA2000 is visible from both controllers, but can only be used to access storage
from the preferred controller. LUN 0 on the other controller does not present
storage.
The MSA2000 system has two dual-active controllers with two ports each. You
must set these as point-to-point using the SMU interface.
In the Storage Management Utility SMU interface, go to Manage > General Config
> Host Port Configuration. Select Advanced Options and specify point to point for
Change Host Topology.
LU access model
The MSA2000 is a dual-active system. Each LUN has an owning controller, and
I/O is serviced only by ports on that controller. Failover automatically takes place
when one controller fails (shuts down). There is no way for SAN Volume
Controller to force failover.
LU grouping
The MSA system has two ports per controller. The I/O is through port 0, and port
1 is linked to port 0 of the other controller during a failure or a code upgrade.
Detecting ownership
The LUN is reported only by the target ports of the owning controller.
Failover
The only way to cause failover of LUs from one controller to the other is to shut
down one of the controllers. The MSA2000 system cannot normally present all the
system LUNs through both controllers. Therefore, it requires a four-port connection
to two SAN fabrics. Failover for MSA2000 systems involves the surviving controller
taking its ports offline, then returning with one of its ports, emulating the WWPNs
of the failed controller.
Note: This behavior also means that half of the operational paths from the
surviving controller are taken away when failover takes place, which allows the
port from the controller that is shutting down to be emulated.
Fabric zoning
Each SAN Volume Controller switch zone must include at least one target port
from each controller to have no single point of failure. This means, for example,
that the zone on the first fabric has Port 0 MSA Controller A with Port 1 of MSA
Controller B and the SAN Volume Controller ports. The zone on the second fabric
has Port 1 of MSA Controller A with Port 0 of MSA Controller B and the SAN
Volume Controller ports.
Target ports may not be shared between SAN Volume Controller and other hosts.
Host splitting
A single host must not be connected to SAN Volume Controller and an MSA2000
system simultaneously.
Controller splitting
MSA2000 system LUNs must be mapped only to the SAN Volume Controller
clustered system. The four target ports are all required for dual SAN-fabric
connections and cannot be shared.
Table 73 describes the port settings that are supported by the SAN Volume
Controller.
Table 73. MSA2000 system port settings for use with the SAN Volume Controller

Option                            Values (any limits on the possible values)    Description
Host Port Configuration           2 Gbps or 4 Gbps                              Set according to the fabric speed.
Internal Host Port Interconnect   Straight-through                              Set to Straight-through for a point-to-point Fibre Channel connection.
Host Port Configuration           Point-to-Point                                Set to Point-to-point for use with SAN Volume Controller.
The MSA volumes can be created when you create a VDisk (RAID 0 is not
supported), or they can be added to the VDisk later. LUNs can be configured in
16K, 32K, and 64K (default) chunks by using the advanced option. Table 74 on
page 301 describes the preferred options available when you create a logical unit
(LU).
There is no specific host option to present the MSA2000 systems to SAN Volume
Controller systems. Use Microsoft Windows 2003 as the host setting for SAN
Volume Controller.
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
The following table lists the access control methods that are available:
Method | Description
Port Mode | Allows access to logical units that you want to define on a per-storage-controller port basis. SAN Volume Controller visibility (such as through switch zoning or physical cabling) must allow the SAN Volume Controller system to have the same access from all nodes. The accessible controller ports must also be assigned the same set of logical units with the same logical unit number. This method of access control is not recommended for a SAN Volume Controller connection.
WWN Mode | Allows access to logical units by using the WWPN of each of the ports of an accessing host device. All WWPNs of all the SAN Volume Controller nodes in the same system must be added to the list of linked paths in the controller configuration. This list becomes the list of host (SAN Volume Controller) ports for an LD Set, or group of logical units. This method of access control allows sharing because different logical units can be accessed by other hosts.
Attention: You must configure NetApp FAS systems in single-image mode. SAN
Volume Controller does not support NetApp FAS systems that are in
multiple-image mode.
The information in this section also applies to the supported models of the IBM
N5000 series and the IBM N7000 series.
See the following website for the latest NetApp FAS models that can be used with
SAN Volume Controller systems:
www.ibm.com/storage/support/2145
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
See the documentation that is provided with your NetApp FAS system for more
information about the web server and CLI.
Web server
You can manage, configure, and monitor the NetApp FAS through the FilerView
GUI.
CLI
You can access the command-line interface through a direct connection to the filer
serial console port or by using the filer IP address to establish a telnet session.
LUs that are exported by the NetApp FAS system report identification descriptors
in the vital product data (VPD). The SAN Volume Controller clustered system uses
the LUN-associated binary type-3 IEEE Registered Extended descriptor to identify
the LU. For a NetApp LUN that is mapped to the SAN Volume Controller system,
set the LUN Protocol Type to Linux.
The NetApp FAS system does not use LU groups, so all LUs are independent.
The LU access model is active-active. Each LU has a preferred filer, but can be
accessed from either filer. The preferred filer contains the preferred access ports for
the LU. The SAN Volume Controller system detects and uses this preference.
The NetApp FAS reports a different worldwide port name (WWPN) for each port
and a single worldwide node name (WWNN).
Procedure
1. Log on to the NetApp FAS.
2. Go to Filer View and authenticate.
3. Click Volumes and identify a volume to use to create an LU. A list of volumes
is displayed.
4. Identify a volume that has sufficient free space for the LUN size that you want
to use.
5. Click LUNs on the left panel.
6. Click Add in the list.
7. Enter the following:
a. In the Path field, enter /vol/volx/lun_name where volx is the name of the
volume identified above and lun_name is a generic name.
b. In the LUN Protocol Type field, enter Linux.
c. Leave the Description field blank.
d. In the Size field, enter a LUN size.
e. In the Units field, specify the units for the LUN size.
f. Select the Space Reserved box.
Note: If the Space Reserved box is not selected and the file system is full,
the LUN goes offline. The storage pool also goes offline and you cannot
access the volumes.
g. Click Add.
Note: To check the LUN settings, go to the Manage LUNs section and click
the LUN you want to view. Ensure that the Space Reserved setting is set.
Procedure
1. Log on to the NetApp FAS.
2. Go to Filer View and authenticate.
3. Click LUNs on the left panel.
4. Click Manage. A list of LUNs is displayed.
5. Click the LUN that you want to delete.
6. Click Delete.
7. Confirm the LUN that you want to delete.
Procedure
1. Log on to the NetApp FAS.
2. Go to Filer View and authenticate.
3. Click LUNs on the left panel.
4. Click Initiator Groups.
5. Click Add in the list.
6. Enter the following:
a. In the Group Name field, enter the name of the initiator group or host.
b. In the Type list, select FCP.
c. In the Operating System field, select Linux.
d. In the Initiators field, enter the list of WWPNs of all the ports of the nodes
in the cluster that are associated with the host.
Note: Delete the WWPNs that are displayed in the list and manually enter
the list of SAN Volume Controller node ports. You must enter the ports of
all nodes in the SAN Volume Controller clustered system.
7. Click Add.
Procedure
1. Log on to the NetApp FAS.
2. Go to Filer View and authenticate.
3. Click LUNs on the left panel.
4. Click Manage. A list of LUNs is displayed.
5. Click the LUN that you want to map.
6. Click Map LUN.
7. Click Add Groups to Map.
8. Select the name of the host or initiator group from the list and click Add.
Notes:
a. You can leave the LUN ID section blank. A LUN ID is assigned based on
the information the controllers are currently presenting.
b. If you are re-mapping the LUN from one host to another, you can also
select the Unmap box.
9. Click Apply.
Fabric zoning
The SAN Volume Controller switch zone must include at least one target port from
each filer to avoid a single point of failure.
Target ports can be shared between the SAN Volume Controller system and other
hosts. However, you must define separate initiator groups (igroups) for the SAN
Volume Controller initiator ports and the host ports.
Host splitting
A single host must not be connected to both the SAN Volume Controller system
and the NetApp FAS, to avoid the possibility of an interaction between
multipathing drivers.
Controller splitting
You can connect other hosts directly to both the NetApp FAS and the SAN Volume
Controller system under the following conditions:
v Target ports are dedicated to each host or are in a different igroup than the SAN
Volume Controller system
v LUNs that are in the SAN Volume Controller system igroup are not included in
any other igroup
The SAN Volume Controller supports concurrent maintenance on the NetApp FAS.
See the following website for the latest Nexsan SATABeast models that can be used
with SAN Volume Controller systems:
www.ibm.com/storage/support/2145
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
The minimum supported SAN Volume Controller level for the attachment of
Nexsan SATABeast is 5.1.0.3.
Creating arrays
You create and configure volumes in the Configure Volumes section of the GUI.
You can use the standard migration procedure to migrate logical units from the
Nexsan SATABeast to the SAN Volume Controller clustered system.
See the following website for the latest models that can be used with SAN Volume
Controller systems:
www.ibm.com/storage/support/2145
See the following website for specific firmware levels and the latest supported
hardware:
www.ibm.com/storage/support/2145
Because some maintenance operations restart the Pillar Axiom system, you cannot
perform hardware maintenance or firmware upgrades while the system is attached
to a SAN Volume Controller clustered system.
The AxiomONE Storage Services Manager is a browser-based GUI that allows you
to configure, manage, and troubleshoot Pillar Axiom systems.
The Pillar Data Systems command-line interface (CLI) communicates with the
system through an XML-based application programming interface (API) over a
TCP/IP network. The Pillar Data Systems CLI is installed through the AxiomONE
Storage Services Manager. You can use the Pillar Data Systems CLI to issue all
commands, run scripts, request input files to run commands, and run commands
through a command prompt. The Pillar Data Systems CLI can run on all operating
systems that can be used with Pillar Axiom systems.
AxiomONE CLI
The AxiomONE CLI is installed through the AxiomONE Storage Services Manager.
You can use the AxiomONE CLI to perform administrative tasks. The AxiomONE
CLI can run on a subset of operating systems that can be used with Pillar Axiom
systems.
LUNs
You can use the AxiomONE Storage Services Manager to create and delete LUNs.
Important:
1. When you create a LUN, it is not formatted and therefore can still contain
sensitive data from previous usage.
2. You cannot map more than 256 Pillar Axiom LUNs to a SAN Volume
Controller clustered system.
You can create LUNs in a specific volume group or in a generic volume group. A
single LUN can use the entire capacity of a disk group. However, for SAN Volume
Controller systems, LUNs cannot exceed 1 PB. When LUNs are exactly 1 PB, a
warning is issued in the SAN Volume Controller system event log.
The amount of capacity that the LUN uses is determined by the capacity of the
LUN and the level of redundancy. You can define one of three levels of
redundancy:
v Standard, which stores only the original data
v Double, which stores the original data and one copy
v Triple, which stores the original data and two copies
For all levels of redundancy, data is striped across multiple RAID-5 groups.
LUNs that are exported by the Pillar Axiom system report identification
descriptors in the vital product data (VPD). The SAN Volume Controller system
uses the LUN-associated binary type-2 IEEE Registered Extended descriptor to
identify the LUN. The following format is used:
CCCCCCLLLLMMMMMM
You can find the identifier in the AxiomONE Storage Services Manager. From the
AxiomONE Storage Services Manager, click Storage > LUNs > Identity. The
identifier is listed in the LUID column. To verify that the identifier matches the
UID that the SAN Volume Controller system lists, issue the lsmdisk mdisk_id or
mdisk_name from the command-line interface and check the value in the UID
column.
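The comparison between the LUID column and the lsmdisk UID column can be automated with a few lines of Python. This is an illustrative sketch only: the identifier values are made up, and it assumes that the UID column shows the descriptor padded with trailing zeros.

# Illustrative sketch: check that a Pillar Axiom LUID matches the UID that the
# SAN Volume Controller lsmdisk command reports. The identifier values are
# hypothetical, and the UID is assumed to be the descriptor padded with zeros.
def normalize(identifier):
    """Keep only hexadecimal digits and fold to lowercase for comparison."""
    return "".join(ch for ch in identifier.lower() if ch in "0123456789abcdef")

axiom_luid = "6000B0800001234500000ABC"            # LUID column value (hypothetical)
svc_uid = "6000b0800001234500000abc" + "0" * 40    # lsmdisk UID column value (hypothetical)

print(normalize(svc_uid).startswith(normalize(axiom_luid)))   # True when they match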
Moving LUNs
If you want to migrate more than 256 LUNs on an existing Pillar Axiom system to
the SAN Volume Controller clustered system, you must use the SAN Volume
Controller clustered-system migration function. The Pillar Axiom system allows up
to 256 LUNs per host and the SAN Volume Controller system must be configured
as a single host. Because the SAN Volume Controller system is not limited to 256
volumes, you can migrate your existing Pillar Axiom system setup to the SAN
Volume Controller system by virtualizing the LUNs in groups.
Target ports
Pillar Axiom systems with one pair of controllers report a different worldwide port
name (WWPN) for each port and a single worldwide node name (WWNN).
Systems with more than one pair of controllers report a unique WWNN for each
controller pair.
LUN groups are not used, so all LUNs are independent. The LUN access model is
active-active/asymmetric, with one controller having ownership of the LUN. All
I/O operations to the LUN on this controller are optimized for performance. You
can use the lsmdisk mdisk_id or mdisk_name CLI command to determine the
assigned controller for a LUN.
To balance the I/O load across the controllers, I/O operations can be performed
through any port. However, performance is higher on the ports of the controller
that owns the LUNs. By default, the LUNs that are mapped to the SAN Volume
Controller system are accessed through the ports of the controller that owns the
LUNs.
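A minimal sketch of a port-preference rule like the one described above, in plain Python rather than any Pillar or SAN Volume Controller tool: I/O is directed to the ports of the owning controller, and the partner controller's ports are used only if no owning-controller port is available. The controller names, WWPNs, ownership map, and the fallback behavior are assumptions for the example.

# Illustrative sketch of active-active/asymmetric port preference: prefer the
# ports of the controller that owns the LUN. All names and values are examples.
controller_ports = {
    "controller0": ["2100001b320a0001", "2101001b320a0001"],
    "controller1": ["2100001b320b0001", "2101001b320b0001"],
}
lun_owner = {"mdisk8": "controller0", "mdisk9": "controller1"}

def ports_for_io(mdisk, available_ports):
    """Return the owning controller's usable ports, or fall back to the partner."""
    owning = controller_ports[lun_owner[mdisk]]
    preferred = [port for port in owning if port in available_ports]
    return preferred or [port for port in available_ports if port not in owning]

print(ports_for_io("mdisk8", {"2100001b320a0001", "2100001b320b0001"}))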
Fabric zoning
The SAN Volume Controller switch zone must include at least one target port from
each Pillar Axiom controller to avoid a single point of failure.
Target ports can be shared between the SAN Volume Controller system and other
hosts.
Host splitting
A single host must not be connected to both the SAN Volume Controller system
and the Pillar Axiom system, to avoid the possibility of an interaction between
multipathing drivers.
Controller splitting
Pillar Axiom system LUNs that are mapped to the SAN Volume Controller system
cannot be mapped to other hosts. Pillar Axiom system LUNs that are not mapped
to the SAN Volume Controller system can be mapped to other hosts.
Table 76 lists the system options that you can access using the AxiomONE Storage
Services Manager.
Table 76. Pillar Axiom global options and required settings
Option | Pillar Axiom default setting | SAN Volume Controller required setting
Enable Automatic Failback of NAS Control Units | Y | N/A
Link Aggregation | N | N/A
DHCP/Static | - | Any
Call-home | - | Any
Table 77 lists the options that must be set for each LU that is accessed by the SAN
Volume Controller clustered system. LUs that are accessed by other hosts can be
configured differently. You can use the AxiomONE Storage Services Manager to
change these settings.
Table 77. Pillar Axiom LU options and required settings
Option | Pillar Axiom default setting | SAN Volume Controller required setting
LUN Access | All hosts | Select hosts
Protocol | FC | FC
LUN Assignment | Auto | Any. Attention: Do not change the LUN assignment after the LUNs are mapped to the SAN Volume Controller clustered system.
Select Port Mask | All On | All On
Quality of Service | Various | No preference. See the note below.
Note: If you do not know the Quality of Service setting, you can use the following:
v Priority vs other Volumes = Medium
v Data is typically accessed = Mixed
v I/O Bias = Mixed
For the latest RamSan models that can be used with SAN Volume Controller
systems, see the following website:
www.ibm.com/storage/support/2145
For the supported firmware levels and hardware, see the following website:
www.ibm.com/storage/support/2145
The web GUI is a Java-based applet that is accessible through the IP address of the
RamSan system. All configuration and monitoring steps are available through this
interface. By default, the web GUI uses SSL encryption to communicate with the
RamSan system.
RamSan CLI
The command-line interface (CLI) is accessible through SSH, Telnet, and RS-232
port. The CLI includes all functionality that is available in the GUI with the
exception of statistics monitoring. The CLI includes a diagnostics interface,
however, for internal hardware checks.
RamSan systems are shipped with a particular capacity of user space, which
depends on the model. A partition with this capacity is known as a logical unit.
RamSan systems can export up to 1024 LUNs to the SAN Volume Controller
through various exported FC ports. The maximum logical-unit size is the full,
usable capacity of the RamSan system.
LUN IDs
RamSan LUNs are created, modified, or deleted either by using a wizard tutorial
in the GUI or by entering a CLI command. LUNs are not formatted to all zeros
upon creation.
To create a logical unit, highlight Logical Units in the navigation tree and click the
Create toolbar button. To modify, resize, or delete an LU, click the appropriate
toolbar button while the specific logical unit is highlighted in the navigation tree.
Note: Delete the MDisk on the SAN Volume Controller clustered system before
you delete the LUN on the RamSan system.
LUNs are exported through the available FC ports of RamSan systems by access
policies. Access policies are associations of the logical unit, port, and host. A
RamSan system requires that one of the three items be unique across all available
access policies. LUNs that are to be presented to SAN Volume Controller must be
presented to all node ports in the system through at least two ports on the RamSan
system. Present each LU to the SAN Volume Controller on the same LUN through
all target ports.
To apply access policies to a logical unit, highlight the specific logical unit in the
GUI and click the Access toolbar button.
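The access-policy requirement can be pictured as data. The following Python sketch (the logical unit name, LUN number, port names, and WWPNs are all hypothetical) builds one logical unit, port, and host association per combination, so that the logical unit is presented on the same LUN through every selected RamSan target port to every SAN Volume Controller node port.

# Illustrative sketch: generate the access policies that present one logical unit
# on the same LUN through two RamSan target ports to every SAN Volume Controller
# node port. All names, LUN numbers, and WWPNs are hypothetical examples.
from itertools import product

logical_unit = {"name": "svc_lu_01", "lun": 4}
ramsan_target_ports = ["fc-1a", "fc-2a"]       # at least two target ports
svc_node_ports = [
    "500507680140535F", "500507680130535F",    # node 1 ports (examples)
    "5005076801405555", "5005076801305555",    # node 2 ports (examples)
]

access_policies = [
    {"lu": logical_unit["name"], "lun": logical_unit["lun"], "port": port, "host": host}
    for port, host in product(ramsan_target_ports, svc_node_ports)
]

for policy in access_policies:
    print(policy)                               # the LUN is identical in every policy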
Special LUNs
The RamSan system has no special considerations for logical unit numbering. LUN
0 can be exported where necessary. In one RamSan model, a licensed Turbo feature
is available to create a logical unit up to half the size of the cache to keep locked in
the DRAM cache for the highest performance. No identification difference exists
with a Turbo or locked LUN as opposed to any other LUN ID.
LU access model
LU grouping
There are no preferred access ports for the RamSan system because all ports are
Active/Active across all controllers.
Fabric zoning
To enable multipathing, ensure that you have multiple zones or multiple RamSan
and SAN Volume Controller ports for each zone when you are zoning a RamSan
system to the SAN Volume Controller back-end ports.
The RamSan system can support LUN masking to enable multiple servers to access
separate LUNs through a common controller port. There are no issues with mixing
workloads or server types in this setup. LUN Masking is a licensed feature of the
RamSan system.
Host splitting
Controller splitting
RamSan system LUNs that are mapped to the SAN Volume Controller clustered
system cannot be mapped to other hosts. LUNs that are not presented to the SAN
Volume Controller can be mapped to other hosts.
When you create a logical unit (LU), the options in Table 79 are available on
RamSan systems.
Table 79. RamSan LU options
Option | Data type | Range | Default setting | SAN Volume Controller required setting | Comments
Name | String | 1 character - 32 characters | Logical unit number | Any | This is only for management reference.
No host options are required to present the RamSan systems to SAN Volume
Controller systems.
You must not use the RamSan CLI to clear SCSI reservations and registrations on
volumes that are managed by the SAN Volume Controller. This option is not
available on the GUI.
See the SAN Volume Controller (2145) website for the latest Xiotech ISE models
that can be used with SAN Volume Controller systems:
www.ibm.com/storage/support/2145
Note: Thin provisioning is not supported for use with SAN Volume Controller and
is currently not supported for Xiotech ISE.
See the SAN Volume Controller (2145) website for the supported firmware levels
and hardware:
www.ibm.com/storage/support/2145
www.xiotech.com
The Xiotech ISE Storage Management GUI is a Java-based interface that you can
use to configure, manage, and troubleshoot Xiotech ISE storage systems. The
Xiotech ISE Storage Management GUI is designed and supported on Microsoft
Windows systems and has the following minimum requirements:
Internet Explorer v6.02800.1106, SP1, Q903235 or higher (JavaScript enabled;
XML/XSL rendering enabled)
The Xiotech ISE command-line interface (CLI) communicates with the system
through a serial port that is connected to a computer that runs a terminal
emulation program, such as Microsoft HyperTerminal or PuTTY. The Xiotech ISE
CLI is primarily used to configure the network adapter TCP/IP settings.
A null modem cable is required. Configure the serial port on the computer as
follows:
v 115200 baud
v 8 data bits
v No parity
v 1-stop bit
v No flow control
LUNs
An Xiotech ISE logical unit is referred to as a volume. Xiotech ISE volumes are
enumerated devices that all share identical characteristics.
A single Xiotech ISE volume can potentially consume the entire capacity that is
allocated for SAN Volume Controller storage pools, but it cannot exceed the SAN
Volume Controller 2 TB LUN size limit. Any LUN that is 2 TB or larger is
truncated to 2 TB, and a warning message is generated for each path to the LUN.
LUN IDs
LUNs that are exported by Xiotech ISE systems are guaranteed to be unique. They
are created with a combination of serial numbers and counters along with a
standard IEEE registered extended format.
Xiotech ISE LUNs are created and deleted by using either the Xiotech ISE Storage
Management GUI or CLI. LUNs are formatted to all zeros at creation.
Xiotech ISE LUNs are presented to the SAN Volume Controller interface using the
following rules:
v LUNs can be presented to one or more selected hosts.
v Configuration is easier if you create one host name for the SAN Volume
Controller.
v No individual LUN volume on the Xiotech ISE system can exceed 2 TB in size.
v For the managed reliability features to be effective on the Xiotech ISE system,
use either RAID 1 or RAID 5 when you create volumes.
v The write-back and write-through cache options are available depending on the
performance requirements on each individual LUN. Generally, write-back
caching provides the best performance.
v Although either Linux or Windows can be used, Linux is recommended for
volumes that are intended for use on the SAN Volume Controller.
To present Xiotech ISE LUNs to the SAN Volume Controller, follow these steps:
1. On the Xiotech ISE system, create a single host name for the SAN Volume
Controller and assign all SAN Volume Controller host bus adapter (HBA) ports
to that host name as shown in Table 80.
Table 80. Host information for Xiotech ISE
Name | Operating system type | HBA ports | Mapping
SVC_Cluster | Linux | 500507680130535F, 5005076801305555, 500507680140535F, 5005076801405555 | Volume01 (lun:1), Volume02 (lun:2)
2. When you create new volumes that are intended for use on the SAN Volume
Controller, assign them to the host name that is used to represent the SAN
Volume Controller.
Special LUNs
The Xiotech ISE storage system does not use a special LUN. Storage can be
presented by using any valid LUN, including 0.
Each Xiotech ISE1 system has two physical Fibre Channel ports, and each Xiotech
ISE2 system has eight physical Fibre Channel ports. These ports are, by default,
intended to provide failover or multipath capability. The worldwide node name
(WWNN) and the worldwide port names (WWPNs) of a Xiotech ISE system are
typically similar.
LU access model
The Xiotech ISE system has no specific ownership of any LUN by any module.
Because data is striped over all disks in a DataPac, performance is generally
unaffected by the choice of a target port.
LU grouping
The Xiotech ISE system does not use LU grouping; all LUNs are independent
entities.
There are no preferred access ports for the Xiotech ISE system.
Detecting ownership
Fabric zoning
To avoid a single point of failure, the SAN Volume Controller switch zone must
include both target ports from each Xiotech ISE controller.
Target ports can be shared between the SAN Volume Controller system and other
hosts.
Host splitting
Controller splitting
Xiotech ISE system logical unit numbers (LUNs) that are mapped to the SAN
Volume Controller system cannot be mapped to other hosts. Xiotech ISE system
LUNs that are not mapped to the SAN Volume Controller system can be mapped
to other hosts.
The only specific setting is the host operating system type: Windows or Linux. This
setting does not affect operation because it is not currently used by the system.
Logical unit (LU) settings for the Xiotech ISE system are configurable at the LU
level.
Table 81 lists the options that must be set for each LU that is accessed by the SAN
Volume Controller clustered system. LUs that are accessed by other hosts can be
configured differently. You can use the Xiotech ISE Storage Management GUI or
CLI to change these settings.
Table 81. Xiotech ISE LU settings
Option | Data type | Range | Default setting | SAN Volume Controller required setting | Comments
Capacity | Int | 1 GB to 2 TB | No | Any | SAN Volume Controller supports up to 2 TB.
You must use specific settings to identify SAN Volume Controller systems as hosts
to the Xiotech ISE storage system.
A Xiotech ISE host is a single WWPN; however, multiple WWPNs can be included
in a single host definition on the Xiotech ISE system. The recommended method is
to make each SAN Volume Controller node a Xiotech ISE host and to make a
Xiotech ISE cluster that corresponds to all the nodes in the SAN Volume Controller
system. To do this, include all of the SAN Volume Controller WWPNs under the
same Xiotech ISE host name.
See the following website for the latest IBM XIV Storage System models that can
be used with SAN Volume Controller clustered systems:
www.ibm.com/storage/support/2145
See the following website for the supported firmware levels and hardware:
www.ibm.com/storage/support/2145
Some maintenance operations might require a complete restart of IBM XIV Storage
System systems. Such procedures are not supported when the system is attached to
the SAN Volume Controller.
www.ibm.com/systems/support/storage/XIV
The IBM XIV Storage System Storage Management GUI is a Java-based GUI that
you can use to configure, manage, and troubleshoot IBM XIV Storage System
systems. The IBM XIV Storage System Storage Management GUI can run on all
operating systems that can be used with IBM XIV Storage System systems.
The IBM XIV Storage System command-line interface (XCLI) communicates with
the systems through an XML-based API over a TCP/IP network. You can use the
XCLI to issue all commands, run scripts, request input files to run commands, and
run commands through a command prompt. The XCLI can run on all operating
systems that can be used with IBM XIV Storage System systems.
LUNs
An IBM XIV Storage System logical unit is referred to as a volume. IBM XIV
Storage System volumes are enumerated devices that all share identical
characteristics.
A single IBM XIV Storage System volume can potentially consume the entire
capacity that is allocated for SAN Volume Controller managed disk (MDisk)
groups, and it can also exceed the SAN Volume Controller 1 PB LUN size limit.
Any LUN that is 1 PB or larger is truncated to 1 PB, and a warning message is
generated for each path to the LUN.
IBM XIV Storage System volumes consume chunks of 17,179,869,184 bytes (17 GB),
although you can create volumes with an arbitrary block count.
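A short worked sketch of that capacity arithmetic, in plain Python (this is not an XIV tool, and the 512-byte block size is an assumption for the example): a requested size is rounded up to whole 17,179,869,184-byte chunks to find the capacity that the volume actually consumes.

# Illustrative sketch of the capacity arithmetic: a volume created with an
# arbitrary block count still consumes whole 17,179,869,184-byte (17 GB) chunks.
CHUNK_BYTES = 17_179_869_184
BLOCK_BYTES = 512                          # assumed block size for this example

def consumed_capacity(block_count):
    requested = block_count * BLOCK_BYTES
    chunks = -(-requested // CHUNK_BYTES)  # ceiling division
    return chunks * CHUNK_BYTES

# Example: 50,000,000 blocks request about 25.6 GB but consume two 17 GB chunks.
print(consumed_capacity(50_000_000))       # 34359738368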
LUN IDs
LUNs that are exported by IBM XIV Storage System models report Identification
Descriptors 0, 1, and 2 in VPD page 0x83. SAN Volume Controller uses the EUI-64
compliant type 2 descriptor CCCCCCMMMMMMLLLL, where CCCCCC is the IEEE
company ID, MMMMMM is the System Serial Number transcribed to hexadecimal
(10142->0x010142, for example) and LLLL is 0000-0xFFFF, which increments each
time a LUN is created. You can identify the LLLL value by using the IBM XIV
Storage System GUI or CLI to display the volume serial number.
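The descriptor layout can be unpacked mechanically, as in the following Python sketch. The sample descriptor value is constructed for illustration only; the field positions and the serial-number transcription follow the description above.

# Illustrative sketch: split an IBM XIV EUI-64 type 2 descriptor of the form
# CCCCCCMMMMMMLLLL into its fields. The sample value below is fabricated.
def decode_xiv_descriptor(descriptor):
    company_id = descriptor[0:6]             # CCCCCC: IEEE company ID
    serial_field = descriptor[6:12]          # MMMMMM: serial number transcribed to hex
    lun_field = descriptor[12:16]            # LLLL: increments each time a LUN is created
    return {
        "company_id": company_id,
        "system_serial": int(serial_field),  # "010142" reads back as serial 10142
        "lun_counter": int(lun_field, 16),
    }

print(decode_xiv_descriptor("0017380101420004"))
# {'company_id': '001738', 'system_serial': 10142, 'lun_counter': 4}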
IBM XIV Storage System LUNs are created and deleted by using the IBM XIV
Storage System GUI or CLI. LUNs read as all zeros upon creation, but to avoid a
significant formatting delay, the zeros are not physically written.
Special LUNs
IBM XIV Storage System systems do not use a special LUN; storage can be
presented using any valid LUN, including 0.
IBM XIV Storage System systems have no specific ownership of any LUN by any
module. Because data is striped over all disks in the system, performance is
generally unaffected by the choice of a target port.
LU grouping
IBM XIV Storage System models do not use LU grouping; all LUNs are
independent entities. To protect a single IBM XIV Storage System volume from
accidental deletion, you can create a consistency group containing all LUNs that
are mapped to a single SAN Volume Controller clustered system.
There are no preferred access ports for IBM XIV Storage System models.
Detecting ownership
Ownership is not relevant to IBM XIV Storage System models.
XIV Nextra LUNs are presented to the SAN Volume Controller interface using the
following rules:
v LUNs can be presented to one or more selected hosts.
v XIV Nextra maps consist of sets of LUN pairs and linked hosts.
v A volume can only appear once in a map.
v A LUN can only appear once in a map.
v A host can only be linked to one map.
To present XIV Nextra LUNs to the SAN Volume Controller, perform the following
steps:
1. Create a map with all of the volumes that you intend to manage with the SAN
Volume Controller system.
2. Link the WWPN for all node ports in the SAN Volume Controller system into
the map. Each SAN Volume Controller node port WWPN is recognized as a
separate host by XIV Nextra systems.
IBM XIV Storage System Type Number 2810 LUNs are presented to the SAN
Volume Controller interface using the following rules:
v LUNs can be presented to one or more selected hosts or clusters.
v Clusters are collections of hosts.
To present IBM XIV Storage System Type Number 2810 LUNs to the SAN Volume
Controller, perform the following steps:
1. Use the IBM XIV Storage System GUI to create an IBM XIV Storage System
cluster for the SAN Volume Controller system.
2. Create a host for each node in the SAN Volume Controller.
3. Add a port to each host that you created in step 2. You must add a port for
each port on the corresponding node.
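As a data-shaped sketch of steps 1 through 3 (the node names and WWPNs are hypothetical examples), the structure that results on the IBM XIV Storage System is one cluster that contains one host per SAN Volume Controller node, with one port entry for each port on that node:

# Illustrative only: the cluster, host, and port structure created on an
# IBM XIV Storage System Type Number 2810 for a two-node SAN Volume Controller
# system. Node names and WWPNs are hypothetical examples.
svc_nodes = {
    "svc_node1": ["500507680140535F", "500507680130535F"],
    "svc_node2": ["5005076801405555", "5005076801305555"],
}

xiv_cluster = {
    "name": "SVC_Cluster",
    "hosts": [{"name": node, "ports": ports} for node, ports in svc_nodes.items()],
}

print(xiv_cluster)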
XIV Nextra systems are single-rack systems. All XIV Nextra WWNNs include zeros
as the last two hexadecimal digits. In the following example, the WWNN
2000001738279E00 is IEEE extended; the WWPNs that start with the number 1 are
IEEE 48-bit:
WWNN 2000001738279E00
WWPN 1000001738279E13
WWPN 1000001738279E10
WWPN 1000001738279E11
WWPN 1000001738279E12
IBM XIV Storage System Type Number 2810 systems are multi-rack systems, but
only single racks are supported. All IBM XIV Storage System Type Number 2810
WWNNs include zeros as the last four hexadecimal digits. For example:
WWNN 5001738000030000
WWPN 5001738000030153
WWPN 5001738000030121
Fabric zoning
To avoid a single point of failure, the SAN Volume Controller switch zone must
include at least one target port from each IBM XIV Storage System controller.
Target ports can be shared between the SAN Volume Controller system and other
hosts.
Host splitting
Controller splitting
IBM XIV Storage System system LUNs that are mapped to the SAN Volume
Controller system cannot be mapped to other hosts. IBM XIV Storage System
system LUNs that are not mapped to the SAN Volume Controller system can be
mapped to other hosts.
Table 82 lists the options that must be set for each LU that is accessed by the SAN
Volume Controller clustered system. LUs that are accessed by other hosts can be
configured differently. You can use the IBM XIV Storage System and XIV Nextra
Storage Management GUI or CLI to change these settings.
Table 82. IBM XIV options and required settings
Option | Data type | Range | IBM XIV Storage System and XIV Nextra default setting | SAN Volume Controller setting
Capacity | int | 17,179,869,184 bytes (17 GB), up to the total system capacity, OR block count | None | Any
Notes:
v SAN Volume Controller supports up to 1 PB.
v LUNs are allocated in 17-GB chunks.
v Using a block count results in LUNs that are arbitrarily sized, but that still consume
multiples of 17 GB.
An XIV Nextra host is a single WWPN, so one XIV Nextra host must be defined
for each SAN Volume Controller node port in the clustered system. An XIV Nextra
host is considered to be a single SCSI initiator. Up to 256 XIV Nextra hosts can be
presented to each port. Each SAN Volume Controller host object that is associated
with the XIV Nextra system must be associated with the same XIV Nextra LUN
map because each LU can only be in a single map.
An IBM XIV Storage System Type Number 2810 host can consist of more than one
WWPN. Configure each SAN Volume Controller node as an IBM XIV Storage
System Type Number 2810 host, and create an IBM XIV Storage System cluster that
contains the hosts that correspond to the SAN Volume Controller nodes in the SAN
Volume Controller system.
Table 83 on page 328 lists the host options and settings that can be changed using
the IBM XIV Storage System and XIV Nextra Storage Management GUI.
You must not use the vol_clear_keys command to clear SCSI reservations and
registrations on volumes that are managed by SAN Volume Controller.
The following components are used to provide support for the service:
v SAN Volume Controller
v The cluster CIM server
v IBM System Storage hardware provider, known as the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service
software
v Microsoft Volume Shadow Copy Service
v The vSphere Web Service, when the host is in a VMware virtual platform
The IBM System Storage hardware provider is installed on the Windows host.
To provide the point-in-time shadow copy, the components complete the following
process:
1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies the IBM System Storage hardware
provider that a copy is needed.
3. The SAN Volume Controller prepares the volumes for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are
writing data on the host and flushes file system buffers to prepare for the copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy
Copy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O
operations can resume, and notifies the backup application that the backup was
successful.
The Volume Shadow Copy Service maintains a free pool of volumes for use as a
FlashCopy target and a reserved pool of volumes. These pools are implemented as
virtual host systems on the SAN Volume Controller.
Installation overview
The steps for implementing the IBM System Storage Support for Microsoft Volume
Shadow Copy Service and Virtual Disk Service software must be completed in the
correct sequence.
Before you begin, you must have experience with or knowledge of administering a
Windows Server operating system.
Procedure
1. Verify that the system requirements are met.
2. Install the IBM System Storage Support for Microsoft Volume Shadow Copy
Service and Virtual Disk Service software.
3. Verify the installation.
4. Create a free pool of volumes and a reserved pool of volumes on the SAN
Volume Controller.
5. Optionally, reconfigure the services to change the configuration that you
established during the installation.
You must satisfy all of the prerequisites that are listed in the system requirements
section before starting the installation.
Perform the following steps to install the IBM System Storage Support for
Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the
Windows server:
Notes:
a. If these settings change after installation, you can use the ibmvcfg.exe tool
to update Microsoft Volume Shadow Copy and Virtual Disk Services
software with the new settings.
b. If you do not have the IP address or user information, contact your SAN
Volume Controller administrator.
The InstallShield Wizard Complete panel is displayed.
10. Click Finish. If necessary, the InstallShield Wizard prompts you to restart the
system.
11. Make the IBM Hardware Provider for VSS-VDS aware of the SAN Volume
Controller, as follows:
a. Open a command prompt.
b. Change directories to the hardware provider directory; the default
directory is C:\Program Files\IBM\Hardware Provider for VSS-VDS\.
c. Use the ibmvcfg command to set the cluster ID for the SAN Volume
Controller cluster, as follows:
ibmvcfg set targetSVC cluster_id
The cluster_id value must be the SAN Volume Controller cluster ID. To find
the cluster ID in the management GUI, click Home > System Status. The
ID is listed under Info.
Note: The Shadow Copy Service is supported only for RDM disks that act as raw
disks and are presented to the virtual host in physical mode.
To manipulate RDM disks in the virtual host, the IBM System Storage Support for
Microsoft Volume Shadow Copy Service and Virtual Disk Service software must
interact with the VMware ESX Server. This is accomplished through the VMware
Web Service exposed by the ESX Server, which holds the virtual host.
VMware Tools, which collects host information such as the IP address and host
name, must be installed so that the virtual host can communicate with the vSphere
Web Service.
There are four parameters available only in the VMware virtual platform:
v vmhost
v vmuser
v vmpassword
v vmcredential
Use the ibmvcfg command to configure each of these parameters. Table 85 shows
the syntax and an example for each parameter.
When a shadow copy is created, the IBM System Storage Support for Microsoft
Volume Shadow Copy Service and Virtual Disk Service software selects a volume
in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This protects the volume from being overwritten by other Volume Shadow
Copy Service users.
To successfully perform a Volume Shadow Copy Service operation, there must be
enough volumes mapped to the free pool. The volumes must be the same size as
the source volumes.
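As an illustration of that sizing rule in plain Python (the volume names and byte sizes are hypothetical, and this is not part of the IBM System Storage hardware provider), the following sketch checks whether the free pool holds an unused volume of exactly the same size for every source volume in a snapshot request.

# Illustrative sketch: verify that the free pool contains one volume of matching
# size for every source volume. Volume names and sizes (bytes) are examples.
source_volumes = {"db_data": 107_374_182_400, "db_logs": 21_474_836_480}
free_pool = {"free1": 107_374_182_400, "free2": 21_474_836_480, "free3": 53_687_091_200}

def pool_can_satisfy(sources, pool):
    remaining = dict(pool)
    for name, size in sources.items():
        match = next((vol for vol, vol_size in remaining.items() if vol_size == size), None)
        if match is None:
            print(f"no free-pool volume of size {size} bytes for source {name}")
            return False
        del remaining[match]          # each free-pool volume can be used only once
    return True

print(pool_can_satisfy(source_volumes, free_pool))   # True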
Use the management GUI or the SAN Volume Controller command-line interface
(CLI) to perform the following steps:
Procedure
1. Create a host for the free pool of volumes.
v You can use the default name VSS_FREE or specify a different name.
v Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeroes).
2. Create a virtual host for the reserved pool of volumes.
v You can use the default name VSS_RESERVED or specify a different name.
v Associate the host with the WWPN 5000000000000001 (14 zeroes).
3. Map the logical units (VDisks) to the free pool of volumes.
What to do next
Procedure
If you are able to successfully perform all of these verification tasks, the IBM
System Storage Support for Microsoft Volume Shadow Copy Service and Virtual
Disk Service software was successfully installed on the Windows server.
Table 85. Configuration commands (continued)
Command | Description | Example
ibmvcfg set namespace <namespace> | Specifies the namespace value that the master console is using. | ibmvcfg set namespace \root\ibm
ibmvcfg set vssFreeInitiator <WWPN> | Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000000. | ibmvcfg set vssFreeInitiator 5000000000000000
ibmvcfg set vssReservedInitiator <WWPN> | Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000001. | ibmvcfg set vssReservedInitiator 5000000000000001
ibmvcfg set vmhost https://ESX_Server_IP/sdk | Specifies the vSphere Web Service location on the ESX Server, which holds the virtual host. | ibmvcfg set vmhost https://9.11.110.90/sdk
ibmvcfg set vmuser username | Specifies the user that can log in to the ESX Server and has the privileges to manipulate the RDM disks. | ibmvcfg set vmuser root
ibmvcfg set vmpassword password | Sets the password for the vmuser to log in. | ibmvcfg set vmpassword pwd
ibmvcfg set vmcredential credential_store_path | Specifies the session credential store for the vSphere Web Service. The credential store can be generated by the Java keytool located in C:\Program Files\IBM\Hardware Provider for VSS-VDS\jre\bin\keytool.exe. | ibmvcfg set vmcredential "C:\VMware-Certs\vmware.keystore"
The IBM System Storage Support for Microsoft Volume Shadow Copy Service and
Virtual Disk Service software maintains a free pool of volumes and a reserved pool
of volumes. These pools are implemented as virtual host systems on the SAN
Volume Controller.
Error codes for IBM System Storage Support for Microsoft Volume
Shadow Copy Service and Virtual Disk Service software
The IBM System Storage Support for Microsoft Volume Shadow Copy Service and
Virtual Disk Service software logs error messages in the Windows Event Viewer
and in private log files.
You can view error messages by going to the following locations on the Windows
server where the IBM System Storage Support for Microsoft Volume Shadow Copy
Service and Virtual Disk Service software is installed:
v The Windows Event Viewer in Application Events. Check this log first.
v The log file ibmVSS.log, which is located in the directory where the IBM System
Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk
Service software is installed.
Table 87 on page 338 lists the error messages that are reported by the IBM System
Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk
Service software.
Table 87. Error messages for the IBM System Storage Support for Microsoft Volume Shadow Copy Service and
Virtual Disk Service software
Code | Message | Symbolic name
1000 | JVM Creation failed. | ERR_JVM
1001 | Class not found: %1. | ERR_CLASS_NOT_FOUND
1002 | Some required parameters are missing. | ERR_MISSING_PARAMS
1003 | Method not found: %1. | ERR_METHOD_NOT_FOUND
1004 | A missing parameter is required. Use the configuration utility to set this parameter: %1. | ERR_REQUIRED_PARAM
1600 | The recovery file could not be created. | ERR_RECOVERY_FILE_CREATION_FAILED
1700 | ibmGetLunInfo failed in AreLunsSupported. | ERR_ARELUNSSUPPORTED_IBMGETLUNINFO
1800 | ibmGetLunInfo failed in FillLunInfo. | ERR_FILLLUNINFO_IBMGETLUNINFO
1900 | Failed to delete the following temp files: %1 | ERR_GET_TGT_CLEANUP
2500 | Error initializing log. | ERR_LOG_SETUP
2501 | Unable to search for incomplete Shadow Copies. Windows Error: %1. | ERR_CLEANUP_LOCATE
2502 | Unable to read incomplete Shadow Copy Set information from file: %1. | ERR_CLEANUP_READ
2503 | Unable to cleanup snapshot stored in file: %1. | ERR_CLEANUP_SNAPSHOT
2504 | Cleanup call failed with error: %1. | ERR_CLEANUP_FAILED
2505 | Unable to open file: %1. | ERR_CLEANUP_OPEN
2506 | Unable to create file: %1. | ERR_CLEANUP_CREATE
2507 | HBA: Error loading hba library: %1. | ERR_HBAAPI_LOAD
3000 | An exception occurred. Check the ESSService log. | ERR_ESSSERVICE_EXCEPTION
3001 | Unable to initialize logging. | ERR_ESSSERVICE_LOGGING
3002 | Unable to connect to the CIM agent. Check your configuration. | ERR_ESSSERVICE_CONNECT
3003 | Unable to get the Storage Configuration Service. Check your configuration. | ERR_ESSSERVICE_SCS
3004 | An internal error occurred with the following information: %1. | ERR_ESSSERVICE_INTERNAL
3005 | Unable to find the VSS_FREE controller. | ERR_ESSSERVICE_FREE_CONTROLLER
3006 | Unable to find the VSS_RESERVED controller. Check your configuration. | ERR_ESSSERVICE_RESERVED_CONTROLLER
3007 | Unable to find suitable targets for all volumes. | ERR_ESSSERVICE_INSUFFICIENT_TARGETS
3008 | The assign operation failed. Check the CIM agent log for details. | ERR_ESSSERVICE_ASSIGN_FAILED
Procedure
1. Log on to the Windows server as the local administrator.
2. Click Start > Control Panel from the task bar. The Control Panel window is
displayed.
3. Double-click Add or Remove Programs. The Add or Remove Programs
window is displayed.
4. Select IBM System Storage Support for Microsoft Volume Shadow Copy
Service and Virtual Disk Service software and click Remove.
5. Click Yes when you are prompted to verify that you want to completely
remove the program and all of its components.
6. Click Finish.
Results
The IBM System Storage Support for Microsoft Volume Shadow Copy Service and
Virtual Disk Service software is no longer installed on the Windows server.
Appendix. Accessibility features for IBM SAN Volume Controller
Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use software products successfully.
Accessibility features
These are the major accessibility features associated with the SAN Volume Controller
Information Center:
v You can use screen-reader software and a digital speech synthesizer to hear what
is displayed on the screen. PDF documents have been tested using Adobe
Reader version 7.0. HTML documents have been tested using JAWS version 13.0.
v This product uses standard Windows navigation keys.
Keyboard navigation
You can use keys or key combinations to perform operations and initiate menu
actions that can also be done through mouse actions. You can navigate the SAN
Volume Controller Information Center from the keyboard by using the shortcut keys
for your browser or screen-reader software. See your browser or screen-reader
software Help for a list of shortcut keys that it supports.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
Almaden Research
650 Harry Road
Bldg 80, D3-304, Department 277
San Jose, CA 95120-6099
U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM trademarks and special non-IBM trademarks in this information are identified
and attributed.
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at Copyright and
trademark information at www.ibm.com/legal/copytrade.shtml.
Adobe and the Adobe logo are either registered trademarks or trademarks of
Adobe Systems Incorporated in the United States, and/or other countries.
Intel, Intel logo, Intel Xeon, and Pentium are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
Index
Numerics C compatibility
IBM System Storage DS4000
2145 UPS-1U cache allocations models 235
operation 22 Bull FDA 201 compatibility models
NEC iStorage 302 IBM System Storage DS3000 235
Call Home 58, 61 IBM System Storage DS4000 235
A capacity
real 49
IBM System Storage DS5000 235
about this document IBM XIV storage system models 323
virtual 49 Pillar Axiom models 309
sending comments xviii
changes in guide RamSan 313
about this guide xiii
summary xiii Xiotech Emprise 318
access control
changes summary xiii Compellent
Bull FDA 201
clearing SCSI reservations and configuration 201
NEC iStorage 302
registrations creating servers 201
Access Logix 204
HP 3PAR 276 creating storage pools 201
accessibility
CLI commands creating volumes 201
disability xiii, 341
detectmdisk 194 mapping volumes to servers 201
features 341
rmmdisk 194 compressed volumes
keyboard xiii, 341
upgrading software 157 thin-provisioned 42
navigation 341
clustered system concurrent maintenance
overview xiii
copy methods 140 EMC CLARiiON 208
repeat rate
clustered systems HP EVA 289
up and down buttons 341
adding nodes 174 IBM XIV storage system 323
shortcut keys 341
backing up configuration file 18 Nexsan SATABeast 307
accessing
Call Home email 58, 61 Pillar Axiom 309
publications 341
configuration backup overview 18 RamSan systems 313
administrator user role 63
creating 151 Xiotech Emprise systems 318
advanced copy
high availability 55 configuration
Pillar Axiom systems 313
IP failover 15 balanced storage system 184
advisor tool
management 15 Compellent 201
Storage Tier 37
operation 16 converged network adapter 112
Assist On-site remote service 57
operation over long distances 136 DS3000 series Storage Manager 234
audience xiii
overview 14 DS4000 series Storage Manager 234
automatic data placement
powering on and off 18 DS5000 series Storage Manager 234
Easy Tier 38
quorum disks 134 Enterprise Storage Server
overview 35
replacing or adding nodes 169 balanced 184
AxiomONE CLI 309
state 17 general 229
AxiomONE Storage Services
commands Fujitsu ETERNUS 226
Manager 309
detectmdisk 192 IBM DS6000 240
ibmvcfg add 336 IBM DS8000 242, 243
ibmvcfg listvols 336
B ibmvcfg rem 336
IBM ESS systems 229
IBM Storwize V7000 storage
bitmap space 137 ibmvcfg set cimomHost 335 systems 199
Brocade ibmvcfg set cimomPort 335 IBM System Storage DS5000, IBM
switch ports 121 ibmvcfg set namespace 335 DS4000, and IBM DS3000 233
browsers ibmvcfg set password 335 maximum sizes 55
See web browsers ibmvcfg set storageProtocol 335 node details 116
Bull FDA systems ibmvcfg set timeout 335 node failover 15
access control methods 201 ibmvcfg set trustpassword 335 Pillar Axiom 309
cache allocations 201 ibmvcfg set username 335 restoring 18
configuring 200 ibmvcfg set usingSSL 335 rules
logical units 200 ibmvcfg set vmcredential 335 SAN 105
platform type 200 ibmvcfg set vmhost 335 SAN details 103
snapshot volume and link ibmvcfg set vmpassword 335 storage systems
volume 201 ibmvcfg set vmuser 335 array guidelines 181
supported firmware 200 ibmvcfg set vssFreeInitiator 335 data migration guidelines 183
target ports 200 ibmvcfg set vssReservedInitiator 335 FlashCopy mapping
ibmvcfg showcfg 335 guidelines 182
upgrading software 157 image mode volumes 183
comments introduction 179
sending xviii logical disk guidelines 181
IBM System Storage Support for
Microsoft Volume Shadow Copy Service
K Metro Mirror (continued)
consistency groups 92
and Virtual Disk Service software keyboard intersystem link 91
configuring VMware Web Service accessibility xiii, 341 migrating relationship 96
connection 332 overview 80
creating pools of volumes 333 partnerships 83, 89
error messages 337 L relationships 81
ibmvcfg.exe 335, 336 LAN relationships between systems 83
installation overview 329 configuration 103 upgrading system software 157
installation procedure 330 legal notices zoning considerations 148
overview 329 Notices 343 migration
system requirements 330 trademarks 345 data
uninstalling 339 link volume partitioned IBM DS5000, IBM
verifying the installation 334 Bull FDA 201 DS4000, and IBM DS3000 236
IBM XIV storage systems NEC iStorage 302 logical units
CLI 324 logical unit configuration HP StorageWorks MSA 293
concurrent maintenance 323 HP StorageWorks MSA 293 volumes
configuration settings 327 logical unit numbers image mode 48
configuring 323 NetApp FAS 305 mirrored volumes 45
copy functions 329 logical units modes
firmware 323 adding 194 operation
host settings 327 discovering 189 Easy Tier 34
logical unit options (LU) 327 expanding 190 modification
logical units 324 Fujitsu ETERNUS 229 logical unit mapping 190
models 323 HDS Lightning 248 monitoring
Storage Management GUI 324 HP 3PAR 273 software upgrades
target ports 324 HP MSA2000 systems 296 automatically 162
user interface 324 HP StorageWorks EVA 291 manually 163
zoning 326 IBM DS5000, IBM DS4000, and IBM MSA2000 system
ibmvcfg.exe DS3000 237, 239 copy functions 301
changing configuration IBM XIV 327
parameters 335 mapping
volumes and FlashCopy
relationships 336
modifying 190 N
NEC iStorage 302 navigation
icons NetApp FAS 304 accessibility 341
See also presets Pillar Axiom 312 NEC iStorage
consistency group states unconfigured 196 access control 302
FlashCopy 75 LUs cache allocations 302
Metro Mirror and Global See logical units platform type 302
Mirror 92
snapshot volume and link
identifications
volume 302
storage systems 179
image mode volumes
M NetApp FAS
maintenance creating host objects 305
overview 48
EMC CLARiiON 208 creating logical units 304
thin-provisioned 50
Nexsan SATABeast 307 deleting logical units 304
image-mode volumes
managed disks presenting LUNs to hosts 305
migrating 48
deleting 194 zoning 306
information center xiv
discovering 198 NetApp FAS3000
installation
expanding 190 logical units 303
CD image files 162
overview 24 target ports 303
installations
rebalancing access 198 Nexsan SATABeast
IBM System Storage Support for
removing unconfigured 196 updating 307
Microsoft Volume Shadow Copy
management GUI user interface 307
Service and Virtual Disk Service
introduction 5 node canisters
software 330
management nodes 56 configuration 19
interswitch links
mapping events node verification
congestion 124
FlashCopy 73 upgrading 167
maximum hop count 121
mappings nodes
oversubscription 124
FlashCopy adding 174
inventory information
copy rate 78 configuration 19, 116
emails 61
events 73 connectivity constraints 117
event notifications 58
maximum configurations 55 failover 15
iSCSI
MDisks host bus adapters 116
configuration 113
See managed disks increased connectivity 117
ISLs
memory settings 137 overview 19
See also interswitch link
Metro Mirror replacing 175
See interswitch links
bandwidth 95 replacing nondisruptively 169
settings (continued)
   hosts (continued)
      XIV 327
   HP MSA systems 294
   IBM DS5000, DS4000, and DS3000 240
   logical unit creation and deletion
      IBM DS5000, IBM DS4000, and IBM DS3000 237
   logical units
      HP StorageWorks EVA 291
      IBM DS5000, IBM DS4000, and IBM DS3000 239
      Pillar Axiom 312
sharing
   HP MSA1000 and MSA1500 294
shortcut keys
   accessibility 341
   keyboard 341
Snap FS
   Pillar Axiom systems 313
Snap LUN
   Pillar Axiom systems 313
SnapClone
   HP StorageWorks EVA systems 289
snapshot volume
   Bull FDA 201
   NEC iStorage 302
SNMP traps 58
software
   full package 162
   overview 1
   package
      obtaining 162
      revised 162
   upgrade package 162
   upgrading automatically 162
software upgrades
   using the CLI (command-line interface) 157
solid-state drives
   configuration rules 118
   Easy Tier 33
split-site system
   configuration 127
   configuration using ISL 131
   configuration without ISL 129
SSDs
   See solid-state drives
SSPC
   See System Storage Productivity Center
standard reserves
   overview 55
states
   consistency groups 75, 92
statistics
   real-time performance 62
status 17
   node 19
storage
   external 23
   internal 23
storage area network (SAN)
   configuring 103
   fabric overview 103
storage controllers
   adding
      using the CLI (command-line interface) 194
   removing
      using the CLI (command-line interface) 195
storage pools
   definition 30
   overview 30
storage systems
   addition
      using the CLI 194
   advanced functions
      Compellent 201
      EMC CLARiiON 210
      EMC Symmetrix 216
      EMC Symmetrix DMX 216
      EMC VMAX 222
      Fujitsu ETERNUS 229
      HDS Lightning 246
      HDS NSC 263
      HDS TagmaStore WMS 252
      HDS Thunder 252
      HDS USP 263
      Hitachi TagmaStore AMS 2000 family 266
      HP MSA 294
      HP StorageWorks EMA 283, 284
      HP StorageWorks MA 283, 284
      HP XP 263
      IBM DS5000, IBM DS4000, and IBM DS3000 236
      IBM Enterprise Storage Server 232
      IBM N5000 306
      NetApp FAS 306
      Nexsan SATABeast 308
      Sun StorEdge 263
   Bull FDA
      access control methods 201
      cache allocations 201
      configuration 200
      firmware 200
      logical units 200
      platform type 200
      snapshot volume and link volume 201
      target ports 200
   cabling
      Compellent 201
   Compellent
      configuration 201
   concurrent maintenance
      Compellent 201
      DS4000 series 235
      DS5000 series 235
      EMC CLARiiON 208
      EMC Symmetrix 213
      EMC Symmetrix DMX 213
      EMC VMAX 220
      Enterprise Storage Server 231
      Fujitsu ETERNUS 229
      HDS Lightning 245
      HDS NSC 261
      HDS TagmaStore WMS 250
      HDS Thunder 250
      HDS USP 261
      Hitachi TagmaStore AMS 2000 family 264
      HP 3PAR 272
      HP MSA1000 294
      HP MSA1500 294
      HP MSA2000 systems 296
      HP StorageWorks EMA 281
      HP StorageWorks MA 281
      HP XP 261
      IBM DS6000 242
      IBM DS8000 244
      IBM N5000 306
      IBM XIV Storage System 323
      NetApp FAS 306
      Nexsan SATABeast 307
      Pillar Axiom 309
      RamSan systems 313
      Sun StorEdge 261
      Xiotech Emprise systems 318
   configuration
      EMC CLARiiON introduction 204
      EMC CLARiiON settings 210
      EMC CLARiiON storage groups 206
      EMC CLARiiON with Access Logix 204
      EMC CLARiiON without Access Logix 207
      EMC Symmetrix 213
      EMC Symmetrix DMX 216
      EMC Symmetrix settings 216
      EMC VMAX 219, 222
      Enterprise Storage Server 229
      Fujitsu ETERNUS 225
      HDS Lightning 244
      HDS NSC 259
      HDS SANrise 1200 250
      HDS TagmaStore WMS 250
      HDS Thunder 250
      HDS USP 259
      Hitachi TagmaStore AMS 2000 family 264
      HP 3PAR systems 272
      HP EVA 288
      HP MSA1000 and MSA1500 292
      HP MSA2000 systems 295
      HP StorageWorks EMA 277
      HP StorageWorks MA 277
      HP XP 244, 259
      IBM DS5000, IBM DS4000, and IBM DS3000 233
      IBM DS6000 240
      IBM DS8000 243
      IBM N5000 302
      IBM N7000 302
      IBM System Storage DS3000, DS4000, and DS5000 232
      IBM XIV storage system 323
      NEC iStorage 301
      NetApp FAS 302
      Nexsan SATABeast 306
      Pillar Axiom 309
      RamSan Solid 313
      Sun StorEdge 244, 259
storage systems (continued)
   models (continued)
      IBM XIV 323
      NetApp FAS 303
      Nexsan SATABeast 307
      Pillar Axiom 309
      Sun StorEdge 244, 259
      TMS RamSan Solid State Storage 313
      Xiotech Emprise 318
   port selection 192
   port settings
      EMC CLARiiON 211
      EMC Symmetrix 217
      EMC Symmetrix DMX 217
      EMC VMAX 223
      HDS Lightning 249
      HDS TagmaStore WMS 256
      HDS Thunder 256
      Hitachi AMS 200, AMS 500, AMS 1000 256
      Hitachi TagmaStore AMS 2000 family 269
      HP StorageWorks EMA 286
      HP StorageWorks MA 286
   quorum disks
      Compellent 201
      EMC CLARiiON 209
      EMC Symmetrix 215
      EMC VMAX 221
      HDS Lightning 246
      HDS NSC 261
      HDS Thunder, Hitachi AMS 200, and HDS TagmaStore WMS 252
      HDS USP 261
      Hitachi TagmaStore AMS 2000 family 266
      HP MSA1000 294
      HP StorageWorks EMA 283
      HP StorageWorks EVA 289
      HP StorageWorks MA 283
      HP XP 261
      IBM Enterprise Storage Server 232
      IBM N5000 306
      IBM XIV 328
      NetApp FAS 306
      Nexsan SATABeast 308
      Pillar Axiom 313
      RamSan 318
      Sun StorEdge 261
      Xiotech Emprise 323
   registering
      EMC CLARiiON 205
   removing
      CLI 195
   renaming
      CLI 193
   requirements
      FlashCopy, volume mirroring, thin-provisioned volumes 188
   servicing 198
   settings
      AMS 200, AMS 500, AMS 1000 254
      configuring Hitachi TagmaStore AMS 2000 267
      EMC CLARiiON 211
      HDS TagmaStore WMS 254, 256
      HDS Thunder 254, 256
      Hitachi TagmaStore AMS 2000 family 268
      HP StorageWorks EMA 285
      HP StorageWorks MA 285, 287
      HP StorageWorks MA EMA 287
      Lightning 248
   sharing
      Compellent 201
      EMC CLARiiON 209
      EMC Symmetrix 214
      EMC Symmetrix DMX 214
      EMC VMAX 221
      HDS Lightning 245
      HDS TagmaStore WMS 251
      HDS Thunder 251, 252
      Hitachi TagmaStore AMS 2000 family 265, 266
      HP EVA 289
      HP StorageWorks EMA 282
      HP StorageWorks MA 282
      IBM DS6000 242
      IBM DS8000 244
      IBM Enterprise Storage Server 231
      Nexsan SATABeast 308
      StorageTek D 236
      StorageTek FlexLine 236
   storage
      external 23
   switch zoning
      EMC CLARiiON 209
      EMC Symmetrix 215
      EMC Symmetrix DMX 215
      EMC VMAX 221
      HDS Lightning 246
      HDS NSC 260
      HDS TagmaStore WMS 252
      HDS Thunder 252
      HDS USP 260
      Hitachi TagmaStore AMS 2000 family 265
      HP EVA 289
      HP XP 260
      IBM Enterprise Storage Server 231
      IBM XIV 326
      NetApp FAS 306
      Pillar Axiom 311
      RamSan 316
      Sun StorEdge 260
      Xiotech Emprise 321
   target port groups
      Enterprise Storage Server 242
   target ports
      HDS NSC 260
      HDS USP 260
      HP StorageWorks MSA 293
      HP XP 260
      IBM XIV 324
      NEC iStorage 302
      NetApp FAS3000 303
      Pillar Axiom 310
      RamSan 314
      Sun StorEdge 260
      Xiotech Emprise 319
   updating configuration
      existing system using CLI 194
   user interfaces
      Compellent 201
      EMC CLARiiON 208
      EMC Symmetrix 214
      EMC Symmetrix DMX 214
      EMC VMAX 220
      Fujitsu ETERNUS 226
      HDS Lightning 245
      HDS NSC 260
      HDS TagmaStore WMS 250
      HDS Thunder 250
      HDS USP 260
      Hitachi TagmaStore AMS 2000 family 264
      HP 3PAR systems 273
      HP EVA 289
      HP MSA1000 292
      HP MSA1500 292
      HP MSA2000 systems 295
      HP XP 260
      IBM DS6000 242
      IBM DS8000 244
      IBM Enterprise Storage Server 231
      IBM N5000 303
      IBM XIV 324
      NetApp FAS 303
      Nexsan SATABeast 307
      Pillar Axiom 309
      RamSan 314
      Sun StorEdge 260
      Xiotech Emprise 319
   Volume Logix and masking
      EMC VMAX 225
   zoning
      HP 3PAR 275
      HP MSA2000 systems 299
   zoning details 140
Storage Tier Advisor Tool
   performance data 37
strategy
   software upgrade
      using the CLI (command-line interface) 157
summary
   changes in guide xiii
summary of changes xiii
switch zoning
   EMC CLARiiON 209
   HP 3PAR 275
   HP MSA2000 systems 299
   IBM XIV 326
   NetApp FAS 306
   Pillar Axiom 311
   RamSan 316
   Xiotech Emprise 321
switches
   Brocade 121
   Cisco 121
   configuring 120
write operations
   dependent 77

X
Xiotech Emprise
   CLI 319
   models 318
   Storage Management GUI 319
Xiotech ISE
   concurrent maintenance 318
   configuration settings 322
   configuring 318
   copy functions 323
   firmware 318
   logical units 319
   target ports 319
   user interface 319
   zoning 321
XIV storage systems
   See IBM XIV storage systems

Z
zoning
   details 140
   EMC CLARiiON 209
   Fujitsu ETERNUS 229
   Global Mirror 148
   guidelines 140
   hosts 140
   IBM XIV 326
   Metro Mirror 148
   NetApp FAS 306
   overview 143
   Pillar Axiom 311
   RamSan 316
   storage systems 140
   Xiotech Emprise 321
Printed in USA
GC27-2286-04