CLARiiON Configuration: Introduction to Navisphere
Introduction to Navisphere
Topics
Interface – Logging in
Username
Password
Scope
If the Management UI has been installed on the array, simply enter the IP address
of one of the SPs. You will be prompted for the global or local login for that system.
We will deal with setting up Navisphere security in another module.
Interface - Overview
The application uses tree structures to show the relationships of the physical and logical
storage-system components and the relationships of the host components. These trees are
analogous to the hierarchical folder structure of Microsoft Windows Explorer.
The tree views appear in the open Enterprise Storage dialog boxes in the Main window.
You use the Storage tree to control the physical and logical components of the managed
storage systems, the Hosts tree to control the LUNs and the storage systems to which the
hosts connect, and the Monitors tree to create and assign event templates to monitored
storage systems.
The managed storage systems are the base icons in the Storage tree. The Storage tree
displays a storage-system icon for each managed storage system. The managed hosts are
the base icons for the Hosts tree. In an Access Logix configuration, the Hosts tree displays
a host icon for each server attached to a storage system. The monitor icons are the base
components in the Monitors tree. The Monitors tree displays a monitor icon for each system
in the domain as well as any legacy storage systems and Agents.
You perform operations on all managed storage-system components using the menu bar,
and on an individual component or multiple components of the same type by using the
menu associated with them.
You can expand and collapse the base icons in all tree views to show icons for their
components (such as SP icons, disk icons, LUN icons, etc.) just as you can expand and
collapse the Explorer folder structure. You use the icons to perform operations on and
display the status and properties of the storage systems and their components.
Interface – Monitoring the Array
The PHYSICAL Tree within the Storage Tab provides information about the physical
hardware in an array. It is a drill-down list that starts at the enclosure, followed by
hardware type, followed by the individual hardware components.
Configuring the CLARiiON
• LUN definition
• LUN creation
Cache - Overview
Cache – HW Requirements
Read Cache
• A single SP with memory installed and Read Cache enabled
Write Cache
• Two storage processors (SPs) with memory installed
• Two power supplies in the SPE (CX600) or DPE2 (CX400 and CX200), and in each DAE2-OS
• Two link control cards (LCCs) in each DPE / DAE-OS
• Disks in slots 0:0 through 0:4
• SPS (standby power supply) with a fully charged battery
Cache – Configuring through Navisphere
To enable cache, right-click the array icon, select Properties, and click the Memory tab.
The Memory tab lets you assign SP memory to the SP A and SP B read cache, write cache, and
RAID 3 memory partitions.
Note: Some or all memory properties appear blank if the storage system’s SPs are not installed; N/A
appears if the storage system is unsupported.
SP Memory – Displays memory properties for SP A and SP B.
Total Memory – Size in Mbytes of the SP’s memory capacity.
SP Usage – Size in Mbytes of the SP memory reserved for the SP’s use.
Free Memory – Size in Mbytes of the SP memory available for the read cache, write cache, and
RAID 3 memory partitions.
Memory Allocation – Graphical representation of the current SP memory partitions. Displays only for
installed, supported SPs.
User-Customizable Partitions
SP A Read Cache Memory – Sets the size in Mbytes of SP A’s current read-cache memory partition.
The slider is unavailable if the SP is not installed or is unsupported.
SP B Read Cache Memory – Sets the size in Mbytes of SP B’s current read-cache memory partition.
The slider is unavailable if the SP is not installed or is unsupported.
Write Cache Memory – Sets the size in Mbytes of the current write-cache memory partition on both
SPs. The slider is unavailable if the SP is not installed or is unsupported.
RAID 3 Memory – If available, sets the size in Mbytes of the RAID 3 memory partition on both SPs.
OK – Applies any changes and, if successful, closes the dialog box.
Apply – Applies any changes without closing the dialog box.
Cancel – Closes the dialog box without applying any changes.
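To make the arithmetic behind this tab concrete, here is a minimal sketch in Python; the memory figures are hypothetical and do not correspond to any particular CLARiiON model.

```python
# Minimal sketch of the Memory tab arithmetic. The sizes below are
# hypothetical examples, not values from any specific CLARiiON model.

def free_memory(total_memory_mb: int, sp_usage_mb: int) -> int:
    """Memory left over for the read cache, write cache, and RAID 3 partitions."""
    return total_memory_mb - sp_usage_mb

def partitions_fit(free_mb: int, read_cache_mb: int,
                   write_cache_mb: int, raid3_mb: int = 0) -> bool:
    """True if the requested cache partitions fit within the SP's free memory."""
    return read_cache_mb + write_cache_mb + raid3_mb <= free_mb

if __name__ == "__main__":
    free = free_memory(total_memory_mb=4096, sp_usage_mb=1024)            # 3072 Mbytes free
    print(partitions_fit(free, read_cache_mb=1024, write_cache_mb=2048))  # True
```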
Cache – Configuring through Navisphere (cont’d)
Supported RAID Types – Individual Disk
[Diagram: an Individual Disk RAID Group; Blocks 0 through 4 reside on a single drive (Drive 1)]
In an Individual Disk RAID Group, a single disk drive can have a single LUN or
multiple LUNs created on it. There is no performance enhancement or
protection. It can be used if a host is performing volume management, or
as a means to make use of “stray” drives of differing sizes.
Supported RAID Types - RAID 0 (Striping)
[Diagram: host writes enter through the SP’s front-end Fibre ports into write cache and exit through the back-end Fibre ports, where the blocks are striped across the disks of the RAID Group]
• Distributes data across several disks
• Several disks act together; increased performance, though no protection
In RAID-0, if any disk fails, storage data is lost forever. Performance can
be quite good, though, as the disks can be accessed in parallel. RAID-0 is
suited to storing data that does not change (software libraries, reference
materials, etc.) or temporary data that can be recreated. Live data that
cannot be recreated should never be stored on RAID-0.
In CLARiiON storage systems, RAID-0 requires a minimum of 3 disks and a
maximum of 16 disks.
• RAID-0 implements a striped disk array: the data is broken down into
blocks and each block is written to a separate disk drive
• I/O performance is greatly improved by spreading the I/O load across
many channels and drives
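As a rough illustration of the striping described above, the following Python sketch maps logical block numbers onto member disks in a simple round-robin fashion; the eight-disk layout mirrors the slide diagram, not the actual FLARE on-disk format.

```python
# Round-robin striping sketch: logical block i lands on disk (i mod N) in
# stripe row (i // N). Illustrative only; not the FLARE on-disk layout.

def stripe_location(block: int, num_disks: int) -> tuple[int, int]:
    """Return (disk_index, stripe_row) for a logical block in a RAID-0 group."""
    return block % num_disks, block // num_disks

if __name__ == "__main__":
    for block in range(16):                        # blocks 0 through 15
        disk, row = stripe_location(block, num_disks=8)
        print(f"Block {block:2d} -> disk {disk}, stripe row {row}")
```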
Supported RAID Types - RAID 1 (Mirroring)
[Diagram: a RAID-1 mirrored pair; Blocks 0 through 3 on Drive 1 are duplicated as Blocks 0’ through 3’ on Drive 2]
In RAID-1, the mirror disk (disk 2 in the drawing above) always holds a
mirror copy of the first disk (disk 1, above). This direct approach is easy to
manage – when writing, send data to both disks; when reading, access
either disk. Performance is best when data is accessed sequentially (as
with database logs). In CLARiiON storage systems, RAID-1 requires two
disks – no more, no less.
With mirroring alone, RAID-1 provides only a single disk of storage. To
store larger amounts of data, let’s see about using both striping and
mirroring together.
• One Write or two Reads possible per mirrored pair
• 100% redundancy of data means no rebuild of data is necessary in case
of disk failure, just a copy to the replacement disk
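A toy model of the behaviour described above (every write goes to both disks, a read may be served from either) might look like this; it is an illustration only, not array firmware.

```python
import random

# Toy model of a RAID-1 mirrored pair: writes go to both members,
# reads may be satisfied by either member. Illustration only.

class MirroredPair:
    def __init__(self) -> None:
        self.primary: dict[int, bytes] = {}
        self.mirror: dict[int, bytes] = {}

    def write(self, block: int, data: bytes) -> None:
        self.primary[block] = data       # "send data to both disks"
        self.mirror[block] = data

    def read(self, block: int) -> bytes:
        source = random.choice((self.primary, self.mirror))  # "access either disk"
        return source[block]

if __name__ == "__main__":
    pair = MirroredPair()
    pair.write(0, b"log record")
    print(pair.read(0))                  # b'log record', served from either copy
```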
Supported RAID Types – RAID 1/0 (Striping and Mirroring)
[Diagram: a RAID 1/0 RAID Group combining striping and mirroring]
Supported RAID Types – RAID 5 (Striping and Parity)
[Diagram: host writes enter through the SP’s front-end Fibre ports into write cache and exit through the back-end Fibre ports, where data blocks and parity are striped across the disks of the RAID Group]
• Uses striping (for performance) and parity (for data protection)
• Cheaper, but can only survive failure of one disk
RAID-5 uses parity to protect against data loss. Parity protection uses the
exclusive-OR, or “XOR”, function to, essentially, add the data across the
stripe. Thus, if one of the data disks fails, the missing data can be recreated
by “subtracting” the available data (from the still-functioning disks) from the
parity. Note that in RAID-5, parity data moves from disk to disk (above, from
Disk 5 to Disk 4) as we move from stripe to stripe. In CLARiiON storage
systems, RAID-5 requires a minimum of 3 disks and a maximum of 16 disks.
For larger data sets (like data warehouses), RAID-5 may be a cheaper
alternative to RAID-1/0. However, data protection is not as comprehensive
as RAID-1/0, as multiple disk failures in RAID-5 result in irreparable data
loss. The calculation of parity is complex, which can degrade performance,
particularly for write operations.
• Each entire data block is written on a data disk; parity for blocks in the
same rank is generated on Writes, recorded in a distributed location and
checked on Reads.
• Highest Read data transaction rate
• Medium Write data transaction rate
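The XOR “addition” and “subtraction” described above can be shown in a few lines of Python; the block contents are arbitrary examples.

```python
from functools import reduce

# Parity is the XOR of the data blocks in a stripe. If one data block is
# lost, XOR-ing the parity with the surviving blocks "subtracts" them back
# out and recreates the missing block. Block contents are arbitrary examples.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks: list[bytes]) -> bytes:
    return reduce(xor_blocks, blocks)

if __name__ == "__main__":
    stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]    # four data blocks
    p = parity(stripe)                                # stored in the parity position

    lost = 2                                          # pretend the disk holding "CCCC" fails
    survivors = [blk for i, blk in enumerate(stripe) if i != lost]
    recovered = parity(survivors + [p])               # XOR survivors with the parity
    print(recovered == stripe[lost])                  # True
```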
Supported RAID Types – RAID 3 (Striping and Parity)
[Diagram: a RAID 3 RAID Group combining striping and parity]
Supported RAID Types – Hot Spare
[Diagram: a Hot Spare drive standing by alongside active drives 1 through 5]
Any drive in an array can be a Hot Spare, with the notable exception of the
Vault Drives (DPE 0-9 on FC Series Arrays; drives 0_0_0 through 0_0_4 on
CX series).
The Hot Spare sits idle until a Drive Fault is registered in the core/base
software on a Protected RAID Group (of type 1, 1/0, 3, or 5).
During a drive failure, a Protected RAID Group goes into degraded mode.
In most cases, the next drive failure will result in the permanent loss of all
data on that RAID Group. The Hot Spare is designed to immediately
replace a failed drive without user intervention. This will remove the
degraded designation as soon as possible and help prevent data loss. It
also removes the urgency surrounding drive replacement.
Hot Spare - Operation
1. A disk in a protected RAID Group fails or is removed.
2. A Global Hot Spare (if available) automatically assumes the identity of
the failed drive.
3. If the Group is RAID 3 or RAID 5, the hot spare is rebuilt from the data and
parity in the RAID Group.
4. If the Group is RAID 1 or RAID 1/0, the hot spare equalizes data with the
surviving member.
When a Protected RAID Group has a drive failure, the Hot Spare
immediately takes the identity in FLARE/Base of the failed drive.
If the RAID Group was Parity protected (RAID 3 or 5), the data will be rebuilt
to the Hot Spare using the data and parity on the remaining drives.
If the RAID Group was Mirror protected (RAID 1 or 1/0), the data will be
rebuilt to the Hot Spare by equalizing with the surviving partner.
Rebuilding is the reconstitution of the missing data by calculation between
the parity disks and the data disks. Equalization is the block copy of data
from a completely rebuilt hot spare or a surviving mirror to a replacement
disk.
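A rough way to summarize the rebuild/equalize distinction in code is shown below; the names and structure are a sketch for illustration, not FLARE internals.

```python
# Sketch of which recovery method a hot spare uses for each protected RAID
# type, per the description above. Illustration only, not FLARE internals.

REBUILD_FROM_PARITY = {"RAID 3", "RAID 5"}      # reconstruct from data + parity
EQUALIZE_FROM_MIRROR = {"RAID 1", "RAID 1/0"}   # block-copy from the surviving mirror

def recovery_method(raid_type: str) -> str:
    if raid_type in REBUILD_FROM_PARITY:
        return "rebuild"     # recalculate missing data from the remaining drives
    if raid_type in EQUALIZE_FROM_MIRROR:
        return "equalize"    # copy blocks from the surviving partner
    raise ValueError(f"{raid_type} is not a protected RAID type")

if __name__ == "__main__":
    for rt in ("RAID 5", "RAID 1/0"):
        print(rt, "->", recovery_method(rt))
```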
Hot Spare – Operation cont’d
When a failed drive is replaced in a RAID Group being covered by a Hot Spare, the
data from the Hot Spare will be equalized to the new drive. This will not
take place until the data has been completely rebuilt to the Hot Spare.
After the replaced drive has been completely equalized with the Hot Spare,
it reclaims its original identity (bus, enclosure, slot) and processing resumes
normally. The Hot Spare will resume its standby state, idly waiting for the
next drive failure on another protected RAID Group.
RAID Summary
Each RAID type uses different strategies to protect storage data. There are
numerous other RAID types defined by the industry-wide RAID Advisory
Board (www.raid-advisory.com) – those listed above are supported by the
CLARiiON.
Creating RAID Groups
There are several advanced parameters that may be set for a RAID Group:
• RAID Group ID, Support RAID Type – As on previous page.
• Choose Disks – A radio button that defines whether the physical disks in
the RAID Group are to be selected “Automatically” or “Manually”. If
“Automatically” is selected, the “Number of Disks” drop-down menu is
activated; if “Manually” is selected, a selection dialog box opens (more
on this shortly). Default: Disks from the lowest-address enclosures
appear first, with disks in the lowest-valued slot position within an
enclosure appearing before disks in higher-valued slot positions.
• Expansion/Defragmentation Priority – Defines the relative priority of
RAID Group expansion and defragmentation activities. Possible values
are “Low”, “Medium”, and “High”. Default: Medium.
• Automatically Destroy after last LUN is unbound – If checked, the RAID
Group is destroyed once the last LUN in the group is unbound. Default:
unchecked.
If disks are to be selected manually, click the “Select…” button to choose the
disks.
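The default automatic ordering (lowest-addressed enclosure first, lowest slot within an enclosure first) amounts to a simple sort; the sketch below uses the bus_enclosure_slot notation from the Hot Spare discussion with hypothetical disk addresses.

```python
from typing import NamedTuple

# Hypothetical sketch of the default "Automatic" disk ordering: disks from
# the lowest-addressed enclosure appear first, with lower-numbered slots
# before higher-numbered slots within each enclosure.

class Disk(NamedTuple):
    bus: int
    enclosure: int
    slot: int

def auto_select(disks: list[Disk], count: int) -> list[Disk]:
    ordered = sorted(disks, key=lambda d: (d.bus, d.enclosure, d.slot))
    return ordered[:count]

if __name__ == "__main__":
    pool = [Disk(0, 1, 3), Disk(0, 0, 7), Disk(0, 0, 5), Disk(1, 0, 2)]
    print(auto_select(pool, 3))   # picks 0_0_5, 0_0_7, 0_1_3, in that order
```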
Logical Units (LUN)
• Part or all of a RAID Group in a CLARiiON storage system
• Hosts see LUNs as physical disks or volumes
• LUNs are created through the bind process and accessed through the SP to which they are assigned
• All LUNs on a RAID Group share the same RAID type
[Diagram: a host HBA connects through SP A to a five-disk RAID Group containing LUN 0 (20 GB, RAID 5), LUN 1 (10 GB, RAID 5), and LUN 2 (30 GB, RAID 5)]
Logical units are the fundamental access point for data. LUNs are more than
just RAID Groups – they are the fundamental focus of data access and,
when there are system problems, for failover as well. This module looks at
the options available when binding a unit, and how these changes affect the
core software database, as well as how the bind operation transforms the
contents of the disk sectors in storage space. It reviews how end users
access the data stored on logical units through storage processors, and how
access can be transferred from one SP to the other.
There are several terms associated with the data protection performed by
core software at the logical unit level. The bind process and LUN
assignment are two key elements to CLARiiON’s use of logical units.
LUN Specifications
Logical Unit Binding
To open this dialog box, right-click a storage-system icon and click Bind LUN.
Bind LUN (RAID Group systems) – Lets you bind one or more LUNs of a specified size within a RAID
Group and specify details such as SP owner, element size, and the number of LUNs to bind.
RAID Type – Sets the RAID type of the LUN you are binding.
RAID Group for new LUN – Sets the RAID Group for the LUN you are binding. Displays only those RAID
Groups available for the selected RAID type (those that contain the proper number of disks). The RAID
Group assumes the RAID type of the first LUN that is bound within it. RAID Group IDs range from 0 through
243; the RAID Group ID is assigned when the RAID Group is created.
Note: After one LUN is bound on a RAID Group, the RAID type of the Group displays in the Storage tree.
New – Opens the Create RAID Group dialog box and lets you create a new RAID Group.
Free Capacity – Size in GB and bytes of the user capacity of the RAID Group available for
binding LUNs.
Largest Contiguous Free Space – Size in GB of the largest contiguous span of free space in the RAID
Group. LUNs must fit into a contiguous span of free space.
LUN ID – Sets the LUN ID of the new LUN. The default value is the smallest available ID for the currently
selected storage system.
Element Size – Sets the stripe element size: the size in disk sectors (512 bytes each) that the storage
system can read and write on one disk without requiring access to another disk.
Rebuild Priority – Sets the rebuild priority for the rebuild operations that occur automatically with a hot
spare and after you replace a failed disk.
Verify Priority – Sets the verify priority for a LUN verify operation that occurs automatically when the status
of LUN parity information is unknown, which can occur after an SP fails and the other SP takes over its
LUNs.
Default Owner – Sets the SP that is the default owner of the LUN. The default owner has control of the
LUN when the storage system is powered up.
LUN Size – Sets the size of the new LUN.
Enable Auto Assign – Enables or disables auto-assign for this LUN. Auto-assign applies only to a storage
system that has two SPs and a LUN that is not a hot spare.
Alignment Offset – If available, can be used when the host operating system records private information
at the start of the LUN. The default value is zero, which supports most host operating systems.
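As a worked illustration of two of these fields: the element size is given in 512-byte sectors, so multiplying by 512 yields bytes, and a new LUN must fit within the largest contiguous span of free space. The helpers below are a hypothetical sketch, not Navisphere’s own validation logic.

```python
SECTOR_BYTES = 512  # a disk sector is 512 bytes

def element_size_bytes(element_size_sectors: int) -> int:
    """Convert a stripe element size given in sectors to bytes."""
    return element_size_sectors * SECTOR_BYTES

def lun_fits(requested_gb: float, largest_contiguous_free_gb: float) -> bool:
    """A new LUN must fit into a single contiguous span of free space."""
    return requested_gb <= largest_contiguous_free_gb

if __name__ == "__main__":
    print(element_size_bytes(128))   # 128 sectors -> 65536 bytes (64 Kbytes)
    print(lun_fits(20.0, 32.5))      # True: a 20 GB LUN fits in 32.5 GB of contiguous free space
```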
Navisphere Interface – Storage Views
• Expanding the SPs will show the LUN(s) owned by that SP
Summary