
CLARiiON Configuration

Introduction to Navisphere

© 2003 EMC Corporation. All rights reserved. 1

1
Topics

z Introduction to Navisphere Interface

z How to configure Read and Write Cache

z How to create RAID Groups

z How to bind LUNs

© 2003 EMC Corporation. All rights reserved. 2

2
Interface – Logging in

IP Address of SPA or SPB

Username
Password
Scope

© 2003 EMC Corporation. All rights reserved. 3

If the Management UI has been installed on the array, simply enter the IP address of
one of the SPs. You will be prompted for the global or local login for that system.
We will deal with setting up Navisphere security in another module.

3
Interface - Overview

© 2003 EMC Corporation. All rights reserved. 4

The application uses tree structures to show the relationships of the physical and logical
storage-system components and the relationships of the host components. These trees are
analogous to the hierarchical folder structure of Microsoft Windows Explorer.
The tree views appear in the open Enterprise Storage dialog boxes in the Main window.
You use the Storage tree to control the physical and logical components of the managed
storage systems, the Hosts tree to control the LUNs and the storage systems to which the
hosts connect, and the Monitors tree to create and assign event templates to monitored
storage systems.
The managed storage systems are the base icons in the Storage tree. The Storage tree
displays a storage-system icon for each managed storage system. The managed hosts are
the base icons for the Hosts tree. In an Access Logix configuration, the Hosts tree displays
a host icon for each server attached to a storage system. The monitor icons are the base
components in the Monitors tree. The Monitors tree displays a monitor icon for each system
in the domain as well as any legacy storage systems and Agents.
You perform operations on all managed storage-system components using the menu bar,
and on an individual component or multiple components of the same type by using the
menu associated with them.
You can expand and collapse the base icons in all tree views to show icons for their
components (such as SP icons, disk icons, LUN icons, etc.) just as you can expand and
collapse the Explorer folder structure. You use the icons to perform operations on and
display the status and properties of the storage systems and their components.

4
Interface – Monitoring the Array

© 2003 EMC Corporation. All rights reserved. 5

The PHYSICAL tree within the Storage tab provides information about the physical
hardware in an array. It is a drill-down list that starts at the enclosure, followed by
the hardware type, followed by the individual hardware components.

5
Configuring the CLARiiON

z Configuring Cache settings

z RAID Type Summary

z RAID Group Creation

z LUN definition

z LUN Creation

© 2003 EMC Corporation. All rights reserved. 6

6
Cache - Overview

Storage Processor A Storage Processor B

WRITE Cache WRITE Cache


SPA’s Copy MIRRORED SPB’s Copy

SPA - READ Cache SPB - READ Cache

SPA - SYSTEM SPB - SYSTEM

© 2003 EMC Corporation. All rights reserved. 7

SP A and SP B read caching


Each SP has a read cache in its memory, which is either enabled or disabled. The
read cache on one SP is independent of the read cache on the other SP. Storage
system read caching for an SP must be enabled before any of the SP's LUNs can
enable read caching.
You can enable or disable an SP’s read cache from the Cache tab in the storage-
system properties dialog. You can set the size of an SP’s read cache from the
Memory tab in the storage-system properties dialog box.
Write caching
Each SP has a write cache in its memory, which mirrors the write cache on the
other SP. Since these caches mirror each other, they are always either enabled or
disabled and always the same size.
You can enable or disable the write cache on both SPs from the Cache tab in the
storage-system properties dialog. You can set the size of the write cache on both
SPs from the Memory tab in the storage-system properties dialog box.
Some other operations, such as setting most of the LUN caching properties, require
that the write cache be disabled. If the write cache is enabled when you perform any
of these operations, the application automatically disables the write cache for you
and re-enables it after the operation is completed.
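
The mirrored write cache behaves as a single shared resource: both SPs must agree on its state
and size, and some configuration changes temporarily disable it. Below is a minimal Python sketch
of that behaviour; the class and method names are invented for illustration and are not part of
Navisphere.

class WriteCache:
    """Models the mirrored write cache shared by SP A and SP B.

    Because the cache is mirrored, its enabled/disabled state and its
    size always apply to both SPs at once.
    """

    def __init__(self, size_mb=0):
        self.size_mb = size_mb      # identical on SP A and SP B
        self.enabled = False        # identical on SP A and SP B

    def change_lun_cache_settings(self, apply_change):
        """Some LUN caching properties require the write cache to be off.

        Mirrors the behaviour described above: disable the cache, apply
        the change, then re-enable it automatically.
        """
        was_enabled = self.enabled
        if was_enabled:
            self.enabled = False
        try:
            apply_change()
        finally:
            if was_enabled:
                self.enabled = True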

7
Cache – HW Requirements

Read Cache
z A single SP with memory installed and Read Cache enabled

Write Cache
z Two storage processors (SPs) with memory installed
z Two power supplies in the SPE (CX600) or DPE2 (CX400
and CX200), and each DAE2-OS
z Two link control cards (LCCs) in each DPE / DAE-OS
z Disks in slots 0:0 through 0:4
z SPS (standby power supply) with a fully charged battery

© 2003 EMC Corporation. All rights reserved. 8

8
Cache – Configuring through Navisphere

© 2003 EMC Corporation. All rights reserved. 9

To enable cache, right-click the array icon and select Properties, then click the Memory tab.
The Memory tab lets you assign SP memory to the SP A and SP B read cache, write cache, and
RAID 3 memory partitions.
Note Some or all memory properties appear blank if the storage system’s SPs are not installed; N/A
appears if the storage system is unsupported.
SP Memory Displays memory properties for SP A and SP B.
Total Memory Size in Mbytes of the SP’s memory capacity.
SP Usage Size in Mbytes of the SP memory reserved for the SP’s use.
Free Memory Size in Mbytes of the SP memory available for the read cache, write cache, and the
RAID 3 memory partitions.
Memory Allocation Graphical representation of the current SP memory partitions. Displays only for
installed, supported SPs.
User Customizable Partitions
SP A Read Cache Memory Sets the size in Mbytes of SP A’s current read cache memory partition.
The slider is unavailable if the SP is not installed or is unsupported.
SP B Read Cache Memory Sets the size in Mbytes of SP B’s current read-cache memory partition.
The slider is unavailable if the SP is not installed or is unsupported.
Write Cache Memory Sets the size in Mbytes of the current write cache memory partition on both
SPs. The slider is unavailable if the SP is not installed or is unsupported.
RAID 3 Memory If available, sets the size in Mbytes of the RAID 3 memory partition on both SPs.
OK Applies any changes and, if successful, closes the dialog box.
Apply Applies any changes without closing the dialog box.
Cancel Closes the dialog box without applying any changes.
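
As a rough illustration of the Memory tab arithmetic, each SP must hold its own read cache plus the
shared write cache and RAID 3 partitions within its free memory. A small Python sketch with made-up
values; the function name and numbers are illustrative only.

def check_memory_partitions(free_memory_mb, spa_read_mb, spb_read_mb,
                            write_cache_mb, raid3_mb=0):
    """Raise if the requested partitions exceed an SP's free memory.

    The write cache and RAID 3 partitions are allocated on both SPs,
    so each SP must fit its own read cache plus the shared partitions.
    """
    for sp_name, read_mb in (("SP A", spa_read_mb), ("SP B", spb_read_mb)):
        used = read_mb + write_cache_mb + raid3_mb
        if used > free_memory_mb:
            raise ValueError(f"{sp_name}: {used} MB requested, "
                             f"only {free_memory_mb} MB free")

# Hypothetical SP with 3072 MB free: 512 MB read cache per SP plus a
# 2048 MB mirrored write cache fits comfortably.
check_memory_partitions(3072, spa_read_mb=512, spb_read_mb=512,
                        write_cache_mb=2048)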

9
Cache – Configuring through Navisphere (cont’d)

© 2003 EMC Corporation. All rights reserved. 10

Storage System Properties - Cache Tab


Displays the current properties of a storage system and lets you modify several of them.
Configuration Page Size Size of a page in Kbytes (KB) of the SP’s read or write cache. Valid values
are 2, 4, 8 or 16 KBytes (8KBytes is the default size). Displays N/A if the SPs are unsupported.
Low Watermark Current low watermark percentage for the storage system. Enable Watermark
Processing must be enabled.
High Watermark Current high watermark percentage for the storage system. Enable Watermark
Processing must be enabled.
Mirrored Write Cache If cleared, this allows single SP (non mirrored) write caching, which we do not
recommend because SP failure with cached data can corrupt all LUNs whose data is cached. By
default, Mirrored Write Cache is enabled and appears dimmed for the SPs in most types of storage
systems.
SP A/B Read Cache Enables or disables read caching for SP A/B.
Write Cache (Disabled) Enables or disables write caching for the storage system, and displays the
current state of the storage system’s write caching. When the check box is cleared, write caching is
unavailable and the storage system writes all modified pages to disk and clears the cache.
RAID 3 Write Buffer Available only if supported by the Core Software installed on the storage
system.
HA Cache Vault (supported only on CX-Series storage systems) Determines the availability of
storage-system write caching when a single drive in the cache vault fails. When you select the check
box (this is the default), write caching is disabled if a single vault disk fails. When you clear the check
box, write caching is not disabled if a single vault disk fails. Important: If you do not disable write
caching when a single cache vault disk fails, the data is at risk if another cache vault disk should fail.
Statistics Displays statistics properties for SP A and SP B.
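
The low and high watermarks govern how aggressively an SP flushes dirty write-cache pages to disk.
The sketch below is a simplified Python model of that idea only, not the actual flushing algorithm,
and assumes watermark processing is enabled.

def flush_activity(dirty_pages, total_pages, low_pct, high_pct):
    """Describe flush activity for a given cache fill level (simplified)."""
    fill_pct = 100.0 * dirty_pages / total_pages
    if fill_pct < low_pct:
        return "idle / trickle flushing"
    if fill_pct < high_pct:
        return "normal flushing"
    return "aggressive flushing to get back below the high watermark"

# e.g. 70%/90% watermarks with the cache 85% dirty -> "normal flushing"
print(flush_activity(dirty_pages=850, total_pages=1000, low_pct=70, high_pct=90))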

10
Supported RAID Types – Individual Disk

z CLARiiON arrays allow the creation of an “individual disk” RAID Group.

z There is no performance or protection advantage.

(Diagram: host writes pass through the SP front-end fibre, the write cache, and the
back-end fibre to a RAID Group consisting of a single drive, Drive 1, holding
Blocks 0 through 4.)

© 2003 EMC Corporation. All rights reserved. 11

In an Individual Disk RAID Group, a single disk drive can have a single LUN or
multiple LUNs created on it. There is no performance enhancement or
protection. It can be used if a host is performing volume management, or
as a means to make use of “stray” drives of differing sizes.

11
Supported RAID Types - RAID 0 (Striping)
z Distributes data across several disks

z Several disks act together; increased performance, though no protection

(Diagram: host writes pass through the SP front-end fibre, write cache, and back-end
fibre to a five-disk RAID Group striped as follows.)

STRIPE 0   Block 0    Block 1    Block 2    Block 3    Block 4
STRIPE 1   Block 5    Block 6    Block 7    Block 8    Block 9
STRIPE 2   Block 10   Block 11   Block 12   Block 13   Block 14

           Drive 1    Drive 2    Drive 3    Drive 4    Drive 5

© 2003 EMC Corporation. All rights reserved. 12

In RAID-0, if any disk fails, storage data is lost forever. Performance can
be quite good, though, as the disks can be accessed in parallel. RAID-0 is
suited to storing data that does not change (software libraries, reference
materials, etc.) or temporary data that can be recreated. Live data that
cannot be recreated should never be stored on RAID-0.
In CLARiiON storage systems, RAID-0 requires a minimum of 3 disks and a
maximum of 16 disks.
• RAID-0 implements a striped disk array: the data is broken down into
blocks and each block is written to a separate disk drive
• I/O performance is greatly improved by spreading the I/O load across
many channels and drives
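
The striping shown in the diagram is simple modular arithmetic: ignoring the element size, block N
lands on drive (N mod number-of-disks), in stripe (N div number-of-disks). A short Python sketch
that reproduces the five-disk layout above:

def raid0_location(block, num_disks=5):
    """Return (stripe, drive) for a logical block in a RAID-0 group.

    Drives are numbered 1..num_disks to match the diagram above, and
    each block is treated as one stripe element.
    """
    stripe = block // num_disks
    drive = (block % num_disks) + 1
    return stripe, drive

# Block 7 in a 5-disk group -> stripe 1, drive 3, matching the diagram.
print(raid0_location(7))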

12
Supported RAID Types - RAID 1 (Mirroring)

z Makes an exact mirror copy of all data

z Great protection but higher in cost than other RAID types

(Diagram: host writes pass through the SP to a two-disk RAID Group; Drive 1 holds
Blocks 0–3 and Drive 2 holds the mirror copies, Blocks 0’–3’.)

© 2003 EMC Corporation. All rights reserved. 13

In RAID-1, the mirror disk (disk 2 in the drawing above) always holds a
mirror copy of the first disk (disk 1, above). This direct approach is easy to
manage – when writing, send data to both disks; when reading, access
either disk. Performance is best when data is accessed sequentially (as
with database logs). In CLARiiON storage systems, RAID-1 requires two
disks – no more, no less.
With mirroring alone, RAID-1 provides only a single disk of storage. To
store larger amounts of data, let’s see about using both striping and
mirroring together.
• One Write or two Reads possible per mirrored pair
• 100% redundancy of data means no rebuild of data is necessary in case
of disk failure, just a copy to the replacement disk

13
Supported RAID Types – RAID 1/0 (Striping and Mirroring)

z Uses striping and mirroring together

z Good performance from striping; great data protection from mirroring, but costly –
twice as many disks as RAID 0

(Diagram: host writes pass through the SP to a six-disk RAID Group of mirrored pairs,
striped as follows.)

STRIPE 0   Block 0   Block 0’   Block 1   Block 1’   Block 2   Block 2’
STRIPE 1   Block 3   Block 3’   Block 4   Block 4’   Block 5   Block 5’
STRIPE 2   Block 6   Block 6’   Block 7   Block 7’   Block 8   Block 8’

           Drive 1   Drive 2    Drive 3   Drive 4    Drive 5   Drive 6

© 2003 EMC Corporation. All rights reserved. 14

Since RAID-1/0 uses striping, RAID-1/0 improves performance. Mirrored


protection is easy to implement, which also enhances performance. RAID-
1/0 also provides the best data protection – as many as half of the disks in a
RAID-1/0 group can fail, and the data is still accessible through the
remaining disks. It is suited to applications (like OLTP) that require large
data sets and maximum performance. In CLARiiON storage systems,
RAID-1/0 requires an even number of disks, with a minimum of 4 disks and
a maximum of 16 disks.
RAID-1/0 is sometimes called the “Mercedes” level of data protection –
maximum data protection and optimum performance, though at a high price.
For this reason, there are other, less expensive data protection options.
• RAID-1/0 has the same fault tolerance as RAID-1
• RAID-1/0 has the same overhead for fault-tolerance as mirroring alone
• High I/O rates are achieved by striping RAID-1 segments
• Excellent solution for sites that would have otherwise gone with RAID-1
but need some additional performance boost
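
Viewed as block arithmetic, RAID-1/0 stripes across mirrored pairs: block N goes to pair
(N mod number-of-pairs), and each pair holds a primary copy and its mirror. A Python sketch that
matches the six-disk layout in the diagram; the return format is invented for illustration.

def raid10_location(block, num_disks=6):
    """Return the stripe and the mirrored pair of drives holding a block.

    Drives are grouped into pairs (1,2), (3,4), (5,6) as in the diagram:
    the first drive of the pair holds the block, the second its mirror.
    """
    pairs = num_disks // 2
    stripe = block // pairs
    pair_index = block % pairs
    primary = 2 * pair_index + 1
    return {"stripe": stripe, "primary_drive": primary, "mirror_drive": primary + 1}

# Block 4 -> stripe 1, primary drive 3, mirror drive 4, matching the diagram.
print(raid10_location(4))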

14
Supported RAID Types – RAID 5 (Striping and Parity)

z Uses striping (for performance) and parity (for data protection)

z Cheaper, but can only survive failure of one disk

(Diagram: host writes pass through the SP to a five-disk RAID Group striped with
rotating parity as follows.)

STRIPE 0   Block 0    Block 1    Block 2    Block 3    PARITY
STRIPE 1   Block 4    Block 5    Block 6    PARITY     Block 7
STRIPE 2   Block 8    Block 9    PARITY     Block 10   Block 11
STRIPE 3   Block 12   PARITY     Block 13   Block 14   Block 15

           Drive 1    Drive 2    Drive 3    Drive 4    Drive 5

© 2003 EMC Corporation. All rights reserved. 15

RAID-5 uses parity to protect against data loss. Parity protection uses the
exclusive-OR, or “XOR”, function to, essentially, add the data across the
stripe. Thus, if one of the data disks fails, the missing data can be recreated
by “subtracting” the available data (from the still-functioning disks) from the
parity. Note that in RAID-5, parity data moves from disk to disk (above, from
Disk 5 to Disk 4) as we move from stripe to stripe. In CLARiiON storage
systems, RAID-5 requires a minimum of 3 disks and a maximum of 16 disks.
For larger data sets (like data warehouses), RAID-5 may be a cheaper
alternative to RAID-1/0. However, data protection is not as comprehensive
as RAID-1/0, as multiple disk failures in RAID-5 result in irreparable data
loss. The calculation of parity is complex, which can degrade performance,
particularly for write operations.
• Each entire data block is written on a data disk; parity for blocks in the
same rank is generated on Writes, recorded in a distributed location and
checked on Reads.
• Highest Read data transaction rate
• Medium Write data transaction rate
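
Because the parity block is the XOR of the data blocks in its stripe, any one missing block can be
recovered by XOR-ing the parity with the surviving blocks. A small Python sketch of that
reconstruction, with byte strings standing in for disk blocks:

from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A 4+1 RAID-5 stripe: four data blocks plus their parity.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# The drive holding data[2] fails; rebuild its block from the survivors and parity.
rebuilt = xor_blocks([data[0], data[1], data[3], parity])
assert rebuilt == data[2]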

15
Supported RAID Types – RAID 3 (Striping and Parity)

z Uses striping (for performance) and parity (for data protection)

z Small element size (byte-level) means better performance for large I/Os, worse for
small I/Os

z Cheaper – can only survive failure of one disk

(Diagram: host writes pass through the SP to a five-disk RAID Group striped at the
byte level, with all parity on a dedicated disk.)

STRIPE 0   Byte 0    Byte 1    Byte 2    Byte 3    PARITY
STRIPE 1   Byte 4    Byte 5    Byte 6    Byte 7    PARITY
STRIPE 2   Byte 8    Byte 9    Byte 10   Byte 11   PARITY
STRIPE 3   Byte 12   Byte 13   Byte 14   Byte 15   PARITY

           Drive 1   Drive 2   Drive 3   Drive 4   Drive 5

© 2003 EMC Corporation. All rights reserved. 16

RAID-3 uses parity to protect against data loss. RAID-3 is optimized to


provide better performance (compared to RAID-5) for single-threaded reads
and writes that are larger than 32 KB. For multi-threaded or small-sized
I/Os, RAID-5 is preferred.
• Very high Read data transfer rate
• Very high Write data transfer rate
• Disk failure has a low impact on throughput
• The data block is subdivided (“striped”) and written on the data disks.
Stripe parity is generated on writes and recorded on the parity disk.
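
The guidance above reduces to a simple rule of thumb: RAID-3 for large, single-threaded transfers,
RAID-5 otherwise. A one-function Python sketch, with the 32 KB threshold taken from the paragraph
above:

def suggested_parity_raid(io_size_kb, single_threaded):
    """Rule of thumb from the notes: RAID-3 suits single-threaded I/O
    larger than 32 KB; multi-threaded or small I/O favours RAID-5."""
    if single_threaded and io_size_kb > 32:
        return "RAID 3"
    return "RAID 5"

print(suggested_parity_raid(io_size_kb=256, single_threaded=True))   # RAID 3
print(suggested_parity_raid(io_size_kb=8, single_threaded=False))    # RAID 5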

16
Supported RAID Types – Hot Spare

z A Global Hot Spare is an idle disk drive which replaces a failed drive in any
protected RAID Group.
z CLARiiON Hot Spare Implementation
z Any disk on a CLARiiON can be configured as a Hot Spare except for Vault Drives
(DPE / DAE-OS drives 0-4)
z A Hot Spare disk drive needs to be at least as large as the largest drive it may
have to replace.

(Diagram: a Hot Spare standing idle beside a protected five-disk RAID Group,
Drives 1 through 5.)


© 2003 EMC Corporation. All rights reserved. 17

Any drive in an array can be a Hot Spare, with the notable exception of the
Vault Drives (DPE 0-9 on FC Series Arrays; drives 0_0_0 through 0_0_4 on
CX series).
The Hot Spare sits idle until a Drive Fault is registered in the core/base
software on a Protected RAID Group (of type 1, 1/0, 3, 5).
During a drive failure, a Protected RAID Group goes into degraded mode.
In most cases, the next drive failure will result in the permanent loss of all
data on that RAID Group. The Hot Spare is designed to immediately
replace a failed drive without user intervention. This will remove the
degraded designation as soon as possible and help prevent data loss. It
also removes the urgency surrounding drive replacement.

17
Hot Spare - Operation
1. A disk in a protected RAID Group fails or is removed.
2. A Global Hot Spare (if available) automatically assumes the identity of
the failed drive.
3. If the group is RAID 3 or RAID 5, the hot spare is rebuilt from the data and
parity information in the RAID Group.

(Diagram: the hot spare takes the place of failed Drive 3 in a five-disk group.)

4. If the group is RAID 1 or RAID 1/0, the hot spare equalizes data with the
surviving member.

(Diagram: the hot spare takes the place of failed Drive 2 in a mirrored pair.)

© 2003 EMC Corporation. All rights reserved. 18

When a Protected RAID Group has a drive failure, the Hot Spare
immediately takes the identity in FLARE/Base of the failed drive.
If the RAID Group was Parity protected (RAID 3 or 5), the data will be rebuilt
to the Hot Spare using the data and parity on the remaining drives.
If the RAID Group was Mirror protected (RAID 1 or 1/0), the data will be
rebuilt to the Hot Spare by equalizing with the surviving partner.
Rebuilding is the reconstitution of the missing data by calculation between
the parity disks and the data disks. Equalization is the block copy of data
from a completely rebuilt hot spare or a surviving mirror to a replacement
disk.
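
The rebuild-versus-equalize distinction above can be summarised in a few lines. A Python sketch;
the function name is illustrative only:

def hot_spare_recovery(raid_type):
    """How a hot spare obtains the failed drive's data, per the notes above."""
    if raid_type in ("RAID 3", "RAID 5"):
        # Parity-protected: reconstruct the missing data from the data
        # and parity on the remaining drives of the group.
        return "rebuild from surviving data and parity"
    if raid_type in ("RAID 1", "RAID 1/0"):
        # Mirror-protected: block-copy from the surviving mirror partner.
        return "equalize from the surviving mirror"
    return "not applicable (unprotected RAID group)"

print(hot_spare_recovery("RAID 5"))    # rebuild
print(hot_spare_recovery("RAID 1/0"))  # equalize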

18
Hot Spare – Operation cont’d

z When the failed drive is physically replaced, the data from the Hot Spare is
equalized to the new drive.
z This will not take place until the hot spare has finished rebuilding and the
RAID Group is no longer degraded.
z After equalizing, the replaced drive and the Hot Spare will assume their proper
disk identities.
z The Hot Spare will then be placed in a “ready” state for the next disk failure.

(Diagram: Drives 1 through 5 back in service, with the Hot Spare standing by.)

© 2003 EMC Corporation. All rights reserved. 19

When a failed drive is replaced in a RAID Group being covered by a Hot Spare, the
data from the Hot Spare is equalized to the new drive. This does not take place
until the data has been completely rebuilt to the Hot Spare.
After the replaced drive has been completely equalized with the Hot Spare,
it reclaims its original identity (bus, enclosure, slot) and processing resumes
normally. The Hot Spare will resume its standby state, idly waiting for the
next drive failure on another protected RAID Group.

19
RAID Summary

z CLARiiON storage systems provide several different


RAID types with different levels of protection

RAID Type      Level of Data Protection   Disk Counts    Quality/Cost of Protection
Single Disk    none                       1              none
0              none                       3-16           none
1              mirror                     2              high/high
1/0            striping/mirror            4-16 (even)    high/high
3              striping/parity            5 or 9         good/low
5              striping/parity            3-16           good/low
Hot Spare      N/A                        1              N/A

© 2003 EMC Corporation. All rights reserved. 20

Each RAID type uses different strategies to protect storage data. There are
numerous other RAID types defined by the industry-wide RAID Advisory
Board (www.raid-advisory.com) – those listed above are supported by the
CLARiiON.

20
Creating RAID Groups

© 2003 EMC Corporation. All rights reserved. 21

There are several advanced parameters that may be set for a RAID Group:
• RAID Group ID, Support RAID Type – As on previous page.
• Choose Disks – A radio button that defines whether the physical disks in
the RAID Group are to be selected “Automatically” or “Manually”. If
“Automatically” is selected, the “Number of Disks” drop-down menu is
activated; if “Manually” is selected, a selection dialog box opens (more
on this shortly). Default: Disks from the lowest-address enclosures
appear first, with disks in the lowest-valued slot position within an
enclosure appearing before disks in higher-valued slot positions.
• Expansion/Defragmentation Priority – Defines the relative priority of
RAID Group expansion and defragmentation activities. Possible values
are “Low”, “Medium”, and “High”. Default: Medium.
• Automatically Destroy after last LUN is unbound – If checked, the RAID
Group is destroyed once the last LUN in the group is unbound. Default:
unchecked.
If disks are to be selected manually, click the “Select…” button to choose the
disks.
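
The default automatic selection order described above (lowest-addressed enclosure first, then lowest
slot) amounts to a simple sort. A Python sketch using an invented (enclosure, slot) tuple format,
not a Navisphere data structure:

def auto_select_disks(available, count):
    """Pick `count` free disks in the default automatic order:
    lowest-addressed enclosure first, then lowest slot within it."""
    ordered = sorted(available, key=lambda disk: (disk[0], disk[1]))
    if count > len(ordered):
        raise ValueError("not enough free disks for this RAID Group")
    return ordered[:count]

free_disks = [(1, 3), (0, 7), (0, 5), (2, 0), (1, 0)]
print(auto_select_disks(free_disks, 3))   # [(0, 5), (0, 7), (1, 0)]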

21
Logical Units (LUN)
z Part or all of a RAID Group in a CLARiiON storage system
z Hosts see LUNs as physical disks or volumes
z LUNs are created through the bind process and accessed through the SP
to which they are assigned
z All LUNs on a RAID Group share the same RAID Type.

(Diagram: a host with two HBAs connects through SP A and SP B to a five-disk RAID 5
Group carved into four LUNs: LUN 0 – 20 GB, LUN 1 – 10 GB, LUN 2 – 30 GB, and
LUN 3 – 15 GB, all RAID 5.)

© 2003 EMC Corporation. All rights reserved. 22

Logical units are the fundamental access point for data. LUNs are more than
just RAID Groups – they are the fundamental focus of data access and,
when there are system problems, for failover as well. This module looks at
the options available when binding a unit, and how these changes affect the
core software database, as well as how the bind operation transforms the
contents of the disk sectors in storage space. It reviews how end users
access the data stored on logical units through storage processors, and how
access can be transferred from one SP to the other.
There are several terms associated with the data protection performed by
core software at the logical unit level. The bind process and LUN
assignment are two key elements to CLARiiON’s use of logical units.

22
LUN Specifications

z There can be multiple LUNs on a single RAID Group


– 1 to 128 LUNs per RAID-0, RAID-1, RAID-1/0, or RAID-5
RAID Group
– 32 LUNs per RAID Group prior to Release 12 Base software
– 1 LUN per RAID-3 Group
– You can create up to 223 LUNs per FC Series array
– You can create up to 1024 on CX600
– You can create up to 512 on a CX400
– You can create up to 256 on a CX200
z You can enable/disable Read/Write caching on a per
LUN basis.
– Preserve caching performance for most critical applications

© 2003 EMC Corporation. All rights reserved. 23

Some comments on the rules above:


• Each RAID Group can be partitioned into one or more LUNs.
• If a RAID Group has a single LUN that uses all the space in a RAID
Group, then the expansion of the RAID Group results in the expansion of
a LUN. This is the only way to increase the size of a bound LUN.
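
The per-array limits on the slide are easy to capture for quick sanity checks. The dictionary values
below come straight from the bullet list above; the function name is illustrative only.

# Maximum LUNs per array, from the slide above.
MAX_LUNS_PER_ARRAY = {"FC-Series": 223, "CX600": 1024, "CX400": 512, "CX200": 256}
MAX_LUNS_PER_RAID_GROUP = 128   # 1 for a RAID-3 group; 32 prior to Release 12 base software

def can_bind_another_lun(model, current_lun_count):
    """Check the array-wide LUN limit for the given model."""
    return current_lun_count < MAX_LUNS_PER_ARRAY[model]

print(can_bind_another_lun("CX400", 511))   # True - one slot left
print(can_bind_another_lun("CX400", 512))   # False - at the limit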

23
Logical Unit Binding

© 2003 EMC Corporation. All rights reserved. 24

To open this dialog box, right-click a storage system icon, click Bind LUN.
Bind LUN (RAID Group systems) Lets you bind one or more LUNs of a specified size within a RAID
Group and specify details such as SP owner, element size, and the number of LUNs to bind.
RAID Type Sets the RAID type of the LUN you are binding.
RAID Group for new LUN Sets the RAID Group for the LUN you are binding. Displays only those RAID
Groups available for the selected RAID type (contain the proper number of disks). The RAID Group
assumes the RAID type of the first LUN that is bound within it. The RAID Group IDs range from 0 through
243; the RAID Group ID is assigned when the RAID Group is created.
Note After one LUN is bound on a RAID Group, the RAID type of the Group displays in the Storage tree.
New Opens the Create RAID Group dialog box and lets you create a new RAID Group.
Free Capacity Size in GB and bytes of the amount of user capacity of the RAID Group available for
binding LUNs.
Largest Contiguous Free Space Size in GB of the largest contiguous span of free space in the RAID
Group. LUNs must fit into a contiguous span of free space.
LUN ID Sets the LUN ID of the new LUN. The default value is the smallest available ID for the currently
selected storage system.
Element Size Sets the stripe element size - the size in disk sectors (512 bytes each) that the storage
system can read and write on one disk without requiring access to another disk.
Rebuild Priority Sets the rebuild priority for the rebuild operations that occur automatically with a hot
spare and after you replace a failed disk.
Verify Priority Sets the verify priority for a LUN verify operation that occurs automatically when the status
of LUN parity information is unknown, which can occur after an SP fails and the other SP takes over its
LUNs.
Default Owner Sets the SP that is the default owner of the LUN. The default owner has control of the
LUN when the storage system is powered up.
LUN Size Sets the size of the new LUN.
Enable Auto Assign Enables or disables Auto-assign for this LUN. Auto-assign applies only to a storage
system that has two SPs and a LUN that is not a hot spare.
Alignment Offset If available, can be used when the host operating system records private information
at the start of the LUN. The default value is zero and this supports most host operating systems.
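
Two of the checks described above, free capacity and largest contiguous free space, are easy to
express directly. A Python sketch with invented parameter names (not Navisphere API calls):

def validate_bind_request(lun_size_gb, free_capacity_gb, largest_contiguous_gb):
    """Reject a bind request that cannot fit in the RAID Group.

    As noted above, a LUN must fit into a single contiguous span of free
    space, so total free capacity alone is not sufficient.
    """
    if lun_size_gb > free_capacity_gb:
        raise ValueError("RAID Group does not have enough free capacity")
    if lun_size_gb > largest_contiguous_gb:
        raise ValueError("no contiguous free span large enough for this LUN")

# A fragmented RAID Group: 60 GB free in total, but only 25 GB contiguous.
validate_bind_request(lun_size_gb=20, free_capacity_gb=60, largest_contiguous_gb=25)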

24
Navisphere Interface – Storage Views
z Expanding an SP will show the LUN(s) owned by that SP.

z Expanding a RAID Group will show the physical disk(s) that comprise that Group.

z Expanding a RAID Group will also show the LUN(s) bound to that Group.

© 2003 EMC Corporation. All rights reserved. 25

25
Summary

z Navigating the Navisphere Interface

z Configuring Read and Write Cache

z Supported CLARiiON RAID Types

z Configuring RAID Groups

z LUN Definitions & How to bind LUNs

© 2003 EMC Corporation. All rights reserved. 26

26
