h12204 VP For New VNX Series WP
Abstract
This white paper discusses the benefits of Virtual Provisioning
on the EMC VNX2 series storage systems. It provides an
overview of this technology and describes how Virtual
Provisioning is implemented on the VNX2.
August 2015
Copyright 2015 EMC Corporation. All Rights Reserved.
For the most up-to-date listing of EMC product names, see EMC
Corporation Trademarks on EMC.com.
Audience
This white paper is intended for IT planners, storage architects, administrators, and
others involved in evaluating, managing, operating, or designing VNX storage
systems.
Terminology
The following terminology appears in this white paper:
Allocated capacity For a pool, this is the space currently used by all LUNs in the
pool. For a thin LUN, this is the physical space used by the LUN. For a thick LUN, this
is the host-visible capacity used by the LUN. Allocated capacity is slightly larger than
the capacity used by the host because metadata exists at the pool LUN level. This is
also known as total allocation.
Available capacity The amount of actual physical pool space that is currently not
allocated for pool LUNs.
Automatic Volume Management (AVM) Feature of VNX that creates and manages
File volumes automatically. AVM organizes volumes into Storage Pools for File that
can be allocated to File Systems.
Classic LUN A logical unit of storage created on a user-defined RAID group. The
amount of physical space allocated is the same as the user capacity seen by the host
server. Classic LUNs cannot be created on a pool; they are always created on a RAID
group.
High water mark Trigger point at which VNX performs one or more actions, such as
sending a warning message or extending a File System, as directed by the related
feature's software/parameter settings.
Introduction
One of the biggest challenges facing storage administrators is balancing the storage
requirements for various competing applications in their data centers. Administrators
are typically forced to allocate space, initially, based on anticipated storage growth.
They do this to reduce the management expense and application downtime incurred
when they need to add more storage as their business grows. This generally results in
the over-provisioning of storage capacity, which then leads to higher costs; increased
power, cooling, and floor space requirements; and lower capacity utilization rates.
Even with careful planning, it may be necessary to provision additional storage in the
future. This may require application downtime depending on the operating systems
involved.
VNX Virtual Provisioning technology is designed to address these concerns. Thin LUNs
and thin-enabled File Systems can present more storage to an application than is
physically available. Storage managers are freed from the time-consuming
administrative work of deciding how to allocate drive capacity. Instead, an array-
based mapping service builds and maintains all of the storage structures based on a
few high-level user inputs. Drives are grouped into storage pools that form the basis
for provisioning actions and advanced data services. For maximum efficiency, physical
storage from the pool is allocated automatically only when it is required.
Business requirements
Organizations, both large and small, need to reduce the cost of managing their
storage infrastructure while meeting rigorous service level requirements and
accommodating explosive storage capacity growth.
Thin provisioning addresses several business objectives that have drawn increasing
focus:
Reducing capital expenditures and ongoing costs
Thin provisioning reduces capital costs by delivering storage capacity on actual
demand instead of allocating storage capacity on anticipated demand. Ongoing
costs are reduced because fewer drives require less power, less cooling, and less
floor space.
Maximizing the utilization of storage assets
Organizations need to accommodate growth by drawing more value from the same
storage assets.
Storage Pools
Virtual Provisioning utilizes Storage Pool technology. A pool is somewhat analogous
to a RAID Group, which is a physical collection of drives on which LUNs are created.
However, pools have several advantages over RAID Groups:
Pools allow you to take advantage of advanced data services like FAST VP,
Compression, and Deduplication
Multiple drive types can be mixed into a pool to create multiple tiers with each
tier having its own RAID configuration
They can contain a few drives or hundreds of drives whereas RAID Groups are
limited to 16 drives
Because of the large number of drives supported in a pool, pool-based
provisioning spreads workloads over many resources requiring minimal
planning and management effort
When selecting a number of drives that results in multiple RAID Groups, the
system automatically creates the private RAID Groups. See Table 1 for the
preferred drive count options for each RAID type (a simplified illustration of
this split follows this list)
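The following Python sketch illustrates how a drive selection might be carved into
private RAID Groups of a preferred width. The width values and the splitting logic
are simplified assumptions for illustration only; the actual algorithm used by the
system may distribute drives and remainders differently.

# Simplified sketch: split a drive count into private RAID Groups of a
# preferred width. The widths below match the RAID configurations used as
# examples in this paper; they are not an exhaustive or authoritative list.
PREFERRED_WIDTH = {
    "RAID 5 (4+1)": 5,
    "RAID 6 (6+2)": 8,
    "RAID 1/0 (4+4)": 8,
}

def split_into_private_raid_groups(drive_count, raid_type):
    """Return the sizes of the private RAID Groups for a drive selection."""
    width = PREFERRED_WIDTH[raid_type]
    if drive_count % width:
        raise ValueError(
            f"{drive_count} drives is not a multiple of the preferred "
            f"width ({width}) for {raid_type}")
    return [width] * (drive_count // width)

# Example: 25 SAS drives with RAID 5 (4+1) -> five private 5-drive RAID Groups.
print(split_into_private_raid_groups(25, "RAID 5 (4+1)"))  # [5, 5, 5, 5, 5]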
Pool Attributes
Pools are simple to create because they require few user inputs:
Pool Name: For example, Pool 0
Drives: Number and type
RAID Protection level
Figure 4 shows an example of how to create a Storage Pool named Pool 0 with RAID 5
(4+1) protection for the FAST VP optimized Flash drives, RAID 5 (4+1) protection for
the SAS drives, and RAID 6 (6+2) protection for the NL-SAS drives.
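As a simple illustration of how few inputs are involved, the following Python sketch
models the Figure 4 pool as a name plus a drive count and RAID protection level per
tier, and estimates usable capacity from parity overhead alone. The drive counts,
drive sizes, and capacity math are hypothetical; actual usable capacity also depends
on pool metadata and system overhead.

# Illustrative sketch of the user inputs that define a multi-tier pool and a
# rough usable-capacity estimate. Names, counts, and sizes are assumptions.
from dataclasses import dataclass

# Fraction of raw capacity left after RAID parity/mirroring.
DATA_FRACTION = {"RAID 5 (4+1)": 4 / 5, "RAID 6 (6+2)": 6 / 8, "RAID 1/0 (4+4)": 4 / 8}

@dataclass
class Tier:
    drive_type: str
    drive_count: int
    drive_size_gb: int
    raid_protection: str

    def usable_gb(self):
        return self.drive_count * self.drive_size_gb * DATA_FRACTION[self.raid_protection]

# "Pool 0" from Figure 4, with hypothetical drive counts and sizes.
pool_0 = [
    Tier("Flash (FAST VP optimized)", 5, 200, "RAID 5 (4+1)"),
    Tier("SAS", 10, 900, "RAID 5 (4+1)"),
    Tier("NL-SAS", 8, 2000, "RAID 6 (6+2)"),
]

for tier in pool_0:
    print(f"{tier.drive_type}: ~{tier.usable_gb():.0f} GB usable")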
Oversubscribing a pool
Thin provisioning allows you to oversubscribe a pool where capacity presented to the
hosts exceeds the physical capacity in a pool. Figure 5 shows an example of an
oversubscribed pool.
Figure 5 Pool capacity diagram, showing Total Capacity, the % Full Threshold,
Total Allocation (Allocated Capacity), Total Subscription, and Oversubscribed
Capacity
Total Capacity is the amount of physical capacity available to all LUNs in the
pool
Total Allocation is the amount of physical capacity that is currently assigned to
LUNs
Total Subscription is the total capacity reported to the host
Oversubscribed Capacity is the amount of subscribed capacity that exceeds the
physical capacity in the pool
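To make these terms concrete, here is a short worked example in Python using
hypothetical numbers; the percentage formulas follow the definitions above, though
Unisphere's exact rounding may differ.

# Worked example of the pool capacity terms in Figure 5 (hypothetical values).
total_capacity_gb = 10_000      # physical capacity available to LUNs in the pool
total_allocation_gb = 6_500     # physical capacity currently assigned to LUNs
total_subscription_gb = 14_000  # capacity reported to hosts (sum of LUN sizes)

oversubscribed_gb = max(0, total_subscription_gb - total_capacity_gb)
percent_full = 100 * total_allocation_gb / total_capacity_gb
percent_subscribed = 100 * total_subscription_gb / total_capacity_gb

print(f"Oversubscribed by: {oversubscribed_gb} GB")      # 4000 GB
print(f"Percent Full: {percent_full:.0f}%")              # 65%
print(f"Percent Subscribed: {percent_subscribed:.0f}%")  # 140%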
You can also monitor the Storage Pool capacity information in the Storage Pool
Properties page in Unisphere as shown in Figure 6. This displays Physical and Virtual
Capacity information such as:
Total Capacity
Free Capacity
Percent Full
Total Allocation
Snapshot Allocation
Total Subscription
Snapshot Subscription
Percent Subscribed
Oversubscribed By (if applicable)
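A monitoring strategy can key off these same values. The following minimal Python
sketch checks allocated capacity against a % Full Threshold; the 70% default and the
alerting logic here are illustrative assumptions (on the array, the threshold simply
controls when Unisphere raises an alert).

# Sketch of a % Full Threshold check using the values shown on the Storage
# Pool Properties page. Threshold default and logic are assumptions.
def pool_needs_attention(total_gb, allocated_gb, percent_full_threshold=70.0):
    """Return True when allocated capacity crosses the % Full Threshold."""
    percent_full = 100 * allocated_gb / total_gb
    return percent_full >= percent_full_threshold

print(pool_needs_attention(total_gb=10_000, allocated_gb=7_200))  # True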
Expanding Pools
Since pools can run out of space, it is a best practice to ensure that a monitoring
strategy is in place and you have the appropriate resources to expand the pool when
necessary. Adding drives to a pool is a non-disruptive operation and the increased
capacity can be immediately used by LUNs in the pool.
When a Storage Pool is expanded, the sudden introduction of new empty drives
combined with relatively full, existing drives may cause a data imbalance. This
imbalance is resolved by automatic one-time data relocation, referred to as a
rebalance. This rebalance relocates a portion of the data to the new drives, based on
capacity, to utilize the new spindles.
With Fully Automated Storage Tiering for Virtual Pools (FAST VP) enabled, this
rebalance will take performance data into account and load balance across the new
spindles. Refer to the EMC VNX2 FAST VP A Detailed Review White Paper for more
information on FAST VP.
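The following Python sketch illustrates the idea behind the capacity-based rebalance
described above (without FAST VP): after an expansion, used capacity is spread so
that each drive carries roughly the same amount. The even-spread math is an
illustration of the concept, not the actual relocation algorithm.

# Simplified sketch of a capacity-based rebalance after a pool expansion.
def rebalanced_used_per_drive(used_gb_per_old_drive, old_drives, new_drives):
    """Used capacity per drive once data is spread evenly across all drives."""
    total_used_gb = used_gb_per_old_drive * old_drives
    return total_used_gb / (old_drives + new_drives)

# Example: 10 drives each with 600 GB used, expanded with 5 empty drives.
print(rebalanced_used_per_drive(600, 10, 5))  # 400.0 GB used per drive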
Pools can be as large as the maximum number of drives (excluding vault drives and
hot spares) allowed per system type. For example, a VNX7600 can contain up to 996
drives in a single pool or spread across all pools. Vault drives (the first four
drives in a storage
system) cannot be part of a pool, so Unisphere dialog boxes and wizards do not allow
you to select these drives. Large pools must be created by using multiple operations.
Depending on the system type, pools can be created by using the maximum allowed
drive increment and then expanded until you reach the desired number of drives in a
pool. Once the pool is fully initialized, you can create LUNs on it. For example, to
create a 240-drive pool on a VNX5600, you create a pool with 120 drives and then
expand it with the remaining 120 drives.
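A minimal Python sketch of this create-then-expand approach, assuming a fixed maximum
drive count per operation (120 in the VNX5600 example above); the helper is
illustrative only and is not an array command.

# Break a desired pool size into an initial create plus expand operations.
def pool_build_steps(target_drives, max_per_operation):
    """Return the drive count for the initial create and each expand."""
    steps = []
    remaining = target_drives
    while remaining > 0:
        step = min(remaining, max_per_operation)
        steps.append(step)
        remaining -= step
    return steps

# 240-drive pool on a VNX5600: create with 120 drives, then expand with 120.
print(pool_build_steps(240, 120))  # [120, 120]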
Users need to be aware of fault domains when using large pools. A fault domain
refers to data availability. A Virtual Provisioning pool is made up of one or more
private RAID groups, and a pool's fault domain is a single private RAID group. That
is, the availability of a pool is only as good as the availability of any single
private RAID group in it. Unless RAID 6 is the pool's protection level, avoid
creating pools with very large numbers of RAID groups. For more information
regarding the benefits of smaller pools, refer to
the EMC VNX Unified Best Practices for Performance Applied Best Practices white
paper on EMC Online Support. The maximum pool and LUN limits for the available
models are shown in Table 4.
Table 4 Pool and LUN limits
Model     Max pools   Max disks per pool   Max disks per array   Max LUNs per pool/array
VNX5200   15          121                  125                   1000
VNX5400   20          246                  250                   1000
VNX5600   40          496                  500                   1100
VNX5800   40          746                  750                   2100
VNX7600   60          996                  1000                  3000
VNX8000   60          1496                 1500                  4000
Pool LUNs
A VNX pool LUN is similar to a classic LUN in many ways. Many of the same Unisphere
operations and CLI commands can be used on pool LUNs and classic LUNs. Most
user-oriented functions work the same way, including underlying data integrity
features, LUN migration, local and remote protection, and LUN properties information.
Pool LUNs are composed of a collection of slices and can be either thin or thick. A
slice is a unit of capacity that is allocated from the private RAID Groups to
the pool LUN when it needs additional storage. Starting with VNX Operating
Environment (OE) for Block Release 33, the slice size has been reduced from 1 GB to
256 MB.
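As a quick illustration of slice-based allocation, the following Python snippet
computes how many 256 MB slices back a given amount of consumed capacity. It ignores
the pool LUN metadata mentioned under Allocated capacity, so real allocation is
slightly larger.

# How many 256 MB slices back a given amount of consumed capacity on a pool LUN.
import math

SLICE_MB = 256  # slice size since VNX OE for Block Release 33 (previously 1 GB)

def slices_needed(consumed_gb):
    return math.ceil(consumed_gb * 1024 / SLICE_MB)

print(slices_needed(10))   # 40 slices for 10 GB of written data
print(slices_needed(0.1))  # 1 slice, even for a small write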
Thick LUNs
Thick LUNs are also available in VNX. Unlike a thin LUN, a thick LUN's capacity is
fully reserved and allocated on creation, so it will never run out of capacity.
Users can also
better control which tier the slices are initially written to. For example, as pools are
initially being created and there is still sufficient space in the highest tier, users can
be assured that when they create a LUN with either Highest Available Tier or Start
High, then Auto-Tier, data will be written to the highest tier because the LUN is
allocated immediately.
Management
You can easily manage File System and Checkpoint Storage Space Reclaim by using
either the Unisphere software or Control Station CLI. When a File System is created on
thin LUNs, a new tab named Space Reclaim appears in the File System Properties
Window. Similarly, when the SnapSure Checkpoint Storage is on thin LUNs, a new tab
named Checkpoint Storage Space Reclaim appears in the File System Properties
Window. Figure 9 shows an example of the two new reclaim tabs. While File System
and Checkpoint Storage Space Reclaim are two entirely separate features, they are
controlled in a very similar way.
After increasing the Free Capacity within the Block Storage Pool, the user can use
the server_mount restore command via the Control Station CLI to restore access to
the affected File Systems.
After navigating to the Replications tab under Data Protection > Mirrors and
Replications > Replications for File, you will see that Replication Sessions for the
impacted File Systems are not shown. This is depicted in Figure 19. Because the
affected File Systems are unmounted and the replication services for these File
Systems are stopped, the Replication Sessions are not shown until the File Systems
are restored and the Data Mover is rebooted.
References
EMC Online Support
EMC Virtual Provisioning Release Notes
EMC VNX Unified Best Practices for Performance Applied Best Practices Guide
EMC VNX2 FAST VP A Detailed Review
EMC VNX2 Deduplication and Compression
Managing Volumes and File Systems with VNX AVM
Hyper-V
In Hyper-V environments, EMC recommends that you select the dynamically
expanding option for the Virtual Hard Disk (VHD) when used with VNX thin LUNs as
this preserves disk resources. When using pass-through volumes, the file system or
guest OS dictates whether the volume will be thin-friendly. For more information on
using Hyper-V Server, see the Using EMC CLARiiON with Microsoft Hyper-V Server
white paper on EMC Online Support.
While under an Out of Space condition, none of the following operations are allowed
on the affected storage. These operations are allowed again once the Out of Space
condition is cleared and access has been restored.
File System
o Create/Mount/Delete/Unmount/Extend/Read or Write to FS
Virtual Data Movers
o Create/Delete
RepV2
o Create/Start/Stop/Failover/Switchover/Reverse/Modify/Refresh/Delete
SnapSure
o Create/Modify/Delete/Restore/Refresh
FSCK
o Start
Quota
o On/Off/Report/Edit/Check/Tree List
File Deduplication and Compression
o Enable/Disable/Suspend
DHSM
o Offline/Enable/Disable/Recall
NDMP
o File System (Source)/SnapSure Checkpoint (Source)/Restore
VAAI
o VMDK Clone/Instant Provision/Snap
CAVA