VNX Unified Storage Management - Student Guide
Student Guide
Education Services
November 2015
Welcome to VNX Unified Storage Management.
Copyright ©2015 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks, logos, and
service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties. Nothing contained in this
publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party that owns the
Trademark.
EMC, EMC² AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems,
Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator,
Centera, CenterStage, CentraStar, EMC CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,cLARiiON, ClientPak, CloudArray, Codebook
Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation
Computing, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge , Data Protection Suite. Data Protection Advisor, DBClassify, DD Boost, Dantz,
DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere,
ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM.
eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum,
HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS,Kazeon, EMC LifeLine,
Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint, MirrorView, Multi-Band
Deduplication,Navisphere, Netstorage, NetWorker, nLayers, EMC OnCourse, OnAlert, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap,
ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare,
RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, EMC Snap, SnapImage,
SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix
DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, Velocity, Viewlets, ViPR,
Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence,
VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO,
YottaYotta, Zero-Friction Enterprise Storage.
The prerequisite courses for this class are shown above the management courses, while the
expert-level classes are below.
This VNX Unified Storage Management course also has two derivative courses: VNX Block
Storage Management, and VNX File Storage Management. Each of these courses is a subset of the ‘Unified’ course, focusing specifically on its own storage services.
Technical certification through the EMC Proven™ Professional VNX Solutions Specialist Exam
for Storage Administrators (E20-547) is based on the prerequisite courses and VNX Unified
Storage Management (or a combination of VNX Block Storage Management and VNX File
Storage Management).
The VNX Series unifies EMC’s file-based and block-based offerings into a single product that
can be managed with one easy-to-use GUI. VNX is a storage solution designed for a wide range of environments, from midtier to enterprise. The back-end storage connectivity is via Serial Attached SCSI (SAS), which provides up to 6 Gb/s connections.
The VNX Unified Storage platform supports the NAS protocols (CIFS for Windows and NFS for UNIX/Linux, including pNFS, and FTP/SFTP for all clients), as well as native block protocols (Fibre Channel, iSCSI, and FCoE).
VNX is built upon a fully redundant architecture for high availability and performance.
Shown here are the current VNX models. There are also derivative models, VNX-F and VNX-
CA, which are specialized for particular requirements.
All VNX Storage systems have two Storage Processors (SPs). SPs carry out the tasks of saving and retrieving block data. SPs utilize I/O modules to provide connectivity to hosts and 6 Gb/s Serial
Attached SCSI (SAS) to connect to disks in Disk Array Enclosures. SPs manage RAID groups and
Storage Pools and are accessed and managed through SP Ethernet ports using either a CLI or EMC
Unisphere. Unisphere is web-based management software. SPs are major components contained
inside both Disk Processor Enclosures and Storage Processor Enclosures.
Storage Processor Enclosures (SPEs) house two Storage Processors and I/O interface modules.
SPEs are used in the high-end-enterprise VNX models, and connect to external Disk Array Enclosures.
Disk Processor Enclosures (DPEs) house two Storage Processors and the first tray of disks. DPEs
are used in the midsize-to-high-end VNX models.
A Data Mover Enclosure (DME) houses the File CPU modules called Data Mover X-Blades. Data
Movers provide file host access to a VNX storage array. This access is achieved by connecting the DMs
to the SPs for back-end (block) connectivity to the disk enclosures. DMEs are used in all File and
Unified VNX models and act as a gateway between the file and block storage environments.
A Control Station (CS) allows management of File storage, and acts (in File or Unified systems) as a
gateway to the Storage Processors. Only Storage Processors can manage Block storage. Control
Stations also provide Data Mover failover capabilities.
Disk Array Enclosures (DAEs) house the non-volatile hard and Flash drives used in the VNX
storage systems.
Standby Power Supplies (SPSs) provide power to the SPs and the first Disk Array Enclosure to ensure
that any data that is in transit is saved if a power failure occurs.
• VNX high availability and redundancy features provide five-nines (99.999%) availability of access to data.
• All hardware components are redundant, or have the option to be redundant. Redundant
components include: dual Storage Processors with mirrored cache, Data Movers, Control
Stations, storage media via RAID and sparing, etc.
• Paths to Block data are also redundant within the array. Each drive has two ports
connected to redundant SAS paths. (Outside of the array, path redundancy can be provided at both the host and network levels.)
• Network features LACP (Link Aggregation Control Protocol) and Ethernet Channel protect
against an Ethernet link failure, while Fail Safe Networking protects against failures of an
Ethernet switch.
Deduplication and compression are available for both Block and File services. While
compression for both Block and File use the same underlying technology, File-level
deduplication uses EMC Avamar technology.
VNX File Level Retention is a capability available to VNX File that protects files in a NAS
environment from modification and deletion until a user-specified retention date.
With quotas, a limit can be specified on the number of allocated disk blocks and/or files that
a user/group/tree can have on a VNX file system, controlling the amount of disk space and
the number of files that a user/group/tree can consume.
FAST VP automates movement of data across media types based on the level of the data’s
activity. This optimizes the use of high performance and high capacity drives according to
their strongest attributes. FAST VP improves performance and cost efficiency.
FAST Cache uses Flash drives to add an extra cache tier. This extends the array’s read-write
cache and ensures that unpredictable I/O spikes are serviced at Flash speeds.
VNX SnapSure is a feature for File data services. SnapSure provides a read-only or
read/write, point-in-time view of VNX file systems. SnapSure is used primarily for low-
activity applications such as backup and user-level access to previous file versions.
SnapSure uses Copy on First Write technology.
VNX Snapshot is a Block feature that integrates with VNX Pools to provide a point-in-time
copy of a source LUN using redirect on first write methodology.
VNX SnapView Snapshot is also a Block feature that provides point-in-time copies of source
LUNs. SnapView integrates with Classic LUNs and uses Copy on First Write technology.
Appliance-based RecoverPoint/SE local protection replicates all block data for local operational recovery, providing DVR-like rollback of production applications to any point in time. It tracks all data changes to every protected LUN in a Journal volume.
VNX Remote Protection features include SAN Copy, MirrorView, Replicator and
RecoverPoint/SE CRR.
SAN Copy copies LUN data between VNX storage systems and any other storage array. SAN
Copy is software-based, and provides full or incremental copies, utilizing SAN protocols (FC
or iSCSI) for data transfer.
MirrorView is a feature of VNX for Block used for remote disaster recovery solutions.
MirrorView is available in both synchronous (MirrorView/S) and asynchronous
(MirrorView/A) modes.
Replicator is a VNX File feature that produces a read-only copy of a source file system. The
copy can be local or remote. VNX Replicator transfers file system data over an IP network.
Changes to the source file system are tracked and transmitted on a time interval. VNX
Replicator can be used as an asynchronous disaster recovery solution for both NFS and
CIFS.
VNX supports NDMP (Network Data Management Protocol), which is an open standard
backup protocol designed for NAS environments. During NDMP operations, backup software
is installed on a third-party host, while the Data Mover is connected to the backup media and serves as the NDMP server. NDMP provides the ability to back up multi-protocol (CIFS
and NFS) file systems.
EMC Common Event Enabler is a File-level alerting framework for CIFS and NFS. It notifies
antivirus servers of potentially virulent client files, and uses third-party antivirus software to
resolve virus issues.
VNX Controller-Based Encryption encrypts all data at the Storage Processor. All data on disk
is encrypted such that it is unreadable if removed from the system and attached to a
different VNX.
Unisphere Analyzer is the VNX performance analysis tool to help identify bottlenecks and
hotspots in VNX storage systems, and enable users to evaluate and fine-tune the
performance of their VNX system.
Unisphere Quality of Service Manager (UQM) measures, monitors, and controls application
performance on the VNX storage system.
VNX Family Monitoring and Reporting automatically collects block and file storage statistics
along with configuration data, and stores them into a database that can be viewed from
dashboards and reports.
If you have not yet consumed any of the eLearnings listed, you can register and enroll in
these during your off hours this week.
In addition to the prerequisite courses, we also identified additional training that you may
find valuable after this management course. These courses are VNX Unified Storage
Performance Workshop, VNX Block Storage Remote Protection with MirrorView, and VNX
File Storage Remote Protection with Replicator.
The Unisphere GUI is the primary management interface for the system. From it, both the
block and file aspects of the system are managed. It is a web-based application that resides
on the VNX and is accessed by pointing a browser, such as Internet Explorer, at the VNX.
Unisphere Client software is also available as an installable application for Windows
platforms. Management is performed over a secure network connection to the VNX system.
The File CLI option is available for file administrative tasks. The tasks are performed over a
secure network connection using Secure Shell (SSH) to the VNX Control Station, or over a direct serial connection to the Control Station. The File CLI option is useful for scripting file administrative tasks.
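As a hedged illustration only (the Control Station address and account shown here are placeholders, not values from this course), a scripted File CLI session might look like the following:
    ssh nasadmin@<control_station_IP>
    nas_fs -list          # list the file systems configured on the system
    server_df server_2    # report file system usage as seen by Data Mover server_2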
The Block CLI option is available as an installable application and is used for block
administrative tasks. The tasks are performed over a secure network connection to the VNX
Storage Processors, A or B. The Block CLI can be used to automate management functions
through scripts and batch files.
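As a hedged sketch (the SP address, credentials, and security scope below are placeholders), a scripted Block CLI session might issue commands such as:
    naviseccli -h <SP_A_IP> -user <admin_user> -password <password> -scope 0 getagent
    naviseccli -h <SP_A_IP> -user <admin_user> -password <password> -scope 0 lun -list
The first command verifies connectivity to the Storage Processor; the second lists LUN information on the system.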
Some of the system management tasks relate to settings on the system such as network
addressing, services, and caching. System hardware can be viewed and configured.
Security relating to management is also available, such as management accounts and
storage domain configuration. The system software is also managed from Unisphere.
Reports can also be generated about the system configuration, status, and availability.
System monitoring and alert notification can also be managed within Unisphere.
File storage related tasks are also available in Unisphere, such as Data Mover networking
and services settings. Management of storage space for file relating to pools and volumes is
provided. File systems and all their features are managed. CIFS shares and servers are
managed as well as NFS exports. Unisphere also manages both local and remote VNX file
replication features.
Unisphere provides block storage management tasks, such as network and Fibre Channel
connectivity settings. Storage provisioning for Storage Pools and RAID Groups are available.
LUNs and all their features are also managed. Host access to storage is managed within
VNX Storage Groups with Unisphere. It also manages both local and remote VNX block
replication features.
Task pane: provides task-based navigation, grouping common tasks together for easy access. The tasks displayed depend on the menu selected.
Main pane: displays the information pertinent to the selected menu.
The division between the Task Pane and the Main Pane can be resized by clicking the division bar and dragging it to a new position. The Task Pane can also be hidden by clicking the right arrow on the division bar, which expands the Main Pane; clicking the left arrow on the division bar restores the Task Pane and resizes the Main Pane accordingly.
This course includes a lab exercise that provides the learner hands-on experience
accessing, operating and navigating the Unisphere interface.
Some operations available from the setup page are: change the SP host name, create a
new Global Administrator account, manage the SSL/TLS Certificate, update parameters for
agent communication, restart Management Server, Recover Domain, set
RemotelyAnywhere access restrictions, and many other functions.
• server_ commands require a “movername” entry and execute directly to a Data Mover.
(For example, server_ifconfig server_2…)
The Control Station also includes the full command set for Block CLI.
The Block CLI is installed on supported Windows, Linux and UNIX-based systems. It is also
included on the VNX Control Station in its /nas/sbin directory.
The GUI does offer an option from its Task Pane for running File CLI commands. The
Control Station CLI option within Unisphere allows you to enter commands one at a time
and view their output.
This Lab covers VNX management with Unisphere. System login and Unisphere general
navigation is performed along with Unisphere navigation to specific File and Block functions.
The File command line interface will be invoked from within Unisphere.
Please discuss as a group your experience with the lab exercise. Were there any issues or
problems encountered in doing the lab exercise? Are there relevant use cases that the lab
exercise objectives could apply to? What are some possible concerns relating to the lab
subject?
A secure network connection is established between the management interface and the
VNX using industry-standard protocols: Secure Sockets Layer (SSL), Transport Layer
Security (TLS), or Secure Shell (SSH). These industry standard protocols use certificates
that establish a trust and authentication between the management interface and the VNX.
They then encrypt communication between each other to establish the privacy required for
secure communications. Note: If using the File CLI via serial connection, physical security of
the VNX is required to assure management access security.
The administrative user then supplies login credentials to the management interface which
are passed over the secure connection to the VNX. The VNX examines the user credentials
against its user accounts for user authentication and authorization. The VNX will then
maintain an audit log of the user’s management activities.
Audit information on VNX for Block systems is contained within the event log on each SP.
The log contains a time-stamped record for each event, with information about the storage
system, the affected SP and the associated host. An audit record is also created every time
a user logs in, enters a request through Unisphere, or issues a Secure CLI command.
On VNX for File systems the auditing feature used is native to the Control Station Linux
kernel and is enabled by default. The feature is configured to record management user
authentications and captures the management activities initiated from the Control Station.
Events are logged when specified sensitive file systems and system configurations are
modified.
This course includes a lab exercise that provides the learner hands-on experience creating
local user accounts and assigning a role to the user.
The LDAP authentication scope is used when the VNX is configured to bind to an LDAP
domain. The VNX performs an LDAP query to the domain to authenticate the administrative
users. LDAP domain users and groups are mapped to user and group IDs on the VNX. When
the “use LDAP” option is selected during user login, the Global or Local scope setting is
disregarded.
The Global authentication scope is used when the VNX is configured to be a member of a
Storage Domain. All the systems within the domain can be managed using a single sign-on
with a global account. If a user selects the “Global” scope during login to a VNX that is not a
Storage Domain member, Unisphere will use local authentication for the user.
The Local authentication scope is used to manage a specific system only. Logging into a
system using a local user account is recommended when there are a large number of
systems in the domain and you want to restrict visibility to a single system and/or certain
features on a given system.
When you start a session, Unisphere prompts you for a username, password, and scope.
These credentials are encrypted and sent to the storage management server. The storage
management server then attempts to find a match within the user account information. If a
match is found, you are identified as an authenticated user. All subsequent requests that
the applet sends contain the cached digest in the authentication header.
To achieve this integration, the VNX is configured to bind to the LDAP domain to form an
authentication channel with the domain. When an LDAP login is performed, the VNX passes
the LDAP user credentials to the User Search Path of the LDAP server over the
authentication channel. Role-based management is also configured for the user based on
membership in an LDAP group. A management Role is defined for the LDAP group. The VNX
automatically creates an identically named VNX group and the role is assigned to the VNX
group. A mapping between the LDAP and VNX groups provides the management role to the
LDAP user.
The Use LDAP option must be selected for the Unisphere login to be authenticated by the
LDAP domain. The user will be able to perform management tasks based on the
management role configured for the LDAP group of which the user is a member. LDAP users
are also able to use File CLI management. The CLI login to the VNX Control Station requires
that the user input the username in the <username>@<domain name> format.
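For example (a sketch only; the user name, domain, and address below are hypothetical), an LDAP-mapped user could open a File CLI session with:
    ssh -l "jsmith@corp.example.com" <control_station_IP>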
Alerts may come from the Block side or the backend, or from the File side of the VNX
system.
This page will report the tasks with the following information:
Start Time - Time the administrator initiated the task. The start time is in the format:
month/date/year hours:minutes
The logged task properties can be viewed by double-clicking the selected entry or by clicking the Properties button.
The page can be configured to display log messages from the Control Station or the Data
Movers based on a selected time interval and severity level:
Severity - Severity of the event. The severity is converted from a numerical value (0-6) in the log file to one of four named values.
To view details about an event, right-click the record and select Details.
Storage System - Name of the storage system that generated the event. Displays N/A for
non-device event types
Device - Name of the device within the storage system on which the event occurred.
Displays N/A for non-device event types
Host - Name for the currently running Agent – SP Agent or Host Agent
The system “Event” notifications are based on pre-defined system events such as a
temperature being too high. As displayed in this table, these notifications are configured
based on the Facility affected and the Severity levels (Critical, Error, Warning, Info). The
user can set the action to be taken when the defined criteria are met, and the destination of the notification: the path of a Control Station log file, a single SNMP trap destination, or a comma-separated list of e-mail addresses.
The other tabs of the Notifications for File are Storage Usage, Storage Projection and Data
Mover Load. These refer to notifications based on resource utilization. The user can also
configure conditions or thresholds for triggering the notifications.
When creating a template, the user is able to define Severity level and Category for general
events or configure notifications for explicit events. The severity levels are Info, Warning,
Error, and Critical. The Categories relate to the events pertaining to Basic Array feature,
MirrorView, SnapView, SAN Copy, VNX Snapshots, etc.
Some of the actions that can be configured regarding a notification include the following:
• Logging the event in an event log file
• Sending an email message for single or multiple system events to a specific email
address
• Generating an SNMP trap
• Calling home to the service provider
• Running a script
The “Statistics” page displays a live graph of the statistics for components of the VNX. The
legend under the graphic explains the chart data. The graph can display a maximum of 14
statistics at any one time.
The top line on the page includes two arrows that allow the user to navigate backward and
forward in the accumulated data, and text stating the time period covered by the visible
graph.
To manipulate the graph, the user can right-click the graph and select:
• Export Data: to export the data in the graph into a comma-separated values file
• Print: to print the graph, rotated or scaled to fit a page as needed
• Time Interval: to change the time period displayed by the graph
• Select Stats: to add or remove types of statistical data displayed in the graph
• Polling Control: to change the polling interval for statistical update queries, and to
disable and enable statistical update polling
• Polling Interval: the rate at which an object is polled
The default polling interval for updated stats is five minutes for Data Mover and
storage system data. File system data is polled at a fixed interval of 10 minutes.
The Unisphere Analyzer feature lets the user monitor the performance of the storage-
system components: LUNs, the storage processors (SPs) that own them, and their disk
modules. Unisphere Analyzer gathers block storage-system performance statistics and
presents them in various types of charts. This information allows the administrator to find
and anticipate bottlenecks in the disk storage component utilization.
Analyzer can display the performance data in real time or as a file containing past
performance data from an archive. The user can capture the performance data in an archive
file at any time and store it on the host where Unisphere was launched.
The statistics are displayed in several types of charts: Performance Survey, Performance Summary, Performance Detail, Performance Overview (for RAID Group LUNs and metaLUNs only), and LUN I/O Disk Detail (for LUNs only).
https://edutube.emc.com/Player.aspx?vno=25uGUJW3sapbkcJ+HWoiQg==&autoplay=true
https://edutube.emc.com/Player.aspx?vno=4NadO6Lvj+IdSHaUXsu12g==&autoplay=true
The multi-domain feature offers the option of single sign-on which allows you to log in to
the entire multi-domain environment by using one user account. In this instance, each
domain within the environment must have matching credentials. Alternatively, you can use
login on-demand.
In a multi-domain environment, you can add or remove systems and manage global users
only on a local domain (that is, the domain of the system to which you are pointing
Unisphere). To perform these operations on a remote domain, you must open a new
instance of Unisphere and type the IP address of a system in that remote domain.
If only the Unisphere Client is installed on a Windows system, the Unisphere UI is launched
locally and pointed to any Unisphere Server system in the environment. You can also
optionally install both the Unisphere Client and Server on the same Windows system. The
Unisphere Server accepts requests from Unisphere Client and the requests are processed
within the Windows system. The Unisphere Server can be configured as a domain member
or a domain master for managing multiple VNX systems within the same UI.
The Unisphere Client and Server packages provide for faster Unisphere startup times since
the Unisphere applet does not have to download from the VNX Control Station or SPs. This
can be very advantageous when managing systems in different geographic locations
connected via slow WAN links. Another advantage of running Unisphere Server on a
Windows system is it lowers management CPU cycles on the VNX SPs for certain
management tasks.
Please discuss as a group your experience with the lab exercise. Were there any issues or
problems encountered in doing the lab exercise? Are there relevant real world use cases
that the lab exercise objectives could apply to? What are some concerns relating to the lab
subject?
https://edutube.emc.com/Player.aspx?vno=L1l3uClTNZmFAX7HP2O+Qg==&autoplay=true
https://edutube.emc.com/Player.aspx?vno=s/dAs/D/VgiDpC03OlBN6Q==&autoplay=true
https://edutube.emc.com/Player.aspx?vno=sYF/frALloGIYd4ArmccXg==&autoplay=true
Notes
When you type this command, you may receive a message that resembles the
following: DiskPart succeeded in creating the specified partition.
The align=number parameter is typically used together with hardware RAID
Logical Unit Numbers (LUNs) to improve performance when the logical units are
not cylinder aligned. This parameter aligns a primary partition that is not cylinder
aligned at the beginning of a disk and then rounds the offset to the closest
alignment boundary. number is the number of kilobytes (KB) from the beginning
of the disk to the closest alignment boundary. The command fails if the primary
partition is not at the beginning of the disk. If you use the command together
with the offset=number option, the offset is within the first usable cylinder on
the disk.
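The following diskpart sequence illustrates the parameter (the disk number and alignment value are examples only, not recommendations from this course):
    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=1024
    DISKPART> assign letter=E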
This lesson covers the benefits of and process for migrating a LUN, the procedures for expanding Pool LUNs, and an overview of Classic LUN expansion. It also shows how to extend a volume on a Windows Server 2012 host.
The LUN Migration feature allows data to be moved from one LUN to another, regardless of
RAID type, disk type, LUN type, speed and number of disks in the RAID Group or Pool. LUN
Migration moves data from a source LUN to a destination LUN (of the same or larger size)
within a single storage system. This migration is accomplished without disruption to
applications running on the host though there may be a performance impact during the
migration. A LUN Migration can be cancelled by the administrator at any point in the
migration process. If cancelled before it completes, the source LUN returns to its original
state and the destination LUN is destroyed. Once a migration is complete the destination
LUN assumes the identity of the source, taking on its LUN ID, WWN, and its Storage Group
membership. The source LUN is destroyed to complete the migration operation.
A benefit of the LUN Migration feature is its use in storage system tuning. LUN Migration
moves data from a source LUN to a destination LUN (of the same or larger size) within a
single storage system. This migration is accomplished without disruption to applications
running on the host. LUN Migration can enhance performance or increase disk utilization for
the changing business needs and applications by allowing the user to change LUN type and
characteristics, such as RAID type or size (Destination must be the same size or larger),
while production volumes remain online. LUNs can be moved between Pools, between RAID
Groups, or between Pools and RAID Groups.
When a Thin LUN is migrated to another Thin LUN, only the consumed space is copied.
When a Thick LUN or Classic LUN is migrated to a Thin LUN, the space reclamation feature
is invoked and only the consumed capacity is copied.
The LUN Migration feature does have some guidelines for use.
The LUNs used for migration may not be private LUNs, nor may they be in the process of
binding, expanding or migrating.
Either LUN, or both LUNs, may be metaLUNs, but neither LUN may be a component LUN of
a metaLUN.
The destination LUN may not be part of SnapView or MirrorView operation. This includes
Clone Private LUNs, Write Intent Log LUNs, and Reserved LUN Pool LUNs.
Note the Destination LUN is required to be at least as large as the Source LUN, but may be
larger.
When the FAST Cache feature is being used, ensure FAST Cache is OFF on LUNs being
migrated. This prevents the migration’s I/O from consuming capacity in the FAST Cache
that may otherwise benefit workload I/O.
When migrating into or between FAST VP pool-based LUNs, the initial allocation of the LUN
and the allocation policy have an important effect on its performance and capacity
utilization. Tiering policy setting (Highest, Auto, Lowest) determines which tier within the
pool the data of the source LUN will be first allocated to. Be sure to set the correct policy
needed to ensure the expected starting performance for all the source LUN’s data. As much of the source LUN’s capacity as possible will be allocated to the appropriate tier. Once the migration is complete, the user can adjust the tiering policy.
There will be a lowering in the rate of migration when the source or destination LUN is a
thin LUN. It is difficult to determine the transfer rate when the source LUN is a thin LUN but
the transfer rate will be lower than migrations involving thick or classic LUNs. The decrease
in the rate depends on how sparsely the thin LUN is populated with user data, and how
sequential in nature the stored data is. A densely populated LUN with highly sequential
data increases the transfer rate. Random data and sparsely populated LUNs decrease it.
ASAP priority LUN migrations with normal cache settings should be used with caution. They
may have an adverse effect on system performance. EMC recommends that the user
execute at the High priority, unless migration time is critical.
The VNX classic LUN expansion (metaLUN) feature allows a base classic LUN to be
expanded to increase LUN capacity. A base LUN is expanded by aggregating it with another
classic LUN or LUNs, called component LUNs. When expanded, it forms a metaLUN which
preserves the personality of the base LUN. There are two methods of aggregating the base
and components to form the metaLUN; concatenating and striping.
With concatenation, the capacity of the component LUN is added to the end of the base LUN
and is available immediately. The I/O flow to the metaLUN is through the base LUN until its
space is consumed, then the I/O flow extends onto the component LUN. It is recommended
(but not required) to use component LUNs of the same size, RAID type, and disks (in both
number and type) to maintain the performance profile of the base LUN. If the component
LUN differs from the base LUN, the performance of the metaLUN will vary.
With striping, the capacity of the component LUN is interlaced with that of the base LUN by
a restriping process that forms RAID stripes across the base and component LUNs.
Therefore, a component LUN must have the same size and RAID type as the base LUN, and it is recommended
(but not required) to use the same number and type of disks. If the base LUN is populated
with data, the restriping process will take time to complete and can impact performance.
While the existing base LUN data is available, the additional capacity will not be available
until the restriping process completes. Once complete, the I/O flow of the metaLUN is
interlaced between the base and component LUNs, thus preserving or increasing the
performance profile of the base LUN.
A benefit of the VNX metaLUN feature is its ability to increase the capacity of a classic LUN.
A RAID Group is limited to 16 disks maximum, thus the size of a classic LUN is limited to
the space provided by 16 disks. MetaLUNs are constructed using multiple classic LUNs
which can be created from disks in different RAID Groups and thus avoid the 16 disk
capacity limit. VNX metaLUNs provide flexibility and scalability to the storage environment.
Another metaLUN benefit is the performance effect of additional disks. With more disks available to the metaLUN, bandwidth to the LUN increases, so its I/O throughput can be higher, benefiting the performance of the metaLUN. VNX metaLUNs provide performance
adaptability to the storage environment.
MetaLUNs are functionally similar to volumes created with host volume managers, but with
some important distinctions. To create a volume manager stripe, all component LUNs must
be made available to the host, and each will have a unique address. Only a single LUN, with
a single address, is presented to the host with metaLUNs. If a volume is to be replicated
with VNX replication products (SnapView, VNX Snapshot, MirrorView and SAN Copy), a
usable image requires consistent handling of fracture and session start operations on all
member LUNs at the same time. MetaLUNs simplify replication by presenting a single object
to the replication software. This also makes it easier to share the volume across multiple
hosts – an action that volume managers will not allow.
The use of a host striped volume manager has the effect of multithreading requests
consisting of more than one volume stripe segment, which increases concurrency to the storage system. MetaLUNs have no multithreading effect since the multiplexing of the component LUNs is done on the storage system. VNX metaLUNs provide ease of storage
usage and management.
The VNX metaLUN feature does have some guidelines for use.
A base LUN can be a regular classic LUN or it can be a metaLUN.
A metaLUN can span multiple RAID Groups.
When creating a concatenated metaLUN, it is recommended that the base LUN and the
component LUNs be of the same RAID type.
As a result of the increase in back-end activity associated with restriping, it is
recommended to expand only one LUN per RAID Group at a time.
The host workload and the restriping operation share the same system resources. So a
heavy restriping workload will have a performance impact on host storage operations.
Likewise, a heavy host storage workload will have an impact on the time it takes to expand
a striped metaLUN.
In the systems drop-down list on the menu bar, select a storage system.
Right-click the base LUN and select Expand. When the “Expand Storage Wizard Dialog”
opens, follow the steps.
Another option is from the task list, under Wizards. Select RAID Group LUN Expansion
Wizard.
Follow the steps in the wizard, and when available, click the Learn more links for
additional information.
The Pool LUN expansion feature is available for both Thick and Thin Pool LUNs. The
expanded capacity is immediately available for use by the host. The expansion is done in
the same manner for either type of LUN but it allocates physical storage differently. When a
Thick Pool LUN is expanded, its expanded size must be available from physical disk space in
the pool and is allocated to the LUN during the expansion. When a Thin Pool LUN is
expanded, physical disk space from the pool does not get allocated as part of the
expansion. It is the in-use capacity that drives the allocation of physical storage to the Thin
LUN.
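As a hedged sketch (the SP address, LUN number, and capacity are placeholders), a Pool LUN can be expanded from the Block CLI with the lun -expand command:
    naviseccli -h <SP_A_IP> lun -expand -l 25 -capacity 200 -sq gb
Here -l identifies the Pool LUN and -sq specifies the size qualifier (gb in this example).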
A benefit of the Pool LUN Expansion feature is its fast, easy, on-line expansion of LUN
capacity. A few easy clicks in Unisphere is all it takes to increase the LUN capacity. Another
key capability is that the LUN performance is not changed by the capacity expansion. Since
its performance is based on the physical storage of the pool it is built from, the
performance characteristics of the expanded LUN will stay the same as it was prior to the
expansion. Also, the expansion process itself has no performance impact on the LUN.
There are a few guidelines for expanding Pool LUN capacities. A capacity expansion cannot
be done on a pool LUN if it is part of a data protection or LUN-migration operation. For a
thick LUN expansion, the pool must have enough physical storage space available for the
expansion to succeed; whereas, for a thin LUN the physical storage space does not need to
be available. The host OS must also support the capacity expansion of the LUN.
This Lab covers the VNX advanced storage features of LUN expansion and migration. In the
lab exercise pool-based Thick and Thin LUNs expansions are performed along with a Classic
LUN expansion. A LUN migration is also completed.
This lab covered the VNX advanced storage features of LUN expansion and migration. In the
exercise Thick, Thin and Classic LUNs were expanded and a LUN migration was performed.
Please discuss as a group your experience with the lab exercise. Were there any issues or
problems encountered in doing the lab exercise? Are there relevant use cases that the lab
exercise objectives could apply to? What are some concerns relating to the lab subject?
This lesson covers the functionality, benefits and configuration of FAST VP.
VNX FAST VP, or Fully Automated Storage Tiering for Virtual Pools, tracks data in a Pool at a
granularity of 256 MB – a slice – and ranks slices according to their level of activity and how
recently that activity took place. Slices that are heavily and frequently accessed will be
moved to the highest tier of storage, typically Flash drives, while the data that is accessed
least will be moved to lower performing, but higher capacity storage – typically NL-SAS
drives. This sub-LUN granularity makes the process more efficient, and enhances the
benefit achieved from the addition of Flash drives.
The ranking process is automatic, and requires no user intervention. When FAST VP is
implemented, the storage system measures, analyzes, and implements a dynamic storage-
tiering policy in a faster and more efficient way than a human analyst. Relocation of slices
occurs according to a schedule which is user-configurable, but which defaults to a daily
relocation. Users can also start a manual relocation if desired. FAST VP operations depend
on tiers of disks – up to three are allowed, and a minimum of two are needed for
meaningful FAST VP operation. The tiers relate to the disk type in use. Note that no
distinction is made between 10k rpm and 15k rpm SAS disks, and it is therefore
recommended that disk speeds not be mixed in a tier.
FAST VP enables the user to create storage pools with heterogeneous device classes and
place the data on the class of devices or tier that is most appropriate for the block of data.
Pools allocate and store data in 256 MB slices which can be migrated or relocated, allowing
FAST VP to reorganize LUNs onto different tiers of the Pool. This relocation is transparent to
the hosts accessing the LUNs.
For example, when a LUN is first created it may have a very high read/write workload with
I/Os queued to it continuously. The user wants that LUN to have the best response time
possible in order to maximize productivity of the process that relies on this storage. Over
time, that LUN may become less active or stop being used and another LUN may become
the focus of the operation. VNX systems configured with EMC’s FAST VP software would
automatically relocate inactive slices to a lower storage tier, freeing up the more expensive
storage devices for the newly created and more active slices.
The administrator can use FAST VP with LUNs regardless of whether those LUNs are also in
use by other VNX software features, such as Data Compression, SnapView, MirrorView,
RecoverPoint, and so on.
The tiers from highest to lowest are Flash, SAS, and NL-SAS, described in FAST VP as
Extreme Performance, Performance, and Capacity respectively. FAST VP differentiates each
of the tiers by drive type, but it does not take rotational speed into consideration. EMC
strongly recommends the same rotational speeds per drive type in a given pool. FAST VP is
not supported for RAID groups because all the disks in a RAID group, unlike those in a Pool,
must be of the same type (all Flash, all SAS, or all NL-SAS). The lowest performing disks in
a RAID group determine a RAID group’s overall performance.
FAST VP uses a number of mechanisms to optimize performance and efficiency. It removes
the need for manual, resource intensive, LUN migrations, while still providing the
performance levels required by the most active dataset. Another process that can be
performed is the rebalance. Upon the expansion of a storage pool, the system recognizes
the newly added space and initiates an auto-tiering data relocation operation. It can lower
the Total Cost of Ownership (TCO) and increase performance by intelligently managing data
placement.
Applications that exhibit skew, and have workloads that are fairly stable over time will
benefit from the addition of FAST VP.
The VNX series of storage systems deliver high value by providing a unified approach to
auto-tiering for file and block data. Both block and file data can use virtual pools and FAST
VP. This provides compelling value for users who want to optimize the use of high-
performance drives across their environment.
During storage pool creation, the user can select RAID protection on a per-tier basis. Each
tier has a single RAID type, and once the RAID configuration is set for that tier in the pool,
it cannot be changed. The table above shows the RAID configurations that are supported for each tier.
The drives used in a Pool can be configured in many ways – supported RAID types are RAID
1/0, RAID 5, and RAID 6. For each of those RAID types, there are recommended
configurations. These recommended configurations balance performance, protection, and
data efficiency. The configurations shown on the slide are those recommended for the
supported RAID types. Note that, though each tier may have a different RAID type, any
single tier may have only 1 RAID type associated with it, and that type cannot be changed
once configured.
FAST VP policies are available for storage systems with the FAST VP enabler installed. The
policies define if and how data is moved between the storage tiers.
Use the “Highest Available Tier” policy when quick response times are a priority.
A small portion of a large set of data may be responsible for most of the I/O activity in a
system. FAST VP allows for moving a small percentage of the “hot” data to higher tiers
while maintaining the rest of the data in the lower tiers.
The “Auto Tier” policy automatically relocates data to the most appropriate tier based on
the activity level of each data slice.
The “Start High, then Auto Tier” is the recommended policy for each newly created pool,
because it takes advantage of the “Highest Available Tier” and “Auto-Tier” policies.
Use the “Lowest Available Tier” policy when cost effectiveness is the highest priority. With
this policy, data is initially placed on the lowest available tier with capacity.
Users can set all LUN level policies except the “No Data Movement” policy both during and
after LUN creation. The “No Data Movement” policy is only available after LUN creation. If a
LUN is configured with this policy, no slices provisioned to the LUN are relocated across
tiers.
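As a hedged sketch of setting a tiering policy from the Block CLI (the SP address and LUN number are placeholders, and the exact option spellings should be verified against the naviseccli reference for the release in use):
    naviseccli -h <SP_A_IP> lun -modify -l 25 -tieringPolicy noMovement -o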
Unisphere or Navisphere Secure CLI lets the user schedule the days of the week, start time,
and durations for data relocation for all participating tiered Pools in the storage system.
Unisphere or Navisphere Secure CLI also lets the user initiate a manual data relocation at
any time. To ensure that up-to-date statistics and settings are accounted for properly prior
to a manual relocation, FAST VP analyzes all statistics gathered independently of its
regularly scheduled hourly analysis before starting the relocation.
FAST VP scheduling involves defining the timetable and duration to initiate Analysis and
Relocation tasks for Pools enabled for tiering. Schedules can be configured to be run daily,
weekly, or as a single iteration. A default schedule will be configured when the FAST enabler
is installed.
Relocation tasks are controlled by a single schedule, and affect all Pools configured for
tiering.
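For reference, and assuming the FAST VP CLI verb below is available in the release in use (verify against the naviseccli documentation), the current relocation schedule and state can be inspected from the Block CLI:
    naviseccli -h <SP_A_IP> autotiering -info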
The first step to configuring FAST VP is to have a tiered Pool.
To create a Heterogeneous Pool select the storage system in the systems drop-down list on
the menu bar. Select Storage > Storage Configuration > Storage Pools. In Pools, click
Create.
The next step is to configure the Pool with tiers. In the General tab, under Storage Pool Parameters, select Pool. The user can create pools that use multiple RAID types, one RAID
type per tier, to satisfy multiple tiering requirements within a pool. To do this the pool must
contain multiple disk types.
When creating the pool, select the RAID type for each tier.
For the Extreme Performance tier, there are two types of disks that can be used: FAST
Cache optimized Flash drives and FAST VP optimized Flash drives. A RAID Group created by
FAST VP can use only one type, though both types can appear in the tier. If both types of
drive are present, the drive selection dialog shows them separately.
When the user expands an existing pool by adding additional drives, the system selects the
same RAID type that was used when the user created the pool.
When the user expands an existing pool by adding a new disk type tier, the user needs to
select the RAID type that is valid for the new disk type. For example, best practices suggest
using RAID 6 for NL-SAS drives, and RAID 6, 5, or 1/0 for other drives.
The Tiering Policy selection for the Pool is on the Advanced tab. A drop-down list of tiering
policies is available for selection.
There is a default Tiering policy that gets put into place when a Pool is created – it is Start
High then Auto-Tier (Recommended). This policy is applied to all LUNs that are created
from the Pool. The policy can be adjusted on a per-LUN basis by going to the LUN
Properties page and accessing the Tiering tab. The various Tiering Policies are available
from a drop-down for selection.
Provided the FAST enabler is present, select the Tiering tab from the Storage Pool
Properties window to display the status and configuration options.
Scheduled means FAST VP relocation is scheduled for the Pool. Data relocation for the
pool will be performed based on the FAST schedule in the Manage Auto-Tiering dialog. If
a tier fills to 90% capacity, data will be moved to another tier.
The Relocation Schedule button launches the Manage Auto-Tiering dialog when
clicked.
Data Relocation Status has several states. Ready means no relocations in progress for this
pool, Relocating means relocations are in progress for this pool and Paused means
relocations are paused for this pool.
Data to Move Down is the total amount of data (in GB) to move down from one tier to
another; Data to Move Up is the total amount of data (in GB) to move up from one tier to
another; Data to Move Within is the amount of data (in GB) that will be relocated inside the
tier based on I/O access.
Estimated time for data relocation is the estimated time (in hours) required to complete
data relocation.
Note: If the FAST enabler is not installed, certain information will not be displayed.
Tier Details shows information for each tier in the Pool. The example Pool has 2 tiers, SAS
(Performance) and NL-SAS (Capacity).
Tier Name is the Name of the tier assigned by provider or lower level software.
The Manage Auto-Tiering option available from Unisphere allows users to view and
configure various options.
The Data Relocation Rate controls how aggressively all scheduled data relocations will be
performed on the system when they occur. This applies to scheduled data relocations. The
rate settings are high, medium (default), and low. A low setting has little impact on
production I/O, but means that the tiering operations will take longer to make a full pass
through all the pools with tiering enabled. The high setting has the opposite effect. Though
relocation operations will proceed at a much faster pace, FAST VP will not consume so much
of the storage system resources that server I/Os time out. Operations are throttled by the
storage system.
The Data Relocation Schedule, if enabled, controls the system FAST VP schedule. The schedule controls allow configuring the days of the week, the time of day to start data relocation, and the data relocation duration (hours selection of 0-23; minutes selection of 0, 15, 30, and 45, though other minute values can be set through the CLI). The default
schedule is determined by the provider and will be read by Unisphere. Changes that are
applied to the schedule are persistent. The scheduled days use the same start time and
duration.
When the “Enabled” box is clear (not checked), the scheduling controls are grayed out, and
no data relocations are started by the scheduler. Even if the system FAST VP scheduler is
disabled, data relocations at the pool level may be manually started.
Unisphere or Navisphere Secure CLI lets the user manage data relocation.
The user can initiate a manual data relocation at any time. To ensure that up-to-date
statistics and settings are accounted for properly prior to a manual relocation, FAST VP
analyzes all statistics gathered independently of its regularly scheduled hourly analysis
before starting the relocation.
Data relocation can also be managed with an array-wide scheduler. Relocation tasks
controlled with the single array-wide schedule affect all Pools configured for tiering. For
Pools existing before the installation of FAST VP, Data Relocation is off by default. For Pools created after the installation of FAST VP, Data Relocation is on by default. These default settings can be changed as needed.
The Start Data Relocation dialog displays all of the pools that were selected and the
action that is about to take place. If FAST is Paused, this dialog will contain a message
alerting the user that FAST is in a Paused state and that relocations will resume once FAST
is resumed (provided that the selected window for the relocations did not expire in the
meantime). If one or more Pools are already actively relocating data, it will be noted in the
confirmation message.
Data Relocation Rates are High, Medium, and Low. The default setting of the Data
Relocation Rate is determined by the Data Relocation Rate defined in the Manage FAST
dialog. The default Data Relocation Duration is 8 hours.
When the “Stop Data Relocation” menu item is selected, a confirmation dialog is displayed
noting all of the pools that were selected and the action that is about to take place. If one
or more pools are not actively relocating data, it will be noted in the confirmation message.
The Tiering Summary pane can be configured from the Customize menu on the
Dashboard. The icon displays information about the status of tiering. This view is available
for all arrays regardless of the FAST enabler. When the FAST enabler is not installed, it will
display no FAST data and instead will show the user a message alerting them to the fact
that this feature is not supported on this system.
Relocation Status: Indicates the tiering relocation status. Can be Enabled or Paused.
Pools with data to be moved: the number of Pools that have data queued up to move
between tiers. This is a hot link that takes the user to the Pools table under Storage >
Storage Configuration > Storage Pools.
Scheduled Pools: the number of tiered pools associated with the FAST schedule. This is
also a hot link that takes the user to Storage > Storage Configuration > Storage Pools.
Active Pool Relocations: the number of pools with active data relocations running. This is
also a hot link that takes the user to Storage > Storage Configuration > Storage Pools.
Additional information includes the quantity of data to be moved up (GB), the quantity of
data to be moved down (GB), the estimated time to perform the relocation, the relocation
rate, and data to be moved within a tier if the tier has been expanded.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 30
This lesson covers functionality, benefits, and configuration of EMC FAST Cache.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 31
FAST Cache uses Flash drives to enhance read and write performance for frequently
accessed data in specified LUNs. FAST Cache consists of a storage pool of Flash disks
configured to function as FAST Cache. The FAST Cache is based on the locality of reference
of the data set. A data set with high locality of reference (data areas that are frequently
accessed) is a good candidate for FAST Cache. By promoting the data set to the FAST
Cache, the storage system services any subsequent requests for this data faster from the
Flash disks that make up the FAST Cache, thus reducing the load on the disks in the LUNs
that contain the data (the underlying disks). The data is flushed out of cache when it is no
longer accessed as frequently as other data, per the least recently used (LRU) algorithm.
FAST Cache consists of one or more pairs of mirrored disks (RAID 1) and provides both
read and write caching. For reads, the FAST Cache driver copies data off the disks being
accessed into the FAST Cache. For writes, FAST Cache effectively buffers the data waiting to
be written to disk. In both cases, the workload is off-loaded from slow rotating disks to the
faster Flash disks in FAST Cache.
FAST Cache operations are non-disruptive to applications and users. It uses internal
memory resources and does not place any load on host resources.
FAST Cache should be disabled for Write Intent Log (WIL) LUNs or Clone Private LUNs
(CPLs). Enabling FAST Cache for these LUNs is a misallocation of the FAST Cache and may
reduce the effectiveness of FAST Cache for other LUNs.
FAST Cache can be enabled on Classic LUNs and Pools once the FAST Cache enabler is
installed.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 32
FAST Cache improves the application performance, especially for workloads with frequent
and unpredictable large increases in I/O activity. FAST Cache provides low latency and high
I/O performance without requiring a large number of Flash disks. It is also expandable while
I/O to and from the storage system is occurring. Applications such as File and OLTP (online
transaction processing) have data sets that can benefit from the FAST Cache. The
performance boost provided by FAST Cache varies with the workload and the cache size.
Another important benefit is improved total cost of ownership (TCO) of the system. FAST
Cache copies the hot or active subsets of data to Flash drives in chunks. Because FAST Cache
absorbs many if not most of the IOPS, the user can fill the remainder of their storage needs
with low-cost, high-capacity disk drives. This ratio of a small amount of Flash paired with a
large amount of disk offers the best performance ($/IOPS) at the lowest cost ($/GB) with
optimal power efficiency (IOPS/kWh).
Use FAST Cache and FAST VP together to yield high performance and TCO from the storage
system. For example, use FAST Cache optimized Flash drives to create FAST Cache, and
use FAST VP for pools consisting of SAS and NL-SAS disk drives. From a performance point
of view, FAST Cache provides an immediate performance benefit to bursty data, while FAST
VP moves more active data to SAS drives and less active data to NL-SAS drives. From a
TCO perspective, FAST Cache can service active data with fewer Flash drives, while FAST VP
optimizes disk utilization and efficiency with SAS and NL-SAS drives.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 33
To create FAST Cache, the user needs at least 2 FAST Cache optimized drives in the
system, which will be configured in RAID 1 mirrored pairs. Once the enabler is installed, the
system uses the Policy Engine and Memory Map components to process and execute FAST
Cache.
• Policy Engine – Manages the flow of I/O through FAST Cache. When a chunk of data on
a LUN is accessed frequently, it is copied temporarily to FAST Cache (FAST Cache
optimized drives). The Policy Engine also maintains statistical information about the data
access patterns. The policies defined by the Policy Engine are system-defined and cannot
be changed by the user.
• Memory Map – Tracks extent usage and ownership at 64 KB chunk granularity. The
Memory Map maintains information on the state of 64 KB chunks of storage and their
contents in FAST Cache. A copy of the Memory Map is stored in DRAM memory, so when
the FAST Cache enabler is installed, SP memory is dynamically allocated to the FAST
Cache Memory Map. The size of the Memory Map increases linearly with the size of FAST
Cache being created. A copy of the Memory Map is also mirrored to the Flash disks to
maintain data integrity and high availability of data.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 34
During FAST Cache operations, the application gets the acknowledgement for an IO
operation once it has been serviced by the FAST Cache. FAST Cache algorithms are
designed such that the workload is spread evenly across all the flash drives that have been
used for creating FAST Cache.
During normal operation, a promotion to FAST Cache is initiated after the Policy Engine
determines that a 64 KB block of data is being accessed frequently. To be considered, the 64
KB block of data must be accessed by reads and/or writes multiple times within a short
period of time.
A FAST Cache flush is the process in which a FAST Cache page is copied to the HDDs and the
page is freed for use. The least recently used (LRU) algorithm determines which data blocks
to flush to make room for the new promotions.
FAST Cache contains a cleaning process which proactively copies dirty pages to the
underlying physical devices during times of minimal backend activity.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 35
FAST Cache is created and configured on the system from the System Properties FAST
Cache tab page. From the page, click the Create button to start the initializing operation.
The Flash drives are then configured for FAST Cache. The user has an option for the system
to automatically select the Flash drives to be used by FAST Cache or the user can manually
select the drives. When the initializing operation is complete, the cache state is Enabled.
The cache stays in the Enabled state until a failure occurs or the user chooses to destroy the
cache. To change the size of FAST Cache after it is configured, the user must destroy and
recreate FAST Cache. This requires FAST Cache to flush all dirty pages currently contained
in FAST Cache. When FAST Cache is created again, it must repopulate its data (warm-up
period).
If a sufficient number of Flash drives are not available to enable FAST Cache, Unisphere
displays an error message, and FAST Cache cannot be created.
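For reference, FAST Cache can also be created and examined from Navisphere Secure CLI. The following is a sketch only; the disk IDs (bus_enclosure_disk) are placeholders and the options should be confirmed in the VNX Block CLI reference.
To create FAST Cache from two Flash drives as a RAID 1 mirrored pair:
naviseccli -h <SP_IP> cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1
To display the FAST Cache state and the drives in use:
naviseccli -h <SP_IP> cache -fast -info
To destroy FAST Cache (required before resizing, as described above):
naviseccli -h <SP_IP> cache -fast -destroy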
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 36
The FAST Cache option will only be available if the FAST Cache enabler is installed on the
storage system.
When a Classic LUN is created, as shown in the example on the top left, FAST Cache is
enabled by default (as is Read and Write Cache).
If the Classic LUN has already been created as shown in the example on the bottom left,
and FAST Cache has not been enabled for the LUN, the Cache tab in the LUN Properties
window can be used to configure FAST Cache.
Note that checking the Enable Caching checkbox checks all boxes below it (SP Read Cache,
SP Write Cache, FAST Cache).
Enabling FAST Cache for Pool LUNs differs from that of Classic LUNs in that FAST Cache is
configured at the Pool level only, as shown in the examples on the right. In other words, all
LUNs created in the Pool will have FAST Cache enabled or disabled collectively depending on
the state of the FAST Cache Enabled box.
The FAST Cache Enabled box will be enabled by default if the FAST Cache enabler was
installed before the Pool was created. If the Pool was created prior to installing the FAST
Cache enabler, FAST Cache is disabled on the Pool by default. To enable FAST Cache on the
Pool, launch the Storage Pool Properties window and select the Enabled box under FAST
Cache as shown in the example on the bottom right.
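A hedged CLI equivalent for the Pool-level setting is shown below; the pool ID is a placeholder and the switches should be verified against the VNX Block CLI reference.
To enable FAST Cache on an existing Pool:
naviseccli -h <SP_IP> storagepool -modify -id 0 -fastcache on
To confirm the Pool's FAST Cache setting:
naviseccli -h <SP_IP> storagepool -list -id 0 -fastcache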
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 37
The FAST Cache enabler is required to be installed on the VNX for the feature to be
available. Once installed, the VNX needs to have FAST Cache optimized Flash drives
installed and configured as RAID 1 mirrored pairs. FAST VP drives cannot be used for FAST
Cache. FAST Cache is configured on Classic LUNs individually. FAST Cache is enabled by
default at the Pool level for Pool LUNs. All LUNs created from the Pool will have FAST Cache
enabled on them. If the FAST Cache enabler was installed after the Pool was created, FAST
Cache is disabled by default. Likewise, if a Classic LUN was created prior to the FAST Cache
enabler being installed, the Classic LUN will have FAST Cache disabled by default. FAST
Cache should be disabled for Write Intent Log (WIL) LUNs or Clone Private LUNs (CPLs).
Enabling FAST Cache for these LUNs is a misallocation of the FAST Cache and may reduce
the effectiveness of FAST Cache for other LUNs.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 38
This table shows the FAST Cache maximum configuration options. The maximum FAST
Cache size in the last column depends on the drive count in the second column (Flash Disk
Capacity). For example, the VNX5400 can have up to 10 drives of 100 GB or up to 10 drives
of 200 GB.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 39
This lesson covers the space efficiency features of Block Deduplication and Block
Compression. It provides a functional overview and the architecture of each feature as well
as the storage environments that are suited for each of them. The enablement and
management of the features are detailed and their guidelines and limits are examined.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 40
VNX Block Deduplication and Block Compression are optional storage efficiency software
features for VNX Block storage systems. They are made available to the array via specific
enablers: a Deduplication enabler and a Compression enabler. The features cannot be
enabled on the same LUNs because they are mutually exclusive on a per-LUN basis.
If Block Compression is enabled on a LUN it cannot also have Block Deduplication enabled.
Conversely, if Block Deduplication is enabled on a LUN it cannot also have Block
Compression enabled.
In general, Block Deduplication uses a hash digest process to identify duplicate data
contained within Pool LUNs and consolidate it in such a way that only one actual copy of the
data is used by many sources. This feature can result in significant space savings depending
on the nature of the data. VNX Block Deduplication utilizes a fixed block deduplication
method with a set size of 8 KB to remove redundant data from a dataset. Block
Deduplication is run post-process on the selected dataset. Deduplication is performed within
a Storage Pool for either Thick or Thin Pool LUNs with the resultant deduplicated LUN being
a Thin LUN. As duplicate data is identified, if a 256 MB pool slice is freed up, the free space
of the slice is returned to the Storage Pool. Block Deduplication cannot be directly enabled
on Classic LUNs. A manual migration of the Classic LUN can be performed to a Thin LUN,
then Deduplication can be enabled on the Thin LUN. For applications requiring consistent
and predictable performance, EMC recommends using Thick LUNs. If Thin LUN performance
is not acceptable, then do not use Block Deduplication.
In general, Block Compression uses a compression algorithm that attempts to reduce the
total space used by a dataset. VNX Block Compression works in 64 KB chunk increments to
reduce the storage footprint of a dataset by at least 8 KB and provide savings to the user.
Compression is not done on a chunk if the space savings would be less than 8 KB. If a 256 MB
pool slice is freed up by compression, the free space of the slice is returned to the Storage
Pool. Because accessing compressed data may require a decompression operation before the
I/O is completed, compression is not recommended for active datasets.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 41
The VNX Block Deduplication feature operates at the Storage Pool level. Non-deduplicated
and deduplicated LUNs can coexist within the same pool. The deduplication architecture
utilizes a Deduplication Container that is private space within a Storage Pool. There is only
one deduplication container per pool. The container holds all the data for the deduplication-
enabled LUNs within the specific pool. The container is created automatically when
deduplication is enabled on a pool LUN and conversely is destroyed automatically when
deduplication is disabled on the last LUN or when that LUN is deleted. Existing LUNs are
migrated to the container when deduplication is enabled on them. When a LUN is created
with deduplication enabled, the LUN gets created directly in the container. Because
deduplication is an SP process that uses hashing to detect duplicate 8 KB blocks on LUNs
within the pool, LUN SP ownership is critical to the feature performance. The container SP
Allocation Owner is determined by the SP Default Owner of the first LUN in the container.
To avoid deduplication feature performance issues, it is critical to use a common SP Default
Owner for all the LUNs within the pool that are deduplication enabled. This will result in the
container SP Allocation Owner matching the SP Default Owner for the specific pool’s
deduplicated LUNs. If LUNs from multiple pools are deduplication-enabled, it is
recommended to balance the multiple deduplication containers between the SPs.
The deduplication process runs against each deduplication container as a background task
12 hours after its last session completed. Each SP can run three concurrent container
sessions. Other sessions needing to run are queued. If a session runs for four hours straight,
the session is paused and the first queued session starts. The session checks the container
for 64 GB of new or updated data; if that much exists, the session runs a hash digest on
each 8 KB block of the 64 GB of data to identify duplicate block candidates. Candidate
blocks are then compared bit by bit to verify the data is exactly the same. The oldest
identical blocks are kept and duplicate blocks are removed (evacuated from the container).
The deduplication container uses a Virtual Block Map (VBM) to index the removed duplicate
blocks to the single instance saved block. Any freed pool slices are returned to the pool. If a
session starts and there is less than 64 GB of new or updated data, the hash digest portion
of the process is run to identify duplicate candidates without removing duplicate data, and
the session-complete timer is reset.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 42
The VNX Block Compression feature can be used on Thick and Thin Pool LUNs and on
Classic RAID Group LUNs. Compressed Pool LUNs will remain in the same Storage Pool. A
Thick LUN will be migrated to a Thin LUN by the compression process. When compression is
enabled on a Classic LUN, compression will migrate the LUN to a Thin LUN. The operator
must select a Storage Pool having enough capacity to migrate the LUN. The compression is
done on the Classic LUN in-line during the migration to a Thin LUN.
The compression process operates on 64 KB data chunks on the LUN. It only compresses
the data if a space savings of 8 KB or more can be realized; the process will not modify any
data chunk if less than 8 KB of space saving would result. The compression feature runs
continuously on compression-enabled LUNs. It can be manually paused by the operator. The
rate of compression for a LUN is also selectable between High, Medium, and Low; the
default value is Medium. This setting is not a level of compression for the data but rather
the rate at which the compression runs on the data. A Low rate can be selected when
response-time critical applications are running on the storage system. As data compression
frees 256 MB pool slices, that space is returned to the pool for its use.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 43
VNX Block space efficiency features are best suited for storage environments that require
space efficiency combined with a high degree of availability. Both Deduplication and
Compression use Thin LUNs to reclaim saved space to the pool, so they are only suited to
environments where Thin LUN performance is acceptable. The features work best
in environments where data is static and thus can best leverage the features’ storage
efficiencies.
Block Deduplication is well suited for environments where large amounts of duplicate data
are stored and that do not experience over 30% write IOs. Avoid environments that have
large amounts of unique data as it will not benefit from the space savings the feature
provides. If the environment is over 30% write active, it will tend to drive a constant
cycle of undoing and redoing the deduplication. Also avoid environments where sequential
or large block IOs are present.
Block Compression is very suitable to data archive environments. Avoid compression in time
sensitive application environments. This is because when compressed data is read, it has to
be decompressed inline and that affects the individual I/O thread performance. Also avoid
environments where data is active. If compressed data is written to, it first must be
decompressed and written back in uncompressed form, thus consuming space in the LUN.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 44
When creating a Pool LUN, the user is given the option of enabling VNX Block Deduplication
at the time of creation. Notice that the Thin checkbox is also enabled, since Deduplicated
LUNs are Thin LUNs by definition. From the Advanced tab the SP Default Owner of the LUN
can be selected. If this is the first LUN from the pool to be deduplication-enabled, the same
SP will be the Deduplication Container’s Allocation Owner. A warning message will be
displayed for creating a deduplication-enabled LUN that has an SP Default Owner that does
not match the pool Deduplication Container Allocation Owner. In the example shown, the
Block Pool already contains a deduplication enabled LUN having an SPB Default Owner and
the pool’s Deduplication Container Allocation Owner is SPB. The warning message alerts the
operator that selecting SPA as a Default Owner of the LUN will cause a performance impact.
Therefore the operator should select SPB as the Default Owner for the LUN to match the
existing SP Allocation Owner of the container.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 45
Deduplication can be enabled on an existing pool LUN by going to the LUNs page in
Unisphere and right-clicking the LUN to select the Deduplication option from the drop-down
selection. It can also be enabled from the LUN Properties page from the Deduplication tab.
If the existing LUN SP Default Owner does not match the pool Deduplication Container SP
Allocation Owner a warning message is displayed showing the operator the Optimal SP
Owner and recommending changing the SP Default Owner. If the LUN uses a feature not
supported, like VNX Snapshots, the user receives a message relating to how the system will
proceed.
Deduplication for the LUN can also be turned off from the LUNs page or the LUN Properties
page.
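As a reference, the equivalent Navisphere Secure CLI operation is sketched below. The LUN name is a placeholder and the -deduplication switch should be confirmed against the Block CLI reference for the installed release.
To enable Block Deduplication on an existing pool LUN:
naviseccli -h <SP_IP> lun -modify -name "LUN_10" -deduplication on
To turn Block Deduplication off for the same LUN:
naviseccli -h <SP_IP> lun -modify -name "LUN_10" -deduplication off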
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 46
The State of Block Deduplication for a Storage Pool can be viewed on the Storage Pool
Properties page from the Deduplication tab. If it is running, a percentage complete and
remaining space is shown. Deduplication on the pool can be Paused or Resumed, the
Tiering policy can be set and the Deduplication Rate can be set to Low, Medium (default) or
High. The page also displays the amount of space that is shared between the deduplicated
LUNs, including VNX Snapshots. A display is also given for the estimated capacity saved for
deduplicated LUNs and VNX Snapshots.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 47
To enable Block Compression on a pool LUN, from the Unisphere LUNs page select the LUN
and go to its Properties page. On the Compression tab check the Turn On Compression
option. The compression Rate of Low, Medium (default) or High can also be selected. Once
Compression is enabled it can be Paused from the same location. The slide illustrates the
Compression tab for a Thin LUN and a Thick LUN. Notice the difference in Consumed
Capacity. Enabling Compression on the Thick LUN will cause it to be migrated to a Thin LUN,
resulting in less Consumed Capacity. Additional space savings will be realized by the
compression of data on the LUN as well.
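A corresponding Navisphere Secure CLI sketch for enabling, pausing, and monitoring compression is shown below; the LUN number and rate are placeholders and the options should be checked against the Block CLI reference.
To turn on compression for a LUN at the default Medium rate:
naviseccli -h <SP_IP> compression -on -l 25 -rate medium
To pause compression on the same LUN:
naviseccli -h <SP_IP> compression -pause -l 25
To list the compression state of all compression-enabled LUNs:
naviseccli -h <SP_IP> compression -list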
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 48
To enable Block Compression on a Classic LUN, access its Properties page from Unisphere
and select the Compression tab, then click the Turn On Compression button. The system
migrates the Classic LUN to a pool Thin LUN and displays a window for the user to select an
existing pool with sufficient capacity or to create a new one.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 49
Block Deduplication has some feature interoperability guidelines. They are listed on the
table and are continued on the next slide.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 50
This slide continues the feature interoperability guidelines for Block Deduplication.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 51
Block Compression has some feature interoperability guidelines. They are listed on the table
and are continued on the next slide.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 52
This slide continues the feature interoperability guidelines for Block Compression. It also
displays a table detailing Compression operation limits by VNX array model.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 53
This lesson covers the Data-At-Rest Encryption (D@RE) advanced storage feature. It
describes the feature’s benefits and its guidelines and considerations. It also details
activating the feature for use in the VNX.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 54
The Data-At-Rest Encryption (D@RE) feature secures user data on VNX disk drives through strong
encryption. If a drive is physically stolen from the VNX system, the user data is unreadable.
The data is encrypted and decrypted by embedded encryption hardware in the SAS
controller. D@RE issues a unique encryption key for each drive that is configured in a Pool
or RAID Group. The encryption happens on the direct I/O path between the SAS controller
and the disk and is transparent to all upper-level data operations. The hardware encrypts
and decrypts at near line speed with a negligible performance impact. Since the SAS
controller hardware performs all the encryption, all VNX disk drive types are supported. VNX
D@RE requires no special disk hardware, unlike other data protection solutions which use
self-encrypting drives (SEDs). The D@RE feature is provided by the DataAtRestEncryption
enabler and is installed on all new VNX systems shipped from manufacturing. The enabler is
available as an NDU to upgrade currently deployed VNX systems. A separate Activation step
is required to start the user data encryption of the drives. If the VNX already contains
unencrypted data, the activation process will encrypt the existing data and all new data.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 55
The design objective of D@RE is to secure data stored on the VNX disks in the event of
physical theft. Some D@RE benefits are its ability to encrypt all data stored on the VNX.
This includes both File and Block data. It will also encrypt any existing data on the VNX and
does this with minimal performance impact. Because the encryption is done on the direct
I/O path from the SAS controller to the disk drive, all VNX storage features are unaffected
and are supported. The feature uses the existing SAS controller hardware of the VNX with
MCx systems so there is no special disk drive hardware needed. The feature works with the
existing supported VNX disk drives, all types (Flash, SAS and NL-SAS) and all vendors.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 56
The D@RE feature does have some guidelines and considerations. Before activating D@RE,
all FAST Cache LUNs need to be destroyed. This is required so that the data held in FAST
Cache is written back to disk and can thus be encrypted. Encrypting existing data is time
consuming. This is due to a design choice to limit encrypting the existing data to 5% of
available bandwidth and maintain the rest of the bandwidth for host I/O workloads. For
systems containing a large amount of data, the encryption of existing data can take tens of
days or more. The D@RE keystore contains all of the existing keys used to encrypt each
drive, and six copies of it are stored on a system private LUN that is protected by a 2 x 3
mirror. Each time a disk is configured into either a Pool or RAID Group, the system provides
an alert to the operator to perform a keystore backup. The backup requires operator
intervention and should be stored off the VNX in the unlikely event of keystore loss or
corruption. Should a keystore recovery be needed, a support engagement will be required.
Similarly, only support can revert the SAS I/O modules to an un-encrypted state.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 57
To activate D@RE, its wizard must be used; it is available for selection from the Wizards
task pane. The wizard screens are shown and display a caution message that, once
activated, the feature is irreversible.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 58
From the System Properties page access the Encryption tab to see the status of D@RE. The
feature is active and the encryption of existing data is ongoing and can take some time to
complete. The SPs will need to be rebooted one at a time in any order to complete enabling
the D@RE feature on the system. Make sure the first SP rebooted is fully back online prior to
rebooting the second SP. The SPs can be rebooted when the encryption status is “In
Progress”, “Scrubbing”, or “Encrypted”.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 59
The keystore backup operation is selectable from the System Management section of the
task pane. The keystore backup is a manual operation and should be done upon activating
D@RE and each time a drive is added to a Pool or RAID Group since D@RE will issue an
encryption key for the new drive. Back up the keystore to a location off the VNX. This
precaution is recommended should the existing keystore be lost or corrupted. Without a
keystore, all data on a D@RE-activated VNX becomes unavailable until a keystore recovery
operation is completed by EMC support.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 60
This module covered the advanced storage features of LUN Migration, LUN Expansion,
FAST VP, FAST Cache, storage efficiencies (Block Deduplication and Block Compression),
and D@RE. Their functionality was described, the benefits identified, and guidelines for
operation were listed. It also provided the configuration steps for the FAST VP, FAST Cache
and D@RE features.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 61
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: Advanced Storage Features 62
This module focuses on the theory, operation, and management of VNX Local Replication
options for Block: SnapView Snapshots, SnapView Clones, and VNX Snapshots.
If all VNX Snapshots are removed from a Thick LUN, the driver will detect this and begin the
defragmentation process. This converts Thick LUN slices back to contiguous 256 MB
addresses. The process runs in the background and can take a significant amount of time.
The user cannot disable this conversion process directly; however, it can be prevented by
keeping at least one VNX Snapshot of the Thick LUN.
Note: while a delete process is running, the Snapshot name remains used. So, if one needs
to create a new Snapshot with the same name, it is advisable to rename the Snapshot prior
to deleting it.
Note: The list of devices on any one Data Mover may vary widely. The devices presented
here are merely examples of what might be displayed, depending on the network specifics
of a given model.
Example:
$ server_ifconfig server_2 -all
server_2 :
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
vnx2fsn0 protocol=IP device=fsn0
inet=10.127.57.122 netmask=255.255.255.224 broadcast=10.127.57.127
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:26:a4:7e
• From the Top Navigation Bar, click System > Hardware > Data Movers.
• Right click server_2 and click Properties.
• Enter the IP address of the NTP server.
• Click Apply to accept the changes.
Note: To verify NTP status using the CLI, run the server_date command.
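For example, assuming a Data Mover named server_2 (the subcommand syntax may vary slightly by release):
To display the current date and time on the Data Mover:
server_date server_2
To display NTP synchronization statistics for the Data Mover:
server_date server_2 timesvc stats ntp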
Not Locked: All files start as Not Locked. A Not Locked file is an unprotected file that is
treated as a regular file in a file system. In an FLR file system, the state of an unprotected file
can change to Locked or remain as Not Locked.
Locked: A user cannot modify, extend, or delete a Locked file. The file remains Locked until its
retention period expires. An administrator can perform two actions on a Locked file:
• Increase the file Retention Date to extend the existing retention period
• If the Locked file is initially empty, move the file to the Append-only state
Append-only: You cannot delete, rename, or modify the data in an Append-only file, but you
can add data to it. The file can remain in the Append-only state forever. However, you can
transition it back to the Locked state by setting the file status to Read-only with a Retention
Date.
Expired: When the retention period ends, the file transitions from the Locked state to the
Expired state. You cannot modify or rename a file in the Expired state, but you can delete the
file. An Expired file can have its retention period extended such that the file transitions back to
the Locked state. An empty expired file can also transition to the Append-only state.
https://edutube.emc.com/Player.aspx?vno=+t2ve3LqIbbGdq7pRRKjyw==&autoplay=true
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 1
This lesson covers the purpose of SnapSure, introduces the key components and explains
how SnapSure uses VNX storage.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 2
SnapSure is a VNX for File, Local Protection feature that saves disk space and time by
creating a point-in-time view of a file system. This logical view is called a checkpoint, also
known as a “snapshot”, and can be mounted as a read-only or writeable file system.
SnapSure is mainly used by low-activity, read-only applications such as backups and file
system restores. Its writeable checkpoints can also be used in application testing or
decision support scenarios.
SnapSure is not a discrete copy product and does not maintain a mirror relationship
between source and target volumes. It maintains pointers to track changes to the primary
file system and reads data from either the primary file system or from a specified copy
area. The copy area is referred to as a SavVol, and is defined as a VNX for File metavolume.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 3
SnapSure checkpoints provide users with multiple point-in-time views of their data. In the
illustration above the user’s live, production data is a business proposal Microsoft Word
document. If they need to access what that file looked like on previous days, they can
easily access read-only versions of that file as viewed from different times. This can be
useful for restoring lost files or simply for checking what the data looked like previously. In
this example, checkpoints were taken on each day of the week.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 4
PFS
A production file system, or PFS, is any typical VNX file system that is being used by an
application or user.
SavVol
Each PFS with a checkpoint has an associated save volume, or SavVol. The first change
made to each PFS data block triggers SnapSure to copy that data block to the SavVol.
Bitmap
SnapSure maintains a bitmap of every data block in the PFS where it identifies if the data
block has changed since the creation of the checkpoint. Each PFS with a checkpoint has
one bitmap that always refers to the most recent checkpoint.
Blockmap
A blockmap of the SavVol is maintained to record the address in the SavVol of each “point-
in-time” saved PFS data block. Each checkpoint has its own blockmap.
Checkpoint
A point-in-time view of the PFS. SnapSure uses a combination of live PFS data and saved
data to display what the file system looked like at a particular point-in-time. A checkpoint is
thus dependent on the PFS and is not a disaster recovery solution. Checkpoints are also
known as snapshots.
Displayed on this slide is a PFS with three data blocks of content. When the first file system
checkpoint is created, a SavVol is also created. The SavVol is a specially marked
metavolume that holds the single Bitmap, the particular checkpoint’s blockmap (as we will
see, each additional checkpoint will have its own blockmap), and space to preserve the
original data values of blocks in the PFS that have been modified since the establishment of
the checkpoint. The bitmap holds one bit for every block on the PFS.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 5
This series of slides illustrate how SnapSure operates to preserve a point-in-time view of
PFS data. This slide and the next show how an initial write to a data block on the PFS is
processed by SnapSure.
A write to DB2 of the PFS is initiated and SnapSure holds the write request. The bitmap for
DB2 is 0 indicating SnapSure needs to perform a copy on first write operation for PFS DB2.
SnapSure copies DB2 data to the first address location in the SavVol. Thus the point-in-time
view of DB2 data is preserved within the SavVol by SnapSure.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 6
This slide continues with the initial PFS write operation to DB2 of the PFS.
With the original DB2 data copied to the SavVol, SnapSure updates the bitmap value for
DB2 to 1 indicating that that data is preserved in the SavVol. The blockmap is also updated
with the address in the SavVol where DB2 data is stored. SnapSure releases the write hold
and the new DB2 data is written to the PFS.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 7
This slide illustrates SnapSure operations with multiple checkpoints of a PFS.
Upon creation of a subsequent checkpoint, SnapSure creates a new bitmap and blockmap
for the newest checkpoint. The bitmap for any older checkpoint is removed. Only the most
recent read-only checkpoint will have a bitmap.
A write to the PFS uses a similar technique as seen in the prior two slides. The write to the
PFS is held and SnapSure examines the newest checkpoint bitmap to see if the point-in-
time view of the data needs to be copied to the SavVol. If the bitmap value is 0 the PFS
original data is copied to the SavVol, the bitmap and blockmap are updated, and the write
of data to the PFS is released. If the bitmap value for the data were 1, this would indicate
that the point-in-time view of data had already been preserved and thus SnapSure would
simply write the new data to the PFS.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 8
When a read is made from the newest checkpoint, SnapSure examines the checkpoint
bitmap. If the value for the data block is 1, this indicates that the original data is in the
SavVol and SnapSure then gets the SavVol location for the point-in-time data from the
blockmap and retrieves the data from the SavVol location. If the bitmap value for the data
block was 0, this indicates that the data on the PFS is unchanged and thus SnapSure
retrieves the data directly from the PFS.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 9
When a read is made from an old checkpoint, SnapSure cannot simply read the bitmap.
Instead, it will first have to examine the desired checkpoint’s blockmap to check for any
data that has been copied to the SavVol. SnapSure will continue to read through
subsequently newer blockmaps as it makes its way to the newest checkpoint. The first
referenced value is always the one that is used. If no blockmap contains a reference to the
data, that indicates the PFS holds the needed data and SnapSure will read the data from
the PFS.
For this example a read request is made from Checkpoint 1 for DB1. SnapSure examines
Blockmap1 for DB1 and, as seen, its blockmap does not have a reference for DB1 so
SnapSure progresses to the next newer checkpoint blockmap. In this example Blockmap2
does hold a reference for DB1 therefore SnapSure will go to the SavVol address to retrieve
DB1 data for the read request. In this example, should the read request have been for DB3,
SnapSure would have gone to the PFS to retrieve the data for the read request.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 10
SnapSure requires a SavVol to hold data. When you create the first checkpoint of a PFS,
SnapSure creates and manages the SavVol automatically by using the same storage pool as
the PFS. The following criteria are used for automatic SavVol creation:
• If PFS ≥ 20GB, then SavVol = 20GB
• If PFS < 20GB and PFS > 64MB, then SavVol = PFS size
• If PFS ≤ 64MB, then SavVol = 64MB
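For example, under these rules a 500 GB PFS gets a 20 GB SavVol, a 5 GB PFS gets a 5 GB SavVol, and a 32 MB PFS gets a 64 MB SavVol.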
If you create another checkpoint, SnapSure uses the same SavVol, but logically separates
the point-in-time data using unique checkpoint names.
The SavVol can be manually created and managed as well. All that is needed is an unused
metavolume. The recommended size of the metavolume is 10% of the PFS. Creating a
SavVol manually provides more control over the placement of the SavVol.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 11
SnapSure utilizes a feature to automatically extend the SavVol to prevent inactivation of
older checkpoints. By default, the High Water Mark (HWM) is set to 90%, but this amount
can be lowered if necessary. By default, SnapSure is not able to consume more than 20%
of the space available to the VNX. This limit of 20% can be changed in the param file
/nas/sys/nas_param.
If the SavVol was created automatically, the SavVol space will be extended in 20 GB
increments until the capacity is below the HWM once more. If the SavVol was manually
created, the automatic extension feature will extend the SavVol by 10% of the PFS. In
order to extend the SavVol, there must be unused disk space of the same type that the
SavVol resides on.
If the HWM is set to 0%, this tells SnapSure not to extend the SavVol when a checkpoint
reaches near-full capacity. Instead, SnapSure uses the remaining space and then deletes
the data in the oldest checkpoint and recycles the space to keep the most recent checkpoint
active. It repeats this behavior each time a checkpoint needs space.
The SnapSure refresh feature conserves SavVol space by recycling used space. Rather than
use new SavVol space when creating a new checkpoint of the PFS, use the refresh feature
anytime after you create one or more checkpoints. You can refresh any active checkpoint of
a PFS, and in any order. The refresh operation is irreversible. When you refresh a
checkpoint, SnapSure maintains the file system name, ID, and mount state of the
checkpoint for the new one. The PFS must remain mounted during a refresh.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 12
With SnapSure, you can automate the creation and refresh of read-only checkpoints.
Automated checkpoint refresh can be configured with the CLI nas_ckpt_schedule command,
Unisphere, or a Linux cron job script. Checkpoint creation and refresh can be scheduled on
arbitrary, multiple hours of a day and days of a week or month. You can also specify
multiple hours of a day on multiple days of a week, and have more than one schedule per
PFS.
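As a sketch of the Control Station CLI (the schedule name, file system name, runtime, and retention count are placeholders; confirm the exact options in the command reference), a daily checkpoint schedule might be created and listed as follows:
nas_ckpt_schedule -create daily_ckpt -filesystem pfs_04 -recurrence daily -every 1 -runtimes 22:00 -keep 7
nas_ckpt_schedule -list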
You must have appropriate VNX for File administrative privileges to use the various
checkpoint scheduling and management options. Administrative roles that have read-only
privileges can only list and view schedules. Roles with modify privileges can list, view,
change, pause, and resume schedules. Roles with full-control privileges can create and
delete checkpoint schedules in addition to all other options.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 13
This lesson covers how Writeable Checkpoints work as well as some limitations.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 14
Writeable Checkpoints
• Can be mounted and exported as read-write file systems
• Share the same SavVol with read-only checkpoints
• Add write capabilities to both local and remote checkpoints
Writeable checkpoints share the same SavVol with read-only checkpoints. The amount of
space used is proportional to the amount of data written to the writeable checkpoint file
system. Block overwrites do not consume more space.
There is no SavVol shrink. The SavVol grows to accommodate a busy writeable checkpoint
file system. The space cannot be returned to the cabinet until all checkpoints of a file
system are deleted.
A deleted writeable checkpoint returns its space to the SavVol.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 15
You can create, delete, and restore writeable checkpoints.
Writeable checkpoints are branched from “baseline” read-only checkpoints. A baseline
checkpoint exists for the lifetime of the writeable checkpoint. Any writeable checkpoint must
be deleted before the baseline is deleted. Writeable checkpoints and their baselines cannot
be refreshed or be part of a checkpoint schedule.
This feature is fully supported in CLI and Unisphere.
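As a sketch of the Control Station CLI workflow (file system and checkpoint names are placeholders; confirm the exact fs_ckpt options in the VNX for File command reference):
To create a read-only baseline checkpoint of the PFS:
fs_ckpt pfs_04 -name pfs04_ckpt1 -Create
To branch a writeable checkpoint from that baseline:
fs_ckpt pfs04_ckpt1 -Create -readonly n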
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 16
The deletion of a writeable checkpoint works just like a read-only checkpoint deletion.
The Unisphere GUI allows deletion of both a baseline and any writeable checkpoint in one
step; the CLI requires the writeable checkpoint to be deleted first.
In case of a restore from a writeable checkpoint to a PFS, the writeable checkpoint must be
remounted as a read-only file system before the restore starts. The GUI does this
automatically; the CLI requires the user to remount the writeable checkpoint as read-only
first.
The restore then proceeds in the background (same as a read-only restore). The writeable
checkpoint cannot be mounted read-write during the background restore. The read-write
checkpoint remains mounted read-only after the background restore completes.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 17
A writeable checkpoint requires at least one read-only checkpoint for use as a baseline to
the writeable checkpoint. If no read-only checkpoint exists, the system will automatically
create one when a writeable checkpoint is created. Unlike read-only checkpoints, which have
only one bitmap for the newest checkpoint, each writeable checkpoint will have a bitmap
as well as a blockmap. Data written to the writeable checkpoint is written directly into the
SavVol for the PFS. The writeable checkpoint uses the bitmap and blockmap in the same
manner as read-only checkpoints; the bitmap identifies if the checkpoint data resides on
the PFS or if it is in the SavVol and the blockmap will identify the SavVol address for the
written data. A Writeable checkpoint uses the same SavVol of a PFS as read-only
checkpoints.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 18
When a write request is made to a writeable checkpoint the data is written to the SavVol.
The writeable checkpoint bitmap for the data is set to 1 and its blockmap will contain the
SavVol address for the data.
This example uses a PFS having an existing read-only checkpoint that is saving point-in-
time data to the SavVol. A write request is made to DB3 of the writeable checkpoint. The
data will be written into the SavVol and the writeable checkpoint bitmap and blockmap will
be updated; the bitmap for the data block will be set to 1 and the blockmap will be updated
with the SavVol address that holds the data.
If a rewrite operation is performed on a writeable checkpoint data block, the data in the
SavVol for that data block is simply overwritten and no additional SavVol space is consumed
by the rewrite operation.
Read operations to a writeable checkpoint use the same methodology as the read-only
checkpoints.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 19
You can have only one writeable checkpoint per baseline read-only checkpoint. There is a
maximum of 16 writeable checkpoints per PFS. Writeable checkpoints do not count against
the 96 user checkpoint limit. So, altogether there could be a total of 112 user checkpoints per
PFS. However, any checkpoint created and used by other VNX features, such as VNX
Replicator, count towards the limit. If there are 95 read-only checkpoints on the PFS and
the user tries to use VNX Replicator, the replication will fail as the VNX needs to create two
checkpoints for that replication session.
Writeable checkpoints count towards the maximum number of file systems per cabinet
(4096) and the maximum number of mounted file systems per Data Mover (2048).
You cannot create a checkpoint from a writeable checkpoint.
You can create a writeable checkpoint from a scheduled read-only checkpoint. However, if the
writeable checkpoint exists when the schedule executes a refresh, it will fail.
Warnings are displayed in Unisphere when creating a writeable checkpoint on a scheduled
checkpoint. No warning is displayed using the CLI.
For additional information on limitations, see Using VNX SnapSure.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 20
This lesson covers creating a checkpoint file system and accessing it using Windows and
Linux or UNIX clients. This lesson also covers checkpoint schedule creation.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 21
CVFS (Checkpoint Virtual File System) is a navigation feature that provides NFS and CIFS
clients with read-only access to online, mounted checkpoints from within the PFS
namespace. This eliminates the need for administrator involvement in recovering point-in-
time files. The checkpoints are automatically mounted and able to be read by end users.
Hiding the checkpoint directory from the list of file system contents provides a measure of
access control by requiring clients to know the exact directory name to access the
checkpoints. The name of the hidden checkpoint directory is .ckpt by default. You can
change the name from .ckpt to a name of your choosing by using a parameter in the
slot_(x)/param file. You can change the checkpoint name presented to NFS/CIFS clients
when they list the .ckpt directory, to a custom name, if desired. The default format of
checkpoint name is: yyyy_mm_dd_hh.mm.ss_<Data_Mover_timezone>.
You can only change the default checkpoint name when you mount the checkpoint. To
change the name of a checkpoint pfs04_ckpt1 of pfs_04 to Monday while mounting the
checkpoint on Data Mover 2 on mountpoint /pfs04_ckpt1, use the following CLI command:
server_mount server_2 -o cvfsname=Monday pfs04_ckpt1 /pfs04_ckpt1
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 22
To view checkpoint data using a Linux or UNIX machine, the production file system must
first be mounted on the client machine. If you list the files in the file system, you will
not see any .ckpt directory. The .ckpt directory needs to be explicitly specified in the list
command path to view its contents. Each checkpoint will appear as a data directory. Only
checkpoints that are mounted and read-only will be displayed.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 23
If we change directory to one of the checkpoint directories, we will see the contents of the
production file system at the time the checkpoint was taken. End users can copy any file
that has been accidentally deleted from the .ckpt directory into the production file system.
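As an illustration from an NFS client (the mount point, checkpoint directory name, and file name are placeholders, with the directory name following the default yyyy_mm_dd_hh.mm.ss_<timezone> format described earlier):
ls /mnt/pfs04/.ckpt
cd /mnt/pfs04/.ckpt/2015_11_02_08.00.00_GMT
cp proposal.doc /mnt/pfs04/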
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 24
Another method of accessing checkpoint data via CIFS is to use the Shadow Copy Client
(SCC). The SCC is a Microsoft Windows feature that allows Windows users to access
previous versions of a file via the Microsoft Volume Shadow Copy Service. The SCC will
need to be downloaded from Microsoft online if using Windows 2000 or XP. SCC is also
supported by VNX to enable Windows clients to list, view, copy, and restore from files in
checkpoints created with SnapSure. To view the checkpoint data via SCC, the Previous
Versions tab of the file system Properties window will need to be accessed.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 25
In Unisphere, you can schedule checkpoint creation and refreshes at multiple hours of a
day, days of a week, or days of a month. You can also specify multiple hours of a day on
multiple days of a week to further simplify administrative tasks. More than one schedule per
PFS is supported. You can also create a schedule of a PFS that already has a checkpoint
created on it, and modify existing schedules.
Under the Schedules tab you can find a list of schedules and their runtimes. Runtimes are
based on the time zone set on the Control Station of the VNX. There are four possible
schedule states:
Active: Schedule is past its first execution time and is to run at least once in the future.
Pending: Schedule has not yet run.
Paused: Schedule has been stopped and is not to run until resumed, at which point, the
state returns to Active.
Complete: Schedule has reached its end time or maximum execution times and is not to
run again unless the end time is changed to a future date.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 26
An automated checkpoint refresh solution can be configured using Unisphere or the Control
Station CLI command nas_ckpt_schedule. There is an option to enter names, separated
with commas, for the checkpoints that are to be created in the schedule. The number of
names you type must equal the number of checkpoints specified in the “Number of
checkpoints to keep” field. If you do not type any checkpoint names, the system assigns
default names for the checkpoints in the format ckpt_<schedule_name>_<nnn>. In this
automatic naming scheme, schedule_name is the name of the associated checkpoint
schedule and nnn is an incremental number, starting at 001.
If scripts are going to be used for checkpoints, utilizing the Relative Naming feature can
make script writing easier by defining a prefix name for the checkpoint. The prefix name is
defined when the schedule is created. When the checkpoint is created, the schedule uses
the relative prefix, delimiter, and starting index to create a checkpoint file name relative to
the order of checkpoints, starting with 000 by default and incrementing with each new
checkpoint. This makes the checkpoint names consistent, predictable, and easily scripted.
If the prefix were defined as “nightly”, the delimiter set to “.”, and the starting index set to
“0”, the first checkpoint created with this schedule would be named nightly.000 and the
second would be named nightly.001.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 27
A checkpoint is not intended to be a mirror, disaster recovery, or high-availability tool. It is
partially derived from real time PFS data. A checkpoint might become inaccessible or
unreadable if the associated PFS is inaccessible. Only a PFS and its checkpoints saved to a
tape or an alternate storage location can be used for disaster recovery.
SnapSure allows multiple checkpoint schedules to be created for each PFS. However, EMC
supports a total of 96 read-only checkpoints and 16 writeable (scheduled or otherwise) per
PFS, as system resources permit. This limit includes checkpoints that currently exist, are
created in a schedule, or pending in other schedules for the PFS, and internally created
checkpoints, such as for backups.
Checkpoint creation and refresh failures can occur if the schedule conflicts with other
background processes, such as the internal VNX for File database backup process that
occurs from 1 to 5 minutes past the hour. If a refresh failure occurs due to a schedule or
resource conflict, you can manually refresh the affected checkpoint, or let it automatically
refresh in the next schedule cycle. Also, do not schedule checkpoint creation or refreshes
within 15 minutes of each other in the same schedule or between schedules running on the
same PFS. Refresh-failure events are sent to the /nas/log/sys_log file.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 28
This lesson covers planning for SnapSure, including scheduling concerns and performance
considerations.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 29
When planning and configuring checkpoint schedules, there are some very important
considerations. If these points are not carefully included in your planning, undesirable
results will likely occur, such as checkpoints that are not created and/or updated.
Some key points to consider are:
• Do not schedule checkpoint creation/refresh operations to take place at the same
time as the VNX Database backup. This function begins at one minute past every
hour. During the VNX for File database backup, the database is frozen and new
configurations (such as a checkpoint configuration) are not possible. In some very
large scale implementations, this database backup could take several minutes to
complete.
• Do not schedule checkpoint operations, whether in the same schedule or in different
schedules, to run at the same time. Staggering them requires careful planning.
When scheduled tasks are missed because resources are temporarily unavailable, they are
automatically retried up to 15 times, with a 15-second wait before each retry. Retries do
not occur for conditions such as network outages or insufficient disk space.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 30
Depending on the type of operation, SnapSure can cause a decrease in performance.
Creating a checkpoint requires the PFS to be paused. Therefore, PFS write activity is
suspended, but read activity continues while the system creates the checkpoint. The pause
time depends on the amount of data in the cache, but it is typically one second or less.
If the checkpoint is the first one for the file system, SnapSure also needs time to create the
SavVol.
Deleting a checkpoint requires the PFS to be paused. All PFS write activity is suspended
momentarily, but read activity continues while the system deletes the checkpoint.
Restoring a PFS from a checkpoint requires the PFS to be frozen. This means that all PFS
activities are suspended during the restore initialization process. When read activity is
suspended during a freeze, connections to CIFS users are broken. However, this is not the
case when write activity is suspended.
The PFS sees performance degradation only the first time a given block is modified. This is
known as Copy on First Write (CoFW). Once that particular block has been copied to the
SavVol, subsequent modifications to the same block do not impact performance.
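For reference, the checkpoint operations described above can also be run from the Control
Station with the fs_ckpt command. The file system and checkpoint names below are
hypothetical, and the exact switches should be confirmed against the CLI reference for your
code level.
  # Create a checkpoint of the PFS (briefly pauses PFS writes);
  # the first checkpoint also triggers SavVol creation.
  fs_ckpt pfs01 -name pfs01_ckpt1 -Create

  # Restore the PFS from a checkpoint (freezes the PFS during
  # restore initialization).
  fs_ckpt pfs01_ckpt1 -Restore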
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 31
Refreshing a checkpoint requires it to be frozen. Checkpoint read activity is suspended while
the system refreshes the checkpoint. During a refresh, the checkpoint is deleted and
another one is created with the same name. Clients attempting to access the checkpoint
during a refresh process experience the following:
NFS clients: Requests are retried indefinitely. When the system thaws, the file system
automatically remounts.
CIFS clients: Depending on the application running on Windows, or if the system freezes
for more than 45 seconds, the Windows application might drop the link. The share might
need to be remounted and remapped.
If a checkpoint becomes inactive for any reason, read/write activity on the PFS continues
uninterrupted.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 32
Writes to a single SavVol are purely sequential. NL-SAS drives have very good sequential
I/O performance that is comparable to SAS drives. On the other hand, reads from a SavVol
are nearly always random, a workload where SAS drives perform better. Workload analysis
is important in determining whether NL-SAS drives are appropriate for SavVols.
Many SnapSure checkpoints are never read from at all; or, if they are, the reads are
infrequent and are not performance-sensitive. In these cases, NL-SAS drives could be used
for SavVols. If checkpoints are used for testing, data mining and data sharing, and
experience periods of heavy read access, then SAS drives are a better choice.
Be careful when placing multiple SavVols on a single set of NL-SAS drives: even though each
SavVol is written sequentially, the combined I/O at the disk level appears random, a pattern
that SAS drives handle better.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 33
This lesson covers the management of checkpoint storage and memory, and how to modify
checkpoint schedules.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 34
To configure auto extend for a SavVol, or to determine how much SavVol storage a
particular file system has, simply access the Properties page of one of the file system’s
checkpoints. Towards the bottom of the page there will be a link for Checkpoint Storage.
This link provides information regarding the state of the SavVol, its metavolume name
and dVol usage, and the auto extend settings. There is only one SavVol per file system, no
matter how many checkpoints are associated with that file system. You may also
manually extend a SavVol from the checkpoint storage page.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 35
SavVol storage may also be verified by using the Control Station CLI as shown on this
slide. By listing all of the checkpoints of a file system, we can also determine how much
space each checkpoint is using. Checkpoints are listed in the order in which they were
created.
The y in the inuse field shows that the checkpoints are mounted. The value in the fullmark
field is the current SavVol HWM. The value in the total_savvol_used field is the cumulative
total of the SavVol used by all PFS checkpoints and not each individual checkpoint in the
SavVol. The value in the ckpt_usage_on_savvol field is the SavVol space used by a specific
checkpoint. The values displayed in the total_savvol_used and ckpt_usage_on_savvol fields
are rounded up to the nearest integer. Therefore, the displayed sum of all
ckpt_usage_on_savvol values might not equal the total_savvol_used value.
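A command along these lines produces the listing discussed above; the file system name is
made up, and the exact switches may differ on your release.
  # List all checkpoints of a PFS together with their SavVol usage
  fs_ckpt pfs01 -list -all
  # The output includes the inuse, fullmark, total_savvol_used, and
  # ckpt_usage_on_savvol fields described above.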
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 36
As we mentioned in a previous lesson, when a checkpoint is refreshed, SnapSure deletes
the checkpoint and creates a new checkpoint, recycling SavVol space while maintaining the
old file system name, ID, and mount state. This is one way of creating more SavVol space
without actually extending the SavVol. Once a SavVol is extended, even if all the checkpoint
data is deleted, that space is not returned to the system unless the SavVol has been
created from Thin pool LUNs. In other words, a SavVol built on classic or Thick pool LUNs is
not decreased in size automatically by the system. When refreshing a checkpoint,
SnapSure will first unmount the checkpoint and delete the old checkpoint data. Then, a new
checkpoint will be created and assigned as the active, or newest, checkpoint. Next,
SnapSure will remount the checkpoint back on the Data Mover.
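A refresh can also be triggered manually from the Control Station. As a hedged example
(the checkpoint name is hypothetical; verify the switch for your code level):
  # Refresh an existing checkpoint: the old point-in-time data is discarded,
  # the SavVol space is recycled, and the checkpoint keeps its name and ID.
  fs_ckpt pfs01_ckpt1 -refresh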
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 37
The checkpoint refresh, restore, and delete operations may all be performed from the
Checkpoints page. Navigate to Data Protection > Snapshots > File System Checkpoints.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 38
Once a checkpoint schedule is up and running, several settings may be modified without
having to create a new schedule. The schedule name, description, and run times are among
the values that can be modified. The checkpoint names and the schedule recurrence cannot
be modified, even if the schedule is paused; in those cases, a new schedule must be
created.
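As an illustrative sketch only (the schedule name is hypothetical, and the option names
should be verified with nas_ckpt_schedule -help or the CLI reference for your release), an
existing schedule can be inspected, paused, and modified from the Control Station:
  nas_ckpt_schedule -list                                   # list existing schedules
  nas_ckpt_schedule -info nightly_sched                     # show a schedule's settings
  nas_ckpt_schedule -pause nightly_sched                    # temporarily suspend it
  nas_ckpt_schedule -modify nightly_sched -runtimes 02:30   # change the run time
  nas_ckpt_schedule -resume nightly_sched                   # resume the schedule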
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 39
VNX for File allocates up to 1 GB of physical RAM per Data Mover to store the blockmaps for
all checkpoints of all file systems on a Data Mover. If a Data Mover has less than 4 GB of
RAM, then 512 MB will be allocated.
Each time a checkpoint is read, the system queries it to find the location of the required
data block. For any checkpoint, blockmap entries that are needed by the system but not
resident in main memory are paged in from the SavVol. The entries stay in main memory
until system memory consumption requires them to be purged.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 40
The server_sysstat command, when run with the “-blockmap” option switch, displays
the current blockmap memory allocation and the number of blocks paged out to disk while
not in use. Each Data Mover has a predefined blockmap memory quota that depends on the
hardware type and the VNX for File code being used. For more information, refer to the
VNX Network Server Release Notes.
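For example, run against a Data Mover (server_2 is only a placeholder name here):
  # Display blockmap memory usage for the Data Mover
  server_sysstat server_2 -blockmap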
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 41
This module covered the key points listed.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 42
This lab covers VNX SnapSure local replication. First, SnapSure is configured and
checkpoints are created. Then checkpoints are used to restore files from NFS and CIFS
clients. A Checkpoint Refresh operation is performed, and finally a file system restore is
performed from a checkpoint.
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 43
This lab covered VNX SnapSure local replication. Checkpoints were created and files were
restored from NFS and CIFS clients. A Checkpoint Refresh operation was performed and a
file system was restored from a checkpoint.
Please discuss as a group your experience with the lab exercise. Were there any issues or
problems encountered in doing the lab exercise? Are there relevant use cases that the lab
exercise objectives could apply to? What are some concerns relating to the lab subject?
Copyright 2015 EMC Corporation. All rights reserved. [email protected] Module: VNX SnapSure 44
This module focuses on performing and testing Data Mover failover and failback.
https://edutube.emc.com/Player.aspx?vno=u/nei80YW2SjhLC/erYBhQ==&autoplay=true