EMC Storage with Microsoft Hyper-V Virtualization
EMC Solutions
Abstract
This white paper examines deployment and integration of a Microsoft Windows
Server Hyper-V virtualization solution on EMC storage arrays, with details on
integration, storage solutions, availability, and mobility options for Windows
Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 Hyper-V.
November 2013
Table of contents
Executive summary............................................................................................................................... 5
Business case .................................................................................................................................. 5
Introduction.......................................................................................................................................... 6
Purpose ........................................................................................................................................... 6
Scope .............................................................................................................................................. 6
Audience ......................................................................................................................................... 6
Technology overview ............................................................................................................................ 7
Microsoft Hyper-V ............................................................................................................................ 7
Storage connectivity options for virtual machines .............................................................................. 10
Virtual machine direct connectivity using iSCSI .............................................................................. 10
Virtual machine direct connectivity with virtual Fibre Channel ........................................................ 11
SMB 3.0 File Shares ....................................................................................................................... 16
Hyper-V Server managed connectivity ............................................................................................ 18
Virtual Hard Disks .......................................................................................................................... 20
Virtual hard disk types ................................................................................................................... 22
Windows Server 2012 R2 new VHD features ................................................................................... 25
Online VHD re-sizing.................................................................................................................. 25
Shared virtual hard disk ............................................................................................................ 25
Pass-through disks ........................................................................................................................ 26
Storage connectivity summary ....................................................................................................... 29
Availability and mobility for virtual machines..................................................................................... 30
Windows failover clustering for Hyper-V servers ............................................................................. 30
Windows failover clustering for virtual machines............................................................................ 32
Virtual machine live migrations within clusters .............................................................................. 32
Shared-nothing live migration ........................................................................................................ 34
Storage live migration .................................................................................................................... 34
Windows failover clustering with Cluster Shared Volumes.............................................................. 35
Sizing of CSVs ................................................................................................................................ 38
Site disaster protection with Hyper-V Replica ................................................................................. 38
Site disaster protection with Cluster Enabler .................................................................................. 39
Cluster Enabler CSV behavior ......................................................................................................... 41
EMC VPLEX ..................................................................................................................................... 42
VPLEX Local ............................................................................................................................... 42
VPLEX Metro with AccessAnywhere............................................................................................ 42
VPLEX Geo with AccessAnywhere............................................................................................... 43
VPLEX with Windows failover clustering ..................................................................................... 43
Executive summary
Business case
For many customers, there has been a growing need to provide ever-increasing
physical server deployments to service business needs. This has led to several
inefficiencies in operational areas, including the overprovisioning of server CPU,
memory, and storage. Power and cooling costs, and the requirements for floor space
in data centers, grow with each added physical server, whether the resources are
overprovisioned or not. Large numbers of physical servers, and the inefficiencies of
overprovisioning these servers, result in high costs and a poor return on investment
(ROI).
You can use Microsoft Windows Server 2008 R2, Windows Server 2012, and Windows
Server 2012 R2 Hyper-V to consolidate multiple physical server environments to
achieve significant space, power, and cooling savings, and maintain availability and
performance targets. EMC storage arrays provide additional value by allowing you to
consolidate storage resources, implement advanced high-availability solutions, and
provide seamless multi-site protection of data assets.
For consolidating data center operations, the Microsoft Hyper-V hypervisor provides a
scalable solution for virtualization on the Windows Server environment. Large-scale
consolidation saves money when you optimize and consolidate storage resources to
a single storage repository. Centralized storage also enhances the advanced features
of Hyper-V.
Introduction
Purpose
Scope
This white paper explains how to use Microsoft Hyper-V with EMC storage arrays to
provide RAID protection and improve core system performance. This white paper also
explains how to use complementary EMC technologies such as EMC Replication
Manager, EMC Storage Integrator (ESI), and EMC Solutions Enabler for Hyper-V
environments to improve dynamic placement capabilities for Hyper-V landscapes.
Audience
This white paper is for administrators who use Microsoft Windows Server 2008 R2,
Windows Server 2012, and Windows Server 2012 R2. This white paper is also for
administrators, storage architects, customers, and EMC field personnel who want to
understand the implementation of Hyper-V solutions on EMC storage platforms.
Technology overview
Microsoft Hyper-V
Microsoft Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012
R2 provide the Hyper-V server role on the applicable versions of Windows Server.
When a Windows Server instance has the Hyper-V role installed, the original
operating system instance is called the parent partition.
Microsoft provides, supports, and recommends running a Hyper-V server in the
minimal footprint of a Windows Server Core installation. Windows Server Installation
Options, on Microsoft TechNet, provides more information about Windows Server
installation options, including Server Core. Hyper-V products are only available for the
64-bit (x64) release of Microsoft Windows, and require that the server hardware
environment supports hardware assisted virtualization (Intel-VT or AMD-V).
When you install the Hyper-V server role, you also install the Windows Hyper-V
virtualization hypervisor for the parent partition. Figure 1 shows the Hyper-V Manager
Management Console (MMC) that you can use to define virtual machine instances.
Figure 1. Hyper-V Manager
You can use products such as Microsoft System Center Virtual Machine Manager
(SCVMM) in more complicated Hyper-V deployments that include many physical
servers and virtual machine instances. The SCVMM solution provides a
comprehensive management framework with centralized command and control
features. SCVMM also includes functionality in the Performance and Resource
Optimization (PRO) subsystem, and storage integration based on the Storage
Management Initiative Specification (SMI-S). Figure 2 shows these options.
Figure 2.
Figure 3.
Storage connectivity options for virtual machines
With Windows Server 2008 R2, you can configure two types of storage devices for a
virtual machine from the settings options. You can provision a storage device as a Virtual Hard
Disk (VHD) that is connected to one of the IDE or SCSI controller adapters, or you can
provision a device that is connected as a physical hard disk (also called a pass-through storage device).
Windows Server 2012 includes VHD provisioning, pass-through support, and a new
virtual Fibre Channel option. Virtual Fibre Channel creates synthetic Fibre Channel
adapters that allow direct storage access using the Fibre Channel protocol. Virtual
machine direct connectivity with virtual Fibre Channel on page 11 provides more
information.
Virtual machine direct connectivity using iSCSI
Virtual machine instances running Windows Server can use storage provided directly
to the virtual machine as an iSCSI target. For this type of connectivity, the operating
system of the virtual machine must implement the Microsoft iSCSI Initiator software
and must access network resources through a virtual network interface.
As the virtual machine itself is directly accessing the iSCSI storage device through the
network, the operating system within the virtual machine is responsible for all
management of the disk device and subsequent volume management. An iSCSI
target device must be appropriately configured for the virtual machine to access the
iSCSI devices.
For channel redundancy, we recommend using EMC PowerPath or Microsoft Multipath
I/O (MPIO) from the virtual machine instead of NIC teaming. We also recommend
using jumbo frames for I/O intensive applications, by setting the Maximum
Transmission Unit (MTU) to 9,000. The MTU should be the same for the storage array,
network infrastructure (switch or fabric interconnect), Hyper-V server network cards
servicing iSCSI traffic, and on the virtual NICs for the virtual machine.
The following example sets the MTU on a NIC in Windows Server 2012 using PowerShell:
Set-NetAdapterAdvancedProperty -Name iSCSIA -RegistryKeyword
"*JumboPacket" -RegistryValue "9014"
For clustered environments, disable the cluster network communication for any
network interfaces that you plan to use for iSCSI. As shown in Figure 4, you can
disable it by opening the iSCSI Properties dialog box for the iSCSI network that was
discovered by Windows failover clustering.
Note: In this white paper, "we" refers to the EMC engineering team that validated the solution.
Figure 4.
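As an alternative to the dialog box shown in Figure 4, you can exclude the network from cluster use with PowerShell. The following is a minimal sketch; the cluster network name iSCSI-A is an example value, and a Role value of 0 removes the network from cluster communication:
# Exclude the iSCSI network from cluster communication
# (Role 0 = none, 1 = cluster only, 3 = cluster and client; "iSCSI-A" is an example name)
Import-Module FailoverClusters
(Get-ClusterNetwork -Name "iSCSI-A").Role = 0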
Note: The EMC Host Connectivity Guide for Windows, available on EMC Online Support,
provides more details for configuring the Microsoft iSCSI Initiator.
One benefit of iSCSI storage is the support for clustering within virtual machines.
Shared storage clustering between virtual machines is supported with iSCSI for both
Windows 2008 R2 Hyper-V and Windows Server 2012 Hyper-V. Windows failover
clustering for virtual machines on page 32 provides more information.
For most configurations you must still provision a VHD device to support the
installation of the virtual machine operating system. Hyper-V Server managed
connectivity on page 18 provides details.
Note: Third-party hardware iSCSI solutions can support a boot from an iSCSI SAN solution;
however, these solutions are beyond the scope of this white paper due to the specific
details required for each implementation.
Virtual machine direct connectivity with virtual Fibre Channel
With Windows Server 2012, you can use the virtual Fibre Channel feature to provide
direct Fibre Channel connectivity to storage arrays from virtual machines, giving
optimal storage performance and full protocol access. Virtual Fibre Channel also
supports guest-based clustering on Hyper-V servers that are running Windows Server
2012. Virtual machines must be running Windows Server 2008, Windows Server 2008
R2, or Windows Server 2012 to support the virtual Fibre Channel feature.
To support Hyper-V virtual Fibre Channel, you must use N Port ID Virtualization (NPIV)
capable FC host bus adapters (HBA) and NPIV capable FC switches. NPIV assigns
World Wide Names (WWN) to the virtual Fibre Channel adapters that are presented to
a virtual machine. Zoning and masking can then be performed between the storage
array front end ports directly to the virtual WWNs created by NPIV for the virtual
machines. No zoning or masking is necessary for the Hyper-V server.
For the initial configuration of virtual FC, you must create a Fibre Channel SAN within
the Hyper-V Virtual SAN Manager, as shown in Figure 5. The Hyper-V SAN is a logical
construct where physical HBA ports are assigned. You can place one or multiple HBAs
within a Hyper-V SAN for port isolation or for deterministic fabric redundancy. Use the
same virtual SAN configuration and naming convention on all Hyper-V servers in a
clustered environment. This ensures that each node can take ownership and host a
highly available guest with virtual Fibre Channel.
Figure 5.
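A virtual SAN can also be created with PowerShell. The following is a minimal sketch; the SAN name and WWN values are example placeholders that you would replace with the addresses of your physical HBA ports, as listed by Get-InitiatorPort:
# List the physical FC HBA ports available on this Hyper-V server
Get-InitiatorPort | Format-Table NodeAddress, PortAddress, ConnectionType -AutoSize
# Create a virtual SAN and assign one physical HBA port to it
# (the name and WWN values below are examples only)
New-VMSan -Name "SAN_A" -WorldWideNodeName "20000000C9A1B2C3" -WorldWidePortName "10000000C9A1B2C3"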
After a virtual SAN is created, do the following to present virtual Fibre Channel (FC)
controllers to the virtual machine:
1. From within the virtual machine settings (with the virtual machine powered off), click Add Hardware.
2. Select Fibre Channel Adapter, and then click Add.
3. Select the virtual SAN to associate with the new adapter, and then click Apply.
Figure 6.
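The same operation can be scripted. A minimal sketch, reusing the virtual machine and virtual SAN names from the example output later in this section:
# Add virtual Fibre Channel adapters to the virtual machine and connect them to virtual SANs
# (the virtual machine must be powered off)
Add-VMFibreChannelHba -VMName FCPTSMIS -SanName SAN_A
Add-VMFibreChannelHba -VMName FCPTSMIS -SanName SAN_B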
Figure 7. Virtual Fibre Channel topology: each virtual FC adapter (A and B) in the virtual machine maps through a virtual SAN and a physical FC adapter on the physical server to Fabric A or Fabric B, and on to Storage Director A or B
Two NPIV-based WWPNs, or Hyper-V address sets, are associated with each virtual FC
adapter, as shown in Figure 8. Both of these WWPNs must be zoned, masked, or
registered to the appropriate storage for live migration to work for that virtual
machine. When the virtual machine is powered on, only one of the two WWPN addresses
is used by the guest at any given time. If you request a live migration, Hyper-V uses the inactive
address to log in to the storage array and ensure connectivity before the live migration
continues. After the live migration, the previously active WWPN becomes inactive. It is
important to validate connectivity and live migration functionality before putting a
virtual machine that uses the virtual Fibre Channel feature into production.
Figure 8.
You can also use the Get-VM PowerShell cmdlet to get WWPNs by viewing the
FibreChannelHostBusAdapters property, or use the Get-VMFibreChannelHba
cmdlet, as shown in the following WWPN PowerShell example:
Get-VMFibreChannelHba -ComputerName MSTPM3035 -VMName FCPTSMIS |
ft SanName, WorldWidePortNameSetA, WorldWidePortNameSetB -AutoSize

SanName WorldWidePortNameSetA WorldWidePortNameSetB
------- --------------------- ---------------------
SAN_A   C003FF22E51D0000      C003FF22E51D0001
SAN_B   C003FF8A1F380000      C003FF8A1F380001
The virtual machine must manage multipathing when you use the virtual Fibre
Channel feature. You can use either native MPIO or EMC PowerPath from within the
virtual machine to control load balancing and path failover where multiple virtual FC
adapters or multiple targets are configured.
Both Microsoft and EMC recommend that you use virtual Fibre Channel instead of
pass-through devices when direct storage access is required by the virtual machine or
required by the layered software. If components in the environment do not support
NPIV and cannot use virtual FC, pass-through devices can still be used and are
supported.
SMB 3.0 File Shares
SMB 3.0 is the current iteration of the SMB protocol, also known as Common Internet
File System (CIFS). The SMB protocol is often used to provide shared access to files
over a TCP/IP-based network in Microsoft Windows-based environments. The SMB
3.0 protocol, which is supported by Microsoft for Hyper-V, provides new core
performance and high availability enhancement features. Server Message Block
overview, on Microsoft TechNet, provides more details about SMB 3.0.
SMB file shares can be presented directly to a virtual machine for storage use or,
starting with Windows Server 2012, be used as a target to support virtual hard disks
used by virtual machines. Both the EMC VNX and VNXe family of storage arrays have
SMB 3.0 support in their latest software releases. Some of the SMB 3.0 features
supported include Multichannel, Continuous Availability, Offload Copy (Windows
Server 2012 Offloaded Data Transfer, or ODX), and Directory Leasing.
Note: EMC VNX Series: Introduction to SMB 3.0 Support and EMC VNXe Series: Introduction
to SMB 3.0 Support, on EMC Online Support, provide more details about SMB 3.0 support
for VNX and VNXe arrays.
When configuring SMB 3.0-based storage for Hyper-V, it is important to include the
following steps:
1. Ensure that the Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators have full control permissions to the appropriate file share folder (see the sketch after this list).
2. Enable Continuous Availability on the file share, as described in the following procedure.
3. Enable synchronous writes for the file system that backs the share, as described later in this section.
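If the file share is hosted on a Windows Server 2012 file server rather than on a VNX or VNXe system, the permissions in step 1 can be granted with the SMB cmdlets. The following is a minimal sketch; the share name, domain, computer accounts, and administrators group are example values:
# Grant full control on the SMB share to the Hyper-V computer accounts and administrators
# (share, domain, and account names are examples only)
Grant-SmbShareAccess -Name "SHARE00" -AccountName "CONTOSO\HYPERV01$", "CONTOSO\HYPERV02$", "CONTOSO\HyperVAdmins" -AccessRight Full -Force
# NTFS permissions on the underlying folder must also grant full control to the same accounts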
To enable Continuous Availability on the VNX platform, use the CLI from the control
station as in the following example:
1. From an SSH client (such as PuTTY), connect to the VNX control station.
2. Run the server_mount command against the primary Data Mover that owns the file system. Note the file system and path name, SMB_FS on /SMB_FS for the example in Figure 9.
Figure 9.
3. Mount the file system with the Continuous Availability (CA) option:
server_mount server_2 -o smbca SMB_FS
4. Run server_mount again to confirm that the smbca option is now set on the mount.
Note: For VNXe, SMBCA can be enabled within Unisphere, under Advanced Options in CIFS
Share Detail.
For VNX, synchronous writes can be enabled within Unisphere, with the following
steps:
1. From Storage Array > Storage Configuration > File Systems, click the Mounts tab.
2. Select the mount associated with the targeted file system, and then click Properties.
3. Select Set Advanced Options and ensure that Direct Writes Enabled and CIFS Sync Writes Enabled are selected.
Notes:
For VNX OE 8.x, we recommend not enabling direct writes. EMC VNX Unified
Best Practices For Performance on EMC Online Support provides additional
details.
For VNXe, you can enable synchronous writes within Unisphere, under
Advanced Attributes in Shared Folder Detail.
In Windows, SMB shares can be used by specifying the Universal Naming Convention
(UNC) path within PowerShell cmdlets, as in the following example. You can also use
Hyper-V Manager, as shown in Figure 10.
PowerShell example with SMB storage:
New-VHD -Path \\SFSERVER00\SHARE00\VM00.VHDX -Dynamic -SizeBytes
100GB
ComputerName            : EMCFT302
Path                    : \\SFSERVER00\SHARE00\VM00.VHDX
VhdFormat               : VHDX
VhdType                 : Dynamic
FileSize                : 4194304
Size                    : 107374182400
MinimumSize             :
LogicalSectorSize       : 512
PhysicalSectorSize      : 4096
BlockSize               : 33554432
ParentPath              :
FragmentationPercentage : 0
Alignment               : 1
Attached                : False
DiskNumber              :
IsDeleted               : False
Number                  :
Figure 10.
Hyper-V Server managed connectivity
When you first deploy a virtual machine in Hyper-V, you must often provide the
location for the VHD storage that represents the operating system image. When you
format the volume to be used for virtual machine storage, we recommend using a 64
KB allocation unit size (AU). The 64 KB AU helps to ensure the VHD files within the file
system are aligned with the boundaries of the underlying storage device.
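For example, a new data volume for VHD storage can be formatted with a 64 KB allocation unit size from PowerShell. The following is a minimal sketch; the drive letter and label are example values:
# Format the VHD storage volume with NTFS and a 64 KB allocation unit size
Format-Volume -DriveLetter V -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "HyperV-Storage"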
As shown in Figure 11, the initial configuration requires a Hyper-V management
name, and also the location for the virtual machine configuration information. If you
want to provide high availability for the virtual machine, then this location should
represent a SAN device that is available to all nodes within the cluster. In most high
availability cases, the storage location for the operating system VHD resides on a
Cluster Shared Volume (CSV).
Figure 11.
Subsequent steps in the New Virtual Machine Wizard request sizing information for
memory allocation and network connectivity, which are beyond the scope of this
white paper. Microsoft Hyper-V online help provides information about these
parameters.
Use the New Virtual Machine Wizard to configure the Virtual Hard Disk (VHD or VHDX)
for the operating system installation as shown in Figure 12. The default location for
the VHD is based on the previous location specified in the Location field, and the VHD
Name field is based on the name provided for the virtual machine.
Figure 12.
Size the VHD appropriately for the operating system being installed. When configured
through the wizard in this manner, the created disk is a dynamic VHD, or a dynamic
VHDX file when using Windows Server 2012. You can manually configure VHD devices
for the virtual machine to specify the VHD characteristics. To allow for manual
configuration of VHD devices, select Attach a virtual hard disk later.
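You can also create the virtual machine and its operating system disk from PowerShell. The following is a minimal sketch; the names, path, and sizes are example values:
# Create a virtual machine with a new dynamic VHDX for the operating system
New-VM -Name "VM00" -MemoryStartupBytes 4GB -NewVHDPath "C:\ClusterStorage\Volume1\VM00\VM00.vhdx" -NewVHDSizeBytes 60GB
# Or create the virtual machine without a disk and attach a manually configured VHD later
New-VM -Name "VM01" -MemoryStartupBytes 4GB -NoVHD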
After the New Virtual Machine Wizard has completed, select the Settings option from
the Hyper-V Manager console for further modifications to the virtual machine
configuration. The configuration is stored in an XML document located in the
previously specified virtual machine directory. The name of the configuration file is
based on a Global Unique Identifier (GUID) for the virtual machine.
You can map manually configured VHDs to either the IDE or SCSI controllers defined
in the virtual machine configuration. At least one such VHD must exist to host and
install the operating system.
Virtual Hard Disks
You can define VHD devices that can be later repurposed to virtual machines. You can
create VHD devices outside of a virtual machine by selecting New > Hard Disk within
Hyper-V Manager, as shown in Figure 13. In clustered environments, you can launch
the same hard disk creation wizard from the failover cluster manager by right clicking
Roles and then selecting Virtual machines > New Hard Disk.
Figure 13.
Define and associate a VHD from within the Settings option for a virtual machine by
selecting the controller (IDE or SCSI) to which a VHD will be associated. To define and
map the VHD within the virtual machine, open the settings for the virtual machine as
shown in Figure 14. By selecting the hardware device (SCSI Controller in this
example), you can define the new VHD to be created and assigned to that controller.
Figure 14.
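You can also create and attach a data disk from PowerShell. This is a minimal sketch; the virtual machine name, path, and size are example values:
# Create a new dynamic VHDX and attach it to the virtual machine's first SCSI controller
New-VHD -Path "C:\ClusterStorage\Volume1\VM00\Data01.vhdx" -Dynamic -SizeBytes 200GB
Add-VMHardDiskDrive -VMName "VM00" -ControllerType SCSI -ControllerNumber 0 -Path "C:\ClusterStorage\Volume1\VM00\Data01.vhdx"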
For Windows 2008 R2 and Windows Server 2012 VHDs, you must assign virtual
machine boot disks to an IDE controller.
Server 2008 R2 and Server 2012 support only two devices per IDE controller; because
of this, you can configure only four IDE VHD devices for any specified virtual machine.
If additional VHD devices are required, you must define them as SCSI controller
managed devices.
For Windows Server 2012 R2, you can configure two generations of virtual machines.
Generation 1 virtual machines are compatible with the previous version of Hyper-V,
and Generation 2 virtual machines add support for the following new functionality:
Secure boot
Boot from a SCSI virtual hard disk or SCSI virtual DVD
PXE boot using a standard (synthetic) network adapter
UEFI firmware support
Generation 2 virtual machines also remove support for the legacy network adapter
and IDE drives.
SCSI controllers support multiple disk devices per controller, and are a more scalable
solution for configurations when multiple LUN devices or multiple VHD devices are
required. Each virtual machine can have four virtual SCSI controllers, with 64 disks
per controller. You can present up to 256 virtual SCSI disks to a virtual machine. For non-boot
devices, use SCSI controllers for presenting additional storage to the virtual
machine.
For I/O intensive workloads, especially with Windows 2008 R2, you may need to
allocate multiple virtual SCSI adapters to a virtual machine. With Windows 2008 R2,
each virtual SCSI controller has a single channel, with a maximum queue depth of
256 per adapter. Additionally, a single virtual CPU is used for storage I/O interrupt
handling. Due to these limits, you may need to use multiple virtual SCSI adapters to
reach the IOPS potential of the underlying storage.
Windows Server 2012 greatly reduced the need to present multiple virtual SCSI
controllers to improve performance. Windows Server 2012 provides a minimum of
one channel per virtual SCSI device/per controller. One channel is added for every 16
virtual CPUs presented to the virtual machine. The queue depth was also changed to
256 per device, and the I/O interrupt handling was changed to be distributed across
all virtual CPUs presented to the virtual machine. Because of these changes, you
usually need only a single virtual SCSI controller for a virtual machine with Windows
Server 2012 or Windows Server 2012 R2.
Virtual hard disk types
Two virtual hard disk formats are available natively with Hyper-V. For Windows Server
2008 R2, the VHD hard disk format is used. With Windows Server 2012 both the VHD
and VHDX hard disk formats are supported. The VHD format supports files up to 2 TB
in size, while the VHDX format supports files up to 64 TB in size.
You can use three different types of VHD disks when you configure new or additional
storage devices, as shown in Figure 15. The choice between a fixed size and
dynamically expanding format is usually based on the storage utilization
requirements, as there is a difference in how storage is allocated for these two types.
For this reason, the two selections affect storage provisioning functionality such as
that provided by virtual/thin provisioning technologies within storage arrays.
Figure 15.
A fixed size VHD or VHDX device is fully written to at creation time; as a result, when
selecting this VHD type, all storage equal to the size of the VHD file is consumed
within the targeted thin pools. The creation of the fixed device can also take a long
time because of the requirement to write the full size of the file to the storage array.
To help address the time it takes to create a fixed VHD, Windows Server 2012 has a
feature called Offloaded Data Transfer, also referred to as ODX. ODX can offload the
writing of repeating patterns to a storage device. If ODX is supported by the target
storage array, the creation of fixed VHD files (either VHD or VHDX) offloads the series
of contiguous writes to be handled by the storage array. This increases the speed at
which the VHD is created.
Another benefit of the ODX write offload capability, as implemented by both the
VMAX and VNX storage arrays, is in virtual provisioning environments; the zeros that
represent the fixed VHD are not allocated within the thin pool. This makes a fixed VHD
file space efficient with virtual provisioning where ODX is available. The fixed VHD
continues to show its full size within the file system. For example, if a 100 GB fixed
VHD is created, 100 GB is consumed within the file system, but that space is not
consumed within the thin pool.
Even where ODX is disabled or not supported by the array, a fixed VHDX can still end
up unallocated within a thin pool. Windows Server 2012 contains a native thin
reclamation capability based on the TRIM and UNMAP SCSI commands. This reclaim
functionality is supported from within virtual machines using the VHDX disk format. If
you present a fixed VHDX to a virtual machine, and that virtual machine is running
Windows Server 2012, a file system format causes Windows to issue reclaim requests
for the entire size of the underlying VHDX to the storage device. As a result, the space
for the fixed VHDX is no longer allocated in the thin pool. Both the VMAX and VNX
support the UNMAP functionality native to Windows Server 2012.
Dynamically expanding VHD devices do not pre-allocate all storage that is defined for
them, even if ODX is available in Server 2012 environments; however, these devices
can suffer a slight degradation in performance because storage must be allocated
when the operating system or applications within the virtual machine need more
allocations. Also, with the dynamic VHD file format there are storage alignment
concerns due to the internal constructs of the file as it grows. This causes additional
storage performance overhead for both read and write operations.
Alignment is not a problem with the Windows Server 2012 VHDX format. The VHDX
internal constructs ensure 1 MB alignment on the underlying storage device.
Microsoft also made significant improvements for the storage performance of
dynamic VHDX files, compared to the legacy VHD format. For most workloads,
dynamic VHDX files perform similarly to fixed VHDX files, except for sequential write
workloads where the dynamic VHDX file must expand consistently into the file
system. After you allocate dynamic VHDX space, there is no difference in performance
when compared to a fixed VHDX file.
When using dynamic VHD files, you can over provision a file system. A dynamic VHD
file has both a current size and a potential size. The current size is the size it
consumes in the file system, and the potential size is the size that is specified during
creation, and which can be consumed by the virtual machine. An administrator can
assign VHD files with potential sizes that total more than the size of the file system.
This is not necessarily a problem, but be careful to monitor file systems and ensure
adequate free space where dynamic VHD files are used. You can use fixed VHD files
to avoid the possibility of over-allocation.
You can use third-party tools in the public domain to create a fixed VHD device where,
like a dynamic VHD, pre-allocation is not performed. Be cautious when using public
domain solutions. Although such solutions work, they may not be supported by
Microsoft. One such tool is provided at http://code.msdn.microsoft.com/vhdtool.
Such third-party tools are unnecessary when ODX is available, or when reclamation
support is available and the virtual machine is running Windows Server 2012, as the
resulting fixed VHD file is space efficient for virtual provisioning.
Based on the considerations in the previous paragraphs, we recommend the
following when using VHD files:
Use fixed VHD files if the potential for over-allocating file system space is
not desired.
Convert VHD files to VHDX files when migrating virtual machines to Hyper-V
on Server 2012 if:
Use dynamic VHDX files for general-purpose workloads where initial write
performance is not required.
Use fixed VHDX files if the potential for over-allocating file system space is
not desired.
Use VHD files instead of pass-through or virtual Fibre Channel storage unless
there are specific requirements for these technologies by applications within
the virtual machine.
Differencing disk is the third VHD format option. This disk device is configured to
provide an associated storage area that is created against a source VHD. You can use
this style of configuration when, for example, you create a gold master VHD, and you
create multiple virtual machine instances. In such configurations, you must protect
the gold master from being updated by any individual virtual machine. Each virtual
machine must write its own changes. In this instance, the gold master VHD behaves
as a read-only device, and all changes written by the virtual machine are saved to the
differencing disk device. There will always be an association between the differencing
disk and the gold master; without the original gold master, the differencing disk
only maintains changes and does not represent a fully independent copy.
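A differencing disk can be created from PowerShell against an existing gold master. This is a minimal sketch; the paths are example values:
# Create a differencing VHDX whose parent is a read-only gold master image
New-VHD -Path "C:\VMs\VM02\VM02.vhdx" -ParentPath "C:\Masters\Gold-WS2012.vhdx" -Differencing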
Windows Server 2012 R2 new VHD features
Windows Server 2012 R2 offers several new features specific to the VHD format,
including online VHD re-sizing and shared VHD:
Online VHD re-sizing
In previous versions of Hyper-V, you had to power off the virtual machine before
resizing a virtual hard disk. Windows Server 2012 R2 allows you to resize a virtual
hard disk for the VHDX format, when presented through a virtual SCSI adapter and
while the virtual machine is running. This feature is independent of any storage array
specific functionality. However, when combined with the ability to expand storage
pools and LUNs within EMC storage arrays, the feature offers an end-to-end capability
for non-disruptively adding capacity when using virtual hard disks. Online Virtual
Hard Disk Resizing Overview, on Microsoft TechNet, provides more details about this
feature.
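For example, a VHDX attached to a running virtual machine's SCSI controller can be expanded from PowerShell on Windows Server 2012 R2. This is a minimal sketch; the path and size are example values, and the volume inside the guest must still be extended afterward:
# Expand a VHDX while the virtual machine that uses it is running
Resize-VHD -Path "C:\ClusterStorage\Volume1\VM00\Data01.vhdx" -SizeBytes 300GB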
Shared virtual hard disk
Shared virtual hard disks enable multiple virtual machines to access the same VHDX
file. The main benefit of this functionality is the support for shared storage within a
virtual machine-based Windows failover cluster.
Microsoft supports the use of shared VHDX files within Cluster Shared Volumes or
within SMB file shares that support certain parts of the SMB 3.0 protocol. You can use
shared virtual hard disks with CSVs on EMC block-based storage. VNX or VNXe SMB
3.0 based file shares do not currently support the use of shared VHDs.
You can enable shared VHDs from Hyper-V manager or PowerShell. You must power
off the virtual machine to modify the required setting. You can use Hyper-V manager,
from within the settings of the virtual machine, to enable virtual hard disk sharing, as
shown in Figure 16.
Figure 16.
With PowerShell, you can enable hard disk sharing either when you add the virtual
disk to the virtual machine or after you add the disk.
To enable hard disk sharing when adding the virtual disk to a virtual machine:
Add-VMHardDiskDrive -VMName VM1 -Path
C:\ClusterStorage\Volume1\Shared.vhdx -ShareVirtualDisk
To enable hard disk sharing on a virtual disk already added to a virtual machine:
$Drive = get-vm VM1 | Get-VMHardDiskDrive | Where {$_.Path -like
"C:\ClusterStorage\Volume1\Shared.vhdx"}
Set-VMHardDiskDrive $Drive -SupportPersistentReservations $true
Virtual Hard Disk Sharing Overview, in Microsoft TechNet, provides more details
about this feature.
Pass-through disks
Because of the way that I/O generated to a VHD device located on a volume managed
by the parent partition is processed, several levels of indirection are
imposed. The operating system within the virtual machine services I/O inside the virtual
machine and passes the I/O to the storage device. In turn, because the VHD is physically
owned by the parent partition, the parent must receive and drive the I/O to the
physical disk that it owns. This multi-level indirection of I/O does not provide the best
performance, although the overhead is relatively small. The best performance occurs
when there are the fewest levels of indirection. For storage devices presented to a
virtual machine, the best performance occurs when you use virtual Fibre Channel
devices, pass-through devices, or iSCSI devices directly to the virtual machine. As
previously mentioned, we recommend that you use virtual Fibre Channel instead of
pass-through devices where it is supported in a specified environment.
Pass-through devices must be configured as offline to the parent partition and are
therefore inaccessible for any parent managed functions, such as creation or
management of volumes. These offline disk devices are then configured as storage
devices directly to the virtual machine. You can use the Disk Management console or,
alternatively, the DISKPART command-line interface, to transition SAN storage devices
between online and offline status. The SAN policy, which you can view and set through the
DISKPART command-line interface, determines the default state of newly discovered storage devices.
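You can also take a disk offline from PowerShell before presenting it as a pass-through device. This is a minimal sketch; the disk number is an example value that you would confirm with Get-Disk first:
# Identify the SAN disk and take it offline in the parent partition
Get-Disk
Set-Disk -Number 4 -IsOffline $true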
You can use the Hyper-V MMC to configure pass-through devices as shown in Figure
17. Disks can be allocated against a SCSI controller that is configured for the virtual
machine. Each SCSI controller can map up to 64 pass-through devices, and up to four
discrete SCSI controllers may be configured to an individual virtual machine. This
provides support for up to 256 SCSI devices.
Figure 17.
After you configure the required disk devices as pass-through devices to the virtual
machine, the operating system of the virtual machine detects and displays them as
shown in Figure 18. In this instance, the virtual machine has been configured with a
VHD device that is used as a boot device. The Virtual HD ATA Device is the boot
device. The EMC SYMMETRIX SCSI Disk Device identifies the two pass-through
devices, as this is the detected storage device from the Windows Server 2008 R2
operating system of the virtual machine.
Figure 18.
You must configure storage devices that have been configured as pass-through
devices to a virtual machine in the same way as is typical for storage devices to a
physical server. Administrators should follow the recommendations provided for a
physical environment, which can include the requirements to align partitions on
applicable operating systems. Windows Server 2008 R2 and Windows Server 2012
do not require manual partition alignment, as partitions are automatically aligned to
a 1 MB offset. When you create a New Technology File System (NTFS) volume, follow
Microsoft SQL Server and Microsoft Exchange Server recommendations for such tasks
as selecting an allocation unit size of 64 KB when formatting volumes.
You can deploy virtual machine instances that use a pass-through device as the boot
device for the operating system. You must define the pass-through device
before installing the operating system of the virtual machine, and then select the
pass-through disk (configured through the IDE controller) as the install location.
In clustered environments, ensure that the proper resource dependencies are in place
for pass-through devices with their respective virtual machine, as shown in Figure 19.
The pass-through disk should be within the virtual machine group or role. The virtual
machine resource and virtual machine configuration resource must also be made
dependent on the pass-through devices. You can use the Update-ClusterVirtualMachineConfiguration
PowerShell cmdlet to help set the proper dependencies for a clustered virtual machine.
Figure 19.
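A minimal sketch of refreshing the cluster configuration for a clustered virtual machine after its storage changes; the virtual machine role name is an example value:
# Refresh the clustered virtual machine configuration so new disk dependencies are recognized
Update-ClusterVirtualMachineConfiguration -Name "VM00"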
Storage connectivity summary
EMC storage arrays provide and support all forms of storage connectivity required by
Windows Server 2008 R2 and Windows Server 2012 Hyper-V. You can deploy any
form of Hyper-V Server managed by, or directed to virtual machine connectivity, and
can even combine multiple forms of connectivity to satisfy application-level
requirements.
Each form of storage connectivity provides different management or operational
features. For example, storage that is provisioned directly to a virtual machine, using
virtual Fibre Channel, iSCSI connectivity, or pass-through disks, restricts that storage
volume to the virtual machine to which it is presented. Conversely, storage allocated as VHD devices
created on volumes within the Hyper-V server allows a single LUN to be shared
among any number of virtual machines, with the various VHD devices collocated
on the parent-managed volume.
When you use a common volume to collocate VHD devices, you can also affect some
high-availability or mobility solutions, because a change to the single LUN affects all
virtual machines located on the LUN. This can affect configurations using failover
clustering. However, the implementation of Cluster Shared Volumes (CSV) with
Windows Server 2008 R2 and Windows Server 2012 failover clustering addresses the
need for high availability of consolidated VHD deployments.
When you collocate VHD devices onto a single storage LUN, consider how to address
the cumulative workload. In cases where an application such as Microsoft SQL Server
or Microsoft Exchange Server is deployed within a virtual machine, use sizing best
practices to ensure that the underlying storage is able to support the anticipated
workload. When collocated VHD devices are placed on a common storage volume
(LUN), provision the device to ensure that it can satisfy the cumulative workload of all
applications and operating systems located on the VHDs.
Availability and mobility for virtual machines
Windows failover clustering for Hyper-V servers
You can implement Windows failover clustering for use with Hyper-V virtual machines
in the same way as implementing the Windows cluster environment for other
applications such as SQL Server or Exchange Server. Virtual machines become
another form of application that failover clustering can manage and protect.
Use the failover cluster management wizard to configure a new application that
converts an existing virtual server instance into a highly available configuration. Use
the option to configure a virtual machine as shown in Figure 20. Shut down the virtual
machine to configure it for high availability, and locate all storage objects, including
items such as ISO images that are mounted to the virtual machine, to SAN storage.
Failover clustering with Windows 2008 R2 assumes that access to storage objects
from all nodes within the cluster is symmetrical. This means that all drive mappings,
file locations, and mount points are identical, and during configuration, checks are
made to ensure that this condition is met.
With failover clustering with Windows Server 2012, you can have asymmetrical
storage configurations, where the same storage is not connected to all nodes in the
cluster. Such configurations are possible in many geographically dispersed cluster
scenarios. In this case, the cluster validation wizards only validate storage against
nodes in a common site. Wizard failure results when mandatory requirements are not
met. You will receive warnings when failover clustering is not able to verify some of
these aspects, or when failure is likely. Read the warnings for information about how
to fix the problems.
Figure 20.
After you import a virtual machine into failover clustering, manage and maintain the
virtual machine through the failover cluster management interface. Avoid starting and
stopping the virtual machine outside of the control of failover clustering. If the virtual
machine shuts down outside of the control of failover clustering, the clustering
software assumes that the virtual machine has failed and restarts the virtual
machine.
Failover Cluster manager, where necessary, launches the required virtual machine
management interfaces. Use failover clustering to manage all availability options and
state changes for the virtual machine.
When you import a virtual machine instance into a high-availability configuration, the
machine must include all related storage disk devices so that you can manage the
virtual machine correctly. The High Availability Wizard fails if it is unable to include all
storage configured for the virtual machine within the cluster environment. Configure
all shared storage correctly across the cluster nodes. When you add disk storage
devices, correctly configure the devices as shared storage within the cluster.
The primary goal of Windows Server failover clustering is to maintain availability of
the virtual machine when the virtual machine becomes unavailable due to
unforeseen failures; however, this protection does not always maintain the virtual
machine state through such transitions. As an example of this style of protection,
consider the case of a physical node failure where one or more virtual machines were
running. Windows failover clustering detects that the virtual machines are not
operational and that a node is no longer available and attempts to restart the virtual
machines on a remaining node within the cluster configuration.
Windows failover clustering for virtual machines
Availability for the virtual machine resources is ensured through the use of Windows
failover clustering at the parent level; however, protection at the virtual machine level
may not provide high availability for the applications running within the virtual
machines. For example, a server instance cannot start if a virtual machine instance
has corrupted files. The high-availability protection for the virtual machine can ensure
that the virtual machine is running, but cannot ensure that the operating system
itself, or the applications installed on the server, are accessible.
Windows failover clustering checks at the application level to ensure that services are
accessible. For example, a clustered SQL Server instance continually undergoes
Look Alive and Is Alive checks to ensure that the SQL Server instance is
accessible to user connections. Implementing clustering within the virtual machines
can provide this additional level of protection.
You cannot configure a Failover Cluster within virtual machines that are running
Windows Server 2008 R2 or Windows Server 2012 using virtual disks or pass-through
disks. This limitation is because of the filtering of the necessary SCSI-3 Persistent
Reservation commands. However, you can form Windows Cluster configurations with
virtual machines that are running Windows Server 2008 R2 with iSCSI shared storage
devices. In such configurations, the iSCSI initiator is implemented within the child
virtual machines, and the shared storage is defined on the iSCSI LUNs.
With Windows Server 2012 you can use both iSCSI and virtual Fibre Channel as
shared storage within a virtual machine cluster. You can also use SMB file share
storage with certain clustered applications, such as SQL Server. If you use SMB file
share storage, you should also use SMB 3.0 based file shares.
With Windows Server 2012 R2, you can also use VHDs as shared storage between
virtual machines that run Windows failover clustering. Windows Server 2012 R2 new
VHD features on page 25 provides more information about the shared virtual hard
disk feature.
Virtual machine live migrations within clusters
Movement of virtual machines within a cluster was different for systems before
Windows Server 2008 R2. When an administrator or an automated management tool
requested a move, the virtual machine state was saved to disk and then resumed
after the disk resources were moved to the target node. This move, or quick migration,
took long enough that client outages often occurred, even though the virtual machine
state was preserved and then resumed.
With Windows Server 2008 R2 and Windows Server 2012 for failover cluster nodes,
you can use the live migration functionality available with the clustering environment.
Live migrations move virtual machines transparently between nodes. Unlike quick
migration move requests, there is no outage for a client application, and the
migration between nodes is completely transparent. To achieve this level of client
transparency, live migrations copy the memory state representing the virtual machine
from one server to another so as to mitigate any loss of service.
Live migration configurations require a robust network configuration between the
nodes within the cluster. This network configuration optimizes the memory copy
between the nodes and enables an efficient virtual machine transition. For such live
migration configurations, you must have at least one dedicated 1 Gb (or greater)
network between cluster nodes to enable the memory copy. We also recommend that
you dedicate specific private networks exclusively to live migration traffic, as shown
in Figure 21. Networks that are disabled for cluster communication can still be used
for live migration traffic. Deselect networks within the live migration settings if you do
not want to use them.
Figure 21.
When you use a live migration, failover clustering replicates the virtual machine
configuration and memory state to the target node of the migration. Multiple cycles of
replicating the memory state occur to reduce the amount of changes that need to be
sent on subsequent cycles.
You can use live migration operations for virtual machines that contain virtual disks,
pass-through disks, virtual Fibre Channel storage, or iSCSI storage as presented
directly to the virtual machine. We recommend using CSVs for virtual disks, but they
are not required for live migrations. You can migrate a virtual machine with dedicated
storage devices that are used for virtual disk access. If you migrate a virtual machine,
the virtual disks transition from offline to online on the target cluster node during the
live migration process.
Network connectivity allows for the timely transfer of state, and the migration
process, as a final phase, momentarily suspends the machine instance, and switches
all disk resources to the target node. After this process, the virtual machine
immediately resumes processing. The transition of the virtual machine is required to
complete within a TCP/IP timeout interval such that no loss of connectivity is
experienced by client applications.
Note: The live migration process is different from the quick migration process because no
suspension of virtual machine state to disk occurs. Failover clustering still provides support
for quick migrations.
If the migration of the virtual machine cannot execute successfully, the migration
process reverts the virtual machine back to the originating node. This also maintains
the availability of the virtual machine to ensure that client access is not impacted.
You can also terminate a live migration by using the Cancel in progress Live
Migration option in the Cluster Manager console.
Shared-nothing live migration
Windows Server 2012 introduces a new type of live migration referred to as a shared-nothing live migration. This form of live migration allows for the movement of non-clustered virtual machines between Hyper-V hosts when there is no shared storage.
The migration can occur between hosts using local storage, SAN storage or SMB 3.0
file shares. If both hosts have access to the SMB file share, then no storage
movement is necessary. When non-shared storage is used, Hyper-V uses these steps
to initiate a storage live migration:
1. Throughout most of the migration, reads and writes are serviced from the source virtual disks, while the contents of the source are copied, over the network, to the new destination VHDs.
2. Following the initial full copy of the source, writes are mirrored to the source and destination VHDs. Outstanding changes to the source are also replicated to the target.
3. When the source and target VHDs are synchronized, the virtual machine live migration begins, following the same process used for shared storage live migrations. Offloaded Data Transfer can be used as a part of the migration. Storage live migration on page 34 provides more details.
4. When the live migration completes, the virtual machine runs from the destination server and the original source VHDs are deleted.
Storage live migration
Starting with Windows Server 2012, you can migrate the virtual hard disk storage of a
virtual machine between LUNs non-disruptively. You can migrate storage on stand-alone
hosts or on Hyper-V clusters where virtual hard disks reside, or will reside, on
CSVs or SMB 3.0 file shares. You can start the storage migration process from Hyper-V
Manager for stand-alone hosts, from Failover Cluster Manager for clustered hosts (as
shown in Figure 22), or from PowerShell, by using the Move-VMStorage cmdlet. If
SCVMM exists in the environment, you can start migrations from the SCVMM console
or from PowerShell.
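A minimal sketch of moving all of a virtual machine's storage with PowerShell; the virtual machine name and destination path are example values:
# Move the virtual machine's VHDs and related files to a new location
# while the virtual machine continues to run
Move-VMStorage -VMName "VM00" -DestinationStoragePath "C:\ClusterStorage\Volume2\VM00"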
If the virtual machine that is being migrated is offline, the machine remains offline
and the virtual hard disks are moved between the source and target. If the virtual
machine that is being migrated is online, a live storage migration occurs, using the
following process:
1. Throughout most of the migration, reads and writes are serviced from the source virtual disks while the contents of the source are copied to the new destination VHDs.
2. Following the initial full copy of the source, writes are mirrored to the source and destination VHDs. Outstanding changes to the source are also replicated to the target.
3. When the source and target VHDs are synchronized, the virtual machine begins using the target VHDs.
4. When the migration completes, the original source VHDs are deleted.
Figure 22.
You can accelerate the storage migration process with ODX. If the storage array where
the migration occurs supports ODX, the storage migration automatically runs ODX.
Using ODX greatly enhances the speed of the initial copy operation between the
source and target devices. For EMC Symmetrix VMAX, EMC VNX and EMC VNXe
systems where ODX is supported, both the source and target must reside in the same
storage array. EMC environments also require a Windows hotfix for Server 2012
support with ODX. The hotfix ensures that, if ODX copy operations are rejected, the
host-based copy engages and resumes from where the ODX copy left off. The hotfix
also corrects an issue with clustered storage live migration that can lead to data loss.
You can download the Update that improves cloud service provider resiliency in
Windows Server 2012 hotfix from Microsoft Support at:
http://support.microsoft.com/kb/2870270. Windows Server 2012 Offloaded Data
Transfer on page 47 provides more details.
Windows failover clustering with Cluster Shared Volumes
You can use Windows Server 2008 R2 and Windows Server 2012 to configure shared
SAN storage volumes so that all nodes within a given cluster configuration can access
the volume concurrently. In this configuration, the volume is mounted as read/write
to all nodes at the same time. The new model for allowing direct read/write access
from multiple cluster nodes is called Cluster Shared Volumes (CSVs). CSV supports
running multiple virtual machines on different nodes where the VHD storage devices
are located on a commonly accessible storage device.
CSVs help make the transition process for VHD ownership during live migrations more
efficient, as no transition of ownership and subsequent mounting is required, as is
typical for cluster storage devices. The SAN storage configured as CSVs is mounted
and accessible by all cluster nodes.
The CSV feature is enabled by default in Windows Server 2012. In Windows Server
2008 R2, you must enable the feature manually by selecting Enable Cluster Shared
Volumes in Failover Cluster Manager, as shown in Figure 23.
Figure 23.
After you enable CSV, in Windows 2008 R2, a new Cluster Shared Volumes option
appears in Failover Cluster Manager. In Windows Server 2012, you can access CSVs
at Storage > Disks in Failover Cluster Manager. As shown in Figure 24, you can use this
option to convert any disk within the available storage group to a CSV.
Figure 24.
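The same conversion can be done from PowerShell. This is a minimal sketch; the disk resource name is an example value that you would confirm with Get-ClusterResource:
# Convert a disk in the cluster's Available Storage group to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 2"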
For Windows Server 2008 R2 and Windows Server 2012, you must format a disk with
NTFS to be added as a CSV. Resilient File System (ReFS) is not supported for CSV use
on Windows Server 2012. For Windows Server 2012, the CSV file system is called
CSVFS. Although the name has changed, the underlying file system is still NTFS. If a
CSV is removed from a cluster, the file system designation returns to NTFS, with all
data on the file system remaining intact.
After you convert a SAN device to be used as a CSV volume, you can access the
storage device on all cluster nodes. The CSV volume is mounted to a common, but
local, location on all nodes, which ensures that the namespace to VHD objects is
identical on all cluster nodes. The namespace attributed for each CSV volume is
based on the system drive location, which must be the same for all cluster nodes. The
namespace includes a ClusterStorage location, in which the volumes are
physically mounted on each node. The mount location is a sequentially generated
name of the form Volume1 where the appended numeric value is incremented for
each subsequent volume.
Note: You can rename the mount points assigned to CSVs. To rename the specified volume
based mount point, select Rename from Windows Explorer. The new name appears on all
nodes of the cluster.
All CSV devices list the current owner for the resource. The owner must coordinate
access to the various VHD devices that represent virtual machine storage within the
cluster. Virtual machines continue to run on only a single physical server at any time.
When a virtual machine that is deployed on CSV storage configured within the cluster
is to be brought online, the node that is starting the virtual machine communicates
with the CSV owner to request permission to generate I/O to the VHD device when the
virtual machine is brought into operation. The node that starts the virtual machine
locks the VHD device to ensure that no other process can write to the VHD from any
other node. If the VHD has already been locked by another node, then the request is
denied. When the CSV owner grants permission, the node generates direct I/O to the
VHD on the storage device as needed by the virtual machine.
CSVs also protect against external failure scenarios, such as physical connectivity
loss from a given node. If connectivity from a node is lost to the underlying storage,
I/O operations are redirected over the CSV network to the current owning node. This
functionality prevents the failure of a virtual machine as a result of the loss of storage
connectivity. While this functionality allows the virtual machine to continue operating, this indirection should not be relied on for ongoing access to the virtual machine. Performance is affected when running in redirected mode, so either resolve the loss of connectivity or execute a live migration to move the virtual machine to a node with direct storage access.
Sizing of CSVs
CSVs are NTFS volumes and have the same limits as NTFS, with a theoretical maximum volume size of 256 TB. You can determine appropriate sizing for CSV volumes based on the cumulative workload expected from the VHD files located on the CSV.
The CSV is physically represented by a single LUN presented from a storage array. The
LUN is supported by some number of physical disks within the array. Use the typical
sizing for both storage allocation and I/O capacity to ensure that both the storage
allocation for a given CSV and the I/O requirements are adequately met.
Undersizing the LUN for I/O load results in poor performance for all VHDs located on
the CSV, and for all applications installed in the virtual machines that use the VHDs.
We recommend adding multiple CSVs to distribute workloads across available
resources.
Site disaster protection with Hyper-V Replica
Windows Server 2012 includes Hyper-V Replica, a native replication technology for
virtual machines. You can use Hyper-V Replica to enable asynchronous host-based
replication of VHDs between standalone hosts or clusters. You can also use Hyper-V
Replica to enable virtual machine replication between sites without shared storage.
Hyper-V Replica is useful for branch offices and for replicating virtual machines to
hosted cloud providers.
When using Hyper-V Replica, you can enable or disable replication for each VHD. For data that you do not want to replicate, such as an operating system page file, you can create a separate virtual disk for that workload and exclude it from replication.
Note: You can replicate only VHDs. If you configure a virtual machine with pass-through or virtual Fibre Channel storage, Hyper-V Replica is blocked.
When you replicate specific VHDs within a virtual machine, an initial full copy of the
data from the primary virtual machine is sent to the replica virtual machine location.
This replication can occur over the network, or you can manually copy the VHD files to the replica site. If you manually copy the files, a file comparison is performed to ensure that only incremental changes are replicated for the initial synchronization.
After the initial synchronization, changes in the source virtual machine are transmitted over the network at a configured replication frequency. Hyper-V Replica requires replication cycles of at least five minutes with Windows Server 2012. With Windows Server 2012 R2, you can configure the replication frequency at 30-second, 5-minute, or 15-minute intervals.
Hyper-V Replica also supports additional recovery points, which allow you to recover to an earlier point in time. Windows Server 2012 supports 16 hourly recovery points, while Windows Server 2012 R2 supports 24 hourly recovery points.
Windows Server 2012 R2 includes extended replication, a feature that enables
support for a second replica, where the replica server forwards changes that occur on
the primary virtual machines to a third server. This functionality enables three-site solutions that provide additional disaster recovery protection in the event of a single-site or regional disruption.
You can move a virtual machine to a replica server in a planned failover. With a planned failover, any changes that have not yet been replicated are first copied to the replica site, so that no data is lost. After data is moved to the replica site, you can configure reverse replication to send changes back to the original site. For unplanned failovers, you can bring the replica virtual machine online, although some data may be lost. When you use extended replication, replication continues to the extended replica server if a planned or unplanned failover occurs.
Note: Planning and configuration for Hyper-V Replica is outside the scope of this white
paper. Deploy Hyper-V Replica, on Microsoft TechNet, provides more details.
Site disaster protection with Cluster Enabler
EMC Cluster Enabler has been a supported product for many years under the Windows Geographically Dispersed Clustering program. You can use Cluster Enabler to seamlessly integrate multi-site storage replication into the framework provided by Windows failover clustering. The Microsoft Windows Server Catalog lists compatible solutions.
Cluster Enabler supports failover cluster configurations with multiple forms of storage-based replication, including both synchronous and asynchronous replication across sites. Cluster Enabler provides a plug-in architecture to support various EMC replication products, including EMC RecoverPoint, EMC Symmetrix Remote Data Facility (SRDF), and EMC MirrorView.
Because of this tight integration with the Windows failover cluster framework, valid
supported failover cluster configurations and deployed applications are fully
supported under the Cluster Enabler solution set. This includes Windows Hyper-V
virtual machines.
Note: Steps for installing Cluster Enabler within a Microsoft cluster environment are beyond
the scope of this white paper. Cluster Enabler product guides on EMC Online Support
provide more details.
Figure 25.
The clustered disks within the resource group have their dependencies modified to
include the Cluster Enabler resource defined for the group. This ensures that
transitions to other nodes are coordinated appropriately. For lateral movements, or
movement to nodes that are within the same site as the owning node, no transition of
the replication state is required. If you request that the resource be moved to a peer
node, or a node that is located in the remote site, the Cluster Enabler resource
coordinates with the underlying storage replication service to transition the remote
disk to a read/write state. The replication state of the disk devices is fully managed by the Cluster Enabler resource, and this functionality is transparent to the administrator.
The Cluster Enabler environment includes the Cluster Enabler Manager console for configuring and managing Cluster Enabler specific functionality, as shown in Figure 26.
You can use the console to identify resources used within the various groups
configured in the geographically dispersed cluster environment. The management
framework uses a logical construct of sites, and logically displays resources based on
this layout.
All movement, configuration of resources, and online/offline status changes of
resources within the cluster continue to be executed through the standard Failover
Cluster Management Console. The Cluster Enabler Manager Console is used to
configure newly created resource group integration, or to introduce new shared disk
resources into the cluster configuration.
Figure 26. Cluster Enabler Manager console
For Cluster Enabler with Windows Server 2008 R2, you can run virtual machines only
on the primary site where the CSV devices are online and read/write enabled. Cluster
Enabler, with Windows Server 2008 R2, blocks virtual machines from running on the
secondary site because the devices are read/write disabled.
This is different from failover cluster behavior without Cluster Enabler configured, where virtual machines are allowed on the secondary site, but in redirected access mode. This restriction exists because, in a geographically dispersed cluster, site-to-site network transfers incur higher network latencies and more expensive bandwidth requirements. Cluster Enabler therefore restricts virtual machines to the site on which they have direct access to the disk, and moves them only when the CSV disk fails over to the secondary site.
For Windows Server 2012, virtual machines can run on any node regardless of where the CSV disk is online. This means that a virtual machine can fail over to a node where the CSV disk is marked as write-disabled and run in redirected access mode.
To avoid this state, a virtual machine can be restricted to a site by editing the
possible owners list and limiting the virtual machine resources to run from specific
nodes in the cluster.
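As a hedged sketch with the FailoverClusters PowerShell module, the possible owners for a clustered virtual machine group can be restricted as follows; the group and node names are hypothetical:
# Restrict the clustered virtual machine group to the nodes in the primary site
Set-ClusterOwnerNode -Group "SQLVM01" -Owners "SiteA-Node1","SiteA-Node2"
# Verify the current possible owners for the group
Get-ClusterOwnerNode -Group "SQLVM01"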
EMC product guides on EMC Online Support provide configuration and management
details for supported Cluster Enabler replication technologies.
EMC VPLEX
Figure 27. VPLEX configurations
VPLEX Local
VPLEX Local provides seamless, non-disruptive data mobility and allows you to
manage multiple heterogeneous arrays from a single interface within a data center.
VPLEX Local also provides increased availability, simplified management, and
improved utilization across multiple arrays.
VPLEX Metro with AccessAnywhere
You can use VPLEX Metro with AccessAnywhere technology to enable active/active,
block level access to data between two sites within synchronous distances. The distance is limited by the latency that synchronous replication and the host applications can tolerate. We recommend that, depending on the application, replication latency for VPLEX Metro be less than or equal to 5 ms round-trip time (RTT).
You can use the combination of virtual storage with VPLEX Metro and virtual servers
to provide transparent movement of virtual machines and storage across a distance.
This technology provides improved utilization across heterogeneous arrays and
multiple sites.
Figure 28. VPLEX Metro configuration with distributed devices (RAID-1 mirrors across sites) between VPLEX Cluster 1 and VPLEX Cluster 2, using separate Ethernet WAN links for host connectivity and VPLEX connectivity
A VPLEX Metro configuration with Hyper-V enables transparent workload mobility across storage arrays and sites, disaster avoidance with proactive virtual machine mobility between data centers, and core disaster recovery for unplanned events.
With VPLEX distributed virtual volumes, you do not need to use manual failover
processes. The global cache coherency layer in VPLEX presents a consistent view of
data at any point in time. You can manage virtual machine mobility across sites with
quick or live migrations like traditional shared storage solutions.
The configuration and use of VPLEX with Windows failover clustering and Hyper-V is
outside the scope of this document. Hyper-V Live Migration with VPLEX Geo on EMC
Online Support provides more information about VPLEX and Hyper-V.
Manual or scripted disaster recovery with storage replication
You can also use the Import-VM PowerShell cmdlet with storage replication for disaster recovery of Hyper-V on Windows Server 2012. With this enhanced cmdlet, you can import virtual machines from their original configuration and VHD files. You can replicate the configuration files and data between sites, and then import directly from the replicated data. You do not need to export the virtual machines, as was required with Windows Server 2008 R2.
If all virtual switches are available with the same names and all mount point locations
are identical to the original configuration, you can import all virtual machine
configuration files in a specified directory hierarchy on a target host with the
following command:
Get-ChildItem .\*.xml -Recurse | Import-VM
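For a single virtual machine, a minimal sketch of an in-place import from a replicated configuration file follows; the path is hypothetical, and you can add the -Copy and -GenerateNewId parameters to import a copy with a new virtual machine ID instead of registering in place:
# Locate the replicated configuration file for one virtual machine and register it in place
$config = Get-ChildItem "R:\ReplicatedVMs\SQLVM01\Virtual Machines\*.xml" | Select-Object -First 1
Import-VM -Path $config.FullName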
Microsoft System Center Virtual Machine Manager
You can use Microsoft System Center Virtual Machine Manager (SCVMM) to efficiently
manage a Hyper-V environment that can incorporate hundreds of physical servers.
SCVMM integrates with the various availability products such as failover clustering for
a system that provides centralized management, reporting, and alerts. SCVMM also
provides management services for VMware servers and their virtual machine
resources.
You can use the centralized management console for a consolidated view of all managed servers and resources. From the management console, you can discover, deploy, or migrate existing virtual machines between managed physical servers. You can use this functionality to dynamically manage physical and virtual resources within the landscape, and to adapt to changing business demands.
SCVMM 2012 provides standards-based discovery and automation of iSCSI and Fibre
Channel block storage resources in a virtualized data center environment. These new
capabilities build on the Storage Management Initiative Specification (SMI-S) that
was developed by the Storage Networking Industry Association (SNIA). The SMI-S
standardized management interface enables an application such as SCVMM to
discover, assign, configure, and automate storage for heterogeneous arrays in a
unified way. An SMI-S Provider uses SMI-S to enable storage management. To take
advantage of this new storage capability, EMC updated the SMI-S Provider to support
the SCVMM 2012 RTM and SP1 releases.
EMC SMI-S Provider supports unified management of multiple types of storage arrays.
With the one-to-many model enabled by the SMI-S standard, a virtual machine
manager can interoperate, by using the EMC SMI-S Provider, with multiple disparate
storage systems from the same virtual machine manager console that is used to
manage all other private cloud components. Table 1 outlines some of the benefits of
centralized storage management with SCVMM.
Table 1. Benefits of centralized storage management with SCVMM: reduce costs, simplify administration, and deploy faster
Storage Automation with System Center 2012 and EMC Storage Systems using SMI-S
on EMC Online Support provides details about the integration between SCVMM and
the EMC SMI-S provider.
The EMC SMI-S Provider is based on EMC Solutions Enabler and supports block based
storage (FC/iSCSI) for both VMAX and VNX.
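As a hedged illustration of the registration step, an SMI-S provider endpoint can be added to SCVMM with the VMM PowerShell cmdlets. The provider URL, port, and run as account name below are hypothetical, and the parameter names should be verified against your VMM release:
# Register a hypothetical EMC SMI-S provider endpoint with SCVMM
$runAs = Get-SCRunAsAccount -Name "smisadmin"
Add-SCStorageProvider -Name "EMC SMI-S Provider" `
    -NetworkDeviceName "https://smis-provider.contoso.com" -TCPPort 5989 `
    -RunAsAccount $runAs
# Rescan the provider to discover arrays and storage pools
Get-SCStorageProvider -Name "EMC SMI-S Provider" | Read-SCStorageProvider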
VNX operating environment file-based SMI-S Provider
The VNX operating environment 8.1 or later supports SCVMM with an SMI-S provider that runs natively on the VNX control station. This SMI-S Provider is enabled by default and supports file-based (CIFS) storage. The VNX provider supports the following basic functionality for NAS storage within SCVMM:
- Creating file systems and shares on VNX CIFS or NFS based servers
- Updating SCVMM when new file systems or shares are created from management applications other than SCVMM (for example, Unisphere)
Note: SCVMM is updated by rescanning from the Provider area of the SCVMM console.
Configuring the System Center Virtual Machine Manager Console for the NAS SMI-S
Provider, on EMC Online Support, provides updated instructions to configure SCVMM
for use with the VNX File SMI-S Provider.
Windows Server 2012 Offloaded Data Transfer
Offloaded Data Transfer (ODX) is a new feature of the Windows Server 2012 operating
system and the Windows 8 client. ODX enables Windows Server to offload data
transfers between LUNs, or offload the writing of repeating patterns, to the storage
area network (SAN). By offloading the data transfer or repeating the write pattern to
the SAN, client-server network usage, CPU utilization, and storage I/O operations on the host are reduced to nearly zero because the data movement is performed by the intelligent storage array. These operations can take a fraction of the time compared to
conventional methods. ODX starts a copy request with an offload read operation and
retrieves a token representing the data from the storage device. ODX then uses an
offload write command, which includes the token, to request data movement from
the source disk to the destination disk. The storage system then performs the actual
data movement. Figure 29 illustrates the ODX process.
Figure 29. ODX process
You can use ODX-based copy operations within a physical LUN or across multiple LUNs from the same storage array. You can also use ODX copy operations across multiple Windows Server 2012 hosts that have a source LUN on one server and a target LUN on a second server within the same array. In this latter case, SMB 3.0, which is implemented by the Windows Server instances, is required. Hyper-V virtual machines running the Windows Server 2012 operating system also support ODX. ODX supports virtual machine storage on VHDs (VHDX only), pass-through disks, virtual Fibre Channel LUNs, or iSCSI LUNs presented directly to the virtual machine.
ODX is enabled by default within Windows Server 2012 and you can use it for any file
copy operation where the file is greater than 256 KB in size. Windows automatically
detects whether ODX is supported by a given storage device. If the storage device
does not support ODX, Windows uses a standard host-based copy. If ODX is supported, but an offload request is rejected by the storage array, Windows reverts to a host-based copy to complete the operation. In some cases, when ODX is rejected,
Windows waits three minutes before again attempting to use ODX against that
device. The copy operation that failed an ODX call can continue to use legacy copy
operations until completion.
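A minimal sketch for checking or disabling ODX on a Windows Server 2012 host follows. It assumes the FilterSupportedFeaturesMode registry value documented by Microsoft (0 enables ODX, 1 disables it); verify against current Microsoft guidance before changing it:
# Check whether ODX is currently enabled on this server (0 = enabled, 1 = disabled)
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode"
# Disable ODX on this server for troubleshooting; set the value back to 0 to re-enable it
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode" -Value 1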
ODX is especially useful for copying large files between file shares, deploying virtual
machines from templates, and performing storage live migrations of virtual machines
between LUNs. In addition to copy operations, ODX can be used for offloading the
writing of repeating patterns to a storage device. For example, Hyper-V with Windows
Server 2012 uses ODX to offload writing a range of zeros when creating fixed VHDs.
Windows Offloaded Data Transfers overview, on Microsoft TechNet, provides more
details.
ODX support requirements
The client that requests the copy operation must be ODX-aware. You must use
Windows Server 2012 or Windows 8 to initiate the copy operation for ODX to engage.
ODX also requires the storage arrays within the SAN to support the offload requests from the operating system, based on the T10 specifications (http://www.t10.org/). For ODX to be used in Hyper-V virtual machines, the virtual machines must use virtual SCSI adapters with VHDX files (the VHD format is not supported), pass-through disks, or virtual Fibre Channel adapters.
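If existing virtual disks use the VHD format, they can be converted to VHDX with the Hyper-V PowerShell module. The following is a brief sketch with hypothetical paths; the disk must be offline (the virtual machine shut down or the disk detached) during conversion:
# Convert a legacy VHD to VHDX so that ODX can be used from within the guest
Convert-VHD -Path "D:\VMs\SQLVM01\data01.vhd" -DestinationPath "D:\VMs\SQLVM01\data01.vhdx"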
If ODX is enabled on an EMC storage array, for example, following a code upgrade,
you must either reboot the Windows Server 2012 host or mask and unmask the devices so
that Windows can detect the change in ODX support. Windows Server 2012 discovers
device feature support characteristics only at the time of initial device discovery and
enumeration. Any change in device feature support characteristics for previously
discovered devices is not recognized without a host reboot or device re-discovery.
Table 2 lists ODX support for EMC storage arrays:
Table 2. ODX support for EMC storage arrays

Storage array    Supported version                                      Notes
VNX Block
VNX File
VNXe             VNXe OE version 2.4.0.20932 (MR4), released 1/7/2013
VMAX             Enginuity version 5876.229.145 (Q2 2013 SR)            Enabled by default
Starting with SCVMM 2012 R2, you can use ODX when you deploy virtual machines
from templates. When using the network transfer type, SCVMM 2012 R2
automatically attempts to use ODX to perform the virtual machine deployments if ODX
is supported in the environment.
For ODX to be used, the library server, Hyper-V hosts, and clusters need an
appropriate run as account for their host management credentials. You can assign
the credential by specifying a run as account that has permissions to the servers being added, when adding the server or cluster to SCVMM. The run as account is then assigned to the host management credentials, as shown in Figure 30.
Figure 30. Assigning the run as account to the host management credentials
For clustered hosts that were previously added to SCVMM, the option to change the host management credential may be disabled in the SCVMM console. In that case, change the credential by running the following PowerShell commands:
$Cluster = Get-SCVMHostCluster -Name HyperVR2Clus.contoso.com
$RunAs = Get-SCRunAsAccount -Name dcadmin
Set-SCVMHostCluster -VMHostCluster $Cluster -VMHostManagementCredential $RunAs
When ODX is automatically invoked, the create virtual machine job performing the
deployment displays a step called Deploy file (using Fast File Copy) as shown in
Figure 31.
Figure 31. The Deploy file (using Fast File Copy) step in the create virtual machine job
If ODX fails or is not used when you create a virtual machine, the deployment continues and completes by reverting to a traditional host-based copy. The job displays a status of Completed w/ Info, which notes the failure to use ODX. Figure 32 shows an example.
Figure 32. Job status of Completed w/ Info when ODX is not used
Windows Server 2012 thin provisioning space reclamation
Storage arrays such as VMAX and VNX support a pooling and on-demand storage allocation functionality called virtual (or thin) provisioning. With thin provisioning, storage for a specific device is allocated from a thin pool when a server writes data for the first time. Resources are used more efficiently because storage is allocated only on demand. Over time, data written by the server can be deleted, but the space allocated within the thin pool persists, leading to inefficient storage utilization. Windows Server 2012 includes a new feature that allows the operating system to request that space for previously written, but now deleted, data be reclaimed. This reclaim functionality frees the allocated, but no longer required, space within a thin pool.
Windows Server 2012 supports detecting thinly provisioned storage and issuing T10
standard UNMAP or TRIM based reclaim commands against that storage. Windows
Server 2012 uses the UNMAP specification for reclaim operations against EMC
storage. Supported EMC storage arrays provide detection and space reclamation with UNMAP. Windows issues reclaim operations in the following cases:
- When a volume residing on a thin provisioned device is formatted with the quick option, the entire size of the volume is reclaimed in real time.
- When the Optimize-Volume PowerShell cmdlet is used with the -ReTrim option (see the example after this list).
- When a file or group of files is deleted from a file system, Windows automatically issues reclaim commands for the area of the file system that was freed by the deletion. This is also true for CSV volumes, even if they are not in redirected mode. This automated reclamation reduces the need to run optimize operations; however, to achieve full efficiency, an optimize drive operation may still need to be run.
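As a brief illustration of the second case, the following sketch uses the in-box Storage module on Windows Server 2012; the drive letter is hypothetical:
# Send UNMAP reclaim commands for the free space on volume E:
Optimize-Volume -DriveLetter E -ReTrim -Verbose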
Windows Server 2012 supports reclaim operations against both NTFS and ReFS
formatted volumes. The new VHDX virtual disk format, native to Windows Server
2012, also supports reclaim operations from within a Hyper-V virtual machine to a
virtual disk. You can perform all reclaim operations supported on a physical LUN
within and against a VHDX based virtual disk or against a pass-through disk
presented to a Hyper-V based virtual machine.
Figure 33.
You can globally disable the default behavior of issuing reclaim operations on a
Windows 2012 server. Modify the disabledeletenotify parameter to prevent
reclaim operations from being issued against all volumes on the server. This setting
can be changed with the Fsutil command line tool included with Windows Server
2012.
To disable reclaim operations, run the following from an elevated command prompt:
Fsutil behavior set DisableDeleteNotify 1
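To check the current setting, or to re-enable reclaim operations later, the same Fsutil behavior verbs can be used, as sketched below (a value of 0 indicates that reclaim is enabled):
Fsutil behavior query DisableDeleteNotify
Fsutil behavior set DisableDeleteNotify 0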
You can download the Update that improves cloud service provider resiliency in Windows Server 2012 hotfix package from http://support.microsoft.com/kb/2870270. The hotfix contains a fix that helps prevent file system hangs while reclaim operations are being performed.
EMC Replication Manager
EMC Replication Manager offers benefits that include the following:
- Reduces or eliminates the need for scripting solutions for replication tasks
- Integration with physical, VMware, Hyper-V, or IBM AIX VIO virtual environments
EMC Storage Integrator
EMC Storage Integrator (ESI) for Windows Suite is a set of tools which integrate
Microsoft Windows and Microsoft applications with EMC storage arrays. The suite
includes: ESI for Windows, ESI PowerShell Toolkit, ESI Service, ESI SCOM
Management Packs, ESI SCO Integration Pack, and the ESI Service PowerShell Toolkit.
You can use ESI for Windows to view, provision, and manage block and file storage
for Microsoft Windows environments. ESI supports the EMC Symmetrix VMAX, EMC
VNX, EMC VNXe and EMC CLARiiON CX4 series of storage arrays.
In addition to physical environments, ESI also supports storage provisioning and discovery for Windows virtual machines that run on Microsoft Hyper-V and other types of hypervisors. For Hyper-V, ESI supports the creation of VHDs and pass-through disks, and also supports the creation of host disks and cluster shared volumes.
Figure 34.
The ESI PowerShell Toolkit is a powerful option for discovering and managing Windows environments, including Hyper-V. ESI includes over 150 PowerShell cmdlets for discovering and managing virtual machines, servers, and storage arrays. For example, the following script uses ESI to enumerate all registered Hyper-V hosts, discover all host volumes, and map them to the underlying storage LUN and pool (the script output is shown in Figure 35).
$myobj = @()
$hypervsystem = Get-EmcHyperVSystem
foreach ($system in $hypervsystem) {
    $volumes = $system | Get-EmcHostVolume
    foreach ($vol in $volumes) {
        $lun  = $vol | Get-EmcLun
        $pool = $lun | Get-EmcStoragePool
        $myobjtemp = New-Object System.Object
        $myobjtemp | Add-Member -Name ComputerName -Type NoteProperty -Value $system.Name
        $myobjtemp | Add-Member -Name VolumePath -Type NoteProperty -Value $vol.MountPath
        $myobjtemp | Add-Member -Name VNXLunName -Type NoteProperty -Value $lun.Name
        $myobjtemp | Add-Member -Name VNXLunCapacity -Type NoteProperty -Value $lun.Capacity
        $myobjtemp | Add-Member -Name PoolName -Type NoteProperty -Value $pool.Name
        $myobjtemp | Add-Member -Name PoolTotal -Type NoteProperty -Value $pool.TotalCapacity
        $myobjtemp | Add-Member -Name PoolAvailable -Type NoteProperty -Value $pool.AvailableCapacity
        $myobj += $myobjtemp
    }
}
$myobj | Out-GridView
Figure 35. Script output showing host volumes mapped to the underlying LUNs and storage pools
ESI can be downloaded from EMC Online Support. ESI release notes and online help
provide more information about ESI.
EMC Solutions Enabler
EMC Solutions Enabler is a prerequisite for many layered product offerings from EMC.
Installation of Solutions Enabler at the parent level is fully supported and provides
the necessary support for configurations such as Cluster Enabler, when run at the
Hyper-V server level. Deployments of Solutions Enabler within virtual machines that
are using iSCSI storage devices are also fully supported.
For gatekeeper access, Solutions Enabler also supports a virtual machine where gatekeepers are presented over virtual Fibre Channel. When gatekeepers are presented to a virtual machine over virtual Fibre Channel, no additional steps are required. We recommend using virtual Fibre Channel instead of pass-through devices when presenting gatekeeper devices to Hyper-V virtual machines.
Note: Solutions Enabler cannot function against storage devices that are VHD devices, even
when the VHD devices are located on EMC Symmetrix storage. The underlying LUN
configuration for a storage device that is used for VHD placement cannot be detected from
the child partition.
In certain cases, you must implement Solutions Enabler within a virtual machine that
is using pass-through storage devices presented through the Hyper-V server and
intended as gatekeepers. EMC supports installing Solutions Enabler within a child virtual machine using pass-through storage devices only when the parent is running Windows Server 2008 R2 or Windows Server 2012, and when the appropriate settings
for the virtual machine have been made.
EMC Solutions Enabler implements extended SCSI commands, which are, by default, filtered by the parent where virtual disks or pass-through disks are used. A bypass of
this filtering is provided with Windows Server 2008 R2 and Windows Server 2012
Hyper-V, and this pass-through must be enabled to allow for appropriate discovery
options from the virtual machine. Planning for Disks and Storage, on Microsoft
TechNet, provides information about full pass-through of SCSI commands. We
recommend allowing SCSI command pass-through only for those virtual machines
where it is necessary.
To disable the filtering of SCSI commands, you can run the following PowerShell
script on a Hyper-V parent partition. In this example, the name of the affected virtual
machine is passed to the PowerShell script when it is executed.
$Target = $args[0]
$VSManagementService = gwmi Msvm_VirtualSystemManagementService -Namespace "root\virtualization"
foreach ($Child in Get-WmiObject -Namespace root\virtualization Msvm_ComputerSystem -Filter "ElementName='$Target'")
{
    $VMData = Get-WmiObject -Namespace root\virtualization -Query "Associators of {$Child} Where ResultClass=Msvm_VirtualSystemGlobalSettingData AssocClass=Msvm_ElementSettingData"
    $VMData.AllowFullSCSICommandSet = $true
    $VSManagementService.ModifyVirtualSystem($Child, $VMData.PSBase.GetText(1)) | Out-Null
}
Figure 36 provides an example of how to run this script. In the example, the script is
first displayed, and then the virtual machine named ManagementServer is provided
as the target for disabling SCSI filtering.
The script is provided as-is, and includes no validation or error checking functionality.
Figure 36. Running the script to disable SCSI filtering for the virtual machine named ManagementServer
You can also check the current value of the SCSI filtering. The following PowerShell
script reports on the current SCSI filtering status. You must provide the name of the
virtual machine target to be reported on.
$Target = $args[0]
foreach ($Child in Get-WmiObject -Namespace root\virtualization Msvm_ComputerSystem -Filter "ElementName='$Target'")
{
    $VMData = Get-WmiObject -Namespace root\virtualization -Query "Associators of {$Child} Where ResultClass=Msvm_VirtualSystemGlobalSettingData AssocClass=Msvm_ElementSettingData"
    Write-Host "Virtual Machine:" $VMData.ElementName
    Write-Host "Currently ByPassing SCSI Filtering:" $VMData.AllowFullSCSICommandSet
}
Once set, the value persists because it is recorded in the virtual machine configuration. For the change to take effect, you must restart the virtual machine after the setting has been changed.
Conclusion
EMC storage arrays provide an extremely scalable storage solution that gives customers industry-leading capabilities to deploy, maintain, and protect
Windows Hyper-V environments. EMC storage provides scale-out solutions for
applications such as Microsoft Windows Hyper-V, allowing flexible data protection
options to meet different performance, availability, functionality, and economic
requirements. Support for a wide range of service levels with a single storage
infrastructure provides a key building block for implementing Information Lifecycle
Management (ILM) by deploying a tiered storage strategy.
EMC technologies provide an easier and more reliable way to provision storage in
Microsoft Windows Hyper-V environments, while enabling transparent, non-disruptive
data mobility between storage tiers. Industry-leading multi-site protection through
the use of VPLEX or Cluster Enabler allows customers to implement a complete end-to-end solution for virtual machine management and protection.