vSphere Storage
Update 2
VMware vSphere 7.0
VMware ESXi 7.0
vCenter Server 7.0
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2009-2021 VMware, Inc. All rights reserved. Copyright and trademark information.
About vSphere Storage
vSphere Storage describes virtualized and software-defined storage technologies that VMware® ESXi™ and VMware vCenter Server offer, and explains how to configure and use these technologies.
At VMware, we value inclusion. To foster this principle within our customer, partner, and internal
community, we create content using inclusive language.
Intended Audience
This information is for experienced system administrators who are familiar with virtual machine and storage virtualization technologies, data center operations, and SAN storage concepts.
Chapter 1 Introduction to Storage
vSphere supports various storage options and functionalities in traditional and software-defined
storage environments. A high-level overview of vSphere storage elements and aspects helps you
plan a proper storage strategy for your virtual data center.
Traditional Storage Virtualization Models
In a vSphere environment, the traditional storage model is built around the following storage technologies and ESXi and vCenter Server virtualization functionalities.
Local and Networked Storage
In traditional storage environments, the ESXi storage management process starts with
storage space that your storage administrator preallocates on different storage systems.
ESXi supports local storage and networked storage.
See Types of Physical Storage.
Storage Area Networks
A storage area network (SAN) is a specialized high-speed network that connects computer
systems, or ESXi hosts, to high-performance storage systems. ESXi can use Fibre Channel or
iSCSI protocols to connect to storage systems.
Fibre Channel
Fibre Channel (FC) is a storage protocol that the SAN uses to transfer data traffic from ESXi
host servers to shared storage. The protocol packages SCSI commands into FC frames. To
connect to the FC SAN, your host uses Fibre Channel host bus adapters (HBAs).
See Chapter 4 Using ESXi with Fibre Channel SAN.
Internet SCSI
Internet iSCSI (iSCSI) is a SAN transport that can use Ethernet connections between computer
systems, or ESXi hosts, and high-performance storage systems. To connect to the storage
systems, your hosts use hardware iSCSI adapters or software iSCSI initiators with standard
network adapters.
Storage Device or LUN
In the ESXi context, the terms device and LUN are used interchangeably. Typically, both
terms mean a storage volume that is presented to the host from a block storage system and
is available for formatting.
See Target and Device Representations and Chapter 14 Managing Storage Devices.
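For illustration, you can also list the devices and LUNs that a host detects from the ESXi Shell with the esxcli command set. This is a minimal sketch; the device identifier shown is a placeholder, not a value from this guide.

esxcli storage core device list
# Limit the output to one device (replace the identifier with one reported on your host)
esxcli storage core device list -d naa.6000000000000001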
Virtual Disks
A virtual machine on an ESXi host uses a virtual disk to store its operating system, application
files, and other data associated with its activities. Virtual disks are large physical files, or sets
of files, that can be copied, moved, archived, and backed up as any other files. You can
configure virtual machines with multiple virtual disks.
To access virtual disks, a virtual machine uses virtual SCSI controllers. These virtual controllers
include BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware Paravirtual. These
controllers are the only types of SCSI controllers that a virtual machine can see and access.
Each virtual disk resides on a datastore that is deployed on physical storage. From the
standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive
connected to a SCSI controller. Whether the physical storage is accessed through storage or
network adapters on the host is typically transparent to the VM guest operating system and
applications.
VMware vSphere® VMFS
The datastores that you deploy on block storage devices use the native vSphere Virtual
Machine File System (VMFS) format. It is a special high-performance file system format that is
optimized for storing virtual machines.
NFS
An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to
access an NFS volume that is located on a NAS server. The ESXi host can mount the volume
and use it as an NFS datastore.
Raw Device Mapping
In addition to virtual disks, vSphere offers a mechanism called raw device mapping (RDM).
RDM is useful when a guest operating system inside a virtual machine requires direct access
to a storage device. For information about RDMs, see Chapter 19 Raw Device Mapping.
Software-Defined Storage Models
With the software-defined storage model, a virtual machine becomes a unit of storage
provisioning and can be managed through a flexible policy-based mechanism. The model
involves the following vSphere technologies.
VMware vSphere® Virtual Volumes™ (vVols)
The vVols functionality changes the storage management paradigm from managing space
inside datastores to managing abstract storage objects handled by storage arrays. With
vVols, an individual virtual machine, not the datastore, becomes a unit of storage
management. And storage hardware gains complete control over virtual disk content, layout,
and management.
VMware vSAN
vSAN is a distributed layer of software that runs natively as a part of the hypervisor. vSAN
aggregates local or direct-attached capacity devices of an ESXi host cluster and creates a
single storage pool shared across all hosts in the vSAN cluster.
Storage Policy Based Management (SPBM) is a framework that provides a single control
panel across various data services and storage solutions, including vSAN and vVols. Using
storage policies, the framework aligns application demands of your virtual machines with
capabilities provided by storage entities.
See Chapter 20 Storage Policy Based Management.
I/O Filters
I/O filters are software components that can be installed on ESXi hosts and can offer
additional data services to virtual machines. Depending on implementation, the services might
include replication, encryption, caching, and so on.
vSphere Storage APIs
This Storage publication describes several Storage APIs that contribute to your storage
environment. For information about other APIs from this family, including vSphere APIs - Data
Protection, see the VMware website.
n vSphere APIs for Storage Awareness (VASA). Either supplied by third-party vendors or offered by VMware, these APIs enable communication between vCenter Server and the underlying storage. VASA becomes essential when you work with vVols, vSAN, vSphere APIs for I/O Filtering (VAIO), and VM storage policies. See Chapter 21 Using Storage Providers.
n Hardware Acceleration APIs. Help arrays to integrate with vSphere, so that vSphere can
offload certain storage operations to the array. This integration significantly reduces CPU
overhead on the host. See Chapter 24 Storage Hardware Acceleration.
n Array Thin Provisioning APIs. Help to monitor space use on thin-provisioned storage arrays to
prevent out-of-space conditions, and to perform space reclamation. See ESXi and Array Thin
Provisioning.
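As a hedged example, the hardware acceleration status of a device and manual space reclamation on a VMFS datastore backed by a thin-provisioned LUN can be checked from the ESXi Shell; the device identifier and datastore name below are placeholders.

esxcli storage core device vaai status get -d naa.6000000000000001
# Manually reclaim free space on a VMFS datastore backed by a thin-provisioned LUN
esxcli storage vmfs unmap -l Datastore01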
Chapter 2 Getting Started with a Traditional Storage Model
Setting up ESXi storage in traditional environments includes configuring your storage systems and devices, enabling storage adapters, and creating datastores.
Types of Physical Storage
Local Storage
Local storage can be internal hard disks located inside your ESXi host. It can also include external
storage systems located outside and connected to the host directly through protocols such as
SAS or SATA.
Local storage does not require a storage network to communicate with your host. You need a
cable connected to the storage unit and, when required, a compatible HBA in your host.
The following illustration depicts a virtual machine using local SCSI storage.
Figure: Local Storage. An ESXi host connects to a local SCSI device that holds a VMFS datastore with virtual disk (vmdk) files.
In this example of a local storage topology, the ESXi host uses a single connection to a storage
device. On that device, you can create a VMFS datastore, which you use to store virtual machine
disk files.
Although this storage configuration is possible, it is not a best practice. Using single connections between storage devices and hosts creates single points of failure (SPOF) that can cause interruptions when a connection becomes unreliable or fails. However, because most local storage devices do not support multiple connections, you cannot use multiple paths to access local storage.
ESXi supports various local storage devices, including SCSI, IDE, SATA, USB, SAS, flash, and
NVMe devices.
Note You cannot use IDE/ATA or USB drives to store virtual machines.
Local storage does not support sharing across multiple hosts. Only one host has access to a
datastore on a local storage device. As a result, although you can use local storage to create
VMs, you cannot use VMware features that require shared storage, such as HA and vMotion.
However, if you use a cluster of hosts that have just local storage devices, you can implement
vSAN. vSAN transforms local storage resources into software-defined shared storage. With
vSAN, you can use features that require shared storage. For details, see the Administering
VMware vSAN documentation.
Networked Storage
Networked storage consists of external storage systems that your ESXi host uses to store virtual
machine files remotely. Typically, the host accesses these systems over a high-speed storage
network.
Networked storage devices are shared. Datastores on networked storage devices can be
accessed by multiple hosts concurrently. ESXi supports multiple networked storage technologies.
In addition to traditional networked storage that this topic covers, VMware supports virtualized
shared storage, such as vSAN. vSAN transforms internal storage resources of your ESXi hosts
into shared storage that provides such capabilities as High Availability and vMotion for virtual
machines. For details, see the Administering VMware vSAN documentation.
Note The same LUN cannot be presented to an ESXi host or multiple hosts through different
storage protocols. To access the LUN, hosts must always use a single protocol, for example,
either Fibre Channel only or iSCSI only.
Fibre Channel (FC)
To connect to the FC SAN, your host should be equipped with Fibre Channel host bus adapters
(HBAs). Unless you use Fibre Channel direct connect storage, you need Fibre Channel switches
to route storage traffic. If your host contains FCoE (Fibre Channel over Ethernet) adapters, you
can connect to your shared Fibre Channel devices by using an Ethernet network.
Note Starting from vSphere 7.0, VMware no longer supports software FCoE in production
environments.
Fibre Channel Storage depicts virtual machines using Fibre Channel storage.
Figure: Fibre Channel Storage. An ESXi host with a Fibre Channel HBA connects through the SAN to a Fibre Channel array that holds a VMFS datastore with vmdk files.
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches
and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available
to the host. You can access the LUNs and create datastores for your storage needs. The
datastores use the VMFS format.
For specific information on setting up the Fibre Channel SAN, see Chapter 4 Using ESXi with
Fibre Channel SAN.
Internet SCSI (iSCSI)
ESXi offers the following types of iSCSI connections.
Hardware iSCSI
Your host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing. Hardware adapters can be dependent or independent.
Software iSCSI
Your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With
this type of iSCSI connection, your host needs only a standard network adapter for network
connectivity.
You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
Figure: iSCSI Storage. In the left panel, an ESXi host uses a hardware iSCSI HBA; in the right panel, a host uses a software iSCSI adapter with an Ethernet NIC. Both connect over the LAN to iSCSI storage that holds VMFS datastores.
In the left example, the host uses the hardware iSCSI adapter to connect to the iSCSI storage
system.
In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to
the iSCSI storage.
iSCSI storage devices from the storage system become available to the host. You can access the
storage devices and create VMFS datastores for your storage needs.
For specific information on setting up the iSCSI SAN, see Chapter 10 Using ESXi with iSCSI SAN.
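As a minimal sketch from the ESXi Shell, the software iSCSI initiator can be enabled and the resulting iSCSI adapters listed; adapter names such as vmhba65 vary from host to host.

esxcli iscsi software set --enabled=true
esxcli iscsi software get      # returns true when the software initiator is enabled
esxcli iscsi adapter list      # shows hardware and software iSCSI adapters, for example vmhba65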
Network-attached Storage (NAS)
You can mount an NFS volume directly on the ESXi host. You then use the NFS datastore to
store and manage virtual machines in the same way that you use the VMFS datastores.
NFS Storage depicts a virtual machine using the NFS datastore to store its files. In this
configuration, the host connects to the NAS server, which stores the virtual disk files, through a
regular network adapter.
Figure: NFS Storage. An ESXi host with an Ethernet NIC connects over the LAN to a NAS appliance that provides an NFS datastore with vmdk files.
For specific information on setting up NFS storage, see Understanding Network File System
Datastores.
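A minimal, hedged example of mounting an NFS version 3 datastore from the ESXi Shell follows; the server name, export path, and datastore name are placeholders.

esxcli storage nfs add --host=nas01.example.com --share=/export/vmstore --volume-name=NFS-Datastore01
esxcli storage nfs list        # verify that the NFS datastore is mounted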
Target and Device Representations
Different storage vendors present the storage systems to ESXi hosts in different ways. Some
vendors present a single target with multiple storage devices or LUNs on it, while others present
multiple targets with one LUN each.
In this illustration, three LUNs are available in each configuration. In one case, the host connects
to one target, but that target has three LUNs that can be used. Each LUN represents an individual
storage volume. In the other example, the host detects three different targets, each having one
LUN.
Targets that are accessed through the network have unique names that are provided by the
storage systems. The iSCSI targets use iSCSI names. Fibre Channel targets use World Wide
Names (WWNs).
Note ESXi does not support accessing the same LUN through different transport protocols,
such as iSCSI and Fibre Channel.
A device, or LUN, is identified by its UUID name. If a LUN is shared by multiple hosts, it must be
presented to all hosts with the same UUID.
How Virtual Machines Access Storage
ESXi supports Fibre Channel (FC), Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and
NFS protocols. Regardless of the type of storage device your host uses, the virtual disk always
appears to the virtual machine as a mounted SCSI device. The virtual disk hides a physical
storage layer from the virtual machine’s operating system. This allows you to run operating
systems that are not certified for specific storage equipment, such as SAN, inside the virtual
machine.
Note Starting from vSphere 7.0, VMware no longer supports software FCoE in production
environments.
The following graphic depicts five virtual machines using different types of storage to illustrate
the differences between each type.
Figure: Virtual machines accessing different types of storage. The hosts use a local SCSI device with VMFS, a Fibre Channel HBA, a hardware iSCSI HBA, a software iSCSI adapter with an Ethernet NIC, and an Ethernet NIC for NFS.
Note This diagram is for conceptual purposes only. It is not a recommended configuration.
Storage Device Characteristics
After the devices get registered with your host, you can display all available local and networked
devices and review their information. If you use third-party multipathing plug-ins, the storage
devices available through the plug-ins also appear on the list.
Note If an array supports implicit asymmetric logical unit access (ALUA) and has only standby paths, the registration of the device fails. The device can register with the host after the target activates a standby path and the host detects it as active. The advanced system parameter /Disk/FailDiskRegistration controls this behavior of the host.
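For reference, the current value of this advanced parameter can be inspected from the ESXi Shell, as in this sketch; the option path is the one named in the note above.

esxcli system settings advanced list -o /Disk/FailDiskRegistration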
For each storage adapter, you can display a separate list of storage devices available for this
adapter.
Generally, when you review storage devices, you see the following information.
Name Also called Display Name. It is a name that the ESXi host assigns to the device based on the storage type and manufacturer. Generally, you can change this name to a name of your choice. See Rename Storage Devices.
Identifier A universally unique identifier that is intrinsic to the device. See Storage Device Names and Identifiers.
Operational State Indicates whether the device is attached or detached. See Detach Storage Devices.
LUN Logical Unit Number (LUN) within the SCSI target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
Drive Type Information about whether the device is a flash drive or a regular HDD drive. For information about flash drives and NVMe devices, see Chapter 15 Working with Flash Devices.
Transport Transportation protocol your host uses to access the device. The protocol depends on the type of storage being used. See Types of Physical Storage.
Owner The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage paths to the storage device. See Pluggable Storage Architecture and Path Management.
Hardware Acceleration Information about whether the storage device assists the host with virtual machine management operations. The status can be Supported, Not Supported, or Unknown. See Chapter 24 Storage Hardware Acceleration.
Sector Format Indicates whether the device uses a traditional, 512n, or advanced sector format, such as 512e or 4Kn. See Device Sector Formats.
Partition Format A partition scheme used by the storage device. It can be of a master boot record (MBR) or GUID partition table (GPT) format. The GPT devices can support datastores greater than 2 TB. See Device Sector Formats.
Multipathing Policies Path Selection Policy and Storage Array Type Policy the host uses to manage paths to storage. See Chapter 18 Understanding Multipathing and Failover.
Paths Paths used to access storage and their status. See Disable Storage Paths.
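As a hedged illustration, the owning plug-in, multipathing policy, and paths for a device can also be reviewed from the ESXi Shell; the device identifier is a placeholder.

esxcli storage nmp device list                          # SATP and Path Selection Policy for each device claimed by the NMP
esxcli storage core path list -d naa.6000000000000001   # paths and their state for one device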
Display Storage Devices for a Host
The Storage Devices view allows you to list the hosts' storage devices, analyze their information, and modify properties.
Procedure
1 Navigate to the host.
2 Click the Configure tab.
3 Under Storage, click Storage Devices.
All storage devices available to the host are listed in the Storage Devices table.
4 To view details for a specific device, select the device from the list.
Icon Description
Refresh Refresh information about storage adapters, topology, and file systems.
Turn On LED Turn on the locator LED for the selected devices.
Turn Off LED Turn off the locator LED for the selected devices.
Mark as Local Mark the selected devices as local for the host.
Mark as Remote Mark the selected devices as remote for the host.
Unmark as Perennially Reserved Clear the perennial reservation from the selected device.
6 Use the following tabs to access additional information and modify properties for the
selected device.
Tab Description
Properties View device properties and characteristics. View and modify multipathing policies for the device.
Paths Display paths available for the device. Disable or enable a selected path.
Display Storage Devices for an Adapter
Procedure
1 Navigate to the host.
2 Click the Configure tab.
3 Under Storage, click Storage Adapters.
All storage adapters installed on the host are listed in the Storage Adapters table.
4 Select the adapter from the list and click the Devices tab.
Storage devices that the host can access through the adapter are displayed.
Icon Description
Refresh Refresh information about storage adapters, topology, and file systems.
Comparing Types of Storage
The following table compares networked storage technologies that ESXi supports.
Fibre Channel over Ethernet: uses the FCoE/SCSI protocols for block access of data/LUNs through a Converged Network Adapter (hardware FCoE) or a NIC with FCoE support (software FCoE).
iSCSI: uses the IP/SCSI protocols for block access of data/LUNs through an iSCSI HBA or iSCSI-enabled NIC (hardware iSCSI), or a standard network adapter (software iSCSI).
The following table compares the vSphere features that different types of storage support.
For example, NAS over NFS supports booting VMs and vMotion, uses NFS 3 and NFS 4.1 datastores, and supports VMware HA and DRS and the Storage APIs - Data Protection, but does not support RDMs or VM clustering.
Note Local storage supports a cluster of virtual machines on a single host (also known as a
cluster in a box). A shared virtual disk is required. For more information about this configuration,
see the vSphere Resource Management documentation.
ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre
Channel over Ethernet (FCoE), and Ethernet. ESXi accesses the adapters directly through device
drivers in the VMkernel.
Depending on the type of storage you use, you might need to enable and configure a storage
adapter on your host.
For information on setting up software FCoE adapters, see Chapter 6 Configuring Fibre Channel
over Ethernet.
For information on configuring different types of iSCSI adapters, see Chapter 11 Configuring iSCSI
and iSER Adapters and Storage.
Note Starting from vSphere 7.0, VMware no longer supports software FCoE in production
environments.
Prerequisites
You must enable certain adapters, for example software iSCSI or FCoE, before you can view their information. To configure adapters, see Chapter 11 Configuring iSCSI and iSER Adapters and Storage and Chapter 6 Configuring Fibre Channel over Ethernet.
Procedure
1 Navigate to the host.
2 Click the Configure tab.
3 Under Storage, click Storage Adapters.
4 Use the icons to perform storage adapter tasks.
Icon Description
Add Software Adapter Add a storage adapter. Applies to software iSCSI and software FCoE.
Refresh Refresh information about storage adapters, topology, and file systems on the host.
Rescan Storage Rescan all storage adapters on the host to discover newly added storage devices or VMFS datastores.
Rescan Adapter Rescan the selected adapter to discover newly added storage devices.
5 To view details for a specific adapter, select the adapter from the list.
6 Use tabs under Adapter Details to access additional information and modify properties for
the selected adapter.
Tab Description
Properties Review general adapter properties that typically include a name and model of the adapter and unique identifiers formed according to specific storage standards. For iSCSI and FCoE adapters, use this tab to configure additional properties, for example, authentication.
Devices View storage devices the adapter can access. Use the tab to perform basic device management tasks. See Display Storage Devices for an Adapter.
Paths List and manage all paths the adapter uses to access storage devices.
Targets (Fibre Channel and iSCSI) Review and manage targets accessed through the adapter.
Network Port Binding (iSCSI only) Configure port binding for software and dependent hardware iSCSI adapters.
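The rescan operations described above can also be run from the ESXi Shell, as in this hedged sketch; the adapter name is a placeholder.

esxcli storage core adapter list                        # list all storage adapters on the host
esxcli storage core adapter rescan --all                # rescan every adapter for new devices
esxcli storage core adapter rescan --adapter=vmhba64    # rescan a single adapter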
Datastore Characteristics
Datastores are logical containers, analogous to file systems, that hide specifics of each storage
device and provide a uniform model for storing virtual machine files. You can display all
datastores available to your hosts and analyze their properties.
n You can create a VMFS datastore, an NFS version 3 or 4.1 datastore, or a vVols datastore
using the New Datastore wizard. A vSAN datastore is automatically created when you enable
vSAN.
n When you add an ESXi host to vCenter Server, all datastores on the host are added to
vCenter Server.
The following table describes datastore details that you can see when you review datastores through the vSphere Client. Certain characteristics might not be available or applicable to all types of datastores.
Type (VMFS, NFS, vSAN, vVol) File system that the datastore uses. For information about VMFS and NFS datastores and how to manage them, see Chapter 17 Working with Datastores. For information about vSAN datastores, see the Administering VMware vSAN documentation. For information about vVols, see Chapter 22 Working with VMware vSphere Virtual Volumes (vVols).
Extents (VMFS) Individual extents that the datastore spans and their capacity.
Drive Type (VMFS, vSAN, vVol) Type of the underlying storage device, such as a flash drive or a regular HDD drive. For details, see Chapter 15 Working with Flash Devices.
Capability Sets (VMFS, NFS, vSAN, vVol) Information about storage data services that the underlying storage entity provides. You cannot modify them. Note A multi-extent VMFS datastore assumes capabilities of only one of its extents.
Tags (VMFS, NFS, vSAN, vVol) Datastore capabilities that you define and associate with datastores in a form of tags. For information, see Assign Tags to Datastores.
Multipathing (VMFS, vVol) Path selection policy the host uses to access storage. For more information, see Chapter 18 Understanding Multipathing and Failover.
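As a hedged example, VMFS extents and general datastore information can also be listed from the ESXi Shell.

esxcli storage vmfs extent list     # each VMFS datastore and the device backing every extent
esxcli storage filesystem list      # mounted datastores, their type, and capacity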
Use the Datastores view to list all datastores available in the vSphere infrastructure inventory,
analyze the information, and modify properties.
Procedure
1 Navigate to any inventory object that is a valid parent object of a datastore, such as a host, a
cluster, or a data center, and click the Datastores tab.
Datastores that are available in the inventory appear in the center panel.
2 Use the options from a datastore right-click menu to perform basic tasks for a selected
datastore.
Availability of specific options depends on the type of the datastore and its configuration.
Option Description
Register VM Register an existing virtual machine in the inventory. See the vSphere Virtual Machine Administration documentation.
Increase Datastore Capacity Increase the capacity of the VMFS datastore or add an extent. See Increase VMFS Datastore Capacity.
Browse Files Navigate to the datastore file browser. See Use Datastore Browser.
Mount Datastore Mount the datastore to certain hosts. See Mount Datastores.
Unmount Datastore Unmount the datastore from certain hosts. See Unmount Datastores.
Maintenance Mode Use datastore maintenance mode. See the vSphere Resource Management documentation.
Configure Storage I/O Control (VMFS) Enable Storage I/O Control for the VMFS datastore. See the vSphere Resource Management documentation.
Edit Space Reclamation (VMFS) Change space reclamation settings for the VMFS datastore. See Change Space Reclamation Settings.
Delete Datastore (VMFS) Remove the VMFS datastore. See Remove VMFS Datastores.
Tags & Custom Attributes Use tags to encode information about the datastore. See Assign Tags to Datastores.
3 Use the tabs to access additional information and modify properties for the selected datastore.
Tab Description
Monitor View alarms, performance data, resource allocation, events, and other status information for the datastore.
Configure View and modify datastore properties. Menu items that you can see depend on the datastore type.
Using Persistent Memory
ESXi supports persistent memory (PMem) devices, which combine the performance of memory with the persistence of traditional storage. Virtual machines that require high bandwidth, low latency, and persistence can benefit from this technology. Examples include VMs with accelerated databases and analytics workloads.
To use persistent memory with your ESXi host, you must be familiar with the following concepts.
PMem Datastore
After you add persistent memory to your ESXi host, the host detects the hardware, and then
formats and mounts it as a local PMem datastore. ESXi uses VMFS-L as a file system format.
Only one local PMem datastore per host is supported.
Note When you manage physical persistent memory, make sure to evacuate all VMs from
the host and place the host into maintenance mode.
The PMem datastore is used to store virtual NVDIMM devices and traditional virtual disks of a
VM. The VM home directory with the vmx and vmware.log files cannot be placed on the
PMem datastore.
ESXi exposes persistent memory to a VM in two different modes. PMem-aware VMs can have
direct access to persistent memory. Traditional VMs can use fast virtual disks stored on the
PMem datastore.
Direct-Access Mode
In this mode, also called virtual PMem (vPMem) mode, a PMem region can be presented to a
VM as a virtual non-volatile dual in-line memory module (NVDIMM) module. The VM uses the
NVDIMM module as a standard byte-addressable memory that can persist across power
cycles.
You can add one or several NVDIMM modules when provisioning the VM.
The VMs must use a virtual hardware version compatible with ESXi 6.7 or later and have a PMem-aware guest OS. The NVDIMM device is compatible with the latest guest OSes that support persistent memory, for example, Windows Server 2016.
Virtual Disk Mode
This mode, also called virtual PMem disks (vPMemDisk) mode, is available to any traditional
VM and supports any hardware version, including all legacy versions. VMs are not required to
be PMem-aware. When you use this mode, you create a regular SCSI virtual disk and attach a
PMem VM storage policy to the disk. The policy automatically places the disk on the PMem
datastore.
To place the virtual disk on the PMem datastore, you must apply the host-local PMem default
storage policy to the disk. The policy is not editable.
The policy can be applied only to virtual disks. Because the VM home directory does not
reside on the PMem datastore, make sure to place it on any standard datastore.
After you assign the PMem storage policy to the virtual disk, you cannot change the policy
through the VM Edit Setting dialog box. To change the policy, migrate or clone the VM.
The following graphic illustrates how the persistent memory components interact.
Figure: Persistent memory components. A PMem-aware VM accesses persistent memory directly, while a traditional VM uses a virtual disk that the PMem storage policy places on the PMem datastore backed by persistent memory.
For information about how to configure and manage VMs with NVDIMMs or virtual persistent
memory disks, see the vSphere Resource Management documentation and vSphere Virtual
Machine Administration.
However, unlike regular datastores, such as VMFS or vVol, the PMem datastore does not appear
in the Datastores view of the vSphere Client. Regular datastore administrative tasks do not apply
to it.
Procedure
Option Description
esxcli command Use the esxcli storage filesystem list command to list the PMem datastore.
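A short, hedged example of running this from the ESXi Shell follows; the grep filter is only a convenience for narrowing the output.

esxcli storage filesystem list
esxcli storage filesystem list | grep -i pmem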
Chapter 3 Overview of Using ESXi with a SAN
Using ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESXi with a SAN also
supports centralized management, failover, and load balancing technologies.
n You can store data securely and configure multiple paths to your storage, eliminating a single
point of failure.
n Using a SAN with ESXi systems extends failure resistance to the server. When you use SAN
storage, all applications can instantly be restarted on another host after the failure of the
original host.
n You can perform live migration of virtual machines using VMware vMotion.
n Use VMware High Availability (HA) in conjunction with a SAN to restart virtual machines in
their last known state on a different server if their host fails.
n Use VMware Fault Tolerance (FT) to replicate protected virtual machines on two different
hosts. Virtual machines continue to function without interruption on the secondary host if the
primary one fails.
n Use VMware Distributed Resource Scheduler (DRS) to migrate virtual machines from one host
to another for load balancing. Because storage is on a shared SAN array, applications
continue running seamlessly.
n If you use VMware DRS clusters, put an ESXi host into maintenance mode to have the system
migrate all running virtual machines to other ESXi hosts. You can then perform upgrades or
other maintenance operations on the original host.
The portability and encapsulation of VMware virtual machines complements the shared nature of
this storage. When virtual machines are located on SAN-based storage, you can quickly shut
down a virtual machine on one server and power it up on another server, or suspend it on one
server and resume operation on another server on the same network. This ability allows you to
migrate computing resources while maintaining consistent shared access.
ESXi and SAN Use Cases
If you are working with multiple hosts, and each host is running multiple virtual machines, the storage on the hosts might no longer be sufficient. You might need to use external storage. The SAN can provide a simple system architecture and other benefits.
Maintenance with zero downtime
When performing ESXi host or infrastructure maintenance, use vMotion to migrate virtual machines to other hosts. If shared storage is on the SAN, you can perform maintenance
without interruptions to the users of the virtual machines. Virtual machine working processes
continue throughout a migration.
Load balancing
You can add a host to a DRS cluster, and the host's resources become part of the cluster's
resources. The distribution and use of CPU and memory resources for all hosts and virtual
machines in the cluster are continuously monitored. DRS compares these metrics to an ideal
resource use. The ideal use considers the attributes of the cluster's resource pools and virtual
machines, the current demand, and the imbalance target. If needed, DRS performs or
recommends virtual machine migrations.
Disaster recovery
You can use VMware High Availability to configure multiple ESXi hosts as a cluster. The
cluster provides rapid recovery from outages and cost-effective high availability for
applications running in virtual machines.
Simplified array migrations and storage upgrades
When you purchase new storage systems, use Storage vMotion to perform live migrations of
virtual machines from existing storage to their new destinations. You can perform the
migrations without interruptions of the virtual machines.
When you use SAN storage with ESXi, the following considerations apply:
n You cannot use SAN administration tools to access operating systems of virtual machines
that reside on the storage. With traditional tools, you can monitor only the VMware ESXi
operating system. You use the vSphere Client to monitor virtual machines.
n The HBA visible to the SAN administration tools is part of the ESXi system, not part of the
virtual machine.
When you use multiple arrays from different vendors, the following considerations apply:
n If your host uses the same SATP for multiple arrays, be careful when you change the default
PSP for that SATP. The change applies to all arrays. For information on SATPs and PSPs, see
Chapter 18 Understanding Multipathing and Failover.
n Some storage arrays make recommendations on queue depth and other settings. Typically,
these settings are configured globally at the ESXi host level. Changing settings for one array
impacts other arrays that present LUNs to the host. For information on changing queue
depth, see the VMware knowledge base article at http://kb.vmware.com/kb/1267.
n Use single-initiator-single-target zoning when zoning ESXi hosts to Fibre Channel arrays. With
this type of configuration, fabric-related events that occur on one array do not impact other
arrays. For more information about zoning, see Using Zoning with Fibre Channel SANs.
When you make your LUN decision, the following considerations apply:
n Each LUN must have the correct RAID level and storage characteristic for the applications
running in virtual machines that use the LUN.
n If multiple virtual machines access the same VMFS, use disk shares to prioritize virtual
machines.
You might want fewer, larger LUNs for the following reasons:
n More flexibility to create virtual machines without asking the storage administrator for more
space.
n More flexibility for resizing virtual disks, doing snapshots, and so on.
You might want more, smaller LUNs for the following reasons:
n More flexibility, as the multipathing policy and disk shares are set per LUN.
n Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN.
When the storage characterization for a virtual machine is unavailable, it might not be easy to
determine the number and size of LUNs to provision. You can experiment using either a
predictive or adaptive scheme.
Use the Predictive Scheme to Make LUN Decisions
Procedure
1 Provision several LUNs with different storage characteristics.
2 Create a VMFS datastore on each LUN, labeling each datastore according to its
characteristics.
3 Create virtual disks to contain the data for virtual machine applications in the VMFS
datastores created on LUNs with the appropriate RAID level for the applications'
requirements.
Note Disk shares are relevant only within a given host. The shares assigned to virtual
machines on one host have no effect on virtual machines on other hosts.
Use the Adaptive Scheme to Make LUN Decisions
Procedure
1 Provision a large LUN (RAID 1+0 or RAID 5), with write caching enabled.
2 Create a VMFS datastore on that LUN.
3 Create four or five virtual disks on the VMFS datastore.
4 Run the applications to determine whether disk performance is acceptable.
Results
If performance is acceptable, you can place additional virtual disks on the VMFS datastore. If performance is not acceptable, create a new, large LUN, possibly with a different RAID level, and repeat the process. Use migration so that you do not lose virtual machine data when you recreate the LUN.
n High Tier. Offers high performance and high availability. Might offer built-in snapshots to
facilitate backups and point-in-time (PiT) restorations. Supports replication, full storage
processor redundancy, and SAS drives. Uses high-cost spindles.
n Mid Tier. Offers mid-range performance, lower availability, some storage processor
redundancy, and SCSI or SAS drives. Might offer snapshots. Uses medium-cost spindles.
n Lower Tier. Offers low performance, little internal storage redundancy. Uses low-end SCSI
drives or SATA.
Not all VMs must be on the highest-performance and most-available storage throughout their
entire life cycle.
When you decide where to place a virtual machine, the following considerations apply:
n Criticality of the VM
A virtual machine might change tiers throughout its life cycle because of changes in criticality or
changes in technology. Criticality is relative and might change for various reasons, including
changes in the organization, operational processes, regulatory requirements, disaster planning,
and so on.
Most SAN hardware is packaged with storage management software. In many cases, this
software is a Web application that can be used with any Web browser connected to your
network. In other cases, this software typically runs on the storage system or on a single server,
independent of the servers that use the SAN for storage.
The SAN management software typically provides the following functionality:
n Storage array management, including LUN creation, array cache management, LUN mapping, and LUN security.
If you run the SAN management software on a virtual machine, you gain the benefits of a virtual
machine, including failover with vMotion and VMware HA. Because of the additional level of
indirection, however, the management software might not see the SAN. In this case, you can use
an RDM.
Note Whether a virtual machine can run management software successfully depends on the
particular storage system.
When you plan a backup strategy, consider the following factors:
n Identification of critical applications that require more frequent backup cycles within a given
period.
n Recovery point and recovery time goals. Consider how precise your recovery point must be,
and how long you are willing to wait for it.
n The rate of change (RoC) associated with the data. For example, if you are using
synchronous/asynchronous replication, the RoC affects the amount of bandwidth required
between the primary and secondary storage devices.
n Identification of peak traffic periods on the SAN. Backups scheduled during those peak
periods can slow the applications and the backup process.
Include a recovery-time objective for each application when you design your backup strategy.
That is, consider the time and resources necessary to perform a backup. For example, if a
scheduled backup stores so much data that recovery requires a considerable amount of time,
examine the scheduled backup. Perform the backup more frequently, so that less data is backed
up at a time and the recovery time decreases.
If an application requires recovery within a certain time frame, the backup process must provide
a time schedule and specific data processing to meet the requirement. Fast recovery can require
the use of recovery volumes that reside on online storage. This process helps to minimize or
eliminate the need to access slow offline media for missing data components.
The Storage APIs - Data Protection that VMware offers can work with third-party products.
When using the APIs, third-party software can perform backups without loading ESXi hosts with
the processing of backup tasks.
The third-party products using the Storage APIs - Data Protection can perform the following
backup tasks:
n Perform a full, differential, and incremental image backup and restore of virtual machines.
n Perform a file-level backup of virtual machines that use supported Windows and Linux
operating systems.
n Ensure data consistency by using Microsoft Volume Shadow Copy Services (VSS) for virtual
machines that run supported Microsoft Windows operating systems.
Because the Storage APIs - Data Protection use the snapshot capabilities of VMFS, backups do
not require that you stop virtual machines. These backups are nondisruptive, can be performed
at any time, and do not need extended backup windows.
For information about the Storage APIs - Data Protection and integration with backup products,
see the VMware website or contact your vendor.
Chapter 4 Using ESXi with Fibre Channel SAN
When you set up ESXi hosts to use FC SAN storage arrays, special considerations are necessary.
This section provides introductory information about how to use ESXi with an FC SAN array.
If you are new to SAN technology, familiarize yourself with the basic terminology.
A storage area network (SAN) is a specialized high-speed network that connects host servers to
high-performance storage subsystems. The SAN components include host bus adapters (HBAs)
in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and
storage disk arrays.
A SAN topology with at least one switch present on the network forms a SAN fabric.
To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC)
protocol that packages SCSI commands into Fibre Channel frames.
To restrict server access to storage arrays not allocated to that server, the SAN uses zoning.
Typically, zones are created for each group of servers that access a shared group of storage
devices and LUNs. Zones define which HBAs can connect to which SPs. Devices outside a zone
are not visible to the devices inside the zone.
Zoning is similar to LUN masking, which is commonly used for permission management. LUN
masking is a process that makes a LUN available to some hosts and unavailable to other hosts.
When transferring data between the host server and storage, the SAN uses a technique known
as multipathing. Multipathing allows you to have more than one physical path from the ESXi host
to a LUN on a storage system.
Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables,
and the storage controller port. If any component of the path fails, the host selects another
available path for I/O. The process of detecting a failed path and switching to another is called
path failover.
World Wide Port Name (WWPN)
A globally unique identifier for a port that allows certain applications to access the port. The
FC switches discover the WWPN of a device or host and assign a port address to the device.
Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for the port. This
unique ID enables routing of data through the SAN to that port. The FC switches assign the
port ID when the device logs in to the fabric. The port ID is valid only while the device is
logged on.
When N-Port ID Virtualization (NPIV) is used, a single FC HBA port (N-port) can register with the
fabric by using several WWPNs. This method allows an N-port to claim multiple fabric addresses,
each of which appears as a unique entity. When ESXi hosts use a SAN, these multiple, unique
identifiers allow the assignment of WWNs to individual virtual machines as part of their
configuration.
The types of storage that your host supports include active-active, active-passive, and ALUA-
compliant.
Active-active storage system
Supports access to the LUNs simultaneously through all the storage ports that are available
without significant performance degradation. All the paths are active, unless a path fails.
Active-passive storage system
A system in which one storage processor is actively providing access to a given LUN. The other processors act as a backup for the LUN and can be actively providing access to other LUN I/O. I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.
Asymmetrical storage system
Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide
different levels of access per port. With ALUA, the host can determine the states of target
ports and prioritize paths. The host uses some of the active paths as primary, and uses others
as secondary.
Using Zoning with Fibre Channel SANs
Zoning provides access control in the SAN topology. Zoning has the following effects:
n Can prevent non-ESXi systems from accessing a particular storage system, and from possibly
destroying VMFS data.
n Can be used to separate different environments, for example, a test from a production
environment.
With ESXi hosts, use single-initiator zoning or single-initiator-single-target zoning. The latter is the preferred zoning practice. Using the more restrictive zoning prevents problems and misconfigurations that can occur on the SAN.
For detailed instructions and best zoning practices, contact storage array or switch vendors.
How Virtual Machines Access Data on a SAN
When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes
place:
1 When the guest operating system in a virtual machine reads or writes to a SCSI disk, it sends
SCSI commands to the virtual disk.
2 Device drivers in the virtual machine’s operating system communicate with the virtual SCSI
controllers.
3 The virtual SCSI controller forwards the command to the VMkernel.
4 The VMkernel performs the following tasks:
a Locates the appropriate virtual disk file in the VMFS volume.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
c Sends the modified I/O request from the device driver in the VMkernel to the physical HBA.
5 The physical HBA packages the I/O request according to the rules of the FC protocol and transmits the request to the SAN.
6 Depending on the port the HBA uses to connect to the fabric, one of the SAN switches receives the request. The switch routes the request to the appropriate storage device.
Chapter 5 Configuring Fibre Channel Storage
When you use ESXi systems with SAN storage, specific hardware and system requirements exist.
ESXi Fibre Channel SAN Requirements
n Make sure that ESXi systems support the SAN storage hardware and firmware combinations
you use. For an up-to-date list, see the VMware Compatibility Guide.
n Configure your system to have only one VMFS volume per LUN.
n Unless you are using diskless servers, do not set up the diagnostic partition on a SAN LUN.
If you use diskless servers that boot from a SAN, a shared diagnostic partition is appropriate.
n Use RDMs to access raw disks. For information, see Chapter 19 Raw Device Mapping.
n For multipathing to work properly, each LUN must present the same LUN ID number to all
ESXi hosts.
n Make sure that the storage device driver specifies a large enough queue. You can set the
queue depth for the physical HBA during a system setup.
n On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue
parameter to 60. With this increase, Windows can tolerate delayed I/O resulting from a path
failover. For information, see Set Timeout on Windows Guest OS.
n You cannot use multipathing software inside a virtual machine to perform I/O load balancing
to a single physical LUN. However, when your Microsoft Windows virtual machine uses
dynamic disks, this restriction does not apply. For information about configuring dynamic
disks, see Set Up Dynamic Disk Mirroring.
Storage provisioning
To ensure that the ESXi system recognizes the LUNs at startup time, provision all LUNs to the
appropriate HBAs before you connect the SAN to the ESXi system.
Provision all LUNs to all ESXi HBAs at the same time. HBA failover works only if all HBAs see
the same LUNs.
For LUNs that are shared among multiple hosts, make sure that LUN IDs are consistent across
all hosts.
When you use vCenter Server and vMotion or DRS, make sure that the LUNs for the virtual machines are provisioned to all ESXi hosts. This configuration provides the greatest ability to move virtual machines.
When you use vMotion or DRS with an active-passive SAN storage device, make sure that all
ESXi systems have consistent paths to all storage processors. Not doing so can cause path
thrashing when a vMotion migration occurs.
For active-passive storage arrays not listed in Storage/SAN Compatibility, VMware does not
support storage port failover. In those cases, you must connect the server to the active port
on the storage array. This configuration ensures that the LUNs are presented to the ESXi
host.
You should follow the configuration guidelines provided by your storage array vendor. During FC
HBA setup, consider the following issues.
n Do not mix FC HBAs from different vendors in a single host. Having different models of the
same HBA is supported, but a single LUN cannot be accessed through two different HBA
types, only through the same type.
n Set the timeout value for detecting a failover. To ensure optimal performance, do not change
the default value.
Installation and Setup Steps
1 Design your SAN if it is not already configured. Most existing SANs require only minor modification to work with ESXi.
2 Check that all SAN components meet requirements.
3 Perform any necessary storage array modification.
Most vendors have vendor-specific documentation for setting up a SAN to work with VMware ESXi.
4 Set up the HBAs for the hosts you have connected to the SAN.
5 Install ESXi on the hosts.
6 Create virtual machines and install the guest operating systems.
7 (Optional) Set up your system for VMware HA failover or for using Microsoft Clustering Services.
N-Port ID Virtualization
N-Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel
HBA port can register with the fabric using several worldwide port names (WWPNs). This allows
a fabric-attached N-port to claim multiple fabric addresses. Each address appears as a unique
entity on the Fibre Channel fabric.
Only virtual machines with RDMs can have WWN assignments, and they use these assignments
for all RDM traffic.
When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx)
is updated to include a WWN pair. The WWN pair consists of a World Wide Port Name (WWPN)
and a World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel
creates a virtual port (VPORT) on the physical HBA which is used to access the LUN. The VPORT
is a virtual HBA that appears to the FC fabric as a physical HBA. As its unique identifier, the
VPORT uses the WWN pair that was assigned to the virtual machine.
Each VPORT is specific to the virtual machine. The VPORT is destroyed on the host and no longer
appears to the FC fabric when the virtual machine is powered off. When a virtual machine is
migrated from one host to another, the VPORT closes on the first host and opens on the
destination host.
When virtual machines do not have WWN assignments, they access storage LUNs with the
WWNs of their host’s physical HBAs.
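For context, a hedged way to view the WWNN and WWPN values of the host's physical FC HBAs from the ESXi Shell is shown below.

esxcli storage san fc list     # node name (WWNN) and port name (WWPN) of each Fibre Channel adapter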
To use NPIV, the following requirements apply:
n NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host's physical HBAs.
n HBAs on your host must support NPIV. For information, see the VMware Compatibility Guide and refer to your vendor documentation.
n Use HBAs of the same type. VMware does not support heterogeneous HBAs on the same
host accessing the same LUNs.
n If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to
the virtual machine. This is required to support multipathing even though only one path at
a time will be active.
n Make sure that physical HBAs on the host can detect all LUNs that are to be accessed by
NPIV-enabled virtual machines running on that host.
n When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN
number and NPIV target ID match the physical LUN and Target ID.
n Zone the NPIV WWPNs so that they connect to all storage systems the cluster hosts can
access, even if the VM does not use the storage. If you add any new storage systems to a
cluster with one or more NPIV-enabled VMs, add the new zones, so the NPIV WWPNs can
detect the new storage system target ports.
n NPIV supports vMotion. When you use vMotion to migrate a virtual machine, it retains the assigned WWN.
If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV,
VMkernel reverts to using a physical HBA to route the I/O.
n If your FC SAN environment supports concurrent I/O on the disks from an active-active array,
the concurrent I/O to two different NPIV ports is also supported.
When you use ESXi with NPIV, the following limitations apply:
n Because the NPIV technology is an extension to the FC protocol, it requires an FC switch and
does not work on the direct attached FC disks.
n When you clone a virtual machine or template with a WWN assigned to it, the clones do not
retain the WWN.
n Disabling and then re-enabling the NPIV capability on an FC switch while virtual machines are
running can cause an FC link to fail and I/O to stop.
Assign WWNs to Virtual Machines
You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical FC
HBAs on the host.
Typically, you do not need to change existing WWN assignments on your virtual machine. In
certain circumstances, for example, when manually assigned WWNs are causing conflicts on the
SAN, you might need to change or remove WWNs.
Prerequisites
n Before configuring WWN, ensure that the ESXi host can access the storage LUN access
control list (ACL) configured on the array side.
n If you want to edit the existing WWNs, power off the virtual machine.
Procedure
1 Right-click the virtual machine in the inventory and select Edit Settings.
2 Click the VM Options tab and expand Fibre Channel NPIV.
3 Create or edit the WWN assignments by selecting one of the following options:
Option Description
Temporarily disable NPIV for this virtual machine: Disable but do not remove the existing WWN assignments for the virtual machine.
Leave unchanged: Retain the existing WWN assignments. The read-only WWN assignments section displays the node and port values of any existing WWN assignments.
Generate new WWNs: Generate new WWNs, overwriting any existing WWNs. The WWNs of the HBA are not affected. Specify the number of WWNNs and WWPNs. A minimum of two WWPNs are required to support failover with NPIV. Typically only one WWNN is created for each virtual machine.
Remove WWN assignment: Remove the WWNs assigned to the virtual machine. The virtual machine uses the HBA WWNs to access the storage LUN.
What to do next
Register newly created WWNs in the fabric.
Chapter 6 Configuring Fibre Channel over Ethernet
To access Fibre Channel storage, an ESXi host can use the Fibre Channel over Ethernet (FCoE)
protocol.
Note Starting from vSphere 7.0, VMware no longer supports software FCoE in production
environments.
The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your
host does not need special Fibre Channel links to connect to Fibre Channel storage. The host can
use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.
The adapters that VMware supports generally fall into two categories, hardware FCoE adapters
and software FCoE adapters that use the native FCoE stack in ESXi.
For information on adapters that can be used with VMware FCoE, see the VMware Compatibility Guide.
Hardware FCoE Adapters
This category includes completely offloaded specialized Converged Network Adapters (CNAs) that contain network and Fibre Channel functionalities on the same card. When such an adapter is installed, your host detects and can use both CNA components. In the vSphere Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as an FCoE adapter (vmhba). You do not need to configure the
hardware FCoE adapter to use it.
Software FCoE Adapters
A software FCoE adapter uses the native FCoE protocol stack in ESXi to perform some of the
FCoE processing. You must use the software FCoE adapter with a compatible NIC.
VMware supports two categories of NICs with the software FCoE adapters.
NICs with partial FCoE offload capabilities
The extent of the offload capabilities might depend on the type of the NIC. Generally, the NICs offer Data Center Bridging (DCB) and I/O offload capabilities.
NICs without FCoE offload capabilities
Any NICs that offer Data Center Bridging (DCB) and have a minimum speed of 10 Gbps. The network adapters are not required to support any FCoE offload capabilities.
Unlike the hardware FCoE adapter, the software adapter must be activated. Before you activate
the adapter, you must properly configure networking.
Note The number of software FCoE adapters you activate corresponds to the number of
physical NIC ports. ESXi supports a maximum of four software FCoE adapters on one host.
Follow these guidelines when you configure a network switch for a software FCoE environment:
n On the ports that communicate with your ESXi host, disable the Spanning Tree Protocol
(STP). Having the STP enabled might delay the FCoE Initialization Protocol (FIP) response at
the switch and cause an all paths down (APD) condition.
The FIP is a protocol that FCoE uses to discover and initialize FCoE entities on the Ethernet.
n Make sure that you have a compatible firmware version on the FCoE switch.
n Whether you use a partially offloaded NIC or a non-FCoE capable NIC, make sure that the
latest microcode is installed on the network adapter.
n If you use a non-FCoE capable NIC, make sure that it has the DCB capability for software
FCoE enablement.
n If the network adapter has multiple ports, when configuring networking, add each port to a
separate vSwitch. This practice helps you to avoid an APD condition when a disruptive event,
such as an MTU change, occurs.
n Do not move a network adapter port from one vSwitch to another when FCoE traffic is active.
If you make this change, reboot your host afterwards.
n If you changed the vSwitch for a network adapter port and caused a failure, moving the port
back to the original vSwitch resolves the problem.
Note Starting from vSphere 7.0, VMware no longer supports software FCoE in production
environments.
This procedure explains how to create a single VMkernel network adapter connected to a single
FCoE physical network adapter through a vSphere Standard switch. If your host has multiple
network adapters or multiple ports on the adapter, connect each FCoE NIC to a separate
standard switch. For more information, see the vSphere Networking documentation.
Procedure
5 To enable Jumbo Frames, change MTU (Bytes) to a value of 2500 or more, and click Next.
6 Click the Add adapters icon, and select the network adapter (vmnic#) that supports FCoE.
A network label is a friendly name that identifies the VMkernel adapter that you are creating,
for example, FCoE.
FCoE traffic requires an isolated network. Make sure that the VLAN ID you enter is different
from the one used for regular networking on your host. For more information, see the
vSphere Networking documentation.
9 After you finish configuration, review the information and click Finish.
Results
You have created the virtual VMkernel adapter for the physical FCoE network adapter installed
on your host.
Note To avoid FCoE traffic disruptions, do not remove the FCoE network adapter (vmnic#) from
the vSphere Standard switch after you set up FCoE networking.
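If you prefer to check or set the switch MTU from the ESXi Shell instead of the vSphere Client, a minimal sketch is shown below. The switch name vSwitch1 is a placeholder for the standard switch that carries your FCoE traffic.
esxcli network vswitch standard list
esxcli network vswitch standard set -v vSwitch1 -m 2500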
Note Starting from vSphere 7.0, VMware no longer supports software FCoE in production
environments.
The number of software FCoE adapters you can activate corresponds to the number of physical
FCoE NIC ports on your host. ESXi supports a maximum of four software FCoE adapters on one
host.
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and click the Add Software Adapter icon.
5 On the Add Software FCoE Adapter dialog box, select an appropriate vmnic from the drop-
down list of physical network adapters.
Only those adapters that are not yet used for FCoE traffic are listed.
6 Click OK.
Results
After you activate the software FCoE adapter, you can view its properties. If you do not use the
adapter, you can remove it from the list of adapters.
Chapter 7 Booting ESXi from Fibre Channel SAN
When you set up your host to boot from a SAN, your host's boot image is stored on one or more
LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather
than from its local disk.
ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over
Ethernet (FCoE) converged network adapter (CNA).
Caution When you use boot from SAN with multiple ESXi hosts, each host must have its own
boot LUN. If you configure multiple hosts to share the boot LUN, ESXi image corruption might
occur.
If you use boot from SAN, the benefits for your environment include the following:
n Cheaper servers. Servers can be more dense and run cooler without internal storage.
n Easier server replacement. You can replace servers and have the new server point to the old
boot location.
n Less wasted space. Servers without local disks often take up less space.
n Easier backup processes. You can back up the system boot images in the SAN as part of the
overall SAN backup procedures. Also, you can use advanced array features such as
snapshots on the boot image.
n Improved management. Creating and managing the operating system image is easier and
more efficient.
n Better reliability. You can access the boot disk through multiple paths, which protects the disk
from being a single point of failure.
When you prepare your boot from SAN environment, consider the following requirements:
ESXi system requirements: Follow vendor recommendations for the server booting from a SAN.
Adapter requirements: Configure the adapter so that it can access the boot LUN. See your vendor documentation.
Access control:
n Each host must have access to its own boot LUN only, not the boot LUNs of other hosts. Use storage system software to make sure that the host accesses only the designated LUNs.
n Multiple servers can share a diagnostic partition. You can use array-specific LUN masking to achieve this configuration.
Multipathing support: Multipathing to a boot LUN on active-passive arrays is not supported because the BIOS does not support multipathing and is unable to activate a standby path.
SAN considerations: If the array is not certified for a direct connect topology, the SAN connections must be through a switched topology. If the array is certified for the direct connect topology, the SAN connections can be made directly to the array. Boot from SAN is supported for both switched topology and direct connect topology.
Hardware-specific considerations: If you are running an IBM eServer BladeCenter and use boot from SAN, you must disable IDE drives on the blades.
This section describes the generic boot-from-SAN enablement process on the rack-mounted
servers. For information on enabling the boot from SAN option on Cisco Unified Computing
System FCoE blade servers, refer to Cisco documentation.
Procedure
Because configuring the SAN components is vendor-specific, refer to the product documentation
for each item.
Procedure
1 Connect the network cables, referring to any cabling guide that applies to your setup.
a From the SAN storage array, make the ESXi host visible to the SAN. This process is often
called creating an object.
b From the SAN storage array, set up the host to have the WWPNs of the host’s adapters
as port names or node names.
c Create LUNs.
d Assign LUNs.
Caution If you use a scripted installation process to install ESXi in boot from SAN mode, take
special steps to avoid unintended data loss.
Prerequisites
Procedure
Because changing the boot sequence in the BIOS is vendor-specific, refer to vendor
documentation for instructions. The following procedure explains how to change the boot
sequence on an IBM host.
Procedure
1 Power on your system and enter the system BIOS Configuration/Setup Utility.
Results
Procedure
Procedure
1 Run lputil.
3 Select an adapter.
Procedure
2 To configure the adapter parameters, press ALT+E at the Emulex prompt and follow these
steps.
3 To configure the boot device, follow these steps from the Emulex main menu.
4 Boot into the system BIOS and move Emulex first in the boot controller sequence.
Procedure
1 While booting the server, press Ctrl+Q to enter the Fast!UTIL configuration utility.
2 Perform the appropriate action depending on the number of HBAs.
One HBA: If you have only one HBA, the Fast!UTIL Options page appears. Skip to Step 3.
Multiple HBAs: If you have more than one HBA, select the HBA manually.
a In the Select Host Adapter page, use the arrow keys to position the pointer on the appropriate HBA.
b Press Enter.
3 In the Fast!UTIL Options page, select Configuration Settings and press Enter.
4 In the Configuration Settings page, select Adapter Settings and press Enter.
7 Select the Boot Port Name entry in the list of storage processors (SPs) and press Enter.
If you are using an active-passive storage array, the selected SP must be on the preferred
(active) path to the boot LUN. If you are not sure which SP is on the active path, use your
storage array management software to find out. The target IDs are created by the BIOS and
might change with each reboot.
9 Perform the appropriate action depending on the number of LUNs attached to the SP.
One LUN: The LUN is selected as the boot LUN. You do not need to enter the Select LUN page.
Multiple LUNs: The Select LUN page opens. Use the pointer to select the boot LUN, then press Enter.
10 If any remaining storage processors show in the list, press C to clear the data.
11 Press Esc twice to exit and press Enter to save the setting.
Chapter 8 Booting ESXi with Software FCoE
ESXi supports booting from FCoE capable network adapters.
Only NICs with partial FCoE offload support boot capabilities with the software FCoE. If you
use NICs without FCoE offload, software FCoE boot is not supported.
When you install and boot ESXi from an FCoE LUN, the host can use a VMware software FCoE
adapter and a network adapter with FCoE capabilities. The host does not require a dedicated
FCoE HBA.
You perform most configurations through the option ROM of your network adapter. The network
adapters must support one of the following formats, which communicate parameters about an
FCoE boot device to the VMkernel.
n FCoE Boot Firmware Table (FBFT).
n FCoE Boot Parameter Table (FBPT). FBPT is defined by VMware for third-party vendors to
implement a software FCoE boot.
The configuration parameters are set in the option ROM of your adapter. During an ESXi
installation or a subsequent boot, these parameters are exported in to system memory in either
FBFT format or FBPT format. The VMkernel can read the configuration settings and use them to
access the boot LUN.
Requirements
n Use ESXi of a compatible version.
n The network adapter must meet the following requirements:
n Be FCoE capable.
n Contain FCoE boot firmware which can export boot information in FBFT format or FBPT
format.
Considerations
n You cannot change software FCoE boot configuration from within ESXi.
n Coredump is not supported on any software FCoE LUNs, including the boot LUN.
n The boot LUN cannot be shared with other hosts, even on shared storage. Make sure that the
host has access to the entire boot LUN.
When you configure your host for a software FCoE boot, you perform several tasks.
Prerequisites
n The network adapter must contain either an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT).
For information about network adapters that support software FCoE boot, see the VMware
Compatibility Guide.
Procedure
Procedure
u In the option ROM of the network adapter, specify software FCoE boot parameters.
These parameters include a boot target, boot LUN, VLAN ID, and so on.
Prerequisites
n Configure the option ROM of the network adapter, so that it points to a target boot LUN.
Make sure that you have information about the bootable LUN.
n Change the boot order in the system BIOS to the following sequence:
a The network adapter that you use for the software FCoE boot.
b The ESXi installation media.
Procedure
The ESXi installer verifies that FCoE boot is enabled in the BIOS and, if needed, creates a
standard virtual switch for the FCoE capable network adapter. The name of the vSwitch is
VMware_FCoE_vSwitch. The installer then uses preconfigured FCoE boot parameters to
discover and display all available FCoE LUNs.
2 On the Select a Disk page, select the software FCoE LUN that you specified in the boot
parameter setting.
If the boot LUN does not appear in this menu, make sure that you correctly configured boot
parameters in the option ROM of the network adapter.
5 Change the boot order in the system BIOS so that the FCoE boot LUN is the first bootable
device.
ESXi continues booting from the software FCoE LUN until it is ready to be used.
What to do next
If needed, you can rename and modify the VMware_FCoE_vSwitch that the installer
automatically created. Make sure that the Cisco Discovery Protocol (CDP) mode is set to Listen
or Both.
Problem
When you install or boot ESXi from FCoE storage, the installation or the boot process fails. The
FCoE setup that you use includes a VMware software FCoE adapter and a network adapter with
partial FCoE offload capabilities.
Solution
n Make sure that you correctly configured boot parameters in the option ROM of the FCoE
network adapter.
n During installation, monitor the BIOS of the FCoE network adapter for any errors.
n Use the esxcli command to verify whether the boot LUN is present.
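As a quick check from the ESXi Shell, you can list the storage devices that the host currently sees and confirm that the boot LUN appears among them. This is only a sketch; the device display name depends on your array.
esxcli storage core device list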
Chapter 9 Best Practices for Fibre Channel Storage
When using ESXi with Fibre Channel SAN, follow recommendations to avoid performance
problems.
The vSphere Client offers extensive facilities for collecting performance information. The
information is graphically displayed and frequently updated.
You can also use the resxtop or esxtop command-line utilities. The utilities provide a detailed
look at how ESXi uses resources. For more information, see the vSphere Resource Management
documentation.
Check with your storage representative if your storage system supports Storage API - Array
Integration hardware acceleration features. If it does, refer to your vendor documentation to
enable hardware acceleration support on the storage system side. For more information, see
Chapter 24 Storage Hardware Acceleration.
n Do not change the path policy the system sets for you unless you understand the
implications of making such a change.
n Document everything. Include information about zoning, access control, storage, switch,
server and FC HBA configuration, software and firmware versions, and storage cable plan.
n Make several copies of your topology maps. For each element, consider what happens to
your SAN if the element fails.
n Verify different links, switches, HBAs, and other elements to ensure that you did not miss
a critical failure point in your design.
n Ensure that the Fibre Channel HBAs are installed in the correct slots in the host, based on slot
and bus speed. Balance PCI bus load among the available buses in the server.
n Become familiar with the various monitor points in your storage network, at all visibility
points, including host's performance charts, FC switch statistics, and storage performance
statistics.
n Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your
ESXi host. If you change the ID, the datastore becomes inactive and its virtual machines fail.
Resignature the datastore to make it active again. See Managing Duplicate VMFS Datastores.
After you change the ID of the LUN, rescan the storage to reset the ID on your host. For
information on using the rescan, see Storage Rescan Operations.
Procedure
4 Under Advanced System Settings, select the Disk.EnableNaviReg parameter and click the
Edit icon.
Results
This operation disables the automatic host registration that is enabled by default.
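An equivalent change from the ESXi Shell might look like the following sketch, assuming the advanced option is exposed under the path /Disk/EnableNaviReg. A value of 0 disables the automatic registration; verify the current value before and after the change.
esxcli system settings advanced list -o /Disk/EnableNaviReg
esxcli system settings advanced set -o /Disk/EnableNaviReg -i 0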
If the environment is properly configured, the SAN fabric components (particularly the SAN
switches) are only minor contributors because of their low latencies relative to servers and
storage arrays. Make sure that the paths through the switch fabric are not saturated, that is, that
the switch fabric is running at the highest throughput.
If you encounter any problems with storage array performance, consult your storage array
vendor documentation for any relevant information.
To improve the array performance in the vSphere environment, follow these general guidelines:
n When assigning LUNs, remember that several hosts might access the LUN, and that several
virtual machines can run on each host. One LUN used by a host can service I/O from many
different applications running on different operating systems. Because of this diverse
workload, the RAID group containing the ESXi LUNs typically does not include LUNs used by
other servers that are not running ESXi.
n SAN storage arrays require continual redesign and tuning to ensure that I/O is load-balanced
across all storage array paths. To meet this requirement, distribute the paths to the LUNs
among all the SPs to provide optimal load-balancing. Close monitoring indicates when it is
necessary to rebalance the LUN distribution.
Tuning statically balanced storage arrays is a matter of monitoring the specific performance
statistics, such as I/O operations per second, blocks per second, and response time.
Distributing the LUN workload to spread the workload across all the SPs is also important.
Each server application must have access to its designated storage with the required
performance levels. Because each application has different requirements, you can meet these
goals by selecting an appropriate RAID group on the storage array.
n Place each LUN on a RAID group that provides the necessary performance levels. Monitor the
activities and resource use of other LUNs in the assigned RAID group. A high-performance
RAID group that has too many applications doing I/O to it might not meet performance goals
required by an application running on the ESXi host.
n Ensure that each host has enough HBAs to increase throughput for the applications on the
host for the peak period. I/O spread across multiple HBAs provides faster throughput and
less latency for each application.
n To provide redundancy for a potential HBA failure, make sure that the host is connected to a
dual redundant fabric.
n When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating
systems use and share that resource. The LUN performance required by the ESXi host might
be much higher than when you use regular physical machines. For example, if you expect to
run four I/O intensive applications, allocate four times the performance capacity for the ESXi
LUNs.
n When you use multiple ESXi systems with vCenter Server, the performance requirements
for the storage subsystem increase correspondingly.
n The number of outstanding I/Os needed by applications running on the ESXi system must
match the number of I/Os the HBA and storage array can handle.
Chapter 10 Using ESXi with iSCSI SAN
ESXi can connect to external SAN storage using the Internet SCSI (iSCSI) protocol. In addition to
traditional iSCSI, ESXi also supports iSCSI Extensions for RDMA (iSER).
When the iSER protocol is enabled, the host can use the same iSCSI framework, but replaces the
TCP/IP transport with the Remote Direct Memory Access (RDMA) transport.
n iSCSI Multipathing
n iSCSI Initiators
n Error Correction
On the host side, the iSCSI SAN components include iSCSI host bus adapters (HBAs) or Network
Interface Cards (NICs). The iSCSI network also includes switches and routers that transport the
storage traffic, cables, storage processors (SPs), and storage disk systems.
Figure: An ESXi host reaches iSCSI storage over the LAN either through a hardware iSCSI HBA or through the software adapter with a standard Ethernet NIC. In both cases, the LUNs are formatted as VMFS datastores.
The client, called iSCSI initiator, operates on your ESXi host. It initiates iSCSI sessions by issuing
SCSI commands and transmitting them, encapsulated into the iSCSI protocol, to an iSCSI server.
The server is known as an iSCSI target. Typically, the iSCSI target represents a physical storage
system on the network.
The target can also be a virtual iSCSI SAN, for example, an iSCSI target emulator running in a
virtual machine. The iSCSI target responds to the initiator's commands by transmitting required
iSCSI data.
iSCSI Multipathing
When transferring data between the host server and storage, the SAN uses a technique known
as multipathing. With multipathing, your ESXi host can have more than one physical path to a
LUN on a storage system.
Generally, a single path from a host to a LUN consists of an iSCSI adapter or NIC, switch ports,
connecting cables, and the storage controller port. If any component of the path fails, the host
selects another available path for I/O. The process of detecting a failed path and switching to
another is called path failover.
For more information on multipathing, see Chapter 18 Understanding Multipathing and Failover.
In the iSCSI network, each iSCSI element, such as an initiator or a target, is a node. Each node has a node name. ESXi uses several methods to identify a node.
IP Address
Each iSCSI node can have an IP address associated with it so that routing and switching
equipment on your network can establish the connection between the host and storage. This
address is like the IP address that you assign to your computer to get access to your
company's network or the Internet.
iSCSI Name
A worldwide unique name for identifying the node. iSCSI uses the iSCSI Qualified Name (IQN)
and Extended Unique Identifier (EUI).
By default, ESXi generates unique iSCSI names for your iSCSI initiators, for example,
iqn.1998-01.com.vmware:iscsitestox-68158ef2. Usually, you do not have to change the
default value, but if you do, make sure that the new iSCSI name you enter is worldwide
unique.
iSCSI Alias
A more manageable name for an iSCSI device or port used instead of the iSCSI name. iSCSI
aliases are not unique and are intended to be a friendly name to associate with a port.
Each node has one or more ports that connect it to the SAN. iSCSI ports are end-points of an
iSCSI session.
iSCSI names are formatted in two different ways. The most common is the IQN format.
For more details on iSCSI naming requirements and string profiles, see RFC 3721 and RFC 3722 on
the IETF website.
The IQN format takes the form iqn.yyyy-mm.naming-authority:unique-name, where:
n yyyy-mm is the year and month when the naming authority was established.
n naming-authority is the reverse syntax of the Internet domain name of the naming authority.
For example, the iscsi.vmware.com naming authority can have the iSCSI qualified name form
of iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name
was registered in January of 1998, and iscsi is a subdomain, maintained by vmware.com.
n unique name is any name you want to use, for example, the name of your host. The naming
authority must make sure that any names assigned following the colon are unique, such as:
n iqn.1998-01.com.vmware.iscsi:name1
n iqn.1998-01.com.vmware.iscsi:name2
n iqn.1998-01.com.vmware.iscsi:name999
The Extended Unique Identifier (EUI) format takes the form eui. followed by 16 hexadecimal
digits. The 16 hexadecimal digits are text representations of a 64-bit number of an IEEE EUI
(extended unique identifier) format. The top 24 bits are a company ID that IEEE registers with a
particular company. The remaining 40 bits are assigned by the entity holding that company ID
and must be unique.
iSCSI Initiators
To access iSCSI targets, your ESXi host uses iSCSI initiators.
The initiator is software or hardware installed on your ESXi host. The iSCSI initiator originates
communication between your host and an external iSCSI storage system and sends data to the
storage system.
In the ESXi environment, iSCSI adapters configured on your host play the role of initiators. ESXi
supports several types of iSCSI adapters.
For information on configuring and using iSCSI adapters, see Chapter 11 Configuring iSCSI and
iSER Adapters and Storage.
Dependent Hardware iSCSI Adapter
This type of adapter can be a card that presents a standard network adapter and iSCSI
offload functionality for the same port. The iSCSI offload functionality depends on the host's
network configuration to obtain the IP, MAC, and other parameters used for iSCSI sessions.
An example of a dependent adapter is the iSCSI licensed Broadcom 5709 NIC.
Independent Hardware iSCSI Adapter
Implements its own networking and iSCSI configuration and management interfaces.
Typically, an independent hardware iSCSI adapter is a card that either presents only iSCSI
offload functionality or iSCSI offload functionality and standard NIC functionality. The iSCSI
offload functionality has independent configuration management that assigns the IP, MAC,
and other parameters used for the iSCSI sessions. An example of an independent adapter is
the QLogic QLA4052 adapter.
Hardware iSCSI adapters might need to be licensed. Otherwise, they might not appear in the
client or vSphere CLI. Contact your vendor for licensing information.
The traditional iSCSI protocol carries SCSI commands over a TCP/IP network between an iSCSI
initiator on a host and an iSCSI target on a storage device. The iSCSI protocol encapsulates the
commands and assembles the data into packets for the TCP/IP layer. When the data arrives, the
iSCSI protocol disassembles the TCP/IP packets, so that the SCSI commands can be
differentiated and delivered to the storage device.
iSER differs from traditional iSCSI as it replaces the TCP/IP data transfer model with the Remote
Direct Memory Access (RDMA) transport. Using the direct data placement technology of the
RDMA, the iSER protocol can transfer data directly between the memory buffers of the ESXi host
and storage devices. This method eliminates unnecessary TCP/IP processing and data copying,
and can also reduce latency and the CPU load on the storage device.
In the iSER environment, iSCSI works exactly as before, but uses an underlying RDMA fabric
interface instead of the TCP/IP-based interface.
Because the iSER protocol preserves the compatibility with iSCSI infrastructure, the process of
enabling iSER on the ESXi host is similar to the iSCSI process. See Configure iSER with ESXi.
Different iSCSI storage vendors present storage to hosts in different ways. Some vendors
present multiple LUNs on a single target. Others present multiple targets with one LUN each.
In these examples, three LUNs are available in each of these configurations. In the first case, the
host detects one target but that target has three LUNs that can be used. Each of the LUNs
represents an individual storage volume. In the second case, the host detects three different targets,
each having one LUN.
Host-based iSCSI initiators establish connections to each target. Storage systems with a single
target containing multiple LUNs have traffic to all the LUNs on a single connection. With a system
that has three targets with one LUN each, the host uses separate connections to the three LUNs.
This information is useful when you are trying to aggregate storage traffic on multiple
connections from the host with multiple iSCSI adapters. You can set the traffic for one target to a
particular adapter, and use a different adapter for the traffic to another target.
The types of storage systems that your host supports include active-active, active-passive,
ALUA-compliant, and virtual port storage systems.
Active-active storage system
Supports access to the LUNs simultaneously through all the storage ports that are available
without significant performance degradation. All the paths are always active, unless a path
fails.
Active-passive storage system
A system in which one storage processor is actively providing access to a given LUN. The
other processors act as a backup for the LUN and can be actively providing access to other
LUN I/O. I/O can be successfully sent only to an active port for a given LUN. If access through
the active storage port fails, one of the passive storage processors can be activated by the
servers accessing it.
ALUA-compliant storage system
Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide
different levels of access per port. With ALUA, hosts can determine the states of target ports
and prioritize paths. The host uses some of the active paths as primary and others as
secondary.
Virtual port storage system
Supports access to all available LUNs through a single virtual port. Virtual port storage
systems are active-active storage devices, but hide their multiple connections through a single
port. ESXi multipathing does not make multiple connections from a specific port to the
storage by default. Some storage vendors supply session managers to establish and manage
multiple connections to their storage. These storage systems handle port failovers and
connection balancing transparently. This capability is often called transparent failover.
You must configure your host and the iSCSI storage system to support your storage access
control policy.
Discovery
A discovery session is part of the iSCSI protocol. It returns the set of targets you can access on
an iSCSI storage system. The two types of discovery available on ESXi are dynamic and static.
Dynamic discovery obtains a list of accessible targets from the iSCSI storage system. Static
discovery can access only a particular target by target name and address.
For more information, see Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host.
Authentication
iSCSI storage systems authenticate an initiator by a name and key pair. ESXi supports the CHAP
authentication protocol. To use CHAP authentication, the ESXi host and the iSCSI storage system
must have CHAP enabled and have common credentials.
For information on enabling CHAP, see Configuring CHAP Parameters for iSCSI or iSER Storage
Adapters.
Access Control
Access control is a policy set up on the iSCSI storage system. Most implementations support one
or more of three types of access control:
n By initiator name
n By IP address
n By the CHAP protocol
Only initiators that meet all rules can access the iSCSI volume.
Using only CHAP for access control can slow down rescans because the ESXi host can discover
all targets, but then fails at the authentication step. iSCSI rescans work faster if the host discovers
only the targets it can authenticate.
When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes
place:
1 When the guest operating system in a virtual machine reads from or writes to a SCSI disk, it
sends SCSI commands to the virtual disk.
2 Device drivers in the virtual machine’s operating system communicate with the virtual SCSI
controllers.
b Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical
device.
c Sends the modified I/O request from the device driver in the VMkernel to the iSCSI
initiator, hardware or software.
5 If the iSCSI initiator is a hardware iSCSI adapter, independent or dependent, the adapter
performs the following tasks.
6 If the iSCSI initiator is a software iSCSI adapter, the following takes place.
d The physical NIC sends IP packets over Ethernet to the iSCSI storage system.
7 Ethernet switches and routers on the network carry the request to the appropriate storage
device.
Error Correction
To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error correction
methods known as header digests and data digests.
Both parameters are disabled by default, but you can enable them. These digests pertain to,
respectively, the header and SCSI data being transferred between iSCSI initiators and targets, in
both directions.
Header and data digests check the noncryptographic data integrity beyond the integrity checks
that other networking layers, such as TCP and Ethernet, provide. The digests check the entire
communication path, including all elements that can change the network-level traffic, such as
routers, switches, and proxies.
The existence and type of the digests are negotiated when an iSCSI connection is established.
When the initiator and target agree on a digest configuration, this digest must be used for all
traffic between them.
Enabling header and data digests requires additional processing for both the initiator and the
target, and can affect throughput and CPU performance.
Note Systems that use Intel Nehalem processors offload the iSCSI digest calculations, reducing
the impact on performance.
For information on enabling header and data digests, see Configuring Advanced Parameters for
iSCSI.
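As a rough illustration, you can also inspect or change digest settings per adapter from the ESXi Shell. The adapter name vmhba67 and the parameter keys HeaderDigest and DataDigest shown here are assumptions; confirm the exact keys and accepted values with the param get output for your adapter.
esxcli iscsi adapter param get -A vmhba67
esxcli iscsi adapter param set -A vmhba67 -k HeaderDigest -v prohibited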
Chapter 11 Configuring iSCSI and iSER Adapters and Storage
Before ESXi can work with iSCSI SAN, you must set up your iSCSI environment.
The process of preparing your iSCSI environment involves the following steps:
Set up iSCSI storage: For information, see your storage vendor documentation. In addition, follow these recommendations:
n ESXi iSCSI SAN Recommendations and Restrictions
n Chapter 13 Best Practices for iSCSI Storage
n Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host
n To ensure that the host recognizes LUNs at startup time, configure all iSCSI storage targets
so that your host can access them and use them. Configure your host so that it can discover
all available iSCSI targets.
n Unless you are using diskless servers, set up a diagnostic partition on local storage. If you
have diskless servers that boot from iSCSI SAN, see General Recommendations for Boot from
iSCSI SAN for information about diagnostic partitions with iSCSI.
n Set the SCSI controller driver in the guest operating system to a large enough queue.
n On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue
parameter. With this parameter set up, the Windows VMs can better tolerate delayed I/O
resulting from a path failover. For information, see Set Timeout on Windows Guest OS.
n Configure your environment to have only one VMFS datastore for each LUN.
n You cannot use virtual-machine multipathing software to perform I/O load balancing to a
single physical LUN.
n ESXi does not support multipathing when you combine independent hardware adapters with
either software or dependent hardware adapters.
iSCSI Networking
For certain types of iSCSI adapters, you must configure VMkernel networking.
You can verify the network configuration by using the vmkping utility.
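For example, to confirm that a storage portal is reachable from a specific VMkernel interface, you might run a command similar to the following sketch. The interface name vmk1 and the target address are placeholders; the -d and -s options send a large, unfragmented packet, which is useful when Jumbo Frames are configured.
vmkping -I vmk1 10.0.0.10
vmkping -d -s 8972 -I vmk1 10.0.0.10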
The independent hardware iSCSI adapter does not require VMkernel networking. You can
configure network parameters, such as an IP address, subnet mask, and default gateway on the
independent hardware iSCSI adapter.
Independent Hardware iSCSI Adapter: A third-party adapter that offloads the iSCSI and network processing and management from your host. VMkernel networking is not required. For information, see Edit Network Settings for Hardware iSCSI.
Discovery Methods
For all types of iSCSI adapters, you must set the dynamic discovery address or static discovery
address. In addition, you must provide a target name of the storage system. For software iSCSI
and dependent hardware iSCSI, the address is pingable using vmkping.
See Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host.
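As an illustration, adding and listing a dynamic (SendTargets) discovery address from the ESXi Shell might look like the following sketch. The adapter name vmhba67 and the portal address are placeholders.
esxcli iscsi adapter discovery sendtarget add -A vmhba67 -a 10.0.0.10:3260
esxcli iscsi adapter discovery sendtarget list -A vmhba67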
CHAP Authentication
Enable the CHAP parameter on the initiator and the storage system side. After authentication is
enabled, it applies to all targets that are not yet discovered. It does not apply to targets that are
already discovered.
Prerequisites
For information about licensing, installation, and firmware updates, see vendor documentation.
The process of setting up the independent hardware iSCSI adapter includes these steps.
View Independent Hardware iSCSI Adapters: View an independent hardware iSCSI adapter and verify that it is correctly installed and ready for configuration.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default iSCSI name and alias assigned to your iSCSI adapters. For the independent hardware iSCSI adapters, you can also change the default IP settings.
Edit Network Settings for Hardware iSCSI: Change default network settings so that the adapter is configured properly for the iSCSI SAN.
Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Storage Adapter: If your iSCSI environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Enable Jumbo Frames for Independent Hardware iSCSI: If your iSCSI environment supports Jumbo Frames, enable them for the adapter.
After you install an independent hardware iSCSI adapter on the host, it appears on the list of
storage adapters available for configuration. You can view its properties.
Prerequisites
Procedure
If installed, the hardware iSCSI adapter appears on the list of storage adapters.
iSCSI Name: Unique name formed according to iSCSI standards that identifies the iSCSI adapter. You can edit the iSCSI name.
iSCSI Alias: A friendly name used instead of the iSCSI name. You can edit the iSCSI alias.
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
5 In the IPv4 settings section, disable IPv4 or select the method to obtain IP addresses.
Note The automatic DHCP option and static option are mutually exclusive.
Use static IPv4 settings: Enter the IPv4 IP address, subnet mask, and default gateway for the iSCSI adapter.
6 In the IPv6 settings section, disable IPv6 or select an appropriate option for obtaining IPv6
addresses.
Note Automatic options and the static option are mutually exclusive.
Obtain IPv6 addresses automatically through Router Advertisement: Use router advertisement to obtain IPv6 addresses.
Override Link-local address for IPv6: Override the link-local IP address by configuring a static IP address.
7 In the DNS settings section, provide IP addresses for a preferred DNS server and an alternate
DNS server.
An example of a dependent iSCSI adapter is a Broadcom 5709 NIC. When installed on a host, it
presents its two components, a standard network adapter and an iSCSI engine, to the same port.
The iSCSI engine appears on the list of storage adapters as an iSCSI adapter (vmhba).
The iSCSI adapter is enabled by default. To make it functional, you must connect it, through a
virtual VMkernel adapter (vmk), to a physical network adapter (vmnic) associated with it. You can
then configure the iSCSI adapter.
After you configure the dependent hardware iSCSI adapter, the discovery and authentication
data is passed through the network connection. The iSCSI traffic goes through the iSCSI engine,
bypassing the network.
The entire setup and configuration process for the dependent hardware iSCSI adapters involves
several steps.
View Dependent Hardware iSCSI Adapters: View a dependent hardware iSCSI adapter to verify that it is correctly loaded.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default iSCSI name and alias assigned to your adapter.
Determine Association Between iSCSI and Network Adapters: You must create network connections to bind dependent iSCSI and physical network adapters. To create the connections correctly, determine the name of the physical NIC with which the dependent hardware iSCSI adapter is associated.
Configure Port Binding for iSCSI or iSER: Configure connections for the traffic between the iSCSI component and the physical network adapters. The process of configuring these connections is called port binding.
Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Storage Adapter: If your iSCSI environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Set Up CHAP for Target: You can also configure different CHAP credentials for each discovery address or static target.
Enable Jumbo Frames for Networking: If your iSCSI environment supports Jumbo Frames, enable them for the adapter.
n When you use any dependent hardware iSCSI adapter, performance reporting for a NIC
associated with the adapter might show little or no activity, even when iSCSI traffic is heavy.
This behavior occurs because the iSCSI traffic bypasses the regular networking stack.
n If you use a third-party virtual switch, for example Cisco Nexus 1000V DVS, disable automatic
pinning. Use manual pinning instead, making sure to connect a VMkernel adapter (vmk) to an
appropriate physical NIC (vmnic). For information, refer to your virtual switch vendor
documentation.
n The Broadcom iSCSI adapter performs data reassembly in hardware, which has a limited
buffer space. When you use the Broadcom iSCSI adapter in a congested network or under
heavy load, enable flow control to avoid performance degradation.
Flow control manages the rate of data transmission between two nodes to prevent a fast
sender from overrunning a slow receiver. For best results, enable flow control at the end
points of the I/O path, at the hosts and iSCSI storage systems.
To enable flow control for the host, use the esxcli system module parameters command.
For details, see the VMware knowledge base article at http://kb.vmware.com/kb/1013413
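The general shape of that command is sketched below. The module name and parameter string are driver-specific placeholders; see the knowledge base article and your NIC vendor documentation for the exact values.
esxcli system module parameters list -m MODULE_NAME
esxcli system module parameters set -m MODULE_NAME -p "PARAMETER=VALUE"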
If installed, the dependent hardware iSCSI adapter (vmhba#) appears on the list of storage
adapters under such category as, for example, Broadcom iSCSI Adapter. If the dependent
hardware adapter does not appear on the list of storage adapters, check whether it needs to be
licensed. See your vendor documentation.
Procedure
The default details for the adapter appear, including the iSCSI name, iSCSI alias, and the
status.
What to do next
Although the dependent iSCSI adapter is enabled by default, to make it functional, you must set
up networking for the iSCSI traffic and bind the adapter to the appropriate VMkernel iSCSI port.
You then configure discovery addresses and CHAP parameters.
Procedure
4 Select the iSCSI adapter (vmhba#) and click the Network Port Binding tab under adapter
details.
5 Click Add.
The network adapter (vmnic#) that corresponds to the dependent iSCSI adapter is listed in
the Physical Network Adapter column.
What to do next
If the VMkernel Adapter column is empty, create a VMkernel adapter (vmk#) for the physical
network adapter (vmnic#) and then bind them to the associated dependent hardware iSCSI. See
Setting Up Network for iSCSI and iSER.
When you use the software iSCSI adapters, consider the following:
n Designate a separate network adapter for iSCSI. Do not use iSCSI on 100 Mbps or slower
adapters.
n Avoid hard coding the name of the software adapter, vmhbaXX, in the scripts. It is possible
for the name to change from one ESXi release to another. The change might cause failures of
your existing scripts if they use the hardcoded old name. The name change does not affect
the behavior of the iSCSI software adapter.
The process of configuring the software iSCSI adapter involves several steps.
Activate or Disable the Software iSCSI Adapter: Activate your software iSCSI adapter so that your host can use it to access iSCSI storage.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default iSCSI name and alias assigned to your adapter.
Configure Port Binding for iSCSI or iSER: Configure connections for the traffic between the iSCSI component and the physical network adapters. The process of configuring these connections is called port binding.
Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host: Set up dynamic discovery. With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Storage Adapter: If your iSCSI environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your adapter.
Set Up CHAP for Target: You can also configure different CHAP credentials for each discovery address or static target.
Enable Jumbo Frames for Networking: If your iSCSI environment supports Jumbo Frames, enable them for the adapter.
Prerequisites
Note If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled and the
network configuration is created at the first boot. If you disable the adapter, it is reenabled each
time you boot the host.
Procedure
Enable the software iSCSI adapter:
a Under Storage, click Storage Adapters, and click the Add icon.
b Select Software iSCSI Adapter and confirm that you want to add the adapter.
The software iSCSI adapter (vmhba#) is enabled and appears on the list of storage adapters. After enabling the adapter, the host assigns the default iSCSI name to it. You can now complete the adapter configuration.
Disable the software iSCSI adapter:
a Under Storage, click Storage Adapters, and select the adapter (vmhba#) to disable.
b Click the Properties tab.
c Click Disable and confirm that you want to disable the adapter.
After the reboot, the adapter no longer appears on the list of storage adapters. The storage devices associated with the adapter become inaccessible. You can later activate the adapter.
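If you work from the ESXi Shell instead of the vSphere Client, a minimal sketch of the equivalent enable and verification commands is:
esxcli iscsi software set --enabled=true
esxcli iscsi software get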
For more information about the iSER protocol, see Using iSER Protocol with ESXi.
The entire setup and configuration process for VMware iSER involves several steps.
Install and View an RDMA Capable Network Adapter: To configure iSER with ESXi, you must first install an RDMA capable network adapter, for example, Mellanox Technologies MT27700 Family ConnectX-4. After you install this type of adapter, the vSphere Client displays its two components, an RDMA adapter and a physical network adapter vmnic#.
Enable the VMware iSER Adapter: To be able to use the RDMA capable adapter for iSCSI, use the esxcli command to enable the VMware iSER storage component. The component appears in the vSphere Client as a vmhba# storage adapter under the VMware iSCSI over RDMA (iSER) Adapter category.
Modify General Properties for iSCSI or iSER Adapters: If needed, change the default name and alias assigned to the iSER storage adapter vmhba#.
Configure Port Binding for iSCSI or iSER: You must create network connections to bind the iSER storage adapter vmhba# and the RDMA capable network adapter vmnic#. The process of configuring these connections is called port binding.
Note iSER does not support NIC teaming. When configuring port binding, use only one RDMA adapter per vSwitch.
Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host: Set up the dynamic or static discovery for your iSER storage adapter vmhba#. With the dynamic discovery, each time the initiator contacts a specified iSER storage system, it sends the SendTargets request to the system. The iSER system responds by supplying a list of available targets to the initiator. With the static discovery, you manually enter information for the targets.
Set Up CHAP for iSCSI or iSER Storage Adapter: If your environment uses the Challenge Handshake Authentication Protocol (CHAP), configure it for your iSER storage adapter vmhba#.
Set Up CHAP for Target: You can also configure different CHAP credentials for each discovery address or static target.
Enable Jumbo Frames for Networking: If your environment supports Jumbo Frames, enable them for the iSER storage adapter vmhba#.
You can use the vSphere Client to view the RDMA adapter and its corresponding network
adapter.
Procedure
In this example, the RDMA adapter appears on the list as vmrdma0. The Paired Uplink column
displays the network component as the vmnic1 physical network adapter.
3 To verify the description of the adapter, select the RDMA adapter from the list, and click the
Properties tab.
Results
You can use the vmnic# network component of the adapter for such storage configurations as
iSER or NVMe over RDMA. For the iSER configuration steps, see Configure iSER with ESXi. For
information about NVMe over RDMA, see Configure Adapters for NVMe over RDMA (RoCE v2)
Storage.
Prerequisites
n Make sure that your iSCSI storage supports the iSER protocol.
n Install the RDMA capable adapter on your ESXi host. For information, see Install and View an
RDMA Capable Network Adapter.
n For RDMA capable adapters that support RDMA over Converged Ethernet (RoCE), determine
the RoCE version that the adapter uses.
n Enable flow control on the ESXi host. To enable flow control for the host, use the esxcli
system module parameters command. For details, see the VMware knowledge base article at
http://kb.vmware.com/kb/1013413.
n Make sure to configure RDMA switch ports to create lossless connections between the iSER
initiator and target.
Procedure
1 Use the ESXi Shell or vSphere CLI to enable the VMware iSER storage adapter and set its
RoCE version.
c Specify the RoCE version that iSER uses to connect to the target.
Use the RoCE version of the RDMA capable adapter. The command you enter is similar to
the following:
When the command completes, a message similar to the following appears in the
VMkernel log.
If you do not specify the RoCE version, the host defaults to the highest RoCE version the
RDMA capable adapter supports.
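As a reference, enabling the VMware iSER adapter from the ESXi Shell typically uses the esxcli rdma iser namespace. The following is a minimal sketch; the options for specifying the RoCE version are not shown here because they can vary by release, so check the command help on your host.
esxcli rdma iser add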
c Under Storage, click Storage Adapters, and review the list of adapters.
If you enabled the adapter, it appears as a storage vmhba# on the list under the VMware
iSCSI over RDMA (iSER) Adapter category.
3 Select the iSER storage vmhba# to review its properties or perform the following tasks.
Configure port binding for the iSER storage adapter: You must create network connections to bind the iSER storage adapter vmhba# and the RDMA capable network adapter vmnic#. The process of configuring these connections is called port binding. For general information about port binding, see Setting Up Network for iSCSI and iSER. To configure port binding for iSER, see Configure Port Binding for iSCSI or iSER.
Set up dynamic or static discovery for the iSER storage adapter: For information, see Configure Dynamic or Static Discovery for iSCSI and iSER on ESXi Host.
Configure the Challenge Handshake Authentication Protocol (CHAP) for the iSER storage adapter: For information, see Set Up CHAP for iSCSI or iSER Storage Adapter.
What to do next
For more information, see the VMware knowledge base article at https://kb.vmware.com/s/
article/79148.
Important When you modify any default properties for your adapters, make sure to use correct
formats for their names and IP addresses.
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
4 Click the Properties tab, and click Edit in the General panel.
iSCSI Name: Unique name formed according to iSCSI standards that identifies the iSCSI adapter. If you change the name, make sure that the name you enter is worldwide unique and properly formatted. Otherwise, certain storage devices might not recognize the iSCSI adapter.
iSCSI Alias: A friendly name you use instead of the iSCSI name.
Results
If you change the iSCSI name, it is used for new iSCSI sessions. For existing sessions, the new
settings are not used until you log out and log in again.
What to do next
For other configuration steps you can perform for the iSCSI or iSER storage adapters, see the
following topics:
Configuring the network connection involves creating a virtual VMkernel adapter for each
physical network adapter. You use 1:1 mapping between each virtual and physical network
adapter. You then associate the VMkernel adapter with an appropriate iSCSI or iSER adapter.
This process is called port binding.
Figure: Port binding. A VMkernel adapter (vmk) on a vSwitch maps 1:1 to a physical NIC (vmnic) that connects the host to the IP network.
n You can connect the software iSCSI adapter with any physical NICs available on your host.
n The dependent iSCSI adapters must be connected only to their own physical NICs.
n You must connect the iSER adapter only to the RDMA-capable network adapter.
For specific considerations on when and how to use network connections with software iSCSI,
see the VMware knowledge base article at http://kb.vmware.com/kb/2038869.
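For reference, port binding can also be performed and verified from the ESXi Shell. The adapter name vmhba67 and the VMkernel adapter vmk1 below are placeholders.
esxcli iscsi networkportal add -A vmhba67 -n vmk1
esxcli iscsi networkportal list -A vmhba67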
You can use multiple physical adapters in a single or multiple switch configurations.
In the multiple switch configuration, you designate a separate vSphere switch for each virtual-to-
physical adapter pair.
Figure: Multiple switch configuration. Each VMkernel adapter (vmk1 for iSCSI1, vmk2 for iSCSI2) is paired with its own physical adapter (vmnic1, vmnic2) on a separate vSphere switch.
An alternative is to add all NICs and VMkernel adapters to the single vSphere switch. The number
of VMkernel adapters must correspond to the number of physical adapters on the vSphere
Standard switch. The single switch configuration is not appropriate for iSER because iSER does
not support NIC teaming.
Figure: Single switch configuration. Both VMkernel adapters (vmk1 for iSCSI1, vmk2 for iSCSI2) and both physical adapters (vmnic1, vmnic2) are attached to a single vSphere switch.
For that type of configuration, you must override the default network setup and make sure that
each VMkernel adapter maps to only one corresponding active physical adapter.
You can also use distributed switches. For more information about vSphere distributed switches
and how to change the default network policy, see the vSphere Networking documentation.
The following considerations apply when you use multiple physical adapters:
n Physical network adapters must be on the same subnet as the storage system they connect
to.
n (Applies only to iSCSI and not to iSER) If you use separate vSphere switches, you must
connect them to different IP subnets. Otherwise, VMkernel adapters might experience
connectivity problems and the host fails to discover the LUNs.
n The single switch configuration is not appropriate for iSER because iSER does not support
NIC teaming.
Do not use port binding when any of the following conditions exist:
n Array target iSCSI ports are in a different broadcast domain and IP subnet.
n VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP
subnets, or use different virtual switches.
Note In iSER configurations, the VMkernel adapters used for iSER connectivity cannot be
used for converged traffic. The VMkernel adapters that you created to enable connectivity
between the ESXi host with iSER and the iSER target must be used only for iSER traffic.
Note Make sure that all target portals are reachable from all VMkernel ports when port binding
is used. Otherwise, iSCSI sessions might fail to create. As a result, the rescan operation might
take longer than expected.
No Port Binding
If you do not use port binding, the ESXi networking layer selects the best VMkernel port based
on its routing table. The host uses the port to create an iSCSI session with the target portal.
Without the port binding, only one session per each target portal is created.
If your target has only one network portal, you can create multiple paths to the target by adding
multiple VMkernel ports on your ESXi host and binding them to the iSCSI initiator.
Figure: Multiple paths to a single target portal. Four VMkernel ports in the same subnet (vmk1 192.168.0.1/24 on vmnic1, vmk2 192.168.0.2/24 on vmnic2, vmk3 192.168.0.3/24 on vmnic3, and vmk4 192.168.0.4/24 on vmnic4) connect through the network to a single target at 192.168.0.10/24.
In this example, all initiator ports and the target portal are configured in the same subnet. The
target is reachable through all bound ports. You have four VMkernel ports and one target portal,
so total of four paths are created.
You can create multiple paths by configuring multiple ports and target portals on different IP
subnets. By keeping initiator and target ports in different subnets, you can force ESXi to create
paths through specific ports. In this configuration, you do not use port binding because port
binding requires that all initiator and target ports are on the same subnet.
Figure: Two VMkernel ports, vmk1 and vmk2, connect through the IP network to ports on SP/Controller A and SP/Controller B, with each VMkernel port in the same subnet as one port on each controller.
ESXi selects vmk1 when connecting to Port 0 of Controller A and Controller B because all three
ports are on the same subnet. Similarly, vmk2 is selected when connecting to Port 1 of Controller
A and B. You can use NIC teaming in this configuration.
In this example, you keep all bound VMkernel ports in one subnet (N1) and configure all target
portals in another subnet (N2). You can then add a static route for the target subnet (N2).
Figure: Routed configuration. vmk1 (192.168.1.1/24 on vmnic1) and vmk2 (192.168.1.2/24 on vmnic2) in subnet N1 connect through the IP network to Port 0 of SP/Controller A (10.115.179.1/24) and Port 0 of SP/Controller B (10.115.179.2/24) in subnet N2.
In this configuration, you use static routing when using different subnets. You cannot use the port
binding with this configuration.
Figure: Static routing configuration. vmk1 (192.168.1.1/24 on vmnic1) connects through the IP network to Port 0 of SP/Controller A at 10.115.155.1/24, and vmk2 (192.168.2.1/24 on vmnic2) connects to Port 0 of SP/Controller A at 10.115.179.1/24.
You configure vmk1 and vmk2 in separate subnets, 192.168.1.0 and 192.168.2.0. Your target portals
are also in separate subnets, 10.115.155.0 and 10.115.179.0.
You can add a static route to 10.115.155.0 through vmk1. Make sure that the gateway is reachable
from vmk1.
You then add a static route to 10.115.179.0 through vmk2. Make sure that the gateway is reachable
from vmk2.
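A hedged sketch of adding these routes from the ESXi Shell, assuming hypothetical gateways 192.168.1.253 and 192.168.2.253 that are reachable from vmk1 and vmk2 respectively:
# esxcli network ip route ipv4 add --network 10.115.155.0/24 --gateway 192.168.1.253
# esxcli network ip route ipv4 add --network 10.115.179.0/24 --gateway 192.168.2.253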
Starting with vSphere 6.5, you can configure a separate gateway per VMkernel port. If you use
DHCP to obtain IP configuration for a VMkernel port, gateway information can also be obtained
using DHCP.
To see gateway information per VMkernel port, use the following command:
# esxcli network ip interface ipv4 address list
Name  IPv4 Address    IPv4 Netmask   IPv4 Broadcast  Address Type  Gateway         DHCP DNS
----  --------------  -------------  --------------  ------------  --------------  --------
vmk0  10.115.155.122  255.255.252.0  10.115.155.255  DHCP          10.115.155.253  true
vmk1  10.115.179.209  255.255.252.0  10.115.179.255  DHCP          10.115.179.253  true
vmk2  10.115.179.146  255.255.252.0  10.115.179.255  DHCP          10.115.179.253  true
With separate gateways per VMkernel port, you use port binding to reach targets in different
subnets.
The following tasks discuss the network configuration with a vSphere Standard switch and a
single physical network adapter. If you have multiple network adapters, see Multiple Network
Adapters in iSCSI or iSER Configuration.
Note iSER does not support NIC teaming. When configuring port binding for iSER, use only one
RDMA-enabled physical adapter (vmnic#) and one VMkernel adapter (vmk#) per vSwitch.
You can also use the VMware vSphere® Distributed Switch™ and VMware NSX® Virtual Switch™
in the port binding configuration. For information about NSX virtual switches, see the VMware NSX
Data Center for vSphere documentation.
If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a
separate distributed port group per each physical NIC. Then set the team policy so that each
distributed port group has only one active uplink port. For detailed information on distributed
switches, see the vSphere Networking documentation.
Procedure
What to do next
For other configuration steps you can perform for the iSCSI or iSER storage adapters, see the
following topics:
Prerequisites
n If you are creating a VMkernel adapter for dependent hardware iSCSI, you must use the
physical network adapter (vmnic#) that corresponds to the iSCSI component. See Determine
Association Between iSCSI and Network Adapters.
n With the iSER adapter, make sure to use an appropriate RDMA-capable vmnic#. See Install
and View an RDMA Capable Network Adapter.
Procedure
5 Click the Add adapters icon, and select an appropriate network adapter (vmnic#) to use for
iSCSI.
A network label is a friendly name that identifies the VMkernel adapter that you are creating,
for example, iSCSI or iSER.
You created the virtual VMkernel adapter (vmk#) for a physical network adapter (vmnic#) on
your host.
a Under Networking, select VMkernel Adapters, and select the VMkernel adapter (vmk#)
from the list.
b Click the Policies tab, and verify that the corresponding physical network adapter
(vmnic#) appears as an active adapter under Teaming and failover.
What to do next
If your host has one physical network adapter for iSCSI traffic, bind the VMkernel adapter that
you created to the iSCSI or iSER vmhba adapter.
If you have multiple network adapters, you can create additional VMkernel adapters and then
perform iSCSI binding. The number of virtual adapters must correspond to the number of
physical adapters on the host. For information, see Multiple Network Adapters in iSCSI or iSER
Configuration.
Prerequisites
Create a virtual VMkernel adapter for each physical network adapter on your host. If you use
multiple VMkernel adapters, set up the correct network policy.
Procedure
3 Under Storage, click Storage Adapters, and select the appropriate iSCSI or iSER adapter
(vmhba# ) from the list.
4 Click the Network Port Binding tab and click the Add icon.
Note Make sure that the network policy for the VMkernel adapter is compliant with the
binding requirements.
You can bind the software iSCSI adapter to one or more VMkernel adapters. For a dependent
hardware iSCSI adapter or the iSER adapter, only one VMkernel adapter associated with the
correct physical NIC is available.
6 Click OK.
The network connection appears on the list of network port bindings for the iSCSI or iSER
adapter.
Procedure
3 Under Storage, click Storage Adapters, and select the appropriate iSCSI or iSER adapter
from the list.
4 Click the Network Port Binding tab and select the VMkernel adapter from the list.
6 Review the VMkernel adapter and physical adapter information by switching between
available tabs.
After you create network connections for iSCSI, an iSCSI indicator becomes enabled in the
vSphere Client. The indicator shows that a particular virtual or physical network adapter is iSCSI-
bound. To avoid disruptions in iSCSI traffic, follow these guidelines and considerations when
managing iSCSI-bound virtual and physical network adapters:
n Make sure that the VMkernel network adapters are assigned addresses on the same subnet
as the iSCSI storage portal they connect to.
n iSCSI adapters using VMkernel adapters cannot connect to iSCSI ports on different subnets,
even if the iSCSI adapters discover those ports.
n When using separate vSphere switches to connect physical network adapters and VMkernel
adapters, make sure that the vSphere switches connect to different IP subnets.
n If VMkernel adapters are on the same subnet, they must connect to a single vSwitch.
n If you migrate VMkernel adapters to a different vSphere switch, move associated physical
adapters.
n Do not make changes that might break the association between VMkernel adapters and physical
network adapters. You can break the association if you remove one of the adapters or the
vSphere switch that connects them, or if you change the 1:1 network policy for their
connection.
Problem
The VMkernel adapter's port group policy is considered non-compliant in the following cases:
n The VMkernel adapter is connected to more than one physical network adapter.
Solution
Set up the correct network policy for the iSCSI-bound VMkernel adapter. See Setting Up Network
for iSCSI and iSER.
Jumbo Frames are Ethernet frames with a size that exceeds 1500 bytes. The maximum
transmission unit (MTU) parameter is typically used to measure the size of Jumbo Frames.
When you use Jumbo Frames for iSCSI traffic, the following considerations apply:
n Check with your vendors to ensure your physical NICs and iSCSI adapters support Jumbo
Frames.
n To set up and verify physical network switches for Jumbo Frames, consult your vendor
documentation.
The following table explains the level of support that ESXi provides to Jumbo Frames.
To enable Jumbo Frames, change the default value of the maximum transmission units (MTU)
parameter. You change the MTU parameter on the vSphere switch that you use for iSCSI traffic.
For more information, see the vSphere Networking documentation.
Procedure
3 Under Networking, click Virtual switches, and select the vSphere switch that you want to
modify from the list.
This step sets the MTU for all physical NICs on that standard switch. Set the MTU value to the
largest MTU size among all NICs connected to the standard switch. ESXi supports an MTU
size of up to 9000 bytes.
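If you prefer the command line, you can make the equivalent change from the ESXi Shell. A sketch, assuming a standard switch named vSwitch1 and a VMkernel adapter vmk1 used for iSCSI:
# esxcli network vswitch standard set -v vSwitch1 -m 9000
# esxcli network ip interface set -i vmk1 -m 9000
Both the switch and the VMkernel adapter must use the larger MTU for Jumbo Frames to take effect end to end.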
Use the Advanced Options settings to change the MTU parameter for the iSCSI HBA.
Procedure
3 Under Storage, click Storage Adapters, and select the independent hardware iSCSI adapter
from the list of adapters.
Dynamic Discovery
Also known as SendTargets discovery. Each time the initiator contacts a specified iSCSI
server, the initiator sends the SendTargets request to the server. The server responds by
supplying a list of available targets to the initiator. The names and IP addresses of these
targets appear on the Static Discovery tab. If you remove a static target added by dynamic
discovery, the target might be returned to the list the next time a rescan happens, the
storage adapter is reset, or the host is rebooted.
Note With software and dependent hardware iSCSI, ESXi filters target addresses based on
the IP family of the iSCSI server address specified. If the address is IPv4, IPv6 addresses that
might come in the SendTargets response from the iSCSI server are filtered out. When DNS
names are used to specify an iSCSI server, or when the SendTargets response from the iSCSI
server has DNS names, ESXi relies on the IP family of the first resolved entry from DNS
lookup.
Static Discovery
In addition to the dynamic discovery method, you can use static discovery and manually
enter information for the targets. The iSCSI or iSER adapter uses a list of targets that you
provide to contact and communicate with the iSCSI servers.
When you set up static or dynamic discovery, you can only add new iSCSI targets. You cannot
change any parameters of an existing target. To make changes, remove the existing target and
add a new one.
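You can also add discovery addresses from the ESXi Shell. A minimal sketch, where the adapter name vmhba67, the target address, and the IQN are placeholders:
# esxcli iscsi adapter discovery sendtarget add -A vmhba67 -a 10.115.179.10:3260
# esxcli iscsi adapter discovery statictarget add -A vmhba67 -a 10.115.179.10:3260 -n iqn.1998-01.com.example:target1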
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
What to do next
For other configuration steps you can perform for the iSCSI or iSER storage adapters, see the
following topics:
Procedure
3 Under Storage, click Storage Adapters, and select the iSCSI adapter to modify from the list.
If you are removing the static target that was dynamically discovered, you need to remove it
from the storage system before performing the rescan. Otherwise, your host will
automatically discover and add the target to the list of static targets when you rescan the
adapter.
CHAP uses a three-way handshake algorithm to verify the identity of your host and, if applicable,
of the iSCSI target when the host and target establish a connection. The verification is based on a
predefined private value, or CHAP secret, that the initiator and target share.
ESXi supports CHAP authentication at the adapter level. In this case, all targets receive the same
CHAP name and secret from the iSCSI initiator. For software and dependent hardware iSCSI
adapters, and for iSER adapters, ESXi also supports per-target CHAP authentication, which allows
you to configure different credentials for each target to achieve a greater level of security.
Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system. Also,
obtain information about the CHAP authentication method the system supports. If CHAP is
enabled, configure it for your initiators, making sure that the CHAP authentication credentials
match the credentials on the iSCSI storage.
Unidirectional CHAP
In unidirectional CHAP authentication, the target authenticates the initiator, but the initiator
does not authenticate the target.
Bidirectional CHAP
The bidirectional CHAP authentication adds an extra level of security. With this method, the
initiator can also authenticate the target. VMware supports this method for software and
dependent hardware iSCSI adapters, and for iSER adapters.
For software and dependent hardware iSCSI adapters, and for iSER adapters, you can set
unidirectional CHAP and bidirectional CHAP for each adapter or at the target level. Independent
hardware iSCSI supports CHAP only at the adapter level.
When you set the CHAP parameters, specify a security level for CHAP.
Note When you specify the CHAP security level, how the storage array responds depends on
the array’s CHAP implementation and is vendor-specific. For information on CHAP authentication
behavior in different initiator and target configurations, consult the array documentation.
None
The host does not use CHAP authentication. If authentication is enabled, use this option to disable it.
Supported storage adapters: Independent hardware iSCSI, Software iSCSI, Dependent hardware iSCSI, iSER.
Use unidirectional CHAP if required by target
The host prefers a non-CHAP connection, but can use a CHAP connection if required by the target.
Supported storage adapters: Software iSCSI, Dependent hardware iSCSI, iSER.
Use unidirectional CHAP unless prohibited by target
The host prefers CHAP, but can use non-CHAP connections if the target does not support CHAP.
Supported storage adapters: Independent hardware iSCSI, Software iSCSI, Dependent hardware iSCSI, iSER.
Use unidirectional CHAP
The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails.
Supported storage adapters: Independent hardware iSCSI, Software iSCSI, Dependent hardware iSCSI, iSER.
Use bidirectional CHAP
The host and the target support bidirectional CHAP.
Supported storage adapters: Software iSCSI, Dependent hardware iSCSI, iSER.
The CHAP name cannot exceed 511 alphanumeric characters and the CHAP secret cannot exceed
255 alphanumeric characters. Some adapters, for example the QLogic adapter, might have lower
limits, 255 for the CHAP name and 100 for the CHAP secret.
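Adapter-level CHAP can also be configured from the ESXi Shell. The following is a hedged sketch; the adapter name, CHAP name, and secret are placeholders, and option values can vary between releases:
# esxcli iscsi adapter auth chap set --adapter=vmhba67 --direction=uni --level=required --authname=iscsi-initiator-01 --secret=MyChapSecret
# esxcli iscsi adapter auth chap get --adapter=vmhba67
The get command displays the current CHAP configuration so that you can verify the change.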
Prerequisites
n Before setting up CHAP parameters for software or dependent hardware iSCSI, determine
whether to configure unidirectional or bidirectional CHAP. Independent hardware iSCSI
adapters do not support bidirectional CHAP.
n Verify CHAP parameters configured on the storage side. Parameters that you configure must
match the ones on the storage side.
Procedure
c Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
2 Click the Properties tab and click Edit in the Authentication panel.
n None
n Use bidirectional CHAP. To configure bidirectional CHAP, you must select this option.
Make sure that the name you specify matches the name configured on the storage side.
n To set the CHAP name to the iSCSI adapter name, select Use initiator name.
n To set the CHAP name to anything other than the iSCSI initiator name, deselect Use
initiator name and enter a name in the Name text box.
5 Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that
you enter on the storage side.
Make sure to use different secrets for the outgoing and incoming CHAP.
7 Click OK.
Results
If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions,
new settings are not used until you log out and log in again.
What to do next
For other configuration steps you can perform for the iSCSI or iSER storage adapters, see the
following topics:
The CHAP name cannot exceed 511 and the CHAP secret 255 alphanumeric characters.
Prerequisites
n Verify CHAP parameters configured on the storage side. Parameters that you configure must
match the ones on the storage side.
Procedure
c Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
3 From the list of available targets, select a target to configure and click Authentication.
n None
n Use bidirectional CHAP. To configure bidirectional CHAP, you must select this option.
Make sure that the name you specify matches the name configured on the storage side.
n To set the CHAP name to the iSCSI adapter name, select Use initiator name.
n To set the CHAP name to anything other than the iSCSI initiator name, deselect Use
initiator name and enter a name in the Name text box.
6 Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that
you enter on the storage side.
Make sure to use different secrets for the outgoing and incoming CHAP.
8 Click OK.
Results
If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions,
new settings are not used until you log out and log in again.
The following table lists advanced iSCSI parameters that you can configure using the vSphere
Client. In addition, you can use the vSphere CLI commands to configure some of the advanced
parameters. For information, see the Getting Started with ESXCLI documentation.
Depending on the type of your adapters, certain parameters might not be available.
Important Do not change the advanced iSCSI settings unless VMware Support or your storage
vendor directs you to change them.
Header Digest Increases data integrity. When the header digest parameter is enabled, the system
performs a checksum over each header part of the iSCSI Protocol Data Unit (PDU). The
system verifies the data using the CRC32C algorithm.
Data Digest Increases data integrity. When the data digest parameter is enabled, the system
performs a checksum over each PDU data part. The system verifies the data using the
CRC32C algorithm.
Note Systems that use the Intel Nehalem processors offload the iSCSI digest
calculations for software iSCSI. This offload helps to reduce the impact on performance.
ErrorRecoveryLevel iSCSI Error Recovery Level (ERL) value that the iSCSI initiator on the host negotiates
during a login.
LoginRetryMax Maximum number of times the ESXi iSCSI initiator attempts to log into a target before
ending the attempts.
MaxOutstandingR2T Defines the R2T (Ready to Transfer) PDUs that can be in transition before an
acknowledge PDU is received.
FirstBurstLength Specifies the maximum amount of unsolicited data an iSCSI initiator can send to the
target during the execution of a single SCSI command, in bytes.
MaxBurstLength Maximum SCSI data payload in a Data-In or a solicited Data-Out iSCSI sequence, in bytes.
MaxRecvDataSegLength Maximum data segment length, in bytes, that can be received in an iSCSI PDU.
MaxCommands Maximum SCSI commands that can be queued on the iSCSI adapter.
DefaultTimeToWait Minimum time in seconds to wait before attempting a logout or an active task
reassignment after an unexpected connection termination or reset.
DefaultTimeToRetain Maximum time in seconds, during which reassigning the active task is still possible after a
connection termination or reset.
LoginTimeout Time, in seconds, that the initiator waits for the login response to finish.
LogoutTimeout Time, in seconds, that the initiator waits for a response to the Logout request PDU.
RecoveryTimeout Specifies the amount of time, in seconds, that can lapse while a session recovery is
performed. If the timeout exceeds its limit, the iSCSI initiator ends the session.
No-Op Interval Specifies the time interval, in seconds, between NOP-Out requests sent from your iSCSI
initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify
that a connection between the iSCSI initiator and the iSCSI target is active.
No-Op Timeout Specifies the amount of time, in seconds, that can lapse before your host receives a
NOP-In message. The iSCSI target sends the message in response to the NOP-Out
request. When the no-op timeout limit is exceeded, the initiator ends the current session
and starts a new one.
ARP Redirect With this parameter enabled, storage systems can move iSCSI traffic dynamically from
one port to another. Storage systems that perform array-based failovers require the ARP
parameter.
Delayed ACK With this parameter enabled, storage systems can delay an acknowledgment of received
data packets.
Caution Do not make any changes to the advanced iSCSI settings unless you are working with
the VMware support team or otherwise have thorough information about the values to provide
for the settings.
Prerequisites
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
Option Description
At the adapter level Click the Advanced Options tab and click Edit.
5 Enter any required values for the advanced parameters you want to modify.
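You can also view and change these parameters from the ESXi Shell. A hedged sketch, where the adapter name is a placeholder and NoopOutTimeout is used only as an example key:
# esxcli iscsi adapter param get -A vmhba67
# esxcli iscsi adapter param set -A vmhba67 -k NoopOutTimeout -v 30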
By default, software iSCSI and dependent hardware iSCSI initiators start one iSCSI session
between each initiator port and each target port. If your iSCSI initiator or target has more than
one port, your host can have multiple sessions established. The default number of sessions for
each target equals the number of ports on the iSCSI adapter times the number of target ports.
Using vSphere CLI, you can display all current sessions to analyze and debug them. To create
more paths to storage systems, you can increase the default number of sessions by duplicating
existing sessions between the iSCSI adapter and target ports.
You can also establish a session to a specific target port. This capability is useful if your host
connects to a single-port storage system that presents only one target port to your initiator. The
system then redirects additional sessions to a different target port. Establishing a new session
between your iSCSI initiator and another target port creates an additional path to the storage
system.
n Some storage systems do not support multiple sessions from the same initiator name or
endpoint. Attempts to create multiple sessions to such targets can result in an unpredictable
behavior of your iSCSI environment.
n Storage vendors can provide automatic session managers. Using an automatic session
manager to add or delete sessions does not guarantee lasting results and can interfere with
the storage performance.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Option Description
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Option Description
-A|--adapter=str The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str The ISID of a session to duplicate. You can find it by listing all sessions.
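For example, a sketch of duplicating a session with these options, where the adapter name and ISID are placeholders taken from the session list output:
# esxcli iscsi session list -A vmhba67
# esxcli iscsi session add -A vmhba67 -s 00023d000001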
What to do next
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Option Description
-A|--adapter=str The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str The ISID of a session to remove. You can find it by listing all sessions.
What to do next
You can boot from the SAN if you do not want to handle maintenance of local storage or if you
have diskless hardware configurations, such as blade systems.
n Independent hardware iSCSI: Configure the iSCSI HBA to boot from the SAN. For information on
configuring the HBA, see Configure Independent Hardware iSCSI Adapter for SAN Boot.
n iBFT: Use the software iSCSI adapter and a network adapter that supports the iSCSI Boot
Firmware Table (iBFT) format. For information, see VMware ESXi Installation and Setup.
The following guidelines apply to booting from the independent hardware iSCSI and iBFT.
n Review any vendor recommendations for the hardware you use in your boot configuration.
n For installation prerequisites and requirements, review vSphere Installation and Setup.
n The boot LUN must be visible only to the host that uses the LUN. No other host on the
SAN is permitted to see that boot LUN.
n If a LUN is used for a VMFS datastore, multiple hosts can share the LUN.
n With the independent hardware iSCSI only, you can place the diagnostic partition on the
boot LUN. If you configure the diagnostic partition in the boot LUN, this LUN cannot be
shared across multiple hosts. If a separate LUN is used for the diagnostic partition,
multiple hosts can share the LUN.
n If you boot from SAN using iBFT, you cannot set up a diagnostic partition on a SAN LUN.
To collect your host's diagnostic information, use the vSphere ESXi Dump Collector on a
remote server. For information about the ESXi Dump Collector, see vCenter Server
Installation and Setup and vSphere Networking.
Caution If you use scripted installation to install ESXi when booting from a SAN, you must take
special steps to avoid unintended data loss.
Procedure
1 Connect network cables, referring to any cabling guide that applies to your setup.
Verify configuration of any routers or switches on your storage network. Storage systems
must be able to ping the iSCSI adapters in your hosts.
a Create a volume (or LUN) on the storage system for your host to boot from.
b Configure the storage system so that your host has access to the assigned LUN.
This step might involve updating ACLs with the IP addresses, iSCSI names, and the CHAP
authentication parameter you use on your host. On some storage systems, in addition to
providing access information for the ESXi host, you must also explicitly associate the
assigned LUN with the host.
e Record the iSCSI name and IP addresses of the targets assigned to the host.
This procedure discusses how to enable the QLogic iSCSI HBA to boot from the SAN. For more
information and more up-to-date details about QLogic adapter configuration settings, see the
QLogic website.
Procedure
2 Use the BIOS to set the host to boot from the installation media first.
3 During server POST, press Ctrl+Q to enter the QLogic iSCSI HBA configuration menu.
a From the Fast!UTIL Options menu, select Configuration Settings > Host Adapter
Settings.
b (Optional) Configure the following settings for your host adapter: initiator IP address,
subnet mask, gateway, initiator iSCSI name, and CHAP.
Procedure
1 From the Fast!UTIL Options menu, select Configuration Settings > iSCSI Boot Settings.
2 Before you can set SendTargets, set Adapter Boot mode to Manual.
n If only one iSCSI target and one LUN are available at the target address, leave Boot
LUN and iSCSI Name blank.
After your host reaches the target storage system, these text boxes are populated
with appropriate information.
n If more than one iSCSI target and LUN are available, supply values for Boot LUN and
iSCSI Name.
c Save changes.
4 From the iSCSI Boot Settings menu, select the primary boot device.
If more than one LUN exists within the target, you can select a specific LUN ID by pressing
Enter after you locate the iSCSI device.
6 Return to the Primary Boot Device Setting menu. After the rescan, Boot LUN and iSCSI
Name are populated. Change the value of Boot LUN to the appropriate LUN ID.
Check with your storage representative if your storage system supports Storage API - Array
Integration hardware acceleration features. If it does, refer to your vendor documentation to
enable hardware acceleration support on the storage system side. For more information, see
Chapter 24 Storage Hardware Acceleration.
n Do not change the path policy the system sets for you unless you understand the
implications of making such a change.
n Make several copies of your topology maps. For each element, consider what happens to
your SAN if the element fails.
n Cross off different links, switches, HBAs, and other elements to ensure that you did not
miss a critical failure point in your design.
n Ensure that the iSCSI HBAs are installed in the correct slots in the ESXi host, based on slot
and bus speed. Balance PCI bus load among the available buses in the server.
n Become familiar with the various monitor points in your storage network, at all visibility
points, including ESXi performance charts, Ethernet switch statistics, and storage
performance statistics.
n Change LUN IDs only when VMFS datastores deployed on the LUNs have no running virtual
machines. If you change the ID, virtual machines running on the VMFS datastore might fail.
After you change the ID of the LUN, you must rescan your storage to reset the ID on your
host. For information on using the rescan, see Storage Rescan Operations.
n If you change the default iSCSI name of your iSCSI adapter, make sure that the name you
enter is worldwide unique and properly formatted. To avoid storage access problems, never
assign the same iSCSI name to different adapters, even on different hosts.
If the network environment is properly configured, the iSCSI components provide adequate
throughput and low enough latency for iSCSI initiators and targets. If the network is congested
and links, switches or routers are saturated, iSCSI performance suffers and might not be
adequate for ESXi environments.
If issues occur with storage system performance, consult your storage system vendor’s
documentation for any relevant information.
When you assign LUNs, remember that you can access each shared LUN through a number of
hosts, and that a number of virtual machines can run on each host. One LUN used by the ESXi
host can service I/O from many different applications running on different operating systems.
Because of this diverse workload, the RAID group that contains the ESXi LUNs should not include
LUNs that other, non-ESXi hosts use for I/O-intensive applications.
Load balancing is the process of spreading server I/O requests across all available SPs and their
associated host server paths. The goal is to optimize performance in terms of throughput (I/O per
second, megabytes per second, or response times).
SAN storage systems require continual redesign and tuning to ensure that I/O is load balanced
across all storage system paths. To meet this requirement, distribute the paths to the LUNs
among all the SPs to provide optimal load balancing. Close monitoring indicates when it is
necessary to manually rebalance the LUN distribution.
Tuning statically balanced storage systems is a matter of monitoring the specific performance
statistics (such as I/O operations per second, blocks per second, and response time) and
distributing the LUN workload to spread the workload across all the SPs.
Each server application must have access to its designated storage with the following conditions:
Because each application has different requirements, you can meet these goals by selecting an
appropriate RAID group on the storage system.
n Place each LUN on a RAID group that provides the necessary performance levels. Monitor the
activities and resource use of other LUNs in the assigned RAID group. A high-performance
RAID group that has too many applications doing I/O to it might not meet performance goals
required by an application running on the ESXi host.
n To achieve maximum throughput for all the applications on the host during the peak period,
install enough network adapters or iSCSI hardware adapters. I/O spread across multiple ports
provides faster throughput and less latency for each application.
n To provide redundancy for software iSCSI, make sure that the initiator is connected to all
network adapters used for iSCSI connectivity.
n When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating
systems use and share that resource. The LUN performance required by the ESXi host might
be much higher than when you use regular physical machines. For example, if you expect to
run four I/O intensive applications, allocate four times the performance capacity for the ESXi
LUNs.
n When you use multiple ESXi systems with vCenter Server, the storage performance
requirements increase.
n The number of outstanding I/Os needed by applications running on an ESXi system must
match the number of I/Os the SAN can handle.
Network Performance
A typical SAN consists of a collection of computers connected to a collection of storage systems
through a network of switches. Several computers often access the same storage.
The following graphic shows several computer systems connected to a storage system through
an Ethernet switch. In this configuration, each system is connected through a single Ethernet link
to the switch. The switch is connected to the storage system through a single Ethernet link.
When systems read data from storage, the storage responds by sending enough data to fill the
link between the storage system and the Ethernet switch. It is unlikely that any single system or
virtual machine uses the full network speed. However, this situation can be expected when
many systems share one storage device.
When writing data to storage, multiple systems or virtual machines might attempt to fill their
links. As a result, the switch between the systems and the storage system might drop network
packets. The data drop might occur because the switch has more traffic to send to the storage
system than a single link can carry. The amount of data the switch can transmit is limited by the
speed of the link between it and the storage system.
Figure: Dropped packets. Multiple hosts send data over 1 Gbit links to the switch, which forwards the traffic to the storage system over a single 1 Gbit link, so the switch drops packets.
Recovering from dropped network packets results in large performance degradation. In addition
to time spent determining that data was dropped, the retransmission uses network bandwidth
that can otherwise be used for current transactions.
iSCSI traffic is carried on the network by the Transmission Control Protocol (TCP). TCP is a
reliable transmission protocol that ensures that dropped packets are retried and eventually reach
their destination. TCP is designed to recover from dropped packets and retransmits them quickly
and seamlessly. However, when the switch discards packets with any regularity, network
throughput suffers. The network becomes congested with requests to resend data and with the
resent packets. Less data is transferred than in a network without congestion.
Most Ethernet switches can buffer, or store, data. This technique gives every device attempting
to send data an equal chance to get to the destination. The ability to buffer some transmissions,
combined with many systems limiting the number of outstanding commands, reduces
transmissions to small bursts. The bursts from several systems can be sent to a storage system in
turn.
If the transactions are large and multiple servers are sending data through a single switch port,
the switch's ability to buffer can be exceeded. In this case, the switch drops the data it cannot send, and
the storage system must request a retransmission of the dropped packet. For example, if an
Ethernet switch can buffer 32 KB, but the server sends 256 KB to the storage device, some of the
data is dropped.
Most managed switches provide information on dropped packets, similar to the following:
*: interface is up
IHQ: pkts in input hold queue IQD: pkts dropped from input queue
OHQ: pkts in output hold queue OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec) RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec) TXPS: tx rate (pkts/sec)
TRTL: throttle count
In this example from a Cisco switch, the bandwidth used is 476303000 bits/second, which is less
than half of wire speed. The port is buffering incoming packets, but has dropped several packets.
The final line of this interface summary indicates that this port has already dropped almost
10,000 inbound packets in the IQD column.
To avoid this problem, make sure that several input Ethernet links are not funneled into one
output link, which results in an oversubscribed link. When several links transmitting near
capacity are switched to a smaller number of links, oversubscription becomes possible.
Generally, applications or systems that write much data to storage must avoid sharing Ethernet
links to a storage device. These types of applications perform best with multiple connections to
storage devices.
Multiple Connections from Switch to Storage shows multiple connections from the switch to the
storage.
Figure: Multiple Connections from Switch to Storage. Several 1 Gbit links connect the switch to the storage system.
Using VLANs or VPNs does not provide a suitable solution to the problem of link
oversubscription in shared configurations. VLANs and other virtual partitioning of a network
provide a way of logically designing a network. However, they do not change the physical
capabilities of links and trunks between switches. When storage traffic and other network traffic
share physical connections, oversubscription and lost packets might become possible. The same
is true of VLANs that share interswitch trunks. Performance design for a SAN must consider the
physical limitations of the network, not logical allocations.
Switches that have ports operating near maximum throughput much of the time do not provide
optimum performance. If you have ports in your iSCSI SAN running near the maximum, reduce
the load. If the port is connected to an ESXi system or iSCSI storage, you can reduce the load by
using manual load balancing.
If the port is connected between multiple switches or routers, consider installing additional links
between these components to handle more load. Ethernet switches also commonly provide
information about transmission errors, queued packets, and dropped Ethernet packets. If the
switch regularly reports any of these conditions on ports being used for iSCSI traffic,
performance of the iSCSI SAN will be poor.
After the devices get registered with your host, you can display all available local and networked
devices and review their information. If you use third-party multipathing plug-ins, the storage
devices available through the plug-ins also appear on the list.
Note If an array supports implicit asymmetric logical unit access (ALUA) and has only standby
paths, the registration of the device fails. The device can register with the host after the target
activates a standby path and the host detects it as active. The advanced system parameter
/Disk/FailDiskRegistration controls this behavior of the host.
For each storage adapter, you can display a separate list of storage devices available for this
adapter.
Generally, when you review storage devices, you see the following information.
Name Also called Display Name. It is a name that the ESXi host assigns to the device based on
the storage type and manufacturer. Generally, you can change this name to a name of
your choice. See Rename Storage Devices.
Identifier A universally unique identifier that is intrinsic to the device. See Storage Device Names
and Identifiers.
Operational State Indicates whether the device is attached or detached. See Detach Storage Devices.
LUN Logical Unit Number (LUN) within the SCSI target. The LUN number is provided by the
storage system. If a target has only one LUN, the LUN number is always zero (0).
Drive Type Information about whether the device is a flash drive or a regular HDD drive. For
information about flash drives and NVMe devices, see Chapter 15 Working with Flash
Devices.
Transport Transportation protocol your host uses to access the device. The protocol depends on
the type of storage being used. See Types of Physical Storage.
Owner The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage
paths to the storage device. See Pluggable Storage Architecture and Path
Management.
Hardware Acceleration Information about whether the storage device assists the host with virtual machine
management operations. The status can be Supported, Not Supported, or Unknown.
See Chapter 24 Storage Hardware Acceleration.
Sector Format Indicates whether the device uses a traditional, 512n, or advanced sector format, such
as 512e or 4Kn. See Device Sector Formats.
Partition Format A partition scheme used by the storage device. It can be of a master boot record (MBR)
or GUID partition table (GPT) format. The GPT devices can support datastores greater
than 2 TB. See Device Sector Formats.
Multipathing Policies Path Selection Policy and Storage Array Type Policy the host uses to manage paths to
storage. See Chapter 18 Understanding Multipathing and Failover.
Paths Paths used to access storage and their status. See Disable Storage Paths.
The Storage Devices view allows you to list the hosts' storage devices, analyze their information,
and modify properties.
Procedure
All storage devices available to the host are listed in the Storage Devices table.
4 To view details for a specific device, select the device from the list.
Icon Description
Refresh Refresh information about storage adapters, topology, and file systems.
Turn On LED Turn on the locator LED for the selected devices.
Turn Off LED Turn off the locator LED for the selected devices.
Mark as Local Mark the selected devices as local for the host.
Mark as Remote Mark the selected devices as remote for the host.
Unmark as Perennially Reserved Clear the perennial reservation from the selected device.
6 Use the following tabs to access additional information and modify properties for the
selected device.
Tab Description
Properties View device properties and characteristics. View and modify multipathing
policies for the device.
Paths Display paths available for the device. Disable or enable a selected path.
Procedure
All storage adapters installed on the host are listed in the Storage Adapters table.
4 Select the adapter from the list and click the Devices tab.
Storage devices that the host can access through the adapter are displayed.
Icon Description
Refresh Refresh information about storage adapters, topology, and file systems.
This table introduces different storage device formats that ESXi supports.
ESXi detects and registers the 4Kn devices and automatically emulates them as 512e. The device
is presented to upper layers in ESXi as 512e. But the guest operating systems always see it as a
512n device. You can continue using existing VMs with legacy guest OSes and applications on the
host with the 4Kn devices.
n ESXi does not support 4Kn SSD and NVMe devices, or 4Kn devices as RDMs.
n You can use the 4Kn device to configure a coredump partition and coredump file.
n Only the NMP plug-in can claim the 4Kn devices. You cannot use the HPP to claim these
devices.
n With vSAN, you can use only the 4Kn capacity HDDs for vSAN Hybrid Arrays. For
information, see the Administering VMware vSAN documentation.
n Due to the software emulation layer, the performance of the 4Kn devices depends on the
alignment of the I/Os. For best performance, run workloads that issue mostly 4K aligned I/Os.
n Workloads accessing the emulated 4Kn device directly using scatter-gather I/O (SGIO) must
issue I/Os compatible with the 512e disk.
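To check how your devices report their sector format, you can list device capacities from the ESXi Shell. The following command produces output similar to the listing below:
# esxcli storage core device capacity list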
Device                Physical Blocksize  Logical Blocksize  Logical Block Count  Size         Format Type
--------------------  ------------------  -----------------  -------------------  -----------  -----------
naa.5000xxxxxxxxx36f  512                 512                2344225968           1144641 MiB  512n
naa.5000xxxxxxxxx030  4096                512                3516328368           1716957 MiB  4Kn SWE
naa.5000xxxxxxxxx8df  512                 512                2344225968           1144641 MiB  512n
naa.5000xxxxxxxxx4f4  4096                512                3516328368           1716957 MiB  4Kn SWE
Device Identifiers
Depending on the type of storage, the ESXi host uses different algorithms and conventions to
generate an identifier for each storage device.
Storage-provided identifiers
The ESXi host queries a target storage device for the device name. From the returned
metadata, the host extracts or generates a unique identifier for the device. The identifier is
based on specific storage standards, is unique and persistent across all hosts, and has one of
the following formats:
n naa.xxx
n eui.xxx
n t10.xxx
Path-based identifier
When the device does not provide an identifier, the host generates an mpx.path name,
where path represents the first path to the device, for example, mpx.vmhba1:C0:T1:L3. This
identifier can be used in the same way as the storage-provided identifiers.
The mpx.path identifier is created for local devices on the assumption that their path names
are unique. However, this identifier is not unique or persistent, and can change after every
system restart.
vmhbaAdapter:CChannel:TTarget:LLUN
n vmhbaAdapter is the name of the storage adapter. The name refers to the physical
adapter on the host, not to the SCSI controller used by the virtual machines.
n CChannel is the storage channel number. Software iSCSI adapters and dependent hardware
adapters use the channel number to show multiple paths to the same target.
n TTarget is the target number. Target numbering is determined by the host and might
change when the mappings of targets visible to the host change. Targets that are shared
by different hosts might not have the same target number.
n LLUN is the LUN number that shows the position of the LUN within the target. The LUN
number is provided by the storage system. If a target has only one LUN, the LUN number
is always zero (0).
For example, vmhba1:C0:T3:L1 represents LUN1 on target 3 accessed through the storage
adapter vmhba1 and channel 0.
Legacy identifier
vml.number
The legacy identifier includes a series of digits that are unique to the device. The identifier
can be derived in part from the metadata obtained through the SCSI INQUIRY command. For
nonlocal devices that do not provide SCSI INQUIRY identifiers, the vml.number identifier is
used as the only available unique identifier.
For the devices that support only NGUID format, the host-generated device identifier changes
depending on the version of ESXi. The ESXi host of version 6.7 and earlier created the
t10.xxx_controller_serial_number identifier. Starting with 6.7 Update 1, the host creates two
identifiers: eui.xxx (NGUID) as primary and t10.xxx_controller_serial_number as alternative
primary.
Note If your host has NGUID-only devices and you upgrade the host to ESXi 7.0.x from an
earlier version, the device identifier changes from t10.xxx_controller_serial_number to eui.xxx
(NGUID) in the entire ESXi environment. If you use the device identifier in any of your custom
scripts, you must reflect this format change.
When upgrading your stateless hosts from version 6.7 and earlier to version 7.0.x, perform these
steps to retain the storage configuration. If you perform the upgrade without following the
instructions, all storage configurations captured in host profiles might not be retained across the
upgrade. As a result, you might encounter host profile compliance failures after the upgrade.
Prerequisites
n The environment includes NVMe devices that support only NGUID format.
Procedure
c Obtain the namespace information for the NVMe device using the HBA and the
namespace ID.
In the output, for an NGUID-only NVMe device, the field IEEE Extended Unique Identifier
contains 0 and Namespace Globally Unique Identifier contains a non-zero value.
2 To retain storage configurations captured in the host profile, perform these steps when
upgrading a stateless host to 7.0.x.
For example, you can copy the esx.conf file to a VMFS datastore.
# cp /etc/vmware/esx.conf /vmfs/volumes/datastore1/
After the upgrade, the host is not compliant with the profile and might remain in
maintenance mode.
c Apply the device settings for NGUID-only NVMe devices using new ID formats.
Run the following command from the host indicating the location of the esx.conf file.
3 Copy the settings from the host and reset host customizations.
a In the vSphere Client, click Home > Policies and Profiles > Host Profiles, and click the
profile attached to the host.
b Click Configure tab > Copy Setting from Host and select the host.
c To reset customizations, navigate to the host and select Host Profiles > Reset Host
Customizations from the right-click menu.
4 From the host's right-click menu, select Host Profiles > Remediate.
Procedure
When you perform VMFS datastore management operations, such as creating a VMFS datastore
or RDM, adding an extent, and increasing or deleting a VMFS datastore, your host or the vCenter
Server automatically rescans and updates your storage. You can disable the automatic rescan
feature by turning off the Host Rescan Filter. See Turn Off Storage Filters.
In certain cases, you need to perform a manual rescan. You can rescan all storage available to
your host or to all hosts in a folder, cluster, and data center.
If the changes you make are isolated to storage connected through a specific adapter, perform a
rescan for this adapter.
Perform the manual rescan each time you make one of the following changes.
n Reconnect a cable.
n Add a single host to vCenter Server after you edit or remove from vCenter Server a
datastore that is shared between the vCenter Server hosts and the single host.
Important If you rescan when a path is unavailable, the host removes the path from the list of
paths to the device. The path reappears on the list as soon as it becomes available and starts
working again.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, a data center, or a folder
that contains hosts.
Option Description
Scan for New Storage Devices Rescan all adapters to discover new storage devices. If new devices are
discovered, they appear in the device list.
Scan for New VMFS Volumes Rescan all storage devices to discover new datastores that have been
added since the last scan. Any new datastores appear in the datastore list.
Procedure
3 Under Storage, click Storage Adapters, and select the adapter to rescan from the list.
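You can also trigger the rescan from the ESXi Shell. A sketch, where the adapter name is a placeholder:
# esxcli storage core adapter rescan --adapter=vmhba67
To rescan all adapters on the host, use the --all option instead of naming a specific adapter.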
The Disk.MaxLUN parameter also determines how many LUNs the SCSI scan code attempts to
discover using individual INQUIRY commands if the SCSI target does not support direct discovery
using REPORT_LUNS.
You can modify the Disk.MaxLUN parameter depending on your needs. For example, if your
environment has a smaller number of storage devices with LUN IDs from 1 through 100, set the
value to 101. As a result, you can improve device discovery speed on targets that do not support
REPORT_LUNS. Lowering the value can shorten the rescan time and boot time. However, the
time to rescan storage devices might also depend on other factors, including the type of the
storage system and the load on the storage system.
In other cases, you might need to increase the value if your environment uses LUN IDs that are
greater than 1023.
Procedure
4 In the Advanced System Settings table, select Disk.MaxLUN and click the Edit icon.
5 Change the existing value to the value of your choice, and click OK.
The value you enter specifies the LUN ID that is after the last one you want to discover.
For example, to discover LUN IDs from 1 through 100, set Disk.MaxLUN to 101.
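You can also change the setting from the ESXi Shell. For example, to limit the scan to LUN IDs below 101 and verify the value:
# esxcli system settings advanced set -o /Disk/MaxLUN -i 101
# esxcli system settings advanced list -o /Disk/MaxLUN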
Storage connectivity problems are caused by a variety of reasons. Although ESXi cannot always
determine the reason for a storage device or its paths being unavailable, the host differentiates
between a permanent device loss (PDL) state of the device and a transient all paths down (APD)
state of storage.
All Paths Down (APD)
A condition that occurs when a storage device becomes inaccessible to the host and no
paths to the device are available. ESXi treats this as a transient condition because typically
the problems with the device are temporary and the device is expected to become available
again.
Permanent Device Loss (PDL)
Typically, the PDL condition occurs when a device is unintentionally removed, or its unique ID
changes, or when the device experiences an unrecoverable hardware error.
When the storage array determines that the device is permanently unavailable, it sends SCSI
sense codes to the ESXi host. After receiving the sense codes, your host recognizes the device
as failed and registers the state of the device as PDL. For the device to be considered
permanently lost, the sense codes must be received on all its paths.
After registering the PDL state of the device, the host stops attempts to reestablish connectivity
or to send commands to the device.
The vSphere Client displays the following information for the device:
If no open connections to the device exist, or after the last connection closes, the host removes
the PDL device and all paths to the device. You can disable the automatic removal of paths by
setting the advanced host parameter Disk.AutoremoveOnPDL to 0.
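For example, a sketch of disabling the automatic removal from the ESXi Shell:
# esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0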
If the device returns from the PDL condition, the host can discover it, but treats it as a new
device. Data consistency for virtual machines on the recovered device is not guaranteed.
Note When a device fails without sending appropriate SCSI sense codes or an iSCSI login
rejection, the host cannot detect PDL conditions. In this case, the host continues to treat the
device connectivity problems as APD even when the device fails permanently.
An example of a SCSI sense code reported for a device in the PDL state is H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0, or Logical Unit Not Supported.
Planned device removal is an intentional disconnection of a storage device. You might also plan
to remove a device for such reasons as upgrading your hardware or reconfiguring your storage
devices. When you perform an orderly removal and reconnection of a storage device, you
complete a number of tasks.
n Migrate virtual machines from the device you plan to detach. See vCenter Server and Host
Management.
n For an iSCSI device with a single LUN per target, delete the static target entry from each
iSCSI HBA that has a path to the storage device. See Remove Dynamic or Static iSCSI Targets.
n Perform any necessary reconfiguration of the storage device by using the array console. See
your vendor documentation.
n Mount the datastore and restart the virtual machines. See Mount Datastores.
You might need to detach the device to make it inaccessible to your host, when, for example,
you perform a hardware upgrade on the storage side.
Prerequisites
Procedure
Results
The device becomes inaccessible. The operational state of the device changes to Unmounted.
What to do next
If multiple hosts share the device, detach the device from each host.
Procedure
4 Select the detached storage device and click the Attach icon.
Results
The following items in the vSphere Client indicate that the device is in the PDL state:
n A warning about the device being permanently inaccessible appears in the VMkernel log file.
To recover from the unplanned PDL condition and remove the unavailable device from the host,
perform the following tasks.
n Power off and unregister all virtual machines that are running on the datastore affected by
the PDL condition. See vSphere Virtual Machine Administration.
n Rescan all ESXi hosts that had access to the device. See Perform Storage Rescan.
Note If the rescan is not successful and the host continues to list the device, some
pending I/O or active references to the device might still exist. Check for any items that
might still have an active reference to the device or datastore. The items include virtual
machines, templates, ISO images, raw device mappings, and so on.
The reasons for an APD state can be, for example, a failed switch or a disconnected storage
cable.
In contrast with the permanent device loss (PDL) state, the host treats the APD state as transient
and expects the device to be available again.
The host continues to retry issued commands in an attempt to reestablish connectivity with the
device. If the host's commands fail the retries for a prolonged period, the host might be at risk of
having performance problems. Potentially, the host and its virtual machines might become
unresponsive.
To avoid these problems, your host uses a default APD handling feature. When a device enters
the APD state, the host turns on a timer. With the timer on, the host continues to retry non-virtual
machine commands for a limited time period only.
By default, the APD timeout is set to 140 seconds. This value is typically longer than most devices
require to recover from a connection loss. If the device becomes available within this time, the
host and its virtual machine continue to run without experiencing any problems.
If the device does not recover and the timeout ends, the host stops its attempts at retries and
stops any non-virtual machine I/O. Virtual machine I/O continues retrying. The vSphere Client
displays the following information for the device with the expired APD timeout:
Even though the device and datastores are unavailable, virtual machines remain responsive. You
can power off the virtual machines or migrate them to a different datastore or host.
If later the device paths become operational, the host can resume I/O to the device and end the
special APD treatment.
If you disable the APD handling, the host will indefinitely continue to retry issued commands in an
attempt to reconnect to the APD device. This behavior might cause virtual machines on the host
to exceed their internal I/O timeout and become unresponsive or fail. The host might become
disconnected from vCenter Server.
Procedure
4 In the Advanced System Settings table, select the Misc.APDHandlingEnable parameter and
click the Edit icon.
Results
If you disabled the APD handling, you can reenable it and set its value to 1 when a device enters
the APD state. The internal APD handling feature turns on immediately and the timer starts with
the current timeout value for each device in APD.
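You can also toggle the parameter from the ESXi Shell. A sketch; set the value to 0 to disable the APD handling or back to 1 to re-enable it:
# esxcli system settings advanced set -o /Misc/APDHandlingEnable -i 0
# esxcli system settings advanced set -o /Misc/APDHandlingEnable -i 1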
The timeout period begins immediately after the device enters the APD state. After the timeout
ends, the host marks the APD device as unreachable. The host stops its attempts to retry any I/O
that is not coming from virtual machines. The host continues to retry virtual machine I/O.
By default, the timeout parameter on your host is set to 140 seconds. You can increase the value
of the timeout if, for example, storage devices connected to your ESXi host take longer than 140
seconds to recover from a connection loss.
Note If you change the timeout parameter after the device becomes unavailable, the change
does not take effect for that particular APD incident.
Procedure
4 In the Advanced System Settings table, select the Misc.APDTimeout parameter and click the
Edit icon.
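You can also change the timeout from the ESXi Shell. For example, to raise the timeout to 180 seconds:
# esxcli system settings advanced set -o /Misc/APDTimeout -i 180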
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
n on - Device is connected.
n dead - Device has entered the APD state. The APD timer starts.
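A hedged example of checking the connectivity status of a specific device, where the device identifier is a placeholder:
# esxcli storage core device list -d naa.XXXXXXXXXXXXXXXX
In the output, the Status field shows the device connectivity state, for example on or dead.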
Procedure
4 From the list of storage devices, select one or more disks and enable or disable the locator
LED indicator.
Option Description
Prerequisites
n Verify that the devices you plan to erase are not in use.
Procedure
4 Select one or more devices and click the Erase Partitions icon.
5 Verify that the partition information you are erasing is not critical.
WSFC cluster nodes that are spread over several ESXi hosts require physical RDMs. The RDMs
are shared among all hosts where cluster nodes run. The host with the active node holds
persistent SCSI-3 reservations on all shared RDM devices. When the active node is running and
devices are locked, no other host can write to the devices. If another participating host boots
while the active node is holding the lock on the devices, the boot might take an unusually long time
because the host unsuccessfully attempts to contact the locked devices. The same issue might
also affect rescan operations.
To prevent this problem, activate perennial reservation for all devices on the ESXi hosts where
secondary WSFC nodes with RDMs reside. This setting informs the ESXi host about the
permanent SCSI reservation on the devices, so that the host can skip the devices during the boot
or storage rescan process.
If you later re-purpose the marked devices as VMFS datastores, remove the reservation to avoid
unpredictable datastore behavior.
For information about WSFC clusters, see the Setup for Windows Server Failover Clustering
documentation.
Prerequisites
Before marking a device as perennially reserved, make sure the device does not contain a VMFS
datastore.
Procedure
4 From the list of storage devices, select the device and click one of the following icons.
Option Description
Note Repeat the procedure for each RDM device that is participating in the
WSFC cluster.
Unmark as Perennially Reserved Clear perennial reservation for the device that was previously marked.
Results
The configuration is permanently stored with the ESXi host and persists across restarts.
Example
You can also use the esxcli command to mark the devices participating in the WSFC cluster.
In the output of the esxcli command, search for the entry Is Perennially Reserved: true.
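A minimal sketch of these esxcli commands, with naa.xxx standing in for the RDM device identifier:
# Mark the device as perennially reserved
esxcli storage core device setconfig -d naa.xxx --perennially-reserved=true
# Verify the setting; look for Is Perennially Reserved: true in the output
esxcli storage core device list -d naa.xxx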
Unlike regular HDDs that are electromechanical devices containing moving parts, flash devices
use semiconductors as their storage medium and have no moving parts. Typically, the flash
devices are resilient and provide faster access to data.
To detect flash devices, ESXi uses an inquiry mechanism based on T10 standards. Check with
your vendor whether your storage array supports the ESXi mechanism of flash device detection.
After the host detects the flash devices, you can use them for several tasks and functionalities.
If you use NVMe storage, enable the high-performance plug-in (HPP) to improve your storage
performance. See VMware High Performance Plug-In and Path Selection Schemes.
For specifics about using NVMe storage with ESXi, see Chapter 16 About VMware NVMe Storage.
n vSAN. vSAN requires flash devices. For more information, see the Administering VMware vSAN documentation.
n VMFS Datastores. Create VMFS datastores on flash devices and use them for the following purposes:
n Store virtual machines. Certain guest operating systems can identify virtual disks stored on these datastores as flash virtual disks.
n Allocate datastore space for the ESXi host swap cache. See Configure Host Cache with VMFS Datastore.
n Virtual Flash Resource (VFFS). If required by your vendor, set up a virtual flash resource and use it for I/O caching filters. See Chapter 23 Filtering Virtual Machine I/O.
To detect the virtual flash status of a virtual disk, guest operating systems can use standard inquiry commands such as SCSI VPD Page (B1h) for SCSI devices and ATA IDENTIFY DEVICE (Word 217) for IDE devices.
For linked clones, native snapshots, and delta-disks, the inquiry commands report the virtual flash
status of the base disk.
Operating systems can detect that a virtual disk is a flash disk under the following conditions:
n Detection of flash virtual disks is supported on VMs with virtual hardware version 8 or later.
n Devices backing a shared VMFS datastore must be marked as flash on all hosts.
n If the VMFS datastore includes several device extents, all underlying physical extents must be
flash-based.
When you configure vSAN or set up a virtual flash resource, your storage environment must
include local flash devices.
However, ESXi might not recognize certain storage devices as flash devices when their vendors
do not support automatic flash device detection. In other cases, certain devices might not be
detected as local, and ESXi marks them as remote. When devices are not recognized as local flash devices, they are excluded from the list of devices offered for vSAN or the virtual flash resource. Marking these devices as local flash makes them available for vSAN and the virtual flash resource.
ESXi does not recognize certain devices as flash when their vendors do not support automatic
flash disk detection. The Drive Type column for the devices shows HDD as their type.
Caution Marking the HDD devices as flash might deteriorate the performance of datastores and
services that use them. Mark the devices only if you are certain that they are flash devices.
Prerequisites
Procedure
4 From the list of storage devices, select one or several HDD devices and click the Mark as
Flash Disk ( ) icon.
Results
What to do next
If the flash device that you mark is shared among multiple hosts, make sure that you mark the
device from all hosts that share the device.
Prerequisites
n Power off virtual machines that reside on the device and unmount an associated datastore.
Procedure
4 From the list of storage devices, select one or several remote devices and click the Mark as
Local icon.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
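The exact steps depend on your release and on the plug-in that claims the device. One commonly documented approach for NMP-claimed devices uses a SATP claim rule; a sketch only, with naa.xxx as a placeholder device identifier:
# Tag the device as flash (use enable_local instead to tag a remote device as local)
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxx -o enable_ssd
# Reclaim the device so that the new rule takes effect
esxcli storage core claiming reclaim -d naa.xxx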
n Make sure to use the latest firmware with flash devices. Frequently check with your storage
vendors for any updates.
n Carefully monitor how intensively you use the flash device and calculate its estimated lifetime.
The lifetime expectancy depends on how actively you continue to use the flash device. See
Estimate Lifetime of Flash Devices.
n If you use NVMe devices for storage, enable the high-performance plug-in (HPP) to improve your storage performance. For specifics of using the NVMe devices, see VMware High Performance Plug-In and Path Selection Schemes.
Typically, storage vendors provide reliable lifetime estimates for a flash device under ideal
conditions. For example, a vendor might guarantee a lifetime of 5 years under the condition of 20
GB writes per day. However, the more realistic life expectancy of the device depends on how
many writes per day your ESXi host actually generates. Follow these steps to calculate the
lifetime of the flash device.
Prerequisites
Note the number of days passed since the last reboot of your ESXi host. For example, ten days.
Procedure
1 Obtain the total number of blocks written to the flash device since the last reboot.
Run the esxcli storage core device stats get -d=device_ID command. For example:
The Blocks Written item in the output shows the number of blocks written to the device since
the last reboot. In this example, the value is 629,145,600. After each reboot, it resets to 0.
One block is 512 bytes. To calculate the total number of writes, multiply the Blocks Written
value by 512, and convert the resulting value to GB.
In this example, the total number of writes since the last reboot is approximately 322 GB.
Divide the total number of writes by the number of days since the last reboot.
If the last reboot was ten days ago, you get 32 GB of writes per day. You can average this
number over the time period.
Estimate the lifetime of your flash device by using the following formula:
estimated lifetime = (vendor-provided writes per day × vendor-provided life span) / actual average writes per day
For example, if your vendor guarantees a lifetime of 5 years under the condition of 20 GB
writes per day, and the actual number of writes per day is 30 GB, the life span of your flash
device will be approximately 3.3 years.
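A rough ESXi Shell sketch of the same arithmetic, assuming a placeholder device identifier and a ten-day uptime:
# Blocks written since the last reboot; one block is 512 bytes
BLOCKS=$(esxcli storage core device stats get -d naa.xxx | awk '/Blocks Written/ {print $NF}')
DAYS=10
# Approximate GB written per day since the last reboot
echo $((BLOCKS * 512 / 1000000000 / DAYS))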
When you set up the virtual flash resource, you create a new file system, Virtual Flash File
System (VFFS). VFFS is a derivative of VMFS, which is optimized for flash devices and is used to
group the physical flash devices into a single caching resource pool. As a non-persistent
resource, it cannot be used to store virtual machines.
After you set up the virtual flash resource, you can use it for I/O caching filters. See Chapter 23
Filtering Virtual Machine I/O.
n You can have only one virtual flash resource on a single ESXi host. The virtual flash resource
is managed at the host's level.
n You cannot use the virtual flash resource to store virtual machines. Virtual flash resource is a
caching layer only.
n You can use only local flash devices for the virtual flash resource.
n You can create the virtual flash resource from mixed flash devices. All device types are
treated equally and no distinction is made between SAS, SATA, or PCI express connectivity.
When creating the resource from mixed flash devices, make sure to group similarly performing devices together to maximize performance.
n You cannot use the same flash devices for the virtual flash resource and vSAN. Each requires
its own exclusive and dedicated flash device.
To set up a virtual flash resource, you use local flash devices connected to your host or host
cluster. To increase the capacity of your virtual flash resource, you can add more devices, up to
the maximum number indicated in the Configuration Maximums documentation. An individual
flash device must be exclusively allocated to the virtual flash resource. No other vSphere
functionality, such as vSAN or VMFS, can share the device with the virtual flash resource.
Procedure
Option Description
Add Capacity If you are creating the virtual flash resource on an individual host.
Add Capacity on Cluster If you are creating the virtual flash resource on a cluster.
5 From the list of available entities, select one or more to use for the virtual flash resource and
click OK.
If your flash devices do not appear on the list, see Marking Storage Devices.
Option Description
volume ID - Configure using the existing VFFS volume extents: If you previously created a VFFS volume on one of the host's flash devices by using the vmkfstools command, the volume also appears on the list of eligible entities. You can select just this volume for the virtual flash resource, or combine it with the unclaimed devices. ESXi uses the existing VFFS volume and extends it over the other devices.
Results
The virtual flash resource is created. The Device Backing area lists all devices that you use for the
virtual flash resource.
What to do next
Use the virtual flash resource for I/O caching filters developed through vSphere APIs for I/O
Filtering.
You can increase the capacity by adding more flash devices to the virtual flash resource.
Prerequisites
n Verify that the virtual flash resource is not used for I/O filters.
Procedure
3 Under Storage, click Virtual Flash Resource Management and click Remove All.
Results
After you remove the virtual flash resource and erase the flash device, the device is available for
other operations.
Procedure
Parameter Description
VFLASH.ResourceUsageThreshold The system triggers the Host vFlash resource usage alarm when a virtual
flash resource use exceeds the threshold. The default threshold is 80%. You
can change the threshold to an appropriate value. The alarm is cleared when
the virtual flash resource use drops below the threshold.
5 Click OK.
Your ESXi hosts can use a portion of a flash-backed storage entity as a swap cache shared by all
virtual machines.
The host-level cache is made up of files on a low-latency disk that ESXi uses as a write-back
cache for virtual machine swap files. All virtual machines running on the host share the cache.
Host-level swapping of virtual machine pages makes the best use of potentially limited flash
device space.
Prerequisites
Create a VMFS datastore using flash devices as backing. See Create a VMFS Datastore.
Procedure
4 Select the flash datastore in the list and click the Edit icon.
6 Click OK.
Problem
By default, auto-partitioning deploys VMFS file systems on any unused local storage disks on
your host, including flash disks.
However, a flash disk formatted with VMFS becomes unavailable for such features as virtual flash
and vSAN. Both features require an unformatted flash disk and neither can share the disk with
any other file system.
Solution
To ensure that auto-partitioning does not format the flash disk with VMFS, use the following boot
options when you install ESXi or boot the ESXi host for the first time:
n autoPartition=TRUE
n skipPartitioningSsds=TRUE
If you use Auto Deploy, set these parameters on a reference ESXi host.
1 In the vSphere Client, navigate to the host to use as a reference host and click the Configure
tab.
2 Click System to open the system options, and click Advanced System Settings.
Parameter Value
VMkernel.Boot.autoPartition True
VMkernel.Boot.skipPartitioningSsds True
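If you prefer the ESXi Shell, the same boot options can typically be set as kernel settings. A sketch only; it assumes that the setting names match the VMkernel.Boot parameters shown above, so verify them first with esxcli system settings kernel list:
esxcli system settings kernel set -s autoPartition -v TRUE
esxcli system settings kernel set -s skipPartitioningSsds -v TRUE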
If flash disks that you plan to use with the virtual flash resource and vSAN already have VMFS
datastores, remove the datastores.
n Add Controller for the NVMe over RDMA (RoCE v2) or FC-NVMe Adapter
NVMe is a method for connecting and transferring data between a host and a target storage
system. NVMe is designed for use with faster storage media equipped with non-volatile
memory, such as flash devices. This type of storage can achieve low latency, low CPU usage,
and high performance, and generally serves as an alternative to SCSI storage.
NVMe Transports
The NVMe storage can be directly attached to a host using a PCIe interface or indirectly
through different fabric transports. VMware NVMe over Fabrics (NVMe-oF) provides connectivity over distance between a host and a target storage device on a shared storage array. Several types of fabric transports for NVMe currently exist. For more information, see Requirements and Limitations of VMware NVMe Storage. For example, NVMe over RDMA provides shared NVMe-oF storage by using the RoCE v2 technology.
NVMe Namespaces
In the NVMe storage array, a namespace is a storage volume backed by some quantity of
non-volatile memory. In the context of ESXi, the namespace is analogous to a storage device,
or LUN. After your ESXi host discovers the NVMe namespace, a flash device that represents
the namespace appears on the list of storage devices in the vSphere Client. You can use the
device to create a VMFS datastore and store virtual machines.
NVMe Controllers
A controller is associated with one or several NVMe namespaces and provides an access
path between the ESXi host and the namespaces in the storage array. To access the
controller, the host can use two mechanisms, controller discovery and controller connection.
For information, see Add Controller for the NVMe over RDMA (RoCE v2) or FC-NVMe
Adapter.
Controller Discovery
With this mechanism, the ESXi host first contacts a discovery controller. The discovery
controller returns a list of available controllers. After you select a controller for your host to
access, all namespaces associated with this controller become available to your host.
Controller Connection
Your ESXi host connects to the controller that you specify. All namespaces associated with
this controller become available to your host.
NVMe Subsystem
Generally, an NVMe subsystem is a storage array that might include several NVMe
controllers, several namespaces, a non-volatile memory storage medium, and an interface
between the controller and non-volatile memory storage medium. The subsystem is identified
by a subsystem NVMe Qualified Name (NQN).
By default, the ESXi host uses the HPP to claim the NVMe-oF targets. When selecting
physical paths for I/O requests, the HPP applies an appropriate Path Selection Scheme (PSS).
For information about the HPP, see VMware High Performance Plug-In and Path Selection
Schemes. To change the default path selection mechanism, see Change the Path Selection
Policy.
In NVMe-oF environments, targets can present namespaces, equivalent to LUNs in SCSI, to a host
in active/active or asymmetric access modes. ESXi is able to discover and use namespaces
presented in either way. ESXi internally emulates NVMe-oF targets as SCSI targets and presents
them as active/active SCSI targets or implicit ALUA SCSI targets.
Figure: NVMe over PCIe. The ESXi host uses a PCIe adapter (vmhba#) to access a local NVMe storage device that contains NVMe controllers and NVMe namespaces.
To access the NVMe over Fibre Channel storage, install a Fibre Channel storage adapter that
supports NVMe on your ESXi host. You do not need to configure the adapter. It automatically
connects to an appropriate NVMe subsystem and discovers all shared NVMe storage devices
that it can reach. You can later reconfigure the adapter and disconnect its controllers or connect
other controllers that were not available during the host boot. For more information, see Add
Controller for the NVMe over RDMA (RoCE v2) or FC-NVMe Adapter.
Figure: NVMe over Fibre Channel. The ESXi host uses an NVMe over Fibre Channel adapter (vmhba#) to reach, through the FC fabric, an NVMe subsystem that contains NVMe controllers and NVMe namespaces.
To access storage, the ESXi host uses an RDMA network adapter installed on your host and a
software NVMe over RDMA storage adapter. You must configure both adapters to use them for
storage discovery. For more information, see Configure Adapters for NVMe over RDMA (RoCE
v2) Storage.
Figure: NVMe over RDMA (RoCE v2). The ESXi host uses a software NVMe over RDMA adapter paired with an RDMA network adapter to reach, through the RDMA fabric, an NVMe subsystem that contains NVMe controllers and NVMe namespaces.
n Hardware NVMe over PCIe adapter. After you install the adapter, your ESXi host detects it, and the adapter appears in the vSphere Client as a storage adapter (vmhba) with the protocol indicated as PCIe. You do not need to configure the adapter.
n Network adapter that supports RDMA over Converged Ethernet (RoCE v2). To configure the
adapter, see View RDMA Network Adapters.
n Software NVMe over RDMA adapter. This software component must be enabled on your
ESXi host and connected to an appropriate network RDMA adapter. For information, see
Enable Software NVMe over RDMA Adapters.
n NVMe controller. You must add a controller after you configure a software NVMe over RDMA
adapter. See Add Controller for the NVMe over RDMA (RoCE v2) or FC-NVMe Adapter.
n Hardware NVMe adapter. Typically, it is a Fibre Channel HBA that supports NVMe. When you install the adapter, your ESXi host detects it, and the adapter appears in the vSphere Client as a standard Fibre Channel adapter (vmhba) with the storage protocol indicated as NVMe. You do not need to configure the hardware NVMe adapter to use it.
n NVMe controller. You do not need to configure the controller. After you install the required
hardware NVMe adapter, it automatically connects to all targets and controllers that are
reachable at the moment. You can later disconnect the controllers or connect other
controllers that were not available during the host boot. See Add Controller for the NVMe
over RDMA (RoCE v2) or FC-NVMe Adapter.
n Make sure that active paths are presented to the host. The namespaces cannot be registered
until the active path is discovered.
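To review what the host currently sees, you can list NVMe adapters, controllers, and namespaces from the ESXi Shell. A sketch only; subcommand availability can vary by release, so check esxcli nvme --help:
esxcli nvme adapter list
esxcli nvme controller list
esxcli nvme namespace list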
Table: Shared storage functionality support in SCSI over fabric storage and NVMe over fabric storage.
To establish lossless networks, you can select one of the available QoS settings.
If the required pause or PFC options are not already enabled, configure them by using one of the following methods.
n Automatic configuration. Starting with ESXi 7.0, you can apply the DCB PFC configuration automatically on the host RNIC, if the RNIC driver supports DCB and DCBx. You can verify the current DCB settings with an esxcli command, as shown in the sketch after this list.
n Manual configuration. In some cases, the RNIC drivers provide a method to manually configure the DCB PFC by using driver-specific parameters. To use this method, see the vendor-specific driver documentation. For example, in the Mellanox ConnectX-4/5 driver, you can set the PFC priority value to 3 by setting a module parameter and rebooting the host, as shown in the sketch after this list.
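The following is a sketch of both commands. The vmnic name, the module name, and the parameter values are examples for a Mellanox ConnectX-4/5 driver and might differ in your environment; consult the driver documentation before applying them:
# Verify the current DCB and PFC state of the RDMA-capable NIC
esxcli network nic dcb status get -n vmnic1
# Set PFC priority 3 (bit mask 0x08) for the nmlx5_core driver, then reboot the host
esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08"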
The following video walks you through the steps of configuring NVMe over RDMA adapters.
Procedure
What to do next
After you enable the software NVMe over RDMA adapter, add NVMe controllers, so that the host
can discover the NVMe targets. See Add Controller for the NVMe over RDMA (RoCE v2) or FC-
NVMe Adapter.
Procedure
1 On your ESXi host, install an adapter that supports RDMA (RoCE v2), for example, Mellanox
Technologies MT27700 Family ConnectX-4.
The host discovers the adapter and the vSphere Client displays its two components, an
RDMA adapter and a physical network adapter.
2 In the vSphere Client, verify that the RDMA adapter is discovered by your host.
In this example, the RDMA adapter appears on the list as vmrdma0. The Paired Uplink
column displays the network component as the vmnic1 physical network adapter.
d To verify the description of the adapter, select the RDMA adapter from the list, and click
the Properties tab.
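You can also confirm the pairing from the ESXi Shell; a short sketch:
# Lists the vmrdma devices and the vmnic uplink that each one is paired with
esxcli rdma device list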
What to do next
You can now create the software NVMe over RDMA adapter.
The following diagram displays the port binding for the NVMe over RDMA adapter.
Figure: Two initiators, one on IP Subnet-1 and one on IP Subnet-2, each reach the NVMe-oF controllers (C0, C1) through a dedicated path. Each path uses its own VMkernel adapter (vmk1, vmk2) and port group (PG1, PG2) on a separate standard switch (vSwitch 1, vSwitch 2), with the vmrdma0 and vmrdma1 RDMA adapters paired to the vmnic1 and vmnic2 physical network adapters.
For more information about creating switches, see Create a vSphere Standard Switch or Create a vSphere Distributed Switch in the vSphere Networking documentation.
The configuration uses a separate VMkernel adapter for each physical network adapter, with a 1:1 mapping between each virtual and physical network adapter.
Procedure
1 Create a vSphere standard switch with a VMkernel adapter and the network component.
a In the vSphere Client, select your host and click the Networks tab.
Note Make sure to select the physical network adapter that corresponds to the RDMA adapter. To see the association between the RDMA adapter (vmrdma) and the physical network adapter (vmnic), see View RDMA Network Adapters.
If you are using VLAN for the storage path, enter the VLAN ID.
The illustration shows that the physical network adapter and the VMkernel adapter are
connected to the vSphere standard switch. Through this connection, the RDMA adapter is
bound to the VMkernel adapter.
3 Verify the configuration of the VMkernel binding for the RDMA adapter.
a Under Networking list, click RDMA adapters, and select the RDMA adapter from the list.
b Click the VMkernel adapters binding tab and verify that the associated VMkernel adapter
appears on the page.
In this example, the vmrdma0 RDMA adapter is paired to the vmnic1 network adapter and
is connected to the vmk1 VMkernel adapter.
Configure VMkernel Binding with a vSphere Standard Switch and NIC Teaming
You can configure VMkernel port binding for the RDMA adapter using a vSphere standard switch
with the NIC teaming configuration. You can use NIC teaming to achieve network redundancy.
You can configure two or more network adapters (NICs) as a team for high availability and load
balancing.
Procedure
1 Create a vSphere standard switch with a VMkernel adapter and the network component with
the NIC teaming configuration.
a In the vSphere Client, select your host and click Networks tab.
f Select the required physical adapter vmnic, and add it under Active adapters.
g Select another physical adapter vmnic, and add it under Standby adapters.
If you are using VLAN for the storage path, enter the VLAN ID.
a Click the Configure tab, and select Virtual switches under Networking.
f Under Standby adapters > Failover order, move the other physical adapters.
3 Repeat steps 1 and 2 to add and configure an additional set of teamed RNICs. To verify that the adapters are configured, click the Configure tab and select VMkernel adapters.
Procedure
1 Create a vSphere distributed switch with a VMkernel adapter and the network component.
a In the vSphere Client, select Datacenter, and click the Networks tab.
b Click Actions , and select Distributed Switch > New Distributed Switch.
Verify that the data center location includes your host, and click Next.
d Select the ESXi version as 7.0.0 and later, and click Next.
b Right-click the DSwitch, and select Add and Manage Hosts from the menu.
h In the vSphere Client, select the DSwitch, and click the Ports tab.
You can view the uplinks created for your switch here.
3 Create distributed port groups for the NVMe over RDMA storage path.
b Click Actions and select Distributed Port Group > New Distributed Port Group.
c Under Configure Settings, enter the general properties of the port group.
Note Network connectivity issues might occur if you do not configure VLAN properly.
a In the vSphere Client, expand the DSwitch list, and select the distributed port group.
c In the Select Member Hosts dialog box, select your host and click OK.
d In the Configure VMkernel Adapter dialog box, ensure that the MTU matches the switch MTU.
e Click Finish.
c Assign one uplink as Active for the port group, and the other uplink as Unused.
What to do next
After you complete the configuration, click the Configure tab and verify that the Physical adapters view on your host lists the distributed switch for the selected NICs.
Prerequisites
On your ESXi host, install an adapter that supports RDMA (RoCE v2), for example, Mellanox
Technologies MT27700 Family ConnectX-4. Configure the VMkernel binding for the RDMA
adapter. For information, see View RDMA Network Adapters.
Procedure
3 Under Storage, click Storage Adapters, and click the Add Software Adapter icon.
4 Select Add software NVMe over RDMA adapter, and select an appropriate RDMA adapter
(vmrdma) from the drop-down menu.
Note If you get an error message that prevents you from creating the software NVMe over
RDMA adapter, make sure that the VMkernel binding for the RDMA adapter is correctly
configured. For information, see View RDMA Network Adapters.
Results
The software NVMe over RDMA adapter appears in the list as a vmhba storage adapter. You can
remove the adapter if you need to free the underlying RDMA network adapter for other
purposes.
Add Controller for the NVMe over RDMA (RoCE v2) or FC-
NVMe Adapter
Use the vSphere Client to add an NVMe controller. After you add the controller, the NVMe
namespaces associated with the controller become available to your ESXi host. The NVMe
storage devices that represent the namespaces in the ESXi environment appear on the storage
devices list.
If you use NVMe over RDMA (RoCE v2) storage, you must add a controller after you configure a
software NVMe over RDMA adapter. With FC-NVMe storage, after you install the required
adapter, it automatically connects to all targets that are reachable at the moment. You can later
reconfigure the adapter and disconnect its controllers or connect other controllers that were not
available during the host boot.
Prerequisites
Make sure that your ESXi host has appropriate adapters for your type of storage. See
Requirements and Limitations of VMware NVMe Storage.
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
5 To add the controller, select one of the following options, and click Add.
Option Description
Automatically discover controllers This method indicates that your host can accept a connection to any
available controller.
a Specify the following parameter for a discovery controller.
n For NVMe over RDMA (RoCE v2), the IP address and transport port
number.
n For FC-NVMe, the WorldWideNodeName and WorldWidePortName.
b Click Discover Controllers.
c From the list of controllers, select the controller to use.
Enter controller details manually With this method, your host requests a connection to a specific controller
with the following parameters:
n Subsystem NQN
n Controller identification. For NVMe over RDMA (RoCE v2), the IP address
and transport port number. For FC-NVMe, the WorldWideNodeName
and WorldWidePortName.
n Admin queue size. An optional parameter that specifies the size of the
admin queue of the controller. A default value is 16.
n Keepalive timeout. An optional parameter to specify in seconds the keep
alive timeout between the adapter and the controller. A default timeout
value is 60 seconds.
Results
The controller appears on the list of controllers. Your host can now discover the NVMe
namespaces that are associated with the controller. The NVMe storage devices that represent
the namespaces in the ESXi environment appear on the storage devices list in the vSphere Client.
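For NVMe over RDMA, you can perform the same discovery and connection from the ESXi Shell. A sketch with placeholder adapter, address, port, and NQN values; the exact option names can differ between releases, so verify them with esxcli nvme fabrics connect --help:
# Discover controllers behind the discovery service at 192.168.50.20:8009
esxcli nvme fabrics discover -a vmhba65 -i 192.168.50.20 -p 8009
# Connect to a specific controller by its subsystem NQN
esxcli nvme fabrics connect -a vmhba65 -i 192.168.50.20 -p 4420 -s nqn.2021-01.com.example:subsystem1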
You cannot remove the NVMe over PCIe and FC-NVMe adapters.
Procedure
3 Under Storage, click Storage Adapters, and select the adapter (vmhba#) to remove.
5 Click the Remove icon (Remove the host's storage adapter) to remove the NVMe over RDMA
adapter.
n Types of Datastores
n Creating Datastores
n Enable or Disable Support for Clustered Virtual Disks on the VMFS6 Datastore
Types of Datastores
Depending on the storage you use, datastores can be of different types.
VMFS (version 5 and 6) Datastores that you deploy on block storage devices use
the vSphere Virtual Machine File System (VMFS) format.
VMFS is a special high-performance file system format that
is optimized for storing virtual machines. See
Understanding VMFS Datastores.
NFS (version 3 and 4.1) An NFS client built into ESXi uses the Network File System
(NFS) protocol over TCP/IP to access a designated NFS
volume. The volume is located on a NAS server. The ESXi
host mounts the volume as an NFS datastore, and uses it
for storage needs. ESXi supports versions 3 and 4.1 of the
NFS protocol. See Understanding Network File System
Datastores
Depending on your storage type, some of the following tasks are available for the datastores.
n Create datastores. You can use the vSphere Client to create certain types of datastores.
n Organize the datastores. For example, you can group them into folders according to business
practices. After you group the datastores, you can assign the same permissions and alarms
on the datastores in the group at one time.
n Add the datastores to datastore clusters. A datastore cluster is a collection of datastores with
shared resources and a shared management interface. When you create the datastore
cluster, you can use Storage DRS to manage storage resources. For information about
datastore clusters, see the vSphere Resource Management documentation.
Use the vSphere Client to set up the VMFS datastore in advance on the block-based storage
device that your ESXi host discovers. The VMFS datastore can be extended to span over several
physical storage devices that include SAN LUNs and local storage. This feature allows you to
pool storage and gives you flexibility in creating the datastore necessary for your virtual
machines.
You can increase the capacity of the datastore while the virtual machines are running on the
datastore. This ability lets you add new space to your VMFS datastores as your virtual machine
requires it. VMFS is designed for concurrent access from multiple physical machines and enforces
the appropriate access controls on the virtual machine files.
For all supported VMFS versions, ESXi offers complete read and write support. On the supported VMFS datastores, you can create and power on virtual machines.
The following table compares major characteristics of VMFS5 and VMFS6. For additional information, see Configuration Maximums.
n Access for ESXi hosts version 6.5 and later: VMFS5 Yes, VMFS6 Yes
n Storage devices greater than 2 TB for each VMFS extent: VMFS5 Yes, VMFS6 Yes
n Support for virtual machines with large capacity virtual disks, or disks greater than 2 TB: VMFS5 Yes, VMFS6 Yes
n Datastore Extents. A spanned VMFS datastore must use only homogeneous storage devices,
either 512n, 512e, or 4Kn. The spanned datastore cannot extend over devices of different
formats.
n Block Size. The block size on a VMFS datastore defines the maximum file size and the amount
of space a file occupies. VMFS5 and VMFS6 datastores support the block size of 1 MB.
n Storage vMotion. Storage vMotion supports migration across VMFS, vSAN, and vVols
datastores. vCenter Server performs compatibility checks to validate Storage vMotion across
different types of datastores.
n Storage DRS. VMFS5 and VMFS6 can coexist in the same datastore cluster. However, all
datastores in the cluster must use homogeneous storage devices. Do not mix devices of
different formats within the same datastore cluster.
n Device Partition Formats. Any new VMFS5 or VMFS6 datastore uses GUID partition table
(GPT) to format the storage device. The GPT format enables you to create datastores larger
than 2 TB. If your VMFS5 datastore has been previously upgraded from VMFS3, it continues
to use the master boot record (MBR) partition format, which is characteristic for VMFS3.
Conversion to GPT happens only after you expand the datastore to a size larger than 2 TB.
Note Always have only one VMFS datastore for each LUN.
You can store multiple virtual machines on the same VMFS datastore. Each virtual machine,
encapsulated in a set of files, occupies a separate single directory. For the operating system
inside the virtual machine, VMFS preserves the internal file system semantics, which ensures
correct application behavior and data integrity for applications running in virtual machines.
When you run multiple virtual machines, VMFS provides specific locking mechanisms for the
virtual machine files. As a result, the virtual machines can operate safely in a SAN environment
where multiple ESXi hosts share the same VMFS datastore.
In addition to the virtual machines, the VMFS datastores can store other files, such as the virtual
machine templates and ISO images.
Figure: A VMFS volume stores the virtual disk files (disk1, disk2, disk3) that belong to the virtual machines.
For information on the maximum number of hosts that can connect to a single VMFS datastore,
see the Configuration Maximums document.
To ensure that multiple hosts do not access the same virtual machine at the same time, VMFS
provides on-disk locking.
Sharing the VMFS volume across multiple hosts offers several advantages, for example, the
following:
n You can use VMware Distributed Resource Scheduling (DRS) and VMware High Availability
(HA).
You can distribute virtual machines across different physical servers. That means you run a
mix of virtual machines on each server, so that not all experience high demand in the same
area at the same time. If a server fails, you can restart virtual machines on another physical
server. If the failure occurs, the on-disk lock for each virtual machine is released. For more
information about VMware DRS, see the vSphere Resource Management documentation. For
information about VMware HA, see the vSphere Availability documentation.
n You can use vMotion to migrate running virtual machines from one physical server to another.
For information about migrating virtual machines, see the vCenter Server and Host
Management documentation.
To create a shared datastore, mount the datastore on those ESXi hosts that require the
datastore access. See Mount Datastores.
Metadata is updated each time you perform datastore or virtual machine management
operations. Examples of operations requiring metadata updates include the following:
n Creating a template
When metadata changes are made in a shared storage environment, VMFS uses special locking
mechanisms to protect its data and prevent multiple hosts from concurrently writing to the
metadata.
Depending on its configuration and the type of underlying storage, a VMFS datastore can use
different types of locking mechanisms. It can exclusively use the atomic test and set locking
mechanism (ATS-only), or use a combination of ATS and SCSI reservations (ATS+SCSI).
ATS-Only Mechanism
For storage devices that support T10 standard-based VAAI specifications, VMFS provides ATS
locking, also called hardware assisted locking. The ATS algorithm supports discrete locking per
disk sector. All newly formatted VMFS5 and VMFS6 datastores use the ATS-only mechanism if
the underlying storage supports it, and never use SCSI reservations.
When you create a multi-extent datastore where ATS is used, vCenter Server filters out non-ATS
devices. This filtering allows you to use only those devices that support the ATS primitive.
In certain cases, you might need to turn off the ATS-only setting for a VMFS5 or VMFS6
datastore. For information, see Change Locking Mechanism to ATS+SCSI.
ATS+SCSI Mechanism
A VMFS datastore that supports the ATS+SCSI mechanism is configured to use ATS and
attempts to use it when possible. If ATS fails, the VMFS datastore reverts to SCSI reservations. In
contrast with the ATS locking, the SCSI reservations lock an entire storage device while an
operation that requires metadata protection is performed. After the operation completes, VMFS
releases the reservation and other operations can continue.
Datastores that use the ATS+SCSI mechanism include VMFS5 datastores that were upgraded
from VMFS3. In addition, new VMFS5 or VMFS6 datastores on storage devices that do not
support ATS use the ATS+SCSI mechanism.
If the VMFS datastore reverts to SCSI reservations, you might notice performance degradation
caused by excessive SCSI reservations.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
u To display information related to VMFS locking mechanisms, run the following command:
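For example, the lockmode listing command, which is also used later in this chapter:
esxcli storage vmfs lockmode list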
Results
The table lists items that the output of the command might include.
Typically, VMFS5 datastores that were previously upgraded from VMFS3 continue using the ATS+SCSI locking mechanism. If the datastores are deployed on ATS-enabled hardware, they are
eligible for an upgrade to ATS-only locking. Depending on your vSphere environment, you can
use one of the following upgrade modes:
n The online upgrade to the ATS-only mechanism is available for most single-extent VMFS5
datastores. While you perform the online upgrade on one of the hosts, other hosts can
continue using the datastore.
n The offline upgrade to ATS-only must be used for VMFS5 datastores that span multiple
physical extents. Datastores composed of multiple extents are not eligible for the online
upgrade. These datastores require that no hosts actively use the datastores at the time of
the upgrade request.
Procedure
Procedure
1 Upgrade all hosts that access the VMFS5 datastore to the newest version of vSphere.
2 Determine whether the datastore is eligible for an upgrade of its current locking mechanism
by running the esxcli storage vmfs lockmode list command.
The output indicates whether the datastore is eligible for an upgrade. It also shows the current locking mechanism and the upgrade mode available for the datastore.
3 Depending on the upgrade mode available for the datastore, perform one of the following
actions:
Online: Verify that all hosts have consistent storage connectivity to the VMFS datastore.
Most datastores that do not span multiple extents are eligible for an online upgrade. While you
perform the online upgrade on one of the ESXi hosts, other hosts can continue using the
datastore. The online upgrade completes only after all hosts have closed the datastore.
Prerequisites
If you plan to complete the upgrade of the locking mechanism by putting the datastore into
maintenance mode, disable Storage DRS. This prerequisite applies only to an online upgrade.
Procedure
Upgrade the locking mechanism to ATS-only by running the following command:
esxcli storage vmfs lockmode set -a|--ats -l|--volume-label=VMFS_label -u|--volume-uuid=VMFS_UUID
a Close the datastore on all hosts that have access to the datastore, so that the hosts can
recognize the change.
n Put the datastore into maintenance mode and exit maintenance mode.
b Verify that the Locking Mode status for the datastore changed to ATS-only by running:
c If the Locking Mode displays any other status, for example ATS UPGRADE PENDING,
check which host has not yet processed the upgrade by running:
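A sketch of both checks, assuming the datastore label is VMFS_label; the host list subcommand is an assumption based on standard ESXCLI namespaces, so confirm it with esxcli storage vmfs --help:
# Shows the locking mode of each mounted VMFS datastore
esxcli storage vmfs lockmode list
# Shows which hosts are currently using the volume
esxcli storage vmfs host list -l VMFS_label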
You might need to switch to the ATS+SCSI locking mechanism when, for example, your storage
device is downgraded. Or when firmware updates fail and the device no longer supports ATS.
The downgrade process is similar to the ATS-only upgrade. As with the upgrade, depending on
your storage configuration, you can perform the downgrade in online or offline mode.
Procedure
1 Change the locking mechanism to ATS+SCSI by running the following command:
esxcli storage vmfs lockmode set -s|--scsi -l|--volume-label=VMFS_label -u|--volume-uuid=VMFS_UUID
2 For an online mode, close the datastore on all hosts that have access to the datastore, so
that the hosts can recognize the change.
Sparse disks use the copy-on-write mechanism, in which the virtual disk contains no data, until
the data is copied there by a write operation. This optimization saves storage space.
Depending on the type of your datastore, delta disks use different sparse formats.
SEsparse: On VMFS5, used for virtual disks larger than 2 TB. On VMFS6, used for all delta disks.
VMFSsparse
VMFS5 uses the VMFSsparse format for virtual disks smaller than 2 TB.
VMFSsparse is implemented on top of VMFS. The VMFSsparse layer processes I/Os issued to
a snapshot VM. Technically, VMFSsparse is a redo-log that starts empty, immediately after a
VM snapshot is taken. The redo-log expands to the size of its base vmdk when the entire vmdk is rewritten with new data after the snapshot. This redo-log is a file in the VMFS
datastore. Upon snapshot creation, the base vmdk attached to the VM is changed to the
newly created sparse vmdk.
SEsparse
SEsparse is a default format for all delta disks on the VMFS6 datastores. On VMFS5, SEsparse
is used for virtual disks of the size 2 TB and larger.
SEsparse is a format similar to VMFSsparse with some enhancements. This format is space
efficient and supports the space reclamation technique. With space reclamation, blocks that
the guest OS deletes are marked. The system sends commands to the SEsparse layer in the
hypervisor to unmap those blocks. The unmapping helps to reclaim space allocated by
SEsparse once the guest operating system has deleted that data. For more information about
space reclamation, see Storage Space Reclamation.
Snapshot Migration
You can migrate VMs with snapshots across different datastores. The following considerations
apply:
n If you migrate a VM with the VMFSsparse snapshot to VMFS6, the snapshot format changes
to SEsparse.
n When a VM with a vmdk smaller than 2 TB is migrated to VMFS5, the snapshot format changes to VMFSsparse.
n You cannot mix VMFSsparse redo-logs with SEsparse redo-logs in the same hierarchy.
VMFS5 Datastores
You cannot upgrade a VMFS5 datastore to VMFS6. If you have a VMFS5 datastore in your
environment, create a VMFS6 datastore and migrate virtual machines from the VMFS5 datastore
to VMFS6.
VMFS3 Datastores
ESXi no longer supports VMFS3 datastores. The ESXi host automatically upgrades VMFS3 to
VMFS5 when mounting existing datastores. The host performs the upgrade operation in the
following circumstances:
n At the first boot after an upgrade to ESXi 7.0 or later, when the host mounts all discovered
VMFS3 datastores.
n When you manually mount the VMFS3 datastores that are discovered after the boot, or
mount persistently unmounted datastores.
Typically, the NFS volume or directory is created by a storage administrator and is exported from
the NFS server. You do not need to format the NFS volume with a local file system, such as
VMFS. Instead, you mount the volume directly on the ESXi hosts and use it to store and boot
virtual machines in the same way that you use the VMFS datastores.
In addition to storing virtual disks on NFS datastores, you can use NFS as a central repository for
ISO images, virtual machine templates, and so on. If you use the datastore for the ISO images,
you can connect the CD-ROM device of the virtual machine to an ISO file on the datastore. You
then can install a guest operating system from the ISO file.
n Site Recovery Manager: Supported with NFS 3. Site Recovery Manager does not support NFS 4.1 datastores for array-based replication and vVols replication. You can use Site Recovery Manager with NFS 4.1 datastores for vSphere Replication.
Virtual machines on NFS 4.1 datastores do not support the legacy Fault Tolerance mechanism.
NFS Upgrades
When you upgrade ESXi from a version earlier than 6.5, existing NFS 4.1 datastores automatically
begin supporting functionalities that were not available in the previous ESXi release. These
functionalities include vVols, hardware acceleration, and so on.
ESXi does not support automatic datastore conversions from NFS version 3 to NFS 4.1.
If you want to upgrade your NFS 3 datastore, the following options are available:
n Create the NFS 4.1 datastore, and then use Storage vMotion to migrate virtual machines from
the old datastore to the new one.
n Use conversion methods provided by your NFS storage server. For more information, contact
your storage vendor.
n Unmount the NFS 3 datastore, and then mount it as an NFS 4.1 datastore (see the sketch after this list).
Caution If you use this option, make sure to unmount the datastore from all hosts that have
access to the datastore. The datastore can never be mounted by using both protocols at the
same time.
n NFS Networking
An ESXi host uses TCP/IP network connection to access a remote NAS server. Certain
guidelines and best practices exist for configuring the networking when you use NFS
storage.
n NFS Security
With NFS 3 and NFS 4.1, ESXi supports the AUTH_SYS security. In addition, for NFS 4.1, the
Kerberos security mechanism is supported.
n NFS Multipathing
NFS 4.1 supports multipathing as per the protocol specifications. For NFS 3, multipathing is not applicable.
n NFS Datastores
When you create an NFS datastore, make sure to follow specific guidelines.
n Make sure that the NAS servers you use are listed in the VMware HCL. Use the correct
version for the server firmware.
n Ensure that the NFS volume is exported using NFS over TCP.
n Make sure that the NAS server exports a particular share as either NFS 3 or NFS 4.1. The NAS
server must not provide both protocol versions for the same share. The NAS server must
enforce this policy because ESXi does not prevent mounting the same share through
different NFS versions.
n NFS 3 and non-Kerberos (AUTH_SYS) NFS 4.1 do not support the delegate user functionality
that enables access to NFS volumes using nonroot credentials. If you use NFS 3 or non-
Kerberos NFS 4.1, ensure that each host has root access to the volume. Different storage
vendors have different methods of enabling this functionality, but typically the NAS servers use the no_root_squash option; see the example export entry after this list. If the NAS server does not grant root access, you can still
mount the NFS datastore on the host. However, you cannot create any virtual machines on
the datastore.
n If the underlying NFS volume is read-only, make sure that the volume is exported as a read-
only share by the NFS server. Or mount the volume as a read-only datastore on the ESXi
host. Otherwise, the host considers the datastore to be read-write and might not open the
files.
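As referenced above, on many Linux-based NAS servers the root-access export looks similar to the following /etc/exports entry. The path, network, and exact option names vary by vendor, so treat this only as an illustration:
/export/nfs_datastore 192.168.10.0/24(rw,sync,no_root_squash)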
NFS Networking
An ESXi host uses TCP/IP network connection to access a remote NAS server. Certain guidelines
and best practices exist for configuring the networking when you use NFS storage.
n For network connectivity, use a standard network adapter in your ESXi host.
n ESXi supports Layer 2 and Layer 3 Network switches. If you use Layer 3 switches, ESXi hosts
and NFS storage arrays must be on different subnets and the network switch must handle
the routing information.
n Configure a VMkernel port group for NFS storage. You can create the VMkernel port group
for IP storage on an existing virtual switch (vSwitch) or on a new vSwitch. The vSwitch can be
a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS).
n If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual
switches and physical switches.
NFS 3 locking on ESXi does not use the Network Lock Manager (NLM) protocol. Instead, VMware
provides its own locking protocol. NFS 3 locks are implemented by creating lock files on the NFS
server. Lock files are named .lck-file_id.
Because NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot use
different NFS versions to mount the same datastore on multiple hosts. Accessing the same virtual
disks from two incompatible clients might result in incorrect behavior and cause data corruption.
NFS Security
With NFS 3 and NFS 4.1, ESXi supports the AUTH_SYS security. In addition, for NFS 4.1, the
Kerberos security mechanism is supported.
NFS 3 supports the AUTH_SYS security mechanism. With this mechanism, storage traffic is
transmitted in an unencrypted format across the LAN. Because of this limited security, use NFS
storage on trusted networks only and isolate the traffic on separate physical switches. You can
also use a private VLAN.
NFS 4.1 supports the Kerberos authentication protocol to secure communications with the NFS
server. Nonroot users can access files when Kerberos is used. For more information, see Using
Kerberos for NFS 4.1.
In addition to Kerberos, NFS 4.1 supports traditional non-Kerberos mounts with the AUTH_SYS
security. In this case, use root access guidelines for NFS version 3.
Note You cannot use two security mechanisms, AUTH_SYS and Kerberos, for the same NFS 4.1
datastore shared by multiple hosts.
NFS Multipathing
NFS 4.1 supports multipathing as per the protocol specifications. For NFS 3, multipathing is not applicable.
NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or
hostname for the NFS server, and does not support multiple paths. Depending on your network
infrastructure and configuration, you can use the network stack to configure multiple connections
to the storage targets. In this case, you must have multiple datastores, each datastore using
separate network connections between the host and the storage.
NFS 4.1 provides multipathing for servers that support session trunking. When trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported.
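For example, a sketch of mounting an NFS 4.1 volume through two server IP addresses so that session trunking can use both paths; the addresses and names are placeholders:
esxcli storage nfs41 add -H 192.168.50.20,192.168.60.20 -s /export/nfs41_datastore -v nfs41_datastore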
NFS 3 and NFS 4.1 support hardware acceleration that allows your host to integrate with NAS
devices and use several hardware operations that NAS storage provides. For more information,
see Hardware Acceleration on NAS Devices.
NFS Datastores
When you create an NFS datastore, make sure to follow specific guidelines.
The NFS datastore guidelines and best practices include the following items:
n You cannot use different NFS versions to mount the same datastore on different hosts. NFS 3
and NFS 4.1 clients are not compatible and do not use the same locking protocol. As a result,
accessing the same virtual disks from two incompatible clients might result in incorrect
behavior and cause data corruption.
n NFS 3 and NFS 4.1 datastores can coexist on the same host.
n ESXi cannot automatically upgrade NFS version 3 to version 4.1, but you can use other
conversion methods. For information, see NFS Protocols and ESXi.
n When you mount the same NFS 3 volume on different hosts, make sure that the server and
folder names are identical across the hosts. If the names do not match, the hosts see the
same NFS version 3 volume as two different datastores. This error might result in a failure of
such features as vMotion. An example of such discrepancy is entering filer as the server
name on one host and filer.domain.com on the other. This guideline does not apply to NFS
version 4.1.
n If you use non-ASCII characters to name datastores and virtual machines, make sure that the
underlying NFS server offers internationalization support. If the server does not support
international characters, use only ASCII characters, or unpredictable failures might occur.
Supported services, including NFS, are described in a rule set configuration file in the ESXi firewall
directory /etc/vmware/firewall/. The file contains firewall rules and their relationships with
ports and protocols.
The behavior of the NFS Client rule set (nfsClient) is different from other rule sets.
For more information about firewall configurations, see the vSphere Security documentation.
When you add, mount, or unmount an NFS datastore, the resulting behavior depends on the
version of NFS.
n If the nfsClient rule set is disabled, ESXi enables the rule set and disables the Allow All IP
Addresses policy by setting the allowedAll flag to FALSE. The IP address of the NFS server is
added to the allowed list of outgoing IP addresses.
n If the nfsClient rule set is enabled, the state of the rule set and the allowed IP address policy
are not changed. The IP address of the NFS server is added to the allowed list of outgoing IP
addresses.
Note If you manually enable the nfsClient rule set or manually set the Allow All IP Addresses
policy, either before or after you add an NFS v3 datastore to the system, your settings are
overridden when the last NFS v3 datastore is unmounted. The nfsClient rule set is disabled
when all NFS v3 datastores are unmounted.
When you remove or unmount an NFS v3 datastore, ESXi performs one of the following actions.
n If none of the remaining NFS v3 datastores are mounted from the server of the datastore
being unmounted, ESXi removes the server's IP address from the list of outgoing IP
addresses.
n If no mounted NFS v3 datastores remain after the unmount operation, ESXi disables the
nfsClient firewall rule set.
Procedure
4 Scroll down to an appropriate version of NFS to make sure that the port is open.
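You can also inspect the NFS-related rule sets from the ESXi Shell; a short sketch:
# Shows whether the NFS client rule sets are enabled
esxcli network firewall ruleset list | grep -i nfs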
n Use Cisco's Hot Standby Router Protocol (HSRP) in the IP router. If you are using a non-Cisco router, use the Virtual Router Redundancy Protocol (VRRP) instead.
n Follow Routed NFS L3 recommendations offered by storage vendor. Contact your storage
vendor for details.
n If you are planning to use systems with top-of-rack switches or switch-dependent I/O device
partitioning, contact your system vendor for compatibility and support.
n The environment supports only the NFS protocol. Do not use other storage protocols such as
FCoE over the same physical network.
n The NFS traffic in this environment can be routed only over a LAN. Other environments such
as WAN are not supported.
The RPCSEC_GSS Kerberos mechanism is an authentication service. It allows an NFS 4.1 client
installed on ESXi to prove its identity to an NFS server before mounting an NFS share. The
Kerberos security uses cryptography to work across an insecure network connection.
The ESXi implementation of Kerberos for NFS 4.1 provides two security models, krb5 and krb5i, that offer different levels of security.
n Kerberos for authentication only (krb5) supports identity verification.
n Kerberos for authentication and data integrity (krb5i), in addition to identity verification, provides data integrity services. These services help to protect the NFS traffic from tampering by checking data packets for any potential modifications.
Kerberos supports cryptographic algorithms that prevent unauthorized users from gaining
access to NFS traffic. The NFS 4.1 client on ESXi attempts to use either the AES256-CTS-HMAC-
SHA1-96 or AES128-CTS-HMAC-SHA1-96 algorithm to access a share on the NAS server. Before
using your NFS 4.1 datastores, make sure that AES256-CTS-HMAC-SHA1-96 or AES128-CTS-HMAC-SHA1-96 is enabled on the NAS server.
The following table compares the Kerberos security levels that ESXi supports.
n Kerberos for authentication only (krb5), integrity checksum for the RPC header: supported with DES in ESXi 6.0, and with AES in ESXi 6.5 and later.
n Kerberos for authentication and data integrity (krb5i), integrity checksum for the RPC header and data: not available in ESXi 6.0, supported with AES in ESXi 6.5 and later.
n When multiple ESXi hosts share the NFS 4.1 datastore, you must use the same Active
Directory credentials for all hosts that access the shared datastore. To automate the
assignment process, set the user in host profiles and apply the profile to all ESXi hosts.
n You cannot use two security mechanisms, AUTH_SYS and Kerberos, for the same NFS 4.1
datastore shared by multiple hosts.
Prerequisites
n Familiarize yourself with the guidelines in NFS Storage Guidelines and Requirements.
n For details on configuring NFS storage, consult your storage vendor documentation.
Procedure
1 On the NFS server, configure an NFS volume and export it to be mounted on the ESXi hosts.
a Note the IP address or the DNS name of the NFS server and the full path, or folder name,
for the NFS share.
For NFS 4.1, you can collect multiple IP addresses or DNS names to use the multipathing
support that the NFS 4.1 datastore provides.
b If you plan to use Kerberos authentication with NFS 4.1, specify the Kerberos credentials
to be used by ESXi for authentication.
2 On each ESXi host, configure a VMkernel Network port for NFS traffic.
3 If you plan to use Kerberos authentication with the NFS 4.1 datastore, configure the ESXi
hosts for Kerberos authentication.
What to do next
When multiple ESXi hosts share the NFS 4.1 datastore, you must use the same Active Directory
credentials for all hosts that access the shared datastore. You can automate the assignment
process by setting the user in host profiles and applying the profile to all ESXi hosts.
Prerequisites
n Make sure that Microsoft Active Directory (AD) and NFS servers are configured to use
Kerberos.
n Make sure that the NFS server exports are configured to grant full access to the Kerberos
user.
Procedure
What to do next
After you configure your host for Kerberos, you can create an NFS 4.1 datastore with Kerberos
enabled.
Procedure
The following task describes how to synchronize the ESXi host with the NTP server.
The best practice is to use the Active Directory domain server as the NTP server.
Procedure
5 Click OK.
Prerequisites
Set up an AD domain and a domain administrator account with the rights to add hosts to the
domain.
Procedure
Files stored in all Kerberos datastores are accessed using these credentials.
n Gathers NFS statistics to investigate problems when you deploy a new configuration, such as
a new NFS server or network, in the NFS environment.
n Publishes latency statistics about the success and failure of NFS operations.
No option: Obtain both NFS statistics and RPC statistics for all NFS datastores.
-v DSNAME1, DSNAME2, ...: Display both NFS and RPC statistics for the specified NFS datastores. Use this option in conjunction with the type of the NFS datastore, for example, -3 or -4.
Creating Datastores
You use the New Datastore wizard to create your datastores. Depending on the type of your
storage and storage needs, you can create a VMFS, NFS, or vVols datastore.
A vSAN datastore is automatically created when you enable vSAN. For information, see the
Administering VMware vSAN documentation.
You can also use the New Datastore wizard to manage VMFS datastore copies.
Prerequisites
2 To discover newly added storage devices, perform a rescan. See Storage Rescan Operations.
3 Verify that storage devices you are planning to use for your datastores are available. See
Storage Device Characteristics.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and if necessary, select the placement location for the datastore.
Important The device you select must not have any values displayed in the Snapshot
Volume column. If a value is present, the device contains a copy of an existing VMFS
datastore. For information on managing datastore copies, see Managing Duplicate VMFS
Datastores.
Option Description
VMFS6: Default format on all hosts that support VMFS6. ESXi hosts of version 6.0 or earlier cannot recognize the VMFS6 datastore.
VMFS5: A VMFS5 datastore supports access by ESXi hosts of version 6.7 or earlier.
Option Description
Use all available partitions: Dedicates the entire disk to a single VMFS datastore. If you select this option, all file systems and data currently stored on this device are destroyed.
Use free space: Deploys a VMFS datastore in the remaining free space of the disk.
b If the space allocated for the datastore is excessive for your purposes, adjust the capacity
values in the Datastore Size field.
c For VMFS6, specify the block size and define space reclamation parameters. See Space
Reclamation Requests from VMFS Datastores.
8 In the Ready to Complete page, review the datastore configuration information and click
Finish.
Results
The datastore on the SCSI-based storage device is created. It is available to all hosts that have
access to the device.
What to do next
After you create the VMFS datastore, you can perform the following tasks:
n Change the capacity of the datastore. See Increase VMFS Datastore Capacity.
n Enable shared vmdk support. See Enable or Disable Support for Clustered Virtual Disks on
the VMFS6 Datastore.
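To confirm from the command line that the new datastore is mounted, you can also list the host's file systems. This is a quick check, not a required step:
esxcli storage filesystem list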
Prerequisites
n If you plan to use Kerberos authentication with the NFS 4.1 datastore, make sure to configure
the ESXi hosts for Kerberos authentication.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
n NFS 3
n NFS 4.1
Important If multiple hosts access the same datastore, you must use the same protocol on
all hosts.
Option Description
Datastore name: The system enforces a 42 character limit for the datastore name.
Server: The server name or IP address. You can use IPv6 or IPv4 formats. With NFS 4.1, you can add multiple IP addresses or server names if the NFS server supports trunking. The ESXi host uses these values to achieve multipathing to the NFS server mount point.
5 Select Mount NFS read only if the volume is exported as read-only by the NFS server.
6 To use Kerberos security with NFS 4.1, enable Kerberos and select an appropriate Kerberos
model.
Option Description
Use Kerberos for authentication only (krb5): Supports identity verification.
Use Kerberos for authentication and data integrity (krb5i): In addition to identity verification, provides data integrity services. These services help to protect the NFS traffic from tampering by checking data packets for any potential modifications.
If you do not enable Kerberos, the datastore uses the default AUTH_SYS security.
7 If you are creating a datastore at the data center or cluster level, select hosts that mount the
datastore.
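As an alternative to the wizard, you can mount an NFS 4.1 volume with ESXCLI. The following is a minimal sketch; the server addresses, export path, and datastore name are placeholders:
esxcli storage nfs41 add --hosts=192.0.2.10,192.0.2.11 --share=/export/ds01 --volume-name=nfs41-ds01
For NFS 3, the equivalent command is esxcli storage nfs add, which takes a single --host value.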
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and select a backing storage container from the list of storage
containers.
Make sure to use a name that does not duplicate another datastore name in your data
center environment.
If you mount the same vVols datastore to several hosts, the name of the datastore must be
consistent across all hosts.
What to do next
After you create the vVols datastore, you can perform such datastore operations as renaming
the datastore, browsing datastore files, unmounting the datastore, and so on.
Each VMFS datastore created on a storage device has a unique signature, also called UUID, that
is stored in the file system superblock. When the storage device is replicated or its snapshot is
taken on the array side, the resulting device copy is identical, byte for byte, to the original
device. For example, if the original storage device contains a VMFS datastore with UUID X, the
copy appears to contain a datastore copy with the same UUID X.
In addition to LUN snapshots and replications, certain device operations, such as LUN ID changes,
might produce a copy of the original datastore.
ESXi can detect the VMFS datastore copy. You can mount the datastore copy with its original
UUID or change the UUID. The process of changing the UUID is called datastore resignaturing.
Whether you select resignaturing or mounting without resignaturing depends on how the LUNs
are masked in the storage environment. If your hosts can see both copies of the LUN, then
resignaturing is the optimal method.
You can keep the signature if, for example, you maintain synchronized copies of virtual machines
at a secondary site as part of a disaster recovery plan. In the event of a disaster at the primary
site, you mount the datastore copy and power on the virtual machines at the secondary site.
When resignaturing a VMFS copy, ESXi assigns a new signature (UUID) to the copy, and mounts
the copy as a datastore distinct from the original. All references to the original signature in virtual
machine configuration files are updated.
n After resignaturing, the storage device replica that contained the VMFS copy is no longer
treated as a replica.
n A spanned datastore can be resignatured only if all its extents are online.
n The resignaturing process is fault tolerant. If the process is interrupted, you can resume it
later.
n You can mount the new VMFS datastore without a risk of its UUID conflicting with UUIDs of
any other datastore from the hierarchy of device snapshots.
Prerequisites
n Perform a storage rescan on your host to update the view of storage devices presented to
the host.
n Unmount the original VMFS datastore that has the same UUID as the copy you plan to mount.
You can mount the VMFS datastore copy only if it does not collide with the original VMFS
datastore.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and if necessary, select the placement location for the datastore.
5 From the list of storage devices, select the device that has a specific value displayed in the
Snapshot Volume column.
The value present in the Snapshot Volume column indicates that the device contains a copy of
an existing VMFS datastore.
Option Description
Mount with resignaturing: Under Mount Options, select Assign a New Signature and click Next.
Mount without resignaturing: Under Mount Options, select Keep Existing Signature.
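You can also perform these operations with ESXCLI. A minimal sketch, where the volume label of the original datastore is a placeholder:
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot resignature --volume-label=VMFS_DS01
To mount the copy while keeping the existing signature instead, esxcli storage vmfs snapshot mount accepts the same label or UUID arguments.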
If a shared datastore has powered on virtual machines and becomes 100% full, you can increase
the datastore capacity. You can perform this action only from the host where the powered on
virtual machines are registered.
Depending on your storage configuration, you can use one of the following methods to increase
the datastore capacity. You do not need to power off virtual machines when using either method
of increasing the datastore capacity.
Increase the size of an expandable datastore. The datastore is considered expandable when
the backing storage device has free space immediately after the datastore extent.
Add an Extent
Increase the capacity of an existing VMFS datastore by adding new storage devices to the
datastore. The datastore can span over multiple storage devices, yet appear as a single
volume.
The spanned VMFS datastore can use any or all its extents at any time. It does not need to fill
up a particular extent before using the next one.
Note Datastores that support only the hardware assisted locking, also called the atomic test
and set (ATS) mechanism, cannot span over non-ATS devices. For more information, see
VMFS Locking Mechanisms.
Prerequisites
You can increase the datastore capacity if the host storage meets one of the following
conditions:
n The backing device for the existing datastore has enough free space.
Procedure
Option Description
To expand an existing datastore extent: Select the device for which the Expandable column reads YES.
To add an extent: Select the device for which the Expandable column reads NO.
Depending on the current layout of the disk and on your previous selections, the menu items
you see might vary.
Use free space to expand the datastore: Expands an existing extent to a required capacity.
Use free space: Deploys an extent in the remaining free space of the disk. This menu item is available only when you are adding an extent.
Use all available partitions: Dedicates the entire disk to a single extent. This menu item is available only when you are adding an extent and when the disk you are formatting is not blank. The disk is reformatted, and the datastores and any data that it contains are erased.
The minimum extent size is 1.3 GB. By default, the entire free space on the storage device is
available.
7 Click Next.
8 Review the proposed layout and the new configuration of your datastore, and click Finish.
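To verify the new capacity and extent layout from the command line, you can query the volume with vmkfstools. The datastore name is a placeholder:
vmkfstools -P -h /vmfs/volumes/my-datastore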
For information on the use of clustered virtual disks in VM clusters, see the Setup for Windows
Server Failover Clustering documentation.
Prerequisites
Follow these guidelines when you use a datastore for clustered virtual disks:
n The storage array must support ATS and the Write Exclusive – All Registrants (WEAR) SCSI-3
reservation type.
n ESXi supports only Fibre Channel arrays for this type of configuration.
n Only VMFS6 datastores support clustered disks. The datastores you use cannot be expanded
or span multiple extents.
n Storage devices must be claimed by the NMP. ESXi does not support third-party plug-ins
(MPPs) in the clustered virtual disk configurations.
n Make sure that the virtual disks you use for clustering are in the Thick Provision Eager Zeroed
format.
Procedure
3 Under Datastore Capabilities, click one of the following options next to the Clustered VMDK
item.
Option Description
Enable: To enable support for clustered virtual disks on the datastore. After you enable the support, you can place the clustered virtual disks on this VMFS datastore.
Disable: To disable the support. Before disabling, make sure to power off all virtual machines with the clustered virtual disks.
n Unmount Datastores
When you unmount a datastore, it remains intact, but can no longer be seen from the hosts
that you specify. The datastore continues to appear on other hosts, where it remains
mounted.
n Mount Datastores
You can mount a datastore you previously unmounted. You can also mount a datastore on
additional hosts, so that it becomes a shared datastore.
Note If the host is managed by vCenter Server, you cannot rename the datastore by directly
accessing the host from the VMware Host Client. You must rename the datastore from vCenter
Server.
Procedure
Results
The new name appears on all hosts that have access to the datastore.
Unmount Datastores
When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that
you specify. The datastore continues to appear on other hosts, where it remains mounted.
Do not perform any configuration operations that might result in I/O to the datastore while the
unmounting is in progress.
Note Make sure that the datastore is not used by vSphere HA Heartbeating. vSphere HA
Heartbeating does not prevent you from unmounting the datastore. However, if the datastore is
used for heartbeating, unmounting it might cause the host to fail and restart any active virtual
machine.
Prerequisites
When appropriate, before unmounting datastores, make sure that the following prerequisites are
met:
Procedure
3 If the datastore is shared, select the hosts from which to unmount the datastore.
Results
After you unmount a VMFS datastore from all hosts, the datastore is marked as inactive. If you
unmount an NFS or a vVols datastore from all hosts, the datastore disappears from the
inventory. You can mount the unmounted VMFS datastore. To mount the NFS or vVols datastore
that has been removed from the inventory, use the New Datastore wizard.
What to do next
If you unmounted the VMFS datastore as a part of a storage removal procedure, you can now
detach the storage device that is backing the datastore. See Detach Storage Devices.
Mount Datastores
You can mount a datastore you previously unmounted. You can also mount a datastore on
additional hosts, so that it becomes a shared datastore.
A VMFS datastore that has been unmounted from all hosts remains in inventory, but is marked as
inaccessible. You can use this task to mount the VMFS datastore to a specified host or multiple
hosts.
If you have unmounted an NFS or a vVols datastore from all hosts, the datastore disappears from
the inventory. To mount the NFS or vVols datastore that has been removed from the inventory,
use the New Datastore wizard.
A datastore of any type that is unmounted from some hosts while being mounted on others is
shown as active in the inventory.
Procedure
2 Right-click the datastore to mount and select one of the following options:
n Mount Datastore
3 Select the hosts that should access the datastore and click OK.
4 To list all hosts that share the datastore, navigate to the datastore, and click the Hosts tab.
Note The delete operation for the datastore permanently deletes all files associated with virtual
machines on the datastore. Although you can delete the datastore without unmounting, it is
preferable that you unmount the datastore first.
Prerequisites
n Make sure that the datastore is not used for vSphere HA heartbeating.
Procedure
Procedure
2 Explore the contents of the datastore by navigating to existing folders and files.
Copy to: Copy selected folders or files to a new location, either on the same datastore or on a different datastore.
Move to: Move selected folders or files to a new location, either on the same datastore or on a different datastore.
Inflate: Convert a selected thin virtual disk to thick. This option applies only to thin-provisioned disks.
In addition to their traditional use as storage for virtual machines files, datastores can serve to
store data or files related to virtual machines. For example, you can upload ISO images of
operating systems from a local computer to a datastore on the host. You then use these images
to install guest operating systems on the new virtual machines.
Note You cannot upload files directly to the vVols datastores. You must first create a folder on
the vVols datastore, and then upload the files into the folder. The folders created in vVols
datastores for block storage have a limited storage capacity of 4 GB. The vVols datastore
supports direct uploads of folders.
Prerequisites
Procedure
Option Description
Upload a file:
a Select the target folder and click Upload Files.
b Locate the item to upload on the local computer and click Open.
Upload a folder (available only in the vSphere Client):
a Select the datastore or the target folder and click Upload Folders.
b Locate the item to upload on the local computer and click OK.
4 Refresh the datastore file browser to see the uploaded files or folders on the list.
What to do next
You might experience problems when deploying an OVF template that you previously exported
and then uploaded to a datastore. For details and a workaround, see the VMware Knowledge
Base article 2117310.
Prerequisites
Procedure
Note Virtual disk files are moved or copied without format conversion. If you move a virtual disk
to a datastore that belongs to a host different from the source host, you might need to convert
the virtual disk. Otherwise, you might not be able to use the disk.
Prerequisites
Procedure
5 (Optional) Select Overwrite files and folders with matching names at the destination.
6 Click OK.
Prerequisites
Procedure
You use the datastore browser to inflate the thin virtual disk.
Prerequisites
n Make sure that the datastore where the virtual machine resides has enough space.
n Remove snapshots.
Procedure
1 In the vSphere Client, navigate to the folder of the virtual disk you want to inflate.
2 Expand the virtual machine folder and browse to the virtual disk file that you want to convert.
The file has the .vmdk extension and is marked with the virtual disk icon.
Note The option might not be available if the virtual disk is thick or when the virtual machine
is running.
Results
The inflated virtual disk occupies the entire datastore space originally provisioned to it.
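If you prefer the command line, you can perform the same conversion with vmkfstools while the virtual machine is powered off. The paths in this sketch are placeholders:
vmkfstools --inflatedisk /vmfs/volumes/my-datastore/my-vm/my-vm.vmdk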
Prerequisites
Before you change the device filters, consult with the VMware support team.
Procedure
In the Name and Value text boxes at the bottom of the page, enter appropriate information.
Name Value
config.vpxd.filter.vmfsFilter: False
config.vpxd.filter.rdmFilter: False
config.vpxd.filter.sameHostsAndTransportsFilter: False
config.vpxd.filter.hostRescanFilter: False
Note If you turn off this filter, your hosts continue to perform a rescan each time you present a new LUN to a host or a cluster.
Storage Filtering
vCenter Server provides storage filters to help you avoid storage device corruption or
performance degradation that might be caused by an unsupported use of storage devices.
These filters are available by default.
config.vpxd.filter.vmfsFilter (VMFS Filter): Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with another VMFS datastore or to be used as an RDM.
config.vpxd.filter.rdmFilter (RDM Filter): Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM.
For your virtual machines to access the same LUN, the virtual machines must share the same RDM mapping file. For information about this type of configuration, see the vSphere Resource Management documentation.
config.vpxd.filter.sameHostsAndTransportsFilter (Same Hosts and Transports Filter): Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility. Prevents you from adding the following LUNs as extents:
n LUNs not exposed to all hosts that share the original VMFS datastore.
n LUNs that use a storage type different from the one the original VMFS datastore uses. For example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device.
config.vpxd.filter.hostRescanFilter (Host Rescan Filter): Automatically rescans and updates VMFS datastores after you perform datastore management operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server.
Note If you present a new LUN to a host or a cluster, the hosts automatically perform a rescan no matter whether you have the Host Rescan Filter on or off.
Prerequisites
Procedure
2 Log in to your virtual machine and configure the disks as dynamic mirrored disks.
Name Value
scsi#.returnNoConnectDuringAPD True
scsi#.returnBusyOnNoConnectStatus False
e If you use ESXi version 6.7 or later, include an additional parameter for each virtual disk
participating in the software RAID-1 configuration.
The parameter prevents guest OS I/O failures when a storage device fails.
Name Value
scsi#:1.passthruTransientErrors True
scsi#:2.passthruTransientErrors True
f Click OK.
Typically, a partition to collect diagnostic information, also called a core dump, is created on a
local storage device during ESXi installation. You can also configure ESXi Dump Collector to
keep core dumps on a network server. For information on setting up ESXi Dump Collector, see the
VMware ESXi Installation and Setup documentation.
Another option is to use a file on a VMFS datastore to collect the diagnostic information.
Note VMFS datastores on software iSCSI do not support core dump files.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
1 Create a VMFS datastore core dump file by running the following command:
esxcli system coredump file add
The command takes the following options, but they are not required and can be omitted:
Option Description
--datastore | -d datastore_UUID or datastore_name: Specify the datastore for the dump file. If not provided, the system selects a datastore of sufficient size.
--file | -f file_name: Specify the file name of the dump file. If not provided, the system creates a unique name for the file.
--size | -s file_size_MB: Set the size in MB of the dump file. If not provided, the system creates a file of the size appropriate for the memory installed in the host.
Option Description
--enable | -e: Enable or disable the dump file. This option cannot be specified when unconfiguring the dump file.
--path | -p: The path of the core dump file to use. The file must be pre-allocated.
--smart | -s: This flag can be used only with --enable | -e=true. It causes the file to be selected using the smart selection algorithm.
For example,
esxcli system coredump file set --smart --enable true
Output similar to the following indicates that the core dump file is active and configured:
What to do next
For information about other commands you can use to manage the core dump files, see the
ESXCLI Reference documentation.
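For example, to check which core dump files exist on VMFS datastores, you can list them; esxcli system coredump file get reports the file currently in use:
esxcli system coredump file list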
You can temporarily deactivate the core dump file. If you do not plan to use the deactivated file,
you can remove it from the VMFS datastore. To remove the file that has not been deactivated,
you can use the esxcli system coredump file remove command with the --force | -F
parameter.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Option Description
--file | -f: Enter the name of the dump file to be removed. If you do not enter the name, the command removes the default configured core dump file.
--force | -F: Deactivate and unconfigure the dump file being removed. This option is required if the file has not been previously deactivated and is active.
Results
The core dump file becomes disabled and is removed from the VMFS datastore.
Problem
You can check metadata consistency when you experience problems with a VMFS datastore or a
virtual flash resource. For example, perform a metadata check if one of the following occurs:
n You see metadata errors in the vmkernel.log file similar to the following:
n You see corruption being reported for a datastore in events tabs of vCenter Server.
Solution
To check metadata consistency, run VOMA from the CLI of an ESXi host. VOMA can be used to
check and fix minor inconsistency issues for a VMFS datastore or logical volumes that back the
VMFS datastore.
Metadata check and fix: Examples of metadata check and fix include, but are not limited to, the following:
n Validation of the VMFS volume header for basic metadata consistency.
n Checking consistency of VMFS resource files (system files).
n Checking the pathname and connectivity of all files.
Affinity metadata check and fix: To enable the affinity check for VMFS6, use the -a|--affinityChk option. Examples of affinity metadata check and fix include the following:
n Affinity flags in resource types and FS3_ResFileMetadata.
n Validation of affinity flags in SFB RC meta (FS3_ResourceClusterMD) for VMFS6.
n Validation of all affinityInfo entries in rcMeta of RC, including the overflow key, to make sure that no invalid entries exist. Checking for missing entries.
Directory validation: VOMA can detect and correct the following errors:
n Directory hash block corruption.
n Alloc map corruption.
n Link block corruption.
n Directory entry block corruption.
Based on the nature of the corruption, VOMA can either fix only the corrupted entries or fully reconstruct the hash block, alloc map blocks, and link blocks.
Lost and found files: During a file system check, VOMA can find files that are not referenced anywhere in the file system. These orphaned files are valid and complete, but do not have a name or directory entry on the system. If VOMA encounters orphaned files during scanning, it creates a directory named lost+found at the root of the volume to store the orphaned files. The names of the files use the Filesequence-number format.
Command options that the VOMA tool takes include the following.
-d|--device: Indicate the device or disk to inspect. Make sure to provide the absolute path to the device partition backing the VMFS datastore. If the datastore spans multiple devices, provide the UUID of the head extent.
For example, voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxx:x
If you use the -x|--extractDump command, enter multiple device paths, with a partition qualifier, separated with a comma. The number of device paths you enter equals the number of spanned devices.
-D|--dumpfile: Indicate the dump file to save the collected metadata dump.
-Y: Indicate that you run VOMA without using PE tables for address resolution.
-Z|--file: Indicate that you run VOMA on extracted device files.
Example
Prerequisites
Power off any virtual machines that are running or migrate them to a different datastore.
Procedure
1 Obtain the name and partition number of the device that backs the VMFS datastore that you
want to check.
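One way to obtain this information, assuming ESXCLI access to the host, is to list the VMFS extents:
esxcli storage vmfs extent list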
The Device Name and Partition columns in the output identify the device. For example:
Provide the absolute path to the device partition that backs the VMFS datastore, and provide
a partition number with the device name. For example:
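A minimal sketch of the check command, with a placeholder device name and partition number:
voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxx:1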
The output lists possible errors. For example, the following output indicates that the
heartbeat address is invalid.
XXXXXXXXXXXXXXXXXXXXXXX
Phase 2: Checking VMFS heartbeat region
ON-DISK ERROR: Invalid HB address
Phase 3: Checking all file descriptors.
Phase 4: Checking pathname and connectivity.
Phase 5: Checking resource reference counts.
The pointer block cache is a host-wide cache that is independent from VMFS. The cache is
shared across all datastores that are accessed from the same ESXi host.
The size of the pointer block cache is controlled by /VMFS3/MinAddressableSpaceTB and /VMFS3/
MaxAddressableSpaceTB. You can configure the minimum and maximum sizes on each ESXi host.
/VMFS3/MinAddressableSpaceTB
The minimum value is the minimum amount of memory that the system guarantees to the pointer
block cache. For example, 1 TB of open file space requires approximately 4 MB of memory.
The default value is 10 TB.
/VMFS3/MaxAddressableSpaceTB
The parameter defines the maximum limit of pointer blocks that can be cached in memory.
Default value is 32 TB. Maximum value is 128 TB. Typically, the default value of the /VMFS3/
MaxAddressableSpaceTB parameter is adequate.
However, as the size of the open vmdk files increases, the number of pointer blocks related
to those files also increases. If the increase causes any performance degradation, you can
adjust the parameter to its maximum value to provide more space for the pointer block
cache. Base the maximum size of the pointer block cache on the working set, or the active
pointer blocks required.
The /VMFS3/MaxAddressableSpaceTB parameter also controls the growth of the pointer block
cache. When the size of the pointer block cache approaches the configured maximum size, a
pointer block eviction process starts. The mechanism leaves active pointer blocks, but
removes non-active or less active blocks from the cache, so that space can be reused.
To change the values for the pointer block cache, use the Advanced System Settings dialog box
of the vSphere Client or the esxcli system settings advanced set -o command.
You can use the esxcli storage vmfs pbcache command to obtain information about the size of
the pointer block cache and other statistics. This information assists you in adjusting minimum
and maximum sizes of the pointer block cache, so that you can get maximum performance.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
u To obtain or reset the pointer block cache statistics, use the following command:
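For example, to display the current statistics, run the following; use reset instead of get to clear the counters:
esxcli storage vmfs pbcache get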
Caution Changing advanced options is considered unsupported. Typically, the default settings
produce the optimum result. Change the advanced options only when you get specific
instructions from VMware technical support or a knowledge base article.
Procedure
Option Description
VMFS3.MinAddressableSpaceTB: Minimum size of all open files that the VMFS cache guarantees to support.
VMFS3.MaxAddressableSpaceTB: Maximum size of all open files that the VMFS cache supports before eviction starts.
6 Click OK.
Example: Use the esxcli Command to Change the Pointer Block Cache
You can also use the esxcli system settings advanced set -o to modify the size of the
pointer block cache. The following example describes how to set the size to its maximum value of
128 TB.
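A sketch of the command, using the option name and value described above:
esxcli system settings advanced set -o /VMFS3/MaxAddressableSpaceTB -i 128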
If a failure of any element in the SAN network, such as an adapter, switch, or cable, occurs, ESXi
can switch to another viable physical path. This process of path switching to avoid failed
components is known as path failover.
In addition to path failover, multipathing provides load balancing. Load balancing is the process
of distributing I/O loads across multiple physical paths. Load balancing reduces or removes
potential bottlenecks.
Note Virtual machine I/O might be delayed for up to 60 seconds while path failover takes place.
With these delays, the SAN can stabilize its configuration after topology changes. In general, the
I/O delays might be longer on active-passive arrays and shorter on active-active arrays.
To support multipathing, your host typically has two or more HBAs available. This configuration
supplements the SAN multipathing configuration, which generally provides one or more switches
in the SAN fabric and one or more storage processors on the storage array device itself.
In the following illustration, multiple physical paths connect each server with the storage device.
For example, if HBA1 or the link between HBA1 and the FC switch fails, HBA2 takes over and
provides the connection. The process of one HBA taking over for another is called HBA failover.
(Illustration: each host connects through HBA1 and HBA2 to two SAN switches, which connect to storage processors SP1 and SP2 on the storage array.)
Similarly, if SP1 fails or the links between SP1 and the switches breaks, SP2 takes over. SP2
provides the connection between the switch and the storage device. This process is called SP
failover. VMware ESXi supports both HBA and SP failovers.
n ESXi does not support multipathing when you combine an independent hardware adapter
with software iSCSI or dependent iSCSI adapters in the same host.
n Multipathing between software and dependent adapters within the same host is supported.
n On different hosts, you can mix both dependent and independent adapters.
The following illustration shows multipathing setups possible with different types of iSCSI
initiators.
(Illustration: Host 1 with hardware iSCSI adapters HBA1 and HBA2, and Host 2 with NIC1 and NIC2 bound to the software iSCSI adapter, both connecting through the IP network to the SP on the iSCSI storage.)
On the illustration, Host1 has two hardware iSCSI adapters, HBA1 and HBA2, that provide two
physical paths to the storage system. Multipathing plug-ins on your host, whether the VMkernel
NMP or any third-party MPPs, have access to the paths by default. The plug-ins can monitor
health of each physical path. If, for example, HBA1 or the link between HBA1 and the network
fails, the multipathing plug-ins can switch the path over to HBA2.
Multipathing plug-ins do not have direct access to physical NICs on your host. As a result, for this
setup, you first must connect each physical NIC to a separate VMkernel port. You then associate
all VMkernel ports with the software iSCSI initiator using a port binding technique. Each VMkernel
port connected to a separate NIC becomes a different path that the iSCSI storage stack and its
storage-aware multipathing plug-ins can use.
For information about configuring multipathing for software iSCSI, see Setting Up Network for
iSCSI and iSER.
When using one of these storage systems, your host does not see multiple ports on the storage
and cannot choose the storage port it connects to. These systems have a single virtual port
address that your host uses to initially communicate. During this initial communication, the
storage system can redirect the host to communicate with another port on the storage system.
The iSCSI initiators in the host obey this reconnection request and connect with a different port
on the system. The storage system uses this technique to spread the load across available ports.
If the ESXi host loses connection to one of these ports, it automatically attempts to reconnect
with the virtual port of the storage system, and should be redirected to an active, usable port.
This reconnection and redirection happens quickly and generally does not disrupt running virtual
machines. These storage systems can also request that iSCSI initiators reconnect to the system,
to change which storage port they are connected to. This allows the most effective use of the
multiple ports.
The Port Redirection illustration shows an example of port redirection. The host attempts to
connect to the 10.0.0.1 virtual port. The storage system redirects this request to 10.0.0.2. The
host connects with 10.0.0.2 and uses this port for I/O communication.
Note The storage system does not always redirect connections. The port at 10.0.0.1 could be
used for traffic, also.
(Illustration: Port Redirection)
If the port on the storage system that is acting as the virtual port becomes unavailable, the
storage system reassigns the address of the virtual port to another port on the system. Port
Reassignment shows an example of this type of port reassignment. In this case, the virtual port
10.0.0.1 becomes unavailable and the storage system reassigns the virtual port IP address to a
different port. The second port responds to both addresses.
(Illustration: Port Reassignment)
With this form of array-based failover, you can have multiple paths to the storage only if you use
multiple ports on the ESXi host. These paths are active-active. For additional information, see
iSCSI Session Management.
When a path fails, storage I/O might pause for 30-60 seconds until your host determines that the
link is unavailable and performs the failover. If you attempt to display the host, its storage
devices, or its adapters, the operation might appear to stall. Virtual machines with their disks
installed on the SAN can appear unresponsive. After the failover, I/O resumes normally and the
virtual machines continue to run.
A Windows virtual machine might interrupt the I/O and eventually fail when failovers take too
long. To avoid the failure, set the disk timeout value for the Windows virtual machine to at least
60 seconds.
This procedure explains how to change the timeout value by using the Windows registry.
Prerequisites
Procedure
4 Double-click TimeOutValue.
5 Set the value data to 0x3c (hexadecimal) or 60 (decimal) and click OK.
After you make this change, Windows waits at least 60 seconds for delayed disk operations
to finish before it generates errors.
To manage multipathing, ESXi uses a special VMkernel layer, Pluggable Storage Architecture
(PSA). The PSA is an open and modular framework that coordinates various software
modules responsible for multipathing operations. These modules include generic multipathing
modules that VMware provides, NMP and HPP, and third-party MPPs.
The NMP is the VMkernel multipathing module that ESXi provides by default. The NMP
associates physical paths with a specific storage device and provides a default path selection
algorithm based on the array type. The NMP is extensible and manages additional
submodules, called Path Selection Policies (PSPs) and Storage Array Type Policies (SATPs).
PSPs and SATPs can be provided by VMware, or by a third party.
The PSPs are submodules of the VMware NMP. PSPs are responsible for selecting a physical
path for I/O requests.
The SATPs are submodules of the VMware NMP. SATPs are responsible for array-specific
operations. The SATP can determine the state of a particular array-specific path, perform a
path activation, and detect any path errors.
The PSA offers a collection of VMkernel APIs that third parties can use to create their own
multipathing plug-ins (MPPs). The modules provide specific load balancing and failover
functionalities for a particular storage array. The MPPs can be installed on the ESXi host. They
can run in addition to the VMware native modules, or as their replacement.
The HPP replaces the NMP for high-speed devices, such as NVMe. The HPP can improve the
performance of ultra-fast flash devices that are installed locally on your ESXi host, and is the
default plug-in that claims NVMe-oF targets.
To support multipathing, the HPP uses the Path Selection Schemes (PSS). A particular PSS is
responsible for selecting physical paths for I/O requests.
For information, see VMware High Performance Plug-In and Path Selection Schemes.
Claim Rules
The PSA uses claim rules to determine which plug-in owns the paths to a particular storage
device.
NMP: Native Multipathing Plug-in. Generic VMware multipathing module used for SCSI storage devices.
PSP: Path Selection Plug-in. Handles path selection for a SCSI storage device.
SATP: Storage Array Type Plug-in. Handles path failover for a given SCSI storage array.
MPP (third-party): Multipathing Plug-in. A multipathing module developed and provided by a third party.
HPP: Native High-Performance Plug-in provided by VMware. It is used with ultra-fast local and networked flash devices, such as NVMe.
PSS: Path Selection Scheme. Handles multipathing for NVMe storage devices.
VMware provides generic native multipathing modules, called VMware NMP and VMware HPP. In
addition, the PSA offers a collection of VMkernel APIs that third-party developers can use. The
software developers can create their own load balancing and failover modules for a particular
storage array. These third-party multipathing modules (MPPs) can be installed on the ESXi host
and run in addition to the VMware native modules, or as their replacement.
When coordinating the VMware native modules and any installed third-party MPPs, the PSA
performs the following tasks:
n Routes I/O requests for a specific logical device to the MPP managing that device.
As the Pluggable Storage Architecture illustration shows, multiple third-party MPPs can run in
parallel with the VMware NMP or HPP. When installed, the third-party MPPs can replace the
behavior of the native modules. The MPPs can take control of the path failover and the load-
balancing operations for the specified storage devices.
(Illustration: Pluggable Storage Architecture in the VMkernel, showing third-party MPPs alongside the VMware NMP with its SATP and PSP submodules and the VMware HPP.)
Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and
provides a default path selection algorithm based on the array type. The NMP associates a set of
physical paths with a specific storage device, or LUN.
For additional multipathing operations, the NMP uses submodules, called SATPs and PSPs. The
NMP delegates to the SATP the specific details of handling path failover for the device. The PSP
handles path selection for the device.
n Performs actions necessary to handle path failures and I/O command retries.
ESXi automatically installs an appropriate SATP for an array you use. You do not need to obtain
or download any SATPs.
2 The PSP selects an appropriate physical path on which to issue the I/O.
3 The NMP issues the I/O request on the path selected by the PSP.
5 If the I/O operation reports an error, the NMP calls the appropriate SATP.
6 The SATP interprets the I/O command errors and, when appropriate, activates the inactive
paths.
7 The PSP is called to select a new path on which to issue the I/O.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
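The module listing can be obtained with the following command, shown here as a sketch; the MP plug-in class limits the output to multipathing plug-ins:
esxcli storage core plugin list --plugin-class=MP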
Results
This command typically shows the NMP and, if loaded, the HPP and the MASK_PATH module. If
any third-party MPPs have been loaded, they are listed as well.
For more information about the command, see the ESXCLI Concepts and Examples and ESXCLI
Reference documentation.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
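The device listing is produced by the following command; the excerpt below shows the kind of output it returns for each device:
esxcli storage nmp device list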
Use the --device | -d=device_ID parameter to filter the output of this command to show a
single device.
......
eui.6238666462643332
Device Display Name: SCST_BIO iSCSI Disk (eui.6238666462643332)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: {action_OnRetryErrors=off}
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba65:C0:T0:L0;current=vmhba65:C0:T0:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba65:C0:T0:L0
Is USB: false
For more information about the command, see the ESXCLI Concepts and Examples and ESXCLI
Reference documentation.
The plug-ins are submodules of the VMware NMP. The NMP assigns a default PSP for each logical
device based on the device type. You can override the default PSP. For more information, see
Change the Path Selection Policy.
The Most Recently Used (VMware) policy is enforced by VMW_PSP_MRU. It selects the first
working path discovered at system boot time. When the path becomes unavailable, the host
selects an alternative path. The host does not revert to the original path when that path
becomes available. The Most Recently Used policy does not use the preferred path setting.
This policy is default for most active-passive storage devices.
The VMW_PSP_MRU supports path ranking. To set ranks to individual paths, use the esxcli
storage nmp psp generic pathconfig set command. For details, see the VMware
knowledge base article at http://kb.vmware.com/kb/2003468 and the ESXCLI Reference
documentation.
This Fixed (VMware) policy is implemented by VMW_PSP_FIXED. The policy uses the
designated preferred path. If the preferred path is not assigned, the policy selects the first
working path discovered at system boot time. If the preferred path becomes unavailable, the
host selects an alternative available path. The host returns to the previously defined
preferred path when it becomes available again.
VMW_PSP_RR enables the Round Robin (VMware) policy. Round Robin is the default policy
for many arrays. It uses an automatic path selection algorithm rotating through the
configured paths.
Both active-active and active-passive arrays use the policy to implement load balancing
across paths for different LUNs. With active-passive arrays, the policy uses active paths. With
active-active arrays, the policy uses available paths.
The latency mechanism that is activated for the policy by default makes it more adaptive. To
achieve better load balancing results, the mechanism dynamically selects an optimal path by
considering the following path characteristics:
n I/O bandwidth
n Path latency
To change the default parameters for the adaptive latency Round Robin policy or to disable
the latency mechanism, see the Change Default Parameters for Latency Round Robin.
To set other configurable parameters for VMW_PSP_RR, use the esxcli storage nmp psp
roundrobin command. For details, see the ESXCLI Reference documentation.
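To assign a different PSP to an individual device from the command line, you can use a command like the following; the device ID is a placeholder:
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR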
VMware SATPs
Storage Array Type Plug-ins (SATPs) are responsible for array-specific operations. The SATPs
are submodules of the VMware NMP.
ESXi offers an SATP for every type of array that VMware supports. ESXi also provides default
SATPs that support non-specific active-active, active-passive, ALUA, and local devices.
Each SATP accommodates special characteristics of a certain class of storage arrays. The SATP
can perform the array-specific operations required to detect path state and to activate an
inactive path. As a result, the NMP module itself can work with multiple storage arrays without
having to be aware of the storage device specifics.
Generally, the NMP determines which SATP to use for a specific storage device and associates
the SATP with the physical paths for that storage device. The SATP implements the tasks that
include the following:
n Performs array-specific actions necessary for storage fail-over. For example, for active-
passive devices, it can activate passive paths.
The generic default SATPs include the following:
n VMW_SATP_LOCAL: Default SATP for local devices.
n VMW_SATP_DEFAULT_AA: Default SATP for non-specific active-active arrays.
n VMW_SATP_DEFAULT_AP: Default SATP for non-specific active-passive arrays.
n VMW_SATP_ALUA: Default SATP for ALUA-compliant arrays.
For more information, see the VMware Compatibility Guide and the ESXCLI Reference
documentation.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
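The SATP listing described in the results below is produced by the following command:
esxcli storage nmp satp list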
Results
For each SATP, the output displays information that shows the type of storage array or system
the SATP supports. The output also shows the default PSP for any LUNs that use this SATP.
Placeholder (plugin not loaded) in the Description column indicates that the SATP is not loaded.
For more information about the command, see the ESXCLI Concepts and Examples and ESXCLI
Reference documentation.
The HPP replaces the NMP for high-speed devices, such as NVMe. The HPP is the default plug-in
that claims NVMe-oF targets. Within ESXi, the NVMe-oF targets are emulated and presented to
users as SCSI targets. The HPP supports only active/active and implicit ALUA targets.
In vSphere version 7.0 Update 1 and earlier, NMP remains the default plug-in for local NVMe
devices, but you can replace it with HPP. Starting from vSphere 7.0 Update 2, HPP becomes the
default plug-in for local NVMe and SCSI devices, but you can replace it with NMP.
Path Selection Schemes (PSS)
You can use the vSphere Client or the esxcli command to change the default path selection
mechanism.
For information about configuring the path mechanisms in the vSphere Client, see Change the
Path Selection Policy. To configure with the esxcli command, see ESXi esxcli HPP Commands.
FIXED
With this scheme, a designated preferred path is used for I/O requests. If the preferred path
is not assigned, the host selects the first working path discovered at the boot time. If the
preferred path becomes unavailable, the host selects an alternative available path. The host
returns to the previously defined preferred path when it becomes available again.
When you configure FIXED as a path selection mechanism, select the preferred path.
LB-RR (Load Balance - Round Robin)
This is the default scheme for the devices claimed by the HPP. After transferring a specified
number of bytes or I/Os on a current path, the scheme selects the path using the round robin
algorithm.
To configure the LB-RR path selection mechanism, specify the following properties:
n IOPS indicates the I/O count on the path to be used as criteria to switch a path for the
device.
n Bytes indicates the byte count on the path to be used as criteria to switch a path for the
device.
LB-IOPS (Load Balance - IOPs)
After transferring a specified number of I/Os on a current path (the default is 1000), the system
selects an optimal path that has the least number of outstanding I/Os.
When configuring this mechanism, specify the IOPS parameter to indicate the I/O count on
the path to be used as criteria to switch a path for the device.
LB-BYTES (Load Balance - Bytes)
After transferring a specified number of bytes on a current path (the default is 10 MB), the system
selects an optimal path that has the least number of outstanding bytes.
To configure this mechanism, use the Bytes parameter to indicate the byte count on the path
to be used as criteria to switch a path for the device.
LB-Latency (Load Balance - Latency)
To achieve better load balancing results, the mechanism dynamically selects an optimal path
by considering the following path characteristics:
n Latency evaluation time parameter indicates at what time interval, in milliseconds, the
latency of paths must be evaluated.
n Sampling I/Os per path parameter controls how many sample I/Os must be issued on
each path to calculate latency of the path.
n Use the HPP for local NVMe and SCSI devices, and NVMe-oF devices.
n Do not activate the HPP for HDDs or slower flash devices. The HPP is not expected to
provide any performance benefits with devices incapable of at least 200 000 IOPS.
n If you use NVMe over Fibre Channel devices, follow general recommendations for Fibre
Channel storage. See Chapter 4 Using ESXi with Fibre Channel SAN.
n If you use NVMe-oF, do not mix transport types to access the same namespace.
n When using NVMe-oF namespaces, make sure that active paths are presented to the host.
The namespaces cannot be registered until the active path is discovered.
n Configure your VMs to use VMware Paravirtual controllers. See the vSphere Virtual Machine
Administration documentation.
n If a single VM drives a significant share of the device's I/O workload, consider spreading the
I/O across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.
Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for
processing I/Os on a particular virtual storage controller.
For information about device identifiers for NVMe devices that support only NGUID ID format,
see NVMe Devices with NGUID Device Identifiers.
Use the esxcli storage core claimrule add command to enable the HPP or NMP on your ESXi
host.
To run the esxcli storage core claimrule add, you can use the ESXi Shell or vSphere CLI. For
more information, see Getting Started with ESXCLI and ESXCLI Reference.
Examples in this topic demonstrate how to enable HPP and set up the path selection schemes
(PSS).
Note Enabling the HPP is not supported on PXE booted ESXi hosts.
Prerequisites
Set up your VMware NVMe storage environment. For more information, see Chapter 16 About
VMware NVMe Storage.
Procedure
1 Create an HPP claim rule by running the esxcli storage core claimrule add command.
Method Description
Based on the NVMe controller model:
esxcli storage core claimrule add --type vendor --nvme-controller-model
For example,
esxcli storage core claimrule add --rule 429 --type vendor --nvme-controller-model "ABCD*" --plugin HPP
Based on the PCI vendor ID and subvendor ID:
esxcli storage core claimrule add --type vendor --pci-vendor-id --pci-sub-vendor-id
For example,
esxcli storage core claimrule add --rule 429 --type vendor --pci-vendor-id 8086 --pci-sub-vendor-id 8086 --plugin HPP
Method Description
Set the PSS based on the device ID:
esxcli storage hpp device set
For example,
esxcli storage hpp device set --device=device --pss=FIXED --path=preferred path
Set the PSS based on the vendor/model:
Use the --config-string option with the esxcli storage core claimrule add command.
For example,
esxcli storage core claimrule add -r 914 -t vendor -V vendor -M model -P HPP --config-string "pss=LB-Latency,latency-eval-time=40000"
By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler might
create internal queuing, which is not efficient with the high-speed storage devices.
You can configure the latency sensitive threshold and enable the direct submission mechanism
that helps I/O to bypass the scheduler. With this mechanism enabled, the I/O passes directly from
PSA through the HPP to the device driver.
For the direct submission to work properly, the observed average I/O latency must be lower than
the latency threshold you specify. If the I/O latency exceeds the latency threshold, the system
stops the direct submission and temporarily reverts to using the I/O scheduler. The direct
submission is resumed when the average I/O latency drops below the latency threshold again.
You can set the latency threshold for a family of devices claimed by HPP. Set the latency
threshold using the vendor and model pair, the controller model, or PCIe vendor ID and sub
vendor ID pair.
Procedure
1 Set the latency sensitive threshold for the device by running the following command:
Option Example
Vendor/model: Set the latency sensitive threshold parameter for all devices with the indicated vendor and model:
esxcli storage core device latencythreshold set -v 'vendor1' -m 'model1' -t 10
NVMe controller model: Set the latency sensitive threshold for all NVMe devices with the indicated controller model:
esxcli storage core device latencythreshold set -c 'controller_model1' -t 10
PCIe vendor/subvendor ID: Set the latency sensitive threshold for devices with 0x8086 as PCIe vendor ID and 0x8086 as PCIe sub vendor ID:
esxcli storage core device latencythreshold set -p '8086' -s '8086' -t 10
3 Monitor the status of the latency sensitive threshold. Check VMkernel logs for the following
entries:
n Latency Sensitive Gatekeeper turned off for device device. Threshold of XX msec is
exceeded by command completed in YYY msec
See Getting Started with ESXCLI for an introduction, and ESXCLI Reference for details of the
esxcli command use.
esxcli storage hpp path list
Description: List the paths currently claimed by the high-performance plug-in.
Options:
-d|--device=device Display information for a specific device.
-p|--path=path Limit the output to a specific path.
esxcli storage hpp device list
Description: List the devices currently controlled by the high-performance plug-in.
Options:
-d|--device=device Show a specific device.
esxcli storage hpp device set
Description: Configure settings for an HPP device.
Options:
-B|--bytes=long Maximum bytes on the path, after which the path is switched.
--cfg-file Update the configuration file and runtime with the new setting. If the device is claimed by another PSS, ignore any errors when applying to runtime configuration.
-d|--device=device The HPP device upon which to operate. Use any of the UIDs that the device reports. Required.
-I|--iops=long Maximum IOPS on the path, after which the path is switched.
-T|--latency-eval-time=long Control at what interval, in ms, the latency of paths must be evaluated.
-L|--mark-device-local=bool Set HPP to treat the device as local or not.
-M|--mark-device-ssd=bool Specify whether or not the HPP treats the device as an SSD.
-p|--path=str The path to set as the preferred path for the device.
-P|--pss=pss_name The path selection scheme to assign to the device. If you do not specify the value, the system selects the default. For the description of path selection schemes, see VMware High Performance Plug-In and Path Selection Schemes. Options include:
n FIXED
n LB-Latency, with suboptions -T|--latency-eval-time=long and -S|--sampling-ios-per-path=long
n LB-RR (Default), with suboptions -B|--bytes=long and -I|--iops=long
-S|--sampling-ios-per-path=long Control how many sample I/Os must be issued on each path to calculate latency of the path.
esxcli storage hpp device usermarkedssd list
Description: List the devices that were marked or unmarked as SSD by user.
Options:
-d|--device=device Limit the output to a specific device.
The module that owns the device becomes responsible for managing the multipathing support
for the device. By default, the host performs a periodic path evaluation every five minutes and
assigns unclaimed paths to the appropriate module.
For the paths managed by the NMP module, a second set of claim rules is used. These rules
assign an SATP and a PSP module to each storage device and determine which Storage Array
Type Policy and Path Selection Policy to apply.
Use the vSphere Client to view the Storage Array Type Policy and Path Selection Policy assigned
to a specific storage device. You can also check the status of all available paths for this storage
device. If needed, you can change the default Path Selection Policy using the client.
To change the default multipathing module or SATP, modify claim rules using the vSphere CLI.
You can find some information about modifying claim rules in Using Claim Rules.
Procedure
5 Click the Properties tab and review the module that owns the device, for example NMP or
HPP.
Under Multipathing Policies, you can also see the Path Selection Policy and, if applicable, the
Storage Array Type Policy assigned to the device.
6 Click the Paths tab to review all paths available for the storage device and the status of each
path. The following path status information can appear:
Status Description
Active (I/O) Working path or multiple paths that currently transfer data.
Standby Paths that are inactive. If the active path fails, they can become operational
and start transferring I/O.
Dead Paths that are no longer available for processing I/O. A physical medium
failure or array misconfiguration can cause this status.
If you are using the Fixed path policy, you can see which path is the preferred path. The
preferred path is marked with an asterisk (*) in the Preferred column.
Procedure
5 Under Multipathing Policies, review the module that owns the device, such as NMP. You can
also see the Path Selection Policy and Storage Array Type Policy assigned to the device.
6 Under Paths, review the device paths and the status of each path. The following path status
information can appear:
Status Description
Active (I/O) Working path or multiple paths that currently transfer data.
Standby Paths that are inactive. If the active path fails, they can become operational
and start transferring I/O.
Dead Paths that are no longer available for processing I/O. A physical medium
failure or array misconfiguration can cause this status.
If you are using the Fixed path policy, you can see which path is the preferred path. The
preferred path is marked with an asterisk (*) in the Preferred column.
Procedure
4 Select the item whose paths you want to change and click the Properties tab.
6 Select a path policy and configure its settings. Your options change depending on the type of storage device you use.
n For information about path policies for SCSI devices, see Path Selection Plug-Ins and
Policies.
n For information about path mechanisms for NVMe devices, see VMware High
Performance Plug-In and Path Selection Schemes.
7 To save your settings and exit the dialog box, click OK.
You use the esxcli command to change the default parameters of the latency mechanism or
disable the mechanism.
Prerequisites
Set the path selection policy to Round Robin. See Change the Path Selection Policy.
Procedure
Parameter Description
-S|--num-sampling-cycles=sampling value When --type is set to latency, this parameter controls how many I/Os to use to calculate the average latency of each path. The default value of this parameter is 16.
-T|--latency-eval-time=time in ms When --type is set to latency, this parameter controls the frequency at which the latency of paths is updated. Default is 3 minutes.
2 Verify that the latency Round Robin mechanism and its parameters are configured correctly, for example by reading back the device configuration as shown in the sketch that follows.
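As a sketch of what these steps can look like with the parameters described above, the following commands configure and then read back the latency-based Round Robin settings for one device. The device identifier and the chosen values are placeholders:
esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx --type=latency --num-sampling-cycles=32 --latency-eval-time=60000
esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxxxxxxxxxxxxxx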
What to do next
To disable the latency mechanism, in the Advanced System Settings for your host, change the
Misc.EnablePSPLatencyPolicy parameter to 0.
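The same advanced option can also be changed from the command line. A hedged equivalent of the client setting, assuming the standard advanced option path, is:
esxcli system settings advanced set -o /Misc/EnablePSPLatencyPolicy -i 0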
You disable a path using the Paths panel. You have several ways to access the Paths panel, from
a datastore, a storage device, an adapter, or a vVols Protocol Endpoint view.
Procedure
n Storage Adapters
n Storage Devices
n Protocol Endpoints
4 In the right pane, select the item whose paths you want to disable, an adapter, storage
device, or Protocol Endpoint, and click the Paths tab.
These claim rules determine which multipathing module, the NMP, HPP, or a third-party MPP,
claims the specific device.
Depending on the device type, these rules assign a particular SATP submodule that provides
vendor-specific multipathing management to the device.
You can use the esxcli commands to add or change the core and SATP claim rules. Typically,
you add the claim rules to load a third-party MPP or to hide a LUN from your host. Changing
claim rules might be necessary when default settings for a specific device are not sufficient.
For more information about commands available to manage PSA claim rules, see Getting Started with ESXCLI.
For a list of storage arrays and corresponding SATPs and PSPs, see the Storage/SAN section of
the vSphere Compatibility Guide.
Multipathing Considerations
Specific considerations apply when you manage storage multipathing plug-ins and claim rules.
n If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC
devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
n When the system searches the SATP rules to locate a SATP for a given device, it searches
the driver rules first. If there is no match, the vendor/model rules are searched, and finally the
transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
n If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-
aware, no claim rule match occurs for this device. The device is claimed by the default SATP
based on the device's transport type.
n The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if no active/optimized path is available.
n While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage
arrays need to use VMW_PSP_FIXED. To check whether your storage array requires
VMW_PSP_FIXED, see the VMware Compatibility Guide or contact your storage vendor.
When using VMW_PSP_FIXED with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most optimal working path and designates it as the default preferred path. If the host-selected path becomes unavailable, the host selects an alternative available path. However, if you explicitly designate the preferred path, it remains the preferred path regardless of its status.
n By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule,
unless you want to unmask these devices.
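To review the SATPs loaded on the host and the default PSP associated with each, which is useful when checking the defaults described above, you can run:
esxcli storage nmp satp list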
Claim rules indicate whether the NMP, HPP, or a third-party MPP manages a given physical path.
Each claim rule identifies a set of paths based on the following parameters:
n Vendor/model strings
n Transportation, such as SATA, IDE, or Fibre Channel
n Adapter, target, or LUN location
n Device driver
Procedure
If you do not use the claimrule-class option, the MP rule class is implied.
Example: Sample Output of the esxcli storage core claimrule list Command
Rule Class Rule Class Type Plugin Matches
MP 10 runtime vendor HPP vendor=NVMe model=*
MP 10 file vendor HPP vendor=NVMe model=*
MP 50 runtime transport NMP transport=usb
MP 51 runtime transport NMP transport=sata
MP 52 runtime transport NMP transport=ide
MP 53 runtime transport NMP transport=block
MP 54 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 200 runtime vendor MPP_1 vendor=NewVend model=*
MP 200 file vendor MPP_1 vendor=NewVend model=*
n The NMP claims all paths connected to storage devices that use the USB, SATA, IDE, and Block SCSI transports.
n The rules for HPP, MPP_1, MPP_2, and MPP_3 have been added, so that the modules can
claim specified devices. For example, the HPP claims all devices with vendor NVMe. All
devices handled by the inbox nvme driver are claimed regardless of the actual vendor. The
MPP_1 module claims all paths connected to any model of the NewVend storage array.
n You can use the MASK_PATH module to hide unused devices from your host. By default, the
PSA claim rule 101 masks Dell array pseudo devices with a vendor string DELL and a model
string Universal Xport.
n The Rule Class column in the output describes the category of a claim rule. It can be MP
(multipathing plug-in), Filter, or VAAI.
n The Class column shows which rules are defined and which are loaded. The file parameter in
the Class column indicates that the rule is defined. The runtime parameter indicates that the
rule has been loaded into your system. For a user-defined claim rule to be active, two lines
with the same rule number must exist, one line for the rule with the file parameter and
another line with runtime. Several default system-defined claim rules have only one line with
the Class of runtime. You cannot modify these rules.
n The default rule 65535 assigns all unclaimed paths to the NMP. Do not delete this rule.
Examples of when you might add a PSA claim rule include the following:
n You load a new third-party MPP and must define the paths that this module claims.
Warning You cannot create rules where two different plug-ins claim paths to the same device.
Your attempts to create these claim rules fail with a warning in vmkernel.log.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Option Description
-u|--autoassign Adds a claim rule based on its characteristics. The rule number is not
required.
-c|--claimrule-class=<cl> Claim rule class to use in this operation. You can specify MP (default), Filter,
or VAAI.
To configure hardware acceleration for a new array, add two claim rules,
one for the VAAI filter and another for the VAAI plug-in. See Add Hardware
Acceleration Claim Rules for detailed instructions.
-D|--driver=<driver> Driver for the HBA of the paths to use. Valid only if --type is driver.
-f|--force Force claim rules to ignore validity checks and install the rule anyway.
--if-unset=<str> Run this command if this advanced user variable is not set to 1.
-i|--iqn=<iscsi_name> iSCSI Qualified Name for the target. Valid only when --type is target.
-P|--plugin=<plugin> PSA plug-in to use. The values are NMP, MASK_PATH, or HPP. Third parties can
also provide their own PSA plug-ins. Required.
-r|--rule=<rule_ID> Rule ID to use. The rule ID indicates the order in which the claim rule is to be
evaluated. User-defined claim rules are evaluated in numeric order starting
with 101.
You can run esxcli storage core claimrule list to determine which rule
IDs are available.
-R|--transport=<transport> Transport of the paths to use. Valid only if --type is transport. The following
values are supported.
n block — block storage
n fc — Fibre Channel
n iscsivendor — iSCSI
n iscsi — not currently used
n ide — IDE storage
n sas — SAS storage
n sata — SATA storage
n usb — USB storage
n parallel — parallel
n fcoe — FCoE
n unknown
-t|--type=<type> Type of matching to use for the operation. Valid values are the following.
Required.
n vendor
n location
n driver
n transport
n device
n target
-a|--xcopy-use-array-values Use the array reported values to construct the XCOPY command to be sent
to the storage array. This applies to VAAI claim rules only.
-s|--xcopy-use-multi-segs Use multiple segments when issuing an XCOPY request. Valid only if --
xcopy-use-array-values is specified.
-m|--xcopy-max-transfer-size Maximum data transfer size in MB when you use a transfer size different
than array reported. Valid only if --xcopy-use-array-values is specified.
-k|--xcopy-max-transfer-size-kib Maximum transfer size in KiB for the XCOPY commands when you use a
transfer size different than array reported. Valid only if --xcopy-use-array-
values is specified.
2 To load the new claim rule into your system, use the following command:
esxcli storage core claimrule load
3 To apply claim rules that are loaded, use the following command:
esxcli storage core claimrule run
The command takes the following options.
Option Description
-A|--adapter=<adapter> If --type is location, name of the HBA for the paths to run the claim rules
on. To run claim rules on paths from all adapters, omit this option.
-C|--channel=<channel> If --type is location, value of the SCSI channel number for the paths to run
the claim rules on. To run claim rules on paths with any channel number,
omit this option.
-L|--lun=<lun_id> If --type is location, value of the SCSI LUN for the paths to run claim rules
on. To run claim rules on paths with any LUN, omit this option.
-p|--path=<path_uid> If --type is path, this option indicates the unique path identifier (UID) or the
runtime name of a path to run claim rules on.
-T|--target=<target> If --type is location, value of the SCSI target number for the paths to run
claim rules on. To run claim rules on paths with any target number, omit this
option.
-t|--type=<location|path|all> Type of claim to perform. By default, uses all, which means claim rules run
without restriction to specific paths or SCSI addresses. Valid values are
location, path, and all.
-w|--wait You can use this option only if you also use --type all.
If the option is included, the claim operation waits for paths to settle before running. The system does not start the claiming process until it is likely that all paths on the system have appeared.
After the claiming process has started, the command does not return until device registration has completed.
If you add or remove paths during the claiming or the discovery process, this option might not work correctly.
# esxcli storage core claimrule add -r 500 -t vendor -V NewVend -M NewMod -P NMP
After you run the esxcli storage core claimrule list command, you can see the new claim rule appear on the list.
The following output indicates that the claim rule 500 has been loaded into the system and is active.
Rule Class Rule Class Type Plugin Matches
MP 500 runtime vendor NMP vendor=NewVend model=NewMod
MP 500 file vendor NMP vendor=NewVend model=NewMod
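Putting the steps together, one plausible end-to-end sequence for this example looks like the following; the vendor and model strings are illustrative, as above:
# esxcli storage core claimrule add -r 500 -t vendor -V NewVend -M NewMod -P NMP
# esxcli storage core claimrule load
# esxcli storage core claimrule run
# esxcli storage core claimrule list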
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Note By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this
rule, unless you want to unmask these devices.
This step removes the claim rule from the File class.
This step removes the claim rule from the Runtime class.
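The commands behind these two steps are likely of the following form, using rule 500 from the earlier example as a placeholder: the remove command deletes the rule from the File class, and reloading the rules removes it from the Runtime class.
# esxcli storage core claimrule remove -r 500
# esxcli storage core claimrule load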
Mask Paths
You can prevent the host from accessing storage devices or LUNs or from using individual paths
to a LUN. Use the esxcli commands to mask the paths. When you mask paths, you create claim
rules that assign the MASK_PATH plug-in to the specified paths.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
2 Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in.
5 If a claim rule for the masked path exists, remove the rule.
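As a hedged illustration, a typical masking sequence for one LUN identified by its location might look like the following. The adapter, channel, target, and LUN values and the rule ID are placeholders:
# esxcli storage core claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
# esxcli storage core claimrule load
# esxcli storage core claiming unclaim -t location -A vmhba2 -C 0 -T 1 -L 20
# esxcli storage core claimrule run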
Results
After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no
longer maintained by the host. As a result, commands that display the masked path's information
might show the path state as dead.
Unmask Paths
When you need the host to access the masked storage device, unmask the paths to the device.
Note When you run an unclaim operation using a device property, for example, device ID or
vendor, the paths claimed by the MASK_PATH plug-in are not unclaimed. The MASK_PATH plug-
in does not track any device property of the paths that it claims.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
3 Reload the path claiming rules from the configuration file into the VMkernel.
4 Run the esxcli storage core claiming unclaim command for each path to the masked
storage device.
For example:
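A representative unclaim command for a single path identified by its location follows; the adapter, channel, target, and LUN values are placeholders:
# esxcli storage core claiming unclaim -t location -A vmhba0 -C 0 -T 0 -L 149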
Results
Your host can now access the previously masked storage device.
You might need to create an SATP rule when you install a third-party SATP for a specific storage
array.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
1 To add a claim rule for a specific SATP, run the esxcli storage nmp satp rule add
command. The command takes the following options.
Option Description
-b|--boot This rule is a system default rule added at boot time. Do not modify
esx.conf or add to a host profile.
-c|--claim-option=string Set the claim option string when adding a SATP claim rule.
-e|--description=string Set the claim rule description when adding a SATP claim rule.
-d|--device=string Set the device when adding SATP claim rules. Device rules are mutually
exclusive with vendor/model and driver rules.
-D|--driver=string Set the driver string when adding a SATP claim rule. Driver rules are
mutually exclusive with vendor/model rules.
-f|--force Force claim rules to ignore validity checks and install the rule anyway.
-M|--model=string Set the model string when adding a SATP claim rule. Vendor/Model rules are mutually exclusive with driver rules.
-o|--option=string Set the option string when adding a SATP claim rule.
-P|--psp=string Set the default PSP for the SATP claim rule.
-O|--psp-option=string Set the PSP options for the SATP claim rule.
-R|--transport=string Set the claim transport type string when adding a SATP claim rule.
-t|--type=string Set the claim type when adding a SATP claim rule.
-V|--vendor=string Set the vendor string when adding SATP claim rules. Vendor/Model rules are
mutually exclusive with driver rules.
Note When searching the SATP rules to locate a SATP for a given device, the NMP searches
the driver rules first. If there is no match, the vendor/model rules are searched, and finally the
transport rules. If there is still no match, NMP selects a default SATP for the device.
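For example, a rule that assigns the VMW_SATP_INV plug-in to arrays with vendor string NewVend and model string NewMod can be added along these lines. The -s|--satp option names the SATP for the rule and is assumed here, as it is not shown in the table above:
# esxcli storage nmp satp rule add -s VMW_SATP_INV -V NewVend -M NewMod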
When you run the esxcli storage nmp satp rule list -s VMW_SATP_INV command, you can see the new rule on the list of VMW_SATP_INV rules.
This mechanism ensures that I/O for a particular virtual machine file goes into its own separate
queue and avoids interfering with I/Os from other files.
This capability is enabled by default. You can use the vSphere Client or the esxcli commands to
disable or reenable the capability.
If you turn off the per file I/O scheduling model, your host reverts to a legacy scheduling
mechanism. The legacy scheduling maintains only one I/O queue for each virtual machine and
storage device pair. All I/Os between the virtual machine and its virtual disks are moved into this
queue. As a result, I/Os from different virtual disks might interfere with each other in sharing the
bandwidth and affect each other's performance.
Note Do not disable per file scheduling if you have the HPP plug-in and the latency sensitive
threshold parameter configured for high-speed local devices. Disabling per file scheduling might
cause unpredictable behavior.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
u To enable or disable per file I/O scheduling, run the following commands:
Option Description
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE
Disable per file I/O scheduling for VMFS and NFS 3.
esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE
Enable per file I/O scheduling for VMFS and NFS 3.
esxcli system module parameters list -m nfs41client
List the current status of the NFS 4.1 file-based scheduler.
esxcli system module parameters set -m nfs41client -p fileBasedScheduler=0
Disable the file-based scheduler for NFS 4.1.
esxcli system module parameters set -m nfs41client -p fileBasedScheduler=1
Enable the file-based scheduler for NFS 4.1.
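To check the current value of the per file scheduling setting before changing it, a listing command along these lines can be used; treat this as a sketch and verify the option name on your host:
esxcli system settings kernel list -o isPerFileSchedModelActive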
The following topics contain information about RDMs and provide instructions on how to create
and manage RDMs.
The file gives you some of the advantages of direct access to a physical device, but keeps some
advantages of a virtual disk in VMFS. As a result, it merges the VMFS manageability with the raw
device access.
(Figure: Raw device mapping. The virtual machine opens, reads, and writes through a mapping file on a VMFS volume; address resolution redirects the I/O to the mapped device.)
Typically, you use VMFS datastores for most virtual disk storage. On certain occasions, you might
use raw LUNs or logical disks located in a SAN.
For example, you might use raw LUNs with RDMs in the following situations:
n When SAN snapshot or other layered applications run in the virtual machine. The RDM
enables backup offloading systems by using features inherent to the SAN.
n In any MSCS clustering scenario that spans physical hosts, such as virtual-to-virtual clusters
and physical-to-virtual clusters. In this case, cluster data and quorum disks are configured as
RDMs rather than as virtual disks on a shared VMFS.
Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs
appear as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine
configuration. The RDM contains a reference to the raw LUN.
n In the virtual compatibility mode, the RDM acts like a virtual disk file. The RDM can use
snapshots.
n In the physical compatibility mode, the RDM offers direct access to the SCSI device for those
applications that require lower-level control.
User-Friendly Persistent Names
Provides a user-friendly name for a mapped device. When you use an RDM, you do not need
to refer to the device by its device name. You refer to it by the name of the mapping file, for
example:
/vmfs/volumes/myVolume/myVMDirectory/myRawDisk.vmdk
Dynamic Name Resolution
Stores unique identification information for each mapped device. VMFS associates each RDM
with its current SCSI device, regardless of changes in the physical configuration of the server
because of adapter hardware changes, path changes, device relocation, and so on.
Distributed File Locking
Makes it possible to use VMFS distributed locking for raw SCSI devices. Distributed locking on
an RDM makes it safe to use a shared raw LUN without losing data when two virtual machines
on different servers try to access the same LUN.
File Permissions
Makes file permissions possible. The permissions of the mapping file are enforced at file-open
time to protect the mapped volume.
File System Operations
Makes it possible to use file system utilities to work with a mapped volume, using the
mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to
the mapping file and are redirected to operate on the mapped device.
Snapshots
Makes it possible to use virtual machine snapshots on a mapped volume. Snapshots are not
available when the RDM is used in physical compatibility mode.
vMotion
Lets you migrate a virtual machine with vMotion. The mapping file acts as a proxy to allow
vCenter Server to migrate the virtual machine by using the same mechanism that exists for
migrating virtual disk files.
(Figure: vMotion of a virtual machine using raw device mapping. The mapping file on the VMFS volume provides address resolution to the mapped device during the migration between hosts.)
SAN Management Agents
Makes it possible to run some SAN management agents inside a virtual machine. Similarly,
any software that needs to access a device by using hardware-specific SCSI commands can
be run in a virtual machine. This kind of software is called SCSI target-based software. When
you use SAN management agents, select a physical compatibility mode for the RDM.
N-Port ID Virtualization (NPIV)
Makes it possible to use the NPIV technology that allows a single Fibre Channel HBA port to
register with the Fibre Channel fabric using several worldwide port names (WWPNs). This
ability makes the HBA port appear as multiple virtual ports, each having its own ID and virtual
port name. Virtual machines can then claim each of these virtual ports and use them for all
RDM traffic.
Note You can use NPIV only for virtual machines with RDM disks.
VMware works with vendors of storage management software to ensure that their software
functions correctly in environments that include ESXi. Some applications of this kind are:
n Snapshot software
n Replication software
Such software uses a physical compatibility mode for RDMs so that the software can access SCSI
devices directly.
Various management products are best run centrally (not on the ESXi machine), while others run
well on the virtual machines. VMware does not certify these applications or provide a
compatibility matrix. To find out whether a SAN management application is supported in an ESXi
environment, contact the SAN management software provider.
n The RDM is not available for direct-attached block devices or certain RAID devices. The RDM
uses a SCSI serial number to identify the mapped device. Because block devices and some
direct-attach RAID devices do not export serial numbers, they cannot be used with RDMs.
n If you are using the RDM in physical compatibility mode, you cannot use a snapshot with the disk. Physical compatibility mode allows the virtual machine to manage its own storage-based snapshot or mirroring operations.
Virtual machine snapshots are available for RDMs with virtual compatibility mode.
n You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.
n If you use vMotion to migrate virtual machines with RDMs, make sure to maintain consistent
LUN IDs for RDMs across all participating ESXi hosts.
Key contents of the metadata in the mapping file include the location of the mapped device
(name resolution), the locking state of the mapped device, permissions, and so on.
In virtual mode, the VMkernel sends only READ and WRITE to the mapped device. The mapped
device appears to the guest operating system exactly the same as a virtual disk file in a VMFS
volume. The real hardware characteristics are hidden. If you are using a raw disk in virtual mode,
you can realize the benefits of VMFS such as advanced file locking for data protection and
snapshots for streamlining development processes. Virtual mode is also more portable across
storage hardware than physical mode, presenting the same behavior as a virtual disk file.
In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the
REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning
virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed.
Physical mode is useful to run SAN management agents or other SCSI target-based software in
the virtual machine. Physical mode also allows virtual-to-physical clustering for cost-effective high
availability.
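Although this chapter creates RDMs through the vSphere Client, mapping files can also be created from the ESXi command line with vmkfstools. The following is a sketch with placeholder device and datastore paths; -r creates a virtual compatibility mapping and -z creates a physical compatibility (pass-through) mapping:
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/myVolume/myVMDirectory/myRawDisk.vmdk
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/myVolume/myVMDirectory/myRawDiskPassthru.vmdk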
VMFS5 and VMFS6 support greater than 2 TB disk size for RDMs in virtual and physical modes.
VMFS uniquely identifies all mapped storage devices, and the identification is stored in its internal
data structures. Any change in the path to a raw device, such as a Fibre Channel switch failure or
the addition of a new HBA, can change the device name. Dynamic name resolution resolves
these changes and automatically associates the original device with its new name.
(Figure: Two virtual machines on different hosts share access to the same mapped device through a single mapping file on the VMFS volume; address resolution associates the mapping file with the device.)
The following table provides a comparison of features available with the different modes.
Table 19-1. Features Available with Virtual Disks and Raw Device Mappings
ESXi Features Virtual Disk File Virtual Mode RDM Physical Mode RDM
Use virtual disk files for the cluster-in-a-box type of clustering. If you plan to reconfigure your
cluster-in-a-box clusters as cluster-across-boxes clusters, use virtual mode RDMs for the cluster-
in-a-box clusters.
Although the RDM disk file has the same .vmdk extension as a regular virtual disk file, the RDM contains only mapping information. The actual virtual disk data is stored directly on the LUN.
This procedure assumes that you are creating a new virtual machine. For information, see the
vSphere Virtual Machine Administration documentation.
Procedure
a Right-click any inventory object that is a valid parent object of a virtual machine, such as a
data center, folder, cluster, resource pool, or host, and select New Virtual Machine.
3 (Optional) To delete the default virtual hard disk that the system created for your virtual
machine, move your cursor over the disk and click the Remove icon.
a Click Add New Devices and select RDM Disk from the list.
b From the list of LUNs, select a target raw LUN and click OK.
The system creates an RDM disk that maps your virtual machine to the target LUN. The
RDM disk is shown on the list of virtual devices as a new hard disk.
a Click the New Hard Disk triangle to expand the properties for the RDM disk.
You can place the RDM on the same datastore where your virtual machine configuration
files reside, or select a different datastore.
Note To use vMotion for virtual machines with enabled NPIV, make sure that the RDM
files and the virtual machine files are located on the same datastore. You cannot perform
Storage vMotion when NPIV is enabled.
Option Description
Physical Allows the guest operating system to access the hardware directly.
Physical compatibility is useful if you are using SAN-aware applications
on the virtual machine. However, a virtual machine with a physical
compatibility RDM cannot be cloned, made into a template, or migrated if
the migration involves copying the disk.
Virtual Allows the RDM to behave as if it were a virtual disk, so you can use such
features as taking snapshots, cloning, and so on. When you clone the
disk or make a template out of it, the contents of the LUN are copied into
a .vmdk virtual disk file. When you migrate a virtual compatibility mode
RDM, you can migrate the mapping file or copy the contents of the LUN
into a virtual disk.
Disk modes are not available for RDM disks using physical compatibility mode.
Option Description
Independent - Persistent Disks in persistent mode behave like conventional disks on your physical
computer. All data written to a disk in persistent mode are written
permanently to the disk.
Independent - Nonpersistent Changes to disks in nonpersistent mode are discarded when you power
off or reset the virtual machine. With nonpersistent mode, you can restart
the virtual machine with a virtual disk in the same state every time.
Changes to the disk are written to and read from a redo log file that is
deleted when you power off or reset.
Procedure
2 Click the Virtual Hardware tab and click Hard Disk to expand the disk options menu.
3 Click the device ID that appears next to Physical LUN to open the Edit Multipathing Policies
dialog box.
4 Use the Edit Multipathing Policies dialog box to enable or disable paths, set multipathing
policy, and specify the preferred path.
For information on managing paths, see Chapter 18 Understanding Multipathing and Failover.
Problem
Certain guest operating systems or applications that run in the virtual machines with the RDMs
display unpredictable behavior.
Cause
This behavior might be caused by cached SCSI INQUIRY data that interferes with specific guest
operating systems and applications.
When the ESXi host first connects to a target storage device, it issues the SCSI INQUIRY
command to obtain basic identification data from the device. By default, ESXi caches the
received SCSI INQUIRY data (Standard, page 80, and page 83), and the data remains unchanged
afterwards. Responses for subsequent SCSI INQUIRY commands are returned from the cache.
However, specific guest operating systems running in virtual machines with RDMs must query the
LUN instead of using SCSI INQUIRY data cached by ESXi. In these cases, you can configure the
VM to ignore the SCSI INQUIRY cache.
Solution
Make the changes only when your storage vendor recommends that you do so.
Option Description
Modify the .vmx file of the virtual machine with the RDM
Use this method for the VMs with hardware version 8 or later.
a Add the following parameter to the file:
scsix:y.ignoreDeviceInquiryCache = "true"
where x is the SCSI controller number and y is the SCSI target number of the RDM.
b Reboot the VM.
Use the esxcli command
Because you configure the setting at a host level, no VM hardware version limitations apply.
No VM reboot is required.
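The esxcli variant referenced above is likely of the following form; the device identifier is a placeholder, and the exact option names should be verified against the ESXCLI Reference:
esxcli storage core device inquirycache set -d naa.xxxxxxxxxxxxxxxx --ignore true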
No matter which method you use to set the SCSI INQUIRY cache parameter to true, the VM
starts contacting the LUN directly for the SCSI INQUIRY data.
As an abstraction layer, SPBM abstracts storage services delivered by vVols, vSAN, I/O filters, or
other storage entities.
Rather than integrating with each individual type of storage and data services, SPBM provides a
universal framework for different types of storage entities.
(Figure: SPBM as an abstraction layer across the UI, CLI, and API/SDK, spanning storage entities such as vVols, vSAN, I/O filter vendors, and traditional VMFS and NFS storage.)
SPBM offers the following mechanisms:
n Advertisement of storage capabilities and data services that storage arrays and other
entities, such as I/O filters, offer.
n Bidirectional communications between ESXi and vCenter Server on one side, and storage
arrays and entities on the other.
vSphere offers default storage policies. In addition, you can define policies and assign them to
the virtual machines.
You use the VM Storage Policies interface to create a storage policy. When you define the policy,
you specify various storage requirements for applications that run on the virtual machines. You
can also use storage policies to request specific data services, such as caching or replication, for
virtual disks.
You apply the storage policy when you create, clone, or migrate the virtual machine. After you
apply the storage policy, the SPBM mechanism assists you with placing the virtual machine in a
matching datastore. In certain storage environments, SPBM determines how the virtual machine
storage objects are provisioned and allocated within the storage resource to guarantee the
required level of service. The SPBM also enables requested data services for the virtual machine
and helps you to monitor policy compliance.
Whether you must perform a specific step might depend on the type of storage or data services
that your environment offers.
Step: Populate the VM Storage Policies interface with appropriate data.
The VM Storage Policies interface is populated with information about datastores and data services that are available in your storage environment. This information is obtained from storage providers and datastore tags.
n For entities represented by storage providers, verify that an appropriate provider is registered.
Entities that use the storage provider include vSAN, vVols, and I/O filters. Depending on the type of storage entity, some providers are self-registered. Other providers must be manually registered.
See Use Storage Providers to Populate the VM Storage Policies Interface and Register Storage Providers for vVols.
n Tag datastores that are not represented by storage providers. You can also use tags to indicate a property that is not communicated through the storage provider, such as geographical location or administrative group.
Step: Create predefined storage policy components.
A storage policy component describes a single data service, such as replication, that must be provided for the virtual machine. You can define the component in advance and associate it with multiple VM storage policies. The components are reusable and interchangeable.
See Create Storage Policy Components.
Step: Create VM storage policies.
When you define storage policies for virtual machines, you specify storage requirements for applications that run on the virtual machines.
See Creating and Managing VM Storage Policies.
Step: Apply the VM storage policy to the virtual machine.
You can apply the storage policy when deploying the virtual machine or configuring its virtual disks.
See Assign Storage Policies to Virtual Machines.
Step: Check compliance for the VM storage policy.
Verify that the virtual machine uses the datastore that is compliant with the assigned storage policy.
See Check Compliance for a VM Storage Policy.
To create and manage your storage policies, you use the VM Storage Policy interface of the
vSphere Client.
This information is obtained from storage providers, also called VASA providers. Another source
is datastore tags.
Certain datastores, for example, vVols and vSAN, are represented by the storage providers.
Through the storage providers, the datastores can advertise their capabilities in the VM
Storage Policy interface. These datastore capabilities, data services, and other characteristics
with ranges of values populate the VM Storage Policy interface.
You use these characteristics when you define datastore-based placement and service rules
for your storage policy.
Data Services
I/O filters on your hosts are also represented by the storage providers. The storage provider
delivers information about the data services of the filters to the VM Storage Policy interface.
You use this information when defining the rules for host-based data services, also called
common rules. Unlike the datastore-specific rules, these rules do not define storage
placement and storage requirements for the virtual machine. Instead, they activate the
requested I/O filter data services for the virtual machine.
Tags
Generally, VMFS and NFS datastores are not represented by a storage provider. They do not display their capabilities and data services in the VM Storage Policies interface. You can use
tags to encode information about these datastores. For example, you can tag your VMFS
datastores as VMFS-Gold and VMFS-Silver to represent different levels of service.
For vVols and vSAN datastores, you can use tags to encode information that is not
advertised by the storage provider, such as geographical location (Palo Alto), or
administrative group (Accounting).
Similar to the storage capabilities and characteristics, all tags associated with the datastores
appear in the VM Storage Policies interface. You can use the tags when you define the tag-
based placement rules.
Entities that use the storage provider include vSAN, vVols, and I/O filters. Depending on the type
of the entity, some providers are self-registered. Other providers, for example, the vVols storage
provider, must be manually registered. After the storage providers are registered, they deliver
the following data to the VM Storage Policies interface:
n Storage capabilities and characteristics for such datastores as vVols and vSAN.
Prerequisites
Register the storage providers that require manual registration. For more information, see the
appropriate documentation:
Procedure
3 In the Storage Providers list, view the storage providers registered with vCenter Server.
The list shows general information including the name of the storage provider, its URL and
status, storage entities that the provider represents, and so on.
4 To display more details, select a specific storage provider or its component from the list.
You can apply a new tag that contains general storage information to a datastore. For more
details about the tags, their categories, and how to manage the tags, see the vCenter Server and
Host Management documentation.
Prerequisites
Required privileges:
n vSphere Tagging.Create vSphere Tag Category on the root vCenter Server instance
n vSphere Tagging.Assign or Unassign vSphere Tag on the root vCenter Server instance
Procedure
e Click OK.
c Specify the properties for the tag, for example, a tag named Texas in the Storage Location category.
d Click OK.
b Right-click the datastore, and select Tags & Custom Attributes > Assign Tag.
c From the list of tags, select an appropriate tag, for example, Texas in the Storage
Location category, and click Assign.
Results
The new tag is assigned to the datastore and appears on the datastore Summary tab in the Tags
pane.
What to do next
When creating a VM storage policy, you can reference the tag to include the tagged datastore in
the list of compatible storage resources. See Create a VM Storage Policy for Tag-Based
Placement.
Or you can exclude the tagged datastore from the VM storage policy. For example, your VM storage policy can include vVols datastores located in Texas and California, but exclude datastores located in Nevada.
Rules
The rule is a basic element of the VM storage policy. Each individual rule is a statement that
describes a single requirement for virtual machine storage and data services.
Rule Sets
Within a storage policy, individual rules are organized into collections of rules, or rule sets.
Typically, the rule sets can be in one of the following categories: rules for host-based services
and datastore-specific rules.
Each rule set must include placement rules that describe requirements for virtual machine
storage resources. All placement rules within a single rule set represent a single storage
entity. These rules can be based on storage capabilities or tags.
In addition, the datastore-specific rule set can include optional rules or storage policy components that describe data services to provide for the virtual machine. Generally, these rules request services such as caching, replication, and other services provided by storage systems.
To define the storage policy, one datastore-specific set is required. Additional rule sets are
optional. A single policy can use multiple sets of rules to define alternative storage placement
parameters, often from several storage providers.
Placement rules specify a particular storage requirement for the VM and enable SPBM to
distinguish compatible datastores among all datastores in the inventory. These rules also
describe how the virtual machine storage objects are allocated within the datastore to
receive the required level of service. For example, the rules can list vVols as a destination and
define the maximum recovery point objective (RPO) for the vVols objects.
When you provision the virtual machine, these rules guide the decision that SPBM makes
about the virtual machine placement. SPBM finds the vVols datastores that can match the
rules and satisfy the storage requirements of the virtual machine. See Create a VM Storage
Policy for vVols.
Tag-based rules reference datastore tags. These rules can define the VM placement, for
example, request as a target all datastores with the VMFS-Gold tag. You can also use the tag-
based rules to fine-tune your VM placement request further. For example, exclude datastores
with the Palo Alto tag from the list of your vVols datastores. See Create a VM Storage Policy
for Tag-Based Placement.
This rule set activates data services provided by the host. The set for host-based services
can include rules or storage policy components that describe particular data services, such as
encryption or replication.
Unlike datastore-specific rules, this set does not include placement rules. Rules for host-based
services are generic for all types of storage and do not depend on the datastore. See Create
a VM Storage Policy for Host-Based Data Services.
Rules for host-based services: Rules or predefined storage policy components to activate data services installed on ESXi hosts, for example, replication by I/O filters.
Datastore-specific rules: Capability-based or tag-based placement rules that describe requirements for virtual machine storage resources, for example, vVols placement.
If the rule set for host-based services is not present, meeting all the rules of a single datastore-
specific rule set is sufficient to satisfy the entire policy. If the rule set for host-based services is
present, the policy matches the datastore that satisfies the host services rules and all rules in one
of the datastore-specific sets.
(Figure: A storage policy that combines the rule set for host-based services with alternative datastore-specific rule sets, joined by an or relationship, each containing its own rules.)
Available data services include encryption, I/O control, caching, and so on. Certain data services,
such as encryption, are provided by VMware. Others can be offered by third-party I/O filters that
you install on your host.
The data services are usually generic for all types of storage and do not depend on a datastore.
Adding datastore-specific rules to the storage policy is optional.
If you add datastore-specific rules, and both the I/O filters on the host and storage offer the
same type of service, for example, encryption, your policy can request this service from both
providers. As a result, the virtual machine data is encrypted twice, by the I/O filter and your
storage. However, replication provided by vVols and replication provided by the I/O filter cannot
coexist in the same storage policy.
Prerequisites
n For information about encrypting your virtual machines, see the vSphere Security
documentation.
n For information about I/O filters, see Chapter 23 Filtering Virtual Machine I/O.
n For information about storage policy components, see About Storage Policy Components.
Procedure
c Click Create.
3 On the Policy structure page, under Host based services, select Enable host based rules.
4 On the Host based services page, define rules to enable and configure data services
provided by your host.
a Click the tab for the data service category, for example, Encryption.
b Define custom rules for the data service category or use predefined components.
Option Description
Use storage policy component Select a storage policy component from the drop-down menu. This
option is available only if you have predefined components in your
database.
Custom Define custom rules for the data service category by specifying an
appropriate provider and values for the rules.
Note You can enable several data services. If you use encryption with other data
services, set the Allow I/O filters before encryption parameter to True, so that other
services, such as replication, can analyze clear text data before it is encrypted.
5 On the Storage compatibility page, review the list of datastores that match this policy.
To be compatible with the policy for host-based services, datastores must be connected to
the host that provides these services. If you add datastore-specific rule sets to the policy, the
compatible datastores must also satisfy storage requirements of the policy.
6 On the Review and finish page, review the storage policy settings and click Finish.
Results
The new VM storage policy for host-based data services appears on the list.
The procedure assumes that you are creating the VM storage policy for vVols. For information
about the vSAN storage policy, see the Administering VMware vSAN documentation.
Prerequisites
n Verify that the vVols storage provider is available and active. See Register Storage Providers
for vVols.
n Make sure that the VM Storage Policies interface is populated with information about storage
entities and data services that are available in your storage environment. See Populating the
VM Storage Policies Interface.
n Define appropriate storage policy components. See Create Storage Policy Components.
Procedure
c Click Create.
Option Action
Name Enter the name of the storage policy, for example vVols Storage Policy.
3 On the Policy structure page under Datastore specific rules, enable rules for a target storage
entity, such as vVols storage.
You can enable rules for several datastores. Multiple rule sets allow a single policy to define
alternative storage placement parameters, often from several storage providers.
4 On the Virtual Volumes rules page, define storage placement rules for the target vVols
datastore.
b From the Add Rule drop-down menu, select available capability and specify its value.
For example, you can specify the number of read operations per second for the vVols
objects.
You can include as many rules as you need for the selected storage entity. Verify that the
values you provide are within the range of values that the vVols datastore advertises.
c To fine-tune your placement request further, click the Tags tab and add a tag-based rule.
Tag-based rules can filter datastores by including or excluding specific placement criteria. For example, your VM storage policy can include vVols datastores located in Texas and California, but exclude datastores located in Nevada.
Data services, such as encryption, caching, or replication, are offered by the storage. A VM storage policy that references data services requests these services for the VM when the VM is placed on the vVols datastore.
a Click the tab for the data service category, for example, Replication.
b Define custom rules for the data service category or use predefined components.
Option Description
Use storage policy component Select a storage policy component from the drop-down menu. This
option is available only if you have predefined components in your
database.
Custom Define custom rules for the data service category by specifying an
appropriate provider and values for the rules.
6 On the Storage compatibility page, review the list of datastores that match this policy.
If the policy includes several rule sets, the datastore must satisfy at least one rule set and all
rules within this set.
7 On the Review and finish page, review the storage policy settings and click Finish.
Results
The new VM storage policy compatible with vVols appears on the list.
What to do next
You can now associate this policy with a virtual machine, or designate the policy as default.
Prerequisites
n Make sure that the VM Storage Policies interface is populated with information about storage
entities and data services that are available in your storage environment. See Populating the
VM Storage Policies Interface.
Procedure
c Click Create.
3 On the Policy structure page under Datastore specific rules, enable tag-based placement
rules.
a Click Add Tag Rule and define tag-based placement criteria. Use the following as an
example.
Option Example
Tags Gold
All datastores with the Gold tag become compatible as the storage placement target.
5 On the Storage compatibility page, review the list of datastores that match this policy.
6 On the Review and finish page, review the storage policy settings and click Finish.
Results
The new VM storage policy compatible with tagged datastores appears on the list.
Prerequisites
Procedure
2 Select the storage policy, and click one of the following icons:
n Edit
n Clone
4 If editing the storage policy that is used by a virtual machine, reapply the policy to the virtual
machine.
Option Description
Manually later If you select this option, the compliance status for all virtual disks and virtual
machine home objects associated with the storage policy changes to Out of
Date. To update configuration and compliance, manually reapply the storage
policy to all associated entities. See Reapply Virtual Machine Storage Policy.
Now Update virtual machine and compliance status immediately after editing the
storage policy.
You cannot assign the predefined component directly to a virtual machine or virtual disk. Instead,
you must add the component to the VM storage policy, and assign the policy to the virtual
machine.
The component describes one type of service from one service provider. The services can vary
depending on the providers that you use, but generally belong in one of the following categories.
n Compression
n Caching
n Encryption
n Replication
When you create the storage policy component, you define the rules for one specific type and
grade of service.
The following example shows that virtual machines VM1 and VM2 have identical placement
requirements, but must have different grades of replication services. You can create the storage
policy components with different replication parameters and add these components to the
related storage policies.
The provider of the service can be a storage system, an I/O filter, or another entity. If the
component references an I/O filter, the component is added to the host-based rules of the
storage policy. Components that reference entities other than the I/O filters, for example, a
storage system, are added to the datastore-specific rule sets.
n Each component can include only one set of rules. All characteristics in this rule set belong to
a single provider of the data services.
n If the component is referenced in the VM storage policy, you cannot delete the component.
Before deleting the component, you must remove it from the storage policy or delete the
storage policy.
n When you add components to the policy, you can use only one component from the same category, for example caching, per set of rules.
Procedure
1 In the vSphere Client, open the New Storage Policy Component dialog box.
4 Enter a name, for example, 4-hour Replication, and a description for the policy component.
Make sure that the name does not conflict with the names of other components or storage
policies.
For example, if you are configuring 4-hour replication, set the Recovery Point Objective (RPO)
value to 4.
For encryption based on I/O filters, set the Allow I/O filters before encryption parameter.
Encryption provided by storage does not require this parameter.
Option Description
False (default) Does not allow the use of other I/O filters before the encryption filter.
True Allows the use of other I/O filters before the encryption filter. Other filters,
such as replication, can analyze clear text data before it is encrypted.
8 Click OK.
Results
The new component appears on the list of storage policy components.
What to do next
You can add the component to the VM storage policy. If the data service that the component
references is provided by the I/O filters, you add the component to the host-based rules of the
storage policy. Components that reference entities other than the I/O filters, for example, a
storage system, are added to the datastore-specific rule sets.
Procedure
1 In the vSphere Client, navigate to the storage policy component to edit or clone.
Option Description
Edit Settings When editing, you cannot change the category of the data service and the
provider. For example, if the original component references replication
provided by I/O filters, these settings must remain unchanged.
Clone When cloning, you can customize any settings of the original component.
4 If a VM storage policy that is assigned to a virtual machine references the policy component
you edit, reapply the storage policy to the virtual machine.
Manually later If you select this option, the compliance status for all virtual disks and virtual
machine home objects associated with the storage policy changes to Out of
Date. To update configuration and compliance, manually reapply the storage
policy to all associated entities. See Reapply Virtual Machine Storage Policy.
Now Update virtual machine and compliance status immediately after editing the
storage policy.
If you do not specify the storage policy, the system uses a default storage policy that is
associated with the datastore. If your storage requirements for the applications on the virtual
machine change, you can modify the storage policy that was originally applied to the virtual
machine.
This topic describes how to assign the VM storage policy when you create a virtual machine. For
information about other deployment methods that include cloning, deployment from a template,
and so on, see the vSphere Virtual Machine Administration documentation.
You can apply the same storage policy to the virtual machine configuration file and all its virtual
disks. If storage requirements for your virtual disks and the configuration file are different, you
can associate different storage policies with the VM configuration file and the selected virtual
disks.
Procedure
1 Start the virtual machine provisioning process and follow the appropriate steps.
2 Assign the same storage policy to all virtual machine files and disks.
a On the Select storage page, select a storage policy from the VM Storage Policy drop-
down menu.
Based on its configuration, the storage policy separates all datastores into compatible
and incompatible. If the policy references data services offered by a specific storage
entity, for example, vVols, the compatible list includes datastores that represent only that
type of storage.
The datastore becomes the destination storage resource for the virtual machine
configuration file and all virtual disks.
c If you use the replication service with vVols, specify the replication group.
Replication groups indicate which VMs and virtual disks must be replicated together to a
target site.
Option Description
Preconfigured replication group Replication groups that are configured in advance on the storage side.
vCenter Server and ESXi discover the replication groups, but do not
manage their life cycle.
Automatic replication group vVols creates a replication group and assigns all VM objects to this
group.
Use this option if requirements for storage placement are different for virtual disks. You can
also use this option to enable I/O filter services, such as caching and replication, for your
virtual disks.
a On the Customize hardware page, expand the New hard disk pane.
b From the VM storage policy drop-down menu, select the storage policy to assign to the
virtual disk.
Use this option to store the virtual disk on a datastore other than the datastore where the
VM configuration file resides.
Results
After you create the virtual machine, the Summary tab displays the assigned storage policies and
their compliance status.
What to do next
If storage placement requirements for the configuration file or the virtual disks change, you can
later modify the virtual policy assignment.
You can edit the storage policy for a powered-off or powered-on virtual machine.
When changing the VM storage policy assignment, you can apply the same storage policy to the
virtual machine configuration file and all its virtual disks. You can also associate different storage
policies with the VM configuration file and the virtual disks. You might apply different policies
when, for example, storage requirements for your virtual disks and the configuration file are
different.
Procedure
c Click the storage policy you want to change, and click VM Compliance.
You can see the list of virtual machines that use this storage policy.
Option Actions
Apply the same storage policy to all virtual machine objects
Select the policy from the VM storage policy drop-down menu.
Apply different storage policies to the VM home object and virtual disks
a Turn on the Configure per disk option.
b Select the object, for example, VM home.
c In the VM Storage Policy column, select the policy from the drop-down menu.
5 If you use a vVols policy with replication, configure the replication group.
Replication groups indicate which VMs and virtual disks must be replicated together to a
target site.
You can select a common replication group for all objects or select different replication
groups for each storage object.
Results
The storage policy is assigned to the virtual machine and its disks.
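If you prefer to script this task, the following PowerCLI sketch shows one way to apply a policy to the VM home object and all virtual disks, and then review the result. It assumes an existing Connect-VIServer session; the VM name App01 and the policy name vVols-Gold are hypothetical placeholders.

# PowerCLI sketch (hypothetical VM and policy names; assumes a connected session).
$vm     = Get-VM -Name 'App01'
$policy = Get-SpbmStoragePolicy -Name 'vVols-Gold'

# Apply the policy to the VM home object and to every virtual disk.
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |
    Set-SpbmEntityConfiguration -StoragePolicy $policy

# Review the assigned policies and their compliance status.
Get-SpbmEntityConfiguration -VM $vm
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)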
Prerequisites
Verify that the virtual machine has a storage policy that is associated with it.
Procedure
Compliance Status Description
Compliant: The datastore that the virtual machine or virtual disk uses has the storage capabilities compatible with the policy requirements.
Noncompliant: The datastore that the virtual machine or virtual disk uses does not have the storage capabilities compatible with the policy requirements. You can migrate the virtual machine files and virtual disks to compliant datastores.
Out of Date: The status indicates that the policy has been edited, but the new requirements have not been communicated to the datastore where the virtual machine objects reside. To communicate the changes, reapply the policy to the objects that are out of date.
Not Applicable: This storage policy references datastore capabilities that are not supported by the datastore where the virtual machine resides.
What to do next
When you cannot bring the noncompliant datastore into compliance, migrate the files or virtual
disks to a compatible datastore. See Find Compatible Storage Resource for Noncompliant Virtual
Machine.
If the status is Out of Date, reapply the policy to the objects. See Reapply Virtual Machine
Storage Policy.
Occasionally, a storage policy that is assigned to a virtual machine can be in the noncompliant
status. This status indicates that the virtual machine or its disks use datastores that are
incompatible with the policy. You can migrate the virtual machine files and virtual disks to
compatible datastores.
Use this task to determine which datastores satisfy the requirements of the policy.
Procedure
1 Verify that the storage policy for the virtual machine is in the noncompliant state.
The VM Storage Policy Compliance panel on the VM Storage Policies pane shows the
Noncompliant status.
3 Display the list of compatible datastores for the noncompliant storage policy.
The list of datastores that match the requirements of the policy appears.
What to do next
You can migrate the virtual machine or its disks to one of the datastores in the list.
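If you prefer the command line, a short PowerCLI sketch can list the compatible datastores for a policy. The policy, VM, and datastore names below are hypothetical; the commented Move-VM line only illustrates the follow-up migration.

# PowerCLI sketch (hypothetical policy name; assumes a connected session).
$policy = Get-SpbmStoragePolicy -Name 'vVols-Gold'

# List datastores that satisfy the requirements of the policy.
Get-SpbmCompatibleStorage -StoragePolicy $policy

# Then migrate the noncompliant VM to one of the listed datastores, for example:
# Move-VM -VM (Get-VM -Name 'App01') -Datastore (Get-Datastore -Name 'vVols-DS01')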
Prerequisites
The compliance status for a virtual machine is Out of Date. The status indicates that the policy
has been edited, but the new requirements have not been communicated to the datastore.
Procedure
Compliance Status Description
Compliant: The datastore that the virtual machine or virtual disk uses has the storage capabilities that the policy requires.
Noncompliant: The datastore supports specified storage requirements, but cannot currently satisfy the storage policy. For example, the status might become Noncompliant when physical resources of the datastore are unavailable. You can bring the datastore into compliance by making changes in the physical configuration of your host cluster, for example, by adding hosts or disks to the cluster. If additional resources satisfy the storage policy, the status changes to Compliant. When you cannot bring the noncompliant datastore into compliance, migrate the files or virtual disks to a compatible datastore. See Find Compatible Storage Resource for Noncompliant Virtual Machine.
Not Applicable: The storage policy references datastore capabilities that are not supported by the datastore.
The generic default storage policy that ESXi provides applies to all datastores and does not
include rules specific to any storage type.
In addition, ESXi offers default storage policies for the object-based datastores, vSAN and vVols. These policies guarantee the optimum placement for the virtual machine objects within the object-based storage.
For information about the default storage policy for vVols, see vVols and VM Storage
Policies.
VMFS and NFS datastores do not have specific default policies and can use the generic
default policy or a custom policy you define for them.
You can create a VM storage policy that is compatible with vSAN or vVols. You can then
designate this policy as the default for vSAN and vVols datastores. The user-defined default
policy replaces the default storage policy that VMware provides.
Each vSAN and vVols datastore can have only one default policy at a time. However, you can
create a single storage policy with multiple placement rule sets, so that it matches multiple
vSAN and vVols datastores. You can designate this policy as the default policy for all
datastores.
When the VM storage policy becomes the default policy for a datastore, you cannot delete
the policy unless you disassociate it from the datastore.
Note Do not designate a storage policy with replication rules as a default storage policy.
Otherwise, the policy prevents you from selecting replication groups.
Prerequisites
Create a storage policy that is compatible with vVols or vSAN. You can create a policy that
matches both types of storage.
Procedure
4 From the list of available storage policies, select a policy to designate as the default and click
OK.
Results
The selected storage policy becomes the default policy for the datastore. The system assigns
this policy to any virtual machine objects that you provision on the datastore when no other
policy is selected.
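The default policy assignment can also be scripted. This sketch assumes that, as in recent PowerCLI releases, Set-SpbmEntityConfiguration run against a vVols or vSAN datastore changes its default policy; the datastore and policy names are hypothetical, so verify the behavior in your environment first.

# PowerCLI sketch (hypothetical names; assumes a connected session).
$ds     = Get-Datastore -Name 'vVols-DS01'
$policy = Get-SpbmStoragePolicy -Name 'vVols-Gold'

# Show the current default policy of the datastore.
Get-SpbmEntityConfiguration -Datastore $ds

# Designate the user-defined policy as the default for the datastore.
Get-SpbmEntityConfiguration -Datastore $ds |
    Set-SpbmEntityConfiguration -StoragePolicy $policy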
Storage providers that manage arrays and storage abstractions are called persistence storage providers. Providers that support vVols or vSAN belong to this category. In addition to storage, persistence providers can offer other data services, such as replication.
Another category of providers is I/O filter storage providers, or data service providers. These providers offer data services that include host-based caching, compression, and encryption.
Both persistence storage providers and data service providers can be built in by VMware or offered by third parties.
Built-in storage providers are offered by VMware. Typically, they do not require registration.
For example, the storage providers that support vSAN or I/O filters are built-in and become
registered automatically.
When a third party offers a storage provider, you typically must register the provider. An
example of such a provider is the vVols provider. You use the vSphere Client to register and
manage each storage provider component.
The following graphic illustrates how different types of storage providers facilitate
communications between vCenter Server and ESXi and other components of your storage
environment. For example, the components might include storage arrays, vVols storage, and I/O
filters.
[Figure: vCenter Server with SPBM communicating through an I/O filter storage provider, a multi-array storage provider, and a vVols storage provider with the ESXi-1 and ESXi-2 hosts running I/O filters, the X100 and X200 storage arrays, and vVols storage.]
Information that the storage provider supplies can be divided into the following categories:
n Storage data services and capabilities. This type of information is essential for such
functionalities as vSAN, vVols, and I/O filters. The storage provider that represents these
functionalities integrates with the Storage Policy Based Management (SPBM) mechanism. The
storage provider collects information about data services that are offered by underlying
storage entities or available I/O filters.
You reference these data services when you define storage requirements for virtual
machines and virtual disks in a storage policy. Depending on your environment, the SPBM
mechanism ensures appropriate storage placement for a virtual machine or enables specific
data services for virtual disks. For details, see Creating and Managing VM Storage Policies.
n Storage status. This category includes reporting about status of various storage entities. It
also includes alarms and events for notifying about configuration changes.
This type of information can help you troubleshoot storage connectivity and performance
problems. It can also help you to correlate array-generated events and alarms to
corresponding performance and load changes on the array.
n Storage DRS information for the distributed resource scheduling on block devices or file
systems. This information helps to ensure that decisions made by Storage DRS are
compatible with resource management decisions internal to the storage systems.
Typically, vendors are responsible for supplying storage providers. The VMware VASA program
defines an architecture that integrates third-party storage providers into the vSphere
environment, so that vCenter Server and ESXi hosts can communicate with the storage
providers.
n Make sure that every storage provider you use is certified by VMware and properly
deployed. For information about deploying the storage providers, contact your storage
vendor.
n Make sure that the storage provider is compatible with the vCenter Server and ESXi versions.
See VMware Compatibility Guide.
n Do not install the VASA provider on the same system as vCenter Server.
n When you upgrade a storage provider to a later VASA version, you must unregister and
reregister the provider. After registration, vCenter Server can detect and use the functionality
of the new VASA version.
When you upgrade a storage provider to a later VASA version, you must unregister and
reregister the provider. After registration, vCenter Server can detect and use the functionality of
the later VASA version.
Note If you use vSAN, the storage providers for vSAN are registered and appear on the list of
storage providers automatically. vSAN does not support manual registration of storage
providers. See the Administering VMware vSAN documentation.
Prerequisites
Verify that the storage provider component is installed on the storage side and obtain its
credentials from your storage administrator.
Procedure
4 Enter connection information for the storage provider, including the name, URL, and
credentials.
Action Description
Direct vCenter Server to the storage provider certificate: Select the Use storage provider certificate option and specify the certificate's location.
Use a thumbprint of the storage provider certificate: If you do not guide vCenter Server to the provider certificate, the certificate thumbprint is displayed. You can check the thumbprint and approve it. vCenter Server adds the certificate to the truststore and proceeds with the connection.
The storage provider adds the vCenter Server certificate to its truststore when vCenter
Server first connects to the provider.
6 Click OK.
Results
vCenter Server registers the storage provider and establishes a secure SSL connection with it.
What to do next
To troubleshoot registration of your storage provider, see the VMware Knowledge Base article
https://kb.vmware.com/s/article/49798.
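Registration can also be scripted with the PowerCLI VASA cmdlets, as in the following sketch. The provider name and URL are hypothetical placeholders for the values that your storage vendor supplies.

# PowerCLI sketch (hypothetical provider name and URL; assumes a connected session).
$cred = Get-Credential    # storage provider credentials from your storage administrator
New-VasaProvider -Name 'Array-VP' -Url 'https://vasa.example.com:8443/vasa/version.xml' -Credential $cred

# Verify that the provider is registered and online.
Get-VasaProvider -Name 'Array-VP' | Format-List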
Use the vSphere Client to view general storage provider information and details for each storage
component.
Procedure
3 In the Storage Providers list, view the storage providers registered with vCenter Server.
The list shows general information including the name of the storage provider, its URL and
status, version of VASA APIs, storage entities the provider represents, and so on.
4 To display additional details, select a specific storage provider or its component from the list.
Note A single storage provider can support storage systems from multiple vendors.
Procedure
3 From the list of storage providers, select a storage provider and click one of the following
icons.
Option Description
Synchronize Storage Providers Synchronize all storage providers with the current state of the environment.
Remove Unregister storage providers that you do not use. After this operation,
vCenter Server closes the connection and removes the storage provider
from its configuration.
This option is also useful when you upgrade a storage provider to a later
VASA version. In this case, you must unregister and then reregister the
provider. After registration, vCenter Server can detect and use the
functionality of the later VASA version.
Refresh certificate vCenter Server warns you when a certificate assigned to a storage provider
is about to expire. You can refresh the certificate to continue using the
provider.
If you fail to refresh the certificate before it expires, vCenter Server
discontinues using the provider.
Results
vCenter Server closes the connection and removes the storage provider from its configuration.
n About vVols
n vVols Concepts
n vVols Architecture
n Configure vVols
n Troubleshooting vVols
About vVols
With vVols, an individual virtual machine, not the datastore, becomes a unit of storage
management, while storage hardware gains complete control over virtual disk content, layout,
and management.
Traditionally, the vSphere administrator creates datastores based on LUNs or NFS shares and uses these datastores as virtual machine storage. Typically, the datastore is the lowest granularity level at which data management occurs from a storage perspective. However, a single datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, it is difficult to meet the requirements of an individual virtual machine.
The vVols functionality helps to improve granularity. It helps you to differentiate virtual machine
services on a per application level by offering a new approach to storage management. Rather
than arranging storage around features of a storage system, vVols arranges storage around the
needs of individual virtual machines, making storage virtual-machine centric.
vVols maps virtual disks and their derivatives, clones, snapshots, and replicas, directly to objects,
called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive
storage operations such as snapshot, cloning, and replication to the storage system.
By creating a volume for each virtual disk, you can set policies at the optimum level. You can
decide in advance what the storage requirements of an application are, and communicate these
requirements to the storage system. The storage system creates an appropriate virtual disk
based on these requirements. For example, if your virtual machine requires an active-active
storage array, you no longer must select a datastore that supports the active-active model.
Instead, you create an individual virtual volume that is automatically placed to the active-active
array.
vVols Concepts
With vVols, abstract storage containers replace traditional storage volumes based on LUNs or
NFS shares. In vCenter Server, the storage containers are represented by vVols datastores.
vVols datastores store virtual volumes, objects that encapsulate virtual machine files.
n Protocol Endpoints
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct
access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy,
called the protocol endpoint, to communicate with virtual volumes and virtual disk files that
virtual volumes encapsulate. ESXi uses protocol endpoints to establish a data path on
demand from virtual machines to their respective virtual volumes.
n vVols Datastores
A vVols datastore represents a storage container in vCenter Server and the vSphere Client.
Virtual volumes are stored natively inside a storage system that is connected to your ESXi hosts
through Ethernet or SAN. They are exported as objects by a compliant storage system and are
managed entirely by hardware on the storage side. Typically, a unique GUID identifies a virtual
volume. Virtual volumes are not preprovisioned, but created automatically when you perform
virtual machine management operations. These operations include a VM creation, cloning, and
snapshotting. ESXi and vCenter Server associate one or more virtual volumes to a virtual
machine.
Data-vVol
A data virtual volume that corresponds directly to each virtual disk .vmdk file. Like virtual disk files on traditional datastores, virtual volumes are presented to virtual machines as SCSI disks. A data-vVol can be either thick- or thin-provisioned.
Config-vVol
A configuration virtual volume, or a home directory, represents a small directory that contains
metadata files for a virtual machine. The files include a .vmx file, descriptor files for virtual
disks, log files, and so forth. The configuration virtual volume is formatted with a file system.
When ESXi uses the SCSI protocol to connect to storage, configuration virtual volumes are
formatted with VMFS. With NFS protocol, configuration virtual volumes are presented as an
NFS directory. Typically, it is thin-provisioned.
Swap-vVol
Created when a VM is first powered on. It is a virtual volume to hold copies of VM memory
pages that cannot be retained in memory. Its size is determined by the VM’s memory size. It
is thick-provisioned by default.
Snapshot-vVol
A virtual memory volume to hold the contents of virtual machine memory for a snapshot.
Thick-provisioned.
Other
A virtual volume for specific features. For example, a digest virtual volume is created for
Content-Based Read Cache (CBRC).
Typically, a VM creates a minimum of three virtual volumes, data-vVol, config-vVol, and swap-
vVol. The maximum depends on how many virtual disks and snapshots reside on the VM.
For example, the following SQL server has six virtual volumes:
n Config-vVol
n Snapshot-vVol
By using different virtual volumes for different VM components, you can apply and manipulate
storage policies at the finest granularity level. For example, a virtual volume that contains a virtual
disk can have a richer set of services than the virtual volume for the VM boot disk. Similarly, a
snapshot virtual volume can use a different storage tier compared to a current virtual volume.
Disk Provisioning
The vVols functionality supports the concept of thin and thick-provisioned virtual disks. However, from the I/O perspective, implementation and management of thin or thick provisioning by the arrays is transparent to the ESXi host. ESXi offloads to the storage arrays any functions related to thin provisioning. In the data path, ESXi does not treat thin and thick virtual volumes differently.
You select the thin or thick type for your virtual disk at the VM creation time. If your disk is thin
and resides on a vVols datastore, you cannot change its type later by inflating the disk.
Shared Disks
You can place a shared disk on a vVols storage that supports SCSI Persistent Reservations for
vVols. You can use this disk as a quorum disk and eliminate RDMs in the MSCS clusters. For more
information, see the vSphere Resource Management documentation.
The storage provider is implemented through VMware APIs for Storage Awareness (VASA) and
is used to manage all aspects of vVols storage. The storage provider integrates with the Storage
Monitoring Service (SMS), shipped with vSphere, to communicate with vCenter Server and ESXi
hosts.
The storage provider delivers information from the underlying storage container. The storage
container capabilities appear in vCenter Server and the vSphere Client. Then, in turn, the storage
provider communicates virtual machine storage requirements, which you can define in the form
of a storage policy, to the storage layer. This integration process ensures that a virtual volume
created in the storage layer meets the requirements outlined in the policy.
Typically, vendors are responsible for supplying storage providers that can integrate with
vSphere and provide support to vVols. Every storage provider must be certified by VMware and
properly deployed. For information about deploying and upgrading the vVols storage provider to
a version compatible with current ESXi release, contact your storage vendor.
After you deploy the storage provider, you must register it in vCenter Server, so that it can
communicate with vSphere through the SMS.
A storage container is a part of the logical storage fabric and is a logical unit of the underlying
hardware. The storage container logically groups virtual volumes based on management and
administrative needs. For example, the storage container can contain all virtual volumes created
for a tenant in a multitenant deployment, or a department in an enterprise deployment. Each
storage container serves as a virtual volume store and virtual volumes are allocated out of the
storage container capacity.
Typically, a storage administrator on the storage side defines storage containers. The number of
storage containers, their capacity, and their size depend on a vendor-specific implementation. At
least one container for each storage system is required.
After you register a storage provider associated with the storage system, vCenter Server
discovers all configured storage containers along with their storage capability profiles, protocol
endpoints, and other attributes. A single storage container can export multiple capability profiles.
As a result, virtual machines with diverse needs and different storage policy settings can be a
part of the same storage container.
Initially, all discovered storage containers are not connected to any specific host, and you cannot
see them in the vSphere Client. To mount a storage container, you must map it to a vVols
datastore.
Protocol Endpoints
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct
access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called
the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual
volumes encapsulate. ESXi uses protocol endpoints to establish a data path on demand from
virtual machines to their respective virtual volumes.
Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host
performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual
volume. Typically, a storage system requires just a few protocol endpoints. A single protocol
endpoint can connect to hundreds or thousands of virtual volumes.
On the storage side, a storage administrator configures protocol endpoints, one or several per
storage container. The protocol endpoints are a part of the physical storage fabric. The storage
system exports the protocol endpoints with associated storage containers through the storage
provider. After you map the storage container to a vVols datastore, the ESXi host discovers the
protocol endpoints and they become visible in the vSphere Client. The protocol endpoints can
also be discovered during a storage rescan. Multiple hosts can discover and mount the protocol
endpoints.
In the vSphere Client, the list of available protocol endpoints looks similar to the host storage
devices list. Different storage transports can be used to expose the protocol endpoints to ESXi.
When the SCSI-based transport is used, the protocol endpoint represents a proxy LUN defined
by a T10-based LUN WWN. For the NFS protocol, the protocol endpoint is a mount point, such as
an IP address and a share name. You can configure multipathing on the SCSI-based protocol
endpoint, but not on the NFS-based protocol endpoint. No matter which protocol you use, the
storage array can provide multiple protocol endpoints for availability purposes.
Protocol endpoints are managed per array. ESXi and vCenter Server assume that all protocol endpoints reported for an array are associated with all containers on that array. For example, if an array has two containers and three protocol endpoints, ESXi assumes that virtual volumes on both containers can be bound to all three protocol endpoints.
When a virtual volume must be accessed, ESXi sends a bind request to the storage system. The storage system replies with a protocol endpoint ID that becomes an access point to the virtual volume. The protocol endpoint accepts all I/O requests to the virtual volume. This binding exists until ESXi sends an unbind request for the virtual volume.
For later bind requests on the same virtual volume, the storage system can return different
protocol endpoint IDs.
When receiving concurrent bind requests to a virtual volume from multiple ESXi hosts, the
storage system can return the same or different endpoint bindings to each requesting ESXi host.
In other words, the storage system can bind different concurrent hosts to the same virtual
volume through different endpoints.
The unbind operation removes the I/O access point for the virtual volume. The storage system
might unbind the virtual volume from its protocol endpoint immediately, or after a delay, or take
some other action. A bound virtual volume cannot be deleted until it is unbound.
vVols Datastores
A vVols datastore represents a storage container in vCenter Server and the vSphere Client.
After vCenter Server discovers storage containers exported by storage systems, you must
mount them as vVols datastores. The vVols datastores are not formatted in a traditional way as, for example, VMFS datastores are. You must still create them because all vSphere functionalities,
including FT, HA, DRS, and so on, require the datastore construct to function properly.
You use the datastore creation wizard in the vSphere Client to map a storage container to a
vVols datastore. The vVols datastore that you create corresponds directly to the specific storage
container.
From a vSphere administrator perspective, the vVols datastore is similar to any other datastore
and is used to hold virtual machines. Like other datastores, the vVols datastore can be browsed
and lists virtual volumes by virtual machine name. Like traditional datastores, the vVols datastore
supports unmounting and mounting. However, such operations as upgrade and resize are not
applicable to the vVols datastore. The vVols datastore capacity is configurable by the storage
administrator outside of vSphere.
You can use the vVols datastores with traditional VMFS and NFS datastores and with vSAN.
Note The size of a virtual volume must be a multiple of 1 MB, with a minimum size of 1 MB. As a
result, all virtual disks that you provision on a vVols datastore must be an even multiple of 1 MB. If
the virtual disk you migrate to the vVols datastore is not an even multiple of 1 MB, extend the disk
to the nearest even multiple of 1 MB.
A VM storage policy is a set of rules that contains placement and quality-of-service requirements
for a virtual machine. The policy enforces appropriate placement of the virtual machine within
vVols storage and guarantees that storage can satisfy virtual machine requirements.
You use the VM Storage Policies interface to create a vVols storage policy. When you assign the
new policy to the virtual machine, the policy enforces that the vVols storage meets the
requirements.
The default No Requirements policy that VMware provides has the following characteristics:
n You can create a VM storage policy for vVols and designate it as the default.
vVols supports NFS version 3 and 4.1, iSCSI, Fibre Channel, and FCoE.
No matter which storage protocol is used, protocol endpoints provide uniform access to both
SAN and NAS storage. A virtual volume, like a file on other traditional datastore, is presented to a
virtual machine as a SCSI disk.
Note A storage container is dedicated to SCSI or NAS and cannot be shared across those
protocol types. An array can present one storage container with SCSI protocol endpoints and a
different container with NFS protocol endpoints. The container cannot use a combination of SCSI
and NFS protocol endpoints.
When the SCSI-based protocol is used, the protocol endpoint represents a proxy LUN defined by
a T10-based LUN WWN.
Like any block-based LUNs, the protocol endpoints are discovered using standard LUN discovery
commands. The ESXi host periodically rescans for new devices and asynchronously discovers
block‐based protocol endpoints. The protocol endpoint can be accessible by multiple paths.
Traffic on these paths follows well‐known path selection policies, as is typical for LUNs.
On SCSI-based disk arrays at VM creation time, ESXi makes a virtual volume and formats it as
VMFS. This small virtual volume stores all VM metadata files and is called the config‐vVol. The
config‐vVol functions as a VM storage locator for vSphere.
vVols on disk arrays supports the same set of SCSI commands as VMFS and uses ATS as a locking mechanism.
Each ESXi host can have multiple HBAs and each HBA can have properties configured on it. One
of these properties is the authentication method that the HBA must use. Authentication is
optional, but if implemented, it must be supported by both the initiator and the target. CHAP is an authentication method that can be used in both directions between the initiator and the target.
For more information about different CHAP authentication methods, see Selecting CHAP
Authentication Method. To configure CHAP on your ESXi host, see Configuring CHAP Parameters
for iSCSI or iSER Storage Adapters.
No matter which version you use, a storage array can provide multiple protocol endpoints for
availability purposes.
In addition, NFS version 4.1 introduces trunking mechanisms that enable load balancing and
multipathing.
vVols on NAS devices supports the same NFS Remote Procedure Calls (RPCs) that ESXi hosts
use when connecting to NFS mount points.
vVols Architecture
An architectural diagram provides an overview of how all components of the vVols functionality
interact with each other.
[Figure: vVols architecture. In the data center, VM storage policies and the Storage Monitoring Service in VMware vSphere communicate through the VASA APIs with the VASA provider, while protocol endpoints connect the hosts to the storage array.]
Virtual volumes are objects exported by a compliant storage system and typically correspond
one-to-one with a virtual machine disk and other VM-related files. A virtual volume is created and
manipulated out-of-band, not in the data path, by a VASA provider.
A VASA provider, or a storage provider, is developed through vSphere APIs for Storage
Awareness. The storage provider enables communication between the ESXi hosts, vCenter
Server, and the vSphere Client on one side, and the storage system on the other. The VASA
provider runs on the storage side and integrates with the vSphere Storage Monitoring Service
(SMS) to manage all aspects of vVols storage. The VASA provider maps virtual disk objects and
their derivatives, such as clones, snapshots, and replicas, directly to the virtual volumes on the
storage system.
The ESXi hosts have no direct access to the virtual volumes storage. Instead, the hosts access
the virtual volumes through an intermediate point in the data path, called the protocol endpoint.
The protocol endpoints establish a data path on demand from the virtual machines to their
respective virtual volumes. The protocol endpoints serve as a gateway for direct in-band I/O
between ESXi hosts and the storage system. ESXi can use Fibre Channel, FCoE, iSCSI, and NFS
protocols for in-band communication.
The virtual volumes reside inside storage containers that logically represent a pool of physical
disks on the storage system. On the vCenter Server and ESXi side, storage containers are
presented as vVols datastores. A single storage container can export multiple storage capability
sets and provide different levels of service to different virtual volumes.
Communication with the VASA provider is protected by SSL certificates. These certificates can
come from the VASA provider or from the VMCA.
n Certificates can be directly provided by the VASA provider for long-term use. They can be
either self-generated and self-signed, or derived from an external Certificate Authority.
n Certificates can be generated by the VMCA for use by the VASA provider.
When a host or VASA provider is registered, VMCA follows these steps automatically, without
involvement from the vSphere administrator.
1 When a VASA provider is first added to the vCenter Server storage management service
(SMS), it produces a self‐signed certificate.
2 After verifying the certificate, the SMS requests a Certificate Signing Request (CSR) from the
VASA provider.
3 After receiving and validating the CSR, the SMS presents it to the VMCA on behalf of the
VASA provider, requesting a CA signed certificate.
4 The signed certificate with the root certificate is passed to the VASA provider. The VASA
provider can authenticate all future secure connections originating from the SMS on vCenter
Server and on ESXi hosts.
In vVols environment, snapshots are managed by ESXi and vCenter Server, but are performed by
the storage array.
Each snapshot creates an extra virtual volume object, a snapshot virtual volume or, for snapshots with memory, a memory virtual volume that holds the contents of virtual machine memory. Original VM data is copied to this object, and it remains read-only, which prevents the guest operating system from writing to the snapshot. You cannot resize the snapshot virtual volume, and it can be read only when the VM is reverted to a snapshot. Typically, when you replicate the VM, its snapshot virtual volume is also replicated.
The base virtual volume remains active, or read-write. When another snapshot is created, it
preserves the new state and data of the virtual machine at the time you take the snapshot.
Deleting snapshots leaves the base virtual volume that represents the most current state of the
virtual machine. Snapshot virtual volumes are discarded. Unlike snapshots on the traditional
datastores, virtual volume snapshots do not need to commit their contents to the base virtual
volume.
[Figure: a storage container with snapshot virtual volumes and the base vVol, which remains read-write.]
For information about creating and managing snapshots, see the vSphere Virtual Machine
Administration documentation.
n The storage system or storage array that you use must support vVols and integrate with the
vSphere components through vSphere APIs for Storage Awareness (VASA). The storage
array must support thin provisioning and snapshotting.
n The following components must be configured on the storage side:
n Protocol endpoints
n Storage containers
n Storage profiles
n Replication configurations if you plan to use vVols with replication. See Requirements for
Replication with vVols.
n If you use iSCSI, activate the software iSCSI adapters on your ESXi hosts. Configure
Dynamic Discovery and enter the IP address of your vVols storage system. See Configure
the Software iSCSI Adapter.
n Synchronize all components in the storage array with vCenter Server and all ESXi hosts. Use
Network Time Protocol (NTP) to do this synchronization.
For more information, contact your vendor and see VMware Compatibility Guide.
Procedure
5 Click OK.
Configure vVols
To configure your vVols environment, follow several steps.
Prerequisites
Procedure
What to do next
You can now provision virtual machines on the vVols datastore. For information on creating
virtual machines, see Provision Virtual Machines on vVols Datastores and the vSphere Virtual
Machine Administration documentation.
After registration, the vVols provider communicates with vCenter Server. The provider reports
characteristics of underlying storage and data services, such as replication, that the storage
system provides. The characteristics appear in the VM Storage Policies interface and can be used
to create a VM storage policy compatible with the vVols datastore. After you apply this storage
policy to a virtual machine, the policy is pushed to vVols storage. The policy enforces optimal
placement of the virtual machine within vVols storage and guarantees that storage can satisfy
virtual machine requirements. If your storage provides extra services, such as caching or
replication, the policy enables these services for the virtual machine.
Prerequisites
Verify that an appropriate version of the vVols storage provider is installed on the storage side.
Obtain credentials of the storage provider.
Procedure
4 Enter connection information for the storage provider, including the name, URL, and
credentials.
Action Description
Direct vCenter Server to the storage provider certificate: Select the Use storage provider certificate option and specify the certificate's location.
Use a thumbprint of the storage provider certificate: If you do not guide vCenter Server to the provider certificate, the certificate thumbprint is displayed. You can check the thumbprint and approve it. vCenter Server adds the certificate to the truststore and proceeds with the connection.
The storage provider adds the vCenter Server certificate to its truststore when vCenter
Server first connects to the provider.
Results
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
4 Enter the datastore name and select a backing storage container from the list of storage
containers.
Make sure to use a name that does not duplicate another datastore name in your data center environment.
If you mount the same vVols datastore to several hosts, the name of the datastore must be
consistent across all hosts.
What to do next
After you create the vVols datastore, you can perform such datastore operations as renaming
the datastore, browsing datastore files, unmounting the datastore, and so on.
Procedure
4 To view details for a specific item, select this item from the list.
5 Use tabs under Protocol Endpoint Details to access additional information and modify
properties for the selected protocol endpoint.
Tab Description
Properties: View the item properties and characteristics. For SCSI (block) items, view and edit multipathing policies.
Paths (SCSI protocol endpoints only): Display paths available for the protocol endpoint. Disable or enable a selected path. Change the Path Selection Policy.
Procedure
4 Select the protocol endpoint whose paths you want to change and click the Properties tab.
6 Select a path policy and configure its settings. Your options change depending on the type of
a storage device you use.
The path policies available for your selection depend on the storage vendor support.
n For information about path policies for SCSI devices, see Path Selection Plug-Ins and
Policies.
n For information about path mechanisms for NVMe devices, see VMware High
Performance Plug-In and Path Selection Schemes.
7 To save your settings and exit the dialog box, click OK.
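You can also inspect the protocol endpoints that a host has discovered from the command line. The sketch below runs the esxcli storage vvol protocolendpoint list command through the PowerCLI Get-EsxCli interface; the host name is a hypothetical placeholder.

# PowerCLI sketch (hypothetical host name; assumes a connected session).
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esxi-01.example.com') -V2

# List the protocol endpoints discovered by this host.
$esxcli.storage.vvol.protocolendpoint.list.Invoke()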
Note All virtual disks that you provision on a vVols datastore must be an even multiple of 1 MB.
A virtual machine that runs on a vVols datastore requires an appropriate VM storage policy.
After you provision the virtual machine, you can perform typical VM management tasks. For
information, see the vSphere Virtual Machine Administration documentation.
Procedure
VMware provides a default No Requirements storage policy for vVols. If needed, you can create a custom storage policy compatible with vVols.
To guarantee that the vVols datastore fulfills specific storage requirements when allocating a
virtual machine, associate the vVols storage policy with the virtual machine.
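Custom vVols policies can also be created in PowerCLI. Capability names are vendor-specific, so the VendorX.tier capability in this sketch is purely hypothetical; list the capabilities that your storage provider actually advertises with Get-SpbmCapability before building rules.

# PowerCLI sketch (hypothetical vendor capability; assumes a connected session).
# Inspect the capabilities advertised by your vVols storage provider.
Get-SpbmCapability | Where-Object { $_.Name -like 'VendorX.*' }

# Build a policy from one vendor capability and use it for vVols placement.
$rule    = New-SpbmRule -Capability (Get-SpbmCapability -Name 'VendorX.tier') -Value 'Gold'
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name 'vVols-Gold' -Description 'Gold tier on vVols storage' -AnyOfRuleSets $ruleSet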
Array-based replication is policy driven. After you configure your vVols storage for replication,
information about replication capabilities and replication groups is delivered from the array by
the storage provider. This information shows in the VM Storage Policy interface of vCenter
Server.
You use the VM storage policy to describe replication requirements for your virtual machines.
The parameters that you specify in the storage policy depend on how your array implements
replication. For example, your VM storage policy might include such parameters as the
replication schedule, replication frequency, or recovery point objective (RPO). The policy might
also indicate the replication target, a secondary site where your virtual machines are replicated,
or specify whether replicas must be deleted.
By assigning the replication policy during VM provisioning, you request replication services for
your virtual machine. After that, the array takes over the management of all replication schedules
and processes.
[Figure: array-based replication of the vVols objects (config, swap, data, DB1, DB2, DB3) from a vVols datastore at Site 1 to a vVols datastore at Site 2.]
Storage Requirements
Implementation of vVols replication depends on your array and might be different for storage
vendors. Generally, the following requirements apply to all vendors.
n The storage arrays that you use to implement replication must be compatible with vVols.
n The arrays must integrate with the version of the storage (VASA) provider compatible with
vVols replication.
n The storage arrays must be replication capable and configured to use vendor-provided
replication mechanisms. Typical configurations usually involve one or two replication targets.
Any required configurations, such as pairing of the replicated site and the target site, must be
also performed on the storage side.
n When applicable, replication groups and fault domains for vVols must be preconfigured on
the storage side.
For more information, contact your vendor and see VMware Compatibility Guide.
vSphere Requirements
n Use the vCenter Server and ESXi versions that support vVols storage replication. vCenter
Server and ESXi hosts that are older than 6.5 release do not support replicated vVols
storage. Any attempts to create a replicated VM on an incompatible host fail with an error.
For information, see VMware Compatibility Guide.
n If you plan to migrate a virtual machine, make sure that target resources, such as the ESXi
hosts and vVols datastores, support storage replication.
vCenter Server and ESXi can discover replication groups, but do not manage their life cycle.
Replication groups, also called consistency groups, indicate which VMs and virtual disks must be
replicated together to a target site. You can assign components of the same virtual machine,
such as the VM configuration file and virtual disks, to different preconfigured replication groups.
Or exclude certain VM components from replication.
If no preconfigured groups are available, vVols can use an automatic method. With the automatic
method, vVols creates a replication group on demand and associates this group with a vVols
object being provisioned. If you use the automatic replication group, all components of a virtual
machine are assigned to the group. You cannot mix preconfigured and automatic replication
groups for components of the same virtual machine.
Fault domains are configured and reported by the storage array, and are not exposed in the
vSphere Client. The Storage Policy Based Management (SPBM) mechanism discovers fault
domains and uses them for validation purposes during a virtual machine creation.
For example, provision a VM with two disks, one associated with replication group Anaheim: B,
the second associated with replication group Anaheim: C. SPBM validates the provisioning
because both disks are replicated to the same target fault domains.
[Figure: Valid Configuration. Source replication groups and their target replication groups, including Anaheim:D, New-York:B, New-York:C, and New-York:D.]
Now provision a VM with two disks, one associated with replication group Anaheim: B, the
second associated with replication group Anaheim: D. This configuration is invalid. Both
replication groups replicate to the New-York fault domain, however, only one replicates to the
Boulder fault domain.
[Figure: Invalid Configuration. Source replication groups and their target replication groups, including Anaheim:D, New-York:B, New-York:C, and New-York:D.]
The workflow to activate replication for your virtual machines includes steps typical for the virtual
machine provisioning on vVols storage.
1 Define the VM storage policy compatible with replication storage. The datastore-based rules
of the policy must include the replication component. See Create a VM Storage Policy for
vVols.
After you configure the storage policy that includes replication, vCenter Server discovers
available replication groups.
2 Assign the replication policy to your virtual machine. If configured, select a compatible
replication group, or use the automatic assignment. See Assign Storage Policies to Virtual
Machines.
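If you script the provisioning, you can inspect the replication constructs that the array reports and assign a preconfigured group from PowerCLI. The sketch below assumes the replication cmdlets and the -ReplicationGroup parameter that recent PowerCLI releases provide for vVols; the policy, VM, and group names are hypothetical.

# PowerCLI sketch (hypothetical names; assumes a connected session).
$policy = Get-SpbmStoragePolicy -Name 'vVols-Replicated'

# Discover replication groups and fault domains reported by the array.
Get-SpbmReplicationGroup -StoragePolicy $policy
Get-SpbmFaultDomain

# Assign the replication policy and a preconfigured group to the VM and its objects.
$vm    = Get-VM -Name 'App01'
$group = Get-SpbmReplicationGroup -Name 'Anaheim:B'
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $group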
n You can apply the replication storage policy only to a configuration virtual volume and a data
virtual volume. Other VM objects inherit the replication policy in the following way:
n The memory virtual volume inherits the policy of the configuration virtual volume.
n The digest virtual volume inherits the policy of the data virtual volume.
n The swap virtual volume, which exists while a virtual machine is powered on, is excluded
from replication.
n If you do not apply the replication policy to a VM disk, the disk is not replicated.
n The replication storage policy should not be used as a default storage policy for a datastore.
Otherwise, the policy prevents you from selecting replication groups.
n Replication preserves snapshot history. If a snapshot was created and replicated, you can
recover to the application consistent snapshot.
n You can replicate a linked clone. If a linked clone is replicated without its parent, it becomes a
full clone.
n If a descriptor file belongs to a virtual disk of one VM, but resides in the VM home of another
VM, both VMs must be in the same replication group. If the VMs are located in different
replication groups, both of these replication groups must be failed over at the same time.
Otherwise, the descriptor might become unavailable after the failover. As a result, the VM
might fail to power on.
n In your vVols with replication environment, you might periodically run a test failover workflow
to ensure that the recovered workloads are functional after a failover.
The resulting test VMs that are created during the test failover are fully functional and
suitable for general administrative operations. However, certain considerations apply:
n All VMs created during the test failover must be deleted before the test failover stops.
The deletion ensures that any snapshots or snapshot-related virtual volumes that are part
of the VM, such as the snapshot virtual volume, do not interfere with stopping of the test
failover.
n You can create fast clones only if the policy applied to the new VM contains the same
replication group ID as the VM being cloned. Attempts to place the child VM outside of
the replication group of the parent VM fail.
n With vVols, you can use advanced storage services that include replication, encryption,
deduplication, and compression on individual virtual disks. Contact your storage vendor for
information about services they support with vVols.
n vVols functionality supports backup software that uses vSphere APIs - Data Protection.
Virtual volumes are modeled on virtual disks. Backup products that use vSphere APIs - Data
Protection are as fully supported on virtual volumes as they are on VMDK files on a LUN.
Snapshots that the backup software creates using vSphere APIs - Data Protection look like non-vVols snapshots to vSphere and the backup software.
Note vVols does not support SAN transport mode. vSphere APIs - Data Protection
automatically selects an alternative data transfer method.
For more information about integration with the vSphere Storage APIs - Data Protection,
consult your backup software vendor.
n vVols supports such vSphere features as vSphere vMotion, Storage vMotion, snapshots,
linked clones, and DRS.
n You can use clustering products, such as Oracle Real Application Clusters, with vVols. To use
these products, you activate the multiwrite setting for a virtual disk stored on the vVols
datastore.
For more details, see the knowledge base article at http://kb.vmware.com/kb/2112039. For a list
of features and products that vVols functionality supports, see VMware Product Interoperability
Matrixes.
vVols Limitations
Improve your experience with vVols by knowing the following limitations:
n Because the vVols environment requires vCenter Server, you cannot use vVols with a
standalone host.
n A vVols storage container cannot span multiple physical arrays. Some vendors present
multiple physical arrays as a single array. In such cases, you still technically use one logical
array.
n Host profiles that contain vVols datastores are vCenter Server specific. After you extract this
type of host profile, you can attach it only to hosts and clusters managed by the same
vCenter Server as the reference host.
n Changing storage profiles must be an array-side operation, not a storage migration to another container.
When you use block storage, the PE represents a proxy LUN defined by a T10-based LUN WWN.
For NFS storage, the PE is a mount point, such as an IP address or DNS name, and a share name.
Typically, configuration of PEs is array-specific. When you configure PEs, you might need to
associate them with specific storage processors, or with certain hosts. To avoid errors when
creating PEs, do not configure them manually. Instead, when possible, use storage-specific
management tools.
If your environment uses LUN IDs that are greater than 1023, change the number of scanned
LUNs through the Disk.MaxLUN parameter. See Change the Number of Scanned Storage Devices.
When you use the vSphere Client, you cannot change the VM storage policy assignment for
swap-vVol, memory-vVol, or snapshot-vVol.
n On block storage, ESXi gives a large queue depth to I/O because of a potentially high number
of virtual volumes. The Scsi.ScsiVVolPESNRO parameter controls the number of I/O that can
be queued for PEs. You can configure the parameter on the Advanced System Settings page
of the vSphere Client.
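Both host-level parameters can be reviewed and changed with the PowerCLI advanced-settings cmdlets, as in this sketch. The host name and values are examples only; follow your storage vendor's guidance before changing them.

# PowerCLI sketch (hypothetical host name and example value; assumes a connected session).
$vmhost = Get-VMHost -Name 'esxi-01.example.com'

# Raise the number of scanned LUNs if your environment uses LUN IDs greater than 1023.
Get-AdvancedSetting -Entity $vmhost -Name 'Disk.MaxLUN' |
    Set-AdvancedSetting -Value 4096 -Confirm:$false

# Review, and adjust only if advised by your vendor, the protocol endpoint queue depth.
Get-AdvancedSetting -Entity $vmhost -Name 'Scsi.ScsiVVolPESNRO'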
Suppose that your VM has two virtual disks, and you take two snapshots with memory. Your VM
might occupy up to 10 vVols objects: a config-vVol, a swap-vVol, two data-vVols, four snapshot-
vVols, and two memory snapshot-vVols.
n When appropriate, use vSphere HA or Site Recovery Manager to protect the storage
provider VM.
Troubleshooting vVols
The troubleshooting topics provide solutions to problems that you might encounter when using
vVols.
esxcli storage vvol daemon unbindall: Unbind all virtual volumes from all VASA providers known to the ESXi host.
esxcli storage vvol vasacontext get: Show the VASA context (VC UUID) associated with the host.
esxcli storage vvol vasaprovider list: List all storage (VASA) providers associated with the host.
esxcli storage vvol stats get: Get statistics for all VASA providers (default), or for a specified namespace or entity in the given namespace. Options: -e|--entity=str Enter entity ID. -n|--namespace=str Enter node namespace expression. -r|--raw Enable raw format output.
esxcli storage vvol stats list: List all the statistics nodes (default), or nodes under a specified namespace. Options: -n|--namespace=str Enter node namespace expression.
esxcli storage vvol stats enable: Enable statistics tracking for the complete namespace.
esxcli storage vvol stats disable: Disable statistics tracking for the complete namespace.
esxcli storage vvol stats add: Enable statistics tracking for a specific entity under a specific namespace. Options: -e|--entity=str Enter entity ID. -n|--namespace=str Enter node namespace expression.
esxcli storage vvol stats remove: Remove a specific entity from statistics tracking under the specified namespace. Options: -e|--entity=str Enter entity ID. -n|--namespace=str Enter node namespace expression.
esxcli storage vvol stats reset: Reset the statistics counter for the specified statistics namespace or entity. Options: -e|--entity=str Enter entity ID. -n|--namespace=str Enter node namespace expression.
Problem
The vSphere Client shows the datastore as inaccessible. You cannot use the datastore for virtual
machine provisioning.
Cause
This problem might occur when you fail to configure protocol endpoints for the SCSI-based
storage container that is mapped to the virtual datastore. Like traditional LUNs, SCSI protocol
endpoints need to be configured so that an ESXi host can detect them.
Solution
Before creating virtual datastores for SCSI-based containers, make sure to configure protocol
endpoints on the storage side.
Problem
An OVF template or a VM being migrated from a nonvirtual datastore might include additional
large files, such as ISO disk images, DVD images, and image files. If these additional files cause
the configuration virtual volume to exceed its 4-GB limit, migration or deployment to a virtual
datastore fails.
Cause
The configuration virtual volume, or config-vVol, contains various VM-related files. On traditional
nonvirtual datastores, these files are stored in the VM home directory. Similar to the VM home
directory, the config-vVol typically includes the VM configuration file, virtual disk and snapshot
descriptor files, log files, lock files, and so on.
On virtual datastores, all other large-sized files, such as virtual disks, memory snapshots, swap,
and digest, are stored as separate virtual volumes.
Config-vVols are created as 4-GB virtual volumes. Generic content of the config-vVol usually
consumes only a fraction of this 4-GB allocation, so config-vVols are typically thin-provisioned to
conserve backing space. Any additional large files, such as ISO disk images, DVD images, and
image files, might cause the config-vVol to exceed its 4-GB limit. If such files are included in an
OVF template, deployment of the VM OVF to vVols storage fails. If these files are part of an
existing VM, migration of that VM from a traditional datastore to vVols storage also fails.
Solution
n For OVF deployment. Because you cannot deploy an OVF template that contains excess files
directly to a virtual datastore, first deploy the VM to a nonvirtual datastore. Remove any
excess content from the VM home directory, and migrate the resulting VM to vVols storage.
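After you remove the excess content, the final migration is a Storage vMotion that you can run in the vSphere Client or script, as in this sketch with hypothetical VM and datastore names.

# PowerCLI sketch (hypothetical VM and vVols datastore names; assumes a connected session).
$vm = Get-VM -Name 'Staged-VM'
$ds = Get-Datastore -Name 'vVols-DS01'

# Storage vMotion the VM from the staging datastore to the vVols datastore.
Move-VM -VM $vm -Datastore $ds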
The I/O filters can be offered by VMware or created by third parties through vSphere APIs for I/O
Filtering (VAIO).
VMware offers certain categories of I/O filters. In addition, third-party vendors can create the I/O
filters. Typically, they are distributed as packages that provide an installer to deploy the filter
components on vCenter Server and ESXi host clusters.
After the I/O filters are deployed, vCenter Server configures and registers an I/O filter storage
provider, also called a VASA provider, for each host in the cluster. The storage providers
communicate with vCenter Server and make data services offered by the I/O filter visible in the
VM Storage Policies interface. You can reference these data services when defining common
rules for a VM policy. After you associate virtual disks with this policy, the I/O filters are enabled
on the virtual disks.
Datastore Support
I/O filters can support all datastore types including the following:
n VMFS
n NFS 3
n NFS 4.1
n vVol
n vSAN
n Replication. Replicates all write I/O operations to an external target location, such as another
host or cluster.
n Encryption. Offered by VMware. Provides encryption mechanisms for virtual machines. For
more information, see the vSphere Security documentation.
n Caching. Implements a cache for virtual disk data. The filter can use a local flash storage
device to cache the data and increase the IOPS and hardware utilization rates for the virtual
disk. If you use the caching filter, you might need to configure a Virtual Flash Resource.
n Storage I/O control. Offered by VMware. Throttles the I/O load towards a datastore and
controls the amount of storage I/O that is allocated to virtual machines during periods of I/O
congestion. For more information, see the vSphere Resource Management documentation.
Note You can install several filters from the same category, such as caching, on your ESXi host.
However, you can have only one filter from the same category per virtual disk.
Filter Framework
A combination of user world and VMkernel infrastructure provided by ESXi. With the
framework, you can add filter plug-ins to the I/O path to and from virtual disks. The
infrastructure includes an I/O filter storage provider (VASA provider). The provider integrates
with the Storage Policy Based Management (SPBM) system and exports filter capabilities to
vCenter Server.
The following figure illustrates the components of I/O filtering and the flow of I/O between the
guest OS and the virtual disk.
[Figure: I/O from the guest OS in a virtual machine passes through Filter 1, Filter 2, through Filter N along the I/O path to the virtual disk.]
Each Virtual Machine Executable (VMX) component of a virtual machine contains a Filter
Framework that manages the I/O filter plug-ins attached to the virtual disk. The Filter Framework
invokes filters when the I/O requests move between the guest operating system and the virtual
disk. Also, the filter intercepts any I/O access towards the virtual disk that happens outside of a
running VM.
The filters run sequentially in a specific order. For example, a replication filter executes before a
cache filter. More than one filter can operate on the virtual disk, but only one for each category.
Once all filters for the particular disk verify the I/O request, the request moves to its destination,
either the VM or the virtual disk.
Because the filters run in user space, any filter failures impact only the VM, but do not affect the
ESXi host.
Storage providers for I/O filtering are software components that are offered by vSphere. They
integrate with I/O filters and report data service capabilities that I/O filters support to vCenter
Server.
The capabilities populate the VM Storage Policies interface and can be referenced in a VM
storage policy. You then apply this policy to virtual disks, so that the I/O filters can process I/O
for the disks.
If your caching I/O filter uses local flash devices, you need to configure a virtual flash resource,
also known as a VFFS volume. You configure the resource on your ESXi host before activating the
filter. While processing the virtual machine read I/Os, the filter creates a virtual machine cache
and places it on the VFFS volume.
[Figure: A cache filter in the I/O path between the VM and its virtual disk. The filter creates a virtual machine cache and places it on flash storage devices of the ESXi host.]
To set up a virtual flash resource, you use flash devices that are connected to your host. To
increase the capacity of your virtual flash resource, you can add more flash drives. An individual
flash drive must be exclusively allocated to a virtual flash resource and cannot be shared with
any other vSphere service, such as vSAN or VMFS.
n Use the latest version of ESXi and vCenter Server compatible with I/O filters. Older versions
might not support I/O filters, or provide only partial support.
n Check for any additional requirements that individual partner solutions might have. In specific
cases, your environment might need flash devices, extra physical memory, or network
connectivity and bandwidth. For information, contact your vendor or your VMware
representative.
n Web server to host partner packages for filter installation. The server must remain available
after initial installation. When a new host joins the cluster, the server pushes appropriate I/O
filter components to the host.
Prerequisites
n For information about I/O filters provided by third parties, contact your vendor or your
VMware representative.
Procedure
VMware partners create I/O filters through the vSphere APIs for I/O Filtering (VAIO) developer
program.
The filter packages are distributed as solution bundle ZIP packages that can include I/O filter
daemons, I/O filter libraries, CIM providers, and other associated components.
Typically, to deploy the filters, you run installers provided by vendors. Installation is performed at
the ESXi cluster level. You cannot install the filters on selected hosts directly.
Note If you plan to install I/O filters on the vSphere 7.0 and later cluster, your cluster cannot
include ESXi 6.x hosts. Filters built using the vSphere 6.x VAIO program cannot work on ESXi 7.0
and later hosts because the CIM provider is 32-bit on ESXi 6.x and 64-bit on ESXi 7.0 and later. In
turn, filters built using the vSphere 7.0 and later VAIO program are not supported on ESXi 6.x
hosts.
Prerequisites
Procedure
The installer deploys the appropriate I/O filter extension on vCenter Server and the filter
components on all hosts within a cluster.
A storage provider, also called a VASA provider, is automatically registered for every ESXi
host in the cluster. Successful auto-registration of the I/O filter storage providers triggers an
event at the host level. If the storage providers fail to auto-register, the system raises alarms
on the hosts.
When you install a third-party I/O filter, a storage provider, also called VASA provider, is
automatically registered for every ESXi host in the cluster. Successful auto-registration of the I/O
filter storage providers triggers an event at the host level. If the storage providers fail to auto-
register, the system raises alarms on the hosts.
Procedure
1 Verify that the I/O filter storage providers appear as expected and are active.
When the I/O filter providers are properly registered, capabilities and data services that the
filters offer populate the VM Storage Policies interface.
2 Verify that the I/O filter components are listed on your cluster and ESXi hosts.
Prerequisites
For the caching I/O filters, configure the virtual flash resource on your ESXi host before activating
the filter. See Set Up Virtual Flash Resource.
Procedure
Make sure that the virtual machine policy lists data services provided by the I/O filters.
To activate data services that the I/O filter provides, associate the I/O filter policy with virtual
disks. You can assign the policy when you provision the virtual machine.
What to do next
If you later want to disable the I/O filter for a virtual machine, you can remove the filter rules from
the VM storage policy and re-apply the policy. See Edit or Clone a VM Storage Policy. Or you can
edit the settings of the virtual machine and select a different storage policy that does not include
the filter.
You can assign the I/O filter policy during an initial deployment of a virtual machine. This topic
describes how to assign the policy when you create a new virtual machine. For information about
other deployment methods, see the vSphere Virtual Machine Administration documentation.
Note You cannot change or assign the I/O filter policy when migrating or cloning a virtual
machine.
Prerequisites
Verify that the I/O filter is installed on the ESXi host where the virtual machine runs.
Procedure
1 Start the virtual machine provisioning process and follow the appropriate steps.
2 Assign the same storage policy to all virtual machine files and disks.
a On the Select storage page, select a storage policy from the VM Storage Policy drop-
down menu.
b Select the datastore from the list of compatible datastores and click Next.
The datastore becomes the destination storage resource for the virtual machine
configuration file and all virtual disks. The policy also activates I/O filter services for the
virtual disks.
Use this option to enable I/O filters just for your virtual disks.
a On the Customize hardware page, expand the New hard disk pane.
b From the VM storage policy drop-down menu, select the storage policy to assign to the
virtual disk.
Use this option to store the virtual disk on a datastore other than the datastore where the
VM configuration file resides.
Results
After you create the virtual machine, the Summary tab displays the assigned storage policies and
their compliance status.
What to do next
You can later change the virtual policy assignment. See Change Storage Policy Assignment for
Virtual Machine Files and Disks.
When you work with I/O filters, the following considerations apply:
n vCenter Server uses ESX Agent Manager (EAM) to install and uninstall I/O filters. As an
administrator, never invoke EAM APIs directly for EAM agencies that are created or used by
vCenter Server. All operations related to I/O filters must go through VIM APIs. If you
accidentally modify an EAM agency that was created by vCenter Server, you must revert the
changes. If you accidentally destroy an EAM agency that is used by I/O filters, you must call
Vim.IoFilterManager#uninstallIoFilter to uninstall the affected I/O filters. After
uninstalling, perform a fresh reinstall.
n When a new host joins the cluster that has I/O filters, the filters installed on the cluster are
deployed on the host. vCenter Server registers the I/O filter storage provider for the host.
Any cluster changes become visible in the VM Storage Policies interface of the vSphere
Client.
n When you move a host out of a cluster or remove it from vCenter Server, the I/O filters are
uninstalled from the host. vCenter Server unregisters the I/O filter storage provider.
n If you use a stateless ESXi host, it might lose its I/O filter VIBs during a reboot. vCenter Server
checks the bundles installed on the host after it reboots, and pushes the I/O filter VIBs to the
host if necessary.
Prerequisites
Procedure
1 Uninstall the I/O filter by running the installer that your vendor provides.
During uninstallation, a third-party I/O filter installer automatically places the hosts into
maintenance mode.
If the uninstallation is successful, the filter and any related components are removed from the
hosts.
2 Verify that the I/O filter components are properly uninstalled from your ESXi hosts. Use one
of the following methods:
n View the I/O filters in the vSphere Client. See View I/O Filters and Storage Providers.
When you upgrade an ESXi 6.x host that has custom I/O filter VIBs to version 7.0 and later, all
supported custom VIBs are migrated. However, the legacy I/O filters cannot work on ESXi 7.0
and later. The filters generally include 32-bit CIM providers, while ESXi 7.0 and later requires 64-bit
CIM applications. You need to upgrade the legacy filters to make them compatible.
An upgrade consists of uninstalling the old filter components and replacing them with the new
filter components. To determine whether an installation is an upgrade, vCenter Server checks the
names and versions of existing filters. If the existing filter names match the names of the new
filters but have different versions, the installation is considered an upgrade.
Prerequisites
n Required privileges: Host.Config.Patch.
n Upgrade your hosts to ESXi 7.0 and later. If you use vSphere Lifecycle Manager for the
upgrade, see the Managing Host and Cluster Lifecycle documentation.
Procedure
1 Upgrade the I/O filter by running the installer that your vendor provides.
During the upgrade, a third-party I/O filter installer automatically places the hosts into
maintenance mode.
The installer identifies any existing filter components and removes them before installing the
new filter components.
2 Verify that the I/O filter components are properly upgraded in your ESXi hosts. Use one of
the following methods:
n View the I/O filters in the vSphere Client. See View I/O Filters and Storage Providers.
Results
After the upgrade, the system places the hosts back into operational mode.
n Because I/O filters are datastore-agnostic, all types of datastores, including VMFS, NFS,
vVols, and vSAN, are compatible with I/O filters.
n I/O filters support RDMs in virtual compatibility mode. No support is provided to RDMs in
physical compatibility mode.
n You cannot change or assign the I/O filter policy while migrating or cloning a virtual machine.
You can change the policy after you complete the migration or cloning.
n When you clone or migrate a virtual machine with I/O filter policy from one host to another,
make sure that the destination host has a compatible filter installed. This requirement applies
to migrations initiated by an administrator or by such functionalities as HA or DRS.
n When you convert a template to a virtual machine, and the template is configured with I/O
filter policy, the destination host must have the compatible I/O filter installed.
n If you use vCenter Site Recovery Manager to replicate virtual disks, the resulting disks on the
recovery site do not have the I/O filter policies. You must create the I/O filter policies in the
recovery site and reattach them to the replicated disks.
n You can attach an encryption I/O filter to a new virtual disk when you create a virtual
machine. You cannot attach the encryption filter to an existing virtual disk.
n If your virtual machine has a snapshot tree associated with it, you cannot add, change, or
remove the I/O filter policy for the virtual machine.
If you use Storage vMotion to migrate a virtual machine with I/O filters, a destination datastore
must be connected to hosts with compatible I/O filters installed.
You might need to migrate a virtual machine with I/O filters across different types of datastores,
for example between VMFS and vVols. If you do so, make sure that the VM storage policy
includes rule sets for every type of datastore you are planning to use. For example, if you
migrate your virtual machine between the VMFS and vVols datastores, create a mixed VM
storage policy that includes the following rules:
n Rule Set 1 for the VMFS datastore. Because Storage Policy Based Management does not offer
an explicit VMFS policy, the rule set must include tag-based rules for the VMFS datastore.
n Rule Set 2 for the vVols datastore.
When Storage vMotion migrates the virtual machine, the correct rule set that corresponds to the
target datastore is selected. The I/O filter rules remain unchanged.
If you do not specify rules for datastores and define only Common Rules for the I/O filters, the
system applies default storage policies for the datastores.
If an I/O filter installation fails on a host, the system generates events that report the failure. In
addition, an alarm on the host shows the reason for the failure. Examples of failures include the
following:
n The VIB requires the host to be in maintenance mode for an upgrade or uninstallation.
n The VIB requires the host to reboot after the installation or uninstallation.
n Attempts to put the host in maintenance mode fail because the virtual machine cannot be
evacuated from the host.
vCenter Server can resolve some failures. You might have to intervene for other failures. For
example, you might need to edit the VIB URL, manually evacuate or power off virtual machines,
or manually install or uninstall VIBs.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Options for the install command allow you to perform a dry run, specify a specific VIB,
bypass acceptance-level verification, and so on. Do not bypass verification on production
systems. See the ESXCLI Reference documentation.
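For example, a minimal sketch of a manual installation with ESXCLI might look like the following. The depot path and bundle name are hypothetical placeholders; substitute the offline bundle that your vendor provides.
# Preview the installation without applying any changes.
esxcli software vib install --depot /vmfs/volumes/datastore1/partner-iofilter-bundle.zip --dry-run
# Install the I/O filter components from the partner depot.
esxcli software vib install --depot /vmfs/volumes/datastore1/partner-iofilter-bundle.zip
# Confirm that the filter VIBs are now present on the host.
esxcli software vib list | grep -i iofilter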
Block storage devices (Fibre Channel and iSCSI) and NAS devices support hardware
acceleration.
For additional details, see the VMware knowledge base article at http://kb.vmware.com/kb/
1021976.
n VMFS clustered locking and metadata operations for virtual machine files
Block storage devices must support the T10 SCSI standard or block storage plug-ins for array
integration (VAAI). NAS devices must support NAS plug-ins for array integration.
Note If your SAN or NAS storage fabric uses an intermediate appliance in front of a storage
system that supports hardware acceleration, the intermediate appliance must also support
hardware acceleration and be properly certified. The intermediate appliance might be a storage
virtualization appliance, I/O acceleration appliance, encryption appliance, and so on.
The status values are Unknown, Supported, and Not Supported. The initial value is Unknown.
For block devices, the status changes to Supported after the host successfully performs the
offload operation. If the offload operation fails, the status changes to Not Supported. The status
remains Unknown if the device provides partial hardware acceleration support.
With NAS, the status becomes Supported when the storage can perform at least one hardware
offload operation.
When storage devices do not support or provide partial support for the host operations, your
host reverts to its native methods to perform unsupported operations.
n Full copy, also called clone blocks or copy offload. Enables the storage arrays to make full
copies of data within the array without having the host read and write the data. This
operation reduces the time and network load when cloning virtual machines, provisioning
from a template, or migrating with vMotion.
n Block zeroing, also called write same. Enables storage arrays to zero out a large number of
blocks to provide newly allocated storage, free of previously written data. This operation
reduces the time and network load when creating virtual machines and formatting virtual
disks.
n Hardware assisted locking, also called atomic test and set (ATS). Supports discrete virtual
machine locking without use of SCSI reservations. This operation allows disk locking per
sector, instead of the entire LUN as with SCSI reservations.
Check with your vendor for the hardware acceleration support. Certain storage arrays require
that you activate the support on the storage side.
On your host, the hardware acceleration is enabled by default. If your storage does not support
the hardware acceleration, you can disable it.
In addition to hardware acceleration support, ESXi includes support for array thin provisioning.
For information, see ESXi and Array Thin Provisioning.
As with any advanced settings, before you disable the hardware acceleration, consult with the
VMware support team.
Procedure
n VMFS3.HardwareAcceleratedLocking
n DataMover.HardwareAcceleratedMove
n DataMover.HardwareAcceleratedInit
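As an illustration, the following ESXCLI commands show one way to view and change these settings from the ESXi Shell. Setting the integer value to 0 disables the corresponding primitive; setting it back to 1 re-enables it.
# Review the current value of each hardware acceleration option.
esxcli system settings advanced list --option /VMFS3/HardwareAcceleratedLocking
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedInit
# Disable hardware-accelerated data movement (use --int-value 1 to re-enable it).
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove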
In the vSphere 5.x and later releases, these extensions are implemented as the T10 SCSI
commands. As a result, with the devices that support the T10 SCSI standard, your ESXi host can
communicate directly and does not require the VAAI plug-ins.
If the device does not support T10 SCSI or provides partial support, ESXi reverts to using the
VAAI plug-ins, installed on your host. The host can also use a combination of the T10 SCSI
commands and plug-ins. The VAAI plug-ins are vendor-specific and can be either VMware or
partner developed. To manage the VAAI capable device, your host attaches the VAAI filter and
vendor-specific VAAI plug-in to the device.
For information about whether your storage requires VAAI plug-ins or supports hardware
acceleration through T10 SCSI commands, see the VMware Compatibility Guide or contact your
storage vendor.
You can use several esxcli commands to query storage devices for the hardware acceleration
support information. For the devices that require the VAAI plug-ins, the claim rule commands are
also available. For information about esxcli commands, see Getting Started with ESXCLI.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
The output shows the hardware acceleration, or VAAI, status that can be unknown,
supported, or unsupported.
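The listing comes from a standard ESXCLI device query; as a sketch, with a placeholder device identifier:
# Display device details; the VAAI Status field reports the hardware acceleration status.
esxcli storage core device list -d naa.XXXXXXXXXXXXXXXX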
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
u Run the esxcli storage core device vaai status get -d=device_ID command.
If a VAAI plug-in manages the device, the output shows the name of the plug-in attached to
the device. The output also shows the support status for each T10 SCSI based primitive, if
available. Output appears in the following example:
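For example, for a device claimed by a VAAI plug-in, the output might resemble the following sketch; the device identifier, plug-in name, and status values are illustrative only.
naa.XXXXXXXXXXXXXXXX
   VAAI Plugin Name: VMW_VAAIP_T10
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported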
You can use the esxcli commands to list the hardware acceleration filter and plug-in claim rules
for a device.
Procedure
In this example, the filter claim rules specify devices that the VAAI_FILTER filter claims.
In this example, the VAAI claim rules specify devices that the VAAI plug-in claims.
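As a sketch, the claim rule listings referenced in these examples come from the following commands, run for the Filter claim rule class and the VAAI claim rule class.
# List the hardware acceleration filter claim rules.
esxcli storage core claimrule list --claimrule-class=Filter
# List the VAAI plug-in claim rules.
esxcli storage core claimrule list --claimrule-class=VAAI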
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
1 Define a new claim rule for the VAAI filter by running the
esxcli storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER
command.
2 Define a new claim rule for the VAAI plug-in by running the
esxcli storage core claimrule add --claimrule-class=VAAI command.
Note Only the filter-class rules must be run. When the VAAI filter claims a device, it
automatically finds the proper VAAI plug-in to attach.
You can use the XCOPY mechanism with all storage arrays that support the SCSI T10 based
VMW_VAAIP_T10 plug-in developed by VMware. To enable the XCOPY mechanism, create a
claim rule of the VAAI class.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
For information about the options that the command takes, see Add Multipathing Claim Rules.
Option Description
-s|--xcopy-use-multi-segs Use multiple segments for XCOPY commands. Valid only when --xcopy-use-array-values is specified.
-m|--xcopy-max-transfer-size Maximum transfer size in MB for the XCOPY commands when you use a transfer size different than the array reported. Valid only when --xcopy-use-array-values is specified.
-k|--xcopy-max-transfer-size-kib Maximum transfer size in KiB for the XCOPY commands when you use a transfer size different than the array reported. Valid only if --xcopy-use-array-values is specified.
n # esxcli storage core claimrule add -r 914 -t vendor -V XtremIO -M XtremApp -P VMW_VAAIP_T10 -c VAAI -a -s -k 64
n # esxcli storage core claimrule add -r 65430 -t vendor -V EMC -M SYMMETRIX -P VMW_VAAIP_SYMM -c VAAI -a -s -m 200
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
The VAAI NAS framework supports both versions of NFS storage, NFS 3 and NFS 4.1.
The VAAI NAS uses a set of storage primitives to offload storage operations from the host to the
array. The following list shows the supported NAS operations:
Full File Clone
Supports the ability of the NAS device to clone virtual disk files. This operation is similar to the
VMFS block cloning, except that NAS devices clone entire files instead of file segments. Tasks
that benefit from the full file clone operation include VM cloning, Storage vMotion, and
deployment of VMs from templates.
When the ESXi host copies data with VAAI NAS, it does not need to read the data from the
NAS and write back the data to the NAS. The host simply sends the copy command
offloading it to the NAS. The copy process is done in the NAS, which reduces the load on the
host.
Native Snapshot Support
This operation, also called array-based or native snapshots, offloads creation of virtual
machine snapshots and linked clones to the array.
Reserve Space
Supports the ability of storage arrays to allocate space for a virtual disk file in the thick format.
Typically, when you create a virtual disk on an NFS datastore, the NAS server determines the
allocation policy. The default allocation policy on most NAS servers is thin and does not
guarantee backing storage to the file. However, the reserve space operation can instruct the
NAS device to use vendor-specific mechanisms to reserve space for a virtual disk. As a result,
you can create thick virtual disks on the NFS datastore if the backing NAS server supports
the reserve space operation.
Extended Statistics
Supports visibility to space use on NAS devices. The operation enables you to query space
utilization details for virtual disks on NFS datastores. The details include the size of a virtual
disk and the space consumption of the virtual disk. This functionality is useful for thin
provisioning.
With NAS storage devices, the hardware acceleration integration is implemented through
vendor-specific NAS plug-ins. These plug-ins are typically created by vendors and are distributed
as vendor packages. No claim rules are required for the NAS plug-ins to function.
Several tools for installing and updating NAS plug-ins are available. They include the esxcli
commands and vSphere Lifecycle Manager. For more information, see the VMware ESXi Upgrade
and Managing Host and Cluster Lifecycle documentation. For installation and update
recommendations, see the Knowledge Base article.
Note NAS storage vendors might provide additional settings that can affect the performance
and operation of VAAI. Follow the vendor's recommendations and configure the appropriate
settings on both the NAS storage array and ESXi. See your storage vendor documentation for
more information.
By default, all newly created VMs support traditional ESXi snapshot technology. To use the NFS
native snapshot technology, enable it for the VM.
Prerequisites
n Verify that the NAS array supports the fast file clone operation with the VAAI NAS program.
n On your ESXi host, install vendor-specific NAS plug-in that supports the fast file cloning with
VAAI.
n Follow the recommendations of your NAS storage vendor to configure any required settings
on both the NAS array and ESXi. See your storage vendor documentation for more
information.
Procedure
1 In the vSphere Client, right-click the virtual machine and select Edit Settings.
If the parameter exists, make sure that its value is set to True. If the parameter is not present,
add it and set its value to True.
Name Value
snapshot.alwaysAllowNative True
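If you prefer to set the parameter from a command line, a tool such as govc, which is not part of this guide, can add the advanced setting. The sketch below assumes govc is installed and authenticated to your vCenter Server, and the VM name is a placeholder.
# Add or update the advanced parameter on the virtual machine (hypothetical VM name).
govc vm.change -vm my-vm -e snapshot.alwaysAllowNative=TRUE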
For any primitive that the array does not implement, the array returns an error. The error triggers
the ESXi host to attempt the operation using its native methods.
The VMFS data mover does not leverage hardware offloads and instead uses software data
movement when one of the following occurs:
n The source and destination VMFS datastores have different block sizes.
n The source file type is RDM and the destination file type is non-RDM (regular file).
n The source VMDK type is eagerzeroedthick and the destination VMDK type is thin.
n The logical address and transfer length in the requested operation are not aligned to the
minimum alignment required by the storage device. All datastores created with the vSphere
Client are aligned automatically.
n The VMFS has multiple LUNs or extents, and they are on different arrays.
Hardware cloning between arrays, even within the same VMFS datastore, does not work.
Thick provisioning
This is the traditional model of storage provisioning. With thick provisioning, a large amount
of storage space is provided in advance in anticipation of future storage needs. However, the
space might remain unused, causing underutilization of storage capacity.
Thin provisioning
This method contrasts with thick provisioning and helps you eliminate storage underutilization
problems by allocating storage space in a flexible, on-demand manner. With ESXi, you can
use two models of thin provisioning, array-level and virtual disk-level.
Thin provisioning allows you to report more virtual storage space than there is real physical
capacity. This discrepancy can lead to storage over-subscription, also called over-
provisioning. When you use thin provisioning, monitor actual storage usage to avoid
conditions when you run out of physical storage space.
By default, ESXi offers a traditional storage provisioning method for virtual machines. With this
method, you first estimate how much storage the virtual machine might need for its entire life
cycle. You then provision a fixed amount of storage space to the VM virtual disk in advance, for
example, 40 GB. The entire provisioned space is committed to the virtual disk. A virtual disk that
immediately occupies the entire provisioned space is a thick disk.
ESXi supports thin provisioning for virtual disks. With the disk-level thin provisioning feature, you
can create virtual disks in a thin format. For a thin virtual disk, ESXi provisions the entire space
required for the disk’s current and future activities, for example 40 GB. However, the thin disk
uses only as much storage space as the disk needs for its initial operations. In this example, the
thin-provisioned disk occupies only 20 GB of storage. If the disk requires more space, it can
expand into its entire 40 GB of provisioned space.
[Figure: Thick and thin virtual disks on an 80 GB datastore. VM 1 uses a thick disk with 40 GB of provisioned and used capacity. VM 2 uses a thin disk with 40 GB of provisioned capacity but only 20 GB of used capacity.]
NFS datastores with Hardware Acceleration and VMFS datastores support the following disk
provisioning policies. On NFS datastores that do not support Hardware Acceleration, only thin
format is available.
You can use Storage vMotion or cross-host Storage vMotion to transform virtual disks from one
format to another.
Thick Provision Lazy Zeroed
Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated
when the disk is created. Data remaining on the physical device is not erased during creation,
but is zeroed out on demand later on first write from the virtual machine. Virtual machines do
not read stale data from the physical device.
Thick Provision Eager Zeroed
A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space
required for the virtual disk is allocated at creation time. In contrast to the thick provision lazy
zeroed format, the data remaining on the physical device is zeroed out when the virtual disk
is created. It might take longer to create virtual disks in this format than to create other types
of disks. Increasing the size of an Eager Zeroed Thick virtual disk causes a significant stun
time for the virtual machine.
Thin Provision
Use this format to save storage space. For the thin disk, you provision as much datastore
space as the disk would require based on the value that you enter for the virtual disk size.
However, the thin disk starts small and at first, uses only as much datastore space as the disk
needs for its initial operations. If the thin disk needs more space later, it can grow to its
maximum capacity and occupy the entire datastore space provisioned to it.
Thin provisioning is the fastest method to create a virtual disk because it creates a disk with
just the header information. It does not allocate or zero out storage blocks. Storage blocks
are allocated and zeroed out when they are first accessed.
Note If a virtual disk supports clustering solutions such as Fault Tolerance, do not make the
disk thin.
This procedure assumes that you are creating a new virtual machine. For information, see the
vSphere Virtual Machine Administration documentation.
Procedure
a Right-click any inventory object that is a valid parent object of a virtual machine, such as a
data center, folder, cluster, resource pool, or host, and select New Virtual Machine.
b Click the New Hard disk triangle to expand the hard disk options.
With a thin virtual disk, the disk size value shows how much space is provisioned and
guaranteed to the disk. At the beginning, the virtual disk might not use the entire
provisioned space. The actual storage use value can be less than the size of the virtual
disk.
Results
What to do next
If you created a virtual disk in the thin format, you can later inflate it to its full size.
Procedure
3 Review the storage use information in the upper right area of the Summary tab.
Results
Storage Usage shows how much datastore space is occupied by virtual machine files, including
configuration and log files, snapshots, virtual disks, and so on. When the virtual machine is
running, the used storage space also includes swap files.
For virtual machines with thin disks, the actual storage use value might be less than the size of
the virtual disk.
Procedure
3 Click the Hard Disk triangle to expand the hard disk options.
The Type text box shows the format of your virtual disk.
What to do next
If your virtual disk is in the thin format, you can inflate it to its full size.
You use the datastore browser to inflate the thin virtual disk.
Prerequisites
n Make sure that the datastore where the virtual machine resides has enough space.
n Remove snapshots.
Procedure
1 In the vSphere Client, navigate to the folder of the virtual disk you want to inflate.
2 Expand the virtual machine folder and browse to the virtual disk file that you want to convert.
The file has the .vmdk extension and is marked with the virtual disk icon.
Note The option might not be available if the virtual disk is thick or when the virtual machine
is running.
Results
The inflated virtual disk occupies the entire datastore space originally provisioned to it.
Over-subscription can be possible because usually not all virtual machines with thin disks need
the entire provisioned datastore space simultaneously. However, if you want to avoid over-
subscribing the datastore, you can set up an alarm that notifies you when the provisioned space
reaches a certain threshold.
For information on setting alarms, see the vCenter Server and Host Management documentation.
If your virtual machines require more space, the datastore space is allocated on a first come first
served basis. When the datastore runs out of space, you can add more physical storage and
increase the datastore.
The ESXi host integrates with block-based storage and performs these tasks:
n The host can recognize underlying thin-provisioned LUNs and monitor their space use to
avoid running out of physical space. The LUN space might change if, for example, your VMFS
datastore expands or if you use Storage vMotion to migrate virtual machines to the thin-
provisioned LUN. The host warns you about breaches in physical LUN space and about out-
of-space conditions.
n The host can run the automatic T10 unmap command from VMFS6 and VM guest operating
systems to reclaim unused space from the array. VMFS5 supports a manual space
reclamation method.
Note ESXi does not support enabling and disabling of thin provisioning on a storage device.
Requirements
To use the thin provisioning reporting and space reclamation features, follow these requirements:
Unmap command originating from VMFS: Manual for VMFS5, using the esxcli storage vmfs
unmap command. Automatic for VMFS6.
Unmap command originating from guest OS: Yes for VMFS5, with limited support. Yes for VMFS6.
n Use storage systems that support T10-based vSphere Storage APIs - Array Integration
(VAAI), including thin provisioning and space reclamation. For information, contact your
storage provider and check the VMware Compatibility Guide.
The following sample flow demonstrates how the ESXi host and the storage array interact to
generate breach of space and out-of-space warnings for a thin-provisioned LUN. The same
mechanism applies when you use Storage vMotion to migrate virtual machines to the thin-
provisioned LUN.
1 Using storage-specific tools, your storage administrator provisions a thin LUN and sets a soft
threshold limit that, when reached, triggers an alert. This step is vendor-specific.
2 Using the vSphere Client, you create a VMFS datastore on the thin-provisioned LUN. The
datastore spans the entire logical size that the LUN reports.
3 As the space used by the datastore increases and reaches the set soft threshold, the
following actions take place:
You can contact the storage administrator to request more physical space. Alternatively,
you can use Storage vMotion to evacuate your virtual machines before the LUN runs out
of capacity.
4 If no space is left to allocate to the thin-provisioned LUN, the following actions take place:
Caution In certain cases, when a LUN becomes full, it might go offline or get unmapped
from the host.
You can resolve the permanent out-of-space condition by requesting more physical
space from the storage administrator.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
Results
The following thin provisioning status indicates that the storage device is thin-provisioned.
Note Some storage systems present all devices as thin-provisioned no matter whether the
devices are thin or thick. Their thin provisioning status is always yes. For details, check with your
storage vendor.
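As a sketch, the status comes from the standard device listing; the device identifier below is a placeholder, and the relevant output field is Thin Provisioning Status.
# Check whether the device reports itself as thin-provisioned.
esxcli storage core device list -d naa.XXXXXXXXXXXXXXXX | grep -i "Thin Provisioning Status"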
You free storage space inside the VMFS datastore when you delete or migrate the VM,
consolidate a snapshot, and so on. Inside the virtual machine, storage space is freed when you
delete files on the thin virtual disk. These operations leave blocks of unused space on the storage
array. However, when the array is not aware that the data was deleted from the blocks, the
blocks remain allocated by the array until the datastore releases them. VMFS uses the SCSI
unmap command to indicate to the array that the storage blocks contain deleted data, so that
the array can unallocate these blocks.
[Figure: The ESXi host sends the unmap command from the VMFS datastore that hosts the VMs to the storage array, so that the array can free the corresponding physical disk blocks.]
The command can also originate directly from the guest operating system. Both VMFS5 and
VMFS6 datastores can support the unmap command that originates from the guest
operating system. However, the level of support is limited on VMFS5.
Depending on the type of your VMFS datastore, you use different methods to configure space
reclamation for the datastore and your virtual machines.
The operation helps the storage array to reclaim unused free space. Unmapped space can then
be used for other storage allocation requests and needs.
n Unmap requests are sent at a constant rate, which helps to avoid any instant load on the
backing array.
For VMFS6 datastores, you can configure the following space reclamation parameters.
Granularity defines the minimum size of a released space sector that underlying storage can
reclaim. Storage cannot reclaim those sectors that are smaller in size than the specified
granularity.
For VMFS6, reclamation granularity equals the block size. When you specify the block size as
1 MB, the granularity is also 1 MB. Storage sectors of the size smaller than 1 MB are not
reclaimed.
Note Certain storage arrays recommend an optimal unmap granularity. ESXi supports
automatic unmap processing on arrays with the recommended unmap granularity of 1 MB or
greater, for example, 16 MB. On arrays with an optimal granularity of 1 MB or less, the
unmap operation is supported if the granularity is a factor of 1 MB. For example, 1 MB is
divisible by 512 bytes, 4 KB, 64 KB, and so on.
The method can be either priority or fixed. When the method you use is priority, you
configure the priority rate. For the fixed method, you must indicate the bandwidth in MB per
second.
This parameter defines the rate at which the space reclamation operation is performed when
you use the priority reclamation method. Typically, VMFS6 can send the unmap commands
either in bursts or sporadically depending on the workload and configuration. For VMFS6,
you can specify one of the following options.
Space Reclamation Priority Description Configuration
None Disables the unmap operations for the datastore. vSphere Client, esxcli command
Low (default) Sends the unmap command at a less frequent rate, 25–50 MB per second. vSphere Client, esxcli command
Medium Sends the command at a rate twice as fast as the low rate, 50–100 MB per second. esxcli command
High Sends the command at a rate three times as fast as the low rate, over 100 MB per second. esxcli command
Note The ESXi host of version 6.5 does not recognize the medium and high priority rates. If
you migrate the VMs to the host version 6.5, the rate defaults to low.
After you enable space reclamation, the VMFS6 datastore can start releasing the blocks of
unused space only when it has at least one open file. This condition can be fulfilled when, for
example, you power on one of the VMs on the datastore.
At the VMFS6 datastore creation time, the only available method for the space reclamation is
priority. To use the fixed method, edit the space reclamation settings of the existing datastore.
Procedure
1 In the vSphere Client object navigator, browse to a host, a cluster, or a data center.
The parameters define granularity and the priority rate at which space reclamation
operations are performed. You can also use this page to disable space reclamation for the
datastore.
Option Description
Block size The block size on a VMFS datastore defines the maximum file size and the amount of space the file occupies. VMFS6 supports the block size of 1 MB.
Space reclamation granularity Specify granularity for the unmap operation. Unmap granularity equals the block size, which is 1 MB. Storage sectors smaller than 1 MB are not reclaimed.
Note In the vSphere Client, the only available settings for the space reclamation priority are
Low and None. To change the settings to Medium or High, use the esxcli command. See Use
the ESXCLI Command to Change Space Reclamation Parameters.
Results
After you enable space reclamation, the VMFS6 datastore can start releasing the blocks of
unused space only when it has at least one open file. This condition can be fulfilled when, for
example, you power on one of the VMs on the datastore.
Procedure
Option Description
Enable automatic space reclamation Use the fixed method for space reclamation. Specify reclamation bandwidth
at fixed rate in MB per second.
Results
The modified value for the space reclamation priority appears on the General page for the
datastore.
Procedure
Option Description
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-method fixed -b 100
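As a companion sketch, the following commands show how you might switch back to the priority method and verify the current settings; the datastore name is a placeholder.
# Return to the priority reclamation method with the default low rate.
esxcli storage vmfs reclaim config set --volume-label datastore_name --reclaim-method priority --reclaim-priority low
# Review the current space reclamation configuration for the datastore.
esxcli storage vmfs reclaim config get --volume-label datastore_name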
Procedure
a Under Properties, expand File system and review the value for the space reclamation
granularity.
b Under Space Reclamation, review the setting for the space reclamation priority.
If you configured any values through the esxcli command, for example, Medium or High for
the space reclamation priority, these values also appear in the vSphere Client.
Prerequisites
Install ESXCLI. See Getting Started with ESXCLI. For troubleshooting, run esxcli commands in the
ESXi Shell.
Procedure
1 To reclaim unused storage blocks on the thin-provisioned device, run the following command:
Option Description
-l|--volume-label=volume_label The label of the VMFS volume to unmap. A mandatory argument. If you
specify this argument, do not use -u|--volume-uuid=volume_uuid.
-u|--volume-uuid=volume_uuid The UUID of the VMFS volume to unmap. A mandatory argument. If you
specify this argument, do not use -l|--volume-label=volume_label.
2 To verify whether the unmap process has finished, search for unmap in the vmkernel.log file.
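As a combined sketch of both steps, assuming a hypothetical datastore named Datastore1:
# Reclaim unused blocks on the thin-provisioned device backing the datastore.
esxcli storage vmfs unmap --volume-label=Datastore1
# Confirm that the unmap operation ran by searching the VMkernel log.
grep -i unmap /var/log/vmkernel.log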
Inside a virtual machine, storage space is freed when, for example, you delete files on the thin
virtual disk. The guest operating system notifies VMFS about freed space by sending the unmap
command. The unmap command sent from the guest operating system releases space within the
VMFS datastore. The command then proceeds to the array, so that the array can reclaim the
freed blocks of space.
Generally, the guest operating systems send the unmap commands based on the unmap
granularity they advertise. For details, see documentation provided with your guest operating
system.
The following considerations apply when you use space reclamation with VMFS6:
n VMFS6 processes the unmap request from the guest OS only when the space to reclaim
equals 1 MB or is a multiple of 1 MB. If the space is less than 1 MB or is not aligned to 1 MB, the
unmap requests are not processed.
n For VMs with snapshots in the default SEsparse format, VMFS6 supports the automatic space
reclamation only on ESXi hosts version 6.7 or later.
Space reclamation affects only the top snapshot and works when the VM is powered on.
However, for a limited number of the guest operating systems, VMFS5 supports the automatic
space reclamation requests.
To send the unmap requests from the guest operating system to the array, the virtual machine
must meet the following prerequisites:
n The guest operating system must be able to identify the virtual disk as thin.
With Cloud Native Storage, you can create persistent container volumes independent of virtual
machine and container life cycle. vSphere storage backs the volumes, and you can set a storage
policy directly on the volumes. After you create the volumes, you can review them and their
backing storage objects in the vSphere Client, and monitor their storage policy compliance.
vSphere Cloud Native Storage supports persistent volumes in the following Kubernetes
distributions:
n Generic Kubernetes, also called vanilla, that you install from the official repositories. This
vSphere Storage documentation covers only generic Kubernetes.
n vSphere with Tanzu. For more information, see the vSphere with Tanzu Configuration and
Management documentation.
[Figure: Cloud Native Storage overview. A pod in a Kubernetes cluster uses a persistent volume that vSphere storage backs with a VMDK or a vSAN file share, governed by a storage policy.]
Kubernetes Cluster
In the Cloud Native Storage environment, you can deploy a generic Kubernetes cluster in a
cluster of virtual machines. On top of the Kubernetes cluster, you deploy your containerized
applications. Applications can be stateful and stateless.
Note For information on supervisor clusters and TKG clusters that you can run in vSphere
with Tanzu, see the vSphere with Tanzu Configuration and Management documentation.
Pod
A pod is a group of one or more containerized applications that share such resources as
storage and network. Containers inside a pod are started, stopped, and replicated as a
group.
Container Orchestrator
Stateful Application
PersistentVolume
n Virtual disks support volumes that are mounted as ReadWriteOnce. These volumes can
be used only by a single Pod in Kubernetes.
Starting with vSphere 7.0, you can use the vSphere encryption technology to protect
FCD virtual disks that back persistent volumes. For more information, see Use Encryption
with Cloud Native Storage.
n vSAN file shares support ReadWriteMany volumes that are mounted by many nodes.
These volumes can be shared between multiple Pods or applications running across
Kubernetes nodes or across Kubernetes clusters. For information about possible
configurations with file shares, see Using vSAN File Service to Provision File Volumes.
StorageClass
Kubernetes uses a StorageClass to define different tiers of storage and to describe different
types of requirements for storage backing the PersistentVolume. In the vSphere environment,
a storage class can be linked to a storage policy. As a vSphere administrator, you create
storage policies that describe different storage requirements. The VM storage policies can be
used as a part of StorageClass definition for dynamic volume provisioning.
The following sample YAML file references the Gold storage policy that you created earlier
using the vSphere Client. The resulting persistent volume VMDK is placed on a compatible
datastore that satisfies the Gold storage policy requirements.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Gold"
PersistentVolumeClaim
A PersistentVolumeClaim is a request for storage that specifies the capacity, the access mode,
and the storage class to use, as in the following sample YAML file.
Once the claim is created, the PersistentVolume is automatically bound to the claim. Pods use
the claim to mount the PersistentVolume and access storage.
When you delete this claim, the corresponding PersistentVolume object and the underlying
storage are deleted.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-vmdk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gold-sc
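As a usage sketch, assuming the two manifests above are saved as gold-sc.yaml and pvc.yaml and that kubectl is configured for your cluster, you can create the objects and confirm that the claim binds to a dynamically provisioned volume.
# Create the StorageClass and the PersistentVolumeClaim.
kubectl apply -f gold-sc.yaml
kubectl apply -f pvc.yaml
# Verify that the claim is bound and inspect the backing persistent volume.
kubectl get pvc persistent-vmdk
kubectl get pv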
[Figure: Cloud Native Storage components. Kubernetes clusters and pods use the CSI driver, which communicates with the CNS control plane in vCenter Server. The CNS control plane applies SPBM policies to provision volumes on vSAN, VMFS, NFS, or vVols storage.]
Kubernetes Cluster
Note For information on supervisor clusters and TKG clusters that you can run in vSphere
with Tanzu, see the vSphere with Tanzu Configuration and Management documentation.
The vSphere CSI is an out-of-tree plug-in that exposes vSphere storage to containerized
workloads on container orchestrators, such as Kubernetes. The plug-in enables vSAN and
other types of vSphere storage.
The vSphere CSI communicates with the CNS control plane on vCenter Server for all storage
provisioning operations. The vSphere CSI supports the following functionalities:
n Kubernetes zones.
On Kubernetes, the CSI driver is used with the out-of-tree vSphere Cloud Provider Interface
(CPI). The CSI driver is shipped as a container image and must be deployed by the cluster
administrator. For information, see the Driver Deployment section of the Kubernetes vSphere
CSI Driver documentation on GitHub.
For information about the CSI variations used in supervisor clusters and TKG clusters that you
can run in vSphere with Tanzu, see the vSphere with Tanzu Configuration and Management
documentation.
The CNS server component, or the CNS control plane, resides in vCenter Server. It is an
extension of vCenter Server management that implements the provisioning and life cycle
operations for the container volumes.
When provisioning container volumes, it interacts with vCenter Server to create storage
objects that back the volumes. The Storage Policy Based Management functionality
guarantees a required level of service to the volumes.
The CNS also performs query operations that allow you to manage and monitor container
volumes and their backing storage objects through vCenter Server.
First Class Disk (FCD)
Also called Improved Virtual Disk. It is a named virtual disk that is not associated with a VM. These
disks reside on a vSAN, VMFS, NFS, or vVols datastore and back ReadWriteOnce container
volumes.
The FCD technology allows you to perform life cycle operations related to persistent volumes
outside of the VM or pod life cycle. If the VM is a Kubernetes node that runs multiple
container based applications and uses persistent volumes and virtual disks for many
applications, CNS facilitates life cycle operations at the container and persistent volume
granularity.
vSAN File Service
It is a vSAN layer that provides file shares. Currently, it supports NFSv3 and NFSv4.1 file
shares. Cloud Native Storage uses vSAN file shares for persistent volumes of the
ReadWriteMany type. A single ReadWriteMany volume can be mounted by multiple nodes.
The volume can be shared between multiple pods or applications running across Kubernetes
nodes or across Kubernetes clusters.
Storage Policy Based Management is a vCenter Server service that supports provisioning of
persistent volumes according to specified storage requirements. After provisioning, the
service monitors compliance of the volume with the required policy characteristics.
When a Kubernetes pod requests an RWM volume, Cloud Native Storage communicates with
vSAN file service to create an NFS-based file share of the requested size and storage class.
Cloud Native Storage then mounts the RWM volume into the Kubernetes worker node where the
pod runs. If multiple nodes are requesting access to the RWM volume, Cloud Native Storage
determines that the RWM volume already exists for that particular deployment and mounts the
existing volume into the nodes.
To be able to support RWM volumes, your environment must include the following items.
n vSAN file service enabled. For information, see the Administering VMware vSAN
documentation.
n Compatible version of CSI. For information, see the Kubernetes vSphere CSI Driver
documentation on GitHub.
[Figure: Two examples of ReadWriteMany volumes backed by a vSAN file share. In the first, pods App-1 and App-2 in one Kubernetes cluster share a single persistent volume claim and persistent volume backed by file share File-1 on the vSAN datastore. In the second, pods in two different Kubernetes clusters use separate claims and persistent volumes that are backed by the same File-1 file share.]
n Deploy and manage the vSphere CSI. For information, see the Driver Deployment section of
the Kubernetes vSphere CSI Driver documentation on GitHub.
n Provision persistent volumes. For information about block volumes, see vSphere CSI Driver -
Block Volume. For information about file volumes, see vSphere CSI Driver - File Volume.
n Perform life cycle operations for the VM storage policies. For example, create a VM storage
policy to be used for a Kubernetes storage class and communicate its name to the
Kubernetes user. See Create a Storage Policy for Kubernetes.
n Use the Cloud Native Storage section of the vSphere Client to monitor health and storage
policy compliance of the container volumes across the Kubernetes clusters. See Monitor
Container Volumes Across Kubernetes Clusters.
n A Kubernetes cluster deployed on the virtual machines. For details about deploying the
vSphere CSI plug-in and running the Kubernetes cluster on vSphere, see the Driver
Deployment documentation in GitHub.
n Use the VMware Paravirtual SCSI controller for the primary disk on the Node VM.
n All virtual machines must have access to a shared datastore, such as vSAN.
n Set the disk.EnableUUID parameter on each node VM. See Configure Kubernetes Cluster
Virtual Machines.
n To avoid errors and unpredictable behavior, do not take snapshots of CNS node VMs.
n Use a compatible version of CSI. For information, see the Kubernetes vSphere CSI Driver
documentation on GitHub.
n Enable and configure the vSAN file service. You must configure the necessary file service
domains, IP pools, network, and so on. For information, see the Administering VMware vSAN
documentation.
n Follow specific guidelines to configure network access from a guest OS in the Kubernetes
node to a vSAN file share. See Configuring Network Access to vSAN File Share.
Setting Up Network
When configuring the networks, follow these requirements:
n On every Kubernetes node, use a dedicated vNIC for the vSAN file share traffic.
n Make sure that the traffic through the dedicated vNIC is routable to one or many vSAN file
service networks.
n Make sure that only the guest OS on each Kubernetes node can directly access the vSAN file
share through the file share IP address. The pods in the node cannot ping or access the vSAN
file share by its IP address.
CNS CSI driver ensures that only those pods that are configured to use the CNS file volume
can access the vSAN file share by creating a mount point in the guest OS.
n Avoid creating an IP address clash between the node VMs and vSAN file shares.
The following illustration is an example of the CNS network configuration with the vSAN file share
service.
[Figure: Example CNS network configuration with the vSAN file share service. Each Kubernetes node VM uses the pod or node network for cluster traffic and a dedicated file share network for CNS file volume traffic. Switches and routers connect the dedicated network to the vSAN FS network, where the vSAN file share appliance VM runs, while the vSphere management network remains separate.]
n The configuration uses separate networks for different items in the CNS environment.
Network Description
Pod or node network Kubernetes uses this network for the node to node or pod to pod communication.
Dedicated file share network CNS file volume data traffic uses this network.
vSAN file share network Network where the vSAN file share is enabled and where file shares are available.
n Every Kubernetes node has a dedicated vNIC for the file traffic. This vNIC is separate from
the vNIC used for the node to node or pod to pod communication.
n Only those applications that are configured to use the CNS file share have access to vSAN file
shares through the mount point in the node guest OS. For example, in the illustration, the
following takes place:
n App-1 and App-2 pods are configured to use a file volume, and have access to the file
share through the mount point created by the CSI driver.
n App-3 and App-4 are not configured with a file volume and cannot access file shares.
n The vSAN file shares are deployed as containers in a vSAN file share appliance VM on the
ESXi host. A Kubernetes deployer, which is a software or service that can configure, deploy,
and manage Kubernetes clusters, configures necessary routers and switches, so that the
guest OS in the Kubernetes node can access the vSAN file shares.
Security Limitations
Although the dedicated vNIC prevents an unauthorized pod from accessing the file shares
directly, certain security limitations exist:
n The CNS file functionality assumes that anyone who has the CNS file volume ID is an
authorized user of the volume. Any user that has the CNS file volume ID can access the data
stored in the volume.
n CNS file volume supports only the AUTH_SYS authentication, which is a user ID-based
authentication. To protect access to the data in the CNS file volume, you must use
appropriate user IDs for the containers accessing the CNS file volume.
n An unbound ReadWriteMany persistent volume referring to a CNS file volume can be bound
by a persistent volume claim created by any Kubernetes user under any namespace. Make
sure that only authorized users have access to Kubernetes to avoid security issues.
You can restrict access to only specific vSAN clusters where the file service is enabled. When
deploying the Kubernetes cluster, configure the CSI driver with access to specific file service
vSAN clusters. As a result, the CSI driver can provision the file volumes only on those vSAN
clusters.
In the default configuration, the CSI driver uses any file service vSAN cluster available in vCenter
Server for the file volume provisioning. The CSI driver does not verify which file service vSAN
cluster is accessible while provisioning file volumes.
You can create several roles to assign sets of permissions on the objects that participate in the
Cloud Native Storage environment.
Note These roles need to be created only for generic Kubernetes clusters. If you work in the
vSphere with Tanzu environment, use the Workload Storage Manager role for storage
operations.
For more information about roles and permissions in vSphere, and how to create a role, see the
vSphere Security documentation.
CNS-Datastore
Privilege: Datastore > Low level file operations
Description: Allows performing read, write, delete, and rename operations in the datastore browser.
Required on: Shared datastore where persistent volumes reside.
CNS-HOST-CONFIG-STORAGE
Privilege: Host > Configuration > Storage partition configuration
Description: Allows vSAN datastore management. Required for file volume only.
Required on: vSAN cluster with vSAN file service.
CNS-VM
Privilege: Virtual machine > Change Configuration > Add existing disk
Description: Allows adding an existing virtual disk to a virtual machine.
Required on: All cluster node VMs.
Read-only
Privilege: Default role
Description: Users with the Read Only role for an object are allowed to view the state of the object and details about the object. For example, users with this role can find the shared datastore accessible to all node VMs. For zone and topology-aware environments, all ancestors of node VMs, such as a host, cluster, and data center, must have the Read-only role set for the vSphere user configured to use the CSI driver and CCM. This is required to allow reading tags and categories to prepare the nodes' topology.
Required on: All hosts where the node VMs reside, and the data center.
The storage policy will be associated with the virtual disk or vSAN file share that backs the
Kubernetes container volume.
If you have multiple vCenter Server instances in your environment, create the VM storage policy
on each instance. Use the same policy name across all instances.
Prerequisites
n The Kubernetes user identifies the Kubernetes cluster where the stateful containerized
application will be deployed.
n The Kubernetes user collects storage requirements for the containerized application and
communicates them to the vSphere user.
Procedure
c Click Create.
Name: Enter the name of the storage policy, for example Space-Efficient.
3 On the Policy structure page under Datastore-specific rules, select Enable rules for vSAN
storage and click Next.
4 On the vSAN page, define the policy rule set and click Next.
a On the Availability tab, define the Site disaster tolerance and Failures to tolerate.
Note For Site disaster tolerance, select None - standard cluster. Do not select options related to stretched clusters. Cloud Native Storage does not support vSAN stretched clusters and site disaster tolerance.
b On the Advanced Policy Rules tab, define advanced policy rules, such as number of disk
stripes per object and flash read cache reservation.
5 On the Storage compatibility page, review the list of vSAN datastores that match this policy
and click Next.
6 On the Review and finish page, review the policy settings, and click Finish.
What to do next
You can now inform the Kubernetes user of the storage policy name. The VM storage policy you
created will be used as a part of storage class definition for dynamic volume provisioning.
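For illustration, a minimal StorageClass sketch referencing the policy created above might look like the following. The class name is hypothetical; the provisioner name and the storagePolicyName parameter follow the same pattern as the encryption example later in this chapter.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: space-efficient-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: "Space-Efficient"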
Perform these steps for each of the VM nodes that participate in the cluster.
Prerequisites
n Create several VMs for your Kubernetes cluster. For the VM requirements, see Requirements
and Limitations of Cloud Native Storage.
Note To avoid errors and unpredictable behavior, do not take snapshots of CNS node VMs.
Procedure
1 In the vSphere Client, right-click the virtual machine and select Edit Settings.
In the list of the VM's advanced configuration parameters, check for the disk.EnableUUID parameter. If the parameter exists, make sure that its value is set to True. If the parameter is not present, add it and set its value to True.
Name: disk.EnableUUID
Value: True
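For reference, when the parameter is set, the corresponding entry in the virtual machine configuration (.vmx) file looks similar to the following sketch. Editing the .vmx file directly is not required if you use the vSphere Client as described above.
disk.EnableUUID = "TRUE"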
Note If you experience failures on the Kubernetes CNS server, the CNS objects in the vSphere
Client might not display correctly until full synchronization takes place.
Procedure
2 Click the Monitor tab and click Container Volumes under Cloud Native Storage.
3 Observe the container volumes available in your environment and monitor their storage
policy compliance status.
4 Click the SEE ALL link in the Label column to view additional details.
The details include the name of the PersistentVolumeClaim, StorageClass, and so on, and
help you map the volume to the Kubernetes objects associated with it.
5 Click the link in the Volume Name column to review various components that back the
volume and such details as placement, compliance, and storage policy.
Note The Virtual Volumes screen is available only when the underlying datastore is vSAN.
Using encryption in your vSphere environment requires some preparation, and includes setting
up a trusted connection between vCenter Server and a key provider. vCenter Server can then
retrieve keys from the key provider as needed. For information about components that
participate in the vSphere encryption process, see vSphere Virtual Machine Encryption
Components in the vSphere Security documentation.
Procedure
b From the right-click menu, select VM Policies > Edit VM Storage Policies.
c From the VM storage policy drop-down menu, select VM Encryption Policy and click OK.
To expedite the encryption process of the node VMs, you can encrypt only the VM home.
3 Create encrypted persistent volumes in the Kubernetes cluster with the vSphere CSI setup.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: encryption
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: "VM Encryption Policy"
  datastore: vsanDatastore
The PersistentVolumeClaim must include the name of the encryption storage class in the
storageClassName field.
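A minimal sketch of such a claim, assuming the StorageClass named encryption from the previous example; the claim name and requested size are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: encryption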
Note After you make a change using the vmkfstools, the vSphere Client might not be updated
immediately. Use a refresh or rescan operation from the client.
For more information on the ESXi Shell, see Getting Started with ESXCLI.
target: Specifies a partition, device, or path to apply the command option to.
options: One or more command-line options and associated arguments that you use to specify the activity for vmkfstools to perform, for example, selecting the disk format when creating a new virtual disk. After entering the option, specify a target on which to perform the operation. The target can indicate a partition, device, or path.
partition: Specifies disk partitions. This argument uses a disk_ID:P format, where disk_ID is the device ID returned by the storage array and P is an integer that represents the partition number. The partition digit must be greater than zero (0) and must correspond to a valid VMFS partition.
device: Specifies devices or logical volumes. This argument uses a path name in the ESXi device file system. The path name begins with /vmfs/devices, which is the mount point of the device file system. Use the following formats when you specify different types of devices:
n /vmfs/devices/disks for local or SAN-based disks.
n /vmfs/devices/lvm for ESXi logical volumes.
n /vmfs/devices/generic for generic SCSI devices.
path: Specifies a VMFS file system or file. This argument is an absolute or relative path that names a directory symbolic link, a raw device mapping, or a file under /vmfs.
n To specify a VMFS file system, use this format:
/vmfs/volumes/file_system_UUID
or
/vmfs/volumes/file_system_label
n To specify a file on a VMFS datastore, use this format:
/vmfs/volumes/file_system_label|file_system_UUID/[dir]/myDisk.vmdk
The long and single-letter forms of the options are equivalent. For example, -C and --createfs invoke the same operation.
-v Suboption
The -v suboption indicates the verbosity level of the command output.
-v|--verbose number
You can specify the -v suboption with any vmkfstools option. If the output of the option is not
suitable for use with the -v suboption, vmkfstools ignores -v.
Note Because you can include the -v suboption in any vmkfstools command line, -v is not
included as a suboption in the option descriptions.
-P|--queryfs
-h|--humanreadable
When you use this option on any file or directory that resides on a VMFS datastore, the option
lists the attributes of the specified datastore. The listed attributes typically include the file system
label, the number of extents for the datastore, the UUID, and a list of the devices where each
extent resides.
Note If any device backing the VMFS file system goes offline, the number of extents and the available space change accordingly.
You can specify the -h|--humanreadable suboption with the -P option. If you do so, vmkfstools lists
the capacity of the volume in a more readable form.
~ vmkfstools -P -h /vmfs/volumes/my_vmfs
VMFS-5.81 (Raw Major Version: 14) file system spanning 1 partitions.
File system label (if any): my_vmfs
Mode: public
Capacity 99.8 GB, 97.5 GB available, file block size 1 MB, max supported file size 62.9 TB
UUID: 571fe2fb-ec4b8d6c-d375-XXXXXXXXXXXX
Partitions spanned (on "lvm"):
eui.3863316131XXXXXX:1
Is Native Snapshot Capable: YES
-C|--createfs [vmfs5|vmfs6|vfat]
This option creates the VMFS datastore on the specified SCSI partition, such as disk_ID:P. The
partition becomes the head partition of the datastore. For VMFS5 and VMFS6, the only available
block size is 1 MB.
n -S|--setfsname - Define the volume label of the VMFS datastore you are creating. Use this
suboption only with the -C option. The label you specify can be up to 128 characters long and
cannot contain any leading or trailing blank spaces.
Note vCenter Server enforces an 80-character limit for all its entity names. If a datastore name exceeds this limit, the name gets shortened when you add this datastore to vCenter Server.
After you define a volume label, you can use it whenever you specify the VMFS datastore for
the vmkfstools command. The volume label appears in listings generated for the ls -l
command and as a symbolic link to the VMFS volume under the /vmfs/volumes directory.
To change the VMFS volume label, use the ln -sf command, as in the following example:
ln -sf /vmfs/volumes/UUID /vmfs/volumes/datastore
where datastore is the new volume label to use for the UUID VMFS.
Note If your host is registered with vCenter Server, any changes you make to the VMFS
volume label get overwritten by vCenter Server. This operation guarantees that the VMFS
label is consistent across all vCenter Server hosts.
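For example, a command of the following form creates a VMFS6 datastore labeled my_vmfs on the specified partition; the device name is a placeholder.
vmkfstools -C vmfs6 -S my_vmfs /vmfs/devices/disks/naa.disk_ID:1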
When you add an extent, you span the VMFS datastore from the head partition across the
partition specified by span_partition.
You must specify the full path name for the head and span partitions, for example /vmfs/devices/
disks/disk_ID:1. Each time you use this option, you add an extent to the VMFS datastore, so that
the datastore spans multiple partitions.
Caution When you run this option, you lose all data that previously existed on the SCSI device
you specified in span_partition.
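A minimal sketch of the span operation, assuming the span partition is listed first and the head partition second; the device names match the example discussed next.
vmkfstools -Z /vmfs/devices/disks/naa.disk_ID_2:1 /vmfs/devices/disks/naa.disk_ID_1:1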
The extended datastore spans two partitions, naa.disk_ID_1:1 and naa.disk_ID_2:1. In this
example, naa.disk_ID_1:1 is the name of the head partition.
You might increase the datastore size after the capacity of the underlying storage increases. This option expands the VMFS datastore or its specific extent, as in the sketch below.
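A hedged sketch, assuming this passage describes the -G|--growfs option and that the syntax expects the extent device to be specified twice; the device name is a placeholder.
vmkfstools --growfs /vmfs/devices/disks/naa.disk_ID:1 /vmfs/devices/disks/naa.disk_ID:1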
n zeroedthick (default) – Space required for the virtual disk is allocated during creation. Any
data remaining on the physical device is not erased during creation, but is zeroed out on
demand on first write from the virtual machine. The virtual machine does not read stale data
from disk.
n eagerzeroedthick – Space required for the virtual disk is allocated at creation time. In contrast
to zeroedthick format, the data remaining on the physical device is zeroed out during
creation. It might take much longer to create disks in this format than to create other types of
disks.
n thin – Thin-provisioned virtual disk. Unlike with the thick format, space required for the virtual
disk is not allocated during creation, but is supplied, zeroed out, on demand.
n 2gbsparse – A sparse disk with the maximum extent size of 2 GB. You can use disks in this
format with hosted VMware products, such as VMware Fusion. However, you cannot power
on the sparse disk on an ESXi host unless you first re-import the disk with vmkfstools in a
compatible format, such as thick or thin.
Thick, zeroedthick, and thin formats usually behave the same because the NFS server and not
the ESXi host determines the allocation policy. The default allocation policy on most NFS servers
is thin. However, on NFS servers that support Storage APIs - Array Integration, you can create
virtual disks in zeroedthick format. The reserve space operation enables NFS servers to allocate
and guarantee space.
For more information on array integration APIs, see Chapter 24 Storage Hardware Acceleration.
-c|--createvirtualdisk size[bB|sS|kK|mM|gG]
-d|--diskformat [thin|zeroedthick|eagerzeroedthick]
-W|--objecttype [file|vsan|vvol]
--policyFile fileName
This option creates a virtual disk at the specified path on a datastore. Specify the size of the
virtual disk. When you enter the value for size, you can indicate the unit type by adding a suffix
of k (kilobytes), m (megabytes), or g (gigabytes). The unit type is not case-sensitive. vmkfstools
interprets either k or K to mean kilobytes. If you do not specify a unit type, vmkfstools defaults to
bytes.
n -W|--objecttype specifies whether the virtual disk is a file on a VMFS or NFS datastore, or an
object on a vSAN or vVols datastore.
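For example, a command of the following form creates a 2-GB thin-provisioned virtual disk; the datastore path is a placeholder.
vmkfstools -c 2g -d thin /vmfs/volumes/myVMFS/myDisk.vmdk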
-w|--writezeros
This option cleans the virtual disk by writing zeros over all its data. Depending on the size of your
virtual disk and the I/O bandwidth to the device hosting the virtual disk, completing this
command might take a long time.
Caution When you use this command, you lose any existing data on the virtual disk.
-j|--inflatedisk
This option converts a thin virtual disk to eagerzeroedthick, preserving all existing data. The
option allocates and zeroes out any blocks that are not already allocated.
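For example, a command of the following form inflates a thin disk in place; the path is a placeholder.
vmkfstools -j /vmfs/volumes/myVMFS/myDisk.vmdk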
-k|--eagerzero
This option converts the virtual disk to the eagerzeroedthick format. While performing the conversion, the option preserves any data on the virtual disk.
-K|--punchzero
This option deallocates all zeroed out blocks and leaves only those blocks that were allocated
previously and contain valid data. The resulting virtual disk is in thin format.
-U|--deletevirtualdisk
You must specify the original filename or file path oldName and the new filename or file path
newName.
A non-root user cannot clone a virtual disk or an RDM. You must specify the original filename or
file path oldName and the new filename or file path newName.
Use the following suboptions to change corresponding parameters for the copy you create.
n -W|--objecttype specifies whether the virtual disk is a file on a VMFS or NFS datastore, or an
object on a vSAN or vVols datastore.
By default, ESXi uses its native methods to perform the cloning operations. If your array supports
the cloning technologies, you can off-load the operations to the array. To avoid the ESXi native
cloning, specify the -N|--avoidnativeclone option.
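A minimal sketch, assuming this passage describes the clone option -i|--clonevirtualdisk; the source path is hypothetical, and the resulting copy is the myOS.vmdk disk referenced in the configuration lines that follow.
vmkfstools -i /vmfs/volumes/myVMFS/templates/gold-master.vmdk /vmfs/volumes/myVMFS/myOS.vmdk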
You can configure a virtual machine to use this virtual disk by adding lines to the virtual machine
configuration file, as in the following example:
scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/myOS.vmdk
If you want to convert the format of the disk, use the -d|--diskformat suboption.
This suboption is useful when you import virtual disks in a format not compatible with ESXi, for example the 2gbsparse format. After you convert the disk, you can attach it to a new virtual machine you create in ESXi, as in the sketch below.
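A hedged sketch, assuming the clone option -i is used together with -d to re-import a 2gbsparse disk in thin format; the paths are placeholders.
vmkfstools -i /vmfs/volumes/myVMFS/legacy-2gbsparse.vmdk /vmfs/volumes/myVMFS/legacy-thin.vmdk -d thin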
-X|--extendvirtualdisk newSize[bBsSkKmMgGtT]
Specify the newSize parameter adding an appropriate unit suffix. The unit type is not case-
sensitive. vmkfstools interprets either k or K to mean kilobytes. If you do not specify the unit type,
vmkfstools defaults to kilobytes.
The newSize parameter defines the entire new size, not just the increment you add to the disk. For example, to extend a 4-GB virtual disk by 1 GB, enter: vmkfstools -X 5g diskName.vmdk.
You can extend the virtual disk to the eagerzeroedthick format by using the -d
eagerzeroedthick option.
n Do not extend the base disk of a virtual machine that has snapshots associated with it. If you
do, you can no longer commit the snapshot or revert the base disk to its original size.
n After you extend the disk, you might need to update the file system on the disk so that the guest operating system recognizes the new size of the disk and can use it.
Use this option to convert virtual disks of type LEGACYSPARSE, LEGACYPLAIN, LEGACYVMFS,
LEGACYVMFS_SPARSE, and LEGACYVMFS_RDM.
-M|--migratevirtualdisk
-r|--createrdm device
/vmfs/devices/disks/disk_ID:P
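For example, a command of the following form creates the mapping file used in the configuration lines below, assuming the command is run from the directory of the target datastore; the device name is a placeholder.
vmkfstools -r /vmfs/devices/disks/disk_ID my_rdm.vmdk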
You can configure a virtual machine to use the my_rdm.vmdk mapping file by adding the following
lines to the virtual machine configuration file:
scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/my_rdm.vmdk
After you establish this type of mapping, you can use it to access the raw disk as you access any
other VMFS virtual disk.
/vmfs/devices/disks/device_ID
For the .vmdk name, use the following format. Make sure to create the datastore before using the command:
/vmfs/volumes/datastore_name/example.vmdk
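A hedged sketch, assuming this passage describes the passthrough RDM option -z|--createrdmpassthru and using the device and .vmdk formats shown above.
vmkfstools -z /vmfs/devices/disks/device_ID /vmfs/volumes/datastore_name/example.vmdk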
-q|--queryrdm my_rdm.vmdk
This option prints the name of the raw disk RDM. The option also prints other identification
information, like the disk ID, for the raw disk.
# vmkfstools -q /vmfs/volumes/VMFS/my_vm/my_rdm.vmdk
-g|--geometry
The output is in the form: Geometry information C/H/S, where C represents the number of
cylinders, H represents the number of heads, and S represents the number of sectors.
Note When you import virtual disks from hosted VMware products to the ESXi host, you might
see a disk geometry mismatch error message. A disk geometry mismatch might also trigger
problems when you load a guest operating system or run a newly created virtual machine.
-x|--fix [check|repair]
Use this option to check or repair a virtual disk in case of inconsistencies, as in the sketch below.
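A minimal sketch of a consistency check, using a placeholder path.
vmkfstools -x check /vmfs/volumes/my_datastore/my_disk.vmdk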
-e|--chainConsistent
Caution Using the -L option can interrupt the operations of other servers on a SAN. Use the -L
option only when troubleshooting clustering setups.
Unless advised by VMware, never use this option on a LUN hosting a VMFS volume.
n -L reserve – Reserves the specified LUN. After the reservation, only the server that reserved
that LUN can access it. If other servers attempt to access that LUN, a reservation error
appears.
n -L release – Releases the reservation on the specified LUN. Other servers can access the
LUN again.
n -L lunreset – Resets the specified LUN by clearing any reservation on the LUN and making
the LUN available to all servers again. The reset does not affect any of the other LUNs on the
device. If another LUN on the device is reserved, it remains reserved.
n -L targetreset – Resets the entire target. The reset clears any reservations on all the LUNs
associated with that target and makes the LUNs available to all servers again.
n -L busreset – Resets all accessible targets on the bus. The reset clears any reservation on all
the LUNs accessible through the bus and makes them available to all servers again.
n -L readkeys – Reads the reservation keys registered with a LUN. Applies to SCSI-III persistent
group reservation functionality.
n -L readresv – Reads the reservation state on a LUN. Applies to SCSI-III persistent group
reservation functionality.
When entering the device parameter, use the following format:
/vmfs/devices/disks/disk_ID:P
-B|--breaklock device
When entering the device parameter, use the following format:
/vmfs/devices/disks/disk_ID:P
You can use this command when a host fails in the middle of a datastore operation, such as expanding the datastore, adding an extent, or resignaturing. When you run this command, make sure that no other host is holding the lock.