
AHV 6.0

AHV Administration Guide


July 20, 2021
Contents

1. AHV Overview............................................................................................... 4
Storage Overview........................................................................................................................................ 4
AHV Turbo........................................................................................................................................ 5
Acropolis Dynamic Scheduling in AHV....................................................................................................... 5
Disabling Acropolis Dynamic Scheduling......................................................................................... 7
Enabling Acropolis Dynamic Scheduling..........................................................................................7
Virtualization Management Web Console Interface....................................................................................7
Viewing the AHV Version on Prism Element................................................................................... 8
Viewing the AHV Version on Prism Central.................................................................................... 8

2. Node Management...................................................................................... 10
Nonconfigurable AHV Components.......................................................................................................... 10
Nutanix Software............................................................................................................................ 10
AHV Settings.................................................................................................................................. 10
Controller VM Access................................................................................................................................11
Admin User Access to Controller VM............................................................................................ 11
Nutanix User Access to Controller VM.......................................................................................... 13
Controller VM Password Complexity Requirements...................................................................... 14
AHV Host Access......................................................................................................................................14
Initial Configuration......................................................................................................................... 15
Accessing the AHV Host Using the Admin Account...................................................................... 16
Changing the Root User Password................................................................................................17
Changing Nutanix User Password................................................................................................. 17
AHV Host Password Complexity Requirements............................................................................ 17
Verifying the Cluster Health...................................................................................................................... 18
Putting a Node into Maintenance Mode................................................................................................... 20
Exiting a Node from the Maintenance Mode............................................................................................ 22
Shutting Down a Node in a Cluster (AHV)............................................................................................... 23
Starting a Node in a Cluster (AHV).......................................................................................................... 24
Shutting Down an AHV Cluster................................................................................................................ 25
Changing CVM Memory Configuration (AHV).......................................................................................... 27
Changing the AHV Hostname...................................................................................................................28
Changing the Name of the CVM Displayed in the Prism Web Console................................................... 28
Adding a Never-Schedulable Node (AHV Only)....................................................................................... 29
Compute-Only Node Configuration (AHV Only)........................................................................................31
Adding a Compute-Only Node to an AHV Cluster.........................................................................32

3. Host Network Management........................................................................35


Prerequisites for Configuring Networking................................................................................................. 36
AHV Networking Recommendations......................................................................................................... 36
IP Address Management................................................................................................................40
Layer 2 Network Management..................................................................................................................40
About Virtual Switch....................................................................................................................... 40
Virtual Switch Requirements.......................................................................................................... 48
Virtual Switch Limitations............................................................................................................... 49
Virtual Switch Management............................................................................................................49
Enabling LAG and LACP on the ToR Switch (AHV Only)............................................................. 50

VLAN Configuration........................................................................................................................ 50
IGMP Snooping.............................................................................................................................. 53
Switch Port ANalyzer on AHV Hosts............................................................................................. 54
Uplink Configuration (AHV Only).............................................................................................................. 60
Updating the Uplink Configuration (AHV Only).............................................................................. 61
Enabling LAG and LACP on the ToR Switch (AHV Only)............................................................. 63
Enabling RSS Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues..................................64
Changing the IP Address of an AHV Host............................................................................................... 66

4. Virtual Machine Management.................................................................... 70


Supported Guest VM Types for AHV....................................................................................................... 70
Creating a VM (AHV)................................................................................................................................ 70
Managing a VM (AHV)..............................................................................................................................80
Windows VM Provisioning.........................................................................................................................92
Nutanix VirtIO for Windows............................................................................................................ 92
Installing Windows on a VM.........................................................................................................101
Windows Defender Credential Guard Support in AHV................................................................ 103
Affinity Policies for AHV.......................................................................................................................... 108
Configuring VM-VM Anti-Affinity Policy........................................................................................ 109
Removing VM-VM Anti-Affinity Policy.......................................................................................... 110
Performing Power Operations on VMs by Using Nutanix Guest Tools (aCLI)........................................110
UEFI Support for VM.............................................................................................................................. 111
Creating UEFI VMs by Using aCLI.............................................................................................. 112
Getting Familiar with UEFI Firmware Menu.................................................................................113
Secure Boot Support for VMs...................................................................................................... 116
Secure Boot Considerations.........................................................................................................116
Creating/Updating a VM with Secure Boot Enabled.................................................................... 117
Virtual Machine Network Management................................................................................................... 118
Virtual Machine Memory and CPU Hot-Plug Configurations.................................................................. 118
Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)............................................... 118
Virtual Machine Memory Management (vNUMA)................................................................................... 119
Enabling vNUMA on Virtual Machines......................................................................................... 119
GPU and vGPU Support.........................................................................................................................121
Supported GPUs...........................................................................................................................121
GPU Pass-Through for Guest VMs..............................................................................................122
NVIDIA GRID Virtual GPU Support on AHV................................................................................123
PXE Configuration for AHV VMs............................................................................................................ 133
Configuring the PXE Environment for AHV VMs......................................................................... 134
Configuring a VM to Boot over a Network...................................................................................135
Uploading Files to DSF for Microsoft Windows Users............................................................................136
Enabling Load Balancing of vDisks in a Volume Group.........................................................................136
Live vDisk Migration Across Storage Containers................................................................................... 137
Migrating a vDisk to Another Container....................................................................................... 138
OVAs........................................................................................................................................................140
OVA Restrictions.......................................................................................................................... 140

Copyright.................................................................................................... 142
License.....................................................................................................................................................142
Conventions............................................................................................................................................. 142
Version.....................................................................................................................................................142

1
AHV OVERVIEW
As the default option for Nutanix HCI, the native Nutanix hypervisor, AHV, represents a unique approach to
virtualization that offers the powerful virtualization capabilities needed to deploy and manage enterprise applications.
AHV complements the HCI value by integrating native virtualization along with networking, infrastructure, and
operations management in a single intuitive interface: Nutanix Prism.
Virtualization teams find AHV easy to learn and transition to from legacy virtualization solutions with familiar
workflows for VM operations, live migration, VM high availability, and virtual network management. AHV includes
resiliency features, including high availability and dynamic scheduling without the need for additional licensing,
and security is integral to every aspect of the system from the ground up. AHV also incorporates the optional Flow
Security and Networking, allowing easy access to hypervisor-based network microsegmentation and advanced
software-defined networking.
See the Field Installation Guide for information about how to deploy and create a cluster. Once you create the cluster
by using Foundation, you can use this guide to perform day-to-day management tasks.

AOS and AHV Compatibility


For information about the AOS and AHV compatibility with this release, see Software Bundled in This Release
Family in the Acropolis Family Release Notes.

Limitations

Number of online VMs per host: 128

Number of online VM virtual disks per host: 256

Number of VMs to edit concurrently (for example, with vm.create/delete and power operations): 64

Nested Virtualization
Nutanix does not support nested virtualization (nested VMs) in an AHV cluster.

Storage Overview
AHV uses a Distributed Storage Fabric to deliver data services such as storage provisioning, snapshots,
clones, and data protection to VMs directly.
In AHV clusters, AOS passes all disks to the VMs as raw SCSI block devices, which keeps the I/O path
lightweight and optimized. Each AHV host runs an iSCSI redirector, which establishes a highly resilient storage path
from each VM to storage across the Nutanix cluster.
QEMU is configured with the iSCSI redirector as the iSCSI target portal. Upon a login request, the redirector
performs an iSCSI login redirect to a healthy Stargate (preferably the local one).



Figure 1: AHV Storage

AHV Turbo
AHV Turbo represents significant advances to the data path in AHV. AHV Turbo provides an I/O path that bypasses
QEMU and services storage I/O requests, which lowers CPU usage and increases the amount of storage I/O available
to VMs.
AHV Turbo is enabled by default on VMs running in AHV clusters.
When using QEMU, all I/O travels through a single queue, which can impact performance. The AHV Turbo design
introduces a multi-queue approach to allow data to flow from a VM to the storage, resulting in a much higher I/O
capacity. The storage queues scale out automatically to match the number of vCPUs configured for a given VM,
making even higher performance possible as the workload scales up.
The AHV Turbo technology is transparent to the VMs, but you can achieve even greater performance if:

• The VM has multi-queue enabled and an optimum number of queues. See Enabling RSS Virtio-Net Multi-Queue
by Increasing the Number of VNIC Queues on page 64 for instructions about how to enable multi-queue and
set an optimum number of queues. Consult your Linux distribution documentation to make sure that the guest
operating system fully supports multi-queue before you enable it. A quick way to check the queue count from
inside the guest is sketched after this list.
• You have installed the latest Nutanix VirtIO package for Windows VMs. Download the VirtIO package from the
Downloads section of Nutanix Support Portal. No additional configuration is required.
• The VM has more than one vCPU.
• The workloads are multi-threaded.
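
As a quick check inside a Linux guest, you can inspect how many queues a virtio-net interface currently exposes. The
following is a sketch that assumes the interface is named eth0 and that the ethtool utility is installed in the guest:
user@guest$ ethtool -l eth0
The Combined row shows the current and maximum queue counts. If the current count is lower than the maximum, you
can raise it (for example, with ethtool -L eth0 combined 4), following the guidance in the multi-queue section
referenced in the first item of the list above.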

Acropolis Dynamic Scheduling in AHV


Acropolis Dynamic Scheduling (ADS) proactively monitors your cluster for any compute and storage I/O
contentions or hotspots over a period of time. If ADS detects a problem, ADS creates a migration plan that
eliminates hotspots in the cluster by migrating VMs from one host to another.



You can monitor VM migration tasks from the Task dashboard of the Prism Element web console.
Following are the advantages of ADS:

• ADS improves the initial placement of the VMs depending on the VM configuration.
• Nutanix Volumes uses ADS for balancing sessions of the externally available iSCSI targets.

Note: ADS honors all the configured host affinities, VM-host affinities, VM-VM anti-affinity policies, and HA policies.

By default, ADS is enabled and Nutanix recommends you keep this feature enabled. However, see Disabling
Acropolis Dynamic Scheduling on page 7 for information about how to disable the ADS feature. See Enabling
Acropolis Dynamic Scheduling on page 7 for information about how to enable the ADS feature if you
previously disabled the feature.
ADS monitors the following resources:

• VM CPU Utilization: Total CPU usage of each guest VM.
• Storage CPU Utilization: Storage controller (Stargate) CPU usage per VM or iSCSI target.

ADS does not monitor memory and networking usage.

How Acropolis Dynamic Scheduling Works


Lazan is the ADS service in an AHV cluster. AOS selects a Lazan manager and Lazan solver among the hosts in the
cluster to effectively manage ADS operations.
ADS performs the following tasks to resolve compute and storage I/O contentions or hotspots:

• The Lazan manager gathers statistics from the components it monitors.


• The Lazan solver (runner) checks the statistics for potential anomalies and determines how to resolve them, if
possible.
• The Lazan manager invokes the tasks (for example, VM migrations) to resolve the situation.

Note:

• During migration, a VM consumes resources on both the source and destination hosts as the High
Availability (HA) reservation algorithm must protect the VM on both hosts. If a migration fails due to
lack of free resources, turn off some VMs so that migration is possible.
• If a problem is detected and ADS cannot solve the issue (for example, because of limited CPU or
storage resources), the migration plan might fail. In these cases, an alert is generated. Monitor these
alerts from the Alerts dashboard of the Prism Element web console and take necessary remedial
actions.
• If a host, firmware, or AOS upgrade is in progress, ADS does not perform any resource contention
rebalancing during the upgrade period.

When Is a Hotspot Detected?


Lazan runs every 15 minutes and analyzes the resource usage for at least that period of time. If the resource utilization
of an AHV host remains >85% for the span of 15 minutes, Lazan triggers migration tasks to remove the hotspot.

Note: For a storage hotspot, ADS looks at the last 40 minutes of data and uses a smoothing algorithm to use the most
recent data. For a CPU hotspot, ADS looks at the last 10 minutes of data only, that is, the average CPU usage over the
last 10 minutes.

If there is an obvious hotspot but the VMs did not migrate, the possible reasons are:



• Lazan cannot resolve the hotspot. For example:

  • A huge VM (16 vCPUs) runs at 100% usage and accounts for 75% of the usage of its AHV host (which is also
    at 100% usage).
  • The other hosts are loaded at approximately 40% usage.
  In these situations, the other hosts cannot accommodate the large VM without causing contention there as well.
  Lazan does not prioritize one host or VM over others for contention, so it leaves the VM where it is hosted.
• The number of all-flash nodes in the cluster is less than the replication factor.
If the cluster has an RF2 configuration, the cluster must have a minimum of two all-flash nodes for successful
migration of VMs on all the all-flash nodes.

Migrations Audit
Prism Central displays the list of all the VM migration operations generated by ADS. In Prism Central, go to Menu
-> Activity -> Audits to display the VM migrations list. You can filter the migrations by clicking Filters and
selecting Migrate in the Operation Type tab. The list displays all the VM migration tasks created by ADS with
details such as the source and target host, VM name, and time of migration.

Disabling Acropolis Dynamic Scheduling


Perform the procedure described in this topic to disable ADS. Nutanix recommends you keep ADS
enabled.

Procedure

1. Log on to a Controller VM in your cluster with SSH.

2. Disable ADS.
nutanix@cvm$ acli ads.update enable=false
Even after you disable the feature, the checks for contentions or hotspots continue to run in the background, and
an alert is raised in the Alerts dashboard if any anomalies are detected. However, ADS takes no action to resolve
the contentions. You must take the remedial actions manually, or re-enable the feature.
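
If your AOS release supports it, you can confirm the current ADS state from aCLI. The following is a sketch; the
ads.get subcommand is an assumption and might not exist in every release:
nutanix@cvm$ acli ads.get
After you disable the feature, the output should report it as disabled.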

Enabling Acropolis Dynamic Scheduling


If you have disabled the ADS feature and want to enable the feature, perform the following procedure.

Procedure

1. Log on to a Controller VM in your cluster with SSH.

2. Enable ADS.
nutanix@cvm$ acli ads.update enable=true

Virtualization Management Web Console Interface


You can manage the virtualization management features by using the Prism GUI (Prism Element and
Prism Central web consoles).
You can do the following by using the Prism web consoles:

• Configure network connections


• Create virtual machines



• Manage virtual machines (launch console, start/shut down, take snapshots, migrate, clone, update, and delete)
• Monitor virtual machines
• Enable VM high availability
See Prism Web Console Guide and Prism Central Guide for more information.

Viewing the AHV Version on Prism Element


You can see the AHV version installed in the Prism Element web console.

About this task


To view the AHV version installed on the host, do the following.

Procedure

1. Log on to the Prism Element web console.

2. The Hypervisor Summary widget on the top left side of the Home page displays the AHV version.

Figure 2: LCM Page Displays AHV Version

Viewing the AHV Version on Prism Central


You can see the AHV version installed in the Prism Central console.

About this task


To view the AHV version installed on any host in the clusters managed by Prism Central, do the
following.

Procedure

1. Log on to Prism Central.

2. In the sidebar, select Hardware > Hosts > Summary tab.

3. Click the host whose hypervisor version you want to see.



4. The Host detail view page displays the Properties widget that lists the Hypervisor Version.

Figure 3: Hypervisor Version in Host Detail View
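
You can also read the AHV version directly from the hosts on the command line. The following is a minimal sketch,
assuming SSH access to a CVM and that the hostssh helper is available:
nutanix@cvm$ hostssh "cat /etc/nutanix-release"
Each AHV host prints its installed AHV release string.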



2
NODE MANAGEMENT
Nonconfigurable AHV Components
The components listed here are configured by the Nutanix manufacturing and installation processes. Do
not modify any of these components except under the direction of Nutanix Support.

Nutanix Software
Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix
cluster or render the Nutanix cluster inoperable.

• Local datastore name.


• Configuration and contents of any CVM (except memory configuration to enable certain features).

Important: Note the following important considerations about Controller VMs.

• Do not delete the Nutanix CVM.


• Do not take a snapshot of the CVM for backup.
• Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
• Do not create additional CVM user accounts.
Use the default accounts (admin or nutanix), or use sudo to elevate to the root account.
• Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in
features.
Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and
monitor CVM memory.
• Nutanix does not support the use of third-party storage on hosts that are part of Nutanix clusters.
Normal cluster operations might be affected if there are connectivity issues with the third-party storage
you attach to the hosts in a Nutanix cluster.
• Do not run any commands on a CVM that are not in the Nutanix documentation.

AHV Settings
Nutanix AHV is a cluster-optimized hypervisor appliance.
Alteration of the hypervisor appliance (unless advised by Nutanix Technical Support) is unsupported and may result
in the hypervisor or VMs functioning incorrectly.
Unsupported alterations include (but are not limited to):

• Hypervisor configuration, including installed packages



• Controller VM virtual hardware configuration file (.xml file). Each AOS version and upgrade includes a specific
Controller VM virtual hardware configuration. Therefore, do not edit or otherwise modify the Controller VM
virtual hardware configuration file.
• iSCSI settings
• Open vSwitch settings

• Installation of third-party software not approved by Nutanix


• Installation or upgrade of software packages from non-Nutanix sources (using yum, rpm, or similar)
• Taking snapshots of the Controller VM
• Creating user accounts on AHV hosts
• Changing the timezone of the AHV hosts. By default, the timezone of an AHV host is set to UTC.

Controller VM Access
Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the cluster, some operations
affect the entire cluster.
Most administrative functions of a Nutanix cluster can be performed through the web console (Prism); however,
some management tasks require access to the Controller VM (CVM) over SSH. Nutanix recommends
restricting CVM SSH access with password or key authentication.
This topic provides information about how to access the Controller VM as an admin user and nutanix user.
You can perform most administrative functions of a Nutanix cluster through the Prism web consoles or REST
API. Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access with
password or key authentication. Some functions, however, require logging on to a Controller VM with SSH. Exercise
caution whenever connecting directly to a Controller VM as it increases the risk of causing cluster issues.

Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does not import or change
any locale settings. The Nutanix software is not localized, and running the commands with any locale other than
en_US.UTF-8 can cause severe cluster issues.

To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables are set
to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not import or change
any locale settings.
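
One way to avoid importing locale settings is to bypass your local SSH client configuration entirely. For
example (a sketch, not the only approach):
user@workstation$ ssh -F /dev/null nutanix@<CVM-IP-address>
With -F /dev/null, ssh ignores both the per-user and the system-wide client configuration files, so no SendEnv
directives forward your local LANG or LC_* variables to the CVM.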

Admin User Access to Controller VM


You can access the Controller VM as the admin user (admin user name and password) with SSH. For security
reasons, the password of the admin user must meet Controller VM Password Complexity Requirements. When you
log on to the Controller VM as the admin user for the first time, you are prompted to change the default password.
See Controller VM Password Complexity Requirements to set a secure password.
After you have successfully changed the password, the new password is synchronized across all Controller VMs and
interfaces (Prism web console, nCLI, and SSH).

Note:

• As an admin user, you cannot access nCLI by using the default credentials. If you are logging in as the
admin user for the first time, you must log on through the Prism web console or SSH to the Controller
VM. Also, you cannot change the default password of the admin user through nCLI. To change the
default password of the admin user, you must log on through the Prism web console or SSH to the
Controller VM.



• When you make an attempt to log in to the Prism web console for the first time after you upgrade to
AOS 5.1 from an earlier AOS version, you can use your existing admin user password to log in and then
change the existing password (you are prompted) to adhere to the password complexity requirements.
However, if you are logging in to the Controller VM with SSH for the first time after the upgrade as
the admin user, you must use the default admin user password (Nutanix/4u) and then change the default
password (you are prompted) to adhere to the Controller VM Password Complexity Requirements.
• You cannot delete the admin user account.
• The default password expiration age for the admin user is 60 days. You can configure the minimum and
maximum password expiration days based on your security requirement.

• nutanix@cvm$ sudo chage -M <MAX-AGE> admin

• nutanix@cvm$ sudo chage -m <MIN-AGE> admin

When you change the admin user password, you must update any applications and scripts using the admin user
credentials for authentication. Nutanix recommends that you create a user assigned with the admin role instead of
using the admin user for authentication. The Prism Web Console Guide describes authentication and roles.
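
For example, to set a 90-day maximum password age for the admin user and then verify the aging policy, you can
use the standard Linux chage options (a sketch; adjust the value to your security requirements):
nutanix@cvm$ sudo chage -M 90 admin
nutanix@cvm$ sudo chage -l admin
The -l option lists the current password aging settings for the account.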
Following are the default credentials to access a Controller VM.

Table 1: Controller VM Credentials

Interface          Target                 User Name  Password

SSH client         Nutanix Controller VM  admin      Nutanix/4u
                                          nutanix    nutanix/4u
Prism web console  Nutanix Controller VM  admin      Nutanix/4u

Accessing the Controller VM Using the Admin User Account

About this task


Perform the following procedure to log on to the Controller VM by using the admin user with SSH for the
first time.

Procedure

1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and the
following credentials.

• User name: admin


• Password: Nutanix/4u
You are now prompted to change the default password.

2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
Retype new password:



Password changed.
See the requirements listed in Controller VM Password Complexity Requirements to set a secure password.
For information about logging on to a Controller VM by using the admin user account through the Prism web
console, see Logging Into The Web Console in the Prism Web Console Guide.

Nutanix User Access to Controller VM


You can access the Controller VM as the nutanix user (nutanix user name and password) with SSH. For security
reasons, the password of the nutanix user must meet the Controller VM Password Complexity Requirements on
page 14. When you log on to the Controller VM as the nutanix user for the first time, you are prompted to
change the default password.
See Controller VM Password Complexity Requirements on page 14 to set a secure password.
After you have successfully changed the password, the new password is synchronized across all Controller VMs and
interfaces (Prism web console, nCLI, and SSH).

Note:

• As a nutanix user, you cannot access nCLI by using the default credentials. If you are logging in as the
nutanix user for the first time, you must log on through the Prism web console or SSH to the Controller
VM. Also, you cannot change the default password of the nutanix user through nCLI. To change the
default password of the nutanix user, you must log on through the Prism web console or SSH to the
Controller VM.
• When you make an attempt to log in to the Prism web console for the first time after you upgrade the
AOS from an earlier AOS version, you can use your existing nutanix user password to log in and then
change the existing password (you are prompted) to adhere to the password complexity requirements.
However, if you are logging in to the Controller VM with SSH for the first time after the upgrade as the
nutanix user, you must use the default nutanix user password (nutanix/4u) and then change the default
password (you are prompted) to adhere to the Controller VM Password Complexity Requirements on
page 14.
• You cannot delete the nutanix user account.
• The default password expiration age for the nutanix user is 60 days. You can configure the minimum
and maximum password expiration days based on your security requirement.

• nutanix@cvm$ sudo chage -M <MAX-AGE> nutanix

• nutanix@cvm$ sudo chage -m <MIN-AGE> nutanix

When you change the nutanix user password, you must update any applications and scripts using the nutanix user
credentials for authentication. Nutanix recommends that you create a user assigned with the nutanix role instead of
using the nutanix user for authentication. The Prism Web Console Guide describes authentication and roles.
Following are the default credentials to access a Controller VM.

Table 2: Controller VM Credentials

Interface          Target                 User Name  Password

SSH client         Nutanix Controller VM  admin      Nutanix/4u
                                          nutanix    nutanix/4u
Prism web console  Nutanix Controller VM  admin      Nutanix/4u



Accessing the Controller VM Using the Nutanix User Account

About this task


Perform the following procedure to log on to the Controller VM by using the nutanix user with SSH for the
first time.

Procedure

1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and the
following credentials.

• User name: nutanix


• Password: nutanix/4u
You are now prompted to change the default password.

2. Respond to the prompts, providing the current and new nutanix user password.
Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.
See Controller VM Password Complexity Requirements on page 14 to set a secure password.
For information about logging on to a Controller VM by using the nutanix user account through the Prism web
console, see Logging Into The Web Console in the Prism Web Console Guide.

Controller VM Password Complexity Requirements


The password must meet the following complexity requirements:

• At least eight characters long.


• At least one lowercase letter.
• At least one uppercase letter.
• At least one number.
• At least one special character.
• At least four characters difference from the old password.
• Must not be among the last 5 passwords.
• Must not have more than 2 consecutive occurrences of a character.
• Must not be longer than 199 characters.

AHV Host Access


You can perform most of the administrative functions of a Nutanix cluster using the Prism web consoles or REST
API. Nutanix recommends using these interfaces whenever possible. Some functions, however, require logging on to
an AHV host with SSH.

Note: From AOS 5.15.5, AHV has two new user accounts—admin and nutanix.

Nutanix provides the following users to access the AHV host:



• root—It is used internally by the AOS. The root user is used for the initial access and configuration of the AHV
host.
• admin—It is used to log on to an AHV host. The admin user is recommended for accessing the AHV host.

• nutanix—It is used internally by the AOS and must not be used for interactive logon.

Exercise caution whenever connecting directly to an AHV host as it increases the risk of causing cluster issues.
Following are the default credentials to access an AHV host:

Table 3: AHV Host Credentials

Interface   Target    User Name  Password

SSH client  AHV Host  root       nutanix/4u
                      admin      There is no default password for admin. You must set it during the
                                 initial configuration.
                      nutanix    nutanix/4u

Initial Configuration

About this task


The AHV host is shipped with default passwords for the root and nutanix users, which you must change over
SSH when you log on to the AHV host for the first time. After you change the default passwords and set the admin
password, make all subsequent logins to the AHV host with the admin user.
Perform the following procedure to set the admin user account password for the first time:

Note: Perform this initial configuration on all the AHV hosts.

Procedure

1. Use SSH and log on to the AHV host using the root account.
$ ssh root@<AHV Host IP Address>
Nutanix AHV
root@<AHV Host IP Address> password: # default password nutanix/4u

2. Change the default root user password.


root@ahv# passwd root
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

3. Change the default nutanix user password.


root@ahv# passwd nutanix
Changing password for user nutanix.
New password:
Retype new password:



passwd: all authentication tokens updated successfully.

4. Change the admin user password.


root@ahv# passwd admin
Changing password for user admin.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Accessing the AHV Host Using the Admin Account

About this task


After setting the admin password in the Initial Configuration on page 15, use the admin user for all subsequent
logins.
Perform the following procedure to log on to the AHV host by using the admin user with SSH.

Procedure

1. Log on to the AHV host with SSH using the admin account.
$ ssh admin@<AHV Host IP Address>
Nutanix AHV

2. Enter the admin user password configured in the Initial Configuration on page 15.
admin@<AHV Host IP Address> password:

3. Append sudo to the commands if privileged access is required.


$ sudo ls /var/log

Changing Admin User Password

About this task


Perform these steps to change the admin password on every AHV host in the cluster:

Procedure

1. Log on to the AHV host using the admin account with SSH.

2. Enter the admin user password configured in the Initial Configuration on page 15.

3. Run the sudo command to change the admin user password.


$ sudo passwd admin

4. Respond to the prompts and provide the new password.


[sudo] password for admin:
Changing password for user admin.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Note: Repeat this step for each AHV host.

See AHV Host Password Complexity Requirements on page 17 to set a secure password.



Changing the Root User Password

About this task


Perform these steps to change the root password on every AHV host in the cluster:

Procedure

1. Log on to the AHV host using the admin account with SSH.

2. Run the sudo command to change to the root user.

3. Change the root password.


root@ahv# passwd root

4. Respond to the prompts and provide the current and new root password.
Changing password for root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Note: Repeat this step for each AHV host.

See AHV Host Password Complexity Requirements on page 17 to set a secure password.

Changing Nutanix User Password

About this task


Perform these steps to change the nutanix password on every AHV host in the cluster:

Procedure

1. Log on to the AHV host using the admin account with SSH.

2. Run the sudo command to change to the root user.

3. Change the nutanix password.


root@ahv# passwd nutanix

4. Respond to the prompts and provide the current and new nutanix password.
Changing password for nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Note: Repeat this step for each AHV host.

See AHV Host Password Complexity Requirements on page 17 to set a secure password.

AHV Host Password Complexity Requirements


The password you choose must meet the following complexity requirements:



• In configurations with high-security requirements, the password must contain:

• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~), exclamation point
(!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
The password cannot be the same as the last 24 passwords.

• In configurations without high-security requirements, the password must contain:

• At least eight characters.


• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~), exclamation point
(!), at sign (@), number sign (#), or dollar sign ($).
• At least three characters different from the previous password.
• At most three consecutive occurrences of any given character.
The password cannot be the same as the last 10 passwords.
In both types of configuration, if a password for an account is entered three times unsuccessfully within a 15-minute
period, the account is locked for 15 minutes.

Verifying the Cluster Health


Before you perform operations such as restarting a CVM or AHV host, or putting an AHV host into
maintenance mode, check whether the cluster can tolerate a single-node failure.

Before you begin


Ensure that you are running the most recent version of NCC.

About this task

Note: If you see any critical alerts, resolve the issues by referring to the indicated KB articles. If you are unable to
resolve any issues, contact Nutanix Support.

Perform the following steps to avoid unexpected downtime or performance issues.



Procedure

1. Review and resolve any critical alerts. Do one of the following:

» In the Prism Element web console, go to the Alerts page.


» Log on to a Controller VM (CVM) with SSH and display the alerts.
nutanix@cvm$ ncli alert ls

Note: If you receive alerts indicating expired encryption certificates or a key manager is not reachable, resolve
these issues before you shut down the cluster. If you do not resolve these issues, data loss of the cluster might occur.
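
Because ncli alert ls prints plain text, you can narrow the output to critical alerts with standard text tools; a
convenience sketch:
nutanix@cvm$ ncli alert ls | grep -i critical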

2. Verify if the cluster can tolerate a single-node failure. Do one of the following:

» In the Prism Element web console, in the Home page, check the status of the Data Resiliency Status
dashboard.
Verify that the status is OK. If the status is anything other than OK, resolve the indicated issues before you
perform any maintenance activity.
» Log on to a Controller VM (CVM) with SSH and check the fault tolerance status of the cluster.
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node

Domain Type : NODE


Component Type : STATIC_CONFIGURATION
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 14:22:09 GMT+05:00 2015

Domain Type : NODE


Component Type : ERASURE_CODE_STRIP_SIZE
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 13:19:58 GMT+05:00 2015

Domain Type : NODE


Component Type : METADATA
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Mon Sep 28 14:35:25 GMT+05:00 2015

Domain Type : NODE


Component Type : ZOOKEEPER
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Thu Sep 17 11:09:39 GMT+05:00 2015

Domain Type : NODE


Component Type : EXTENT_GROUPS
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 13:19:58 GMT+05:00 2015

Domain Type : NODE


Component Type : OPLOG
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 13:19:58 GMT+05:00 2015

Domain Type : NODE



Component Type : FREE_SPACE
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 14:20:57 GMT+05:00 2015
The value of the Current Fault Tolerance column must be at least 1 for all the nodes in the cluster.
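To scan that output quickly, you can reduce it to the component names and their fault tolerance values; a sketch
using standard text tools:
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node | grep -E "Component Type|Current Fault Tolerance"
Every Current Fault Tolerance line should report a value of at least 1.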

Putting a Node into Maintenance Mode


Put a node into maintenance mode when you need to make changes to the network configuration of a node,
perform manual firmware upgrades, or carry out similar maintenance tasks.

Before you begin

CAUTION: Verify the data resiliency status of your cluster. If the cluster has replication factor 2 (RF2), you can
shut down only one node at a time in each cluster. If an RF2 cluster needs more than one node shut down, shut down the
entire cluster.

See Verifying the Cluster Health on page 18 to check if the cluster can tolerate a single-node failure. Do not
proceed if the cluster cannot tolerate a single-node failure.

About this task


When a host is in maintenance mode, AOS marks the host as unschedulable so that no new VM instances are created
on it. Next, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails, the host remains in the "entering maintenance mode" state, where it is marked
unschedulable, waiting for user remediation. You can shut down VMs on the host or move them to other nodes. Once
the host has no more running VMs, it is in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the cluster. After exiting
maintenance mode, those VMs are automatically returned to the original host, eliminating the need to manually move
them.
VMs with CPU passthrough, PCI passthrough, and host affinity policies are not migrated to other hosts in the cluster.
See Shutting Down a Node in a Cluster (AHV) on page 23 for more information about such VMs.
Agent VMs are always shut down if you put a node in maintenance mode and are powered on again after exiting
maintenance mode.
Perform the following steps to put the node into maintenance mode.

Procedure

1. Use SSH to log on to a Controller VM in the cluster.

2. Determine the IP address of the node you want to put into maintenance mode.
nutanix@cvm$ acli host.list
Note the value of Hypervisor IP for the node you want to put in maintenance mode.



3. Put the node into maintenance mode.
nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-IP-address [wait="{ true |
false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]

Note: Never put the Controller VM and AHV host into maintenance mode on single-node clusters. Nutanix recommends
shutting down user VMs before proceeding with disruptive changes.

Replace hypervisor-IP-address with either the IP address or host name of the AHV host you want to shut
down.
By default, the non_migratable_vm_action parameter is set to block, which means VMs with CPU
passthrough, PCI passthrough, and host affinity policies are not migrated or shut down when you put a node into
maintenance mode.
If you want to automatically shut down such VMs, set the non_migratable_vm_action parameter to
acpi_shutdown.
Agent VMs are always shut down if you put a node in maintenance mode and are powered on again after exiting
maintenance mode.
For example:
nutanix@cvm$ acli host.enter_maintenance_mode 10.x.x.x

4. Verify if the host is in the maintenance mode.


nutanix@cvm$ acli host.get host-ip
In the output that is displayed, ensure that node_state equals kEnteredMaintenanceMode and
schedulable equals False.
Do not continue if the host has failed to enter the maintenance mode.
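
To check just the two fields of interest, you can filter the host.get output; a convenience sketch (replace
10.x.x.x with the IP address of the host you placed in maintenance mode):
nutanix@cvm$ acli host.get 10.x.x.x | grep -E "node_state|schedulable"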

5. See Verifying the Cluster Health on page 18 to once again check if the cluster can tolerate a single-node
failure.

6. Put the CVM into the maintenance mode.


nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=true
Replace host-ID with the ID of the host.
This step prevents the CVM services from being affected by any connectivity issues.
Determine the ID of the host by running the following command:
nutanix@cvm$ ncli host list
An output similar to the following is displayed:

Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
In this example, the host ID is 1234.
Wait for a few minutes until the CVM is put into the maintenance mode.



7. Verify if the CVM is in the maintenance mode. Run the following command on the CVM that you put in the
maintenance mode.
nutanix@cvm$ genesis status | grep -v "\[\]"
An output similar to the following is displayed:

nutanix@cvm$ genesis status | grep -v "\[\]"


genesis: [11622, 11637, 11660, 11661, 14941, 14943]
scavenger: [9037, 9066, 9067, 9068]
zeus: [7615, 7650, 7651, 7653, 7663, 7680]
Only the Genesis, Scavenger, and Zeus processes must be running (the process IDs are displayed next to each
process name).
Do not continue if the CVM has failed to enter the maintenance mode, because it can cause a service interruption.

8. Perform the maintenance activity.

What to do next
Proceed to remove the node from the maintenance mode. See Exiting a Node from the Maintenance Mode on
page 22 for more information.

Exiting a Node from the Maintenance Mode


After you perform any maintenance activity, exit the node from the maintenance mode.

About this task


Perform the following to exit the host from the maintenance mode.

Procedure

1. Remove the CVM from the maintenance mode.

a. From any other CVM in the cluster, run the following command to exit the CVM from the maintenance mode.
nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=false
Replace host-ID with the ID of the host.

Note: The command fails if you run the command from the CVM that is in the maintenance mode.

b. Verify if all processes on all the CVMs are in the UP state.


nutanix@cvm$ cluster status | grep -v UP

Do not continue if the CVM has failed to exit the maintenance mode.



2. Remove the AHV host from the maintenance mode.

a. From any CVM in the cluster, run the following command to exit the AHV host from the maintenance mode.
nutanix@cvm$ acli host.exit_maintenance_mode host-ip
Replace host-ip with the IP address of the host.
This command migrates (live migration) all the VMs that were previously running on the host back to the host.
b. Verify if the host has exited the maintenance mode.
nutanix@cvm$ acli host.get host-ip
In the output that is displayed, ensure that node_state equals kAcropolisNormal and schedulable
equals True.
Contact Nutanix Support if any of the steps described in this document produce unexpected results.

Shutting Down a Node in a Cluster (AHV)


Before you begin

CAUTION: Verify the data resiliency status of your cluster. If the cluster has replication factor 2 (RF2), you can
shut down only one node at a time in each cluster. If an RF2 cluster needs more than one node shut down, shut down the
entire cluster.

See Verifying the Cluster Health on page 18 to check if the cluster can tolerate a single-node failure. Do not
proceed if the cluster cannot tolerate a single-node failure.

About this task


Shut down the Controller VM before you shut down the node. Before you shut down the Controller VM,
put the node into maintenance mode.
When a host is in maintenance mode, VMs that can be migrated are moved from that host to other hosts in the cluster.
After exiting maintenance mode, those VMs are returned to the original host, eliminating the need to manually move
them.
If a host is put in maintenance mode, the following VMs are not migrated:

• VMs with CPU passthrough, PCI passthrough, and host affinity policies are not migrated to other hosts in
the cluster. You can shut down such VMs by setting the non_migratable_vm_action parameter to
acpi_shutdown.
• Agent VMs are always shut down if you put a node in maintenance mode and are powered on again after exiting
maintenance mode.
Perform the following procedure to shut down a node.

Procedure

1. If the Controller VM is running, shut down the Controller VM.

a. Log on to the CVM with SSH.


b. List all the hosts in the cluster.
nutanix@cvm$ acli host.list
An output similar to the following is displayed:
nutanix@cvm$ acli host.list
Hypervisor IP  Hypervisor DNS Name  Host UUID                     Compute Only  Schedulable  Hypervisor Type  Hypervisor Name
10.x.x.x       10.x.x.x             23e5c9b6-xxxx-447a-88c1-xxxx  True          True         kKvm             AHV
10.x.x.x       10.x.x.x             225869b8-0054-4a16-baab-xxxx  False         True         kKvm             AHV
10.x.x.x       10.x.x.x             20698a7f-0f3b-4b6e-ad49-xxxx  False         True         kKvm             AHV
10.x.x.x       10.x.x.x             ab265af6-98b7-4b5f-8d1b-xxxx  False
Note the value of Hypervisor IP for the node you want to shut down.
c. Put the node into maintenance mode.
See Putting a Node into Maintenance Mode on page 20 for instructions about how to put a node into
maintenance mode.
d. Shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
See Starting a Node in a Cluster (AHV) on page 24 for instructions about how to start a CVM.

2. Log on to the AHV host with SSH.

3. Shut down the host.


root@ahv# shutdown -h now

What to do next
See Starting a Node in a Cluster (AHV) on page 24 for instructions about how to start a node, including
how to start a CVM and how to exit a node from maintenance mode.

Starting a Node in a Cluster (AHV)


About this task

Procedure

1. On the hardware appliance, power on the node.

2. Log on to the AHV host with SSH.

3. If the node is in maintenance mode, log on to the Controller VM and take the node out of maintenance mode.
See Exiting a Node from the Maintenance Mode on page 22 for more information.

4. Log on to another CVM in the Nutanix cluster with SSH.

5. Verify that all services are up on all the CVMs.


nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix
cluster.
CVM: <host IP-Address> Up
Zeus UP [9935, 9980, 9981, 9994, 10015, 10037]
Scavenger UP [25880, 26061, 26062]



Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391, 25393,
25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901, 25906,
25941, 25942]
CIM UP [25721, 25829, 25830, 25856]
AlertManager UP [25727, 25862, 25863, 25990]
Arithmos UP [25737, 25896, 25897, 26040]
Catalog UP [25749, 25989, 25991]
Acropolis UP [26011, 26118, 26119]
Uhura UP [26037, 26165, 26166]
Snmp UP [26057, 26214, 26215]
NutanixGuestTools UP [26105, 26282, 26283, 26299]
MinervaCVM UP [27343, 27465, 27466, 27730]
ClusterConfig UP [27358, 27509, 27510]
Aequitas UP [27368, 27567, 27568, 27600]
APLOSEngine UP [27399, 27580, 27581]
APLOS UP [27853, 27946, 27947]
Lazan UP [27865, 27997, 27999]
Delphi UP [27880, 28058, 28060]
Flow UP [27896, 28121, 28124]
Anduril UP [27913, 28143, 28145]
XTrim UP [27956, 28171, 28172]
ClusterHealth UP [7102, 7103, 27995, 28209,28495, 28496,
28503, 28510,
28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646,
28648, 28792,
28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142,
29146, 29150,
29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]

Shutting Down an AHV Cluster


You might need to shut down an AHV cluster to perform a maintenance activity or tasks such as relocating
the hardware.

Before you begin


Ensure the following before you shut down the cluster.
1. Upgrade to the most recent version of NCC.



2. Log on to a Controller VM (CVM) with SSH and run the complete NCC health check.
nutanix@cvm$ ncc health_checks run_all
If you receive any failure or error messages, resolve those issues by referring to the KB articles indicated in the
output of the NCC check results. If you are unable to resolve these issues, contact Nutanix Support.

Warning: If you receive alerts indicating expired encryption certificates or a key manager is not
reachable, resolve these issues before you shut down the cluster. If you do not resolve these issues, data loss of the
cluster might occur.

About this task


Shut down an AHV cluster in the following sequence.

Procedure

1. Shut down the services or VMs associated with AOS features or Nutanix products. For example, shut down all the
Nutanix file server VMs (FSVMs). See the documentation of those features or products for more information.

2. Shut down all the guest VMs in the cluster in one of the following ways.

» Shut down the guest VMs from within the guest OS.
» Shut down the guest VMs by using the Prism Element web console.
» If you are running many VMs, shut down the VMs by using aCLI:

a. Log on to a CVM in the cluster with SSH.


b. Shut down all the guest VMs in the cluster.
nutanix@cvm$ for i in `acli vm.list power_state=on | awk '{print $1}' | grep -v NTNX` ; do
acli vm.shutdown $i ; done

c. Verify that all the guest VMs are shut down.


nutanix@cvm$ acli vm.list power_state=on

d. If any VMs are still on, consider powering them off from within the guest OS. To force a shutdown through
AHV, run the following command:
nutanix@cvm$ acli vm.off vm-name
Replace vm-name with the name of the VM you want to shut down.

3. Stop the Nutanix cluster.

a. Log on to any CVM in the cluster with SSH.


b. Stop the cluster.
nutanix@cvm$ cluster stop

c. Verify that the cluster services have stopped.


nutanix@cvm$ cluster status

The output displays the message The state of the cluster: stop, which confirms that the cluster has
stopped.

Note: Some system services continue to run even if the cluster has stopped.

4. Shut down all the CVMs in the cluster. Log on to each CVM in the cluster with SSH and shut down that CVM.
nutanix@cvm$ sudo shutdown -P now

5. Shut down each node in the cluster. Perform the following steps for each node in the cluster.

a. Log on to the IPMI web console of each node.


b. Under Remote Control > Power Control, select Power Off Server - Orderly Shutdown to gracefully
shut down the node.
c. Ping each host to verify that all AHV hosts are shut down.

6. Complete the maintenance activity or any other tasks.

7. Start all the nodes in the cluster.

a. Press the power button on the front of the block for each node.
b. Log on to the IPMI web console of each node.
c. On the System tab, check the Power Control status to verify that the node is powered on.

8. Start the cluster.

a. Wait for approximately 5 minutes after you start the last node to allow the cluster services to start.
All CVMs start automatically after you start all the nodes.
b. Log on to any CVM in the cluster with SSH.
c. Start the cluster.
nutanix@cvm$ cluster start

d. Verify that all the cluster services are in the UP state.


nutanix@cvm$ cluster status

e. Start the guest VMs from within the guest OS or use the Prism Element web console.
If you are running many VMs, start the VMs by using aCLI:
nutanix@cvm$ for i in `acli vm.list power_state=off | awk '{print $1}' | grep -v NTNX` ;
do acli vm.on $i; done

f. Start the services or VMs associated with AOS features or Nutanix products. For example, start all the FSVMs.
See the documentation of those features or products for more information.
g. Verify that all guest VMs are powered on by using the Prism Element web console.

Changing CVM Memory Configuration (AHV)


About this task
You can increase the memory reserved for each Controller VM in your cluster by using the 1-click Controller VM
Memory Upgrade available from the Prism Element web console. Increase memory size depending on the workload
type or to enable certain AOS features. See the Increasing the Controller VM Memory Size topic in the Prism Web
Console Guide for CVM memory sizing recommendations and instructions about how to increase the CVM memory.

Changing the AHV Hostname
To change the name of an AHV host, log on to any Controller VM (CVM) in the cluster and run the
change_ahv_hostname script.

About this task


Perform the following procedure to change the name of an AHV host:

Procedure

1. Log on to any CVM in the cluster with SSH.

2. Change the hostname of the AHV host.


nutanix@cvm$ change_ahv_hostname --host_ip=host-IP-address --host_name=new-host-name
Replace host-IP-address with the IP address of the host whose name you want to change and new-host-name
with the new hostname for the AHV host.

Note: The new hostname must meet the following naming requirements:

• The maximum length is 64 characters.


• Allowed characters are uppercase and lowercase letters (A-Z and a-z), decimal digits (0-9), dots (.),
and hyphens (-).
• The entity name must start and end with a number or letter.

If you want to update the hostname of multiple hosts in the cluster, run the script for one host at a time
(sequentially).

Note: The Prism Element web console displays the new hostname after a few minutes.
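For example, to rename a host, you might run the following. The IP address and hostname here are illustrative
placeholders, not values from an actual cluster:
nutanix@cvm$ change_ahv_hostname --host_ip=10.10.10.11 --host_name=AHV-PROD-01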

Changing the Name of the CVM Displayed in the Prism Web Console
You can change the CVM name that is displayed in the Prism web console. The procedure described
in this document does not change the CVM name that is displayed in the terminal or console of an SSH
session.

About this task


You can change the CVM name by using the change_cvm_display_name script. Run this script from a CVM other
than the CVM whose name you want to change. When you run the change_cvm_display_name script, AOS performs
the following steps:

1. Checks if the new name starts with NTNX- and ends with -CVM. The CVM name must have only letters,
numbers, and dashes (-).
2. Checks if the CVM has received a shutdown token.
3. Powers off the CVM. The script does not put the CVM or host into maintenance mode. Therefore, the VMs are
not migrated from the host and continue to run with the I/O operations redirected to another CVM while the
current CVM is in a powered-off state.
4. Changes the CVM name, enables autostart, and powers on the CVM.
Perform the following to change the CVM name displayed in the Prism web console.

Procedure

1. Use SSH to log on to a CVM other than the CVM whose name you want to change.

2. Change the name of the CVM.
nutanix@cvm$ change_cvm_display_name --cvm_ip=CVM-IP --cvm_name=new-name
Replace CVM-IP with the IP address of the CVM whose name you want to change and new-name with the new
name for the CVM.
The CVM name must have only letters, numbers, and dashes (-), and must start with NTNX- and end with -CVM.

Note: Do not run this command from the CVM whose name you want to change, because the script powers off the
CVM. In this case, when the CVM is powered off, you lose connectivity to the CVM from the SSH console and the
script abruptly ends.
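For example, the following invocation (with illustrative placeholder values) renames a CVM so that its display name
conforms to the required NTNX-...-CVM pattern:
nutanix@cvm$ change_cvm_display_name --cvm_ip=10.10.10.12 --cvm_name=NTNX-Block1-1-CVM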

Adding a Never-Schedulable Node (AHV Only)


Add a never-schedulable node if you want to add a node to increase data storage on your Nutanix cluster,
but do not want any AHV VMs to run on that node.

About this task


AOS never schedules any VMs on a never-schedulable node. Therefore, a never-schedulable node configuration
ensures that no additional compute resources such as CPUs are consumed from the Nutanix cluster. In this way, you
can meet the compliance and licensing requirements of your virtual applications.
Note the following points about a never-schedulable node configuration.

Note:

• Ensure that at any given time, the cluster has a minimum of three functioning nodes (never-schedulable or
otherwise). To add your first never-schedulable node to your Nutanix cluster, the cluster must comprise at
least three schedulable nodes.
• You can add any number of never-schedulable nodes to your Nutanix cluster.
• If you want a node that is already a part of the cluster to work as a never-schedulable node, remove that
node from the cluster and then add that node as a never-schedulable node.
• If you no longer need a node to work as a never-schedulable node, remove the node from the cluster.

Perform the following procedure to add a never-schedulable node.

Note:
If you want a node that is already a part of the cluster to work as a never-schedulable node, see step 1, skip
step 2, and then proceed to step 3.
If you want to add a new node as a never-schedulable node, skip step 1 and proceed to step 2.

Procedure

1. Perform the following steps if you want a node that is already a part of the cluster to work as a never-
schedulable node:

a. Determine the UUID of the node that you want to use as a never-schedulable node:
nutanix@cvm$ ncli host ls
You require the UUID of the node when you are adding the node back to the cluster as a never-schedulable
node.
b. Remove the node from the cluster.
For information about how to remove a node from a cluster, see the Modifying a Cluster topic in Prism Web
Console Guide.
c. Proceed to step 3.

2. Perform the following steps if you want to add a new node as a never-schedulable node:

a. Image the node as an AHV node by using Foundation.


See Field Installation Guide for instructions about how to image a node as an AHV node.
b. Use SSH to log on to the host as the root user.
Log on to the node by using the following credentials:

• username: root
• password: nutanix/4u
c. Determine the UUID of the node.
root@host$ cat /etc/nutanix/factory_config.json
An output similar to the following is displayed:
{"rackable_unit_serial": "xxxx", "node_uuid": "xxxx", "node_serial": "xxxx"
Note the value of the "node_uuid" parameter.
You require the UUID of the node when you are adding the node to the cluster as a never-schedulable node.
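If you prefer to extract only the UUID, a convenience one-liner such as the following can help. This is a sketch
that assumes Python is available on the host; it is not part of the official procedure:
root@host$ python -c 'import json; print(json.load(open("/etc/nutanix/factory_config.json"))["node_uuid"])'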

3. Log on to a Controller VM in the cluster with SSH.

4. Add a node as a never-schedulable node.


nutanix@cvm$ ncli -h true cluster add-node node-uuid=uuid-of-the-node never-schedulable-
node=true
Replace uuid-of-the-node with the UUID of the node you want to add as a never-schedulable node.
The never-schedulable-node parameter is optional and is required only if you want to add a never-schedulable
node.
If you no longer need a node to work as a never-schedulable node, remove the node from the cluster.
If you want the never-schedulable node to now work as a schedulable node, remove the node from the cluster and
add the node back to the cluster by using the Prism Element web console.

Note: For information about how to add a node (other than a never-schedulable node) to a cluster, see the
Expanding a Cluster topic in Prism Web Console Guide.
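For example, with an illustrative UUID obtained in step 1 or step 2:
nutanix@cvm$ ncli -h true cluster add-node node-uuid=12345678-abcd-ef01-2345-6789abcdef01 never-schedulable-node=true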

Compute-Only Node Configuration (AHV Only)
A compute-only (CO) node allows you to seamlessly and efficiently expand the computing capacity (CPU and
memory) of your AHV cluster. The Nutanix cluster uses the resources (CPUs and memory) of a CO node exclusively
for computing purposes.
You can use a supported server or an existing hyperconverged (HC) node as a CO node. To use a node as CO, image
the node as CO by using Foundation and then add that node to the cluster by using the Prism Element web console.
See Field Installation Guide for information about how to image a node as a CO node.

Note: If you want an existing HC node that is already a part of the cluster to work as a CO node, remove that node
from the cluster, image that node as CO by using Foundation, and add that node back to the cluster. See the Modifying a
Cluster topic in Prism Web Console Guide for information about how to remove a node.

Key Features of Compute-Only Node


Following are the key features of CO nodes.

• CO nodes do not have a Controller VM (CVM) and local storage.


• AOS sources the storage for vDisks associated with VMs running on CO nodes from the hyperconverged (HC)
nodes in the cluster.
• You can seamlessly manage your VMs (CRUD operations, ADS, and HA) by using the Prism Element web
console.
• AHV runs on the local storage media of the CO node.
• To update AHV on a cluster that contains a compute-only node, use the Life Cycle Manager. See Life Cycle
Manager Guide: LCM Updates for more information.

Use Case of Compute-Only Node


CO nodes enable you to achieve more control and value from restrictive licenses such as Oracle. A CO node is part
of a Nutanix HC cluster, and there is no CVM running on the CO node (VMs use CVMs running on the HC nodes to
access disks). As a result, licensed cores on the CO node are used only for the application VMs.
Applications or databases that are licensed on a per CPU core basis require the entire node to be licensed and that
also includes the cores on which the CVM runs. With CO nodes, you get a much higher ROI on the purchase of
your database licenses (such as Oracle and Microsoft SQL Server) since the CVM does not consume any compute
resources.

Minimum Cluster Requirements


Following are the minimum cluster requirements for compute-only nodes.

• The Nutanix cluster must be at least a three-node cluster before you add a compute-only node.
However, Nutanix recommends that the cluster has four nodes before you add a compute-only node.
• The ratio of compute-only to hyperconverged nodes in a cluster must not exceed the following:
1 compute-only : 2 hyperconverged
• All the hyperconverged nodes in the cluster must be all-flash nodes.
• The number of vCPUs assigned to CVMs on the hyperconverged nodes must be greater than or equal to the total
number of available cores on all the compute-only nodes in the cluster. The CVM requires a minimum of 12
vCPUs. See the CVM vCPU and vRAM Allocation topic in Field Installation Guide for information about how
Foundation allocates memory and vCPUs to your platform model.

• The total amount of NIC bandwidth allocated to all the hyperconverged nodes must be twice the amount of the
total NIC bandwidth allocated to all the compute-only nodes in the cluster.
Nutanix recommends you use dual 25 GbE on CO nodes and quad 25 GbE on an HC node serving storage to a CO
node.
• The AHV version of the compute-only node must be the same as the other nodes in the cluster.
When you are adding a CO node to the cluster, AOS checks if the AHV version of the node matches with the
AHV version of the existing nodes in the cluster. If there is a mismatch, the add node operation fails.
See the Expanding a Cluster topic in the Prism Web Console Guide for general requirements about adding a node to a
Nutanix cluster.

Restrictions
Nutanix does not support the following features or tasks on a CO node in this release:
1. Host boot disk replacement
2. Network segmentation

Supported AOS Versions


Nutanix supports compute-only nodes on AOS releases 5.11 or later.

Supported Hardware Platforms


Compute-only nodes are supported on the following hardware platforms.

• All the NX series hardware


• Dell XC Core
• Cisco UCS

Networking Configuration
To perform network tasks on a compute-only node such as creating an Open vSwitch bridge or changing uplink load
balancing, you must add the --host flag to the manage_ovs commands as shown in the following example:
nutanix@cvm$ manage_ovs --host IP_address_of_co_node --bridge_name bridge_name
create_single_bridge
Replace IP_address_of_co_node with the IP address of the CO node and bridge_name with the name of bridge you
want to create.

Note: Run the manage_ovs commands for a CO node from any CVM running on a hyperconverged node.

Perform the networking tasks for each CO node in the cluster individually.
See the Host Network Management section in the AHV Administration Guide for more information about networking
configuration of the AHV hosts.
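For example, to view the current uplink configuration of a hypothetical CO node with the IP address 10.10.10.21
(an illustrative value), run the following from a CVM on a hyperconverged node:
nutanix@cvm$ manage_ovs --host 10.10.10.21 show_uplinks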

Adding a Compute-Only Node to an AHV Cluster

About this task


Perform the following procedure to add a compute-only node to a Nutanix cluster.

Procedure

1. Log on to the Prism Element web console.

2. Do one of the following:

» Click the gear icon in the main menu and select Expand Cluster in the Settings page.
» Go to the hardware dashboard (see the Hardware Dashboard topic in Prism Web Console Guide) and click
Expand Cluster.

3. In the Select Host screen, scroll down and, under Manual Host Discovery, click Discover Hosts
Manually.

Figure 4: Discover Hosts Manually

4. Click Add Host.

Figure 5: Add Host

5. Under Host or CVM IP, type the IP address of the AHV host and click Save.
This node does not have a Controller VM and you must therefore provide the IP address of the AHV host.

6. Click Discover and Add Hosts.


Prism Element discovers this node and the node appears in the list of nodes in the Select Host screen.

7. Select the node to display the details of the compute-only node.

8. Click Next.

9. In the Configure Host screen, click Expand Cluster.


The add node process begins and Prism Element performs a set of checks before the node is added to the cluster.
Check the progress of the operation in the Tasks menu of the Prism Element web console. The operation takes
approximately five to seven minutes to complete.

10. Check the Hardware Diagram view to verify that the node is added to the cluster.
You can identify a node as a CO node if the Prism Element web console does not display an IP address for the
CVM.

3. HOST NETWORK MANAGEMENT
Network management in an AHV cluster consists of the following tasks:

• Configuring Layer 2 switching through virtual switches and Open vSwitch bridges. When configuring a virtual
switch, you configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the hosts during the
imaging process.

Virtual Networks (Layer 2)


Each VM network interface is bound to a virtual network. Each virtual network is bound to a single VLAN; trunking
VLANs to a virtual network is not supported. Networks are designated by the Layer 2 type (vlan) and the VLAN
number.
By default, each virtual network maps to a virtual switch, such as the default virtual switch vs0. However, you can
change this setting to map a virtual network to a custom virtual switch. The user is responsible for ensuring that
the specified virtual switch exists on all hosts, and that the physical switch ports for the virtual switch uplinks are
properly configured to receive VLAN-tagged traffic.
For more information about virtual switches, see About Virtual Switch on page 40.
A VM NIC must be associated with a virtual network. You can change the virtual network of a vNIC without deleting
and recreating the vNIC.

Managed Networks (Layer 3)


A virtual network can have an IPv4 configuration, but it is not required. A virtual network with an IPv4 configuration
is a managed network; one without an IPv4 configuration is an unmanaged network. A VLAN can have at most one
managed network defined. If a virtual network is managed, every NIC is assigned an IPv4 address at creation time.
A managed network can optionally have one or more non-overlapping DHCP pools. Each pool must be entirely
contained within the network's managed subnet.
If the managed network has a DHCP pool, the NIC automatically gets assigned an IPv4 address from one of the pools
at creation time, provided at least one address is available. Addresses in the DHCP pool are not reserved. That is, you
can manually specify an address belonging to the pool when creating a virtual adapter. If the network has no DHCP
pool, you must specify the IPv4 address manually.
All DHCP traffic on the network is rerouted to an internal DHCP server, which allocates IPv4 addresses. DHCP
traffic on the virtual network (that is, between the guest VMs and the Controller VM) does not reach the physical
network, and vice versa.
A network must be configured as managed or unmanaged when it is created. It is not possible to convert one to the
other.

Figure 6: AHV Networking Architecture

Prerequisites for Configuring Networking


Change the configuration from the factory default to the recommended configuration. See AHV Networking
Recommendations on page 36.

AHV Networking Recommendations


Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as described
in this documentation:

• Viewing the network configuration


• Configuring uplink bonds with desired interfaces using the Virtual Switch (VS) configurations.
• Assigning the Controller VM to a VLAN
For performing other network configuration tasks such as adding an interface to a bridge and configuring LACP for
the interfaces in a bond, follow the procedures described in the AHV Networking best practices documentation under
Solutions Documentation on the Nutanix Support portal.
Nutanix recommends that you configure the network as follows:

Table 4: Recommended Network Configuration

Virtual Switch
Do not modify the OpenFlow tables of any bridges configured in any VS configurations in the AHV hosts.
Do not rename the default virtual switch vs0. You cannot delete the default virtual switch vs0.
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.

Switch Hops
Nutanix nodes send storage replication traffic to each other in a distributed fashion over the top-of-rack
network. One Nutanix node can, therefore, send replication traffic to any other Nutanix node in the cluster.
The network should provide low and predictable latency for this traffic. Ensure that there are no more than
three switches between any two Nutanix nodes in the same cluster.

Switch Fabric
A switch fabric is a single leaf-spine topology or all switches connected to the same switch aggregation
layer. The Nutanix VLAN shares a common broadcast domain within the fabric. Connect all Nutanix nodes
that form a cluster to the same switch fabric. Do not stretch a single Nutanix cluster across multiple,
disconnected switch fabrics.
Every Nutanix node in a cluster should therefore be in the same L2 broadcast domain and share the same
IP subnet.

WAN Links
A WAN (wide area network) or metro link connects different physical sites over a distance. As an extension
of the switch fabric requirement, do not place Nutanix nodes in the same cluster if they are separated by a
WAN.

VLANs
Add the Controller VM and the AHV host to the same VLAN. Place all CVMs and AHV hosts in a cluster in
the same VLAN. By default, the CVM and AHV host are untagged, shown as VLAN 0, which effectively
places them on the native VLAN configured on the upstream physical switch.

Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor
host are assigned. Isolate guest VMs on one or more separate VLANs.

Nutanix recommends configuring the CVM and hypervisor host VLAN as the native, or untagged, VLAN on
the connected switch ports. This native VLAN configuration allows for easy node addition and cluster
expansion. By default, new Nutanix nodes send and receive untagged traffic. If you use a tagged VLAN for
the CVM and hypervisor hosts instead, you must configure that VLAN while provisioning the new node,
before adding that node to the Nutanix cluster.
Use tagged VLANs for all guest VM traffic and add the required guest VM VLANs to all connected switch
ports for hosts in the Nutanix cluster. Limit guest VLANs for guest VM traffic to the smallest number of
physical switches and switch ports possible to reduce broadcast network traffic load. If a VLAN is no longer
needed, remove it.

Default VS bonded port (br0-up)
Aggregate the fastest links of the same speed on the physical host to a VS bond on the default vs0 and
provision VLAN trunking for these interfaces on the physical switch.
By default, interfaces in the bond in the virtual switch operate in the recommended active-backup mode.

Note: Mixing bond modes across AHV hosts in the same cluster is not recommended and not supported.

1 GbE and 10 GbE interfaces (physical host)
If 10 GbE or faster uplinks are available, Nutanix recommends that you use them instead of 1 GbE uplinks.
Recommendations for 1 GbE uplinks are as follows:

• If you plan to use 1 GbE uplinks, do not include them in the same bond as the 10 GbE interfaces.
Nutanix recommends that you do not use uplinks of different speeds in the same bond.
• If you choose to configure only 1 GbE uplinks, when migration of memory-intensive VMs becomes
necessary, power the VMs off and on again on a new host instead of using live migration. In this context,
memory-intensive VMs are VMs whose memory changes at a rate that exceeds the bandwidth offered by
the 1 GbE uplinks.
Nutanix recommends the manual procedure for memory-intensive VMs because live migration, which
you initiate either manually or by placing the host in maintenance mode, might appear prolonged or
unresponsive and might eventually fail.
Use the aCLI on any CVM in the cluster to start the VMs on another AHV host:
nutanix@cvm$ acli vm.on vm_list host=host
Replace vm_list with a comma-delimited list of VM names and replace host with the IP address or UUID
of the target host.
• If you must use only 1 GbE uplinks, add them into a bond to increase bandwidth and use the balance-TCP
(LACP) or balance-SLB bond mode.

IPMI port on the hypervisor host
Do not use VLAN trunking on switch ports that connect to the IPMI interface. Configure the switch ports as
access ports for management simplicity.

Upstream physical switch
Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production use
cases. While initial, low-load implementations might run smoothly with such technologies, poor
performance, VM lockups, and other issues might occur as implementations scale upward (see Knowledge
Base article KB1612). Nutanix recommends the use of 10 Gbps, line-rate, non-blocking switches with larger
buffers for production workloads.
Cut-through versus store-and-forward selection depends on network design. In designs with no
oversubscription and no speed mismatches, you can use low-latency cut-through switches. If you have any
oversubscription or any speed mismatch in the network design, use a switch with larger buffers. Port-to-port
latency should be no higher than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected to the
hypervisor host.

Physical Network Layout
Use redundant top-of-rack switches in a traditional leaf-spine architecture. This simple, flat network design
is well suited for a highly distributed, shared-nothing compute and storage architecture.
Add all the nodes that belong to a given cluster to the same Layer-2 network segment.
Other network layouts are supported as long as all other Nutanix recommendations are followed.

Jumbo Frames
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the
network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability.
Nutanix does not support configuring the MTU on network interfaces of a CVM to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV hosts and
guest VMs if the applications on your guest VMs require them. If you choose to use jumbo frames on
hypervisor hosts, be sure to enable them end to end in the desired network and consider both the physical
and virtual network infrastructure impacted by the change.

Controller VM
Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.

Rack Awareness and Block Awareness
Block awareness and rack awareness provide smart placement of Nutanix cluster services, metadata, and
VM data to help maintain data availability, even when you lose an entire block or rack. The same network
requirements for low latency and high throughput between servers in the same cluster still apply when using
block and rack awareness.

Note: Do not use features like block or rack awareness to stretch a Nutanix cluster between different
physical sites.

Oversubscription
Oversubscription occurs when an intermediate network device or link does not have enough capacity to
allow line-rate communication between the systems connected to it. For example, if a 10 Gbps link connects
two switches and four hosts connect to each switch at 10 Gbps, the connecting link is oversubscribed.
Oversubscription is often expressed as a ratio, in this case 4:1, as the environment could potentially attempt
to transmit 40 Gbps between the switches with only 10 Gbps available. Achieving a ratio of 1:1 is not always
feasible. However, you should keep the ratio as small as possible based on budget and available capacity. If
there is any oversubscription, choose a switch with larger buffers.
In a typical deployment where Nutanix nodes connect to redundant top-of-rack switches, storage replication
traffic between CVMs traverses multiple devices. To avoid packet loss due to link oversubscription, ensure
that the switch uplinks consist of multiple interfaces operating at a faster speed than the Nutanix host
interfaces. For example, for nodes connected at 10 Gbps, the inter-switch connection should consist of
multiple 10 Gbps or 40 Gbps links.

This diagram shows the recommended network configuration for an AHV cluster. The interfaces in the diagram are
connected with colored lines to indicate membership in different VLANs:

Figure 7: Recommended Network Configuration for an AHV Cluster

IP Address Management
IP Address Management (IPAM) is a feature of AHV that allows it to assign IP addresses automatically to VMs by
using DHCP. You can configure each virtual network with a specific IP address subnet, associated domain settings,
and IP address pools available for assignment to VMs.
An AHV network is defined as a managed network or an unmanaged network based on the IPAM setting.
Managed Network
Managed network refers to an AHV network in which IPAM is enabled.
Unmanaged Network
Unmanaged network refers to an AHV network in which IPAM is not enabled or is disabled.
IPAM is enabled, or not, in the Create Network dialog box when you create a virtual network for guest VMs. See
the Configuring a Virtual Network for Guest VM Interfaces topic in the Prism Web Console Guide.

Note: You can enable IPAM only when you are creating a virtual network. You cannot enable or disable IPAM for an
existing virtual network.

IPAM enabled or disabled status has implications. For example, when you want to reconfigure the IP address of a
Prism Central VM, the procedure to do so may involve additional steps for managed networks (that is, networks with
IPAM enabled) where the new IP address belongs to an IP address range different from the previous IP address range.
See Reconfiguring the IP Address and Gateway of Prism Central VMs in Prism Central Guide.

Layer 2 Network Management


AHV uses a virtual switch (VS) to connect the Controller VM, the hypervisor, and the guest VMs to each other
and to the physical network. A virtual switch is configured by default on each AHV node, and the VS services start
automatically when you start a node.
To configure virtual networking in an AHV cluster, you need to be familiar with virtual switch. This documentation
gives you a brief overview of virtual switch and the networking components that you need to configure to enable the
hypervisor, Controller VM, and guest VMs to connect to each other and to the physical network.

About Virtual Switch


Virtual switches (VS) are used to manage multiple bridges and uplinks.
The VS configuration is designed to provide flexibility in configuring virtual bridge connections. A virtual switch
(VS) defines a collection of AHV nodes and the uplink ports on each node. It is an aggregation of the same OVS
bridge on all the compute nodes in a cluster. For example, the default virtual switch vs0 is an aggregation of the br0
bridge and br0-up uplinks of all the nodes.
After you configure a VS, you can use the VS as a reference for physical network management instead of using the
bridge names as references.
For an overview of virtual switches, see Virtual Switch Considerations on page 43.
For information about OVS, see About Open vSwitch on page 47.

Virtual Switch Workflow


A virtual switch (VS) defines a collection of AHV compute nodes and the uplink ports on each node. It is an
aggregation of the same OVS bridge on all the compute nodes in a cluster. For example, the default virtual switch
vs0 is an aggregation of the br0 bridge of all the nodes.
The system creates the default virtual switch vs0 connecting the default bridge br0 on all the hosts in the cluster
during installation of or upgrade to the compatible versions of AOS and AHV. Default virtual switch vs0 has the
following characteristics:

• The default virtual switch cannot be deleted.


• The default bridges br0 on all the nodes in the cluster map to vs0. Thus, vs0 is not empty; it has at least one
uplink configured.
• The default management connectivity to a node is mapped to the default bridge br0, which is mapped to vs0.
• The default parameter values of vs0 (Name, Description, MTU, and Bond Type) can be modified subject to the
aforesaid characteristics.
• The default virtual switch is configured with the Active-Backup uplink bond type.
For more information about bond types, see the Bond Type table.
The virtual switch aggregates the same bridges on all nodes in the cluster. The bridges (for example, br1) connect to
a physical Ethernet port such as eth3 via the corresponding uplink (for example, br1-up). The uplink ports
of the bridges are connected to the same physical network. For example, the following illustration shows that vs0 is
mapped to the br0 bridge, in turn connected via uplink br0-up to various (physical) Ethernet ports on different nodes.

Figure 8: Virtual Switch

Uplink configuration uses bonds to improve traffic management. The bond types are defined for the aggregated OVS
bridges. A new bond type, No uplink bond, provides a no-bonding option. A virtual switch configured with the No
uplink bond type has 0 or 1 uplinks. When you configure a virtual switch with any other bond type, you must select
at least two uplink ports on every node.
If you change the uplink configuration of vs0, AOS applies the updated settings to all the nodes in the cluster one
after the other (a rolling update process). To update the settings in a cluster, AOS performs the following tasks
when the Standard configuration method is applied:
1. Puts the node in maintenance mode (migrates VMs out of the node)
2. Applies the updated settings
3. Checks connectivity with the default gateway
4. Exits maintenance mode
5. Proceeds to apply the updated settings to the next node
AOS does not put the nodes in maintenance mode when the Quick configuration method is applied.

Table 5: Bond Types

Active-Backup
Use case: Recommended. Default configuration, which transmits all traffic over a single active adapter.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 10 Gb.

Active-Active with MAC pinning (also known as balance-slb)
Use case: Works with caveats for multicast traffic. Increases host bandwidth utilization beyond a single
10 Gb adapter. Places each VM NIC on a single adapter at a time. Do not use this bond type with link
aggregation protocols such as LACP.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 20 Gb.

Active-Active (also known as LACP with balance-tcp)
Use case: LACP and link aggregation required. Increases host and VM bandwidth utilization beyond a
single 10 Gb adapter by balancing VM NIC TCP and UDP sessions among adapters. Also used when
network switches require LACP negotiation.
The default LACP settings are:

• Speed: Fast (1s)
• Mode: Active fallback-active-backup
• Priority: Default. This is not configurable.

Maximum VM NIC throughput: 20 Gb. Maximum host throughput: 20 Gb.

No Uplink Bond
Use case: No uplink or a single uplink on each host. A virtual switch configured with the No uplink bond
type has 0 or 1 uplinks. When you configure a virtual switch with any other bond type, you must select at
least two uplink ports on every node.
Maximum VM NIC throughput: not applicable. Maximum host throughput: not applicable.

Note the following points about the uplink configuration.

• Virtual switches are not enabled in a cluster that has one or more compute-only nodes. See Virtual Switch
Limitations on page 49 and Virtual Switch Requirements on page 48.
• If you select the Active-Active policy, you must manually enable LAG and LACP on the corresponding ToR
switch for each node in the cluster.
• If you reimage a cluster with the Active-Active policy enabled, the default virtual switch (vs0) on the reimaged
cluster reverts to the Active-Backup policy. The other virtual switches are removed during reimage.
• Nutanix recommends configuring LACP with fallback to active-backup or individual mode on the ToR switches.
The configuration and behavior varies based on the switch vendor. Use a switch configuration that allows both
switch interfaces to pass traffic after LACP negotiation fails.

Virtual Switch Considerations

Virtual Switch Deployment


A VS configuration is deployed using a rolling update of the cluster. After the VS configuration (creation or update)
is received and execution starts, every node is first put into maintenance mode before the VS configuration is made
or modified on the node. This is the Standard method, which is the recommended default for configuring a VS.
You can also select the Quick method of configuration, where the rolling update does not put the nodes in
maintenance mode. The VS configuration task is marked as successful when the configuration is successful on the
first node. Any configuration failure on successive nodes triggers corresponding NCC alerts. There is no change to
the task status.

Note:
If you are modifying an existing bond, AHV removes the bond and then re-creates the bond with the
specified interfaces.
Ensure that the interfaces you want to include in the bond are physically connected to the Nutanix appliance
before you run the command described in this topic. If the interfaces are not physically connected to the
Nutanix appliance, the interfaces are not added to the bond.

The VS configuration is stored and re-applied at system reboot.


The VM NIC configuration also displays the VS details. When you Update VM configuration or Create NIC for a
VM, the NIC details show the virtual switches that can be associated. This allows you to change a virtual network and
the associated virtual switch.
To change the virtual network, select the virtual network in the Subnet Name dropdown list in the Create NIC or
Update NIC dialog box.

Figure 9: Create VM - VS Details

Figure 10: VM NIC - VS Details

Impact of Installation of or Upgrade to Compatible AOS and AHV Versions


See Virtual Switch Requirements for information about minimum and compatible AOS and AHV versions.
When you upgrade the AOS to a compatible version from an older version, the upgrade process:

• Triggers the creation of the default virtual switch vs0 which is mapped to bridge br0 on all the nodes.

• Validates bridge br0 and its uplinks for consistency in terms of MTU and bond-type on every node.
If valid, it adds the bridge br0 of each node to the virtual switch vs0.
If br0 configuration is not consistent, the system generates an NCC alert which provides the failure reason and
necessary details about it.
The system migrates only the bridge br0 on each node to the default virtual switch vs0 because the connectivity of
bridge br0 is guaranteed.
• Does not migrate any other bridges to any other virtual switches during upgrade. You need to manually migrate
the other bridges after the installation or upgrade is complete.

Bridge Migration
After upgrading to a compatible version of AOS, you can migrate bridges other than br0 that existed on the nodes.
When you migrate the bridges, the system converts the bridges to virtual switches.
See Virtual Switch Migration Requirements in Virtual Switch Requirements on page 48.

Note: You can migrate only those bridges that are present on every compute node in the cluster. See the Migrating
Bridges after Upgrade topic in Network Management in the Prism Web Console Guide.

Cluster Scaling Impact


VS management for cluster scaling (addition or removal of nodes) is seamless.
Node Removal
When you remove a node, the system detects the removal and automatically removes the node from all the
VS configurations that include the node and generates an internal system update. For example, a node has
two virtual switches, vs1 and vs2, configured apart from the default vs0. When you remove the node from the
cluster, the system removes the node for the vs1 and vs2 configurations automatically with internal system
update.
Node Addition
When you add a new node or host to a cluster, the bridges or virtual switches on the new node are treated in
the following manner:

Note: If a host already included in a cluster is removed and then added back, it is treated as a new host.

• The system validates the default bridge br0 and uplink bond br0-up to check if it conforms to the default
virtual switch vs0 already present on the cluster.
If br0 and br0-up conform, the system includes the new host and its uplinks in vs0.
If br0 and br0-up do not conform, then the system generates an NCC alert.
• The system does not automatically add any other bridge configured on the new host to any other virtual
switch in the cluster.
It generates NCC alerts for all the other non-default virtual switches.
• You can manually include the host in the required non-default virtual switches. Update a non-default
virtual switch to include the host.
For information about updating a virtual switch in Prism Element Web Console, see the Configuring a
Virtual Network for Guest VMs section in the Prism Web Console Guide.
For information about updating a virtual switch in Prism Central, see the Network Connections section in
the Prism Central Guide.

VS Management
You can manage virtual switches from Prism Central or Prism Web Console. You can also use aCLI or REST APIs to
manage them. See the Acropolis API Reference and Command Reference guides for more information.
You can also use the appropriate aCLI commands for virtual switches from the following list:

• net.create_virtual_switch
• net.list_virtual_switch
• net.get_virtual_switch
• net.update_virtual_switch
• net.delete_virtual_switch
• net.migrate_br_to_virtual_switch
• net.disable_virtual_switch
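For example, to review the existing configuration before making changes, you can run the list and get commands
from any CVM (vs0 shown here because it is the default virtual switch):
nutanix@cvm$ acli net.list_virtual_switch
nutanix@cvm$ acli net.get_virtual_switch vs0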

About Open vSwitch


Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to work in
a multiserver virtualization environment. By default, OVS behaves like a Layer 2 learning switch that maintains a
MAC address learning table. The hypervisor host and VMs connect to virtual ports on the switch.
Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch. As an example, the
following diagram shows OVS instances running on two hypervisor hosts.

Figure 11: Open vSwitch

Default Factory Configuration


The factory configuration of an AHV host includes a default OVS bridge named br0 (configured with the default
virtual switch vs0) and a native Linux bridge called virbr0.
Bridge br0 includes the following ports by default:

• An internal port with the same name as the default bridge; that is, an internal port named br0. This is the access
port for the hypervisor host.
• A bonded port named br0-up. The bonded port aggregates all the physical interfaces available on the node. For
example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces are aggregated
on br0-up. This configuration is necessary for Foundation to successfully image the node regardless of which
interfaces are connected to the network.

Note:
Before you begin configuring a virtual network on a node, you must disassociate the 1 GbE interfaces
from the br0-up port. This disassociation occurs when you modify the default virtual switch (vs0)
and create new virtual switches. Nutanix recommends that you aggregate only the 10 GbE or faster
interfaces on br0-up and use the 1 GbE interfaces on a separate OVS bridge deployed in a separate
virtual switch.
See Virtual Switch Management on page 49 for information about virtual switch management.

The following diagram illustrates the default factory configuration of OVS on an AHV node:

Figure 12: Default factory configuration of Open vSwitch in AHV

The Controller VM has two network interfaces by default. As shown in the diagram, one network interface connects
to bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.

Virtual Switch Requirements

Virtual Switch Migration Requirements


The AOS upgrade process initiates the virtual switch migration. The virtual switch migration is successful only when
the following requirements are fulfilled:

• Before migrating to Virtual Switch, all bridge br0 bond interfaces should have the same bond type on all hosts
in the cluster. For example, all hosts should use the Active-backup bond type or balance-tcp. If some hosts use
Active-backup and other hosts use balance-tcp, virtual switch migration fails.

• Before migrating to Virtual Switch, if using LACP:

• Confirm that all bridge br0 lacp-fallback parameters on all hosts are set to the case-sensitive value True with
manage_ovs show_uplinks | grep lacp-fallback:. Any host with the lowercase value true causes virtual
switch migration failure.
• Confirm that the LACP speed on the physical switch is set to fast or 1 second. Also ensure that the switch
ports are ready to fallback to individual mode if LACP negotiation fails due to a configuration such as no
lacp suspend-individual.

• Before migrating to the Virtual Switch, confirm that the upstream physical switch is set to spanning-tree
portfast or spanning-tree port type edge trunk. Failure to do so may lead to a 30-second network
timeout, and the virtual switch migration may fail because it uses a non-modifiable 20-second timer.
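For reference, an edge-trunk setting on a Cisco-style switch port connected to an AHV host might look like the
following sketch. The exact syntax varies by switch vendor and OS; treat this as illustrative only:
switch(config)# interface Ethernet1/1
switch(config-if)# spanning-tree port type edge trunk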

Virtual Switch Limitations


MTU Restriction
Virtual switch deployments support a maximum MTU value of 9216 bytes for virtual switches.

Note:
Nutanix CVMs use the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for
all the network interfaces by default. The standard 1,500 byte MTU delivers excellent performance
and stability. Nutanix does not support configuring the MTU on the network interfaces of a CVM to
higher values.

If you choose to configure higher MTU values of up to 9216 bytes on hypervisor hosts, ensure that you
enable them end-to-end in the desired network. Consider both the physical and virtual network infrastructure
impacted by the change.
Single-node and two-node cluster configuration.
Virtual switch cannot be deployed if your single-node or two-node cluster has any instantiated user VMs. The
virtual switch creation or update process involves a rolling restart, which checks for maintenance mode and
whether the VMs can be migrated. On a single-node or two-node cluster, any instantiated user VMs cannot be
migrated and the virtual switch operation fails.
Therefore, power down all user VMs for virtual switch operations in a single-node or two-node cluster.
Compute-only node is not supported.
Virtual switch is not compatible with Compute-only (CO) nodes. If a CO node is present in the cluster,
then the virtual switches are not deployed (including the default virtual switch). You need to use the
net.disable_virtual_switch aCLI command to disable the virtual switch workflow if you want to expand
a cluster which has virtual switches and includes a CO node.
The net.disable_virtual_switch aCLI command cleans up all the virtual switch entries from the IDF. All
the bridges mapped to the virtual switch or switches are retained as they are.
Including a storage-only node in a VS is not necessary.
Virtual switch is compatible with Storage-only (SO) nodes but you do not need to include an SO node in any
virtual switch, including the default virtual switch.

Virtual Switch Management


A virtual switch can be viewed, created, updated, or deleted from both the Prism Web Console and Prism Central.

Virtual Switch Views and Visualization
For information on the virtual switch network visualization in Prism Element Web Console, see the Network
Visualization topic in the Prism Web Console Guide.

Virtual Switch Create, Update and Delete Operations


For information about the procedures to create, update and delete a virtual switch in Prism Element Web Console, see
the Configuring a Virtual Network for Guest VMs section in the Prism Web Console Guide.
For information about the procedures to create, update and delete a virtual switch in Prism Central, see the Network
Connections section in the Prism Central Guide.

Enabling LAG and LACP on the ToR Switch (AHV Only)


Active-Active with MAC pinning does not require link aggregation since each source MAC address
is balanced to only one adapter at a time. If you select the Active-Active NIC-teaming policy, you must
enable LAG and LACP on the corresponding ToR switch for each node in the cluster one after the other.

About this task


In an Active-Active NIC-teaming configuration, network traffic is balanced among the members of the team based
on source and destination IP addresses and TCP and UDP ports. With link aggregation negotiated by LACP, the
uplinks appear as a single layer-2 link so a VM MAC address can appear on multiple links and use the bandwidth
of all uplinks. If you do not enable LAG and LACP in this Active-Active configuration, network traffic might be
dropped by the switch.
Perform the following procedure to enable LAG and LACP on a ToR switch after enabling the Active-Active policy
by using the Prism Element web console.

Procedure

1. Put the node in maintenance mode. This is in addition to the previous maintenance mode that enabled Active-
Active on the node.

2. Enable LAG and LACP on the ToR switch connected to that node.

3. Exit maintenance mode after LAG and LACP is successfully enabled.

4. Repeat steps 1 to 3 for every node in the cluster.

VLAN Configuration
You can set up a VLAN-based segmented virtual network on an AHV node by assigning the ports on virtual bridges
managed by virtual switches to different VLANs. VLAN port assignments are configured from the Controller VM
that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations on page 36. For
information about assigning guest VMs to a virtual switch and VLAN, see the Network Connections section in the
Prism Central Guide.

Assigning an AHV Host to a VLAN

About this task

Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the cluster. Once the
process begins, hosts and CVMs partially lose network access to each other and VM data or storage containers become
unavailable until the process completes.

To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:

Procedure

1. Log on to the AHV host with SSH.

2. Put the AHV host and the CVM in maintenance mode.


See Putting a Node into Maintenance Mode on page 20 for instructions about how to put a node into maintenance
mode.

3. Assign port br0 (the internal port on the default OVS bridge br0 on the default virtual switch vs0) to the VLAN
that you want the host to be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
Replace host_vlan_tag with the VLAN tag for hosts.
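For example, to place the host on VLAN 10 (an illustrative tag):
root@ahv# ovs-vsctl set port br0 tag=10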

4. Confirm VLAN tagging on port br0.


root@ahv# ovs-vsctl list port br0

5. Check the value of the tag parameter that is shown.

6. Verify connectivity to the IP address of the AHV host by performing a ping test.
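For example, from a CVM or another machine on the same VLAN (the host IP address shown is an illustrative
placeholder):
nutanix@cvm$ ping -c 4 10.10.10.11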

7. Exit the AHV host and the CVM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 22 for more information.

Assigning the Controller VM to a VLAN


By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.

About this task

Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the cluster. Once the
process begins, hosts and CVMs partially lose network access to each other and VM data or storage containers become
unavailable until the process completes.

Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are logged on to the
Controller VM through its public interface. To change the VLAN ID, log on to the internal interface that has IP address
192.168.5.254.

Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:

Procedure

1. Log on to the AHV host with SSH.

2. Put the AHV host and the Controller VM in maintenance mode.


See Putting a Node into Maintenance Mode on page 20 for instructions about how to put a node into
maintenance mode.

3. Check the Controller VM status on the host.


root@host# virsh list
An output similar to the following is displayed:
root@host# virsh list
Id Name State
----------------------------------------------------

1 NTNX-CLUSTER_NAME-3-CVM running
3 3197bf4a-5e9c-4d87-915e-59d4aff3096a running
4 c624da77-945e-41fd-a6be-80abf06527b9 running

root@host# logout

4. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

5. Assign the public interface of the Controller VM to a VLAN.


nutanix@cvm$ change_cvm_vlan vlan_id
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 201.
nutanix@cvm$ change_cvm_vlan 201

6. Confirm VLAN tagging on the Controller VM.


root@host# virsh dumpxml cvm_name
Replace cvm_name with the CVM name or CVM ID to view the VLAN tagging information.

Note: Refer to step 3 for Controller VM name and Controller VM ID.

An output similar to the following is displayed:


root@host# virsh dumpxml 1 | grep "tag id" -C10 --color
<target dev='vnet2'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<alias name='net2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='50:6b:8d:b9:0a:18'/>
<source bridge='br0'/>
<vlan>
<tag id='201'/>
</vlan>
<virtualport type='openvswitch'>
<parameters interfaceid='c46374e4-c5b3-4e6b-86c6-bfd6408178b5'/>
</virtualport>
<target dev='vnet0'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<alias name='net3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
root@host#

7. Check the value of the tag parameter that is shown.

8. Restart the network service.


nutanix@cvm$ sudo service network restart

9. Verify connectivity to the Controller VM's external IP address by performing a ping test from the same subnet.
For example, perform a ping from another Controller VM or directly from the host itself.

10. Exit the AHV host and the Controller VM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 22 for more information.

IGMP Snooping
On an AHV host, when multicast traffic flows to a virtual switch, the host floods the multicast traffic to all the VMs
on the specific VLAN. This mechanism is inefficient when many of the VMs on the VLAN do not need that multicast
traffic. IGMP snooping allows the host to track which VMs on the VLAN need the multicast traffic and send the
multicast traffic to only those VMs. For example, assume there are 50 VMs on VLAN 100 on virtual switch vs1 and
only 25 VMs (the receiver VMs) need to receive the multicast traffic. Turn on IGMP snooping to help the AHV
host track the 25 receiver VMs and deliver the multicast traffic to only those 25 VMs instead of pushing the
multicast traffic to all 50 VMs.
When IGMP snooping is enabled in a virtual switch on a VLAN, the ToR switch or router queries the VMs about the
multicast traffic that the VMs are interested in. When the switch receives a join request from a VM in response to the
query, it adds the VM to the multicast list for that source entry as a receiver VM. When the switch sends a query, only
the VMs that require the multicast traffic respond to the switch. The VMs that do not need the traffic do not respond
at all. So, the switch does not add a VM to a multicast group or list unless it receives a response from that VM for the
query.
Typically, in a multicast scenario, there is a source entity that casts the multicast traffic. This source may be another
VM in the target cluster (the cluster that contains the target VMs that need to receive the multicast traffic) or another
cluster connected to the target cluster. The host in the target cluster acts as the multicast router. Enable IGMP snooping in
the virtual switch that hosts the VLAN connecting the VMs. You must also enable either the native Acropolis IGMP
querier on the host or a separate third-party querier that you install on the host. The native Acropolis IGMP querier
sends IGMP v2 query packets to the VMs.
The IGMP querier sends out queries periodically to keep the multicast groups or lists updated. The query interval is
half the IGMP snooping timeout value that you specify when you enable IGMP snooping.
For example, if you configure the IGMP snooping timeout as 30 seconds, the IGMP querier sends out a
query every 15 seconds.
When you enable IGMP snooping and are using the native Acropolis IGMP querier, you must configure the IGMP
VLAN list: a comma-separated list of the VLAN IDs to which the native IGMP querier must send the query. If you
do not provide a list of VLANs, the native IGMP querier sends the query to all the VLANs in the virtual switch.
When a VM needs to receive the multicast traffic from a specific multicast source, configure the multicast application
on the VM to listen for the queries that the VM receives from the IGMP querier. Also, configure the multicast
application on the VM to respond to the relevant query, that is, the query for the specific multicast source. The
virtual switch logs the response that the application sends and then sends the multicast traffic to that VM
instead of flooding it to all the VMs on the VLAN.
A multicast source always sends multicast traffic to a multicast group or list that is indicated by a multicast group IP
address.

Enabling or Disabling IGMP Snooping


IGMP snooping helps you manage multicast traffic to specific VMs configured on a VLAN.

About this task


You can enable or disable IGMP snooping only by using aCLI.



Procedure

• Run the following command:


net.update_virtual_switch virtual-switch-name enable_igmp_snooping=true
enable_igmp_querier=[true | false] igmp_query_vlan_list=VLAN IDs
igmp_snooping_timeout=timeout
Provide:

• virtual-switch-name—The name of the virtual switch in which the VLANs are configured. For example,
the name of the default virtual switch is vs0. Provide the name of the virtual switch exactly as it is configured.
• enable_igmp_snooping=[true | false]—true to enable IGMP snooping. Provide false to disable IGMP
snooping. The default setting is false.
• enable_igmp_querier=[true | false]—true to enable the native IGMP querier. Provide false to disable
the native IGMP querier. The default setting is false.
• igmp_query_vlan_list=VLAN IDs—List of VLAN IDs mapped to the virtual switch for which the IGMP querier
is enabled. If this parameter is not set or is set to an empty list, the querier is enabled for all VLANs of the virtual switch.
• igmp_snooping_timeout=timeout—An integer indicating time in seconds. For example, you can provide 30
to indicate IGMP snooping timeout of 30 seconds.
The default timeout is 300 seconds.
You can set the timeout in the range of 15 - 3600 seconds.
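For example, the following hypothetical invocation enables IGMP snooping and the native IGMP querier on VLANs 100 and 200 of the default virtual switch vs0 with a 30-second snooping timeout (the VLAN IDs and timeout are illustrative):

net.update_virtual_switch vs0 enable_igmp_snooping=true enable_igmp_querier=true igmp_query_vlan_list=100,200 igmp_snooping_timeout=30

To disable IGMP snooping, run the same command with enable_igmp_snooping=false.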

What to do next
You can verify whether IGMP snooping is enabled or disabled by running the following command:
net.get_virtual_switch virtual-switch-name
The output of this command includes the following sample configuration:
igmp_config {
enable_querier: True
enable_snooping: True
}
The above sample shows that IGMP snooping and the native Acropolis IGMP querier are enabled.

Switch Port ANalyzer on AHV Hosts


Switch Port ANalyzer (SPAN) or port mirroring enables you to mirror traffic from interfaces of the AHV
hosts to the VNIC of guest VMs. SPAN mirrors some or all packets from a set of source ports to a set
of destination ports. You can mirror inbound, outbound, or bidirectional traffic on a set of source ports.
You can then use the mirrored traffic for security analysis and to gain visibility into the traffic flowing through the
set of source ports. SPAN is a useful tool for packet-level troubleshooting and might be necessary for
compliance reasons.
AHV supports the following types of source ports in a SPAN session:
1. A bond port that is already mapped to a Virtual Switch (VS) such as vs0, vs1, or any other VS you have created.
2. A non-bond port that is already mapped to a VS such as vs0, vs1, or any other VS you have created.
3. An uplink port that is not assigned to any VS or bridge on the host.

Important Considerations
Consider the following before you configure SPAN on AHV hosts:

• In this release, AHV supports mirroring of traffic only from physical interfaces.



• The SPAN destination VM or guest VM must be running on the same AHV host where the source ports are
located.
• Delete the SPAN session before you delete the SPAN destination VM or VNIC. Otherwise, the state of the SPAN
session is displayed as error.
• AHV does not support SPAN from a member of a bond port. For example, if you have mapped br0-up to bridge
br0 with members eth0 and eth1, you cannot create a SPAN session with either eth0 or eth1 as the source port.
You must use only br0-up as the source port.
• AHV supports different types of source ports in one session. For example, you can create a session with br0-up
(bond port) and eth5 (single uplink port) on the same host as two different source ports in the same session. You
can even have two different bond ports in the same session.
• One SPAN session supports up to two source and two destination ports.
• One host supports up to two SPAN sessions.
• You cannot create a SPAN session on an AHV host that is in the maintenance mode.
• If you move the uplink interface to another Virtual Switch, the SPAN session fails. Note that the system does not
generate an alert in this situation.
• With TCP Segmentation Offload, multiple packets belonging to the same stream can be coalesced into a single
one before being delivered to the SPAN destination VM. With TCP Segmentation Offload enabled, there can be
a difference between the number of packets received on the uplink interface and packets forwarded to the SPAN
destination VM (session packet count <= uplink interface packet count). However, the byte count at the SPAN
destination VM is closer to the number at the uplink interface.

Configuring SPAN on an AHV Host


To configure SPAN on an AHV host, create a SPAN destination VNIC and assign that VNIC to a
guest VM (the SPAN destination VM). After you create the VNIC, create a SPAN session specifying the source
and destination ports between which you want to run the SPAN session.

Before you begin


Ensure that you have created the guest VM that you want to configure as the SPAN destination VM.

Note: The SPAN destination VM must run on the same AHV host where the source ports are located. Therefore,
Nutanix highly recommends that you create or modify the guest VM as an agent VM so that the VM is not migrated
from the host.

Command and example for modifying a guest VM as an agent VM (recommended):


nutanix@cvm$ acli vm.update vm-name agent_vm=true
nutanix@cvm$ acli vm.update span-dest-VM agent_vm=true
In this example, span-dest-VM is the name of the guest VM that you are modifying as an agent VM.

About this task


Perform the following procedure to configure SPAN on an AHV host:

Procedure

1. Log on to a Controller VM in the cluster with SSH.



2. Determine the name and UUID of the guest VM that you want to configure as the SPAN destination VM.
nutanix@cvm$ acli vm.list
Example:
nutanix@cvm$ acli vm.list
VM name VM UUID
span-dest-VM 85abfdd5-7419-4f7c-bffa-8f961660e516
In this example, span-dest-VM is the name and 85abfdd5-7419-4f7c-bffa-8f961660e516 is the UUID of the
guest VM.

Note: If you delete the SPAN destination VM without deleting the SPAN session you create with this SPAN
destination VM, the SPAN session State displays kError.

3. Create a SPAN destination VNIC for the guest VM.


nutanix@cvm$ acli vm.nic_create vm-name type=kSpanDestinationNic
Replace vm-name with the name of the guest VM on which you want to configure SPAN.

Note: Do not include any other parameter when you are creating a SPAN destination VNIC.

Example:
nutanix@cvm$ acli vm.nic_create span-dest-VM type=kSpanDestinationNic
NicCreate: complete

Note: If you delete the SPAN destination VNIC without deleting the SPAN session you create with this SPAN
destination VNIC, the SPAN session State displays kError.

4. Determine the MAC address of the VNIC.


nutanix@cvm$ acli vm.nic_get vm-name
Replace vm-name with the name of the guest VM to which you assigned the VNIC.
Example:

nutanix@cvm$ acli vm.nic_get span-dest-VM


x.x.x.x {
connected: True
ip_address: "x.x.x.x"
mac_addr: "50:6b:8d:8b:2c:94"
network_name: "mgmt"
network_type: "kNativeNetwork"
network_uuid: "c14b0092-877e-489b-a399-2749a60b3206"
type: "kNormalNic"
uuid: "9dd4f307-2506-4354-86a3-0b99abdeba6c"
vlan_mode: "kAccess"
}
50:6b:8d:de:c6:44 {
mac_addr: "50:6b:8d:de:c6:44"
network_type: "kNativeNetwork"
type: "kSpanDestinationNic"
uuid: "b59e99bc-6bc7-4fab-ac35-543695c300d1"
}
Note the MAC address (value of mac_addr) of the VNIC whose type is set to kSpanDestinationNic.

5. Determine the UUID of the host whose traffic you want to monitor by using SPAN.
nutanix@cvm$ acli host.list
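An output similar to the following is displayed (the column layout, hostname, and UUID shown here are illustrative):

nutanix@cvm$ acli host.list
Hypervisor IP  Hypervisor DNS Name  Host UUID
x.x.x.5        ahv-host-1           492a2bda-ffc0-486a-8bc0-8ae929471714

Note the Host UUID value of the host on which the source ports are located; it is used in the next step.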



6. Create a SPAN session.
nutanix@cvm$ acli net.create_span_session span-session-name description="description-text"
source_list=\{uuid=host-uuid,type=kHostNic,identifier=source-port-name,direction=traffic-
type} dest_list=\{uuid=vm-uuid,type=kVmNic,identifier=vnic-mac-address}
Replace the variables in the command with appropriate values for the following parameters:

• span-session-name: Replace span-session-name with a name for the session.


• description (Optional): Replace description-text with a description for the session. This is an optional
parameter.

Note:
All source_list and dest_list parameters are mandatory inputs. The parameters do not have default
values. Provide an appropriate value for each parameter.

Source list parameters:

• uuid: Replace host-uuid with the UUID of the host whose traffic you want to monitor by using SPAN
(determined in step 5).
• type: Specify kHostNic as the type. Only the kHostNic type is supported in this release.
• identifier: Replace source-port-name with the name of the source port whose traffic you want to mirror. For
example, br0-up, eth0, or eth1.
• direction: Replace traffic-type with kIngress if you want to mirror inbound traffic, kEgress for
outbound traffic, or kBiDir for bidirectional traffic.
Destination list parameters:

• uuid: Replace vm-uuid with the UUID of the guest VM that you want to configure as the SPAN destination
VM (determined in step 2).
• type: Specify kVmNic as the type. Only the kVmNic type is supported in this release.
• identifier: Replace vnic-mac-address with the MAC address of the destination port where you want to
mirror the traffic (determined in step 4).

Note: The syntax for source_list and dest_list is as follows:


source_list/dest_list=[{key1=value1,key2=value2,..}]
Each pair of curly brackets includes the details of one source or destination port with a comma-separated
list of the key-value pairs. There must not be any space between two key-value pairs.
One SPAN session supports up to two source and two destination ports. If you want to include an extra
port, separate the curly brackets with a semicolon (no space) and list the key-value pairs of the second
port in the other curly bracket.

Example:
nutanix@cvm$ acli net.create_span_session span1 description="span session 1"
source_list=\{uuid=492a2bda-ffc0-486a-8bc0-8ae929471714,type=kHostNic,identifier=br0-
up,direction=kBiDir} dest_list=\{uuid=85abfdd5-7419-4f7c-
bffa-8f961660e516,type=kVmNic,identifier=50:6b:8d:de:c6:44}
SpanCreate: complete



7. Display the list of all SPAN sessions running on a host.
nutanix@cvm$ acli net.list_span_session
Example:
nutanix@cvm$ acli net.list_span_session
Name UUID State
span1 69252eb5-8047-4e3a-8adc-91664a7104af kActive

Possible values for State are:

• kActive: Denotes that the SPAN session is active.


• kError: Denotes that there is an error and the configuration is not working. For example, if there are two
sources and one source is down, the State of the session is displayed as kError.

8. Display the details of a SPAN session.


nutanix@cvm$ acli net.get_span_session span-session-name
Replace span-session-name with the name of the SPAN session whose details you want to view.
Example:
nutanix@cvm$ acli net.get_span_session span1
span1 {
config {
datapath_name: "s6925"
description: "span session 1"
destination_list {
nic_type: "kVmNic"
port_identifier: "50:6b:8d:de:c6:44"
uuid: "85abfdd5-7419-4f7c-bffa-8f961660e516"
}
name: "span1"
session_uuid: "69252eb5-8047-4e3a-8adc-91664a7104af"
source_list {
direction: "kBiDir"
nic_type: "kHostNic"
port_identifier: "br0-up"
uuid: "492a2bda-ffc0-486a-8bc0-8ae929471714"
}
}
stats {
name: "span1"
session_uuid: "69252eb5-8047-4e3a-8adc-91664a7104af"
state: "kActive"
stats_list {
tx_byte_cnt: 67498
tx_pkt_cnt: 436
}
}
}
Note the value of the datapath_name field in the SPAN session configuration, which is a unique key
that identifies the SPAN session. You might need the unique key to correctly identify the SPAN session for
troubleshooting reasons.

Updating a SPAN Session


You can update any of the details of a SPAN session. When you update a SPAN session, specify
the values of the parameters that you want to update and then specify the rest of the parameters again exactly as you
specified them when you created the SPAN session. For example, if you want to change only the name
and description, specify the updated name and description and then include the complete details of the
source and destination ports again even though you are not updating those details.

About this task


Perform the following procedure to update a SPAN session:

Procedure

1. Log on to a Controller VM in the cluster with SSH.

2. Update the SPAN session.


nutanix@cvm$ acli net.update_span_session span-session-name description="description-text"
source_list=\{uuid=host-uuid,type=kHostNic,identifier=source-port-name,direction=traffic-
type} dest_list=\{uuid=vm-UUID,type=kVmNic,identifier=vNIC-mac-address}
The update command includes the same parameters as the create command. See Configuring SPAN on an AHV
Host on page 55 for more information.
Example:
nutanix@cvm$ acli net.update_span_session span1 name=span_br0_to_span_dest
description="span from br0-up to span-dest VM" source_list=\{uuid=492a2bda-
ffc0-486a-8bc0-8ae929471714,type=kHostNic,identifier=br0-up,direction=kBiDir} dest_list=
\{uuid=85abfdd5-7419-4f7c-bffa-8f961660e516,type=kVmNic,identifier=50:6b:8d:de:c6:44}
SpanUpdate: complete

nutanix@cvm$ acli net.list_span_session


Name UUID State
span_br0_to_span_dest 69252eb5-8047-4e3a-8adc-91664a7104af kActive

nutanix@cvm$ acli net.get_span_session span_br0_to_span_dest


span_br0_to_span_dest {
config {
datapath_name: "s6925"
description: "span from br0-up to span-dest VM"
destination_list {
nic_type: "kVmNic"
port_identifier: "50:6b:8d:de:c6:44"
uuid: "85abfdd5-7419-4f7c-bffa-8f961660e516"
}
name: "span_br0_to_span_dest"
session_uuid: "69252eb5-8047-4e3a-8adc-91664a7104af"
source_list {
direction: "kBiDir"
nic_type: "kHostNic"
port_identifier: "br0-up"
uuid: "492a2bda-ffc0-486a-8bc0-8ae929471714"
}
}
stats {
name: "span_br0_to_span_dest"
session_uuid: "69252eb5-8047-4e3a-8adc-91664a7104af"
state: "kActive"
stats_list {
tx_byte_cnt: 805705
tx_pkt_cnt: 4792
}
}



}
In this example, only the name and description were updated. However, complete details of the source and
destination ports were included in the command again.
If you want to change the name of a SPAN session, specify the existing name first and then include the new name
by using the name= parameter as shown in this example.

Deleting a SPAN Session


Delete the SPAN session if you want to disable SPAN on an AHV host. Nutanix recommends that you
delete the SPAN session associated with a SPAN destination VM or SPAN destination VNIC before you delete
that VM or VNIC.

About this task


Perform the following procedure to delete a SPAN session:

Procedure

1. Log on to a Controller VM in the cluster with SSH.

2. Delete the SPAN session.


nutanix@cvm$ acli net.delete_span_session span-session-name
Replace span-session-name with the name of the SPAN session you want to delete.
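For example, to delete the session that was renamed earlier in this chapter:
nutanix@cvm$ acli net.delete_span_session span_br0_to_span_dest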

Uplink Configuration (AHV Only)


For information about creating and updating a virtual switch that manages the uplinks, see the Creating or Updating
Virtual Switch topic in the Prism Central Guide or the Prism Web Console Guide.
The default virtual switch uses the default bond br0-up via the default bridge br0. The default NIC-teaming policy of the
bond br0-up of the default bridge br0 in your AHV cluster is Active-Backup. You can change the NIC-teaming
policy of br0-up to No Uplink Bond, Active-Active, Active-Active with MAC pinning, or retain the default
Active-Backup policy. If you select the Active-Active policy, you must manually enable Link Aggregation (LAG)
and LACP on the corresponding top-of-rack (ToR) switch for each node in the cluster.
A virtual switch configured with the No Uplink Bond type has zero or one uplink. When you configure
a virtual switch with any other bond type, you must select at least two uplink ports on every node. For more
information about bond types, see the Bond Type table in the Creating or Updating a Virtual Switch topic in the
Prism Central Guide or the Prism Web Console Guide.

Note:

• The uplink configuration feature is supported only on AHV.


• Uplink management allows you to change the configuration on only br0. You cannot use uplink
management to add new bridges and edit custom bridges.
• Once you use uplink management to change the br0 configuration, you cannot use manage_ovs to
change the configuration of any bridge (br1, and so on). If you have more than one bridge in your setup, do not
use uplink management; use manage_ovs instead, as shown in the sketch after this note.
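A minimal manage_ovs sketch for updating the uplinks of a custom bridge, assuming a bridge named br1 with bond br1-up and uplinks eth2 and eth3 (all names are illustrative; verify them against your configuration before running the command):

nutanix@cvm$ manage_ovs --bridge_name br1 --bond_name br1-up --interfaces eth2,eth3 update_uplinks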

By default, the virtual switch aggregates all the physical interfaces available on br0-up on the node. You can modify
this default NIC configuration by modifying the default virtual switch configuration and selecting the interfaces that
must belong to br0-up. You can either choose to have only 10G interfaces, only 1G interfaces, or retain the default
setting, that is all the available interfaces aggregated into br0-up.
See the Host Network Management chapter in the AHV Administration Guide for more information about the default
virtual switch vs0.
If you change the uplink configuration of vs0, AOS applies the updated settings to all the nodes in the cluster one
after the other (the rolling update process). To update the settings in a cluster, AOS performs the following tasks:
1. Puts the node in maintenance mode (migrates VMs out of the node)
2. Applies the updated settings
3. Checks connectivity with the default gateway
4. Exits maintenance mode
5. Proceeds to apply the updated settings to the next node
AOS takes a backup of the existing network settings before applying the updated settings on a randomly chosen first
node. If the application of the updated settings fails on the first node, the previous settings are restored to the node
and the updated settings are not applied to the rest of the nodes in the cluster. However, if the updated settings are
successfully applied to the first node but fail on any subsequent node(s), alerts and warnings for the failed node(s) are
generated and AOS proceeds to apply the updated settings on the remaining nodes in the cluster.
Note the following points about the uplink configuration.

• If the cluster has one or more compute-only nodes, AOS does not apply the Active-Active with MAC pinning
and Active-Active policies.
• If you select the Active-Active policy, you must manually enable LAG and LACP on the corresponding ToR
switch for each node in the cluster.
• If you reimage a cluster with the Active-Active policy enabled, the default NIC-teaming policy on the reimaged
cluster is once again the Active-Backup policy.
• Nutanix recommends configuring LACP with fallback to active-backup or individual mode on the ToR switches.
The configuration and behavior vary based on the switch vendor. Use a switch configuration that allows both
switch interfaces to pass traffic after LACP negotiation fails. Otherwise, the eth interface that the AHV node
chooses as active might differ from the switch port that the switch chooses as active on some switch models.
As a result, only approximately 50% of the nodes might have connectivity once the cluster comes back up
after it is reimaged.
To avoid any connectivity failures, Nutanix recommends that you keep one switch port powered on and shut down
all the other switch ports on the switches connecting to br0-up for the same node. This setting ensures that both
the node and the switch choose the same interface as active.
• The Select All option, which allows you to aggregate all the available physical interfaces into br0-up, is available
only for the Active-Backup policy.

Updating the Uplink Configuration (AHV Only)


You can change the NIC-teaming policy and grouping of interfaces in br0-up.

About this task


Perform the following procedure to update the uplink configuration.

CAUTION: Changing the uplink configuration is not reversible. Be sure the planned updates are correct before
attempting this procedure.

Procedure

1. Log on to the Prism Element web console.



2. Click Network in the main menu and, in the network visualizer screen, click Uplink Configuration.

Figure 13: Uplink Configuration

3. Select one of the following NIC-teaming policies.

» Active-Backup. A single interface in the bond is chosen randomly at boot to be active. The other interfaces
act as a backup until the active interface fails. This is the recommended and default policy.
Active-backup is the simplest NIC-teaming policy, easily allowing connections to multiple upstream switches
without additional switch configuration. The limitation is that traffic from all VMs uses only the single active
link within the bond at one time. All backup links remain unused until the active link fails.
Do not configure LAG on the connected ToR switch.
» Active-Active with MAC pinning. (Also called balance-slb) All the interfaces in the bond are active.
Source MAC addresses are periodically rebalanced among interfaces based on the network load. Do not
configure link aggregation on the connected ToR switch.
» Active-Active. (Also called LACP with balance-tcp) All the interfaces in the bond are active. Traffic is
balanced among adapters based on source and destination addresses and TCP and UDP ports.
If you select this option, you must manually enable LAG and LACP on the corresponding ToR switch for
each node in the cluster one after the other. See Enabling LAG and LACP on the ToR Switch (AHV Only) on
page 50 for more information.



4. Select one of the following NIC configurations.

» Select All. Aggregates all the physical interfaces available on the node. This is the default option.

Note: This option is available only for the active-backup policy.

» 10G. Aggregates only the 10G interfaces available in the node into br0-up.
» 1G. Aggregates only the 1G interfaces available in the node into br0-up.

Figure 14: Update Uplink Configuration

5. Click Save.

Note: AOS applies this configuration to every node in the cluster, one node at a time.

You can click Tasks in the main menu to monitor the progress of this operation.

Enabling LAG and LACP on the ToR Switch (AHV Only)


Active-Active with MAC pinning does not require link aggregation since each source MAC address
is balanced to only one adapter at a time. If you select the Active-Active NIC-teaming policy, you must
enable LAG and LACP on the corresponding ToR switch for each node in the cluster one after the other.



About this task
In an Active-Active NIC-teaming configuration, network traffic is balanced among the members of the team based
on source and destination IP addresses and TCP and UDP ports. With link aggregation negotiated by LACP, the
uplinks appear as a single layer-2 link so a VM MAC address can appear on multiple links and use the bandwidth
of all uplinks. If you do not enable LAG and LACP in this Active-Active configuration, network traffic might be
dropped by the switch.
Perform the following procedure to enable LAG and LACP on a ToR switch after enabling the Active-Active policy
by using the Prism Element web console.

Procedure

1. Put the node in maintenance mode. This is in addition to the previous maintenance mode that enabled Active-
Active on the node.

2. Enable LAG and LACP on the ToR switch connected to that node.

3. Exit maintenance mode after LAG and LACP are successfully enabled (see the command sketch after step 4).

4. Repeat steps 1 to 3 for every node in the cluster.
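A hedged sketch of the aCLI maintenance mode commands for steps 1 and 3, assuming the host maintenance commands available in your AOS release (the address is a placeholder; see the maintenance mode topics referenced earlier in this guide for the authoritative procedure):

nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-ip-address
nutanix@cvm$ acli host.exit_maintenance_mode hypervisor-ip-address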

Enabling RSS Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues
Multi-Queue in VirtIO-net enables you to improve network performance for network I/O-intensive guest
VMs or applications running on AHV hosts.

About this task


You can enable VirtIO-net multi-queue by increasing the number of VNIC queues. If an application uses many
distinct streams of traffic, Receive Side Scaling (RSS) can distribute the streams across multiple VNIC DMA rings.
This increases the amount of RX buffer space by the number of VNIC queues (N). Also, most guest operating
systems pin each ring to a particular vCPU, handling the interrupts and ring-walking on that vCPU, thereby
achieving N-way parallelism in RX processing. However, if you increase the number of queues beyond the number of
vCPUs, you cannot achieve extra parallelism.
The following workloads benefit the most from VirtIO-net multi-queue:

• VMs where traffic packets are relatively large


• VMs with many concurrent connections
• VMs with network traffic moving:

• Among VMs on the same host


• Among VMs across hosts
• From VMs to the hosts
• From VMs to an external system
• VMs with high VNIC RX packet drop rate if CPU contention is not the cause
You can increase the number of queues of the AHV VM VNIC to allow the guest OS to use multi-queue VirtIO-net
on guest VMs with intensive network I/O. Multi-Queue VirtIO-net scales the network performance by transferring
packets through more than one Tx/Rx queue pair at a time as the number of vCPUs increases.
Nutanix recommends that you be conservative when increasing the number of queues. Do not set the number
of queues larger than the total number of vCPUs assigned to a VM. Packet reordering and TCP retransmissions
increase if the number of queues is larger than the number of vCPUs assigned to a VM. For this reason, start by
increasing the queue size to 2. The default queue size is 1. After making this change, monitor the guest VM and
network performance. Before you increase the queue size further, verify that the vCPU usage has not dramatically or
unreasonably increased.
Perform the following steps to make more VNIC queues available to a guest VM. See your guest OS documentation
to verify if you must perform extra steps on the guest OS to apply the additional VNIC queues.

Note: You must shut down the guest VM to change the number of queues. Therefore, make this change during a
planned maintenance window. The VNIC status might change from Up->Down->Up or a restart of the guest OS might
be required to finalize the settings depending on the guest OS implementation requirements.

Procedure

1. (Optional) Nutanix recommends that you ensure the following:

a. AHV and AOS are running the latest version.


b. AHV guest VMs are running the latest version of the Nutanix VirtIO driver package.
For RSS support, ensure you are running Nutanix VirtIO 1.1.6 or later. See Nutanix VirtIO for Windows on
page 92 for more information about Nutanix VirtIO.

2. Determine the exact name of the guest VM for which you want to change the number of VNIC queues.
nutanix@cvm$ acli vm.list
An output similar to the following is displayed:
nutanix@cvm$ acli vm.list
VM name VM UUID
ExampleVM1 a91a683a-4440-45d9-8dbe-xxxxxxxxxxxx
ExampleVM2 fda89db5-4695-4055-a3d4-xxxxxxxxxxxx
...

3. Determine the MAC address of the VNIC and confirm the current number of VNIC queues.
nutanix@cvm$ acli vm.nic_get VM-name
Replace VM-name with the name of the VM.
An output similar to the following is displayed:
nutanix@cvm$ acli vm.nic_get VM-name
...
mac_addr: "50:6b:8d:2f:zz:zz"
...
(queues: 2) <- If there is no output of 'queues', the setting is default (1 queue).

Note: AOS defines queues as the maximum number of Tx/Rx queue pairs (default is 1).

4. Check the number of vCPUs assigned to the VM.


nutanix@cvm$ acli vm.get VM-name | grep num_vcpus
An output similar to the following is displayed:
nutanix@cvm$ acli vm.get VM-name | grep num_vcpus
num_vcpus: 1

5. Shut down the guest VM.


nutanix@cvm$ acli vm.shutdown VM-name
Replace VM-name with the name of the VM.



6. Increase the number of VNIC queues.
nutanix@cvm$ acli vm.nic_update VM-name vNIC-MAC-address queues=N
Replace VM-name with the name of the guest VM, vNIC-MAC-address with the MAC address of the VNIC, and N
with the number of queues.

Note: N must be less than or equal to the vCPUs assigned to the guest VM.
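Example (reusing the illustrative VM name and masked MAC address from the earlier steps):
nutanix@cvm$ acli vm.nic_update ExampleVM1 50:6b:8d:2f:zz:zz queues=2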

7. Start the guest VM.


nutanix@cvm$ acli vm.on VM-name
Replace VM-name with the name of the VM.

8. Confirm in the guest OS documentation if any additional steps are required to enable multi-queue in VirtIO-net.

Note: Microsoft Windows has RSS enabled by default.

For example, for RHEL and CentOS VMs, do the following:

a. Log on to the guest VM.


b. Confirm if irqbalance.service is active or not.
uservm# systemctl status irqbalance.service
An output similar to the following is displayed:
irqbalance.service - irqbalance daemon
Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset:
enabled)
Active: active (running) since Tue 2020-04-07 10:28:29 AEST; Ns ago

c. Start irqbalance.service if it is not active.

Note: It is active by default on CentOS VMs. You might have to start it on RHEL VMs.

uservm# systemctl start irqbalance.service

d. Run the following command:


uservm$ ethtool -L ethX combined M
Replace ethX with the name of the network interface in the guest VM and M with the number of VNIC queues.
Note the following caveat from the RHEL 7 Virtualization Tuning and Optimization Guide, section 5.4 Network
Tuning Techniques:
"Currently, setting up a multi-queue virtio-net connection can have a negative effect on the performance of
outgoing traffic. Specifically, this may occur when sending packets under 1,500 bytes over the Transmission
Control Protocol (TCP) stream."
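For example, the following hypothetical commands set two combined queues on eth0 and then display the channel configuration to verify the change (the interface name is illustrative):

uservm$ ethtool -L eth0 combined 2
uservm$ ethtool -l eth0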

9. Monitor the VM performance to make sure that the expected network performance increase is observed and that
the guest VM vCPU usage is not dramatically increased to impact the application on the guest VM.
For assistance with the steps described in this document, or if these steps do not resolve your guest VM network
performance issues, contact Nutanix Support.

Changing the IP Address of an AHV Host


Change the IP address, netmask, or gateway of an AHV host.



Before you begin
Perform the following tasks before you change the IP address, netmask, or gateway of an AHV host:

CAUTION: All Controller VMs and hypervisor hosts must be on the same subnet.

Warning: Ensure that you perform the steps in the exact order as indicated in this document.

1. Verify the cluster health by following the instructions in KB-2852.


Do not proceed if the cluster cannot tolerate failure of at least one node.
2. Put the AHV host into the maintenance mode.
See Putting a Node into Maintenance Mode on page 20 for instructions about how to put a node into maintenance
mode.

About this task


Perform the following procedure to change the IP address, netmask, or gateway of an AHV host.

Procedure

1. Edit the settings of port br0, which is the internal port on the default bridge br0.

a. Log on to the host console as root.


You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to the
node.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0

c. Update entries for host IP address, netmask, and gateway.


The block of configuration information that includes these entries is similar to the following:
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"

• Replace host_ip_addr with the IP address for the hypervisor host.


• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
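For example, a completed block with illustrative values (substitute addresses appropriate for your environment):

ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="255.255.255.0"
IPADDR="10.10.10.11"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="10.10.10.1"
BOOTPROTO="none"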
d. Save your changes.
e. Restart network services.
root@ahv# /etc/init.d/network restart

f. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an AHV Host
to a VLAN on page 50.
g. Verify network connectivity by pinging the gateway, other CVMs, and AHV hosts.
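For example (the placeholders follow the conventions used earlier in this procedure):

root@ahv# ping -c 4 gateway_ip_addr
root@ahv# ping -c 4 cvm_ip_addr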



2. Log on to the Controller VM that is running on the AHV host whose IP address you changed and restart genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
See Controller VM Access on page 11 for information about how to log on to a Controller VM.
Genesis takes a few minutes to restart.

3. Verify if the IP address of the hypervisor host has changed. Run the following nCLI command from any CVM
other than the one in the maintenance mode.
nutanix@cvm$ ncli host list
An output similar to the following is displayed:

nutanix@cvm$ ncli host list


Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.4 <- New IP Address
...

4. Stop the Acropolis service on all the CVMs.

a. Stop the Acropolis service on each CVM in the cluster.


nutanix@cvm$ genesis stop acropolis

Note: You cannot manage your guest VMs after the Acropolis service is stopped.

b. Verify if the Acropolis service is DOWN on all the CVMs, except the one in the maintenance mode.
nutanix@cvm$ cluster status | grep -v UP
An output similar to the following is displayed:

nutanix@cvm$ cluster status | grep -v UP

2019-09-04 14:43:18 INFO zookeeper_session.py:143 cluster is attempting to connect to


Zookeeper

2019-09-04 14:43:18 INFO cluster:2774 Executing action status on SVMs X.X.X.1, X.X.X.2,
X.X.X.3

The state of the cluster: start

Lockdown mode: Disabled


CVM: X.X.X.1 Up
Acropolis DOWN []
CVM: X.X.X.2 Up, ZeusLeader
Acropolis DOWN []
CVM: X.X.X.3 Maintenance

5. From any CVM in the cluster, start the Acropolis service.


nutanix@cvm$ cluster start



6. Verify if all processes on all the CVMs, except the one in the maintenance mode, are in the UP state.
nutanix@cvm$ cluster status | grep -v UP

7. Exit the AHV host and the CVM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 22 for more information.



4. VIRTUAL MACHINE MANAGEMENT
The following topics describe various aspects of virtual machine management in an AHV cluster.

Supported Guest VM Types for AHV


The compatibility matrix available on the Nutanix Support portal includes the latest supported AHV guest VM OSes.

Maximum vDisks per Bus Type

• SCSI: 256
• PCI: 6
• IDE: 4

Creating a VM (AHV)
In AHV clusters, you can create a new virtual machine (VM) through the Prism Element web console.

About this task


When creating a VM, you can configure all of its components, such as number of vCPUs and memory,
but you cannot attach a volume group to the VM. Attaching a volume group is possible only when you are
modifying a VM.
To create a VM, do the following:



Procedure

1. In the VM dashboard, click the Create VM button.

Note: This option does not appear in clusters that do not support this feature.

The Create VM dialog box appears.



Figure 15: Create VM Dialog Box



2. Do the following in the indicated fields:

a. Name: Enter a name for the VM.


b. Description (optional): Enter a description for the VM.
c. Timezone: Select the timezone that you want the VM to use. If you are creating a Linux VM, select (UTC)
UTC.

Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a Linux
VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with the
hardware clock pointing to the desired timezone.

d. Use this VM as an agent VM: Select this option to make this VM an agent VM.
You can use this option for VMs that must be powered on before the rest of the VMs (for example, to
provide network functions before the rest of the VMs are powered on on the host) and must be powered off
after the rest of the VMs are powered off (for example, during maintenance mode operations). Agent VMs
are never migrated to any other host in the cluster. If an HA event occurs or the host is put in maintenance
mode, agent VMs are powered off and then powered on on the same host once that host comes back to a
normal state.
If an agent VM is powered off, you can manually start that agent VM on another host, and the agent VM then
permanently resides on the new host. The agent VM is never migrated back to the original host. Note that
you cannot migrate an agent VM to another host while the agent VM is powered on.
e. vCPU(s): Enter the number of virtual CPUs to allocate to this VM.
f. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
g. Memory: Enter the amount of memory (in MiBs) to allocate to this VM.



3. (For GPU-enabled AHV clusters only) To configure GPU access, click Add GPU in the Graphics section, and
then do the following in the Add GPU dialog box:

Figure 16: Add GPU Dialog Box

See GPU and vGPU Support in AHV Administration Guide for more information.

a. To configure GPU pass-through, in GPU Mode, click Passthrough, select the GPU that you want to
allocate, and then click Add.
If you want to allocate additional GPUs to the VM, repeat the procedure as many times as you need to. Make
sure that all the allocated pass-through GPUs are on the same host. If all specified GPUs of the type that you
want to allocate are in use, you can proceed to allocate the GPU to the VM, but you cannot power on the VM
until a VM that is using the specified GPU type is powered off.
See GPU and vGPU Support in AHV Administration Guide for more information.
b. To configure virtual GPU access, in GPU Mode, click virtual GPU, select a GRID license, and then select
a virtual GPU profile from the list.

Note: This option is available only if you have installed the GRID host driver on the GPU hosts in the cluster.
See Installing NVIDIA GRID Virtual GPU Manager (Host Driver) in the AHV Administration Guide.

You can assign multiple virtual GPUs to a VM. A vGPU is assigned to the VM only if a vGPU is available
when the VM is starting up.
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for Multiple
vGPU Support in the AHV Admin Guide.

Note:
Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile type.

After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the Same VM in the
AHV Admin Guide.

4. Select one of the following firmware to boot the VM.

» Legacy BIOS: Select legacy BIOS to boot the VM with legacy BIOS firmware.
» UEFI: Select UEFI to boot the VM with UEFI firmware. UEFI firmware supports larger hard drives, faster
boot time, and provides more security features. For more information about UEFI firmware, see UEFI
Support for VM section in the AHV Administration Guide.
» Secure Boot is supported with AOS 5.16. The current support for Secure Boot is limited to aCLI. For
more information about Secure Boot, see the Secure Boot Support for VMs section in the AHV Administration
Guide. To enable Secure Boot, do the following:

• Select UEFI.
• Power-off the VM.
• Log on to the aCLI and update the VM to enable Secure Boot. For more information, see Updating a VM to
Enable Secure Boot in the AHV Administration Guide.
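A minimal sketch of that aCLI update, assuming the secure_boot and machine_type parameters described in the Secure Boot documentation (the VM name is a placeholder; see the referenced topic for the authoritative steps):

nutanix@cvm$ acli vm.update vm-name secure_boot=true machine_type=q35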



5. To attach a disk to the VM, click the Add New Disk button.
The Add Disk dialog box appears.

Figure 17: Add Disk Dialog Box

Do the following in the indicated fields:

a. Type: Select the type of storage device, DISK or CD-ROM, from the pull-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the pull-down list.

• Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the
disk.
• Select Empty CD-ROM to create a blank CD-ROM device. (This option appears only when CD-ROM
is selected in the previous field.) A CD-ROM device is needed when you intend to provide a system
image from CD-ROM.
• Select Allocate on Storage Container to allocate space without specifying an image. (This
option appears only when DISK is selected in the previous field.) Selecting this option means you are
allocating space only. You have to provide a system image later from a CD-ROM or other source.
• Select Clone from Image Service to copy an image that you have imported by using image service
feature onto the disk. For more information on the Image Service feature, see the Configuring Images
section of Migration Guide.
c. Bus Type: Select the bus type from the pull-down list. The choices are IDE, SCSI, or SATA.
d. ADSF Path: Enter the path to the desired system image.
This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the
path name as /storage_container_name/iso_name.iso. For example, to clone an image from myos.iso
in a storage container named crt1, enter /crt1/myos.iso. When a user types the storage container
name (/storage_container_name/), a list of the ISO files in that storage container appears (assuming one
or more ISO files had previously been copied to that storage container).
e. Image: Select the image that you have created by using the image service feature.
This field appears only when Clone from Image Service is selected. It specifies the image to copy.
f. Storage Container: Select the storage container to use from the pull-down list.
This field appears only when Allocate on Storage Container is selected. The list includes all storage
containers created for this cluster.
g. Size: Enter the disk size in GiB.
h. Index: Displays Next Available by default.
i. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the
Create VM dialog box.
j. Repeat this step to attach additional devices to the VM.



6. To create a network interface for the VM, click the Add New NIC button.
The Create NIC dialog box appears.

Figure 18: Create NIC Dialog Box

Do the following in the indicated fields:

a. Subnet Name: Select the target virtual LAN from the pull-down list.
The list includes all defined networks (see the Network Configuration For VM Interfaces in the Prism Web
Console Guide).
b. VLAN ID: This is a read-only field that displays the VLAN ID.
c. Virtual Switch: This is a read-only field that displays the virtual switch that is configured for the VLAN
you selected. The default value is the default virtual switch vs0. This option is displayed only if you add a
VLAN ID in the VLAN ID field.
d. Network Address/Prefix: This is a read-only field that displays the network IP address and prefix.
e. When all the field entries are correct, click the Add button to create a network interface for the VM and
return to the Create VM dialog box.
f. Repeat this step to create additional network interfaces for the VM.

7. To configure affinity policy for this VM, click Set Affinity.


The Set VM Host Affinity dialog box appears.

a. Select the host or hosts on which you want configure the affinity for this VM.
b. Click Save.
The selected host or hosts are listed. This configuration is permanent. The VM is not moved from this
host or hosts even in the case of an HA event, and the affinity takes effect once the VM starts.



8. To customize the VM by using Cloud-init (for Linux VMs) or Sysprep (for Windows VMs), select the Custom
Script check box.
Fields required for configuring Cloud-init and Sysprep, such as options for specifying a configuration script or
answer file and text boxes for specifying paths to required files, appear below the check box.

Figure 19: Create VM Dialog Box (custom script fields)

9. To specify a user data file (Linux VMs) or answer file (Windows VMs) for unattended provisioning, do one of
the following:

» If you uploaded the file to a storage container on the cluster, click ADSF path, and then enter the path to the
file.
Enter the ADSF prefix (adsf://) followed by the absolute path to the file. For example, if the user data
is in /home/my_dir/cloud.cfg, enter adsf:///home/my_dir/cloud.cfg. Note the use of three
slashes.
» If the file is available on your local computer, click Upload a file, click Choose File, and then upload the
file.
» If you want to create or paste the contents of the file, click Type or paste script, and then use the text box
that is provided.



10. To copy one or more files to a location on the VM (Linux VMs) or to a location in the ISO file (Windows VMs)
during initialization, do the following:

a. In Source File ADSF Path, enter the absolute path to the file.
b. In Destination Path in VM, enter the absolute path to the target directory and the file name.
For example, if the source file entry is /home/my_dir/myfile.txt, then the entry for the Destination
Path in VM should be /<directory_name>/<copy_destination>, for example, /mnt/myfile.txt.
c. To add another file or directory, click the button beside the destination path field. In the new row that
appears, specify the source and target details.

11. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog
box.
The new VM appears in the VM table view.

Managing a VM (AHV)
You can use the web console to manage virtual machines (VMs) in Acropolis managed clusters.

About this task


After creating a VM, you can use the web console to start or shut down the VM, launch a console window, update the
VM configuration, take a snapshot, attach a volume group, migrate the VM, clone the VM, or delete the VM.

Note: Your available options depend on the VM status, type, and permissions. Unavailable options are grayed out.

To accomplish one or more of these tasks, do the following:

Procedure

1. In the VM dashboard, click the Table view.



2. Select the target VM in the table (top section of screen).
The Summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You
can also right-click on a VM to select a relevant action.
The possible actions are Manage Guest Tools, Launch Console, Power on (or Power off), Take
Snapshot, Migrate, Clone, Update, and Delete.

Note: The VM pause and resume feature is not supported on AHV.

The following steps describe how to perform each action.

Figure 20: VM Action Links

3. To manage guest tools, click Manage Guest Tools.


You can also enable NGT applications (self-service restore, Volume Snapshot Service, and application-consistent
snapshots) as part of managing guest tools.

a. Select Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
b. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM slot to attach the ISO.
c. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
The Self-Service Restore feature is enabled on the VM. The guest VM administrator can restore the desired
file or files from the VM. For more information about the self-service restore feature, see Self-Service Restore in
the guide titled Data Protection and Recovery with Prism Element.

d. After you select the Enable Nutanix Guest Tools check box, the VSS snapshot feature is enabled by default.
After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent takes
snapshots for VMs that support VSS.

Note:
The AHV VM snapshots are not application consistent. The AHV snapshots are taken from the VM
entity menu by selecting a VM and clicking Take Snapshot.
The application-consistent snapshots feature is available with Protection Domain based snapshots
and Recovery Points in Prism Central. For more information, see the Implementation Guidelines for
Asynchronous Disaster Recovery section in the guide titled Data Protection and Recovery with
Prism Element.

e. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.

Note:

• If you clone a VM, by default NGT is not enabled on the cloned VM. If the cloned VM is
powered off, enable NGT from the UI and power on the VM. If the cloned VM is powered on,
enable NGT from the UI and restart the Nutanix Guest Agent service.
• If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting
the NGT Installer Simultaneously on Multiple Cloned VMs in the Prism Web Console Guide.

If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the
following nCLI command.
nutanix@cvm$ ncli ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the
following command.
nutanix@cvm$ ncli ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987



4. To launch a console window, click the Launch Console action link.
This opens a Virtual Network Computing (VNC) client and displays the console in a new tab or window. This
option is available only when the VM is powered on. The console window includes four menu options (top
right):

• Clicking the Mount ISO button displays the following window that allows you to mount an ISO image to
the VM. To mount an image, select the desired image and CD-ROM drive from the pull-down lists and then
click the Mount button.

Figure 21: Mount Disk Image Window


• Clicking the C-A-D icon button sends a Ctrl+Alt+Del command to the VM.
• Clicking the camera icon button takes a screenshot of the console window.
• Clicking the power icon button allows you to power on/off the VM. These are the same options that you
can access from the Power On Actions or Power Off Actions action link below the VM table (see next
step).

Figure 22: Virtual Network Computing (VNC) Window



5. To start or shut down the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to power off the VMs, you are prompted to select one of the
following options:

• Power Off. Hypervisor performs a hard power off action on the VM.
• Power Cycle. Hypervisor performs a hard restart action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.

Note: If you perform power operations such as Guest Reboot or Guest Shutdown by using the Prism Element
web console or API on Windows VMs, these operations might silently fail without any error messages if at that
time a screen saver is running in the Windows VM. Perform the same power operations again immediately, so
that they succeed.

6. To make a backup of the VM, click the Take Snapshot action link.
This displays the Take Snapshot dialog box. Enter a name for the snapshot and then click the Submit button to
start the backup.

Note: These snapshots (stored locally) cannot be replicated to other sites.

Figure 23: Take Snapshot Dialog Box



7. To migrate the VM to another host, click the Migrate action link.
This displays the Migrate VM dialog box. Select the target host from the pull-down list (or select the System
will automatically select a host option to let the system choose the host) and then click the Migrate button
to start the migration.

Figure 24: Migrate VM Dialog Box

Note: Nutanix recommends live migrating VMs when they are under light load. If VMs are migrated while
heavily utilized, the migration might fail because of limited bandwidth.

8. To clone the VM, click the Clone action link.


This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned
VM inherits most of the configurations (except the name) of the source VM. Enter a name for the clone and
then click the Save button to create the clone. You can optionally override some of the configurations before
clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority,
NICs, or the guest customization.

Note:

• You can clone up to 250 VMs at a time.


• You cannot override the secure boot setting while cloning a VM, unless the source VM already had
secure boot setting enabled.

Figure 25: Clone VM Window

9. To modify the VM configuration, click the Update action link.


The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the
configuration as needed, and then save the configuration. In addition to modifying the configuration, you can
attach a volume group to the VM and enable flash mode on the VM. If you attach a volume group to a VM that
is part of a protection domain, the VM is not protected automatically. Add the VM to the same Consistency
Group manually.
(For GPU-enabled AHV clusters only) You can add pass-through GPUs if a VM is already using GPU pass-
through. You can also change the GPU configuration from pass-through to vGPU or from vGPU to pass-through,
change the vGPU profile, add more vGPUs, and change the specified vGPU license. However, you must
power off the VM before you perform these operations.

• Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for Multiple
vGPU Support in the AHV Admin Guide.
• Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile type.
• For more information on vGPU profile selection, see:

• Virtual GPU Types for Supported GPUs in the NVIDIA Virtual GPU Software User Guide in the
NVIDIA's Virtual GPU Software Documentation webpage, and
• GPU and vGPU Support in the AHV Administration Guide.
• After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the Same VM in the
AHV Admin Guide.



Figure 26: VM Update Dialog Box

Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with
that vDisk is not reclaimed unless you also delete the VM snapshots.

To increase the memory allocation and the number of vCPUs on your VMs while the VMs are powered on (hot-
pluggable), do the following:

a. In the vCPUs field, you can increase the number of vCPUs on your VMs while the VMs are powered on.
b. In the Number of Cores Per vCPU field, you can change the number of cores per vCPU only if the VMs
are powered off.

Note: This is not a hot-pluggable feature.

c. In the Memory field, you can increase the memory allocation on your VMs while the VMs are powered on.
For more information about hot-pluggable vCPUs and memory, see the Virtual Machine Memory and CPU Hot-
Plug Configurations topic in the AHV Administration Guide.
To attach a volume group to the VM, do the following:

a. In the Volume Groups section, click Add volume group, and then do one of the following:

» From the Available Volume Groups list, select the volume group that you want to attach to the VM.
» Click Create new volume group, and then, in the Create Volume Group dialog box, create a
volume group (see Creating a Volume Group in the Prism Web Console Guide). After you create a
volume group, select it from the Available Volume Groups list.
Repeat these steps until you have added all the volume groups that you want to attach to the VM.
b. Click Add.

10. To enable flash mode on the VM, click the Enable Flash Mode check box.

» After you enable this feature on the VM, the status is updated in the VM table view. To view the status of
individual virtual disks (disks that are flashed to the SSD), click the update disk icon in the Disks pane in
the Update VM window.

» You can disable the flash mode feature for individual virtual disks. To update the flash mode for individual
virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash Mode check box.

Figure 27: Update VM Resources

Figure 28: Update VM Resources - VM Disk Flash Mode

11. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the
VM.
The deleted VM disappears from the list of VMs in the table.

Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for paravirtual devices that enhance the stability and performance of
virtual machines on AHV.
Nutanix VirtIO is available in two formats:

• To install Windows in a VM on AHV, use the VirtIO ISO.


• To update VirtIO for Windows, use the VirtIO MSI installer file.
Use Nutanix Guest Tools (NGT) to install the Nutanix VirtIO package. For more information about installing the
Nutanix VirtIO package by using NGT, see NGT Installation in the Prism Web Console Guide.

VirtIO Requirements
Requirements for Nutanix VirtIO for Windows.
VirtIO supports the following operating systems:

• Microsoft Windows server version: Windows 2008 R2 or later


• Microsoft Windows client version: Windows 7 or later

Note: On Windows 7 and Windows Server 2008 R2, install Microsoft KB3033929 or update the operating system with
the latest Windows Update to enable support for SHA2 certificates.

Installing or Upgrading Nutanix VirtIO for Windows
Download Nutanix VirtIO and the Nutanix VirtIO Microsoft installer (MSI). The MSI installs and upgrades
the Nutanix VirtIO drivers.

Before you begin


Make sure that your system meets the VirtIO requirements described in VirtIO Requirements on page 92.

About this task


If you have already installed Nutanix VirtIO, use the following procedure to upgrade VirtIO to the latest version. If
you have not yet installed Nutanix VirtIO, use the following procedure to install it.

Procedure

1. Go to the Nutanix Support portal and select Downloads > AHV > VirtIO.

2. Select the appropriate VirtIO package.

» If you are creating a new Windows VM, download the ISO file. The installer is available on the ISO if your
VM does not have Internet access.
» If you are updating drivers in a Windows VM, download the MSI installer file.

Figure 29: Search filter and VirtIO options

3. Run the selected package.

» For the ISO: Upload the ISO to the cluster, as described in Prism Web Console Guide: Configuring Images.
» For the MSI: open the downloaded file to run the MSI installer.

4. Read and accept the Nutanix VirtIO license agreement. Click Install.

Figure 30: Nutanix VirtIO Windows Setup Wizard

The Nutanix VirtIO setup wizard shows a status bar and completes installation.

Manually Installing or Upgrading Nutanix VirtIO


Manually install or upgrade Nutanix VirtIO.

Before you begin


Make sure that your system meets the VirtIO requirements described in VirtIO Requirements on page 92.

About this task

Note: To automatically install Nutanix VirtIO, see Installing or Upgrading Nutanix VirtIO for Windows on
page 93.

If you have already installed Nutanix VirtIO, use the following procedure to upgrade VirtIO to the latest version. If
you have not yet installed Nutanix VirtIO, use the following procedure to install it.

Procedure

1. Go to the Nutanix Support portal and select Downloads > AHV > VirtIO.

2. Do one of the following:

» Download the VirtIO ISO directly to the VM where you want to install Nutanix VirtIO, for easier installation.
If you choose this option, proceed directly to step 7.
» Download the VirtIO ISO for Windows to your local machine.
If you choose this option, proceed to step 3.

3. Upload the ISO to the cluster, as described in the Configuring Images topic of Prism Web Console Guide.

4. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.

5. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.

• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO

6. Click Add.

7. Log on to the VM and browse to Control Panel > Device Manager.

8. Note: Select the x86 subdirectory for 32-bit Windows, or the amd64 subdirectory for 64-bit Windows.

Expand the device categories and locate the Nutanix devices. For each device, right-click, select Update Driver Software, and browse to the drive containing the VirtIO ISO. Follow the wizard instructions for each device until you receive installation confirmation.

a. System Devices > Nutanix VirtIO Balloon Drivers


b. Network Adapter > Nutanix VirtIO Ethernet Adapter.
c. Storage Controllers > Nutanix VirtIO SCSI pass-through Controller
The Nutanix VirtIO SCSI pass-through controller prompts you to restart your system. Restart at any time to
install the controller.

Figure 31: List of Nutanix VirtIO downloads

Creating a Windows VM on AHV with Nutanix VirtIO
Create a Windows VM in AHV, or migrate a Windows VM from a non-Nutanix source to AHV, with the
Nutanix VirtIO drivers.

Before you begin

• Upload the Windows installer ISO to your cluster as described in the Web Console Guide: Configuring Images.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Web Console Guide: Configuring Images.

About this task


To install a new or migrated Windows VM with Nutanix VirtIO, complete the following.

Procedure

1. Log on to the Prism web console using your Nutanix credentials.

2. At the top-left corner, click Home > VM.


The VM page appears.

3. Click + Create VM in the corner of the page.
The Create VM dialog box appears.

Figure 32: Create VM dialog box

4. Complete the indicated fields.

a. NAME: Enter a name for the VM.


b. Description (optional): Enter a description for the VM.
c. Timezone: Select the timezone that you want the VM to use. If you are creating a Linux VM, select (UTC)
UTC.

Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a Linux
VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with the
hardware clock pointing to the desired timezone.

d. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
e. MEMORY: Enter the amount of memory for the VM (in GiBs).

5. If you are creating a Windows VM, add a Windows CD-ROM to the VM.

a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated fields.

• OPERATION: CLONE FROM IMAGE SERVICE


• BUS TYPE: IDE
• IMAGE: Select the Windows OS install ISO.
b. Click Update.
The current CD-ROM opens in a new window.

6. Add the Nutanix VirtIO ISO.

a. Click Add New Disk and complete the indicated fields.

• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO.
b. Click Add.

7. Add a new disk for the hard drive.

a. Click Add New Disk and complete the indicated fields.

• TYPE: DISK
• OPERATION: ALLOCATE ON STORAGE CONTAINER
• BUS TYPE: SCSI
• STORAGE CONTAINER: Select the appropriate storage container.
• SIZE: Enter the number for the size of the hard drive (in GiB).
b. Click Add to add the disk.

8. If you are migrating a VM, create a disk from the disk image.

a. Click Add New Disk and complete the indicated fields.

• TYPE: DISK
• OPERATION: CLONE FROM IMAGE
• BUS TYPE: SCSI
• CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you created
previously.
b. Click Add to add the disk.

9. Optionally, after you have migrated or created a VM, add a network interface card (NIC).

a. Click Add New NIC.


b. In the VLAN ID field, choose the VLAN ID according to network requirements and enter the IP address, if
necessary.
c. Click Add.

10. Click Save.
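A similar VM can also be assembled from aCLI. The following is a sketch only; it assumes common vm.create, vm.disk_create, and vm.nic_create parameters, and the VM name, image names, storage container, and network are all placeholders:
nutanix@cvm$ acli vm.create win2019 num_vcpus=2 num_cores_per_vcpu=2 memory=8G
nutanix@cvm$ acli vm.disk_create win2019 cdrom=true clone_from_image=windows-install-iso
nutanix@cvm$ acli vm.disk_create win2019 cdrom=true clone_from_image=nutanix-virtio-iso
nutanix@cvm$ acli vm.disk_create win2019 create_size=100G container=default-container
nutanix@cvm$ acli vm.nic_create win2019 network=vlan.0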

What to do next
Install Windows by following Installing Windows on a VM on page 101.

Installing Windows on a VM
Install a Windows virtual machine.

Before you begin


Create a Windows VM as described in the Migration Guide: Creating a Windows VM on AHV after Migration.

Procedure

1. Log on to the web console.

2. Click Home > VM to open the VM dashboard.

3. Select the Windows VM.

4. In the center of the VM page, click Power On.

5. Click Launch Console.


The Windows console opens in a new window.

6. Select the desired language, time and currency format, and keyboard information.

7. Click Next > Install Now.


The Windows setup dialog box shows the operating systems to install.

8. Select the Windows OS you want to install.

9. Click Next and accept the license terms.

10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.

11. Choose the Nutanix VirtIO driver.

a. Select the Nutanix VirtIO CD drive.


b. Expand the Windows OS folder and click OK.

Figure 33: Select the Nutanix VirtIO drivers for your OS

The Select the driver to install window appears.

12. Select the VirtIO SCSI driver (vioscsi.inf) and click Next.

Figure 34: Select the Driver for Installing Windows on a VM

The amd64 folder contains drivers for 64-bit operating systems. The x86 folder contains drivers for 32-bit
operating systems.

Note: From Nutanix VirtIO driver version 1.1.5, the driver package contains Windows Hardware Quality Lab
(WHQL) certified driver for Windows.

13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.

14. Enter your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows setup completes installation.

15. Follow the instructions in Installing or Upgrading Nutanix VirtIO for Windows on page 93 to install the other
drivers that are part of the Nutanix VirtIO package.

Windows Defender Credential Guard Support in AHV


AHV enables you to use the Windows Defender Credential Guard security feature on Windows guest VMs.

The Windows Defender Credential Guard feature of Microsoft Windows operating systems allows you to securely isolate user credentials from the rest of the operating system. This protects guest VMs from credential theft attacks such as Pass-the-Hash or Pass-the-Ticket.
See the Microsoft documentation for more information about the Windows Defender Credential Guard security
feature.

Windows Defender Credential Guard Architecture in AHV

Figure 35: Architecture

Windows Defender Credential Guard uses Microsoft virtualization-based security (VBS) to isolate user credentials in a VBS module on AHV. When you enable Windows Defender Credential Guard on an AHV guest VM, the guest VM runs both the Windows OS and the VBS module on top of AHV. Each Windows guest VM that has Credential Guard enabled has its own VBS module in which credentials are securely stored.

Windows Defender Credential Guard Requirements


Ensure the following to enable Windows Defender Credential Guard:
1. AOS, AHV, and Windows Server versions support Windows Defender Credential Guard:

• AOS version must be 5.19 or later


• AHV version must be AHV 20201007.1 or later
• Windows version must be Windows Server 2016 or later, or Windows 10 Enterprise or later
2. UEFI, Secure Boot, and machine type q35 are enabled in the Windows VM from AOS.
The Prism Element workflow to enable Windows Defender Credential Guard includes the workflow to enable
these features.

Limitations
If you enable Windows Defender Credential Guard for your AHV guest VMs, the following optional configurations
are not supported:

• vTPM support is not available to store Microsoft policies.


• Secure Boot and DMA protection (vIOMMU) is not supported.

• Nutanix Live Migration is not supported.
• Cross hypervisor DR of Credential Guard VMs is not supported.

CAUTION: Use of Windows Defender Credential Guard in your AHV clusters impacts VM performance. If you
enable Windows Defender Credential Guard on AHV guest VMs, VM density drops by ~15–20%. This expected
performance impact is due to nested virtualization overhead added as a result of enabling credential guard.

Enabling Windows Defender Credential Guard Support in AHV Guest VMs


You can enable Windows Defender Credential Guard when you are either creating a VM or updating a VM.

About this task


Perform the following procedure to enable Windows Defender Credential Guard:

Procedure

1. Enable Windows Defender Credential Guard when you are either creating a VM or updating a VM. Do one of the
following:

» If you are creating a VM, see step 2.


» If you are updating a VM, see step 3.

2. If you are creating a Windows VM, do the following:

a. Log on to the Prism Element web console.


b. In the VM dashboard, click Create VM.
c. Fill in the mandatory fields to configure a VM.
d. Under Boot Configuration, select UEFI, and then select the Secure Boot and Windows Defender
Credential Guard options.

Figure 36: Enable Windows Defender Credential Guard

See UEFI Support for VM on page 111 and Secure Boot Support for VMs on page 116 for more
information about these features.
e. Proceed to configure other attributes for your Windows VM.
See Creating a Windows VM on AHV with Nutanix VirtIO on page 98 for more information.
f. Click Save.
g. Turn on the VM.

3. If you are updating an existing VM, do the following:

a. Log on to the Prism Element web console.


b. In the VM dashboard, click the Table view, select the VM, and click Update.
c. Under Boot Configuration, select UEFI, and then select the Secure Boot and Windows Defender
Credential Guard options.

Note:
If the VM is configured to use BIOS, install the guest OS again.
If the VM is already configured to use UEFI, skip the step to select Secure Boot.

See UEFI Support for VM on page 111 and Secure Boot Support for VMs on page 116 for more
information about these features.
d. Click Save.
e. Turn on the VM.

4. Enable Windows Defender Credential Guard in the Windows VM by using group policy.
See the Enable Windows Defender Credential Guard by using the Group Policy procedure of the Manage
Windows Defender Credential Guard topic in the Microsoft documentation to enable VBS, Secure Boot, and
Windows Defender Credential Guard for the Windows VM.

5. Open command prompt in the Windows VM and apply the Group Policy settings:
> gpupdate /force
If you have not enabled Windows Defender Credential Guard (step 4) and perform this step (step 5), a warning
similar to the following is displayed:
Updating policy...

Computer Policy update has completed successfully.

The following warnings were encountered during computer policy processing:

Windows failed to apply the {F312195E-3D9D-447A-A3F5-08DFFA24735E} settings.


{F312195E-3D9D-447A-A3F5-08DFFA24735E} settings might have its own log file. Please click on
the "More information" link.
User Policy update has completed successfully.

For more detailed information, review the event log or run GPRESULT /H GPReport.html from the
command line to access information about Group Policy results.
Event Viewer displays a warning for the group policy with an error message that indicates Secure Boot is not
enabled on the VM.
To view the warning message in Event Viewer, do the following:

• In the Windows VM, open Event Viewer.


• Go to Windows Logs -> System and click the warning with the Source as GroupPolicy (Microsoft-
Windows-GroupPolicy) and Event ID as 1085.

Figure 37: Warning in Event Viewer

Note: Ensure that you follow the steps in the order that is stated in this document to successfully enable Windows
Defender Credential Guard.

6. Restart the VM.

7. Verify if Windows Defender Credential Guard is enabled in your Windows VM.

a. Start a Windows PowerShell terminal.


b. Run the following command.
PS > Get-CimInstance -ClassName Win32_DeviceGuard -Namespace 'root\Microsoft\Windows\DeviceGuard'
An output similar to the following is displayed.
AvailableSecurityProperties : {1, 2, 3, 5}
CodeIntegrityPolicyEnforcementStatus : 0
InstanceIdentifier : 4ff40742-2649-41b8-bdd1-e80fad1cce80
RequiredSecurityProperties : {1, 2}
SecurityServicesConfigured : {1}
SecurityServicesRunning : {1}
UsermodeCodeIntegrityPolicyEnforcementStatus : 0
Version : 1.0
VirtualizationBasedSecurityStatus : 2
PSComputerName :
Confirm that both SecurityServicesConfigured and SecurityServicesRunning have the value {1}.
Alternatively, you can verify if Windows Defender Credential Guard is enabled by using System Information
(msinfo32):

a. In the Windows VM, open System Information by typing msinfo32 in the search field next to the Start
menu.
b. Verify that the values of the parameters are as indicated in the following screenshot:

Figure 38: Verify Windows Defender Credential Guard

Affinity Policies for AHV


As an administrator of an AHV cluster, you can specify scheduling policies for virtual machines on an AHV cluster.
By defining these policies, you can control placement of the virtual machines on the hosts within a cluster.
You can define two types of affinity policies.

VM-Host Affinity Policy
The VM-host affinity policy controls the placement of the VMs. You can use this policy to specify that a selected
VM can only run on the members of the affinity host list. This policy checks and enforces where a VM can be hosted
when you power on or migrate the VM.

Note:

• If you choose to apply the VM-host affinity policy, it limits Acropolis HA and Acropolis Dynamic
Scheduling (ADS) in such a way that a virtual machine cannot be powered on or migrated to a host that
does not conform to the requirements of the affinity policy, because this policy is enforced mandatorily.
• The VM-host anti-affinity policy is not supported.
• VMs configured with host affinity settings retain these settings if the VMs are migrated to a new cluster.
Do not protect such VMs. Attempts to protect such VMs succeed, but some disaster recovery operations,
such as migration, fail, and attempts to power on these VMs also fail.

You can define the VM-host affinity policies by using Prism Element during the VM create or update operation. For
more information, see Creating a VM (AHV) in Prism Web Console Guide or AHV Administration Guide.

VM-VM Anti-Affinity Policy


You can use this policy to specify anti-affinity between virtual machines. The VM-VM anti-affinity policy keeps the specified virtual machines apart so that when a problem occurs on one host, you do not lose both virtual machines. However, this is a preferential policy; it does not prevent the Acropolis Dynamic Scheduling (ADS) feature from taking necessary action in case of resource constraints.

Note:

• Currently, you can only define VM-VM anti-affinity policy by using aCLI. For more information, see
Configuring VM-VM Anti-Affinity Policy on page 109.
• The VM-VM affinity policy is not supported.

Note: If you clone a VM that has affinity policies configured, the policies are not automatically applied to the cloned VM. However, if a VM is restored from a DR snapshot, the policies are automatically applied to the VM.

Limitations of Affinity Rules


Even if a host is removed from a cluster, its UUID is not removed from the host-affinity list of a VM.

Configuring VM-VM Anti-Affinity Policy


To configure VM-VM anti-affinity policies, you must first define a group and then add all the VMs on which
you want to define VM-VM anti-affinity policy.

About this task

Note: Currently, the VM-VM affinity policy is not supported.

Perform the following procedure to configure the VM-VM anti-affinity policy.

Procedure

1. Log on to a Controller VM with SSH.

2. Create a group.
nutanix@cvm$ acli vm_group.create group_name
Replace group_name with the name of the group.

3. Add the VMs on which you want to define anti-affinity to the group.
nutanix@cvm$ acli vm_group.add_vms group_name vm_list=vm_name
Replace group_name with the name of the group. Replace vm_name with the name of the VMs that you want to
define anti-affinity on.

4. Configure VM-VM anti-affinity policy.


nutanix@cvm$ acli vm_group.antiaffinity_set group_name
Replace group_name with the name of the group.
After you configure the group and power on the VMs, the VMs that are part of the group are started (or attempt to start) on different hosts. However, this is a preferential policy; it does not prevent the Acropolis Dynamic Scheduling (ADS) feature from taking necessary action in case of resource constraints.
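Putting the steps together, the following sketch keeps two hypothetical web servers on different hosts; it assumes that vm_list accepts a comma-separated list of VM names:
nutanix@cvm$ acli vm_group.create web-tier
nutanix@cvm$ acli vm_group.add_vms web-tier vm_list=web1,web2
nutanix@cvm$ acli vm_group.antiaffinity_set web-tier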

Removing VM-VM Anti-Affinity Policy


Perform the following procedure to remove the VM-VM anti-affinity policy.

Procedure

1. Log on to a Controller VM with SSH.

2. Remove the VM-VM anti-affinity policy.


nutanix@cvm$ acli vm_group.antiaffinity_unset group_name
Replace group_name with the name of the group.
The VM-VM anti-affinity policy is removed for the VMs that are present in the group, and they can start on any
host during the next power on operation (as necessitated by the ADS feature).

Performing Power Operations on VMs by Using Nutanix Guest Tools (aCLI)
You can initiate safe and graceful power operations such as soft shutdown and restart of the VMs running
on the AHV hosts by using the aCLI. Nutanix Guest Tools (NGT) initiates and performs the soft shutdown
and restart operations within the VM. This workflow ensures a safe and graceful shutdown or restart of the
VM. You can create a pre-shutdown script that you can choose to run before a shutdown or restart of the
VM. In the pre-shutdown script, include any tasks or checks that you want to run before a VM is shut down
or restarted. You can choose to cancel the power operation if the pre-shutdown script fails. If the script
fails, an alert (guest_agent_alert) is generated in the Prism web console.

Before you begin


Ensure that you have met the following prerequisites before you initiate the power operations:
1. NGT is enabled on the VM. All operating systems that NGT supports are supported for this feature.
2. NGT version running on the Controller VM and guest VM is the same.

3. (Optional) If you want to run a pre-shutdown script, place the script in the following locations depending on your
VMs:

• Windows VMs: installed_dir\scripts\power_off.bat


The file name of the script must be power_off.bat.
• Linux VMs: installed_dir/scripts/power_off
The file name of the script must be power_off.
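As an illustration, a minimal Linux pre-shutdown script might stop an application service and signal failure so that the power operation can be canceled. This is a sketch only, and the service name is hypothetical:
#!/bin/sh
# Stop the application before the guest shuts down or restarts.
# A nonzero exit code marks the script as failed, which cancels the
# power operation when fail_on_script_failure=true.
systemctl stop myapp || exit 1
exit 0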

About this task

Note: You can also perform these power operations by using the V3 API calls. For more information, see
developer.nutanix.com.

Perform the following steps to initiate the power operations:

Procedure

1. Log on to a Controller VM with SSH.

2. Do one of the following:

» Soft shut down the VM.


nutanix@cvm$ acli vm.guest_shutdown vm_name enable_script_exec=[true or false]
fail_on_script_failure=[true or false]
Replace vm_name with the name of the VM.
» Restart the VM.
nutanix@cvm$ acli vm.guest_reboot vm_name enable_script_exec=[true or false]
fail_on_script_failure=[true or false]
Replace vm_name with the name of the VM.
Set the value of enable_script_exec to true to run your pre-shutdown script and set the value of
fail_on_script_failure to true to cancel the power operation if the pre-shutdown script fails.
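For example, to soft shut down a hypothetical VM named finance-db, run the pre-shutdown script, and cancel the shutdown if the script fails:
nutanix@cvm$ acli vm.guest_shutdown finance-db enable_script_exec=true fail_on_script_failure=true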

UEFI Support for VM


UEFI firmware is a successor to legacy BIOS firmware. It supports larger hard drives, boots faster, and provides more security features.
VMs with UEFI firmware have the following advantages:

• Boot faster
• Avoid legacy option ROM address constraints
• Include robust reliability and fault management
• Use UEFI drivers

Note:

• Nutanix supports the starting of VMs with UEFI firmware in an AHV cluster. However, if a VM is
added to a protection domain and later restored on a different cluster, the VM loses boot configuration.
To restore the lost boot configuration, see Setting up Boot Device.

• Nutanix also provides limited support for VMs migrated from a Hyper-V cluster.

You can create or update VMs with UEFI firmware by using acli commands, Prism Element web console, or Prism
Central web console. For more information about creating a VM by using the Prism Element web console or Prism
Central web console, see Creating a VM (AHV) on page 70. For information about creating a VM by using aCLI,
see Creating UEFI VMs by Using aCLI on page 112.

Note: If you are creating a VM by using aCLI commands, you can define the location of the storage container for
UEFI firmware and variables. Prism Element web console or Prism Central web console does not provide the option to
define the storage container to store UEFI firmware and variables.

For more information about the supported OSes for the guest VMs, see the AHV Guest OS section in the
Compatibility Matrix document.

Creating UEFI VMs by Using aCLI


In AHV clusters, you can create a virtual machine (VM) to start with UEFI firmware by using Acropolis CLI
(aCLI). This topic describes the procedure to create a VM by using aCLI. See the "Creating a VM (AHV)"
topic for information about how to create a VM by using the Prism Element web console.

Before you begin


Ensure that the VM has an empty vDisk.

About this task


Perform the following procedure to create a UEFI VM by using aCLI:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. Create a UEFI VM.

nutanix@cvm$ acli vm.create vm-name uefi_boot=true


A VM is created with UEFI firmware. Replace vm-name with a name of your choice for the VM. By default, the
UEFI firmware and variables are stored in an NVRAM container. If you would like to specify a location of the
NVRAM storage container to store the UEFI firmware and variables, do so by running the following command.
nutanix@cvm$ acli vm.create vm-name uefi_boot=true nvram_container=NutanixManagementShare
Replace NutanixManagementShare with a storage container in which you want to store the UEFI variables.
The UEFI variables are stored in a default NVRAM container. Nutanix recommends that you choose a storage container with at least an RF2 storage policy to ensure VM high availability in node failure scenarios. For more information about the RF2 storage policy, see Failure and Recovery Scenarios in the Prism Web Console Guide.

Note: When you update the location of the storage container, clear the UEFI configuration and then set
nvram_container to the container of your choice.
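To confirm the configuration, you can inspect the VM from aCLI. This sketch assumes the vm.get subcommand and that the firmware setting is reported as a uefi_boot field in the output; the VM name is a placeholder:
nutanix@cvm$ acli vm.get uefi-vm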

What to do next
Go to the UEFI BIOS menu and configure the UEFI firmware settings. For more information about
accessing and setting the UEFI firmware, see Getting Familiar with UEFI Firmware Menu on page 113.

Getting Familiar with UEFI Firmware Menu
After you launch a VM console from the Prism Element web console, the UEFI firmware menu allows you to do the
following tasks for the VM.

• Changing the default boot resolution

• Setting up the boot device
• Changing the boot time-out value

Changing Boot Resolution


You can change the default boot resolution of your Windows VM from the UEFI firmware menu.

Before you begin


Ensure that the VM is powered on.

About this task


Perform the following procedure to change the default boot resolution of your Windows VM by using the
UEFI firmware menu.

Procedure

1. Log on to the Prism Element web console.

2. Launch the console for the VM.


For more details about launching the console for the VM, see the Managing a VM (AHV) section in the Prism Web Console Guide.

3. To go to the UEFI firmware menu, press the F2 key on your keyboard.

Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box, and
immediately press F2 when the VM starts to boot.

Important: Resetting the VM causes a downtime. We suggest that you reset the VM only during off-production
hours or during a maintenance period.

Figure 39: UEFI Firmware Menu

4. Use the up or down arrow key to go to Device Manager and press Enter.
The Device Manager page appears.

5. In the Device Manager screen, use the up or down arrow key to go to OVMF Platform Configuration and
press Enter.

Figure 40: OVMF Settings

The OVMF Settings page appears.

6. In the OVMF Settings page, use the up or down arrow key to go to the Change Preferred field and use the
right or left arrow key to increase or decrease the boot resolution.
The default boot resolution is 1280x1024.

7. Do one of the following.

» To save the changed resolution, press the F10 key.


» To go back to the previous screen, press the Esc key.

8. Select Reset and click Submit in the Power off/Reset dialog box to restart the VM.
After you restart the VM, the OS displays the changed resolution.

Setting up Boot Device


You cannot set the boot order for UEFI VMs by using the aCLI, Prism Central web console, or Prism
Element web console. You can change the boot device for a UEFI VM by using the UEFI firmware menu.

Before you begin


Ensure that the VM is powered on.

Procedure

1. Log on to the Prism Element web console.

2. Launch the console for the VM.


For more details about launching the console for the VM, see the Managing a VM (AHV) section in the Prism Web Console Guide.

3. To go to the UEFI firmware menu, press the F2 key on your keyboard.

Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box, and
immediately press F2 when the VM starts to boot.

Important: Resetting the VM causes a downtime. We suggest that you reset the VM only during off-production
hours or during a maintenance period.

4. Use the up or down arrow key to go to Boot Manager and press Enter.
The Boot Manager screen displays the list of available boot devices in the cluster.

Figure 41: Boot Manager

5. In the Boot Manager screen, use the up or down arrow key to select the boot device and press Enter.
The boot device is saved. After you select and save the boot device, the VM boots up with the new boot device.

6. To go back to the previous screen, press Esc.

Changing Boot Time-Out Value


The boot time-out value determines how long (in seconds) the boot menu is displayed before the default boot entry is loaded. This topic describes the procedure to change the default boot time-out value of 0 seconds.

About this task


Ensure that the VM is powered on.

Procedure

1. Log on to the Prism Element web console.

2. Launch the console for the VM.


For more details about launching the console for the VM, see the Managing a VM (AHV) section in the Prism Web Console Guide.

3. To go to the UEFI firmware menu, press the F2 key on your keyboard.

Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box, and
immediately press F2 when the VM starts to boot.

Important: Resetting the VM causes a downtime. We suggest that you reset the VM only during off-production
hours or during a maintenance period.

4. Use the up or down arrow key to go to Boot Maintenance Manager and press Enter.

Figure 42: Boot Maintenance Manager

5. In the Boot Maintenance Manager screen, use the up or down arrow key to go to the Auto Boot Time-out
field.
The default boot time-out value is 0 seconds.

6. In the Auto Boot Time-out field, enter the boot time-out value and press Enter.

Note: The valid boot time-out value ranges from 1 to 9 seconds.

The boot time-out value is changed. The VM starts after the defined time-out elapses.

7. To go back to the previous screen, press Esc.

Secure Boot Support for VMs


The pre-operating-system environment is vulnerable to attacks by malicious boot loaders. Secure Boot addresses this vulnerability by using policies and certificates present in the UEFI firmware to ensure that only properly signed and authenticated components are allowed to execute.

Supported Operating Systems


For more information about the supported OSes for the guest VMs, see the AHV Guest OS section in the
Compatibility Matrix document.

Secure Boot Considerations


This section provides the limitations and requirements to use Secure Boot.

Limitations
Secure Boot for guest VMs has the following limitations:

• Nutanix does not support converting a VM that uses IDE disks or legacy BIOS to a VM that uses Secure Boot.
• The minimum supported version of the Nutanix VirtIO package for Secure boot-enabled VMs is 1.1.6.

Requirements
Following are the requirements for Secure Boot:

• Secure Boot is supported only on the Q35 machine type.

Creating/Updating a VM with Secure Boot Enabled


You can enable Secure Boot with UEFI firmware, either while creating a VM or while updating a VM by using aCLI
commands or Prism Element web console.
See Creating a VM (AHV) on page 70 for instructions about how to enable Secure Boot by using the Prism
Element web console.

Creating a VM with Secure Boot Enabled

About this task


To create a VM with Secure Boot enabled:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. To create a VM with Secure Boot enabled:


nutanix@cvm$ acli vm.create <vm_name> secure_boot=true machine_type=q35

Note: Specifying the machine type is required to enable the secure boot feature. UEFI is enabled by default when
the Secure Boot feature is enabled.

Updating a VM to Enable Secure Boot

About this task


To update a VM to enable Secure Boot:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. To update a VM to enable Secure Boot, ensure that the VM is powered off.


nutanix@cvm$ acli vm.update <vm_name> secure_boot=true machine_type=q35

Note:

• If you disable the Secure Boot flag alone, the machine type remains q35 unless you change the machine
type explicitly.
• UEFI is enabled by default when the Secure Boot feature is enabled. Disabling Secure Boot does not
revert the UEFI flags.

Virtual Machine Network Management
Virtual machine network management involves configuring connectivity for guest VMs through virtual switches,
VLANs, and VPCs.
For information about creating or updating a virtual switch and other VM network options, see the Network section in the Prism Central Guide. Virtual switch creation and updates are also covered in the Prism Web Console Guide.

Virtual Machine Memory and CPU Hot-Plug Configurations


Memory and CPUs are hot-pluggable on guest VMs running on AHV. You can increase the memory allocation and
the number of CPUs on your VMs while the VMs are powered on. You can change the number of vCPUs (sockets)
while the VMs are powered on. However, you cannot change the number of cores per socket while the VMs are
powered on.

Note: You cannot decrease the memory allocation and the number of CPUs on your VMs while the VMs are powered
on.

You can change the memory and CPU configuration of your VMs by using the Acropolis CLI (aCLI), Prism Element
(see Managing a VM (AHV) in the Prism Web Console Guide), or Prism Central (see Managing a VM (AHV and Self
Service) in the Prism Central Guide).
See the AHV Guest OS Compatibility Matrix for information about operating systems on which you can hot plug
memory and CPUs.

Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the memory is
not online, you cannot use the new memory. Perform the following procedure to bring the memory online (a
scripted version appears after this list).
1. Identify the memory blocks that are offline.
Display the state of a specific memory block.
$ cat /sys/devices/system/memory/memoryXXX/state
Display the state of all memory blocks.
$ grep line /sys/devices/system/memory/*/state

2. Bring the memory online.
$ echo online > /sys/devices/system/memory/memoryXXX/state

2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot-plugging more memory so that
the final amount is greater than 3 GB results in a memory-overflow condition. To resolve the issue, restart
the guest OS (CentOS 7.2) with the following kernel setting:
swiotlb=force
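The following loop is a minimal sketch that brings every offline memory block online in one pass; it assumes root privileges inside the guest:
# Bring all offline hot-plugged memory blocks online.
for block in /sys/devices/system/memory/memory*/state; do
  grep -q offline "$block" && echo online > "$block"
done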

CPU OS Limitation
On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might have to
bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Replace <n> with the number of the hot-plugged CPU.
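A similar loop brings all hot-plugged CPUs online in one pass; it is a sketch that assumes root privileges inside the guest (writing 1 to an already-online CPU is harmless):
# Bring all hot-plugged CPUs online.
for cpu in /sys/devices/system/cpu/cpu*/online; do
  echo 1 > "$cpu"
done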

Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)

About this task


Perform the following procedure to hot plug the memory and CPUs on the AHV VMs.

Procedure

1. Log on to a Controller VM with SSH.

2. Update the memory allocation for the VM.


nutanix@cvm$ acli vm.update vm-name memory=new_memory_size
Replace vm-name with the name of the VM and new_memory_size with the memory size.

3. Update the number of CPUs on the VM.


nutanix@cvm$ acli vm.update vm-name num_vcpus=n
Replace vm-name with the name of the VM and n with the number of CPUs.

Note: After you upgrade from a hot-plug unsupported version to a hot-plug supported version, you must power
cycle any VM that was instantiated and powered on before the upgrade so that it becomes compatible with the
memory and CPU hot-plug feature. This power cycle has to be done only once after the upgrade. New VMs created
on the supported version have hot-plug compatibility by default.
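For example, to grow a hypothetical running VM named app-vm to 16 GiB of memory and 8 vCPUs:
nutanix@cvm$ acli vm.update app-vm memory=16G
nutanix@cvm$ acli vm.update app-vm num_vcpus=8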

Virtual Machine Memory Management (vNUMA)


AHV hosts support Virtual Non-uniform Memory Access (vNUMA) on virtual machines. You can enable vNUMA
on VMs when you create or modify the VMs to optimize memory performance.

Non-uniform Memory Access (NUMA)


In a NUMA topology, memory access times of a VM are dependent on the memory location relative to a processor.
A VM accesses memory local to a processor faster than non-local memory. You can achieve optimal resource
utilization if both CPU and memory from the same physical NUMA node are used. Memory latency is introduced if
you are running the CPU on one NUMA node (for example, node 0) and the VM accesses the memory from another
node (node 1). Ensure that the virtual topology of VMs matches the physical hardware topology to achieve minimum
memory latency.

Virtual Non-uniform Memory Access (vNUMA)


vNUMA optimizes memory performance of virtual machines that require more vCPUs or memory than the capacity
of a single physical NUMA node. In a vNUMA topology, you can create multiple vNUMA nodes where each
vNUMA node includes vCPUs and virtual RAM. When you assign a vNUMA node to a physical NUMA node, the
vCPUs can intelligently determine the memory latency (high or low). Low memory latency within a vNUMA node
results in low latency within a physical NUMA node.

Enabling vNUMA on Virtual Machines

Before you begin


Before you enable vNUMA, see AHV Best Practices Guide under Solutions Documentation.

About this task


Perform the following procedure to enable vNUMA on your VMs running on the AHV hosts.

Procedure

1. Log on to a Controller VM with SSH.

2. Check how many NUMA nodes are available on each AHV host in the cluster.
nutanix@cvm$ hostssh "numactl --hardware"
An output similar to the following is displayed:
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 128837 MB
node 0 free: 862 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 129021 MB
node 1 free: 352 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128859 MB
node 0 free: 1076 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129000 MB
node 1 free: 436 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128859 MB
node 0 free: 701 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129000 MB
node 1 free: 357 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128838 MB
node 0 free: 1274 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129021 MB
node 1 free: 424 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128837 MB
node 0 free: 577 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129021 MB
node 1 free: 612 MB
node distances:

node 0 1
0: 10 21
1: 21 10
The example output shows that each AHV host has two NUMA nodes.

3. Do one of the following:

» Enable vNUMA if you are creating a VM.


nutanix@cvm$ acli vm.create <vm_name> num_vcpus=x \
num_cores_per_vcpu=x memory=xG \
num_vnuma_nodes=x

» Enable vNUMA if you are modifying an existing VM.


nutanix@cvm$ acli vm.update <vm_name> \
num_vnuma_nodes=x

Replace <vm_name> with the name of the VM on which you want to enable vNUMA. Replace x with the values for the following parameters:

• num_vcpus: Type the number of vCPUs for the VM.


• num_cores_per_vcpu: Type the number of cores per vCPU.
• memory: Type the memory in GB for the VM.
• num_vnuma_nodes: Type the number of vNUMA nodes for the VM.
For example:
nutanix@cvm$ acli vm.create test_vm num_vcpus=20 memory=150G num_vnuma_nodes=2
This command creates a VM with 2 vNUMA nodes, 10 vCPUs and 75 GB memory for each vNUMA node.

GPU and vGPU Support


AHV supports GPU-accelerated computing for guest VMs. You can configure either GPU pass-through or a virtual
GPU.

Note: You can configure either pass-through or a vGPU for a guest VM but not both.

This guide describes the concepts related to the GPU and vGPU support in AHV. For the configuration procedures,
see the Prism Web Console Guide.
For driver installation instructions, see the NVIDIA Grid Host Driver for Nutanix AHV Installation Guide.

Supported GPUs
The following GPUs are supported:

Note: These GPUs are supported only by the AHV version that is bundled with the AOS release.

• NVIDIA® Tesla® M10


• NVIDIA® Tesla® M60
• NVIDIA® Tesla® P40
• NVIDIA® Tesla® P100
• NVIDIA® Tesla® P4

• NVIDIA® Tesla® V100 16 GB
• NVIDIA® Tesla® V100 32 GB
• NVIDIA® Tesla® V100S 32 GB
• NVIDIA® Tesla® T4 16 GB
• NVIDIA® Quadro® RTX 6000
• NVIDIA® Quadro® RTX 8000

GPU Pass-Through for Guest VMs


AHV hosts support GPU pass-through for guest VMs, allowing applications on VMs direct access to GPU resources.
The Nutanix user interfaces provide a cluster-wide view of GPUs, allowing you to allocate any available GPU to a
VM. You can also allocate multiple GPUs to a VM. However, in a pass-through configuration, only one VM can use
a GPU at any given time.

Host Selection Criteria for VMs with GPU Pass-Through


When you power on a VM with GPU pass-through, the VM is started on the host that has the specified GPU,
provided that the Acropolis Dynamic Scheduler determines that the host has sufficient resources to run the VM. If
the specified GPU is available on more than one host, the Acropolis Dynamic Scheduler ensures that a host with
sufficient resources is selected. If sufficient resources are not available on any host with the specified GPU, the VM is
not powered on.
If you allocate multiple GPUs to a VM, the VM is started on a host if, in addition to satisfying Acropolis Dynamic
Scheduler requirements, the host has all of the GPUs that are specified for the VM.
If you want a VM to always use a GPU on a specific host, configure host affinity for the VM.
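Host affinity can also be configured from aCLI. This is a sketch only; it assumes the vm.affinity_set subcommand and its host_list parameter, and the VM and host names are placeholders:
nutanix@cvm$ acli vm.affinity_set gpu-vm host_list=ahv-host-1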

Support for Graphics and Compute Modes


AHV supports running GPU cards in either graphics mode or compute mode. If a GPU is running in compute mode,
Nutanix user interfaces indicate the mode by appending the string compute to the model name. No string is appended
if a GPU is running in the default graphics mode.

Switching Between Graphics and Compute Modes


If you want to change the mode of the firmware on a GPU, put the host in maintenance mode, and then flash the GPU
manually by logging on to the AHV host and performing standard procedures as documented for Linux VMs by the
vendor of the GPU card.
Typically, you restart the host immediately after you flash the GPU. After restarting the host, redo the GPU
configuration on the affected VM, and then start the VM. For example, consider that you want to re-flash an NVIDIA
Tesla® M60 GPU that is running in graphics mode. The Prism web console identifies the card as an NVIDIA Tesla
M60 GPU. After you re-flash the GPU to run in compute mode and restart the host, redo the GPU configuration on the
affected VMs by adding back the GPU, which is now identified as an NVIDIA Tesla M60.compute GPU, and then
start the VM.

Supported GPU Cards


For a list of supported GPUs, see Supported GPUs on page 121.

Limitations
GPU pass-through support has the following limitations:

• Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is necessary when the
BIOS, BMC, and the hypervisor on the host are being upgraded. During these upgrades, VMs that have a GPU
configuration are powered off and then powered on automatically when the node is back up.

• VM pause and resume are not supported.
• You cannot hot add VM memory if the VM is using a GPU.
• Hot add and hot remove support is not available for GPUs.
• You can change the GPU configuration of a VM only when the VM is turned off.
• The Prism web console does not support console access for VMs that are configured with GPU pass-through.
Before you configure GPU pass-through for a VM, set up an alternative means to access the VM. For example,
enable remote access over RDP.
Removing GPU pass-through from a VM restores console access to the VM through the Prism web console.

Configuring GPU Pass-Through


For information about configuring GPU pass-through for guest VMs, see Creating a VM (AHV) in the "Virtual
Machine Management" chapter of the Prism Web Console Guide.

NVIDIA GRID Virtual GPU Support on AHV


AHV supports NVIDIA GRID technology, which enables multiple guest VMs to use the same physical GPU
concurrently. Concurrent use is made possible by dividing a physical GPU into discrete virtual GPUs (vGPUs) and
allocating those vGPUs to guest VMs. Each vGPU is allocated a fixed range of the physical GPU's frame buffer and uses all the GPU processing cores in a time-sliced manner.
Virtual GPUs are of different types (vGPU types are also called vGPU profiles) and differ by the amount of physical
GPU resources allocated to them and the class of workload that they target. The number of vGPUs into which a single
physical GPU can be divided therefore depends on the vGPU profile that is used on a physical GPU.
Each physical GPU supports more than one vGPU profile, but a physical GPU cannot run multiple vGPU profiles
concurrently. After a vGPU of a given profile is created on a physical GPU (that is, after a vGPU is allocated to a
VM that is powered on), the GPU is restricted to that vGPU profile until it is freed up completely. To understand this
behavior, consider that you configure a VM to use an M60-1Q vGPU. When the VM is powering on, it is allocated
an M60-1Q vGPU instance only if a physical GPU that supports M60-1Q is either unused or already running the
M60-1Q profile and can accommodate the requested vGPU.
If an entire physical GPU that supports M60-1Q is free at the time the VM is powering on, an M60-1Q vGPU
instance is created for the VM on the GPU, and that profile is locked on the GPU. In other words, until the physical
GPU is completely freed up again, only M60-1Q vGPU instances can be created on that physical GPU (that is, only
VMs configured with M60-1Q vGPUs can use that physical GPU).

Note: NVIDIA does not support Windows Guest VMs on the C-series NVIDIA vGPU types. See the NVIDIA
documentation on Virtual GPU software for more information.

vGPU Profile Licensing


vGPU profiles are licensed through an NVIDIA GRID license server. The choice of license depends on the type of
vGPU that the applications running on the VM require. Licenses are available in various editions, and the vGPU
profile that you want might be supported by more than one license edition.

Note: If the specified license is not available on the licensing server, the VM starts up and functions normally, but the
vGPU runs with reduced capability.

You must determine the vGPU profile that the VM requires, install an appropriate license on the licensing server, and
configure the VM to use that license and vGPU type. For information about licensing for different vGPU types, see
the NVIDIA GRID licensing documentation.

Guest VMs check out a license over the network from the licensing server when starting up and return the license when shutting down. When a license is checked back in, the vGPU is returned to the vGPU resource pool.
When powered on, guest VMs use a vGPU in the same way that they use a physical GPU that is passed through.

Supported GPU Cards


For a list of supported GPUs, see Supported GPUs on page 121.

High Availability Support for VMs with vGPUs


Nutanix conditionally supports high availability (HA) of VMs that have NVIDIA GRID vGPUs configured. You can
restart a VM with vGPUs on another (failover) host which has compatible or identical vGPU resources available. The
vGPU profile available on the failover host must be identical to the vGPU profile configured on the VM that needs
HA. The cluster does not reserve any specific resources to guarantee high availability for VMs with vGPUs. The system attempts to restart the VM after a failure event. If the failover host has insufficient memory or vGPU resources for the VM to start, the VM fails to start after failover.
The following conditions are applicable to HA of VMs with vGPUs:

• Memory is not reserved for the VM on the failover host by the HA process. When the VM fails over, if sufficient
memory is not available, the VM cannot power on.
• vGPU resource is not reserved on the failover host. When the VM fails over, if the required vGPU resources are
not available on the failover host, the VM cannot power on.

Limitations for vGPU Support


vGPU support on AHV has the following limitations:

• You cannot hot-add memory to VMs that have a vGPU.


• The Prism web console does not support console access for VMs that are configured with a vGPU. Before you add
a vGPU to a VM, set up an alternative means to access the VM. For example, enable remote access over RDP.
Removing a vGPU from a VM restores console access to the VM through the Prism web console.

Console Support for VMs with vGPU


Like other VMs, you can access VMs with vGPUs through the console. You can enable or disable console support only for a VM that has a single vGPU configured; enabling console support for a VM with multiple vGPUs is not supported. By default, console support for a VM with vGPUs is disabled.
You cannot recover vGPU console-enabled guest VMs seamlessly. When you perform DR of vGPU console-enabled guest VMs, the VMs recover with the default VGA console (without any alert) instead of the vGPU console. The guest VMs fail to recover when you perform cross-hypervisor disaster recovery (CHDR).
For more information about configuring this support, see the topic about enabling or disabling console support for vGPU VMs in this guide.

ADS support for VMs with vGPUs


AHV supports Acropolis Dynamic Scheduling (ADS) for VMs with vGPUs.

Note: ADS support requires that live migration of VMs with vGPUs be operational in the cluster. See Live Migration of VMs with Virtual GPUs on page 132 for the minimum NVIDIA and AOS versions that support live migration of VMs with vGPUs.

When a number of VMs with vGPUs are running on a host and you enable ADS support for the cluster, the Lazan
manager invokes VM migration tasks to resolve resource hotspots or fragmentation in the cluster to power on
incoming vGPU VMs. The Lazan manager can migrate vGPU-enabled VMs to other hosts in the cluster only if:

• The other hosts support compatible or identical vGPU resources as the source host (hosting the vGPU-enabled
VMs).
• The host affinity is not set for the vGPU-enabled VM.
For more information about limitations, see Live Migration of VMs with Virtual GPUs on page 132 and
Limitations of Live Migration Support on page 132.
For more information about ADS, see Acropolis Dynamic Scheduling in AHV on page 5.

NVIDIA Grid Host Drivers and License Installation


To enable guest VMs to use vGPUs on AHV, you must install NVIDIA drivers on the guest VMs, install the NVIDIA
GRID host driver on the hypervisor, and set up an NVIDIA GRID License Server.
See the NVIDIA Grid Host Driver for Nutanix AHV Installation Guide for details about the workflow to enable guest
VMs to use vGPUs on AHV and the NVIDIA GRID host driver installation instructions.

Multiple Virtual GPU Support


Prism Central and Prism Element Web Console can deploy VMs with multiple virtual GPU instances. This support
harnesses the capabilities of NVIDIA GRID virtual GPU (vGPU) support for multiple vGPU instances for a single
VM.

Note: Multiple vGPUs on the same VM are supported on NVIDIA Virtual GPU software version 10.1 (440.53) or
later.

You can deploy virtual GPUs of different types. A single physical GPU can be divided into a number of vGPUs, depending on the type of vGPU profile that is used on the physical GPU. Each physical GPU on a GPU board
supports more than one type of vGPU profile. For example, a Tesla® M60 GPU device provides different types of
vGPU profiles like M60-0Q, M60-1Q, M60-2Q, M60-4Q, and M60-8Q.
You can only add multiple vGPUs of the same vGPU profile type to a single VM. For example, consider that you configure a VM on a node that has one NVIDIA Tesla® M60 GPU board. The Tesla® M60 provides two physical GPUs, each supporting one M60-8Q (profile) vGPU, for a total of two M60-8Q vGPUs on the host.
For restrictions on configuring multiple vGPUs on the same VM, see Restrictions for Multiple vGPU Support on
page 125.
For steps to add multiple vGPUs to the same VM, see Creating a VM (AHV) and Adding Multiple vGPUs to a VM in
Prism Web Console Guide or Prism Central Guide.

Restrictions for Multiple vGPU Support

You can configure multiple vGPUs subject to the following restrictions:

• All the vGPUs that you assign to one VM must be of the same type. In the preceding example, with the Tesla®
M60 GPU device, you can assign multiple vGPUs of the M60-8Q profile. You cannot assign one vGPU of the M60-1Q
type and another vGPU of the M60-8Q type.

Note: You can configure any number of vGPUs of the same type on a VM. However, the cluster calculates
a maximum number of vGPUs of the same type per VM, defined as max_instances_per_vm.
This number varies based on the GPU resources available in the cluster and the number
of VMs deployed. If the number of vGPUs of a specific type configured on a VM exceeds the
max_instances_per_vm number, the VM fails to power on and the following error message is displayed:

Operation failed: NoHostResources: No host has enough available GPU for VM <name of VM>(UUID
of VM).

You could try reducing the GPU allotment...
When you configure multiple vGPUs on a VM, after you select the vGPU type for the first vGPU
assignment, Prism (Prism Central and the Prism Element web console) automatically restricts subsequent vGPU
assignments on the same VM to that same vGPU type.

Figure 43: vGPU Type Restriction Message

Note:
You can use the aCLI to configure multiple vGPUs of multiple types on the same VM. See Acropolis
Command-Line Interface (aCLI) for information about aCLI. Use the vm.assign <vm.name>
gpu=<gpu-type> command multiple times, once for each vGPU, to add multiple vGPUs of multiple
types to the same VM. A hedged example follows this list.
See the GPU board and software documentation for information about the combinations of the number
and types of vGPU profiles supported by the GPU resources installed in the cluster. For example, see
the NVIDIA Virtual GPU Software Documentation for the vGPU type and number combinations on the
Tesla® M60 board.

• Using Prism, configure multiple vGPUs of the highest type only. The highest vGPU profile type depends on
the driver deployed in the cluster. In the preceding example, on a Tesla® M60 device, you can configure
multiple vGPUs of the M60-8Q type only. Prism prevents you from configuring multiple vGPUs of any other type,
such as M60-2Q.

Figure 44: vGPU Type Restriction Message

Note:
You can use the aCLI to configure multiple vGPUs of other available types. See Acropolis Command-
Line Interface (aCLI) for information about aCLI. Use the vm.assign <vm.name> gpu=<gpu-type>
command multiple times, once for each vGPU, to configure multiple vGPUs of other available types.
See the GPU board and software documentation for more information.

• Configure either a passthrough GPU or vGPUs on the same VM, not both. Prism automatically disallows
mixed configurations after the first GPU is configured.
• The VM powers on only if the requested type and number of vGPUs are available in the host.
In the preceding example, a VM configured with two M60-8Q vGPUs fails to power on if another VM
sharing the same GPU board is already using one M60-8Q vGPU. The Tesla® M60 GPU board
allows only two M60-8Q vGPUs, one of which is already in use, so the VM configured with
two M60-8Q vGPUs fails to power on because the required vGPUs are unavailable.

• Multiple vGPUs on the same VM are supported on NVIDIA Virtual GPU software version 10.1 (440.53) or later.
Ensure that the relevant NVIDIA GRID license is installed, and select it when you configure multiple vGPUs.
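
As an illustration of the aCLI workflow mentioned in the notes above, the following hypothetical session adds two
vGPUs of the same profile to a VM. It is a minimal sketch that reuses the vm.assign command form quoted in the
notes; the VM name vgpu-vm and the M60-8Q profile are placeholders for values from your environment:
nutanix@cvm$ acli vm.assign vgpu-vm gpu=M60-8Q
nutanix@cvm$ acli vm.assign vgpu-vm gpu=M60-8Q

Run the command once per vGPU. Subject to the restrictions above, the VM powers on only when the host has the
requested number of M60-8Q instances available.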

Adding Multiple vGPUs to the Same VM

About this task


You can add multiple vGPUs of the same vGPU type to:

• A new VM when you create it.


• An existing VM when you update it.

Important:
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for Multiple
vGPU Support in the AHV Admin Guide.

After you add the first vGPU, do the following on the Create VM or Update VM dialog box (the main dialog box) to
add more vGPUs:

Procedure

1. Click Add GPU.

2. In the Add GPU dialog box, click Add.
The License field is grayed out because you cannot select a different license when you add a vGPU to the same
VM.
The vGPU Profile is also auto-selected because you can add only an additional vGPU of the same vGPU type,
as indicated by the message at the top of the dialog box.

Figure 45: Add GPU for multiple vGPUs

3. In the main dialog box, you see the newly added vGPU.

Figure 46: New vGPUs Added

4. Repeat these steps for each additional vGPU that you want to add.

Live Migration of VMs with Virtual GPUs
You can perform live migration of VMs enabled with virtual GPUs. The primary advantage of live migration
support is that unproductive downtime is avoided: vGPU workloads continue to run while the VMs that are
running them are seamlessly migrated in the background. With very low stun times, the graphics user barely
notices the migration.

Note: Live migration of VMs with vGPUs is supported:

• For vGPUs created with NVIDIA Virtual GPU software version 10.1 (440.53) or later.
• On AOS 5.18.1 or later.

Important: In an HA event involving any GPU node, the node locality of the affected vGPU VMs is not restored
after GPU node recovery. The affected vGPU VMs are not migrated back to their original GPU host intentionally to
avoid extended VM stun time expected while migrating vGPU frame buffer. If vGPU VM node locality is required,
migrate the affected vGPU VMs to the desired host manually. For information about the steps to migrate a live VM
with vGPUs, see Migrating Live a VM with Virtual GPUs in the Prism Central Guide and the Prism Web Console
Guide.
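
If you need to restore node locality manually, live migration can also be started from the aCLI. The following is a
minimal sketch, assuming the standard aCLI vm.migrate command; the VM name vgpu-vm and the host name
gpu-host-1 are placeholders:
nutanix@cvm$ acli vm.migrate vgpu-vm host="gpu-host-1"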

Note:
Important frame buffer and VM stun time considerations are:

• The GPU board vendor (for example, NVIDIA for the Tesla M60) provides the maximum
frame buffer size of the vGPU types (for example, the M60-8Q type) that can be configured on VMs. However,
the actual frame buffer usage may be lower than the maximum sizes.
• The VM stun time depends on the number of vGPUs configured on the VM being migrated. Stun time
may be longer when multiple vGPUs are operating on the VM.
The stun time also depends on network factors such as the bandwidth available for use during the
migration.

For information about the limitations applicable to the live migration support, see Limitations of Live Migration
Support on page 132 and Restrictions for Multiple vGPU Support on page 125.
For information about the steps to migrate live a VM with vGPUs, see Migrating Live a VM with Virtual GPUs in the
Prism Central Guide and the Prism Web Console Guide.

Limitations of Live Migration Support

• Live migration is supported for VMs configured with single or multiple virtual GPUs. It is not supported for VMs
configured with passthrough GPUs.
• The target host for the migration must have adequate available GPU resources, with the same vGPU types as
configured on the VMs to be migrated, to support the vGPUs on those VMs.
See Restrictions for Multiple vGPU Support on page 125 for more details.
• VMs with vGPUs that need to be migrated live cannot be protected with high availability.
• Ensure that the VM is not powered off; live migration applies only to running VMs.
• Ensure that you have a GPU software license that supports live migration of vGPUs and that the source and target
hosts have the same license type. An appropriate license of the NVIDIA GRID software version is required. See
Live Migration of VMs with Virtual GPUs on page 132 for minimum license requirements.

Enabling or Disabling Console Support for vGPU VMs

About this task


Enable or disable console support for a VM with only one vGPU configured. Enabling console support for a VM with
multiple vGPUs is not supported. By default, console support for a VM with vGPUs is disabled.
To enable or disable console support for each VM with vGPUs, do the following:

Procedure

1. Run the following aCLI command to check if console support is enabled or disabled for the VM with vGPUs.
acli> vm.get vm-name
Where vm-name is the name of the VM for which you want to check the console support status.
The command output includes the following parameter for the specified VM:
gpu_console=False

Where False indicates that console support is not enabled for the VM. This parameter is displayed as True when
you enable console support for the VM. The default value of gpu_console is False because console support is
disabled by default.

Note: The output of the vm.get command may not display the gpu_console parameter if the
gpu_console parameter was not previously enabled.

2. Run the following aCLI command to enable or disable console support for the VM with vGPU:
vm.update vm-name gpu_console=true | false
Where:

• true—indicates that you are enabling console support for the VM with vGPU.
• false—indicates that you are disabling console support for the VM with vGPU.

3. Run the vm.get command again to verify that the gpu_console value is true (console support enabled) or
false (console support disabled), as you configured it.
If the value in the vm.get command output is not what you expect, perform a guest shutdown
of the VM with vGPU. Next, run the vm.on vm-name aCLI command to turn the VM on again. Then run the vm.get
command and check the gpu_console value.
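
For example, the following hypothetical session enables console support for a VM named vgpu-vm and verifies the
result. The VM name is a placeholder, and the output is abridged to the relevant parameter:
nutanix@cvm$ acli vm.get vgpu-vm | grep gpu_console
gpu_console=False
nutanix@cvm$ acli vm.update vgpu-vm gpu_console=true
nutanix@cvm$ acli vm.get vgpu-vm | grep gpu_console
gpu_console=True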

4. Click a VM name in the VM table view to open the VM details page. Click Launch Console.
The Console opens but only a black screen is displayed.

5. Click the console screen, and then press one of the following key combinations, depending on the operating
system from which you access the cluster:

» For Apple macOS: Control+Command+2

» For Microsoft Windows: Ctrl+Alt+2


The console is fully enabled and displays the content.

PXE Configuration for AHV VMs


You can configure a VM to boot over the network in a Preboot eXecution Environment (PXE). Booting over the
network, called PXE booting, does not require the use of installation media. When it starts, a PXE-enabled
VM communicates with a DHCP server to obtain information about the boot file it requires.

Configuring PXE boot for an AHV VM involves performing the following steps:

• Configuring the VM to boot over the network.


• Configuring the PXE environment.
The procedure for configuring a VM to boot over the network is the same for managed and unmanaged networks. The
procedure for configuring the PXE environment differs for the two network types, as follows:

• An unmanaged network does not perform IPAM functions and gives VMs direct access to an external Ethernet
network. Therefore, the procedure for configuring the PXE environment for AHV VMs is the same as for a
physical machine or a VM that is running on any other hypervisor. VMs obtain boot file information from the
DHCP or PXE server on the external network.
• A managed network intercepts DHCP requests from AHV VMs and performs IP address management (IPAM)
functions for the VMs. Therefore, you must add a TFTP server and the required boot file information to the
configuration of the managed network. VMs obtain boot file information from this configuration.
A VM that is configured to use PXE boot boots over the network on subsequent restarts until the boot order of the
VM is changed.

Configuring the PXE Environment for AHV VMs


The procedure for configuring the PXE environment for a VM on an unmanaged network is similar to
the procedure for configuring a PXE environment for a physical machine on the external network and is
beyond the scope of this document. This procedure configures a PXE environment for a VM in a managed
network on an AHV host.

About this task


To configure a PXE environment for a VM on a managed network on an AHV host, do the following:

Procedure

1. Log on to the Prism web console, click the gear icon, and then click Network Configuration in the menu.
The Network Configuration dialog box is displayed.

2. On the Virtual Networks tab, click the pencil icon shown for the network for which you want to configure a
PXE environment.
The VMs that require the PXE boot information must be on this network.

3. In the Update Network dialog box, do the following:

a. Select the Configure Domain Settings check box and do the following in the fields shown in the domain
settings sections:

• In the TFTP Server Name field, specify the host name or IP address of the TFTP server.
If you specify a host name in this field, make sure to also specify DNS settings in the Domain Name
Servers (comma separated), Domain Search (comma separated), and Domain Name fields.
• In the Boot File Name field, specify the boot file URL and boot file that the VMs must use. For example,
tftp://ip_address/boot_filename.bin, where ip_address is the IP address (or host name, if you specify
DNS settings) of the TFTP server and boot_filename.bin is the PXE boot file.
b. Click Save.

4. Click Close.

Configuring a VM to Boot over a Network
To enable a VM to boot over the network, update the VM's boot device setting. Currently, the only user
interface that enables you to perform this task is the Acropolis CLI (aCLI).

About this task


To configure a VM to boot from the network, do the following:

Procedure

1. Log on to any CVM in the cluster using SSH.

2. Create a VM.

nutanix@cvm$ acli vm.create vm num_vcpus=num_vcpus memory=memory


Replace vm with a name for the VM, and replace num_vcpus and memory with the number of vCPUs and amount
of memory that you want to assign to the VM, respectively.
For example, create a VM named nw-boot-vm.
nutanix@cvm$ acli vm.create nw-boot-vm num_vcpus=1 memory=512

3. Create a virtual interface for the VM and place it on a network.


nutanix@cvm$ acli vm.nic_create vm network=network
Replace vm with the name of the VM and replace network with the name of the network. If the network is an
unmanaged network, make sure that a DHCP server and the boot file that the VM requires are available on the
network. If the network is a managed network, configure the DHCP server to provide TFTP server and boot file
information to the VM. See Configuring the PXE Environment for AHV VMs on page 134.
For example, create a virtual interface for VM nw-boot-vm and place it on a network named network1.
nutanix@cvm$ acli vm.nic_create nw-boot-vm network=network1

4. Obtain the MAC address of the virtual interface.


nutanix@cvm$ acli vm.nic_list vm
Replace vm with the name of the VM.
For example, obtain the MAC address of VM nw-boot-vm.
nutanix@cvm$ acli vm.nic_list nw-boot-vm
00-00-5E-00-53-FF

5. Update the boot device setting so that the VM boots over the network.
nutanix@cvm$ acli vm.update_boot_device vm mac_addr=mac_addr
Replace vm with the name of the VM and mac_addr with the MAC address of the virtual interface that the VM
must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM uses the virtual
interface with MAC address 00-00-5E-00-53-FF.
nutanix@cvm$ acli vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF

6. Power on the VM.
nutanix@cvm$ acli vm.on vm_list [host="host"]
Replace vm_list with the name of the VM. Replace host with the name of the host on which you want to start the
VM.
For example, start the VM named nw-boot-vm on a host named host-1.
nutanix@cvm$ acli vm.on nw-boot-vm host="host-1"

Uploading Files to DSF for Microsoft Windows Users


If you are a Microsoft Windows user, you can securely upload files to DSF by using the following
procedure.

Procedure

1. Use WinSCP, with SFTP selected, to connect to a Controller VM through port 2222 and start browsing the DSF
datastore.

Note: The root directory displays storage containers and you cannot change it. You can only upload files to one of
the storage containers and not directly to the root directory. To create or delete storage containers, you can use the
Prism user interface.

2. Authenticate by using your Prism username and password or, for advanced users, the public key that is managed
through the Prism cluster lockdown user interface.
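
If you prefer a command-line client, any SFTP client that supports a custom port works the same way. The
following minimal sketch uses the OpenSSH sftp client; the CVM IP address, username, storage container name,
and file name are placeholders:
user@host$ sftp -P 2222 admin@203.0.113.10
sftp> cd default-container
sftp> put install-media.iso
sftp> exit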

Enabling Load Balancing of vDisks in a Volume Group


AHV hosts support load balancing of vDisks in a volume group for guest VMs. Load balancing of vDisks
in a volume group enables IO-intensive VMs to use the storage capabilities of multiple Controller VMs
(CVMs).

About this task


If you enable load balancing on a volume group, the guest VM communicates directly with each CVM hosting
a vDisk. Because each vDisk is served by a single CVM, to use the storage capabilities of multiple CVMs,
create more than one vDisk for a file system and use OS-level striped volumes to spread the workload. This
configuration improves performance and prevents storage bottlenecks.
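
For example, one common way to build an OS-level striped volume on a Linux guest is with LVM. The following is
a minimal sketch, assuming four load-balanced vDisks exposed to the guest as /dev/sdb through /dev/sde; the
device names, volume group name, logical volume name, and stripe size are placeholders:
user@host$ sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
user@host$ sudo vgcreate striped_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde
user@host$ sudo lvcreate -i 4 -I 64 -l 100%FREE -n striped_lv striped_vg
user@host$ sudo mkfs.xfs /dev/striped_vg/striped_lv

The -i 4 option stripes the logical volume across all four vDisks, and -I 64 sets a 64 KB stripe size.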

Note:

• vDisk load balancing is disabled by default for volume groups that are directly attached to VMs.
However, vDisk load balancing is enabled by default for volume groups that are attached to VMs by
using a data services IP address.
• You can attach a maximum of 10 load-balanced volume groups per guest VM.
• For Linux VMs, ensure that the SCSI device timeout is 60 seconds. For information about how to check
and modify the SCSI device timeout, see the Red Hat documentation at https://access.redhat.com/
documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/
task_controlling-scsi-command-timer-onlining-devices.
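
On most Linux distributions, you can inspect and set the timeout through sysfs, as in the following sketch; the
device name sda is a placeholder, and a udev rule is required to make the setting persist across reboots:
user@host$ cat /sys/block/sda/device/timeout
user@host$ echo 60 | sudo tee /sys/block/sda/device/timeout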

Perform the following procedure to enable load balancing of vDisks by using aCLI.

Procedure

1. Log on to a Controller VM with SSH.

2. Do one of the following:

» Enable vDisk load balancing if you are creating a volume group.


nutanix@cvm$ acli vg.create vg_name load_balance_vm_attachments=true
Replace vg_name with the name of the volume group.
» Enable vDisk load balancing if you are updating an existing volume group.
nutanix@cvm$ acli vg.update vg_name load_balance_vm_attachments=true
Replace vg_name with the name of the volume group.

Note: To enable vDisk load balancing on an existing volume group, you must first detach all the VMs that are
attached to that volume group.

3. Verify if vDisk load balancing is enabled.


nutanix@cvm$ acli vg.get vg_name
An output similar to the following is displayed:
nutanix@cvm$ acli vg.get ERA_DB_VG_xxxxxxxx
ERA_DB_VG_xxxxxxxx {
attachment_list {
vm_uuid: "xxxxx"
.
.
.
.
iscsi_target_name: "xxxxx"
load_balance_vm_attachments: True
logical_timestamp: 4
name: "ERA_DB_VG_xxxxxxxx"
shared: True
uuid: "xxxxxx"
}
If vDisk load balancing is enabled on a volume group, load_balance_vm_attachments: True is displayed in the
output. If vDisk load balancing is disabled, the output does not display the load_balance_vm_attachments
parameter at all.

4. (Optional) Disable vDisk load balancing.


nutanix@cvm$ acli vg.update vg_name load_balance_vm_attachments=false
Replace vg_name with the name of the volume group.

Live vDisk Migration Across Storage Containers


vDisk migration allows you to change the container of a vDisk. You can migrate vDisks across storage
containers while they are attached to guest VMs without the need to shut down or delete VMs (live
migration). You can either migrate all vDisks attached to a VM or migrate specific vDisks to another
container.
In a Nutanix solution, you group vDisks into storage containers and attach vDisks to guest VMs. AOS applies storage
policies such as replication factor, encryption, compression, deduplication, and erasure coding at the storage container
level. If you apply a storage policy to a storage container, AOS enables that policy on all the vDisks of the container.
If you want to change the policies of the vDisks (for example, from RF2 to RF3), create another container with a
different policy and move the vDisk to that container. With live migration of vDisks across containers, you can
migrate vDisks across containers even if those vDisks are attached to a live VM. Thus, live migration of vDisks across
storage containers enables you to efficiently manage storage policies for guest VMs.

General Considerations
You cannot perform the following operations during an ongoing vDisk migration:

• Clone a VM
• Take a snapshot
• Resize, clone, or take a snapshot of the vDisks that are being migrated
• Migrate images
• Migrate volume groups

Note: During vDisk migration, the reported logical usage of a vDisk is more than the total capacity of the vDisk. This
occurs because the logical usage of the vDisk includes the space occupied in both the source and destination containers.
Once the migration is complete, the logical usage of the vDisk returns to its normal value.
Migration of vDisks stalls if sufficient storage space is not available in the target storage container. Ensure
that the target container has sufficient storage space before you begin migration.

Disaster Recovery Considerations


Consider the following points if you have a disaster recovery and backup setup:

• You cannot migrate vDisks of a VM that is protected by a protection domain or protection policy. When you start
the migration, ensure that the VM is not protected by a protection domain or protection policy. If you want to
migrate vDisks of such a VM, do the following:

• Remove the VM from the protection domain or protection policy.


• Migrate the vDisks to the target container.
• Add the VM back to the protection domain or protection policy.
• Configure the remote site with the details of the new container.
vDisk migration fails if the VM is protected by a protection domain or protection policy.
• If you are using a third-party backup solution, AOS temporarily blocks snapshot operations for a VM if vDisk
migration is in progress for that VM.

Migrating a vDisk to Another Container


You can either migrate all vDisks attached to a VM or migrate specific vDisks to another container.

About this task


Perform the following procedure to migrate vDisks across storage containers:

Procedure

1. Log on to a CVM in the cluster with SSH.

2. Do one of the following:

» Migrate all vDisks of a VM to the target storage container.


nutanix@cvm$ acli vm.update_container vm-name container=target-container wait=false
Replace vm-name with the name of the VM whose vDisks you want to migrate and target-container with
the name of the target container.
» Migrate specific vDisks by using either the UUID of the vDisk or address of the vDisk.
Migrate specific vDisks by using the UUID of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name device_uuid_list=disk-UUID container=target-container wait=false
Replace vm-name with the name of VM, disk-UUID with the UUID of the disk, and target-container with
the name of the target storage container.
Run nutanix@cvm$ acli vm.get <vm-name> to determine the UUID of the vDisk.
You can migrate multiple vDisks at a time by specifying a comma-separated list of vDisk UUIDs.
Alternatively, you can migrate vDisks by using the address of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name disk_addr_list=disk-address container=target-container wait=false
Replace vm-name with the name of VM, disk-address with the address of the disk, and target-container
with the name of the target storage container.
Run nutanix@cvm$ acli vm.get <vm-name> to determine the address of the vDisk.
Following is the format of the vDisk address:
bus.index

Following is a section of the output of the acli vm.get vm-name command:


disk_list {
addr {
bus: "scsi"
index: 0
}
Combine the values of bus and index as shown in the following example:
nutanix@cvm$ acli vm.update_container TestUVM_1 disk_addr_list=scsi.0 container=test-container-17475
You can migrate multiple vDisks at a time by specifying a comma-separated list of vDisk addresses.

3. Check the status of the migration in the Tasks menu of the Prism Element web console.

4. (Optional) Cancel the migration if you no longer want to proceed with it.
nutanix@cvm$ ecli task.cancel task_list=task-ID
Replace task-ID with the ID of the migration task.
Determine the task ID as follows:
nutanix@cvm$ ecli task.list
In the Type column of the tasks list, look for VmChangeDiskContainer.
VmChangeDiskContainer indicates that it is a vDisk migration task. Note the ID of such a task.

Note: Consider the following points about canceling a migration:

• If you cancel an ongoing migration, AOS retains the vDisks that have not yet been migrated in
the source container. AOS does not migrate vDisks that have already been migrated to the target
container back to the source container.
• If sufficient storage space is not available in the original storage container, migration of vDisks back
to the original container stalls. To resolve the issue, ensure that the source container has sufficient
storage space.
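
For example, a hypothetical cancellation might look like the following; the task UUID is a placeholder copied from
the ecli task.list output for the VmChangeDiskContainer entry:
nutanix@cvm$ ecli task.list
nutanix@cvm$ ecli task.cancel task_list=9f9e6c3a-0000-0000-0000-000000000000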

OVAs
An Open Virtual Appliance (OVA) file is a tar archive created by converting a virtual machine (VM) into an
Open Virtualization Format (OVF) package for easy distribution and deployment. OVA files help you quickly create,
move, or deploy VMs on different hypervisors.
Prism Central helps you perform the following operations with OVAs:

• Export an AHV VM as an OVA file.


• Upload OVAs of VMs or virtual appliances (vApps). You can import (upload) an OVA file with the QCOW2 or
VMDK disk formats from a URL or the local machine.
• Deploy an OVA file as a VM.
• Download an OVA file to your local machine.
• Rename an OVA file.
• Delete an OVA file.
• Track or monitor the tasks associated with OVA operations in Tasks.
The access to OVA operations is based on your role. See Role Details View in the Prism Central Guide to check if
your role allows you to perform the OVA operations.
For information about:

• Restrictions applicable to OVA operations, see OVA Restrictions on page 140.


• The OVAs dashboard, see OVAs View in the Prism Central Guide.
• Exporting a VM as an OVA, see Managing a VM (AHV and Self Service) in the Prism Central Guide.
• Other OVA operations, see OVA Management in the Prism Central Guide.

OVA Restrictions
You can perform the OVA operations subject to the following restrictions:

• Export to or upload OVAs with one of the following disk formats:

• QCOW2: Default disk format auto-selected in the Export as OVA dialog box.
• VMDK: Deselect QCOW2 and select VMDK, if required, before you submit the VM export request when you
export a VM.
• When you export a VM or upload an OVA that does not have any disks, the disk format is irrelevant.
• You can upload an OVA to multiple clusters only by using a URL as the source for the OVA. You can upload an
OVA to only a single cluster when you use the local OVA File source.
• Perform the OVA operations only with appropriate permissions. You can run the OVA operations that you have
permissions for, based on your assigned user role.
• The OVA that results from exporting a VM on AHV is compatible with any AHV version 5.18 or later.

COPYRIGHT
Copyright 2021 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or other
jurisdictions. All other brand and product names mentioned herein are for identification purposes only and may be
trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft patents with
respect to anything other than the file server implementation portion of the binaries for this software, including no
licenses or any other rights in any hardware or any devices or software that are used to communicate with or in
connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Version
Last modified: July 20, 2021 (2021-07-20T14:38:41+05:30)
