
Field Installation Guide

for Cisco HCI Servers


(UCS Manager Mode)
Original Equipment Manufacturer: ANY
Cisco M6 and M7
May 9, 2024
Contents

Field Installation Overview......................................................................3

Prerequisites.......................................................................................... 4

Network Requirements........................................................................... 5

Cisco UCS® Domain Mode Configuration for C-Series Servers................6


Unpacking and Mounting The Servers and Fabric Interconnects................................................ 6
Configuring Cisco UCS Fabric Interconnects in Cluster Mode.................................................... 6
Setting Up Network Connections............................................................................................... 6
Configuring Network Connection in Cisco UCS Manager..........................................................10
Preparing Cisco UCS Manager with Recommended Software and Firmware Versions...............10
Discovering Cisco UCS Servers in Cisco UCS Manager............................................................10
UCS Domain Mode Best Practices (C-Series)........................................................................... 11

Downloading Files................................................................................ 13
Downloading AOS Installation Bundle...................................................................................... 13
Downloading Foundation.......................................................................................................... 14

Server Imaging..................................................................................... 15
Prepare Bare-Metal Nodes for Imaging.................................................................................... 15
Considerations for Bare-Metal Imaging.................................................................................... 15
Preparing the Workstation........................................................................................................16
Installing the Foundation VM.................................................................................................... 18
Foundation VM Upgrade........................................................................................................... 18
Upgrading the Foundation VM by Using the GUI............................................................. 18
Upgrading the Foundation VM by Using the CLI.............................................................. 19
Configuring Foundation VM by Using the Foundation GUI.........................................................19

Post-Installation Tasks.......................................................................... 23
Configuring a New Cluster in Prism......................................................................................... 23

Hypervisor ISO Images......................................................................... 26


Verify Hypervisor Support........................................................................................................ 27
Updating an iso_whitelist.json File on Foundation VM...............................................................27

Troubleshooting.................................................................................... 28
Fixing Imaging Issues............................................................................................................... 28

Copyright..............................................................................................31
FIELD INSTALLATION OVERVIEW
This document describes the tasks of preparing Cisco Unified Computing System (UCS) C-Series rack
servers and Cisco UCS Manager and imaging the servers using Nutanix Foundation. The terms server and
node are used interchangeably in this document.
The Cisco UCS servers and fabric interconnects must run on specific hardware, software, and firmware
versions. For the complete list of supported hardware components, software, and firmware versions, see
the UCS Hardware and Software Compatibility at the Cisco UCS website.
Foundation is the deployment software of Nutanix that allows you to image a node with a hypervisor and
an AOS of your choice and form a cluster out of the imaged nodes. Foundation is available for download at
https://portal.nutanix.com/#/page/Foundation.
A Nutanix cluster runs on Cisco UCS C-Series Servers in Cisco UCS Domain Mode with a hypervisor and
Nutanix AOS. AOS is the operating system of the Nutanix controller VM, which must run on the
hypervisor to provide Nutanix-specific functionality. For the complete list of supported hypervisor and AOS
versions for Cisco UCS M6 and M7 servers, see the Compatibility and Interoperability Matrix.
If you already have a running cluster and want to add nodes to it, you must discover the new nodes in UCS
Manager, image those nodes using Foundation VM, and use the Expand Cluster option in Prism. For
more information, see the Prism Element Web Console Guide.



PREREQUISITES
Ensure that you meet the following prerequisites before configuring the Cisco UCS C-Series rack servers
in Cisco UCS Domain Mode and imaging them using Foundation.

• You must have access to the Cisco website to download the supported firmware.
• You must have access to the Nutanix Support Portal to download the supported software. The
supported Foundation versions are 5.4.2 or later.
• If you need a vSphere ESXi hypervisor for installation, you must have access to the VMware website.
• You must have all required licenses from Cisco, Nutanix, and VMware.



NETWORK REQUIREMENTS
When configuring the Cisco UCS domain fabric interconnect, you need the following information in the
management network:

• Two IP addresses: one for each fabric interconnect management port


• One IP address for fabric interconnect cluster
• Default gateway
• Network mask
• DNS server (optional)
• Domain name (optional)
When configuring a Cisco UCS managed server, you need a set of IP addresses to allocate to the cluster.
Ensure that the chosen IP addresses do not overlap with any hosts or services in the environment. You
also need to open the software ports that are used to manage cluster components and to
enable communication between components such as the controller VM, web console, Prism Central,
hypervisor, and the Cisco hardware. Nutanix recommends that you specify information such as a DNS
server and NTP server even if the cluster is not connected to the Internet or runs in a non-production
environment.

Existing Customer Network


You need the following information during the cluster configuration:

• Default gateway
• Network mask
• DNS server
• NTP server
Check whether a proxy server is in place in the network. If a proxy server is in use, you need its IP address
and port number when enabling Nutanix Support on the cluster.

New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:

• IPMI interface
• Hypervisor host
• Nutanix controller VM
Nutanix recommends that you use a cluster virtual IP address for each Nutanix cluster.
All controller VMs and hypervisor hosts must be on the same subnet. No systems other than the controller
VMs and hypervisor hosts can be on this network.
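As an illustration only (all addresses below are hypothetical), a three-node cluster might be planned as follows:

• Management network (10.1.0.0/24): fabric interconnect management ports 10.1.0.2 and 10.1.0.3, fabric
interconnect cluster IP 10.1.0.4, and IPMI interfaces 10.1.0.11 through 10.1.0.13
• Host and CVM network (10.1.1.0/24): hypervisor hosts 10.1.1.21 through 10.1.1.23, controller VMs
10.1.1.31 through 10.1.1.33, and a cluster virtual IP address of 10.1.1.40

Verify that none of these addresses is already in use in your environment before you assign them.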

Software Ports Required for Management and Communication


For more information about the ports that are required for Foundation, see Foundation Ports.



CISCO UCS® DOMAIN MODE
CONFIGURATION FOR C-SERIES
SERVERS
The Cisco UCS Domain mode configuration involves setting up a pair of fabric interconnects in a
cluster configuration for high availability. The Cisco UCS C-Series servers are connected to each fabric
interconnect and are centrally managed by the Cisco UCS Manager software running on the fabric
interconnects.
To get the system running, perform the following high-level steps:
1. Unpack and rack the Cisco UCS servers and fabric interconnects.
2. Configure Cisco UCS fabric interconnects in cluster mode.
3. Set up the network connections.
4. Configure the network connections in Cisco UCS Manager.
5. Prepare the Cisco UCS Manager with the recommended software and firmware versions.
6. Discover the Cisco UCS servers in Cisco UCS Manager.

Unpacking and Mounting The Servers and Fabric Interconnects


For information about unpacking the Cisco UCS servers and fabric interconnects from the box and
mounting them into the rack, see the Server Installation and Service Guides on the Cisco UCS website.

Configuring Cisco UCS Fabric Interconnects in Cluster Mode


About this task
Perform the initial network connection and configuration of the fabric interconnects in cluster mode as
follows:
For more information on the following steps, see the "Console Setup" chapter of the Cisco UCS Manager
Getting Started Guide, which is available on the Cisco UCS website.

Procedure

1. Connect the workstation to the serial console port labeled Console on the front panel of the fabric
interconnects.

2. Connect the L1 and L2 ports of one fabric interconnect directly to the L1 and L2 ports on the other fabric
interconnect (L1 to L1 and L2 to L2) using Ethernet cables.
This enables the fabric interconnects to function in a cluster mode.

3. Configure the serial console connection on the workstation and use that connection to configure the
fabric interconnects.
The first fabric interconnect (FI-A) is configured as the primary, and the second fabric interconnect
(FI-B) is configured as the subordinate. The primary and subordinate roles can change during
maintenance operations.
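For example, if your workstation runs Linux and the console cable presents a USB serial device, you can open the serial console with a terminal program such as screen. The device name below is an assumption (it may differ on your system), and 9600 baud, 8 data bits, no parity, 1 stop bit are the typical console settings; confirm them in the Cisco documentation for your fabric interconnect model:
$ screen /dev/ttyUSB0 9600
On Windows, use a terminal emulator such as PuTTY with the corresponding COM port and the same serial settings.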

Setting Up Network Connections


This topic describes establishing network connections between the Foundation VM, fabric interconnects,
top of rack (ToR) switch, management switch, and Cisco UCS servers. You can host the Foundation VM
on your workstation or on your existing virtual infrastructure. The workstation must have access to the
fabric interconnect management and uplink networks.

Procedure

1. Set up the uplink and management network connections as follows:

Figure 1: Network Connections

a. Connect the management port on the front panel of each fabric interconnect to the management
switch.
The management port is the port labeled as MGMT0.
b. Connect the workstation to the management network switch.
Alternatively, ensure that the Foundation VM hosted on your virtual infrastructure can access the
management switch.
c. Connect the fabric interconnects to the upstream L2 ToR switch using the uplink ports.
The uplink connections must have sufficient bandwidth to accommodate the traffic from the servers
attached to fabric interconnects.
d. Connect the L1 and L2 ports of one fabric interconnect directly to the L1 and L2 ports on the other
fabric interconnect (L1 to L1 and L2 to L2) using Ethernet cables.
This enables the fabric interconnects to function in a cluster mode.

2. Make a note of the fabric interconnect port numbers connected to the ToR switch.
You need this information when configuring the uplink ports of the fabric interconnect in the Cisco UCS
Manager.

3. Connect the Cisco UCS servers to the fabric interconnects using the virtual interface cards (VICs) in one
of the following physical topologies.
For example, the quad-port VIC models, such as the mLOM VIC 1467 and PCIe VIC 1455, have two
hardware port channels. Ports 1 and 2 are part of one port channel, and ports 3 and 4 are part of the
other. You must connect all the ports of a port channel to the same fabric interconnect and connect at
least one port of each port channel. Ensure that all the ports of a port channel are connected at the
same speed.

Sample Network Topology 1: Single VIC
The single VIC network topology uses either the mLOM SKU or the PCIe SKU of the VIC with a
single-link or dual-link port channel.

• Single-Link Port Channel: If you use quad-port VIC models such as the mLOM VIC 1467 or PCIe VIC
1455, connect port 1 to fabric interconnect A and port 3 to fabric interconnect B as shown in the
following image.

Figure 2: Single VIC Network Topology using Single-Link Port Channel


• Dual-Link Port Channel: If you use quad port VIC models such as mLOM VIC 1467 or PCIe VIC 1455,
connect ports 1 and 2 to the fabric interconnect A and ports 3 and 4 to the fabric interconnect B as
shown in the following image.

Figure 3: Single VIC Network Topology using Dual-Link Port Channel


Sample Network Topology 2: Dual VIC
For VIC-level high-availability, use dual VIC topology consisting of both mLOM VIC and PCIe VIC SKUs on
each Cisco UCS C-Series server.

• Single-Link Port Channel: If you use quad-port VIC models such as mLOM VIC 1467 and PCIe
VIC 1455, connect port 1 of each VIC to the fabric interconnect A and port 3 of each VIC to the fabric
interconnect B as shown in the following image.

Figure 4: Dual VIC Network Topology using Single-Link Port Channel


• Dual-Link Port Channel: If you use quad-port VIC models such as mLOM VIC 1467 and PCIe VIC
1455, connect ports 1 and 2 of each VIC to the fabric interconnect A and ports 3 and 4 of each VIC to
the fabric interconnect B as shown in the following image.

Figure 5: Dual VIC Network Topology using Dual-Link Port Channel

4. Make a note of the port numbers on the fabric interconnects for which you established the connections
in Step 3. You need this information when configuring the fabric interconnect ports as server ports in the
Cisco UCS Manager.

Configuring Network Connection in Cisco UCS Manager
This section describes configuring the fabric interconnect ports, the MAC pool, and the VLAN object, if necessary.

Procedure

1. Log in to the Cisco UCS Manager web interface using the fabric interconnects cluster IP address.

2. Configure the ports of each fabric interconnect as follows:

a. Configure the uplink ports of each fabric interconnect connected to the top of rack (ToR) switch.
b. Configure the ports on each fabric interconnect connected to the Cisco UCS Servers as server ports.
For more information, see the LAN Ports and Port Channels chapter of the Cisco UCS Manager
Network Management Guide.

3. Create a MAC pool with sufficient MAC addresses.

Make a note of the MAC pool name that you intend to use while imaging. Every port channel needs a
MAC address for its vNIC. For example, the quad-port VIC has two port channels, and therefore, the
single-VIC topology needs two MAC addresses per node.
For more information, see the Configuring MAC Pools chapter of the Cisco UCS Manager Network
Management Guide.
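For example (a hypothetical sizing), a four-node cluster that uses the single-VIC topology has two port channels, and therefore two vNICs, per node, so the MAC pool must contain at least 2 x 4 = 8 addresses. Size the pool larger than this minimum to allow for expansion and any additional vNICs.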

4. Create a VLAN object for the host and CVM traffic.


Make a note of the VLAN object name that you intend to use while imaging. Do not use the default
VLAN object. For more information, see the Configuring VLANs chapter of the Cisco UCS Manager
Network Management Guide. You also need to configure the same VLAN ID in the upstream network.

Preparing Cisco UCS Manager with Recommended Software and Firmware Versions
Prepare the Cisco UCS Manager by downloading and upgrading to the recommended software and
firmware versions as follows.

Procedure

1. From the Cisco website, download the recommended version of the Cisco UCS Infrastructure Software
A bundle and the Cisco UCS server firmware B and C bundles to Cisco UCS Manager.

2. Upgrade the Cisco UCS Manager software and fabric interconnect firmware using the downloaded
Cisco UCS Infrastructure Software A bundle version. For more information, see the Cisco UCS
Manager Firmware management guide on the Cisco website.
Foundation automatically upgrades or downgrades the server firmware later during the imaging
process, using the server firmware C bundle that you downloaded to Cisco UCS Manager in step 1.

Note: Foundation also specifies the recommended server firmware version that it upgrades during
imaging.

Discovering Cisco UCS Servers in Cisco UCS Manager


You must discover the Cisco UCS servers in Cisco UCS Manager to start imaging the servers using
Foundation.

Procedure

1. Power on the servers and let the Cisco UCS Manager begin the discovery process.

2. Once the discovery process is complete in the Cisco UCS Manager, ensure that each server is
available in an unassociated state.

3. Make a note of the serial numbers of the servers that you intend to use for imaging by navigating in
Cisco UCS Manager to Equipment > Equipment > Rack Mounts > Servers and then clicking the server.
The serial number of the selected server appears in the General tab.

UCS Domain Mode Best Practices (C-Series)


The following are the general best practices for Cisco UCS C-Series servers:

• Perform hypervisor and AOS upgrades through Prism.


• Replace nodes using the node removal and node addition processes. For more information, see the
Prism Element Web Console Guide.
• Use UCS Manager to perform advanced hardware configuration and monitoring.

• Use Nutanix Life Cycle Manager (LCM) to manage Cisco UCS C-Series server firmware versions.
• Use a cluster of two fabric interconnects in Ethernet end-host mode.
• Use Cisco UCS Managed Mode configuration.
• Ensure that the default UCS Manager MAC address pool has enough addresses to assign to each
vNIC.
• Do not use server pools for Nutanix servers.
• Do not move service profiles from one Nutanix server to another.
• Set the default maintenance policy to a value other than Immediate. Alternatively, create a new
maintenance policy set to a value other than Immediate and assign it to the service profile of each new
Nutanix node created by Foundation.
Common Networking Best Practices

• Ensure that IPv6 multicast traffic is allowed between nodes.


• Place CVM network adapters in the same subnet and broadcast domain as the hypervisor
management network adapter.

• (Optional) To simplify the network design, place the Cisco Integrated Management Controller
(CIMC) and Cisco UCS Manager in this same network.
• Place the host and CVM network adapters in the native or default untagged VLAN.
• (Optional) Place the user VM network adapters in appropriate VLANs on the host.
• In ESXi, create a port group for all CVMs that prefers the same top of rack (ToR) switch.

• For all other port groups, use Route based on originating virtual port ID for the standard
vSwitch and Route based on physical NIC load for the distributed vSwitch.
• In AHV, use the default active-backup mode for simplicity. For more advanced networking
configuration, see the AHV Networking best practices guide.

• In Cisco UCS Domain mode operation, do not connect the dedicated or shared LOM port for
CIMC; use UCS Manager for management instead.
• When you use disjoint layer two upstream networks with the fabric interconnect, follow the Cisco
best practices and do not use pin groups.
• Ensure that UCS Manager, CVMs, and hypervisor hosts are accessible by IP from the Nutanix
Foundation VM.

• Foundation creates one vNIC per hardware port channel; create additional vNICs only if
required.
• Do not enable fabric failover for vNICs in Nutanix nodes.
• Let the hypervisor OS perform NIC teaming.
• Ensure that there is adequate bandwidth between the upstream switches using redundant
40 Gbps (or greater) connections.
• Do not use LACP or other link aggregation methods with the host when connected to fabric
interconnects.

DOWNLOADING FILES
Before you begin

Tip:
If you already have a running cluster and want to add these new nodes to it, you must use the
Expand Cluster option in Prism instead of using Foundation. Expand Cluster allows you to
directly reimage a node whose hypervisor/AOS version does not match the cluster's version or a
node that is not running AOS.
See the "Expanding a cluster" section in the Prism Element Web Console Guide for information
on using the Expand Cluster option in Prism.

About this task


This topic details the download procedure for the files required for the cluster creation process. Download
the files as necessary.

Procedure

1. Download the AOS Installation Bundle.

2. Download the Foundation Installation Bundle.

Downloading AOS Installation Bundle


About this task
Nodes from the factory come with AOS and hypervisor (AHV or ESXi) installed. If you intend to use the
pre-installed version of AOS and the hypervisor, then you do not need to download the AOS Installation
Bundle. Skip to Downloading Foundation.
If you want to upgrade AOS and AHV, and your nodes have access to the internet, the upgrade can be
carried out from the Prism web console after cluster creation. In this case as well, you do not need to
download the AOS Installation Bundle. Skip to Downloading Foundation.
However, if you want to upgrade AOS and AHV, and your nodes do not have access to the internet, then
you need to download the corresponding AOS Installation Bundle.
The following steps detail the download procedure for AOS Installation Bundle.

Note: Nutanix recommends using the latest version of AOS that is compatible with your model. For the
complete list of AOS versions compatible with your model, see the Compatibility Matrix.

Procedure

1. Log on to the Nutanix support portal (http://portal.nutanix.com).

2. Click the menu icon (at the top left), click Downloads, and select AOS.
The AOS screen appears, which displays information about AOS and the installation bundles of several
AOS versions for download.

3. Click the Download button for the required AOS version to download the corresponding AOS
Installation Bundle named nutanix_installer_package-version#.tar.gz to any convenient location on your
laptop (or your workstation).
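Optionally, verify the integrity of the downloaded bundle before you use it. The following sketch assumes a workstation with the md5sum utility and that a checksum is published next to the download on the portal; compare the computed value against the published one:
$ md5sum nutanix_installer_package-version#.tar.gz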



Downloading Foundation
About this task
A software installation tool called Foundation is used to image the nodes (install the hypervisor and the
Nutanix Controller VM) and create a cluster.
The following steps detail the download procedure for the Foundation Installation Bundle:

Procedure

1. Log on to the Nutanix support portal (http://portal.nutanix.com).

2. Click the menu icon (at the top left), click Downloads, and then select Foundation.

3. Click the Download button for Foundation Upgrade for CVM or Standalone Foundation VM to
download the Foundation Installation Bundle.

Note:
Download the Foundation Installation Bundle only if a newer version of Foundation is available
in the Nutanix portal and your nodes do not have access to the internet. To find out the
Foundation version on your nodes, check the file foundation_version in the following location on
your node:
/home/nutanix/foundation/
If your nodes have internet access, you do not have to download the Foundation Installation
Bundle. Foundation will automatically notify you if a newer Foundation version is available
when it is launched.
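For example, you can display the Foundation version on a node from a Controller VM shell by reading the file mentioned above:
nutanix@cvm$ cat /home/nutanix/foundation/foundation_version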



SERVER IMAGING
You can use Nutanix Foundation to install Nutanix software and to create clusters from the imaged servers.

Prepare Bare-Metal Nodes for Imaging


You can perform bare-metal imaging from a workstation that has access to the fabric interconnects.
Imaging a cluster in the field requires installing tools (such as Oracle VM VirtualBox or VMware Fusion)
on the workstation and setting up the environment to run these tools. This chapter describes how to install
a selected hypervisor and the Nutanix controller VM on bare-metal nodes and configure the nodes into one
or more clusters.

Before you begin

• Physically install the nodes and fabric interconnects at your site. For information about installing and
configuring Cisco hardware platforms, see Cisco UCS® Domain Mode Configuration for C-Series
Servers on page 6.
• Set up the installation environment (see Preparing the Workstation on page 16).
• Ensure that you have the appropriate node and cluster parameter values needed for installation. The
use of a DHCP server is not supported for controller VMs, so make sure to assign static IP addresses to
controller VMs.

Note: If the Foundation VM is configured with an IP address on a different network from the nodes that
require imaging (for example, the Foundation VM is configured with a public IP address while
the cluster resides in a private network), repeat Step 8 in Installing the Foundation VM on page 18
to configure a new static IP address for the Foundation VM.

• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging
the nodes. If the nodes contain only SEDs, enable encryption after you image the nodes. If the nodes
contain both regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any
time during the life of the cluster.
For information about enabling and disabling encryption, see the Data-at-Rest Encryption chapter in the
AOS Security Guide.

Note: It is important that you unlock the SEDs before imaging to prevent any data loss. To unlock the
SEDs, contact Nutanix Support or see the KB article 000003750 on the Nutanix Support portal.

After you prepare the bare-metal nodes for Foundation, configure the Foundation VM by using the GUI. For
more information, see Node Configuration and Foundation Launch.

Considerations for Bare-Metal Imaging


Restrictions

• If Spanning Tree Protocol (STP) is enabled on the ports that are connected to the Nutanix host,
Foundation might time out during the imaging process. Therefore, be sure to disable STP by using
PortFast or an equivalent feature on the ports that are connected to the Nutanix host before starting
Foundation.
• Avoid connecting any device that presents virtual media, such as a CD-ROM (for example, a device
plugged into a USB port on a node). Such devices might conflict with the installation when the Foundation
tool tries to mount the virtual CD-ROM hosting the installation ISO.



• During bare-metal imaging, you assign IP addresses to the hypervisor host, the controller VMs, and
the IPMI interfaces. Do not assign IP addresses from a subnet that overlaps with the 192.168.5.0/24
address space on the default VLAN. Nutanix uses an internal virtual switch to manage network
communications between the controller VM and the hypervisor host. This switch is associated with a
private network on the default VLAN and uses the 192.168.5.0/24 address space. If you want to use an
overlapping subnet, make sure that you use a different VLAN.
• Foundation creates service profiles and policies with names of the form fdtnNodeSerial (where
NodeSerial is the serial number of the server). If a service profile exists with the name that
Foundation is attempting to use, that service profile is replaced, and any associated server is
disassociated. Therefore, if the Cisco UCS Manager instance manages servers that are not imaged
with Foundation, make sure that the service profiles associated with those servers do not use the
fdtnNodeSerial format with the serial numbers of the servers that you intend to image.

Recommendations

• Nutanix recommends contacting Cisco Services if you require assistance with imaging, configuring
bare-metal nodes, or setting up the infrastructure. If you run into any issues during Foundation imaging,
contact Cisco Support.

• Connect to a flat switch (no routing tables) instead of a managed switch (routing tables) to protect
the production environment against configuration errors. Foundation includes a multi-homing feature
that allows you to image nodes by using the production IP addresses even when connected to a flat
switch. For information about the network topology and port access required for a cluster, see Network
Requirements on page 5.

Limitations
Foundation does not support configuring network adapters to use jumbo frames during imaging. Perform
this configuration manually after imaging (see the example below).
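The following post-imaging sketch applies to ESXi only and is run from the ESXi shell; the vSwitch and VMkernel adapter names are typical defaults and may differ in your environment, and your upstream switches must also support jumbo frames end to end:
$ esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
$ esxcli network ip interface set --interface-name=vmk0 --mtu=9000
For AHV hosts, see the AHV Networking best practices guide for the equivalent configuration.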

Preparing the Workstation


A workstation is needed to host the Foundation VM during imaging. You can perform these steps either
before going to the installation site (if you use a portable laptop) or at the site (if an active internet
connection is available). The workstation must have access to the fabric interconnect management
network through the management switch and to the fabric interconnect uplink network through the ToR L2
switch. Alternatively, you can host the Foundation VM on your existing virtual infrastructure.

Before you begin


Get a workstation (laptop or desktop computer) that you can use for the installation. The workstation must
have at least 3 GB of memory (Foundation VM size plus 1 GB), 80 GB of disk space (preferably SSD), and
a physical (wired) network adapter.
To prepare the workstation, do the following:



Procedure

1. Go to the Nutanix Support portal and download the following files to a temporary directory on the
workstation:

File: Foundation_VM_OVF-version#.tar
Location: On the Nutanix Support portal, go to Downloads > Foundation.
Description: The .tar file contains:

• Foundation_VM-version#.ovf. This file is the Foundation VM OVF configuration file for the version#
release. For example, Foundation_VM-3.1.ovf.
• Foundation_VM-version#-disk1.vmdk. This file is the Foundation VM VMDK file for the version#
release. For example, Foundation_VM-3.1-disk1.vmdk.

File: nutanix_installer_package-version#.tar.gz
Location: On the Nutanix Support portal, go to Downloads > AOS (NOS).
Description: File used for imaging the nodes with the desired AOS release.

Note:

• For post-installation performance validation, execute the Four Corners Microbenchmark using X-Ray.
• To install a hypervisor other than AHV, you must provide the ISO image of the hypervisor
(see Hypervisor ISO Images on page 26). Make sure that the hypervisor ISO image is
available on the workstation.
• Verify the supported hypervisor and its corresponding versions. For more information, see
Verify Hypervisor Support on page 27.

2. Download the installer for Oracle VM VirtualBox and install it with the default options.
Oracle VM VirtualBox is a free, open-source tool used to create a virtualized environment on the
workstation. For installation and start-up instructions, see the Oracle VM VirtualBox User Manual
(https://www.virtualbox.org/wiki/Documentation).
You can also use any other virtualization environment (VMware ESXi, AHV, and so on) instead of
Oracle VM VirtualBox.

3. Go to the location where you downloaded the Foundation .tar file and extract the contents.
$ tar -xf Foundation_VM_OVF-version#.tar
If the tar utility is not available, use the appropriate utility for your environment.

4. Copy the extracted files to the VirtualBox VMs folder that you created.
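Alternatively, instead of the GUI import described in the next section, you can import and verify the appliance from the command line. This is a minimal sketch that assumes the VBoxManage utility (bundled with Oracle VM VirtualBox) is on your PATH:
$ VBoxManage import Foundation_VM-version#.ovf
$ VBoxManage list vms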



Installing the Foundation VM
Import the Foundation VM into Oracle VM VirtualBox.

About this task

When installing Foundation on AHV, set the vCPU and vRAM values based on your environment
requirements. When installing Foundation on VirtualBox, these values are pre-filled and can be edited if
required.

Procedure

1. Start Oracle VM VirtualBox.

2. Click the File menu and select Import Appliance... from the pull-down list.

3. In the Import Virtual Appliance dialog box, browse to the location of the Foundation .ovf file, and
select the Foundation_VM-version#.ovf file.

4. Click Next.

5. Click Import.

6. In the left pane, select Foundation_VM-version#, and click Start.


The VM operating system boots, and the Foundation VM console launches.

7. If the login screen appears, log in as the nutanix user with the password nutanix/4u.

8. To set an IP address for Foundation VM in the management network:

a. Go to System > Preferences > Network Connections > IPv4 Settings and provide the following
details:

• IP address
• Gateway
• Netmask
b. Restart the VM.
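After the restart, you can confirm from a terminal in the Foundation VM that the new address took effect and that the gateway is reachable; the addresses below are placeholders:
$ ip addr show
$ ping -c 4 <gateway-ip-address>
From your workstation, also confirm that you can reach the Foundation VM, for example by pinging <foundation-vm-ip-address> or by browsing to the Foundation GUI URL.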

Foundation VM Upgrade
You can upgrade the Foundation VM using the GUI or CLI.
Ensure that you use at least the minimum version of Foundation required by your hardware platform. To determine
whether Foundation needs an upgrade for a hardware platform, see the Prerequisites on page 4 of
this guide. If the nodes you want to include in the cluster are of different models, determine which of their
minimum Foundation versions is the most recent, and then upgrade Foundation to at least that version.

Upgrading the Foundation VM by Using the GUI


The Foundation GUI enables you to perform one-click updates either over the air or from a .tar file that
you manually upload to the Foundation VM. The over-the-air update process downloads and installs the
latest Foundation version from the Nutanix Support portal. By design, the over-the-air update process
downloads and installs a .tar file that does not include Lenovo packages.

Before you begin


If you want to install a .tar file of your choice (optional), download the Foundation .tar file to the
workstation that you use to access or run Foundation. Installers are available on the Foundation download
page at (https://portal.nutanix.com/#/page/Foundation).



Procedure

1. Open the Foundation GUI.

2. At the bottom of the Start page, click the Foundation version number.
The Check for Updates box displays the latest Foundation version for upgrading.

3. In the Check for Updates box, do one of the following:

» To perform a one-click over-the-air update, click Update.


» To update Foundation by using an installer that you downloaded to the workstation, click Browse,
select the .tar file, then click Install.

Upgrading the Foundation VM by Using the CLI


You can upgrade the Foundation VM by using the CLI.

Procedure

1. Download the Foundation upgrade bundle (foundation-<version#>.tar.gz) from the Nutanix
Support portal to the /home/nutanix/ directory.

2. Change your working directory to /home/nutanix/.

3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-<version#>.tar.gz

Configuring Foundation VM by Using the Foundation GUI


Before you begin
Before configuring the Foundation VM by using the GUI, ensure the following:

• Complete the procedure described in Cisco UCS® Domain Mode Configuration for C-Series Servers on
page 6. Ensure that the servers discovered remain in an unassociated state in the Cisco UCS Manager.
• Assign IP addresses to the hypervisor host, the controller VMs, and the IPMI interfaces. Do not assign
IP addresses from a subnet that overlaps with the 192.168.5.0/24 address space on the default VLAN.
Nutanix uses an internal virtual switch to manage network communications between the controller VM
and the hypervisor host. This switch is associated with a private network on the default VLAN and uses
the 192.168.5.0/24 address space. If you want to use an overlapping subnet, make sure that you use a
different VLAN.
• Nutanix does not support mixed-vendor clusters.
• Upgrade Foundation to a later or relevant version. You can also update the foundation-platforms
submodule on Foundation. Updating the submodule enables Foundation to support the latest
hardware models or components qualified after the release of the installed Foundation version.
• Ensure that the recommended Cisco UCS B and C Series server firmware bundles are downloaded to
the UCS Manager.



Procedure

1. Access the Foundation web GUI using one of the following methods:

• On the Foundation VM desktop, double-click the Nutanix Foundation icon.


• In a web browser inside the Foundation VM, browse to the "http://localhost:8000/gui/index.html"
URL.
• Browse the http://<foundation-vm-ip-address>:8000/gui/index.html URL from a web browser outside the
Foundation VM.

2. On the Start page, configure the details as follows:

a. Select your hardware platform.


For Cisco UCS M6 and M7 Servers in UCS managed mode, Foundation supports only the Cisco
(Install via UCS Manager) option.
b. If Cisco (Install via UCS Manager) is selected as a hardware platform, enter the UCS Manager IP
address and login credentials.
c. Specify whether RDMA is to pass through to the CVMs.
Select No for the Cisco UCS C-Series servers in UCS-managed mode.
d. Configure LACP or LAG for network connections between the nodes and the switch.
Select None for the Cisco UCS C-Series servers in UCS-managed mode. Use port channel for
aggregation.
e. Specify the subnets and gateway addresses for the cluster and the Cisco UCS Manager out-of-band
management network.
f. Select the workstation network adapter that connects to the nodes' network.
g. Create one or two new network interfaces and assign their IP addresses to configure multi-homing.
You need to create these network interfaces only when the management network or the host and CVM
network is on a different subnet from the Foundation VM.

3. On the Nodes page, do the following:

a. Select either Discover Nodes or Add IPMI Nodes Manually.


For Cisco UCS servers, you must select only Discover Nodes. Foundation discovers and displays
only the servers that are managed by the Cisco UCS Manager.
b. Select only the servers that you intend to image using their serial numbers and enter their IPMI IP,
host IP, CVM IP, and hostname.
c. Select the node role.



4. Select the Tools drop-down list, and select one of the following options:

Add Nodes Manually: Add nodes manually if they are not already populated.
You can manually add nodes only in the standalone Foundation.
If you manually add multiple blocks in a single instance, all added blocks are assigned the same
number of nodes. To add blocks with different numbers of nodes, add multiple blocks with the
highest number of nodes, then delete nodes from each block as applicable. Alternatively, you can
repeat the add process to separately add blocks with different numbers of nodes.

Range Autofill: Assign the IP addresses and hostnames in bulk for each node.
Unlike the CVM Foundation, the standalone Foundation does not validate these IP addresses
by checking for their uniqueness. Therefore, manually cross-check and ensure that the IP
addresses are unique and valid.

Reorder Blocks: (Optional) Reorder IP addresses and hypervisor hostnames.

Reinstall Successful Nodes: Reinstall the nodes that were successfully prepared in the previous installation.

Select Only Failed Nodes: To debug issues, select all the failed nodes.

Remove Unselected Rows: (Optional) To remove nodes, leave them unselected and click Remove
Unselected Rows.

Note: For the AHV hypervisor, the hostname has the following restrictions:

• The maximum length is 64 characters.
• The hostname can consist of a-z, A-Z, 0-9, “-”, and “.” only.
• The hostname must start and end with a letter or number.



5. On the Cluster page, you can do any of the following:

• Provide cluster details.


• Configure cluster formation.
• Image nodes without creating a cluster.
• Set the timezone of the Foundation VM to your actual timezone.
• Enable network segmentation to separate CVM network traffic from guest VM and hypervisor
network traffic.
• Configure settings related to Cisco UCS Manager:

• Do not select the Skip automatic Service Profile creation checkbox.


• Select a MAC pool.
Foundation displays a list of MAC address pools that are discovered from the Cisco UCS
Manager. Ensure that you select a MAC pool that has sufficient MAC addresses for each vNIC on
each node.
• Select a VLAN object to assign a VLAN for the host and CVM network.
Foundation displays a list of VLAN objects discovered from the Cisco UCS Manager. Ensure that
you place the host and CVM network adapters in the native or default untagged VLAN.

Note:

• The Cluster Virtual IP field is essential for ESXi and AHV clusters.
• To provide multiple DNS or NTP servers, enter a list of IP addresses as a multi-line
input. For best practices in configuring NTP servers, see the Recommendations for Time
Synchronization section in the Prism Web Console Guide.

6. On the AOS page, upload an AOS image.

7. On the Hypervisor page, you can do any of the following:

• Specify and upload hypervisor image files.


• Upload the latest hypervisor allowlist JSON file that can be downloaded from the Nutanix Support
portal. This file lists supported hypervisors. For more information, see Hypervisor ISO Images on
page 26.

Note: You can select one or more nodes to be storage-only nodes that host AHV only. You must image
the rest of the nodes with another hypervisor and form a multi-hypervisor cluster.

8. Click Start.
The Installation in Progress page displays the progress status and the individual Log details for in-
progress or completed operations of all the nodes. Click Review Configuration for a read-only view of
the configuration details while the installation is in progress.

Results
After all the operations are completed, the Installation finished page appears.
If you missed any configuration, want to reconfigure, or perform the installation again, click Reset to return
to the Start page.



POST-INSTALLATION TASKS
Procedure

1. Add the appropriate Nutanix software licenses. For more information, see License Manager Guide.

2. Add the appropriate licenses for the hypervisor. For more information, see the corresponding vendor
documentation.

3. If you ordered TPM for your server along with VMware ESXi 7.0 U2 or later versions, the hypervisor can
use TPM 2.0 to encrypt the host configuration, even while secure boot enforcement remains disabled.
Nutanix recommends saving a copy of the TPM Encryption Recovery Key in a safe, remote location.
You need this key to recover the host configuration and boot the server back into the hypervisor after a
serviceability operation such as replacing the motherboard. Operations such as an ESXi upgrade, firmware
upgrade, or component replacement might affect the encryption recovery key. VMware vCenter might
send an alert or alarm to back up the key when it notices a change; therefore, it is important to back
up the encryption recovery key regularly and after every such operation.
To back up your TPM encryption recovery key, follow the instructions in https://
kb.vmware.com/s/article/81661.

4. Configure a new cluster in Prism. For more details, see Configuring a New Cluster in Prism on
page 23.

Configuring a New Cluster in Prism


About this task
Once the cluster is created, it can be configured through the Prism web console. A storage pool and a
container are provisioned automatically when the cluster is created, but many other options require user
input. The following are common cluster configuration tasks to perform soon after creating a cluster.

Procedure

1. Register the Prism Element to Prism Central.

Note: Make sure that Prism Central runs on a Nutanix cluster.



2. Verify that the cluster passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a recent version is available.
For more information, see the Software and Firmware Upgrades section.
b. Run NCC if you downloaded a newer version or did not run it as part of the installation process.
Run NCC from a command line: open a terminal, establish an SSH session to any controller VM in
the cluster, log on, and run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix Support for assistance.
c. Configure NCC so that the cluster checks run and are emailed according to your required
frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 to specify how frequently NCC runs and results
are emailed. For example, to run NCC and email results every 12 hours, specify 12; or every 24
hours, specify 24, and so on. For other commands related to automatically emailing NCC results,
see Automatically Emailing NCC Results in the Nutanix Cluster Check (NCC) Guide for your
version of NCC.

3. Specify the timezone of the cluster.


While logged on to the controller VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone
Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles,
Europe/London, or Asia/Tokyo). Restart all controller VMs in the cluster after changing the timezone. A
cluster can tolerate an outage of only a single controller VM at a time, so you must restart the controller
VMs in a rolling fashion. Ensure that each controller VM is fully operational after a restart before
restarting the next controller VM. For more information on using the nCLI, see the Command
Reference.

4. Specify an outgoing SMTP server (see the Configuring an SMTP Server section).

5. (Optional) If your site security policy allows Nutanix Support to access the cluster, enable the remote
support tunnel.
For more information, see the Controlling Remote Connections section.

Caution: Failing to enable remote support prevents Nutanix Support from directly addressing cluster
issues. Nutanix recommends that all customers allow email alerts at a minimum because it allows
proactive support of customer issues.

6. (Optional) If the site security policy allows Nutanix Support to collect cluster status information, enable
the Pulse feature.
For more information, see the Configuring Pulse section.
This information is used by Nutanix Support to send automated alerts about hardware failures, diagnose
potential problems, and assist proactively.

7. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails.
For more information, see the Configuring Email Alerts section.
You can also specify email recipients for specific alerts. For more information, see the Configuring
Alert Policies section.



8. (Optional) If the site security policy permits automatic downloads of upgrade software packages for
cluster components, enable the feature.
For more information, see the Software and Firmware Upgrades section.

Note: To ensure that automatic download of updates can function, allow access to the following URLs
through your firewall:

• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80

9. License the cluster.


For more information, see the License Management section.

10. For ESXi clusters, add the host to the vCenter management interface.
For more information, see the vSphere Administration Guide.



HYPERVISOR ISO IMAGES
An AHV ISO image is included as part of AOS. However, customers must provide ISO images for other
hypervisors. Check with your hypervisor manufacturer's representative or download an ISO image from
their support site.
Use the custom ISOs that are available on the VMware website (www.vmware.com) at Downloads >
Product Downloads > vSphere > Custom ISOs.
Make sure that the MD5 checksum of the hypervisor ISO image is listed in the ISO allowlist file used by
Foundation. See Verify Hypervisor Support on page 27.
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

Table 1: iso_whitelist.json Fields

(n/a): Displays the MD5 value for that ISO image (the key of each entry is the checksum itself).
min_foundation: Displays the earliest Foundation version that supports this ISO image. For example, "2.1"
indicates you can install this ISO image using Foundation version 2.1 or later (but not an earlier version).
hypervisor: Displays the hypervisor type (ESX or AHV). Entries with a "linux" hypervisor are not
available; they are for Nutanix internal use only.
min_nos: Displays the earliest AOS version compatible with this hypervisor ISO. A null value indicates
that no restrictions exist.
friendly_name: Displays a descriptive name for the hypervisor version, for example, "ESX 6.0" or
"Windows 2012r2".
version: Displays the hypervisor version, for example "6.0" or "2012r2".
unsupported_hardware: Lists the platform models that you cannot use with this ISO. A blank list
indicates that no model restrictions exist. However, conditional restrictions, such as the limitation that
Haswell-based models support only ESXi version 5.5 U2a or later, are reflected in this field.
compatible_versions: Reflects through regular expressions the hypervisor versions that can co-exist with
the ISO version in an Acropolis cluster (primarily for internal use).
deprecated (optional field): Indicates that this hypervisor image is not supported by the mentioned
Foundation version and later versions. If the value is "null," the image is supported by all Foundation
versions to date.
filesize: Displays the file size of the hypervisor ISO image.

The following sample entries are from the allowlist for an ESX and an AHV image:
"iso_whitelist": {
  "478e2c6f7a875dd3dacaaeb2b0b38228": {
    "min_foundation": "2.1",
    "hypervisor": "esx",
    "min_nos": null,
    "friendly_name": "ESX 6.0",
    "version": "6.0",
    "filesize": 329611264,
    "unsupported_hardware": [],
    "compatible_versions": {
      "esx": ["^6\\.0.*"]
    }
  },
  "a2a97a6af6a3e397b43e3a4c7a86ee37": {
    "min_foundation": "3.0",
    "hypervisor": "kvm",
    "min_nos": null,
    "friendly_name": "20160127",
    "compatible_versions": {
      "kvm": [
        "^el6.nutanix.20160127$"
      ]
    },
    "version": "20160127",
    "deprecated": "3.1",
    "unsupported_hardware": []
  },

Verify Hypervisor Support


The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate ISO
image files. The files are identified in the allowlist by their MD5 value (not file name); therefore, verify that
the MD5 value of the ISO that you intend to use is listed in the allowlist file.

Before you begin


Download the latest allowlist file from the Foundation page on the Nutanix support portal (https://
portal.nutanix.com/#/page/Foundation). For information about the contents of the allowlist file, see
Hypervisor ISO Images on page 26.

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded allowlist file in a text editor and perform a search for the MD5 checksum.

What to do next
If the MD5 checksum is listed in the allowlist file, save the file to the workstation that hosts the Foundation
VM. If the allowlist file on the Foundation VM does not contain the MD5 checksum, replace that file with the
downloaded file before you begin installation.
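For example, on a Linux workstation you can compute the checksum and then search the downloaded allowlist file for it; the ISO file name below is a placeholder:
$ md5sum <hypervisor-installer>.iso
$ grep -A 6 "<md5-checksum-from-previous-command>" iso_whitelist.json
If grep prints a matching entry, the image is present in the allowlist.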

Updating an iso_whitelist.json File on Foundation VM


Procedure

1. On the Foundation page, click Hypervisor and select a hypervisor from the drop-down list below
Select a hypervisor installer.

2. To upload a new iso_whitelist.json file, click Manage Whitelist, and then click upload it.

3. After selecting the file, click Upload.

Note: To verify that the iso_whitelist.json file is updated successfully, open the Manage Whitelist menu and
check the date of the newly updated file.



TROUBLESHOOTING
This section provides guidance for fixing imaging issues that might occur when you image servers with
Foundation.

Fixing Imaging Issues


When imaging fails for one or more nodes in the cluster, the progress bar turns red, and a red check
appears next to the hypervisor address field for any node that was not imaged successfully.

About this task


Possible reasons for a failure include the following:

• Network connectivity issues, such as the connection dropping intermittently. If intermittent failures
persist, look for conflicting IP addresses.
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free some space
by deleting unwanted ISO images. In addition, a Foundation crash could leave a /tmp/tmp* directory that
contains a copy of an ISO image that you can unmount (if necessary) and delete. Foundation requires 3
GB of free space for ESXi or AHV (see the cleanup example after this list).
• The host boots but returns an error indicating an issue with reaching the Foundation VM. The message
varies by hypervisor. For example, on ESXi, you might see a ks.cfg:line 12: "/.pre" script
returned with an error error message. Ensure that you assign the host an IP address on the same
subnet as the Foundation VM or that multi-homing is configured. Also, check for IP address conflicts.
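The following sketch, run from a terminal in the Foundation VM, shows how you might check free space and clean up a leftover temporary directory; the /tmp/tmpXXXXXX name is a placeholder for whatever directory you actually find:
$ df -h /home /tmp
$ ls -d /tmp/tmp*
$ umount /tmp/tmpXXXXXX    # only if an ISO image is still mounted there (use sudo if needed)
$ rm -rf /tmp/tmpXXXXXX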

Procedure

1. Identify the imaging problems as follows:

• See the individual log file of any failed node for information about the problem.
In the Foundation GUI, the Installation Progress page shows the logs. You can also see the logs on the
Foundation VM at /home/nutanix/foundation/log.
• If the firmware bundle version specified in the Foundation firmware policy is not available in the
Cisco UCS Manager, download the recommended firmware versions to the Cisco UCS Manager by
following Preparing Cisco UCS Manager with Recommended Software and Firmware Versions on
page 10.
• If mounting the virtual media on the Cisco UCS fabric interconnect fails, see Knowledge Base (KB)
article 000007692 to resolve the issue.
• If Foundation fails after mounting the Phoenix ISO image by following KB article 000007692, the
following error messages appear:

• Foundation:
StandardError: Failed to mount phoenix iso with error: Unable to find mount
entry

• Cisco UCS Manager:


Affected object: sys/rack-unit-15/mgmt/actual-mount-list/actual-mount-entry-1
Description: Server 15 (service profile: org-root/ls-fdtnserial) vmedia mapping
foundation_mnt has failed.

Workaround: Cisco recommends that you de-commission and re-commission the server from the
Cisco UCS Manager. You can restart imaging once the re-commission is complete.
• If the Phoenix IP is not reachable, Foundation fails to connect to Phoenix with the following error
message.
2023-08-01 07:07:23,982Z ERROR Node with ip 10.17.xx.yy is not in phoenix or is
not reachable
2023-08-01 07:07:23,983Z ERROR Exception in running
<ImagingStepRAIDCheckPhoenix(<NodeConfig(10.17.xx.yy) @9590>) @9990>
Traceback (most recent call last):
File "foundation/imaging_step.py", line 161, in _run
File "foundation/imaging_step_misc_hw_checks.py", line 155, in run
StandardError: Couldn't find hardware_config.json in 10.17.xx.yy

Workaround: Cisco recommends that you de-commission and re-commission the server from the
Cisco UCS Manager. You can restart imaging once the re-commission is complete.
• If Foundation fails when you reach the maximum web session limit for the Cisco UCS Manager, the
following error message appears.
StandardError: Request failed, error 572 (User reached maximum session limit)
Workaround: Increase the web session limit from the default value of 32 to a higher value in the Cisco
UCS Manager.



2. When the issue is resolved, do the following:

a. Select Click here to learn how to retry the failed nodes.
A dialog box appears.
b. Click return to your configuration.
Foundation redirects you to the Start page and prompts you to enter the Cisco UCS Manager
password.
If Foundation pre-populates the Cisco UCS Manager password, refresh the web page and re-enter the
password.
c. Click Next.
d. On the Nodes page, click Scan to discover the nodes using Cisco UCS Manager.
e. Repeat steps a through d as necessary to resolve any other imaging errors.
If you are unable to resolve the issue for one or more of the nodes, you might need to image these
nodes one at a time. For more information, contact Support.



COPYRIGHT
Copyright 2024 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.
