Field Installation Guide Cisco HCI UCM
Prerequisites.......................................................................................... 4
Network Requirements........................................................................... 5
Downloading Files................................................................................ 13
Downloading AOS Installation Bundle...................................................................................... 13
Downloading Foundation.......................................................................................................... 14
Server Imaging..................................................................................... 15
Prepare Bare-Metal Nodes for Imaging.................................................................................... 15
Considerations for Bare-Metal Imaging.................................................................................... 15
Preparing the Workstation........................................................................................................16
Installing the Foundation VM.................................................................................................... 18
Foundation VM Upgrade........................................................................................................... 18
Upgrading the Foundation VM by Using the GUI............................................................. 18
Upgrading the Foundation VM by Using the CLI.............................................................. 19
Configuring Foundation VM by Using the Foundation GUI.........................................................19
Post-Installation Tasks.......................................................................... 23
Configuring a New Cluster in Prism......................................................................................... 23
Troubleshooting.................................................................................... 28
Fixing Imaging Issues............................................................................................................... 28
Copyright..............................................................................................31
FIELD INSTALLATION OVERVIEW
This document describes the tasks of preparing Cisco Unified Computing System (UCS) C-Series rack
servers and Cisco UCS Manager and imaging the servers using Nutanix Foundation. The terms server and
node are used interchangeably in this document.
The Cisco UCS servers and fabric interconnects must run on specific hardware, software, and firmware
versions. For the complete list of supported hardware components, software, and firmware versions, see
UCS Hardware and Software Compatibility on the Cisco website.
Foundation is the deployment software of Nutanix that allows you to image a node with a hypervisor and
an AOS of your choice and form a cluster out of the imaged nodes. Foundation is available for download at
https://portal.nutanix.com/#/page/Foundation.
A Nutanix cluster runs on Cisco UCS C-Series servers in Cisco UCS Domain Mode with a hypervisor and
Nutanix AOS. AOS is the operating system of the Nutanix controller VM, which must be running on the
hypervisor to provide Nutanix-specific functionality. For the complete list of supported hypervisor and AOS
versions for Cisco UCS M6 and M7 servers, see the Compatibility and Interoperability Matrix.
If you already have a running cluster and want to add nodes to it, you must discover the new nodes in UCS
Manager, image those nodes using Foundation VM, and use the Expand Cluster option in Prism. For
more information, see the Prism Element Web Console Guide.
• You must have access to the Cisco website to download the supported firmware.
• You must have access to the Nutanix Support Portal to download the supported software. The
supported Foundation versions are 5.4.2 or later.
• If you need a vSphere ESXi hypervisor for installation, you must have access to the VMware website.
• You must have all required licenses from Cisco, Nutanix, and VMware.
Obtain the following network parameters for the installation:
• Default gateway
• Network mask
• DNS server
• NTP server
Check whether a proxy server is in place in the network. If a proxy server is in place, you need the IP
address and port number of that server when enabling Nutanix Support on the cluster.
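If your workstation is Linux-based, one quick, non-authoritative way to check for a proxy is to inspect the standard proxy environment variables (your site might instead configure the proxy in the browser or through a PAC file):
$ env | grep -i proxy
If variables such as http_proxy or https_proxy are set, note the IP address and port number that they contain.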
New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
• IPMI interface
• Hypervisor host
• Nutanix controller VM
Nutanix recommends that you use a cluster virtual IP address for each Nutanix cluster.
All controller VMs and hypervisor hosts must be on the same subnet. No systems other than the controller
VMs and hypervisor hosts can be on this network.
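For example, a hypothetical addressing plan for a three-node cluster on the 10.1.0.0/24 subnet might look like the following (all values are illustrative, not defaults):
Node 1: IPMI 10.1.0.11, hypervisor host 10.1.0.21, controller VM 10.1.0.31
Node 2: IPMI 10.1.0.12, hypervisor host 10.1.0.22, controller VM 10.1.0.32
Node 3: IPMI 10.1.0.13, hypervisor host 10.1.0.23, controller VM 10.1.0.33
Cluster virtual IP: 10.1.0.40
Note that all hypervisor hosts and controller VMs share one subnet, as required.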
Procedure
1. Connect the workstation to the serial console port labeled Console on the front panel of the fabric
interconnects.
2. Connect the L1 and L2 ports of one fabric interconnect directly to the L1 and L2 ports on the other fabric
interconnect (L1 to L1 and L2 to L2) using Ethernet cables.
This enables the fabric interconnects to function in a cluster mode.
3. Configure the serial console connection on the workstation and use that serial console connection to
configure the fabric interconnects.
The first fabric interconnect (FI-A) is configured as the primary, and the second fabric interconnect
(FI-B) is configured as the subordinate. The primary and subordinate roles can change during maintenance
operations.
You can run the Foundation VM on your workstation or on your existing virtual infrastructure. The
workstation must have access to the fabric interconnect management and uplink networks.
Procedure
1. Connect the hardware as follows:
a. Connect the management port on the front panel of each fabric interconnect to the management
switch.
The management port is the port labeled as MGMT0.
b. Connect the workstation to the management network switch.
Alternatively, ensure that the Foundation VM hosted on your virtual infrastructure can access the
management switch.
c. Connect the fabric interconnects to the upstream L2 ToR switch using the uplink ports.
The uplink connections must have sufficient bandwidth to accommodate the traffic from the servers
attached to fabric interconnects.
d. Connect the L1 and L2 ports of one fabric interconnect directly to the L1 and L2 ports on the other
fabric interconnect (L1 to L1 and L2 to L2) using Ethernet cables.
This enables the fabric interconnects to function in a cluster mode.
2. Make a note of fabric interconnect port numbers connected to the ToR switch.
You need this information when configuring the uplink ports of the fabric interconnect in the Cisco UCS
Manager.
3. Connect the Cisco UCS servers to the fabric interconnects using the virtual interface cards (VIC) in one
of the following physical topologies.
For example, the quad port VIC models, such as mLOM VIC 1467 and PCIe VIC 1455, have two
hardware port channels. Ports 1 and 2 are part of one port channel, and ports 3 and 4 are part of the
other. You must connect all the ports of a port channel to the same fabric interconnect and connect at
least one port of each port channel. Ensure that all the ports of a port channel are connected at the
same speed.
Sample Network Topology 1: Single VIC
The single VIC network topology must have either the mLOM SKU or the PCIe SKU of VIC using the
single-link or dual-link port channel.
• Single-Link Port Channel: If you use quad port VIC models such as mLOM VIC 1467 or PCIe VIC
1455, connect port 1 to fabric interconnect A and port 3 to fabric interconnect B as shown in the
following image.
Sample Network Topology 2: Dual VIC
If you use both a quad-port mLOM VIC and a quad-port PCIe VIC, such as mLOM VIC 1467 and PCIe
VIC 1455, connect port 1 of each VIC to fabric interconnect A and port 3 of each VIC to fabric
interconnect B as shown in the following image.
4. Make a note of the port numbers on the fabric interconnects for which you established the connections
in Step 3. You need this information when configuring the fabric interconnect ports as server ports in the
Cisco UCS Manager.
Configuring Network Connection in Cisco UCS Manager
This section describes configuring the fabric interconnect ports, MAC pool, and VLAN object, if necessary.
Procedure
1. Log in to the Cisco UCS Manager web interface using the fabric interconnects cluster IP address.
a. Configure the uplink ports of each fabric interconnect connected to the top of rack (ToR) switch.
b. Configure the ports on each fabric interconnect connected to the Cisco UCS Servers as server ports.
For more information, see the LAN Ports and Port Channels chapter of the Cisco UCS Manager
Network Management Guide.
Procedure
1. From the Cisco website, download the recommended version of the Cisco UCS Infrastructure Software
A bundle and the Cisco UCS server firmware B and C bundles to Cisco UCS Manager.
2. Upgrade the Cisco UCS Manager software and fabric interconnect firmware using the downloaded
Cisco UCS Infrastructure Software A bundle version. For more information, see the Cisco UCS
Manager Firmware management guide on the Cisco website.
Server firmware is automatically upgraded or downgraded later by Foundation during the imaging
process, using the server firmware C bundle that you downloaded to the Cisco UCS Manager in step 1.
Note: Foundation also specifies the recommended server firmware version that it upgrades during
imaging.
Procedure
1. Power on the servers and let the Cisco UCS Manager begin the discovery process.
2. Once the discovery process is complete in the Cisco UCS Manager, ensure that each server is
available in an unassociated state.
3. Make a note of the serial numbers of the servers that you intend to use for imaging. In Cisco UCS
Manager, navigate to Equipment > Equipment > Rack Mounts > Servers, and then click a server.
The serial number of the selected server appears in the General tab.
• Use Nutanix Life Cycle Manager (LCM) to manage Cisco UCS C-Series server firmware versions.
• Use a cluster of two fabric interconnects in Ethernet end-host mode.
• Use Cisco UCS Managed Mode configuration.
• Ensure that the default UCS Manager MAC address pool has enough addresses to assign to each
vNIC.
• Do not use server pools for Nutanix servers.
• Do not move service profiles from one Nutanix server to another.
• Set the default maintenance policy to a value other than Immediate. Alternatively, create a new
maintenance policy set to a value other than Immediate and assign it to the service profile of each new
Nutanix node created by Foundation.
Common Networking Best Practices
• (Optional) To simplify the network design, place the Cisco Integrated Management Controller
(CIMC) and Cisco UCS Manager in the same network.
• Place the host and CVM network adapters in the native or default untagged VLAN.
• (Optional) Place the user VM network adapters in appropriate VLANs on the host.
• In ESXi, create a port group for all CVMs that prefers the same top of rack (ToR) switch.
• For all other port groups, use Route based on originating virtual port ID for the standard
vSwitch and Route based on physical NIC load for the distributed vSwitch.
• In AHV, use the default active-backup mode for simplicity. For more advanced networking
configuration, see the AHV Networking best practices guide.
• In Cisco UCS Domain mode operation, do not connect the dedicated or shared LOM port for
CIMC; use UCS Manager for management instead.
• When you use disjoint layer two upstream networks with the fabric interconnect, follow the Cisco
best practices and do not use pin groups.
• Ensure that UCS Manager, CVMs, and hypervisor hosts are accessible by IP from the Nutanix
Foundation VM.
• Foundation creates one vNIC per hardware port channel; create additional vNICs only if
required.
• Do not enable fabric failover for vNICs in Nutanix nodes.
• Let the hypervisor OS perform NIC teaming.
• Ensure that there is adequate bandwidth between the upstream switches using redundant
40 Gbps (or greater) connections.
• Do not use LACP or other link aggregation methods with the host when connected to fabric
interconnects.
DOWNLOADING FILES
Before you begin
Tip:
If you already have a running cluster and want to add these new nodes to it, you must use the
Expand Cluster option in Prism instead of using Foundation. Expand Cluster allows you to
directly reimage a node whose hypervisor/AOS version does not match the cluster's version or a
node that is not running AOS.
See the "Expanding a cluster" section in the Prism Element Web Console Guide for information
on using the Expand Cluster option in Prism.
Note: Nutanix recommends using the latest version of AOS that is compatible with your model. For the
complete list of AOS versions compatible with your model, see the Compatibility Matrix.
Procedure
1. Log on to the Nutanix Support portal.
2. Click the menu icon (at the top left), click Downloads, and select AOS.
The AOS screen appears, which displays information about AOS and the installation bundles of several
AOS versions for download.
3. Click the Download button for the required AOS version to download the corresponding AOS
Installation Bundle named nutanix_installer_package-version#.tar.gz to any convenient location on your
laptop (or your workstation).
Procedure
1. Log on to the Nutanix Support portal.
2. Click the menu icon (at the top left), click Downloads, and then select Foundation.
3. Click the Download button for Foundation Upgrade for CVM or Standalone Foundation VM to
download the Foundation Installation Bundle.
Note:
Download the Foundation Installation Bundle only if a newer version of Foundation is available
in the Nutanix portal and your nodes do not have access to the internet. To find out the
Foundation version on your nodes, check the file foundation_version in the following location on
your node:
/home/nutanix/foundation/
If your nodes have internet access, you do not have to download the Foundation Installation
Bundle. Foundation will automatically notify you if a newer Foundation version is available
when it is launched.
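For example, assuming the default location described above, you can display the Foundation version from an SSH session to a node as follows:
nutanix@cvm$ cat /home/nutanix/foundation/foundation_version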
• Physically install the nodes and fabric interconnects at your site. For information about installing and
configuring Cisco hardware platforms, see Cisco UCS® Domain Mode Configuration for C-Series
Servers on page 6.
• Set up the installation environment (see Preparing the Workstation on page 16).
• Ensure that you have the appropriate node and cluster parameter values needed for installation. The
use of a DHCP server is not supported for controller VMs, so make sure to assign static IP addresses to
controller VMs.
Note: If the Foundation VM is configured with an IP address that is different from other clusters that
require imaging in a network (for example, Foundation VM is configured with a public IP address while
the cluster resides in a private network), repeat Step 8 in Installing the Foundation VM on page 18
to configure a new static IP address for the Foundation VM.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before imaging
the nodes. If the nodes contain only SEDs, enable encryption after you image the nodes. If the nodes
contain both regular hard disk drives (HDDs) and SEDs, do not enable encryption on the SEDs at any
time during the life of the cluster.
For information about enabling and disabling encryption, see the Data-at-Rest Encryption chapter in the
AOS Security Guide.
Note: It is important that you unlock the SEDs before imaging to prevent any data loss. To unlock the
SEDs, contact Nutanix Support or see the KB article 000003750 on the Nutanix Support portal.
After you prepare the bare-metal nodes for Foundation, configure the Foundation VM by using the GUI. For
more information, see Node Configuration and Foundation Launch.
• If Spanning Tree Protocol (STP) is enabled on the ports that are connected to the Nutanix host,
Foundation might time out during the imaging process. Therefore, be sure to disable STP by using
PortFast or an equivalent feature on the ports that are connected to the Nutanix host before starting
Foundation.
• Avoid connecting any device that presents virtual media, such as a CD-ROM (for example, a device
plugged into a USB port on a node). Such devices might conflict with the installation when the Foundation
tool tries to mount the virtual CD-ROM hosting the installation ISO.
Recommendations
• Nutanix recommends contacting Cisco Services if you require assistance in imaging, configuration of
bare-metal nodes, and setting up the infrastructure. If you run into any issues during Foundation, then
please contact Cisco Support.
• Connect to a flat switch (no routing tables) instead of a managed switch (routing tables) to protect
the production environment against configuration errors. Foundation includes a multi-homing feature
that allows you to image nodes by using the production IP addresses even when connected to a flat
switch. For information about the network topology and port access required for a cluster, see Network
Requirements on page 5.
Limitations
Foundation does not support configuring network adapters to use jumbo frames during imaging. Perform
this configuration manually after imaging.
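As an illustration only, on an ESXi host you might enable jumbo frames after imaging by setting an MTU of 9000 on a standard vSwitch and a VMkernel interface. This sketch assumes the default names vSwitch0 and vmk0; adjust them for your environment and ensure that the physical network also supports jumbo frames end to end:
root@esxi# esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
root@esxi# esxcli network ip interface set --interface-name=vmk0 --mtu=9000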
1. Go to the Nutanix Support portal and download the following files to a temporary directory on the
workstation:
• Foundation_VM-version#-disk1.vmdk. This file is the Foundation VM VMDK file for the version# release. For example, Foundation_VM-3.1-disk1.vmdk.
• Foundation_VM_OVF-version#.tar. This file is the Foundation VM OVF archive for the version# release, which you extract in step 3.
2. Download the installer for Oracle VM VirtualBox and install it with the default options.
Oracle VM VirtualBox is a free, open-source tool used to create a virtualized environment on the
workstation. For installation and start-up instructions, see the Oracle VM VirtualBox User Manual
(https://www.virtualbox.org/wiki/Documentation).
You can also use any other virtualization environment (VMware ESXi, AHV, and so on) instead of
Oracle VM VirtualBox.
3. Go to the location where you downloaded the Foundation .tar file and extract the contents.
$ tar -xf Foundation_VM_OVF-version#.tar
If the tar utility is not available, use the appropriate utility for your environment.
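For example, on a Windows workstation without the tar utility, 7-Zip can extract the bundle (assuming 7-Zip is installed and on your PATH):
> 7z x Foundation_VM_OVF-version#.tar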
4. Copy the extracted files to the VirtualBox VMs folder that you created.
Procedure
1. Start Oracle VM VirtualBox.
2. Click the File menu and select Import Appliance... from the pull-down list.
3. In the Import Virtual Appliance dialog box, browse to the location of the Foundation .ovf file, and
select the Foundation_VM-version#.ovf file.
4. Click Next.
5. Click Import.
6. Start the Foundation VM.
7. If the login screen appears, log in as the nutanix user with the password nutanix/4u.
8. Configure a static IP address for the Foundation VM:
a. Go to System > Preferences > Network Connections > IPv4 Settings and provide the following
details:
• IP address
• Gateway
• Netmask
b. Restart the VM.
Foundation VM Upgrade
You can upgrade the Foundation VM using the GUI or CLI.
Ensure that you use at least the minimum version of Foundation required by your hardware platform. To
determine whether Foundation needs an upgrade for a hardware platform, see Prerequisites on page 4 of
this guide. If the nodes you want to include in the cluster are of different models, determine the most
recent of their minimum Foundation versions, and then upgrade Foundation to at least that version.
1. Open the Foundation GUI.
2. At the bottom of the Start page, click the Foundation version number.
The Check for Updates box displays the latest Foundation version for upgrading.
Procedure
3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-<version#>.tar.gz
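If the installation bundle is still on your workstation, one way to stage it in the Foundation VM home directory before running the upgrade command above is scp (the IP address shown is an example):
$ scp foundation-<version#>.tar.gz nutanix@10.1.0.50:/home/nutanix/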
• Complete the procedure described in Cisco UCS® Domain Mode Configuration for C-Series Servers on
page 6. Ensure that the servers discovered remain in an unassociated state in the Cisco UCS Manager.
• Assign IP addresses to the hypervisor host, the controller VMs, and the IPMI interfaces. Do not assign
IP addresses from a subnet that overlaps with the 192.168.5.0/24 address space on the default VLAN.
Nutanix uses an internal virtual switch to manage network communications between the controller VM
and the hypervisor host. This switch is associated with a private network on the default VLAN and uses
the 192.168.5.0/24 address space. If you want to use an overlapping subnet, make sure that you use a
different VLAN.
• Nutanix does not support mixed-vendor clusters.
• Upgrade Foundation to the latest or another relevant version. You can also update the foundation-platforms
submodule on Foundation. Updating the submodule enables Foundation to support the latest
hardware models or components qualified after the release of an installed Foundation version.
• Ensure that the recommended Cisco UCS B and C Series Server firmware bundles are downloaded in
the UCS Manager.
1. Access the Foundation web GUI using one of the following methods:
Add Nodes Manually: Add nodes manually if they are not already populated. You can manually add nodes
only in the standalone Foundation. If you manually add multiple blocks in a single instance, all added
blocks are assigned the same number of nodes. To add blocks with different numbers of nodes, add
multiple blocks with the highest number of nodes, and then delete nodes from each block, as applicable.
Alternatively, you can repeat the add process to separately add blocks with different numbers of nodes.
Select Only Failed Nodes: To debug issues, select all the failed nodes.
Remove Unselected Rows: (Optional) Leave the nodes that you want to remove unselected, and click
Remove Unselected Rows.
Note: For the AHV hypervisor, the hostname has the following restrictions:
Note:
• The Cluster Virtual IP field is required for ESXi and AHV clusters.
• To provide multiple DNS or NTP servers, enter a list of IP addresses as a multi-line
input. For best practices in configuring NTP servers, see the Recommendations for Time
Synchronization section in the Prism Web Console Guide.
Note: You can select one or more nodes to be storage-only nodes that host AHV only. You must image
the rest of the nodes with another hypervisor and form a multi-hypervisor cluster.
8. Click Start.
The Installation in Progress page displays the progress status and the individual Log details for in-
progress or completed operations of all the nodes. Click Review Configuration for a read-only view of
the configuration details while the installation is in progress.
Results
After all the operations are completed, the Installation finished page appears.
If you missed any configuration, want to reconfigure, or want to perform the installation again, click Reset
to return to the Start page.
1. Add the appropriate Nutanix software licenses. For more information, see the License Manager Guide.
2. Add the appropriate licenses for the hypervisor. For more information, see the corresponding vendor
documentation.
3. If you ordered TPM for your server along with VMware ESXi 7.0 U2 or a later version, the hypervisor can
use TPM 2.0 to encrypt the host configuration, even while secure boot enforcement remains disabled.
Nutanix recommends saving a copy of the TPM Encryption Recovery Key in a safe, remote location.
You need this key to recover the host configuration and boot the server back into the hypervisor after
a serviceability operation such as replacing the motherboard. Operations such as an ESXi upgrade, a
firmware upgrade, or a component replacement might affect the encryption recovery key. VMware vCenter
might send an alert or alarm to back up the key when it notices a change; therefore, it is important to back
up the encryption recovery key regularly and after every such operation.
To back up your TPM encryption recovery key, follow the instructions mentioned in https://
kb.vmware.com/s/article/81661.
4. Configure a new cluster in Prism. For more details, see Configuring a New Cluster in Prism on
page 23.
Procedure
a. Check the installed NCC version and update it if a recent version is available.
For more information, see the Software and Firmware Upgrades section.
b. Run NCC if you downloaded a newer version or did not run it as part of the installation process.
Run NCC from a command line. Open a command window, establish an SSH session to any
controller VM in the cluster, log on, and run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix Support for assistance.
c. Configure NCC so that the cluster checks run and are emailed according to your required
frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 to specify how frequently NCC runs and results
are emailed. For example, to run NCC and email results every 12 hours, specify 12; or every 24
hours, specify 24, and so on. For other commands related to automatically emailing NCC results,
see Automatically Emailing NCC Results in the Nutanix Cluster Check (NCC) Guide for your
version of NCC.
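For example, to run NCC and email the results every 12 hours:
nutanix@cvm$ ncc --set_email_frequency=12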
4. Specify an outgoing SMTP server (see the Configuring an SMTP Server section).
5. (Optional) If your site security policy allows Nutanix Support to access the cluster, enable the remote
support tunnel.
For more information, see the Controlling Remote Connections section.
Caution: Failing to enable remote support prevents Nutanix Support from directly addressing cluster
issues. Nutanix recommends that all customers allow email alerts at a minimum because doing so allows
proactive support of customer issues.
6. (Optional) If the site security policy allows Nutanix Support to collect cluster status information, enable
the Pulse feature.
For more information, see the Configuring Pulse section.
This information is used by Nutanix Support to send automated hardware failure alerts and diagnose
potential problems and assist proactively.
7. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails.
For more information, see the Configuring Email Alerts section.
You can also specify email recipients for specific alerts. For more information, see the Configuring
Alert Policies section.
Note: To ensure that automatic download of updates can function, allow access to the following URLs
through your firewall:
• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
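As a quick, non-authoritative connectivity check, you can probe the second endpoint from a controller VM (the amazonaws.com hosts matched by the wildcard vary by release, so they are not checked here):
nutanix@cvm$ curl -I http://release-api.nutanix.com:80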
10. For ESXi clusters, add the host to the vCenter management interface.
For more information, see the vSphere Administration Guide.
The following sample entries are from the allowlist for an ESX and an AHV image:
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"filesize": 329611264,
"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
Procedure
1. Obtain the MD5 checksum of the ISO that you want to use.
2. Open the downloaded allowlist file in a text editor and perform a search for the MD5 checksum.
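For example, on a Linux workstation you can compute the checksum and search the allowlist from the command line instead of a text editor (the ISO filename is illustrative; the checksum is from the ESX sample entry above):
$ md5sum VMware-VMvisor-Installer-6.0.iso
$ grep -A 5 "478e2c6f7a875dd3dacaaeb2b0b38228" iso_whitelist.json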
What to do next
If the MD5 checksum is listed in the allowlist file, save the file to the workstation that hosts the Foundation
VM. If the allowlist file on the Foundation VM does not contain the MD5 checksum, replace that file with the
downloaded file before you begin installation.
1. On the Foundation page, click Hypervisor and select a hypervisor from the drop-down list below
Select a hypervisor installer.
2. To upload a new iso_whitelist.json file, click Manage Whitelist, and then click Upload.
Note: To verify that the iso_whitelist.json file was updated successfully, open the Manage Whitelist menu
and check the date of the newly updated file.
• The connection might drop intermittently. If intermittent failures persist, look for conflicting IP addresses.
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free some space
by deleting unwanted ISO images. In addition, a Foundation crash could leave a /tmp/tmp* directory that
contains a copy of an ISO image that you can unmount (if necessary) and delete. Foundation requires 3
GB for ESXi or AHV.
• The host boots but returns the error indicating an issue with reaching the Foundation VM. The message
varies by hypervisor. For example, on ESXi, you might see a ks.cfg:line 12: "/.pre" script
returned with an error error message. Ensure that you assign the host an IP address on the same
subnet as the Foundation VM, or that multi-homing is configured. Also, check for IP address conflicts.
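For the disk-space and connectivity symptoms above, the following sketch shows typical checks from inside the Foundation VM (the mount point and IP address are examples):
$ df -h /home
$ ls -d /tmp/tmp*
$ sudo umount /tmp/tmpXXXXXX/iso
$ rm -rf /tmp/tmpXXXXXX
$ ping -c 3 10.17.xx.yy
$ sudo arping -D -c 3 -I eth0 10.17.xx.yy
The arping duplicate-address-detection check (-D) can help reveal conflicting IP addresses, assuming the iputils variant of arping is installed.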
Procedure
• See the individual log file for any failed nodes for information about the problem.
In the Foundation GUI, the Installation Progress page shows the logs. You can also see logs in the
Foundation VM at /home/nutanix/foundation/log.
• If the firmware bundle version specified in the Foundation firmware policy is not available in the
Cisco UCS Manager, download the recommended firmware versions to the Cisco UCS Manager.
• Foundation fails to mount the Phoenix ISO and reports the following error:
StandardError: Failed to mount phoenix iso with error: Unable to find mount
entry
Workaround: Cisco recommends that you decommission and recommission the server from the
Cisco UCS Manager. You can restart imaging once the recommission is complete.
• If the Phoenix IP is not reachable, Foundation fails to connect to Phoenix with the following error
message.
2023-08-01 07:07:23,982Z ERROR Node with ip 10.17.xx.yy is not in phoenix or is
not reachable
2023-08-01 07:07:23,983Z ERROR Exception in running
<ImagingStepRAIDCheckPhoenix(<NodeConfig(10.17.xx.yy) @9590>) @9990>
Traceback (most recent call last):
File "foundation/imaging_step.py", line 161, in _run
File "foundation/imaging_step_misc_hw_checks.py", line 155, in run
StandardError: Couldn't find hardware_config.json in 10.17.xx.yy
Workaround: Cisco recommends that you decommission and recommission the server from the
Cisco UCS Manager. You can restart imaging once the recommission is complete.
• If Foundation fails when you reach the maximum web session limit for the Cisco UCS Manager, the
following error message appears.
StandardError: Request failed, error 572 (User reached maximum session limit)
Workaround: Increase the web session limit from its default value of 32 in the Cisco UCS Manager.