
NX-3155G-G8 System Specifications

March 21, 2023


Contents

1. System Specifications
   Node Naming (NX-3155G-G8)
   NX-3155G-G8 System Specifications
   NX-3155G-G8 GPU Specifications

2. Component Specifications
   Controls and LEDs for Single-node Platforms
   LED Meanings for Network Cards
   Power Supply Unit (PSU) Redundancy and Node Configuration
   Nutanix DMI Information

3. Memory Configurations
   Supported Memory Configurations

4. Nutanix Hardware Naming Convention

Copyright
   License
   Conventions
   Default Cluster Credentials
   Version
1
SYSTEM SPECIFICATIONS
Node Naming (NX-3155G-G8)
Nutanix assigns a name to each node in a block; the naming scheme varies by product type.
The NX-3155G-G8 block contains a single node, named Node A.
The NX-3155G-G8 supports solid-state drives (SSDs), hard disk drives (HDDs), and non-volatile
memory express (NVMe) drives.
Only drive slots 4, 5, 7, 8, 10, and 11 can be populated. All other drive slots must remain empty
for thermal reasons.
For all-SSD configurations, the NX-3155G-G8 supports partial population of drives. Populate
the drives in pairs, from left to right in the active drive slots. The NX-3155G-G8 does not
support odd-numbered drive configurations.
Any configuration with NVMe drives must also contain four SSDs. Because six slots must remain
empty, only one SSD-with-NVMe configuration is supported: SSDs in slots 4, 5, 7, and 8, NVMe
drives in slots 10 and 11, and all other slots empty.

Figure 1: NX-3155G-G8 Front Panel

Table 1: Supported Drive Configurations

• Hybrid HDD and SSD – Two SSDs in slots 4 and 5; four HDDs in slots 7, 8, 10, and 11; all other drive slots empty.

• All-flash – Two, four, or six SSDs, with all other drive slots empty.

• SSD with NVMe – Four SSDs and two NVMe drives, with six empty drive slots. The NVMe drives must go in slots 10 and 11, on the lower right.
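These population rules are mechanical enough to check in software. The following is a minimal sketch (a hypothetical helper, not a Nutanix utility) that validates a proposed drive layout against the slot and pairing rules above; the slot numbers and configurations come from Table 1.

```python
# Hypothetical validator for NX-3155G-G8 drive layouts, based on the
# rules in this section. Not an official Nutanix tool.

ACTIVE_SLOTS = (4, 5, 7, 8, 10, 11)  # all other slots must stay empty

def validate_layout(layout):
    """layout: dict mapping slot number -> 'SSD', 'HDD', or 'NVME'."""
    if any(slot not in ACTIVE_SLOTS for slot in layout):
        return False  # a populated slot falls outside the six active slots
    ssds = sorted(s for s, d in layout.items() if d == "SSD")
    hdds = sorted(s for s, d in layout.items() if d == "HDD")
    nvmes = sorted(s for s, d in layout.items() if d == "NVME")
    if nvmes:  # SSD with NVMe: exactly 4 SSDs + 2 NVMe in fixed slots
        return ssds == [4, 5, 7, 8] and nvmes == [10, 11] and not hdds
    if hdds:   # hybrid: 2 SSDs in slots 4 and 5, 4 HDDs in 7, 8, 10, 11
        return ssds == [4, 5] and hdds == [7, 8, 10, 11]
    # all-flash: 2, 4, or 6 SSDs, populated in pairs from the left
    return len(ssds) in (2, 4, 6) and ssds == list(ACTIVE_SLOTS[:len(ssds)])

# Example: the only supported SSD-with-NVMe layout
print(validate_layout({4: "SSD", 5: "SSD", 7: "SSD", 8: "SSD",
                       10: "NVME", 11: "NVME"}))  # True
```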



Figure 2: NX-3155G-G8 Back Panel

NX-3155G-G8 System Specifications

Table 2: System Characteristics

Boot Device: Dual M.2 RAID 1

• 2 x 512 GB M.2 boot device

CPU: Processor (one of the following)

• 2 x Intel Xeon® Silver 4316 [20 cores / 2.30 GHz]
• 2 x Intel Xeon® Gold 5315Y [8 cores / 3.20 GHz]
• 2 x Intel Xeon® Gold 5317 [12 cores / 3.00 GHz]
• 2 x Intel Xeon® Gold 5318Y [24 cores / 2.10 GHz]
• 2 x Intel Xeon® Gold 5320T [20 cores / 2.30 GHz]
• 2 x Intel Xeon® Gold 6326 [16 cores / 2.90 GHz]
• 2 x Intel Xeon® Gold 6334 [8 cores / 3.60 GHz]
• 2 x Intel Xeon® Gold 6342 [24 cores / 2.80 GHz]
• 2 x Intel Xeon® Gold 6348 [28 cores / 2.60 GHz]
• 2 x Intel Xeon® Gold 6354 [18 cores / 3.00 GHz]
• 2 x Intel Xeon® Platinum 8358 [32 cores / 2.60 GHz]
• 2 x Intel Xeon® Platinum 8360Y [36 cores / 2.40 GHz]



GPU
Note:

• When configuring 1 NIC, up to 5x T4 GPUs or up to 2x A-series GPUs are supported.
• When configuring 2 NICs, 2x, 3x, or 4x T4 GPUs or up to 2x A-series GPUs are supported.
• When configuring 3 NICs, at most 1x T4 GPU or 1x A-series GPU is supported.

GPU options:

• 0, 1, or 2 x NVIDIA A100 40GB
• 0, 1, or 2 x NVIDIA A16 64GB
• 0, 1, or 2 x NVIDIA A40 48GB
• 0, 1, 2, 3, 4, or 5 x NVIDIA A2 16GB
• 0, 1, 2, 3, 4, or 5 x NVIDIA T4 16GB



Memory: one of the following DIMM sets

32 GB RDIMM

• 4 x 32 GB = 128 GB
• 8 x 32 GB = 256 GB
• 12 x 32 GB = 384 GB
• 16 x 32 GB = 512 GB
• 24 x 32 GB = 768 GB
• 32 x 32 GB = 1.0 TB

64 GB DIMM

• 4 x 64 GB = 256 GB
• 8 x 64 GB = 512 GB
• 12 x 64 GB = 768 GB
• 16 x 64 GB = 1.0 TB
• 24 x 64 GB = 1.5 TB
• 32 x 64 GB = 2.0 TB

128 GB RDIMM

• 4 x 128 GB = 512 GB
• 8 x 128 GB = 1.0 TB
• 12 x 128 GB = 1.5 TB
• 16 x 128 GB = 2.0 TB
• 24 x 128 GB = 3.0 TB
• 32 x 128 GB = 4.0 TB



Network
Note:

• When configuring 1 NIC, up to 5x T4 GPUs or up to 2x A-series GPUs are supported.
• When configuring 2 NICs, 2x, 3x, or 4x T4 GPUs or up to 2x A-series GPUs are supported.
• When configuring 3 NICs, at most 1x T4 GPU or 1x A-series GPU is supported.

Serverboard:

• 1 x 1 GbE dedicated IPMI
• 2 x 10 GbE SFP+
• 2 x 10GBaseT (IPMI failover)

NICs in PCIe slots:

• 0, 1, 2, or 3 x 10GBaseT 2P NIC
• 0, 1, 2, or 3 x 10GBaseT 4P NIC
• 0, 1, 2, or 3 x 10GbE 4P NIC
• 0, 1, 2, or 3 x 25GbE 2P NIC

Network Cables:

• OPT,CBL,SFP28,1M,CU
• OPT, CBL, 1M, SFP+ TO SFP+
• OPT,CBL,SFP28,3M,CU
• OPT, CBL, 3M, SFP+ TO SFP+
• OPT,CBL,SFP28,5M,CU
• OPT, CBL, 5M, SFP+ TO SFP+

Power Cables:

• 2 x C13/14 4 ft power cable

Power Supply: 2000 W PSU

• 2 x 2000 W Titanium PSU

Server:

• NX-3155G-G8 server



Storage: NVMe + SSD

• 2 x NVMe: 1.92 TB, 3.84 TB, or 7.68 TB
• 2 or 4 x SSD: 1.92 TB, 3.84 TB, or 7.68 TB

Storage: All SSD

• 2, 4, or 6 x SSD: 1.92 TB, 3.84 TB, or 7.68 TB

Storage: All SSD SED

• 2, 4, or 6 x SSD: 1.92 TB or 3.84 TB

Storage: SSD + HDD

• 2 x SSD: 1.92 TB, 3.84 TB, or 7.68 TB
• 4 x HDD: 6.0 TB, 8.0 TB, 12.0 TB, or 18.0 TB

Storage: SSD + HDD SED

• 2 x SSD: 1.92 TB or 3.84 TB
• 4 x HDD: 6.0 TB, 8.0 TB, or 12.0 TB



TPM:

• 1 x unprovisioned Trusted Platform Module

Transceiver:

• SR SFP+ transceiver

Chassis fans: 4 x 80 mm heavy-duty fans with PWM fan speed control
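The GPU-per-NIC limits noted in the GPU and Network rows above reduce to a small lookup. A minimal sketch, assuming those notes are the only constraint (the function name and structure are illustrative, not a Nutanix sizing tool):

```python
# Hypothetical helper encoding the GPU-per-NIC limits from the notes in
# Table 2. Keys are the number of NICs configured; values map each GPU
# family to its maximum supported count. Other constraints (slots, power,
# thermals) are not modeled here.
MAX_GPUS_BY_NIC_COUNT = {
    1: {"T4": 5, "A-series": 2},
    2: {"T4": 4, "A-series": 2},
    3: {"T4": 1, "A-series": 1},
}

def gpu_count_ok(nics, gpu_family, gpus):
    limits = MAX_GPUS_BY_NIC_COUNT.get(nics)
    return limits is not None and gpus <= limits.get(gpu_family, 0)

print(gpu_count_ok(2, "T4", 4))        # True: 2 NICs allow up to 4x T4
print(gpu_count_ok(1, "A-series", 3))  # False: at most 2x A-series GPUs
```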

Table 3: Block, power and electrical

Block:

• Depth: 741.1 mm
• Rack units: 2U
• Width: 440.9 mm
• Height: 88.1 mm
• Weight: 24.72 kg
• Package weight: 36.29 kg

Shock:

• Non-operating: 10 ms
• Operating: 2.5 ms

Thermal dissipation:

• Maximum: 6114 BTU/hr
• Typical: 4280 BTU/hr

Vibration (random):

• Non-operating: 0.98 Grms
• Operating: 0.4 Grms

Power consumption (maximum configuration with 2 x A40 and 2 x AOC):

• Maximum: 1793 VA
• Typical: 1255 VA

Operating environment:

• Operating temperature: 10°C to 30°C
• Non-operating temperature: -40°C to 70°C
• Operating relative humidity: 20% to 90%
• Non-operating relative humidity: 5% to 95%

Certifications:

• Energy Star
• CSAus
• FCC
• CSA
• UL
• cUL
• ICES
• CE
• UKCA
• KCC
• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• S-MARK
• UKRSEPRO
• BIS
• SII
• RoHS
• REACH
• WEEE
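As a rough cross-check of the figures in Table 3, thermal dissipation in BTU/hr is the electrical draw re-expressed: 1 W is about 3.412 BTU/hr. Treating the VA figures as approximately watts (which assumes a power factor near 1), the numbers line up:

```python
# Cross-checking Table 3: convert thermal dissipation (BTU/hr) to watts.
# Assumes a power factor near 1 so VA approximates W; illustrative only.
BTU_HR_PER_WATT = 3.412

print(6114 / BTU_HR_PER_WATT)  # ~1792 W, in line with the 1793 VA maximum
print(4280 / BTU_HR_PER_WATT)  # ~1254 W, in line with the 1255 VA typical
```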

NX-3155G-G8 GPU Specifications

The NX-3155G-G8 supports NVIDIA A2 and T4 GPU cards. Nutanix does not support mixing
different types of GPU cards in the same platform.

Table 4: Minimum Firmware and Software Versions When Using a GPU Card

Firmware or software    NVIDIA Tesla A2    NVIDIA Tesla T4

BIOS                    WU10.104           WU10.104
BMC                     8.1.7              8.0.3
Hypervisor              ESXi 7.0 U3        AHV 7.2; ESXi 6.7 U3b; ESXi 7.0 U2a
Foundation              5.3.4              5.0.4
AOS                     6.5.2              5.20.1.1
NCC                     4.6.3              4.2.0
NVIDIA vGPU driver      GRID 15.1          GRID 12


2
COMPONENT SPECIFICATIONS
Controls and LEDs for Single-node Platforms

Figure 3: Controls and LEDs for NX-3155G-G8, NX-8150-G8, and NX-8155-G8

Table 5: Front Panel Controls and Indicators

LED or button: Function

• Power button – System on/off. Press and hold for 4 seconds to power off.
• Power LED – Solid green when the block is powered on.
• Unit identifier (UID) button – Press to illuminate the i LED (blue).
• Drive LED – Flashing orange on activity.
• 10 GbE LAN 1 LED (port 1) – Activity = green flashing.
• 10 GbE LAN 2 LED (port 2) – Activity = green flashing.
• Multiple-function i LED:
  • Unit identification – solid blue
  • Overheat condition – solid red
  • Power supply failure – flashing red at 0.25 Hz
  • Fan failure – flashing red at 1 Hz



Table 6: Drive LEDs

• Top LED (activity) – Blue or green; blinking = I/O activity, off = idle.
• Bottom LED (status) – Solid red = failed drive; on for five seconds after boot = power on.

Table 7: Back Panel Controls and Indicators

LED: Function

• PSU1 and PSU2 LED – Green = OK (PSU normal); yellow = no node power or PSU not completely inserted; red = fail.
• 10 GbE LAN 1 LED (port 1) – Activity = green flashing; link = solid green.
• 10 GbE LAN 2 LED (port 2) – Activity = green flashing; link = solid green.
• 1 GbE dedicated IPMI – Left LED: green = 100 Mbps, amber = 1 Gbps. Right LED: yellow flashing = activity.
• i unit identification LED – Solid blue.
• 2 x 10 GbE and 4 x 10 GbE NIC LEDs – Link and activity: green flashing = 10 Gbps; amber = 1 Gbps.

Table 8: Power Supply LED Indicators

Power supply condition: LED status

• No AC power to all power supplies – Off
• Power supply critical event that causes a shutdown (failure, over-current protection, over-voltage protection, fan fail, over-temperature protection, under-voltage protection) – Steady amber
• Power supply warning event; the power supply continues to operate (high temperature, over voltage, under voltage, and other conditions) – Blinking amber (1 Hz)
• AC present, 12VSB on (PS off), or PS in a sleep state – Blinking green (1 Hz)
• Output on and OK – Steady green
• AC cord unplugged – Steady amber
• Power supply firmware updating – Blinking green (2 Hz)

For the LED states of add-on NICs, see LED Meanings for Network Cards below.

LED Meanings for Network Cards

Descriptions of LEDs for supported NICs.

Different NIC manufacturers use different LED colors and blink states. Not all NICs are
supported on every Nutanix platform. See the system specifications for your platform to verify
which NICs are supported.

Table 9: On-Board Ports

NIC: Link (LNK) LED / Activity (ACT) LED

• 1 GbE dedicated IPMI – LNK: green = 100 Mbps; yellow = 1 Gbps. ACT: blinking yellow = activity.
• 1 GbE shared IPMI – LNK: green = 1 Gbps; yellow = 100 Mbps. ACT: blinking yellow = activity.
• i unit identification LED – Blinking blue: the UID has been activated.

Table 10: SuperMicro NICs

• Dual-port 1 GbE – LNK: green = 100 Mbps; yellow = 1 Gb/s; off = 10 Mb/s or no connection. ACT: blinking yellow = activity.
• Dual-port 10G SFP+ – LNK: green = 10 Gb; yellow = 1 Gb. ACT: blinking green = activity.

Table 11: Silicom NICs

• Dual-port 10G SFP+ – LNK: green = all speeds. ACT: solid green = idle; blinking green = activity.
• Quad-port 10G SFP+ – LNK: blue = 10 Gb; yellow = 1 Gb. ACT: solid green = idle; blinking green = activity.
• Dual-port 10G BaseT – LNK: green = 10 Gb/s; yellow = 1 Gb/s. ACT: blinking green = activity.



Table 12: Mellanox NICs

• Dual-port 10G SFP+ ConnectX-3 Pro – LNK: green = 10 Gb speed with no traffic; blinking yellow = 10 Gb speed with traffic; not illuminated = no connection. ACT: blinking yellow and green = activity.
• Dual-port 40G SFP+ ConnectX-3 Pro – LNK: solid green = good link; not illuminated = no activity. ACT: blinking yellow = activity.
• Dual-port 10G SFP28 ConnectX-4 Lx – LNK: solid yellow = good link; blinking yellow = physical problem with link. ACT: solid green = valid link with no traffic; blinking green = valid link with active traffic.
• Dual-port 25G SFP28 ConnectX-4 Lx – LNK: solid yellow = good link; blinking yellow = physical problem with link. ACT: solid green = valid link with no traffic; blinking green = valid link with active traffic.

Power Supply Unit (PSU) Redundancy and Node Configuration

Note: Nutanix recommends that you carefully plan your AC power source needs, especially when
a cluster consists of mixed models.

Nutanix recommends a 180 V to 240 V AC power source to ensure PSU redundancy. However, as
the following table shows, some NX platforms can maintain PSU redundancy on a 100 V to 210 V
AC power source, depending on the number of nodes in the chassis.

Table 13: PSU Redundancy and Node Configuration

Nutanix model    Number of nodes    Redundancy at 110 V    Redundancy at 208-240 V

NX-1065-G8       1                  YES                    YES
NX-1065-G8       2, 3, or 4         NO                     YES
NX-1175S-G8      1                  YES                    YES
NX-3060-G8       1                  YES                    YES
NX-3060-G8       2, 3, or 4         NO                     YES
NX-3155G-G8      1                  NO                     YES
NX-3170-G8       1                  NO                     YES
NX-8035-G8       1                  YES                    YES
NX-8035-G8       2                  NO                     YES
NX-8150-G8       1                  NO                     YES
NX-8155-G8       1                  See note below         YES
NX-8170-G8       1                  NO                     YES

Note (NX-8155-G8 at 110 V): For CPUs with a thermal design power (TDP) of 130 W or less, the
PSUs are redundant at 110 V over the entire supported ambient temperature range of 10°C to
35°C. For all other CPUs, the PSUs are redundant when the ambient temperature is 25°C or
less, and not redundant when the ambient temperature is greater than 25°C.
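For scripted capacity planning, Table 13 reduces to a lookup plus one special case. A minimal sketch (a hypothetical helper, not an official Nutanix sizing tool); all listed models are redundant at 208-240 V, so only the 110 V column is encoded:

```python
# Hypothetical lookup encoding Table 13:
# (model, node_count) -> redundant at 110 V?
REDUNDANT_AT_110V = {
    ("NX-1065-G8", 1): True,
    ("NX-1065-G8", 2): False, ("NX-1065-G8", 3): False, ("NX-1065-G8", 4): False,
    ("NX-1175S-G8", 1): True,
    ("NX-3060-G8", 1): True,
    ("NX-3060-G8", 2): False, ("NX-3060-G8", 3): False, ("NX-3060-G8", 4): False,
    ("NX-3155G-G8", 1): False,
    ("NX-3170-G8", 1): False,
    ("NX-8035-G8", 1): True, ("NX-8035-G8", 2): False,
    ("NX-8150-G8", 1): False,
    ("NX-8170-G8", 1): False,
}

def nx8155_redundant_at_110v(cpu_tdp_watts, ambient_c):
    """NX-8155-G8 special case: depends on CPU TDP and ambient temperature."""
    if cpu_tdp_watts <= 130:
        return 10 <= ambient_c <= 35  # redundant across the supported range
    return ambient_c <= 25            # higher-TDP CPUs need cooler ambient air

print(REDUNDANT_AT_110V[("NX-3155G-G8", 1)])  # False
print(nx8155_redundant_at_110v(165, 24))      # True
```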

Nutanix DMI Information

Format for Nutanix DMI strings.

VMware reads model information from the Desktop Management Interface (DMI) table.

For platforms with Intel Ice Lake CPUs, Nutanix provides model information to the DMI table in
the following format (motherboard_id and NIC_id are concatenated without a separator):

NX-<motherboard_id><NIC_id>-<HBA_id>-G8

motherboard_id has the following options:

• T – X12 multi-node motherboard
• U – X12 single-node motherboard
• W – X12 single-socket single-node motherboard

NIC_id has the following options:

• D1 – dual-port 1G NIC
• Q1 – quad-port 1G NIC
• DT – dual-port 10GBaseT NIC
• QT – quad-port 10GBaseT NIC
• DS – dual-port SFP+ NIC
• QS – quad-port SFP+ NIC

HBA_id specifies the number of nodes and the type of HBA controller. For example:

• 1NL3 – single-node LSI3808
• 2NL3 – 2-node LSI3808
• 4NL3 – 4-node LSI3808
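The DMI string is regular enough to parse mechanically. A minimal sketch, assuming the string always follows the G8 format above (a hypothetical parser, not a Nutanix or VMware API):

```python
import re

# Hypothetical parser for G8 DMI strings of the form
# NX-<motherboard_id><NIC_id>-<HBA_id>-G8, e.g. "NX-UDT-1NL3-G8".
MOTHERBOARDS = {"T": "X12 multi-node", "U": "X12 single-node",
                "W": "X12 single-socket single-node"}
NICS = {"D1": "dual-port 1G", "Q1": "quad-port 1G",
        "DT": "dual-port 10GBaseT", "QT": "quad-port 10GBaseT",
        "DS": "dual-port SFP+", "QS": "quad-port SFP+"}

def parse_dmi(dmi):
    m = re.fullmatch(r"NX-([TUW])(D1|Q1|DT|QT|DS|QS)-(\d)N(\w+)-G8", dmi)
    if not m:
        raise ValueError(f"not a recognized G8 DMI string: {dmi}")
    board, nic, nodes, hba = m.groups()
    return {"motherboard": MOTHERBOARDS[board], "nic": NICS[nic],
            "nodes": int(nodes), "hba": hba}  # e.g. hba "L3" = LSI3808

print(parse_dmi("NX-UDT-1NL3-G8"))
# {'motherboard': 'X12 single-node', 'nic': 'dual-port 10GBaseT',
#  'nodes': 1, 'hba': 'L3'}
```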

Table 14: Examples

• NX-TDT-4NL3-G8 – X12 motherboard with dual-port 10GBase-T NIC and 4 nodes with LSI3808 HBA controllers. Nutanix models: NX-1065-G8, NX-3060-G8.
• NX-TDT-2NL3-G8 – X12 motherboard with dual-port 10GBase-T NIC and 2 nodes with LSI3808 HBA controllers. Nutanix model: NX-8035-G8.
• NX-UDT-1NL3-G8 – X12 motherboard with dual-port 10GBase-T NIC and 1 node with an LSI3808 HBA controller. Nutanix models: NX-3155G-G8, NX-3170-G8, NX-8150-G8, NX-8155-G8, NX-8170-G8.
• NX-WDT-1NL3-G8 – X12 single-socket motherboard with dual-port 10GBase-T NIC and 1 node with an LSI3808 HBA controller. Nutanix model: NX-1175S-G8.
3
MEMORY CONFIGURATIONS
Supported Memory Configurations
DIMM installation information for all Nutanix G8 platforms.

DIMM Restrictions
DIMM type
Each G8 node must contain only DIMMs of the same type. So, for example, you cannot
mix RDIMM and LRDIMM in the same node.
DIMM capacity
Each G8 node must contain only DIMMs of the same capacity. So, for example, you
cannot mix 32 GB DIMMs and 64 GB DIMMs in the same node.
DIMM speed
G8 nodes ship with 3200 MHz DIMMs. 3200 MHz is the highest speed Nutanix currently
supports, so you cannot currently mix DIMM speeds in any G8 node.
DIMM manufacturer
You can mix DIMMs from different manufacturers in the same node, but not in the same
channel.
Multi-node platforms
Multi-node G8 platforms contain only one active DIMM slot per channel, so mixing
DIMMs in the same channel is not possible.
Single-socket platforms
The single-socket NX-1175S-G8 platform contains only one DIMM slot per channel,
so mixing DIMMs in the same channel is not possible.
Single-node platforms

• Single-node G8 platforms contain two DIMM slots per channel. Within a channel,
all DIMMs must be from the same manufacturer.
• When replacing a failed DIMM, if there are two DIMMs in the channel, either
replace the failed DIMM with a new DIMM from the same manufacturer, or
replace both DIMMs in the channel and make sure that both new DIMMs are from
the same manufacturer.
• When adding new DIMMs to a node, if the new DIMMs and the original DIMMs
have different manufacturer part numbers, arrange the DIMMs so that the
original DIMMs and the new DIMMs are not mixed in the same channel.

• EXAMPLE: You have an NX-8155-G8 node with sixteen 32 GB DIMMs for a
total of 512 GB. You decide to upgrade to thirty-two 32 GB DIMMs for a
total of 1024 GB. When you remove the node from the chassis and look at
the server board, you see that each CPU has eight DIMMs. Remove all DIMMs
from one CPU and place them in the empty DIMM slots of the other CPU.
Then place all the new DIMMs in the DIMM slots of the first CPU, filling all
slots. This way the original DIMMs and the new DIMMs do not share channels.

Note: You do not need to balance the number of DIMMs from different manufacturers
within a node, as long as you never mix them in the same channel.
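These restrictions can be expressed as a check over a proposed layout. A minimal sketch, assuming each DIMM is described by its type, capacity, and manufacturer, and that DIMMs are grouped by channel (the data model is hypothetical, not a Nutanix tool):

```python
from collections import namedtuple

# Hypothetical DIMM descriptor mirroring the restrictions above.
Dimm = namedtuple("Dimm", ["dimm_type", "capacity_gb", "manufacturer"])

def valid_node_layout(channels):
    """channels: list of per-channel lists of Dimm."""
    dimms = [d for channel in channels for d in channel]
    if len({d.dimm_type for d in dimms}) > 1:
        return False  # cannot mix RDIMM and LRDIMM in one node
    if len({d.capacity_gb for d in dimms}) > 1:
        return False  # cannot mix capacities in one node
    # Manufacturers may differ across channels, but not within a channel.
    return all(len({d.manufacturer for d in channel}) <= 1
               for channel in channels)

# Mixed manufacturers in separate channels: allowed.
ok = [[Dimm("RDIMM", 32, "vendor_a")], [Dimm("RDIMM", 32, "vendor_b")]]
# Mixed manufacturers inside one channel: not allowed.
bad = [[Dimm("RDIMM", 32, "vendor_a"), Dimm("RDIMM", 32, "vendor_b")]]
print(valid_node_layout(ok), valid_node_layout(bad))  # True False
```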

Memory Installation Order for Multi-Node G8 Platforms


A memory channel is a group of DIMM slots.
For G8 multi-node platforms, each CPU is associated with eight active memory channels that
contain one blue slot each, plus two inactive black slots.

Note: The black slots (C2 and G2 on each CPU) are inactive.

Figure 4: DIMM Slots for a G8 Multinode Serverboard

Table 15: DIMM Installation Order for Multi-node G8 Platforms

Number of DIMMs: Slots to use / Supported capacities (NX-1065-G8) / Supported capacities (NX-3060-G8 and NX-8035-G8)

• 4 – CPU1: A1, E1; CPU2: A1, E1. NX-1065-G8: 32 GB, 64 GB. NX-3060-G8 and NX-8035-G8: 32 GB, 64 GB, 128 GB.
• 8 – CPU1: A1, C1, E1, G1; CPU2: A1, C1, E1, G1. NX-1065-G8: 32 GB, 64 GB. NX-3060-G8 and NX-8035-G8: 32 GB, 64 GB, 128 GB.
• 12 – CPU1: A1, B1, C1, E1, F1, G1; CPU2: A1, B1, C1, E1, F1, G1. NX-1065-G8: 32 GB, 64 GB. NX-3060-G8 and NX-8035-G8: 32 GB, 64 GB, 128 GB.
• 16 – Fill all blue slots. NX-1065-G8: 32 GB, 64 GB. NX-3060-G8 and NX-8035-G8: 32 GB, 64 GB.



Memory Installation Order for Single-node G8 Platforms
A memory channel is a group of DIMM slots.
For G8 single-node platforms, each CPU is associated with eight memory channels that contain
one blue slot and one black slot each, for a total of 32 DIMM slots.

Figure 5: DIMM Slots for a G8 Single-Node Serverboard

Table 16: DIMM Installation Order for Single-Node G8 Platforms

Supported capacities for every row: NX-3170-G8 and NX-8170-G8 support 32 GB and 64 GB
DIMMs; NX-3155G-G8, NX-8155-G8, and NX-8150-G8 support 32 GB, 64 GB, and 128 GB DIMMs.

• 4 DIMMs – CPU1: A1, E1 (blue slots); CPU2: A1, E1 (blue slots)
• 8 DIMMs – CPU1: A1, C1, E1, G1 (blue slots); CPU2: A1, C1, E1, G1 (blue slots)
• 12 DIMMs – CPU1: A1, B1, C1, E1, F1, G1 (blue slots); CPU2: A1, B1, C1, E1, F1, G1 (blue slots)
• 16 DIMMs – CPU1: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots); CPU2: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots)
• 24 DIMMs – CPU1: A1, B1, C1, E1, F1, G1 (blue slots) and A2, B2, C2, E2, F2, G2 (black slots); CPU2: the same blue and black slots
• 32 DIMMs – Fill all slots.
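For scripting a population plan, Table 16 maps directly to a lookup. A minimal sketch for single-node G8 platforms, using the slot names from the table (the helper is hypothetical; both CPUs are populated identically):

```python
# Hypothetical per-CPU slot plan for single-node G8 platforms, transcribed
# from Table 16. Keys are total DIMMs per node; both CPUs use the same slots.
BLUE = ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1"]
BLACK = ["A2", "B2", "C2", "D2", "E2", "F2", "G2", "H2"]

SLOTS_PER_CPU = {
    4:  ["A1", "E1"],
    8:  ["A1", "C1", "E1", "G1"],
    12: ["A1", "B1", "C1", "E1", "F1", "G1"],
    16: BLUE,
    24: ["A1", "B1", "C1", "E1", "F1", "G1",
         "A2", "B2", "C2", "E2", "F2", "G2"],
    32: BLUE + BLACK,  # fill all slots
}

def slots_for(total_dimms):
    if total_dimms not in SLOTS_PER_CPU:
        raise ValueError(f"unsupported DIMM count: {total_dimms}")
    return SLOTS_PER_CPU[total_dimms]

print(slots_for(12))  # ['A1', 'B1', 'C1', 'E1', 'F1', 'G1'] on each CPU
```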

Memory Installation Order for the Single-socket NX-1175S-G8 Platform


A memory channel is a group of DIMM slots.
On the NX-1175S-G8, the single CPU is associated with eight memory channels that contain one
slot each.

Figure 6: DIMM slots for an NX-1175S-G8 server board



Table 17: DIMM Installation Order for NX-1175S-G8

Number of DIMMs: Slots to use / Supported DIMM capacities

• 2 – A1, E1. 32 GB, 64 GB, 128 GB.
• 4 – A1, C1, E1, G1. 32 GB, 64 GB, 128 GB.
• 8 – Fill all slots. 32 GB, 64 GB, 128 GB.



4
NUTANIX HARDWARE NAMING
CONVENTION
Every Nutanix block has a unique name based on the standard Nutanix naming convention.
The Nutanix hardware model name uses the following format.
Prefix-body-suffix
The prefix is NX for all Nutanix platforms.

Table 18: Prefix

NX: Indicates that the platform is sold directly by Nutanix and that support calls are handled
by Nutanix. NX stands for Nutanix.

Table 19: Body

W: Indicates the product series, and takes one of the following values.

• 1 – small or Remote Office/Branch Office (ROBO) businesses
• 3 – heavy compute
• 8 – high performance

X: Indicates the number of nodes, and takes one of the following values.

• 1 – single-node platforms
• 2 – multinode platforms
• 3 – multinode platforms
• 4 – multinode platforms

Note: Though multinode platforms can have two, three, or four nodes, the documentation
always uses a generic zero (0).

Y: Indicates the chassis form factor, and takes one of the following values.

• 3 – 2U2N (two rack units high, two nodes)
• 5 – 2U1N (two rack units high, one node)
• 6 – 2U4N (two rack units high, four nodes)
• 7 – 1U1N (one rack unit high, one node)

Z: Indicates the drive form factor, and takes one of the following values.

• 0 – 2.5-inch drives
• 5 – 3.5-inch drives

G or S: Indicates one of the following.

• G at the end of the body stands for "graphics" and means that the platform is
optimized for using Graphics Processing Unit (GPU) cards.
• S at the end of the body stands for "single socket" and means that the motherboard
has only one CPU instead of the usual two.

Table 20: Suffix

• G4 – The platform uses the Intel Haswell CPU.
• G5 – The platform uses the Intel Broadwell CPU.
• G6 – The platform uses the Intel Skylake CPU.
• G7 – The platform uses the Intel Cascade Lake CPU.
• G8 – The platform uses the Intel Ice Lake CPU.
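Putting Tables 18 through 20 together, a model name such as NX-3155G-G8 can be decoded mechanically. A minimal sketch (a hypothetical decoder covering only the fields described in this chapter):

```python
import re

# Hypothetical decoder for Nutanix model names, per Tables 18-20.
SERIES = {"1": "small/ROBO", "3": "heavy compute", "8": "high performance"}
CHASSIS = {"3": "2U2N", "5": "2U1N", "6": "2U4N", "7": "1U1N"}
DRIVES = {"0": "2.5-inch drives", "5": "3.5-inch drives"}
GENERATION = {"G4": "Haswell", "G5": "Broadwell", "G6": "Skylake",
              "G7": "Cascade Lake", "G8": "Ice Lake"}

def decode_model(name):
    m = re.fullmatch(r"NX-(\d)(\d)(\d)(\d)([GS]?)-(G\d)", name)
    if not m:
        raise ValueError(f"unrecognized model name: {name}")
    w, x, y, z, flag, gen = m.groups()
    return {
        "series": SERIES.get(w, w),
        "nodes": "single-node" if x == "1" else "multinode (written as 0)",
        "chassis": CHASSIS.get(y, y),
        "drives": DRIVES.get(z, z),
        "variant": {"G": "GPU-optimized", "S": "single-socket", "": None}[flag],
        "cpu": GENERATION.get(gen, gen),
    }

print(decode_model("NX-3155G-G8"))
# heavy compute, single-node, 2U1N, 3.5-inch drives, GPU-optimized, Ice Lake
```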



Figure 7: Nutanix Hardware Naming Convention
COPYRIGHT
Copyright 2023 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.

Conventions

Convention: Description

• variable_value – The action depends on a value that is unique to your environment.
• ncli> command – The commands are executed in the Nutanix nCLI.
• user@host$ command – The commands are executed as a non-privileged user (such as nutanix) in the system shell.
• root@host# command – The commands are executed as the root user in the vSphere or Acropolis host shell.
• > command – The commands are executed in the Hyper-V host shell.
• output – The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface                Target                        Username       Password

Nutanix web console      Nutanix Controller VM         admin          Nutanix/4u
vSphere Web Client       ESXi host                     root           nutanix/4u
vSphere client           ESXi host                     root           nutanix/4u
SSH client or console    ESXi host                     root           nutanix/4u
SSH client or console    AHV host                      root           nutanix/4u
SSH client or console    Hyper-V host                  Administrator  nutanix/4u
SSH client               Nutanix Controller VM         nutanix        nutanix/4u
SSH client               Nutanix Controller VM         admin          Nutanix/4u
SSH client or console    Acropolis OpenStack           root           admin
                         Services VM (Nutanix OVM)

Version
Last modified: March 21, 2023 (2023-03-21T16:59:12+05:30)
