VxRail Network Guide
October 2023
Rev. H15300.31
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2019 – 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Revision history.......................................................................................................................................................................... 7
Chapter 1: Introduction................................................................................................................. 9
Prepare service connectivity for VxRail.......................................................................................................................42
Prepare for VMware vSphere+ subscription licensing.............................................................................................42
Prepare data center routing services........................................................................................................................... 43
Prepare for multirack VxRail cluster............................................................................................................................. 44
Prepare for VMware vSAN HCI mesh topology........................................................................................................ 46
Prepare external FC storage for dynamic clusters................................................................................................... 48
Prepare for VxRail custom uplink assignments.......................................................................................................... 49
Prepare data center network MTU............................................................................................................................... 51
Prepare for LAG of VxRail networks............................................................................................................................ 52
Identify switch ports to be configured for LAG...................................................................................................55
Plan LAG on switch port pairs....................................................................................................................................... 55
Prepare certificate authority server for VxRail..........................................................................................................56
Identify isolation IP addresses for VMware vSphere High Availability................................................................. 56
Chapter 10: Configure the Network for VxRail............................................................................. 84
Setting up the network switch for VxRail connectivity........................................................................................... 84
Configure multicast for VxRail internal management network.........................................................................84
Configure unicast for VxRail vSAN network.........................................................................................................84
Configure VLANs for the VxRail networks............................................................................................................85
Configure the inter-switch links...............................................................................................................................87
Configure switch port mode..................................................................................................................................... 87
Configure LAG..............................................................................................................................................................87
Limit spanning tree protocol..................................................................................................................................... 88
Enable flow control..................................................................................................................................................... 88
Set up the network switch ports for VxRail connectivity....................................................................................... 88
Set up the upstream network for VxRail connectivity.............................................................................................89
Configure your network to support RoCE.................................................................................................................. 90
Confirm your data center network............................................................................................................................... 90
Confirm your firewall settings................................................................................................................................... 91
Confirm your data center environment........................................................................................................................ 91
Custom option: Six ports................................................................................................................................................ 115
Custom option: Eight ports............................................................................................................................................ 117
Custom option: Eight ports connected to four Ethernet switches......................................................................118
Revision history
Date Revision Description
October 2023 H15300.31 ● Updated section on service connectivity.
● Added support for 10GbE NICs on P570 nodes.
● Added support for 10GbE for VMware vSAN ESA
August 2023 H15300.30 Added support for VxRail VE-660 and VP-760 models.
June 2023 H15300.29 Added content:
● Support for vSAN HCI mesh with vSAN Express Storage Architecture (ESA)
● Support for HCI mesh with VxRail stretched cluster
May 2023 H15300.28 New content for LAG is supported on VxRail-managed VMware VDS.
April 2023 H15300.27 Updates for hardware branding and removed SmartFabric content.
March 2023 H15300.26 ● Included content for release of VxRail 15G VD-Series hardware platform.
● Added sections to provide networking preparation guidance for connecting to Dell and
VMware external sites.
● Added chapter for networking requirements for vSAN witness.
● Removed references to Platform Services Controller.
● Updated SmartFabric content based on new VxRail rules.
January 2023 H15300.25 Support for DPUs and ESXio in VxRail.
December 2022 H15300.24 ● Support for vSAN Express Storage Architecture (ESA).
● Support for SmartFabric Services in integrated mode and decoupled mode.
November 2022 H15300.23 Support for VxRail-managed VMware vCenter Server on a 2-node cluster.
August 2022 H15300.22 Support for new features in 7.0.400.
March 2022 H15300.21 Support for new features in 7.0.350.
January 2022 H15300.20 Support for new 15G VxRail models.
December 2021 H15300.19 ● Update dynamic cluster content with link to Dell published guide.
● Update content for VxRail Manager network exclusions.
November 2021 H15300.18 Support for PowerFlex as external storage for dynamic cluster.
October 2021 H15300.17 Support for satellite nodes.
August 2021 H15300.16 Support for new features in 7.0.240.
June 2021 H15300.15 ● Updated Intel and AMD node connectivity options for 100 GbE.
● Expanded network topology option to include custom networks with six Ethernet ports per
node.
● Clarified that VxRail supplied internal DNS cannot support naming services outside of its
resident cluster.
● Private VLANs (PVLANs) are unsupported for VxRail networking.
April 2021 H15300.14 ● Added content on mixing of node ports in VxRail clusters.
● Option for manual node ingestion instead of IPV6 multicast.
● Added content for LACP policies.
● Updated stretched cluster node minimums.
February 2021 H15300.13 Support for new features in 7.0.131.
November 2020 H15300.12 Removed requirement for VxRail guest network during initial configuration.
October 2020 H15300.11 Support for new features in VxRail 7.0.100 and removed references to VMware Log Insight.
September 2020 H15300.10 Outlined best practices for link aggregation on non-VxRail ports.
August 2020 H15300.9 Updated requirement for NIC redundancy enablement.
July 2020 H15300.8 Support for new features in VxRail 7.0.010.
June 2020 H15300.7 Updated networking requirements for multirack VxRail clusters.
May 2020 H15300.6 Updated switch requirements for VxRail IPv6 multicast.
April 2020 H15300.5 Support for:
● VxRail SmartFabric multirack switch network.
● Optional 100 GbE Ethernet and FC network ports on VxRail nodes.
March 2020 H15300.4 Support for new functionality in VMware vSphere 7.0.
February 2020 H15300.3 Support for new features in VxRail 4.7.410.
August 2019 H15300.2 Support for VxRail 4.7.300 with Layer 3 VxRail networks.
June 2019 H15300.1 Support for VxRail 4.7.210 and updates to 25 GbE networking.
April 2019 H15300 ● First inclusion of this version history table.
● Support of VMware Cloud Foundation on VxRail.
Chapter 1: Introduction
This guide provides the network details for VxRail deployment planning, including best practice recommendations and
requirements for both physical and virtual network environments.
VxRail is an HCI solution that consolidates compute, storage, and network into a single and unified system. With careful
planning, you can deploy VxRail into an existing data center environment to immediately deploy applications and services.
VxRail is based on a collection of nodes and switches that are integrated as a cluster, under a single point of management.
All physical compute, network, and storage resources in the VxRail are managed as a single shared pool. They are allocated to
applications and services that are based on defined business and operational requirements.
VxRail scale-out architecture leverages VMware vSphere and VMware vSAN to provide server virtualization and software-
defined storage, with simplified deployment, upgrades, and maintenance through VxRail Manager.
Network connectivity is fundamental to the VxRail clustered architecture. Through logical and physical networks, individual
nodes act as a single system providing scalability, resiliency, and workload balance.
The VxRail software bundle is preloaded onto the compute nodes and consists of the following components:
● VxRail Manager
● VMware vCenter Server
● VMware vSAN
● VMware vSphere
Audience
This document has been prepared for anyone who is involved in planning, installing, and maintaining VxRail, including Dell field
engineers, and customer system and network administrators. Do not use this guide to perform the installation and set-up of
VxRail. Work with your Dell service representative to perform the installation.
Licenses
VxRail includes the following Dell RecoverPoint for Virtual Machines licenses that you can download, install, and configure:
● Five full VM licenses per single node
● Fifteen full VM licenses for the G Series chassis
Temporary evaluation licenses are provided for VMware vSphere and VMware vSAN. The VMware vSphere licenses can be
purchased from Dell, VMware, or your preferred VMware reseller partner.
The Deployment Guide contains additional information about licenses.
Chapter 2: Plan the VxRail network
Take the time to plan out your data center network for VxRail. The network considerations for VxRail are the same as those of
any enterprise IT infrastructure: availability, performance, and extensibility.
VxRail is manufactured in the factory per your purchase order, and delivered to your data center ready for deployment. The
VxRail nodes can connect to any compatible network infrastructure to enable operations. Follow all the guidance and decision
points that are described in this document. If there are separate teams for network and servers in your data center, work
together to design the network and configure the switches.
The following planning recommendations ensure a proper deployment and functioning of your VxRail:
● Select the VxRail hardware and physical network infrastructure that best aligns with your business and operational
objectives.
● Plan and prepare for VxRail implementation in your data center before product delivery.
● Set up the network switch infrastructure in your data center for VxRail before product delivery.
● Prepare for physical installation and VxRail initialization into the final product.
The VxRail nodes connect to one or more network switches to form a VxRail cluster. VxRail communicates with the physical
data center network through one or more VMware VDS that are deployed in the VxRail cluster. The VMware VDS integrates with
the physical network infrastructure to provide connectivity for the virtual infrastructure, and enable virtual network traffic to
pass through the physical switch infrastructure. In this relationship, the physical switch infrastructure serves as a backplane,
supporting network traffic between virtual machines in the cluster, and enabling virtual machine mobility and resiliency.
In addition, the physical network infrastructure enables I/O operations between the storage objects in the VxRail VMware vSAN
data store. It also provides connectivity to applications and end-users outside of the VxRail cluster.
The following are the physical components and selection criteria for VxRail clusters:
● VxRail clusters and nodes
● Network switch
● Data Center Network
● Topology and connections
● Workstation or laptop
● Out of band management (optional)
VxRail models
The naming standards for VxRail models are structured to target specific objectives:
● E Series: Balanced compute and storage.
● V Series: Virtual desktop enablement with support for GPUs.
● P Series: High performance.
● S Series: Dense storage.
With the 1U rack mount models, such as the VE-Series and E-Series models, PCIe slots can be used for network expansion.
Certain models support both low-profile PCIe slots and full-height PCIe slots. The number and type of slots on each node vary
depending on the hardware options that are configured, such as whether to include support for GPUs.
Figure 2. Back view of VxRail VP-, V-, P-, and S-Series nodes
The 2U models offer a higher number of PCIe slots in contrast to the 1U models. However, the 2U rackmount models can
be configured at the time of ordering to address a wider range of use cases, which can reduce the number of available PCIe
slots. For instance, network expansion options would be reduced in cases where those slots instead are populated with storage
devices or GPUs.
The VxRail VD-Series support sleds of 1U in size and 2U in size, and both sled options are supported in either a rackable chassis
or stackable chassis. Only the 2U sleds support network expansion outside of the onboard Ethernet ports through PCIe slots.
Network switch
A VxRail cluster depends on adjacent ToR Ethernet switches to support cluster operations.
VxRail is compatible with most Ethernet switches. For best results, select a switch platform that meets the operational and
performance criteria for your planned use cases.
The network that is visible only to the VxRail nodes depends on IPv6 multicast services configured on the adjacent ToR
switches for node discovery purposes. One node is automatically designated as the primary node. It acts as the source, and
listens for packets from the other nodes using multicast. A VLAN assignment on this network limits the multicast traffic only to
the interfaces connected to this internal management network.
A common Ethernet switch feature, Multicast Listener Discovery (MLD) snooping and querier, is designed to further constrain
the flooding of multicast traffic by examining MLD messages and then forwarding multicast traffic only to interested interfaces.
Because the traffic on this node discovery network is already constrained through the configuration of this VLAN on the ports
supporting the VxRail cluster, enabling this feature may provide some incremental efficiency benefit, but it does not negatively
impact network efficiency.
If your data center networking policy has restrictions for the IPv6 multicast protocol, IP addresses can be manually assigned to
the VxRail nodes as an alternative to automatic discovery.
Figure 6. Integrated connectivity options for 16th generation VxRail models with Intel CPUs
Figure 8. Integrated connectivity options for 15th generation VxRail models with Intel CPUs
Figure 10. Integrated connectivity options for 14th generation VxRail models with Intel CPUs
Figure 12. Built-in connectivity options for VxRail models with AMD CPUs
Table 1. Networking products that pass qualification and are supported for VxRail
Port speed Vendor
10 GbE ● Intel
● Broadcom
● QLogic
25 GbE ● Broadcom
● Intel
● Mellanox
● QLogic
100 GbE Mellanox
Figure 15. Mixing NDC/OCP and PCIe ports to support a VxRail cluster
If you want to change the topology by migrating the VxRail networks onto other uplinks on the VxRail nodes, you can perform
this activity after the cluster is built, as long as the VxRail cluster is at VxRail 7.0.010 or later.
VxRail cluster operations require several ports on each switch. To determine the base number of ports, multiply the number of
Ethernet ports on each VxRail node to support VxRail networking by the number of nodes to be configured into the cluster.
For a dual-switch configuration, reserve ports on each switch to form an interswitch link for network traffic passage. Reserve
additional ports to pass VxRail network traffic upstream and one port on a switch to enable a laptop to connect to VxRail to
perform initial build.
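As a planning aid, the port arithmetic described above can be captured in a short calculation. The following is a minimal sketch assuming a dual-switch topology; the reservation counts for uplinks, interswitch links, and the laptop port are illustrative placeholders, not mandated values.

```python
def required_switch_ports(nodes, ports_per_node, uplinks_per_switch=2,
                          interswitch_links=2, laptop_ports=1, switches=2):
    """Estimate how many switch ports a VxRail cluster consumes.

    nodes          -- number of VxRail nodes in the cluster
    ports_per_node -- Ethernet ports per node reserved for VxRail networking
    The remaining parameters are illustrative reservations per the text above.
    """
    node_ports = nodes * ports_per_node                              # base requirement
    overhead = switches * (uplinks_per_switch + interswitch_links) + laptop_ports
    return node_ports + overhead

# Example: four nodes, four reserved ports each, dual ToR switches.
print(required_switch_ports(nodes=4, ports_per_node=4))              # prints 25
```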
If the VxRail clusters are at a data center that you cannot easily access, set up an OOB management switch to facilitate direct
communication with each node.
To use OOB management, connect the iDRAC port to a separate switch to provide physical network separation. Default values,
capabilities, and recommendations for OOB management are provided with server hardware information. Reserve an IP address
for each iDRAC in your VxRail cluster (one per node).
The cluster initialization process performs an inventory of the disk drives on the nodes, and uses that discovery process to
identify the number of disk groups on each node to be used as the foundation for the vSAN data store. The high-endurance SSD
drives discovered on each node serve as a cache for virtual machine I/O operations in each disk group, while the high-capacity
drives discovered are the primary permanent storage resource for the virtual machines for each disk group. The vSAN build
process partners the discovered cache drives with one or more capacity drives to form disk groups, with the resulting vSAN
data store consisting of this collection of disk groups.
VxRail clusters with vSAN datastores based on the Original vSAN Architecture support solid-state and NVMe drives for both
cache and capacity, and solid-state and hard drives for capacity only. This architecture supports network speeds of 10 GbE, 25
GbE, and 100 GbE.
During the inventory process, VxRail queries the drive slots and verifies whether a datastore can be built with the Express vSAN
Architecture. The data store will only be built with this architecture if all the drive slots contain compatible NVMe drives. If you
select this vSAN architecture option, the network that is configured to support the data store must be running at either 10 GbE,
25 GbE, or 100 GbE.
Dynamic cluster
Dynamic clusters differentiate themselves from other VxRail cluster types with the resource that is selected for primary storage.
With other cluster types, there is a dependency on the local vSAN data store as the primary storage resource. With a dynamic
cluster, the nodes that are used to build the cluster do not have local disk drives. Therefore, an external storage resource is
required to support workload and applications.
A dynamic cluster may be preferable to other cluster types in these situations:
● You already have an investment in compatible external storage resources in your data centers that can serve as primary
storage for a dynamic cluster.
● The business and operational requirements for the applications that are targeted for the VxRail cluster can be better served
with existing storage resources.
● Stranded assets through node expansion are less likely with a dynamic cluster.
If a dynamic cluster is the best fit for your business and operational requirements, be aware of the following:
FC storage option
You can configure a compatible FC storage array to supply a single VMFS data store or multiple data stores to a VxRail
dynamic cluster.
Figure 20. Dynamic cluster using FC-connected VMFS for primary storage
Figure 22. VxRail dynamic cluster storage provided by PowerFlex virtual volume
PowerFlex configures pools of storage through a virtualization process, and manages the allocation of virtual volumes to
connected clients. Virtual volumes can be configured to meet certain capacity, performance, and scalability characteristics to
align with the workload requirements planned for the VxRail dynamic cluster.
The supported storage resources can be either block-based or file system-based. With the block-level storage option, the LUN is
presented to the VxRail cluster nodes over an IP network. VMware vSphere is used to configure a VMFS data store from the
LUN.
NVMe option
You can leverage NVMe (Non-Volatile Memory Express) to connect to block-based storage.
NVMe supports local storage devices over PCIe, as well as storage devices over an FC network or an IP network.
NVMe can serve as an alternative to FC or iSCSI storage for demanding workloads, since it is designed for usage with faster
storage devices that are enabled with non-volatile memory.
With all the storage options for dynamic clusters, verify that the storage resource in the data center you plan for VxRail dynamic
clusters is supported. See the VxRail E-Lab Navigator to verify compatibility.
See Dell VxRail vSAN Stretched Cluster Planning Guide for detailed information about vSAN stretched cluster and the
networking requirements.
2-node cluster
A 2-node cluster supports small-scale deployments with reduced workload and availability requirements, such as those in a
remote office setting.
The solution is limited to two VxRail nodes only, and like the stretched cluster solution, requires a witness.
To deploy 2-node VxRail clusters, note the following:
● The minimum VxRail software version for the 2-node cluster is 4.7.100.
● The deployment is limited to a pair of VxRail nodes.
● Verify that your workload requirements do not exceed the resource capacity of this small-scale solution.
● You cannot expand to three or more nodes unless the cluster is running version 7.0.130 or later.
● A single ToR switch is supported.
● Four Ethernet ports per node are required to deploy a 2-node cluster. Supported networking options include:
○ Four 10 GbE NICs.
○ Four 25 GbE NICs.
○ Two 10 GbE NICs and two 25 GbE NICs.
Figure 29. Two VMware vCenter Server placement options for a 2-node cluster
Like the VMware vSAN stretched cluster feature, this small-scale solution has strict networking guidelines that must be adhered
to for the solution to work. For more information about the planning and preparation for a deployment of a 2-node VxRail
cluster, see the Planning Guide—vSAN 2-Node Cluster on VxRail.
The PowerEdge models used as the hardware foundation for the other VxRail cluster types are the same for satellite nodes.
Satellite nodes go through the same engineering, qualification, and manufacturing processes as the VxRail nodes used in
clusters, and software lifecycle management of satellite nodes is supported through VxRail Manager.
The primary difference from a networking perspective between satellite nodes and the nodes supporting other cluster types is
that satellite nodes require only a single IP address. This IP address is used to enable connectivity to a VxRail cluster in a central
location and establish communication for management purposes.
To deploy VxRail satellite nodes, note the following:
● A compatible VxRail cluster with local vSAN storage must already be deployed to support the management and monitoring of
satellite nodes.
● The minimum VxRail software version to support satellite nodes is 7.0.320.
● VxRail satellite nodes are limited to a single instance and cannot be reconfigured to join a cluster.
● Verify that your workload requirements at the remote locations do not exceed the resource capacity of a satellite node.
The path starts with a structured discovery and planning process that focuses on business use cases and strategic goals.
These goals drive the selection of software layers that consist of the SDDC. Software layers are implemented in a methodical,
structured manner, where each phase involves incremental planning and preparation of the supporting network.
As a VxRail administrator using VMware vSphere management capabilities, you can create namespaces on the Supervisor
Cluster, and configure them with a specified amount of memory, CPU, and storage. Within the namespaces, you can run
containerized workloads on the same platform with shared resource pools.
Steps
1. Assess your requirements and perform a sizing exercise to determine the quantity and characteristics of the VxRail nodes
you require to meet planned workload and targeted use cases.
2. Determine the number of physical racks required to support the quantity and footprint of VxRail nodes to meet workload
requirements, including the ToR switches. Verify that the data center has sufficient floor space, power, and cooling.
3. Determine the network switch topology that aligns with your business and operational requirements. See the sample wiring
diagrams in Appendix F: Physical Network Switch for guidance on the options supported for VxRail cluster operations.
4. Based on the sizing exercise, determine the number of Ethernet ports on each VxRail node you want to reserve for VxRail
networking.
a. Two ports might be sufficient in cases where the resource consumption on the cluster is low and will not exceed available
bandwidth, or if the network infrastructure supports a high enough bandwidth that two ports are sufficient to support
planned future growth.
b. Workloads with a high resource requirement or with a high potential for growth benefit from a 4-port deployment.
Resource-intensive networks, such as the vSAN and VMware vSphere vMotion networks, benefit from the 4-port option
because two ports can be reserved just for those demanding networks.
c. The 4-port option is required to enable link aggregation of demanding networks for the purposes of load-balancing. In this
case, the two ports that are reserved exclusively for the resource-intensive networks (vSAN and possibly vMotion) are
configured into a logical channel to enable load-balancing.
d. More than four ports per node can be reserved for VxRail networking for cases where it is desirable for certain individual
VxRail networks to not share any bandwidth with other VxRail networks.
5. Determine the optimal VxRail adapter and Ethernet port types to meet planned workload and availability requirements.
a. VxRail supports 1 GbE, 10 GbE, 25 GbE, and 100 GbE connectivity options to build the initial cluster.
b. Starting with VxRail 7.0.130, you have the flexibility to reserve and use the following Ethernet adapter types:
● Only ports on the NDC or OCP for VxRail cluster networking.
● Both NDC or OCP-based and PCIe-based ports for VxRail cluster networking.
● Only PCIe-based ports for VxRail cluster networking.
c. If your performance and availability requirements might change later, you can reserve and use just NDC or OCP ports to
build the initial cluster, and then migrate certain VxRail networks to PCIe-based ports.
d. If your requirements include using FC storage to support the VxRail workload, you can select either 16 Gb or 32 Gb
connectivity to your FC network.
NOTE: The VxRail cluster must be at version 7.0.010 or later to migrate VxRail networks to PCIe-based ports.
6. Select the network adapter type and cable type to connect the VxRail nodes to your switches.
● VxRail nodes can connect to switches with either RJ45, SFP+, SFP28, or QSFP adapter types, depending on the type of
adapter cards selected for the nodes.
● VxRail nodes with RJ45 ports require CAT5 or CAT6 cables. CAT6 cables are included with every VxRail.
● VxRail nodes with SFP+ ports require optics modules (transceivers) and optical cables, or Twinax Direct-Attach Copper
(DAC) cables. These cables and optics are not included; you must supply your own. The NIC and switch connectors and
cables must be on the same wavelength.
● VxRail nodes with SFP28 ports require high-thermal optics for ports on the NDC or OCP. Optics that are rated for
standard thermal specifications can be used on the expansion PCIe network ports supporting SFP28 connectivity.
● The VMware-branded software that is deployed on VxRail requires a VMware license. The licenses can be perpetual or based
on a subscription licensing model. A perpetual license requires a key value that is retrieved from the VMware support site.
Table 4. Open ports to connect VMware vCenter Server to the VMware Cloud
Source Destination Port
Web browser VMware vCenter Server Cloud Gateway 5480
VMware vCenter Server VMware Cloud 443
VMware vCenter Server VMware vCenter Server Cloud Gateway 5480
VMware vCenter Server VMware vCenter Server Cloud Gateway 5484
VMware vCenter Server VMware vCenter Server Cloud Gateway 7444
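Before deployment, you can confirm that these paths are open with a simple TCP reachability probe run from the relevant source host. This is a minimal sketch; the hostnames are placeholders for your own environment, and a successful TCP connection only shows that the port is not blocked, not that the service is fully functional.

```python
import socket

# Placeholder endpoints; replace with the hostnames or IP addresses in your environment.
CHECKS = [
    ("vcgw.example.local", 5480),     # browser / vCenter Server -> Cloud Gateway
    ("connect.vmware.example", 443),  # vCenter Server -> VMware Cloud
    ("vcgw.example.local", 5484),
    ("vcgw.example.local", 7444),
]

for host, port in CHECKS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} blocked or unreachable: {err}")
```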
Figure 34. Layer 2/3 boundary at the leaf layer or spine layer
Establishing routing services at the spine layer means that the uplinks on the leaf layer are trunked ports, and pass through all
the required VLANs to the switches at the spine layer. This topology has the advantage of enabling the Layer 2 networks to
span across all the switches at the leaf layer. This topology can simplify VxRail clusters that extend beyond one rack, because
the Layer 2 networks at the leaf layer do not need Layer 3 services to span across multiple racks. A major drawback to this
topology is scalability. Ethernet standards enforce a limitation of addressable VLANs to 4094, which can be a constraint if the
application workload requires a high number of reserved VLANs, or if multiple VxRail clusters are planned.
Enabling routing services at the leaf layer overcomes this VLAN limitation. This option also helps optimize network routing
traffic, as it reduces the number of hops to reach routing services. However, this option does require Layer 3 services to be
licensed and configured at the leaf layer. In addition, since Layer 2 VxRail networks now terminate at the leaf layer, they cannot
span across leaf switches in multiple racks.
NOTE: If your network supports VTEP, you can extend Layer 2 networks between switches in physical racks over a Layer 3
overlay network to support a multirack VxRail cluster.
To enable vSAN HCI mesh, the data center must have a network topology that can enable connectivity of the vSAN networks
on the two participating VxRail clusters.
● Two Ethernet ports on each node in the cluster will be configured to support vSAN network traffic.
● A common VLAN can be assigned to this vSAN network on each cluster so they can connect over a Layer 2 network. This
method is preferable if the client cluster does not have a local vSAN datastore. If a common VLAN is assigned, and the
VxRail clusters are deployed against different sets of ToR switches, the VLAN must be configured to stretch between the
set of switches.
● If a unique VLAN is assigned to the vSAN network on each cluster, then connectivity can be enabled using Layer 3 routing
services. If this option is selected, be sure to assign routable IP addresses to the vSAN network on each participating VxRail
cluster.
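The Layer 2 versus Layer 3 decision can be reasoned about with a short sketch using Python's ipaddress module; the VLANs and subnets shown here are hypothetical examples only, not recommended values.

```python
import ipaddress

# Hypothetical vSAN network definitions for the two participating VxRail clusters.
cluster_a = {"vlan": 120, "subnet": ipaddress.ip_network("192.168.120.0/24")}
cluster_b = {"vlan": 121, "subnet": ipaddress.ip_network("192.168.121.0/24")}

if cluster_a["vlan"] == cluster_b["vlan"] and cluster_a["subnet"] == cluster_b["subnet"]:
    print("Common VLAN and subnet: connect the clusters over a stretched Layer 2 network.")
else:
    print("Different VLANs/subnets: assign routable vSAN IP addresses and enable "
          "Layer 3 routing between the two vSAN networks.")
```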
Figure 39. Enabling vSAN HCI mesh connectivity over a Layer 3 network
To share storage resources between VxRail clusters both with VMware vSAN datastores in a VMware vSAN HCI mesh, prepare
your data center to meet the following prerequisites:
● A VMware vCenter Server instance at a version that supports VxRail 7.0.100 or later.
● A VMware vSAN Enterprise license for each VxRail cluster using VMware vSAN HCI mesh topology.
If your plans include sharing the VMware vSAN resources of a VxRail cluster with one or more VxRail dynamic clusters, meet the
following prerequisites:
● A VMware vCenter Server instance at a version that supports VxRail 7.0.240 or later.
● A VMware vSAN Enterprise license is needed only for the VxRail cluster that is sharing its VMware vSAN storage resources.
This license is not needed on a dynamic cluster because it does not have a local VMware vSAN datastore.
If your plans include sharing the VMware vSAN resources from a VxRail stretched cluster, meet the following prerequisites:
● The VxRail cluster participating in the vSAN HCI mesh topology must be running VxRail 8.0.100 or later.
● The VMware vCenter Server instance that supports the VxRail clusters must be running at a version that supports VxRail
8.0.100.
● The VMware vSAN datastore being shared is based on the OSA.
Figure 40. VxRail stretched cluster sharing VMware vSAN storage with a client cluster
Follow these guidelines to prepare your environment before initial cluster build:
● See the VxRail 8.0 Support Matrix to verify if the FC array is compatible with dynamic clusters.
Figure 42. Default network profiles for a 2-port and a 4-port VxRail network
Figure 43. Custom uplink assignment across NDC/OCP-based and PCIe-based ports
If you expect the applications running on the VxRail cluster to be I/O-intensive and require high bandwidth, you can place
the VMware vMotion network on the same pair of ports reserved for the VxRail management networks, and isolate the
VMware vSAN network on a pair of Ethernet ports.
Customizing the uplink assignments can impact the ToR switch configuration.
● With a custom uplink assignment, there is more flexibility in a data center with a mixed network. You can assign the
resource-intense networks like VMware vSAN to higher-speed uplinks, and low-impact networks to slower uplinks, and then
connect those uplinks to switches with compatible port speeds.
● On VxRail nodes with both NDC or OCP ports and PCIe Ethernet adapter cards, you can migrate certain VxRail networks off
the NDC or OCP ports and onto PCIe ports after the cluster is built. This is advantageous if workload demand increases on
the cluster, and additional bandwidth is required. You can later install switches that better align with adapting requirements,
and migrate specific workloads to those switches.
Each VxRail network is assigned two uplinks by default during the initial implementation operation. Even if the virtual distributed
switch port group is assigned a teaming and failover policy to enable better distribution across the two uplinks, true load
balancing is not achievable without LAG. Enabling LAG allows the VxRail network to better use the bandwidth of both uplinks,
with the traffic flow coordinated based on the load-balancing hash algorithm that is configured on the virtual distributed switch
and the ToR switches.
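The sketch below illustrates how a hash-based policy spreads flows across the two uplinks in a LAG. The five-tuple fields and CRC32 hash are an illustration of the general mechanism only; the fields actually hashed depend on the load-balancing policy configured on the VMware VDS and on your switches.

```python
import zlib

UPLINKS = ["uplink1", "uplink2"]   # the two ports that form the LAG

def select_uplink(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Map a flow to one uplink using a deterministic hash of its header fields."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}-{proto}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# Different flows may land on different uplinks, but a given flow always uses the same one.
print(select_uplink("192.168.130.11", "192.168.130.12", 54321, 2233))
print(select_uplink("192.168.130.11", "192.168.130.13", 54322, 2233))
```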
The following functionality dependencies must be understood if considering LAG with VxRail:
● The switches that are targeted for the VxRail cluster must support LACP. LACP is the protocol that dynamically forms a
peering relationship between ports on two separate switches. Dynamic LAG requires an LACP policy to be configured on the
VMware VDS to enable this peering to be established with the adjacent ToR switches.
The following guidelines must be followed to enable LAG on a customer-managed VMware VDS:
● You must supply a compatible VMware vCenter Server instance to serve as the target for the VxRail cluster.
● You must supply a compatible and pre-configured virtual distributed switch to support the VxRail networking requirements.
● The LACP policy to support LAG and load-balancing must be pre-configured on the virtual distributed switch.
Follow these guidelines to enable LAG on a VxRail-managed VMware VDS:
● The default mode for switch ports configured as port channels is that the ports shut down if they do not form a pairing with
another pair of ports on another switch.
● VxRail requires those switch ports to remain open and active during the VxRail initial implementation process.
● The switches that are targeted for the VxRail cluster must support enabling the individual ports in the port channel to stay
open and active if they do not form a LAG partnership with other switch port pairs within the configured timeout setting.
(On Dell-branded switches running OS10, this is known as the LACP individual feature.)
On Cisco-branded switches, the feature is LACP suspend-individual. This feature should be disabled on the switch ports in an
EtherChannel to prevent the ports from shutting down. Check the documentation for switches from other vendors for the
proper feature description.
Figure 47. Enable connectivity for VxRail initial implementation with LAG
An individual VxRail node connects to the adjacent ToR with a standard virtual switch at power-on, and virtual switches
do not support LAG. The LACP policy that is configured by VxRail on the VxRail-managed VMware VDS during the initial
implementation process cannot exchange LACP PDUs at the power-on stage. This peering relationship does not occur until
VxRail begins the virtual network formation stage later in the initial implementation process. Using this feature enables a switch
port that is configured for LAG to be set to an active state in order to enable VxRail connectivity and allow initial implementation
to proceed.
If you are enabling LAG across a pair of switches, and you have matching open ports on both switches, the best practice is to
plug the cables into equivalent port numbers on both switches. Create a table to map each VxRail node port to a corresponding
switch port. Identify which ports on each VxRail node are to be enabled for LAG.
For example, if you want to deploy a VxRail cluster with four nodes, and reserve and use two ports on the NDC or OCP and two
ports on a PCIe adapter card for the VxRail cluster, and use the first eight ports on a pair of ToR switches for connecting the
cables, you could use the resulting table to identify the switch ports to be configured for LAG.
Assuming that you are using the second port on the NDC or OCP and PCIe adapter card for the non-management VxRail
networks, you can identify the switch port pairs, as shown in the columns shaded green, to be configured for LAG. Create a
table mapping the VxRail ports to the switch ports as part of the planning and design phase.
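A mapping like the one described above can also be generated programmatically during the design phase. This is an illustrative sketch only; the node names, port labels, switch names, and port numbering are placeholders for your own cabling plan, and the choice of which ports carry the LAG networks should follow your design.

```python
NODES = [f"node{i}" for i in range(1, 5)]              # four-node cluster
# Each node: two NDC/OCP ports and two PCIe ports. In this sketch, the second port
# of each adapter carries the non-management (LAG) networks, split across both switches.
PORT_PLAN = [
    ("NDC/OCP-1", "ToR-A", 0, False),
    ("NDC/OCP-2", "ToR-B", 0, True),
    ("PCIe-1",    "ToR-B", 1, False),
    ("PCIe-2",    "ToR-A", 1, True),
]

for n, node in enumerate(NODES):
    for port, switch, offset, lag in PORT_PLAN:
        switch_port = n * 2 + offset + 1               # uses the first eight ports on each switch
        print(f"{node:6} {port:10} {switch} port {switch_port:<2} LAG={lag}")
```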
The certificates in VxRail are configured with an expiration date. Certificate replacement can be performed manually, or
certificates can be auto-renewed starting with VxRail 7.0.350. When auto-renewal is enabled, VxRail Manager automatically
contacts the certificate authority for new certificates before the expiration date.
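As an operational aid, you can periodically confirm how much lifetime remains on the certificate presented by VxRail Manager. This is a minimal sketch assuming a hypothetical hostname; it simply inspects the certificate served on port 443 and is not a VxRail API call.

```python
import socket
import ssl
from datetime import datetime, timezone

HOST = "vxrail-manager.example.local"   # placeholder hostname

# Requires that the workstation trusts the issuing certificate authority.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                 tz=timezone.utc)
days_left = (expires - datetime.now(timezone.utc)).days
print(f"Certificate for {HOST} expires {expires:%Y-%m-%d} ({days_left} days left)")
```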
The isolation IP addresses can be configured as part of the automated VxRail initial build process, or configured after the cluster
is built. A minimum of two IP addresses reachable from the VxRail external management network should be selected for this
purpose.
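The chosen isolation addresses are ultimately applied as VMware vSphere HA advanced options in vCenter. The sketch below only checks that two candidate addresses respond to ping from the external management network and prints the standard advanced-option names; the addresses are placeholders, and the final configuration is performed in vCenter.

```python
import subprocess

# Placeholder isolation address candidates reachable from the external management network.
CANDIDATES = ["192.168.10.1", "192.168.10.254"]

for idx, addr in enumerate(CANDIDATES):
    # ping flags assume a Linux workstation
    ok = subprocess.run(["ping", "-c", "1", "-W", "2", addr],
                        capture_output=True).returncode == 0
    print(f"das.isolationaddress{idx} = {addr}  reachable={ok}")

print("das.usedefaultisolationaddress = false   # set as a vSphere HA advanced option")
```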
If your company or organization has stringent security policies regarding network separation, splitting the VxRail networks
between two VMware VDS enables better compliance with those policies, and simplifies redirecting the VxRail management
network traffic and non-management network traffic down separate physical network paths.
You can choose from the following options to align with your company or organization networking policies:
● Place all the required VxRail network traffic and guest network traffic on a single VMware VDS.
● Use two VMware VDS to segment the VxRail management network traffic from the VxRail non-management traffic and
guest VM network traffic.
● Deploy a separate VMware VDS to support guest virtual machine network traffic.
VxRail supports either a single VMware VDS or two VMware VDS as part of the initial implementation process. If your security
posture changes after the VxRail cluster initial implementation has completed, a second VMware VDS can still be deployed, and
the VxRail network traffic can be redirected to that second VMware VDS. Any additional VMware VDS beyond two switches,
such as those for user requirements outside of VxRail networking can be deployed after initial implementation.
VxRail has predefined logical networks to manage and control traffic within and outside of the cluster. Make VxRail logical
networks accessible to the outside community. For instance, connectivity to the VxRail management system is required by IT
management. VxRail networks must be configured for end-users and application owners who need to access their applications
and virtual machines running in the VxRail cluster.
● The internal management network that is used for device discovery does not require assigned IP addresses.
Virtual Machine
The Virtual Machine networks are for the virtual machines running your applications and services. These networks can be
created by VxRail during the initial build or afterward using the VMware vSphere Client after initial configuration is complete.
Dedicated
VLANs are preferred to divide VM traffic, based on business and operational objectives. VxRail creates one or more VM
networks for you, based on the name and VLAN ID pairs that you specify. When you create VMs in the VMware vSphere Web
Client to run your applications and services, you can assign the VM to the VM networks of your choice. For example, you could
have one VLAN for development, one for production, and one for staging.
Network configuration
Table 5. Network configuration
Network configuration table Action
Row 1 Enter the external management VLAN ID for VxRail management network (VxRail Manager, ESXi,
VMware vCenter Server, Log Insight). If you do not plan to have a dedicated management VLAN
and will accept this traffic as untagged, enter 0 or Native VLAN.
Request your networking team to reserve a subnet range that has sufficient open IP addresses to cover VxRail initial build and
any planned future expansion.
If you choose instead to assign the IP addresses to each individual ESXi host, record the IP address for each ESXi host to be
included for VxRail initial build.
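A quick capacity check of the reserved range, using Python's ipaddress module, is sketched below; the subnet and the counts are placeholders for your own plan.

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/26")   # placeholder reserved range

nodes = 4                  # ESXi hosts in the initial build
management_vms = 3         # for example: VxRail Manager, vCenter Server, log forwarding
future_nodes = 8           # planned expansion headroom

needed = nodes + management_vms + future_nodes
usable = subnet.num_addresses - 2                  # exclude network and broadcast addresses
print(f"Need {needed} addresses, {usable} usable in {subnet}: "
      f"{'OK' if usable >= needed else 'reserve a larger range'}")
```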
If you are going to deploy the embedded VMware vCenter Server on the VxRail cluster provided with VxRail, record the
permanent IP address for VMware vCenter Server. Leave these entries blank if you are going to provide an external VMware
vCenter Server for VxRail.
Table 12. Record the permanent IP address for VMware vCenter Server
Network configuration table Action
Row 34 Enter the IP address for VxRail vCenter.
Enter the values for building and auto-assigning the ESXi hostnames if that is the chosen method.
If you are going to assign the ESXi hostnames manually, capture the name for each ESXi host that is planned for the VxRail
initial build operation.
You must create lookup records in your selected DNS for every VxRail management component that you are deploying in the
cluster and are assigning a hostname and IP address. These components can include VxRail Manager, VxRail vCenter Server,
Log Insight, and each ESXi host in the VxRail cluster. The DNS entries must support both forward and reverse lookup.
Use the Appendix A: VxRail Network Configuration Table to determine which VxRail management components to include in your
planned VxRail cluster, and have assigned a hostname and IP address. VMware vSphere vMotion and vSAN IP addresses do not
require hostnames, so no entries are required in the DNS server.
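A simple forward and reverse lookup check for the planned hostnames can catch missing DNS records before the initial build. The FQDNs below are placeholders; run the check from a workstation that uses the DNS server the VxRail cluster will use.

```python
import socket

# Placeholder FQDNs for the VxRail management components.
HOSTS = [
    "vxrail-manager.example.local",
    "vxrail-vcenter.example.local",
    "esxi-host-01.example.local",
]

for fqdn in HOSTS:
    try:
        ip = socket.gethostbyname(fqdn)          # forward lookup
        reverse = socket.gethostbyaddr(ip)[0]    # reverse lookup
        status = "OK" if reverse.lower() == fqdn.lower() else "reverse record mismatch"
    except OSError as err:
        ip, status = "-", f"lookup failed: {err}"
    print(f"{fqdn:35} {ip:15} {status}")
```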
Refer to the configuration settings that are applied to the virtual distributed switch by the automated VxRail initial build process
as a baseline. This ensures a successful deployment of a VxRail cluster against the customer-managed virtual distributed switch.
The settings that are used by the automated initial build process can be found in Appendix E: Virtual Distributed Switch
Portgroup Default Settings.
● Configure the teaming and failover policy on the port groups that are targeted for LAG.
Table 29. Record the IP addresses for the manual assignment method
Network configuration table Action
Rows 50-53 Enter the IP addresses for VMware vSphere vMotion.
Enter the subnet mask and gateway. You can use the default gateway that is assigned to the VxRail External Management
network, or enter a gateway that is dedicated for the VMware vSphere vMotion network.
Table 32. Record the IP addresses for the manual assignment method
Network configuration table Action
Rows 58-61 Enter the IP addresses for vSAN.
Enter the subnet mask and gateway for the vSAN network. You can use the default gateway that is assigned to the VxRail
External Management network if you do not enable routing for this network, or enter a gateway to enable routing for the vSAN
network.
If a syslog server is already deployed in your data center and you are going to use it as a logging solution, capture the IP address.
● If a Witness Traffic Separation (WTS) network is being planned for the VxRail cluster, then the vSAN witness traffic on the
Secondary Network must be able to route to this network.
● If a Witness Traffic Separation (WTS) network is not being considered for the VxRail cluster, then the vSAN witness traffic
on the Secondary Network must be able to route to the vSAN network planned for the VxRail cluster.
Decide whether to use the vSAN network or the Witness Traffic Separation (WTS) Network for monitoring.
● For a VxRail 2-node cluster, the WTS network is required.
● For a VxRail stretched cluster, a WTS network is recommended but not required.
Record the VLAN for the WTS network in Row 74 of the network configuration table.
For a 2-node cluster deployment, record the IP addresses for each node that is required for vSAN witness traffic.
Figure 65. Routing between VxRail Manager and VxRail satellite nodes
Each VxRail satellite node is assigned a management IP address during the installation process. At initial power-on, the VxRail
satellite node is considered stranded until a connection with a VxRail Manager instance is established. Plan your data center
network so that routing is in place between each satellite node and the VxRail Manager instance at the central location.
Steps
1. Configure the External Management VLAN (Row 1) on the switches. If you entered Native VLAN, set the ports on the
switch to accept untagged traffic and tag it to the native management VLAN ID. Untagged management traffic is the default
management VLAN setting on VxRail.
2. For VxRail 4.7.x and later, configure the Internal Management VLAN (Row 2) on the switches.
3. Allow multicast on the Internal Management network to support device discovery.
4. Configure a VMware vSphere vMotion VLAN (Row 3) on the switches.
5. Configure a vSAN VLAN (Row 4) on the switches. Unicast is required for VxRail clusters built with VxRail 4.5.x and later.
6. Configure the VLANs for your VM Networks (Rows 6) on the switches. These networks can be added after the cluster initial
build is complete.
7. If you choose to create a separate subnet for the vCenter Server Network, configure the vCenter Server Network VLAN
(Row 7) on the switches.
8. Configure the optional VxRail Witness Traffic Separation VLAN (Row 69) on the switches ports if required.
9. Configure the switch uplinks to allow the External Management VLAN (Row 1) and VM Network VLANs (Row 6) to pass
through, and optionally the vSphere vMotion VLAN (Row 3), vSAN VLAN (Row 4) and vCenter Server Network VLAN (Row
7). If a vSAN witness is required for the VxRail cluster, include the VxRail Witness Traffic Separation VLAN (Row 69) on the
uplinks.
Configure LAG
LAG is supported for the VxRail initial implementation process only under the following conditions:
● The nodes are running VxRail 7.0.130 or later.
● LAG is being applied only to non-management VxRail networks.
● VxRail is being deployed with customer-supplied virtual distributed switches.
If these conditions are not applicable to your plans, do not enable LAG, including protocols such as LACP and EtherChannel,
on any switch ports that are connected to VxRail node ports before initial implementation. Doing so causes VxRail initial
implementation to fail. When the initial implementation process completes, you can configure LAG on the operational VxRail
cluster, as described in the section Configure LAG on VxRail networks.
If your plans meet these conditions for supporting LAG during the VxRail initial implementation process, then perform these
action items before starting:
● For data center networks with more than one switch planned to support the VxRail cluster, configure virtual link trunking
between the switches.
● Configure a port channel on each switch for each VxRail node.
Steps
1. Configure the MTU size if using jumbo frames.
2. Set the port to the appropriate speed or to auto-negotiate speed.
3. Set spanning tree mode to disable transition to a blocking state, which can cause a timeout condition.
4. Enable flow control receive mode and disable flow control transmit mode.
5. Configure the External Management VLAN (Row 1) on the switch ports. If you entered Native VLAN, set the ports on the
switch to accept untagged traffic and tag it to the native management VLAN ID. Untagged management traffic is the default
management VLAN setting on VxRail.
6. For VxRail version 4.7 and later, configure the Internal Management VLAN (Row 2) on the switch ports.
7. If required, allow multicast on the VxRail switch ports to support the Internal Management network.
8. Configure a VMware vSphere vMotion VLAN (Row 3) on the switch ports.
9. Configure a vSAN VLAN (Row 4) on the switch ports. Allow unicast traffic on this VLAN.
10. Configure the VLANs for your VM Networks (Rows 6) on the switch ports.
11. Configure the optional vCenter Server Network VLAN (Row 7) on the switch ports.
12. Configure the optional VxRail Witness Traffic Separation VLAN (Row 69) on the switch ports, if required.
● Configure the appropriate port channel, or its equivalent depending on the switch operating system, on the switch port.
● Configure any additional network port settings that do not transfer from the port channel on the switch port.
If your Layer 2/Layer 3 boundary is at the lowest network tier (ToR switch), perform the following tasks:
● Configure point-to-point links with the adjacent upstream switches.
● Terminate the VLANs requiring upstream access on the ToR switches.
● Enable and configure routing services for the VxRail networks requiring upstream passage.
Prerequisites
Before forming the VxRail cluster, the VxRail initialization process performs several verification steps, including:
● Verifying switch and data center environment support.
● Verifying passage of VxRail logical networks.
● Verifying accessibility of required data center applications.
● Verifying compatibility with the planned VxRail implementation.
Certain data center environment and network configuration errors cause the validation to fail, and the VxRail cluster is not
formed. When validation fails, the data center settings and switch configurations must undergo troubleshooting to resolve the
problems reported.
Steps
1. External management traffic is untagged on the native VLAN by default. If a tagged VLAN is used instead, customize the
switches with the new VLAN.
2. Internal device discovery network traffic uses the default VLAN of 3939. If this has changed, customize all ESXi hosts with
the new VLAN, or device discovery will not work.
3. Confirm that the switch ports that attach to the VxRail nodes allow passage of all VxRail network VLANs.
4. Confirm that the switch uplinks allow passage of external VxRail networks.
5. If you have two or more switches, confirm that an inter-switch link is configured between them to support passage of the
VxRail network VLANs.
Steps
1. Verify that VxRail can communicate with your DNS server.
2. Verify that VxRail can communicate with your NTP server, if planned for clock synchronization.
3. Verify that VxRail can communicate with your syslog server if you plan to capture logging.
4. Verify that your IT administrators can communicate with the VxRail management system.
5. If you plan to use a customer-supplied vCenter, verify open communication between the vCenter instance and the VxRail
managed hosts.
6. If you plan to use a third-party syslog server instead of Log Insight, verify open communication between the syslog
server and the VxRail management components.
7. If you plan to deploy a separate network for ESXi host management (iDRAC), verify that your IT administrators can
communicate with the iDRAC network.
8. If you plan to use an external secure connect gateway in your data center instead of the secure connection deployed in the
VxRail cluster, verify the open communications between VxRail management and the secure connect gateway.
9. If you are planning to use VMware subscription licenses with VxRail, confirm connectivity to the VMware Cloud from the
VMware vCenter Cloud Gateway.
See Appendix D: VxRail Open Ports Requirements for information about VxRail port requirements.
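The sketch below probes a few of the services listed above from a workstation on the external management network. The addresses and hostname are placeholders, the NTP probe sends a minimal SNTP client request over UDP 123, and a response only confirms basic reachability, not full functionality.

```python
import socket

NTP_SERVER = "192.168.10.6"        # placeholder addresses for your environment
SYSLOG_SERVER = "192.168.10.7"

# DNS: resolve a planned VxRail hostname through the workstation's configured resolver.
try:
    socket.gethostbyname("vxrail-manager.example.local")
    print("DNS resolution OK")
except OSError as err:
    print(f"DNS resolution failed: {err}")

# NTP: minimal SNTP client request (first byte 0x1b = LI 0, version 3, mode 3).
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as ntp:
    ntp.settimeout(3)
    try:
        ntp.sendto(b"\x1b" + 47 * b"\0", (NTP_SERVER, 123))
        ntp.recvfrom(48)
        print("NTP server responded")
    except OSError:
        print("No NTP response (blocked or not running)")

# Syslog: check a TCP listener on 514; syslog over UDP cannot be verified this way.
try:
    with socket.create_connection((SYSLOG_SERVER, 514), timeout=3):
        print("Syslog TCP port reachable")
except OSError as err:
    print(f"Syslog port check failed: {err}")
```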
Steps
1. Before coming on-site, the Dell service representative contacts you to capture and record the information that is described
in Appendix A: VxRail Network Configuration Table and to walk through Appendix C: VxRail Cluster Setup Checklist.
2. If your planned VxRail deployment requires a witness at a remote data center location, the witness virtual appliance is
deployed.
3. If your deployment includes Dell Ethernet switches and professional services to install and configure the switches to support
the VxRail cluster, that activity is performed before VxRail deployment activities.
4. If your planned deployment is a dynamic cluster, complete the necessary preparations for the selected external storage
resource.
5. Install the VxRail nodes in a rack or multiple racks in the data center. If Dell professional services are not installing the
switches, install the network switches supporting the VxRail cluster into the same racks for ease of management.
6. Attach Ethernet cables between the ports on the VxRail nodes and switch ports that are configured to support VxRail
network traffic.
7. Power on the initial nodes to form the initial VxRail cluster. Do not turn on any other VxRail nodes until you have completed
the formation of the VxRail cluster with the first three or four nodes.
8. Connect a workstation, laptop, or jump host that is configured for VxRail initialization to access the VxRail external
management network on your selected VLAN. Plug into the ToR switch or logically reach the VxRail external management
network from elsewhere on your network.
9. Open a browser to the VxRail IP address to begin the VxRail initialization. This is either the default IP address that is
assigned to VxRail Manager at the factory or the permanent IP address set by Dell services.
10. The Dell service representative populates the input screens on the menu with the data that is collected from the customer
during the planning and design process.
11. VxRail performs the verification process using the information input into the menus.
12. After validation is successful, the initialization process begins to build a new VxRail cluster. The new permanent IP address
for VxRail Manager is displayed.
The following rules apply for migrating VxRail networks from NDC/OCP-only ports to mixed NDC/OCP-PCIe ports:
● The VxRail version on your cluster is 7.0.010 or later.
● Reserve the first port configured for VxRail networking, which is known as vmnic0 or vmnic1, for VxRail management and
node discovery. Do not migrate VxRail management or node discovery off this first reserved port.
● The switch ports enabling connectivity to the PCIe-based ports are properly configured to support VxRail network traffic.
● All the network ports supporting VxRail network traffic must run at the same speed.
● The network reconfiguration requires a one-to-one swap. For example, a VxRail network that is running on two NDC/OCP
ports can be reconfigured to run on one NDC/OCP port and one PCIe port. A small planning sketch follows this list.
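The following is a minimal Python planning sketch of the rules above. The NIC inventory and the planned uplink change are hypothetical sample values; substitute the adapters and speeds reported by your own nodes (for example, from esxcli network nic list on each host).

"""Check a planned NDC/OCP-to-PCIe uplink migration against the rules above."""

# Sample per-node NIC inventory: name -> (adapter type, link speed in Mb/s)
NIC_INVENTORY = {
    "vmnic0": ("NDC/OCP", 25000),
    "vmnic1": ("NDC/OCP", 25000),
    "vmnic2": ("PCIe", 25000),
    "vmnic3": ("PCIe", 25000),
}

RESERVED_PORT = "vmnic0"  # first configured port, kept for management and node discovery

def validate(network, current, planned, inventory):
    """Return a list of rule violations for one VxRail network."""
    errors = []
    # Management and node discovery must stay on the reserved first port.
    if network in ("management", "discovery"):
        if RESERVED_PORT in current and RESERVED_PORT not in planned:
            errors.append(f"{network}: do not migrate off reserved port {RESERVED_PORT}")
    # The reconfiguration must be a one-to-one swap.
    if len(planned) != len(current):
        errors.append(f"{network}: port count must stay the same (one-to-one swap)")
    # All ports carrying VxRail traffic must run at the same speed.
    speeds = {inventory[nic][1] for nic in planned}
    if len(speeds) > 1:
        errors.append(f"{network}: mixed port speeds {sorted(speeds)} are not supported")
    return errors

if __name__ == "__main__":
    # Example: move vMotion from two NDC/OCP ports to one NDC/OCP port and one PCIe port.
    problems = validate("vmotion", ["vmnic0", "vmnic1"], ["vmnic1", "vmnic2"], NIC_INVENTORY)
    print("Plan is consistent with the rules" if not problems else "\n".join(problems))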
Follow the official instructions or procedures from VMware and Dell for these operations.
The supported operations include:
● Create a VMware Standard Switch and connect it to unused ports.
● Connect unused ports to new port groups on the default VMware VDS.
● Create a VMware VDS and add VxRail nodes. Connect their unused network ports to the VMware VDS.
● Create VMkernel adapters, and enable services for IP storage and VMware vSphere Replication.
● Create VM Networks and assign them to new port groups.
The following operations are unsupported in versions earlier than VxRail 7.0.010:
● You cannot migrate or move VxRail traffic to the optional ports. VxRail traffic includes the management, vSAN, VMware
vCenter Server, and VMware vSphere vMotion Networks.
● You cannot migrate VxRail traffic to other port groups.
● You cannot migrate VxRail traffic to another VMware VDS.
CAUTION: Unsupported operations impact the stability and operation of the VxRail cluster and can cause a failure.
You can configure LAG on any unused network ports on the VxRail nodes, that is, ports that were not configured for VxRail
network traffic use cases. These can include any unused ports on the NDC/OCP or on the optional PCIe adapter cards. Updates
to support these new networks can be configured on the VMware VDS that is deployed during the VxRail initial build, or you can
configure a new VMware VDS. Because the initial VMware VDS is under the management and control of VxRail, configure a
separate VMware VDS on the VMware vCenter Server instance to support these networking use cases.
VxRail Network Configuration Table (continued):
10. VMware vCenter Server Management Network: Subnet Mask. The VMware vCenter Server network is on the same subnet as the external management network by default. It can optionally be assigned a new subnet mask.
11. VMware vCenter Server Management Network: Gateway. The VMware vCenter Server network is on the same subnet as the external management network by default. It can optionally be assigned a new gateway.
12. System Global Settings: Time zone
13. System Global Settings: NTP servers
14. System Global Settings: DNS servers
15. System Global Settings: Top level domain
16. VxRail Manager: Hostname
17. VxRail Manager: IP address
18. ESXi Hostnames, VxRail autoassign method: Prefix
19. ESXi Hostnames, VxRail autoassign method: Separator
20. ESXi Hostnames, VxRail autoassign method: Iterator
24. ESXi Hostnames: ESXi hostname 2
25. ESXi Hostnames: ESXi hostname 3
26. ESXi Hostnames: ESXi hostname 4
27. ESXi IP Addresses, VxRail autoassign method: Starting IP address
28. ESXi IP Addresses, VxRail autoassign method: Ending IP address
29. ESXi IP Addresses, customer-supplied method: VMware ESXi IP Address 1
30. ESXi IP Addresses, customer-supplied method: VMware ESXi IP Address 2
31. ESXi IP Addresses, customer-supplied method: VMware ESXi IP Address 3
32. ESXi IP Addresses, customer-supplied method: VMware ESXi IP Address 4
33. VMware vCenter Server, VxRail VMware vCenter Server: VMware vCenter Server Hostname
34. VMware vCenter Server, VxRail VMware vCenter Server: VMware vCenter Server IP Address
35. VMware vCenter Server: VMware vCenter Server Hostname (FQDN)
36. VMware vCenter Server: VMware vCenter Server SSO Domain
37. VMware vCenter Server: Admin username/password, or the VxRail nonadmin username and password
38. VMware vCenter Server: New VxRail management username and password
39. VMware vCenter Server: VMware vCenter Data Center Name
40. VMware vCenter Server: VMware vCenter Cluster Name
41. VMware VDS, customer-supplied switch names: Name of first VMware VDS
42. VMware VDS, customer-supplied switch names: Name of second VMware VDS
43. VMware VDS, customer-supplied port groups: Name of VMware VDS port group supporting the VxRail external management network
44. VMware VDS, customer-supplied port groups: Name of VMware VDS port group supporting the VxRail VMware vCenter Server network
45. VMware VDS, customer-supplied port groups: Name of VMware VDS port group supporting the VxRail internal management network
46. VMware VDS, customer-supplied port groups: Name of VMware VDS port group supporting the VxRail vMotion network
47. VMware VDS, customer-supplied port groups: Name of VMware VDS port group supporting the VxRail vSAN network
48. vMotion, VxRail autoassign method: Starting address for IP pool
49. vMotion, VxRail autoassign method: Ending address for IP pool
50. vMotion, customer-supplied method: VMware vSphere vMotion IP Address 1
51. vMotion, customer-supplied method: VMware vSphere vMotion IP Address 2
52. vMotion, customer-supplied method: VMware vSphere vMotion IP Address 3
vMotion, customer-supplied method: VMware vSphere vMotion IP Address 4
55. vMotion: Subnet Mask
vMotion: Gateway. Default TCP/IP stack, or the vMotion stack to enable routing.
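The following Python sketch shows how the autoassign rows above (prefix, separator, iterator, and the starting IP address) might expand into per-node hostnames, FQDNs, and IP addresses. All values, including the prefix, domain, and subnet, are illustrative examples rather than defaults.

"""Illustrate how sequential autoassign inputs expand into per-node values."""
import ipaddress

# Example autoassign inputs; replace with the entries recorded in your configuration table.
PREFIX = "esxi"            # hostname prefix
SEPARATOR = "-"            # separator between prefix and iterator
ITERATOR_START = 1         # first numeric value appended to the prefix
NODE_COUNT = 4             # initial cluster size

STARTING_IP = ipaddress.IPv4Address("192.168.101.101")  # example starting IP address
TOP_LEVEL_DOMAIN = "example.local"                        # example top-level domain

def autoassign(prefix, separator, start, count, first_ip, domain):
    """Generate (hostname, FQDN, IP address) tuples for a sequential autoassign scheme."""
    for offset in range(count):
        hostname = f"{prefix}{separator}{start + offset:02d}"
        yield hostname, f"{hostname}.{domain}", first_ip + offset

if __name__ == "__main__":
    for host, fqdn, ip in autoassign(PREFIX, SEPARATOR, ITERATOR_START,
                                     NODE_COUNT, STARTING_IP, TOP_LEVEL_DOMAIN):
        print(f"{host:10s} {fqdn:28s} {ip}")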
VxRail Cluster Setup Checklist
Reserve VLANs:
● One external management VLAN
● One internal management VLAN with multicast for autodiscovery and device management. The default is 3939.
● One VLAN with unicast enabled for VMware vSAN traffic
● One VLAN for VMware vSphere vMotion
● One or more VLANs for your VM Guest Networks
● One VLAN for the VMware vCenter Server Network (if applicable)
● If you are enabling witness traffic separation, reserve one VLAN for the VxRail witness traffic separation network.
Management:
● Decide on your VxRail host naming scheme. The naming scheme applies to all VxRail management components.
● Reserve IP addresses for the VMware ESXi hosts.
● Reserve one IP address for VxRail Manager.
● Determine the default gateway and subnet mask.
● Select passwords for the VxRail management components.
vCenter:
● Determine whether to use a customer-supplied VMware vCenter Server or a VMware vCenter Server provided by VxRail.
● VxRail vCenter Server: Reserve IP addresses for the VMware vCenter Server (if supplied by VxRail).
● Customer-managed VMware vCenter Server: Determine the hostname and IP address for vCenter, the administration user,
and the name of the VMware vSphere data center. Create a VxRail management user in VMware vCenter. Select a unique
VxRail cluster name. (Optional) Create a VxRail nonadmin user.
vMotion:
● Decide whether to use the default TCP/IP stack for VMware vMotion, or a separate IP addressing scheme for the dedicated
VMware vMotion TCP/IP stack.
● Reserve IP addresses and a subnet mask for VMware vSphere vMotion.
● Select the gateway for either the default TCP/IP stack or the dedicated VMware vMotion TCP/IP stack.
vSAN:
● Reserve IP addresses and a subnet mask for vSAN, if using vSAN for primary storage.
Witness Site:
● If a witness is required, reserve one IP address for the management network and one IP address for the vSAN network.
Set up Switches:
● Configure your selected external management VLAN (the default is untagged/native).
● Configure your internal management VLAN.
● Confirm that multicast is enabled for device discovery on the internal management VLAN, and that unicast is enabled on the
VMware vSAN VLAN.
● Configure your selected VLANs for VMware vSAN, VMware vSphere vMotion, the VMware vCenter Server Network, and the
VM Guest Networks.
● If applicable, configure your witness traffic separation VLAN.
● In dual-switch environments, configure the interswitch links to carry traffic between switches.
● Configure uplinks or point-to-point links to carry networks requiring external connectivity upstream.
● Configure one port as an access port for a laptop/workstation to connect to VxRail Manager for initial configuration.
● Confirm the configuration and network access.
Workstation/Laptop:
● Configure your workstation/laptop to reach the VxRail Manager initial IP address.
● Configure the laptop to reach the VxRail Manager IP address after permanent IP address assignment.
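As a quick sanity check of the reservations in this checklist, a short Python sketch such as the following can confirm that the reserved management addresses fall inside the external management subnet and that no address is reserved twice. The subnet and addresses shown are examples only, and the sketch assumes the vCenter Server address stays on the default external management subnet.

"""Sanity-check example IP reservations against the external management subnet."""
import ipaddress

# Example external management subnet and reservations; replace with your own plan.
MGMT_SUBNET = ipaddress.ip_network("192.168.101.0/24")
RESERVATIONS = {
    "VxRail Manager": "192.168.101.100",
    "ESXi host 1": "192.168.101.101",
    "ESXi host 2": "192.168.101.102",
    "ESXi host 3": "192.168.101.103",
    "ESXi host 4": "192.168.101.104",
    "vCenter Server": "192.168.101.110",
}

def check_plan(subnet, reservations):
    """Return a list of problems: addresses outside the subnet or reserved twice."""
    problems = []
    seen = {}
    for name, addr in reservations.items():
        ip = ipaddress.ip_address(addr)
        if ip not in subnet:
            problems.append(f"{name} ({ip}) is outside {subnet}")
        if ip in seen:
            problems.append(f"{name} and {seen[ip]} both reserve {ip}")
        seen[ip] = name
    return problems

if __name__ == "__main__":
    issues = check_plan(MGMT_SUBNET, RESERVATIONS)
    print("Reservations look consistent" if not issues else "\n".join(issues))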
Open the necessary firewall ports to enable IT administrators to deploy the VxRail cluster.
If you plan to use a customer-managed VMware vCenter Server instead of deploying a VMware vCenter Server in the VxRail
cluster, open the necessary ports so that the VMware vCenter Server instance can connect to the ESXi hosts.
Other firewall port settings may be necessary depending on your data center environment. The documents listed here are
provided for reference purposes. VxRail manages the VxRail Customer Firewall Rules interactive workbook.
● VMware vCenter Cloud Gateway requirements: VMware Cloud Gateway for vSphere+ Requirements
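A short Python sketch such as the following can spot-check that a customer-managed VMware vCenter Server can reach the ESXi management interfaces through the firewall. The hostnames are placeholders, and the ports shown (TCP 443 and 902 are commonly used between vCenter and ESXi hosts) are illustrative; rely on the VxRail Customer Firewall Rules interactive workbook and Appendix D for the authoritative list.

"""Spot-check firewall openings from a customer-managed vCenter to ESXi hosts."""
import socket

ESXI_HOSTS = ["esxi-01.example.local", "esxi-02.example.local"]  # example hostnames
PORTS = [443, 902]  # illustrative; confirm against the firewall rules workbook

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ESXI_HOSTS:
        for port in PORTS:
            state = "open" if port_open(host, port) else "blocked or unreachable"
            print(f"{host} TCP {port}: {state}")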
The reservation value is set to zero for all network traffic types, with no limits set on bandwidth.
Figure 71. VxRail nodes with two 10 Gb or 25 Gb NDC/OCP ports connected to two ToR switches, and one optional
connection to management switch for iDRAC
With this predefined profile, VxRail selects the two ports on the NDC/OCP to support VxRail networking. If the NDC/OCP
adapter on the VxRail nodes is shipped with four Ethernet ports, the two leftmost ports are selected. If you choose to use only
two Ethernet ports, the remaining ports can be used for other use cases. This connectivity option is the simplest to deploy. It is
suitable for smaller, less demanding workloads that can tolerate the loss of the NDC/OCP adapter as a single point of failure.
Figure 72. VxRail nodes with four 10 Gb NDC/OCP ports connected to two ToR switches, and one optional
connection to management switch for iDRAC
In this predefined network profile, VxRail selects all four ports on the NDC/OCP to support VxRail networking instead of two.
The same number of cable connections should be made to each switch. This topology provides additional bandwidth over the
two-port option, but does not protect against a failure of the network adapter card.
Figure 73. VxRail nodes with two 10/25 Gb NDC/OCP ports and two 10/25 Gb PCIe ports connected to two ToR
switches, and one optional connection to management switch for iDRAC
In this predefined network profile option, two NDC/OCP ports and two ports on the PCIe card in the first slot are selected
for VxRail networking. The network profile splits the workload on the NDC/OCP ports between the two switches, and splits
the workload on the PCIe-based ports between the two switches. This option protects against loss of service from a failure at
the switch level, and from a failure of either the NDC/OCP or the PCIe adapter card.
Figure 74. VxRail nodes with any two 10/25 Gb NDC/OCP ports and two 10/25 Gb PCIe ports connected to two ToR
switches, and one optional connection to management switch for iDRAC
This is an example of a custom cabling setup with two NDC/OCP ports and two PCIe ports that are connected to a pair of 10 Gb
or 25 Gb switches. Any NDC/OCP port and any PCIe port can be selected to support VxRail networking. However, the
two NICs assigned to support a specific VxRail network must be of the same type and run at the same speed.
Figure 75. VxRail nodes with any two 10/25 Gb NDC/OCP ports and any two 10/25 Gb PCIe ports connected to two
ToR switches, and one optional connection to management switch for iDRAC
With the custom option, there is no restriction that the ports selected for VxRail networking must reside on the PCIe
adapter card in the first slot. If there is more than one PCIe adapter card, ports can be selected from either card.
This option supports spreading the VxRail networking across ports on more than one PCIe adapter card.
With the six-port option, you can use more of the PCIe ports as opposed to the NDC/OCP ports. If your nodes have two PCIe
slots that are occupied with network adapter cards, this offers the flexibility to spread the VxRail networking workload across
three network adapter cards.