Dell EMC VxRail Network Planning Guide
Abstract
This is a planning and consideration guide for VxRail™ Appliances. It can be
used to better understand the networking requirements for a VxRail
implementation. This document does not replace the implementation services
requirements for VxRail Appliances and should not be used to implement
networking for VxRail Appliances.
October 2020
H15300.14
Revision history
Date and description:
April 2019: First inclusion of this version history table; added support of VMware Cloud Foundation on VxRail
June 2019: Added support for VxRail 4.7.210 and updates to 25 GbE networking
August 2019: Added support for VxRail 4.7.300 with Layer 3 VxRail networks
September 2020: Outlined best practices for link aggregation on non-VxRail ports
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software that is described in this publication requires an applicable software license.
Copyright © 2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC, and other trademarks are trademarks of Dell Technologies or
its subsidiaries. Other trademarks may be trademarks of their respective owners. [10/27/2020] [Planning Guide] [H15300.14]
VxRail is not a server. It is an appliance that is based on a collection of nodes and switches that are
integrated as a cluster under a single point of management. All physical compute, network, and storage
resources in the appliance are managed as a single shared pool and allocated to applications and
services based on customer-defined business and operational requirements.
The compute nodes are based on Dell EMC PowerEdge servers. The G Series consists of up to four
nodes in a single chassis, whereas all other models are based on a single node. An Ethernet switch is
required, at speeds of either 1/10/25 GbE, depending on the VxRail infrastructure deployed. A
workstation/laptop for the VxRail user interface is also required.
VxRail has a simple, scale-out architecture, leveraging VMware vSphere® and VMware vSAN™ to
provide server virtualization and software-defined storage, with simplified deployment, upgrades, and
maintenance through VxRail Manager. Fundamental to the VxRail clustered architecture is network
connectivity. It is through the logical and physical networks that individual nodes act as a single system
providing scalability, resiliency, and workload balance.
The VxRail software bundle is preloaded onto the compute nodes, and consists of the following
components (specific software versions not shown):
• VxRail Manager
• VMware vCenter Server™
• VMware vRealize Log Insight™
• VMware vSAN
• VMware vSphere
• Dell EMC Secure Remote Services (SRS)/VE
Licenses are required for VMware vSphere and VMware vSAN. The vSphere licenses can be purchased
from Dell Technologies, VMware, or your preferred VMware reseller partner.
The VxRail Appliances also include the following licenses for software that can be downloaded, installed,
and configured:
• Dell EMC RecoverPoint for Virtual Machines (RP4VM)
• 5 full VM licenses per single node (E, V, P, D, and S series)
• 15 full VM licenses for the G Series chassis
Note: Follow all the guidance and decision points described in this document; otherwise, VxRail will not
implement properly, and it will not function correctly in the future. If you have separate teams for network
and servers in your data center, you must work together to design the network and configure the
switches.
This section describes the physical components and selection criteria for VxRail clusters:
• VxRail clusters, appliances, and nodes
• Network switch
• Data Center Network
• Topology and connections
• Workstation/laptop
• Out-of-band management (optional)
A standard VxRail cluster starts with a minimum of three nodes and can scale to a maximum of 64 nodes.
The selection of the VxRail nodes to form a cluster is primarily driven by planned business use cases,
and factors such as performance and capacity. Five series of VxRail models are offered, each targeting
specific objectives.
Each VxRail model series offers choices for network connectivity. The following figures show some of the
physical network port options for the VxRail models.
Back view of VxRail V-, P-, S-Series on Dell 14th Generation PowerEdge server
In addition to network connectivity, review the physical power, space, and cooling requirements for your
planned infrastructure to ensure data center compatibility.
The network traffic that is configured in a VxRail cluster is Layer 2. VxRail is architected to enable
efficiency with the physical ‘top-of-rack’ switches through the assignment of virtual LANs (VLANs) to
individual VxRail Layer 2 networks in the cluster. This functionality eases network administration and
integration with the upstream network.
A common Ethernet switch feature, Multicast Listener Discovery (MLD) snooping and querier, is designed
to constrain the flooding of multicast traffic by examining MLD messages and then forwarding multicast
traffic only to interested interfaces. Since the traffic on this node discovery network is already constrained
through the configuration of this VLAN on the ports supporting the VxRail cluster, this setting may provide
some incremental efficiency benefits, but does not negatively impact network efficiency.
You must specify a set of Virtual LAN (VLAN) IDs in your data center network that will be assigned to
support the VxRail networks. All the VLANs must be configured on the adjoining physical switches. The
VLANs that need to pass upstream must be configured on adjoining network switch uplinks, and also on
the ports on the upstream network devices.
One VLAN is assigned for external VxRail management access. Data center services (such as DNS and
NTP) that are required by VxRail cluster must be able to connect to this VLAN. Routing services must be
updated to enable connectivity to these services from this VxRail management network. Other VLANs,
such as those required for end-user access must also be configured in routing services to connect end-
users to the virtual machines running on the VxRail cluster.
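As an illustration of this VLAN planning step, the following Python sketch records a hypothetical set of VxRail VLAN assignments and checks that each ID falls in the valid 802.1Q range and is not reused. The network names and VLAN values are examples only, not defaults (apart from the factory-default internal management VLAN of 3939 described later in this guide).

    # Hypothetical VLAN plan; substitute the values chosen for your data center.
    vlan_plan = {
        "external_management": 100,
        "internal_management": 3939,   # VxRail factory default for device discovery
        "vsphere_vmotion": 201,
        "vsan": 200,
        "vm_network_production": 300,
    }

    def validate_vlan_plan(plan):
        """Check that every VLAN ID is in the valid 802.1Q range and not reused."""
        seen = {}
        for network, vlan_id in plan.items():
            if not 1 <= vlan_id <= 4094:
                raise ValueError(f"{network}: VLAN {vlan_id} is outside the 1-4094 range")
            if vlan_id in seen:
                raise ValueError(f"{network}: VLAN {vlan_id} is already used by {seen[vlan_id]}")
            seen[vlan_id] = network

    validate_vlan_plan(vlan_plan)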
The following connectivity rules apply to VxRail nodes based on 14th Generation Dell EMC PowerEdge
servers:
• G Series
- 2x10GbE SFP+ NIC ports
The following connectivity rules apply for VxRail nodes based on Pre-14th Generation Dell EMC
PowerEdge servers:
• E, P, S, and V Series
- 2x10GbE + 2x1GbE in either SFP+ or RJ-45 NIC ports
• E, P, and S Series
- 1 GbE connectivity is supported on single processor models only.
- The 2x10GbE ports will auto-negotiate to 1 GbE when used with 1 GbE networking.
• When a VxRail cluster is initially built, all the network adapter cards that are used to form the
cluster must be of the same vendor and model. This rule does not apply to nodes added to an
existing VxRail cluster, so long as the port speed and port type match the existing nodes.
• VxRail recommends using the same adapter card vendor and model for all the nodes in a cluster.
There is no guarantee that using optics or cables from one vendor with an adapter card from
another vendor will work as expected. VxRail recommends consulting the Dell cable and optics
support matrix before attempting to mix vendor equipment in a VxRail cluster.
• The feature sets supported from network adapter card suppliers do not always match. There is a
dependency on the firmware and/or driver in the adapter card to support certain features. If there
is a specific feature that is needed to meet a business requirement, VxRail recommends
consulting with a sales specialist to verify that the needed feature is supported for a specific
vendor.
For a VxRail cluster at a minimum version of 7.0.010 and configured for 10 GbE connectivity, the
option to enable redundancy at the NIC level is supported during VxRail initial build if the cluster is
deployed against a customer-supplied virtual distributed switch. If the virtual distributed switch is created
during the VxRail initial build operation, enabling NIC-level redundancy is a Day 2 operation.
For a VxRail cluster at a minimum version of 7.0.100 and configured for 25 GbE connectivity, the two ports
on the NDC and the two ports on the PCIe adapter card can be configured to support VxRail network
traffic at the time of initial build, or later as a Day 2 operation.
• Any unused Ethernet ports on the nodes that are not reserved by the VxRail cluster can be used
for other purposes, such as guest networks, NFS, etc.
• For VxRail nodes supplied with 10 GbE ports, the VxRail cluster can be configured with either two
ports or four ports to support VxRail network traffic.
• For VxRail nodes supplied with 1 GbE ports, all four ports must be reserved for VxRail network
traffic.
• All-flash VxRail models must use either 10 GbE or 25 GbE NICs. 1 GbE is not supported for all-
flash.
• The network hardware configuration in a VxRail cluster must have the same Ethernet port types
across all VxRail nodes.
- VxRail nodes with RJ45 and SFP+ ports cannot be mixed in the same VxRail cluster.
- The port speed for each VxRail node (25 GbE, 10 GbE, 1 GbE) must be the same in the
VxRail cluster.
• One additional port on the switch or one logical path on the VxRail external management VLAN is
required for a workstation or laptop to access the VxRail user interface for the cluster.
Be sure to follow your switch vendor’s best practices for performance and availability. For example,
packet buffer banks may provide a way to optimize your network with your wiring layout.
Decide if you plan to use one or two switches for the VxRail cluster. One switch is acceptable, and is
often used in test and development environments. To support sustained performance, high availability,
and failover in production environments, two or more switches are required. The VxRail appliance is a
software-defined data center which is totally dependent on the physical top-of-rack switch for network
communications. A lack of network redundancy places you at risk of losing availability to all of the virtual
machines operating on the appliance.
Decide what network architecture you want to support the VxRail cluster, and what protocols will be used
to connect to data center services and end users. For VxRail clusters managing production workloads,
VLANs will be configured to support the VxRail networks. Determine at which network tier the VxRail
networking VLANs will terminate, and at which tier to configure routing services.
The number of Ethernet ports on each VxRail node you choose to support VxRail networking, and the
number of adjacent top-of-rack switches you choose to deploy to support the workload running on the
VxRail cluster will drive the cabling within the data center rack. Examples of wiring diagrams between
VxRail nodes and the adjacent switches can be found in Physical Network Switch Wiring Examples.
To use out-of-band management, connect the integrated Dell Remote Access Controller (iDRAC) port to
a separate switch to provide physical network separation. Default values, capabilities, and
recommendations for out-of-band management are provided with server hardware information.
You must reserve an IP address for each iDRAC in your VxRail cluster (one per node).
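Because one out-of-band address is needed per node, a short sketch such as the following can enumerate the iDRAC reservations from a starting address. The subnet, starting address, and node count shown are assumptions for illustration only.

    import ipaddress

    oob_subnet = ipaddress.ip_network("192.168.10.0/24")  # assumed out-of-band management subnet
    first_idrac = ipaddress.ip_address("192.168.10.21")   # assumed first reserved iDRAC address
    node_count = 4                                        # one iDRAC address per VxRail node

    idrac_ips = [first_idrac + i for i in range(node_count)]
    for ip in idrac_ips:
        assert ip in oob_subnet, f"{ip} falls outside the out-of-band subnet"
    print([str(ip) for ip in idrac_ips])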
The path starts with a structured discovery and planning process that focuses on business use cases and
strategic goals, and that will drive the selection of software layers that will comprise the software-defined
data center. Dell Technologies implements the desired software layers in a methodical, structured
manner, where each phase involves incremental planning and preparation of the supporting network.
The next phase after the deployment of the VxRail cluster is to layer the VMware Cloud Foundation
software on the cluster. This enables assigning cluster resources as the underpinning for logical domains,
whose policies align with use cases and requirements.
In this profile setting, VxRail uses the SmartFabric feature to discover VxRail nodes and Dell EMC
switches on the network, perform zero-touch configuration of the switch fabric to support VxRail
deployment, and then create a unified hyperconverged infrastructure of the VxRail cluster and Dell EMC
switch network fabric.
For ongoing VxRail cluster network management after initial deployment, the Dell EMC OMNI (Open
Manage Network Interface) vCenter plug-in is provided free of charge. The Dell EMC OMNI plug-in
enables the integration and orchestration of the physical and virtual networking components in the VxRail-
SmartFabric HCI stack, providing deep visibility from the vClient for ease of overall management and
troubleshooting. The Dell EMC OMNI plug-in serves as the centralized point of administration for
SmartFabric-enabled networks in the data center, with a user interface eliminating the need to manage
the switches individually at the console level.
The orchestration of SmartFabric Services with the VxRail cluster means that state changes to the virtual
network settings on the vCenter instance will be synchronized to the switch fabric using REST API. In this
scenario, there is no need to manually reconfigure the switches that are connected to the VxRail nodes
when an update such as a new VLAN, port group, or virtual switch, is made using the vClient.
The SmartFabric-enabled networking infrastructure can start as small as a pair of Dell EMC Ethernet
switches, and can expand to support a leaf-spine topology across multiple racks. A VxLAN-based tunnel is automatically configured across the switch fabric to extend the VxRail networks to expansion racks.
Planning for VxRail with the Dell EMC SmartFabric networking feature must be done in coordination with
Dell Technologies representatives to ensure a successful deployment. The planned infrastructure must
be a supported configuration as outlined in the VxRail Support Matrix.
Using the Dell EMC SmartFabric feature with VxRail requires an understanding of several key points:
• At the time of VxRail deployment, you must choose the method of network switch configuration.
Enabling the VxRail personality profile on the switches resets the switches from the factory
default state and enables SmartFabric Services. If you enable SmartFabric Services, all switch
configuration functionality except for basic management functions is disabled at the console,
and the management of switch configuration going forward is performed with SmartFabric tools
or through the automation and orchestration that is built into VxRail and SmartFabric Services.
• A separate Ethernet switch to support out-of-band management for the iDRAC feature on the
VxRail nodes and for out-of-band management of the Dell Ethernet switches is required.
• Disabling the VxRail personality profile on the Dell network switches deletes the network
configuration set up by SmartFabric services. If a VxRail cluster is operational on the Dell switch
fabric, the cluster must be redeployed.
• Non-VxRail devices can be attached to switches running in SmartFabric services mode using the
OMNI vCenter plug-in.
For more information about how to plan and prepare for a deployment of VxRail clusters on a
SmartFabric-enabled network, reference the Dell EMC VxRail with SmartFabric Planning and Preparation
Guide. For more information about the deployment process of a VxRail cluster on a SmartFabric-enabled
network, go to VxRail Networking Solutions at Dell Technologies InfoHub.
When a VxRail cluster is enabled for vSphere with Kubernetes, the following six services are configured
to support vSphere with Tanzu:
As a VxRail administrator using vSphere management capabilities, you can create namespaces on the
Supervisor Cluster, and configure them with a specified amount of memory, CPU, and storage. Within the
namespaces, you can run containerized workloads on the same platform with shared resource pools.
• This feature requires each VxRail node that is part of the Supervisor cluster to be configured with
a vSphere Enterprise Plus license with an add-on license for Kubernetes.
• This feature requires portgroups to be configured on the VxRail cluster virtual distributed switch to
support workload networks. These networks provide connectivity to the cluster nodes and the
three Kubernetes control plane VMs. Each Supervisor Cluster must have one primary workload
network.
• A virtual load balancer that is supported for vSphere must also be configured on the VxRail
cluster to enable connectivity from the client network to workloads running in the namespaces.
• The workload networks require reserved IP addresses to enable connectivity for the control plane
VMs and the load balancer.
For complete details on enabling a VxRail cluster to support vSphere with Tanzu, see the vSphere with
Tanzu Configuration and Management Guide.
If you plan to deploy a vSAN stretched-cluster on VxRail, note the following requirements:
More detailed information about vSAN stretched-cluster and the networking requirements can be found in
the Dell-EMC VxRail vSAN Stretched Cluster Planning Guide.
• The minimum VxRail software version for the 2-Node cluster is 4.7.1.
• The deployment is limited to a pair of VxRail nodes and cannot grow through node expansion.
Verify that your workload requirements do not exceed the resource capacity of this small-scale
solution.
• Only one top-of-rack switch is required.
• Four Ethernet ports per node are required. Supported profiles:
- 2x1G and 2x10G
Like the vSAN stretched-cluster feature, the small-scale solution has strict networking guidelines,
specifically for the WAN, that must be adhered to for the solution to work. For more information about the
planning and preparation for a deployment of a 2-node VxRail cluster, go to Dell EMC vSAN 2-Node
Cluster Planning and Preparation Guide.
Before getting to this phase, several planning and preparation steps must be undertaken to ensure a
seamless integration of the final product into your data center environment. These planning and
preparation steps include:
1. Plan Data Center Routing Services.
2. Decide on VxRail Single Point of Management.
3. Plan the VxRail logical network.
4. Identify IP address range for VxRail logical networks.
5. Identify unique hostnames for VxRail management components.
6. Identify external applications and settings for VxRail.
7. Create DNS records for VxRail management components.
8. Prepare customer-supplied vCenter Server.
9. Reserve IP addresses for VxRail vMotion and vSAN networks.
10. Decide on VxRail Logging Solution.
11. Decide on passwords for VxRail management.
Use the VxRail Setup Checklist and the VxRail Network Configuration Table to help create your network
plan. References to rows in this document are to rows in the VxRail Network Configuration Table.
Note: Once you set up the VxRail cluster and complete the initialization phase to produce the final
product, the configuration cannot easily be changed. We strongly recommend that you take care during
this planning and preparation phase to decide on the configurations that will work most effectively for your
organization.
A leaf-spine network topology is the most common use case for VxRail clusters. A single VxRail cluster
can start on a single pair of switches in a single rack. When workload requirements expand beyond a
single rack, expansion racks can be deployed to support the additional VxRail nodes and switches. The
switches at the top of those racks, which are positioned as a ‘leaf’ layer, can be connected together using
switches at the adjacent upper layer, or ‘spine’ layer.
Establishing routing services at the spine layer means that the uplinks on the leaf layer are trunked ports,
and pass through all the required VLANs to the switches at the spine layer. This topology has the
advantage of enabling the Layer 2 networks to span across all the switches at the leaf layer. This
topology can simplify VxRail clusters that extend beyond one rack, because the Layer 2 networks at the
leaf layer do not need Layer 3 services to span across multiple racks. A major drawback to this topology
is scalability. Ethernet standards enforce a limitation of addressable VLANs to 4094, which can be a
constraint if the application workload requires a high number of reserved VLANs, or if multiple VxRail
clusters are planned.
Enabling routing services at the leaf layer overcomes this VLAN limitation. This option also helps optimize
network routing traffic, as it reduces the number of hops to reach routing services. However, this option
does require Layer 3 services to be licensed and configured at the leaf layer. In addition, since Layer 2
VxRail networks now terminate at the leaf layer, they cannot span across leaf switches in multiple racks.
Note: If your network supports VTEP, which enables extending Layer 2 networks between switches in
physical racks over a Layer 3 overlay network, that can be considered to support a multi-rack VxRail
cluster.
You have two options if the VxRail cluster extends beyond a single rack:
• Use the same assigned subnet ranges for all VxRail nodes in the expansion rack. This option is
required if SmartFabric Services are enabled on supporting switch infrastructure.
• Assign a new subnet range with a new gateway to the VxRail nodes in the expansion racks.
(Your VxRail cluster must be running a minimum version of 4.7.300 to use this option)
If the same subnets are extended to the expansion racks, the VLANs representing those VxRail networks
must be configured on the top-of-rack switches in each expansion rack and physical connectivity must be
established. If new subnets are used for the VxRail nodes and management components in the
expansion racks, the VLANs terminate at the router layer and routing services must be configured to
enable connectivity between the racks.
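To make the two expansion options concrete, the hedged sketch below uses the Python ipaddress module to test whether a planned node address in an expansion rack belongs to the original subnet (stretched Layer 2) or to a new subnet with its own gateway (routed between racks). All addresses are illustrative.

    import ipaddress

    rack1_mgmt = ipaddress.ip_network("172.16.10.0/24")   # assumed first-rack management subnet
    rack2_mgmt = ipaddress.ip_network("172.16.20.0/24")   # assumed expansion-rack subnet (option 2)
    rack2_gateway = ipaddress.ip_address("172.16.20.1")   # assumed gateway for the new subnet

    new_node_ip = ipaddress.ip_address("172.16.20.15")

    if new_node_ip in rack1_mgmt:
        print("Option 1: same subnet extended to the expansion rack (Layer 2 spans the racks)")
    else:
        # Option 2 requires VxRail 4.7.300 or later, and routing services between the racks.
        assert new_node_ip in rack2_mgmt and rack2_gateway in rack2_mgmt
        print("Option 2: new subnet in the expansion rack; configure routing between the racks")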
If your immediate or future plans include storage resource sharing using vSAN HCI mesh, be sure to
prepare your data center to meet the following prerequisites:
Dell Technologies recommends that you consider all the ramifications during this planning and
preparation phase, and decide on the single point of management option that will work most effectively for
your organization. Once VxRail initial build has completed the cluster deployment process, the
configuration cannot easily be changed.
The following should be considered for selecting the VxRail vCenter server:
• A vCenter Standard license is included with VxRail, and does not require a separate license. This
license cannot be transferred to another vCenter instance.
• The VxRail vCenter Server can manage only a single VxRail instance. This means an
environment of multiple VxRail clusters with the embedded vCenter instance requires an
equivalent number of points of management for each cluster.
• VxRail Lifecycle Management supports the upgrade of the VxRail vCenter server. Upgrading a
customer-supplied vCenter using VxRail Lifecycle Management is not supported.
• DNS services are required for VxRail. With the VxRail vCenter option, you have the choice of
using the internal DNS supported within the VxRail cluster, or leveraging external DNS in your
data center.
For a customer-supplied vCenter, the following items should be considered:
• The vCenter Standard license included with VxRail cannot be transferred to a vCenter instance
outside of the VxRail cluster.
• Multiple VxRail clusters can be configured on a single customer-supplied vCenter server, limiting
the points of management.
• With the customer-supplied vCenter, external DNS must be configured to support the VxRail
cluster.
• Ensuring version compatibility of the customer-supplied vCenter with VxRail is the responsibility
of the customer.
• With the customer-supplied vCenter, you have the option of configuring the virtual distributed
switch settings yourself and deploying the VxRail cluster against that switch, or having VxRail deploy a
virtual distributed switch and perform the configuration instead. The first option is advantageous if you
want more control over the virtual networking configuration in your data center.
Note: The options to use the internal DNS or to deploy the VxRail cluster against a preconfigured virtual
distributed switch require VxRail version of 7.0.010 or later.
The customer-supplied vCenter server option is more scalable, provides more configuration options, and
is the recommended choice for most VxRail deployments. See the Dell EMC VxRail vCenter Server
Planning Guide for details.
VxRail has predefined logical networks to manage and control traffic within the cluster and outside of the
cluster. Certain VxRail logical networks must be made accessible to the outside community. For instance,
connectivity to the VxRail management system is required by IT management. VxRail networks must be
configured for end-users and application owners who need to access their applications and virtual
machines running in the VxRail cluster. In addition, a network supporting I/O to the vSAN datastore is
required, and a network to support vMotion, which is used to dynamically migrate virtual machines
between VxRail nodes to balance workload, must also be configured. Finally, an internal management
network is required by VxRail for device discovery.
• The internal management network that is used for device discovery does not require assigned IP
addresses.
• Since the external management network must be able to route upstream to network services and
end users, a nonprivate, routable IP address range must be assigned to this network.
• Traffic on the vSAN network is passed only between the VxRail nodes that form the cluster.
Either a routable or non-routable IP address range can be assigned. If your plans include a multi-rack
cluster, and you want to consider a new IP subnet range in the expansion racks, then assign a
routable IP address range to this network.
• If your requirements for virtual machine mobility are within the VxRail cluster, a non-routable IP
address range can be assigned to the vMotion network. However, if you need to enable virtual
machine mobility outside of the VxRail cluster, or have plans for a multi-rack expansion that will
use a different subnet range on any expansion racks, reserve a routable IP address range.
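The bullets above decide which VxRail networks need a routable range and which can remain isolated within the cluster. A minimal sketch of recording that decision, and verifying that the planned subnets do not overlap, is shown below; the ranges are examples only.

    import ipaddress
    from itertools import combinations

    # Example subnets and routability decisions; substitute your own plan.
    planned_ranges = {
        "external_management": (ipaddress.ip_network("172.16.10.0/27"), True),    # must route upstream
        "vsan":                (ipaddress.ip_network("192.168.100.0/27"), False),  # stays within the cluster
        "vmotion":             (ipaddress.ip_network("192.168.101.0/27"), False),  # in-cluster mobility only
    }

    # The subnets must not overlap, regardless of whether they are routed upstream.
    for (name_a, (net_a, _)), (name_b, (net_b, _)) in combinations(planned_ranges.items(), 2):
        assert not net_a.overlaps(net_b), f"{name_a} and {name_b} overlap"

    for name, (net, routable) in planned_ranges.items():
        print(f"{name}: {net} ({'routable' if routable else 'non-routable'})")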
As a first step, the network team and virtualization team should meet in advance to plan VxRail’s network
architecture.
• The virtualization team must meet with the application owners to determine which specific
applications and services that are planned for VxRail are to be made accessible to specific end-
users. This determines the number of logical networks that are required to support traffic from
non-management virtual machines.
• The network team must define the pool of VLAN IDs needed to support the VxRail logical
networks, and determine which VLANs will restrict traffic to the cluster, and which VLANs will be
allowed to pass through the switch up to the core network.
• The network team must also plan to configure the VLANs on the upstream network, and on the
switches attached to the VxRail nodes.
• The network team must also configure routing services to ensure connectivity for external users
and applications on VxRail network VLANs passed upstream.
Before VxRail version 4.7, both external and internal management traffic shared the external
management network. Starting with version 4.7 of VxRail, the external and internal management
networks are broken out into separate networks.
External Management traffic includes all VxRail Manager, vCenter Server, ESXi communications, and in
certain cases, vRealize Log Insight. All VxRail external management traffic is untagged by default and
should be able to go over the Native VLAN on your top-of-rack switches.
A tagged VLAN can be configured instead to support the VxRail external management network. This
option is considered a best practice, and is especially applicable in environments where multiple VxRail
clusters will be deployed on a single set of top-of-rack switches. To support using a tagged VLAN for the
VxRail external management network, configure the VLAN on the top-of-rack switches, and then
configure trunking for every switch port that is connected to a VxRail node to tag the external
management traffic.
The Internal Management network is used solely for device discovery by VxRail Manager during initial
implementation and node expansion. This network traffic is non-routable and is isolated to the top-of-rack
switches connected to the VxRail nodes. Powered-on VxRail nodes advertise themselves on the Internal
Management network using multicast, and are discovered by VxRail Manager. The default VLAN of 3939 is
configured on each VxRail node that is shipped from the factory. This VLAN must be configured on the
switches, and configured on the trunked switch ports that are connected to VxRail nodes.
If a different VLAN value is used for the Internal Management network, it not only must be configured on
the switches, but also must be applied to each VxRail node on-site. Device discovery on this network by
VxRail Manager will fail if these steps are not followed.
It is a best practice to configure a VLAN for the vSphere vMotion and vSAN networks. For these
networks, configure a VLAN for each network on the top-of-rack switches, and then include the VLANs on
the trunked switch ports that are connected to VxRail nodes.
The Virtual Machine networks are for the virtual machines running your applications and services.
Dedicated VLANs are preferred to divide Virtual Machine traffic, based on business and operational
objectives. VxRail creates one or more VM Networks for you, based on the name and VLAN ID pairs that
you specify. Then, when you create VMs in vSphere Web Client to run your applications and services,
you can easily assign the virtual machine to the VM Networks of your choice. For example, you could
have one VLAN for Development, one for Production, and one for Staging.
Network Configuration Table, Row 1: Enter the external management VLAN ID for the VxRail management network (VxRail Manager, ESXi, vCenter Server/PSC, Log Insight). If you do not plan to have a dedicated management VLAN and will accept this traffic as untagged, enter “0” or “Native VLAN.”
Network Configuration Table, Row 2: Enter the internal management VLAN ID for VxRail device discovery. The default is 3939. If you do not accept the default, the new VLAN must be applied to each VxRail node before cluster implementation.
Note: If you plan to have multiple independent VxRail clusters, we recommend using different VLAN IDs
across multiple VxRail clusters to reduce network traffic congestion.
For a 2-Node cluster, the VxRail nodes must connect to the Witness over a separate Witness traffic
separation network. The Witness traffic separation network is not required for stretched-cluster but is
considered a best practice. A VLAN is required to enable this network, and the Witness traffic on this
VLAN must be able to pass through upstream to the Witness site.
• 172.28.0.0/16
• 172.29.0.0/16
• 10.0.0.0/24
• 10.0.1.0/24
You have flexibility in how the IP addresses are assigned to the VxRail management components. If the
VxRail cluster to be deployed is at version 7.0.010 or later, you can either manually assign the IP
addresses to the management components, or have the IP addresses auto-assigned during VxRail initial build.
Before VxRail version 7.0.010, the only supported option was to auto-assign the IP addresses to the
management components. The auto-assignment process allocates IP addresses in sequential order, so a
range must be provided for this method.
The decisions that you make on the final VxRail configuration that is planned for your data center impact
the number of IP addresses you will need to reserve.
• Decide if you want to reserve additional IP addresses in the VxRail management system to
assign to VxRail nodes in the future for expansion purposes in a single rack. When a new node is
added to an existing VxRail cluster, it assigns an IP address from the unused reserve pool, or
prompts you to enter an IP address manually.
• Decide whether you will use the vCenter instance that is deployed in the VxRail cluster, or use an
external vCenter already operational in your data center.
- For VxRail versions 7.0 or later, if you use the vCenter instance that is deployed on the
VxRail cluster, you must reserve an IP address for vCenter. The Platform Service Controller
is bundled into the vCenter instance.
- For VxRail versions earlier than version 7.0, if you have VxRail deploy vCenter, you must
reserve an IP address for the vCenter instance and an IP address for the Platform Service
Controller.
• Decide if you will use vSphere Log Insight that can be deployed in the VxRail cluster.
- For VxRail version 7.0 and earlier, if you choose to use the vCenter instance that is deployed
in the VxRail cluster, you have the option to deploy vSphere Log Insight on the cluster. You
can also choose to connect to an existing syslog server in your data center, or no logging at
all. If you choose to deploy vSphere Log Insight in the VxRail cluster, you need to reserve
one IP address.
- vRealize Log Insight is not an option for deployment during the initial VxRail configuration
process starting in version 7.0.010.
- If you use an external vCenter already operational in your data center for VxRail, vSphere
Log Insight cannot be deployed.
• VxRail supports the Dell EMC ‘call home’ feature, where alerts from the appliance are routed to
customer service. The Secure Remote Services gateway is required to enable alerts from VxRail
to be sent to Dell Technologies customer service.
- Decide whether to use an existing Secure Remote Services gateway in your data center for
‘call-home’, deploy a virtual instance of the Secure Remote Services gateway in the VxRail
cluster for this purpose, or none.
- Reserve one IP address to deploy SRS-VE (Secure Remote Services Virtual Edition) in the
VxRail cluster.
• For a 2-Node Cluster, the VxRail nodes must connect to the Witness over a separate Witness
traffic separation network. For this network, an additional IP address is required for each of the
two VxRail nodes.
- The VxRail nodes must be able to route to the remote site Witness.
- The traffic must be able to pass through the Witness traffic separation VLAN.
Use the following table to determine the number of public IP addresses required for the Management
logical network:
Request your networking team to reserve a subnet range that has sufficient open IP addresses to cover
VxRail initial build and any planned future expansion.
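As a worked example of sizing that reservation, the sketch below tallies the external management addresses implied by the decisions above: one per ESXi host, one for VxRail Manager, and one for each optional component you choose to deploy. The component choices and counts shown are assumptions, not requirements.

    # Example tally of external management IP addresses for an initial four-node cluster.
    node_count = 4

    components = {
        "esxi_hosts": node_count,            # one address per node
        "vxrail_manager": 1,
        "vxrail_vcenter": 1,                 # 0 if using a customer-supplied vCenter
        "platform_services_controller": 0,   # only for VxRail versions earlier than 7.0 with VxRail vCenter
        "log_insight": 1,                    # 0 if using an external syslog server or no logging
        "srs_ve_gateway": 0,                 # 1 if deploying a virtual SRS gateway on the cluster
        "future_node_expansion": 2,          # spare addresses for planned single-rack growth
    }

    total_ips = sum(components.values())
    print(f"Reserve at least {total_ips} addresses on the external management subnet")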
Network Configuration Table, Row 7: Enter the subnet mask for the VxRail External Management network.
Network Configuration Table, Row 8: Enter the gateway for the VxRail External Management network.
Network Configuration Table, Rows 24 and 25: Enter the starting and ending IP addresses for the ESXi hosts - a continuous IP range is required.
If you choose instead to assign the IP addresses to each individual ESXi host, record the IP address for
each ESXi host to be included for VxRail initial build.
Network Configuration Table, Rows 26 and 29: Enter the IP addresses for the ESXi hosts.
Network Configuration Table, Row 14: Enter the permanent IP address for VxRail Manager.
If you are going to deploy vCenter on the VxRail cluster, record the permanent IP address for vCenter
and Platform Service Controller (if applicable). Leave these entries blank if you will provide an external
vCenter for VxRail.
Network Configuration Table, Row 31: Enter the IP address for VxRail vCenter.
Network Configuration Table, Row 33: Enter the IP address for VxRail Platform Service Controller (if applicable).
Record the IP address for Log Insight. Leave this entry blank if you are deploying a version of VxRail at
version 7.0.010 or higher, or if you choose to not deploy Log Insight on VxRail.
Network Configuration Table, Row 59: Enter the IP address for vSphere Log Insight.
Record the two IP addresses for the witness virtual appliance. Leave blank if a witness is not required for
your VxRail deployment.
Network Configuration Table, Row 73: Enter the IP address for the second of the two nodes in the 2-Node Cluster.
Determine the naming format for the hostnames to be applied to the required VxRail management
components: each ESXi host, and VxRail Manager. If you deploy the vCenter Server in the VxRail cluster,
that also requires a hostname. In addition, if you decide to deploy Log Insight in the VxRail cluster, that
needs a hostname as well.
Note: You cannot easily change the hostnames and IP addresses of the VxRail management
components after initial implementation.
Enter the values for building and auto-assigning the ESXi hostnames if this is the chosen method.
Network Configuration Table, Rows 15–19: Enter an example of your desired ESXi host-naming scheme. Be sure to show your desired prefix, separator, iterator, offset, suffix, and domain.
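A hedged sketch of how a prefix, separator, iterator, offset, suffix, and domain can combine into auto-assigned ESXi hostnames is shown below. The sample values and zero-padding are illustrative; the exact composition rules are defined by the VxRail initialization wizard.

    # Illustrative only: compose ESXi hostnames from the naming-scheme fields in Rows 15-19.
    prefix, separator, offset, suffix, domain = "esxi", "-", 1, "", "example.local"
    node_count = 4
    width = 2   # zero-pad the numeric iterator to two digits: 01, 02, ...

    hostnames = [
        f"{prefix}{separator}{offset + i:0{width}d}{suffix}.{domain}"
        for i in range(node_count)
    ]
    print(hostnames)   # ['esxi-01.example.local', 'esxi-02.example.local', ...]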
If the ESXi hostnames will be applied manually, capture the name for each ESXi host that is planned for
the VxRail initial build operation.
Note: You can skip this section if you plan to use an external vCenter Server in your data center for
VxRail. These action items are only applicable if you plan to use the VxRail vCenter Server.
Network Configuration Table, Row 30: Enter an alphanumeric string for the new vCenter Server hostname. The domain that is specified will be appended.
Network Configuration Table, Row 32: Enter an alphanumeric string for the new Platform Services Controller hostname. The domain that is specified will be appended.
Note: You can skip this section if you plan to deploy a VxRail cluster at version 7.0.010 or higher, will use
an external syslog server instead of Log Insight, or will not enable logging.
To deploy Log Insight to the VxRail cluster, the management component must be assigned a hostname.
You can use your own third-party syslog server, use the vRealize Log Insight solution included with
VxRail, or no logging. You can only select the vRealize Log Insight option if you also use the VxRail
vCenter Server.
An NTP server is not required, but is recommended. If you provide an NTP server, vCenter server will be
configured to use it. If you do not provide at least one NTP server, VxRail uses the time that is set on
ESXi host #1 (regardless of whether the time is correct or not).
Note: Ensure that the NTP IP address is accessible from the VxRail External Management Network
which the VxRail nodes will be connected to and is functioning properly.
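One hedged way to confirm that a planned NTP server is reachable from the network where VxRail will be deployed is a minimal SNTP query, sketched below. The server name is an assumed example; use the NTP source you plan to provide to VxRail, and rely on your standard monitoring tools for production checks.

    import socket
    import struct
    from datetime import datetime, timezone

    NTP_SERVER = "pool.ntp.org"    # assumed example; substitute your planned NTP server
    NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def query_ntp(server, timeout=3):
        """Send a minimal SNTP v3 client request and return the server's transmit time (UTC)."""
        packet = b"\x1b" + 47 * b"\0"   # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(512)
        seconds = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
        return datetime.fromtimestamp(seconds, tz=timezone.utc)

    print(query_ntp(NTP_SERVER))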
Network Configuration Table, Row 9: Enter your time zone.
If the internal DNS option is not selected, one or more external, customer-supplied DNS servers are
required for VxRail. The DNS server that you select for VxRail must be able to support naming services
for all the VxRail management components (VxRail Manager, vCenter, and so on).
Note: Ensure that the DNS IP address is accessible from the network to which VxRail is connected and
functioning properly.
Lookup records must be created in your selected DNS for every VxRail management component you are
deploying in the cluster and are assigning a hostname and IP address. These components can include
VxRail Manager, VxRail vCenter Server, VxRail Platform Service Controller, Log Insight, and each ESXi
host in the VxRail cluster. The DNS entries must support both forward and reverse lookups.
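A hedged sketch for validating those records before initialization is shown below: it performs a forward lookup for each management hostname and confirms that the reverse record returns the same name. The hostnames and addresses are placeholders for your own entries.

    import socket

    # Placeholder hostnames and expected addresses; substitute your planned DNS records.
    expected = {
        "vxrail-manager.example.local": "172.16.10.20",
        "vxrail-vcenter.example.local": "172.16.10.21",
        "esxi-01.example.local": "172.16.10.11",
    }

    for hostname, expected_ip in expected.items():
        forward_ip = socket.gethostbyname(hostname)            # forward (A record) lookup
        reverse_name, _, _ = socket.gethostbyaddr(forward_ip)  # reverse (PTR record) lookup
        assert forward_ip == expected_ip, f"{hostname} resolves to {forward_ip}, expected {expected_ip}"
        assert reverse_name.rstrip(".").lower() == hostname.lower(), (
            f"reverse lookup for {forward_ip} returned {reverse_name}")
        print(f"{hostname} -> {forward_ip} -> {reverse_name}: OK")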
Use the VxRail Network Configuration table to determine which VxRail management components to
include in your planned VxRail cluster, and have assigned a hostname and IP address. vMotion and
vSAN IP addresses do not require hostnames or DNS records.
Note: You can skip this section if you plan to use the VxRail vCenter server. These action items are only
applicable if you plan to use a customer-supplied vCenter server in your data center for VxRail.
Certain pre-requisites must be completed before VxRail initialization if you use a customer-supplied
vCenter as the VxRail cluster management platform. During the VxRail initialization process, VxRail connects
to your customer-supplied vCenter to perform the necessary validation and configuration steps to deploy
the VxRail cluster on your vCenter instance.
• Determine if your customer-supplied vCenter server is compatible with your VxRail version.
- See the Knowledge Base article VxRail: VxRail and External vCenter Interoperability Matrix
on the Dell EMC product support site for the latest support matrix.
• Enter the FQDN of your selected, compatible customer-supplied vCenter server in the VxRail
Network Configuration table.
• Determine whether your customer-supplied vCenter server has an embedded or external platform
services controller. If the platform services controller is external to your customer-supplied
vCenter, enter the platform services controller FQDN in the VxRail Network Configuration table.
Network Configuration Table, Row 34: Enter the FQDN of the customer-supplied platform services controller (PSC). Leave this row blank if the PSC is embedded in the customer-supplied vCenter server.
• Decide on the single sign-on (SSO) domain that is configured on the customer-supplied vCenter
you want to use to enable connectivity for VxRail, and enter the domain in the VxRail Network
Configuration Table.
Network Configuration Table, Row 36: Enter the single sign-on (SSO) domain for the customer-supplied vCenter server. (For example, vsphere.local)
• The VxRail initialization process requires login credentials to your customer-supplied vCenter.
The credentials must have the privileges to perform the necessary configuration work for VxRail.
You have two choices:
- Provide vCenter login credentials with administrator privileges.
- Create a new set of credentials in your vCenter for this purpose. Two new roles will be created
and assigned to this user by your Dell Technologies representative.
• A set of credentials must be created in the customer-supplied vCenter for VxRail management
with no permissions and no assigned roles. These credentials are assigned a role with limited
privileges during the VxRail initialization process, and then assigned to VxRail to enable
connectivity to the customer-supplied vCenter after initialization completes.
- If this is the first VxRail cluster on the customer-supplied vCenter, enter the credentials that
you will create in the customer-supplied vCenter.
- If you already have an account for a previous VxRail cluster in the customer-supplied
vCenter, enter those credentials.
• The VxRail initialization process deploys the VxRail cluster under an existing data center in the
customer-supplied vCenter. Create a new data center, or select an existing Data center on the
customer-supplied vCenter.
Network Configuration Table, Row 39: Enter the name of a data center on the customer-supplied vCenter server.
• Specify the name of the cluster that is to be created by the VxRail initialization process in the
selected data center. This name must be unique, and not used anywhere in the data center on
the customer-supplied vCenter.
You can skip this section if your VxRail version is earlier than 7.0.010, or if you do not plan to deploy
VxRail against a customer-supplied virtual distributed switch.
Before VxRail version 7.0.010, if you chose to deploy the VxRail cluster on an external, customer-
supplied vCenter, a virtual distributed switch would be configured on the vCenter instance as part of the
initial cluster build process. The automated initial build process would deploy the virtual distributed switch
adhering to VxRail requirements in the vCenter instance, and then attach the VxRail networks to the
portgroups on the virtual distributed switch.
Starting with VxRail version 7.0.010, if you choose to deploy the VxRail cluster on an external, customer-
supplied vCenter, you have the choice of having the automated initial build process deploy the virtual
distributed switch, or of configuring the virtual network, including the virtual distributed switch, manually
before the initial cluster build process.
• Unless your data center already has a vCenter instance compatible with VxRail, deploy a vCenter
instance that will serve as the target for the VxRail cluster.
• Unless you are connecting the VxRail cluster to an existing virtual distributed switch, configure a
virtual distributed switch on the target vCenter instance.
• Configure a portgroup for each of the required VxRail networks. Dell Technologies recommends
using naming standards that clearly identify the VxRail network traffic type.
• Configure the VLAN assigned to each required VxRail network on the respective portgroup. The
VLANs for each VxRail network traffic type can be referenced in the ‘VxRail Networks’ section in
the VxRail Network Configuration Table.
• Configure two or four uplinks on the virtual distributed switch to support the physical connectivity
for the VxRail cluster.
• Configure the teaming and failover policies for the distributed port groups. Each port group on the
virtual distributed switch is assigned a teaming and failover policy. You can choose a simple
strategy and configure a single policy that is applied to all port groups, or configure a set of
policies to address requirements at the port group level.
Dell Technologies recommends referencing the configuration settings that are applied to the virtual
distributed switch by the automated VxRail initial build process as a baseline. This will ensure a
successful deployment of a VxRail cluster against the customer-supplied virtual distributed switch. The
settings that are used by the automated initial build process can be found in the section Virtual Distributed
Switch Portgroup Default Settings.
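One way to capture that plan before touching the customer-supplied virtual distributed switch is a simple record such as the sketch below. The portgroup names, VLAN IDs, and teaming choices are examples only and should be checked against the baseline settings referenced above.

    # Illustrative planning record for distributed portgroups on a customer-supplied switch.
    portgroup_plan = [
        # (portgroup name, VLAN ID, teaming/failover policy)
        ("vxrail-external-mgmt", 100,  "active: uplink1, standby: uplink2"),
        ("vxrail-vcenter-net",   100,  "active: uplink1, standby: uplink2"),
        ("vxrail-internal-mgmt", 3939, "active: uplink2, standby: uplink1"),
        ("vxrail-vmotion",       201,  "active: uplink2, standby: uplink1"),
        ("vxrail-vsan",          200,  "active: uplink2, standby: uplink1"),
    ]

    for name, vlan, policy in portgroup_plan:
        assert 1 <= vlan <= 4094, f"{name}: VLAN {vlan} is invalid"
        print(f"{name}: VLAN {vlan}, teaming/failover: {policy}")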
Network Configuration Table, Row 41: Enter the name of the portgroup that will enable connectivity for the VxRail external management network.
Network Configuration Table, Row 42: Enter the name of the portgroup that will enable connectivity for the VxRail vCenter Server network.
Network Configuration Table, Row 43: Enter the name of the portgroup that will enable connectivity for the VxRail internal management network.
Network Configuration Table, Row 45: Enter the name of the portgroup that will enable connectivity for the vSAN network.
If your plan is to have more than one VxRail cluster deployed against a single customer-supplied virtual
distributed switch, Dell Technologies recommends establishing a distinctive naming standard for the
distributed port groups. This will ease network management and help distinguish the individual VxRail
networks among multiple VxRail clusters.
Configuring portgroups on the virtual distributed switch for any guest networks you want to have is not
required for the VxRail initial build process. These portgroups can be configured after the VxRail initial
build process is complete. Dell Technologies also recommends establishing a distinctive naming standard
for these distributed port groups.
Customizing the teaming and failover policies can also be performed as a post-deployment operation
instead of as a pre-requisite. VxRail will support the teaming and failover policies that are described in the
section Configure teaming and failover policies for VxRail networks.
Starting with VxRail version 7.0.010, you can choose to have the IP addresses assigned automatically
during VxRail initial build, or manually select the IP addresses for each ESXi host. If the VxRail version is
earlier than 7.0.010, the auto-assignment method by VxRail is the only option.
For the auto-assignment method, the IP addresses for VxRail initial build must be contiguous, with the
specified range in a sequential order. The IP address range must be large enough to cover the number of
ESXi hosts planned for the VxRail cluster. A larger IP address range can be specified to cover for
planned expansion.
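A minimal sketch of the sizing check for the auto-assignment method, using assumed start and end addresses, is shown below.

    import ipaddress

    # Assumed example range for the vMotion network; substitute your own values.
    start = ipaddress.ip_address("192.168.101.11")
    end = ipaddress.ip_address("192.168.101.18")
    planned_nodes = 4

    available = int(end) - int(start) + 1   # contiguous addresses in the range, inclusive
    assert available >= planned_nodes, (
        f"range provides only {available} addresses for {planned_nodes} planned nodes")
    print(f"{available} contiguous addresses available for {planned_nodes} nodes")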
If your plans include expanding the VxRail cluster to deploy nodes in more than one physical rack, you
have the option of whether to stretch the IP subnet for vMotion between the racks, or to use routing
services in your data center instead.
Enter the subnet mask and default gateway. You can use the default gateway assigned to the VxRail
External Management network, or enter a gateway dedicated for the vMotion network.
Starting with VxRail version 7.0.010, you can choose to have the IP addresses assigned automatically
during VxRail initial build, or manually select the IP addresses for each ESXi host. If the VxRail version is
earlier than 7.0.010, the auto-assignment method by VxRail is the only option.
For the auto-assign method, the IP addresses for the initial build of the VxRail cluster must be contiguous,
with the specified range in a sequential order. The IP address range must be large enough to cover the
number of ESXi hosts planned for the VxRail cluster.
You can select the vRealize Log Insight option only if both of the following are true:
• You will deploy the vCenter instance included with VxRail onto the VxRail cluster.
• The VxRail cluster to be deployed is at a version earlier than 7.0.010.
If you use a customer-supplied vCenter server, you can either use your own third-party syslog server, or no
logging. If you choose the vRealize Log Insight option, the IP address that is assigned to Log Insight must
be on the same subnet as the VxRail management network.
Network Configuration Table, Row 62 or Row 63: Enter the IP address for vRealize Log Insight or the hostnames of your existing third-party syslog servers. Leave blank for no logging.
Note: The Dell Technologies service representative will need passwords for the VxRail accounts in this
table. For security purposes, you can enter the passwords during the VxRail initialization process, as
opposed to providing them visibly in a document.
• For ESXi hosts, passwords must be assigned to the ‘root’ account. You can assign a different password to
each ESXi host or apply the same password to all hosts.
Note: Skip this section if you do not plan to enable Dell EMC SmartFabric Services to pass control of
switch configuration to VxRail.
The planning and preparation tasks for the deployment and operations of a VxRail cluster on a network
infrastructure enabled with SmartFabric Services differ from connecting a VxRail cluster to a standard
data center network. The basic settings that are required for the initial buildout of the network
infrastructure with SmartFabric Services are outlined in this section.
Enabling the SmartFabric personality on a Dell Ethernet switch that is qualified for SmartFabric Services
initiates a discovery process for other connected switches with the same SmartFabric personality for the
purposes of forming a unified switch fabric. A switch fabric can start as small as two leaf switches in a
single rack, then expand automatically by enabling the SmartFabric personality on connected spine
switches, and connected leaf switches in expansion racks.
Both the Dell Ethernet switches and VxRail nodes advertise themselves at the time of power-on on this
same internal discovery network. The SmartFabric-enabled network also configures an ‘untagged’ virtual
network on the switch fabric to enable client onboarding through a jump port for access to VxRail
Manager to perform cluster implementation. During VxRail initial configuration through VxRail Manager,
the required VxRail networks are automatically configured on the switch fabric.
• The Dell EMC Open Management Network Interface (OMNI) plug-in must be deployed on the
vCenter instance to support automated switch management after the VxRail cluster is built. The
Dell EMC OMNI vCenter plug-in is required for each Dell EMC switch fabric pair, and requires
network properties to be set during the deployment process.
Network Configuration Table, Rows 64 and 65: Reserve an IP address for out-of-band management of each switch in the SmartFabric-enabled network.
Network Configuration Table, Row 66: Enter the IP address for the Dell EMC OMNI vCenter plug-in.
Network Configuration Table, Row 67: Enter the subnet mask for the Dell EMC OMNI vCenter plug-in.
Network Configuration Table, Row 68: Enter the gateway for the Dell EMC OMNI vCenter plug-in.
For complete details on the settings that are needed during the planning and preparation phase for a
SmartFabric-enabled network, see the ‘Dell EMC VxRail™ with SmartFabric Network Services Planning
and Preparation Guide’ on the Dell Technologies VxRail Technical Guides site.
The VxRail External Management Network should be accessible to your location’s IT infrastructure and
personnel only. IT administrators require access to this network for day-to-day management of the VxRail
cluster, and the VxRail cluster is dependent on outside applications such as DNS and NTP to operate
correctly.
VxRail Virtual Machine Networks support access to applications and software that is deployed on the
virtual machines on the VxRail cluster. While you must create at least one VxRail Virtual Machine network
at VxRail initial implementation, additional VxRail Virtual Machine networks can be added to support the
end-user community. The spine switch must be configured to direct the traffic from these VxRail Virtual
Machine networks to the appropriate end-users.
The VxRail Witness Traffic Separation Network is optional if you plan to deploy a stretched-cluster.
The VxRail Witness traffic separation network enables connectivity between the VxRail nodes with the
witness at an offsite location. The remote-site witness monitors the health of the vSAN datastore on the
VxRail cluster over this network.
Using the VxRail Network Configuration table, perform the following steps:
Step 1. Configure the External Management Network VLAN (Row 1) on the spine switch.
Step 2. Configure all of the VxRail Virtual Machine Network VLANs (Rows 39 and 40) on the spine switch.
Step 3. If applicable, configure the VxRail Witness Traffic Separation Network VLAN (Row 50) on the spine switch.
Note: You can skip this section if you plan to enable Dell EMC SmartFabric Services and extend VxRail
automation to the TOR switch layer.
For the VxRail initialization process to pass validation and build the cluster, you must configure the ports
that VxRail will connect to on your switch before you plug in the VxRail nodes and power them on.
Note: This section provides guidance for preparing and setting up your switch for VxRail. Be sure to
follow your vendor’s documentation for specific switch configuration activities and for best practices for
performance and availability.
The network switch ports that connect to VxRail nodes must allow for pass-through of multicast traffic on
the VxRail Internal Management VLAN. Multicast is not required on your entire network, just on the ports
connected to VxRail nodes.
VxRail creates very little traffic through multicasting for auto-discovery and device management.
Furthermore, the network traffic for the Internal Management network is restricted through a VLAN. You
can choose to enable MLD Snooping and MLD Querier on the VLAN if supported on your switches.
If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier
must be disabled.
For VxRail v4.5.0 and earlier, multicast is required for the vSAN VLAN. One or more network switches
that connect to VxRail must allow for pass-through of multicast traffic on the vSAN VLAN. Multicast is not
required on your entire network, just on the ports connected to VxRail.
VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. There is minimal
impact on network overhead because management traffic is nominal. You can limit multicast traffic by enabling
IGMP Snooping and IGMP Querier. We recommend enabling both IGMP Snooping and IGMP Querier if
your switches support them.
IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces
are connected to hosts or other devices that are interested in receiving this traffic. Using the interface
information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to
avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers
to help manage IGMP membership report forwarding. It also responds to topology change notifications.
IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP
membership reports from active members, and allows updates to group membership tables. By default,
most switches enable IGMP Snooping but disable IGMP Querier. You will need to change the settings if
this is the case.
If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP
Querier must be disabled.
For questions about how your switch handles multicast traffic, contact your switch vendor.
6.1.1.3 Enable uplinks to pass inbound and outbound VxRail network traffic
The uplinks on the switches must be configured to allow passage for external network traffic to
administrators and end-users. This includes the VxRail external management network (or combined
VxRail management network earlier than version 4.7) and Virtual Machine network traffic. The VLANs
representing these networks need to be passed upstream through the uplinks. For VxRail clusters
running at version 4.7 or later, the VxRail internal management network must be blocked from outbound
upstream passage.
If the VxRail vMotion network is going to be configured to be routable outside of the top-of-rack switches,
include the VLAN for this network in the uplink configuration. This is to support the use case where virtual
machine mobility is desired outside of the VxRail cluster.
If you plan to expand the VxRail cluster beyond a single rack, configure the VxRail network VLANs for
either stretched Layer 2 networks across racks, or pass upstream to terminate at Layer 3 routing services
if new subnets will be assigned in expansion racks.
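The uplink guidance above can be summarized as a small planning aid. The Python sketch below assumes hypothetical VLAN IDs (only 3939 is a documented default) and simply derives which VLANs to allow upstream; it does not configure any switch.

# Sketch: derive the VLAN set to allow on the switch uplinks from a hypothetical plan.
# The internal management VLAN is never passed upstream; vMotion and vSAN are included
# only if they must be routable beyond the top-of-rack switches, and the witness VLAN
# only if a vSAN witness is used.
plan = {
    "external_management": 10,        # hypothetical VLAN IDs throughout
    "internal_management": 3939,      # documented default, intentionally excluded
    "vmotion": 20,
    "vsan": 30,
    "vm_networks": [110, 120],
    "witness_traffic_separation": 40,
}

def uplink_vlans(plan, routable_vmotion=False, routable_vsan=False, witness_required=False):
    vlans = {plan["external_management"], *plan["vm_networks"]}
    if routable_vmotion:
        vlans.add(plan["vmotion"])
    if routable_vsan:
        vlans.add(plan["vsan"])
    if witness_required:
        vlans.add(plan["witness_traffic_separation"])
    return sorted(vlans)

print(uplink_vlans(plan, routable_vmotion=True))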
Decide which switch port mode to configure for the ports connecting to VxRail nodes:
• Access mode – The port accepts untagged packets only and assigns them to the single VLAN
configured on the port. This is typically the default mode for all ports. This mode should only be
used to support VxRail clusters in test environments or for temporary usage.
• Trunk mode – When this port receives a tagged packet, it passes the packet to the VLAN
specified in the tag. To accept untagged packets on a trunk port, you must first designate a single
VLAN as the “Native VLAN,” which is the VLAN used for all untagged traffic.
• Tagged-access mode – The port accepts tagged packets only.
If your requirements include using any spare network ports on the VxRail nodes that were not configured
for VxRail network traffic for other use cases, then link aggregation can be configured to support that
network traffic. These can include any unused ports on the network daughter card (NDC) or on the
optional PCIe adapter cards. Updates can be configured on the virtual distributed switch deployed during
VxRail initial build to support the new networks, or a new virtual distributed switch can be configured.
Since the initial virtual distributed switch is under the management and control of VxRail, the best practice
is to configure a separate virtual distributed switch on the vCenter instance to support these networking
use cases.
If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to
VxRail nodes are configured with a setting such as ‘Portfast’, or set as an edge port. These settings set
the port to forwarding state, so no disruption occurs. Because vSphere virtual switches do not support
STP, physical switch ports that are connected to an ESXi host must have a setting such as ‘Portfast’
configured if spanning tree is enabled to avoid loops within the physical switch network.
For all VxRail clusters:
• VxRail Management VLAN (default is untagged/native) ‒ ensure that multicast is enabled on this
VLAN.
For VxRail clusters using version 4.5 or later:
• vSAN VLAN ‒ ensure that unicast is enabled.
For VxRail clusters using versions earlier than 4.5:
• vSAN VLAN ‒ ensure that multicast is enabled. Enabling IGMP snooping and querier is
recommended.
VxRail Logical Networks: Version earlier than 4.7 (left) and 4.7 or later (right)
3. Configure the External Management VLAN (Row 1) on the switch ports. If you entered “Native
VLAN,” set the ports on the switch to accept untagged traffic and tag it to the native management
VLAN ID. Untagged management traffic is the default management VLAN setting on VxRail.
4. For VxRail version 4.7 and later, configure the Internal Management VLAN (Row 2) on the
switch ports.
5. Allow multicast on the VxRail switch ports to support the Internal Management network.
6. Configure a vSphere vMotion VLAN (Row 3) on the switch ports.
7. Configure a vSAN VLAN (Row 4) on the switch ports. For releases prior to VxRail v4.5.0, allow
multicast on this VLAN. For VxRail v4.5.0 and later, allow unicast traffic on this VLAN.
8. Configure the VLANs for your VM Networks (Row 6) on the switch ports.
9. Configure the optional VxRail Witness Traffic Separation VLAN (Row 70) on the switch ports if
required.
10. Configure the switch uplinks to allow the External Management VLAN (Row 1) and VM
Network VLANs (Row 6) to pass through, and optionally the vSphere vMotion VLAN and vSAN
VLAN. If a vSAN witness is required for the VxRail cluster, include the VxRail Witness Traffic
Separation VLAN (Row 70) on the uplinks.
11. Configure the inter-switch links to allow all VLANs to pass through if deploying dual switches.
Confirm the settings on the switch, using the switch vendor instructions for guidance.
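As a cross-check before cabling, the per-port VLAN expectations from steps 3 through 9 can be listed out. In the Python sketch below, the VLAN IDs are hypothetical placeholders except for the documented internal management default of 3939; substitute the values recorded in your VxRail Network Configuration table.

# Sketch: print a checklist of VLANs expected on the node-facing switch ports.
# A value of None models the default untagged/native external management network.
config_table = {
    "External Management (Row 1)": None,
    "Internal Management (Row 2)": 3939,
    "vSphere vMotion (Row 3)": 20,
    "vSAN (Row 4)": 30,
    "VM Networks (Row 6)": [110, 120],
}

for network, vlan in config_table.items():
    vlan_ids = vlan if isinstance(vlan, list) else [vlan]
    for vid in vlan_ids:
        label = "untagged/native" if vid is None else f"VLAN {vid}"
        print(f"Node-facing switch ports: allow {network} as {label}")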
Note: Do not try to plug your workstation/laptop directly into a VxRail server node to connect to the VxRail
management interface for initialization. It must be plugged into your network or switch, and the
workstation/laptop must be logically configured to reach the necessary networks.
A supported web browser is required to access the VxRail management interface. The latest versions of
Firefox, Chrome, and Internet Explorer 10+ are all supported. If you are using Internet Explorer 10+ and
an administrator has set your browser to “compatibility mode” for all internal websites (local web
addresses), you will get a warning message from VxRail. Contact your administrator to whitelist URLs
mapping to the VxRail user interface.
To access the VxRail management interface to perform initialization, you must use the temporary,
preconfigured VxRail initial IP address: 192.168.10.200/24. During VxRail initialization, this IP address
automatically changes to the permanent address you specify, which is assigned to VxRail Manager during
cluster formation.
Initial (temporary) configuration:
  VxRail IP address/netmask: 192.168.10.200/24
  Workstation/laptop example: IP address 192.168.10.150, subnet mask 255.255.255.0, gateway 192.168.10.254
Post-configuration (permanent):
  VxRail IP address/netmask: 10.10.10.100/24
  Workstation/laptop example: IP address 10.10.10.150, subnet mask 255.255.255.0, gateway 10.10.10.254
Your workstation/laptop must be able to reach both the temporary VxRail initial IP address and the
permanent VxRail Manager IP address (Row 26 from VxRail Network Configuration table). VxRail
initialization will remind you that you might need to reconfigure your workstation/laptop network settings to
access the new IP address.
It is best practice to give your workstation/laptop or your jump server two IP addresses on the same
network port, which allows for a smoother experience. Depending on your workstation/laptop, this can be
implemented in several ways (such as dual-homing or multi-homing). Otherwise, change the IP address
on your workstation/laptop when instructed to and then return to VxRail Manager to continue with the
initialization process.
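The dual-address recommendation can be checked quickly with Python's standard ipaddress module. The addresses below are taken from the example table above and are examples only.

import ipaddress

# Sketch: confirm that the workstation/laptop holds an address on the same subnet as
# both the temporary VxRail address and the planned permanent VxRail Manager address.
temporary_vxrail = ipaddress.ip_interface("192.168.10.200/24")   # preconfigured initial address
permanent_vxrail = ipaddress.ip_interface("10.10.10.100/24")     # example permanent address

workstation_addresses = [
    ipaddress.ip_interface("192.168.10.150/24"),
    ipaddress.ip_interface("10.10.10.150/24"),
]

for target in (temporary_vxrail, permanent_vxrail):
    on_subnet = any(addr.network == target.network for addr in workstation_addresses)
    status = "directly reachable" if on_subnet else "requires routing or reconfiguration"
    print(f"{target.ip} ({target.network}): {status}")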
Note: If a custom VLAN ID will be used for the VxRail management network other than the default “Native
VLAN”, ensure the workstation/laptop can also access this VLAN.
Before coming on-site, the Dell Technologies service representative will have contacted you to
capture and record the information that is described in the VxRail Network Configuration Table and
walk through the VxRail Setup Checklist.
If your planned VxRail deployment requires a Witness at a remote data center location, the
Witness virtual appliance is deployed.
If your planned deployment includes the purchase of Dell Ethernet switches and professional
services to install and configure the switches to support the VxRail cluster, that activity is
performed before VxRail deployment activities commence.
Install the VxRail nodes in a rack or multiple racks in the data center. If Dell professional services
are not installing the switches, install the network switches supporting the VxRail cluster into the
same racks for ease of management.
Attach Ethernet cables between the ports on the VxRail nodes and switch ports that are configured
to support VxRail network traffic.
Power on three or four initial nodes to form the initial VxRail cluster. Do not turn on any other
VxRail nodes until you have completed the formation of the VxRail cluster with the first three or
four nodes.
Connect a workstation/laptop configured for VxRail initialization to access the VxRail external
management network on your selected VLAN. It must be either plugged into the switch or able to
logically reach the VxRail external management VLAN from elsewhere on your network.
Open a browser to the VxRail initial IP address to begin the VxRail initialization process.
The Dell Technologies service representative will populate the input screens on the menu with the
data collected and recorded in the VxRail Network Configuration table.
If you have enabled Dell EMC SmartFabric Services, VxRail will automatically configure the
switches that are connected to VxRail nodes using the information populated on the input screens.
VxRail performs the verification process, using the information input into the menus.
After validation is successful, the initialization process will begin to build a new VxRail cluster.
VxRail will apply a default teaming and failover policy for each VxRail network during the initial build
operation.
• The default load-balancing policy is ‘Route based on originating virtual port’ for all VxRail network
traffic.
• The default network failure detection setting is ‘link status only’. This setting should not be
changed. VMware recommends having 3 or more physical NICs in the team for ‘beacon probing’
to work correctly.
• The setting for ‘Notify switches’ is set to ‘Yes’. This instructs the virtual distributed switch to notify
the adjacent physical switch of a failover.
• The setting for ‘Failback’ is set to ‘Yes’. If the uplinks are in an active-standby configuration, a
failed active adapter that recovers and comes back online resumes its active role, and the standby
adapter returns to standby.
• The failover order for the uplinks is dependent on the VxRail network configured on the portgroup.
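For documentation or audit purposes, the defaults described above can be recorded as plain data, as in the hedged Python sketch below; this is not a vSphere API call and does not change any cluster setting.

# Sketch: the default VxRail teaming and failover policy values described above.
DEFAULT_TEAMING_POLICY = {
    "load_balancing": "Route based on originating virtual port",
    "network_failure_detection": "Link status only",   # should not be changed
    "notify_switches": True,
    "failback": True,
    # The failover order varies per portgroup; see the Default failover order policy section.
}

def diff_from_default(observed):
    """Return settings that differ from the documented VxRail defaults."""
    return {key: (DEFAULT_TEAMING_POLICY[key], value)
            for key, value in observed.items()
            if key in DEFAULT_TEAMING_POLICY and DEFAULT_TEAMING_POLICY[key] != value}

print(diff_from_default({"failback": False, "notify_switches": True}))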
Default VDS teaming and failover policy for vSAN network configured with 2 VxRail
ports
• Route based on originating virtual port – After the virtual switch selects an uplink for a virtual
machine or VMkernel adapter, it always forwards traffic through the same uplink. This option
makes a simple selection based on the available physical uplinks. However, this policy does not
attempt to load balance based on network traffic.
• Route based on source MAC hash – The virtual switch selects an uplink for a virtual machine
based on the virtual machine MAC address. While it requires more resources than using the
originating virtual port, it has more flexibility in uplink selection. This policy does not attempt to
load balance based on network traffic analysis.
• Use explicit failover order – Always use the highest order uplink that passes failover detection
criteria from the active adapters. No actual load balancing is performed with this option.
• Route based on physical NIC load – The virtual switch monitors network traffic and makes
adjustments on overloaded uplinks by moving traffic to another uplink. This option does use
additional resources to track network traffic.
VxRail does not support the ‘Route based on IP Hash’ policy, as there is a dependency on the logical link
setting of the physical port adapters on the switch, and link settings such as static port channels, LAGs,
and LACP are not supported with VxRail.
Starting with VxRail version 7.0.010, the ‘Failover Order’ setting on the teaming and failover policy on the
VDS portgroups supporting VxRail networks can be changed. The default failover order for the uplinks on
each portgroup configured during VxRail initial build is described in the section Default failover order
policy. For any portgroup configured during VxRail initial build to support VxRail network traffic, an uplink
in ‘Standby’ mode can be moved into ‘Active’ mode to enable an ‘Active/Active’ configuration. This action
can be performed after the VxRail cluster has completed the initial build operation.
Moving an uplink that is configured as ‘Unused’ for a portgroup supporting VxRail network traffic into
either ‘Active’ mode or ‘Standby’ mode does not automatically activate the uplink and increase bandwidth
for that portgroup. Bandwidth optimization is dependent on the load-balancing settings on the upstream
switches, and link aggregation is not supported on the switch ports that are configured to support
VxRail network traffic.
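The distinction above can be illustrated with a small model of a portgroup failover order: a standby uplink may be promoted to active, while relabeling an unused uplink does not add bandwidth. This is a conceptual Python sketch only; the actual change is made through vCenter on clusters running VxRail 7.0.010 or later.

# Sketch: model of a portgroup failover order and a standby-to-active promotion.
failover_order = {
    "active":  ["uplink1"],
    "standby": ["uplink2"],
    "unused":  ["uplink3", "uplink4"],
}

def promote_standby(order, uplink):
    """Move a standby uplink into the active list for an Active/Active configuration."""
    if uplink not in order["standby"]:
        raise ValueError(f"{uplink} is not in standby mode")
    order["standby"].remove(uplink)
    order["active"].append(uplink)
    return order

print(promote_standby(failover_order, "uplink2"))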
VxRail node with NDC ports and ports from optional PCIe adapter card
Network redundancy across NDC and PCIe Ethernet ports can be enabled by reconfiguring the VxRail
networks. The table below describes the supported starting and ending network reconfigurations.
Note: You must follow the official instructions/procedures from VMware and Dell Technologies for these
operations.
• The first port configured for VxRail networking, commonly known as ‘vmnic0’ or ‘vmnic1’, must be
reserved for VxRail management and node discovery. Do not migrate VxRail management or
node discovery off of this first reserved port.
• The network reconfiguration requires a one-to-one swap. For example, a VxRail network that is
currently running on two NDC ports can be reconfigured to run on one NDC port and one PCIe
port. The network cannot be reconfigured to swap one NDC port for two PCIe ports. (A validation
sketch for these rules follows this list.)
• The number of ports reserved per node during VxRail initial build (either 2 or 4) cannot be altered
by a reconfiguration across NDC and PCIe ports.
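The constraints in this list can be expressed as a simple validation, as sketched below in Python. The port names are hypothetical; the check only encodes the documented rules that the first reserved port stays in place and that the swap is one-to-one.

# Sketch: validate a proposed NDC/PCIe port reconfiguration against the rules above.
def valid_reconfiguration(current_ports, proposed_ports, reserved="vmnic0"):
    if reserved not in proposed_ports:
        return False, f"{reserved} must remain reserved for management and node discovery"
    if len(proposed_ports) != len(current_ports):
        return False, "reconfiguration must be a one-to-one swap (same number of ports)"
    return True, "proposed layout satisfies the documented constraints"

# Example: swap the second NDC port for a hypothetical PCIe port on a 2-port configuration.
print(valid_reconfiguration(["vmnic0", "vmnic1"], ["vmnic0", "vmnic4"]))
print(valid_reconfiguration(["vmnic0", "vmnic1"], ["vmnic0", "vmnic4", "vmnic5"]))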
The following operations are unsupported in versions earlier than VxRail 7.0.010:
• Migrating or moving VxRail system traffic to the optional ports. VxRail system traffic includes the
management, vSAN, vCenter Server, and vMotion Networks.
• Migrating VxRail system traffic to other port groups.
• Migrating VxRail system traffic to another VDS.
Note: Performing any unsupported operations will impact the stability and operations of the VxRail
cluster, and may cause a failure in the VxRail cluster.
VxRail Network Configuration table (excerpt):
Row 4   vSAN VLAN
Row 6   VM Network VLAN(s)
Row 7   VxRail Management – Subnet Mask: Subnet mask for VxRail External Management Network
Row 8   VxRail Management – Default Gateway: Default gateway for VxRail External Management Network
Row 13  VxRail Manager – Hostname
Row 14  VxRail Manager – IP Address
Row 15  ESXi Hostnames, VxRail auto-assign method – Prefix
Row 16  ESXi Hostnames, VxRail auto-assign method – Separator
Row 17  ESXi Hostnames, VxRail auto-assign method – Iterator
Row 18  ESXi Hostnames, VxRail auto-assign method – Offset
Row 19  ESXi Hostnames, VxRail auto-assign method – Suffix
Row 20  ESXi Hostnames, customer-supplied method – ESXi hostname 1
Row 21  ESXi Hostnames, customer-supplied method – ESXi hostname 2
Row 22  ESXi Hostnames, customer-supplied method – ESXi hostname 3
Row 23  ESXi Hostnames, customer-supplied method – ESXi hostname 4
Row 24  ESXi IP Addresses, VxRail auto-assign method – Starting IP Address
Row 25  ESXi IP Addresses, VxRail auto-assign method – Ending IP Address
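Rows 15 through 19 supply the inputs for the ESXi hostname auto-assign method. The Python sketch below shows how a prefix, separator, numeric iterator, offset, and suffix might combine into hostnames; all values are hypothetical examples, not VxRail defaults.

# Sketch: illustrate the ESXi hostname auto-assign inputs (numeric iterator assumed).
def build_hostnames(prefix, separator, offset, suffix, domain, count):
    """Combine the auto-assign inputs into fully qualified ESXi hostnames."""
    return [f"{prefix}{separator}{offset + i}{suffix}.{domain}" for i in range(count)]

print(build_hostnames("vxrail-esxi", "-", 1, "", "example.local", 4))
# ['vxrail-esxi-1.example.local', 'vxrail-esxi-2.example.local',
#  'vxrail-esxi-3.example.local', 'vxrail-esxi-4.example.local']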
VxRail cluster: Decide if you want to plan for additional nodes beyond the initial three (or four)-node
cluster. You can have up to 64 nodes in a VxRail cluster.
VxRail ports: Decide how many ports to configure per VxRail node, what port type, and what network
speed.
Network switch: Ensure that your switch supports VxRail requirements and provides the connectivity
option that you chose for your VxRail nodes. Verify cable requirements. Decide if you will have a single or
multiple switch setup for redundancy.
Data center: Verify that the required external applications for VxRail are accessible over the network and
correctly configured.
Topology: If you are deploying VxRail over more than one rack, be sure that network connectivity is set
up between the racks. Determine the Layer 2/Layer 3 boundary in the planned network topology.
Workstation/laptop: Any operating system with a browser to access the VxRail user interface. The latest
versions of Firefox, Chrome, and Internet Explorer 10+ are all supported.
Out-of-band Management (optional): One available port that supports 1 Gb for each VxRail node.
Logical Network
Reserve VLANs:
• One external management VLAN for traffic from VxRail, vCenter Server, and ESXi
• One internal management VLAN with multicast for auto-discovery and device management (the default is 3939)
• One VLAN with unicast (starting with VxRail v4.5.0) or multicast (prior to v4.5.0) for vSAN traffic
• One VLAN for vSphere vMotion
• One or more VLANs for your VM Guest Networks
• If you are enabling witness traffic separation, one VLAN for the VxRail witness traffic separation network
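When reserving these VLANs, it helps to confirm that the planned IDs are unique and valid. In the Python sketch below, only the internal management default of 3939 reflects the document; the other VLAN IDs are hypothetical.

# Sketch: confirm that reserved VLAN IDs are unique and within the valid range (1-4094).
reserved = {
    "external management": 10,
    "internal management": 3939,
    "vSAN": 30,
    "vSphere vMotion": 20,
    "VM guest network 1": 110,
    "witness traffic separation": 40,   # only if witness traffic separation is enabled
}

out_of_range = {name: vid for name, vid in reserved.items() if not 1 <= vid <= 4094}
duplicates = {vid for vid in reserved.values() if list(reserved.values()).count(vid) > 1}

print("Out of range:", out_of_range or "none")
print("Duplicate VLAN IDs:", duplicates or "none")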
System:
• Select the Time zone
• Select the Top-Level Domain
• Hostname or IP address of the NTP servers on your network (recommended)
• IP address of the DNS servers on your network (if external DNS)
• Forward and reverse DNS records for VxRail management components (if external DNS)
Management:
• Decide on your VxRail host naming scheme. The naming scheme will be applied to all VxRail
management components.
• Reserve three or more IP addresses for ESXi hosts.
• Reserve one IP address for VxRail Manager.
• Determine the default gateway and subnet mask.
• Select passwords for VxRail management components.
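The management reservations above should all fall within the management subnet you chose, together with the default gateway. A quick check with Python's standard ipaddress module, using example addresses only:

import ipaddress

# Sketch: confirm the reserved ESXi, VxRail Manager, and gateway addresses sit in the
# chosen management subnet. All addresses are examples.
management_network = ipaddress.ip_network("10.10.10.0/24")
reservations = {
    "ESXi host 1": "10.10.10.101",
    "ESXi host 2": "10.10.10.102",
    "ESXi host 3": "10.10.10.103",
    "VxRail Manager": "10.10.10.100",
    "Default gateway": "10.10.10.254",
}

for name, address in reservations.items():
    inside = ipaddress.ip_address(address) in management_network
    print(f"{name} ({address}): {'OK' if inside else 'outside ' + str(management_network)}")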
The VxRail cluster needs to be able to connect to specific applications in your data center. DNS is
required, and NTP is optional. Open the necessary ports to enable connectivity to the external syslog
server, and for LDAP and SMTP.
Open the necessary firewall ports to enable IT administrators to deploy the VxRail cluster.
Administration Access
ESXi Management Interface:
  Source: Administrators
  Destination: Host ESXi Management
  Protocol and ports: TCP, UDP 902
VxRail Management GUI/Web Interfaces:
  Source: Administrators
  Destination: VMware vCenter Server, VxRail Manager, Host ESXi Management, Dell iDRAC port, vRealize Log Insight, PSC
  Protocol and ports: TCP 80, 443
Dell server management:
  Source: Administrators
  Destination: Dell iDRAC
  Protocol and ports: TCP 623, 5900, 5901
SSH and SCP:
  Source: Administrators
  Destination: Host ESXi Management, vCenter Server Appliance, Dell iDRAC port, VxRail Manager Console
  Protocol and ports: TCP 22
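As an optional pre-deployment aid, TCP reachability of these administration-access ports can be probed from a workstation. The Python sketch below uses only the standard socket module; the hostnames are placeholders for your own management endpoints, and the UDP portion of port 902 is not covered by this TCP-only check.

import socket

# Sketch: probe TCP reachability for the administration-access ports listed above.
checks = [
    ("esxi-host.example.local", 902),    # ESXi management (TCP portion only)
    ("vcenter.example.local", 443),      # vCenter Server / VxRail Manager web interfaces
    ("idrac.example.local", 5900),       # Dell iDRAC
    ("esxi-host.example.local", 22),     # SSH/SCP
]

for host, port in checks:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as error:
        print(f"{host}:{port} not reachable ({error})")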
Other firewall port settings may be necessary depending on your data center environment.
Default settings for the port groups on the VxRail virtual distributed switch:
Port Binding: Static
Port Allocation: Elastic
Number of ports: 8
Network Resource Pool: (default)
Override port policies: Only ‘Block ports’ allowed
VLAN Type: VLAN
Promiscuous mode: Reject
MAC address changes: Reject
Forged transmits: Reject
Ingress traffic shaping: Disabled
Egress traffic shaping: Disabled
NetFlow: Disabled
Block All Ports: No
Default teaming and failover policy settings:
Load Balancing: Route based on originating virtual port
Network failure detection: Link status only
Notify switches: Yes
Failback: Yes
NIOC Shares
Traffic Type          4 Ports    2 Ports
Management Traffic    40         20
vMotion Traffic       50         50
The reservation value is set to zero for all network traffic types, with no limits set on bandwidth.
Traffic Type    Uplink1 (10/25 GbE)    Uplink2 (10/25 GbE)    Uplink3     Uplink4
                VMNIC0                 VMNIC1                 No VMNIC    No VMNIC
VxRail nodes with two 10gb NDC ports connected to 2 TOR switches, and one optional
connection to management switch for iDRAC
This connectivity option is the simplest to deploy. It is suitable for smaller, less demanding workloads that
can tolerate the NDC as a potential single point of failure.
If the NDCs on the VxRail nodes are shipped with four Ethernet ports, you can choose to reserve either
two or four of those ports on the VxRail nodes to support the networking workload on the VxRail cluster. If
you choose to use only two Ethernet ports, the remaining ports can be used for other use cases.
If you are deploying VxRail with 1gb Ethernet ports, then you must connect four Ethernet ports to support
VxRail networking.
In this option, the VxRail networking workload on the NDC ports is split between the two switches, and
the workload on the PCIe-based ports is also split between the two switches. This option protects against
a loss of service not only from a failure at the switch level, but also from a failure in either the NDC or the
PCIe adapter card.
This option offers the same benefits as the 2x10gb NDC and 2x10gb PCIe deployment option, with
additional bandwidth available to support the workload on the VxRail cluster. If additional Ethernet
connectivity is required to support other use cases, then additional slots on the VxRail nodes must be
reserved for PCIe adapter cards. If this is a current requirement or potential future requirement, then be
sure to select a VxRail node model with sufficient PCIe slots to accommodate the additional adapter
cards.
Be aware that the cabling for the 25gb option with NDC ports and PCIe ports differs from the 10gb option.
Note that the second port on the PCIe adapter cards is paired with the first port on the NDC on the first
switch, and the first port on the PCIe adapter is paired with the second port on the NDC on the second
switch. This is to ensure balancing of the VxRail networks between the switches in the event of a failure
at the network port layer.
This is an optional cabling setup for a 2x10gb NDC and 2x10gb PCIe deployment, where both ports from
either the NDC or the PCIe card are connected to the same TOR switch. The difference with this cabling
option as opposed to splitting the cabling from the NDC and PCIe ports between switches is that in the
event of failure of a node network port, all VxRail networking will flow to one TOR switch until the problem
is resolved.
For workload use cases with extreme availability, scalability and performance requirements, four TOR
switches can be positioned to support VxRail networking. In this example, each Ethernet port is
connected to a single TOR switch.
The upstream switches for TOR 3 and TOR 4 are optional because those TORs carry only vSAN and
vMotion traffic, which might not need access to the rest of the network.
1/10/25 GbE TOR switches are supported. The witness runs on a host separate from the 2-node cluster
and must be routable from both TOR switches.