
Planning Guide

Dell EMC VxRail Network Planning Guide


Physical and Logical Network Considerations and Planning

Abstract
This is a planning and consideration guide for VxRail™ Appliances. It can be
used to better understand the networking requirements for a VxRail
implementation. This document does not replace the requirement for
implementation services with VxRail Appliances and should not be used to
implement networking for VxRail Appliances.

October 2020

H15300.14
Revision history

Date Description
April 2019 First inclusion of this version history table;
added support for VMware Cloud Foundation on VxRail
June 2019 Added support for VxRail 4.7.210 and updates to 25 GbE networking

August 2019 Added support for VxRail 4.7.300 with Layer 3 VxRail networks

February 2020 Added support for new features in VxRail 4.7.410

March 2020 Added support for new functionality in vSphere 7.0

April 2020 Added support for:
• VxRail SmartFabric multi-rack switch network
• Optional 100 GbE Ethernet and FC network ports on VxRail nodes
May 2020 Updated switch requirements for VxRail IPv6 multicast

June 2020 Updated networking requirements for multi-rack VxRail clusters

July 2020 Added support for new features in VxRail 7.0.010

August 2020 Updated requirement for NIC redundancy enablement

September 2020 Outlined best practices for link aggregation on non-VxRail ports

October 2020 Added support for new features in VxRail 7.0.100

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software that is described in this publication requires an applicable software license.

Copyright © 2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC, and other trademarks are trademarks of Dell Inc. or
its subsidiaries. Other trademarks may be trademarks of their respective owners. [10/27/2020] [Planning Guide] [H15300.14]

Table of contents
1 Introduction to VxRail
2 Planning Your Data Center Network for VxRail
2.1 VxRail hardware and the physical network infrastructure
2.2 VxRail clusters, appliances, and nodes
2.3 Network switch
2.4 Data center network
2.5 VxRail Ethernet port options
2.6 VxRail Ethernet adapter options
2.7 VxRail node connectivity options
2.8 VxRail networking rules and restrictions
2.9 Topology and connections
2.10 Out-of-band management (optional)
3 VxRail Feature-Driven Decision Points
3.1 Software-defined data center
3.2 Dell EMC SmartFabric network mode
3.3 vSphere with Kubernetes on VxRail
3.4 vSAN stretched-cluster
3.5 2-Node cluster
4 VxRail Hardware and Switch Selection Decision Points
5 Planning the VxRail Implementation
5.1 Plan data center routing services
5.2 Plan for multi-rack VxRail cluster
5.3 Plan for a vSAN HCI mesh topology
5.4 Decide on VxRail single point of management
5.5 Plan the VxRail logical network
5.6 Plan network exclusions reserved for VxRail Manager
5.7 Plan network settings for VxRail management components
5.8 Identify IP addresses for VxRail management components
5.9 Select hostnames for VxRail management components
5.10 Identify external applications and settings for VxRail
5.11 Prepare customer-supplied vCenter server
5.12 Prepare customer-supplied virtual distributed switch
5.13 Reserve IP addresses for VxRail vMotion network
5.14 Reserve IP addresses for VxRail vSAN network
5.15 Decide on VxRail logging solution
5.16 Assign passwords for VxRail management
5.17 Prepare for Dell EMC SmartFabric Services enablement
6 Configure the Upstream Network for VxRail
6.1 Setting up the network switch for VxRail connectivity
6.2 Confirm your data center network
6.3 Confirm your firewall settings
6.4 Confirm your data center environment
7 Preparing to Build the VxRail Cluster
7.1 Configuring a workstation/laptop for VxRail initialization
7.2 Perform initialization to create a VxRail cluster
8 Additional VxRail Network Considerations
8.1 Configure teaming and failover policies for VxRail networks
8.2 Support for NSX
8.3 Using unassigned VxRail physical ports
A VxRail Network Configuration Table
B VxRail Passwords
C VxRail Setup Checklist
D VxRail Open Ports Requirements
E Virtual Distributed Switch Portgroup Default Settings
E.1 Default standard settings
E.2 Default teaming and failover policy
E.3 Default network I/O control (NIOC)
E.4 Default failover order policy
F Physical Network Switch Wiring Examples

Intended use and audience
This guide discusses the essential network details for VxRail deployment planning purposes only. It
introduces best practices, recommendations, and requirements for both physical and virtual network
environments. This document has been prepared for anyone that is involved in planning, installing, and
maintaining VxRail, including Dell Technologies field engineers, and customer system and network
administrators. This guide should not be used to perform the actual installation and set-up of VxRail. Work
with your Dell Technologies service representative to perform the actual installation.

1 Introduction to VxRail
Dell EMC VxRail™ Appliances are a hyperconverged infrastructure (HCI) solution that consolidates
compute, storage, and network into a single, highly available, unified system. With careful planning,
VxRail Appliances can be rapidly deployed into an existing data center environment, and the end-product
is immediately available to deploy applications and services.

VxRail is not a server. It is an appliance that is based on a collection of nodes and switches that are
integrated as a cluster under a single point of management. All physical compute, network, and storage
resources in the appliance are managed as a single shared pool and allocated to applications and
services based on customer-defined business and operational requirements.

The compute nodes are based on Dell EMC PowerEdge servers. The G Series consists of up to four
nodes in a single chassis, whereas all other models are based on a single node. An Ethernet switch is
required, at speeds of either 1/10/25 GbE, depending on the VxRail infrastructure deployed. A
workstation/laptop for the VxRail user interface is also required.

VxRail has a simple, scale-out architecture, leveraging VMware vSphere® and VMware vSAN™ to
provide server virtualization and software-defined storage, with simplified deployment, upgrades, and
maintenance through VxRail Manager. Fundamental to the VxRail clustered architecture is network
connectivity. It is through the logical and physical networks that individual nodes act as a single system
providing scalability, resiliency, and workload balance.

The VxRail software bundle is preloaded onto the compute nodes, and consists of the following
components (specific software versions not shown):

• VxRail Manager
• VMware vCenter Server™
• VMware vRealize Log Insight™
• VMware vSAN
• VMware vSphere
• Dell EMC Secure Remote Services (SRS)/VE
Licenses are required for VMware vSphere and VMware vSAN. The vSphere licenses can be purchased
from Dell Technologies, VMware, or your preferred VMware reseller partner.

The VxRail Appliances also include the following licenses for software that can be downloaded, installed,
and configured:
• Dell EMC RecoverPoint for Virtual Machines (RP4VM)
• 5 full VM licenses per single node (E, V, P, D, and S series)
• 15 full VM licenses for the G Series chassis

2 Planning Your Data Center Network for VxRail
The network considerations for VxRail are no different than those of any enterprise IT infrastructure:
availability, performance, and extensibility. VxRail Appliances are delivered to your data center ready for
deployment. The nodes in the appliance can attach to any compatible network infrastructure at 1/10/25
GbE speeds with either RJ45 or SFP+ ports. Models with single processors can attach to compatible 1
GbE network infrastructure. Most production VxRail network topologies use dual top-of-the-rack (ToR)
switches to eliminate the switch as a single point of failure. This document guides you through the key
phases and decision points for a successful VxRail implementation. The key phases are:
1. Select the VxRail hardware and physical network infrastructure that best aligns with your business
and operational objectives.
2. Plan and prepare for VxRail implementation in your data center before product delivery.
3. Set up the network switch infrastructure in your data center for VxRail before product delivery.
4. Prepare for physical installation and VxRail initialization into the final product.

Note: Follow all the guidance and decision points described in this document; otherwise, VxRail will not
implement properly, and it will not function correctly in the future. If you have separate teams for network
and servers in your data center, you must work together to design the network and configure the
switches.

2.1 VxRail hardware and the physical network infrastructure


VxRail nodes connect to one or more network switches, with the final product forming a VxRail cluster.
VxRail communicates with the physical data center network through a virtual distributed switch that is
deployed in the VxRail cluster. The virtual distributed switch and physical network infrastructure
integration provide connectivity for the virtual infrastructure, and enable virtual network traffic to pass
through the physical switch infrastructure. In this relationship, the physical switch infrastructure serves as
a backplane, supporting network traffic between virtual machines in the cluster, and enabling virtual
machine mobility and resiliency. In addition, the physical network infrastructure enables I/O operations
between the storage objects in the VxRail vSAN datastore, and provides connectivity to applications and
end-users outside of the VxRail cluster.

This section describes the physical components and selection criteria for VxRail clusters:
• VxRail clusters, appliances, and nodes
• Network switch
• Data Center Network
• Topology and connections
• Workstation/laptop
• Out-of-band management (optional)

2.2 VxRail clusters, appliances, and nodes


A VxRail appliance consists of a set of server nodes that are designed and engineered for VxRail. A
VxRail physical node starts as a standard Dell PowerEdge server. The Dell PowerEdge server next goes
through a manufacturing process following VxRail product engineering specifications to produce a VxRail
node ready for shipment. A set of prepared VxRail nodes is delivered to the customer site based on a
purchase order. The set of VxRail nodes is delivered ready for data center installation and connectivity
into the data center network infrastructure.

Once the data center installation and network connectivity are complete, and the equipment is powered
on, the VxRail management interface is used to perform the initialization process, which forms the final
product: a VxRail cluster.

A standard VxRail cluster starts with a minimum of three nodes and can scale to a maximum of 64 nodes.
The selection of the VxRail nodes to form a cluster is primarily driven by planned business use cases,
and factors such as performance and capacity. Six series of VxRail models are offered, each targeting
specific objectives:

VxRail Series Target Objective


E-Series Balanced Compute and Storage, Space Optimized (1U1N chassis)
V-Series Virtual Desktop Enablement
P-Series High Performance
S-Series Storage Dense
G-Series Compute Dense, Space Optimized (2U4N chassis)
D-Series Durable, ruggedized, short-depth platforms that are designed to
withstand extreme conditions

Each VxRail model series offers choices for network connectivity. The following figures show some of the
physical network port options for the VxRail models.

Back view of VxRail E-Series on Dell 14th Generation PowerEdge server

Back view of VxRail V-, P-, S-Series on Dell 14th Generation PowerEdge server

Back view of VxRail G-Series on Dell 14th Generation PowerEdge server

In addition to network connectivity, review the physical power, space, and cooling requirements for your
planned infrastructure to ensure data center compatibility.

2.3 Network switch


A VxRail cluster depends on adjacent Ethernet switches, commonly referred to as ‘top-of-rack’ switches,
to support cluster operations. VxRail is broadly compatible with most Ethernet switches on the market.
For best results, select a switch platform that meets the operational and performance criteria for your
planned use cases.

2.3.1 VxRail’s relationship with the Ethernet switch


The VxRail product does not have a backplane, so the adjacent ‘top-of-rack’ switch enables all
connectivity between the nodes that comprise a VxRail cluster. All the networks (management, storage,
virtual machine movement, guest networks) configured within the VxRail cluster depend on the ‘top-of-
rack’ switches for physical network transport between the nodes, and upstream to data center services
and end-users.

The network traffic that is configured in a VxRail cluster is Layer 2. VxRail is architected to enable
efficiency with the physical ‘top-of-rack’ switches through the assignment of virtual LANs (VLANs) to
individual VxRail Layer 2 networks in the cluster. This functionality eases network administration and
integration with the upstream network.

2.3.2 VxRail node discovery and the Ethernet switch


One specific network, which is known as the ‘VxRail internal management network’, depends on
multicasting services on the ‘top-of-rack’ switches for node discovery and cluster deployment purposes.
Through the VLAN assignment, the flooding of Layer 2 multicast traffic is limited only to the interfaces that
belong to that VLAN, except for the interface that is the source of the multicast traffic.

A common Ethernet switch feature, Multicast Listener Discovery (MLD) snooping and querier, is designed
to constrain the flooding of multicast traffic by examining MLD messages and then forwarding multicast
traffic only to interested interfaces. Since the traffic on this node discovery network is already constrained
through the configuration of this VLAN on the ports supporting the VxRail cluster, this setting may provide
some incremental efficiency benefits, but does not negatively impact network efficiency.
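If you want to verify during switch validation that multicast traffic is actually being delivered on the node
discovery VLAN, a short listener script can help. The following Python sketch joins an IPv6 multicast group
on a chosen interface and prints anything it receives; the group address, port, and interface name are
placeholders for a generic validation exercise, not the actual values used by VxRail discovery.

```python
import socket
import struct

# Placeholder values for a validation exercise; these are NOT the actual
# group, port, or interface used by VxRail node discovery.
GROUP = "ff02::1"
PORT = 9000
IFACE = "eth0"

ifindex = socket.if_nametoindex(IFACE)

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# Join the IPv6 multicast group on the selected interface.
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", ifindex)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

print(f"Listening on [{GROUP}]:{PORT} via {IFACE} ...")
data, addr = sock.recvfrom(2048)
print(f"Received {len(data)} bytes from {addr[0]}")
```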

2.3.3 Basic switch requirements
• The switch does not need to support Layer 3 services or be licensed for Layer 3 services.
• A VxRail cluster can be deployed in a ‘flat’ network using the default VLAN on the switch, or be
configured so that all the management, storage, and guest networks are segmented by virtual
LANs for efficient operations. For best results, especially in a production environment, only
managed switches should be deployed. A VxRail cluster that is built on a ‘flat’ network should be
considered only for test cases or for temporary usage.

2.3.4 Advanced switch requirements


In certain instances, additional switch features and functionality are necessary to support specific use
cases or requirements.
• If your plans include deploying all-flash storage on your VxRail cluster, 10 GbE network switches are
the minimum requirement for this feature. Dell Technologies recommends a 25 GbE network if that is
supported in your data center infrastructure.
• Enabling advanced features on the switches planned for the VxRail cluster, such as Layer 3 routing
services, can cause resource contention and consume switch buffer space. To avoid resource
contention, select switches with sufficient resources and buffer capacity.
• Switches that support higher port speeds are designed with higher Network Processor Unit (NPU)
buffers. An NPU shared switch buffer of at least 16 MB is recommended for 10 GbE network
connectivity, and an NPU buffer of at least 32 MB is recommended for more demanding 25 GbE
network connectivity.
• For very large VxRail clusters with demanding performance requirements and advanced switch
services enabled, consider switches with additional resource capacity and deeper buffer capacity.

2.4 Data center network


VxRail is dependent on specific data center services to implement the cluster and for day-to-day
operations. The top-of-rack switches must be configured to the upstream network to enable connectivity
to these data center services, and to enable connectivity to the end-user community.

2.4.1 Data center services


• Domain Naming Services (DNS) is required to deploy the VxRail cluster and for ongoing
operations.
• The VxRail cluster depends on Network Time Protocol (NTP) to keep the clock settings on the various
VxRail components synchronized. Dell Technologies recommends that a reliable global timing
service be used for VxRail.
• VxRail depends on VMware vCenter for cluster management and operations. You can use either
the embedded vCenter instance that is included with VxRail, or an external vCenter instance in
your data center.
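
Because missing DNS or NTP services are a common cause of deployment delays, it can be worth
verifying reachability from the management network before the VxRail nodes arrive. The sketch below is
one way to do that from a workstation; the hostnames are examples only, and the NTP check sends a
minimal SNTP request rather than using a full NTP client.

```python
import socket
import struct
import time

def check_dns(hostname):
    """Resolve a hostname against the DNS service configured on this host."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def check_ntp(server, timeout=3):
    """Send a minimal SNTP request (RFC 4330) and return the server's clock."""
    packet = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    # Transmit timestamp (seconds) sits at bytes 40-43; NTP epoch is 1900.
    secs = struct.unpack("!I", data[40:44])[0] - 2208988800
    return time.ctime(secs)

# Hostnames below are examples only; substitute your own services.
print("DNS:", check_dns("vcenter.example.local"))
print("NTP:", check_ntp("ntp.example.local"))
```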

Connecting data center services with VxRail cluster

2.4.2 Routing services


VxRail cluster operations depend on a set of networks that run on both the virtual network inside the
cluster and on the adjoining physical network switches. Some of these networks, specifically for VxRail
management and for end-user access must be passed to the upstream network, while other VxRail
networks stay isolated on the adjoining network switches.

You must specify a set of Virtual LAN (VLAN) IDs in your data center network that will be assigned to
support the VxRail networks. All the VLANs must be configured on the adjoining physical switches. The
VLANs that need to pass upstream must be configured on adjoining network switch uplinks, and also on
the ports on the upstream network devices.

One VLAN is assigned for external VxRail management access. Data center services (such as DNS and
NTP) that are required by the VxRail cluster must be able to connect to this VLAN. Routing services must
be updated to enable connectivity to these services from this VxRail management network. Other VLANs,
such as those required for end-user access, must also be configured in routing services to connect end-
users to the virtual machines running on the VxRail cluster.
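
When you record the VLAN plan, it helps to capture, for each VxRail network, whether it must be trunked
upstream or can stay isolated on the top-of-rack switches. One way to sketch such a plan is shown below;
the VLAN IDs are arbitrary examples, and the set of networks shown is illustrative rather than the
complete VxRail network list.

```python
# Example VLAN plan; IDs are arbitrary and the set of networks shown is
# illustrative, not the complete VxRail network list.
vlan_plan = {
    "external_management": {"vlan": 100, "pass_upstream": True},
    "vmotion":             {"vlan": 102, "pass_upstream": False},
    "vsan":                {"vlan": 103, "pass_upstream": False},
    "guest_network_1":     {"vlan": 110, "pass_upstream": True},
}

# VLANs that must also be configured on switch uplinks and upstream devices.
trunked = [name for name, cfg in vlan_plan.items() if cfg["pass_upstream"]]
print("Configure on all adjoining switches:", ", ".join(vlan_plan))
print("Trunk upstream:", ", ".join(trunked))
```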

2.5 VxRail Ethernet port options


The following figures show the appliance connectivity options that are supported on the Network
Daughter Cards (NDCs) for each VxRail node model, including the Dell 13th and 14th generation servers,
and the connectivity requirements for the management port. These figures also show the available
options that are supported for each VxRail node model for network connectivity that is not reserved for
VxRail usage, including Fibre Channel.

VxRail Node Connectivity Comparison

The following connectivity rules apply to VxRail nodes based on 14th Generation Dell EMC PowerEdge
servers:

• For E, P, S, and V Series


- 2x10GbE in either SFP+ or RJ-45 NIC ports
- 4x10GbE in either SFP+ or RJ-45 NIC ports
- 2x25GbE SFP28 ports
• E Series models support both Intel and AMD processors.
- Intel models support both 2-port and 4-port configurations.
- AMD models do not support 4-port base configurations. AMD models support 2x10GbE with
either RJ-45 or SFP+, or 2x25GbE with SFP28.
• E, P, and S Series
- 1 GbE connectivity is supported on single processor models only.
• D Series
- 4x10GbE RJ-45 NIC ports
- 2x25GbE SFP28 NIC ports

• G Series
- 2x10GbE SFP+ NIC ports

VxRail Pre-14th Generation Node Connectivity Summary

The following connectivity rules apply for VxRail nodes based on Pre-14th Generation Dell EMC
PowerEdge servers:

• E, P, S, and V Series
- 2x10GbE + 2x1GbE in either SFP+ or RJ-45 NIC ports

• E, P, and S Series
- 1 GbE connectivity is supported on single processor models only.
- The 2x10GbE ports will auto-negotiate to 1 GbE when used with 1 GbE networking.

2.6 VxRail Ethernet adapter options


There are restrictions on the models of Ethernet adapter cards that can be configured for VxRail nodes.
Each vendor adapter card and firmware selected for support with VxRail must pass a set of tests to be
qualified. The following table highlights the vendor options for VxRail:

Port Speed Vendor


10 GbE Intel
QLogic
25 GbE Broadcom
Mellanox
100 GbE Mellanox

The following guidelines should be understood to drive port adapter selection:

• When a VxRail cluster is initially built, all the network adapter cards that are used to form the
cluster must be of the same vendor and model. This rule does not apply to nodes added to an
existing VxRail cluster, so long as the port speed and port type match the existing nodes.
• VxRail recommends using the same adapter card vendor and model for all the nodes in a cluster.
There is no guarantee that using optics or cables from one vendor with an adapter card from
another vendor will work as expected. VxRail recommends consulting the Dell cable and optics
support matrix before attempting to mix vendor equipment in a VxRail cluster.
• The feature sets supported from network adapter card suppliers do not always match. There is a
dependency on the firmware and/or driver in the adapter card to support certain features. If there
is a specific feature that is needed to meet a business requirement, VxRail recommends
consulting with a sales specialist to verify that the needed feature is supported for a specific
vendor.

2.7 VxRail node connectivity options


In VxRail versions earlier than 7.0.010, only the Ethernet ports on the Network Daughter Card (NDC)
could be used to support the VxRail cluster. Starting with version 7.0.010, the Ethernet ports on the
optional PCIe adapter cards can also support VxRail network traffic. This node connectivity option
protects the NDC from being a potential single point of failure.

Mixing NDC and PCIe ports to support a VxRail cluster

For a VxRail cluster at a minimum version of 7.0.010 and configured for 10 GbE connectivity, the
option to enable redundancy at the NIC level is supported during VxRail initial build if the cluster is
deployed against a customer-supplied virtual distributed switch. If the virtual distributed switch is created
during the VxRail initial build operation, enabling NIC-level redundancy is a Day 2 operation.

For a VxRail cluster at a minimum version of 7.0.100 and configured for 25 GbE connectivity, the two ports
on the NDC and the two ports on the PCIe adapter card can be configured to support VxRail network
traffic at the time of initial build, or later as a Day 2 operation.

2.8 VxRail networking rules and restrictions


• The Ethernet ports selected during the VxRail initialization process to support VxRail cluster
networking are reserved exclusively for VxRail usage and cannot be migrated or used for other
purposes.
• The Ethernet ports on the optional PCIe adapter cards can be reserved for VxRail cluster usage if
running version 7.0.010 or later.

- If the VxRail cluster is deployed against a customer-supplied virtual distributed switch, PCIe-
based ports can be configured to support VxRail network traffic during the initial build
operation.
- If the VxRail cluster is not deployed against a customer-supplied virtual distributed switch,
configuring PCIe-based ports to support VxRail network traffic is performed after the initial
build process is completed.

• Any unused Ethernet ports on the nodes that are not reserved by the VxRail cluster can be used
for other purposes, such as guest networks, NFS, etc.
• For VxRail nodes supplied with 10 GbE ports, the VxRail cluster can be configured with either two
ports or four ports to support VxRail network traffic.
• For VxRail nodes supplied with 1 GbE ports, all four ports must be reserved for VxRail network
traffic.
• All-flash VxRail models must use either 10 GbE or 25 GbE NICs. 1 GbE is not supported for all-
flash.
• The network hardware configuration in a VxRail cluster must have the same Ethernet port types
across all VxRail nodes.
- VxRail nodes with RJ45 and SFP+ ports cannot be mixed in the same VxRail cluster.
- The port speed for each VxRail node (25 GbE, 10 GbE, 1 GbE) must be the same in the
VxRail cluster.

• One additional port on the switch or one logical path on the VxRail external management VLAN is
required for a workstation or laptop to access the VxRail user interface for the cluster.

2.9 Topology and connections


Various network topologies are possible with VxRail clusters. Complex production environments have
multi-tier network topologies with clusters in multiple racks, and spanning across data centers. Simpler
workloads can be satisfied with the nodes and adjacent switches confined to a single rack, with routing
services configured further upstream. A site diagram showing the proposed network components and
connectivity is highly recommended before cabling and powering on VxRail nodes, and performing an
initial build of the VxRail cluster.

Be sure to follow your switch vendor’s best practices for performance and availability. For example,
packet buffer banks may provide a way to optimize your network with your wiring layout.

Decide if you plan to use one or two switches for the VxRail cluster. One switch is acceptable, and is
often used in test and development environments. To support sustained performance, high availability,
and failover in production environments, two or more switches are required. The VxRail appliance is a
software-defined data center which is totally dependent on the physical top-of-rack switch for network
communications. A lack of network redundancy places you at risk of losing availability to all of the virtual
machines operating on the appliance.

Decide what network architecture you want to support the VxRail cluster, and what protocols will be used
to connect to data center services and end users. For VxRail clusters managing production workloads,
VLANs will be configured to support the VxRail networks. Determine at which network tier the VxRail
networking VLANs will terminate, and at which tier to configure routing services.

High-level network topology with Layer 2 and Layer 3 options

The number of Ethernet ports on each VxRail node you choose to support VxRail networking, and the
number of adjacent top-of-rack switches you choose to deploy to support the workload running on the
VxRail cluster will drive the cabling within the data center rack. Examples of wiring diagrams between
VxRail nodes and the adjacent switches can be found in Physical Network Switch Wiring Examples.

2.10 Out-of-band management (optional)


If the VxRail Appliances are located at a data center that you cannot access easily, we recommend
setting up an out-of-band management switch to facilitate direct communication with each node.

To use out-of-band management, connect the integrated Dell Remote Access Controller (iDRAC) port to
a separate switch to provide physical network separation. Default values, capabilities, and
recommendations for out-of-band management are provided with server hardware information.

You must reserve an IP address for each iDRAC in your VxRail cluster (one per node).
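
As a planning aid, the reservation can be expressed as a simple enumeration of one address per node.
In the following sketch, the out-of-band subnet and node names are examples only.

```python
import ipaddress

# Example out-of-band management subnet; substitute your own reserved range.
oob = ipaddress.ip_network("192.168.10.0/28")
nodes = ["vxrail-node01", "vxrail-node02", "vxrail-node03", "vxrail-node04"]

# Skip the first host address, assuming it is reserved for the switch gateway.
for name, addr in zip(nodes, list(oob.hosts())[1:]):
    print(f"{name} iDRAC: {addr}")
```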

3 VxRail Feature-Driven Decision Points
Certain applications, software stacks, and product features that are supported on VxRail can impact the
architecture, deployment, and operations of the cluster. If your plans for VxRail include any of the feature
sets or software stacks that are listed in this section, make note of the requirements that each of these
might have on your plans for VxRail.

3.1 Software-defined data center


If your plans include the transformation of your current data center with disparate technologies and
processes towards a software-defined data center, consider that VxRail can be positioned as a building
block towards that eventual outcome. The physical compute, network, and storage resources from built
VxRail clusters can be allocated to VMware’s cloud management and virtual desktop software solutions,
and managed as a logical pool for end-user consumption. By using VxRail clusters as the underlying
foundation, the software-defined data center can be designed and deployed to meet specific business
and operational requirements.

VxRail as the foundation for the software-defined data center

The path starts with a structured discovery and planning process that focuses on business use cases and
strategic goals, and that will drive the selection of software layers that will comprise the software-defined
data center. Dell Technologies implements the desired software layers in a methodical, structured
manner, where each phase involves incremental planning and preparation of the supporting network.

The next phase after the deployment of the VxRail cluster is to layer the VMware Cloud Foundation
software on the cluster. This enables assigning cluster resources as the underpinning for logical domains,
whose policies align with use cases and requirements.

The information that is outlined in this guide covers networking considerations for VxRail. For more
information about the architecture of VMware Cloud Foundation on VxRail, and to plan and prepare for a
deployment of VMware Cloud Foundation on VxRail, go to Dell Technologies VxRail Technical Guides.

3.2 Dell EMC SmartFabric network mode


Dell network switches support SmartFabric Services, which enable the configuration and operation of the
switches to be controlled outside of the standard management console through a REST API interface.
Certain Dell EMC switch models support initializing the switches with a SmartFabric personality profile,
which then forms a unified network fabric. The SmartFabric personality profile enables VxRail to become
the source for the automated configuration and administration of the Dell switches.

In this profile setting, VxRail uses the SmartFabric feature to discover VxRail nodes and Dell EMC
switches on the network, perform zero-touch configuration of the switch fabric to support VxRail
deployment, and then create a unified hyperconverged infrastructure of the VxRail cluster and Dell EMC
switch network fabric.

Dell EMC SmartFabric for VxRail

For ongoing VxRail cluster network management after initial deployment, the Dell EMC OMNI (Open
Manage Network Interface) vCenter plug-in is provided free of charge. The Dell EMC OMNI plug-in
enables the integration and orchestration of the physical and virtual networking components in the VxRail-
SmartFabric HCI stack, providing deep visibility from the vClient for ease of overall management and
troubleshooting. The Dell EMC OMNI plug-in serves as the centralized point of administration for
SmartFabric-enabled networks in the data center, with a user interface eliminating the need to manage
the switches individually at the console level.

The orchestration of SmartFabric Services with the VxRail cluster means that state changes to the virtual
network settings on the vCenter instance will be synchronized to the switch fabric using REST API. In this
scenario, there is no need to manually reconfigure the switches that are connected to the VxRail nodes
when an update such as a new VLAN, port group, or virtual switch, is made using the vClient.
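
To illustrate the REST-based model, the sketch below polls a SmartFabric-enabled switch fabric for its
configured networks. The fabric address, credentials, and resource path are hypothetical placeholders;
consult the Dell EMC SmartFabric Services REST API documentation for the actual interface.

```python
import requests

# Hypothetical fabric address, credentials, and resource path, shown for
# illustration only; the real interface is defined by the SmartFabric
# Services REST API documentation.
FABRIC_URL = "https://smartfabric.example.local"
AUTH = ("admin", "admin-password")

session = requests.Session()
session.verify = False  # lab-only shortcut; use trusted certificates in production

response = session.get(f"{FABRIC_URL}/api/networks", auth=AUTH, timeout=10)
response.raise_for_status()

for network in response.json():
    print(network)
```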

The SmartFabric-enabled networking infrastructure can start as small as a pair of Dell EMC Ethernet
switches, and can expand to support a leaf-spine topology across multiple racks. A VxLAN-based tunnel
is automatically configured across the leaf and spine switches, which enables the VxRail nodes to be
discovered and absorbed into a VxRail cluster from any rack within the switch fabric.

SmartFabric-enabled multi-rack network expansion

Planning for VxRail with the Dell EMC SmartFabric networking feature must be done in coordination with
Dell Technologies representatives to ensure a successful deployment. The planned infrastructure must
be a supported configuration as outlined in the VxRail Support Matrix.

Using the Dell EMC SmartFabric feature with VxRail requires an understanding of several key points:
• At the time of VxRail deployment, you must choose the method of network switch configuration.
Enabling the VxRail personality profile on the switches resets the switches from the factory
default state and enables SmartFabric Services. If you enable SmartFabric Services, all switch
configuration functionality except for basic management functions is disabled at the console,
and the management of switch configuration going forward is performed with SmartFabric tools
or through the automation and orchestration that is built into VxRail and SmartFabric Services.
• A separate Ethernet switch to support out-of-band management for the iDRAC feature on the
VxRail nodes and for out-of-band management of the Dell Ethernet switches is required.
• Disabling the VxRail personality profile on the Dell network switches deletes the network
configuration set up by SmartFabric Services. If a VxRail cluster is operational on the Dell switch
fabric, the cluster must be redeployed.
• Non-VxRail devices can be attached to switches running in SmartFabric services mode using the
OMNI vCenter plug-in.
For more information about how to plan and prepare for a deployment of VxRail clusters on a
SmartFabric-enabled network, reference the Dell EMC VxRail with SmartFabric Planning and Preparation
Guide. For more information about the deployment process of a VxRail cluster on a SmartFabric-enabled
network, go to VxRail Networking Solutions at Dell Technologies InfoHub.

3.3 vSphere with Kubernetes on VxRail


If your requirements include workload management using Kubernetes, then a VxRail cluster can be
configured as a supervisor cluster for Kubernetes. Kubernetes is a portable, extensible, API-driven
platform for the management of containerized workload and services. VMware’s Tanzu feature enables
the conversion of a VxRail cluster, whose foundation is vSphere, into a platform for running Kubernetes
workloads inside dedicated resource pools. A VxRail cluster that is enabled for vSphere with Tanzu is
called a Supervisor cluster.

When a VxRail cluster is enabled for vSphere with Kubernetes, the following six services are configured
to support vSphere with Tanzu:

• vSphere Pod Service
• Registry Service
• Storage Service
• Network Service
• Virtual Machine Service
• Tanzu Kubernetes Grid Service for vSphere

vSphere with Tanzu on a VxRail cluster

As a VxRail administrator using vSphere management capabilities, you can create namespaces on the
Supervisor Cluster, and configure them with a specified amount of memory, CPU, and storage. Within the
namespaces, you can run containerized workloads on the same platform with shared resource pools.

• This feature requires each VxRail node that is part of the Supervisor cluster to be configured with
a vSphere Enterprise Plus license with an add-on license for Kubernetes.
• This feature requires portgroups to be configured on the VxRail cluster virtual distributed switch to
support workload networks. These networks provide connectivity to the cluster nodes and the
three Kubernetes control plane VMs. Each Supervisor Cluster must have one primary workload
network.
• A virtual load balancer that is supported for vSphere must also be configured on the VxRail
cluster to enable connectivity from the client network to workloads running in the namespaces.
• The workload networks require reserved IP addresses to enable connectivity for the control plane
VMs and the load balancer.
For complete details on enabling a VxRail cluster to support vSphere with Tanzu, see the vSphere with
Tanzu Configuration and Management Guide.

3.4 vSAN stretched-cluster


vSAN stretched-cluster is a VMware solution that supports synchronous I/O on a vSAN datastore over
distance and is supported on VxRail. A vSAN stretched-cluster enables site-level failure protection with
no loss of service or loss of data.

If you plan to deploy a vSAN stretched-cluster on VxRail, note the following requirements:

• Three data center sites: two data center sites (Primary and Secondary) host the VxRail
infrastructure, and the third site supports a witness to monitor the stretched-cluster.
• A minimum of three VxRail nodes in the Primary site, and a minimum of three VxRail nodes in the
Secondary site
• A minimum of one top-of-rack switch for the VxRail nodes in the Primary and Secondary sites
• An ESXi instance at the Witness site
The vSAN stretched-cluster feature has strict networking guidelines, specifically for the WAN, that must
be adhered to for the solution to work.

vSAN Stretched Cluster Topology

More detailed information about vSAN stretched-cluster and the networking requirements can be found in
the Dell-EMC VxRail vSAN Stretched Cluster Planning Guide.

3.5 2-Node cluster


VxRail supports a solution specifically for small-scale deployments with reduced workload and availability
requirements, such as those in a remote office setting. The solution is fixed to two VxRail nodes only, and
like the stretched-cluster solution, requires a third site to act as a witness for monitoring purposes.

If you plan to deploy 2-node VxRail clusters, note the following:

• The minimum VxRail software version for the 2-Node cluster is 4.7.100.
• The deployment is limited to a pair of VxRail nodes and cannot grow through node expansion.
Verify that your workload requirements do not exceed the resource capacity of this small-scale
solution.
• Only one top-of-rack switch is required.
• Four Ethernet ports per node are required. Supported profiles:
- 2x1G and 2x10G

- 4x10G
• The switch can support either 1 GbE or 10 GbE connectivity.
• Two network topologies are supported for inter-cluster VxRail traffic:
- All four network ports connect to the top-of-rack switch
- A pair of network cables connect to create two links between the physical nodes, and the
other two network ports connect to the top-of-rack switch
• A customer-supplied external vCenter is required. The customer-supplied external vCenter
cannot reside on the 2-Node cluster.
• The Witness is a small virtual appliance that monitors the health of the 2-Node cluster. A Witness
is required for the 2-Node cluster.
- An ESXi instance is required at the Witness site.
- There is a 1:1 ratio of Witness per 2-Node cluster.
- Witness can be deployed at the same site as the data nodes but not on the 2-Node cluster.
- For instances where more than one 2-Node cluster is deployed at the site, the Witness can
reside on a 2-Node cluster it is not monitoring. This configuration requires a VMware RPQ.
- The top-of-rack switch must be able to connect over the network with the Witness site.

2-Node Cluster Topology with direct connections between nodes

Like the vSAN stretched-cluster feature, the small-scale solution has strict networking guidelines,
specifically for the WAN, that must be adhered to for the solution to work. For more information about the
planning and preparation for a deployment of a 2-node VxRail cluster, go to Dell EMC vSAN 2-Node
Cluster Planning and Preparation Guide.

4 VxRail Hardware and Switch Selection Decision Points
Step 1. Assess your requirements and perform a sizing exercise to determine the quantity and
characteristics of the VxRail nodes you need to meet planned workload and targeted use cases.
Step 2. Determine the number of physical racks needed to support the quantity and footprint of VxRail nodes
required to meet workload requirements, including the top-of-rack switches. Verify that the data
center has sufficient floor space, power, and cooling.
Step 3. Determine the optimal VxRail port speed to meet planned workload requirements, and calculate
the number of physical switch ports for connectivity (see the port-count sketch after this list).
• VxRail supports 1 GbE, 10 GbE, and 25 GbE connectivity options to build the initial cluster.
• VxRail supports either two or four connections per node to the physical switch.
Step 4. Decide whether you want to attach the VxRail nodes to the switches with RJ45, SFP+, or SFP28
connections.
• VxRail nodes with RJ-45 ports require CAT5 or CAT6 cables. CAT6 cables are included with
every VxRail.
• VxRail nodes with SFP+ ports require optics modules (transceivers) and optical cables, or Twinax
Direct-Attach Copper (DAC) cables. These cables and optics are not included; you must supply
your own. The NIC and switch connectors and cables must be on the same wavelength.
• VxRail nodes with SFP28 ports require high thermal optics for ports on the NDC (Network
Daughter Card). Optics that are rated for standard thermal specifications can be used on the
expansion PCIe network ports supporting SFP28 connectivity.
Step 5. Determine the number of additional ports and port speed on the switches for the uplinks to your
core network infrastructure to meet VxRail workload requirements. Select a switch or switches
that provide sufficient port capacity and characteristics.
Step 6. Reserve one additional port on the switch for a workstation/laptop to access the VxRail
management interface for the cluster.
• The additional port for access to the management interface is removed if connectivity is available
elsewhere on the logical path on the VxRail management VLAN.
Step 7. Select a switch or switches that support the features and functionality that are required for VxRail.
• Multicast is a requirement for VxRail device discovery.
• If you want to enable SmartFabric services on the supporting switched infrastructure, select
supported Dell switch models and select an OS10 Enterprise license for each switch. For an up-
to-date list of supported models, consult the latest VxRail Support Matrix.
Step 8. Determine whether a single switch will meet business objectives, as it is a potential single point of
failure. Dual top-of-rack (ToR) switches provide protection from a switch failure.
• If you are deploying dual top-of-rack switches, it is best practice to reserve ports on each switch
for interswitch links.
Step 9. Decide whether to deploy a separate switch to support connectivity to the VxRail management
port on each node.
• Dell iDRAC supports 1 GbE connectivity. Dell Technologies recommends deploying a dedicated 1
GbE switch for this purpose. In certain cases, you can also use open ports on the top-of-rack
switches.
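
To support the sizing steps above, a quick calculation of required switch ports can be sketched as follows.
The formula and defaults are illustrative assumptions; adjust them for your own uplink design, interswitch
links, and out-of-band management.

```python
def switch_ports_needed(nodes, ports_per_node, uplinks_per_switch,
                        switches=2, interswitch_links=2, mgmt_port=True):
    """Rough switch-port count for one VxRail rack (illustrative sketch)."""
    node_ports = nodes * ports_per_node        # spread across the switches
    uplinks = switches * uplinks_per_switch    # uplinks to the core network
    # Each interswitch link consumes a port on both switches.
    isl = interswitch_links * 2 if switches > 1 else 0
    mgmt = 1 if mgmt_port else 0               # workstation/laptop access
    return node_ports + uplinks + isl + mgmt

# Example: 4 nodes with 4 ports each, dual ToR switches, 2 uplinks per switch.
print(switch_ports_needed(nodes=4, ports_per_node=4, uplinks_per_switch=2))
```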

5 Planning the VxRail Implementation
VxRail is an entire software-defined data center in an appliance form factor. All administrative activities,
including initial implementation and initialization, configuration, capacity expansion, online upgrades, and
maintenance and support, are handled within the VxRail management system. When the VxRail
appliance is installed in your data center, connected to your network, and the physical components are
powered on, the VxRail management system automates the full implementation of the final
software-defined data center based on your settings and input.

Before getting to this phase, several planning and preparation steps must be undertaken to ensure a
seamless integration of the final product into your data center environment. These planning and
preparation steps include:
1. Plan Data Center Routing Services.
2. Decide on VxRail Single Point of Management.
3. Plan the VxRail logical network.
4. Identify IP address range for VxRail logical networks.
5. Identify unique hostnames for VxRail management components.
6. Identify external applications and settings for VxRail.
7. Create DNS records for VxRail management components.
8. Prepare customer-supplied vCenter Server.
9. Reserve IP addresses for VxRail vMotion and vSAN networks.
10. Decide on VxRail Logging Solution.
11. Decide on passwords for VxRail management.
Use the VxRail Setup Checklist and the VxRail Network Configuration Table to help create your network
plan. References to rows in this document are to rows in the VxRail Network Configuration Table.

Note: Once you set up the VxRail cluster and complete the initialization phase to produce the final
product, the configuration cannot easily be changed. We strongly recommend that you take care during
this planning and preparation phase to decide on the configurations that will work most effectively for your
organization.

5.1 Plan data center routing services


Specific VxRail networks, including the VxRail external management network and any external-facing
end-user networks that are configured for VxRail, must have routing services that are enabled to support
connectivity to external services and applications, as well as end-users.

A leaf-spine network topology is the most common use case for VxRail clusters. A single VxRail cluster
can start on a single pair of switches in a single rack. When workload requirements expand beyond a
single rack, expansion racks can be deployed to support the additional VxRail nodes and switches. The
switches at the top of those racks, which are positioned as a ‘leaf’ layer, can be connected together using
switches at the adjacent upper layer, or ‘spine’ layer.

If you choose to use a leaf-spine network topology to support the VxRail cluster or clusters in your data
center, you can consider enabling Layer 3 routing services at either the spine layer or the leaf layer.

Layer 2/3 boundary at the leaf layer or spine layer

Establishing routing services at the spine layer means that the uplinks on the leaf layer are trunked ports,
and pass through all the required VLANs to the switches at the spine layer. This topology has the
advantage of enabling the Layer 2 networks to span across all the switches at the leaf layer. This
topology can simplify VxRail clusters that extend beyond one rack, because the Layer 2 networks at the
leaf layer do not need Layer 3 services to span across multiple racks. A major drawback to this topology
is scalability. Ethernet standards enforce a limitation of addressable VLANs to 4094, which can be a
constraint if the application workload requires a high number of reserved VLANs, or if multiple VxRail
clusters are planned.

Enabling routing services at the leaf layer overcomes this VLAN limitation. This option also helps optimize
network routing traffic, as it reduces the number of hops to reach routing services. However, this option
does require Layer 3 services to be licensed and configured at the leaf layer. In addition, since Layer 2
VxRail networks now terminate at the leaf layer, they cannot span across leaf switches in multiple racks.

Note: If your network supports VTEP, which enables extending Layer 2 networks between switches in
physical racks over a Layer 3 overlay network, that can be considered to support a multi-rack VxRail
cluster.

VTEP tunneling between leaf switches across racks

5.2 Plan for multi-rack VxRail cluster
A VxRail cluster can be extended beyond a single physical rack, and can extend to as many as six racks.
All the network addresses applied to the VxRail nodes within a single rack must be within the same
subnet.

You have two options if the VxRail cluster extends beyond a single rack:
• Use the same assigned subnet ranges for all VxRail nodes in the expansion rack. This option is
required if SmartFabric Services are enabled on supporting switch infrastructure.
• Assign a new subnet range with a new gateway to the VxRail nodes in the expansion racks.
(Your VxRail cluster must be running a minimum version of 4.7.300 to use this option.)

If the same subnets are extended to the expansion racks, the VLANs representing those VxRail networks
must be configured on the top-of-rack switches in each expansion rack and physical connectivity must be
established. If new subnets are used for the VxRail nodes and management components in the
expansion racks, the VLANs terminate at the router layer and routing services must be configured to
enable connectivity between the racks.
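
If you choose the new-subnet option, the per-rack address plan can be derived from a parent block. The
following sketch uses Python's ipaddress module; the parent network and prefix lengths are examples,
not VxRail requirements.

```python
import ipaddress

# Sketch: carve one subnet per rack from a parent block. The parent block
# and prefix lengths are examples, not VxRail requirements.
parent = ipaddress.ip_network("172.16.0.0/22")
rack_subnets = list(parent.subnets(new_prefix=24))[:3]

for rack, subnet in enumerate(rack_subnets, start=1):
    hosts = list(subnet.hosts())
    print(f"Rack {rack}: subnet {subnet}, gateway {hosts[0]}, "
          f"node addresses from {hosts[1]}")
```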

Multi-rack VxRail sharing the same subnet

Multi-rack VxRail with different subnets

5.3 Plan for a vSAN HCI mesh topology


Starting with VxRail version 7.0.100, the storage resources on the vSAN datastores on each VxRail
cluster can be shared with other VxRail clusters. This storage sharing model is applicable only in a multi-
cluster environment where the VxRail clusters are configured under a common data center on a
customer-supplied vCenter instance.

Storage resource sharing with vSAN HCI mesh

If your immediate or future plans include storage resource sharing using vSAN HCI mesh, be sure to
prepare your data center to meet the following prerequisites:

• A vCenter instance at a version that supports VxRail version 7.0.100 or higher


• A vSAN Enterprise license for each VxRail cluster that will participate in a vSAN HCI mesh
topology
• A network topology that can connect the vSAN networks of the two VxRail clusters

- A common VLAN can be assigned to the vSAN network for each cluster to connect over a
Layer 2 network. If the VxRail clusters are deployed against different top-of-rack switches,
then the VLAN must be configured to stretch between the switch instances.
- If the VxRail clusters are deployed against different top-of-rack switches, and the common
VLAN cannot be stretched between the switch instances, then connectivity can be enabled
using Layer 3 routing services. If this option is selected, be sure to assign routable IP
addresses to the vSAN network on each participating VxRail cluster.

5.4 Decide on VxRail single point of management


The unified resources of a VxRail appliance create a virtual infrastructure that is defined and managed as
a vSphere cluster under a single instance of vCenter. A decision must be made to use the VxRail vCenter
Server, which is deployed in the cluster, or a customer-supplied vCenter server, which is external to the
cluster. During the VxRail initialization process which creates the final product, you must select whether to
deploy VxRail vCenter Server on the cluster or deploy the cluster on an external customer-supplied
vCenter server. Once the initialization process is complete, migrating to a different vCenter single point of
management is difficult and requires professional services assistance.

Dell Technologies recommends that you consider all the ramifications during this planning and
preparation phase, and decide on the single point of management option that will work most effectively for
your organization. Once the VxRail initial build has completed the cluster deployment process, the
configuration cannot easily be changed.

The following should be considered for selecting the VxRail vCenter server:

• A vCenter Standard license is included with VxRail at no additional cost. This license cannot be
transferred to another vCenter instance.
• The VxRail vCenter Server can manage only a single VxRail instance. This means an
environment of multiple VxRail clusters with the embedded vCenter instance requires an
equivalent number of points of management for each cluster.
• VxRail Lifecycle Management supports the upgrade of the VxRail vCenter server. Upgrading a
customer-supplied vCenter using VxRail Lifecycle Management is not supported.
• DNS services are required for VxRail. With the VxRail vCenter option, you have the choice of
using the internal DNS supported within the VxRail cluster, or leveraging external DNS in your
data center.
For a customer-supplied vCenter, the following items should be considered:

• The vCenter Standard license included with VxRail cannot be transferred to a vCenter instance
outside of the VxRail cluster.
• Multiple VxRail clusters can be configured on a single customer-supplied vCenter server, limiting
the points of management.
• With the customer-supplied vCenter, external DNS must be configured to support the VxRail
cluster.
• Ensuring version compatibility of the customer-supplied vCenter with VxRail is the responsibility
of the customer.
• You have the option of preconfiguring the virtual distributed switch settings on the customer-supplied
vCenter and deploying the VxRail cluster against it, or having VxRail deploy a virtual distributed
switch and perform the configuration instead. The first option is advantageous if you want better
control and manageability of the virtual networking in your data center, and want to consolidate the
number of virtual distributed switches in your vCenter instance.

Note: The options to use the internal DNS or to deploy the VxRail cluster against a preconfigured virtual
distributed switch require VxRail version 7.0.010 or later.

The customer-supplied vCenter server option is more scalable, provides more configuration options, and
is the recommended choice for most VxRail deployments. See the Dell EMC VxRail vCenter Server
Planning Guide for details.

5.5 Plan the VxRail logical network


The physical connections between the ports on your network switches and the NICs on the VxRail nodes
enable communications for the virtual infrastructure within the VxRail cluster. The virtual infrastructure
within the VxRail cluster uses the virtual distributed switch to enable communication within the cluster,
and out to IT management and the application user community.

VxRail has predefined logical networks to manage and control traffic within the cluster and outside of the
cluster. Certain VxRail logical networks must be made accessible to the outside community. For instance,
connectivity to the VxRail management system is required by IT management. VxRail networks must be
configured for end-users and application owners who need to access their applications and virtual
machines running in the VxRail cluster. In addition, a network supporting I/O to the vSAN datastore is
required, and a network to support vMotion, which is used to dynamically migrate virtual machines
between VxRail nodes to balance workload, must also be configured. Finally, an internal management
network is required by VxRail for device discovery.

VxRail Logical Network Topology



All the Dell PowerEdge servers that serve as the foundation for VxRail nodes include a separate Ethernet
port that enables connectivity to the platform to perform hardware-based maintenance and
troubleshooting tasks. A separate network to support management access to the Dell PowerEdge servers
is recommended, but not required.

5.5.1 IP Address considerations for VxRail networks


IP addresses must be assigned to the VxRail external management network, vSAN network, vMotion
network, and any guest networks you want to configure on the VxRail cluster. Decisions need to be made
on the IP address ranges reserved for each VxRail network:

• The internal management network that is used for device discovery does not require assigned IP
addresses.
• Since the external management network must be able to route upstream to network services and
end users, a nonprivate, routable IP address range must be assigned to this network.
• Traffic on the vSAN network is passed only between the VxRail nodes that form the cluster.
Either a routable or non-routable IP address range can be assigned. If your plans include a multi-rack
cluster, and you want to consider a new IP subnet range in the expansion racks, then assign a
routable IP address range to this network.
• If your requirements for virtual machine mobility are within the VxRail cluster, a non-routable IP
address range can be assigned to the vMotion network. However, if you need to enable virtual
machine mobility outside of the VxRail cluster, or have plans for a multi-rack expansion that will
use a different subnet range on any expansion racks, reserve a routable IP address range.

5.5.2 Virtual LAN considerations for VxRail networks


Virtual LANs (VLANs) define the VxRail logical networks within the cluster, and the method that is used to
control the paths that a logical network can pass through. A VLAN, represented as a numeric ID, is
assigned to a VxRail logical network. The same VLAN ID is also configured on the individual ports on
your top-of-rack switches, and on the virtual ports in the virtual-distributed switch during the automated
implementation process. When an application or service in the VxRail cluster sends a network packet on
the virtual-distributed switch, the VLAN ID for the logical network is attached to the packet. The packet will
only be able to pass through the ports on the top-of-rack switch and the virtual distributed switch where
there is a match in VLAN IDs. Isolating the VxRail logical network traffic using separate VLANs is highly
recommended, but not required. A ‘flat’ network is recommended only for test, nonproduction purposes.

As a first step, the network team and virtualization team should meet in advance to plan VxRail’s network
architecture.

• The virtualization team must meet with the application owners to determine which specific
applications and services planned for VxRail are to be made accessible to specific end-users.
This determines the number of logical networks that are required to support traffic from
non-management virtual machines.
• The network team must define the pool of VLAN IDs needed to support the VxRail logical
networks, and determine which VLANs will restrict traffic to the cluster, and which VLANs will be
allowed to pass through the switch up to the core network.
• The network team must also plan to configure the VLANs on the upstream network, and on the
switches attached to the VxRail nodes.
• The network team must also configure routing services to ensure connectivity for external users
and applications on VxRail network VLANs passed upstream.



• The virtualization team must assign the VLAN IDs to the individual VxRail logical networks.
VxRail groups the logical networks in the following categories: External Management, Internal
Management, vSAN, vSphere vMotion, and Virtual Machine. VxRail assigns the settings that you
specify for each of these logical networks during the initialization process. (A worksheet sketch
follows this list.)
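
To keep the joint planning exercise concrete, the VLAN worksheet can be captured and sanity-checked
with a short script before initialization. The following is a minimal sketch; the network names and VLAN
IDs are illustrative placeholders (only the internal management default of 3939, described below, comes
from VxRail), and the checks simply enforce the valid VLAN ID range and uniqueness across networks.

# Hypothetical VLAN plan for the VxRail logical network categories; IDs are placeholders.
vlan_plan = {
    "External Management": 0,        # 0 = untagged / Native VLAN
    "Internal Management": 3939,     # VxRail factory default
    "vSphere vMotion": 2001,
    "vSAN": 2002,
    "Virtual Machine - Production": 2010,
    "Virtual Machine - Development": 2011,
}

for network, vlan_id in vlan_plan.items():
    assert 0 <= vlan_id <= 4094, f"{network}: VLAN ID {vlan_id} is out of range"

tagged = [v for v in vlan_plan.values() if v != 0]
assert len(tagged) == len(set(tagged)), "Duplicate VLAN IDs across VxRail networks"
print("VLAN plan validated:", vlan_plan)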

Before VxRail version 4.7, both external and internal management traffic shared the external
management network. Starting with version 4.7 of VxRail, the external and internal management
networks are broken out into separate networks.

External Management traffic includes all VxRail Manager, vCenter Server, ESXi communications, and in
certain cases, vRealize Log Insight. All VxRail external management traffic is untagged by default and
should be able to go over the Native VLAN on your top-of-rack switches.

A tagged VLAN can be configured instead to support the VxRail external management network. This
option is considered a best practice, and is especially applicable in environments where multiple VxRail
clusters will be deployed on a single set of top-of-rack switches. To support using a tagged VLAN for the
VxRail external management network, configure the VLAN on the top-of-rack switches, and then
configure trunking for every switch port that is connected to a VxRail node to tag the external
management traffic.

The Internal Management network is used solely for device discovery by VxRail Manager during initial
implementation and node expansion. This network traffic is non-routable and is isolated to the top-of-rack
switches connected to the VxRail nodes. Powered-on VxRail nodes advertise themselves on the Internal
Management network using multicast, and are discovered by VxRail Manager. The default VLAN of 3939 is
configured on each VxRail node that is shipped from the factory. This VLAN must be configured on the
switches, and included on the trunked switch ports that are connected to VxRail nodes.

If a different VLAN value is used for the Internal Management network, it not only must be configured on
the switches, but also must be applied to each VxRail node on-site. Device discovery on this network by
VxRail Manager will fail if these steps are not followed.

It is a best practice to configure a VLAN for the vSphere vMotion and vSAN networks. For these
networks, configure a VLAN for each network on the top-of-rack switches, and then include the VLANs on
the trunked switch ports that are connected to VxRail nodes.

The Virtual Machine networks are for the virtual machines running your applications and services.
Dedicated VLANs are preferred to divide Virtual Machine traffic, based on business and operational
objectives. VxRail creates one or more VM Networks for you, based on the name and VLAN ID pairs that
you specify. Then, when you create VMs in vSphere Web Client to run your applications and services,
you can easily assign the virtual machine to the VM Networks of your choice. For example, you could
have one VLAN for Development, one for Production, and one for Staging.

Network Configuration Table, Row 1: Enter the external management VLAN ID for the VxRail management
network (VxRail Manager, ESXi, vCenter Server/PSC, Log Insight). If you do not plan to have a dedicated
management VLAN and will accept this traffic as untagged, enter “0” or “Native VLAN.”

Network Configuration Table, Row 2: Enter the internal management VLAN ID for VxRail device discovery.
The default is 3939. If you do not accept the default, the new VLAN must be applied to each VxRail node
before cluster implementation.



Network Configuration Table, Row 3: Enter a VLAN ID for vSphere vMotion. (Enter 0 in the VLAN ID field
for untagged traffic.)

Network Configuration Table, Row 4: Enter a VLAN ID for vSAN. (Enter 0 in the VLAN ID field for
untagged traffic.)

Network Configuration Table, Rows 5–6: Enter a Name and VLAN ID pair for each VM guest network that
you want to create. You must create at least one VM Network. (Enter 0 in the VLAN ID field for untagged
traffic.)

Note: If you plan to have multiple independent VxRail clusters, we recommend using different VLAN IDs
across multiple VxRail clusters to reduce network traffic congestion.

For a 2-Node cluster, the VxRail nodes must connect to the Witness over a separate Witness traffic
separation network. The Witness traffic separation network is not required for a stretched cluster, but is
considered a best practice. A VLAN is required for this network, and the Witness traffic on this VLAN
must be able to pass upstream to the Witness site.

Logical network with Witness and Witness Traffic Separation

Network Configuration Table, Row 71: Enter the Witness traffic separation VLAN ID.



5.6 Plan network exclusions reserved for VxRail Manager
VxRail Manager relies internally on a microservice model using a Docker container architecture. A set of
IP addresses is reserved for use by VxRail Manager to support networking for the microservices. The IP
addresses within these reserved pools are automatically assigned to the microservices initiated by VxRail
Manager at power-on, and assigned as needed as part of normal VxRail Manager operations. Using these
reserved IP addresses for any VxRail network can cause a conflict with VxRail Manager operations, so
these addresses should be blocked from assignment to VxRail networks; a sanity-check sketch follows the
list below.

The reserved IP address ranges are:

• 172.28.0.0/16
• 172.29.0.0/16
• 10.0.0.0/24
• 10.0.1.0/24
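
If you want to sanity-check a candidate subnet against these reserved pools before finalizing your IP
plan, a short script can flag overlaps. The following is a minimal sketch using the Python standard
ipaddress module; the candidate subnets shown are placeholders for your own plan.

import ipaddress

# Reserved ranges used internally by VxRail Manager microservices (listed above).
RESERVED = [
    ipaddress.ip_network("172.28.0.0/16"),
    ipaddress.ip_network("172.29.0.0/16"),
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.1.0/24"),
]

def conflicts_with_reserved(subnet: str) -> list:
    """Return the reserved networks that overlap a proposed VxRail subnet."""
    candidate = ipaddress.ip_network(subnet, strict=False)
    return [net for net in RESERVED if candidate.overlaps(net)]

# Example: a proposed subnet that collides with a reserved pool, and one that does not.
print(conflicts_with_reserved("172.28.10.0/24"))   # -> [IPv4Network('172.28.0.0/16')]
print(conflicts_with_reserved("192.168.50.0/24"))  # -> []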

5.7 Plan network settings for VxRail management components


During the initial build of the VxRail cluster, IP addresses that are entered are assigned to the VxRail
components that are members of the External Management network and must follow certain rules:

• The IP address scheme must be a public IP address range.


• The IP address must be fixed (no DHCP).
• The IP addresses cannot be in use.
• The IP address range must all be in the same subnet.

You have flexibility in how the IP addresses are assigned to the VxRail management components. If the
VxRail cluster is to be deployed at version 7.0.010 or later, you can either manually assign the IP
addresses to the management components, or have the IP addresses auto-assigned during VxRail initial
build. Before VxRail version 7.0.010, the only supported option was to auto-assign the IP addresses to the
management components. The auto-assignment process allocates IP addresses in sequential order, so a
contiguous range must be provided for this method.

The decisions that you make on the final VxRail configuration planned for your data center impact the
number of IP addresses that you need to reserve.



VxRail Management Components and Required Networks

• Decide if you want to reserve additional IP addresses in the VxRail management system to
assign to VxRail nodes in the future for expansion purposes in a single rack. When a new node is
added to an existing VxRail cluster, VxRail assigns it an IP address from the unused reserve pool, or
prompts you to enter an IP address manually.
• Decide whether you will use the vCenter instance that is deployed in the VxRail cluster, or use an
external vCenter already operational in your data center.
- For VxRail versions 7.0 or later, if you use the vCenter instance that is deployed on the
VxRail cluster, you must reserve an IP address for vCenter. The Platform Service Controller
is bundled into the vCenter instance.
- For VxRail versions earlier than version 7.0, if you have VxRail deploy vCenter, you must
reserve an IP address for the vCenter instance and an IP address for the Platform Service
Controller.

• Decide if you will use vSphere Log Insight that can be deployed in the VxRail cluster.
- For VxRail version 7.0 and earlier, if you choose to use the vCenter instance that is deployed
in the VxRail cluster, you have the option to deploy vSphere Log Insight on the cluster. You
can also choose to connect to an existing syslog server in your data center, or no logging at
all. If you choose to deploy vSphere Log Insight in the VxRail cluster, you need to reserve
one IP address.
- vRealize Log Insight is not an option for deployment during the initial VxRail configuration
process starting in version 7.0.010.
- If you use an external vCenter already operational in your data center for VxRail, vSphere
Log Insight cannot be deployed.

• VxRail supports the Dell EMC ‘call home’ feature, where alerts from the appliance are routed to
customer service. The Secure Remote Services gateway is required to enable alerts from VxRail
to be sent to Dell Technologies customer service.
- Decide whether to use an existing Secure Remote Services gateway in your data center for
‘call home’, deploy a virtual instance of the Secure Remote Services gateway in the VxRail
cluster for this purpose, or forgo the feature.
- If you choose the virtual instance, reserve one IP address to deploy SRS-VE (Secure Remote
Services Virtual Edition) in the VxRail cluster.



• If you are planning to deploy a VxRail cluster that requires a Witness at a remote third site, such
as VxRail stretched-cluster or 2-Node cluster, two IP addresses are required to deploy the
witness virtual appliance.
- One IP address is assigned to the witness management network.
- One IP address is assigned to the witness vSAN network.
- Both networks must be able to route to the VxRail cluster requiring the remote site witness.
An existing vSAN witness can be shared in your remote site if the VxRail clusters are stretched
clusters, and the vSAN witness can support vSAN datastores at version 7 Update 1 or higher.

• For a 2-Node Cluster, the VxRail nodes must connect to the Witness over a separate Witness
traffic separation network. For this network, an additional IP address is required for each of the
two VxRail nodes.
- The VxRail nodes must be able to route to the remote site Witness.
- The traffic must be able to pass through the Witness traffic separation VLAN.
Use the following table to determine the number of public IP addresses required for the Management
logical network:

Component        Condition                                                  Contiguous?

VxRail Node      One per VxRail node                                        Yes
VxRail Manager   One                                                        No
vCenter          If you are supplying vCenter Server for VxRail: 0          No
                 If you are using vCenter on VxRail: 2
Log Insight      If you are supplying vCenter Server for VxRail: 0          No
                 If you are supplying a syslog server for VxRail: 0
                 If you will not enable logging for VxRail: 0
                 If you are using Log Insight on VxRail: 1
SRS-VE           If you are planning to deploy SRS Gateway on VxRail: 1     No
                 If you will not deploy SRS Gateway on VxRail: 0

Request your networking team to reserve a subnet range that has sufficient open IP addresses to cover
VxRail initial build and any planned future expansion.
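
The sizing logic in the table above can also be expressed as a short worksheet script. The following is
a minimal sketch; the function name and flags are illustrative, not VxRail parameters, and the counts
follow the table (with the vCenter count of 2 covering the vCenter instance plus the Platform Services
Controller on versions earlier than 7.0).

def management_ip_count(nodes: int, future_nodes: int = 0,
                        vxrail_vcenter: bool = True,
                        log_insight: bool = False,
                        srs_ve: bool = False) -> int:
    """Estimate the public IP addresses needed on the Management logical network."""
    total = nodes + future_nodes      # one per VxRail node (contiguous range)
    total += 1                        # VxRail Manager
    if vxrail_vcenter:
        total += 2                    # vCenter on VxRail: 2 per the table above
    if log_insight:
        total += 1                    # Log Insight on VxRail
    if srs_ve:
        total += 1                    # SRS-VE gateway on VxRail
    return total

# Example: 4-node cluster, room for 2 more nodes, VxRail vCenter, no Log Insight or SRS-VE.
print(management_ip_count(nodes=4, future_nodes=2))   # -> 9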

Network Configuration Table, Row 7: Enter the subnet mask for the VxRail External Management network.

Network Configuration Table, Row 8: Enter the gateway for the VxRail External Management network.

5.8 Identify IP addresses for VxRail management components


If you choose to have VxRail auto-assign the IP addresses for the ESXi hosts that serve as the foundation
for VxRail nodes, request your networking team to reserve a large enough pool of unused IP addresses.



Record the IP address range for the ESXi hosts.

Network Configuration Table, Rows 24 and 25: Enter the starting and ending IP addresses for the ESXi
hosts; a continuous IP range is required.

If you choose instead to assign the IP addresses to each individual ESXi host, record the IP address for
each ESXi host to be included for VxRail initial build.

Network Configuration Table, Rows 26–29: Enter the IP addresses for the ESXi hosts.

Record the permanent IP address for VxRail Manager. This is required.

Network Configuration Table, Row 14: Enter the permanent IP address for VxRail Manager.

If you are going to deploy vCenter on the VxRail cluster, record the permanent IP address for vCenter
and Platform Service Controller (if applicable). Leave these entries blank if you will provide an external
vCenter for VxRail.

Network Configuration Table, Row 31: Enter the IP address for the VxRail vCenter.

Network Configuration Table, Row 33: Enter the IP address for the VxRail Platform Service Controller
(if applicable).

Record the IP address for Log Insight. Leave this entry blank if you are deploying a version of VxRail at
version 7.0.010 or higher, or if you choose to not deploy Log Insight on VxRail.

Network Configuration Table, Row 59: Enter the IP address for vSphere Log Insight.

Record the two IP addresses for the witness virtual appliance. Leave blank if a witness is not required for
your VxRail deployment.

Network Configuration Table, Row 69: Enter the IP address for the Witness management network.

Network Configuration Table, Row 70: Enter the IP address for the Witness vSAN network.



Record the IP addresses for each node required for Witness traffic for a 2-Node cluster deployment.
Leave blank if you are not deploying a 2-Node cluster.

Network Configuration Table, Row 72: Enter the IP address for the first of the two nodes in the 2-Node
cluster.

Network Configuration Table, Row 73: Enter the IP address for the second of the two nodes in the 2-Node
cluster.

5.9 Select hostnames for VxRail management components


Each of the VxRail management components that you deploy in the VxRail cluster requires an assigned IP
address and a fully qualified hostname. During initialization, each of these VxRail management
components is assigned its hostname and IP address.

Determine the naming format for the hostnames to be applied to the required VxRail management
components: each ESXi host, and VxRail Manager. If you deploy the vCenter Server in the VxRail cluster,
that also requires a hostname. In addition, if you decide to deploy Log Insight in the VxRail cluster, that
needs a hostname as well.

Note: You cannot easily change the hostnames and IP addresses of the VxRail management
components after initial implementation.

5.9.1 Select top-level domain


Begin the process by selecting the top-level domain to use for VxRail; it is applied to the fully
qualified hostnames. Be aware that DNS is a requirement for VxRail, so select a domain where the naming
services can support that domain.

Network Configuration Table, Row 12: Enter the top-level domain.

5.9.2 Select VxRail Manager hostname


A hostname must be assigned to VxRail Manager. The domain is also automatically applied to the
chosen hostname. Dell Technologies recommends following the naming format that is selected for the
ESXi hosts to simplify cluster management.

Network Configuration Table, Row 13: Enter the hostname for VxRail Manager.

5.9.3 Select ESXi hostnames


All VxRail nodes in a cluster require hostnames. Starting with VxRail version 7.0.010, you have the choice
of using any host naming convention you want, provided it is a legitimate format, or having VxRail
auto-assign the hostnames to the ESXi nodes, following VxRail rules, during the VxRail initial build
process.



If you plan to have VxRail auto-assign the hostnames during the cluster initial build process, make sure to
follow the rules stated in this section. All ESXi hostnames in a VxRail cluster are defined by a naming
scheme that comprises: an ESXi hostname prefix (an alphanumeric string), a separator (“None” or a dash
”-“), an iterator (Alpha, Num X, or Num 0X), an offset (empty or numeric), a suffix (empty or an
alphanumeric string with no periods), and a domain. The Preview field that is shown during VxRail
initialization is an example of the hostname of the first ESXi host. For example, if the prefix is “host,”
the separator is “None,” the iterator is “Num 0X,” the offset is empty, the suffix is “lab,” and the
domain is “local,” the first ESXi hostname would be “host01lab.local.” The domain is also automatically
applied to the VxRail management components. (Example: my-vcenter.local)

            Example 1       Example 2                  Example 3

Prefix      host            myname                     esxi-host
Separator   None            -                          -
Iterator    Num 0X          Num X                      Alpha
Offset                      4
Suffix                      lab
Domain      local           college.edu                company.com
Resulting   host01.local    myname-4lab.college.edu    esxi-host-a.company.com
hostname
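
The naming scheme can be prototyped to preview hostnames before the initial build. The following is a
minimal sketch that reproduces the three examples above; the iterator behavior (“Num 0X” zero-pads to two
digits, “Alpha” counts a, b, c, and the offset sets the number given to the first host) is an assumption
based on the examples, not an official VxRail algorithm.

import string

def esxi_hostname(index: int, prefix: str, separator: str = "",
                  iterator: str = "Num 0X", offset: int = 1,
                  suffix: str = "", domain: str = "local") -> str:
    """Build the hostname of the index-th ESXi host (index starts at 1).
    Assumes the offset is the number assigned to the first host."""
    n = offset + index - 1
    if iterator == "Num 0X":
        token = f"{n:02d}"                      # zero-padded: 01, 02, ...
    elif iterator == "Num X":
        token = str(n)                          # plain numeric: 1, 2, ...
    else:                                       # "Alpha": a, b, c, ...
        token = string.ascii_lowercase[n - 1]
    return f"{prefix}{separator}{token}{suffix}.{domain}"

# Reproduces the three examples in the table above.
print(esxi_hostname(1, "host"))                                           # host01.local
print(esxi_hostname(1, "myname", "-", "Num X", 4, "lab", "college.edu"))  # myname-4lab.college.edu
print(esxi_hostname(1, "esxi-host", "-", "Alpha", 1, "", "company.com"))  # esxi-host-a.company.com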

Enter the values for building and auto-assigning the ESXi hostnames if this is the chosen method.

Network Configuration Table, Rows 15–19: Enter an example of your desired ESXi host-naming scheme. Be
sure to show your desired prefix, separator, iterator, offset, suffix, and domain.

If the ESXi hostnames will be applied manually, capture the name for each ESXi host that is planned for
the VxRail initial build operation.

Network Configuration Table, Rows 20–23: Enter the reserved hostname for each ESXi host.

5.9.4 Select VxRail vCenter Server hostname

Note: You can skip this section if you plan to use an external vCenter Server in your data center for
VxRail. These action items are only applicable if you plan to use the VxRail vCenter Server.



If you want to deploy a new vCenter Server on the VxRail cluster, you must specify a hostname for the
VxRail vCenter Server and, if required, for the Platform Services Controller (PSC). The domain is also
automatically applied to the chosen hostname. Dell Technologies recommends following the naming
format that is selected for the ESXi hosts to simplify cluster management.

Network Configuration Table, Row 30: Enter an alphanumeric string for the new vCenter Server hostname.
The domain that is specified will be appended.

Network Configuration Table, Row 32: Enter an alphanumeric string for the new Platform Services
Controller hostname. The domain that is specified will be appended.

5.9.5 Select Log Insight hostname

Note: You can skip this section if you plan to deploy a VxRail cluster at version 7.0.010 or higher, will use
an external syslog server instead of Log Insight, or will not enable logging.

To deploy Log Insight in the VxRail cluster, the management component must be assigned a hostname. You
can use your own third-party syslog server, use the vRealize Log Insight solution included with VxRail, or
not enable logging. You can only select the vRealize Log Insight option if you also use the VxRail
vCenter Server.

Network Configuration Table, Row 61: Enter the hostname for Log Insight.

5.10 Identify external applications and settings for VxRail


VxRail is dependent on specific applications in your data center to be available over your data center
network. These data center applications must be accessible to the VxRail management network.

5.10.1 Set time zone and NTP server


A time zone is required. It is configured on vCenter server and each ESXi host during VxRail initial
configuration.

An NTP server is not required, but is recommended. If you provide an NTP server, vCenter Server will be
configured to use it. If you do not provide at least one NTP server, VxRail uses the time that is set on
ESXi host #1 (regardless of whether that time is correct).

Note: Ensure that the NTP server is functioning properly and that its IP address is accessible from the
VxRail External Management network to which the VxRail nodes will be connected.

Network Configuration Table, Row 9: Enter your time zone.

Network Configuration Table, Row 10: Enter the hostnames or IP addresses of your NTP servers.

5.10.2 Set DNS for VxRail management components


Starting with VxRail version 7.0.010, you can either use an internal DNS included with VxRail vCenter
Server, or use an external DNS in your data center. If you choose to use the internal DNS method, the
steps to set up DNS as outlined in this section can be skipped.

If the internal DNS option is not selected, one or more external, customer-supplied DNS servers are
required for VxRail. The DNS server that you select for VxRail must be able to support naming services
for all the VxRail management components (VxRail Manager, vCenter, and so on).

Note: Ensure that the DNS server is functioning properly and that its IP address is accessible from the
network to which VxRail is connected.

Network Configuration Table, Row 11: Enter the IP addresses for your DNS servers.

Lookup records must be created in your selected DNS for every VxRail management component you are
deploying in the cluster and are assigning a hostname and IP address. These components can include
VxRail Manager, VxRail vCenter Server, VxRail Platform Service Controller, Log Insight, and each ESXi
host in the VxRail cluster. The DNS entries must support both forward and reverse lookups.

Sample DNS Forward Lookup Entries

Sample DNS Reverse Lookup Entries

Use the VxRail Network Configuration table to determine which VxRail management components in your
planned VxRail cluster have been assigned a hostname and IP address. vMotion and vSAN IP addresses are
not configured for routing by VxRail, so no DNS entries are required for those networks.
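
One way to draft the forward and reverse lookup entries is to generate them from your hostname and IP
worksheet. The following is a minimal sketch; the hostnames and addresses are illustrative placeholders,
and the output follows standard zone-file record syntax.

import ipaddress

# Placeholder worksheet entries (FQDN -> management IP); substitute your own values.
components = {
    "vxrail-manager.local": "192.168.10.100",
    "vxrail-vcenter.local": "192.168.10.101",
    "esxi-host01.local": "192.168.10.111",
}

for fqdn, ip in components.items():
    addr = ipaddress.ip_address(ip)
    print(f"{fqdn}.  IN A    {addr}")                   # forward lookup entry
    print(f"{addr.reverse_pointer}.  IN PTR  {fqdn}.")  # reverse lookup entry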

5.11 Prepare customer-supplied vCenter server

Note: You can skip this section if you plan to use the VxRail vCenter server. These action items are only
applicable if you plan to use a customer-supplied vCenter server in your data center for VxRail.

Certain prerequisites must be completed before VxRail initialization if you use a customer-supplied
vCenter as the VxRail cluster management platform. During initialization, VxRail connects to your
customer-supplied vCenter to perform the necessary validation and configuration steps to deploy the
VxRail cluster on your vCenter instance.

• Determine if your customer-supplied vCenter server is compatible with your VxRail version.
- See the Knowledge Base article VxRail: VxRail and External vCenter Interoperability Matrix
on the Dell EMC product support site for the latest support matrix.
• Enter the FQDN of your selected, compatible customer-supplied vCenter server in the VxRail
Network Configuration table.

Network Configuration Table, Row 35: Enter the FQDN of the customer-supplied vCenter Server.

• Determine whether your customer-supplied vCenter server has an embedded or external platform
services controller. If the platform services controller is external to your customer-supplied
vCenter, enter the platform services controller FQDN in the VxRail Network Configuration table.

Network Configuration Table, Row 34: Enter the FQDN of the customer-supplied platform services controller
(PSC). Leave this row blank if the PSC is embedded in the customer-supplied vCenter server.

• Decide on the single sign-on (SSO) domain that is configured on the customer-supplied vCenter
you want to use to enable connectivity for VxRail, and enter the domain in the VxRail Network
Configuration Table.

Network Configuration Table, Row 36: Enter the single sign-on (SSO) domain for the customer-supplied
vCenter server. (For example, vsphere.local)

• The VxRail initialization process requires login credentials to your customer-supplied vCenter.
The credentials must have the privileges to perform the necessary configuration work for VxRail.
You have two choices:
- Provide vCenter login credentials with administrator privileges.
- Create a new set of credentials in your vCenter for this purpose. Two new roles are created
and assigned to this user by your Dell Technologies representative.



Network Configuration Table, Row 37: Enter the administrative username/password for the customer-supplied
vCenter server, or the VxRail non-admin username/password that you will create on the customer-supplied
vCenter server.

• A set of credentials must be created in the customer-supplied vCenter for VxRail management
with no permissions and no assigned roles. These credentials are assigned a role with limited
privileges during the VxRail initialization process, and then assigned to VxRail to enable
connectivity to the customer-supplied vCenter after initialization completes.
- If this is the first VxRail cluster on the customer-supplied vCenter, enter the credentials that
you will create in the customer-supplied vCenter.
- If you already have an account for a previous VxRail cluster in the customer-supplied
vCenter, enter those credentials.

Network Configuration Table, Row 38: Enter the full VxRail management username/password.
(For example, [email protected])

• The VxRail initialization process deploys the VxRail cluster under an existing data center in the
customer-supplied vCenter. Create a new data center, or select an existing Data center on the
customer-supplied vCenter.

Network Configuration Table, Row 39: Enter the name of a data center on the customer-supplied vCenter
server.

• Specify the name of the cluster that is to be created by the VxRail initialization process in the
selected data center. This name must be unique, and not used anywhere in the data center on
the customer-supplied vCenter.

Network Configuration Table, Row 40: Enter the name of the cluster that will be used for VxRail.

5.12 Prepare customer-supplied virtual distributed switch

You can skip this section if your VxRail version is earlier than 7.0.010, or if you do not plan to deploy
VxRail against a customer-supplied virtual distributed switch.

Before VxRail version 7.0.010, if you chose to deploy the VxRail cluster on an external, customer-
supplied vCenter, a virtual distributed switch would be configured on the vCenter instance as part of the
initial cluster build process. The automated initial build process would deploy the virtual distributed switch
adhering to VxRail requirements in the vCenter instance, and then attach the VxRail networks to the
portgroups on the virtual distributed switch.

Starting with VxRail version 7.0.010, if you choose to deploy the VxRail cluster on an external, customer-
supplied vCenter, you have the choice of having the automated initial build process deploy the virtual
distributed switch, or of configuring the virtual network, including the virtual distributed switch, manually
before the initial cluster build process.



If you choose to manually configure the virtual network before initial cluster build, be aware that you
must complete the following prerequisites:

• Unless your data center already has a vCenter instance compatible with VxRail, deploy a vCenter
instance that will serve as the target for the VxRail cluster.
• Unless you are connecting the VxRail cluster to an existing virtual distributed switch, configure a
virtual distributed switch on the target vCenter instance.
• Configure a portgroup for each of the required VxRail networks. Dell Technologies recommends
using naming standards that clearly identify the VxRail network traffic type.
• Configure the VLAN assigned to each required VxRail network on the respective portgroup. The
VLANs for each VxRail network traffic type can be referenced in the ‘VxRail Networks’ section in
the VxRail Network Configuration Table.
• Configure two or four uplinks on the virtual distributed switch to support the physical connectivity
for the VxRail cluster.
• Configure the teaming and failover policies for the distributed port groups. Each port group on the
virtual distributed switch is assigned a teaming and failover policy. You can choose a simple
strategy and configure a single policy that is applied to all port groups, or configure a set of
policies to address requirements at the port group level.

Sample portgroups on customer-supplied virtual distributed switch

Dell Technologies recommends referencing the configuration settings that are applied to the virtual
distributed switch by the automated VxRail initial build process as a baseline. This will ensure a
successful deployment of a VxRail cluster against the customer-supplied virtual distributed switch. The
settings that are used by the automated initial build process can be found in the section Virtual Distributed
Switch Portgroup Default Settings.

Network Configuration Table, Row 41: Enter the name of the portgroup that will enable connectivity for
the VxRail external management network.

Network Configuration Table, Row 42: Enter the name of the portgroup that will enable connectivity for
the VxRail vCenter Server network.

Network Configuration Table, Row 43: Enter the name of the portgroup that will enable connectivity for
the VxRail internal management network.

Network Configuration Table, Row 44: Enter the name of the portgroup that will enable connectivity for
the vMotion network.

Network Configuration Table, Row 45: Enter the name of the portgroup that will enable connectivity for
the vSAN network.

If your plan is to have more than one VxRail cluster deployed against a single customer-supplied virtual
distributed switch, Dell Technologies recommends establishing a distinctive naming standard for the
distributed port groups. This will ease network management and help distinguish the individual VxRail
networks among multiple VxRail clusters.

Configuring portgroups on the virtual distributed switch for any guest networks you want to have is not
required for the VxRail initial build process. These portgroups can be configured after the VxRail initial
build process is complete. Dell Technologies also recommends establishing a distinctive naming standard
for these distributed port groups.

5.12.1 Configure teaming and failover policies for customer-supplied virtual distributed switch
For a customer-supplied virtual distributed switch, you can use the default teaming and failover policy for
VxRail, or customize teaming and failover policies for each portgroup. The default teaming and failover
policy for VxRail uses the ‘Route based on originating virtual port’ for load balancing, ‘link status’ for
failure detection, and places the two uplinks in an active-standby failover configuration.

Sample VxRail default teaming and failover policy

Customizing the teaming and failover policies can also be performed as a post-deployment operation
instead of as a pre-requisite. VxRail will support the teaming and failover policies that are described in the
section Configure teaming and failover policies for VxRail networks.
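
If you script the portgroup preparation, the default policy described above can be expressed through the
vSphere API. The following is a minimal, hedged sketch using the pyVmomi library; connection handling and
portgroup lookup are omitted, and ‘uplink1’ and ‘uplink2’ are placeholder uplink names, not values VxRail
requires.

from pyVmomi import vim

def default_vxrail_teaming_spec(portgroup):
    """Build a reconfig spec matching the default VxRail teaming policy:
    route based on originating virtual port, link-status failure detection,
    and an active/standby uplink pair."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value="loadbalance_srcid")   # originating virtual port
    failure = vim.dvs.VmwareDistributedVirtualSwitch.FailureCriteria()
    failure.checkBeacon = vim.BoolPolicy(value=False)              # link status only
    teaming.failureCriteria = failure
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
    order.activeUplinkPort = ["uplink1"]                           # placeholder uplink names
    order.standbyUplinkPort = ["uplink2"]
    teaming.uplinkPortOrder = order

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.uplinkTeamingPolicy = teaming

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = portgroup.config.configVersion
    spec.defaultPortConfig = port_config
    return spec

# Applying it (portgroup obtained elsewhere through a pyVim connection):
# portgroup.ReconfigureDVPortgroup_Task(spec=default_vxrail_teaming_spec(portgroup))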

5.13 Reserve IP addresses for VxRail vMotion network


An IP address is required for the vMotion network for each ESXi host in the VxRail cluster. A private
address range is acceptable if you decide the vMotion network will not be routable. If your plans include
the ability to migrate virtual machines outside of the VxRail cluster, that needs to be considered when
selecting the IP address scheme.

Starting with VxRail version 7.0.010, you can choose to have the IP addresses assigned automatically
during VxRail initial build, or manually select the IP addresses for each ESXi host. If the VxRail version
is earlier than 7.0.010, the auto-assignment method is the only option.

For the auto-assignment method, the IP addresses for VxRail initial build must be contiguous and specified
as a sequential range. The IP address range must be large enough to cover the number of ESXi hosts
planned for the VxRail cluster. A larger IP address range can be specified to cover planned expansion.
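
To confirm that a reserved range is large enough before the initial build, you can expand the start and
end addresses and compare the result to the planned host count. A minimal sketch, assuming IPv4 and
placeholder addresses:

import ipaddress

def contiguous_range(start: str, end: str) -> list:
    """Expand a start/end pair into the sequential IPs it covers."""
    lo = int(ipaddress.IPv4Address(start))
    hi = int(ipaddress.IPv4Address(end))
    if hi < lo:
        raise ValueError("end address precedes start address")
    return [ipaddress.IPv4Address(n) for n in range(lo, hi + 1)]

pool = contiguous_range("192.168.20.11", "192.168.20.18")   # placeholder range
planned_hosts = 4
assert len(pool) >= planned_hosts, "range too small for the planned cluster"
print(f"{len(pool)} addresses reserved; {len(pool) - planned_hosts} spare for expansion")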

If your plans include expanding the VxRail cluster to deploy nodes in more than one physical rack, you
have the option of whether to stretch the IP subnet for vMotion between the racks, or to use routing
services in your data center instead.

For the IP address auto-assignment method, record the IP address range.

Network Configuration Table, Rows 46–47: Enter the starting and ending IP addresses for vSphere vMotion.

For the manual assignment method, record the IP addresses.

Network Configuration Table, Rows 48–51: Enter the IP addresses for vSphere vMotion.

Enter the subnet mask and default gateway. You can use the default gateway assigned to the VxRail
External Management network, or enter a gateway dedicated for the vMotion network.

Network Configuration Table, Row 52: Enter the subnet mask for vMotion.

Network Configuration Table, Row 53: Enter the default gateway for vMotion.

5.14 Reserve IP addresses for VxRail vSAN network


An IP address is required for the vSAN network for each ESXi host in the VxRail cluster. A private
address range is acceptable unless you decide you may expand beyond one rack and want to use a
different subnet for expansion.

Starting with VxRail version 7.0.010, you can choose to have the IP addresses assigned automatically
during VxRail initial build, or manually select the IP addresses for each ESXi host. If the VxRail version
is earlier than 7.0.010, the auto-assignment method is the only option.

For the auto-assign method, the IP addresses for the initial build of the VxRail cluster must be contiguous,
and specified as a sequential range. The IP address range must be large enough to cover the number of
ESXi hosts planned for the VxRail cluster. A larger IP address range can be specified to cover planned
expansion.

For the IP address auto-assignment method, record the IP address range.

Network Configuration Table, Rows 54–55: Enter the starting and ending IP addresses for vSAN.

For the manual assignment method, record the IP addresses.

Network Configuration Table, Rows 56–59: Enter the IP addresses for vSAN.

Enter the subnet mask for the vSAN network.

Network Configuration Table, Row 60: Enter the subnet mask for vSAN.

5.15 Decide on VxRail logging solution


Decide whether to use your own third-party syslog server, use the vRealize Log Insight solution included
with VxRail, or not enable logging. You can only select the vRealize Log Insight option if:

• You will deploy the vCenter instance included with VxRail onto the VxRail cluster.
• The VxRail cluster to be deployed is at a version earlier than 7.0.010.

If you use a customer-supplied vCenter server, you can either use your own third-party syslog server, or
not enable logging. If you choose the vRealize Log Insight option, the IP address that is assigned to Log
Insight must be on the same subnet as the VxRail management network.

Network Configuration Table, Row 62 or Row 63: Enter the IP address for vRealize Log Insight or the
hostnames of your existing third-party syslog servers. Leave blank for no logging.

5.16 Assign passwords for VxRail management


You will need to assign a password to the accounts that are members of the VxRail management
ecosystem. Use the VxRail Passwords table as a worksheet for your passwords.

Note: The Dell Technologies service representative will need passwords for the VxRail accounts in this
table. For security purposes, you can enter the passwords during the VxRail initialization process, as
opposed to providing them visibly in a document.

• For ESXi hosts, passwords must be assigned to the ‘root’ account. You can use a different password
for each ESXi host or apply the same password to all hosts.



• For VxRail Manager, a password must be assigned to the ‘root’ account [Row 1]. This credential
is for access to the console.
• Access to the VxRail Manager web interface uses the ‘administrator@<SSO Domain>’
credentials.
- If you deploy the VxRail vCenter Server, VxRail Manager and vCenter share the same default
administrator login, ‘administrator@vsphere.local’. Enter the password that you want to use
[Row 2].
- If you use a customer-supplied vCenter server, VxRail Manager will use the same
‘administrator@<SSO Domain>’ login credentials you use for access to the customer-
supplied vCenter server.

• If you deploy the VxRail vCenter Server:


- Enter the ‘root’ password for the VxRail vCenter Server [Row 3].
- Enter a password for ‘management’ for the VxRail vCenter Server [Row 4].
- A Platform Services controller will be deployed. Enter the ‘root’ password for the Platform
Services controller [Row 5].

• If you deploy vRealize Log Insight:


- Enter a password for ‘root’ [Row 6].
- Enter a password for ‘admin’ [Row 7].
Passwords must adhere to VMware vSphere complexity rules. Passwords must contain between eight
and 20 characters with at least one lowercase letter, one uppercase letter, one numeric character, and
one special character. For more information about password requirements, see the vSphere password
and vCenter Server password documentation.
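
As a convenience when filling in the password worksheet, the complexity rules summarized above can be
checked with a short script. This is a minimal sketch of only those rules; it does not replace the full
vSphere password policy.

import re

def meets_vsphere_rules(password: str) -> bool:
    """Check the rules summarized above: 8-20 characters with at least one
    lowercase letter, one uppercase letter, one digit, and one special character."""
    return (8 <= len(password) <= 20
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_vsphere_rules("VxRail-2020!"))  # True
print(meets_vsphere_rules("weakpass"))      # False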

5.17 Prepare for Dell EMC SmartFabric Services enablement

Note: Skip this section if you do not plan to enable Dell EMC SmartFabric Services to pass control of
switch configuration to VxRail.

The planning and preparation tasks for the deployment and operations of a VxRail cluster on a network
infrastructure enabled with SmartFabric Services differ from connecting a VxRail cluster to a standard
data center network. The basic settings that are required for the initial buildout of the network
infrastructure with SmartFabric Services are outlined in this section.

Enabling the SmartFabric personality on a Dell Ethernet switch that is qualified for SmartFabric Services
initiates a discovery process for other connected switches with the same SmartFabric personality for the
purposes of forming a unified switch fabric. A switch fabric can start as small as two leaf switches in a
single rack, then expand automatically by enabling the SmartFabric personality on connected spine
switches, and connected leaf switches in expansion racks.

Both the Dell Ethernet switches and VxRail nodes advertise themselves at the time of power-on on this
same internal discovery network. The SmartFabric-enabled network also configures an ‘untagged’ virtual
network on the switch fabric to enable client onboarding through a jump port for access to VxRail
Manager to perform cluster implementation. During VxRail initial configuration through VxRail Manager,
the required VxRail networks are automatically configured on the switch fabric.



• Network connectivity to out-of-band management for each switch that is enabled with the
SmartFabric personality is a requirement for VxRail. A reserved IP address is required for each
switch.
• A separate Ethernet switch outside of SmartFabric is required to support connectivity to switch
management through the out-of-band network.
• A reserved IP address for iDRAC connectivity to each VxRail node on this same separate
management switch is recommended.

Logical networks for single-tier and two-tier SmartFabric deployments

• The Dell EMC Open Management Network Interface (OMNI) plug-in must be deployed on the
vCenter instance to support automated switch management after the VxRail cluster is built. The
Dell EMC OMNI vCenter plug-in is required for each Dell EMC switch fabric pair, and requires
network properties to be set during the deployment process.

Network Configuration Table, Rows 64 and 65: Reserve an IP address for out-of-band management of each
switch in the SmartFabric-enabled network.

Network Configuration Table, Row 66: Enter the IP address for the Dell EMC OMNI vCenter plug-in.

Network Configuration Table, Row 67: Enter the subnet mask for the Dell EMC OMNI vCenter plug-in.

Network Configuration Table, Row 68: Enter the gateway for the Dell EMC OMNI vCenter plug-in.

For complete details on the settings that are needed during the planning and preparation phase for a
SmartFabric-enabled network, see the ‘Dell EMC VxRail™ with SmartFabric Network Services Planning
and Preparation Guide’ on the Dell Technologies VxRail Technical Guides site.



6 Configure the Upstream Network for VxRail
The upstream network from the VxRail cluster must be configured to allow passage for VxRail networks
that require external access. The switches supporting direct connectivity to the VxRail cluster should pass
the external-facing VxRail network traffic through a pair of switch ports upstream to a pair of switch ports
on the next network layer (spine). The switches at the next layer need to direct this network traffic to
the appropriate data center services and end-user community.

The VxRail External Management Network should be accessible to your location’s IT infrastructure and
personnel only. IT administrators require access to this network for day-to-day management of the VxRail
cluster, and the VxRail cluster is dependent on outside applications such as DNS and NTP to operate
correctly.

Logical Network including Upstream Elements

VxRail Virtual Machine Networks support access to applications and software that is deployed on the
virtual machines on the VxRail cluster. While you must create at least one VxRail Virtual Machine network
at VxRail initial implementation, additional VxRail Virtual Machine networks can be added to support the
end-user community. The spine switch must be configured to direct the traffic from these VxRail Virtual
Machine networks to the appropriate end-users.

The VxRail Witness Traffic Separation Network is required for a 2-Node cluster and is an optional best
practice if you plan to deploy a stretched cluster. The VxRail Witness traffic separation network enables
connectivity between the VxRail nodes and the witness at an offsite location. The remote-site witness
monitors the health of the vSAN datastore on the VxRail cluster over this network.

Using the VxRail Network Configuration table, perform the following steps:

Step 1. Configure the External Management Network VLAN (Row 1) on the spine switch.
Step 2. Configure all of the VxRail Virtual Machine Network VLANs (Rows 5 and 6) on the spine switch.
Step 3. If applicable, configure the VxRail Witness Traffic Separation Network VLAN (Row 71) on the spine
switch.
Step 4. Create a logical pair (port channel) on the spine switch ports that will connect downstream to
the uplinks on the TOR switch or switches.
• Ensure that the port channel settings (active/passive) on the spine switch match the settings on the
TOR switch.
• Configure all the external VLANs on this port channel.
Step 5. Enable routing services or configure additional logical pairs as necessary to direct VxRail
network traffic to the appropriate end destination.

6.1 Setting up the network switch for VxRail connectivity

Note: You can skip this section if you plan to enable Dell EMC SmartFabric Services and extend VxRail
automation to the TOR switch layer.

For the VxRail initialization process to pass validation and build the cluster, you must configure the
ports that VxRail will connect to on your switch before you plug in the VxRail nodes and power them on.

Follow these steps to set up your switch:

1. Plan switch configuration.


2. Plan switch port configuration.
3. Configure ports and VLANs on your switches.

Note: This section provides guidance for preparing and setting up your switch for VxRail. Be sure to
follow your vendor’s documentation for specific switch configuration activities and for best practices for
performance and availability.

6.1.1 Plan switch base configuration


6.1.1.1 Multicast for VxRail Internal Management network
VxRail Appliances have no backplane, so communication between the nodes is facilitated through the
network switch. This communication uses VMware’s Loudmouth auto-discovery capabilities, based on the
RFC-recognized Zero Configuration Networking (zeroconf) protocol. New VxRail nodes advertise themselves
on the network using the VMware Loudmouth service, and are discovered by VxRail Manager, which also uses
the Loudmouth service. Because the Loudmouth service depends on multicasting, multicast is required for
the VxRail internal management network.

The network switch ports that connect to VxRail nodes must allow for pass-through of multicast traffic on
the VxRail Internal Management VLAN. Multicast is not required on your entire network, just on the ports
connected to VxRail nodes.

VxRail creates very little traffic through multicasting for auto-discovery and device management.
Furthermore, the network traffic for the Internal Management network is restricted through a VLAN. You
can choose to enable MLD Snooping and MLD Querier on the VLAN if supported on your switches.

If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier
must be disabled.



6.1.1.2 Unicast or multicast for VxRail vSAN network
Starting in VxRail v4.5.0, all vSAN traffic uses unicast instead of multicast. This change helps reduce
network configuration complexity and simplifies switch configuration.

For VxRail v4.5.0 and earlier, multicast is required for the vSAN VLAN. One or more network switches
that connect to VxRail must allow for pass-through of multicast traffic on the vSAN VLAN. Multicast is not
required on your entire network, just on the ports connected to VxRail.

VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. There is minimal
impact on network overhead, as management traffic is nominal. You can limit multicast traffic by enabling
IGMP Snooping and IGMP Querier. We recommend enabling both IGMP Snooping and IGMP Querier if
your switch supports them.

IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces
are connected to hosts or other devices that are interested in receiving this traffic. Using the interface
information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to
avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers
to help manage IGMP membership report forwarding. It also responds to topology change notifications.

IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP
membership reports from active members, and allows updates to group membership tables. By default,
most switches enable IGMP Snooping but disable IGMP Querier. You will need to change the settings if
this is the case.

If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP
Querier must be disabled.

For questions about how your switch handles multicast traffic, contact your switch vendor.

6.1.1.3 Enable uplinks to pass inbound and outbound VxRail network traffic
The uplinks on the switches must be configured to allow passage for external network traffic to
administrators and end-users. This includes the VxRail external management network (or combined
VxRail management network earlier than version 4.7) and Virtual Machine network traffic. The VLANs
representing these networks need to be passed upstream through the uplinks. For VxRail clusters
running at version 4.7 or later, the VxRail internal management network must be blocked from outbound
upstream passage.

If the VxRail vMotion network is going to be configured to be routable outside of the top-of-rack switches,
include the VLAN for this network in the uplink configuration. This is to support the use case where virtual
machine mobility is desired outside of the VxRail cluster.

If you plan to expand the VxRail cluster beyond a single rack, configure the VxRail network VLANs for
either stretched Layer 2 networks across racks, or pass upstream to terminate at Layer 3 routing services
if new subnets will be assigned in expansion racks.

6.1.1.4 Enable Inter-Switch communication


In a dual-switch environment, configure the ports that are used for inter-switch communication to allow
passage for all the VxRail virtual networks.



6.1.2 Plan switch port configuration
6.1.2.1 Determine switch port mode
Configure the port mode on your switch based on the plan for the VxRail logical networks, and whether
VLANs will be used to segment VxRail network traffic. Ports on a switch operate in one of the following
modes:

• Access mode – The port accepts untagged packets only and distributes the untagged packets to
all VLANs on that port. This is typically the default mode for all ports. This mode should only be
used for supporting VxRail clusters for test environments or temporary usage.
• Trunk mode – When this port receives a tagged packet, it passes the packet to the VLAN
specified in the tag. To configure the acceptance of untagged packets on a trunk port, you must
first configure a single VLAN as a “Native VLAN.” The “Native VLAN” is the one VLAN designated
to carry all untagged traffic on the port.
• Tagged-access mode – The port accepts tagged packets only.

6.1.2.2 Disable link aggregation on switch ports supporting VxRail networks


Do not enable link aggregation, including protocols such as LACP and EtherChannel, on any switch ports
that are connected to VxRail node ports that are supporting VxRail management network traffic. During
the VxRail initial build process, either 2 or 4 ports will be selected to support the VxRail management
networks and any guest networks configured at that time. The VxRail initial build process will configure a
virtual distributed switch on the cluster, and then configure a portgroup on that virtual distributed switch
for each VxRail management network.

Unused VxRail node ports configured for non-VxRail network traffic

If your requirements include using any spare network ports on the VxRail nodes that were not configured
for VxRail network traffic for other use cases, then link aggregation can be configured to support that
network traffic. These can include any unused ports on the network daughter card (NDC) or on the
optional PCIe adapter cards. Updates can be configured on the virtual distributed switch deployed during
VxRail initial build to support the new networks, or a new virtual distributed switch can be configured.
Since the initial virtual distributed switch is under the management and control of VxRail, the best practice
is to configure a separate virtual distributed switch on the vCenter instance to support these networking
use cases.



6.1.2.3 Limit spanning tree protocol on VxRail switch ports
Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail
nodes. Certain Spanning Tree states can place restrictions on network traffic and can force the port into
an unexpected timeout mode. These Spanning Tree conditions can disrupt normal VxRail operations
and impact performance.

If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to
VxRail nodes are configured with a setting such as ‘Portfast’, or set as an edge port. These settings set
the port to forwarding state, so no disruption occurs. Because vSphere virtual switches do not support
STP, physical switch ports that are connected to an ESXi host must have a setting such as ‘Portfast’
configured if spanning tree is enabled to avoid loops within the physical switch network.

6.1.2.4 Enable flow control


Network instability or congestion contributes to low performance in VxRail and has a negative effect on
vSAN I/O operations on the datastore. VxRail recommends enabling flow control on the switch to assure
reliability on a congested network. Flow control is a switch feature that helps manage the rate of data
transfer to avoid buffer overrun. During periods of high congestion and bandwidth consumption, the
receiving device sends pause frames to the sender to slow transmission and avoid buffer overrun. The
absence of flow control on a congested network can result in increased error rates and force network
bandwidth to be consumed for error recovery. The flow control settings can be adjusted depending on
network conditions, but VxRail recommends that flow control be set to ‘receive on’ and ‘transmit off’.

6.1.3 Configure ports and VLANs on your switches


Now that you understand the switch requirements, it is time to configure your switches. The VxRail
network can be configured with or without VLANs. For performance and scalability, we highly
recommend configuring VxRail with VLANs. As listed in the VxRail Setup Checklist, you will be
configuring the following VLANs:

For VxRail clusters using version 4.7 or later:

• VxRail External Management VLAN (default is untagged/native).
• VxRail Internal Management VLAN ‒ ensure that multicast is enabled on this VLAN.
For VxRail clusters using versions earlier than 4.7:

• VxRail Management VLAN (default is untagged/native) ‒ ensure that multicast is enabled on this
VLAN.
For VxRail clusters using version 4.5 or later:
• vSAN VLAN ‒ ensure that unicast is enabled.
For VxRail clusters using versions earlier than 4.5:
• vSAN VLAN ‒ ensure that multicast is enabled. Enabling IGMP snooping and querier is
recommended.
For all VxRail clusters:

• vSphere vMotion VLAN



• VM Networks VLAN (minimum one)

VxRail Logical Networks: Version earlier than 4.7 (left) and 4.7 or later (right)

VxRail Logical Networks: 2-Node Cluster with Witness



• The additional VxRail Witness traffic separation VLAN to manage traffic between the VxRail
cluster and the witness. This is only needed if deploying VxRail stretched-cluster or 2-Node
cluster.
Using the VxRail Network Configuration table, perform the following steps (a validation sketch follows the list):

1. Configure a VLAN on the switches for each VxRail logical network.


2. Configure each switch port that will be connected to a VxRail node.

• Set the switch port mode to the appropriate setting.


• Set the port to the appropriate speed or to auto-negotiate speed.

3. Configure the External Management VLAN (Row 1) on the switch ports. If you entered “Native
VLAN,” set the ports on the switch to accept untagged traffic and tag it to the native management
VLAN ID. Untagged management traffic is the default management VLAN setting on VxRail.
4. For VxRail version 4.7 and later, configure the Internal Management VLAN (Row 2) on the
switch ports.
5. Allow multicast on the VxRail switch ports to support the Internal Management network.
6. Configure a vSphere vMotion VLAN (Row 3) on the switch ports.
7. Configure a vSAN VLAN (Row 4) on the switch ports. For releases prior to VxRail v4.5.0, allow
multicast on this VLAN. For VxRail v4.5.0 and later, allow unicast traffic on this VLAN.
8. Configure the VLANs for your VM Networks (Row 6) on the switch ports.
9. Configure the optional VxRail Witness Traffic Separation VLAN (Row 71) on the switch ports if
required.
10. Configure the switch uplinks to allow the External Management VLAN (Row 1) and VM
Network VLANs (Row 6) to pass through, and optionally the vSphere vMotion VLAN and vSAN
VLAN. If a vSAN witness is required for the VxRail cluster, include the VxRail Witness Traffic
Separation VLAN (Row 71) on the uplinks.
11. If deploying dual switches, configure the inter-switch links to allow all VxRail VLANs to pass through.
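
Before touching the switches, the VLAN plan itself can be sanity-checked. The following is a minimal
Python sketch, illustrative only and using hypothetical VLAN IDs, that flags out-of-range IDs and
accidental VLAN reuse across the VxRail networks listed above; it does not communicate with any switch.

```python
# Hypothetical VLAN plan following the steps above; substitute your own IDs.
vlan_plan = {
    "External Management": 0,     # 0 = untagged/native, the VxRail default
    "Internal Management": 3939,  # default device discovery VLAN
    "vSphere vMotion": 102,
    "vSAN": 103,
    "VM Network 1": 110,
    "Witness Traffic Separation": None,  # None = optional network not planned
}

def check_plan(plan):
    problems, seen = [], {}
    for network, vlan in plan.items():
        if vlan is None:
            continue  # optional network not in scope
        if not 0 <= vlan <= 4094:
            problems.append(f"{network}: VLAN {vlan} outside 0-4094")
        elif vlan != 0 and vlan in seen:
            problems.append(f"{network}: VLAN {vlan} already used by {seen[vlan]}")
        else:
            seen[vlan] = network
    return problems

for line in check_plan(vlan_plan) or ["VLAN plan looks consistent"]:
    print(line)
```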

6.2 Confirm your data center network


Upon completion of the switch configuration, there should be unobstructed network paths between the
switch ports and the ports on the VxRail nodes. The VxRail management network and VM network should
have unobstructed passage to your data center network. Before forming the VxRail cluster, the VxRail
initialization process will perform several verification steps, including:

• Verifying switch and data center environment supportability


• Verifying passage of VxRail logical networks
• Verifying accessibility of required data center applications
• Verifying compatibility with the planned VxRail implementation
Certain data center environment and network configuration errors cause the validation to fail, and the
VxRail cluster will not be formed. When validation fails, the data center settings and switch configurations
must undergo troubleshooting to resolve the problems reported.

Confirm the settings on the switch, using the switch vendor instructions for guidance:



1. External management traffic will be untagged on the native VLAN by default. If a tagged VLAN is
used instead, the switches must be customized with the new VLAN.
2. Internal device discovery network traffic uses the default VLAN of 3939. If this has changed, all
ESXi hosts must be customized with the new VLAN, or device discovery will not work.
3. Confirm that the switch ports that will attach to VxRail nodes allow passage of all VxRail network
VLANs.
4. Confirm that the switch uplinks allow passage of external VxRail networks.
5. If you have two or more switches, confirm an inter-switch link is configured between them to
support passage of the VxRail network VLANs.

6.3 Confirm your firewall settings


If you have positioned a firewall between the switches that are planned for VxRail and the rest of your
data center network, be sure that the required firewall ports are open for VxRail network traffic.

1. Verify that VxRail can communicate with your DNS server.


2. Verify that VxRail can communicate with your NTP server.
3. Verify that your IT administrators can communicate with the VxRail management system.
4. If you plan to use a customer-supplied vCenter, verify open communication between the vCenter
instance and the VxRail managed hosts.
5. If you plan to use a third-party syslog server instead of Log Insight, verify open
communication between the syslog server and the VxRail management components.
6. If you plan to deploy a separate network for ESXi host management (iDRAC), verify that your IT
administrators can communicate with the iDRAC network.
7. If you plan to use an external Secure Remote Services (SRS) gateway in your data center
instead of SRS-VE deployed in the VxRail cluster, verify the open communications between
VxRail management and the SRS gateway.
See VxRail Open Ports Requirements for more information about VxRail port requirements. A basic connectivity sketch follows.
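
As an informal pre-check of these firewall paths, DNS resolution and TCP reachability can be probed from
a host on the VxRail external management network. This is a minimal Python sketch with hypothetical
hostnames and ports; it supplements, but does not replace, the firewall rules in VxRail Open Ports
Requirements.

```python
import socket

# Hypothetical hostnames and ports; substitute values from your environment.
DNS_TEST_NAME = "vcenter.example.local"  # a record your DNS server should resolve
TCP_CHECKS = [
    ("vcenter.example.local", 443),  # customer-supplied vCenter
    ("syslog.example.local", 514),   # third-party syslog server (TCP)
]

def check_dns(name):
    try:
        return f"DNS OK: {name} -> {socket.gethostbyname(name)}"
    except socket.gaierror as err:
        return f"DNS FAILED for {name}: {err}"

def check_tcp(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return f"TCP OK: {host}:{port}"
    except OSError as err:
        return f"TCP FAILED for {host}:{port}: {err}"

print(check_dns(DNS_TEST_NAME))
for host, port in TCP_CHECKS:
    print(check_tcp(host, port))
```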

6.4 Confirm your data center environment


1. Confirm that you cannot ping any IP address that is reserved for VxRail management
components (a verification sketch follows this list).
2. Confirm that your DNS servers are reachable from the VxRail external management network.
3. Confirm the forward and reverse DNS entries for the VxRail management components.
4. Confirm that your management gateway IP address is accessible.
5. If you decide to use the dedicated vMotion TCP-IP stack instead of the default TCP-IP stack,
confirm that your vMotion gateway IP address is accessible.
6. If you have configured NTP servers, or a third-party syslog server, confirm that you can reach
them from your configured VxRail external management network.
7. If you plan to use a customer-supplied vCenter, confirm that it is accessible from the VxRail
external management network.



8. If you plan to deploy a witness at a remote site to monitor vSAN, and plan to enable Witness
Traffic Separation, confirm that there is a routable path between the witness and this network.
9. If you plan to install the VxRail nodes in more than one rack, and you plan to terminate the VxRail
networks at the ToR switches, verify that routing services have been configured upstream for the
VxRail networks.
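
Steps 1 through 3 above lend themselves to a quick script: confirming that reserved VxRail IP addresses
do not answer a ping, and that forward and reverse DNS records agree. This minimal Python sketch uses
hypothetical addresses and hostnames; the ping flag shown is Linux/macOS syntax and varies by
operating system.

```python
import socket
import subprocess

# Hypothetical values; substitute your reserved IPs and management hostnames.
RESERVED_IPS = ["10.10.10.100", "10.10.10.101"]
MGMT_HOSTS = ["vxrail-manager.example.local", "esxi-01.example.local"]

def ip_is_free(ip):
    # A reserved-but-unassigned address should NOT answer ('-c 1' is Linux/macOS syntax).
    result = subprocess.run(["ping", "-c", "1", ip], capture_output=True)
    return result.returncode != 0

for ip in RESERVED_IPS:
    print(f"{ip}: {'free (good)' if ip_is_free(ip) else 'IN USE (conflict)'}")

for host in MGMT_HOSTS:
    try:
        ip = socket.gethostbyname(host)             # forward lookup
        reverse_name = socket.gethostbyaddr(ip)[0]  # reverse lookup
        print(f"{host} -> {ip} -> {reverse_name}")
    except OSError as err:
        print(f"{host}: lookup failed: {err}")
```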



7 Preparing to Build the VxRail Cluster
The steps that are outlined in this section will be performed by Dell Technologies professional services.
They are described here to provide insight into the activities to be performed during the delivery
engagement.

7.1 Configuring a workstation/laptop for VxRail initialization


A workstation/laptop with a web browser for the VxRail user interface is required to perform the
initialization process. It must be plugged into the top-of-rack switch, or be able to logically reach the
VxRail external management VLAN from elsewhere on your network; for example, a jump server. Once
the VxRail initialization process is complete, the switch port or jump host is no
longer required to manage VxRail.

Note: Do not try to plug your workstation/laptop directly into a VxRail server node to connect to the VxRail
management interface for initialization. It must be plugged into your network or switch, and the
workstation/laptop must be logically configured to reach the necessary networks.

A supported web browser is required to access the VxRail management interface. The latest versions of
Firefox, Chrome, and Internet Explorer 10+ are all supported. If you are using Internet Explorer 10+ and
an administrator has set your browser to “compatibility mode” for all internal websites (local web
addresses), you will get a warning message from VxRail. Contact your administrator to whitelist URLs
mapping to the VxRail user interface.

To access the VxRail management interface to perform initialization, you must use the temporary,
preconfigured VxRail initial IP address: 192.168.10.200/24. This IP address is automatically changed
during VxRail initialization to your desired permanent address, which is assigned to VxRail Manager
during cluster formation.

Configuration | VxRail IP address/netmask | Workstation example IP address | Subnet mask | Gateway
Initial (temporary) | 192.168.10.200/24 | 192.168.10.150 | 255.255.255.0 | 192.168.10.254
Post-configuration (permanent) | 10.10.10.100/24 | 10.10.10.150 | 255.255.255.0 | 10.10.10.254

Your workstation/laptop must be able to reach both the temporary VxRail initial IP address and the
permanent VxRail Manager IP address (Row 14 from the VxRail Network Configuration table). VxRail
initialization will remind you that you might need to reconfigure your workstation/laptop network settings to
access the new IP address.

It is best practice to give your workstation/laptop or your jump server two IP addresses on the same
network port, which allows for a smoother experience. Depending on your workstation/laptop, this can be
implemented in several ways (such as dual-homing or multi-homing). Otherwise, change the IP address
on your workstation/laptop when instructed to and then return to VxRail Manager to continue with the
initialization process.
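
With a dual-homed workstation, reachability of both addresses can be confirmed up front. This is a
minimal Python sketch using the temporary address from the table above and a hypothetical permanent
address; note that the permanent VxRail Manager address only answers after cluster formation completes.

```python
import socket

# Temporary address from the table above; the permanent address is hypothetical.
TEMPORARY_IP = "192.168.10.200"
PERMANENT_IP = "10.10.10.100"   # substitute your VxRail Manager IP (Row 14)

def https_reachable(ip, timeout=5):
    # VxRail Manager listens on HTTPS; a TCP connect to 443 is a cheap probe.
    try:
        with socket.create_connection((ip, 443), timeout=timeout):
            return True
    except OSError:
        return False

for ip in (TEMPORARY_IP, PERMANENT_IP):
    status = "reachable" if https_reachable(ip) else "NOT reachable"
    print(f"{ip}:443 {status}")
```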



If you cannot reach the VxRail initial IP address, Dell Technologies support team can configure a custom
IP address, subnet mask, and gateway on VxRail Manager before initialization.

Note: If a custom VLAN ID will be used for the VxRail management network other than the default “Native
VLAN”, ensure the workstation/laptop can also access this VLAN.

7.2 Perform initialization to create a VxRail cluster


If you have successfully followed all the steps that are listed in this document, you are ready to move to
the final phase: Connect the laptop or workstation to a switch port, and perform VxRail initialization.
These steps are done by Dell Technologies service representatives and are included here to help you
understand the complete process.

• Before coming on-site, the Dell Technologies service representative will have contacted you to
capture and record the information that is described in the VxRail Network Configuration Table and
walk through the VxRail Setup Checklist.
• If your planned VxRail deployment requires a Witness at a remote data center location, the
Witness virtual appliance is deployed.
• If your planned deployment includes the purchase of Dell Ethernet switches and professional
services to install and configure the switches to support the VxRail cluster, that activity is
performed before VxRail deployment activities commence.
• Install the VxRail nodes in a rack or multiple racks in the data center. If Dell professional services
are not installing the switches, install the network switches supporting the VxRail cluster into the
same racks for ease of management.
• Attach Ethernet cables between the ports on the VxRail nodes and switch ports that are configured
to support VxRail network traffic.
• Power on three or four initial nodes to form the initial VxRail cluster. Do not turn on any other
VxRail nodes until you have completed the formation of the VxRail cluster with the first three or
four nodes.
• Connect a workstation/laptop configured for VxRail initialization to access the VxRail external
management network on your selected VLAN. It must be either plugged into the switch or able to
logically reach the VxRail external management VLAN from elsewhere on your network.
• Open a browser to the VxRail initial IP address to begin the VxRail initialization process.
• The Dell Technologies service representative will populate the input screens on the menu with the
data collected and recorded in the VxRail Network Configuration Table.
• If you have enabled Dell EMC SmartFabric Services, VxRail will automatically configure the
switches that are connected to VxRail nodes using the information populated on the input screens.
• VxRail performs the verification process, using the information input into the menus.
• After validation is successful, the initialization process will begin to build a new VxRail cluster.



• The new permanent IP address for VxRail Manager will be displayed.
  - If you configured the workstation/laptop to enable connectivity to both the temporary VxRail
    IP address and the new permanent IP address, the browser session will make the switch
    automatically. If not, you must manually change the IP settings on your workstation/laptop to
    be on the same subnet as the new VxRail IP address.
  - If your workstation/laptop cannot connect to the new IP address that you configured, you will
    get a message to fix your network and try again. If you are unable to connect to the new IP
    address after 20 minutes, VxRail will revert to its unconfigured state, and you will need to re-
    enter your configuration at the temporary VxRail IP address.
  - After the build process starts, if you close your browser, you will need to browse to the new,
    permanent VxRail IP address.
• Progress is shown as the VxRail cluster is built.
• When you see the Hooray! page, VxRail initialization is complete and a new VxRail cluster is
built. Click the Manage VxRail button to continue to VxRail management. You should also
bookmark this IP address in your browser for future use.
• Connect to VxRail Manager using either the VxRail Manager IP address (Row 14) or the fully
qualified domain name (FQDN) (Row 13) that you configured on your DNS server.
• If the Dell EMC SmartFabric services were enabled on the switch infrastructure, the Dell EMC
OMNI plug-in is deployed on the vCenter instance.



8 Additional VxRail Network Considerations
This section provides guidance on additional actions that can be performed on the VxRail network to
address specific requirements and use cases.

8.1 Configure teaming and failover policies for VxRail networks


Starting with VxRail version 4.7.410, the customer can customize the teaming and failover policies for the
VxRail networks. If your VxRail version is 7.0.010 or later, you can also customize the failover settings for
the uplinks to an active-active configuration.

VxRail will apply a default teaming and failover policy for each VxRail network during the initial build
operation.

• The default load-balancing policy is ‘Route based on originating virtual port’ for all VxRail network
traffic.
• The default network failure detection setting is ‘link status only’. This setting should not be
changed. VMware recommends having 3 or more physical NICs in the team for ‘beacon probing’
to work correctly.
• The setting for ‘Notify switches’ is set to ‘Yes’. This instructs the virtual distributed switch to notify
the adjacent physical switch of a failover.
• The setting for ‘Failback’ is set to ‘Yes’. This instructs a failed adapter to take over for the standby
adapter once it is recovered and comes online again, if the uplinks are in an active-standby
configuration.
• The failover order for the uplinks is dependent on the VxRail network configured on the portgroup.

Default VDS teaming and failover policy for vSAN network configured with 2 VxRail
ports



Starting with VxRail version 4.7.410, the teaming and failover policy can be modified after the VxRail
initial build. Note that a load-balancing policy that has a dependency on a physical switch setting such as
link aggregation is not supported, as link aggregation at the physical switch level is not supported with
VxRail. The following load-balancing policies are supported for VxRail clusters running version 4.7.410 or
later:

• Route based on the originating virtual port

After the virtual switch selects an uplink for a virtual machine or VMkernel adapter, it always
forwards traffic through the same uplink. This option makes a simple selection based on the
available physical uplinks. However, this policy does not attempt to load balance based on
network traffic.

• Route based on source MAC hash

The virtual switch selects an uplink for a virtual machine based on the virtual machine MAC
address. While it requires more resources than using the originating virtual port, it has more
flexibility in uplink selection. This policy does not attempt to load balance based on network traffic
analysis.

• Use explicit failover order

Always use the highest order uplink that passes failover detection criteria from the active
adapters. No actual load balancing is performed with this option.

• Route based on physical NIC load

The virtual switch monitors network traffic, and makes adjustments on overloaded uplinks by
moving traffic to another uplink. This option does use additional resources to track network traffic.

VxRail does not support the ‘Route based on IP Hash’ policy, as there is a dependency on the logical link
setting of the physical port adapters on the switch, and link settings such as static port channels, LAGs,
and LACP are not supported with VxRail.

Starting with VxRail version 7.0.010, the ‘Failover Order’ setting on the teaming and failover policy on the
VDS portgroups supporting VxRail networks can be changed. The default failover order for the uplinks
on each portgroup configured during VxRail initial build is described in the section Default failover order
policy. For any portgroup configured during VxRail initial build to support VxRail network traffic, an uplink
in ‘Standby’ mode can be moved into ‘Active’ mode to enable an ‘Active/Active’ configuration. This action
can be performed after the VxRail cluster has completed the initial build operation.

Moving an uplink that is configured as ‘Unused’ for a portgroup supporting VxRail network traffic into
either ‘Active’ mode or ‘Standby’ mode does not automatically activate the uplink and increase bandwidth
for that portgroup. Bandwidth optimization is dependent on the load-balancing settings on the upstream
switches, and link aggregation is not supported on those switch ports that are configured to support
VxRail network traffic.
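
The effective policy can also be inspected programmatically. The following read-only sketch uses the
open-source pyVmomi SDK (not a VxRail tool) to print the load-balancing policy and uplink failover order
for each VDS portgroup; the vCenter hostname and credentials are hypothetical.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter and credentials; substitute your own.
ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        teaming = getattr(pg.config.defaultPortConfig, "uplinkTeamingPolicy", None)
        if teaming is None or teaming.uplinkPortOrder is None:
            continue  # skip portgroups without a VMware uplink teaming policy
        order = teaming.uplinkPortOrder
        print(f"{pg.name}: policy={teaming.policy.value}, "
              f"active={list(order.activeUplinkPort)}, "
              f"standby={list(order.standbyUplinkPort)}")
finally:
    Disconnect(si)
```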



Sample failover order setting set to active/active

8.2 Support for NSX


VxRail is fully compatible with other software in the VMware ecosystem, including VMware NSX. See the
VMware Product Interoperability Matrices for specific versions of NSX supported on vSphere.

8.3 Using unassigned VxRail physical ports


Starting with version 7.0.010 of VxRail, you can configure the ports on the optional PCIe adapter cards to
support VxRail network traffic. Unless you are deploying the VxRail cluster to a customer-supplied virtual
distributed switch, this is only supported as a post-deployment activity.

VxRail node with NDC ports and ports from optional PCIe adapter card

Network redundancy across NDC and PCIe Ethernet ports can be enabled by reconfiguring the VxRail
networks. The table below describes the supported starting and ending network reconfigurations:

Starting configuration | Ending configuration
2 NDC ports | 1 NDC port & 1 PCIe port
2 NDC ports | 2 NDC ports & 2 PCIe ports
4 NDC ports | 2 NDC ports & 2 PCIe ports
4 NDC ports | 1 NDC port & 1 PCIe port



The following conditions apply when configuring PCIe ports for VxRail network traffic:

• The first port configured for VxRail networking, commonly known as ‘vmnic0’ or ‘vmnic1’, must be
reserved for VxRail management and node discovery. Do not migrate VxRail management or
node discovery off of this first reserved port.
• The switch ports enabling connectivity to the PCIe-based ports must be properly configured to
support VxRail network traffic.
• All of the network ports supporting VxRail network traffic must be running the same speed.
For VxRail versions prior to 7.0.010, VxRail nodes ordered with extra physical network ports can be
configured for non-VxRail system traffic.

Note: You must follow the official instructions/procedures from VMware and Dell Technologies for these
operations.

The supported operations include:


• Create a new vSphere Standard Switch (VSS), and connect unused ports to the VSS.
• Connect unused ports to new port groups on the default vSphere Distributed Switch.
• Create a new vSphere Distributed Switch (VDS), add VxRail nodes to the new VDS, and connect
their unused network ports to the VDS.
• Create new VMKernel adapters, and enable services of IP Storage and vSphere Replication.
• Create new VM Networks, and assign them to new port groups.
If your VxRail version is 7.0.010 or later, NIC redundancy can be enabled by reconfiguring the VxRail
networks across NDC and PCIe Ethernet ports.

• The first port configured for VxRail networking, commonly known as ‘vmnic0’ or ‘vmnic1’, must be
reserved for VxRail management and node discovery. Do not migrate VxRail management or
node discovery off of this first reserved port.
• The network reconfiguration requires a one-to-one swap. For example, a VxRail network that is
currently running on two NDC ports can be reconfigured to run on one NDC port and one PCIe
port. The network cannot be reconfigured to swap one NDC port for two PCIe ports.
• The number of ports reserved per node during VxRail initial build (either 2 or 4) cannot be altered
by a reconfiguration across NDC and PCIe ports.
The following operations are unsupported in versions earlier than VxRail 7.0.010:

• Migrating or moving VxRail system traffic to the optional ports. VxRail system traffic includes the
management, vSAN, vCenter Server, and vMotion Networks.
• Migrating VxRail system traffic to other port groups.
• Migrating VxRail system traffic to another VDS.

Note: Performing any unsupported operations will impact the stability and operations of the VxRail
cluster, and may cause a failure in the VxRail cluster.



A VxRail Network Configuration Table
The Dell Technologies service representative uses a data collection workbook to capture the settings that
are needed to build the VxRail cluster. The workbook includes the following information:

Row | Topic | Category | Description
1 | VxRail Networks | External Management | Untagged traffic is recommended on the Native VLAN. If you want the host to send only tagged frames, manually configure the VLAN on each ESXi™ host using DCUI, and set tagging for your management VLAN on your switch before you deploy VxRail.
2 | VxRail Networks | Internal Management | This network traffic should stay isolated on the top-of-rack switches. The default VLAN ID is 3939.
3 | VxRail Networks | vMotion |
4 | VxRail Networks | vSAN |
5 | VxRail Networks | Guest Network(s) | Network Name
6 | VxRail Networks | Guest Network(s) | VLAN
7 | VxRail Management | Subnet Mask | Subnet mask for VxRail External Management Network
8 | VxRail Management | Default Gateway | Default gateway for VxRail External Management Network
9 | System | Global Settings | Time zone
10 | System | Global Settings | NTP servers
11 | System | Global Settings | DNS servers
12 | System | Global Settings | Top Level Domain
13 | VxRail Manager | | Hostname
14 | VxRail Manager | | IP Address
15 | ESXi Hostnames | VxRail auto-assign method | Prefix
16 | ESXi Hostnames | VxRail auto-assign method | Separator
17 | ESXi Hostnames | VxRail auto-assign method | Iterator
18 | ESXi Hostnames | VxRail auto-assign method | Offset
19 | ESXi Hostnames | VxRail auto-assign method | Suffix
20 | ESXi Hostnames | Customer-supplied method | ESXi hostname 1
21 | ESXi Hostnames | Customer-supplied method | ESXi hostname 2
22 | ESXi Hostnames | Customer-supplied method | ESXi hostname 3
23 | ESXi Hostnames | Customer-supplied method | ESXi hostname 4
24 | ESXi IP Addresses | VxRail auto-assign method | Starting IP Address
25 | ESXi IP Addresses | VxRail auto-assign method | Ending IP Address
26 | ESXi IP Addresses | Customer-supplied method | ESXi IP Address 1
27 | ESXi IP Addresses | Customer-supplied method | ESXi IP Address 2
28 | ESXi IP Addresses | Customer-supplied method | ESXi IP Address 3
29 | ESXi IP Addresses | Customer-supplied method | ESXi IP Address 4
30 | vCenter | VxRail vCenter Server | vCenter Server Hostname
31 | vCenter | VxRail vCenter Server | vCenter Server IP Address
32 | vCenter | VxRail vCenter Server | Platform Services Controller Hostname (if applicable)
33 | vCenter | VxRail vCenter Server | Platform Services Controller IP address (if applicable)
34 | vCenter | Customer-supplied vCenter Server | Platform Services Controller Hostname (FQDN) (leave blank if PSC is embedded in customer-supplied vCenter Server)
35 | vCenter | Customer-supplied vCenter Server | vCenter Server Hostname (FQDN)
36 | vCenter | Customer-supplied vCenter Server | vCenter Server SSO Domain
37 | vCenter | Customer-supplied vCenter Server | Admin username/password or the newly created VxRail non-admin username and password
38 | vCenter | Customer-supplied vCenter Server | New VxRail management username and password
39 | vCenter | Customer-supplied vCenter Server | vCenter Data Center Name
40 | vCenter | Customer-supplied vCenter Server | vCenter Cluster Name
41 | Virtual Distributed Switch | Customer-supplied VDS Portgroups | Name of VDS portgroup supporting VxRail external management network
42 | Virtual Distributed Switch | Customer-supplied VDS Portgroups | Name of VDS portgroup supporting VxRail vCenter Server network
43 | Virtual Distributed Switch | Customer-supplied VDS Portgroups | Name of VDS portgroup supporting VxRail internal management network
44 | Virtual Distributed Switch | Customer-supplied VDS Portgroups | Name of VDS portgroup supporting VxRail vMotion network
45 | Virtual Distributed Switch | Customer-supplied VDS Portgroups | Name of VDS portgroup supporting VxRail vSAN network
46 | vMotion | VxRail auto-assign method | Starting address for IP pool
47 | vMotion | VxRail auto-assign method | Ending address for IP pool
48 | vMotion | Customer-supplied method | vMotion IP Address 1
49 | vMotion | Customer-supplied method | vMotion IP Address 2
50 | vMotion | Customer-supplied method | vMotion IP Address 3
51 | vMotion | Customer-supplied method | vMotion IP Address 4
52 | vMotion | | Subnet Mask
53 | vMotion | | Gateway (Default or vMotion)
54 | vSAN | VxRail auto-assign method | Starting address for IP pool
55 | vSAN | VxRail auto-assign method | Ending address for IP pool
56 | vSAN | Customer-supplied method | vSAN IP Address 1
57 | vSAN | Customer-supplied method | vSAN IP Address 2
58 | vSAN | Customer-supplied method | vSAN IP Address 3
59 | vSAN | Customer-supplied method | vSAN IP Address 4
60 | vSAN | | Subnet Mask
61 | Logging | VxRail Internal | vRealize Log Insight™ hostname
62 | Logging | VxRail Internal | vRealize Log Insight IP address
63 | Logging | VxRail External | Syslog Server (instead of Log Insight)
64 | SmartFabric | Switch out-of-band management | Out-of-band management IP address for switch 1
65 | SmartFabric | Switch out-of-band management | Out-of-band management IP address for switch 2
66 | SmartFabric | Dell EMC OMNI plug-in | IP address
67 | SmartFabric | Dell EMC OMNI plug-in | Subnet Mask
68 | SmartFabric | Dell EMC OMNI plug-in | Gateway
69 | Witness Site | Management IP Address | Witness management network IP address
70 | Witness Site | vSAN IP Address | Witness vSAN network IP address
71 | Witness Traffic Separation | WTS VLAN | Optional; enables Witness traffic separation on stretched-cluster or 2-Node Cluster
72 | 2-Node Cluster | Node 1 WTS IP address | Must be routable to Witness
73 | 2-Node Cluster | Node 2 WTS IP address | Must be routable to Witness



B VxRail Passwords
Item | Account | Password
VxRail Manager | Root |
VxRail vCenter Server Management | Administrator@<SSO Domain> |
VxRail vCenter Server Management | Root |
VxRail Platform Services Controller | Root |
vRealize Log Insight | Root |
vRealize Log Insight | Admin |

Item | Account | Password
ESXi Host #1 | Root |
ESXi Host #2 | Root |
ESXi Host #3 | Root |
ESXi Host #4 | Root |



C VxRail Setup Checklist
Physical Network

VxRail cluster: Decide if you want to plan for additional nodes beyond the initial three (or four)-node
cluster. You can have up to 64 nodes in a VxRail cluster.
VxRail ports: Decide how many ports to configure per VxRail node, what port type, and what network
speed.
Network switch: Ensure that your switch supports VxRail requirements and provides the connectivity
option that you chose for your VxRail nodes. Verify cable requirements. Decide if you will have a single or
multiple switch setup for redundancy.
Data center: Verify that the required external applications for VxRail are accessible over the network and
correctly configured.
Topology: If you are deploying VxRail over more than one rack, be sure that network connectivity is set
up between the racks. Determine the Layer 2/Layer 3 boundary in the planned network topology.
Workstation/laptop: Any operating system with a browser to access the VxRail user interface. The latest
versions of Firefox, Chrome, and Internet Explorer 10+ are all supported.
Out-of-band Management (optional): One available port that supports 1 Gb for each VxRail node.
Logical Network

Reserve VLANs:
 One external management VLAN for traffic from VxRail, vCenter Server, and ESXi
 One internal management VLAN with multicast for auto-discovery and device
management. The default is 3939.
 One VLAN with unicast (starting with VxRail v4.5.0) or multicast (prior to v4.5.0)
for vSAN traffic
 One VLAN for vSphere vMotion
 One or more VLANs for your VM Guest Networks
 If you are enabling witness traffic separation, reserve one VLAN for the VxRail
witness traffic separation network

System:
 Select the Time zone
 Select the Top-Level Domain
 Hostname or IP address of the NTP servers on your network (recommended)
 IP address of the DNS servers on your network (if external DNS)
 Forward and reverse DNS records for VxRail management components (if
external DNS)

Management:
 Decide on your VxRail host naming scheme. The naming scheme will be applied
to all VxRail management components.
 Reserve three or more IP addresses for ESXi hosts.
 Reserve one IP address for VxRail Manager.
 Determine default gateway and subnet mask.
 Select passwords for VxRail management components.

vCenter:
 Determine whether you will use a vCenter Server that is customer-supplied or
new to your VxRail cluster.
 VxRail vCenter Server: Reserve IP addresses for vCenter Server and PSC (if
applicable).
 Customer-supplied vCenter Server: Determine hostname and IP address for
vCenter and PSC, administration user, and name of vSphere data center.
Create a VxRail management user in vCenter. Select a unique VxRail cluster
name. (Optional) Create a VxRail non-admin user.

Virtual Distributed Switch:
 Determine whether you will use a preconfigured customer-supplied virtual
distributed switch or have VxRail deploy a virtual distributed switch in your
vCenter instance.
 Customer-supplied Virtual Distributed Switch: Configure target portgroups for
required VxRail networks.

vMotion:
 Decide whether you want to use the default TCP-IP stack for vMotion, or a
separate IP addressing scheme for the dedicated vMotion TCP-IP stack.
 Reserve three or more contiguous IP addresses and a subnet mask for vSphere
vMotion.
 Select the gateway for either the default TCP-IP stack, or the dedicated vMotion
TCP-IP stack.

vSAN:
 Reserve three or more contiguous IP addresses and a subnet mask for vSAN.

Solutions:
 To use vRealize Log Insight: Reserve one IP address.
 To use an existing syslog server: Get the hostname or IP address of your third-
party syslog server.

Witness Site:
 If a Witness is required, reserve one IP address for the management network and
one IP address for the vSAN network.

Workstation:
 Configure your workstation/laptop to reach the VxRail initial IP address.
 Ensure you know how to configure the laptop to reach the VxRail Manager IP
address after configuration.

Set up Switch:
 Configure your selected external management VLAN (default is
untagged/native).
 Configure your internal management VLAN.
 Confirm multicast is enabled for device discovery.
 Configure your selected VLANs for vSAN, vSphere vMotion, and VM Guest
Networks.
 If applicable, configure your Witness traffic separation VLAN.
 In dual-switch environments, configure the inter-switch links to carry traffic
between switches.
 Configure uplinks to carry upstream network VLANs.
 Configure one port as an access port for the laptop/workstation to connect to VxRail
Manager for initial configuration.
 Confirm configuration and network access.

Workstation/Laptop:
 Configure your workstation/laptop to reach the VxRail Manager initial IP
address.
 Configure the laptop to reach the VxRail Manager IP address after permanent IP
address assignment.



D VxRail Open Ports Requirements
Use the tables in this Appendix for guidance on firewall settings specific for the deployment of a VxRail
cluster. Then use the links provided after the tables for firewall rules that are driven by product feature
and use case.

The VxRail cluster needs to be able to connect to specific applications in your data center. DNS is
required, and NTP is optional. Open the necessary ports to enable connectivity to the external syslog
server, and for LDAP and SMTP.

Datacenter Application Access

Description | Source Devices | Destination Devices | Protocol | Ports
DNS | VxRail Manager, Dell iDRAC | DNS Servers | UDP | 53
NTP | Host ESXi Management Interface, Dell iDRAC, VMware vCenter Servers, VxRail Manager | NTP Servers | UDP | 123
SYSLOG | Host ESXi Management Interface, vRealize Log Insight | Syslog Server | TCP | 514
LDAP | VMware vCenter Servers, PSC | LDAP Server | TCP | 389, 636
SMTP | SRS Gateway VMs, vRealize Log Insight | SMTP Servers | TCP | 25
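
TCP services can be probed with a simple connect test, but NTP uses UDP port 123, which requires a
real request packet to verify. This is a minimal Python SNTP probe with a hypothetical server name; a
valid response confirms that UDP 123 is open along the path.

```python
import socket
import struct
import time

NTP_SERVER = "ntp.example.local"  # hypothetical; use your NTP server

def ntp_probe(server, timeout=5):
    # Minimal SNTP client request: LI=0, VN=3, Mode=3 (client), rest zeroed.
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        response, _ = sock.recvfrom(48)
    # Transmit timestamp: seconds since 1900-01-01, at byte offset 40.
    ntp_seconds = struct.unpack("!I", response[40:44])[0]
    unix_seconds = ntp_seconds - 2208988800  # offset between 1900 and 1970 epochs
    return time.ctime(unix_seconds)

print("NTP reply:", ntp_probe(NTP_SERVER))
```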

Open the necessary firewall ports to enable IT administrators to deploy the VxRail cluster.

Administration Access

Description | Source Devices | Destination Devices | Protocol | Ports
ESXi Management | Administrators | Host ESXi Management Interface | TCP, UDP | 902
VxRail Management GUI/Web Interfaces | Administrators | VMware vCenter Server, VxRail Manager, Host ESXi Management, Dell iDRAC port, vRealize Log Insight, PSC | TCP | 80, 443
Dell server management | Administrators | Dell iDRAC | TCP | 623, 5900, 5901
SSH and SCP | Administrators | Host ESXi Management, vCenter Server Appliance, Dell iDRAC port, VxRail Manager Console | TCP | 22



If you plan to use a customer-supplied vCenter server instead of deploying a vCenter server in the VxRail
cluster, open the necessary ports so that the vCenter instance can connect to the ESXi hosts.

vCenter and vSphere

Description | Source Devices | Destination Devices | Protocol | Ports
vSphere Clients to vCenter Server | vSphere Clients | vCenter Server | TCP | 5480, 8443, 9443, 10080, 10443
Managed Hosts to vCenter | Host ESXi Management | vCenter Server | TCP | 443, 902, 5988, 5989, 6500, 8000, 8001
Managed Hosts to vCenter Heartbeat | Host ESXi Management | vCenter Server | UDP | 902

Other firewall port settings may be necessary depending on your data center environment. The list of
documents in this table is provided for reference purposes.

Description Reference

VMware Ports and Protocols VMware Ports and Protocols


Network port diagram for vSphere 6 Network Port Diagram for vSphere 6
vSAN Ports Requirements vSAN Network Ports Requirements
Dell iDRAC Port Requirements How to configure the iDRAC 9 for Dell PowerEdge
Secure Remote Services Port Requirements Dell EMC Secure Remote Services Documentation



E Virtual Distributed Switch Portgroup Default Settings
Unless you configure an external distributed virtual switch in your external vCenter for the VxRail cluster,
the VxRail initial build process will configure a virtual distributed switch on the selected vCenter instance
using best practices for VxRail.

E.1 Default standard settings


For each VxRail network portgroup, the initial build process will apply the following standard settings.

Setting Value
Port Binding Static
Port Allocation Elastic
Number of ports 8
Network Resource Pool (default)
Override port policies Only ‘Block ports’ allowed
VLAN Type VLAN
Promiscuous mode Reject
MAC address changes Reject
Forged transmits Reject
Ingress traffic shaping Disabled
Egress traffic shaping Disabled
NetFlow Disabled
Block All Ports No

E.2 Default teaming and failover policy


VxRail will configure a teaming and failover policy for the port groups on the virtual distributed switch with
the following settings:

Setting Value
Load Balancing Route based on originating virtual port
Network failure detection Link status only
Notify switches Yes
Failback Yes

E.3 Default network I/O control (NIOC)


VxRail will enable network I/O control on the distributed switch, and configure custom Network I/O Control
(NIOC) settings for the following network traffic types. The settings depend on whether 2 Ethernet ports
or 4 Ethernet ports per node were reserved for the VxRail cluster:

Traffic Type | NIOC Shares (4 Ports) | NIOC Shares (2 Ports)
Management Traffic | 40 | 20
vMotion Traffic | 50 | 50
vSAN Traffic | 100 | 100
Virtual Machine Traffic | 60 | 30

The reservation value is set to zero for all network traffic types, with no limits set on bandwidth.
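
NIOC shares are relative weights rather than reservations: under contention, each traffic type is entitled
to approximately its share divided by the sum of the shares of the traffic types actively contending. A
quick worked calculation for the 2-port configuration above:

```python
# NIOC shares from the 2-port column above.
shares = {"Management": 20, "vMotion": 50, "vSAN": 100, "Virtual Machine": 30}

total = sum(shares.values())  # 200
for traffic, share in shares.items():
    # Worst-case guaranteed fraction when every traffic type contends at once.
    print(f"{traffic}: {share}/{total} = {share / total:.0%}")
# Management: 10%, vMotion: 25%, vSAN: 50%, Virtual Machine: 15%
```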

E.4 Default failover order policy


VxRail will configure an active/standby policy to the uplinks VxRail uses for the four pre-defined network
traffic types that are required for operation: Management, vMotion, vSAN, and Virtual Machine.

4x10GbE Traffic Configuration

Traffic Type | Uplink1 (10 GbE, VMNIC0) | Uplink2 (10 GbE, VMNIC1) | Uplink3 (10 GbE, VMNIC2) | Uplink4 (10 GbE, VMNIC3)
Management | Standby | Active | Unused | Unused
vSphere vMotion | Unused | Unused | Standby | Active
vSAN | Unused | Unused | Active | Standby
Virtual Machine | Active | Standby | Unused | Unused

2x10GbE or 2x25GbE Traffic Configuration

Traffic Type | Uplink1 (10/25 GbE, VMNIC0) | Uplink2 (10/25 GbE, VMNIC1) | Uplink3 (no VMNIC) | Uplink4 (no VMNIC)
Management | Active | Standby | Unused | Unused
vSphere vMotion | Active | Standby | Unused | Unused
vSAN | Standby | Active | Unused | Unused
Virtual Machine | Active | Standby | Unused | Unused



4x1GbE Traffic Configuration

Traffic Type | Uplink1 (1 GbE, VMNIC0) | Uplink2 (1 GbE, VMNIC1) | Uplink3 (1 GbE, VMNIC2) | Uplink4 (1 GbE, VMNIC3)
Management | Active | Standby | Unused | Unused
vSphere vMotion | Unused | Unused | Standby | Active
vSAN | Unused | Unused | Active | Standby
Virtual Machine | Standby | Active | Unused | Unused



F Physical Network Switch Wiring Examples
These diagrams show different options for physical wiring between VxRail nodes and the adjacent, top-of-
rack switches. They are provided as illustrative examples to assist with the planning and design process.
All VxRail nodes are manufactured with Ethernet ports built into the NDC (Network Daughter Card).
Optional PCIe adapter cards can be installed in the VxRail nodes to provide additional Ethernet ports for
redundancy and increased bandwidth.

1. 2x10gb or 2x25gb connectivity option

VxRail nodes with two 10gb NDC ports connected to 2 TOR switches, and one optional
connection to management switch for iDRAC

This connectivity option is the simplest to deploy. It is suitable for smaller, less demanding workloads that
can tolerate the NDC as a potential single point of failure.

2. 4x10gb or 4x1gb NDC connectivity option



VxRail nodes with four 10gb NDC ports connected to 2 TOR switches, and one optional
connection to management switch for iDRAC

If the NDC on the VxRail nodes is shipped with 4 Ethernet ports, you can choose to reserve either 2
ports or 4 Ethernet ports on the VxRail nodes to support networking workload on the VxRail cluster. If you
choose to use only two Ethernet ports, the remaining ports can be used for other use cases.

If you are deploying VxRail with 1gb Ethernet ports, then you must connect four Ethernet ports to support
VxRail networking.

3. 2x10gb NDC & 2x10gb PCIe connectivity option



VxRail nodes with two 10gb NDC ports and two 10gb PCIe ports connected to 2 TOR
switches, and one optional connection to management switch for iDRAC

In this option, the VxRail networking workload on the NDC ports is split between the two switches, and
the workload on the PCIe-based ports is also split between the two switches. This option protects
against a loss of service not only from a failure at the switch level, but also from a failure in either the
NDC or the PCIe adapter card.

4. 2x25gb NDC and 2x25gb PCIe connectivity option



VxRail nodes with two 25gb NDC ports and two 25gb PCIe ports connected to 2 TOR
switches, and one optional connection to management switch for iDRAC

This option offers the same benefits as the 2x10gb NDC and 2x10gb PCIe deployment option, with
additional bandwidth available to support the workload on the VxRail cluster. If additional Ethernet
connectivity is required to support other use cases, then additional slots on the VxRail nodes must be
reserved for PCIe adapter cards. If this is a current requirement or potential future requirement, then be
sure to select a VxRail node model with sufficient PCIe slots to accommodate the additional adapter
cards.

Be aware that the cabling for the 25gb option with NDC ports and PCIe ports differs from the 10gb option.
Note that the second port on the PCIe adapter cards is paired with the first port on the NDC on the first
switch, and the first port on the PCIe adapter is paired with the second port on the NDC on the second
switch. This is to ensure balancing of the VxRail networks between the switches in the event of a failure
at the network port layer.



VxRail nodes with two 25gb NDC ports connected to same TOR switch, two 25gb PCIe
ports connected to same TOR switch, and one optional connection to management
switch for iDRAC

This is an optional cabling setup for a 2x25gb NDC and 2x25gb PCIe deployment, where both ports from
either the NDC or the PCIe card are connected to the same TOR switch. The difference with this cabling
option as opposed to splitting the cabling from the NDC and PCIe ports between switches is that in the
event of failure of a node network port, all VxRail networking will flow to one TOR switch until the problem
is resolved.

5. Four TOR switches to support VxRail cluster networking



VxRail nodes with four ports connected to four TOR switches, and one optional
connection to management switch for iDRAC

For workload use cases with extreme availability, scalability and performance requirements, four TOR
switches can be positioned to support VxRail networking. In this example, each Ethernet port is
connected to a single TOR switch.



VxRail nodes with four ports connected to 4x TOR switches, 1x optional Management
Switch with iDRAC.

The upstream switches for TOR 3 and TOR 4 are optional because those TORs carry only vSAN and
vMotion traffic, which might not need access to the rest of the network.



VxRail 2-Node Cluster with ports 1 and 2 connected to 2x TOR switches for external
network traffic, and a direct connection between ports 3 and 4 on each node for
internal network traffic

1/10/25GbE TOR switches are supported. The Witness runs on a host separate from the 2-Node cluster
and must be routable from the two TOR switches.

