H15300 VxRail Network Guide PDF
Abstract
This is a planning and consideration guide for VxRail Appliances. It can be used
to better understand the networking requirements for VxRail implementation. This
document does not replace the implementation services with VxRail Appliances
requirements and should not be used to implement networking for VxRail
Appliances.
October 2019
H15300.11
Dell Customer Communication - Confidential
Revision history

October 2019: Support for VxRail 4.7.300 with Layer 3 VxRail networks
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2019 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners. [10/14/2019] [Planning Guide] [H15300.11]
VxRail is not a server. It is an appliance based on a collection of nodes and switches integrated as a
cluster under a single point of management. All physical compute, network and storage resources in the
appliance are managed as a single shared pool and allocated to applications and services based on
customer-defined business and operational requirements.
The compute nodes are based on Dell EMC PowerEdge servers. The G Series consists of up to four
nodes in a single chassis, whereas all other models are based on a single node. An Ethernet switch is
required, operating at 1, 10, or 25 GbE depending on the VxRail infrastructure deployed. A
workstation or laptop for the VxRail user interface is also required.
VxRail has a simple, scale-out architecture, leveraging VMware vSphere® and VMware vSAN™ to
provide server virtualization and software-defined storage, with simplified deployment, upgrades, and
maintenance through VxRail Manager. Fundamental to the VxRail clustered architecture is network
connectivity. It is through the logical and physical networks that individual nodes act as a single system
providing scalability, resiliency, and workload balance.
The VxRail software bundle is preloaded onto the compute nodes, and consists of the following
components (specific software versions not shown):
• VxRail Manager
• VMware vCenter Server™
• VMware vRealize Log Insight™
• VMware vSAN
• VMware vSphere
• Dell EMC Secure Remote Support (SRS)/VE
Licenses are required for VMware vSphere and VMware vSAN. The vSphere licenses can be
purchased from Dell EMC, VMware, or your preferred VMware reseller partner.
The VxRail Appliances also include the following licenses for software that can be downloaded,
installed and configured:
• Dell EMC RecoverPoint for Virtual Machines (RP4VM): 5 full VM licenses per single-node
VxRail appliance (15 for the G Series appliance)
Note: Follow all of the guidance and decision points described in this document; otherwise, VxRail will
not implement properly and will not function correctly in the future. If you have separate teams for
network and servers in your datacenter, they will need to work together to design the network and
configure the switch(es).
This section describes the physical components and selection criteria for VxRail clusters:
A standard VxRail cluster starts with a minimum of three nodes and can scale to a maximum of 64
nodes. The selection of the VxRail nodes to form a cluster is primarily driven by planned business use
cases, and factors such as performance and capacity. Five series of VxRail models are offered, each
targeting specific objectives:
Each VxRail model series offers choices for network connectivity. The following illustrations show some
of the physical network port options for the VxRail models.
Back view of VxRail V-, P-, S-Series on Dell 14th Generation PowerEdge server
Note: IPv6 multicast needs to be enabled on the switch ports connected to VxRail Appliances only. The
multicast traffic required by VxRail is limited to those switch ports that service VxRail.
• Layer 3 support is not required on the switch(es) directly connected to VxRail nodes.
You will be required to specify a set of VLAN (Virtual LAN) IDs in your data center network that will be
assigned to support the VxRail networks. All of the VLANs must be configured on the adjoining physical
switches. The VLANs that need to pass upstream must be configured on adjoining network switch
uplinks, and also on the ports on the upstream network devices.
One VLAN is assigned for external VxRail management access. Data center services (such as DNS
and NTP) that are required by VxRail cluster must be able to connect to this VLAN. Routing services
must be updated to enable connectivity to these services from this VxRail management network.
Additional VLANs, such as those required for end-user access, must also be configured in routing
services to connect end-users to the virtual machines running on the VxRail cluster.
Be sure to follow your switch vendor’s best practices for performance and availability. For example,
packet buffer banks may provide a way to optimize your network with your wiring layout.
Decide if you plan to use one or two switches for VxRail. One switch is acceptable and is often seen in
test and development environments. To support high availability and failover in production
environments, two or more switches are required. The VxRail appliance is a software-defined
datacenter that is totally dependent on the physical top-of-rack switches for network communications. A
lack of network redundancy places you at risk of losing availability of all of the virtual machines
operating on the appliance.
The following figure shows the recommended physical network setup using a management switch (for
iDRAC) and two ToR switches. Other network setup examples can be found in the Physical Network
Switch Examples appendix.
Note: For 13th generation PowerEdge servers in the E, P, S and V series VxRail Appliances utilizing 1
GbE with two switches, the switches must be interconnected.
To use out-of-band management, connect the internal Dell Remote Access Controller (iDRAC) port to a
separate switch to provide physical network separation. Default values, capabilities, and
recommendations for out-of-band management are provided with server hardware information.
You will need to reserve an IP address for each iDRAC in your VxRail cluster (one per node).
The next phase after the deployment of the VxRail cluster is to layer the VMware cloud foundation
software on the cluster. This will enable assigning cluster resources as the underpinning for logical
domains, whose policies align with use cases and requirements.
The information outlined in this guide covers networking considerations for VxRail. Go to the Dell EMC
site to review the hyper-converged infrastructure options for software-defined data centers: VMware
Cloud on Dell EMC Infrastructure
In this profile setting, VxRail uses the SmartFabric feature to discover VxRail nodes and Dell switches
on the network, perform zero-touch configuration of the switch fabric to support VxRail deployment, and
then create a unified hyper-converged infrastructure of the VxRail cluster and Dell switch network
fabric.
Planning for VxRail with the Dell EMC SmartFabric networking feature must be done in coordination
with Dell-EMC representatives to ensure a successful deployment. The planned infrastructure must be
a supported configuration as outlined in the VxRail Support Matrix.
Using the Dell EMC SmartFabric feature with VxRail requires an understanding of several key points:
• At the time of VxRail deployment, you must choose the method of network switch configuration.
Enabling the VxRail personality profile on the switches resets the switches to the default state
and passes switch configuration responsibility to VxRail. If you choose this method, all switch
configuration functionality except basic management functions is disabled at the console, and
VxRail and the Dell EMC OMNI plug-in become the tools for network switch configuration
management going forward.
• The Dell network switches enabled in VxRail personality profile mode cannot support any
connected devices other than VxRail nodes.
• You must deploy a separate Ethernet switch to support out-of-band management for the iDRAC
feature on the VxRail nodes.
If you plan to deploy a vSAN stretched-cluster on VxRail, note the following requirements:
• Three datacenter sites: two datacenter sites (Primary and Secondary) host the VxRail
infrastructure, and the third site supports a witness to monitor the stretched-cluster
• A minimum of three VxRail nodes in the Primary site, and a minimum of three VxRail nodes in
the Secondary site
• A minimum of one top-of-rack switch for the VxRail nodes in the Primary and Secondary sites
• An ESXi instance at the Witness site.
The vSAN stretched-cluster feature has strict networking guidelines, specifically for the WAN, that must
be adhered to for the solution to work.
Before getting to this phase, several planning and preparation steps must be undertaken to ensure a
seamless integration of the final product into your datacenter environment. These planning and
preparation steps include:
1. Plan Data Center Routing Services.
2. Decide on VxRail Single Point of Management.
3. Plan the VxRail logical network.
4. Identify IP address range for VxRail logical networks.
5. Identify unique hostnames for VxRail management components.
6. Identify external applications and settings for VxRail.
7. Create DNS records for VxRail management components.
8. Prepare customer-supplied vCenter Server.
9. Reserve IP addresses for VxRail vMotion and vSAN networks.
10. Decide on VxRail Logging Solution
11. Decide on passwords for VxRail management.
Use the VxRail Setup Checklist and the VxRail Network Configuration Table to help create your
network plan. References to rows in this document are to rows in the VxRail Network Configuration
Table.
Note: Once you set up the VxRail cluster and complete the initialization phase to produce the
final product, the configuration cannot easily be changed. Consequently, we strongly recommend that
you take care during this planning and preparation phase to decide on the configurations that will work
most effectively for your organization.
A VxRail cluster can be extended beyond a single physical rack, to as many as six racks. At the
initial implementation of the VxRail cluster, all of the network addresses applied to the VxRail
nodes and management components must be within the same subnet. This same-subnet rule also applies
to VxRail nodes and management components within a single physical rack.
You have two options if the VxRail cluster extends beyond a single rack:
• Use the same assigned subnet ranges for all VxRail nodes and management components
• Assign a new subnet range to the VxRail nodes and management components in the
expansion racks. (Your VxRail cluster must be running a minimum version of 4.7.300 to use this
option)
Note: Dell EMC strongly recommends that you take care during this planning and preparation phase
to decide on the single point of management option that will work most effectively for your
organization. Once VxRail initialization has configured the final product, the configuration cannot
easily be changed.
VxRail has pre-defined logical networks to manage and control traffic within the cluster and outside of
the cluster. Certain VxRail logical networks must be made accessible to the outside community. For
instance, connectivity to the VxRail management system is required by IT management. End-users and
application owners will need to access their virtual machines running in the VxRail cluster. The network
traffic supporting I/O to the vSAN datastore, or the vMotion network used to dynamically migrate virtual
machines between VxRail nodes to balance workload, can stay within the VxRail cluster, or be
configured with a routable network. The internal network used for device discovery is isolated and does
not exit the top-of-rack switches.
Virtual LANs (VLANs) define the VxRail logical networks within the cluster, and the method used to
control the paths a logical network can pass through. A VLAN, represented as a numeric ID, is
assigned to a VxRail logical network. The same VLAN ID is also configured on the individual ports on
your top-of-rack switches, and on the virtual ports in the virtual-distributed switch during the automated
implementation process. When an application or service in the VxRail cluster sends a network packet
on the virtual-distributed switch, the VLAN ID for the logical network is attached to the packet. The
packet will only be able to pass through the ports on the top-of-rack switch and the virtual distributed
switch where there is a match in VLAN IDs. Isolating the VxRail logical network traffic using separate
VLANs is highly recommended, but not required. A ‘flat’ network is recommended only for test, non-
production purposes.
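As an illustration of the switch-side half of this VLAN matching, the sketch below uses hedged, Cisco IOS-style syntax for a trunk port facing a VxRail node. The interface name and VLAN IDs 10, 20, 30, and 40 are placeholders for the values you record in the VxRail Network Configuration Table (3939 is the VxRail default internal management VLAN); consult your switch vendor's documentation for the exact commands on your platform.

```
! Hedged Cisco IOS-style sketch; adapt to your vendor's syntax.
! VLAN IDs 10 (external management), 20 (vMotion), 30 (vSAN), and
! 40 (VM network) are illustrative placeholders; 3939 is the VxRail
! default internal management VLAN.
interface GigabitEthernet1/0/1
 description VxRail node 1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40,3939
```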
As a first step, the network team and virtualization team should meet in advance to plan VxRail’s
network architecture.
• The virtualization team must meet with the application owners to determine which specific
applications and services planned for VxRail are to be made accessible to specific end-users.
This will determine the number of logical networks required to support traffic from non-
management virtual machines.
• The network team must define the pool of VLAN IDs needed to support the VxRail logical
networks, and determine which VLANs will restrict traffic to the cluster, and which VLANs will
be allowed to pass through the switch up to the core network.
• The network team must also plan to configure the VLANs on the upstream network, and on the
switch(es) attached to the VxRail nodes.
• The network team must also configure routing services to ensure connectivity for external users
and applications on VxRail network VLANs passed upstream.
• The virtualization team needs to assign the VLAN IDs to the individual VxRail logical networks.
VxRail groups the logical networks in the following categories: External Management, Internal
Management, vSAN, vSphere vMotion, and Virtual Machine. VxRail assigns the settings you specify
for each of these logical networks during the initialization process.
Before VxRail version 4.7, both external and internal management traffic shared the external
management network. Starting with version 4.7 of VxRail, the external and internal management
networks are broken out into separate networks.
There are two methods that allow you to tag external management traffic:
1. Configure each port on your switch connected to a VxRail node to tag the management traffic
and route it to the desired VLAN.
2. Alternatively, you can configure a custom management VLAN to allow tagged management
traffic after you power on each node, but before you run VxRail initialization. Your Dell EMC
service representative will take care of this during installation.
The Internal Management network is used solely for device discovery by VxRail Manager during initial
implementation and node expansion. This network traffic is non-routable and is isolated to the network
switches connected to the VxRail nodes. Powered-on VxRail nodes advertise themselves on the
Internal Management network using IPv6 multicast, which is required on this network, and are
discovered by VxRail Manager. The default VLAN of 3939 is configured on each VxRail node shipped
from the factory. If a different VLAN value is used for this network, it must be applied to each VxRail
node on-site, or device discovery will fail.
The vSphere vMotion and vSAN network traffic can be either routed or non-routed. Support for layer 3
networking is introduced with version 4.7.300 of VxRail for multi-rack clusters. If your requirements
include expansion of the VxRail cluster beyond a single rack, then you can choose to extend either or
both of these networks across the top-of-rack switches in each of the racks, or terminate the networks
at the routing layer, and use routing services to enable connectivity between racks. This traffic will be
tagged for the VLANs you specify in VxRail initialization.
The Virtual Machine network(s) are for the virtual machines running your applications and services.
Dedicated VLANs are preferred to divide Virtual Machine traffic, based on business and operational
objectives. VxRail creates one or more VM Networks for you, based on the name and VLAN ID pairs
that you specify. Then, when you create VMs in vSphere Web Client to run your applications and
services, you can easily assign the virtual machine to the VM Network(s) of your choice. For example,
you could have one VLAN for Development, one for Production, and one for Staging.
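As a small planning sketch (the network names and VLAN IDs below are hypothetical placeholders, not defaults), the Name and VLAN ID pairs can be recorded and checked for accidental duplicate VLAN assignments before initialization:

```python
# Hypothetical VM Network plan; names and VLAN IDs are placeholders.
vm_networks = {"Development": 101, "Production": 102, "Staging": 103}

def duplicate_vlans(networks):
    # Return any VLAN IDs assigned to more than one VM Network name.
    seen, dupes = set(), set()
    for vlan in networks.values():
        if vlan in seen:
            dupes.add(vlan)
        seen.add(vlan)
    return sorted(dupes)

print(duplicate_vlans(vm_networks))  # prints []
```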
Network Configuration Table ✓ Row 1: Enter the external management VLAN ID for the VxRail
management network (VxRail Manager, ESXi, vCenter Server/PSC, Log Insight). If you do not plan to
have a dedicated management VLAN and will accept this traffic as untagged, enter "0" or "Native VLAN."

Network Configuration Table ✓ Row 2: Enter the internal management VLAN ID for VxRail device
discovery. The default is 3939. If you do not accept the default, the new VLAN must be applied to each
VxRail node before cluster implementation.

Network Configuration Table ✓ Row 34: Enter a VLAN ID for vSphere vMotion. (Enter 0 in the VLAN ID
field for untagged traffic.)

Network Configuration Table ✓ Row 38: Enter a VLAN ID for vSAN. (Enter 0 in the VLAN ID field for
untagged traffic.)

Network Configuration Table ✓ Rows 39-40: Enter a Name and VLAN ID pair for each VM guest network
you want to create. You must create at least one VM Network. (Enter 0 in the VLAN ID field for
untagged traffic.)
Request your networking team to provide you with a pool of unused IP addresses required for the
VxRail External Management logical network. Record the IP address range for the ESXi hosts. These
IP addresses are required.
Network Configuration Table ✓ Rows 12 and 13: Enter the starting and ending IP addresses for the
ESXi hosts. A contiguous IP range is required, with a minimum of 3 IPs.

Network Configuration Table ✓ Row 27: Enter the subnet mask for the VxRail External Management
network.

Network Configuration Table ✓ Row 28: Enter the gateway for the VxRail External Management network.
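To sanity-check the ESXi host pool entries before initialization, the sketch below (a planning aid only, not part of any VxRail tooling; all addresses are hypothetical) expands a starting/ending pair into the contiguous pool and verifies the minimum count and subnet membership using Python's standard ipaddress module:

```python
import ipaddress

def esxi_ip_pool(start, end, subnet_mask, gateway):
    """Expand a starting/ending address pair into the contiguous ESXi
    host pool and sanity-check it against the External Management
    subnet and gateway. A planning aid under the rules stated above."""
    first = ipaddress.IPv4Address(start)
    last = ipaddress.IPv4Address(end)
    if last < first:
        raise ValueError("ending address precedes starting address")
    pool = [ipaddress.IPv4Address(a) for a in range(int(first), int(last) + 1)]
    if len(pool) < 3:
        raise ValueError("VxRail requires a minimum of 3 contiguous ESXi IPs")
    # All pool addresses must fall in the same subnet as the gateway.
    network = ipaddress.IPv4Network(f"{gateway}/{subnet_mask}", strict=False)
    for addr in pool:
        if addr not in network:
            raise ValueError(f"{addr} is outside {network}")
    return pool

# Hypothetical values for a 4-node cluster:
pool = esxi_ip_pool("192.168.10.11", "192.168.10.14", "255.255.255.0", "192.168.10.1")
print(len(pool))  # prints 4
```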
If you are going to deploy vCenter on the VxRail cluster, record the permanent IP address for vCenter
and Platform Service Controller. Leave these entries blank if you will provide an external vCenter for
VxRail.
Record the IP address for Log Insight. Leave this entry blank if you will not deploy Log Insight on
VxRail.
Record the two IP addresses for the witness virtual appliance. Leave blank if a witness is not required
for your VxRail deployment.
Record the IP addresses for each node required for Witness traffic for a 2-Node cluster deployment.
Leave blank if you are not deploying a 2-Node cluster.
Network Configuration Table ✓ Row 51: Enter the IP address for the first of the two nodes in the 2-Node
cluster.

Network Configuration Table ✓ Row 52: Enter the IP address for the second of the two nodes in the
2-Node cluster.
Determine the naming format for the hostnames to be applied to the required VxRail management
components: each ESXi host, and VxRail Manager. If you deploy the vCenter Server in the VxRail
cluster, that also requires a hostname. In addition, if you decide to deploy Log Insight in the VxRail
cluster, that needs a hostname as well.
Note: You cannot easily change the hostnames and IP addresses of the VxRail management
components after initial implementation.
¹ Offset is available starting in VxRail Release 4.0.200. It is only applicable when the iterator is numeric.
² Suffix is available starting in VxRail Release 4.0.200.

Example naming-scheme values: Separator: None; Offset: 4; Suffix: lab
Network Configuration Table ✓ Rows 6-11: Enter an example of your desired ESXi host-naming
scheme. Be sure to show your desired prefix, separator, iterator, offset, suffix, and domain.
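As a planning aid (not a VxRail tool), the sketch below shows how the prefix, separator, iterator, offset, suffix, and domain elements combine into hostnames. The function name, the two-digit zero-padded iterator, and the example values are illustrative assumptions; match them to your chosen scheme.

```python
def esxi_hostnames(count, prefix, separator="", offset=0, suffix="", domain="", width=2):
    # Combine the naming-scheme elements: prefix + separator +
    # zero-padded numeric iterator (shifted by offset) + suffix + domain.
    # The two-digit padding (width=2) is an illustrative assumption.
    names = []
    for i in range(1, count + 1):
        iterator = str(i + offset).zfill(width)
        names.append(f"{prefix}{separator}{iterator}{suffix}.{domain}")
    return names

# Using the example values Separator: None, Offset: 4, Suffix: lab,
# with a hypothetical prefix and domain:
print(esxi_hostnames(4, "esxihost", offset=4, suffix="lab", domain="example.com")[0])
# prints esxihost05lab.example.com
```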
If you want to deploy a new vCenter Server on the VxRail cluster, you will need to specify a hostname
for the VxRail vCenter Server and Platform Services Controller (PSC) virtual machines. Again, the
domain is also automatically applied to the chosen hostname. Dell EMC recommends following the
naming format selected for the ESXi hosts to simplify cluster management.
Network Configuration Table ✓ Row 14: Enter an alphanumeric string for the new vCenter Server
hostname. The domain specified will be appended.

Network Configuration Table ✓ Row 16: Enter an alphanumeric string for the new Platform Services
Controller hostname. The domain specified will be appended.
An NTP server is not required, but is recommended. If you provide an NTP server, vCenter server will
be configured to use it. If you do not provide at least one NTP server, VxRail uses the time that is set on
ESXi host #1 (regardless of whether the time is correct or not).
Note: Make sure that the NTP IP address is accessible from the VxRail External Management network
to which the VxRail nodes will be connected, and that the NTP service is functioning properly.

Note: Make sure that the DNS IP address is accessible from the network to which VxRail is connected,
and that the DNS service is functioning properly.
• Determine if your customer-supplied vCenter server is compatible with your VxRail version.
- Refer to the Knowledge Base article VxRail: VxRail and External vCenter Interoperability
Matrix on the Dell EMC product support site for the latest support matrix.
• Enter the FQDN of your selected, compatible customer-supplied vCenter server in the VxRail
Network Configuration table.
Network Configuration Table ✓ Row 18: Enter the FQDN of the customer-supplied platform services
controller (PSC). Leave this row blank if the PSC is embedded in the customer-supplied vCenter server.
• Decide on the single sign-on (SSO) domain configured on the customer-supplied vCenter you
want to use to enable connectivity for VxRail, and enter the domain in the VxRail Network
Configuration Table.
Network Configuration Table ✓ Row 20: Enter the single sign-on (SSO) domain for the customer-supplied
vCenter server. (For example, vsphere.local)
• The VxRail initialization process requires login credentials to your customer-supplied vCenter.
The credentials must have the privileges to perform the necessary configuration work for
VxRail. You have two choices:
- Provide vCenter login credentials with administrator privileges
- Create a new set of credentials in your vCenter for this purpose. Two new roles will be
created and assigned to this user by your Dell EMC representative.
Network Configuration Table ✓ Row 21: Enter the administrative username/password for the
customer-supplied vCenter server, or the VxRail non-admin username/password you will create on the
customer-supplied vCenter server.
• A set of credentials must be created in the customer-supplied vCenter for VxRail management
with no permissions and no assigned roles. These credentials are assigned a role with limited
privileges during the VxRail initialization process, and then assigned to VxRail to enable
connectivity to the customer-supplied vCenter after initialization completes.
- If this is the first VxRail cluster on the customer-supplied vCenter, enter the credentials you
will create in the customer-supplied vCenter.
- If you already have an account for a previous VxRail cluster in the customer-supplied
vCenter, enter those credentials.
Network Configuration Table ✓ Row 22: Enter the full VxRail management username/password.
(For example, [email protected])
• The VxRail initialization process will deploy the VxRail cluster under an existing datacenter in
the customer-supplied vCenter. Create a new datacenter or select an existing datacenter on
the customer-supplied vCenter.

Network Configuration Table ✓ Row 24: Enter the name of the cluster that will be used for VxRail.
If your plans include expanding the VxRail cluster to deploy nodes in more than one physical rack, then
you have the option of whether to stretch the IP subnet for vSAN and vMotion between the racks, or to
use routing services in your data center instead. If you plan to enable routing services, a routable
address range is required for the vSAN and vMotion networks.
Network Configuration Table ✓ Rows 31 and 32: Enter the starting and ending IP addresses for
vSphere vMotion.

Network Configuration Table ✓ Row 34: Use the default TCP/IP gateway for the VxRail external
management network, or enter a new gateway for the vMotion network.

Network Configuration Table ✓ Rows 35 and 36: Enter the starting and ending IP addresses for vSAN.
Routing is not configured for vSAN.

Network Configuration Table ✓ Row 42 or Row 43: Enter the IP address for vRealize Log Insight, or
the hostname(s) of your existing third-party syslog server(s). Leave blank for no logging.
Note: The Dell EMC service representative will need passwords for the VxRail accounts in this table.
For security purposes, you can enter the passwords during the VxRail initialization process, as opposed
to providing them visibly in a document.
• For ESXi hosts, passwords must be assigned to the ‘root’ account. You can use a different
password for each ESXi host or apply the same password to all hosts.
• For VxRail Manager, a password must be assigned to the ‘root’ account [Row 1]. This
credential is for access to the console.
• Access to the VxRail Manager web interface will use the ‘administrator@<SSO Domain>’
credentials.
- If you deploy the VxRail vCenter Server, VxRail Manager and vCenter share the same
default administrator login, ‘[email protected]’. Enter the password you want to
use [Row 2].
- If you use a customer-supplied vCenter server, VxRail Manager will use the same
‘administrator@<SSO Domain>’ login credentials you use for access to the customer-
supplied vCenter server.
• If you deploy the VxRail vCenter Server:
- Enter the ‘root’ password for the VxRail vCenter Server [Row 3].
- Enter a password for ‘management’ for the VxRail vCenter Server [Row 4].
- A Platform Services controller will be deployed. Enter the ‘root’ password for the Platform
Services controller [Row 5].
• If you deploy vRealize Log Insight:
- Enter a password for ‘root’ [Row 6].
- Enter a password for ‘admin’ [Row 7].
Passwords must adhere to VMware vSphere complexity rules. Passwords must contain between eight
and 20 characters with at least one lowercase letter, one uppercase letter, one numeric character, and
one special character. For more information about password requirements, see the vSphere password
and vCenter Server password documentation.
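The complexity rules above can be pre-checked during planning with a short sketch like the one below. This is a planning aid reflecting only the rules stated here; vSphere remains the authority and may enforce additional restrictions at configuration time.

```python
import re

def meets_vxrail_complexity(password):
    # 8-20 characters with at least one lowercase letter, one uppercase
    # letter, one numeric character, and one special character (any
    # non-alphanumeric character counts as "special" in this sketch).
    return (8 <= len(password) <= 20
            and bool(re.search(r"[a-z]", password))
            and bool(re.search(r"[A-Z]", password))
            and bool(re.search(r"[0-9]", password))
            and bool(re.search(r"[^A-Za-z0-9]", password)))

print(meets_vxrail_complexity("Example-Pass9"))  # prints True
print(meets_vxrail_complexity("tooshort"))       # prints False
```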
When the VxRail personality profile is enabled on a pair of Dell switches running in SmartFabric mode,
a VLAN must be entered as part of the configuration process. This VLAN is assigned to every switch
data port as ‘untagged’. This establishes an access network across the entire switch fabric for enabling
connectivity to VxRail Manager for initial configuration.
At the time of VxRail personality profile enablement on the Dell switch fabric, the VxRail Cluster Build
network and the Internal Management network are both established on every data port on the switch
pair. The switches and VxRail nodes advertise themselves at power-on on the Internal Management
network and are discovered by VxRail Manager on the same network. VxRail Manager then connects
itself to the VxRail Cluster Build network to enable access on the ‘untagged’ network for cluster
implementation. During the cluster implementation process, VxRail Manager will connect itself to the
External Management network and transition off the VxRail Cluster Build network.
When the cluster implementation process completes, all VxRail Manager, vCenter Server, and ESXi
management communication occurs over the External Management network, freeing the VxRail
Cluster Build network for additional clusters to be added to the switch fabric.
The Dell EMC Open Management Network Interface (OMNI) plug-in must be deployed on the vCenter
instance to support automated switch management after the VxRail cluster is built. The Dell EMC OMNI
vCenter plug-in is required for each Dell EMC switch fabric pair, and requires network properties to be
set during the deployment process.
Record the VxRail Cluster Build network VLAN and the network settings for Dell EMC OMNI vCenter
plug-in.
Network Configuration Table ✓ Row 44: To enable Dell EMC SmartFabric services on your switches
and enable the VxRail personality profile, enter a VLAN ID for the VxRail Cluster Build Network.
The VxRail External Management Network should be accessible to your location’s IT infrastructure
and personnel only. IT administrators require access to this network for day-to-day management of the
VxRail cluster, and the VxRail cluster is dependent on outside applications such as DNS and NTP to
operate correctly.
The VxRail Cluster Build Network is required only if you plan to enable Dell EMC SmartFabric
services and extend VxRail automation to the switch layer. The VxRail Cluster Build network enables
access to VxRail Manager during initial implementation using a jump host.
The VxRail Witness Traffic Separation Network is optional if you plan to deploy a stretched-cluster.
The VxRail Witness traffic separation network enables connectivity between the VxRail nodes with the
witness at an offsite location. The remote-site witness monitors the health of the vSAN datastore on the
VxRail cluster over this network.
Using the VxRail Network Configuration table, perform the following steps:
Step 1. Configure the External Management Network VLAN (Row 1) on the spine switch.
Step 2. Configure all of the VxRail Virtual Machine Network VLANs (Rows 39, 40) on the spine switch.
Step 3. If applicable, configure the VxRail Cluster Build Network VLAN (Row 44) on the spine switch.
Note: This section provides guidance for preparing and setting up your switch for VxRail. Be sure to
follow your vendor’s documentation for specific switch configuration activities and for best practices for
performance and availability.
The network switch ports that connect to VxRail nodes must allow for pass-through of multicast traffic
on the VxRail Internal Management VLAN. Multicast is not required on your entire network, just on the
ports connected to VxRail nodes.
VxRail creates very little traffic through IPv6 multicast for auto-discovery and device management. We
recommend that you limit traffic further on your switch by enabling MLD Snooping and MLD Querier.
If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier
must be disabled.
For VxRail v4.5.0 and earlier, IPv4 multicast is required for the vSAN VLAN. The network switch(es)
that connect to VxRail must allow for pass-through of multicast traffic on the vSAN VLAN. Multicast is
not required on your entire network, just on the ports connected to VxRail.
There are two options to handle vSAN IPv4 multicast traffic: enable both IGMP Snooping and IGMP
Querier to limit multicast traffic, or disable both. We recommend enabling both IGMP Snooping and
IGMP Querier if your switch supports them.
IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which
interfaces are connected to hosts or other devices interested in receiving this traffic. Using the interface
information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to
avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable
routers to help manage IGMP membership report forwarding. It also responds to topology change
notifications. Disabling IGMP Snooping might lead to additional multicast traffic on your network.
IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP
membership reports from active members, and allows updates to group membership tables. By default,
most switches enable IGMP Snooping but disable IGMP Querier. You will need to change the settings if
this is the case.
If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP
Querier must be disabled.
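As an illustration only, the snooping and querier settings described above might look like the following on a switch that uses Cisco IOS-style syntax. The VLAN IDs are examples (3939 is the default internal management VLAN from this guide; 3000 is a hypothetical vSAN VLAN), and command syntax varies by vendor and platform, so follow your vendor's documentation.

```
! Illustrative Cisco IOS-style syntax; commands differ by vendor and model.
! IGMP Snooping plus Querier for the vSAN VLAN (example VLAN 3000):
ip igmp snooping
ip igmp snooping vlan 3000 querier
!
! MLD Snooping plus Querier for the internal management VLAN (default 3939):
ipv6 mld snooping
ipv6 mld snooping vlan 3939 querier
```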
If your switch supports neither IGMP Snooping nor MLD Snooping, VxRail multicast traffic is broadcast
in one broadcast domain per VLAN. The impact on network overhead is minimal, as management traffic
is nominal.
For questions about how your switch handles multicast traffic, contact your switch vendor.
6.1.1.3 Enable uplinks to pass inbound and outbound VxRail network traffic
The uplinks on the switches must be configured to allow passage for external network traffic to
administrators and end-users. This includes the VxRail external management network (or combined
VxRail management network prior to version 4.7) and Virtual Machine network traffic. The VLANs
representing these networks need to be passed upstream through the uplinks. For VxRail clusters
running at version 4.7 or later, the VxRail internal management network must be blocked from outbound
passage.
If the VxRail vMotion network is going to be configured to be routable outside of the top-of-rack
switches, include the VLAN for this network in the uplink configuration. This is usually the case for a
multi-rack VxRail configuration where a new subnet will be assigned to the expansion racks. In addition,
the VxRail vSAN network will follow this same rule for a multi-rack VxRail cluster.
If a multi-rack VxRail cluster is planned, and you plan to extend the VxRail External Management,
vMotion, vSAN and guest networks to the expansion racks, configure the ports on the switch to pass
these VLANs to the switches in the expansion racks.
• Access mode – The port accepts untagged packets only and assigns them to the single VLAN
configured on that port. This is typically the default mode for all ports.
• Trunk mode – When this port receives a tagged packet, it passes the packet to the VLAN
specified in the tag. To accept untagged packets on a trunk port, you must first configure a
single VLAN as the “Native VLAN”: the VLAN to which all untagged traffic is assigned.
• Tagged-access mode – The port accepts tagged packets only.
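For illustration, a node-facing trunk port carrying tagged VxRail VLANs, with untagged external management traffic on the native VLAN, might be configured as sketched below. The Cisco IOS-style syntax, interface name, and all VLAN IDs are assumptions; follow your vendor's documentation for the actual commands.

```
! Illustrative only; interface names and VLAN IDs are examples.
interface TenGigabitEthernet1/0/1
 description VxRail node 1, port 1
 switchport mode trunk
 ! Untagged external management traffic rides the native VLAN:
 switchport trunk native vlan 1
 ! Tagged VLANs: vMotion (100), vSAN (200), VM guest (300),
 ! internal management (default 3939)
 switchport trunk allowed vlan 1,100,200,300,3939
```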
Do not use link aggregation, including protocols such as LACP and EtherChannel, on any ports directly
connected to VxRail nodes. VxRail Appliances use the vSphere active/standby configuration (NIC
teaming) for network redundancy. However, LACP can be enabled on non-system ports, such as
additional NIC ports or 1G ports, for user traffic.
VxRail uses vSphere Network I/O Control (NIOC) to allocate and control network resources for the four
predefined network traffic types required for operation: Management, vSphere vMotion, vSAN, and
Virtual Machine. The respective NIOC settings for the predefined network traffic types are listed in the
following tables for the various VxRail models. For a general overview of NIOC shares, refer to
http://frankdenneman.nl/2013/01/17/a-primer-on-network-io-control/.
Table: NIOC settings by traffic type (Management, vSphere vMotion, vSAN, Virtual Machine networks).
Columns: Traffic Type, Multicast Requirements, NIOC Shares, UPLINK1 (10Gb or 25Gb, VMNIC0),
UPLINK2 (10Gb or 25Gb, VMNIC1), UPLINK3 (no VMNIC), UPLINK4 (no VMNIC).
Figure: VxRail logical networks for versions earlier than 4.7 and for 4.7 or later.
Confirm the settings on the switch, using the switch vendor instructions for guidance:
1. Confirm that IPv4 multicast (VxRail release earlier than v4.5.0) or unicast (VxRail v4.5.0 and
later) and IPv6 multicast are enabled for the VLANs described in this document.
2. If you have two or more switches, confirm that IPv4 multicast/unicast and IPv6 multicast traffic
is transported between them.
3. External management traffic will be untagged on the native VLAN on your switch by default. If
this has changed, the switches and/or ESXi hosts must be customized with the new VLAN.
4. Internal device discovery network traffic will use the default VLAN of 3939. If this has changed,
all ESXi hosts must be customized with the new VLAN, or device discovery will not work.
5. Confirm that the switch ports that will attach to VxRail nodes allow passage of all VxRail
network VLANs.
6. Confirm that the switch uplinks allow passage of external VxRail networks.
If you have positioned a firewall between the switch(es) planned for VxRail and the rest of your
datacenter network, be sure the required firewall ports are open for VxRail network traffic.
Note: Don’t try to plug your workstation/laptop directly into a VxRail server node to connect to the
VxRail management interface for initialization. It must be plugged into your network or switch, and the
workstation/laptop must be logically configured to reach the necessary networks.
A supported web browser is required to access the VxRail management interface. The latest versions of
Firefox, Chrome, and Internet Explorer 10+ are all supported. If you are using Internet Explorer 10+ and
an administrator has set your browser to “compatibility mode” for all internal websites (local web
addresses), you will get a warning message from VxRail. Contact your administrator to whitelist URLs
mapping to the VxRail user interface.
To access the VxRail management interface to perform initialization, you must use the temporary, pre-
configured VxRail initial IP address: 192.168.10.200/24. During VxRail initialization, this IP address
automatically changes to the permanent address you choose, which is assigned to VxRail Manager during
cluster formation.
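As a minimal sketch of this addressing requirement, the following standard-library Python fragment checks whether a workstation address can reach the temporary initial IP without any routing; the workstation addresses used in the example are hypothetical.

```python
import ipaddress

# Temporary, pre-configured VxRail initial address from this guide.
VXRAIL_INITIAL = ipaddress.ip_interface("192.168.10.200/24")

def on_initial_subnet(workstation_ip: str) -> bool:
    """True if the workstation sits in the same /24 as the initial IP,
    so the browser can reach VxRail Manager without a route change."""
    return ipaddress.ip_address(workstation_ip) in VXRAIL_INITIAL.network

print(on_initial_subnet("192.168.10.150"))  # True: same subnet
print(on_initial_subnet("10.1.1.50"))       # False: requires routing
```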
Your workstation/laptop will need to be able to reach both the temporary VxRail initial IP address and
the permanent VxRail Manager IP address (Row 26 from VxRail Network Configuration table). VxRail
initialization will remind you that you might need to reconfigure your workstation/laptop network settings
to access the new IP address.
It is best practice to give your workstation/laptop or your jump server two IP addresses on the same
network port, which allows for a smoother experience. Depending on your workstation/laptop, this can
be implemented in several ways (such as dual-homing or multi-homing). Otherwise, change the IP
address on your workstation/laptop when instructed to and then return to VxRail Manager to continue
with the initialization process.
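One way to implement the dual-address approach on a Linux workstation is sketched below. The interface name and the permanent-network address are assumptions; on Windows, you would instead add a second IPv4 address under the adapter's advanced TCP/IP settings.

```shell
# Assumes interface eth0 already has an address on 192.168.10.0/24.
# Add a second address on the permanent VxRail management subnet
# (10.10.50.0/24 is an example only):
sudo ip addr add 10.10.50.25/24 dev eth0

# Confirm both addresses are active:
ip -4 addr show dev eth0
```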
If you cannot reach the VxRail initial IP address, the Dell EMC support team can configure a custom IP
address, subnet mask, and gateway on VxRail Manager before initialization.
Before coming on-site, the Dell EMC service representative will have contacted you
to capture and record the information described in the VxRail Network Configuration
Table and walk through the VxRail Setup Checklist.
If your planned VxRail deployment requires a Witness at a remote datacenter
location, the Witness virtual appliance is deployed.
Install the VxRail nodes in a rack or multiple racks in the datacenter. For ease of
manageability, install the network switches supporting the VxRail cluster into the
same racks.
Attach Ethernet cables between the ports on the VxRail nodes and switch ports
configured to support VxRail network traffic.
Power on three or four initial nodes to form the initial VxRail cluster. Do not turn on
any other VxRail nodes until you have completed the formation of the VxRail cluster
with the first three or four nodes.
Connect a workstation/laptop configured for VxRail initialization to access the VxRail
external management network on your selected VLAN. It must be either plugged
into the switch or able to logically reach the VxRail external management VLAN from
elsewhere on your network.
Open a browser to the VxRail initial IP address to begin the VxRail initialization
process.
The Dell EMC service representative will populate the input screens on the menu
with the data collected and recorded in the VxRail Network Configuration Table.
If you have enabled Dell EMC SmartFabric services, VxRail will automatically
configure the switches connected to VxRail nodes.
VxRail performs the verification process, using the information input into the menus.
After validation is successful, the initialization process will begin to build a new
VxRail cluster.
The new permanent IP address for VxRail Manager will be displayed.
- If you configured the workstation/laptop to enable connectivity to both the temporary VxRail
IP address and the new permanent IP address, the browser session will make the switch
automatically. If not, you must manually change the IP settings on your workstation/laptop
to be on the same subnet as the new VxRail IP address.
- If your workstation/laptop cannot connect to the new IP address that you configured, you
will get a message to fix your network and try again. If you are unable to connect to the new
IP address after 20 minutes, VxRail will revert to its unconfigured state and you will need
to re-enter your configuration at the temporary VxRail IP address.
- After the build process starts, if you close your browser, you will need to browse to the new,
permanent VxRail IP address.
Progress is shown as the VxRail cluster is built.
Note: You must follow the official instructions/procedures from VMware and Dell EMC for these
operations.
• Migrating or moving VxRail system traffic to the optional ports. VxRail system traffic includes
the management, vSAN, vCenter Server and vMotion Networks.
• Migrating VxRail system traffic to other port groups.
• Migrating VxRail system traffic to another VDS.
Note: Performing any of these unsupported operations will impact the stability and operations of the
VxRail cluster, and likely cause a failure in the VxRail cluster.
Table: VxRail management component accounts (root, management, admin).
VxRail cluster: Decide if you want to plan for additional nodes beyond the initial three (or four)-node
cluster. You can have up to 64 nodes in a VxRail cluster.
VxRail ports: Decide how many ports to configure per VxRail node, what port type, and what network
speed.
Network switch: Ensure your switch supports VxRail requirements and provides the connectivity option
you chose for your VxRail nodes. Verify cable requirements.
Datacenter: Verify that the required external applications for VxRail are accessible over the network and
correctly configured. If you are deploying VxRail over more than one rack, be sure network connectivity is
set up between the racks.
Topology: Decide if you will have a single or multiple switch setup for redundancy.
Workstation/laptop: Any operating system with a browser to access the VxRail user interface. The latest
versions of Firefox, Chrome, and Internet Explorer 10+ are all supported.
Out-of-band Management (optional): One available port that supports 1Gb for each VxRail node.
Logical Network
✓ One external management VLAN with IPv6 multicast for traffic from VxRail,
vCenter Server, ESXi (recommendation is untagged/native).
✓ One internal management VLAN for auto-discovery and device management.
The default is 3939.
✓ One VLAN with IPv4 unicast (starting with VxRail v4.5.0) or IPv4 multicast
(prior to v4.5.0) for vSAN traffic.
Reserve VLANs
✓ One VLAN for vSphere vMotion
✓ One or more VLANs for your VM Guest Network(s)
✓ If you are planning to enable Dell EMC SmartFabric services, one VLAN for
the VxRail cluster build network.
✓ If you are enabling witness traffic separation, one VLAN for the VxRail witness
traffic separation network.
✓ Time zone
✓ Hostname or IP address of the NTP server(s) on your network (recommended)
System
✓ IP address of the DNS server(s) on your network (required)
✓ Forward and reverse DNS records for VxRail management components
✓ Decide on your VxRail host naming scheme. The naming scheme will be
applied to all VxRail management components.
✓ Reserve three or more contiguous IP addresses for ESXi hosts.
✓ Determine whether you will use a vCenter Server that is customer-supplied or
new to your VxRail cluster.
✓ VxRail vCenter Server: Reserve two IP addresses for vCenter Server and
PSC.
Management
✓ Customer-supplied vCenter Server: Determine hostname and IP address for
vCenter and PSC, administration user, and name of vSphere datacenter.
Create a VxRail management user in vCenter. Select a unique VxRail cluster
name. (Optional) Create a VxRail non-admin user.
✓ Reserve one IP address for VxRail Manager.
✓ Determine default gateway and subnet mask.
✓ Select passwords for VxRail management components.
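To illustrate the contiguous-address items in the checklist above, this short Python sketch (standard library only; the starting address is hypothetical) enumerates a contiguous block of host addresses, such as the three or more reserved for ESXi hosts.

```python
import ipaddress

def reserve_contiguous(first_ip: str, count: int) -> list:
    """Return `count` contiguous IPv4 addresses starting at first_ip."""
    start = ipaddress.IPv4Address(first_ip)
    return [str(start + i) for i in range(count)]

# Example: four contiguous host addresses for a four-node cluster.
print(reserve_contiguous("10.10.50.101", 4))
# -> ['10.10.50.101', '10.10.50.102', '10.10.50.103', '10.10.50.104']
```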
With network virtualization, the functional equivalent of a “network hypervisor” reproduces the complete set
of Layer 2 to Layer 7 networking services (e.g., switching, routing, access control, firewalling, QoS, and load
balancing) in software. Just as VMs are independent of the underlying x86 hardware platform and allow IT
to treat physical hosts as a pool of compute capacity, virtual networks are independent of the underlying IP
network hardware and allow IT to treat the physical network as a pool of transport capacity that can be
consumed and repurposed on demand.
NSX coordinates ESXi’s vSwitches and the network services pushed to them for connected VMs to
effectively deliver a platform—or “network hypervisor”—for the creation of virtual networks. Similar to the
way that a virtual machine is a software container that presents logical compute services to an application,
a virtual network is a software container that presents logical network services—logical switches, logical
routers, logical firewalls, logical load balancers, logical VPNs and more—to connected workloads. These
network and security services are delivered in software and require only IP packet forwarding from the
underlying physical network.
To connected workloads, a virtual network looks and operates like a traditional physical network. Workloads
“see” the same Layer 2, Layer 3, and Layers 4-7 network services that they would in a traditional physical
configuration. It’s just that these network services are now logical instances of distributed software modules
running in the hypervisor on the local host and applied at the vSwitch virtual interface.
NSX Edge resource requirements:

Appliance          Memory  Disk Space             vCPUs
Compact            512MB   512MB                  1
Large              1GB     512MB                  2
Extra Large        8GB     4.5GB (with 4GB swap)  6
Quad Large         1GB     512MB                  4
vShield Endpoint   1GB     4GB                    2
NSX Data Security  512MB   6GB per ESXi host      1
In a VxRail cluster, the key benefits of NSX are consistent, simplified network management and operations,
plus the ability to leverage connected workload mobility and placement. With NSX, connected workloads
can freely move across subnets and availability zones. Their placement is not dependent on the physical
topology and availability of physical network services in a given location. Everything a VM needs from a
networking perspective is provided by NSX, wherever it resides physically. It is not necessary to over-
provision server capacity within each application/network pod. Instead, organizations can take advantage of
available resources wherever they’re located, thereby allowing greater optimization and consolidation of
resources. VxRail inserts easily into existing NSX environments and provides NSX awareness so network
administrators can leverage simplified network administration. See the VMware NSX Design Guide for NSX
best practices and design considerations.
The VxRail cluster needs to be able to connect to specific applications in your datacenter. DNS is
required, and NTP is optional. Open the necessary ports to enable connectivity to the external syslog
server, and for LDAP and SMTP.
Open the necessary firewall ports to enable IT administrators to manage the VxRail cluster.
Administration Access

Description                Source Device(s)  Destination Device(s)                    Protocol  Port(s)
ESXi Management Interface  Administrators    Host ESXi Management                     TCP, UDP  902
VxRail Management          Administrators    VMware vCenter Server, VxRail Manager,   TCP       80, 443
GUI/Web Interfaces                           Host ESXi Management, Dell iDRAC port,
                                             vRealize Log Insight, PSC
Dell server management     Administrators    Dell iDRAC                               TCP       623, 5900, 5901
SSH & SCP                  Administrators    Host ESXi Management, vCenter Server     TCP       22
                                             Appliance, Dell iDRAC port,
                                             VxRail Manager Console
If you plan to use a customer-supplied vCenter server instead of deploying a vCenter server in the
VxRail cluster, open the necessary ports so that the vCenter instance can manage the ESXi hosts.
If you plan to enable Dell EMC ‘call-home’ with an external SRS gateway already deployed in your
datacenter, open the necessary ports to enable communications between the SRS gateway and VxRail
Manager.
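Before raising a firewall change or a support case, a quick reachability sketch like the following can confirm whether a given TCP path is open. The hostnames are placeholders for your own VxRail management addresses, and the function tests only TCP connectivity, not the application behind the port.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to test a firewall path."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hosts; substitute your VxRail management addresses.
for host, port in [("vcenter.example.local", 443),
                   ("esxi01.example.local", 22)]:
    state = "open" if port_open(host, port) else "blocked"
    print(f"{host}:{port} {state}")
```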
Additional firewall port settings may be necessary depending on your datacenter environment. The list
of documents in this table is provided for reference purposes.
Description                                  Reference
Firewall ports for ESXi 6.5 hosts            List of Incoming and Outgoing Firewall Ports for ESXi 6.5 Hosts
Firewall ports for ESXi 6.0 hosts            List of Incoming and Outgoing Firewall Ports for ESXi 6.0 Hosts
Ports to access vCenter Server and ESXi      TCP and UDP Ports required to access VMware vCenter Server and VMware ESXi hosts
Secure Remote Services port requirements     Dell EMC Secure Remote Services Documentation
Figure: VxRail nodes with two ports connected to 2x TOR switches, and one optional management
switch with iDRAC.
1/10/25GbE TOR switches are supported. The Witness runs on a host separate from the 2-node cluster
and is routable from both TOR switches.