Dell PowerEdge MX Networking
Deployment Guide
H18548.7
Abstract
This document provides an overview of the architecture, features, and functionality of
the Dell PowerEdge MX networking infrastructure, including the steps for configuring
and troubleshooting the PowerEdge MX networking switches in Full Switch and
SmartFabric modes.
August 2022
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Dell PowerEdge MX Platform Overview
Dell Technologies Demo Center
The Dell Technologies Demo Center is a highly scalable, cloud-based service that provides 24/7 self-service access to virtual
labs, hardware labs, and interactive product simulations. Several interactive demos are available on the Demo Center for
PowerEdge MX platform deployments. Go to Dell Technologies Interactive Demo: OpenManage Enterprise Modular for MX
solution management to quickly become familiar with deploying MX Networks.
Introduction
The vision of Dell Technologies is to be the essential technology company from the edge, to the core, and to the cloud. Dell
Technologies ensures modernization for today's applications and the emerging cloud-native world. Dell Networking is committed
to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of choice for
networking operating systems and top-tier merchant silicon. The Dell Technologies strategy enables business transformations
that maximize the benefits of collaborative software and standards-based hardware, including lowered costs, flexibility, freedom,
and security. Dell Technologies provides further customer enablement through validated deployment guides that demonstrate
these benefits while maintaining a high standard of quality, consistency, and support.
The Dell PowerEdge MX platform is a unified, high-performance data center infrastructure. It provides the agility, resiliency,
and efficiency to optimize a wide variety of traditional and new, emerging data center workloads and applications. With its
kinetic architecture and agile management, PowerEdge MX dynamically configures compute, storage, and fabric; increases team
effectiveness; and accelerates operations. The responsive design delivers the innovation and longevity that customers need for
their IT and digital business transformations.
As part of the PowerEdge MX platform, the Dell SmartFabric OS10 network operating system includes SmartFabric Services
(SFS), a network automation and orchestration solution that is fully integrated with the MX platform.
NOTE: This guide may contain language that is not consistent with Dell's current guidelines. Dell plans to update this guide
in future releases to revise the language accordingly.
Hardware
This section contains information about the hardware and options available in the Dell PowerEdge MX7000. The section is
divided into two parts:
● The front of the MX7000 chassis, containing compute and storage sleds
● The back of the MX7000 chassis, containing networking, storage, and management components
Overview
The following figure shows the front view of the Dell PowerEdge MX7000 chassis. The left side of the chassis can have one of
three control panel options:
● LED status light panel
● Touch screen LCD panel
● Touch screen LCD panel equipped with Dell PowerEdge iDRAC Quick Sync 2
The bottom of the figure shows six hot-pluggable, redundant, 3,000-watt power supplies. Above the power supplies are eight
single-width slots that support compute and storage sleds. In the example below, the slots contain:
● Four Dell PowerEdge MX740c sleds in slots one through four
● One Dell PowerEdge MX840c sled in slots five and six
● Two Dell PowerEdge MX5016s sleds in slots seven and eight
Figure 3. Dell PowerEdge MX740c sled with six 2.5-inch SAS drives
Figure 5. Dell PowerEdge MX5016s sled with the drive bay extended
Overview
The Dell PowerEdge MX7000 includes three I/O fabrics and the Management Modules. Fabrics A and B are for Ethernet and
future I/O module connectivity, and Fabric C is for SAS and Fibre Channel (FC) connectivity. Each fabric provides two slots
for redundancy. Management Modules contain the chassis intelligence, which oversees and orchestrates the operations of the
various components. The following example figure shows the rear of the PowerEdge MX7000 chassis. From top to bottom, the
chassis is configured with:
● One Dell Networking MX9116n Fabric Switching Engine (FSE) installed in fabric slot A1
● One Dell Networking MX7116n Fabric Expander Module (FEM) installed in fabric slot A2
● Two Dell Networking MX5108n Ethernet switches installed in fabric slots B1 and B2
● Two Dell Networking MXG610s Fibre Channel switches installed in fabric slots C1 and C2
● Two Dell PowerEdge MX9002m modules installed in management slots MM1 and MM2
The following figure shows different uplink options for the MX7116n FEM to act as a pass-through module operating at 25 GbE.
The MX7116n FEM should be connected to an upstream switch at 25 GbE. Support for 10 GbE is available as of OME-Modular
1.20.00.
If the MX7116n FEM port connects to QSFP28 ports, a QSFP28-DD to 2x QSFP28 cable is used. If the MX7116n FEM port
connects to SFP28 ports, a QSFP28-DD to 8x SFP28 cable is used. These cables can be DAC, AOC, or optical transceiver plus
passive fiber. See the PowerEdge MX I/O Guide for more information about cable selection.
NOTE: If connecting the FEM to a QSFP+/QSFP28 port on a ToR switch, ensure that the port is configured to break out
to 4x 10 GbE or 4x 25 GbE and not 40 GbE or 100 GbE.
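For reference, the following is a minimal sketch of such a breakout configuration on a Dell SmartFabric OS10 ToR switch; the port-group number is illustrative and the mode keyword depends on the desired speed:
OS10# configure terminal
OS10(config)# port-group 1/1/1
! Break the port group out into four 25 GbE interfaces (use 10g-4x for 10 GbE)
OS10(conf-pg-1/1/1)# mode Eth 25g-4x
OS10(conf-pg-1/1/1)# end
OS10# show port-group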
NOTE: The MX7116n FEM cannot act as a stand-alone switch and must be connected to the MX9116n FSE or other Dell
ToR switches to function. Connecting the MX7116n FEM to non-Dell switches is not supported.
The following 10GBase-T Ethernet PTM components are labeled in the figure:
1. Express service tag
2. Power and indicator LEDs
NOTE: For information about the optical transceivers and cables used with the MXG610s, see the MXG610s Fibre Channel
Switch Module Installation Guide.
ISL Trunking: Allows you to aggregate multiple physical links into one logical link for enhanced network performance and
fault tolerance. ISL Trunking also enables Brocade Access Gateway ISL Trunking (N_Port Trunking).
Fabric Vision: Enables MAPS (Monitoring and Alerting Policy Suite), Flow Vision, IO Insight, VM Insight, and ClearLink
(D_Port) to non-Brocade devices:
● MAPS enables rules-based monitoring and alerting capabilities and provides comprehensive dashboards to troubleshoot
problems in Brocade SAN environments.
● Flow Vision enables host-to-LUN flow monitoring, application flow mirroring for offline capture and deeper analysis, and
test traffic flow generation for SAN infrastructure validation.
● IO Insight automatically detects degraded storage IO performance with integrated device latency and IOPS monitoring
embedded in the hardware.
● ClearLink (D_Port) to non-Brocade devices allows extensive diagnostic testing of links to devices other than Brocade
switches and adapters.
NOTE: This functionality requires the support of the attached device and the ability for the user to check the device.
Extended Fabric: Provides greater than 10 km of switched fabric connectivity at full bandwidth over long distances.
NOTE: The features described above are only available as part of the Enterprise software bundle. Individual feature licenses
are not available.
Ports on Demand
You can purchase the Ports on Demand (POD) licenses to activate up to 24 additional ports using 8-port POD licenses. The
switch module supports dynamic POD license allocation, where two port licenses are assigned to ports 0 and 17 at the factory.
The remaining licenses are assigned to active ports on a first-come, first-served basis. After the licenses are installed, you can
move them from one port to another, making port licensing flexible.
Broadcom software licensing upgrades
To obtain software licenses for the MXG610s, you must register the switch on the Broadcom support portal at
https://support.broadcom.com/.
NOTE: Run the chassisshow command to obtain the required Factory Serial Number.
NOTE: The external (rear-facing) ports on MX5000s SAS switches are not currently enabled.
The MX5000s provides Fabric C SAS connectivity to each compute sled and one or more MX5016s storage sleds. Compute
sleds connect to the MX5000s using either SAS Host Bus Adapters (HBA) or a PowerEdge RAID Controller (PERC) in the
mini-mezzanine PCIe slot.
The MX5000s switches are deployed as redundant pairs to offer multiple SAS paths to the individual SAS disk drives. The
MX7000 chassis supports redundant MX5000s in Fabric C.
NOTE: An MX5000s SAS module and an MXG610s are not supported in the same MX7000 chassis.
Overview
The PowerEdge MX7000 chassis includes two general-purpose I/O fabrics, Fabric A and B. The vertically aligned compute
sleds in slots one through eight connect to the horizontally aligned I/O modules (IOMs) in fabric slots A1, A2, B1, and B2. This
orthogonal connection method results in a midplane free design and enables the adoption of new I/O technologies without the
burden of having to upgrade the midplane.
Mezzanine cards
The MX740c and MX750c support up to two mezzanine cards, which are installed in slots A1 and B1, and the MX840c supports
up to four mezzanine cards, which are installed in slots A1, A2, B1, and B2. Each mezzanine card provides redundant connections
to each fabric, A or B, as shown in the following figure. A mezzanine card connects orthogonally to the pair of IOMs installed
in the corresponding fabric slots.
Mini-mezzanine card
The MX7000 chassis also provides Fabric C, shown in the following figure, supporting redundant MXG610s FC switches, or
MX5000s SAS modules. This fabric uses a midplane connecting the C1 and C2 modules to each compute or storage sled. The
MX740c supports one mini-mezzanine card, which is installed in slot C1, and the MX840c supports two mini-mezzanine cards,
which are installed in slots C1 and C2.
NOTE: To expand from single-chassis to dual-chassis configuration, see Expanding from a single-chassis to dual-chassis
configuration on page 138.
To provide further redundancy and throughput to each compute sled, Fabric B can be used to create an additional Scalable
Fabric Architecture. Utilizing Fabric A and B can provide up to eight 25-Gbps connections to each MX740c or sixteen 25-Gbps
connections to each MX840c.
The complex scalable fabric topologies in this section apply to dual-port Ethernet NICs.
These complex topologies are described as follows.
NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.
Single chassis:
● MX9116n FSE in slot A1 is connected to MX7116n FEM in slot B1.
● MX9116n FSE in slot A2 is connected to MX7116n FEM in slot B2.
Dual chassis:
● MX9116n FSE in Chassis 1 slot A1 is connected to MX7116n FEMs in Chassis 1 slot B1, Chassis 2 slot A1, Chassis 2 slot B1.
● MX9116n FSE in Chassis 2 slot A2 is connected to MX7116n FEMs in Chassis 1 slot A2, Chassis 1 slot B2, Chassis 2 slot B2.
Multiple chassis:
The topology with multiple chassis is similar to the dual-chassis topology. Make sure to connect each FSE only to FEMs in slots
with the same slot number; for example, connecting the FSE in Chassis 1 slot A1 to a FEM in Chassis 2 slot B2 is not supported.
NOTE: The Broadcom 57504 quad-port Ethernet adapter is not a converged network adapter and does not support FCoE
or iSCSI offload.
The MX9116n FSE has sixteen 25 GbE server-facing ports, ethernet1/1/1 through ethernet1/1/16, which are used when the
PowerEdge MX server sleds are in the same chassis as the MX9116n FSE.
With only dual-port NICs in all server sleds, only the odd-numbered server-facing ports are active. If the server has a quad-port
NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the NIC ports will be connected and
show a link up.
With quad-port NICs in all server sleds, both the odd- and even-numbered server-facing ports will be active. The following table
shows the MX server sled to MX9116n FSE interface mapping for quad-port NIC servers which are directly connected to the
switch.
When using multiple chassis and MX7116n FEMs, virtual slots are used to maintain a continuous mapping between the NIC and
physical port. For more information on virtual slots, see Virtual ports and slots on page 39.
In a multiple-chassis Scalable Fabric, the interface numbering for the first two NIC connections is mixed, because one NIC
connection is to the MX9116n FSE in the same chassis as the server and the other NIC connection is to the MX7116n FEM. In
this example, the following table shows the server interface mapping for Chassis 1 using quad-port adapters.
The following figure shows the two-chassis topology with quad-port NICs in each chassis. Only a single fabric is configured.
Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE.
The following figure shows the two-chassis topology with quad-port NICs. Dual fabrics are configured.
The following figure shows the multiple chassis topology with quad-port NICs. Only a single fabric is configured.
Figure 28. Multiple chassis topology with quad-port NICs – single fabric
The following figure shows the multiple chassis topology with quad-port NICs in two chassis and dual-port NICs in one chassis.
Only a single fabric is configured. Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE with the
quad-port card. Do not connect the second port on the MX7116n FEM when configured with a dual-port NIC.
The following figure shows one example of an unsupported topology. The ports on the MX7116n FEMs must never be connected
to different MX9116n FSEs.
To view the server-facing interface port status, use the show interface status command. Server-facing ports are
numbered 1/1/1 to 1/1/16.
For the MX9116n FSE, servers that have a dual-port NIC connect only to odd-numbered internal Ethernet interfaces; for
example, an MX740c in slot one would be 1/1/1, and an MX840c in slots five and six occupies 1/1/9 and 1/1/11.
NOTE: Even-numbered Ethernet ports between 1/1/1–1/1/16 are reserved for quad-port NICs.
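For example, a quick check from the OS10 CLI:
! Dual-port NICs: expect links on odd-numbered ports only (1/1/1, 1/1/3, ...)
! Quad-port NICs: expect links on both odd- and even-numbered ports
OS10# show interface status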
A port group is a logical port that consists of one or more physical ports and provides a single interface. Only the MX9116n FSE
supports the following port groups:
● QSFP28-DD – Port groups 1 through 12
● QSFP28 – Port groups 13 and 14
● QSFP28 Unified – Port groups 15 and 16
The following figure shows these port groups along the top, and the bottom shows the physical ports in each port group. For
instance, QSFP28-DD port group 1 has member ports 1/1/17 and 1/1/18, and unified port group 15 has a single member, port
1/1/43.
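In Full Switch mode, the same port-group-to-member-port mapping can be confirmed from the CLI; a minimal sketch:
! Lists each port group, its current mode, and its member ports
OS10# show port-group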
Figure 32. QSFP28-DD connection between MX9116n FSE and MX7116n FEM
NOTE: Compute sleds with dual-port NICs require only MX7116n FEM port 1 to be connected.
In addition to fabric-expander-mode, QSFP28-DD port groups support the following Ethernet breakout configurations:
● Using QSFP28-DD optics/cables:
○ 2x 100 GbE – Breakout a QSFP28-DD port into two 100-GbE interfaces
○ 2x 40 GbE – Breakout a QSFP28-DD port into two 40-GbE interfaces
○ 8x 25 GbE – Breakout a QSFP28-DD port into eight 25-GbE interfaces
NOTE: QSFP28-DD ports are backwards compatible with QSFP28 and QSFP+ optics and cables.
While each 32 Gb FC connection is providing 25 Gbps, the overall FC bandwidth available is 100 Gbps per unified port group,
or 200 Gbps for both ports. However, if an application requires the maximum 28 Gbps throughput per port, use the 2x 32 Gb
breakout mode. This mode configures the connections between the NPU and the FC ASIC, as shown in the following figure.
[Figure: MX9116n FSE unified port in 2x 32 Gb FC breakout mode; two 50 Gbps internal lanes map to two 32 GFC ports]
In 2x 32 Gb FC breakout mode, the MX9116n FSE binds two 50 Gbps links together to provide a total of 100 Gbps bandwidth
per lane to the FC ASIC. This results in the two FC ports operating at 28 Gbps. The overall FC bandwidth available is 56 Gbps
per unified port, or 112 Gbps for both (compared to the 200 Gbps using 4x 32-Gb FC).
NOTE: Rate limited ports are not oversubscribed ports. There is no FC frame drop on these ports and buffer to buffer
credit exchanges ensure flow consistency.
Use the same command to show the list of MX7116n FEMs in a quad-port NIC configured scenario, in which each MX7116n FEM
creates two connections with the MX9116n FSE. In a dual-chassis scenario, MX7116n FEMs are connected on port group 1 and
port group 7 to the MX9116n FSE as shown below. For example, if the quad-port NIC is configured on compute sled 1, then
virtual ports 1/1/71:1 and 1/1/71:9 will be up.
The MX9116n physical interfaces mapped to the MX7116n virtual ports display dormant (instead of up) in the show
interface status output until a virtual port starts to transmit server traffic.
In a traditional design, communication between rack and blade servers must traverse the core, increasing latency, and the
storage array consumes expensive core switch ports. All of this results in increased operations cost from the increased number
of managed switches.
Embedded ToR functionality is built into the MX9116n FSE. Configure any QSFP28-DD port to break out into 8x 10 GbE or 8x
25 GbE and connect the appropriate cables and optics. This enables all servers and storage to connect directly to the MX9116n
FSE.
Operating modes
The Dell Networking MX9116n Fabric Switching Engine (FSE) and MX5108n Ethernet Switch operate in one of two modes:
● Full Switch mode (Default) – All switch-specific SmartFabric OS10 capabilities are available and managed through the CLI.
● SmartFabric mode – Switches operate as a Layer 2 I/O aggregation fabric and are managed through the OpenManage
Enterprise-Modular (OME-M) console.
SmartFabric mode
A SmartFabric is a logical entity that consists of a collection of physical resources, such as servers and switches, and logical
resources such as networks, templates, and uplinks. The OpenManage Enterprise – Modular (OME-M) console provides a
method to manage these resources as a single unit.
For more information about SmartFabric mode, see Overview of SmartFabric Services for PowerEdge MX on page 73.
VLAN restrictions
VLANs 4004 and 4020 are reserved for internal switch communication and cannot be assigned to any interface in Full Switch
or SmartFabric mode. VLAN 4020 is automatically created by the system as the Management VLAN. Do not remove this VLAN,
and do not remove the VLAN tag or edit the Management VLAN on the Edit Uplink page if it is running in SmartFabric mode.
The VLAN and subnet that are assigned to OME-M cannot be used in the data path or fabric of the MX-IOMs. Ensure the
management network used for OME-M does not conflict with networks configured on the fabric. All other VLANs are allowed on
the data plane and can be assigned to any interface.
Storage networking
PowerEdge MX Ethernet I/O modules support Fibre Channel (FC) connectivity in different ways:
● Direct Attach, also called F_Port
● NPIV Proxy Gateway (NPG)
● FIP Snooping Bridge (FSB)
● Internet Small Computer Systems Interface, or iSCSI
The method to implement depends on the existing infrastructure and application requirements. Consult your Dell representative
for more information.
Configuring FC connectivity in SmartFabric mode is simple and is almost identical across the three connectivity types.
NOTE: The PowerEdge MX Platform supports all Dell PowerStore storage appliance models. This document provides
example deployments that include the PowerStore 1000T appliance. For specific details on PowerStore appliance models,
see the Dell PowerStore T page.
Figure 43. Fibre Channel NPG network to Dell PowerStore 1000T SAN (MX9116n FSE ports 1/1/44:1 and 1/1/44:2 in MX7000
chassis 1 and chassis 2 connect through FC SAN A and FC SAN B to PowerStore Controller A and Controller B)
NOTE: For more information about configuration and deployment, see Scenario 5: Connect MX9116n FSE to Fibre Channel
storage - NPIV Proxy Gateway mode on page 198.
Figure 44. Fibre Channel (F_Port) direct attach network to Dell PowerStore 1000T SAN (MX9116n FSE ports 1/1/44:1 and
1/1/44:2 in each MX7000 chassis attach directly to the PowerStore controllers on FC SAN A and FC SAN B)
NOTE: For more information on configuration and deployment, see Scenario 6: Connect MX9116n FSE to Fibre Channel
storage - FC Direct Attach on page 202.
Figure 45. FCoE (FSB) network to Dell PowerStore 1000T SAN through S4148U-ON NPG switch
NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.
NOTE: For more information about configuration and deployment, see Scenario 7: Connect MX5108n to Fibre Channel
storage - FSB on page 207.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
iSCSI
iSCSI is a transport layer protocol that embeds SCSI commands inside of TCP/IP packets. TCP/IP transports the SCSI
commands from the Host (initiator) to storage array (target). iSCSI traffic can be run on a shared or dedicated network
depending on application performance requirements.
In the example below, MX9116n FSEs are connected to Dell PowerStore 1000T storage array controllers SP A and SP B through
ports 1/1/41:1-2. If there are multiple paths from host to target, iSCSI can use multiple sessions for each path. Each path from
the initiator to the target has its own session and connection. This connectivity method is often referred to as "port binding".
Dell Technologies recommends that you use the port binding method for connecting the MX environment to the PowerStore
1000T storage array. Configure multiple iSCSI targets on the PowerStore 1000T and establish connectivity from the host
initiators (MX compute sleds) to each of the targets. When Logical Unit Numbers (LUNs) are successfully created on the
target, host initiators can connect to the target through iSCSI sessions. For more information, see the Dell PowerStore T page.
[Figure: iSCSI network to Dell PowerStore 1000T SAN; MX9116n FSE ports 1/1/41:1 and 1/1/41:2 in each chassis connect to
iSCSI SAN A and iSCSI SAN B, with MX740c compute sleds as initiators]
NVMe/TCP
The following resources provide more information about NVMe/TCP:
● SFSS Deployment Guide: Demonstrates the planning and deployment of SmartFabric Storage Software (SFSS) for
NVMe/TCP.
● NVMe/TCP Host/Storage Interoperability Simple Support Matrix: Provides information about the NVMe/TCP Host/Storage
Interoperability support matrix.
● NVMe/TCP Supported Switches Simple Support Matrix: Provides information about the NVMe/TCP Supported Switches
support matrix.
Switch Overview
The Overview page provides a convenient location to view the pertinent data on the IOM such as:
● Chassis information
● Recent alerts
● Recent activity
● IOM subsystems
● Environment
The Power Control drop-down button provides three options:
● Power Off: Turns off the IOM
● Power Cycle: Power cycles the IOM
● System Reset: Initiates a cold reboot of the IOM
Hardware tab
The Hardware tab provides information about the following IOM hardware:
● FRU
● Device Management Info
● Installed software
● Port information
In SmartFabric mode, the Port Information tab provides useful operations such as:
● Configuring port-group breakout
● Toggling the admin state of ports
● Configuring MTU of ports
● Toggling Auto Negotiation
● Setting the port description
NOTE: Do not use the OME-M UI to manage ports of a switch in Full Switch mode.
NOTE: The following phased update order helps you to manually orchestrate MX component updates with no workload
disruption:
1. OME-Modular application
2. Network IOMs (SmartFabric and Full Switch) and SAS IOMs
3. Servers: phased update of servers (depending on clustering solution)
NOTE: When upgrading OS10, always perform the upgrade as part of an overall MX baseline. Follow the installation
instructions in the OME-M User's Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.
NOTE: If an IOM is in SmartFabric mode, all the switches that are part of the fabric are updated in sequence automatically.
Do not select both of the switches in the fabric to update.
NOTE: If an IOM is in Full Switch mode, the firmware upgrade is completed only on the specific IOMs that are selected in
the UI.
For step-by-step instructions about how to upgrade OS10 on PowerEdge MX IO modules along with a version-to-version
upgrade matrix, see the OME-M User's Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.
Alerts tab
The Alerts tab provides information about alerts and notifies the administrator. Use the advanced filter option to quickly filter
alerts. Various operations can be performed on one or several alerts:
● Acknowledge
● Unacknowledge
● Ignore
Settings tab
The Settings tab provides options to configure the following settings for the IOMs:
● Network
● Management
● Monitoring
● Advanced Settings
Network
The Network option includes configuring IPv4, IPv6, DNS Server, and Management VLAN settings.
Management
The Management option includes setting the hostname and admin account password.
NOTE: Beginning with OME-M 1.20.00 and OS10.5.0.7, this field sets the admin account password. For OME-M 1.10.20 and
OS10.5.0.5 and earlier, the field name Root Password sets the OS10 linuxadmin account password.
The default username for CLI access is admin and the password is admin.
Monitoring
The Monitoring section provides options for SNMP settings.
Advanced Settings
The Advanced Settings tab offers the option for time configuration replication and alert replication. Select the Replicate
Time Configuration from Chassis check box to replicate the time settings that are configured in the chassis to the IOM.
Select the Replicate Alert Destination Configuration from Chassis check box to replicate the alert destination settings that
are configured in the chassis to the IOM.
NOTE: You cannot delete the default linuxadmin user name. The default admin user name can only be deleted if at
least one OS10 user with the sysadmin role is configured.
For more information on OS10 privileged accounts, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.
NOTE: OME-M versions prior to 1.20.00 will set the linuxadmin password, instead of the 'admin' password, when using
this page.
If the MXG610s I/O module is selected, this procedure sets the admin account password for the Fabric OS running on the IOM.
Switch dependent: Also referred to as LACP, 802.3ad, or Dynamic Link Aggregation, this teaming method uses the LACP
protocol to understand the teaming topology. This teaming method provides active/active teaming and requires the switch to
support LACP teaming.
Switch independent: This method uses the operating system and NIC device drivers on the server to team the NICs. Each NIC
vendor may provide slightly different implementations with different pros and cons.
NIC Partitioning (NPAR) can impact how NIC teaming operates. Based on restrictions that the NIC vendors implement and that
are related to NIC partitioning, certain configurations preclude certain types of teaming.
The following restrictions are in place for both Full Switch and SmartFabric modes:
● If NPAR is not in use, both switch-dependent (LACP and static LAG) and switch-independent teaming methods are
supported.
● If NPAR is in use, only switch-independent teaming methods are supported. Switch-dependent teaming (LACP and static
LAG) is not supported.
If switch dependent (LACP) teaming is used, the following restrictions are in place:
● The iDRAC shared LAN on motherboard (LOM) feature can only be used if the Failover option on the iDRAC is enabled.
● If the host operating system is Microsoft Windows, the LACP timer MUST be set to Slow, also referred to as Normal.
Refer to the network adapter or operating system documentation for detailed NIC teaming instructions:
● Microsoft Windows 2012 R2: refer to the Instructions section
● Microsoft Windows 2016: refer to the Instructions section
NOTE: For deployments utilizing NPAR on the MX Platform with VMware solutions, contact Dell Support.
The following table shows the options that the MX Platform provides for NIC teaming:
NOTE: Prior to enabling scale-profile vlan, add the mode L3 command to VLAN 4020 and any VLANs with FCoE
or routing enabled. Failure to do this will disrupt network traffic on those VLANs, including access to the management
interface on the switch. For more information on this command and its use, find the relevant version of the User Guide in
the OME-M and OS10 compatibility and documentation table.
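A minimal Full Switch mode sketch of the sequence that the note above describes; the FCoE VLAN ID is illustrative:
OS10# configure terminal
! Set the management VLAN and any FCoE or routed VLANs to L3 mode first
OS10(config)# interface vlan 4020
OS10(conf-if-vl-4020)# mode L3
OS10(conf-if-vl-4020)# exit
OS10(config)# interface vlan 30
OS10(conf-if-vl-30)# mode L3
OS10(conf-if-vl-30)# exit
! Then enable the larger VLAN scale profile
OS10(config)# scale-profile vlan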
NOTE: When the port-VLAN (PV) count becomes very large, some show commands may take additional time to execute. This
delay does not impact switching performance, only the CLI display function.
Create FC zones
Server and storage adapter WWPNs, or their aliases, are combined into zones to allow communication between devices in the
same zone. Dell Technologies recommends single-initiator zoning; in other words, no more than one server HBA port per zone.
For high availability, each server HBA port should be zoned to at least one port from SP A and one port from SP B. In this
example, one zone is created for each server HBA port. The zone contains the server port and the two storage processor ports
that are connected to the same MX9116n FSE.
Run the following commands on both MX9116n-A1 and MX9116n-A2 to activate the zone set:
vfabric 1
zoneset activate zoneset1
exit
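For context, a hedged sketch of the zoning steps that lead up to this activation; the alias, zone, and zone set names and the WWPN are hypothetical, and the exact prompts vary by OS10 release:
OS10(config)# fc alias server1-hba1
OS10(config-fc-alias)# member wwn 10:00:00:00:c9:aa:bb:01
OS10(config-fc-alias)# exit
! Single-initiator zone: one server HBA port plus the storage processor ports
OS10(config)# fc zone zone1
OS10(config-fc-zone)# member alias-name server1-hba1
OS10(config-fc-zone)# exit
OS10(config)# fc zoneset zoneset1
OS10(config-fc-zoneset)# member zone1
OS10(config-fc-zoneset)# exit
! Activate the zone set under the vfabric, as shown above
OS10(config)# vfabric 1
OS10(conf-vfabric-1)# zoneset activate zoneset1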
NOTE: A new replacement IOM will have a factory default configuration. All port interfaces in the default configuration are
in the no shutdown state.
In Full Switch mode, the Dell PowerEdge MX platform gives you the option to replace an I/O module in the case of persistent
errors or failures. The MX9116n FSE and MX5108n can be replaced with another I/O module of the same type.
Follow the instructions in this section to replace a failed I/O module.
Prerequisites:
● The replacement IOM must be a new device within the chassis deployment. Do not use an IOM that was previously deployed
within the MCM group.
● The other IOM in Full Switch mode must be up, running, and healthy; otherwise a complete traffic outage may occur.
● The new IOM must have the same OS10 version as the faulty IOM.
NOTE: OS10 is factory-installed in the MX9116n FSE or MX5108n Ethernet Switch. If the faulty IOM has an upgraded
version of OS10, you must upgrade the new IOM to the same version.
The following is an overview of the module replacement process:
1. Back up the IOM configuration.
2. Physically replace the IOM.
3. Verify firmware versions and configure the IOM settings.
4. Restore the IOM configuration.
5. Connect the cables to the new IOM.
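A minimal sketch of steps 1 and 4 from the OS10 CLI, assuming a reachable SCP server; the host and file path are hypothetical:
! Step 1: back up the running configuration of the faulty IOM (if still reachable)
OS10# copy running-configuration scp://admin@backup-host/mx/iom-a1.cfg
! Step 4: after the replacement IOM is installed and running the same OS10 version,
! restore the saved configuration and make it persistent
OS10# copy scp://admin@backup-host/mx/iom-a1.cfg running-configuration
OS10# copy running-configuration startup-configuration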
VLAN stacking
Dell Technologies introduces VLAN stacking in Dell SmartFabric OS10.5.4.0. This feature, commonly called Q-in-Q, is available
for use on the Dell PowerEdge MX platform in Full Switch mode starting with version OS10.5.4.1.
VLAN stacking is often recommended for the service provider use case. VLAN stacking enables service providers to offer
separate VLANs to customers with no coordination between customers and only minimal coordination between each customer
and the provider. It allows service providers to add their own VLAN tag to data or control frames traversing the provider
network, so the provider can differentiate customers even if those customers use the same VLAN ID. The provider's network
forwarding decisions are based on the provider VLAN tag only. This tag enables the provider to map traffic through the core
independent of the customer; the customer and provider only coordinate at the provider edge.
Figure 63. Addition (ingress) and removal (egress) of the S-Tag before the original 802.1Q header
Another use case that is more suited to the Dell PowerEdge MX Platform is to allow the MX7000 Chassis, or MX Scalable
Fabric, to be treated as a single workload from the perspective of the top of rack (ToR) leaf pair. VLAN stacking is used to allow
many workloads with unique VLANs to be represented by a single stack VLAN on the uplink of the MX IOMs. This allows
VLAN changes to occur within the MX Scalable Fabric on each server without requiring networking admins to change the
configuration in the overall data center. It also gives the PowerEdge MX platform greater flexibility for VLAN management
and scaling.
The following diagrams demonstrate a few topologies:
Functional overview
SmartFabric mode provides the following functionality:
● Data center modernization
○ I/O aggregation
○ Plug-and-play fabric deployment
○ Single interface to manage all switches in the fabric
● Lifecycle management
○ Fabric-wide SmartFabric OS10 updates
○ Automated or user-enforced rollback to last well-known state
● Fabric automation
○ Physical topology compliance
○ Server networking managed using templates
○ Automated QoS assignment per VLAN
○ Automated storage networking
● Failure remediation
○ Dynamically adjusts bandwidth across all interswitch links in the event of a link failure
○ Automatically detects fabric misconfigurations or link level failure conditions
○ Automatically heals the fabric on failure condition removal
NOTE: In SmartFabric mode, MX series switches operate entirely as a Layer 2 network fabric. Layer 3 protocols are not
supported.
The following configuration commands remain available from the CLI while a switch operates in SmartFabric mode:
● clock
● fc alias
● fc zone
● fc zoneset
● hostname
● host-description
● interface
● ip nameserver
● ip ssh server
● ip telnet server
● login concurrent-session
● login statistics
● logging
● management route
● ntp
● snmp-server
● tacacs-server
● username
● spanning-tree
● vlan
Full Switch mode: All switch interfaces are assigned to VLAN 1 by default and are in the same Layer 2 bridge domain. All
configuration changes are saved in the running configuration by default. To display the current configuration, use the show
running-configuration command.
SmartFabric mode: Layer 2 bridging is disabled by default, so interfaces must join a bridge domain (VLAN) before being able to
forward frames. Verify configuration changes using feature-specific show commands, such as show interface and show vlan,
instead of show running-configuration.
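For example, in SmartFabric mode a quick verification pass might look like this:
! Confirm the VLANs that the fabric has programmed
OS10# show vlan
! Confirm per-port state, speed, and VLAN assignment
OS10# show interface status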
NOTE: The cabling shown in this section is the VLTi connection between the MX switches.
NOTE: The VLTi ports are not user selectable, and the SmartFabric engine enforces the connection topology.
a. With SmartFabric OS10.5.1.6 and later, twelve FSEs in a single MCM group or eight MX5108n switches in a single
MCM group are supported, but twelve FSEs and eight MX5108n switches together (20 total) in a single MCM group are not
supported.
NOTE: VLANs 4004 and 4020 are reserved for internal switch communication and cannot be assigned to any interface in
Full Switch or SmartFabric mode. VLAN 4020 is a Management VLAN and is enabled by default. Do not remove this VLAN,
and do not remove the VLAN tag or edit Management VLAN on the Edit Uplink page. In Full Switch mode, you can create
a VLAN, enable it, and define it as a Management VLAN in global configuration mode on the switch. All other VLANs are
allowed on data plane and can be assigned to any interface. For more information on Configuring VLANs in Full Switch
mode, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
NOTE: In SmartFabric mode, a VLAN can be created using the CLI, but cannot be deleted or removed. Therefore, all VLAN
configuration must be done in the OME-M UI while in SmartFabric mode.
IGMP snooping
IGMP is a communications protocol that establishes multicast group memberships with neighboring switches and routers on
IPv4 networks. OS10 supports IGMPv1, IGMPv2, and IGMPv3 to manage multicast group memberships on IPv4 networks.
IGMP snooping uses the information in IGMP packets to generate a forwarding table that associates ports with multicast
groups. When switches receive multicast frames, they forward them to their intended receivers. OS10 supports IGMP snooping
on VLAN interfaces.
MLD snooping
IPv6 uses the MLD protocol to manage multicast groups. OS10 supports MLDv1 and MLDv2 to manage the multicast group
memberships on IPv6 networks.
MLD snooping enables switches to use the information in MLD packets and generate a forwarding table that associates ports
with multicast groups. When switches receive multicast frames, they forward them to their intended receivers. OS10 supports
MLD snooping on VLAN interfaces.
To enable MLD snooping in Full Switch mode, see the Dell SmartFabric OS10 User Guide. Find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
Validation
Run the following commands on MX IOMs in the Fabric to validate the IGMP and MLD snooping.
The show ip igmp snooping summary command shows the maximum number of instances and the total number of
interfaces with IGMP snooping enabled.
The show ip igmp snooping interface command shows the VLANs, the IGMP version, and all other IGMP snooping
details.
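For example:
! Maximum instances and count of IGMP snooping-enabled interfaces
OS10# show ip igmp snooping summary
! Per-VLAN IGMP version and snooping details
OS10# show ip igmp snooping interface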
Physical connectivity
All physical Ethernet connections within an uplink from a SmartFabric are automatically grouped into a single LACP LAG. All
related ports on the upstream switches must also be in a single LACP LAG. Failure to do so may create network loops.
A minimum of one physical uplink from each MX switch to each upstream switch is required and the uplinks must be connected
in a mesh design. For example, if you have two upstream switches, you need two uplinks from each MX9116n FSE, as shown in
the following figure.
Starting with Dell Networking OS10.5.2.4, a SmartFabric supports a maximum of four Ethernet – No Spanning Tree uplinks
or three legacy Ethernet uplinks. With Dell Networking OS10.5.1.6 and earlier, a SmartFabric supports a maximum of three
Ethernet – No Spanning Tree uplinks or three legacy Ethernet uplinks.
NOTE: If multiple uplinks are going to be used, you cannot use the same VLAN ID on more than one uplink without creating
a network loop.
NOTE: The upstream switch ports must be in a single LACP LAG as shown in the figure below. Creating multiple LAGs
within a single uplink results in a network loop and is not supported.
The maximum number of uplinks supported in a SmartFabric is detailed in the following table.
Dell Technologies has tested uplinks with the following combinations of switch models and operating system versions.
To validate the STP configuration, use the show spanning-tree brief command.
NOTE: STP is required when using legacy Ethernet uplinks. MSTP is not supported. Operating a SmartFabric with STP
disabled and the legacy Ethernet uplink may create a network loop and is not supported. Use the Ethernet - No Spanning
Tree uplink instead.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
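A minimal sketch on the upstream OS10 switch, using the command from the note above; VLAN 10 and priority 0 (the lowest, most-preferred value) are illustrative:
! On the upstream (external) OS10 switch, not on the MX IOMs
OS10(config)# spanning-tree vlan 10 priority 0
OS10(config)# end
! Confirm that this switch now reports itself as the root bridge
OS10# show spanning-tree brief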
The following table lists the network types and related settings. The QoS group is the numerical value for the queues available in
SmartFabric mode. Available queues include 2 through 5. Queues 1, 6, and 7 are reserved.
NOTE: In SmartFabric mode, an administrator cannot change the default weights for the queues. Weights for each queue
can be seen using the show queuing weights interface ethernet command that is described in Common CLI
troubleshooting commands for Full Switch and SmartFabric modes on page 157.
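For example, to display the fixed queue weights on a port (the interface number is illustrative):
OS10# show queuing weights interface ethernet 1/1/1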
Templates
A template is a set of system configuration settings referred to as attributes. A template may contain a small set of attributes
for a specific purpose, or all the attributes for a full system configuration. Templates allow for multiple servers to be configured
quickly and automatically without the risk of human error.
Networks (VLANs) are assigned to NICs as part of the server template. When the template is deployed, those networks are
programmed on the fabric for the servers that are associated with the template.
NOTE: Network assignment through template only functions for servers connected to a SmartFabric. If a template with
network assignments is deployed to a server connected to a switch in Full Switch mode, the network assignments are
ignored.
The OME-M UI provides the following options for creating templates:
● Most frequently, templates are created by getting the current system configuration from a server that has been configured
to the exact specifications required. This is referred to as a Reference Server.
● Templates may be cloned, copied, and edited.
● A template can be created by importing a Server Configuration Profile (SCP) file. The SCP file may be from a server or
exported by OpenManage Essentials, OpenManage Enterprise, or OME-M.
● OME-M comes prepopulated with several templates for specific purposes.
Profiles
A server profile is a combination of template and identity settings that are applied to a specific server or multiple servers.
When a server template is deployed successfully, OME-M automatically creates a server profile and applies it to that server.
OME-M also allows you to manually create a server profile that you can apply to designated compute sleds.
Instead of deleting and recreating server templates, you can use profiles to deploy modified attributes on top of server
templates. A single profile can be applied to multiple server templates, with some or all attributes modified.
Deployment
Deployment is the process of applying a full or partial system configuration on a specific target device. In OME-M, templates are
the basis for all deployments. Templates contain the system configuration attributes that get provisioned to the target server,
then the iDRAC on the target server applies the attributes contained in the template and reboots the server if necessary. Often,
templates contain virtual identity attributes. As mentioned above, identity attributes must have unique values on the network.
Identity Pools facilitate the assignment and management of unique virtual identities.
NOTE: OMNI 2.0 and 2.1 only support VLAN automation with one uplink per SmartFabric.
For more information about network cabling on PowerEdge MX, see Supported cables and optical connectors on page 223.
Define VLANs
Before creating the SmartFabric, the initial set of VLANs should be created. The first VLAN to be created should be the default,
or native VLAN, typically VLAN 1. The default VLAN must be created for any untagged traffic to cross the fabric.
NOTE: VLAN 1 will be created as a Default VLAN when the first fabric is created.
To define VLANs using the OME-M console, perform the following steps.
1. Open the OME-M console.
2. From the navigation menu, click Configuration > VLANs.
NOTE: In OME-M 1.10.20 and earlier, the VLANs screen is titled Networks.
NOTE: Define VLANs for FCoE if implementing Fibre Channel configurations. Skip this section if not required.
A standard Ethernet uplink carries assigned VLANs on all physical uplinks. When implementing FCoE, traffic for SAN path A and
SAN path B must be kept separate. The storage arrays have two separate controllers which create two paths, SAN path A and
SAN path B, connected to the MX9116n FSE. For storage traffic to be redundant, two separate VLANs are created for that
traffic.
Using the same process described in Define VLANs, create two additional VLANs for FCoE traffic.
Figure 83. Defined FCoE VLANs list
NOTE: To create VLANs for FCoE, from the Network Type list, select Storage – FCoE, and then click Finish. VLANs to be
used for FCoE must be configured as the Storage – FCoE network type.
NOTE: In OME-M 1.10.20 and earlier, the VLANs screen is titled Networks.
Figure 84. SmartFabric deployment design window
The SmartFabric deploys. The fabric creation process can take up to 20 minutes to complete. During this time, all related
switches are rebooted, and the operating mode changes from Full Switch to SmartFabric mode.
NOTE: After the fabric is created, the fabric health is critical until at least one uplink is created.
The following figure shows the new SmartFabric object and some basic information about the fabric.
Optional steps
The configuration of forward error correction, uplink port speed and breakout, MTU, and autonegotiation is optional.
Forward error correction (FEC) is a technique for controlling errors in high-speed data transmission. The source sends
redundant error-correcting code along with the data frame, and the destination uses it to detect and correct errors without
retransmission. This technique extends the range of the signal and enhances data reliability.
Available FEC modes:
● CL91-RS - Supports 100 GbE
● CL108-RS – Supports 25 GbE and 50 GbE
● CL74-FC – Supports 25 GbE and 50 GbE
● Auto
● Off
In SmartFabric mode, configuring FEC is supported on OME-M 1.20.00 and later. FEC options CL91-RS, CL108-RS, CL74-FC,
Auto, and Off are available. The options displayed in the UI vary depending on the speed of the selected interface.
The following table shows the default FEC and auto negotiation values for optics and cables for the QSFP28-DD and QSFP28
ports at 200 GbE and 100 GbE speeds.
Table 19. Media, auto negotiation, and default FEC values for 200 GbE and 100 GbE
● 200 GbE and 100 GbE DAC: auto negotiation enabled; FEC CL91-RS
● 200 GbE and 100 GbE fiber or AOC, except LR-related optics: auto negotiation disabled; FEC CL91-RS
● 200 GbE and 100 GbE LR-related optics: auto negotiation disabled; FEC disabled
The following table shows the default FEC and auto negotiation values for optics and cables for the QSFP28-DD and QSFP28
ports at 200, 100, 50, and 25 GbE speeds.
Table 20. Media, cable type, auto negotiation, and default FEC values
● 200, 100, 50, and 25 GbE DAC, cable type CR-L: auto negotiation enabled; FEC CL108-RS
● 200, 100, 50, and 25 GbE DAC, cable type CR-S: auto negotiation enabled; FEC CL74-FC
● 200, 100, 50, and 25 GbE DAC, cable type CR-N: auto negotiation enabled; FEC disabled
● 200, 100, 50, and 25 GbE fiber or AOC, except LR-related optics: auto negotiation disabled; FEC CL108-RS
● 200, 100, 50, and 25 GbE LR-related optics: auto negotiation disabled; FEC disabled
To configure FEC in Full Switch mode, find the relevant version of the Dell SmartFabric OS10 User Guide in the OME-M and
OS10 compatibility and documentation table.
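As a hedged Full Switch mode sketch (the interface-level fec command shown below is an assumption based on the FEC modes listed above; confirm the exact syntax in the User Guide for your release):
OS10# configure terminal
OS10(config)# interface ethernet 1/1/41
! Choose a FEC mode that matches the port speed and media, for example CL91-RS at 100 GbE
OS10(conf-if-eth1/1/41)# fec CL91-RS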
To configure FEC in SmartFabric mode on the OME-M console, perform the following steps.
Steps
1. Access the OME-M console.
2. Choose Devices > I/O Modules, and then click an I/O module.
3. Choose Hardware > Port Information. This page lists the IOM ports and their information.
4. Select a port to configure FEC and click the Configure FEC option at the top.
NOTE: FEC options are not supported for compute sled facing ports and FEM ports (breakout FEM, virtual ports).
Figure 87. Select FEC Type
If the uplink ports must be reconfigured to a different speed or breakout setting from the default, you must complete this before
creating the uplink.
To configure the Ethernet breakout on port groups using OME-M Console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select the switch that you want to manage. In this example, an MX9116n FSE in slot IOM-A1 is selected.
4. Choose Hardware > Port Information.
5. In the Port Information pane, choose the desired port group. In this example, port group 1/1/13 is selected.
6. Select Configure Breakout. In the Configure Breakout dialog box, select the required Breakout option. In this example, the
Breakout Type for port group 1/1/13 is set to 1x 40GE.
NOTE: Before choosing the breakout type, you must set the Breakout Type to HardwareDefault and then set the
desired configuration. If the desired breakout type is selected before setting HardwareDefault, an error occurs.
7. Click Finish.
Figure 88. Select the desired breakout type
8. Configure the remaining breakout types on additional uplink port groups as needed.
Figure 90. Port information section
2. To configure MTU, select the port that is listed under the respective port-group.
3. Click Configure MTU. Enter MTU size in bytes.
NOTE: Ethernet – No Spanning Tree uplink is supported with Dell and non-Dell switches in a vPC/VLT. Each uplink must be
a single LACP LAG.
NOTE: To change the port speed or breakout configuration, see Configure uplink port speed or breakout on page 96 and
make those changes before creating the uplinks.
After initial deployment, the new fabric shows an Uplink Count of zero and a warning (yellow triangle with exclamation point).
The lack of a fabric uplink results in a failed health check (red circle with an x). To create the uplink, perform the following
steps.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click on the fabric name. In this example, SmartFabric is selected.
4. In the Fabric Details pane, click Uplinks.
5. Click the Add Uplinks button.
6. In the Add Uplink window, complete the following:
a. Enter a name for the uplink in the Name box. In this example, Uplink01 is entered.
b. Optionally, enter a description in the Description box.
c. From the Uplink Type list, select the desired type of uplink. In this example, Ethernet – No Spanning Tree is selected.
NOTE: For more information on Uplink Failure Detection, see the Uplink failure detection section.
d. Click Next.
e. From the Switch Ports list, select the uplink ports on both MX9116n FSEs. In this example, ethernet 1/1/41 and
ethernet 1/1/42 are selected on both MX9116n FSEs.
NOTE: The show inventory CLI command can be used to find the I/O Module service tag information (for example,
8XRJ0T2).
f. From the Tagged Networks list, select the desired tagged VLANs. In this example, VLAN0010 is selected.
g. From the Untagged Network list, select the untagged VLAN. In this example, VLAN0001 is selected.
Figure 94. Create Ethernet uplink
7. Click Finish.
At this point, SmartFabric creates the uplink object and the status for the fabric changes to OK (green box with checkmark).
NOTE: VLAN 1 is assigned as the Untagged Network by default.
Table 21. Dell OS10 and Cisco Nexus Ethernet – No Spanning Tree configuration
Dell Networking OS10:
Global settings:
spanning-tree mode RSTP
Port-channel:
switchport mode trunk
switchport trunk allowed vlan xy
spanning-tree bpdu guard enable
spanning-tree guard root
spanning-tree port type edge
Interface:
no shutdown

Cisco Nexus OS:
Global settings:
spanning-tree port type edge bpduguard default
spanning-tree port type network default
Port-channel:
switchport mode trunk
switchport trunk allowed vlan xy
channel-group <channel-group-id> mode active
Interface:
switchport mode trunk
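Assembled into a single configuration, the Dell OS10 column of Table 21 looks like the following minimal sketch, assuming port-channel 1, allowed VLAN 10, and member port ethernet 1/1/1 (all placeholders):

spanning-tree mode rstp
interface port-channel 1
 switchport mode trunk
 switchport trunk allowed vlan 10
 spanning-tree bpdu guard enable
 spanning-tree guard root
 spanning-tree port type edge
interface ethernet 1/1/1
 channel-group 1 mode active
 no shutdown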
NOTE: If your deployment requires Fibre Channel, configure the Fibre Channel universal ports.
NOTE: The Fibre Channel port speed must be explicitly configured. Auto negotiation is not currently supported.
On the MX9116n FSE, port-group 1/1/15 and 1/1/16 are universal ports capable of connecting to FC devices at various speeds
depending on the optic being used. In this example, we are configuring the universal port speed as 4x32G FC. To enable FC
capabilities, perform the following steps on each MX9116n FSE.
5. Click the port-group 1/1/16 check box, then click Configure breakout.
6. In the Configure breakout panel, select 4X32GFC as the breakout type used in this example.
NOTE: With OME-M 1.20.10 and earlier, you must set the Breakout Type to HardwareDefault first and then set the
desired configuration. If the desired breakout type is selected before setting HardwareDefault, an error occurs.
7. Click Finish.
NOTE: When enabling Fibre Channel ports, they are set administratively down by default. Select the ports and click the
Toggle Admin State button. Click Finish to administratively set the ports to up.
NOTE: The MX9116n supports FC speeds of 8G, 16G, and 32G FC.
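In Full Switch mode, the universal port breakout can also be set from the OS10 CLI. A minimal sketch, assuming port-group 1/1/16 and the 4x 32G FC breakout used in this example:

OS10# configure terminal
OS10(config)# port-group 1/1/16
! Break the universal port group out as four 32G FC interfaces
OS10(conf-pg-1/1/16)# mode FC 32g-4x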
NOTE: If your deployment requires Fibre Channel, create Fibre Channel uplinks for FCoE.
NOTE: The steps in this section allow you to connect to an existing FC switch using NPG mode, or directly attach an
FC storage array. The uplink type is the only setting within the MX7000 chassis that distinguishes between the two
configurations.
To create uplinks, perform the following steps.
NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.
NOTE: Make sure that the MTU is set on the internal Ethernet ports carrying FCoE. If the MTU is not set, configure it by selecting the port under Port Information and choosing Configure MTU. Enter an MTU size between 2500 and 9216 bytes.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
For the examples shown in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode on page
198 and Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach on page 202, the uplink attributes are
defined in the following table.
NOTE: Do not assign the same FCoE VLAN to both switches. They must be kept separate.
NOTE: If your environment has fewer than 256 VLANs, this support does not need to be enabled.
9. To verify that the VLAN scale profile has been enabled, access the CLI of a switch that is part of the fabric and run the show
running-configuration command. If successful, the entry scale-profile vlan is listed in the configuration.
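A quick way to check is to filter the running configuration, sketched here with the OS10 CLI output filter:

OS10# show running-configuration | grep scale-profile
scale-profile vlan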
For example, in the MX scenario that is mentioned in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy
Gateway mode on page 198, when an uplink is set as FC gateway, UFD associates the set of downstream interfaces which are
part of the corresponding FCoE VLAN into a UFD group. In this scenario, the VLANs are VLAN 30 and VLAN 40 on each switch
respectively. The downstream interfaces are the ones connected to the MX740c compute sleds.
In SmartFabric mode with an FC uplink failure situation, where all FC uplink ports are down (for example, removing the fibre
channel transceiver from the switch), the switch operationally disables the downstream interfaces which belong to that UFD
group AND have the FCoE VLAN provisioned to them. A server that does not have an impacted FCoE VLAN is not disturbed.
Once the downstream ports are set operationally down, the traffic on these server ports is stopped, giving the operating system
the ability to fail traffic over to the other path. In a scenario with MX9116n FSEs, a maximum of eight FC ports can be part of an
FC Gateway uplink.
UFD limits the disruption by shutting down only the corresponding downstream compute sled ports, which allows the compute sleds to fail over to an alternate path. Bring up at least one FC port that is part of the FC Gateway uplink so that the FCoE traffic can transition through another FC port on the NIC or an IOM in the fabric. Remove FCoE VLANs from Ethernet-only downstream ports to avoid an impact on Ethernet traffic.
NOTE: In SmartFabric mode, one uplink-state-group is created and is enabled by default. In Full Switch mode, up to 16
uplink-state groups can be created, the same as any SmartFabric OS10 switch. By default, no uplink-state groups are
created in Full Switch mode. Physical port channels can be assigned to an uplink-state group.
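In Full Switch mode, a UFD group is configured from the OS10 CLI. A minimal sketch, assuming downstream server ports ethernet1/1/1-1/1/8 and an upstream port-channel1 (both placeholders), following the same structure as the UFD blocks later in this guide:

uplink-state-group 2
 name "UFD_Group_2"
 downstream ethernet1/1/1-1/1/8
 upstream port-channel1
 enable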
To include uplinks into a UFD group in SmartFabric mode, perform the following steps.
Steps
1. Access the OME-M console.
2. Select Devices > Fabric and choose the created fabric.
3. A UFD group can be included in two ways. If uplinks have not been created, select Add Uplink, and then enter the Name, Description, and Uplink type.
4. Select the Include in Uplink Failure Detection Group check box.
Server preparation
The examples in this guide reference the Dell PowerEdge MX740c compute sled with QLogic QL41262 Converged Network
Adapters (CNA) installed. CNAs are required to achieve FCoE connectivity. Use the steps below to prepare each CNA by setting
them to factory defaults (if required) and configuring NIC partitioning (NPAR) if needed. Not every implementation requires
NPAR.
NOTE: iDRAC steps in this section may vary depending on hardware, software, and browser versions used. See the
documentation for your Dell server for instructions on connecting to the iDRAC.
NOTE: In SmartFabric mode, you must use a template to deploy a server and to configure networking.
A server template contains parameters that are extracted from a server and allows these parameters to be quickly applied to
multiple compute sleds. A server template contains all server settings for a specific deployment type including BIOS, iDRAC,
RAID, NIC/CNA, and so on. The template is captured from a reference server and can then be deployed to multiple servers
simultaneously. The server template allows an administrator to associate VLANs to compute sleds.
The templates contain settings for the following categories:
● Local access configuration
● Location configuration
● Power configuration
● Chassis network configuration
● Slot configuration
● Setup configuration
To create a server template, perform the following steps.
1. Open the OME-M console.
2. From the navigation menu, click Configuration, then click Templates.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
3. From the center panel, click Create Template, then click From Reference Device to open the Create Template window.
A job starts and the new server template displays on the list. When complete, the Completed successfully status displays.
The following figure shows the associated networks for the server template with OME-M 1.20.10 and earlier.
Figure 114. Server template network settings - no FCoE with OME-M 1.20.10 and earlier
The following figure shows the associated networks for the server template with FCoE and OME-M 1.30.00 and later.
Figure 116. Server template network settings - FCoE with OME-M 1.30.00 and later
Profile deployment
The PowerEdge MX environment supports profiles with OME-M 1.30.00 and later. OME-M creates and automatically assigns a profile once the server template is deployed successfully. If the server template is not deployed, OME-M allows the user to create server profiles and apply them to a compute sled or slot.
NOTE: The server template cannot be deleted until it is Unassigned from a profile. To unassign server templates from a
profile, see the Unassign a profile section. To delete a profile, see the Delete a profile section.
Create a profile
If the server template is not deployed, OME-M allows the user to create server profiles and apply them to a compute sled or slot.
To create a profile, perform the following steps:
1. Open OME-M console and select Configuration.
2. From the drop-down menu, select Profiles.
3. From the Profiles pane, choose Create.
4. From the Select Template window, choose MX740c with FCoE CNA, and then click Next.
View a profile
You can view a profile and its network details under this option. On the Profiles page, select a profile, click View, and select View Profile. The View Profile wizard is displayed.
View Profile: You can view the Boot to Network ISO, iDRAC Management IP, Target Attribute, and Virtual Identities information that is related to the profile.
View Network: You can view the Bandwidth and VLANs information that is related to the profile.
Edit a profile
The Edit Profile feature allows the user to change the Profile name, Network options, iDRAC management IP, Target attributes, and Unassigned virtual identities. The user can edit the profile characteristics that are unique to the device or slot.
To edit a profile, perform the following steps:
1. From the OME-M console, click Configurations > Profiles and select the profile to be edited.
2. Select Edit > Edit Profile.
Assign a profile
The Assign a profile function allows the user to assign and deploy a profile on target devices.
To assign a profile, perform the following steps:
1. From the OME-M console, click Configurations > Profiles and select a profile to assign.
CAUTION: When you select Enable Schedule, the profile deployment runs at the scheduled time, even if you have already performed a Run Now operation before the schedule. The Deploy Profile job then fails when it runs at the scheduled time, and an error message is displayed.
NOTE: You can only select the profiles that are in an Assigned or Deployed state.
Delete a profile
You can delete profiles that are not running any profile actions and are in the unassigned state. To delete a profile:
1. From the Profiles page, select the profile or profiles that you want to delete and click Delete.
2. Click Yes to confirm the deletion.
The image below shows the content of the Topology tab and the VLTi that the SmartFabric mode created.
Within the Topology tab, you can also view the Wiring Diagram table as shown in the image below.
The Topology view on the Home Screen of the OME-M console shows connections for the quad-port NIC. To access this,
perform the following steps:
a. Access the OME-M Console.
b. Go to Home > View Topology. This will show connections between MX7116n FEMs and MX9116n FSEs, similar to
Two-chassis topology with quad-port NICs – dual fabric on page 34.
NOTE: Make sure that the compute sled iDRAC is at the latest version to ensure an accurate Group Topology view.
5. Once connections are established and validated, access the Port Information on I/O Modules by performing the following
steps:
a. Access the OME-M Console.
b. Go to Devices > I/O Modules.
c. Select an IOM > Hardware > Port Information. This shows two port groups each with eight internal ports.
For example, if Compute Sled 1 is configured with a dual-port NIC, then only one port group with eight ports can be seen
on OME-M. These internal ports are numbered 1/71/1 through 1/71/8. For Compute Sled 1 with a dual-port NIC, port
1/71/1 is Up.
If Compute Sled 1 is configured with a quad-port NIC, then two port groups each with eight ports can be seen on
OME-M. These internal ports are numbered 1/71/1 through 1/71/16. For Compute Sled 1 with a quad-port NIC, ports
1/71/1 and 1/71/9 are Up.
NOTE: Fabric Expander Modules are transparent and therefore do not appear on the Fabric Details page.
Servers lists the compute sleds that are part of the fabric. In this example, two PowerEdge MX740c compute sleds are part of
the fabric.
ISL Links lists the VLT interconnects between the two switches. The ISL links must be connected on port groups 11 and 12 on
MX9116n switches, and ports 9 and 10 on MX5108n switches.
CAUTION: This connection is required. Failure to connect the defined ports results in a fabric validation error.
Edit a SmartFabric
A fabric has four components:
● Uplinks
● Switches
● Servers
● ISL Links
To edit the fabric that is discussed in this section, edit the fabric name and description using the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. On the right, click the Edit button.
Edit uplinks
Perform the following steps to edit uplinks on a SmartFabric:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select the fabric.
4. Select the Uplink to edit and click Edit.
NOTE: In this example, Uplink1 is selected.
5. In the Edit Uplink dialog box, modify the Name and Description as necessary.
NOTE: The uplink type cannot be modified once the fabric is created. If the uplink type must be changed after the fabric is created, delete the uplink and create a new uplink with the desired uplink type.
NOTE: The Include in Uplink Failure Detection Group box under Uplink Type will only be seen on OME-M 1.20.00 and
later.
6. Click Next.
NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.
9. Click Finish.
Edit VLANs
The following sections describe this task for deployed servers with different versions of OME-M.
NOTE: Only one server can be selected at a time in the UI.
Delete SmartFabric
To remove the SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric.
4. Click the Delete button.
5. In the delete fabric dialog box, click Yes.
All participating switches reboot to Full Switch mode.
CAUTION: Any configuration that is not completed by the OME-M console is lost when switching between IOM
operating modes.
Select I/O Module in slot A2 from the first chassis. Power off the IOM from the Power Control drop-down menu.
Step 5: Validation
Perform the following steps to validate the environment.
1. Make sure that all MX9116n FSEs and MX7116n FEMs on both chassis appear in the OME-M UI. Restart the second MX9116n
FSE if you do not see it in the correct chassis.
2. Check the SmartFabric configuration to ensure that nothing has changed.
3. Make sure all internal switch ports on the MX9116n FSE and MX7116n FEMs are enabled and up. Check link lights for the
external ports to make sure that they are illuminated.
NOTE: A new replacement IOM will have a factory default configuration. All port interfaces in the default configuration are
in the no shutdown state.
With OME-M 1.30.00 and later, the Dell PowerEdge MX platform gives you the option to replace I/O modules in SmartFabric mode through the OME-M console in the case of persistent errors or failures. This process can only be performed in OME-M after the SmartFabric is created.
Prerequisites:
● The MX9116n FSE and MX5108n can only be replaced with another I/O Module of the same type. Ensure that you have the
same Dell SmartFabric OS10 version on the switch that is to be replaced, and on the new switch.
● The replacement IOM must be a new device within the chassis deployment. Do not use an IOM that was previously deployed
within the MCM group.
● The other IOM in SmartFabric mode must be up, running, and healthy; otherwise a complete traffic outage may occur.
NOTE: Before beginning this process, you must have a replacement switch module or filler blade available. Never leave
the slot on the blade server chassis open for an extended time period. To maintain proper airflow, fill the slot with either a
replacement switch module or filler blade.
1. Back up the switch module configuration to an FTP or TFTP server using the configUpload command. The
configUpload command uploads the switch module configuration to the server and makes it available for downloading
to the replacement switch module if necessary. To ensure that a complete configuration is available for downloading to a
replacement switch module, back up the configuration regularly.
2. Stop all activities on the ports that the switch module uses. To verify that there is no activity, view the switch module LEDs.
3. Disconnect all cables from the SFP+/QSFP ports and remove the SFP+ or QSFP optical transceivers from the switch
module external ports.
4. Press the Release latch and gently pull the release lever out from the switch module.
5. Slide the switch module out of the I/O module bay and set it aside.
6. Insert the replacement switch module in the I/O module bay of the blade server chassis.
NOTE: Complete this step within 60 seconds.
7. Insert the SFP+ or QSFP optical transceivers.
8. Reconnect the cables and establish a connection to the blade server management module.
NOTE: The password must be 8 to 32 characters long and must be a combination of an uppercase letter, a lowercase letter, a number, and a special character (+, &, ?, >, -, }, |, ., !, (, ', ,, _, [, ", @, #, ), *, ;, $, ], /, §, %, =, <, :, {, I).
10. In the Ethernet Switch Backup Instructions, select the check box to confirm the Ethernet switch backup settings.
NOTE: Chassis backup does not include Ethernet switch settings such as the hostname, password, management network, spanning tree configurations, IOMs that are in Full Switch mode, and some CLI configurations. For the list of CLI configurations that are not supported, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
11. Click Finish. Click Learn More to see more information about the Ethernet switch backup instructions.
A message is displayed indicating that the backup is successful, and the chassis Overview page is displayed.
You can check the status and details of the backup process on the Monitoring > Jobs page.
NOTE: Backup and restore operations cannot be performed when you have initiated any job and the job status is
in-progress.
Sensitive data
This option allows you to include passwords while taking the backup.
If you do not select the Include Password option, passwords for the following components are not included.
NOTE: During the SmartFabric restore, all the IOMs are converted to the operating mode recorded in the backup file.
NOTE: All the IOMs that go through the fabric restore are reloaded. The IOMs are reloaded twice if the operating mode in the backup file differs from the current mode of the IOM.
6. Enter the Network Share Address, and Network Share Filepath where the backup file is stored.
7. Enter the name of the Backup File and Extension.
NOTE: If the restore operation is done excluding passwords or on a different chassis with proxy settings, the proxy
dependent tasks like the repository task try to connect to the external share. Rerun the tasks after configuring the
proxy password.
There are several options to copy files from the IOM to a remote server through many protocols. These options can be found in
the Dell SmartFabric OS10 User Guide.
The following example shows a mismatch of the vPC domain IDs on vPC peer switches. To resolve this issue, ensure that a
single vPC domain is used across the vPC peers.
OS10# show fc
alias Show FC alias
ns Show FC NS Switch parameters
statistics Show FC Switch parameters
switch Show FC Switch parameters
zone Show FC Zone
zoneset Show fc zoneset
● Use the following commands to verify FCoE configurations from MX9116n CLI:
The show vfabric command output provides various information including the default zone mode, the active zone set, and
interfaces that are members of the vfabric.
The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.
NOTE: Due to the width of the command output, each line of output is shown on two lines below.
NOTE: For more information about FC and FCoE, find the relevant version of the Dell SmartFabric OS10 User Guide in the
OME-M and OS10 compatibility and documentation table.
NOTE: Once the rebalance is complete, the following syslog message is generated on the MX console.
show discovered-expanders
The show discovered-expanders command is only available on the MX9116n FSE. It displays the MX7116n FEMs attached to the MX9116n FSE, including the FEM service tag, the associated port group, and the virtual slot.
show unit-provision
The show unit-provision command is only available on the MX9116n FSE and displays the unit ID, the provision name, and
the discovered name of the MX7116n FEM that is attached to the MX9116n FSE.
Alternately, the iDRAC MAC information can be obtained from the System Information on the iDRAC Dashboard page.
When viewing the LLDP neighbors, both the iDRAC MAC address and the NIC MAC address of the respective mezzanine card are shown.
In the example deployment validation of LLDP neighbors, Ethernet1/1/1, ethernet 1/1/3, and ethernet
1/1/71-1/1/72 represent the two MX740c sleds in one chassis. The first entry is the iDRAC for the compute sled. The
iDRAC uses connectivity to the mezzanine card to advertise LLDP information. The second entry is the mezzanine card itself.
Ethernet 1/71/1 and ethernet 1/71/2 represent the MX740c compute sleds connected to the MX7116n FEM in the
other chassis.
The Ethernet range ethernet1/1/37-1/1/40 contains the VLTi interfaces for the SmartFabric. Lastly, ethernet1/1/41-1/1/42 are the links in a port channel that is connected to the Dell Networking S5232-ON leaf switches.
show policy-map
Using the service policy from show qos system, the show policy-map type qos PM_VLAN command displays QoS policy
details including associated class maps, for example, CM10, and QoS queue settings, qos-group 2.
show class-map
The show class-map command displays details for all the configured class-maps. For example, the association between CM10
and VLAN 10 is shown.
An example of a QoS group, its related queue, and its weight is shown here.
Admin Parameters :
------------------
Admin is enabled
Remote Parameters :
-------------------
Remote is enabled
PG-grp Priority# Bandwidth TSA
------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS
4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP
4 Input Conf TLV Pkts, 55 Output Conf TLV Pkts, 2 Error Conf TLV Pkts
0 Input Reco TLV Pkts, 55 Output Reco TLV Pkts, 0 Error Reco TLV Pkts
With OME-M 1.20.10 and earlier, you must set the Breakout Type to HardwareDefault first and then set the desired
configuration as shown in the figure below.
If the recommended order of steps is not followed, you may encounter the following errors:
Once the uplinks are added, they are most often associated with tagged or untagged VLANs. When attempting to configure the breakout on the uplink port-groups after adding uplinks associated with VLANs to the fabric, the following error displays:
After the SmartFabric is created, you may see the following error: Warning: Unable to validate the fabric because the
design link ICL-1_REVERSE not connected as per design and Unable to validate the fabric because the design link
ICL-1_FORWARD not connected as per design.
There are two common reasons why you may receive this error:
● QSFP28 cables are being used between MX9116n switches instead of QSFP28-DD cables.
● The VLTi cables are not connected to the correct physical ports.
An example is shown below. To see the warning message, go to the OME-M UI, click Devices > Fabric, and choose View Details next to the warning. You can view the details of the warning message by choosing the SmartFabric that was created and clicking Topology. The warnings are displayed in the Validation Errors section.
Figure 181. Warning for VLTi connections using QSFP28 100 GbE cables
This occurs because the VLTi connections between the two MX9116n FSEs are using QSFP28 cables instead of QSFP28-DD cables. Make sure QSFP28-DD cables are connected between port groups 11 and 12 (ports 1/1/37 through 1/1/40) on both FSEs for the VLTi connections.
The following example shows interface ethernet 1/2 with auto negotiation enabled on the interface:
Verify LACP
The interface status of the upstream switches can provide valuable information for the link being down. The following example
shows interfaces 1 and 3 on upstream Cisco Nexus switches as members of port channel 1:
Checking interface 1 reveals that the ports are not receiving the LACP PDUs as shown in the following example:
NOTE: On Dell PowerSwitch devices, use the show interface status command to view the interfaces and associated status information. Use the show interface ethernet <interface number> command to view the interface details.
In the following example, the errors listed above occurred because an uplink was not created on the fabric.
Figure 186. Fabric topology with uplinks and QSFP28 100 GbE VLTi connection
The resolution is to add one or more uplinks and verify that the fabric turns healthy.
The following example shows that the STP on the upstream switches, Cisco Nexus 3232C, is configured to run MST:
The recommended course of action is to change the STP type to RPVST+ on the upstream Cisco Nexus switches.
Another course of action in the above case can be to change the spanning tree type on the MX switches operating in
SmartFabric mode to match the STP type on the upstream switches. Make the change using the SmartFabric OS10 CLI. The
options available for the type of STP are as follows:
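As a minimal sketch (verify the full option list against the relevant OS10 User Guide), SmartFabric OS10 supports the rstp, rapid-pvst, and mst spanning-tree modes. For example, to match an upstream MST domain:

OS10# configure terminal
! Change the spanning-tree mode to match the upstream switches
OS10(config)# spanning-tree mode mst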
Issue: Not able to set QoS on a compute sled connected to an MX9116n FSE or MX5108n.
Scenario: An MX9116n FSE or MX5108n IOM is connected to an MX740c compute sled with an Intel XXV710 Ethernet controller, and the IOMs are connected to upstream switches. Running the show lldp dcbx interface ethernet <node/slot/port> pfc detail command shows that the remote willing status is disabled on server-facing ports:
OS10# show lldp dcbx interface ethernet 1/1/1 pfc detail
Interface ethernet1/1/15
Admin mode is on
Admin is enabled, Priority list is 4,5,6,7
Remote is enabled, Priority list is 4,5,6,7
Remote Willing Status is disabled
(Output Truncated)
The NIC on the server that is attached to the switch is not configured to receive DCBx or any QoS configurations, which is what causes the issue.
Resolution: By default, the MX9116n FSE and MX5108n IOMs support the DCBx protocol and can be used to push their QoS configuration to the server NIC. The NIC must be configured to accept these QoS settings from the switch by setting its Remote Willing Status to Enable. In Full Switch mode, the user can configure DCBx as described in the Dell SmartFabric OS10 User Guide; find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table. In SmartFabric mode, the DCBx configuration is tied to the FCoE uplink and is enabled only after an FCoE uplink is configured on the switch. Once the DCBx configuration is applied on the switch side, it is pushed to the remote end, and the remote end must accept the configuration with Remote Willing Status Enabled.

Issue: Removing the management VLAN tag under Edit Uplinks removes the management VLAN.
Scenario: To reproduce the scenario with MX IOMs connected to upstream switches:
1. Create the management VLAN.
2. After creating the SmartFabric and adding uplinks, the VLANs can be edited from the Edit Uplinks page.
3. Go to OME-M Console > Devices > Fabric > Select a fabric > Select uplink > Edit.
4. Click Next to access the Edit Uplink page.
5. Click Add Network and add the management VLAN.
6. Tag the management VLAN. The UI accepts the change, but there is no change in the device. Access the CLI to confirm.
7. Remove the tag on the management VLAN; this in turn deletes the management VLAN as well.
Resolution: In Full Switch mode, the user can create a VLAN, enable it, and define it as a management VLAN in global configuration mode on the switch. For more information on configuring VLANs in Full Switch mode, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table. In SmartFabric mode, management VLAN 4020 is created by default. Make sure not to add the management VLAN through Add Network or remove the tag on the management VLAN; doing so removes the management VLAN itself.
----------------------------------------------------------
CLUSTER DOMAIN ID : 50
VIP : fde1:53ba:e9a0:de14:0:5eff:fe00:150
ROLE : MASTER
SERVICE-TAG : CBJXLN2
NOTE: New features may not appear in the MSM UI until the master is upgraded to the version that supports the new features. The example above shows how the show smartfabric cluster command determines which I/O module is the master and which has the backup role.
--------------------------------------------------------------------------------
FCoE A1 STORAGE_FCOE PLATINUM 998
VLAN1 GENERAL_PURPOSE BRONZE 1
FCoE A2 STORAGE_FCOE PLATINUM 999
VLAN10 GENERAL_PURPOSE SILVER 10
UPLINK VLAN GENERAL_PURPOSE SILVER 2491
----------------------------------------------------------
Nic-Id : NIC.Mezzanine.1A-1-1
Switch-Interface : 8XRJ0T2:ethernet1/1/3
Fabric : SF (abdeec7f-3a83-483a-929e-aa102429ae86)
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
NicBonded : FALSE
Native-vlan : 1
Static-onboard-interface:
Networks : 30, 1611
Opaque-id : 53f953f5-91ae-4009-b457-ef0f531cdc15
Upgrade Protocol : PUSH
Upgrade start time : 2021-02-11 14:48:51.595000
Status : INPROGRESS
Nodes to Upgrade : FD59H13
Reboot Sequence : FD59H13
NOTE: See QSFP28 double density connectors on page 228 for more information about the QSFP28-DD cables.
Configure SmartFabric
Perform the following steps to configure SmartFabric:
NOTE: For information related to the same scenario using the legacy Ethernet uplink with Spanning Tree Protocol, see
Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink on page 189.
There are four steps to configure the S5232F-ON upstream switches:
1. Set the switch hostname and management IP address, and set the spanning-tree mode to RSTP.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the MX switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.
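The command block for this step is summarized in the following minimal sketch, assuming placeholder names and addresses:

Leaf 1:
configure terminal
hostname Leaf1
spanning-tree mode rstp
interface mgmt1/1/1
 no ip address dhcp
 ip address 100.67.XX.XX/24
 no shutdown
management route 0.0.0.0/0 100.67.XX.1

Leaf 2 is configured the same way with its own hostname and management address.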
Configure the VLT between switches using the following commands. VLT configuration involves setting a discovery interface
range and discovering the VLT peer in the VLTi. The vlt-domain command configures the peer leaf-2 switch as a back-up
destination.
Leaf 1:
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29-1/1/31

Leaf 2:
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31
Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG. Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured to allow VLAN 10. Disable spanning tree on the port channels and apply the commands related to Ethernet – No Spanning Tree uplinks, as sketched below.
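A minimal sketch of the downstream port channel on each leaf, assuming port-channel 1 and member ports ethernet 1/1/1-1/1/2 (both placeholders), using the per-port settings from Table 21:

interface port-channel 1
 switchport mode trunk
 switchport trunk allowed vlan 10
 spanning-tree bpdu guard enable
 spanning-tree guard root
 spanning-tree port type edge
interface range ethernet 1/1/1-1/1/2
 channel-group 1 mode active
 no shutdown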
On both switches:
end
write memory
show vlt
The show vlt command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in the
VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.
NOTE: See the QSFP28 double density connectors on page 228 for more information about the QSFP28-DD cables.
Configure SmartFabric
Perform the following steps to configure SmartFabric:
1. Physically cable the MX9116n FSE to the Cisco Nexus upstream switch. Make sure that chassis are in a Multi-Chassis
Management group. For instructions, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
2. Define VLANs to use in the Fabric. For instructions, see Define VLANs on page 91.
3. Create the SmartFabric as per instructions in Create the SmartFabric on page 93.
NOTE: For information related to the same scenario using the legacy Ethernet uplink with Spanning Tree Protocol, see
Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink on page 193.
There are four steps to configure the 3232C upstream switches:
1. Set the switch hostname and management IP address, and enable the vPC, LLDP, LACP, and interface-vlan features.
2. Configure vPC between the switches.
3. Configure the VLANs.
4. Configure the downstream port channels to connect to the MX switches.
Enter the following commands to set the hostname and enable the required features. Configure the management interface and default gateway. Also run the global Spanning Tree Protocol settings commands mentioned in the following.
NOTE: The MX IOMs run Rapid per-VLAN Spanning Tree Plus (RPVST+) by default. Cisco Nexus switches run RSTP by default. Ensure that the Dell and non-Dell switches are both configured to use RSTP. For the Ethernet – No Spanning Tree uplinks from the MX9116n FSE to the Cisco Nexus switches, spanning tree must be disabled on the Cisco Nexus ports.
Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive destination to the peer
switch management IP. Then create a port channel for the vPC peer link and assign the appropriate switchport interfaces.
Configure the required VLANs on each switch. In this deployment example, the tagged VLAN used is VLAN 10 and the untagged VLAN used is VLAN 1. Disable spanning tree on the VLANs.
Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs. Then, exit
configuration mode and save the configuration. Disable spanning tree on the port channel connected to MX9116n FSE.
On both switches:
end
copy running-configuration startup-configuration
NOTE: If the connections to the MX switches do not come up, see SmartFabric Troubleshooting on page 162 for
troubleshooting steps.
Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent across trunk ports to all the switches, even if those switches do not have an associated VLAN. This consumes network bandwidth with unnecessary traffic. VLAN or VTP pruning is a feature that can be used to eliminate this unnecessary traffic by pruning the VLANs.
Pruning restricts the flooded traffic to only those trunk ports with associated VLANs to optimize the usage of network
bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the Cisco upstream switches
are configured appropriately. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for additional information.
NOTE: Do not use switchport trunk allow vlan all on the Cisco interfaces. The VLANs must be explicitly
assigned to the interface.
Configuration validation
This section covers the validation of the Cisco Nexus 3232C leaf switches. For information about the Dell Networking MX
switch validation commands, see Common CLI troubleshooting commands for Full Switch and SmartFabric modes on page 157.
show vpc
The show vpc command validates the vPC configuration status. The peer adjacency should be OK, and the peer should show as alive. The end of the command output shows which VLANs are active across the vPC.
vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
255 Po1 up success success 1,10
NOTE: See the Supported cables and optical connectors on page 223 for more information about the QSFP28-DD cables.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
There are four steps to configure the S5232F-ON upstream switches:
1. Set the switch hostname and management IP address.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the MX switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.
NOTE: Use the spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440 in increments of 4096. For example, to
make S5232F-ON Leaf 1 as the root bridge for VLAN 10, enter the command spanning-tree vlan 10 priority 4096.
Configure the VLT between switches using the following commands. VLT configuration involves setting a discovery interface range and discovering the VLT peer in the VLTi. The vlt-domain command configures the peer leaf-2 switch as a backup destination.

Leaf 1:
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29-1/1/31

Leaf 2:
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31
Configure the required VLANs on each switch. In this deployment example, the VLAN used is VLAN 10.
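A minimal sketch of the VLAN configuration on each leaf, assuming only the VLAN 10 interface needs to be created and enabled:

interface vlan 10
 no shutdown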
Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG.
Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured to allow VLAN 10.
On both switches:
end
write memory
show vlt
The show vlt command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in the
VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.
Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 0 AUTO No
VLAN 10
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 32778, Address 4c76.25e8.e840
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32778, Address 4c76.25e8.f2c0
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 5
Flush Indication threshold 0 (MAC flush optimization is disabled)
Interface Designated
Name PortID Prio Cost Sts Cost Bridge ID PortID
--------------------------------------------------------------------------------
port-channel1 128.2517 128 50 FWD 1 32768 2004.0f00
Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 1 AUTO No
NOTE: See Supported cables and optical connectors on page 223 for more information about the QSFP28-DD cables.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
There are four steps to configure the 3232C upstream switches:
1. Set the switch hostname and management IP address, and enable the required features and spanning tree.
2. Configure vPC between the switches.
3. Configure the VLANs.
4. Configure the downstream port channels to connect to the MX switches.
Enter the following commands to set the hostname, enable required features, and enable RPVST spanning tree mode. Configure
the management interface and default gateway.
Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive destination to the peer
switch management IP. Then create a port channel for the vPC peer link and assign the appropriate switchport interfaces.
On both switches:
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown
Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs. Then, exit
configuration mode and save the configuration.
On both switches:
end
copy running-configuration startup-configuration
NOTE: If the connections to the MX switches do not come up, see SmartFabric Troubleshooting on page 162 for
troubleshooting steps.
Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent across trunk ports to all the switches, even if those switches do not have an associated VLAN. This consumes network bandwidth with unnecessary traffic. VLAN or VTP pruning is a feature that can be used to eliminate this unnecessary traffic by pruning the VLANs.
Pruning restricts the flooded traffic to only those trunk ports with associated VLANs to optimize the usage of network
bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the Cisco upstream switches
are configured appropriately. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for additional information.
NOTE: Do not use switchport trunk allow vlan all on the Cisco interfaces. The VLANs must be explicitly
assigned to the interface.
Configuration validation
This section covers the validation of the Cisco Nexus 3232C leaf switches. For information about the Dell Networking MX
switch validation commands, see Common CLI troubleshooting commands for Full Switch and SmartFabric modes on page 157.
vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
255 Po1 up success success 1,10
Figure: MX7000 chassis 1 and chassis 2 connected through FC switches (Spine 1 and Spine 2) to FC SAN A and FC SAN B, with storage Controller A and Controller B.
SmartFabric mode
This scenario shows attachment to an existing FC switch infrastructure. Configuration of the existing FC switches is beyond the
scope of this document.
NOTE: The MX5108n Ethernet Switch does not support this feature.
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation on page 91.
To configure NPG mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the MX9116n FSE to the FC SAN.
CAUTION: Ensure that the cables do not criss-cross between the switches.
Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. Define FCoE VLANs to use in the fabric. For instructions, see Define VLANs on page 91 for information about defining the
VLANs.
3. If necessary, create the Identity Pools. See Create identity pools on page 111 for more information about how to create
identity pools.
4. Configure the physical switch ports for FC operation. See Configure Fibre Channel universal ports on page 101 for
instructions.
5. Create the FC Gateway uplinks. See Create Fibre Channel uplinks on page 101 for steps on creating uplinks.
6. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for setting up storage logical unit
numbers (LUNs).
NOTE: For information related to use cases and configuring Ethernet – No Spanning Tree uplink with different tagged and
untagged VLANs, see Ethernet – No Spanning Tree uplink on page 84.
NOTE: With MX9116n FSEs in NPG mode, connecting to more than one SAN is possible only in Full Switch mode, by creating multiple vFabrics, each with its own NPG gateway. However, an individual server can only connect to one vFabric at a time, so one server cannot see both SANs.
NOTE: The MX5108n Ethernet Switch does not support this feature.
To configure MX IOMs in Full Switch mode, follow the steps below using the CLI:
1. Verify that the MX9116n FSE is in Full Switch mode by running the show switch-operating-mode command.
2. Connect the MX9116n FSE to the FC SAN.
CAUTION: Ensure that the cables do not criss-cross between the switches.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for setting up storage logical unit
numbers (LUNs).
NOTE: With MX9116n FSEs in NPG mode, connecting to more than one SAN is possible only in Full Switch mode, by creating multiple vFabrics, each with its own NPG gateway. However, an individual server can only connect to one vFabric at a time, so one server cannot see both SANs.
Configure global switch settings
Run the following commands to configure the switch hostname, OOB management IP address, and OOB management default
gateway.
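The commands are summarized in the following minimal sketch for MX9116-B1, assuming placeholder addresses; MX9116-B2 is configured the same way with its own hostname and management address:

configure terminal
hostname MX9116-B1
interface mgmt1/1/1
 no ip address dhcp
 ip address 100.67.XX.XX/24
 no shutdown
management route 0.0.0.0/0 100.67.XX.1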
Configure VLTi
Configure VLTi on ports 37 through 40 on the MX9116n FSE. This establishes the connection between the two MX IOMs.
MX9116-B1:
vlt-domain 1
backup destination 100.67.YY.YY

MX9116-B2:
vlt-domain 1
backup destination 100.67.XX.XX
Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.
On both MX9116n IOMs:
uplink-state-group 1
name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/44:1-1/1/44:2
enable
Configuration validation
show fcoe sessions
The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.
NOTE: Due to the width of the command output, each line of output is shown on two lines below.
show vfabric
The show vfabric command output provides various information including the default zone mode, the active zone set, and
interfaces that are members of the vfabric.
show fc switch
The show fc switch command verifies the switch mode (for example, F_Port) for FC traffic.
Figure 193. Fibre Channel (F_Port) Direct Attach to Dell PowerStore 1000T
SmartFabric mode
This example shows directly attaching a Dell PowerStore 1000T storage array to the MX9116n FSE using universal ports 44:1 and
44:2.
NOTE: The MX5108n Ethernet Switch does not support this feature.
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation on page 91.
To configure FC Direct Attach mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the storage array to the MX9116n FSE. Each storage controller is connected to each MX9116n FSE.
● Define FCoE VLANs to use in the fabric. For instructions, see Define VLANs on page 91.
● Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. If necessary, create Identity Pools. See the Create identity pools on page 111 section for more information about how to
create identity pools.
3. Configure the physical switch ports for FC operation. See the Configure Fibre Channel universal ports on page 101 section
for instructions.
4. Create the FC Direct Attached uplinks. For more information about creating uplinks, see the Create Fibre Channel uplinks on
page 101 section.
5. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
6. Configure zones and zone sets. See the Managing Fibre Channel Zones on MX9116n FSE on page 65 section for instructions.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for how to create host groups and map volumes to the target host.
NOTE: The configuration of FC Zones through the CLI is supported while using SmartFabric mode.
NOTE: For information related to use cases and configuring Ethernet - No Spanning Tree uplink with different tagged and
untagged VLANs, see the Ethernet – No Spanning Tree uplink on page 84 section.
NOTE: The MX5108n Ethernet Switch does not support this feature.
1. Verify the MX9116n FSE is in Full Switch mode by running show switch-operating-mode command.
2. Connect the MX9116n FSE to the FC SAN.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for how to create host groups and map volumes to the target host.
Configure global switch settings
Configure the switch hostname, OOB management IP address, and OOB management default gateway.
Configure VLTi
Configure VLTi on Ports 37 through 40 on MX9116n FSE. This establishes connection between two MX IOMs.
MX9116-B1:
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet 1/1/37-1/1/40
peer-routing

MX9116-B2:
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/37-1/1/40
peer-routing
Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.
On both MX9116n IOMs:
uplink-state-group 1
name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/44:1-1/1/44:2
enable
To configure the Fibre Channel zoning on MX IOMs, see the Managing Fibre Channel zones on MX9116n FSE section.
NOTE: Due to the width of the command output, each line of output is shown on two lines below.
show vfabric
show fc switch
The show fc switch command verifies the switch mode (for example, F_Port) for FC traffic.
NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.
The FSB switch can connect to an upstream switch operating in NPG mode:
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
Figure 194. FCoE (FSB) Network to Dell PowerStore 1000T through NPG mode switch
Figure 195. FCoE (FSB) Network to Dell PowerStore 1000T through F_Port mode switch
NOTE: See the Dell SmartFabric OS10 User Guide for configuring FSB mode globally on the Dell Networking S4148U-ON
switches. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
SmartFabric mode
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation on page 91.
To configure FCoE mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the MX switch to the S4148U.
CAUTION: Verify that the cables do not criss-cross between the switches.
Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. Define FCoE VLANs to use in the fabric. For instructions, see the Define VLANs on page 91 section for more information
about defining the VLANs.
3. If necessary, create Identity Pools. See the Create identity pools on page 111 for more information.
4. Create the FCoE uplinks. See the Create Fibre Channel uplinks on page 101 section for more information about creating
uplinks.
5. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
6. Configure the S4148U switch. See the Dell Networking Fibre Channel Deployment with S4148U-ON in F_port Mode
knowledge base article for more information.
Once the server operating system loads the FCoE driver, the WWN displays on the fabric and on the FC SAN. The system
is now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 to create host groups and map
volumes to the target host.
To validate the configuration, use the same commands that are mentioned in SmartFabric Deployment Validation on page 121.
On both MX5108n IOMs:
interface breakout 1/1/11 map 10g-4x
Configure VLTi
Configure VLTi on ports 9 and 10 on the MX5108n Ethernet switches. By default, port 9 operates at 40 GbE. Configure breakout on port 10 from 1x 100 GbE to 1x 40 GbE.

MX5108-A1:
interface breakout 1/1/10 map 40g-1x
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet 1/1/9-1/1/10
peer-routing

MX5108-A2:
interface breakout 1/1/10 map 40g-1x
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/9-1/1/10
peer-routing
NOTE: This command is mandatory for FSB cascading, port-pinning, and standalone FSB.
VLAN configuration
For each IOM, define the VLANs.
MX5108-A1 and MX5108-A2:
no shutdown

MX5108-A1 and MX5108-A2:
service-policy output type queuing ETS
qos-map traffic-class TC-Q
no shutdown
Configure UFD
Configuring Uplink Failure Detection (UFD) is recommended on all server-facing and upstream interfaces.
MX5108-A1 and MX5108-A2:
uplink-state-group 1
name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/11:1-1/1/11:2
enable
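To confirm that UFD is tracking the uplinks, the group status can be displayed. A minimal sketch, assuming the
uplink-state-group numbering above:
MX5108-A1# show uplink-state-group 1 detail
When the upstream Fibre Channel interfaces go down, UFD disables the downstream server-facing ports so that the servers
fail over to the other fabric.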
The figure below shows the example topology that is used in this chapter to demonstrate Boot from SAN. The steps to
configure NIC partitioning, the system BIOS, an FCoE LUN, and an OS install media device for Boot from SAN are provided.
1. Connect to the server’s iDRAC in a web browser and launch the virtual console.
2. In the virtual console, from the Virtual Media menu, select Connect Virtual Media.
3. In the virtual console, from the Virtual Media menu, select Map CD/DVD.
4. Click Browse to find the location of the operating system install media then click Map Device.
5. In the virtual console, from the Next Boot menu, select Lifecycle Controller.
6. Reboot the server.
NOTE: For VMware ESXi, see the Dell customized media instructions provided on the Dell Technologies Support website.
After the next reboot, the switch loads with default configuration settings.
System setup
Connect the Management port on the chassis management module to the network to download an image.
Before installation, verify that the system is connected correctly. To connect and access the I/O module on the MX chassis,
see the Connect to IO Module console port using RACADM section. You can also SSH directly to the IOM IP address if one
has been assigned through the management module.
Install OS10
For an ONIE-enabled switch, go to the ONIE boot menu. An ONIE-enabled switch boots with preloaded diagnostics (DIAGs) and
ONIE software.
+-------------------------------+
|*ONIE: Install OS |
| ONIE: Rescue |
| ONIE: Uninstall OS |
| ONIE: Update ONIE |
| ONIE: Embed ONIE |
| ONIE: Diag ONIE |
+-------------------------------+
Install OS: Boots to the ONIE prompt and installs an OS10 image using the Automatic Discovery process. When ONIE
installs a new OS image, the previously installed image and OS10 configuration are deleted.
Rescue: Boots to the ONIE prompt and enables manual installation of an OS10 image or ONIE update.
Uninstall OS: Deletes the contents of all disk partitions, including the OS10 configuration, except ONIE and diagnostics.
Update ONIE: Installs a new ONIE version.
EDA DIAG: Runs the system diagnostics.
After the ONIE process installs an OS10 image and you reboot the switch in ONIE: Install OS mode (default), ONIE takes
ownership of the system and remains in Install mode until an OS10 image successfully installs again. To boot the switch from
ONIE for any reason other than installation, select the ONIE: Rescue or ONIE: Update ONIE option from the ONIE boot menu.
The OS10 installer image creates several partitions. After the installation is complete, the switch automatically reboots and loads
an OS10 active image. The other image becomes the standby image. Both the Active and Standby images are of the same
version.
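To confirm the image partitions after the reboot, the boot information can be displayed. A minimal sketch, assuming the
standard OS10 CLI:
OS10# show boot detail
The output lists the active and standby partitions and the OS10 version installed in each.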
NOTE: During an automatic or manual OS10 installation, if an error condition occurs that results in an unsuccessful
installation, perform Uninstall OS first to clear the partitions if there is an existing OS on the device. If the problem
persists, contact Dell Technologies Technical Support.
Manual installation
If you do not use the ONIE-based automatic installation of an OS10 image and if a DHCP server is not available, you can
manually install the image. Configure the Management port and the software image file to start the installation.
5. Enter the onie-nos-install image_url command to install the software on the device.
NOTE: The installation command accesses the OS10 software from the specified SCP, TFTP, or FTP URL, creates
partitions, verifies installation, and reboots itself.
The following is an example of the installation command:
NOTE: a.b.c.d represents the location to download the image file from, and x.x.xx represents the version number
of the software to install.
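Following the placeholders described in the note above, a sketch of the command might look like this (the FTP path and
image filename are illustrative, not an exact package name):
ONIE:/ # onie-nos-install ftp://a.b.c.d/PKGS_OS10-Enterprise-x.x.xx.bin
After the download completes, ONIE creates the partitions, installs the image, and reboots the switch into OS10.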
Automatic installation
You can automatically install an OS10 image on a Dell ONIE-enabled device. This process is known as zero-touch install. After
the device boots to ONIE: Install OS, ONIE auto-discovery follows these steps to locate the installer file and uses the first
successful method:
1. Use a statically configured path that is passed from the boot loader.
2. Search file systems on locally attached devices, such as USB.
3. Search the exact URLs from a DHCPv4 server.
4. Search the inexact URLs based on the DHCP responses.
5. Search IPv6 neighbors.
6. Start a TFTP waterfall.
The ONIE automatic discovery process locates the stored software image, downloads and installs it, and reboots the device with
the new image. Automatic discovery repeats until a successful software image installation occurs and reboots the switch.
If a DHCPv4 server is used, ONIE auto-discovery obtains the hostname, domain name, Management interface IP address,
and the IP address of the domain name server (DNS) from the DHCP server and DHCP options. It also searches SCP, FTP, or
TFTP servers with the default DNS of the ONIE server. DHCP options are not used to provide the server IP.
If a USB storage device is used, ONIE searches only FAT or EXT2 file systems for an OS10 image.
Upgrade
To upgrade the firmware on the MXG610s FC switch, perform the following steps:
1. Validate the current Fabric OS version and other build information by running the version command on the MXG610s IOM.
NOTE: The FTP protocol is being deprecated starting with Fabric OS version 9.0.1a. Uploads or downloads using FTP
may not be supported. For release notes and MXG610s software, contact Dell Technical Support.
3. Enter the firmwaredownload command to download the firmware. You will need to provide the Server Name or IP
address, File name, Username, Network Protocol (1-auto-select, 2-FTP, 3-SCP, 4-SFTP) to be used, and the password.
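As a sketch of the interactive session (the server address, path, and credentials below are illustrative, and the prompts
may vary slightly between Fabric OS releases):
MXG610:admin> firmwaredownload
Server Name or IP Address: 100.67.x.x
User Name: fosuser
File Name: /firmware/v9.x.x
Network Protocol (1-auto-select, 2-FTP, 3-SCP, 4-SFTP) [1]: 4
Password: ********
Once the download completes and the switch activates the new firmware, run firmwareshow to confirm that both partitions
are at the expected version.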
Downgrade
To downgrade the firmware on the MXG610s, perform the following steps:
NOTE: Once upgraded to Gen 7, you cannot downgrade to a Fabric OS lower than Fabric OS 9.0.0.
1. Enter the firmwareshow command to validate the current firmware on the switch.
To validate that the transceivers are supported and working correctly, use the sfpshow command. It displays port,
transceiver, and speed information.
The switchshow command displays switch hostname, switch type, online status, switch role, and all other switch-related
information, as shown in the figure below.
The fabricshow command displays Switch ID, Worldwide Name, and Management IP address of the switch.
NOTE: Additional information about supported cables and optics can be found in the PowerEdge MX IO Guide.
The following table shows the different optical connectors and a brief description of the standard.
The following table shows the model of IOM where each type of media is relevant.
Each type of media has a specific use case regarding the MX7000, and each type of media supports various applications. The
following sections outline where in the chassis each type of media is relevant.
NOTE: See the Dell Networking Transceivers and Cables document for more information about supported optics and
cables.
SFP+/SFP28
As seen in the preceding table, SFP+ is a 10 GbE transceiver and SFP28 is a 25 GbE transceiver, both of which can use either
fiber or copper media to achieve 10 GbE or 25 GbE communication in each direction. While the MX5108n has four 10GBase-T
copper interfaces, the focus is on optical connectors.
The SFP+ media type is typically seen in the PowerEdge MX7000 using the 25 GbE Pass-Through Module (PTM) and using
breakout cables from the QSFP+ and QSFP28 ports. The following are supported on the PowerEdge MX7000:
● Direct Attach Copper (DAC)
● LC fiber optic cable with SFP+ transceivers
The use of SFP+/SFP28, as it relates to QSFP+ and QSFP28, is discussed in those sections.
NOTE: The endpoints of the connection need to be set to 10 GbE if SFP+ media is being used.
The preceding figures show examples of SFP+ cables and transceivers. Also, the SFP+ form factor can be seen referenced in
the QSFP+ and QSFP28 sections using breakout cables.
QSFP+
QSFP+ is a 40 Gb standard that uses either fiber or copper media to achieve communication in each direction. This standard
has four individual 10-Gb lanes that can be used together to achieve 40 GbE throughput or separately as four individual 10 GbE
connections (using breakout connections). One variant of the Dell QSFP+ transceiver is shown in the following figure.
The QSFP+ media type has several uses in the MX7000. While the MX9116n does not have interfaces that are dedicated to
QSFP+, ports 41 through 44 can be broken out to 1x 40 GbE, which enables QSFP+ media to be used in those ports. The
MX5108n has one dedicated QSFP+ port and two QSFP28 ports that can be configured for 1x 40 GbE.
The MX7000 also supports the use of QSFP+ to SFP+ breakout cables. This offers the ability to use a QSFP+ port and connect
to four SFP+ ports on the terminating end.
The following figures show the DAC and MPO cables, which are two variations of breakout cables. The MPO cable in this
example attaches to one QSFP+ transceiver and four SFP+ transceivers.
Figure 222. QSFP+ to SFP+ breakout cables: Multi-fiber Push On (MPO) breakout cable
NOTE: The MPO breakout cable uses a QSFP+ transceiver on one end and four SFP+ transceivers on the terminating end.
QSFP28
QSFP28 is a 100 Gb standard that uses either fiber or copper media to achieve communication in each direction. The QSFP28
transceiver has four individual 25-Gb lanes which can be used together to achieve 100 GbE throughput or separately as four
individual 25 GbE connections (using four SFP28 modules). One variant of the Dell QSFP28 transceiver is shown in the following
figure.
There are three variations of cables for QSFP28 connections. The variations are shown in the following figures.
NOTE: The QSFP28 form factor can use the same MPO cable as QSFP+. The DAC and AOC cables are different in that the
attached transceiver is a QSFP28 transceiver rather than QSFP+.
QSFP28 supports the following breakout configurations:
● 1x 40 Gb with QSFP+ connections, using either a DAC, AOC, or MPO cable and transceiver.
● 2x 50 Gb with a fully populated QSFP28 end and two depopulated QSFP28 ends, each with 2x 25 GbE lanes. This product is
only available as DAC cables.
● 4x 25 Gb with a QSFP28 connection and using four SFP28 connections, using either a DAC, AOC, or MPO breakout cable
with associated transceivers.
● 4x 10 Gb with a QSFP28 connection and using four SFP+ connections, using either a DAC, AOC, or MPO breakout cable with
associated transceivers.
QSFP28-DD cables and optics build on the current QSFP28 naming convention. For example, the current 100 GbE short range
transceiver has the following description:
Q28-100G-SR4: Dell Networking Transceiver, 100GbE QSFP28, SR4, MPO12, MMF
The equivalent QSFP28-DD description is easily identifiable:
Q28DD-200G-2SR4: Dell Networking Transceiver, 2x100GbE QSFP28-DD, 2SR4, MPO12-DD, MMF
The following table lists all other supported IOM slot configurations. These configurations either use a single IOM or place
dual MX9116n FSEs in each chassis. In either configuration, redundancy is not a requirement.
General settings
NOTE: The MX I/O Modules run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs RSTP on each
VLAN while RSTP runs a single instance of spanning tree across the default VLAN. The Dell PowerSwitch S4148U-ON
used in this example runs SmartFabric OS10 and has RPVST+ enabled by default. See the Spanning Tree Protocol
recommendations in the Dell SmartFabric OS10 User Guide for more information. Find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
NOTE: Use the spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440 in increments of 4096. The switch with the
lowest bridge priority becomes the STP root.
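For example, to make an upstream S4148U-ON the root for the deployment VLANs (the hostname and VLAN IDs follow the
example in this chapter; the priority value is one possible choice):
S4148U-Leaf1(config)# spanning-tree vlan 30 priority 4096
S4148U-Leaf1(config)# spanning-tree vlan 40 priority 4096
Any value lower than the MX IOM priorities works; 0 would also guarantee root ownership.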
Configure VLANs
Run the commands in this section to configure VLANs. In this deployment example, the VLANs used are VLAN 30 and
VLAN 40. Set the MTU to 9216 bytes.
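A minimal sketch of the VLAN configuration, assuming standard OS10 interface vlan syntax and the VLAN IDs and MTU above:
interface vlan 30
 mtu 9216
 no shutdown
interface vlan 40
 mtu 9216
 no shutdown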
Configure QoS
Configure the class map and policy map, and define the QoS parameters. In this example, queue 3 is defined as the output
queue in the policy map, and the bandwidth is set to 50%. Configure the QoS parameters as shown in the following example.
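A minimal sketch of such a configuration, assuming OS10 queuing class-map and policy-map syntax (the class and policy
names are illustrative); the policy is then applied to the relevant interfaces with service-policy output type queuing:
class-map type queuing Q3
 match queue 3
policy-map type queuing PM-OUT
 class Q3
  bandwidth percent 50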
On both switches, exit configuration mode and save the configuration:
end
write memory
Create a host
Perform the following steps to create a host.
1. Connect to the PowerStore 1000T UI in a web browser and log in using the required credentials.
2. Click on Compute and select the Hosts & Host Groups option.
3. Click Add Host. Enter a host name and select the Operating System. Click Next.
4. Select Protocol Type. In this example, Fibre Channel is selected. Click Next.
5. The initiator WWPNs are discovered automatically. Select the Initiator Identifier WWPN and click Next.
6. Review selections on the Summary page and click Add Host to create the host as shown in the following figure.
The host is displayed on the Compute > Hosts & Host Groups page.
2. Enter the name of the host group, select Protocol Type, and select the appropriate host to add.
3. Click Create.
Additional hosts may be added to the same host group as needed by clicking the (+ Add Host) button on the Host Groups
page.
Create volumes
Perform the following steps to create the volumes under Volume Groups.
1. Once the volume group is created, click ADD VOLUMES.
2. Select Add New Volumes.
3. Enter the name of the volume. Select the desired quantity and size of each volume. In this example, a quantity of two
volumes, each 10 GB in size, is selected. Leave the other options as default.
4. Click Next as shown in the following figure.
5. Select the appropriate host group to map the volumes to. Leave the other options as default, as shown in the following figure.
NOTE: To modify Volume name or size, click Storage > Volumes > Select Volume, then Modify to make changes.
Two WWNs are listed for each port. The World Wide Node Name (WWNN), outlined in black, identifies the PowerStore
1000T Node storage array. The WWPNs, outlined in blue, identify the individual ports associated with the corresponding
array node.
Record the WWPNs as shown in the following table:
4. The first FCoE partition is Port 1, Partition 2. Click the (+) icon to view the MAC Addresses as shown in the following figure.
Figure 240. MAC address and FCoE WWPN for CNA port 1
NOTE: A convenient method is to copy and paste these values into a text file.
More detail about each of these devices is provided in the following sections.
For detailed information about hardware components related to the MX platform, see Software and firmware versions used on
page 245.
Dell PowerSwitch
Table 34. Dell PowerSwitch switches and OS versions – Scenarios 1 through 4
Qty Item Software version
2 Dell PowerSwitch S5232F-ON leaf switches 10.5.4.0
1 Dell PowerSwitch S3048-ON OOB management switch 10.5.4.0
Scenarios 5 through 8
The tables in this section include the hardware components and supported software and firmware versions for Scenario 5
through Scenario 8 in this document.
Table 43. Dell PowerEdge MX740c compute sled details - Scenarios 5 through 8
Qty per sled Item Firmware version
1 QLogic QL41262HMKR (25 G) mezzanine CNA 15.35.06
2 Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz -
12 16 GB DDR4 DIMMs (192 GB total) -
3 600 GB SAS HDD -
- BIOS 2.15.1
- iDRAC with Lifecycle Controller 5.10.50.00
● Dell OpenManage Enterprise-Modular Edition for PowerEdge MX7000 Chassis Release Notes
OS10 documentation:
● Dell SmartFabric OS10 User Guide
● Dell SmartFabric OS10 Release Notes
NOTE: To access the OS10 Release Notes, you must log in to your Dell Digital Locker account.