Dell PowerEdge MX Networking

Deployment Guide
H18548.7

Abstract
This document provides an overview of the architecture, features, and functionality of
the Dell PowerEdge MX networking infrastructure, including the steps for configuring
and troubleshooting the PowerEdge MX networking switches in Full Switch and
SmartFabric modes.

Dell Technologies Solutions

August 2022
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents

Chapter 1: Dell PowerEdge MX Platform Overview........................................................................ 9


Dell Technologies Demo Center....................................................................................................................................... 9
Dell PowerEdge MX models and components..............................................................................................................9
Introduction.....................................................................................................................................................................9
Hardware........................................................................................................................................................................10
Dell PowerEdge MX7000 - front.................................................................................................................................... 11
Dell PowerEdge MX7000 - rear..................................................................................................................................... 15
PowerEdge MX compute slot to I/O slot mapping...................................................................................................24
OpenManage Enterprise - Modular Edition............................................................................... 26
Introduction...................................................................................................................................................................26
PowerEdge MX initial deployment ......................................................................................................................... 26

Chapter 2: PowerEdge MX Scalable Fabric Architecture.............................................................. 27


Scalable Fabric Architecture...........................................................................................................................................27
Complex Scalable Fabric topologies............................................................................................................................. 29
Quad-port Ethernet NICs................................................................................................................................................30
Interfaces and port groups............................................................................................................................................. 36
Recommended port order for MX7116n FEM connectivity......................................................................................41
Embedded top-of-rack switching..................................................................................................................................42
MX Chassis management wiring....................................................................................................................................43

Chapter 3: Dell SmartFabric OS10............................................................................................... 46


Operating modes............................................................................................................................................................... 46
Full Switch mode......................................................................................................................................................... 46
SmartFabric mode....................................................................................................................................................... 47
Changing operating modes..............................................................................................................................................47
VLAN restrictions.............................................................................................................................................................. 49
LLDP for iDRAC................................................................................................................................................................. 49
Virtual Link Trunking.........................................................................................................................................................50
Storage networking.......................................................................................................................................................... 50
NPIV Proxy Gateway..................................................................................................................................................50
Direct attached (F_Port)........................................................................................................................................... 51
FCoE Transit or FIP Snooping Bridge..................................................................................................................... 51
iSCSI............................................................................................................................................................................... 52
NVMe/TCP................................................................................................................................................................... 53
Host FCoE session load balancing................................................................................................................................ 53
OS10 version 10.5.2.4 or later.................................................................................................................................. 53
OS10 version 10.5.1.9 and earlier............................................................................................................................. 54
PowerEdge MX IOM operations.................................................................................................................................... 54
Switch Management page overview...................................................................................................................... 54
Switch Overview......................................................................................................................................................... 55
Hardware tab................................................................................................................................................................56
View port status.......................................................................................................................................................... 57
Firmware tab................................................................................................ 59
Upgrading Dell SmartFabric OS10........................................................................................................................... 59
Alerts tab....................................................................................................................................................................... 60
Settings tab................................................................................................................................................................... 61
OS10 privileged accounts................................................................................................................................................ 62
NIC teaming guidelines.................................................................................................................................................... 63

Chapter 4: Full Switch Mode....................................................................................................... 65


VLAN scaling guidelines for Full Switch mode........................................................................................................... 65
Managing Fibre Channel Zones on MX9116n FSE..................................................................................................... 65
Configure FC aliases for server and storage adapter WWPNs........................................................................66
Create FC zones..........................................................................................................................................................66
Create zone set............................................................................................................................................................67
Activate zone set.........................................................................................................................................................67
Full Switch mode IO module replacement process................................................................................................... 67
VLAN stacking................................................................................................................................................................... 68

Chapter 5: Overview of SmartFabric Services for PowerEdge MX................................................73


Functional overview.......................................................................................................................................................... 73
OS10 operating mode differences................................................................................................................................. 73
CLI commands available in SmartFabric mode........................................................................................................... 74
IOM slot placement in SmartFabric mode................................................................................................................... 75
Two MX9116n Fabric Switching Engines in different chassis...........................................................................75
Two MX5108n Ethernet switches in the same chassis......................................................................................75
Two MX9116n Fabric Switching Engines in the same chassis.......................................................................... 76
Switch-to-switch (VLTi) cabling................................................................................................................................... 76
VLT backup link............................................................................................................................................................ 76
Configuring port speed and breakout........................................................................................................................... 77
VLAN scaling guidelines................................................................................................................................................... 78
Maximum Transmission Unit behavior..........................................................................................................................79
Layer 2 Multicast, IGMP, and MLD snooping............................................................................................................. 79
IGMP snooping............................................................................................................................................................. 79
MLD snooping.............................................................................................................................................................. 80
Configuring L2 Multicast in SmartFabric mode................................................................................................... 80
Validation........................................................................................................................................................................81
Upstream network requirements...................................................................................................................................82
Physical connectivity.................................................................................................................................................. 82
Other restrictions and guidelines...................................................................................................................................83
Ethernet – No Spanning Tree uplink............................................................................................................................ 84
Spanning Tree Protocol - legacy Ethernet uplink......................................................................................................86
Networks and automated QoS.......................................................................................................................................86
Server templates, profiles, virtual identities, networks, and deployment............................................................88
Templates...................................................................................................................................................................... 88
Profiles........................................................................................................................................................................... 88
Virtual identities and identity pools......................................................................................................................... 88
Deployment................................................................................................................................................................... 89
VMware vCenter integration - OpenManage Network Integration...................................................................... 89
OpenManage Integration for VMware vCenter......................................................................................................... 90

Chapter 6: SmartFabric Creation................................................................................................. 91

Steps to create a SmartFabric....................................................................................................................................... 91
Physically cable PowerEdge MX chassis and upstream switches......................................................................... 91
Define VLANs......................................................................................................................................................................91
Define VLANs for FCoE............................................................................................................................................. 92
Create the SmartFabric................................................................................................................................................... 93
Optional steps.................................................................................................................................................................... 94
Forward error correction........................................................................................................................................... 94
Configure uplink port speed or breakout............................................................................................................... 96
Configure Ethernet ports...........................................................................................................................................97
Create Ethernet – No Spanning Tree uplink.............................................................................................................. 98
Ethernet – No Spanning Tree upstream switch configuration............................................................................ 100
Optional - Configure Fibre Channel............................................................................................................................. 101
Configure Fibre Channel universal ports...............................................................................................................101
Create Fibre Channel uplinks................................................................................................................................... 101
Enable support for larger VLAN counts..................................................................................................................... 102
Uplink failure detection.................................................................................................................................................. 105
Verifying UFD configuration....................................................................................................................................108
Configuring the upstream switch and connecting uplink cables..........................................................................108

Chapter 7: Server Deployment................................................................................................... 109


Deploying a server........................................................................................................................................................... 109
Server preparation.......................................................................................................................................................... 109
Create a server template...............................................................................................................................................109
Create identity pools........................................................................................................................................................ 111
Associate server template with networks – no FCoE............................................................................................. 112
Associate server template with networks - with FCoE.......................................................................................... 113
Deploy a server template................................................................................................................................................115
Profile deployment........................................................................................................................................................... 116

Chapter 8: SmartFabric Deployment Validation.......................................................................... 121


Validate the SmartFabric health................................................................................................................................... 121
Validation of quad-port NIC topologies...................................................................................................................... 122
Validate with OME-M............................................................................................................................................... 122
Validation through switch CLI.................................................................................................................................125
Validating Ethernet - No Spanning Tree uplinks...................................................................................................... 125
Upstream switch validation - SmartFabric OS10............................................................................................... 126
Upstream switch validation - Cisco.......................................................................................................................128

Chapter 9: SmartFabric Operations............................................................................................ 131


Viewing SmartFabric health and status...................................................................................................................... 131
Edit a SmartFabric...........................................................................................................................................................132
Edit uplinks........................................................................................................................................................................ 133
Edit VLANs........................................................................................................................................................................ 134
Edit VLANs on deployed servers with OME-M 1.20.00 and later.................................................................. 134
Edit VLANs on a deployed server with OME-M 1.10.20 and earlier.............................................. 136
Delete SmartFabric..........................................................................................................................................................137
Connect non-MX Ethernet devices to a SmartFabric............................................................................................ 137
Expanding from a single-chassis to dual-chassis configuration........................................................................... 138
Step 1: Cable Management module....................................................................................... 138
Step 2: Create Multichassis Management Group.............................................................................................. 138
Step 3: Add second MX Chassis to the MCM Group....................................................................................... 138
Step 4: Move MX9116n FSE from first chassis to second chassis................................................................ 139
Step 5: Validation.......................................................................................................................................................140
SmartFabric mode IOM replacement process.......................................................................................................... 140
MXG610 Fibre Channel switch module replacement process...............................................................................143
Chassis Backup and Restore.........................................................................................................................................143
Backing up the chassis............................................................................................................................................. 144
Restoring chassis....................................................................................................................................................... 147
Manual backup of IOM configuration through the CLI..................................................................................... 149

Chapter 10: General Troubleshooting......................................................................................... 150


View or extract logs using OME-M.............................................................................................................................150
Troubleshooting MCM topology errors...................................................................................................................... 150
Troubleshooting VLT and vPC configuration on upstream switches.................................................................. 151
Troubleshooting FEM and compute sled discovery................................................................................................ 152
Troubleshooting FC and FCoE..................................................................................................................................... 152
Rebalancing FC and FCoE sessions............................................................................................................................ 154
Common CLI troubleshooting commands for Full Switch and SmartFabric modes........................................ 157

Chapter 11: SmartFabric Troubleshooting...................................................................................162


Troubleshooting SmartFabric issues........................................................................................................................... 162
Troubleshoot port group breakout errors.................................................................................................................. 162
Troubleshooting VLTi between switches...................................................................................................................166
Troubleshooting uplink errors....................................................................................................................................... 167
Troubleshooting legacy Ethernet uplink with STP.................................................................................................. 169
Troubleshooting common issues.................................................................................................................................. 170
SmartFabric Services troubleshooting commands.................................................................................................. 172

Chapter 12: Configuration Scenarios.......................................................................................... 178


Scenario 1: SmartFabric deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree uplink...................................................................................179
Configure SmartFabric............................................................................................................................................. 179
Dell PowerSwitch S5232F-ON configuration..................................................................................................... 180
Dell PowerSwitch S5232F-ON validation............................................................... 181
Scenario 2: SmartFabric connected to Cisco Nexus 3232C switches with Ethernet - No Spanning Tree uplink..................................................................................................... 183
Configure SmartFabric............................................................................................................................................. 183
Cisco Nexus 3232C switch configuration............................................................................................................ 184
Configuration validation........................................................................................................................................... 186
Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink...... 189
Dell PowerSwitch S5232F-ON configuration..................................................................................................... 190
Dell PowerSwitch S5232F-ON validation............................................................................................................. 191
Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink........... 193
Cisco Nexus 3232C switch configuration............................................................................................................193
Configuration validation........................................................................................................................................... 195
Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode.......................198
Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach......................................... 202
Scenario 7: Connect MX5108n to Fibre Channel storage - FSB......................................................................... 207
Scenario 8: Configure boot from SAN.........................................................................................211
Configure NIC boot device...................................................................................................................................... 212
Configure BIOS settings...........................................................................................................................................214
Connect FCoE LUN................................................................................................................................................... 214
Set up and install media connection......................................................................................................................214
Use Lifecycle Controller to set up operating system driver for media installation.................................... 214

Appendix A: Additional Tasks..................................................................................................... 216


Reset SmartFabric OS10 switch to factory defaults.............................................................................................. 216
Reset Cisco Nexus 3232C to factory defaults......................................................................................................... 216
Connect to IO Module console port using RACADM.............................................................................................. 216
MX I/O module OS10 installation using ONIE........................................................................................................... 217
Manual installation..................................................................................................................................................... 217
Automatic installation................................................................................................................................................218
MXG610s FC switch upgrade/downgrade................................................................................. 219
MXG610s switch details validation............................................................................................................................. 220

Appendix B: Additional Information............................................................................................222


PTM port mapping..........................................................................................................................................................222
Supported cables and optical connectors.................................................................................................................223
PowerEdge MX IOM slot support matrix.................................................................................................................. 229

Appendix C: Dell PowerSwitch S4148U-ON Configuration in Scenario 7..................................... 232


Switch configuration commands.................................................................................................................................232

Appendix D: Dell PowerStore 1000T...........................................................................................235


About Dell PowerStore 1000T..................................................................................................................................... 235
Configure PowerStore 1000T FC storage................................................................................................................ 235
Create a host............................................................................................................................................................. 235
Create host groups and add hosts........................................................................................................................236
Create volume groups..............................................................................................................................................238
Create volumes..........................................................................................................................................................238
Determine PowerStore 1000T storage array FC WWPNs.................................................................................... 240
Determine CNA FCoE port WWPNs........................................................................................................................... 241

Appendix E: Hardware and Version Information......................................................................... 243


Hardware used in this guide......................................................................................................................................... 243
Dell PowerSwitch S3048-ON.................................................................................................................................243
Dell PowerSwitch S5232F-ON.............................................................................................................................. 243
Dell PowerSwitch S4148U-ON...............................................................................................................................244
Dell PowerSwitch Z9264F-ON.............................................................................................................................. 244
Dell PowerStore 1000T............................................................................................................................................ 244
Cisco Nexus 3232C.................................................................................................................................................. 245
Software and firmware versions used....................................................................................................................... 245
Scenarios 1 through 4.............................................................................................................................................. 245
Scenarios 5 through 8..............................................................................................................................................246

Appendix F: References............................................................................................................. 248


Dell Technologies documentation............................................................................................................................... 248
OME-M and OS10 compatibility and documentation....................................................... 248
Dell Technologies Networking Infrastructure Solutions documentation..................................................... 249
Support and feedback................................................................................................................................................... 249

Chapter 1: Dell PowerEdge MX Platform Overview
Dell Technologies Demo Center
The Dell Technologies Demo Center is a highly scalable, cloud-based service that provides 24/7 self-service access to virtual
labs, hardware labs, and interactive product simulations. Several interactive demos are available on the Demo Center for
PowerEdge MX platform deployments. Go to Dell Technologies Interactive Demo: OpenManage Enterprise Modular for MX
solution management to quickly become familiar with deploying MX Networks.

Dell PowerEdge MX models and components

Introduction
The vision of Dell Technologies is to be the essential technology company from the edge, to the core, and to the cloud. Dell
Technologies ensures modernization for today's applications and the emerging cloud-native world. Dell Networking is committed
to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of choice for
networking operating systems and top-tier merchant silicon. The Dell Technologies strategy enables business transformations
that maximize the benefits of collaborative software and standards-based hardware, including lowered costs, flexibility, freedom,
and security. Dell Technologies provides further customer enablement through validated deployment guides that demonstrate
these benefits while maintaining a high standard of quality, consistency, and support.
The Dell PowerEdge MX platform is a unified, high-performance data center infrastructure. It provides the agility, resiliency,
and efficiency to optimize a wide variety of traditional and new, emerging data center workloads and applications. With its
kinetic architecture and agile management, PowerEdge MX dynamically configures compute, storage, and fabric; increases team
effectiveness; and accelerates operations. The responsive design delivers the innovation and longevity that customers need for
their IT and digital business transformations.
As part of the PowerEdge MX platform, the Dell SmartFabric OS10 network operating system includes SmartFabric Services
(SFS), a network automation and orchestration solution that is fully integrated with the MX platform.
NOTE: This guide may contain language that is not consistent with Dell's current guidelines. Dell plans to update this guide
over subsequent releases to revise the language accordingly.

Figure 1. PowerEdge MX7000 chassis

Hardware
This section contains information about the hardware and options available in the Dell PowerEdge MX7000. The section is
divided into two parts:
● The front of the MX7000 chassis, containing compute and storage sleds
● The back of the MX7000 chassis, containing networking, storage, and management components

Dell PowerEdge MX7000 - front

Overview
The following figure shows the front view of the Dell PowerEdge MX7000 chassis. The left side of the chassis can have one of
three control panel options:
● LED status light panel
● Touch screen LCD panel
● Touch screen LCD panel equipped with Dell PowerEdge iDRAC Quick Sync 2
The bottom of the figure shows six hot-pluggable, redundant, 3,000-watt power supplies. Above the power supplies are eight
single-width slots that support compute and storage sleds. In the example below, the slots contain:
● Four Dell PowerEdge MX740c sleds in slots one through four
● One Dell PowerEdge MX840c sled in slots five and six
● Two Dell PowerEdge MX5016s sleds in slots seven and eight

Figure 2. PowerEdge MX7000 – front

Dell PowerEdge MX740c and MX750c compute sled
The Dell PowerEdge MX740c and MX750c are two-socket, full-height, single-width compute sleds that offer impressive
performance and scalability. The MX740c and MX750c are ideal for dense virtualization environments and can serve as a
foundation for collaborative workloads. An MX7000 chassis can support up to eight MX740c and MX750c sleds. Mixing compute
sleds of different generations is supported.
Key features include:
● Single-width slot design
● One or two CPU sockets
● 24 (MX740c) or 32 (MX750c) DIMM slots of DDR4 memory
● Boot options include BOSS-S1 or Internal Dual SD Modules (IDSDM)
● Up to six SAS/SATA SSD/HDD and NVMe PCIe SSDs
● Two PCIe mezzanine card slots for connecting to network Fabric A and B
● One PCIe mini-mezzanine card slot for connecting to storage Fabric C
● iDRAC9 with Lifecycle Controller

Figure 3. Dell PowerEdge MX740c sled with six 2.5-inch SAS drives

Dell PowerEdge MX840c compute sled
The Dell PowerEdge MX840c is a powerful four-socket, full-height, double-width server that features dense compute,
exceptionally large memory capacity, and a highly expandable storage subsystem. It is the ultimate scale-up server that excels
at running a wide range of database applications, substantial virtualization, and software-defined storage environments. The
MX7000 chassis supports up to four MX840c compute sleds.
Key features of the MX840c include:
● Dual-width slot design
● Four CPU sockets
● 48 DIMM slots of DDR4 memory
● Boot options include BOSS-S1 or IDSDM
● Up to eight SAS/SATA SSD/HDD and NVMe PCIe SSDs
● Four PCIe mezzanine card slots for connecting to network Fabric A and B
● Two PCIe mini-mezzanine card slots for connecting to storage Fabric C
● iDRAC9 with Lifecycle Controller

Figure 4. PowerEdge MX840c sled with eight 2.5-inch SAS drives

Dell PowerEdge MX5016s storage sled
The Dell PowerEdge MX5016s storage sled delivers scale-out, direct attached storage within the PowerEdge MX architecture.
The MX5016s provides customizable 12 Gb/s direct-attached SAS storage with up to 16 SAS HDDs/SSDs. The MX740c,
MX750c, and the MX840c compute sleds can share drives with the MX5016s using the dedicated PowerEdge MX5000s SAS
switch. Internal server drives may be combined with up to seven MX5016s sleds on one chassis for extensive scalability. The
MX7000 chassis supports up to seven MX5016s storage sleds.

Figure 5. Dell PowerEdge MX5016s sled with the drive bay extended

Dell PowerEdge MX7000 - rear

Overview
The Dell PowerEdge MX7000 includes three I/O fabrics and the Management Modules. Fabrics A and B are for Ethernet and
future I/O module connectivity, and Fabric C is for SAS and Fibre Channel (FC) connectivity. Each fabric provides two slots
for redundancy. Management Modules contain the chassis intelligence, which oversees and orchestrates the operations of the
various components. The following example figure shows the rear of the PowerEdge MX7000 chassis. From top to bottom, the
chassis is configured with:
● One Dell Networking MX9116n Fabric Switching Engine (FSE) installed in fabric slot A1
● One Dell Networking MX7116n Fabric Expander Module (FEM) installed in fabric slot A2
● Two Dell Networking MX5108n Ethernet switches installed in fabric slots B1 and B2
● Two Dell Networking MXG610s Fibre Channel switches installed in fabric slots C1 and C2
● Two Dell PowerEdge MX9002m management modules installed in management slots MM1 and MM2

Figure 6. Dell PowerEdge MX7000 – rear

Dell PowerEdge MX9002m management module
The Dell PowerEdge MX9002m management module controls the overall chassis power and cooling, and hosts the OpenManage
Enterprise Modular (OME-M) console. Two external Gigabit Ethernet ports are provided to enable management connectivity and to
connect additional MX7000 chassis in a single logical chassis. The MX7000 chassis supports two MX9002m modules for
redundancy. The following figure shows a single MX9002m module and its components.

Figure 7. Dell PowerEdge MX9002m module

The following MX9002m module components are labeled in the figure:


1. Handle release
2. Gigabit Ethernet port 1
3. Gigabit Ethernet port 2
4. ID button and health status LED
5. Power status LED
6. Micro-B USB serial port

Dell PowerEdge MX9116n Fabric Switching Engine
The Dell PowerEdge MX9116n Fabric Switching Engine (FSE) is a scalable, high-performance, low latency 25 Gbps Ethernet
switch, purpose-built for the PowerEdge MX platform. The MX9116n FSE provides enhanced capabilities and cost-effectiveness
for enterprise, mid-market, Tier 2 cloud, and NFV service providers with demanding compute and storage traffic environments.
The MX9116n FSE provides:
● Sixteen internal 25 GbE server facing ports, ports 1 through 16, connected to compute sleds
● Twelve QSFP28 Double Density (DD) ports for fabric expansion and uplinks, ports 17 through 40 (these ports can operate as 2x 100 GbE, 2x 40 GbE, 8x 25 GbE, or 8x 10 GbE)
● Two 100 GbE QSFP28 ports, used for Ethernet uplinks, ports 41 and 42
● Two 100 GbE QSFP28 unified ports, used for Ethernet and Fibre Channel connections, ports 43 and 44
For more information about port-mapping and virtual ports, see Interfaces and port groups on page 36.
The two standard QSFP28 ports can be used for Ethernet uplinks. The QSFP28 unified ports can support Ethernet or native
Fibre Channel connectivity, supporting both NPIV Proxy Gateway (NPG) and direct attach FC capabilities.
The twelve QSFP28-DD ports provide additional uplinks, VLTi links, and connections to rack servers at 10 GbE or 25 GbE using
breakout cables. The QSFP28-DD ports also provide fabric expansion connections for up to nine additional MX7000 chassis
using the MX7116n Fabric Expander Module. The MX7000 chassis supports up to four MX9116n FSEs in Fabric A, Fabric B,
or both. See the PowerEdge MX IOM slot support matrix on page 229 for more information about supported slot configurations
and the PowerEdge MX I/O Guide for more information about cable selection.
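
For quick reference, the port numbering described above can be expressed in a few lines of Python. This is an illustrative sketch based only on the port ranges listed in this section; the function name is hypothetical and is not part of any Dell tool.

def mx9116n_port_role(port):
    """Classify an MX9116n FSE front-panel port by role, per the ranges above."""
    if 1 <= port <= 16:
        return "internal 25 GbE server-facing port"
    if 17 <= port <= 40:
        return "QSFP28-DD fabric expansion/uplink port"
    if port in (41, 42):
        return "QSFP28 100 GbE Ethernet uplink port"
    if port in (43, 44):
        return "QSFP28 unified port (Ethernet or Fibre Channel)"
    raise ValueError("MX9116n FSE front-panel ports are numbered 1 through 44")

print(mx9116n_port_role(43))  # QSFP28 unified port (Ethernet or Fibre Channel)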

Figure 8. MX9116n FSE

The following MX9116n FSE components are labeled in the figure:


1. Express service tag
2. Storage USB port
3. Micro-B USB console port
4. Power and indicator LEDs
5. Handle release
6. Two QSFP28 ports
7. Two QSFP28 unified ports
8. Twelve QSFP28-DD ports

The following table shows the port mapping example for internal and external interfaces on the MX9116n FSE. The MX9116n
FSE maps dual-port mezzanine cards to odd-numbered ports. The MX7116n FEM, connected to the MX9116n FSE, maps to
sequential virtual ports with each port representing a compute sled attached to the MX7116n FEM.

Table 1. Port-mapping example for Fabric A


MX7000 slot MX9116n FSE ports MX7116n FEM virtual ports
1 Ethernet 1/1/1 Ethernet 1/71/1
2 Ethernet 1/1/3 Ethernet 1/71/2
3 Ethernet 1/1/5 Ethernet 1/71/3
4 Ethernet 1/1/7 Ethernet 1/71/4
5 Ethernet 1/1/9 Ethernet 1/71/5
6 Ethernet 1/1/11 Ethernet 1/71/6
7 Ethernet 1/1/13 Ethernet 1/71/7
8 Ethernet 1/1/15 Ethernet 1/71/8
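
The mapping in Table 1 follows a simple pattern that can be computed directly, as in the Python sketch below. It assumes the Fabric A example above, where the MX9116n FSE ports appear under 1/1 and the MX7116n FEM virtual ports appear under 1/71; the function is hypothetical and shown only to illustrate the pattern.

def fabric_a_port_mapping(slot):
    """Return (FSE port, FEM virtual port) for an MX7000 compute slot,
    reproducing the Table 1 pattern: dual-port mezzanine cards land on
    odd-numbered FSE ports, while FEM virtual ports are sequential."""
    if not 1 <= slot <= 8:
        raise ValueError("MX7000 compute slots are numbered 1 through 8")
    fse_port = "Ethernet 1/1/{}".format(2 * slot - 1)   # 1, 3, 5, ..., 15
    fem_virtual_port = "Ethernet 1/71/{}".format(slot)  # 1, 2, 3, ..., 8
    return fse_port, fem_virtual_port

print(fabric_a_port_mapping(5))  # ('Ethernet 1/1/9', 'Ethernet 1/71/5')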

Dell Networking MX7116n Fabric Expander Module


The Dell Networking MX7116n Fabric Expander Module (FEM) acts as an Ethernet repeater, taking signals from an attached
compute sled and repeating them to the associated lane on the external QSFP28-DD connector. The MX7116n FEM provides
16 internal server-facing ports and two QSFP28-DD interfaces, each supporting up to eight 25 Gbps connections.
There is no operating system or switching ASIC on the MX7116n FEM, so it rarely requires an upgrade. There is also no
management or user interface, making the MX7116n FEM almost maintenance-free. The MX7000 chassis supports up to four
MX7116n FEMs in Fabric A, Fabric B, or both. See PowerEdge MX IOM slot support matrix on page 229 for more information
about supported slot configurations, and the PowerEdge MX I/O Guide for more information about cable selection.

Figure 9. MX7116n FEM

The following MX7116n FEM components are labeled in the figure:


1. Express service tag
2. Supported optic LED
3. Power and indicator LEDs
4. Module insertion and removal latch
5. Two QSFP28-DD fabric expander ports
The following figure shows how the MX7116n FEM can act as a pass-through module. The figure shows the port breakout used
to connect to ToR switches (using SFP+, SFP28, QSFP+, or QSFP28 connections) and the internal connections to compute sleds
with dual-port mezzanine cards. When connecting to QSFP+ or QSFP28 interfaces, the interface must be configured as
4x 10 GbE or 4x 25 GbE, respectively.
NOTE: For an MX7116n FEM acting as a pass-through module, only Dell ToR switches are supported for external
connections.

Figure 10. Ethernet MX7116n-FEM mezzanine mapping

The following figure shows different uplink options for the MX7116n FEM acting as a pass-through module operating at 25 GbE.
The MX7116n FEM should be connected to an upstream switch at 25 GbE. Support for 10 GbE is available as of OME-Modular
1.20.00.
If the MX7116n FEM port connects to QSFP28 ports, a QSFP28-DD to 2x QSFP28 cable is used. If the MX7116n FEM port
connects to SFP28 ports, a QSFP28-DD to 8x SFP28 cable is used. These cables can be DAC, AOC, or optical transceiver plus
passive fiber. See the PowerEdge MX I/O Guide for more information about cable selection.
NOTE: If connecting the FEM to a QSFP+/QSFP28 port on a ToR switch, ensure that the port is configured to break out
to 4x 10 GbE or 4x 25 GbE and not 40 GbE or 100 GbE.
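
The cabling and breakout guidance above can be summarized as a small lookup. The following Python sketch restates the rules for the ToR port types covered in this section; it is illustrative only and does not replace the support matrix in the PowerEdge MX I/O Guide. The cable entry for QSFP+ is an assumption based on the QSFP28 rule.

FEM_PASSTHROUGH_RULES = {
    # ToR port type: (cable from FEM, required ToR port configuration)
    "SFP28": ("QSFP28-DD to 8x SFP28", "native 25 GbE SFP28 ports"),
    "QSFP28": ("QSFP28-DD to 2x QSFP28", "break out to 4x 25 GbE, not 100 GbE"),
    "QSFP+": ("QSFP28-DD to 2x QSFP28",  # assumed; verify in the I/O Guide
              "break out to 4x 10 GbE, not 40 GbE"),
}

cable, tor_config = FEM_PASSTHROUGH_RULES["QSFP28"]
print("Cable: {}; ToR configuration: {}".format(cable, tor_config))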

Figure 11. Topologies for MX7116n FEM as pass-through module

NOTE: The MX7116n FEM cannot act as a stand-alone switch and must be connected to the MX9116n FSE or other Dell
ToR switches to function. Connecting the MX7116n FEM to non-Dell switches is not supported.

Dell Networking MX5108n Ethernet switch


The Dell Networking MX5108n Ethernet switch is targeted at PowerEdge MX deployments of one or two chassis. While not
a scalable switch, it still provides high performance and low latency with a nonblocking switching architecture. The MX5108n
provides line-rate 25 Gbps Layer 2 and Layer 3 forwarding capacity to all connected servers with no oversubscription.
In addition to eight internal 25 GbE ports, the MX5108n provides:
● One 40 GbE QSFP+ port
● Two 100 GbE QSFP28 ports
● Four 10 GbE RJ45 Base-T ports
These ports can be used to provide a combination of network uplink, VLT interconnect (VLTi), or FCoE connectivity. The
MX5108n supports FCoE Initialization Protocol (FIP) Snooping Bridge (FSB) mode, but does not support NPG or direct-attach
FC capabilities. The MX7000 chassis supports up to four MX5108n Ethernet switches in Fabric A, Fabric B, or both.
See PowerEdge MX IOM slot support matrix on page 229 for more information about supported slot configurations and the
PowerEdge MX I/O Guide for more information about cable selection.

Figure 12. MX5108n Ethernet switch

The following MX5108n components are labeled in the figure:


1. Express service tag
2. Storage USB port
3. Micro-B USB console port
4. Power and indicator LEDs
5. Module insertion and removal latch
6. One QSFP+ port
7. Two QSFP28 ports
8. Four 10GBase-T ports
NOTE: Compute sleds with quad-port mezzanine cards are not supported with MX5108n Ethernet switches.

PowerEdge MX Ethernet Pass-Through Modules


There are two Ethernet Pass-Through Modules (PTMs) that provide nonswitched Ethernet connections to ToR switches. Each PTM
provides 16 internal ports mapped directly to 16 external ports. The MX7000 chassis supports four PTMs in Fabric A, Fabric
B, or both. See PowerEdge MX IOM slot support matrix on page 229 for more information about supported slot configurations
and the PowerEdge MX I/O Guide for more information about cable selection. For more information about PTM port to compute
sled mapping, see PTM port mapping on page 222.
The following figure shows the 25 GbE Ethernet PTM. The 25 GbE PTM provides 16 external SFP28 ports that can operate at
10 GbE or 25 GbE.

Figure 13. 25 GbE Ethernet PTM

The following 25 GbE PTM components are labeled in the figure:


1. Express service tag
2. Power and indicator LEDs
3. Module insertion and removal latch
4. 16 SFP28 ports
The 10GBase-T Ethernet PTM, shown in the following figure, provides 16 external RJ45 Base-T ports that operate at 10 GbE.

Figure 14. 10GBase-T Ethernet PTM

The following 10GBase-T Ethernet PTM components are labeled in the figure:
1. Express service tag
2. Power and indicator LEDs
3. Module insertion and removal latch
4. 16 10GBase-T ports

Dell Networking MXG610s Fibre Channel switch


The Dell Networking MXG610s is a high-performance, 32 Gbps Fibre Channel switch based on Brocade technology. It is ideal for
connectivity to all-flash SAN storage solutions and is designed for maximum flexibility and value with pay-as-you-grow scalability
using a Ports on Demand (POD) license model. The MXG610s is compatible with Brocade and Cisco FC switches. The MXG610s
runs the Brocade FOS operating system and Brocade tools are used to manage the switch.
In addition to 16 internal 32-GFC ports, the MXG610s provides:
● Eight external SFP+ ports
● Two 4x 32 Gbps external QSFP ports
The internal and external port information is as follows:
● Internal ports support 16-Gbps or 32-Gbps speed
● Internal ports support F_Port mode and N_Port mode for NPIV connections
● External ports support F_Port, N_Port, D_Port, and E_Port modes
● SFP+ ports auto-negotiate to 8 Gbps, 16 Gbps, or 32 Gbps speeds when 32 Gbps SFP+ transceivers are used
● SFP+ ports auto-negotiate to 8 Gbps or 16 Gbps speeds when 16 Gbps SFP+ transceivers are used
● QSFP ports auto-negotiate to 16 Gbps or 32 Gbps speeds when 32 Gbps QSFP transceivers are used
● QSFP ports support breakout cables
● QSFP ports support ISL connections only. Interchassis link (ICL) connections are not supported
● Dynamic Ports on Demand (POD) support with increments of 8-port licenses
The external ports support the connection of the MX7000 chassis to existing SAN switches, or the connection of an FC storage
array directly to the switch.
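
As a worked example of the auto-negotiation rules in the list above, the following Python sketch maps the port and transceiver combinations to the speeds they can negotiate. The function name is hypothetical, and only the combinations listed in this section are included.

def mxg610s_negotiable_speeds(port_type, optic_gbps):
    """Return the FC speeds (in Gbps) a port can auto-negotiate with a given
    transceiver, per the bullet list above."""
    rules = {
        ("SFP+", 32): (8, 16, 32),
        ("SFP+", 16): (8, 16),
        ("QSFP", 32): (16, 32),
    }
    if (port_type, optic_gbps) not in rules:
        raise ValueError("Combination not covered in this section")
    return rules[(port_type, optic_gbps)]

print(mxg610s_negotiable_speeds("SFP+", 16))  # (8, 16)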
NOTE: The MX7000 chassis requires redundant MXG610s in Fabric C. The operation of a single MXG610s switch is not
supported.

NOTE: For information about the optical transceivers and cables used with the MXG610s, see the MXG610s Fibre Channel
Switch Module Installation Guide.

Figure 15. MXG610s Fibre Channel switch module

The following MXG610s components are labeled in the figure:


1. Express service tag
2. Module insertion and removal latch
3. Micro-B USB console port
4. Power and indicator LEDs
5. Eight external SFP+ ports
6. Two 4x 32-GFC QSFP ports

Dell Networking MXG610s Fibre Channel switch models and licenses
The Dell Networking MXG610s FC switch can be purchased in two configurations:
● Sixteen activated ports and four 32 Gbps SFP+ SWL optical transceivers
● Sixteen activated ports, eight 32 Gbps SFP+ SWL optical transceivers, and the Enterprise software bundle
Enterprise software bundle
The Enterprise bundle includes ISL Trunking, Fabric Vision, and Extended Fabric licenses:

ISL Trunking: Allows you to aggregate multiple physical links into one logical link for enhanced network performance and fault tolerance. ISL Trunking also enables Brocade Access Gateway ISL Trunking (N_Port trunking).
Fabric Vision: Enables MAPS (Monitoring and Alerting Policy Suite), Flow Vision, IO Insight, VM Insight, and ClearLink (D_Port) diagnostics to non-Brocade devices:
● MAPS enables rules-based monitoring and alerting capabilities, and provides comprehensive dashboards to troubleshoot problems in Brocade SAN environments
● Flow Vision enables host-to-LUN flow monitoring, application flow mirroring for offline capture and deeper analysis, and a test traffic flow generation function for SAN infrastructure validation
● IO Insight automatically detects degraded storage IO performance with integrated device latency and IOPS monitoring embedded in the hardware
● ClearLink (D_Port) to non-Brocade devices allows extensive diagnostic testing of links to devices other than Brocade switches and adapters.
NOTE: This functionality requires the support of the attached device, and the ability for the user to check the device.
Extended Fabric: Provides greater than 10 km of switched fabric connectivity at full bandwidth over long distances

NOTE: The features described are available only as part of the Enterprise software bundle. Individual feature licenses are
not available.
Ports on Demand
You can purchase Ports on Demand (POD) licenses to activate up to 24 additional ports using 8-port POD licenses. The
switch module supports dynamic POD license allocation, where two port licenses are assigned to ports 0 and 17 at the factory.
The remaining licenses are assigned to active ports on a first-come, first-served basis. After the licenses are installed, you can
move them from one port to another, making port licensing flexible.
Broadcom software licensing upgrades
To obtain software licenses for the MXG610s, you must register the switch on the Broadcom support portal at
https://support.broadcom.com/.

NOTE: Run the chassisshow command to obtain the required Factory Serial Number.

To obtain upgrades for MXG610s software, contact Dell Technical Support.

Dell PowerEdge MX5000s SAS module


The Dell PowerEdge MX5000s SAS module supports four SAS internal connections to all eight front-facing slots in the
PowerEdge MX7000 chassis. The MX5000s uses T10 SAS zoning to provide multiple SAS zones/domains for the compute
sleds. Storage management is conducted through the OpenManage Enterprise Modular console.

NOTE: The external (rear-facing) ports on MX5000s SAS switches are not currently enabled.

The MX5000s provides Fabric C SAS connectivity to each compute sled and one or more MX5016s storage sleds. Compute
sleds connect to the MX5000s using either SAS Host Bus Adapters (HBA) or a PowerEdge RAID Controller (PERC) in the
mini-mezzanine PCIe slot.
The MX5000s switches are deployed as redundant pairs to offer multiple SAS paths to the individual SAS disk drives. The
MX7000 chassis supports redundant MX5000s in Fabric C.

NOTE: A MX5000s SAS module and a MXG610s are not supported in the same MX7000 chassis.



Figure 16. MX5000s SAS module

The following MX5000s components are labeled in the figure:


1. Express service tag
2. Module insertion and removal latch
3. Power and indicator LEDs
4. Six SAS ports

PowerEdge MX compute slot to I/O slot mapping

Overview
The PowerEdge MX7000 chassis includes two general-purpose I/O fabrics, Fabric A and B. The vertically aligned compute
sleds in slots one through eight connect to the horizontally aligned I/O modules (IOMs) in fabric slots A1, A2, B1, and B2. This
orthogonal connection method results in a midplane-free design and enables the adoption of new I/O technologies without the
burden of having to upgrade the midplane.

Figure 17. MX7000 orthogonal connection

Mezzanine cards
The MX740c and MX750c support up to two mezzanine cards, which are installed in slots A1 and B1, and the MX840c supports
up to four mezzanine cards, which are installed in slots A1, A2, B1, and B2. Each mezzanine card provides redundant connections
to each fabric, A or B, as shown in the following figure. A mezzanine card connects orthogonally to the pair of IOMs installed



in the corresponding fabric slot. For example, port one of mezzanine card A1 connects to fabric slot A1, a MX9116n FSE (not
shown). The second port of mezzanine card A1 connects to fabric slot A2, a MX7116n FEM (not shown).

Figure 18. MX740c mezzanine cards

Mini-mezzanine card
The MX7000 chassis also provides Fabric C, shown in the following figure, supporting redundant MXG610s FC switches, or
MX5000s SAS modules. This fabric uses a midplane connecting the C1 and C2 modules to each compute or storage sled. The
MX740c supports one mini-mezzanine card, which is installed in slot C1, and the MX840c supports two mini-mezzanine cards,
which are installed in slots C1 and C2.

Figure 19. MX740c mini-mezzanine card



Open Manage Enterprise - Modular Edition
Introduction
The Dell PowerEdge MX9002m management module hosts the OpenManage Enterprise - Modular Edition (OME-M) console.
OME-M is the latest addition to the Dell OpenManage Enterprise suite of tools and provides a centralized management interface
for the PowerEdge MX platform. The OME-M console features include:
● Manage up to 20 chassis from a single web or REST API endpoint using multichassis management groups
● End-to-end life cycle management for servers, storage, and networking
● Monitoring and management of the entire PowerEdge MX platform
● Integration with OpenManage Mobile for configuration and troubleshooting, including wireless server vKVM
● Integration with OpenManage Enterprise for multi-datacenter management of PowerEdge systems

PowerEdge MX initial deployment


Initial PowerEdge MX deployment begins with assigning network settings for OME-M and completing the Chassis Deployment
Wizard.
There are three methods for initial configuration:
● Using the LCD touchscreen on the front-left of the MX7000 chassis (if installed)
● Setting the initial OME-M console IP address through the KVM ports on the front-right side of the MX7000 chassis
● Setting the initial OME-M console IP address through the serial port on the MX9002m module
The Deployment Wizard is displayed on first login to the console and enables configuration of the following:
● Time
● Alerting
● iDRAC9 quick deployment settings
● Network IOM access settings
● Firmware updates
● Network proxy settings
● MCM group definition
NOTE: For more information regarding the initial deployment of the MX7000, see the PowerEdge MX7000 -
Documentation site.



Chapter 2: PowerEdge MX Scalable Fabric Architecture
Scalable Fabric Architecture
Overview
A multichassis group enables multiple chassis to be managed as if they were a single chassis. A PowerEdge MX Scalable Fabric
enables multiple chassis to behave like a single chassis from a networking perspective.
A Scalable Fabric consists of two main components - the MX9116n FSE and the MX7116n FEM. A typical configuration includes
one MX9116n FSE and one MX7116n FEM in each of the first two chassis, and additional pairs of MX7116n FEMs in the remaining
chassis. Each MX7116n FEM connects to the MX9116n FSE corresponding to its fabric and slot. This hardware-enabled
architecture applies regardless of whether the switch is running in Full Switch or SmartFabric mode.
The following figure shows up to ten MX7000 chassis in a single Scalable Fabric. The first two chassis house MX9116n FSEs,
while chassis 3 through 10 only house MX7116n FEMs. All connections in the following figure use QSFP28-DD connections.
NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.

Figure 20. Scalable Fabric example using Fabric A

NOTE: To expand from single-chassis to dual-chassis configuration, see Expanding from a single-chassis to dual-chassis
configuration on page 138.



The following table shows the recommended IOM slot placement when creating a Scalable Fabric Architecture.

Table 2. Scalable Fabric Architecture maximum recommended design


MX7000 chassis   Fabric slot   IOM module
Chassis 1        A1            MX9116n FSE
                 A2            MX7116n FEM
Chassis 2        A1            MX7116n FEM
                 A2            MX9116n FSE
Chassis 3–10     A1            MX7116n FEM
                 A2            MX7116n FEM

To provide further redundancy and throughput to each compute sled, Fabric B can be used to create an additional Scalable
Fabric Architecture. Utilizing Fabric A and B can provide up to eight 25-Gbps connections to each MX740c or sixteen 25-Gbps
connections to each MX840c.

Figure 21. Two Scalable Fabrics spanning two MX7000 chassis

Restrictions and guidelines


The following restrictions and guidelines are in place when building a Scalable Fabric:
● All MX7000 chassis in the same Scalable Fabric must be in the same multichassis group.
● Mixing IOM types in the same Scalable Fabric (for example, MX9116n FSE in fabric slot A1 and MX5108n in fabric slot A2) is
not supported. See PowerEdge MX IOM slot support matrix on page 229 for more information about IOM placement.
● All participating MX9116n FSEs and MX7116n FEMs must be in MX7000 chassis that are part of the same MCM group. For
more information, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation
table.
● When using both Fabric A and B for a Scalable Fabric, the following restrictions apply:
○ IOM placement for each fabric must be the same in each chassis. For instance, if an MX9116n FSE is in chassis 1 fabric
slot A1, then the second MX9116n FSE should be in chassis 1 fabric slot B1.
○ Chassis 3 through 10, which contain only MX7116n FEMs, must connect to the MX9116n FSE that is in the same
group.
NOTE: For information about the recommended MX9116n FSE port connectivity order, see the Additional Information on
page 222 section.



Complex Scalable Fabric topologies
Beginning with OME-M 1.20.00 and SmartFabric OS10.5.0.7, additional Scalable Fabric topologies are supported in Full Switch
and SmartFabric modes. These topologies are more complex than the ones presented in previous sections. These designs enable
physical NIC redundancy using a pair of switches instead of two pairs, providing a significant cost reduction.
These complex topologies support connections between MX9116n FSEs in Fabric A and MX7116n FEMs in Fabric B across single
and multiple chassis, up to a total of five chassis. When connecting the FSE and FEMs, ensure that the fabric slots have the
same slot number. For example, an MX9116n FSE in slot A1 can be connected to an MX7116n FEM in slot B1 (same chassis), slot
A1 (second chassis), or slot B1 (second chassis), and so on.
NOTE: Cabling multiple chassis together with these topologies can become very complex. Care must be taken to correctly
connect each component.

The complex scalable fabric topologies in this section apply to dual-port Ethernet NICs.
These complex topologies are described as follows.
NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.
Single chassis:
● MX9116n FSE in slot A1 is connected to MX7116n FEM in slot B1.
● MX9116n FSE in slot A2 is connected to MX7116n FEM in slot B2.

Figure 22. Single chassis topology

Dual chassis:
● MX9116n FSE in Chassis 1 slot A1 is connected to MX7116n FEMs in Chassis 1 slot B1, Chassis 2 slot A1, Chassis 2 slot B1.
● MX9116n FSE in Chassis 2 slot A2 is connected to MX7116n FEMs in Chassis 1 slot A2, Chassis 1 slot B2, Chassis 2 slot B2.



Figure 23. Dual chassis topology

Multiple chassis:
The topology with multiple chassis is similar to the dual-chassis topology. Make sure to connect the FSE and FEM in fabric
slots with the same slot number. For example, connecting an FSE in Chassis 1 slot A1 to a FEM in Chassis 2 slot B2 is not supported.

Figure 24. Multiple chassis topology

Quad-port Ethernet NICs


PowerEdge MX 1.20.10 adds support for the Broadcom 57504 quad-port Ethernet adapter. For chassis with MX7116n FEMs, the
first QSFP28-DD port is used when attaching dual-port NICs. The first and second QSFP28-DD ports of the MX7116n FEM are
used when attaching quad-port NICs. When both QSFP28-DD ports are connected, a server with a dual-port NIC will only use
the first port on each FEM. With quad-port NICs, both ports are used.



NOTE: The MX5108n Ethernet switch does not support quad-port adapters.

NOTE: The Broadcom 57504 quad-port Ethernet adapter is not a converged network adapter and does not support FCoE
or iSCSI offload.
The MX9116n FSE has sixteen 25 GbE server-facing ports, ethernet1/1/1 through ethernet1/1/16, which are used when the
PowerEdge MX server sleds are in the same chassis as the MX9116n FSE.
With only dual-port NICs in all server sleds, only the odd-numbered server-facing ports are active. If the server has a quad-port
NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the NIC ports will be connected and
show a link up.



The following table shows the MX server sled to MX9116n FSE interface mapping for dual-port NIC servers which are directly
connected to the switch.

Table 3. Interface mapping for dual-port NIC servers


Sled number   MX9116n FSE server interface
Sled 1        ethernet 1/1/1
Sled 2        ethernet 1/1/3
Sled 3        ethernet 1/1/5
Sled 4        ethernet 1/1/7
Sled 5        ethernet 1/1/9
Sled 6        ethernet 1/1/11
Sled 7        ethernet 1/1/13
Sled 8        ethernet 1/1/15

With quad-port NICs in all server sleds, both the odd- and even-numbered server-facing ports will be active. The following table
shows the MX server sled to MX9116n FSE interface mapping for quad-port NIC servers which are directly connected to the
switch.

Table 4. Interface mapping for quad-port NIC servers


Sled number   MX9116n FSE server interface
Sled 1        ethernet 1/1/1, ethernet 1/1/2
Sled 2        ethernet 1/1/3, ethernet 1/1/4
Sled 3        ethernet 1/1/5, ethernet 1/1/6
Sled 4        ethernet 1/1/7, ethernet 1/1/8
Sled 5        ethernet 1/1/9, ethernet 1/1/10
Sled 6        ethernet 1/1/11, ethernet 1/1/12
Sled 7        ethernet 1/1/13, ethernet 1/1/14
Sled 8        ethernet 1/1/15, ethernet 1/1/16

When using multiple chassis and MX7116n FEMs, virtual slots are used to maintain a continuous mapping between the NIC and
physical port. For more information on virtual slots, see Virtual ports and slots on page 39.
In a multiple-chassis Scalable Fabric, the interface numbers for the first two chassis are mixed, as one NIC connection is to
the MX9116n in the same chassis as the server, and the other NIC connection is to the MX7116n. In this example, the following
table shows the server interface mapping for Chassis 1 using quad-port adapters.

Table 5. Interface mapping for multiple chassis


Chassis 1 sled number   Chassis 1 MX9116n server interface   Chassis 2 MX9116n server interface
Sled 1                  ethernet 1/1/1, ethernet 1/1/2       ethernet 1/71/1, ethernet 1/71/9
Sled 2                  ethernet 1/1/3, ethernet 1/1/4       ethernet 1/71/2, ethernet 1/71/10
Sled 3                  ethernet 1/1/5, ethernet 1/1/6       ethernet 1/71/3, ethernet 1/71/11
Sled 4                  ethernet 1/1/7, ethernet 1/1/8       ethernet 1/71/4, ethernet 1/71/12
Sled 5                  ethernet 1/1/9, ethernet 1/1/10      ethernet 1/71/5, ethernet 1/71/13
Sled 6                  ethernet 1/1/11, ethernet 1/1/12     ethernet 1/71/6, ethernet 1/71/14
Sled 7                  ethernet 1/1/13, ethernet 1/1/14     ethernet 1/71/7, ethernet 1/71/15
Sled 8                  ethernet 1/1/15, ethernet 1/1/16     ethernet 1/71/8, ethernet 1/71/16

Quad-port NIC restrictions and guidelines



● If the server has a quad-port NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the
NIC ports will be connected and show a link up.
● Both ports on the MX7116n FEM must be connected to the same MX9116n FSE.
NOTE: Do not connect one MX7116n FEM port to one MX9116n FSE and the other MX7116n FEM port to another
MX9116n FSE. This is not supported. The Unsupported configuration for quad-port NICs on page 35 figure shows the
unsupported configuration.
● If a Scalable Fabric has some chassis with quad-port NICs and some with only dual-port NICs, only the chassis with
quad-port NICs require the second MX7116n FEM port to be connected, as shown in the Multiple chassis topology with
quad-port and dual-port NICs – single fabric on page 35 figure.
● Using a dual-port NIC in Fabric A and a quad-port NIC in Fabric B (or the inverse) is supported, as is using a quad-port NIC
in both Fabric A and Fabric B.
● Up to five chassis with quad-port NICs are supported in a single Scalable Fabric.
The following set of figures show the basic supported topologies when using quad-port Ethernet adapters.
NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.
The following figure shows a single-chassis topology with quad-port NICs. Make sure to connect both ports on the MX7116n
FEM to the same MX9116n FSE.

Figure 25. Single-chassis topology with quad-port NICs - dual fabric

The following figure shows the two-chassis topology with quad-port NICs in each chassis. Only a single fabric is configured.
Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE.

Figure 26. Two-chassis topology with quad-port NICs – single fabric

The following figure shows the two-chassis topology with quad-port NICs. Dual fabrics are configured.



Figure 27. Two-chassis topology with quad-port NICs – dual fabric

The following figure shows the multiple chassis topology with quad-port NICs. Only a single fabric is configured.

Figure 28. Multiple chassis topology with quad-port NICs – single fabric

The following figure shows the multiple chassis topology with quad-port NICs in two chassis and dual-port NICs in one chassis.
Only a single fabric is configured. Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE with the
quad-port card. Do not connect the second port on the MX7116n FEM when configured with a dual-port NIC.



Figure 29. Multiple chassis topology with quad-port and dual-port NICs – single fabric

The following figure shows one example of an unsupported topology. The ports on the MX7116n FEMs must never be connected
to different MX9116n FSEs.

Figure 30. Unsupported configuration for quad-port NICs



Interfaces and port groups
On the MX9116n FSE and MX5108n, server-facing interfaces are internal and are enabled by default. To view the backplane port
connections to servers, use the show inventory media command.
In the output, a server-facing interface displays INTERNAL as its media. A FIXED port does not use external transceivers and
always displays as Dell EMC Qualified true.

OS10# show inventory media


--------------------------------------------------------------------------------
System Inventory Media
--------------------------------------------------------------------------------
Node/Slot/Port Category Media Serial-Number Dell EMC-Qualified
--------------------------------------------------------------------------------
1/1/1 FIXED INTERNAL true
1/1/2 FIXED INTERNAL true
1/1/3 FIXED INTERNAL true
1/1/4 FIXED INTERNAL true
1/1/5 FIXED INTERNAL true
1/1/6 FIXED INTERNAL true
1/1/7 FIXED INTERNAL true
1/1/8 FIXED INTERNAL true
1/1/9 FIXED INTERNAL true
1/1/10 FIXED INTERNAL true
1/1/11 FIXED INTERNAL true
1/1/12 FIXED INTERNAL true
1/1/13 FIXED INTERNAL true
1/1/14 FIXED INTERNAL true
1/1/15 FIXED INTERNAL true
1/1/16 FIXED INTERNAL true
1/1/17 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489D0007 true
1/1/18 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489D0007 true
1/1/19 Not Present
1/1/20 Not Present
1/1/21 Not Present
--------------------- Output Truncated ----------------------------------------
1/1/37 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0021 true
1/1/38 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0021 true
1/1/39 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0024 true
1/1/40 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0024 true
1/1/41 QSFP28 QSFP28 100GBASE CR4 2M CN0APX0084G1F05 true
1/1/42 QSFP28 QSFP28 100GBASE CR4 2M CN0APX0084G1F49 true
--------------------- Output Truncated ----------------------------------------

To view the server-facing interface port status, use the show interface status command. Server-facing ports are
numbered 1/1/1 to 1/1/16.
For the MX9116n FSE, servers that have a dual-port NIC connect only to odd-numbered internal Ethernet interfaces; for
example, a MX740c in slot one would be 1/1/1, and a MX840c in slots five and six occupies 1/1/9 and 1/1/11.

NOTE: Even-numbered Ethernet ports between 1/1/1–1/1/16 are reserved for quad-port NICs.

A port group is a logical port that consists of one or more physical ports and provides a single interface. Only the MX9116n FSE
supports the following port groups:
● QSFP28-DD – Port groups 1 through 12
● QSFP28 – Port groups 13 and 14
● QSFP28 Unified – Port groups 15 and 16
The following figure shows these port groups along the top, and the bottom shows the physical ports in each port group. For
instance, QSFP28-DD port group 1 has member ports 1/1/17 and 1/1/18, and unified port group 15 has a single member, port
1/1/43.



Figure 31. MX9116n FSE port groups

QSFP28-DD port groups


On the MX9116n FSE, QSFP28-DD port groups are 1 through 12, which contain ports 1/1/17 through 1/1/40 and are used to:
● Connect to a MX7116n FEM to extend the Scalable Fabric
● Connect to an Ethernet rack server or storage device
● Connect to another networking device, typically an Ethernet switch
By default, QSFP28-DD port groups 1 through 9 are in fabric-expander-mode and QSFP28-DD port groups 10 through 12 are in
2x 100 GbE breakout mode. Fabric Expander mode is an 8x 25 GbE interface that is used only to connect to MX7116n FEMs in
additional chassis. The interfaces from the MX7116n FEM appear as standard Ethernet interfaces from the perspective of the
MX9116n FSE.
The following figure illustrates how the QSFP28-DD cable provides 8x 25 GbE lanes between the MX9116n FSE and a MX7116n
FEM.

Figure 32. QSFP28-DD connection between MX9116n FSE and MX7116n FEM

NOTE: Compute sleds with dual-port NICs require only MX7116n FEM port 1 to be connected.

In addition to fabric-expander-mode, QSFP28-DD port groups support the following Ethernet breakout configurations:
● Using QSFP28-DD optics/cables:
○ 2x 100 GbE – Breakout a QSFP28-DD port into two 100-GbE interfaces
○ 2x 40 GbE – Breakout a QSFP28-DD port into two 40-GbE interfaces
○ 8x 25 GbE – Breakout a QSFP28-DD port into eight 25-GbE interfaces



○ 8x 10 GbE – Breakout a QSFP28-DD port into eight 10-GbE interfaces
● Using QSFP28 optics/cables:
○ 1x 100 GbE – Breakout a QSFP28-DD port into one 100-GbE interface
○ 4x 25 GbE – Breakout a QSFP28-DD port into four 25-GbE interfaces
● Using QSFP+ optics/cables:
○ 1x 40 GbE – Breakout a QSFP28-DD port into one 40-GbE interface
○ 4x 10 GbE – Breakout a QSFP28-DD port into four 10-GbE interfaces
NOTE: Before changing the port breakout configuration from one setting to another, the port must first be set back to the
hardware default setting.

NOTE: QSFP28-DD ports are backwards compatible with QSFP28 and QSFP+ optics and cables.
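In Full Switch mode, these breakout settings are applied per port group from the OS10 CLI. The following is a minimal sketch
only, using QSFP28-DD port group 10 and the 8x 25 GbE mode as an example; verify the exact mode keywords for your OS10
release in the SmartFabric OS10 User Guide.

OS10# configure terminal
OS10(config)# port-group 1/1/10
OS10(conf-pg-1/1/10)# mode Eth 25g-8x

In SmartFabric mode, make the equivalent change through the OME-M UI breakout settings instead of the CLI.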

Single-density QSFP28 port groups


On the MX9116n FSE, single-density QSFP28 port groups are 13 and 14, contain ports 1/1/41 and 1/1/42 respectively, and are
used to connect to upstream networking devices. By default, both port groups are set to 1x 100 GbE. Port groups 13 and 14
support the following Ethernet breakout configurations:
● 4x 10 GbE – Breakout a QSFP28 port into four 10-GbE interfaces
● 1x 40 GbE – Set a QSFP28 port to 40 GbE mode
● 4x 25 GbE – Breakout a QSFP28 port into four 25-GbE interfaces
● 2x 50 GbE – Breakout a QSFP28 port into two 50-GbE interfaces
● 1x 100 GbE – Reset the unified port back to the default, 100-GbE mode

Unified port groups


Unified port groups operate as either Ethernet or FC. By default, both unified port groups, 15 and 16, are set to 1x 100 GbE. To
activate the two port groups as FC interfaces in Full Switch mode, use the command mode fc. Both port groups are enabled
as Ethernet or FC together. You cannot have port group 15 as Ethernet and port group 16 as Fibre Channel.
The MX9116n FSE unified port groups support the following Ethernet breakout configurations:
● 4x 10 GbE – Breakout a QSFP28 port into four 10-GbE interfaces
● 1x 40 GbE – Set a QSFP28 port to 40 GbE mode
● 4x 25 GbE – Breakout a QSFP28 port into four 25-GbE interfaces
● 2x 50 GbE – Breakout a QSFP28 port into two 50-GbE interfaces
● 1x 100 GbE – Reset the unified port back to the default, 100-GbE mode
The MX9116n FSE unified port groups support the following FC breakout configurations:
● 4x 8 Gb – Breakout a unified port group into four 8-Gb FC interfaces
● 2x 16 Gb – Breakout a unified port group into two 16-Gb FC interfaces
● 4x 16 Gb – Breakout a unified port group into four 16-Gb FC interfaces
● 1x 32 Gb – Breakout a unified port group into one 32-Gb FC interface
● 2x 32 Gb – Breakout a unified port group into two 32-Gb FC interfaces
● 4x 32 Gb – Breakout a unified port group into four 32-Gb FC interfaces, rate limited
NOTE: After enabling FC on the unified ports, these ports will be set administratively down and must be enabled in order to
be used.
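As a minimal Full Switch mode sketch, the following converts unified port group 15 (member port 1/1/43) to 2x 32 Gb FC and
then administratively enables the first resulting FC interface; the breakout choice is an example only, and the commands
should be verified against the SmartFabric OS10 User Guide for your release.

OS10(config)# port-group 1/1/15
OS10(conf-pg-1/1/15)# mode FC 32g-2x
OS10(conf-pg-1/1/15)# exit
OS10(config)# interface fibrechannel 1/1/43:1
OS10(conf-if-fc1/1/43:1)# no shutdown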

Rate limited 32 Gb Fibre Channel


When using 32-Gb FC, the actual data rate is 28 Gbps due to 64b/66b encoding. The following figure shows unified port group
15. The port group is set to 4x 32 Gb FC mode. However, each of the four lanes is 25 Gbps, not 28 Gbps. When these lanes are
mapped from the Network Processing Unit (NPU) to the FC ASIC for conversion to FC signaling, the four 32 Gb FC interfaces
are mapped to four 25 Gbps lanes. With each lane operating at 25 Gbps, not 28 Gbps, the result is rate limited to 25 Gbps.



Figure 33. 4x 32 Gb FC breakout mode, rate limit of 25 Gbps

While each 32 Gb FC connection is providing 25 Gbps, the overall FC bandwidth available is 100 Gbps per unified port group,
or 200 Gbps for both ports. However, if an application requires the maximum 28 Gbps throughput per port, use the 2x 32 Gb
breakout mode. This mode configures the connections between the NPU and the FC ASIC, as shown in the following figure.

Figure 34. 2x 32 Gb FC breakout mode

In 2x 32 Gb FC breakout mode, the MX9116n FSE binds two 50 Gbps links together to provide a total of 100 Gbps bandwidth
per lane to the FC ASIC. This results in the two FC ports operating at 28 Gbps. The overall FC bandwidth available is 56 Gbps
per unified port, or 112 Gbps for both (compared to the 200 Gbps using 4x 32-Gb FC).
NOTE: Rate limited ports are not oversubscribed ports. There is no FC frame drop on these ports and buffer to buffer
credit exchanges ensure flow consistency.

Virtual ports and slots


A virtual port is a logical interface that connects to a downstream server and has no physical location on the switch. Virtual
ports are created when a MX9116n FSE onboards (discovers and configures) a MX7116n FEM.



If a MX7116n is moved and cabled to a different QSFP28-DD port on the MX9116n, all software configurations on the virtual
ports are maintained. Only the QSFP28-DD breakout interfaces mapped to the virtual ports change.
A virtual slot contains all provisioned virtual ports across one or both FEM connections. On the MX9116n FSE, virtual slots 71
through 82 are pre-provisioned, and each virtual slot has eight virtual ports. For example, virtual slot 71 contains virtual ports
ethernet 1/71/1 through 1/71/8. When a quad-port adapter is used, that virtual slot will expand to 16 virtual ports, for example
ethernet 1/71/1 through 1/71/16.
If the MX9116n FSE is in SmartFabric mode, the MX7116n FEM is automatically configured with a virtual slot ID and virtual ports
that are mapped to the physical interfaces. The following table shows how the physical ports are mapped to the virtual slot and
ports.
If the MX9116n FSE is in Full Switch mode, it automatically discovers the MX7116n FEM when the following conditions are met:
● The MX7116n FEM is connected to the MX9116n FSE by attaching a Dell qualified cable between the QSFP28-DD ports on
both devices.
● The interface for the QSFP28-DD port group connected to the MX9116n FSE is in 8x 25 GbE FEM mode.
● At least one blade server is inserted into the MX7000 chassis containing the MX7116n FEM.
The FEM will be automatically discovered and provisioned into a virtual slot when operating in SmartFabric mode. In Full Switch
mode, this mapping is done with the unit-provision command. See show unit-provision on page 157 for more information
on the show unit-provision command.
To verify that a MX7116n FEM is communicating with the MX9116n FSE, enter the show discovered-expanders
command.

MX9116n-FSE # show discovered-expanders


Service Model Type Chassis Chassis-slot Port-group Virtual
tag service-tag Slot-Id
--------------------------------------------------------------------------
D10DXC2 MX7116n FEM 1 SKY002Z A1 1/1/1

Table 6. Virtual Port mapping example 1


MX7116n service tag   MX9116n QSFP28-DD port group   MX9116n physical interface   MX7116n virtual slot (ID)   MX7116n virtual ports
12AB3456              portgroup1/1/1                 1/1/17:1                     71                          1/71/1
                                                     1/1/17:2                                                 1/71/2
                                                     1/1/17:3                                                 1/71/3
                                                     1/1/17:4                                                 1/71/4
                                                     1/1/18:1                                                 1/71/5
                                                     1/1/18:2                                                 1/71/6
                                                     1/1/18:3                                                 1/71/7
                                                     1/1/18:4                                                 1/71/8

Use the same command to show the list of MX7116n FEMs in a quad-port NIC configured scenario, in which each MX7116n FEM
creates two connections with the MX9116n FSE. In a dual-chassis scenario, MX7116n FEMs are connected on port group 1 and
port group 7 to the MX9116n FSE as shown below. For example, if the quad-port NIC is configured on compute sled 1, then
virtual ports 1/71/1 and 1/71/9 will be up.

MX9116N-1# show discovered-expanders


Service Model Type Chassis Chassis-slot Port-group Virtual
tag service-tag Slot-Id
--------------------------------------------------------------------------
D10DXC2 MX7116n FEM 1 SKY002Z A1 1/1/1 71
D10DXC2 MX7116n FEM 1 SKY002Z A1 1/1/7 71
D10DXC4 MX7116n FEM 1 SKY003Z A1 1/1/2 72



Table 7. Virtual Port mapping example 2
MX7116n service tag   MX9116n QSFP28-DD port group   MX9116n physical interface   MX7116n virtual slot (ID)   MX7116n virtual ports
12AB3456              portgroup1/1/1                 1/1/17:1                     71                          1/71/1
                                                     1/1/17:2                                                 1/71/2
                                                     1/1/17:3                                                 1/71/3
                                                     1/1/17:4                                                 1/71/4
                                                     1/1/18:1                                                 1/71/5
                                                     1/1/18:2                                                 1/71/6
                                                     1/1/18:3                                                 1/71/7
                                                     1/1/18:4                                                 1/71/8
                      portgroup1/1/7                 1/1/29:1                                                 1/71/9
                                                     1/1/29:2                                                 1/71/10
                                                     1/1/29:3                                                 1/71/11
                                                     1/1/29:4                                                 1/71/12
                                                     1/1/30:1                                                 1/71/13
                                                     1/1/30:2                                                 1/71/14
                                                     1/1/30:3                                                 1/71/15
                                                     1/1/30:4                                                 1/71/16

The MX9116n physical interfaces mapped to the MX7116n virtual ports display dormant (instead of up) in the show
interface status output until a virtual port starts to transmit server traffic.

MX9116n-FSE # show interface status


Port Description Status Speed Duplex Mode Vlan
Eth 1/1/17:1 dormant
Eth 1/1/17:2 dormant
<output truncated>

Recommended port order for MX7116n FEM connectivity
While any QSFP28-DD port can be used for any purpose, the following table and figure outline the recommended, but not
required, port order for connecting the chassis with the MX7116n FEM modules to the MX9116n FSE to optimize NPU utilization.
NOTE: If you are using the connection order shown in the following table, you must change the Port group 9 breakout
type to FabricExpander.

Table 8. Recommended PowerEdge MX7000 chassis connection order

Chassis   MX9116n FSE port group   Physical port numbers
1/2       Port group 1             17 and 18
3         Port group 7             29 and 30
4         Port group 2             19 and 20
5         Port group 8             31 and 32
6         Port group 3             21 and 22
7         Port group 9             33 and 34
8         Port group 4             23 and 24
9         Port group 10            35 and 36
10        Port group 5             25 and 26
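When following this connection order, port group 9 must be changed from its default 2x 100 GbE breakout to Fabric Expander
mode, as noted above. In SmartFabric mode this is done through the OME-M breakout settings; in Full Switch mode, a minimal
sketch is shown below (the mode keyword should be verified for your OS10 release).

OS10(config)# port-group 1/1/9
OS10(conf-pg-1/1/9)# mode FabricExpander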

Figure 35. Recommended MX7000 chassis connection order

Embedded top-of-rack switching


Most environments with blade servers also have rack servers. The following figure shows a typical design having rack servers
connecting to their respective top-of-rack (ToR) switches and blade chassis connecting to a different set of ToR switches. If
the storage array is Ethernet-based, it is typically connected to the core/spine. This design is inefficient and expensive.

Figure 36. Traditional mixed blade/rack networking

Communication between rack and blade servers must traverse the core, increasing latency, and the storage array consumes
expensive core switch ports. All of this results in increased operations cost from the increased number of managed switches.
Embedded ToR functionality is built into the MX9116n FSE. Configure any QSFP28-DD port to break out into 8x 10 GbE or 8x
25 GbE and connect the appropriate cables and optics. This enables all servers and storage to connect directly to the MX9116n



FSE, keeping communication between all devices within the switch. This provides a single point of management and
network security while reducing cost and improving performance and latency.
The preceding figure shows eight switches in total. In the following figure, using embedded ToR, the switch count is reduced
to the two MX9116n FSEs in the two chassis:

Figure 37. MX9116n FSE embedded ToR

MX Chassis management wiring


You can use the automatic uplink detection and network loop prevention features in OME-Modular to connect multiple chassis
with cables. This cabling or wiring method is called stacking. Stacking reduces port usage on the data center switches while
maintaining management access for each chassis in the network.
While wiring a chassis, connect one network cable from each management module to the out-of-band (OOB) management
switch of the data center. Ensure that both ports on the OOB management switch are enabled and are in the same network and
VLAN.
The following image is a representation of the individual chassis wiring:



Figure 38. Individual chassis management wiring

The following image is a representation of the two-chassis wiring:

Figure 39. Two-chassis management wiring

The following image is a representation of the multi-chassis wiring:



Figure 40. Multi-chassis management wiring



Chapter 3: Dell SmartFabric OS10
The networking market is transitioning from a closed, proprietary stack to open hardware supporting various operating systems.
Dell SmartFabric OS10 is designed to allow multilayered disaggregation of the network functionality. While OS10 contributions
to open source give users the freedom and flexibility to choose their own third-party networking, monitoring, management, and
orchestration applications, SmartFabric OS10 bundles an industry-hardened networking stack featuring standard Layer 2 and
Layer 3 protocols over a well-accepted CLI.

Figure 41. Dell SmartFabric OS10 high-level architecture

Operating modes
The Dell Networking MX9116n Fabric Switching Engine (FSE) and MX5108n Ethernet Switch operate in one of two modes:
● Full Switch mode (Default) – All switch-specific SmartFabric OS10 capabilities are available and managed through the CLI.
● SmartFabric mode – Switches operate as a Layer 2 I/O aggregation fabric and are managed through the Open Manage
Enterprise-Modular (OME-M) console.

Full Switch mode


In Full Switch mode, all SmartFabric OS10 features and functions that are supported by the hardware are available to the user.
In other words, the switch operates the same way as any other SmartFabric OS10 switch. Configuration is primarily done using
the CLI, however, the following items can be configured or managed using the OME-M UI:
● Initial switch deployment: Configure hostname, password, SNMP, NTP, and so on
● Monitor health, logs, alerts, and events
● Update the SmartFabric OS10 firmware
● View physical topology



● Switch power management
Full Switch mode is typically used when a desired feature or function is not available when operating in SmartFabric mode. For
more information about Dell SmartFabric OS10 operations, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

SmartFabric mode
A SmartFabric is a logical entity that consists of a collection of physical resources, such as servers and switches, and logical
resources such as networks, templates, and uplinks. The OpenManage Enterprise – Modular (OME-M) console provides a
method to manage these resources as a single unit.
For more information about SmartFabric mode, see Overview of SmartFabric Services for PowerEdge MX on page 73.

Changing operating modes


In both Full Switch and SmartFabric modes, only configuration changes you make using the OME-M UI are retained when you
switch modes. The graphical user interface is used for switch configuration in SmartFabric mode and the OS10 CLI is used for
switch configuration in Full Switch mode.
By default, a switch is in Full Switch mode. When that switch is added to a fabric, it automatically changes to SmartFabric mode.
When you change from Full Switch to SmartFabric mode, all Full Switch CLI configurations are deleted except for the subset of
CLI commands that are supported in SmartFabric mode.



Figure 42. Switch settings saved when switching between operating modes



To change a switch from SmartFabric to Full Switch mode, you must delete the fabric. At that time, only the configuration
changes such as admin password, hostname, and management IP address, will be retained.
NOTE: There is no CLI command to switch between operating modes. Delete the fabric to change from SmartFabric to Full
Switch mode.
The CLI command show switch-operating-mode displays the currently configured operating mode of the switch. This
information is also available on the switch landing page in the OME-M UI.
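For example, from the OS10 CLI:

OS10# show switch-operating-mode

The output reports whether the switch is currently running in Full Switch or SmartFabric mode.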

VLAN restrictions
VLANs 4004 and 4020 are reserved for internal switch communication and cannot be assigned to any interface in Full Switch
or SmartFabric mode. VLAN 4020 is automatically created by the system as the Management VLAN. Do not remove this VLAN,
and do not remove the VLAN tag or edit the Management VLAN on the Edit Uplink page if it is running in SmartFabric mode.
The VLAN and subnet that are assigned to OME-M cannot be used in the data path or fabric of the MX-IOMs. Ensure the
management network used for OME-M does not conflict with networks configured on the fabric. All other VLANs are allowed on
the data plane and can be assigned to any interface.

LLDP for iDRAC


To understand the physical network topology, SmartFabric OS10 discovers end-host devices based on specific custom originator
TLVs in LLDP PDUs sent out through the connected ports by the iDRAC, regardless of whether the switches are in Full Switch
or SmartFabric mode. The types of information provided are shown in the following table.
For servers connected to switches in SmartFabric mode, the iDRAC LLDP topology feature must be enabled. Without it, the
fabric does not recognize the compute sled and the user cannot deploy networks to the sled.
NOTE: Topology LLDP is enabled by default for PowerEdge MX servers and disabled for all other Dell servers. To enable
or disable the feature, open the iDRAC console and navigate to iDRAC Settings > Connectivity > Network > Common
Settings > Topology LLDP.

Table 9. iDRAC LLDP TLVs and subtypes


TLV                   Subtype   Description
Originator            1         Indicates the iDRAC string that is used as originator. This string enables external switches to identify iDRAC LLDP PDUs.
Port type             2         The applicable port types: iDRAC port (dedicated), or iDRAC and NIC port (shared).
Port FQDD             3         Port number that uniquely identifies a NIC port within a server.
Server service tag    4         Service tag ID of the server.
Server model name     5         Model name of the server.
Server slot number    6         Slot number of the server. For example: 1, 2, 3, 1a, and 1b.
Chassis service tag   7         Service tag ID of the chassis (applicable only to MX servers).
Chassis model         8         Model name of the chassis (applicable only to MX servers).
IOM service tag       9         Service tag ID of the IOM device (applicable only to MX servers).
IOM model name        10        Model name of the IOM device (applicable only to MX servers).
IOM slot label        11        Slot label of the IOM device. For example: A1, B1, A2, and B2 (applicable only to MX servers).
IOM port number       12        Port number of the NIC. For example: 1, 2, 3, and so on.



For additional information about LLDP and TLVs, see the Link Layer Discovery Protocol section of the Dell SmartFabric OS10
User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
See the Common CLI troubleshooting commands for Full Switch and SmartFabric modes on page 157 section for examples of
the show lldp neighbors command, which provides information about connected devices.

Virtual Link Trunking


Virtual Link Trunking (VLT) aggregates two identical physical switches to form a single logical extended switch. However, each
of the VLT peers has its own control and data planes and can be configured individually for port, protocol, and management
behaviors. Though the dual physical units act as a single logical unit, the control and data plane of both switches remain isolated,
ensuring high availability and high resilience for all its connected devices. This differs from the legacy stacking concept, where
there is a single control plane across all switches in the stack, creating a single point of failure.
With the critical need for high availability in modern data centers and enterprise networks, VLT plays a vital role connecting with
rapid convergence, seamless traffic flow, efficient load balancing, and loop free capabilities.
With the instantaneous synchronization of MAC and ARP entries, both the nodes remain active/active and continue to forward
the data traffic seamlessly.
VLT is required when operating in SmartFabric mode.
For more information about VLT, see the Virtual Link Trunking chapter in the Dell SmartFabric OS10 User Guide. Find the
relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
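In Full Switch mode, a VLT domain is configured from the OS10 CLI on both peers. The following is a minimal sketch only; the
domain ID, VLTi interface range, and backup destination address are illustrative values and must match your cabling and
management network.

OS10(config)# vlt-domain 1
OS10(conf-vlt-1)# discovery-interface ethernet1/1/37-1/1/40
OS10(conf-vlt-1)# backup destination 100.67.100.11

In SmartFabric mode, the VLT and VLTi configuration is created automatically as part of fabric creation, so no CLI
configuration is required.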

Storage networking
PowerEdge MX Ethernet I/O modules support Fibre Channel (FC) connectivity in different ways:
● Direct Attach, also called F_Port
● NPIV Proxy Gateway (NPG)
● FIP Snooping Bridge (FSB)
● Internet Small Computer Systems Interface, or iSCSI
The method to implement depends on the existing infrastructure and application requirements. Consult your Dell representative
for more information.
Configuring FC connectivity in SmartFabric mode is simple and is almost identical across the three connectivity types.
NOTE: The PowerEdge MX Platform supports all Dell PowerStore storage appliance models. This document provides
example deployments that include the PowerStore 1000T appliance. For specific details on PowerStore appliance models,
see the Dell PowerStore T page.

NPIV Proxy Gateway


The most common connectivity method, NPIV Proxy Gateway mode (NPG) is used when connecting PowerEdge MX to a
storage area network that hosts a storage array. NPG mode is simple to implement as there is little configuration that must be
done. The NPG switch converts FCoE from the server to native FC and aggregates the traffic into an uplink. The NPG switch is
effectively transparent to the FC SAN, which “sees” the hosts themselves. This mode is supported only on the MX9116n FSE.
OS10 supports configuring N_Port mode on an Ethernet port that connects to converged network adapters (CNAs). An NPG node
port (N_Port) is a port on a network node that acts as a host or initiator device and is used in FC point-to-point or FC switched
fabric topologies. N_Port ID Virtualization (NPIV) allows multiple N_Port IDs to share a single physical N_Port.
In the deployment example shown below, MX9116n IOMs are configured as NPGs connected with pre-configured FC switches
using port 1/1/44 on each MX9116n to allow connectivity to a Dell PowerStore 1000T storage array. Port-group 1/1/16 is
configured as 4x 16 GFC to convert physical port 1/1/44 into 4x 16 GFC connections. MX9116n FSE universal ports 44:1 and 44:2
are used for FC connections and operate in N_Port mode to connect to the FC switches. The FC Gateway uplink type enables
N_Port functionality on the MX9116n unified ports, converting FCoE traffic to native FC traffic and passing that traffic to a
storage array through FC switches.



Figure 43. Fibre Channel NPG network to Dell PowerStore 1000T SAN

NOTE: For more information about configuration and deployment, see Scenario 5: Connect MX9116n FSE to Fibre Channel
storage - NPIV Proxy Gateway mode on page 198.
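For Full Switch mode, the corresponding configuration can be sketched as follows. This is a partial, illustrative outline only
(the vfabric/FCoE VLAN mapping to server-facing ports is omitted); verify each command in the SmartFabric OS10 User Guide
for your release.

OS10(config)# feature fc npg
OS10(config)# port-group 1/1/16
OS10(conf-pg-1/1/16)# mode FC 16g-4x
OS10(conf-pg-1/1/16)# exit
OS10(config)# interface fibrechannel 1/1/44:1
OS10(conf-if-fc1/1/44:1)# no shutdown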

Direct attached (F_Port)


Direct Attached mode, or F_Port, is used when FC storage needs to be directly connected to the MX9116n FSE. The MX9116n
supports the required FC services such as name server and zoning that are typical of standard FC switches.
This example demonstrates the direct attachment of the Dell PowerStore 1000T storage array. MX9116n FSE universal ports
44:1 and 44:2 are required for FC connections and operate in F_Port mode, which allows for an FC storage array to be
connected directly to the MX9116n FSE. The uplink type enables F_Port functionality on the MX9116n unified ports, converting
FCoE traffic to native FC traffic and passing that traffic to a directly attached FC storage array.
This mode is supported only on the MX9116n FSE.

Figure 44. Fibre Channel (F_Port) direct attach network to Dell PowerStore 1000T SAN

NOTE: For more information on configuration and deployment, see Scenario 6: Connect MX9116n FSE to Fibre Channel
storage - FC Direct Attach on page 202.

FCoE Transit or FIP Snooping Bridge


The FCoE Transit, or FIP Snooping Bridge (FSB) mode is used when connecting the Dell PowerEdge MX to an upstream switch,
such as the Dell PowerSwitch S4148U that accepts FCoE and converts it to native FC. This mode is typically used when
an existing FCoE infrastructure is in place that PowerEdge MX must connect to. In the following example, the PowerSwitch
S4148U-ON receives FCoE traffic from the MX5108n Ethernet switch, converts that FCoE traffic to native FC, and passes that
traffic to an external FC switch.
When operating in FSB mode, the switch snoops Fibre Channel over Ethernet (FCoE) Initialization Protocol (FIP) packets on
FCoE-enabled VLANs, and discovers the following information:



● End nodes (ENodes)
● Fibre channel forwarders (FCFs)
● Connections between ENodes and FCFs
● Sessions between ENodes and FCFs
Using the discovered information, the switch installs ACL entries that provide security and point-to-point link emulation. This
mode is supported on both the MX9116n FSE and the MX5108n Ethernet Switch.

Figure 45. FCoE (FSB) network to Dell PowerStore 1000T SAN through S4148U-ON NPG switch

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.

NOTE: For more information about configuration and deployment, see Scenario 7: Connect MX5108n to Fibre Channel
storage - FSB on page 207.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell SmartFabric OS10 User Guide.
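For example, to make an upstream OS10 switch the root bridge for VLAN 10 (the VLAN and priority are illustrative values):

OS10(config)# spanning-tree vlan 10 priority 0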

iSCSI
iSCSI is a transport layer protocol that embeds SCSI commands inside of TCP/IP packets. TCP/IP transports the SCSI
commands from the Host (initiator) to storage array (target). iSCSI traffic can be run on a shared or dedicated network
depending on application performance requirements.
In the example below, MX9116n FSEs are connected to Dell PowerStore 1000T storage array controllers SP A and SP B through
ports 1/1/41:1-2. If there are multiple paths from host to target, iSCSI can use multiple sessions for each path. Each path from
the initiator to the target will have its own session and connection. This connectivity method is often referred to as “port
binding”. Dell Technologies recommends that you use the port binding method for connecting the MX environment to the
PowerStore 1000T storage array. Configure multiple iSCSI targets on the PowerStore 1000T and establish connectivity from the
host initiators (MX compute sleds) to each of the targets. When Logical Unit Numbers (LUNs) are successfully created on the
target, host initiators can make connections to the target through iSCSI sessions. For more information, see the Dell
PowerStore T page.
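As a minimal Full Switch mode sketch matching the port numbering in the figure below, QSFP28 port group 13 (member port
1/1/41) is broken out into 4x 25 GbE to provide interfaces 1/1/41:1 and 1/1/41:2 for the iSCSI paths; adapt the port group,
breakout mode, and interfaces to your environment.

OS10(config)# port-group 1/1/13
OS10(conf-pg-1/1/13)# mode Eth 25g-4x
OS10(conf-pg-1/1/13)# exit
OS10(config)# interface ethernet 1/1/41:1
OS10(conf-if-eth1/1/41:1)# no shutdown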



Figure 46. iSCSI network to Dell PowerStore 1000T

NVMe/TCP

OME-M 1.40.20 NVMe/TCP support


With the release of OME-M 1.40.20, the MX platform supports NVMe/TCP.
NVMe/TCP and SFSS solutions with PowerEdge MX require PowerEdge MX Baseline 22.03.00 (1.40.20) and are supported in
Full Switch mode only. Converged FCoE and NVMe/TCP on the same IOM is not currently supported.

OME-M 2.00.00 NVMe/TCP support


NVMe/TCP and SFSS solutions with PowerEdge MX require PowerEdge MX Baseline 22.09.00 (2.00.00) and are supported in
Full Switch mode only. Converged FCoE and NVMe/TCP on the same IOM is not currently supported. NVMe/TCP and SFSS
solutions with SmartFabric mode are not currently supported; the new Storage - NVMe/TCP VLAN type is defined for future
support with SmartFabric mode.
For more information, refer to the following resources:

Resource                                                       Description
SFSS Deployment Guide                                          This document demonstrates the planning and deployment of SmartFabric Storage Software (SFSS) for NVMe/TCP.
NVMe/TCP Host/Storage Interoperability Simple Support Matrix   This document provides information about the NVMe/TCP Host/Storage Interoperability support matrix.
NVMe/TCP Supported Switches Simple Support Matrix              This document provides information about the NVMe/TCP Supported Switches Simple Support matrix.

Host FCoE session load balancing


Host FCoE session load balancing differs depending on the version of OS10 that is being used.

OS10 version 10.5.2.4 or later


The FC uplinks from the MX9116n follow industry-standard protocols. Unlike the Ethernet LACP Link Aggregation Group (LAG)
protocol, there is no industry-standard mechanism for bonding multiple FC uplinks together. Because of this, Fibre Channel
switch manufacturers independently developed their own proprietary mechanisms that are not interoperable. This prevents the
MX9116n FC uplinks to be bonded using the native or proprietary protocols.
Instead, load balancing is achieved through a single Fibre Channel Forwarder (FCF) per vFabric. The following describes the
behavior of the logical FCF:



● This feature presents all available operational Fibre Channel uplinks in a fabric as a single logical unit. The uplinks are
presented as one logical Fibre Channel Forwarder (FCF) to the end points connected to the same fabric.
● Better load balancing is achieved during boot-up and bulk configuration by requiring that the FC uplink has successfully
completed the initial login with the upstream switch at the time of timer expiry.
NOTE: Set timeout value using the CLI command fcoe delay fcf-adv timeout.
● The system selects the optimally loaded FC uplink; the load balancing algorithm uses the link's session count and
the link speed as factors for session re-balancing.
● End devices do not have control over the link chosen for session establishment. This behavior ensures better load balancing
across the available uplinks. After the session is established, the FCoE/FC data traffic is re-directed to the appropriate port
to which the login request was associated.
NOTE: As of OME-M 1.20.00 and OS10.5.0.7, it is possible to rebalance FCoE sessions across FCFs. For more
information, see Rebalancing FC and FCoE sessions on page 154.
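For the timer mentioned above, the FCF advertisement delay can be adjusted from the CLI. The value below is an example only,
and the exact argument form should be verified for your OS10 release:

OS10(config)# fcoe delay fcf-adv 30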

OS10 version 10.5.1.9 and earlier


The FC uplinks from the MX9116n follow industry-standard protocols. Unlike the Ethernet LACP Link Aggregation Group (LAG)
protocol, there is no industry-standard mechanism for bonding multiple FC uplinks together. Because of this, Fibre Channel
switch manufacturers independently developed their own proprietary mechanisms that are not interoperable. This prevents the
MX9116n FC uplinks to be bonded using the native or proprietary protocols.
Instead, the following load balancing rules are used:
● Load is calculated based on number of server sessions that are connected to the fibre channel forwarder (FCF). The FCF
runs in OS10 and provides the FC gateway functionality. There is one FCF for each physical uplink.
● If only one FCF is available, then all the servers form FCoE sessions with that FCF.
● In the case of multiple FCFs, the NPG module running in OS10 provides the least loaded FCF available at that time to the
next server that logs in to the FC fabric.
● Load balancing is performed only during the server login process.
● If a new FCF/uplink is created, existing server sessions are not automatically rebalanced to the new FCF. New
server sessions will leverage the new FCF.
● Once a server is logged in to an FCF, it does not shift to the least loaded FCF until there is a disruption to the existing session.
NOTE: As of OME-M 1.20.00 and OS10.5.0.7, it is possible to rebalance FCoE sessions across FCFs. For more
information, see Rebalancing FC and FCoE sessions on page 154.

PowerEdge MX IOM operations


Dell PowerEdge MX switches can be managed using the OME-M console. From the Switch Management page, you can
view activity, health, and alerts. The Switch Management page also allows you to perform operations such as power control,
firmware update, and port configuration. Many of these operations can also be performed in Full Switch mode.

Switch Management page overview


To access the Switch Management page:
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select the preferred switch.
NOTE: In the following example, the MX9116n FSE IOM-A1 is selected.



Figure 47. IOM Overview page on OME-M

Switch Overview
The Overview page provides a convenient location to view the pertinent data on the IOM such as:
● Chassis information
● Recent alerts
● Recent activity
● IOM subsystems
● Environment
The Power Control drop-down button provides three options:
● Power Off: Turns off the IOM
● Power Cycle: Power cycles the IOM
● System Reset: Initiates a cold reboot of the IOM

Figure 48. Power Control options



The Blink LED drop-down button provides an option to turn the ID LED on the IOM on or off. To turn on the ID LED, select
Blink LED > Turn On. This selection activates a blinking blue LED which provides easy identification. To turn off the blinking ID
LED, select Blink LED > Turn Off.

Figure 49. Blink LED button

Hardware tab
The Hardware tab provides information about the following IOM hardware:
● FRU
● Device Management Info
● Installed software
● Port information

Figure 50. Hardware tab

In SmartFabric mode, the Port Information tab provides useful operations such as:
● Configuring port-group breakout
● Toggling the admin state of ports
● Configuring MTU of ports
● Toggling Auto Negotiation
● Setting the port description
NOTE: Do not use the OME-M UI to manage ports of a switch in Full Switch mode.



Figure 51. Port Information

View port status


The OME-M console can be used to show the port status. In this example, the figure displays ports for an MX9116n FSE.
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select an IOM and click the View Details button to the right of the Inventory screen. The IOM Overview displays for that
device.
4. From IOM Overview, click Hardware.
5. Click to select the Port Information tab.
The image below shows Ethernet 1/1/1, 1/1/3, 1/71/1, and 1/72/1 in the correct operational status, which is Up. The interfaces
correspond to the MX740c compute sleds in slots 1 and 2 in both chassis. The figure also shows the VLT connection (port
channel 1000) and the uplinks (port channel 1) to the S5232F-ON leaf switches.



Figure 52. IOM port information



Firmware tab
The Firmware tab provides options to manage the firmware on the IOM. For more information about updating switch firmware,
see Upgrading Dell SmartFabric OS10 on page 59.

Figure 53. Firmware tab

Upgrading Dell SmartFabric OS10


Upgrading the IOMs in the fabric should be done using the OME-M console. The upgrade is carried out using a Dell Update
Package (DUP). A DUP is a self-contained package format that updates a single element on a system. Using DUPs, you can
update a wide range of system components simultaneously and apply scripts to similar sets of Dell systems to bring their
components to the same version levels. As of OME-M 1.30.00 and OS10.5.2.4, the OS10 DUP is carried in the online firmware
catalog and can be installed as part of a firmware baseline. Earlier versions of the OS10 DUP are not carried in the online
firmware catalog and must be downloaded from https://www.dell.com/support/.
NOTE: To access the complete inventory of drivers and other downloads specific for your system, sign in to your Dell
Support account.

NOTE: The following phased update order helps you to manually orchestrate MX component updates with no workload
disruption:
1. OME-Modular application
2. Network IOMs (SmartFabric and Full Switch) and SAS IOMs
3. Servers: phased update of servers (depending on clustering solution)

NOTE: When upgrading OS10, always perform the upgrade as part of an overall MX baseline. Follow the installation
instructions in the OME-M User's Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.



Figure 54. Download page file for MX9116n FSE

NOTE: If an IOM is in SmartFabric mode, all the switches that are part of the fabric are updated in sequence automatically.
Do not select both of the switches in the fabric to update.

NOTE: If an IOM is in Full Switch mode, the firmware upgrade is completed only on the specific IOMs that are selected in
the UI.
For step-by-step instructions about how to upgrade OS10 on PowerEdge MX IO modules along with a version-to-version
upgrade matrix, see the OME-M User's Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.

Alerts tab
The Alerts tab displays alert information and notifies the administrator. The advanced filter option can be used to
quickly filter alerts. Various operations can be performed on one or more alerts, such as:
● Acknowledge
● Unacknowledge
● Ignore



● Export
● Delete

Figure 55. Alerts tab

Settings tab
The Settings tab provides options to configure the following settings for the IOMs:
● Network
● Management
● Monitoring
● Advanced Settings

Figure 56. Settings tab

Network
The Network option includes configuring IPv4, IPv6, DNS Server, and Management VLAN settings.

Figure 57. Network settings

Management
The Management option includes setting the hostname and admin account password.
NOTE: Beginning with OME-M 1.20.00 and OS10.5.0.7, this field sets the admin account password. For OME-M 1.10.20
and OS10.5.0.5 and earlier, the Root Password field sets the OS10 linuxadmin account password.
The default username for CLI access is admin and the password is admin.



Figure 58. Management settings

Monitoring
The Monitoring section provides options for SNMP settings.

Figure 59. Monitoring settings option

Advanced Settings
The Advanced Settings tab offers the option for time configuration replication and alert replication. Select the Replicate
Time Configuration from Chassis check box to replicate the time settings that are configured in the chassis to the IOM.
Select the Replicate Alert Destination Configuration from Chassis check box to replicate the alert destination settings that
are configured in the chassis to the IOM.

Figure 60. Advanced settings option

OS10 privileged accounts


OS10 uses two privileged user accounts:
● For day-to-day operations, the default administrative account has the user name admin and the default password admin.
● For specific troubleshooting needs, Dell Technologies support may have you log in to the Linux shell.



NOTE: The Linux shell account is linuxadmin and the default password is linuxadmin.

NOTE: You cannot delete the default linuxadmin user name. The default admin user name can only be deleted if at
least one OS10 user with the sysadmin role is configured.
For more information on OS10 privileged accounts, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

Setting the OS10 admin account password using OME-M


To configure the OS10 admin account password, access the OME-M UI. Choose Devices > I/O Modules, select an IOM, and
choose Settings.

Figure 61. Set password on OME-M

NOTE: Passwords require a minimum of nine characters.

NOTE: OME-M versions prior to 1.20.00 will set the linuxadmin password, instead of the 'admin' password, when using
this page.
If the MXG610s I/O module is selected, this procedure sets the admin account password for the Fabric OS running on the IOM.

Failure to set the password message


The following error message displays if the password requirements are not met.

Figure 62. Error message for password requirements failure

Validate password configuration


SSH to the switch and log in using the new password to ensure that the new password has been set.
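For example, from a management workstation (the host name below is a placeholder for the IOM management IP address):

ssh admin@<IOM-management-IP>

After logging in with the new password, running show version also confirms that you are connected to the expected switch.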

NIC teaming guidelines


While NIC teaming is not required, it is suggested for redundancy unless a specific implementation recommends against it.



There are two main kinds of NIC teaming:

Switch dependent: Also referred to as LACP, 802.3ad, or Dynamic Link Aggregation, this teaming method uses the LACP
protocol to understand the teaming topology. It provides active/active teaming and requires the switch to support LACP
teaming.

Switch independent: This method uses the operating system and NIC device drivers on the server to team the NICs. Each NIC
vendor may provide slightly different implementations with different pros and cons.

NIC Partitioning (NPAR) can impact how NIC teaming operates. Based on restrictions that the NIC vendors implement and that
are related to NIC partitioning, certain configurations preclude certain types of teaming.
The following restrictions are in place for both Full Switch and SmartFabric modes:
● If NPAR is not in use, both switch-dependent (LACP and static LAG) and switch-independent teaming methods are
supported.
● If NPAR is in use, only switch-independent teaming methods are supported. Switch-dependent teaming (LACP and static
LAG) is not supported.
If switch dependent (LACP) teaming is used, the following restrictions are in place:
● The iDRAC shared LAN on motherboard (LOM) feature can only be used if the Failover option on the iDRAC is enabled.
● If the host operating system is Microsoft Windows, the LACP timer MUST be set to Slow, also referred to as Normal.
Refer to the network adapter or operating system documentation for detailed NIC teaming instructions.
● For Microsoft Windows Server 2012 R2, refer to the Instructions section
● For Microsoft Windows Server 2016, refer to the Instructions section
NOTE: For deployments utilizing NPAR on the MX Platform with VMware solutions, contact Dell Support.

The following table shows the options that the MX Platform provides for NIC teaming:

Table 10. NIC teaming options on the MX Platform

Teaming option    Description
No teaming        No NIC bonding, teaming, or switch-independent teaming
LACP teaming      LACP (also called 802.3ad or dynamic link aggregation)
Other             Other teaming methods
NOTE: If using the Broadcom 57504 Quad-Port NIC and two separate LACP groups are needed, select the Other option and
configure the LACP groups in the operating system. Otherwise, this setting is not recommended as it can have a
performance impact on link management.

NOTE: LACP Fast timer is not currently supported.
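For reference, switch-dependent (LACP) teaming pairs with an LACP port channel on the switch side. The following Full
Switch mode sketch shows a server-facing LACP port channel on an MX IOM; the port-channel number and interface are
examples only, and in SmartFabric mode this configuration is handled automatically by the fabric:

MX9116N-A1(config)# interface port-channel 10
MX9116N-A1(conf-if-po-10)# switchport mode trunk
MX9116N-A1(conf-if-po-10)# exit
MX9116N-A1(config)# interface ethernet 1/1/1
MX9116N-A1(conf-if-eth1/1/1)# channel-group 10 mode active
MX9116N-A1(conf-if-eth1/1/1)# end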



Chapter 4: Full Switch Mode
VLAN scaling guidelines for Full Switch mode
When running RSTP with IGMP snooping disabled, the following table indicates the total number of Port VLAN (PV) combinations
that are supported. This number is calculated by multiplying the total number of VLANs provisioned on the switch by the
number of active ports, including VLTi and uplink port channels. For example, a switch with 20 active ports and 200 provisioned
VLANs has a PV value of 4,000 (20 x 200). SmartFabric OS10 includes the scale-profile vlan command, which enables a
larger PV value. On OS10 version 10.5.2.4 and earlier, IGMP/MLD snooping cannot be enabled when scale-profile vlan is
enabled. For more information on this command and its use, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.
NOTE: Enabling scale-profile vlan does not require a reboot of the switch; however, any VLANs created before
enabling it will not support the additional VLAN scale capabilities until the switch has been rebooted.

NOTE: Prior to enabling scale-profile vlan, add the mode L3 command to VLAN 4020 and any VLANs with FCoE
or routing enabled. Failure to do this will disrupt network traffic on those VLANs, including access to the management
interface on the switch. For more information on this command and its use, find the relevant version of the User Guide in
the OME-M and OS10 compatibility and documentation table.
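The following is a minimal sketch of that order of operations, using VLAN 4020 from the note above as the example; verify
the exact syntax against the User Guide for your OS10 version:

MX9116N-A1(config)# interface vlan 4020
MX9116N-A1(conf-if-vl-4020)# mode L3
MX9116N-A1(conf-if-vl-4020)# exit
MX9116N-A1(config)# scale-profile vlan
MX9116N-A1(config)# end
MX9116N-A1# write memory

Remember that VLANs created before enabling the profile do not gain the additional scale capabilities until the switch is
rebooted.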

Table 11. Supported Port VLAN values

OS10 version   Platform   With scale-profile vlan enabled   Without scale-profile vlan enabled
10.5.4.1       MX5108n    45,000 PV                         10,000 PV
               MX9116n    200,000 PV                        30,000 PV
10.5.3.1       MX5108n    45,000 PV                         10,000 PV
               MX9116n    180,000 PV                        30,000 PV
10.5.2.6       MX5108n    30,000 PV                         10,000 PV
10.5.2.9       MX9116n    60,000 PV                         30,000 PV
10.5.1.6       MX5108n    30,000 PV                         10,000 PV
10.5.1.7       MX9116n    60,000 PV                         30,000 PV
10.5.0.7       MX5108n    20,000 PV                         10,000 PV
               MX9116n    60,000 PV                         20,000 PV

NOTE: When the PV value becomes very large, some show commands may take additional time to execute. This delay does
not impact switching performance, only the CLI display function.

Managing Fibre Channel Zones on MX9116n FSE


When a storage array is directly connected to the MX9116n FSE, Fibre Channel Zones can be used to improve security and
performance.
Preparation of the servers is the same as mentioned in Server preparation on page 109. Determine the FC WWPNs for the
compute sleds and storage array as discussed in Dell PowerStore 1000T on page 235.
NOTE: FC zoning is supported in both SmartFabric mode and Full Switch mode. In each mode, the FC zones are configured
through the CLI as shown in the example below.



These examples assume that the storage array has been successfully connected to the MX9116n FSE's FC uplinks and that
there are no errors.
The following examples show the steps and commands to configure FC zoning.
NOTE: For more information about the Dell SmartFabric OS10 Fibre Channel capabilities and commands, find the relevant
version of the User Guide in the OME-M and OS10 compatibility and documentation table.
These examples are valid for both Full Switch and SmartFabric modes.
NOTE: For the default zone settings to work properly, ensure that the maximum number of logged-in FC and FCoE nodes is
less than 120.

Configure FC aliases for server and storage adapter WWPNs


An FC alias is a human-defined name that references a WWN, which allows users to refer to devices by an easy-to-remember
alias instead of a long WWN. In this example, aliases for two MX740c compute sleds and a Dell PowerStore 1000T storage
array are defined.
The WWNs for the servers are obtained using the OME-M console.

MX9116n-A1:

configure terminal
fc alias mx740c-1p1
 member wwn 20:01:00:0E:1E:09:A2:3A
fc alias mx740c-2p1
 member wwn 20:01:00:0E:1E:09:B8:F6
fc alias SpA-0
 member wwn 50:06:01:66:47:E0:1B:19
fc alias SpB-0
 member wwn 50:06:01:6E:47:E0:1B:19

MX9116n-A2:

configure terminal
fc alias mx740c-1p2
 member wwn 20:01:00:0E:1E:09:A2:3B
fc alias mx740c-2p2
 member wwn 20:01:00:0E:1E:09:B8:F7
fc alias SpA-1
 member wwn 50:06:01:67:47:E0:1B:19
fc alias SpB-1
 member wwn 50:06:01:6F:47:E0:1B:19

Create FC zones
Server and storage adapter WWPNs, or their aliases, are combined into zones to allow communication between devices in the
same zone. Dell Technologies recommends single-initiator zoning; in other words, no more than one server HBA port per zone.
For high availability, each server HBA port should be zoned to at least one port from SP A and one port from SP B. In this
example, one zone is created for each server HBA port. Each zone contains the server port and the two storage processor ports
that are connected to the same MX9116n FSE.

NOTE: The maximum number of members in an FC zone is 255.

MX9116n-A1:

fc zone mx740c-1p1zone
 member alias-name mx740c-1p1
 member alias-name SpB-0
 member alias-name SpA-0
fc zone mx740c-2p1zone
 member alias-name mx740c-2p1
 member alias-name SpB-0
 member alias-name SpA-0

MX9116n-A2:

fc zone mx740c-1p2zone
 member alias-name mx740c-1p2
 member alias-name SpB-1
 member alias-name SpA-1
fc zone mx740c-2p2zone
 member alias-name mx740c-2p2
 member alias-name SpB-1
 member alias-name SpA-1



Create zone set
A zone set is a collection of zones. A zone set named zoneset1 is created on each switch, and the zones are added to it.

MX9116n-A1:

fc zoneset zoneset1
 member mx740c-1p1zone
 member mx740c-2p1zone
 exit

MX9116n-A2:

fc zoneset zoneset1
 member mx740c-1p2zone
 member mx740c-2p2zone
 exit

Activate zone set


Once the zone set is created and members are added, activating the zone set is the last step in the process. After the zone set
is activated, save the configuration using the write memory command.

MX9116n-A1:

vfabric 1
 zoneset activate zoneset1
 exit
write memory

MX9116n-A2:

vfabric 1
 zoneset activate zoneset1
 exit
write memory
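To confirm that zoning took effect, the configured aliases and the active zone set can be displayed on each switch. The
commands below are a sketch; verify the exact show command names against the User Guide for your OS10 version:

MX9116N-A1# show fc alias
MX9116N-A1# show fc zoneset active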

Full Switch mode IO module replacement process


NOTE: If you are replacing an I/O module (IOM) in SmartFabric mode prior to OME-M version 1.30.00, the process used
depends on the version of OS10 installed and should be run with Dell Technical Support engaged. For technical support, go
to https://www.dell.com/support or call (USA) 1-800-945-3355. With OME-M 1.30.00 and later, see the SmartFabric mode
IOM replacement process section.

NOTE: A new replacement IOM will have a factory default configuration. All port interfaces in the default configuration are
in the no shutdown state.

In Full Switch mode, the Dell PowerEdge MX platform gives you the option to replace an I/O module in the case of persistent
errors or failures. A failed MX9116n FSE or MX5108n must be replaced with another I/O module of the same type.
Follow the instructions in this section to replace a failed I/O module.
Prerequisites:
● The replacement IOM must be a new device within the chassis deployment. Do not use an IOM that was previously deployed
within the MCM group.
● The other IOM in Full Switch mode must be up, running, and healthy; otherwise a complete traffic outage may occur.
● The new IOM must have the same OS10 version as the faulty IOM.
NOTE: OS10 is factory-installed in the MX9116n FSE or MX5108n Ethernet Switch. If the faulty IOM has an upgraded
version of OS10, you must upgrade the new IOM to the same version.
The following is an overview of the module replacement process:
1. Back up the IOM configuration.
2. Physically replace the IOM.
3. Verify firmware versions and configure the IOM settings.
4. Restore the IOM configuration.
5. Connect the cables to the new IOM.



Back up the IOM configuration
If possible, obtain a current backup of the running configuration for the IOM being replaced. The running configuration contains
the current OS10 system configuration and consists of a series of OS10 commands.
For instructions on how to back up the switch configuration, find the relevant version of the User Guide in the OME-M and
OS10 compatibility and documentation table.
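As a sketch, the saved startup configuration can be copied off-switch with the OS10 copy command; the SCP user, host, and
file name below are placeholders:

MX9116N-A1# write memory
MX9116N-A1# copy config://startup.xml scp://admin@backup-host/mx9116n-a1-startup.xml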

Physically replace the IOM


Perform the following steps to physically replace an IOM:
1. Identify the faulty IOM to replace.
2. Carefully record the cable and port connections to ensure that the correct cables are connected to the correct ports once
the replacement IOM is installed. Disconnect the cables connected to the faulty IOM.
3. Remove the faulty IOM and set it aside.
4. Insert the new IOM in the same slot as the failed IOM.
NOTE: The model of the new IOM must be the same, and the new IOM must have the same version of SmartFabric
OS10 as the old IOM.
5. Confirm that the new IOM has been recognized by OME-M before proceeding further.

Verify firmware versions and configure the IOM settings


Verify the firmware version on the new IOM using the show version command. If required, upgrade the firmware on the new
IOM. To view a pending firmware upgrade, use the show image firmware command. For more information, see the Install
firmware upgrade section in the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M
and OS10 compatibility and documentation table.
Configure the hostname and IP management protocols (such as SNMP and NTP) on the new IOM and then restore the
configuration to the new switch. For more information, see the System management chapter in the Dell SmartFabric OS10 User
Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
NOTE: When you remove the faulty IOM in Full Switch mode, the CLI configurations are lost. Reapply the configurations to
the new IOM using the OS10 CLI.

Restore the IOM configuration


To restore a backup configuration, copy a local or remote file to the startup configuration and reload the switch.
See the Dell SmartFabric OS10 User Guide for instructions on how to restore the switch configuration. Find the relevant version
of the User Guide in the OME-M and OS10 compatibility and documentation table.
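A minimal restore sketch follows; the SCP user, host, and file name are placeholders, and the exact procedure should be
verified in the User Guide for your OS10 version:

MX9116N-A1# copy scp://admin@backup-host/mx9116n-a1-startup.xml config://startup.xml
MX9116N-A1# reload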

Connect the cables to the new IOM


The I/O module is now ready to be used. Connect the network cables in the same configuration that was used on the failing I/O
module.

VLAN stacking
Dell Technologies introduces VLAN stacking in Dell SmartFabric OS10.5.4.0. This feature, commonly called Q-in-Q, is available
for use on the Dell PowerEdge MX platform in Full Switch mode starting with version OS10.5.4.1.
VLAN stacking is often recommended for the service provider use case. It enables service providers to offer separate VLANs
to customers with no coordination between customers and minimal coordination between customers and the provider. VLAN
stacking allows service providers to add their own VLAN tag to data or control frames traversing the provider network, so
the provider can differentiate customers even if those customers use the same VLAN ID. The provider's network forwarding
decisions are based on the provider VLAN tag only. This tag enables the provider to map traffic through the core
independent of the customer; the customer and provider only coordinate at the provider edge.



At the access point of a VLAN-stacking network, service providers add a VLAN tag, the S-Tag, to each frame before the 802.1Q
tag. From this point on, the frame is double tagged. The service provider uses the S-Tag to forward frame traffic across its
network. At the egress edge, the provider removes the S-Tag so that the customer receives the frame in its original condition,
as shown in the following figure.

Figure 63. Addition (ingress) and removal (egress) of the S-Tag before the original 802.1Q header

Another use case, more suited to the Dell PowerEdge MX platform, is to allow the MX7000 chassis, or the MX Scalable
Fabric, to be treated as a single workload from the perspective of the top-of-rack (ToR) leaf pair. VLAN stacking allows
many workloads with unique VLANs to be represented by a single stack VLAN on the uplink of the MX IOMs. VLAN changes can
then occur within the MX Scalable Fabric on each server without requiring network administrators to change the
configuration in the overall data center, which gives the PowerEdge MX platform greater flexibility for VLAN management
and scaling.
The following diagrams demonstrate a few topologies:



Figure 64. VLAN stacking to a data center - One leaf pair

Figure 65. VLAN stacking to a data center - Multiple leaf pairs



Figure 66. 802.1Q header and port types for VLAN stacking to a data center



For more information about VLAN Stacking, see the VLAN Stacking section in the Dell SmartFabric OS10 User Guide.
Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.



Chapter 5: Overview of SmartFabric Services for PowerEdge MX
A SmartFabric is a logical entity that consists of a collection of physical resources, such as servers and switches, and logical
resources such as networks, templates, and uplinks. The OpenManage Enterprise – Modular (OME-M) console provides a
method to manage these resources as a single unit.

Functional overview
SmartFabric mode provides the following functionality:
● Data center modernization
○ I/O aggregation
○ Plug-and-play fabric deployment
○ Single interface to manage all switches in the fabric
● Lifecycle management
○ Fabric-wide SmartFabric OS10 updates
○ Automated or user-enforced rollback to last well-known state
● Fabric automation
○ Physical topology compliance
○ Server networking managed using templates
○ Automated QoS assignment per VLAN
○ Automated storage networking
● Failure remediation
○ Dynamically adjusts bandwidth across all interswitch links in the event of a link failure
○ Automatically detects fabric misconfigurations or link level failure conditions
○ Automatically heals the fabric on failure condition removal
NOTE: In SmartFabric mode, MX series switches operate entirely as a Layer 2 network fabric. Layer 3 protocols are not
supported.

OS10 operating mode differences


The following table outlines the differences between the two operating modes. These differences apply to both the MX9116n
FSE and the MX5108n switches.

Table 12. OS10 operating mode differences

Full Switch mode: Configuration changes are persistent during power cycle events.
SmartFabric mode: Only the configuration changes made using the following OS10 commands are persistent across power
cycle events; all other CLI configuration commands are disabled:
clock, fc alias, fc zone, fc zoneset, hostname, host-description, interface, ip nameserver, ip ssh server,
ip telnet server, login concurrent-session, login statistics, logging, management route, ntp, snmp-server,
tacacs-server, username, spanning-tree, vlan

Full Switch mode: All switch interfaces are assigned to VLAN 1 by default and are in the same Layer 2 bridge domain.
SmartFabric mode: Layer 2 bridging is disabled by default. Interfaces must join a bridge domain (VLAN) before being able
to forward frames.

Full Switch mode: All configuration changes are saved in the running configuration by default. To display the current
configuration, use the show running-configuration command.
SmartFabric mode: Verify configuration changes using feature-specific show commands, such as show interface and
show vlan, instead of show running-configuration.

CLI commands available in SmartFabric mode


When operating in SmartFabric mode, access to CLI commands is restricted to SmartFabric OS10 show commands and the
following subset of CLI configuration commands:
● clock – Configure clock parameters
● end – Exit to the EXEC mode
● exit – Exit from the current mode
● fc alias – Set fibre channel name
● fc zone – Set fibre channel zone name
● fc zoneset – Set fibre channel zone set name
● help – Display available commands
● hostname – Set the system hostname
● host-description – Define the description for the host
● interface – Configure or select an interface
● ip nameserver – Configure nameserver
● ip ssh server – Configure SSH server
● ip telnet server – Configure Telnet server
● login concurrent-session – Start concurrent session and login
● login statistics – Enable timeframe for the login session
● logging – Configure system logging
● management route – Configure the IPV4/IPv6 management route
● no – Delete or disable commands in configuration mode
● ntp – Configure the network time protocol
● snmp-server – Configure the SNMP server
● tacacs-server – Configure the TACACS server
● username – Create or modify user credentials
● Spanning-tree commands:
○ disable – Disable spanning tree globally
○ mac-flush-timer – Set the time used to flush MAC address entries
○ mode – Enable a spanning-tree mode, such as RSTP or MST
○ rstp – Configure rapid spanning-tree protocol (RSTP) mode
○ vlan – Configure spanning-tree on a VLAN range



IOM slot placement in SmartFabric mode
SmartFabric mode supports three specific switch placement options. Placements other than those described here are not
supported and may result in unpredictable behavior and/or data loss.
A SmartFabric cannot be split across physical fabric slots. For example, you cannot create a SmartFabric with switches in
slots A1 and B1; the switches must be in A1/A2 or B1/B2.

NOTE: The cabling shown in this section is the VLTi connection between the MX switches.

Two MX9116n Fabric Switching Engines in different chassis


This is the required IOM placement when creating a SmartFabric on top of a Scalable Fabric Architecture. Placing the FSE
modules in different chassis provides redundancy in the event of a chassis failure. This configuration supports placement in
Chassis 1 Slot A1 and Chassis 2 Slot A2, and/or Chassis 1 Slot B1 and Chassis 2 Slot B2. A SmartFabric cannot include a
switch in Fabric A and a switch in Fabric B.

Figure 67. IOM placement – 2x MX9116n in different chassis

Two MX5108n Ethernet switches in the same chassis


The MX5108n Ethernet Switch is only supported in single chassis configurations, with the switches in either slots A1/A2 or slots
B1/B2. A SmartFabric cannot include a switch in Fabric A and a switch in Fabric B.

Figure 68. IOM placement – 2x MX5108n in the same chassis



Two MX9116n Fabric Switching Engines in the same chassis
This placement should only be used in environments with a single chassis, with the switches in either slots A1/A2 or slots B1/B2.
A SmartFabric cannot include a switch in Fabric A and a switch in Fabric B.
As of OME-M 1.20.00, an MX deployment can start with a single MX7000 chassis with a pair of MX9116n FSEs and grow to
two or more chassis. The instructions for this can be found in this document in Expanding from a single-chassis to dual-chassis
configuration on page 138.

Figure 69. IOM placement – 2x MX9116n in the same chassis

Switch-to-switch (VLTi) cabling


When operating in SmartFabric mode, each switch pair runs a VLT interconnect (VLTi) between them. For the MX9116n FSE,
QSFP28-DD port groups 11 and 12 (eth1/1/37-1/1/40) are used.
For the MX5108n, ports 9 and 10 are used. Port 10 operates at 40 GbE instead of 100 GbE because all VLTi links must run at the
same speed.

NOTE: The VLTi ports are not user selectable, and the SmartFabric engine enforces the connection topology.

Figure 70. MX9116n SmartFabric VLTi cabling

Figure 71. MX5108n SmartFabric VLTi cabling

VLT backup link


A pair of cables is used to provide redundancy for the primary VLTi link. A third redundancy mechanism, a VLT backup link,
is automatically created when the SmartFabric is created. This link exchanges VLT heartbeat information between the two
switches using the management network to avoid a split-brain scenario should the external VLTi links go down. Based on the
node liveliness information, the VLT LAG/port is in the up state on the primary VLT peer and in the down state on the
secondary VLT peer. When only the VLTi link fails but the peer is still alive, the secondary VLT peer shuts down the VLT
ports. When the node in the primary peer fails, the secondary becomes the primary peer.
To see the status of the VLT backup link, run show vlt domain-id backup-link.



For example:

OS10# show vlt 255 backup-link
VLT Backup Link
------------------------
Destination : fde1:53ba:e9a0:de14:2204:fff:fe00:a267
Peer Heartbeat status : Up
Heartbeat interval : 30
Heartbeat timeout : 90
Destination VRF : default

Configuring port speed and breakout


If you need to change the default port speed and/or breakout configuration of an uplink port, you must complete this task
before creating the uplink.
For example, the QSFP28 interfaces that belong to port groups 13, 14, 15, and 16 on MX9116n FSE are typically used for uplink
connections. By default, the ports are set to 1x 100 GbE. The QSFP28 interface supports the following Ethernet breakout
configurations:
● 1x 100 GbE – One 100 GbE interface
● 1x 40 GbE – One 40 GbE interface
● 2x 50 GbE – Breakout a QSFP28 port into two 50 GbE interfaces
● 4x 25 GbE – Breakout a QSFP28 port into four 25 GbE interfaces
● 4x 10 GbE – Breakout a QSFP28 port into four 10 GbE interfaces
The MX9116n FSE also supports fibre channel (FC) capabilities using universal ports on port-groups 15 and 16. For more
information about configuring FC storage on the MX9116n FSE, see Scenario 5 and Scenario 6 in the Configuration scenarios
section.
For more information on interface breakouts, find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.
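In SmartFabric mode, breakout is configured through the OME-M UI as described earlier. For reference, in Full Switch mode
the equivalent change is made at the port-group level from the CLI; the sketch below breaks port group 1/1/13 into four
25 GbE interfaces (the group number is an example):

MX9116N-A1(config)# port-group 1/1/13
MX9116N-A1(conf-pg-1/1/13)# mode Eth 25g-4x
MX9116N-A1(conf-pg-1/1/13)# end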



VLAN scaling guidelines
Because SmartFabric mode provides network automation capabilities that Full Switch mode does not, the number of supported
VLANs differs between the modes. The following table provides the recommended maximum number of VLANs per fabric,
uplink, and server port. For a SmartFabric created with OME-M version 1.20.10 or earlier, you must enable support for VLAN
counts larger than 256 per fabric. A SmartFabric created with OME-M version 1.30.00 or later has this support enabled
automatically. See Enable support for larger VLAN counts for more information.
If more than 500 VLANs are configured, it is recommended to enable IGMP/MLD snooping only on the VLANs that require it,
and the number of snooping-enabled VLANs should not exceed 500. If fewer than 500 VLANs are configured, disable IGMP/MLD
snooping globally.
Beginning with OME-M 1.30.00, IGMP/MLD snooping can be enabled in SmartFabric mode. To enable IGMP/MLD Snooping,
see the Layer 2 Multicast, Internet Group Management Protocol (IGMP) snooping, Multicast Listener Discovery Protocol (MLD)
snooping section.

NOTE: These are recommendations, not enforced maximums.

Table 13. Recommended maximum number of VLANs in SmartFabric mode

OS10 version: 10.5.4.1
● Recommended max VLANs per fabric: 3000
● Recommended max VLANs per uplink: 3000
● Recommended max VLANs per server port: 1500
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 version: 10.5.3.1
● Recommended max VLANs per fabric: 3000
● Recommended max VLANs per uplink: 3000
● Recommended max VLANs per server port: 1024
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 version: 10.5.2.4
● Recommended max VLANs per fabric: 1536
● Recommended max VLANs per uplink: 512 across all uplinks
● Recommended max VLANs per server port: 512 across all uplinks
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 versions: 10.5.1.6 and 10.5.1.7
● Recommended max VLANs per fabric: 512
● Recommended max VLANs per uplink: 512 across all uplinks
● Recommended max VLANs per server port: 256
● Maximum number of MX9116n FSEs in a single MCM group: 12 (a)
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 versions: 10.5.0.1 through 10.5.0.7
● Recommended max VLANs per fabric: 256
● Recommended max VLANs per uplink: 64 across all uplinks
● Recommended max VLANs per server port: 64

OS10 versions: 10.4.0.R3S and 10.4.0.R4S
● Recommended max VLANs per fabric: 128
● Recommended max VLANs per uplink: 128 across all uplinks
● Recommended max VLANs per server port: 32

a. From SmartFabric OS10.5.1.6 and later, twelve FSEs in a single MCM group and eight MX5108n switches in a single
MCM group are supported, but twelve FSEs and eight MX5108n switches together (20 total) in a single MCM group are not
supported.

NOTE: VLANs 4004 and 4020 are reserved for internal switch communication and cannot be assigned to any interface in
Full Switch or SmartFabric mode. VLAN 4020 is a Management VLAN and is enabled by default. Do not remove this VLAN,
and do not remove the VLAN tag or edit Management VLAN on the Edit Uplink page. In Full Switch mode, you can create
a VLAN, enable it, and define it as a Management VLAN in global configuration mode on the switch. All other VLANs are
allowed on data plane and can be assigned to any interface. For more information on Configuring VLANs in Full Switch
mode, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

NOTE: In SmartFabric mode, a VLAN can be created using the CLI, but cannot be deleted or removed. Therefore, all VLAN
configuration must be done in the OME-M UI while in SmartFabric mode.

Maximum Transmission Unit behavior


Beginning with OS10.5.1.6, the default maximum transmission unit (MTU) size is 9216 bytes; earlier versions default to 1512
bytes. When a SmartFabric is created, the default MTU for the switch is set to jumbo (9216 bytes), even if it was manually
changed before creating the SmartFabric. This introduces the following behaviors:
● If the MTU is not individually set on a specific interface, the MTU is 9216 bytes.
● If the MTU has been specifically set on an individual interface, the MTU is the value that has been specified.
● If an FCoE VLAN is assigned to an interface, the MTU is set to 2500 bytes, even if the MTU was manually set to a
different value before the FCoE VLAN was assigned. It is recommended that you set the MTU back to 9216 bytes after the
FCoE VLAN is assigned.
See Configure Ethernet ports on page 97 for instructions on setting the MTU.
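As a sketch, the MTU can be set on an individual interface from the CLI; the interface number is an example, and in
SmartFabric mode the same setting is also available from the OME-M Port Information tab:

MX9116N-A1(config)# interface ethernet 1/1/1
MX9116N-A1(conf-if-eth1/1/1)# mtu 9216
MX9116N-A1(conf-if-eth1/1/1)# end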

Layer 2 Multicast, IGMP, and MLD snooping


Multicast is a technique that allows networking devices to send data to a group of interested receivers in a single transmission.
Multicast allows you to more efficiently use network resources, specifically for bandwidth-consuming services. Dell SmartFabric
OS10 supports the multicast feature in IPv4 and IPv6 networks and uses the following protocols for multicast distribution:
● Internet Group Management Protocol (IGMP)
● Protocol Independent Multicast (PIM)
To enable multicast routing in Full switch mode, see Dell SmartFabric OS10 user guide. Find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table. Beginning with OME-M 1.30.00 and later, configuring
Layer 2 Multicast in a SmartFabric is supported.

IGMP snooping
IGMP is a communications protocol that establishes multicast group memberships to neighboring switches and routers using
IPv4 networks. OS10 supports IGMPv1, IGMPv2, and IGMPv3 to manage the multicast group memberships on IPv4 networks.
IGMP snooping uses the information in IGMP packets to generate a forwarding table that associates ports with multicast
groups. When switches receive multicast frames, they forward them to their intended receivers. OS10 supports IGMP snooping
on VLAN interfaces.



To enable IGMP snooping in Full switch mode, see Dell SmartFabric OS10 user guide. Find the relevant version of the User Guide
in the OME-M and OS10 compatibility and documentation table.

MLD snooping
IPv6 uses the MLD protocol to manage multicast groups. OS10 supports MLDv1 and MLDv2 to manage multicast group
memberships on IPv6 networks.
MLD snooping enables switches to use the information in MLD packets and generate a forwarding table that associates ports
with multicast groups. When switches receive multicast frames, they forward them to their intended receivers. OS10 supports
MLD snooping on VLAN interfaces.
To enable MLD snooping in Full switch mode, see Dell SmartFabric OS10 user guide. Find the relevant version of the User Guide
in the OME-M and OS10 compatibility and documentation table.

Configuring L2 Multicast in SmartFabric mode


To enable L2 Multicast, IGMP snooping, and MLD snooping in SmartFabric mode, follow these steps:
1. Access OME-M Console.
2. Go to Devices > Fabric and click on the desired Fabric.
3. Select the Multicast VLANs tab.
NOTE: This tab shows the current IGMP version, MLD version, and flood restrict configuration. Flood restrict enables the
switch to forward unknown multicast packets to a multicast router. For it to be effective on a VLAN, IGMP and MLD
snooping must be enabled on that VLAN.

Figure 72. L2 Multicast option under Fabric


4. Select L2 Multicast.
5. Under IGMP, select VLANs from Available VLANs and move them to Selected VLANs as required.



Figure 73. Select VLANs for IGMP snooping
6. Select the Add selected VLANs to MLD configuration option for the same VLANs to be configured for MLD snooping.
7. Click Next.
8. Select the VLANs for MLD snooping then click Finish.

Figure 74. Selected VLANs for IGMP and MLD snooping

Validation
Run the following commands on the MX IOMs in the fabric to validate IGMP and MLD snooping.
The show ip igmp snooping summary command shows the maximum number of instances and the total number of interfaces
with IGMP snooping enabled.

MX9116N-A1# show ip igmp snooping summary


Maximum number of IGMP and MLD Instances: 512
Total Number of interface with IGMP Snooping enabled is: 1

The show ip igmp snooping interface command shows VLANs, IGMP version and all other IGMP snooping details.

MX9116N-A1# show ip igmp snooping interface


Vlan10 is up, line protocol is up
IGMP version is 3
IGMP snooping is enabled on interface
IGMP snooping query interval is 60 seconds
IGMP snooping querier timeout is 130 seconds
IGMP snooping last member query response interval is 1000 ms
IGMP Snooping max response time is 10 seconds
IGMP snooping fast-leave is disabled on this interface
IGMP snooping querier is disabled on this interface
Multicast snooping flood-restrict is enabled on this interface



Upstream network requirements
This section describes the requirements and guidelines for connecting a SmartFabric to an upstream network.

Physical connectivity
All physical Ethernet connections within an uplink from a SmartFabric are automatically grouped into a single LACP LAG. All
related ports on the upstream switches must also be in a single LACP LAG. Failure to do so may create network loops.
A minimum of one physical uplink from each MX switch to each upstream switch is required and the uplinks must be connected
in a mesh design. For example, if you have two upstream switches, you need two uplinks from each MX9116n FSE, as shown in
the following figure.
Starting with OS10.5.2.4, a SmartFabric supports a maximum of four Ethernet - No Spanning Tree uplinks or three legacy
Ethernet uplinks. With OS10.5.1.6 or earlier, a SmartFabric supports a maximum of three Ethernet - No Spanning Tree
uplinks or three legacy Ethernet uplinks.
NOTE: If multiple uplinks are going to be used, you cannot use the same VLAN ID on more than one uplink without creating
a network loop.

NOTE: The upstream switch ports must be in a single LACP LAG as shown in the figure below. Creating multiple LAGs
within a single uplink results in a network loop and is not supported.

Figure 75. Required upstream network connectivity
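As a sketch, the matching upstream configuration on a pair of Dell SmartFabric OS10 leaf switches (such as the S5232F-ON
pair referenced earlier) places all ports facing one uplink into a single VLT port channel; the port-channel ID and
interface numbers are examples:

Leaf1(config)# interface port-channel 1
Leaf1(conf-if-po-1)# vlt-port-channel 1
Leaf1(conf-if-po-1)# switchport mode trunk
Leaf1(conf-if-po-1)# exit
Leaf1(config)# interface ethernet 1/1/1
Leaf1(conf-if-eth1/1/1)# channel-group 1 mode active

The equivalent configuration is repeated on the second leaf so that both upstream switches present the uplink as one LACP
LAG.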

The maximum number of uplinks supported in SmartFabric are detailed in the following table.

Table 14. Number of uplinks supported

OME-M version: 2.00.00
● Ethernet - No Spanning Tree: 4
● Legacy Ethernet (with Spanning Tree): 3
● Fibre Channel: 1 (from each switch)

OME-M version: 1.40.00
● Ethernet - No Spanning Tree: 4
● Legacy Ethernet (with Spanning Tree): 3
● Fibre Channel: 1 (from each switch)

OME-M version: 1.30.00
● Ethernet - No Spanning Tree: 4
● Legacy Ethernet (with Spanning Tree): 3
● Fibre Channel: 1 (from each switch)

OME-M version: 1.20.10
● Ethernet - No Spanning Tree: 3
● Legacy Ethernet (with Spanning Tree): 3
● Fibre Channel: 1 (from each switch)

OME-M version: 1.20.00
● Ethernet - No Spanning Tree: 2 (only QSFP28 interfaces are currently supported for Ethernet - No Spanning Tree uplinks)
● Legacy Ethernet (with Spanning Tree): 3
● Fibre Channel: 1 (from each switch)

OME-M version: 1.10.20 and earlier
● Ethernet: 3
● Fibre Channel: 1 (from each switch)

Dell Technologies has tested uplinks with the following combinations of switch models and operating system versions.

Table 15. Tested upstream switches and operating system versions

Manufacturer   Switch model                                         Operating system version
Cisco          Nexus C93180YC-EX with FEX C2232-PP                  NX-OS 9.2.4
Cisco          Nexus C93180YC-EX, Nexus C9332C, with FEX C2232-PP   ACI 14.0(3d)
Arista         DCS-7280SR2K-48C6-F                                  4.23.0F

Other restrictions and guidelines


The following additional restrictions and guidelines are in place when operating in SmartFabric mode:
● Interconnecting switches in Slots A1/A2 with switches in Slots B1/B2, regardless of chassis, is not supported.
● When operating with multiple chassis, switches in Slots A1/A2 or Slots B1/B2 in one chassis must be interconnected only
with other Slots A1/A2 or Slots B1/B2 switches respectively. Connecting switches that reside in Slots A1/A2 in one chassis
with switches in Slots B1/B2 in another is not supported.
● Physical uplinks must be symmetrical. If one switch in a SmartFabric has two uplinks, the other switch must have two uplinks
of the same speed. Single-armed uplinks are not currently supported.
● You cannot have a pair of switches in SmartFabric mode uplink to another pair of switches in SmartFabric mode. A
SmartFabric can uplink to a pair of switches in Full Switch mode.
● VLANs 4004 and 4020 are reserved for internal switch communication and must not be assigned to an interface.
● In SmartFabric mode, you can use the CLI to create non-restricted VLANs, but you cannot assign interfaces to
them. For this reason, do not use the CLI to create VLANs in SmartFabric mode.
● VLAN 1 is automatically created as the Default/Native VLAN, but it is not required to be used. See Define VLANs on page
91 for more information.
● Do not create a VLAN or subnet on the Fabric that is in use for the management network on the MX Chassis or MX IOMs.



Ethernet – No Spanning Tree uplink
OME-M 1.20.00 and OS10.5.0.7 and later support a new uplink type: Ethernet - No Spanning Tree. This uplink type allows
Ethernet uplinks to represent a SmartFabric to the upstream network as an end host with multiple adapters, with spanning
tree disabled on the uplink interfaces.
A loop-free topology without STP is achieved by not allowing overlapping VLANs across uplinks. Supported use cases are shown
in the following figures.
NOTE: For PowerEdge MX systems using OME-M 1.20.00 and OS10.5.0.7 and later, Ethernet - No Spanning Tree uplinks
should be used instead of the legacy Ethernet uplink.
Supported scenarios:
The Ethernet - No Spanning Tree feature supports uplinks to both Dell and non-Dell switches in a vPC/VLT. Each uplink must be
in a single LACP LAG.
Guidelines:
● On an existing SmartFabric, all legacy Ethernet uplinks must be deleted before creating Ethernet - No Spanning Tree uplinks
to avoid the possibility of creating a network loop.
● Ethernet-No Spanning Tree uplinks cannot co-exist with legacy Ethernet uplinks in the same SmartFabric.
● VLAN IDs (tagged/untagged) must not overlap.
● FCoE Uplinks require separate untagged VLAN IDs.
● With OME-M 1.20.00, only QSFP28 interfaces on the MX9116n FSE are supported for Ethernet - No Spanning Tree uplinks.
With OME-M 1.20.10 and later, QSFP28-DD interfaces for Ethernet - No Spanning Tree are also supported.
Use Case 1: Standard uplink configuration (maximum of 2 uplinks)

Figure 76. Standard uplink configuration

Use Case 2: Uplink with FC gateway



Figure 77. Uplink with FC gateway

Use Case 3: Uplink with direct attached FC

Figure 78. Uplink with direct attached FC

Use Case 4: Ethernet - No Spanning Tree uplink with FCoE FSB

Figure 79. Uplink in FSB scenario

Configuring Ethernet - No Spanning Tree uplinks



Creating an Ethernet – No Spanning Tree uplink follows the same process as the legacy Ethernet uplink, except that the
upstream switch configuration is different. Configuration examples for upstream switches can be found in this guide under
Configuration Scenarios on page 178. Instructions for creating an uplink are included in this guide under Create Ethernet –
No Spanning Tree uplink on page 98.

Spanning Tree Protocol - legacy Ethernet uplink


It is not recommended to use the legacy Ethernet uplink type when creating a new SmartFabric. Use the Ethernet - No
Spanning Tree uplink.
By default, SmartFabric OS10 uses Rapid per-VLAN Spanning Tree Plus (RPVST+) across all switching platforms including
PowerEdge MX networking IOMs. SmartFabric OS10 also supports RSTP.
NOTE: Dell Technologies recommends using RSTP instead of RPVST+ when more than 64 VLANs are required in a fabric to
avoid performance problems.
Use caution when connecting an RPVST+ environment to an existing RSTP environment. RPVST+ creates a single topology per
VLAN and uses the default VLAN, typically VLAN 1, for the Common Spanning Tree (CST) with RSTP.
For non-native VLANs, all bridge protocol data unit (BPDU) traffic is tagged and forwarded by the upstream, RSTP-enabled
switch with the associated VLAN. These BPDUs use a protocol-specific multicast address.
Any other RPVST+ tree attached to the RSTP tree might process these packets accordingly, potentially leading to
unexpected trees.
NOTE: When connecting to an existing environment that is not using RPVST+, Dell Technologies recommends changing to
the existing spanning tree protocol before connecting a SmartFabric OS10 switch. This change ensures that the same type
of Spanning Tree is run on the SmartFabric OS10 MX switches and the upstream switches.
To switch from RPVST+ to RSTP, use the spanning-tree mode rstp command:

MX9116N-A1(config)# spanning-tree mode rstp
MX9116N-A1(config)# end

To validate the STP configuration, use the show spanning-tree brief command:

MX9116N-A1# show spanning-tree brief
Spanning tree enabled protocol rstp with force-version rstp
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 0, Address 4c76.25e8.f2c0
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32768, Address 2004.0f00.cd1e
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 95
Flush Indication threshold 0 (MAC flush optimization is disabled)

NOTE: STP is required when using legacy Ethernet uplinks. MSTP is not supported. Operating a SmartFabric with STP
disabled and the legacy Ethernet uplink may create a network loop and is not supported. Use the Ethernet - No Spanning
Tree uplink instead.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.

Networks and automated QoS


In addition to assigning VLANs to server profiles, SmartFabric automates QoS settings based on the Network Type specified.
The following figure shows that when defining a VLAN, several options are pre-defined.



Figure 80. Network types available in SmartFabric mode

The following table lists the network types and related settings. The QoS group is the numerical value of the queue
available in SmartFabric mode. Available queues are 2 through 5; queues 1, 6, and 7 are reserved.
NOTE: In SmartFabric mode, an administrator cannot change the default weights for the queues. Weights for each queue
can be seen using the show queuing weights interface ethernet command that is described in Common CLI
troubleshooting commands for Full Switch and SmartFabric modes on page 157.

Table 16. Network types and default QoS settings

Network type                  Description                                                                    QoS group
General Purpose (Bronze)      Used for low-priority data traffic                                             2
General Purpose (Silver)      Used for standard/default-priority data traffic                                3
General Purpose (Gold)        Used for high-priority data traffic                                            4
General Purpose (Platinum)    Used for extremely high-priority data traffic                                  5
Cluster Interconnect          Used for cluster heartbeat VLANs                                               5
Hypervisor Management         Used for hypervisor management connections such as the ESXi management VLAN    5
Storage - NVMe/TCP            Used for NVMe/TCP storage traffic                                              4
Storage - iSCSI               Used for iSCSI VLANs                                                           5
Storage - FCoE                Used for FCoE VLANs                                                            5
Storage - Data Replication    Used for VLANs supporting storage data replication such as for VMware VSAN     5
VM Migration                  Used for VLANs supporting vMotion and similar technologies                     5
VMware FT Logging             Used for VLANs supporting VMware Fault Tolerance                               5



Server templates, profiles, virtual identities, networks, and deployment
For detailed information about server templates, profiles, virtual identities, and deployment, see the OpenManage Enterprise -
Modular documentation.

Templates
A template is a set of system configuration settings referred to as attributes. A template may contain a small set of attributes
for a specific purpose, or all the attributes for a full system configuration. Templates allow for multiple servers to be configured
quickly and automatically without the risk of human error.
Networks (VLANs) are assigned to NICs as part of the server template. When the template is deployed, those networks are
programmed on the fabric for the servers that are associated with the template.
NOTE: Network assignment through template only functions for servers connected to a SmartFabric. If a template with
network assignments is deployed to a server connected to a switch in Full Switch mode, the network assignments are
ignored.
The OME-M UI provides the following options for creating templates:
● Most frequently, templates are created by getting the current system configuration from a server that has been configured
to the exact specifications required. This is referred to as a Reference Server.
● Templates may be cloned, copied, and edited.
● A template can be created by importing a Server Configuration Profile (SCP) file. The SCP file may be from a server or
exported by OpenManage Essentials, OpenManage Enterprise, or OME-M.
● OME-M comes prepopulated with several templates for specific purposes.

Profiles
A server profile is a combination of template and identity settings that are applied to a specific server or to multiple
servers. When a server template is deployed successfully, OME-M automatically creates a server profile from that template
and applies it. OME-M also allows you to manually create a server profile that you can apply to designated compute sleds.
Instead of deleting and recreating server templates, profiles can be used to deploy server templates with modified
attributes; a single profile can be applied to multiple server templates, with some or all attributes modified.

Virtual identities and identity pools


Some of the attributes that are in a template are referred to as identity attributes. Identity attributes identify a device and
distinguish it from all other devices on the network. Since identity attributes must uniquely identify a device, it is imperative that
each device has a unique network identity. Otherwise, devices cannot communicate with each other over the network.
Devices come with unique manufacturer-assigned identity values preinstalled, such as a factory-assigned MAC address. Those
identities are fixed and never change. However, devices can assume a set of alternate identity values, called a “virtual identity.”
A virtual identity functions on the network using that identity, as if the virtual identity was its factory-installed identity. The use
of virtual identity is the basis for stateless operations.
OME-M uses identity pools to manage the set of values that can be used as virtual identities for discovered devices. It controls
the assignment of virtual identity values, selecting values for individual deployments from predefined ranges of possible values.
This allows the customer to control the set of values which can be used for identities. The customer does not have to enter all
needed identity values with every deployment request, or remember which values have or have not been used. Identity pools
make configuration deployment and migration easier to manage.
Identity pools are used with template deployment and profile operations. They provide sets of values that can be used for
virtual identity attributes during deployment. After a template is created, an identity pool may be associated with it.
Doing this directs the template to draw identity values from the identity pool whenever the template is deployed to a
target device. The same identity pool can be associated with, or used by, any number of templates, but only one identity
pool can be associated with a given template.



Each template has specific virtual identity needs, based on its configuration. For example, one template may have iSCSI
configured, so it needs the appropriate virtual identities for iSCSI operations. Another template may not have iSCSI configured,
but may have FCoE configured, so it needs virtual identities for FCoE operations but not for iSCSI operations.

Deployment
Deployment is the process of applying a full or partial system configuration on a specific target device. In OME-M, templates are
the basis for all deployments. Templates contain the system configuration attributes that get provisioned to the target server,
then the iDRAC on the target server applies the attributes contained in the template and reboots the server if necessary. Often,
templates contain virtual identity attributes. As mentioned above, identity attributes must have unique values on the network.
Identity Pools facilitate the assignment and management of unique virtual identities.

VMware vCenter integration - OpenManage Network Integration
Dell OpenManage Network Integration (OMNI) is an external plug-in for VMware vCenter that is designed to complement
SmartFabric Services (SFS) by integrating with VMware vCenter to perform fabric automation. With the release of OMNI
2.0, this integration is extended to SFS that runs on PowerEdge MX. This integration automates VLAN changes that occur in
VMware vCenter and propagates those changes into the related SFS instances running on the MX platform as shown in the
following figure.
The combination of OMNI and Cisco ACI vCenter integration creates a fully automated solution. OMNI and the Cisco APIC
recognize changes in vCenter and automatically propagate the changes to the MX SmartFabric and ACI fabric respectively. This
allows a VLAN change to be made in vCenter, and it will flow through the entire solution without any manual intervention.
For more information about OMNI, see the SmartFabric Services for OpenManage Network Integration User Guide on the Dell
OpenManage Network Integration for VMware vCenter documentation page.

NOTE: OMNI 2.0 and 2.1 only support VLAN automation with one uplink per SmartFabric.



Figure 81. OMNI integration workflow

OpenManage Integration for VMware vCenter


The Dell OpenManage systems management solutions portfolio provides full-lifecycle management of PowerEdge servers and
associated infrastructure. The foundational technologies are the integrated Dell Remote Access Controller (iDRAC) and the
OpenManage Enterprise console.
The Dell OpenManage Integration for VMware vCenter (OMIVV) is designed to streamline the management processes in your
data center environment by allowing you to use VMware vCenter Server to manage your full server infrastructure - both
physical and virtual.
The following list of OMIVV capabilities applies to the full portfolio of Dell OpenManage systems management solutions:
● Monitor PowerEdge hardware inventory directly in Host and Cluster views and the OMIVV dashboard within vCenter
● Bubble up hardware system alerts for configurable actions in vCenter
● Manage firmware alongside vSphere Lifecycle Manager in vSphere 7.0 and higher
● Set baselines for server configuration and firmware levels with cluster aware updates for non-vSphere Lifecycle Manager
vSphere and vSAN clusters
● Speed deployment of ESXi to new PowerEdge servers and quickly add them to managed vCenters
OMIVV provides a unified PowerEdge and VMware inventory, monitoring, and update solution. Specifically, for the Dell
PowerEdge MX platform, OMIVV provides the following:
● Inventory, monitoring, and alerting directly within vCenter
● Manage server lifecycle updates in vCenter



Chapter 6: SmartFabric Creation
Steps to create a SmartFabric
The procedures in this section make the following assumptions:
● All MX7000 chassis and management modules are cabled correctly and are in a Multi-Chassis Management group.
● The VLTi cables between switches are connected.
● Open Manage Enterprise - Modular is at version 1.20.00 or later, and OS10 is at version 10.5.0.7 or later.
● The entire platform is healthy.
NOTE: All server, network, and chassis hardware must be updated to the latest firmware. See Software and firmware
versions used on page 245 for the minimum recommended firmware versions.
To walk through the steps of creating a SmartFabric yourself, see the interactive demos for MX at Dell Technologies Interactive
Demo: OpenManage Enterprise Modular for MX solution management.

Physically cable PowerEdge MX chassis and upstream switches
There are multiple areas of cabling for the PowerEdge MX chassis that must be completed. It is recommended to cable the
PowerEdge MX chassis and upstream switches before creating the SmartFabric.

Table 17. Cable requirements and instructions

Cable requirement                                Instructions
Management module cabling                        See MX Chassis management wiring in the OME-M User Guide:
                                                 https://www.dell.com/support/manuals/en-us/poweredge-mx7000/omem_1_30_10_ug/revision-history?guid=guid-891bbdd9-3032-4b85-9f92-63ac8c002d9b&lang=en-us
VLTi cabling options                             See IOM placement – 2x MX9116n in different chassis on page 75, IOM placement – 2x MX5108n in the same chassis on page 75, and IOM placement – 2x MX9116n in the same chassis on page 76.
Cabling the PowerEdge MX chassis upstream        See the example topologies in Configuration Scenarios on page 178.
Console cable access, in-band and out-of-band    See Management Networks for Dell Networking.
management networks

For more information about network cabling on PowerEdge MX, see Supported cables and optical connectors on page 223.

Define VLANs
Before creating the SmartFabric, the initial set of VLANs should be created. The first VLAN to be created should be the default,
or native VLAN, typically VLAN 1. The default VLAN must be created for any untagged traffic to cross the fabric.

NOTE: VLAN 1 will be created as a Default VLAN when the first fabric is created.

To define VLANs using the OME-M console, perform the following steps.
1. Open the OME-M console.
2. From the navigation menu, click Configuration > VLANs.
NOTE: In OME-M 1.10.20 and earlier, the VLANs screen is titled Networks.

3. In the VLANs pane, click Define.


4. In the Define Network window, complete the following:
a. Enter a name for the VLAN in the Name box. In this example, VLAN0010 was used.
b. Optionally, enter a description in the Description box. In this example, the description was entered as “Company A
General Purpose”.
c. Enter the VLAN number in the VLAN ID box. In this example, 10 was entered.
d. From the Network Type list, select the desired network type. In this example, General Purpose (Bronze) was used.
e. Click Finish.
The following figure shows VLAN 1 and VLAN 10 after being created using the previous steps.

Figure 82. Defined VLANs list
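For reference, in Full Switch mode the same VLANs can be created from the OS10 CLI (in SmartFabric mode, use only the OME-M console). A minimal sketch for VLAN 10, assuming a hypothetical switch prompt; the OME-M Network Type and its associated QoS behavior have no single-command CLI equivalent:

MX9116N-A1# configure terminal
MX9116N-A1(config)# interface vlan 10
MX9116N-A1(conf-if-vl-10)# description "Company A General Purpose"
MX9116N-A1(conf-if-vl-10)# no shutdown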

Define VLANs for FCoE

NOTE: Define VLANs for FCoE if implementing Fibre Channel configurations. Skip this section if not required.

A standard Ethernet uplink carries assigned VLANs on all physical uplinks. When implementing FCoE, traffic for SAN path A and
SAN path B must be kept separate. The storage arrays have two separate controllers which create two paths, SAN path A and
SAN path B, connected to the MX9116n FSE. For storage traffic to be redundant, two separate VLANs are created for that
traffic.
Using the same process described in Define VLANs, create two additional VLANs for FCoE traffic.

Table 18. FCoE VLAN attributes

Name    Description   Network type     VLAN ID   SAN
FC A1   FCOE A1       Storage - FCoE   30        A
FC A2   FCOE A2       Storage - FCoE   40        B

Figure 83. Defined FCoE VLANs list

NOTE: To create VLANs for FCoE, from the Network Type list, select Storage – FCoE, and then click Finish. VLANs to be
used for FCoE must be configured as the Storage – FCoE network type.

NOTE: In OME-M 1.10.20 and earlier, the VLANs screen is titled as Networks.

Create the SmartFabric


To create a SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. In the Fabric pane, click Add Fabric.
4. In the Create Fabric window, complete the following:
a. Enter a name for the fabric in the Name box. In this example, "SmartFabric" was entered.
b. Optionally, enter a description in the Description box. In this example, the description was entered as “SmartFabric
using MX9116n/MX7116n in Fabric A.”
c. Click Next.
d. From the Design Type list, select the appropriate type. In this example, “2x MX9116n Fabric Switching Engine in
different chassis” was selected.
e. From the Chassis-X list, select the first MX7000 chassis.
f. From the Switch-A list, select Slot-IOM-A1.
g. From the Chassis-Y list, select the second MX7000 chassis to join the fabric.
h. From the Switch-B list, select Slot-IOM-A2.
i. Click Next.
j. On the Summary page, verify the proposed configuration and click Finish.
NOTE: From the Summary window, a list of the physical cabling requirements can be printed.

Figure 84. SmartFabric deployment design window

The SmartFabric deploys. Fabric creation can take up to 20 minutes to complete. During this time, all related
switches are rebooted, and the operating mode changes from Full Switch to SmartFabric mode.

NOTE: After the fabric is created, the fabric health is critical until at least one uplink is created.

The following figure shows the new SmartFabric object and some basic information about the fabric.

Figure 85. SmartFabric post-deployment without defined uplinks

Optional steps
The configuration of forward error correction, uplink port speed and breakout, MTU, and autonegotiation is optional.

Forward error correction


NOTE: Users should only use this feature if needed.

Forward error correction (FEC) is a technique for controlling errors in data transmission at high speeds. With FEC, the
source sends redundant error-correcting code along with each data frame, and the destination uses that code to detect and
correct errors without requesting retransmission. This technique extends the range of the signal and enhances data
reliability.
Available FEC modes:
● CL91-RS - Supports 100 GbE
● CL108-RS – Supports 25 GbE and 50 GbE
● CL74-FC – Supports 25 GbE and 50 GbE
● Auto
● Off
In SmartFabric mode, configuring FEC is supported on OME-M 1.20.00 and later. FEC options CL91-RS, CL108-RS, CL74-FC,
Auto, and Off are available. The options displayed in the UI vary depending on the speed of the selected interface.

The following table shows the default FEC and auto negotiation values for optics and cables for the QSFP28-DD and QSFP28
ports at 200 GbE and 100 GbE speeds.

Table 19. Media, auto negotiation, and default FEC values for 200 GbE and 100 GbE

Media                                                        Auto negotiation   FEC
200 GbE and 100 GbE DAC                                      Enabled            CL91-RS
200 GbE and 100 GbE Fiber or AOC, except LR-related optics   Disabled           CL91-RS
200 GbE and 100 GbE LR-related optics                        Disabled           Disabled

The following table shows the default FEC and auto negotiation values for optics and cables for the QSFP28-DD and QSFP28
ports at 200, 100, 50, and 25 GbE speeds.

Table 20. Media, cable type, auto negotiation, and default FEC values

Media                                              DAC cable type   Auto negotiation   FEC
200, 100, 50, and 25 GbE DAC                       CR-L             Enabled            CL108-RS
                                                   CR-S             Enabled            CL74-FC
                                                   CR-N             Enabled            Disabled
200, 100, 50, and 25 GbE Fiber or AOC, except      N/A              Disabled           CL108-RS
LR-related optics
200, 100, 50, and 25 GbE LR-related optics         N/A              Disabled           Disabled

To configure FEC in Full Switch mode, find the relevant version of the Dell SmartFabric OS10 User Guide in the OME-M and
OS10 compatibility and documentation table.
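As a point of reference, in Full Switch mode FEC is set from the interface context of the OS10 CLI. A minimal sketch, assuming uplink port 1/1/41 running at 100 GbE and a hypothetical switch prompt; keyword casing can vary by OS10 release, so verify against the user guide referenced above:

MX9116N-A1# configure terminal
MX9116N-A1(config)# interface ethernet 1/1/41
MX9116N-A1(conf-if-eth1/1/41)# fec CL91-RS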
To configure FEC in SmartFabric mode on the OME-M console, perform the following steps.
Steps
1. Access the OME-M console.
2. Choose Devices > I/O Modules, then click an I/O module.
3. In the I/O module view, choose Hardware > Port Information. This lists the IOM ports and their information.
4. Select the port on which to configure FEC, then click the Configure FEC option at the top.
NOTE: FEC options are not supported for compute sled facing ports and FEM ports (breakout FEM, virtual ports).

Figure 86. Configure FEC option


5. The dialog box shows the current and auto-negotiated FEC settings. Choose the FEC type for the selected port from the list.

Figure 87. Select FEC Type

Verify FEC configuration


FEC can be verified from the I/O module CLI in both Full Switch and SmartFabric modes. The show interface ethernet 1/1/41 command shows the configured and negotiated FEC for port 1/1/41.

MX9116N-A1# show interface ethernet 1/1/41
Ethernet 1/1/41 is up, line protocol is up
Port is part of Port-channel 2
Hardware is Eth, address is 20:04:0f:21:d4:f1
    Current address is 20:04:0f:21:d4:f1
Pluggable media present, QSFP28 type is QSFP28 100GBASE-SR4-NOF
    Wavelength is 850
    Receive power reading is not available
Interface index is 112
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
LineSpeed 100G, Auto-Negotiation off
Configured FEC is cl91-rs, Negotiated FEC is cl91-rs
(Output Truncated)

Configure uplink port speed or breakout

NOTE: Users should only perform this task if needed.

If the uplink ports must be reconfigured to a different speed or breakout setting from the default, you must complete this before
creating the uplink.
To configure the Ethernet breakout on port groups using OME-M Console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select the switch that you want to manage. In this example, a MX9116n FSE in slot IOM-A1 is selected.
4. Choose Hardware > Port Information.
5. In the Port Information pane, choose the desired port group. In this example port-group1/1/13 is selected.
6. Select Configure Breakout. In the Configure Breakout dialog box, select the required Breakout option. In the example
provided, the Breakout Type for port-group1/1/13 is selected as 1x 40GE.
NOTE: Before choosing the breakout type, you must set the Breakout Type to HardwareDefault and then set the
desired configuration. If the desired breakout type is selected before setting HardwareDefault, an error occurs.
7. Click Finish.

Figure 88. Select the desired breakout type
8. Configure the remaining breakout types on additional uplink port groups as needed.
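For reference, in Full Switch mode the equivalent breakout is applied to the port group from the OS10 CLI. A minimal sketch for the 1x 40GE breakout used above, assuming port-group 1/1/13 on an MX9116n FSE and a hypothetical prompt; supported mode values depend on the port group and installed optics:

MX9116N-A1# configure terminal
MX9116N-A1(config)# port-group 1/1/13
MX9116N-A1(conf-pg-1/1/13)# mode Eth 40g-1x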

Configure Ethernet ports


Use the OME-M console to configure settings such as port breakout, MTU size, and auto negotiation. Perform
the following steps to modify these settings.
NOTE: In SmartFabric mode, do not configure the interfaces using the CLI; use the OME-Modular UI instead. In Full Switch
mode, configuring the interfaces using the OME-Modular UI is not supported; use the CLI instead.
1. From the Switch management page, choose Hardware > Port Information.

Figure 89. IOM Overview page on OME-M

Figure 90. Port information section
2. To configure MTU, select the port that is listed under the respective port-group.
3. Click Configure MTU. Enter MTU size in bytes.

Figure 91. Configure MTU


4. Click Finish.
5. To configure Auto Negotiation, select the port that is listed under the respective port-group and then click Toggle
AutoNeg. This changes the Auto Negotiation of the port to Disabled/Enabled.
6. Click Finish.

Figure 92. Enable/Disable Auto Negotiation


7. To configure the administrative state (shut/no shut) of a port, select the port that is listed under the respective port-group.
Click Toggle Admin State. This toggles the port administrative state between Disabled and Enabled.
8. Click Finish.
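For Full Switch mode, where interface configuration is done at the CLI rather than the OME-Modular UI, the equivalent MTU and administrative-state changes are made per interface in OS10. A minimal sketch, assuming server-facing port 1/1/1 and a hypothetical switch prompt:

MX9116N-A1# configure terminal
MX9116N-A1(config)# interface ethernet 1/1/1
MX9116N-A1(conf-if-eth1/1/1)# mtu 9216
MX9116N-A1(conf-if-eth1/1/1)# no shutdown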

Create Ethernet – No Spanning Tree uplink


As of OME-M 1.20.00 and OS10.5.0.7, the preferred uplink type is the Ethernet - No Spanning Tree Protocol uplink. The legacy
Ethernet uplink is still available but is no longer recommended. The process for creating a legacy Ethernet uplink is the same as
below except for selecting Ethernet as the uplink type.
An Ethernet – No Spanning Tree uplink presents the SmartFabric to the upstream network as an end host with multiple
adapters. To achieve this, STP is disabled on the uplink interfaces. A loop-free topology without STP is maintained by not
allowing overlapping VLANs across uplinks.

NOTE: Ethernet – No Spanning Tree uplink is supported with Dell and non-Dell switches in a vPC/VLT. Each uplink must be
a single LACP LAG.

NOTE: To change the port speed or breakout configuration, see Configure uplink port speed or breakout on page 96 and
make those changes before creating the uplinks.
After initial deployment, the new fabric shows Uplink Count as ‘zero’ and shows a warning (yellow triangle with exclamation
point). The lack of a fabric uplink results in a failed health check (red circle with x). To create the uplink, perform the following
steps.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click on the fabric name. In this example, SmartFabric is selected.
4. In the Fabric Details pane, click Uplinks.
5. Click the Add Uplinks button.
6. In the Add Uplink window, complete the following:
a. Enter a name for the uplink in the Name box. In this example, Uplink01 is entered.
b. Optionally, enter a description in the Description box.
c. From the Uplink Type list, select the desired type of uplink. In this example, Ethernet – No Spanning Tree is selected.

Figure 93. Create Ethernet – No Spanning Tree uplink

NOTE: For more information on Uplink Failure Detection, see the Uplink failure detection section.
d. Click Next.
e. From the Switch Ports list, select the uplink ports on both the MX9116n FSEs. In this example, ethernet 1/1/41 and
ethernet 1/1/42 are selected for both MX9116n FSEs.
NOTE: The show inventory CLI command can be used to find the I/O Module service tag information (for example,
8XRJ0T2).

f. From the Tagged Networks list, select the desired tagged VLANs. In this example, VLAN0010 is selected.
g. From the Untagged Network list, select the untagged VLAN. In this example, VLAN0001 is selected.

Figure 94. Create Ethernet uplink

7. Click Finish.
At this point, SmartFabric creates the uplink object and the status for the fabric changes to OK (green box with checkmark).
NOTE: VLAN 1 is assigned as the Untagged Network by default.

Ethernet – No Spanning Tree upstream switch configuration
If using Ethernet – No Spanning Tree uplinks, refer to the following table to configure your uplink switches. Configurations for
Dell Networking OS10 (S5232F-ON) and Cisco Nexus 9000-series were used for these examples.

Table 21. Dell OS10 and Cisco Nexus Ethernet – No Spanning Tree configuration

Dell Networking OS10

Global settings:
    spanning-tree mode RSTP

Port-channel interface:
    switchport mode trunk
    switchport trunk allowed vlan xy
    spanning-tree bpduguard enable
    spanning-tree guard root
    spanning-tree port type edge

Member interface:
    no shutdown
    no switchport
    channel-group <channel-group-id> mode active

Cisco Nexus OS

Global settings:
    spanning-tree port type edge bpduguard default
    spanning-tree port type network default

Port-channel interface:
    switchport mode trunk
    switchport trunk allowed vlan xy
    spanning-tree port type edge trunk
    spanning-tree bpduguard enable
    spanning-tree guard root

Member interface:
    switchport mode trunk
    switchport trunk allowed vlan xy
    channel-group <channel-group-id> mode active
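As an illustration only, the following sketch applies the OS10 column of Table 21 to a hypothetical S5232F-ON upstream switch, assuming port-channel 1, member port ethernet 1/1/1, and tagged VLAN 10; adjust the interface numbers, channel-group ID, and VLAN list to match your environment:

S5232F-1(config)# spanning-tree mode rstp
S5232F-1(config)# interface port-channel 1
S5232F-1(conf-if-po-1)# switchport mode trunk
S5232F-1(conf-if-po-1)# switchport trunk allowed vlan 10
S5232F-1(conf-if-po-1)# spanning-tree bpduguard enable
S5232F-1(conf-if-po-1)# spanning-tree guard root
S5232F-1(conf-if-po-1)# spanning-tree port type edge
S5232F-1(conf-if-po-1)# exit
S5232F-1(config)# interface ethernet 1/1/1
S5232F-1(conf-if-eth1/1/1)# no shutdown
S5232F-1(conf-if-eth1/1/1)# no switchport
S5232F-1(conf-if-eth1/1/1)# channel-group 1 mode active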

Optional - Configure Fibre Channel


Depending on the deployment, configuration of the Fibre Channel ports and uplinks is optional.

Configure Fibre Channel universal ports

NOTE: Configure Fibre Channel universal ports only if your deployment implements Fibre Channel. Otherwise, skip this section.

NOTE: Fibre Channel port speed must be specifically configured. Auto negotiation is not currently supported.

On the MX9116n FSE, port-group 1/1/15 and 1/1/16 are universal ports capable of connecting to FC devices at various speeds
depending on the optic being used. In this example, we are configuring the universal port speed as 4x32G FC. To enable FC
capabilities, perform the following steps on each MX9116n FSE.

NOTE: Port-group 1/1/16 is used for FC connections in this example.

1. Open the OME-M console.


2. From the navigation menu click Devices, then click I/O Modules.
3. In the Devices panel, click to select the IOM to configure.
4. In the IOM panel, click Hardware, then click Port Information.
NOTE: See the SmartFabric Services for PowerEdge MX Port-Group Configuration Errors video for more information
about configuration errors.

5. Click the port-group 1/1/16 check box, then click Configure breakout.
6. In the Configure breakout panel, select 4X32GFC as the breakout type used in this example.
NOTE: With OME-M 1.20.10 and earlier, you must set the Breakout Type to HardwareDefault first and then set the
desired configuration. If the desired breakout type is selected before setting HardwareDefault, an error occurs.
7. Click Finish.
NOTE: When enabling Fibre Channel ports, they are set administratively down by default. Select the ports and click the
Toggle Admin State button. Click Finish to administratively set the ports to up.

NOTE: The MX9116n supports FC speeds of 8G, 16G, and 32G FC.
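For reference, in Full Switch mode the universal port speed is set on the port group from the OS10 CLI. A minimal sketch for the 4x32G FC breakout used in this example, assuming port-group 1/1/16 and a hypothetical switch prompt; available mode values depend on the installed optic, so verify against the Dell SmartFabric OS10 User Guide:

MX9116N-A1# configure terminal
MX9116N-A1(config)# port-group 1/1/16
MX9116N-A1(conf-pg-1/1/16)# mode FC 32g-4x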

Create Fibre Channel uplinks


Before creating a Fibre Channel uplink, make sure you have configured the universal ports as FC ports using the steps in the
previous Configure Fibre Channel universal ports on page 101 section.

NOTE: Create Fibre Channel uplinks only if your deployment implements Fibre Channel configurations.

NOTE: The steps in this section allow you to connect to an existing FC switch using NPG mode, or directly attach an
FC storage array. The uplink type is the only setting within the MX7000 chassis that distinguishes between the two
configurations.
To create uplinks, perform the following steps.



1. Open the OME-M console.
2. From the navigation menu click Devices, then click Fabric.
3. Click the SmartFabric fabric name.
4. In the Fabric Details panel, click Uplinks, then click the Add Uplinks button.
5. From the Add Uplinks window, use the information in the following table to enter an uplink name in the Name box.
6. Optionally, enter a description in the Description box.
7. From the Uplink Type list, select the uplink type, then click Next. In this example, FCoE is selected. Choose FC Gateway,
FC Direct Attach, or FCoE as appropriate for your configuration.
8. From the Switch Ports list, select the FC ports as defined in the following table. Select the appropriate port for the
connected uplink.
9. From the Tagged Networks list, select VLAN defined in the following table, then click Finish. SmartFabric creates the
uplink object, and the status for the fabric changes to OK.
NOTE: Fibre Channel ports are administratively disabled by default. Make sure to set the Fibre Channel ports to Enabled
by toggling the Admin State of the ports. This is done by choosing Devices > I/O Modules > MX9116n FSE switch >
Hardware and Port Information. Select the port and choose Toggle Admin State.

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.

NOTE: Make sure that the MTU is configured on the internal Ethernet ports carrying FCoE. If the MTU is not set, configure it
by selecting the port under Port Information and choosing Configure MTU. Enter an MTU size between 2500 and 9216
bytes.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell SmartFabric OS10 User Guide.
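For example, assuming the upstream switch runs OS10 with Rapid-PVST and VLAN 10 is in use, the following one-line sketch assigns the lowest bridge priority so that switch becomes the STP root (the prompt is hypothetical):

Upstream-OS10(config)# spanning-tree vlan 10 priority 0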
For the examples shown in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode on page
198 and Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach on page 202, the uplink attributes are
defined in the following table.

Table 22. Uplink attributes

Uplink name   Description                       Ports                    VLAN (tagged)
FCoE A1       FC Uplink for switch in Slot A1   Switch model dependent   30
FCoE A2       FC Uplink for switch in Slot A2   Switch model dependent   40

NOTE: Do not assign the same FCoE VLAN to both switches. They must be kept separate.

Enable support for larger VLAN counts


A SmartFabric created on PowerEdge MX version 1.20.10 or later has support for large VLAN counts enabled by default. For
a SmartFabric created with OME-M prior to version 1.20.10, and still managed by that version, you must manually enable
support for VLAN counts larger than 256 per fabric using the script below. After upgrading to OME-M 1.20.10 or later, support
for the larger VLAN count is automatically enabled, even on SmartFabrics created prior to OME-M 1.20.10.

NOTE: If your environment has fewer than 256 VLANs, this support does not need to be enabled.

To enable this support, perform the following steps:


1. Download the script titled Set-ScaleVLANProfile.ps1 from the GitHub repository.
2. Copy this script to any folder or directory.
3. Open PowerShell and change the path to the directory where the script was copied.
4. Execute the script.



Figure 95. PowerShell - execute script
5. Enter the IP address of the OME-M instance that manages the SmartFabric. In this example, the OME-M instance
IP is 100.67.XX.XX.

Figure 96. PowerShell - enter OME-M instance IP address


6. Provide the credentials for the OME-M instance.

Figure 97. PowerShell - enter credentials


7. When prompted, enter Enabled to enable the scale-vlan-profile. To disable the profile, enter Disabled.

Figure 98. PowerShell - enable scale-vlan-profile


8. Using the cursor, select the SmartFabric to enable the scale-vlan-profile on.



Figure 99. PowerShell - select fabric

On completion, the script indicates whether the operation succeeded.

Figure 100. PowerShell - execution of script successful

9. To verify that the scale-vlan-profile has been enabled, access a switch CLI that is part of the fabric and execute the show
running-configuration command. If successful, the entry scale-profile vlan will be listed in the configuration.



Figure 101. Show running-configuration output on IOM to verify
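A quick spot-check from the IOM CLI, assuming your OS10 release supports piping show output through grep:

MX9116N-A1# show running-configuration | grep scale
scale-profile vlan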

Uplink failure detection


Uplink failure detection (UFD) detects the loss of upstream connectivity from switch uplinks to the next-hop switch. If a
switch loses upstream connectivity, UFD shuts down the related downstream server-facing interfaces so that each host can use
a different path to send data out of the fabric. Without UFD, attached hosts continue to send traffic to that switch even
though it no longer has a direct path to the destination, because the downstream devices receive no indication that upstream
connectivity was lost while connectivity to the local switch remains operational. The VLTi link to the peer switch can
temporarily carry traffic during such an outage, but relying on it is not considered a best practice.
NOTE: In the case of a loss of all VLTi links, the VLT secondary peer IOM brings down its VLT port channels. In SmartFabric
mode, UFD will not bring down those associated interfaces since there is an operational uplink. Ensure the server facing
ports are in a VLT port channel for proper behavior.
An uplink state group is configured on each switch, which creates an association between the uplinks to the upstream devices
and the downlink interfaces. In the event that all uplinks fail on a switch, UFD automatically shuts down the downstream
interfaces. This propagates to the hosts attached to the switch. Each host then uses its link to the remaining switch to continue
sending traffic across the network. An interface in an uplink-state group can be a physical interface or a port channel (LAG)
aggregation of physical interfaces.
In SmartFabric mode, UFD is automatically enabled with OME-M 1.10.20. UFD is user-configurable with OME-M 1.20.00 and
later. In Full Switch mode, UFD is NOT enabled by default and must be configured at the switch CLI. Enabling UFD is
recommended.
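In Full Switch mode, an uplink-state group can be configured from the OS10 CLI. The following is a minimal sketch, assuming uplink ports 1/1/41-1/1/42 and server-facing ports 1/1/1-1/1/16; the switch prompt is hypothetical and the exact interface-range syntax can vary by OS10 release, so verify against the Dell SmartFabric OS10 User Guide:

MX9116N-A1# configure terminal
MX9116N-A1(config)# uplink-state-group 1
MX9116N-A1(conf-uplink-state-group-1)# upstream ethernet1/1/41-1/1/42
MX9116N-A1(conf-uplink-state-group-1)# downstream ethernet1/1/1-1/1/16
MX9116N-A1(conf-uplink-state-group-1)# enable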



Figure 102. UFD topology

For example, in the MX scenario that is mentioned in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy
Gateway mode on page 198, when an uplink is set as FC gateway, UFD associates the set of downstream interfaces which are
part of the corresponding FCoE VLAN into a UFD group. In this scenario, the VLANs are VLAN 30 and VLAN 40 on each switch
respectively. The downstream interfaces are the ones connected to the MX740c compute sleds.
In SmartFabric mode with an FC uplink failure situation, where all FC uplink ports are down (for example, removing the fibre
channel transceiver from the switch), the switch operationally disables the downstream interfaces which belong to that UFD
group AND have the FCoE VLAN provisioned to them. A server that does not have an impacted FCoE VLAN is not disturbed.
Once the downstream ports are set operationally down, the traffic on these server ports is stopped, giving the operating system
the ability to fail traffic over to the other path. In a scenario with MX9116n FSEs, a maximum of eight FC ports can be part of an
FC Gateway uplink.
Shutting down only the corresponding downstream ports forces the affected compute sleds onto an alternate path. To restore
FCoE traffic, bring up at least one FC port that is part of the FC Gateway uplink so the traffic can transition through another
FC port on the NIC or an IOM in the fabric. Remove FCoE VLANs from Ethernet-only downstream ports to avoid impacting
Ethernet traffic.



Figure 103. UFD in an MX scenario

NOTE: In SmartFabric mode, one uplink-state-group is created and is enabled by default. In Full Switch mode, up to 16
uplink-state groups can be created, the same as any SmartFabric OS10 switch. By default, no uplink-state groups are
created in Full Switch mode. Physical port channels can be assigned to an uplink-state group.
To include uplinks into a UFD group in SmartFabric mode, perform the following steps.
Steps
1. Access the OME-M console.
2. Select Devices > Fabric. Choose created fabric.
3. The UFD group can be included in two ways. If uplinks are not created, select Add Uplink. Enter Name, Description, and
Uplink type.
4. Mark the check box Include in Uplink Failure Detection Group.

Figure 104. UFD under Add Uplink


5. If uplinks are created, choose an uplink, select Edit.
6. Under Edit Uplink, mark the check box Include in Uplink Failure Detection Group.
7. This enables UFD and includes the uplink into the UFD group.



Figure 105. UFD under Edit Uplink

Verifying UFD configuration


To verify UFD on a switch, run the following CLI commands.

MX9116n-1# show uplink-state-group 1


Uplink State Group: 1, Status: Enabled, Up

MX9116n-1# show uplink-state-group detail


(Up): Interface up    (Dwn): Interface down    (Dis): Interface disabled    (NA): Not Available
*: VLT Port-channel    V: VLT Status    P: Peer Operational Status    ^: Tracking Status
Uplink State Group    : 1    Status : Enabled,up
Upstream Interfaces   : Fc 1/1/44:1(Up), Fc 1/1/44:2(Up)
Downstream Interfaces : Eth 1/1/1(Up), Eth 1/1/3(Up), Eth 1/71/2(Up), Eth 1/71/7(Up)

Configuring the upstream switch and connecting uplink cables
The upstream switch ports must be configured in a single LACP LAG. This document provides eight example configurations.
● Scenario 1: SmartFabric deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree uplink on page 179
● Scenario 2: SmartFabric connected to Cisco Nexus 3232C switches with Ethernet - No Spanning Tree uplink on page 183
● Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink on page 189
● Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink on page 193
● Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode on page 198
● Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach on page 202
● Scenario 7: Connect MX5108n to Fibre Channel storage - FSB on page 207
● Scenario 8: Configure boot from SAN on page 211



Chapter 7: Server Deployment
Deploying a server
Before beginning, ensure that all server firmware, especially the NIC/CNA, has been updated to the latest version. For additional
information about components and firmware used in this guide, see Software and firmware versions used on page 245.

Server preparation
The examples in this guide reference the Dell PowerEdge MX740c compute sled with QLogic QL41262 Converged Network
Adapters (CNA) installed. CNAs are required to achieve FCoE connectivity. Use the steps below to prepare each CNA by setting
them to factory defaults (if required) and configuring NIC partitioning (NPAR) if needed. Not every implementation requires
NPAR.
NOTE: iDRAC steps in this section may vary depending on hardware, software, and browser versions used. See the
documentation for your Dell server for instructions on connecting to the iDRAC.

Create a server template


Before creating the template, select a server to be the reference server and configure the hardware to the exact settings
required for the implementation.

NOTE: In SmartFabric mode, you must use a template to deploy a server and to configure networking.

A server template contains parameters that are extracted from a server and allows these parameters to be quickly applied to
multiple compute sleds. A server template contains all server settings for a specific deployment type including BIOS, iDRAC,
RAID, NIC/CNA, and so on. The template is captured from a reference server and can then be deployed to multiple servers
simultaneously. The server template allows an administrator to associate VLANs to compute sleds.
The templates contain settings for the following categories:
● Local access configuration
● Location configuration
● Power configuration
● Chassis network configuration
● Slot configuration
● Setup configuration
To create a server template, perform the following steps.
1. Open the OME-M console.
2. From the navigation menu, click Configuration, then click Templates.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
3. From the center panel, click Create Template, then click From Reference Device to open the Create Template window.



Figure 106. Create a server template
4. In the Template Name box, enter a name. In this example, MX740c with FCOE CNA is entered.

Figure 107. Create Template dialog box

5. Optionally, enter a description in the Description box, then click Next.


6. In the Device Selection section, click Select Device.

Figure 108. Device Selection screen



7. From the Select Devices window, choose the previously configured server or the server whose settings need to be applied
to the target servers, then click Finish.

Figure 109. Devices selected


8. From the Elements to Clone list, select all the elements, and then click Finish.

Figure 110. Select the elements to clone

A job starts and the new server template displays on the list. When complete, the Completed successfully status displays.

Create identity pools


Identity pools are recommended, but not required. Virtual identity pools are used in conjunction with server templates to
automate network onboarding of compute sleds. Perform the following steps to create an ID pool.
For more information about identity pools, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
1. Open the OME-M console.
2. From the navigation menu, click Configuration, then click Identity Pools.
3. In the Network panel, click Create. The Create Identity Pool window displays.
4. Type Ethernet CNA into the Pool Name box.
5. Optionally, enter a description in the Description box.
6. Click Next.
7. Click to select the Include Ethernet Virtual MAC Addresses option.
8. In the Starting MAC Address box, type a unique MAC address (for example, 06:3C:F9:A4:CC:00).
9. Type 255 in the Number of Virtual MAC Identities box, click Next, then click Next again.



10. Select the Include FCoE Identity option if using FCoE.
11. In the Starting MAC Address box, type a unique MAC address (for example, 06:3C:F9:A4:CD:00).
12. Type 255 in the Number of FCoE identities box for FCoE scenarios.

Figure 111. Include FCoE identity

13. Click Finish, then click Finish again.

Associate server template with networks – no FCoE


After successfully creating a new template, associate the template with a network.
1. From the Templates pane, select the template to be associated with VLANs. In this example, the FCOE CNA server
template is selected.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
2. Click Edit Network.
3. In the Edit Network window, complete the following:
a. Optionally, from the Identity Pool list, choose the desired identity pool. In this example, the Ethernet ID Pool is
selected.
b. Click Next.

Figure 112. IO Pool Assignment screen


c. Assign bandwidth to the ports and the partitions as required by your configuration, then click Next.
d. Optionally, from the NIC Teaming option, choose the desired NIC teaming option.
e. The NIC teaming option can be No Teaming, LACP, or Other, as detailed in NIC teaming
guidelines on page 63.
f. For both ports, from the Untagged Network list, select the untagged VLAN. In this example, VLAN0001 is selected.
g. For both ports, from the Tagged Network list, select the tagged VLAN. In this example, VLAN0010 is selected.
h. Click Finish.
The following figure shows the associated networks for the server template with OME-M 1.30.00 and later.



Figure 113. Server template network settings - no FCoE with OME-M 1.30.00 and later

The following figure shows the associated networks for the server template with OME-M 1.20.10 and earlier.

Figure 114. Server template network settings - no FCoE with OME-M 1.20.10 and earlier

Associate server template with networks - with FCoE


After successfully creating a template, associate the template with a network.
1. From the Templates pane, select the template to be associated with VLANs. In this example, the MX740c with FCOE CNA
server template is selected.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
2. Click Edit Network.
3. In the Edit Network window, complete the following:
a. To choose FCoE VLANs, select the appropriate Identity Pool list provided, then click Next.
b. Assign the bandwidth to the ports and partitions as necessary for your configuration, then click Next.
c. From the Untagged Network list, select the untagged VLAN for both ports.
NOTE: In this example, VLAN0001 is selected.
d. For NIC in Mezzanine 1A Port 1, select FC A1 from the Tagged Network list.



Figure 115. Select VLANs
e. For NIC in Mezzanine 1A Port 2, select FC A2 from the Tagged Network list.
f. Click Finish.
The following figure shows the associated networks for the server template with OME-M 1.30.00 and later.

Figure 116. Server template network settings - FCoE with OME-M 1.30.00 and later

The following figure shows the associated networks for the server template with OME-M 1.20.10 and earlier.



Figure 117. Server template network settings - FCoE with OME-M 1.20.10 and earlier

Deploy a server template


To deploy the server template, perform the following steps.
NOTE: To deploy a server template with OME-M 1.20.10 and earlier, see the Steps to deploy server template with
OME-M 1.20.10 and earlier section below.
Steps to deploy server template with OME-M 1.30.00 and later
1. From the Templates pane, select the template to be deployed. In this example, the MX740c with FCOE CNA server
template is selected.
2. Click Deploy Template.
3. In the Deploy Template window, complete the following:
a. Click the Deploy to Devices or Attach to Slots button and choose the slots or compute sleds to deploy the template to.
These are the target servers.
b. Select the Do not forcefully reboot the host OS option, then click Next.
c. Keep the other settings at their defaults.
d. From the iDRAC Management IP settings, choose the Don't change IP settings option, then click Next.
e. Choose the Target Attributes as required for your configuration, then click Next.
f. Click to reserve identities from the Identity Pool, then click Next.
4. Click Next then select Run Now.
5. Click Finish.

Steps to deploy server template with OME-M 1.20.10 and earlier


1. From the Deploy pane, select the template to be deployed. In this example, the MX740c with FCOE CNA server template is
selected.
2. Click Deploy Template.
3. From the Deploy Template window, complete the following:
a. Click the Select button to choose which slots or compute sleds to deploy the template to.
b. Select the Do not forcefully reboot the host OS option.
4. Click Next, then select Run Now.
5. Click Finish.
The interfaces on the switch are updated automatically. SmartFabric configures each interface with an untagged VLAN and any
tagged VLANs. Also, SmartFabric deploys the associated QoS settings. See the Networks and automated QoS section for more
information.



To monitor the deployment progress, go to Monitor > Jobs > Select Job > View Details. This shows the progress of the
server template deployment.

Figure 118. Job details displaying deployment of server template

Profile deployment
The PowerEdge MX environment supports profiles with OME-M 1.30.00 and later. OME-M creates and automatically assigns a
profile once a server template is deployed successfully. If a server template is not deployed, OME-M allows the user to create
server profiles and apply them to a compute sled or slot.

Profiles with server template deployment


Once the server template is deployed successfully, OME-M automatically creates a profile. In this example, a profile from the
template MX740c with FCOE CNA has been created and deployed, as shown in the figure below.

Figure 119. Profile created with server template deployment

NOTE: The server template cannot be deleted until it is Unassigned from a profile. To unassign server templates from a
profile, see the Unassign a profile section. To delete a profile, see the Delete a profile section.

Create a profile
If the server template is not deployed, OME-M allows the user to create server profiles and apply them to a compute sled or slot.
To create a profile, perform the following steps:
1. Open OME-M console and select Configuration.
2. From the drop-down menu, select Profiles.
3. From the Profiles pane, choose Create.
4. From Select Template window, choose MX740c with FCOE CNA then click Next.



NOTE: Ensure that you attach the server template to a virtual identity pool. Deploying the profile without an identity
pool attachment will not change the virtual network addresses on the target devices.

Figure 120. Select template under Profiles


5. On the Details tab, enter the Name Prefix, Description, and Profile Count of the profile and click Next.
NOTE: You can create a maximum of 100 profiles at a time.

Figure 121. Details for profiles


6. Select Boot to Network ISO and enter the following file share information.
a. Share Type—Select CIFS or NFS as required
b. ISO Information—Enter the ISO path
c. Share Information—Enter the Share IP Address, Workgroup, Username, and Password
d. Time to Attach ISO—Select the time duration to attach ISO from the drop-down
e. Test Connection—Displays the test connection status
7. Click Next. The iDRAC Management IP tab displays.
8. Click Finish.

View a profile
You can view a profile and its network details under this option. On the Profiles page, select a profile, click View, and select
View Profile. The View Profile wizard is displayed.

View Profile: You can view Boot to Network ISO, iDRAC Management IP, Target Attribute, and Virtual Identities information that is related to the profile.
View Network: You can view Bandwidth and VLANs information that is related to the profile.

Edit a profile
The Edit Profile feature allows the user to change the Profile name, Network options, iDRAC management IP, Target
attributes, and Unassigned virtual identities. The user can edit the profile characteristics that are unique to the device or
slot.
To edit a profile, perform the following steps:
1. From the OME-M console, click Configurations > Profiles and select the profile to be edited.
2. Select Edit > Edit Profile.



3. On the Details tab, edit the name and description of the profile and click Next.

Figure 122. Edit Profile description


4. From the Boot to network ISO tab, edit the information already entered while creating a profile, then click Next.
5. Select the Target IP settings, then select one of the following options:
● Don't change IP settings
● Set as DHCP
● Set static IP
6. Click Next.

Figure 123. iDRAC Management IP settings


7. From the Target Attributes screen, select the components or attributes in the iDRAC, NIC, and System sections to
include in the template, then click Finish.

Figure 124. Edit Target Attributes

Assign a profile
The Assign a profile function allows the user to assign and deploy a profile on target devices.
To assign a profile, perform the following steps:
1. From the OME-M console, click Configurations > Profiles and select a profile to assign.



2. Click Assign.
3. On the Details tab, verify the details and click Next.
4. Select Attach to Slots or Deploy to Devices and click Select Slots or Sleds to choose the target servers.

Figure 125. Deploy Profile screen


5. Select the target server or servers where the profile is being deployed.

Figure 126. Select target servers


6. Choose the Do not forcefully reboot the host OS option and click Next.

Figure 127. Target servers deployed


7. Select Boot to Network ISO, enter the file share information as needed, then click Next.
8. Select iDRAC Management IP settings then click Next.
9. Select the Target Attributes under the iDRAC, NIC, and System options then click Next.
10. Click Run Now or Enable Schedule then click Finish.
NOTE: The Enable Schedule option is disabled for slot-based profile deployment.

CAUTION: When you select Enable Schedule, the profile deployment runs at the scheduled time even if you
have already performed a Run Now action before the schedule. In that case, the Deploy Profile job fails when it
runs at the scheduled time, and an error message is displayed.



Unassign a profile
Use the Unassign a profile function to disassociate profiles from the selected targets.

NOTE: You can only select the profiles that are in an Assigned or Deployed state.

To unassign the profile:


1. From the OME-M console, click Configurations > Profiles then select a profile to unassign.
2. From the Actions menu, click Unassign. The Unassign Profile window displays.
3. In the Unassign profile wizard, the Force Reclaim Identities option is checked by default. This action reclaims the identities
from the device, and the server is forcefully rebooted. All the VLANs configured on the server are removed.
4. Click Finish.
NOTE: The Unassign profile job is not created when the action is performed on the assigned profile that has the Last
Action Status as Scheduled for device-based deployment.

Delete a profile
You can delete profiles that are not running any profile actions and are in the unassigned state. To delete the profile:
1. From the Profiles page, select the profile or profiles that you want to delete and click Delete.
2. Click Yes to confirm the deletion.



Chapter 8: SmartFabric Deployment Validation
Validate the SmartFabric health
The OME-M console can be used to show the overall health of the SmartFabric.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric1 to expand the details of the fabric. The following figure shows the status of the fabric.

Figure 128. Fabric status details


The Overview tab shows the current inventory, including switches, servers, and interconnects between the MX9116n FSEs in
the fabric. The image below shows the SmartFabric switch in a healthy state.

Figure 129. SmartFabric switch inventory

The following image shows the participating servers in a healthy state.

Figure 130. SmartFabric server inventory

The image below shows the content of the Topology tab and the VLTi that the SmartFabric mode created.



Figure 131. SmartFabric Topology overview

Within the Topology tab, you can also view the Wiring Diagram table as shown in the image below.

Figure 132. Wiring Diagram table

Validation of quad-port NIC topologies

Validate with OME-M


Validation of quad-port NICs can be done on OME-M by performing the following steps:
1. Access the OME-M Console.
2. Go to Devices > Compute.
3. Select a compute sled. Choose Hardware and then Network Devices. The following figure shows the quad-port NIC in an
OME-M Console.



Figure 133. Ports on quad-port NIC shows on OME-M UI
4. Expand one of the ports to see details about Product name, Link status, and MAC Address.

Figure 134. Details of quad-port NIC

The Topology view on the Home Screen of the OME-M console shows connections for the quad-port NIC. To access this,
perform the following steps:
a. Access the OME-M Console.
b. Go to Home > View Topology. This will show connections between MX7116n FEMs and MX9116n FSEs, similar to
Two-chassis topology with quad-port NICs – dual fabric on page 34.



Figure 135. View Topology

NOTE: Make sure that the compute sled iDRAC is at the latest version to ensure an accurate Group Topology view.

5. Once connections are established and validated, access the Port Information on I/O Modules by performing the following
steps:
a. Access the OME-M Console.
b. Go to Devices > I/O Modules.
c. Select an IOM > Hardware > Port Information. This shows two port groups each with eight internal ports.
For example, if Compute Sled 1 is configured with a dual-port NIC, then only one port group with eight ports can be seen
on OME-M. These internal ports are numbered 1/71/1 through 1/71/8. For Compute Sled 1 with a dual-port NIC, port
1/71/1 is Up.

Figure 136. Port Information for dual-port NIC

If Compute Sled 1 is configured with a quad-port NIC, then two port groups each with eight ports can be seen on
OME-M. These internal ports are numbered 1/71/1 through 1/71/16. For Compute Sled 1 with a quad-port NIC, ports
1/71/1 and 1/71/9 are Up.



Figure 137. Port Information for quad-port NIC

Validation through switch CLI


The show discovered-expanders command is only available on the MX9116n FSE. It displays all connected MX7116n
FEMs, their service tags, and the associated port groups and virtual slots. With a quad-port NIC,
each MX7116n FEM creates two connections with the MX9116n FSE, on port group 1/1/1 and port group 1/1/7, as shown in the
following output.

MX9116N-1# show discovered-expanders
Service   Model         Type   Chassis       Chassis-slot   Port-group   Virtual
tag                            service-tag                               Slot-Id
--------------------------------------------------------------------------------
D10DXC2   MX7116n FEM   1      SKY002Z       A1             1/1/1        71
D10DXC2   MX7116n FEM   1      SKY002Z       A1             1/1/7        71
D10DXC4   MX7116n FEM   1      SKY003Z       A1             1/1/2        72

Validating Ethernet - No Spanning Tree uplinks


If using Ethernet – No Spanning Tree uplinks, use the CLI commands in this section to validate the configuration.

show port-channel summary on MX9116n FSE


From the MX I/O module, use the show port-channel summary command to confirm that the port-channel for the
uplink is created with spanning tree disabled on the MX switches.

MX9116N-A1# show port-channel summary
Flags: D - Down  I - member up but inactive  P - member up and active  U - Up
       (port-channel)  F - Fallback Activated
--------------------------------------------------------------------------------
Group   Port-Channel            Type   Protocol   Member Ports
--------------------------------------------------------------------------------
2       port-channel2 (U)       Eth    DYNAMIC    1/1/41(P)
1000    port-channel1000 (U)    Eth    STATIC     1/1/37(P) 1/1/38(P) 1/1/39(P) 1/1/40(P)

Upstream switch validation - SmartFabric OS10

show port-channel summary


From the upstream switch, run the show port-channel summary CLI command to verify that the port-channel is up
and that no STP BPDUs are received on the upstream switch. This command is for SmartFabric OS10. If the upstream switch is
not running OS10, run the equivalent command for that switch model.

Figure 138. show port-channel summary command

show spanning-tree interface port-channel


From the upstream switch, run the show spanning-tree interface port-channel CLI command to verify that no
BPDUs are received on the port.

Figure 139. show spanning-tree interface port-channel command



show running-configuration interface port-channel
From the upstream switches, use the show running-configuration interface port-channel CLI command to
verify that spanning tree is enabled on the port-channel interface. Then run the show lldp neighbors command to verify
that no BPDU packets are received on the interface.

Figure 140. show running-configuration interface port-channel command



show lldp neighbors
After running the show running-configuration interface port-channel command above, use the show lldp
neighbors CLI command to verify that no BPDU packets are received on the interface.

Figure 141. show lldp neighbors command

Upstream switch validation - Cisco

show port-channel summary


From an upstream Cisco Nexus switch, run the show port-channel summary CLI command to verify port-channel is up
and running and no STP BPDUs are received on the upstream switch.



Figure 142. show port-channel summary



show running-configuration interface port-channel
Run the show running-config interface port-channel {port-channel ID} command on the Cisco Nexus to
show the spanning tree configuration.

Figure 143. show running-config interface port-channel command



Chapter 9: SmartFabric Operations

Viewing SmartFabric health and status


View the SmartFabric using OME-M. The green checkmark next to the fabric name indicates that the fabric is in a healthy state.
In this example, the fabric created is named Fabric01.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. To view the Fabric components, select the fabric. This can also be achieved by clicking the View Details button on the
right.

Figure 144. SmartFabric details screen


Fabric components include:
● Uplinks
● Switches
● Servers
● ISL links
Uplinks connect the MX9116n switches with upstream switches. In this example, the uplink is named as Uplink1.

Figure 145. Uplinks information within Fabric Details



Switches lists the I/O modules that are part of the fabric. In this example, the fabric has two MX9116n switches.

NOTE: Fabric Expander Modules are transparent and therefore do not appear on the Fabric Details page.

Figure 146. Switches listing within Fabric Details

Servers lists the compute sleds that are part of the fabric. In this example, two PowerEdge MX740c compute sleds are part of
the fabric.

Figure 147. Servers listing within Fabric Details

ISL Links lists the VLT interconnects between the two switches. The ISL links must be connected on port groups 11 and 12 on
MX9116n switches, and ports 9 and 10 on MX5108n switches.

CAUTION: This connection is required. Failure to connect the defined ports results in a fabric validation error.

Figure 148. ISL Links within Fabric Details

Edit a SmartFabric
A fabric has four components:
● Uplinks
● Switches
● Servers
● ISL Links
To edit the fabric that is discussed in this section, edit the fabric name and description using the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. On the right, click the Edit button.



Figure 149. Edit fabric name and description screen
4. In the Edit Fabric dialog box, change the name and description, then click Finish.

Edit uplinks
Perform the following steps to edit uplinks on a SmartFabric:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select the fabric.
4. Select the Uplink to edit and click Edit.
NOTE: In this example, Uplink1 is selected.
5. In the Edit Uplink dialog box, modify the Name and Description as necessary.
NOTE: The uplink type cannot be modified once the fabric is created. If the uplink type must be changed after the
fabric is created, delete the uplink and create a new uplink with the wanted uplink type.

Figure 150. Edit Uplink dialog box

NOTE: The Include in Uplink Failure Detection Group box under Uplink Type will only be seen on OME-M 1.20.00 and
later.
6. Click Next.



7. Edit the uplink ports on the MX switches that connect to the upstream switches. In this example, ports 41 and 42 on the
MX9116n switches connect to the upstream switches and are displayed.
NOTE: Carefully modify the uplink ports on both MX switches. Select the IOM to display the respective uplink switch
ports.

Figure 151. Edit uplink ports and VLAN networks


8. If necessary, modify the tagged and untagged VLANs.
NOTE: If you have changed OME-M to use a VLAN other than the default, make sure that you do not add that VLAN to
an uplink.

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.
9. Click Finish.

Edit VLANs
The following sections describe this task for deployed servers with different versions of OME-M.

Edit VLANs on deployed servers with OME-M 1.20.00 and later


OME-M 1.20.00 adds the ability to edit VLANs on multiple servers at the same time. This section describes how to edit VLANs
and deploy settings from a reference server to multiple target servers in SmartFabric mode. After the SmartFabric and server
templates have been deployed, network settings can be changed using the following instructions.
1. Open the OME-M console.
2. From the navigation menu, click Device > Fabric.
3. Select the fabric.
4. Select Servers from the left pane.
5. Choose Edit Networks.



Figure 152. Edit Networks
6. Select the Reference Server and click Next. The reference server settings will be deployed to one or more target servers in the
fabric. In this example, Sled-1 is chosen as the Reference Server.

Figure 153. Select Reference Server


7. Choose NIC teaming from LACP, No Teaming, and Other options.
8. Modify the VLAN selections as required by defining the tagged and untagged VLANs.
9. Select VLANs on Tagged and Untagged Network for each Mezzanine card port. Click Next.

Figure 154. Modify VLANs


10. Select Target Server(s).
11. To select multiple servers, click Add, choose the servers from the list, and then click Add again.



Figure 155. Select multiple target servers
12. Select the servers.

Figure 156. Select target servers


13. Click Finish.
NOTE: VLAN settings will be pushed to the selected servers and will overwrite any existing settings.

Edit VLANs on a deployed Server with OME-M 1.10.20 and earlier


NOTE: Instructions in this section are supported up to OME-M 1.10.20. If you are using OME-M firmware 1.20.00 or later,
follow the instructions in the previous section.
The OME-M Console is used to add/remove VLANs on the deployed servers in a SmartFabric. Perform the following steps to
add/remove VLANs on the deployed servers.
1. Open the OME-M console.
2. From the navigation menu, click Device > Fabric.
3. Select the fabric.
4. Select Servers from the left pane.

Figure 157. Add and remove VLANs


5. Choose the desired server. In this example, the PowerEdge MX740c with service tag 8XQP0T2 is selected.



6. Choose Edit Networks.
7. Choose NIC teaming from LACP, No Teaming, and Other options.
8. Modify the VLAN selections as required by defining the tagged and untagged VLANs.
9. Select VLANs on Tagged and Untagged Network for each Mezzanine card port.
10. Click Save.

Figure 158. Modify VLANs

NOTE: Only one server can be selected at a time in the UI.

Delete SmartFabric
To remove the SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric.
4. Click the Delete button.
5. In the delete fabric dialog box, click Yes.
All participating switches reboot to Full Switch mode.
CAUTION: Any configuration that was not performed through the OME-M console is lost when switching between IOM
operating modes.
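After the reboot, the operating mode of each IOM can be confirmed from the OS10 CLI; a quick check, with the expected output after fabric deletion:

OS10# show switch-operating-mode

Switch-Operating-Mode : Full Switch Mode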

Connect non-MX Ethernet devices to a SmartFabric


As of SmartFabric OS10.5.0.1 and OME-M 1.10.00, PowerEdge MX Ethernet switches allow the connection of non-MX devices
such as rack servers to the fabric, so long as the device provides a physical interface that is supported by the switch. Once
connected, VLANs must be assigned to each port to which the device is connected. This capability does not allow non-MX
devices to support FCoE.
To connect a non-MX device to a switch running in SmartFabric mode, perform the following steps:
1. Open the OME-M console.
2. To configure the breakout on the port-group, see the Configure uplink port speed or breakout on page 96 section, if needed.
3. Once the breakout on the port-group is complete, select the port.
NOTE: Make sure that the port is not in use for any other purpose.
4. Click Edit VLANs and then select Default VLAN 1, which is shown as Untagged Network in the example below.
5. Select any of the other VLANs as the Tagged Network.



Figure 159. Selection of VLANs in Edit VLANs section
6. Click Finish.
7. Repeat these steps for any other port or IOM.

Expanding from a single-chassis to dual-chassis configuration
Starting with OME-Modular 1.20.00 and OS10.5.0.7, a single MX7000 chassis with a pair of MX9116n switches can be expanded
to two MX7000 chassis with MX9116n FSEs and MX7116n FEMs while running in SmartFabric mode. As shown in the following
steps, this process does not require any reconfiguration, is not destructive, and can be performed with the system online as long
as network redundancy is configured correctly.
NOTE: Before beginning this process, ensure that server redundancy is configured and working correctly. While this
process is not destructive, it will disrupt the network path for NIC ports connected to the switch being moved.

Step 1: Cable Management module


Connect network cables to the MX7000 Management Modules on both chassis. For more information on Management Module
cabling, see the PowerEdge MX Chassis Management Networking Cabling White Paper.
NOTE: Make sure both chassis are powered on and that an IP address is assigned to the Management Module using the LCD
panel or KVM ports.

Step 2: Create Multichassis Management Group


Create a Multichassis Management (MCM) Group on the single MX chassis configuration. For a scalable fabric that uses more
than one MX chassis, the chassis must be in an MCM Group.

NOTE: This step can be skipped if MCM Group is already created.

Step 3: Add second MX Chassis to the MCM Group


Perform the following steps:
1. Access the OME-M UI.
2. Select Chassis. Choose Configure > Add member.
3. Select the second MX7000 Chassis from the available chassis to be added as a member to the existing MCM group.
4. Click Finish.



Step 4: Move MX9116n FSE from first chassis to second chassis
Access OME-M UI from the lead MX Chassis. Choose I/O Modules under Devices.

Figure 160. Select IOM under Devices > I/O Modules

Select I/O Module in slot A2 from the first chassis. Power off the IOM from the Power Control drop-down menu.

Figure 161. Power off the IOM



1. Once the MX9116n FSE in Chassis 1-Slot A2 is powered off, physically move the switch to Slot A2 of the second MX7000
chassis, but do NOT insert it completely at this time.
2. Insert an MX7116n FEM in Chassis 1-Slot A2 and another FEM in Chassis 2-Slot A1.
3. Connect QSFP28-DD cables between the FSE and FEM, as shown in the following figure.
NOTE: The following diagram shows the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagram does not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.

Figure 162. Connection between FSE and FEM


4. Once cabled, fully insert the MX9116n FSE in Chassis 2-Slot A2 and it will power on automatically.
5. These steps can be repeated for IOMs in slots B1/B2 as well.

Step 5: Validation
Perform the following steps to validate the environment.
1. Make sure that all MX9116n FSEs and MX7116n FEMs on both chassis appear in the OME-M UI. Restart the second MX9116n
FSE if you do not see it in the correct chassis.
2. Check the SmartFabric configuration to ensure that nothing has changed.
3. Make sure all internal switch ports on the MX9116n FSE and MX7116n FEMs are enabled and up. Check link lights for the
external ports to make sure that they are illuminated.
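The FEM discovery and port status can also be confirmed from the OS10 CLI of each MX9116n FSE using read-only commands described later in this guide, for example:

MX9116N-1# show discovered-expanders
MX9116N-1# show interface status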

SmartFabric mode IOM replacement process


NOTE: The Dell PowerEdge MX platform gives you the ability to replace an I/O module in a SmartFabric if required. The
process used depends on the version of OS10 installed and should be run with Dell Technical Support engaged before
starting and throughout the process of IOM replacement. For technical support, go to https://www.dell.com/support or
call (USA) 1-800-945-3355.

NOTE: A new replacement IOM will have a factory default configuration. All port interfaces in the default configuration are
in the no shutdown state.

With OME-M 1.30.00 and later, the Dell PowerEdge MX platform gives you the option to replace an I/O module in SmartFabric
mode through the OME-M console in the case of persistent errors or failures. This process can only be performed in OME-M
after the SmartFabric is created.
Prerequisites:
● The MX9116n FSE and MX5108n can only be replaced with another I/O Module of the same type. Ensure that you have the
same Dell SmartFabric OS10 version on the switch that is to be replaced, and on the new switch.
● The replacement IOM must be a new device within the chassis deployment. Do not use an IOM that was previously deployed
within the MCM group.
● The other IOM in SmartFabric mode must be up, running, and healthy; otherwise a complete traffic outage may occur.



NOTE: OS10 is factory-installed in the MX9116n FSE or MX5108n Ethernet Switch. If the faulty IOM has an upgraded
version of OS10, you must upgrade the new IOM to the same version.
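As a quick check, the installed OS10 release can be displayed from the CLI of each switch and compared before proceeding, for example:

OS10# show version

Upgrade the new IOM first if the reported versions differ.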
To replace the IOM through OME-M, follow the steps provided in this section.
CAUTION: Carefully follow the steps indicated in the OME-M prompts. Performing the steps out of order or
missing a step could cause a failure and may require a replacement of the switch.
1. Open OME-M console.
2. From Navigation pane, choose Devices > Fabric.
3. Select the already created Fabric and select the Replace Switch option.

Figure 163. Replace Fabric Switch Introduction screen


4. Click Next.
5. Copy the Current Running Configurations from the switch that is to be replaced.
NOTE: See the Dell SmartFabric OS10 User Guide for more information. Find the relevant version of the User Guide in
the OME-M and OS10 compatibility and documentation table.
6. Click Next.

Figure 164. Copy Current Configuration screen


7. Carefully remove the cables from the switch that is to be replaced.
8. Remove the switch that is to be replaced from the chassis and insert the new switch in the same slot.
CAUTION: Do not connect the cables yet. Wait for the switch to boot and ensure that the OS10 version on
new switch is same as the switch that is being replaced.
9. Confirm the OS10 version on OME-M then click Next.



Figure 165. Replace Switch Hardware screen
10. Configure the new switch and apply the settings that were copied from the switch that is being replaced.
NOTE: For more information about the application of the settings from the switch that is being replaced to the new
switch, see the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

CAUTION: Do not connect the cables at this time.


11. After you have configured the software settings and have verified the configuration of the new switch, click to place a
check in the Confirm New Switch Settings box, then click Next.

Figure 166. Configure New Switch screen


12. From the Activate New Switch screen, click the drop-down to select the Old Switch and New Switch in the fields
provided.



Figure 167. Activate New Switch screen
13. After you have confirmed that each of the steps required to recreate the SmartFabric using the new switch is complete,
click to place a check in the Confirm SmartFabric Configuration box.
14. Click Finish, then click Yes to complete the process.

MXG610 Fibre Channel switch module replacement process
NOTE: The Dell PowerEdge MX platform gives you the ability to replace an I/O module in a SmartFabric if required. This
process depends on the operating system version that is installed and should be run with Dell Technical Support engaged
before starting and throughout the process of IOM replacement. For technical support, go to https://www.dell.com/
support or call (USA) 1-800-945-3355.

NOTE: Before beginning this process, you must have a replacement switch module or filler blade available. Never leave
the slot on the blade server chassis open for an extended time period. To maintain proper airflow, fill the slot with either a
replacement switch module or filler blade.
1. Back up the switch module configuration to an FTP or TFTP server using the configUpload command; a brief sketch
follows these steps. The configUpload command uploads the switch module configuration to the server and makes it available
for downloading to the replacement switch module if necessary. To ensure that a complete configuration is available for
downloading to a replacement switch module, back up the configuration regularly.
2. Stop all activities on the ports that the switch module uses. To verify that there is no activity, view the switch module LEDs.
3. Disconnect all cables from the SFP+/QSFP ports and remove the SFP+ or QSFP optical transceivers from the switch
module external ports.
4. Press the Release latch and gently pull the release lever out from the switch module.
5. Slide the switch module out of the I/O module bay and set it aside.
6. Insert the replacement switch module in the I/O module bay of the blade server chassis.
NOTE: Complete this step within 60 seconds.
7. Insert the SFP+ or QSFP optical transceivers.
8. Reconnect the cables and establish a connection to the blade server management module.
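The configUpload command on the MXG610s runs interactively and prompts for the protocol, server address, user name, file path, and password; a minimal sketch of the invocation (server details are supplied at the prompts):

switch:admin> configupload

Select the protocol supported in your environment and provide the server and file details when prompted.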

Chassis Backup and Restore


Backing up the configurations on the IOMs is supported in two ways:
● Chassis backup for SmartFabric
● Manual backup through the CLI



NOTE: The Chassis backup for SmartFabric does not provide backup for Ethernet switch settings like Hostname,
Password, Management network, Spanning tree configurations, and other CLI configurations. Manual backup through the
CLI is also recommended when performing a chassis backup.

Backing up the chassis


Back up the chassis and compute sled configuration for later use. To back up the chassis, you must have administrator access
with the device configuration privilege. The chassis configuration contains the following settings:
● Application settings
○ Setup configuration
○ Power configuration
○ Chassis network configuration
○ Local access configuration
○ Location configuration
○ Slot configuration
○ OME Modular network settings
○ Users settings
○ Security settings
○ Alert settings
● System configuration
○ Templates
○ Profiles
○ Identity pools and VLANs
● Catalogs and baselines
● Alert policies
● SmartFabric
● MCM configuration
NOTE: Backup and Restore operations are supported in FIPS-enabled configuration. The FIPS attribute is not part of
backup files by default. You must toggle the required FIPS mode before initiating the restore process.
You can use the backed-up configuration in other chassis.
To create a chassis backup:
1. Manually back up all IOM startup configurations. Refer to Manual backup of IOM configuration through the CLI.
2. On the chassis Overview page, click More Actions > Backup.

The Chassis Backup window is displayed.


3. On the Introduction section, read the process and click Next.
The Backup File Settings section is displayed.
4. In Backup File Location, select the Share Type where you want to store the chassis backup file.
The available options are:
● CIFS
● NFS



5. Enter the Network Share Address and Network Share Filepath.
6. Enter a name for the Backup File.
The backup file name should not contain a file extension. It can contain alphanumeric characters and the special characters
hyphen (-), period (.), and underscore (_).
7. If the Share Type is CIFS, enter the Domain, User Name, and Password. Else, go to step 8.
8. In the Sensitive Data section, select the Include Passwords check box to include passwords in the backup. These
passwords are encrypted and are applied when the backup file is restored on the same chassis. For additional information on
Sensitive Data, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
9. In Backup File Password, enter the Encryption Password and Confirm Encryption Password.
The backup file is encrypted and cannot be edited.

NOTE: The password must be 8 to 32 characters long and must be a combination of an uppercase, a lowercase, a
special character (+, &, ?, >, -, }, |, ., !, (, ', ,, _, [, ", @, #, ), *, ;, $, ], /, §, %, =, <, :, {, I) , and a number.

10. In the Ethernet Switch Backup Instructions section, select the check box to confirm the Ethernet switch backup settings.
NOTE: Chassis backup is not supported on Ethernet switch settings like Hostname, Password, Management network,
Spanning tree configurations, IOMs that are in full switch mode, and some CLI configurations. For the list of CLI
configurations that are not supported, find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.



NOTE: Back up the IOM startup.xml file from all the IOMs before you perform a chassis backup. See Manual backup
of IOM configuration through the CLI.

11. Click Finish. Click Learn More to see more information about the Ethernet switch backup instructions.
A message is displayed indicating that the backup is successful, and the chassis Overview page is displayed.
You can check the status and details of the backup process on the Monitoring > Jobs page.
NOTE: Backup and restore operations cannot be performed when you have initiated any job and the job status is
in-progress.

Sensitive data
This option allows you to include passwords while taking the backup.
If you do not select the Include Password option, passwords for the following components are not included.

Table 23. Sensitive data

Category | Description
Network | Proxy password
Alerts | Email username and password
Alerts | SNMP Destination V3 user credentials
Network Services | SNMP Agent V3 user credentials
Local Access: Power Button | Disabled button LCD override PIN
Catalogs | CIFS or HTTPS username and password
Templates* | All user created templates
Users | AD or LDAP password and bind password
Users | OIDC registration username and password

*The secured attributes for templates include the following:


iDRAC Config
● USB Management
○ USB 1 Password for zip file on USB
● RAC Remote Hosts
○ RemoteHosts 1 SMTP Password
● Auto Update
○ AutoUpdate 1 Password
○ AutoUpdate 1 ProxyPassword
● Remote File Share
○ RFS 1 Remote File Share Password
● RAC VNC Server
○ VNCServer 1 Password
● SupportAssist
○ SupportAssist 1 Default Password
● LDAP
○ LDAP 1 LDAP Bind Password



Restoring chassis
You can restore the configuration of a chassis using a backup file. You must have the chassis administrator role with the device
configuration privilege to restore the chassis.
Catalogs and associated baselines cannot be restored when downloads.dell.com is not reachable. Catalogs with proxy settings
cannot be restored on a different chassis because the proxy password is not restored there, which leaves downloads.dell.com
unreachable. Configure the proxy password manually and then rerun all catalog and baseline jobs to complete the restore
process. If the source of a catalog is a validated firmware, you must manually re-create the catalog and all baselines that are
associated with the catalog to complete the restoration.
Based on the HTTPS network share configuration, catalogs on HTTPS shares are restored with or without a password after a
backup file excluding sensitive data is restored. If entering the username and password for the HTTPS share is not mandatory,
the catalog is restored; otherwise, the catalog is restored with the job status "failed". Enter the username and password
manually after the restore task for the status to display as "completed".
SmartFabric restore operation is not supported if:
● It is restored on a different chassis.
● There is any difference between the current setup of the IOM hardware and the backup file.
NOTE: The chassis backup and restore feature is supported only if the OME-M firmware version in the backup file and the
chassis during the restore process are identical. The restore functionality is not supported if the OME-M versions do not
match.
To restore a chassis:
1. Ethernet switch settings that are associated with a SmartFabric must be restored prior to starting the MX Chassis restore
process. Refer to Manual backup of IOM configuration through the CLI.
2. To ensure the restored startup configuration is loaded into the running configuration, reload the IOMs immediately after
restoring the startup configuration.
NOTE: The running configuration is automatically written to the startup.xml every 30 minutes. Reloading the IOM
immediately after each startup configuration restore avoids the startup.xml being overwritten.

3. On the chassis Overview page, click More Actions > Restore.


The Restore Chassis window is displayed.
4. On the Introduction section, read the process and click Next.
The Upload File section is displayed.
NOTE: Click Learn More to see more information about the Ethernet switch restore. The restore process must be
completed as part of step 1.
5. Under Restore File Location, select the Share Type where the configuration backup file is located.
NOTE: If the current MM link configuration setup is different from the backup file, you must match the TOR (Top of
Rack) connection to the MM link configuration before the restore operation.

NOTE: During the SmartFabric restore, all the IOMs are converted to the operating mode specified in the backup file.

NOTE: All the IOMs that go through the fabric restore are reloaded. The IOMs are reloaded twice if the operating mode in
the backup file differs from the current mode of the IOM.

6. Enter the Network Share Address, and Network Share Filepath where the backup file is stored.
7. Enter the name of the Backup File and Extension.



8. If the Share Type is CIFS, enter the Domain, Username, and Password to access the shared location. Else, go to step 9.
9. In the Restore File Password section, enter the Encryption Password to open the encrypted backup file.
NOTE: The password must be 8 to 32 characters long and must be a combination of an uppercase, a lowercase, a
special character (+, &, ?, >, -, }, |, ., !, (, ', ,, _, [, ", @, #, ), *, ;, $, ], /, §, %, =, <, :, {, I) , and a number.

NOTE: If the restore operation is done excluding passwords or on a different chassis with proxy settings, the proxy
dependent tasks like the repository task try to connect to the external share. Rerun the tasks after configuring the
proxy password.

10. Click Validate to upload and validate the chassis configuration.


The Optional Component section is displayed.
11. (Optional) From the Optional Components section, choose the components that you want to restore.
● Restore File Validation Status—Displays the validation status of the restore files.
NOTE: The status indicates whether the restore file validation status is successful. If the validation is not successful,
an error message is displayed with the recommended action.
● Optional Components—Displays the components that you can select for the restore operation. The available options
are:
○ Templates, Profiles, Identity Pools, and VLAN Configurations
○ Application and Chassis Settings
○ Catalogs and Baselines
○ Alert Policies
○ SmartFabric Settings
NOTE: The list of Optional Components is based on the backup chassis settings. The components that are not
part of the chassis backup are listed under the Unavailable Components section below.
● Mandatory Components—Displays the mandatory components for the restore operation.
● Unavailable Components—Displays the components that are unavailable for the restore operation.

Figure 168. Restore chassis optional components



Figure 169. Restore chassis confirmation

12. Click Restore to restore the chassis.

Manual backup of IOM configuration through the CLI


The running configuration contains the current OS10 system configuration and consists of a series of OS10 commands. Copy the
configuration to a remote server or local directory as a backup or for viewing and editing. The running configuration is copied as
a text file that you can view and edit with a text editor.
Manual backup of IOM configuration provides a backup of the running configuration. To back up the chassis, including the
SmartFabric settings, use the instructions in Backing up the chassis on page 144.
Copy running configuration to startup configuration
To display the configured settings in the current OS10 session, use the show running-configuration command. To save new
configuration settings across system reboots, copy the running configuration to the startup configuration file.

OS10# copy running-configuration startup-configuration

Back up startup file to local directory

OS10# copy config://startup.xml config://backup-9-28.xml

Restore startup file from backup

OS10# copy config://backup-9-28.xml config://startup.xml


OS10# reload
System configuration has been modified. Save? [yes/no]:no

There are several options to copy files from the IOM to a remote server through many protocols. These options can be found in
the Dell SmartFabric OS10 User Guide.
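For example, the running configuration might be copied directly to a remote host over SCP; this is a sketch in which the server address, credentials, and path are placeholders:

OS10# copy running-configuration scp://admin:password@192.168.1.100/backups/mx9116n-a1-config.txt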



10
General Troubleshooting
View or extract logs using OME-M
This section briefly describes a method for collecting Extract Logs to troubleshoot any hardware or firmware issues in an
MX environment. Dell PowerEdge MX7000 comes with a Management Module that provides chassis management. An integral
feature of the management firmware is to keep a detailed log of events from managed devices and software events in the
management firmware. Firmware logs collected from Management Module components, which can be used for troubleshooting,
are grouped as Extract Logs.
It is important to note that Extract Logs are on-demand (user-initiated) from the Management Module and are always stored in
a network share that the customer configures.
For step-by-step instructions about how to view and collect these logs, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.

Troubleshooting MCM topology errors


The OME-M console can be used to show the physical cabling of the SmartFabric.
1. Open the OME-M console.
2. In the left navigation panel, click View Topology.
3. Click the lead chassis and then click Show Wiring.
4. To show the cabling, click the light-blue checkmark icons.

Figure 170. SmartFabric cabling


The following figure shows the validation errors displayed when a VLTi cable is connected incorrectly.



Figure 171. SmartFabric cabling error

Troubleshooting VLT and vPC configuration on upstream switches
Configuring a single VLT domain with Dell upstream switches or a single vPC domain with Cisco upstream switches is required.
Creating two VLT/vPC domains may cause a network loop. See Scenario 1: SmartFabric deployment with S5232F-ON upstream
switches with Ethernet - No Spanning Tree uplink on page 179 and Scenario 2: SmartFabric connected to Cisco Nexus 3232C
switches with Ethernet - No Spanning Tree uplink on page 183 for the topology that is used in the deployment example.
The following example shows a mismatch of the VLT domain IDs on VLT peer switches. To resolve this issue, ensure that a
single VLT domain is used across the VLT peers.

S5232-Leaf1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 1.0
Local System MAC address : 4c:76:25:e8:f2:c0

S5232-Leaf2# show vlt 30


Domain ID : 30
Unit ID : 1
Role : primary
Version : 1.0

The following example shows a mismatch of the vPC domain IDs on vPC peer switches. To resolve this issue, ensure that a
single vPC domain is used across the vPC peers.

Nexus-3232C-Leaf1# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 1
Peer status : peer link is down
vPC keep-alive status : peer is alive, but domain IDs do not match



---- OUTPUT TRUNCATED -----

3232C-Leaf2# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 255
Peer status : peer link is down
vPC keep-alive status : peer is alive, but domain IDs do not match
---- OUTPUT TRUNCATED -----

Troubleshooting FEM and compute sled discovery


Verify the following if server or FEM discovery does not happen:
● Verify that the compute sled is properly seated in the compute slot in the MX7000 chassis.
● Verify that at least one compute sled in the chassis is powered on.
● If the connected FSE port does not show a link up, toggle the auto negotiation settings for that port.
● Confirm that all of the firmware on the compute sleds is up to date and aligned with the installed MX baseline.
● If a QLogic/Marvell 41262 or 41232 adapter is used in the compute sled, the link speed setting on the adapter should be set
to SmartAN.
● Check the Topology LLDP settings. You can verify the settings by selecting iDRAC Settings > Connectivity from the
iDRAC UI that is on the compute sled. Check that this setting is set to Enabled as shown in the following figure.

Figure 172. Ensure that Topology LLDP is enabled

Troubleshooting FC and FCoE


When troubleshooting FC and FCoE, consider the following:
● Verify that the firmware and drivers are up to date on the CNAs.
● Check the support matrix to confirm that the CNAs are supported by the storage that is used in the deployment. For the
support matrix for Dell storage platforms, see the following:
○ Dell Technologies E-Lab Navigator
○ Dell Storage Compatibility Matrix for SC Series, PS Series, and FS Series storage solutions
● Verify that port group breakout mode is configured correctly.
● Ensure that the FC port-groups that are broken out on the unified ports in the MX9116n switches are set administratively up
after the ports are changed from Ethernet to FC.



● MX9116n switches operating in SmartFabric mode support various commands to verify the configuration. Use the following
commands to verify FC configurations from the MX9116n CLI:

OS10# show fc
alias Show FC alias
ns Show FC NS Switch parameters
statistics Show FC Switch parameters
switch Show FC Switch parameters
zone Show FC Zone
zoneset Show fc zoneset

● Use the following commands to verify FCoE configurations from the MX9116n CLI:

OS10# show fcoe


enode Show FCOE enode information
fcf Show FCOE fcf information
sessions Show FCOE session information
statistics Show FCOE statistics information
system Show FCOE system information
vlan Show FCOE vlan information

● Verify that the FC ports are up, for example:

OS10# show interface status | grep 1/43


Fc 1/1/43:1 up 16G auto -
Fc 1/1/43:2 up 16G auto -
Fc 1/1/43:3 down 0 auto –
Fc 1/1/43:4 down 0 auto –

The show vfabric command output provides various information including the default zone mode, the active zone set, and
interfaces that are members of the vfabric.

OS10# show vfabric


Fabric Name New vfabric
Fabric Type FPORT
Fabric Id 1
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Switch Config Parameters
==========================================
Domain ID 1
==========================================
Switch Zoning Parameters
==========================================
Default Zone Mode: Allow
Active ZoneSet: None
==========================================
Members
fibrechannel1/1/44:1
ethernet1/1/1
ethernet1/71/1
ethernet1/71/2

The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

NOTE: Due to the width of the command output, each line of output is shown on two lines below.

OS10# show fcoe sessions


Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE
MAC FC-ID PORT WWPN PORT WWNN
-----------------------------------------------------------------------------------------
----------------------------------------------------------------
06:c3:f9:a4:cd:03 Eth 1/71/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:01:00 01:01:00 20:01:06:c3:f9:a4:cd:03 20:00:06:c3:f9:a4:cd:03
f4:e9:d4:73:d0:0c Eth 1/1/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:02:00 01:02:00 20:01:f4:e9:d4:73:d0:0c 20:00:f4:e9:d4:73:d0:00

NOTE: For more information about FC and FCoE, find the relevant version of the Dell SmartFabric OS10 User Guide in the
OME-M and OS10 compatibility and documentation table.

Rebalancing FC and FCoE sessions


Beginning with OME-M 1.20.00 and OS10.5.0.7, the ability to rebalance FC and FCoE sessions across FC uplinks has been
added. This can be validated in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode on
page 198.
The system performs an end-node based rebalancing when the CLI command is run. The factors for rebalancing are the current
session count on the uplink, which includes Fabric Login (FLOGI) and Fabric Discovery (FDISC) requests, and the speed of the
uplink. A rebalance can be applied once the FC fabric is up and running and uplink sessions are established.
Prior to the release of Dell SmartFabric OS10.5.1, NPG implementations exposed one Fibre Channel Forwarder (FCF) for each
physical FC uplink to end nodes. Starting with Dell SmartFabric OS10.5.2.4, all physical uplink interfaces within a vFabric are
represented as a single logical FCF. This improves session management and failover as the CNA no longer has to select a
different FCF during a link event.

Requirements and configuration guidelines


When a new physical uplink is added to a vFabric operating in NPG mode, or when a physical uplink with established FC/FCoE
sessions goes down, the system goes into an unbalanced state. A manual rebalance can be performed when the system is found
to be unbalanced.
The newly added uplink must be operationally up before the rebalance is triggered. When an uplink goes down, all the sessions
associated with that uplink are interrupted, then reestablished and load balanced across the other available uplinks.
Rebalancing is done at the vFabric level.
NOTE: Sessions that are interrupted and reestablished appear to the host as an FC path failure until the session is
reestablished. Ensure that MPIO functionality on the host is operational before performing the rebalance.
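As an example, on a Linux host using native device-mapper multipathing, path health might be confirmed before the rebalance; host tooling varies, and this assumes the multipath-tools package is in use:

# multipath -ll

All expected paths should be shown in an active state before proceeding.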
Because FC session rebalancing is path disruptive, the command provides the ability to perform a dry run to provide a list of
servers that will be affected.
Below are the steps to perform rebalancing of uplinks.

System in unbalanced state


Run the show npg device brief and show npg uplink-interfaces commands to see the unbalanced state of
the system. In the following figure, interface Fc 1/1/23 has two FC sessions and interface Fc 1/1/24 has none.

Figure 173. System in unbalanced state



Perform a dry run of the rebalance command
To preview the changes that a rebalance will make before making them, run the re-balance npg sessions
vfabric 10 dry-run command.

Figure 174. Review the rebalance using the dry-run command



Run rebalance command
To perform the rebalance, run the command re-balance npg sessions vfabric 10.

Figure 175. Rebalance with actual run

NOTE: Once the rebalance is complete, a syslog message is generated on the MX console.
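In summary, a typical rebalance session from the FSE CLI looks like the following sketch, assuming vFabric ID 10 as in this example:

OS10# show npg device brief
OS10# show npg uplink-interfaces
OS10# re-balance npg sessions vfabric 10 dry-run
OS10# re-balance npg sessions vfabric 10
OS10# show npg uplink-interfaces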

System in a balanced state


The following figure shows the system in a balanced state. Interface Fc 1/1/23 now has one FC session and interface Fc
1/1/24 has one.

Figure 176. System in balanced state



Beginning with Dell SmartFabric OS10.5.2, the show npg uplink-interfaces command adds an fcf-info option
that displays the FCF availability status, the fabric name of the connected upstream FC switch, the error reason, the
remaining FCF advertisement delay timeout, and the duplicate FC ID assignment counter.

MX9116N-A1# show npg uplink-interfaces fcf-info


Vfabric-Id : 10
FAD Timeout Left : 0 second(s)
FCF Availability Status : Yes
Uplink Duplicate
Intf Upstream Fabric-name Error Reason FC Id(s)
--------------------------------------------------------------------------
Fc 1/1/24 10:00:14:18:77:20:7f:cf NONE 0
Fc 1/1/23 10:00:14:18:77:20:7f:cf NONE 0

Common CLI troubleshooting commands for Full Switch and SmartFabric modes
show switch-operating-mode
Use the show switch-operating-mode command to display the current operating mode:

MX9116N-1# show switch-operating-mode

Switch-Operating-Mode : Smart Fabric Mode

show discovered-expanders
The show discovered-expanders command is only available on the MX9116n FSE. It displays the MX7116n FEMs
attached to the MX9116n FSE, along with the FEM service tag, the associated port group, and the virtual slot.

MX9116N-1# show discovered-expanders


Service Model Type Chassis Chassis-slot Port-group Virtual
tag service-tag Slot-Id
--------------------------------------------------------------------------
D10DXC2 MX7116n 1 SKY002Z A1 1/1/1 71
FEM

show unit-provision
The show unit-provision command is only available on the MX9116n FSE and displays the unit ID, the provision name, and
the discovered name of the MX7116n FEM that is attached to the MX9116n FSE.

MX9116N-1# show unit-provision


Node ID | Unit ID | Provision Name | Discovered Name | State |
---------+---------+---------------------------------+-------|
1 | 71 | D10DXC2 | D10DXC2 | up |

show lldp neighbors


The show lldp neighbors command shows information about LLDP neighbors. The iDRAC that is in the PowerEdge MX
compute sled produces LLDP topology packets that contain specific information that the SmartFabric Services engine uses to
determine the physical network topology regardless of whether a switch is in Full Switch or SmartFabric mode. For servers that
are connected to switches in SmartFabric mode, the iDRAC LLDP topology feature is required. Without it, the fabric does not
recognize the compute sled and the user cannot deploy networks to the sled.



The iDRAC MAC address can be verified by selecting iDRAC Settings > Overview > Current Network Settings from the
iDRAC UI of a compute sled as shown in the following example:

Figure 177. IOM Port information

Alternately, the iDRAC MAC information can be obtained from the System Information on the iDRAC Dashboard page.

Figure 178. System Information on iDRAC Dashboard

When viewing the LLDP neighbors, both the iDRAC MAC address and the NIC MAC address of the respective mezzanine
card are shown.

MX9116N-1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
--------------------------------------------------------------------------------
ethernet1/1/1 Not Advertised 98:03:9b:65:73:b2 98:03:9b:65:73:b4
ethernet1/1/1 iDRAC-8XQP0T2 8XQP0T2 NIC.Mezzanine.1A-1-1 d0:94:66:87:ab:40
---- OUTPUT TRUNCATED -----

In the example deployment validation of LLDP neighbors, ethernet1/1/1 and ethernet1/1/3 represent the two
MX740c sleds in one chassis. For each sled, the first entry is the iDRAC for the compute sled; the iDRAC uses connectivity
to the mezzanine card to advertise LLDP information. The second entry is the mezzanine card itself.
Ethernet 1/71/1 and ethernet 1/71/2 represent the MX740c compute sleds connected to the MX7116n FEM in the
other chassis.
Interfaces ethernet1/1/37-1/1/40 are the VLTi interfaces for the SmartFabric. Last, ethernet1/1/41-1/1/42
are the links in a port channel that is connected to the Dell Networking S5232-ON leaf switches.

MX9116N-1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
----------------------------------------------------------------------------
ethernet1/1/1 iDRAC-CBMP9N2 CBMP9N2 NIC.Mezzanine.1A-1-1 d0:94:66:2a:07:2f
ethernet1/1/1 Not Advertised 24:6e:96:9c:e3:50 24:6e:96:9c:e3:50
ethernet1/1/3 iDRAC-1S35MN2 1S35MN2 NIC.Mezzanine.1A-1-1 d0:94:66:29:fa:f4



ethernet1/1/3 Not Advertised 24:6e:96:9c:e5:48 24:6e:96:9c:e5:48
ethernet1/1/37 C160A2 ethernet1/1/37 20:04:0f:00:a1:9e
ethernet1/1/38 C160A2 ethernet1/1/38 20:04:0f:00:a1:9e
ethernet1/1/39 C160A2 ethernet1/1/39 20:04:0f:00:a1:9e
ethernet1/1/40 C160A2 ethernet1/1/40 20:04:0f:00:a1:9e
ethernet1/1/41 S5232-Leaf1 ethernet1/1/3 4c:76:25:e8:f2:c0
ethernet1/1/42 S5232-Leaf2 ethernet1/1/3 4c:76:25:e8:e8:40
ethernet1/71/1 Not Advertised 24:6e:96:9c:e5:d8 24:6e:96:9c:e5:d8
ethernet1/71/1 iDRAC-CF52XM2 CF52XM2 NIC.Mezzanine.1A-1-1 d0:94:66:29:fe:b4
ethernet1/71/2 Not Advertised 24:6e:96:9c:e5:da 24:6e:96:9c:e5:da
ethernet1/71/2 iDRAC-1S34MN2 1S34MN2 NIC.Mezzanine.1A-1-1 d0:94:66:29:ff:27

show qos system


The show qos system command displays the QoS configuration that is applied to the system. The command is useful to
verify the service policy that is created manually or automatically by a SmartFabric deployment.

MX9116N-1# show qos system


Service-policy (input): PM_VLAN
ETS Mode : off

show policy-map
Using the service policy from show qos system, the show policy-map type qos PM_VLAN command displays QoS policy
details including associated class maps, for example, CM10, and QoS queue settings, qos-group 2.

MX9116N-1# show policy-map type qos PM_VLAN


Service-policy (qos) input: PM_VLAN
Class-map (qos): CM10
set qos-group 2

show class-map
The show class-map command displays details for all the configured class-maps. For example, the association between CM10
and VLAN 10 is shown.

MX9116N-1# show class-map


Class-map (application): class-iscsi
Class-map (qos): class-trust
Class-map (qos): CM10(match-any)
Match: mac vlan 10
Class-map (qos): CM2(match-any)

show vlt domain-id vlt-port-detail


The show vlt domain-id vlt-port-detail command shows the VLT port channel status for both VLT peers. The VLT
in this example is connected to the Cisco ACI vPC. It is automatically configured in port channel 1, and it consists of two ports
on each switch.

MX9116n-1# show vlt 255 vlt-port-detail


vlt-port-channel ID : 1
VLT Unit ID Port-Channel Status Configured ports Active ports
-------------------------------------------------------------------------------
* 1 port-channel1 up 2 2
2 port-channel1 up 2 2



show interface port-channel summary
The show interface port-channel summary command shows the LAG number (VLT port channel 1 in this example),
the mode, status, and ports used in the port channel.

MX9116n-1# show interface port-channel summary


LAG Mode Status Uptime Ports
1 L2-HYBRID up 00:29:20 Eth 1/1/43 (Up)
Eth 1/1/44 (Up)

show queuing weights interface ethernet


The show queuing weights interface ethernet command shows each queue and its weight as a percentage.
These queues belong to the QoS groups mentioned in Networks and automated QoS on page 86. For example,
queue 2 belongs to Bronze and queue 3 belongs to Silver.

MX9116N-1# show queuing weights interface ethernet 1/1/41


Interface ethernet1/1/41
Queue Weight(In percentage)
--------------------------------
0 1
1 2
2 3
3 4
4 5
5 10
6 25
7 50

An example mapping of QoS groups to their related queues and weights is shown here.

QoS Group Queue Weight(In percentage)


--------------------------------
0 0 1
1 1 2
2(Bronze) 2 3
3(Silver) 3 4
4(Gold) 4 5
5(Platinum) 5 10
6 6 25
7 7 50

show lldp dcbx interface ethernet ets detail


The show lldp dcbx interface ethernet ets detail command shows each priority group with its priorities and bandwidth
for the admin, remote, and local parameters. Ensure that DCBX is enabled before running the command. Bandwidth is shown as a
percentage. The minimum and maximum bandwidth can be changed in OME-M under the Edit Network option for the created server template.

MX9116N-1# show lldp dcbx interface ethernet 1/1/1 ets detail


Interface ethernet1/1/1
Max Supported PG is 8
Number of Traffic Classes is 8
Admin mode is on

Admin Parameters :
------------------
Admin is enabled

PG-grp Priority# Bandwidth TSA


------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS



4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP

Remote Parameters :
-------------------
Remote is enabled
PG-grp Priority# Bandwidth TSA
------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS
4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP

Remote Willing Status is enabled


Local Parameters :
-------------------
Local is enabled

PG-grp Priority# Bandwidth TSA


------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS
4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP

Oper status is init


ETS DCBX Oper status is Up
State Machine Type is Asymmetric
Conf TLV Tx Status is enabled
Reco TLV Tx Status is enabled

4 Input Conf TLV Pkts, 55 Output Conf TLV Pkts, 2 Error Conf TLV Pkts
0 Input Reco TLV Pkts, 55 Output Reco TLV Pkts, 0 Error Reco TLV Pkts



11
SmartFabric Troubleshooting
Troubleshooting SmartFabric issues
This section provides information about errors that might be encountered while working with a SmartFabric. Troubleshooting
and remediation actions are also included to help with resolving errors.

Troubleshoot port group breakout errors


The creation of a SmartFabric requires you to perform steps in a specific order. The SmartFabric deployment consists of four
main steps that are performed using the OME-M console:
1. Create the VLANs to be used in the fabric.
2. Select the switches and create the fabric based on the preferred physical topology.
3. Create uplinks from the fabric to the existing network and assign VLANs to those uplinks.
4. Create and deploy the appropriate server templates to the compute sleds.
For cases where changing the port speed or breakout configuration of port groups is required, the ports must be configured after
the SmartFabric creation and before adding the uplinks.
With OME-M 1.30.00 and later, the port breakout can be directly configured to the desired breakout type, as shown in the figure
below.



Figure 179. Recommended order of steps for port breakout for OME-M 1.30.10 and later

With OME-M 1.20.10 and earlier, you must set the Breakout Type to HardwareDefault first and then set the desired
configuration, as shown in the figure below.



Figure 180. Recommended order of steps for port breakout for OME-M 1.20.10 and earlier

If the recommended order of steps is not followed, you may encounter the following errors:

Table 24. Troubleshooting port group breakout errors


Scenario Error display
Configuration of the breakout requires you to create the
SmartFabric first. When attempting to configure the breakout
before creating a SmartFabric, the following error displays:



With OME-M 1.20.10 and earlier, configuration of the breakout
requires you to select the HardwareDefault breakout type first.
If the breakout type is directly selected without first selecting
HardwareDefault, the following error displays:

Once the uplinks are added, they are most often associated
with tagged or untagged VLANs. When attempting to configure
the breakout on the uplink port-groups after adding uplinks
associated with VLANs to the fabric, the following error displays:



Troubleshooting VLTi between switches
NOTE: The example below shows the MX9116n FSE; however, the process is the same for the MX5108n Ethernet Switch.

After the SmartFabric is created, you may see the following errors: Warning: Unable to validate the fabric because the
design link ICL-1_REVERSE not connected as per design and Unable to validate the fabric because the design link
ICL-1_FORWARD not connected as per design.
There are two common reasons why you may receive this error:
● QSFP28 cables are being used between MX9116n switches instead of QSFP28-DD cables.
● The VLTi cables are not connected to the correct physical ports.
An example is shown below. To see the warning message, go to the OME-M UI and click Devices > Fabric, then choose View
Details next to the warning. You can view the details of the warning message by choosing the SmartFabric that was created
and clicking Topology. The warnings are displayed in the Validation Errors section.

Figure 181. Warning for VLTi connections using QSFP28 100 GbE cables

Figure 182. Warning messages

This occurs because the VLTi connections between the two MX9116n FSEs are using QSFP28 cables instead of QSFP28-DD cables.
Make sure QSFP28-DD cables are connected between port groups 11 and 12 (ports 1/1/37 through 1/1/40) on both FSEs for
the VLTi connections.



Troubleshooting uplink errors
Toggle auto negotiation
Enabling or disabling auto negotiation from the OME-M console can bring up the uplinks connecting to the upstream switches.
For example, when deploying the SmartFabric with the Cisco Nexus 3232C (see Scenario 2: SmartFabric connected to Cisco
Nexus 3232C switches with Ethernet - No Spanning Tree uplink on page 183), disable auto negotiation on uplink ports on the
MX switches to bring the link up.
The OME-M console is used to disable or enable auto negotiation on MX switch ports. The following steps illustrate
disabling auto negotiation on ports 41 and 42 of an MX9116n.
1. From switch management page, choose Hardware > Port Information.
2. Select the ports on which auto negotiation must be disabled. In this example, ports 1/1/41 and port 1/1/42 are selected.
3. Click Toggle AutoNeg > Finish.

Figure 183. Toggle AutoNeg dialog box

Set uplink ports to administratively up


The uplink ports on the switch might be administratively down; this can happen after a port group breakout, especially for FC
breakouts. Enabling the uplink ports can be carried out from the OME-M console. The following steps illustrate setting the
administrative state on ports 41 and 42 of an MX9116n.
1. From switch management page, choose Hardware > Port Information.
2. Select the ports.
NOTE: In this example, ports 1/1/41 and port 1/1/42 are selected.
3. Click Toggle Admin State > Finish.

Figure 184. Toggle Admin State dialog box

Verify MTU size


Set the same MTU size on the ports that connect the MX switches, the ports on the upstream switches, and the server NICs.
To set the MTU size from the OME-M console, see Configure Ethernet ports on page 97.
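As a read-only check from the OS10 CLI, the effective MTU on a port can be viewed with show interface; the interface number here is an example and the output is abridged:

OS10# show interface ethernet 1/1/41 | grep MTU
MTU 9216 bytes, IP MTU 9184 bytes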



Verify auto negotiation settings on upstream switches
Verify the auto negotiation settings on the upstream switches. If the auto negotiation settings have been modified, the
links might not come up. Change the auto negotiation settings on the upstream switches to resolve the issue.
For example, if auto negotiation was disabled on the Cisco Nexus upstream switches, the setting can be turned back on. To
enable auto negotiation on an Ethernet interface on Cisco Nexus switches, run the following commands:

switch# configure terminal


switch(config)# interface ethernet <interface-number>
switch(config-if)# negotiate auto

The following example shows interface ethernet 1/2 with auto negotiation enabled on the interface:

Nexus-3232C-Leaf1(config-if)# do show int eth 1/2


Ethernet1/2 is down (XCVR not inserted)
admin state is down, Dedicated Interface
Hardware: 40000/100000 Ethernet, address: 00fe.c8ca.f367 (bia 00fe.c8ca.f36c)
MTU 1500 bytes, BW 100000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
auto-duplex, auto-speed
Beacon is turned off
Auto-Negotiation is turned on, FEC mode is Auto
---- OUTPUT TRUNCATED -----

Verify LACP
The interface status of the upstream switches can provide valuable information when a link is down. The following example
shows interfaces 1 and 3 on the upstream Cisco Nexus switches as members of port channel 1:

3232C-Leaf2# show interface status


--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
Eth1/1 To MX Chassis 1 suspended trunk full 100G QSFP-100G-SR4
Eth1/2 -- xcvrAbsen routed auto auto --
Eth1/3 To MX Chassis 2 suspended trunk full 100G QSFP-100G-SR4
---- OUTPUT TRUNCATED -----

Checking interface 1 reveals that the port is not receiving LACP PDUs, as shown in the following example:

3232C-Leaf2# show int eth 1/1


Ethernet1/1 is down (suspended(no LACP PDUs))
admin state is up, Dedicated Interface
Belongs to Po1
---- OUTPUT TRUNCATED -----

NOTE: Within the Dell PowerSwitch, use the show interface status command to view the interfaces and associated
status information. Use the show interface ethernet {interface-number} command to view the interface details.

In the following example, the errors listed above occurred because an uplink was not created on the fabric.

Figure 185. Fabric topology with no uplinks



The following image shows the topology with a QSFP28 100 GbE connection on ports 37 and 39 instead of a QSFP28-DD
connection, which is an unsupported configuration.

Figure 186. Fabric topology with uplinks and QSFP28 100 VLTi connection

The resolution is to add one or more uplinks and verify that the fabric becomes healthy.

Figure 187. Healthy fabric

Troubleshooting legacy Ethernet uplink with STP


When using the legacy Ethernet uplink type, it is essential to prevent network loops by running the appropriate
Spanning Tree Protocol (STP) on the MX and upstream switches. Loops can occur when multiple redundant paths are
available between the switches, and various types of STP are available to prevent the network from going down due to loops.
When using the Ethernet – No Spanning Tree Protocol uplink, STP is not required on the upstream switch interfaces.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
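For example, to force an upstream OS10 switch running Rapid-PVST to become the root bridge for VLAN 10, the VLAN ID being illustrative:

OS10(config)# spanning-tree vlan 10 priority 0

A priority of 0 is the lowest configurable value and is therefore preferred in the root bridge election.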

Verify STP is enabled on upstream switches


STP is required when connecting a SmartFabric to the upstream network when using the legacy Ethernet uplink. Turning off
Spanning Tree in the upstream switches will result in network loops and may cause downtime. Enable the appropriate STP type
on the upstream switches.



Verify STP type is identical on MX and upstream switches
Check that STP is enabled on the upstream switch and verify that the type of STP matches the type of STP running on the MX
switches. By default, the MX switches run RPVST+, as shown below:

OS10# show spanning-tree brief


Spanning tree enabled protocol rapid-pvst
VLAN 1
Executing IEEE compatible Spanning Tree Protocol
---- OUTPUT TRUNCATED -----

The following example shows that the STP on the upstream switches, Cisco Nexus 3232C, is configured to run MST:

Nexus-3232C-Leaf1(config)# do show spanning-tree summary


Switch is in mst mode (IEEE Standard)
Root bridge for: MST0000
Port Type Default is disable
---- OUTPUT TRUNCATED -----

The recommended course of action is to change the STP type to RPVST+ on the upstream Cisco Nexus switches.

Nexus-3232C-Leaf1(config)# spanning-tree mode rapid-pvst


Nexus-3232C-Leaf1(config)# do show spanning-tree summary
Switch is in rapid-pvst mode
--- OUTPUT TRUNCATED -----

Alternatively, change the spanning tree type on the MX switches operating in SmartFabric mode to match the STP type
on the upstream switches. Make the change using the SmartFabric OS10 CLI. The available STP types are as follows:

MX9116N-A1(config)# spanning-tree mode ?


rstp Enable rstp
rapid-pvst Enable rapid-pvst
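
For example, to move the MX switches to RSTP, which interoperates with an MST region through the common spanning tree (a sketch; verify the result afterward):

MX9116N-A1(config)# spanning-tree mode rstp
MX9116N-A1(config)# do show spanning-tree brief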

Troubleshooting common issues


This section discusses the various issues that you may encounter when configuring the scenarios and examples that are
mentioned in this guide. A problem statement is given for each scenario, along with one or more possible solutions.

Table 25. Problem and resolution examples

Problem: MX7116n FEMs are not discovered when creating a SmartFabric.

Scenario: Two MX7000 chassis are connected in an MCM group with MX9116n FSEs and MX7116n FEMs. The MX9116n FSEs are
connected to upstream switches, and the upstream switches are connected to rack servers. vCenter is deployed in this
scenario, with VMs deployed on the ESXi hosts on both the MX compute sleds and the rack servers. The Link Layer
Discovery Protocol (LLDP) advertisements from the blade NICs may not be visible to the IOMs: running the show lldp
neighbors command from the IOM does not list the NICs. In the blade iDRAC, the NIC status shows as Unknown, and the
Switch Connection ID and Switch Port Connection ID are shown as Not Applicable. This issue may prevent the MX7116n
from being discovered when creating a SmartFabric.

Solution: When resolving the issue, consider the following:
1. Do not enable LLDP under the Discovery option in the distributed virtual switch settings. LLDP is not a supported
discovery protocol on a Distributed Virtual Switch in ESXi on the MX platform.
2. Disable Beacon Probing and revert to Link Status only on all port groups. This can be done under Port-group
settings > Teaming and Failover.
3. If the NICs are configured for Jumbo Frames, try turning this off.
4. Set up Traffic Filtering (ACL) to drop LLDP packets in the ingress and egress directions. Verify that the same ACL
does not exist on any physical or virtual switch to which the SmartFabric is expected to be interconnected.

Problem: Dropped packets between VMs for 15 seconds after the switch reboots.

Scenario: Two MX7000 chassis are connected in an MCM group with MX9116n FSEs and MX7116n FEMs. The MX9116n FSEs are
connected to upstream switches, and the upstream switches are connected to rack servers. vCenter is deployed in this
scenario, with VMs deployed on the ESXi hosts on both the MX compute sleds and the rack servers. Verify that STP is
enabled. Rebooting the MX9116n FSE on the MX7000 chassis while passing traffic between the VMs deployed on the MX
compute sleds and the VMs deployed on the rack servers causes three to five request timeouts and dropped packets for
up to 15 seconds.

Solution: The issue occurs when one of the MX9116n FSEs on an MX7000 chassis becomes the Spanning Tree root while
using the legacy Ethernet uplink type. To resolve this issue, make an upstream switch the STP root instead of the
MX9116n FSE; assigning a switch a lower bridge priority value increases the likelihood that it becomes the STP root.
Run the commands mentioned in the Dell SmartFabric OS10 User Guide to make the upstream switch the STP root. Find the
relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

Problem: Not able to set QoS on a compute sled connected to an MX9116n FSE or MX5108n.

Scenario: An MX9116n FSE or MX5108n I/O module is connected to an MX740c compute sled with an Intel XXV710 Ethernet
controller, and the IOMs are connected to upstream switches. Running the show lldp dcbx interface ethernet
<node/slot/port> pfc detail command shows that the remote willing status is disabled on server-facing ports:

OS10# show lldp dcbx interface ethernet 1/1/1 pfc detail

Interface ethernet1/1/15
Admin mode is on
Admin is enabled, Priority list is 4,5,6,7
Remote is enabled, Priority list is 4,5,6,7
Remote Willing Status is disabled
(Output Truncated)

Solution: By default, the MX9116n FSE and MX5108n IOMs support the DCBx protocol and can be used to push their QoS
configuration to the server NIC. The NIC must be configured to accept these QoS settings from the switch by setting
its Remote Willing Status to Enabled. In Full Switch mode, the user can configure DCBx as described in the Dell
SmartFabric OS10 User Guide; find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table. In SmartFabric mode, the DCBx configuration is tied to the FCoE uplink and is enabled only after
an FCoE uplink is configured on the switch. Once the DCBx configuration is applied on the switch side, it is pushed to
the remote end, and the remote end must accept it by setting Remote Willing Status to Enabled. The Remote Willing
Status is disabled message means that the NIC attached to the switch is not configured to receive DCBx or any QoS
configuration. Some server NICs only receive a QoS configuration (scheduling, bandwidth, priority queues, and so on)
from the switch to which they are attached; the drivers for these NICs do not support this configuration via software,
but only from a peer via the DCBx protocol.

Problem: Removing the management VLAN tag under Edit Uplinks removes the management VLAN.

Scenario: To reproduce the scenario with MX IOMs connected to upstream switches:
1. Create the management VLAN.
2. After creating the SmartFabric and adding uplinks, the VLANs can be edited from the Edit Uplinks page.
3. Go to OME-M Console > Devices > Fabric > Select a fabric > Select uplink > Edit.
4. Click Next to access the Edit Uplink page.
5. Add Network and add the management VLAN.
6. Tag the management VLAN. The UI accepts the change, but there is no change on the device; access the CLI to confirm.
7. Remove the tag on the management VLAN; this in turn deletes the management VLAN as well.

Solution: In Full Switch mode, the user can create a VLAN, enable it, and define it as a management VLAN in global
configuration mode on the switch. For more information on configuring VLANs in Full Switch mode, find the relevant
version of the User Guide in the OME-M and OS10 compatibility and documentation table. In SmartFabric mode, management
VLAN 4020 is created by default. Do not add the management VLAN using Add Network or remove the tag on the management
VLAN; doing so removes the management VLAN itself.

SmartFabric Services troubleshooting commands


The following commands allow the user to view various SmartFabric Services configuration information. These commands can also
be used for troubleshooting on SmartFabric OS10.
For information about the releases in which these commands are supported, find the relevant version of the User Guide in the
OME-M and OS10 compatibility and documentation table.

show smartfabric personality


The show smartfabric personality command is used on a node to view the configured SmartFabric Services
personality. The possible values are PowerEdge MX, Isilon, VxRail, and L3 fabric.
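
An illustrative invocation on an MX IOM (the command is run without arguments; output omitted here):

OS10# show smartfabric personality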

show smartfabric cluster


The show smartfabric cluster command is used to see whether a node is part of the cluster. It displays the cluster
information of the node, such as the cluster domain ID, virtual IP address, role, and service tag. It can also be used to
verify the role of the node as either BACKUP or MASTER.

OS10# show smartfabric cluster

----------------------------------------------------------
CLUSTER DOMAIN ID : 50
VIP : fde1:53ba:e9a0:de14:0:5eff:fe00:150
ROLE : MASTER
SERVICE-TAG : CBJXLN2

NOTE: New features may not appear in the MSM UI until the master is upgraded to the version that supports the new
features. The example above shows how the show smartfabric cluster command determines which I/O module is
the master and which I/O modules hold the backup role.



show smartfabric cluster member
The show smartfabric cluster member command is used to see the member details of the cluster. It displays cluster
member information such as the service tag, IP address, status, role, and type of each node, along with the service tag and
slot of the chassis that the node belongs to.

OS10# show smartfabric cluster member


Service-tag IP Address Status Role Type
Chassis-Service-Tag Chassis-Slot
-----------------------------------------------------------------------------------------
--------------------------------
CBJXLN2 fde1:53ba:e9a0:de14:2204:fff:fe00:cde7 ONLINE MASTER MX5108n
SKY002Z A1
BZTQPK2 fde1:53ba:e9a0:de14:2204:fff:fe00:19e5 ONLINE BACKUP MX5108n
SKY002Z B1
6L59XM2 fde1:53ba:e9a0:de14:2204:fff:fe00:3de5 ONLINE BACKUP MX5108n
SKY002Z B2
F13RPK2 fde1:53ba:e9a0:de14:2204:fff:fe00:a267 ONLINE BACKUP MX5108n
SKY003Z A2

show smartfabric details


The show smartfabric details command is used to see the details of all configured fabrics. This command displays the
nodes that are part of the fabric, the status of the fabric, and the design type associated with the fabric.

OS10# show smartfabric details


----------------------------------------------------------
Name : Fabric 1
Description :
ID : 74b3d3a4-7804-4c15-b6d3-5e4e7c364f82
DesignType : 2xMX9116n_Fabric_Switching_Engines_in_different_chassis
Validation Status: VALID
VLTi Status : VALID
Placement Status : VALID
Nodes : CBJXLN2, F13RPK2
----------------------------------------------------------

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. This
command displays the following information that is associated with the fabric:
● Name
● Description
● ID
● Media type
● Native VLAN
● Configured interfaces
● Network profile

OS10# show smartfabric uplinks


----------------------------------------------------------
Name : FCoE Path A
Description :
ID : 1b328dc2-b99c-466e-b87c-b84c9c342225
Media Type : FC
Native Vlan : 0
Untagged-network :
Networks : 6a161bae-788f-4d65-8b0c-69b404c477dc
Configured-Interfaces : CBJXLN2:fibrechannel1/1/44:1, CBJXLN2:fibrechannel1/1/44:2
----------------------------------------------------------
----------------------------------------------------------
Name : Uplink1
Description :



ID : d493fee2-9680-41c7-989d-cf0347aab4fd
Media Type : ETHERNET
Native Vlan : 1
Untagged-network :
Networks : e6189b88-7f19-4b05-98b5-0c05ff7ff8c8, 284dae93-b91f-4593-9cff-
c8521cd7ae90
Configured-Interfaces : CBJXLN2:ethernet1/1/42:1, F13RPK2:ethernet1/1/41:1,
F13RPK2:ethernet1/1/42:1, CBJXLN2:ethernet1/1/41:1
----------------------------------------------------------
----------------------------------------------------------
Name : FCoE Path B
Description :
ID : 0f7ad3a2-e59e-4a07-9a74-4e57558f0a4d
Media Type : FC
Native Vlan : 0
Untagged-network :
Networks : e2c35ec5-c177-46f1-9a69-75d8b202d739
Configured-Interfaces : F13RPK2:fibrechannel1/1/44:1, F13RPK2:fibrechannel1/1/44:2

show smartfabric networks


The show smartfabric networks command is used to view the various network profiles configured. The command
displays the VLANs that are configured, QoS Priority, and the network type for each network profile.

OS10# show smartfabric networks


Name Type QosPriority Vlan

--------------------------------------------------------------------------------
FCoE A1 STORAGE_FCOE PLATINUM 998
VLAN1 GENERAL_PURPOSE BRONZE 1
FCoE A2 STORAGE_FCOE PLATINUM 999
VLAN10 GENERAL_PURPOSE SILVER 10
UPLINK VLAN GENERAL_PURPOSE SILVER 2491

show smartfabric validation-error


The show smartfabric validation-error command displays information about validation errors, such as the
category, subcategory, recommended action, severity, timestamp, and a recommended link for each error.
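
An illustrative invocation (output omitted; it varies with the state of the fabric):

OS10# show smartfabric validation-error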

show smartfabric nodes


The show smartfabric nodes command is used to view the details of the nodes that are part of the cluster. This
command helps the user to view the status of a node and chassis details.

OS10# show smartfabric nodes


Service-Tag Type Status Mode Chassis-Service
Chassis-Slot
Tag
--------------------------------------------------------------------------
F13RPK2 MX9116n ONLINE FABRIC SKY003Z A2
110DXC2 MX7116n NOT-APPLICABLE SKY002Z A2
CBJXLN2 MX9116n ONLINE FABRIC SKY002Z A1
6L59XM2 MX5108n ONLINE FULL-SWITCH SKY002Z B2
D10DXC2 MX7116n NOT-APPLICABLE SKY003Z A1
BZTQPK2 MX5108n ONLINE FULL-SWITCH SKY002Z B1



show smartfabric configured-servers
The show smartfabric configured-servers command displays the list of deployed servers and details such as the
service tag, compute sled model (MX740c/MX840c), chassis slot, and chassis service tag. It also shows whether the compute
sled has been discovered, onboarded, and configured.

OS10# show smartfabric configured-server


**********************************************************
Service-Tag : 8XQP0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
**********************************************************
**********************************************************
Service-Tag : DTQHMR2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
**********************************************************
**********************************************************
Service-Tag : 8XRH0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
**********************************************************
**********************************************************
Service-Tag : ST0000W
Server-Model : PowerEdge MX840c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE

show smartfabric configured-servers configured-server-interface

The show smartfabric configured-servers configured-server-interface <compute-sled service tag> command
shows the details of one deployed server, such as the NIC ID, switch interface, and fabric. It also shows the tagged and
untagged VLANs on the NIC mezzanine card ports.

OS10# show smartfabric configured-server configured-server-interface DTQHMR2


**********************************************************
Service-Tag : DTQHMR2
----------------------------------------------------------
Nic-Id : NIC.Mezzanine.1A-2-1
Switch-Interface : 87QNMR2:ethernet1/71/2
Fabric : SF (abdeec7f-3a83-483a-929e-aa102429ae86)
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
NicBonded : FALSE
Native-vlan : 1
Static-onboard-interface:



Networks : 40, 1611

----------------------------------------------------------
Nic-Id : NIC.Mezzanine.1A-1-1
Switch-Interface : 8XRJ0T2:ethernet1/1/3
Fabric : SF (abdeec7f-3a83-483a-929e-aa102429ae86)
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
NicBonded : FALSE
Native-vlan : 1
Static-onboard-interface:
Networks : 30, 1611

show smartfabric discovered-server


The show smartfabric discovered-server command shows the list of servers present in the cluster and discovered by
the IOMs.

OS10# show smartfabric discovered-server


----------------------------------------------------------
Service-Tag : 8XQP0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
----------------------------------------------------------
----------------------------------------------------------
Service-Tag : DTQHMR2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
----------------------------------------------------------
----------------------------------------------------------
Service-Tag : 8XRH0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2
----------------------------------------------------------
----------------------------------------------------------
Service-Tag : ST0000W
Server-Model : PowerEdge MX840c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2

show smartfabric discovered-servers discovered-server-interface

The show smartfabric discovered-servers discovered-server-interface <compute-sled service tag> command
lists the NIC connections of a discovered server.

OS10# show smartfabric discovered-server discovered-server-interface DTQHMR2


Nic-Id : Switch-Interface
------------------------------------------------------
NIC.Mezzanine.1A-1-1 8XRJ0T2:ethernet1/1/3
NIC.Mezzanine.1A-2-1 87QNMR2:ethernet1/71/2



show smartfabric upgrade-status
The show smartfabric upgrade-status command shows the current upgrade status of an I/O module in SmartFabric
mode.

OS10# show smartfabric upgrade-status

Opaque-id : 53f953f5-91ae-4009-b457-ef0f531cdc15
Upgrade Protocol : PUSH
Upgrade start time : 2021-02-11 14:48:51.595000
Status : INPROGRESS
Nodes to Upgrade : FD59H13
Reboot Sequence : FD59H13

Node Current-Action Current-Status Status-Message


-----------------------------------------------------------------------------------------
--------
FD59H13 REBOOT REBOOTING [Action : Reboot] Successfully sent the
request for rebooting the node.

show logging smartfabric


The show logging smartfabric command shows the event log information for SmartFabric Services.

OS10# show logging smartfabric


2021-02-11 20:06:14.335 OS10 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] Cluster Group
INIT Group UUID/vrid:(78ff7f40-ef99-46b0-b760-c7c248abd1fc:18) from db
2021-02-11 14:09:01.527 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT]
Processing FA Ready CPS event stag:8XRJ0T2 ready:True
2021-02-11 14:09:01.528 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] Processing
FA ready event stag:8XRJ0T2 ready:True
2021-02-11 14:09:01.578 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] [Starting
MDNS manager] intf:br4004
2021-02-11 14:09:02.294 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] Processing
FA connection state connect:False
2021-02-11 14:09:02.295 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] processing
MDNS update message:{'group-uuid': '78ff7f40-ef99-46b0-b760-c7c248abd1fc', 'chassis-
model': '', 'device-type': '', 'chassis-name': '', 'group-vrid': '18', 'group-name': '',
'group-vrid-state': 'reserved', 'chassis-service-tag': '8XXJ0T2', 'group-type': 'LEAD'}
Output Truncated



12
Configuration Scenarios
This chapter discusses different topology configurations and scenarios.
● Scenarios 1 through 4 discuss Ethernet configurations with Ethernet - No Spanning Tree and legacy Ethernet uplinks.
● Scenarios 5 through 8 discuss storage networking scenarios with Dell PowerEdge MX connected to a storage array,
including configurations with NPG, FSB, and direct attached modes.



Scenario 1: SmartFabric deployment with S5232F-ON
upstream switches with Ethernet - No Spanning Tree
uplink
The following figure shows a topology using a pair of Dell PowerSwitch S5232F-ON upstream switches, but any SmartFabric
OS10 switches can be used. This section details configuration of the S5232F-ON with Ethernet - No Spanning Tree uplink as
well as validation of the S5232F-ON configuration. It also includes instructions on how to configure SmartFabric.

Figure 188. SmartFabric with Dell PowerSwitch S5232F-ON leaf switches

NOTE: See QSFP28 double density connectors on page 228 for more information about the QSFP28-DD cables.

Configure SmartFabric
Perform the following steps to configure SmartFabric:



1. Physically cable the MX9116n FSE to the S5232F-ON upstream switch. Make sure that chassis are in a Multi-Chassis
Management group. For instructions, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
2. Define VLANs to use in the Fabric. For instructions, see Define VLANs on page 91.
3. Create the SmartFabric as per instructions in Create the SmartFabric on page 93.
4. Configure uplink port speed or breakout. For more instructions, see Configuring port speed and breakout on page 77.
5. After the SmartFabric is created, create the Ethernet - No Spanning Tree uplink. See Create Ethernet – No Spanning Tree
uplink on page 98 for more information.
6. Set the MX I/O modules' global spanning tree configuration to Rapid Spanning Tree Protocol (RSTP), as shown in the
sketch after this list.
7. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
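
A minimal sketch of step 6 from the SmartFabric OS10 CLI, using the same command covered in the STP troubleshooting section of this guide (the switch prompt is illustrative):

MX9116N-A1(config)# spanning-tree mode rstp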

Dell PowerSwitch S5232F-ON configuration


This section outlines the configuration commands issued to the Dell PowerSwitch S5232F-ON switches with Ethernet - No
Spanning Tree uplink connected from MX9116n FSE to S5232F-ON. The switches start with their factory default settings as
indicated in the Reset SmartFabric OS10 switch to factory defaults on page 216 section.
NOTE: With Ethernet - No Spanning Tree uplink, spanning tree is disabled on the upstream port channel on the MX
I/O modules. To disable spanning tree on ports connected to MX I/O modules, run the commands below on the Dell
PowerSwitch S5232F-ON.

NOTE: For information related to the same scenario using the legacy Ethernet uplink with Spanning Tree Protocol, see
Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink on page 189.
There are four steps to configure the S5232F-ON upstream switches:
1. Set the switch hostname and management IP address. Enable spanning-tree mode as RSTP.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the MX switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.

S5232-ON Leaf 1:

configure terminal
hostname S5232-Leaf1
spanning-tree mode rstp

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.XX.XX/24

management route 0.0.0.0/0 100.67.XX.XX

S5232-ON Leaf 2:

configure terminal
hostname S5232-Leaf2
spanning-tree mode rstp

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.YY.YY/24

management route 0.0.0.0/0 100.67.YY.YY

Configure the VLT between switches using the following commands. VLT configuration involves setting a discovery interface
range and discovering the VLT peer in the VLTi. The vlt-domain command configures the peer leaf-2 switch as a back-up
destination.

S5232-ON Leaf 1:

interface range ethernet1/1/29-1/1/31
description VLTi
no shutdown
no switchport

vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29-1/1/31

S5232-ON Leaf 2:

interface range ethernet1/1/29-1/1/31
description VLTi
no shutdown
no switchport

vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31



Configure the required VLANs on each switch. In this deployment example, the VLAN used is VLAN 10 and the Untagged VLAN
used is VLAN 1.

S5232-ON Leaf 1 and Leaf 2 (identical configuration on both switches):

interface vlan1
description "Default VLAN"
no shutdown

interface vlan10
description "Company A General Purpose"
no shutdown

Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG.
Trunk ports allow tagged VLANs to traverse the trunk link; in this example, the trunk is configured to allow VLAN 10. Disable
spanning tree on the port channels and apply the Ethernet - No Spanning Tree uplink commands as shown below.

S5232-ON Leaf 1 and Leaf 2 (identical configuration on both switches):

interface port-channel1
description "To MX Chassis"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 10
vlt-port-channel 1
mtu 9216
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree disable
spanning-tree port type edge

interface ethernet1/1/1
description "To MX Chassis-1"
no shutdown
no switchport
channel-group 1 mode active

interface ethernet1/1/3
description "To MX Chassis-2"
no shutdown
no switchport
channel-group 1 mode active

end
write memory
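
Before moving on to the validation commands, a quick check that the LAG to the MX chassis has formed can be made with the port-channel summary (a sketch; output omitted, and port-channel 1 should show both members up once the SmartFabric uplink is created):

S5232-Leaf1# show port-channel summary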

Dell PowerSwitch S5232-ON validation


This section contains validation commands for the Dell PowerSwitch S5232-ON leaf switches.

show vlt
The show vlt command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in the
VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.

S5232F-Leaf1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 1.0
Local System MAC address : 4c:76:25:e8:f2:c0
VLT MAC address : 4c:76:25:e8:f2:c0



IP address : fda5:74c8:b79e:1::1
Delay-Restore timer : 90 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


--------------------------------------------------------------------------------
2 4c:76:25:e8:e8:40 up fda5:74c8:b79e:1::2 1.0

show lldp neighbors


The show lldp neighbors command provides information about connected devices. In this case, ethernet1/1/1 and
ethernet1/1/3 connect to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, ethernet1/1/29 and
ethernet1/1/31, represent the VLTi connection.

S5232F-Leaf1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
----------------------------------------------------------------
ethernet1/1/1 C160A2 ethernet1/1/41 20:04:0f:00:a1:9e
ethernet1/1/3 C140A1 ethernet1/1/41 20:04:0f:00:cd:1e
ethernet1/1/29 S5232F-Leaf2 ethernet1/1/29 4c:76:25:e8:e8:40
ethernet1/1/31 S5232F-Leaf2 ethernet1/1/31 4c:76:25:e8:e8:40

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. It
displays the name, description, ID, media type, native VLAN, configured interfaces, and network profile associated with the
fabric. Run this command on the MX9116n FSE. The following output shows that the uplink created is an Ethernet - No Spanning
Tree uplink.

MX9116n-A1# show smartfabric uplinks


----------------------------------------------------------
Name : Uplink 1
Description :
ID : 3d4f2222-f082-43c1-b034-b14a8df3a172
Media Type : Ethernet - No Spanning Tree
Native Vlan : 1
Untagged-network :
Networks : 9418125b-5f1f-48d7-8b5d-648b0977c643
Configured-Interfaces : 87QNMR2:ethernet1/1/41, 87QNMR2:ethernet1/1/42
8XRJ0T2:ethernet1/1/41, 8XRJ0T2:ethernet1/1/42
----------------------------------------------------------



Scenario 2: SmartFabric connected to Cisco Nexus
3232C switches with Ethernet - No Spanning Tree
uplink
The figure below shows a topology using a pair of Cisco Nexus 3232C as leaf switches, but other Cisco Nexus switches may
be used. This section details configuration of the Cisco Nexus switch with Ethernet - No Spanning Tree uplink, validation of the
topology with Cisco Nexus switches, and creation of a SmartFabric with the corresponding uplinks.

Figure 189. SmartFabric with Cisco Nexus 3232C leaf switches

NOTE: See the QSFP28 double density connectors on page 228 for more information about the QSFP28-DD cables.

Configure SmartFabric
Perform the following steps to configure SmartFabric:
1. Physically cable the MX9116n FSE to the Cisco Nexus upstream switch. Make sure that chassis are in a Multi-Chassis
Management group. For instructions, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
2. Define VLANs to use in the Fabric. For instructions, see Define VLANs on page 91.
3. Create the SmartFabric as per instructions in Create the SmartFabric on page 93.



4. Configure uplink port speed or breakout. For more instructions, see Configuring port speed and breakout on page 77.
5. After the SmartFabric is created, create the Ethernet - No Spanning Tree uplink. See Create Ethernet – No Spanning Tree
uplink on page 98 for more information.
6. Set MX I/O modules global spanning tree configurations to Rapid Spanning Tree Protocol (RSTP).
7. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.

Cisco Nexus 3232C switch configuration


The following section outlines the configuration commands that are issued to the Cisco Nexus 3232C leaf switches with
Ethernet - No Spanning Tree uplink connected from MX9116n FSE to the Cisco Nexus switch.
NOTE: While this configuration example is specific to the Cisco Nexus 3232C switch, the same concepts apply to other
Cisco Nexus and IOS switches.
The switches start at their factory default settings, as described in Reset Cisco Nexus 3232C to factory defaults on page 216.
NOTE: With the Ethernet - No Spanning Tree uplink, spanning tree is disabled on the upstream port channel on the MX
I/O modules. To disable spanning tree on the ports connected to the MX I/O modules, run the commands below on the
Cisco Nexus switches. In this deployment example, the default VLAN is VLAN 1 and the created VLAN is VLAN 10. See the
Cisco Nexus 3000 Series NX-OS Configuration Guide for more details.

NOTE: For information related to the same scenario using the legacy Ethernet uplink with Spanning Tree Protocol, see
Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink on page 193.
There are four steps to configure the 3232C upstream switches:
1. Set switch hostname, management IP address, enable features vPC, LLDP, LACP, and interface-vlan.
2. Configure vPC between the switches.
3. Configure the VLANs.
4. Configure the downstream port channels to connect to the MX switches.
Enter the following commands to set the hostname and enable the required features. Configure the management interface and
default gateway, and apply the global Spanning Tree Protocol settings shown below.
NOTE: The MX IOMs run Rapid per-VLAN Spanning Tree Plus (RPVST+) by default, and Cisco Nexus switches run RSTP by
default. Ensure the Dell and non-Dell switches are both configured to use RSTP. For the Ethernet - No Spanning Tree
uplinks from the MX9116n FSE to the Cisco Nexus switches, spanning tree must be disabled on the Cisco Nexus ports.

Cisco Nexus 3232C Leaf 1:

configure terminal
hostname 3232C-Leaf1
feature vpc
feature lldp
feature lacp
feature interface-vlan
spanning-tree port type edge bpduguard default
spanning-tree port type network default

interface mgmt0
vrf member management
ip address 100.67.XX.XX/24

vrf context management
ip route 0.0.0.0/0 100.67.XX.XX

Cisco Nexus 3232C Leaf 2:

configure terminal
hostname 3232C-Leaf2
feature vpc
feature lldp
feature lacp
feature interface-vlan
spanning-tree port type edge bpduguard default
spanning-tree port type network default

interface mgmt0
vrf member management
ip address 100.67.YY.YY/24

vrf context management
ip route 0.0.0.0/0 100.67.YY.YY

Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive destination to the peer
switch management IP. Then create a port channel for the vPC peer link and assign the appropriate switchport interfaces.



Cisco Nexus 3232C Leaf 1:

vpc domain 255
peer-keepalive destination 100.67.YY.YY

interface port-channel255
switchport
switchport mode trunk
vpc peer-link

interface Ethernet1/29
description vPC Interconnect
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown

interface Ethernet1/31
description vPC Interconnect
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown

Cisco Nexus 3232C Leaf 2:

vpc domain 255
peer-keepalive destination 100.67.XX.XX

(The remaining commands are identical to Leaf 1.)

Configure the required VLANs on each switch. In this deployment example, the tagged VLAN used is VLAN 10 and the untagged
VLAN used is VLAN 1. Disable spanning tree on the VLANs.

Cisco Nexus 3232C Leaf 1 and Leaf 2 (identical configuration on both switches):

interface vlan1
description "Default VLAN"
no spanning-tree mode
no shutdown

interface vlan10
description "Company A General Purpose"
no spanning-tree mode
no shutdown

Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs. Then, exit
configuration mode and save the configuration. Disable spanning tree on the port channel connected to MX9116n FSE.

Cisco Nexus 3232C Leaf 1 and Leaf 2 (identical configuration on both switches):

interface port-channel1
description To MX Chassis
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
spanning-tree bpduguard enable
spanning-tree port type edge
spanning-tree guard root
vpc 255

interface Ethernet1/1
description To MX Chassis 1
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
channel-group 1 mode active
no shutdown

interface Ethernet1/3
description To MX Chassis 2
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
channel-group 1 mode active
no shutdown

end
copy running-configuration startup-configuration

NOTE: If the connections to the MX switches do not come up, see SmartFabric Troubleshooting on page 162 for
troubleshooting steps.
Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent across trunk ports
to all the switches, even to switches on which that VLAN is not configured, consuming network bandwidth with unnecessary
traffic. VLAN or VTP pruning can be used to eliminate this unnecessary traffic by pruning the VLANs.
Pruning restricts the flooded traffic to only those trunk ports with associated VLANs, optimizing the use of network
bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the Cisco upstream switches
are configured appropriately. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for additional information.
NOTE: Do not use switchport trunk allowed vlan all on the Cisco interfaces. The VLANs must be explicitly
assigned to the interface.
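
If another VLAN must be carried later, it can be appended to the allowed list without re-entering the existing VLANs (a sketch; VLAN 20 is illustrative):

3232C-Leaf1(config)# interface port-channel1
3232C-Leaf1(config-if)# switchport trunk allowed vlan add 20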

Configuration validation
This section covers the validation of the Cisco Nexus 3232C leaf switches. For information about the Dell Networking MX
switch validation commands, see Common CLI troubleshooting commands for Full Switch and SmartFabric modes on page 157.

show vpc
The show vpc command validates the vPC configuration status. The peer adjacency should be OK, with the peer should show
as alive. The end of the command shows which VLANs are active across the vPC.

NX3232C-Leaf1# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 255


Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 inconsistency reason : Consistency Check Not Performed
vPC role : secondary, operational primary
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)

vPC Peer-link status


---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po255 up 1,10

vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
255 Po1 up success success 1,10



show vpc consistency-parameters
The show vpc consistency-parameters command displays the configured values on all interfaces in the vPC. The
displayed configurations are only those configurations that limit the vPC peer link and vPC from coming up.

NX3232C-Leaf1# show vpc consistency-parameters vpc 255


Legend:
Type 1 : vPC will be suspended in case of mismatch

Name Type Local Value Peer Value


------------- ---- ---------------------- -----------------------
STP Port Type 1 Normal Port Normal Port
STP Port Guard 1 Default Default
STP MST Simulate PVST 1 Default Default
lag-id 1 [(1000, [(1000,
20-4-f-0-cd-1e, 1, 0, 20-4-f-0-cd-1e, 1, 0,
0), (7f9b, 0), (7f9b,
0-23-4-ee-be-ff, 80ff, 0-23-4-ee-be-ff, 80ff,
0, 0)] 0, 0)]
mode 1 active active
delayed-lacp 1 disabled disabled
Speed 1 100 Gb/s 100 Gb/s
Duplex 1 full full
Port Mode 1 trunk trunk
Native Vlan 1 1 1
MTU 1 1500 1500
Dot1q Tunnel 1 no no
Switchport Isolated 1 0 0
vPC card type 1 N9K TOR N9K TOR
Allowed VLANs - 1,10 1,10
Local suspended VLANs - - -

show lldp neighbors


The show lldp neighbors command provides information about the LLDP neighbors. In this example, Eth1/1 and Eth1/3 are
connected to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, Eth1/29 and Eth1/31, represent the vPC
connection.

NX3232C-Leaf1(config)# show lldp neighbors


Capability codes:
(R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
(W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other
Device ID Local Intf Hold-time Capability Port ID
S3048-ON mgmt0 120 PBR ethernet1/1/45
C160A2 Eth1/1 120 PBR ethernet1/1/41
C140A1 Eth1/3 120 PBR ethernet1/1/41
NX3232C-Leaf2 Eth1/29 120 BR Ethernet1/29
NX3232C-Leaf2 Eth1/31 120 BR Ethernet1/31
Total entries displayed: 5

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. It
displays the name, description, ID, media type, native VLAN, configured interfaces, and network profile associated with the
fabric. Run this command on the MX9116n FSE. The following output shows that the uplink created is an Ethernet - No Spanning
Tree uplink.

MX9116n-A1# show smartfabric uplinks


----------------------------------------------------------
Name : Uplink 1
Description :
ID : 3d4f2222-f082-43c1-b034-b14a8df3a172
Media Type : Ethernet - No Spanning Tree
Native Vlan : 1
Untagged-network :
Networks : 9418125b-5f1f-48d7-8b5d-648b0977c643



Configured-Interfaces : 87QNMR2:ethernet1/1/41, 87QNMR2:ethernet1/1/42
8XRJ0T2:ethernet1/1/41, 8XRJ0T2:ethernet1/1/42
----------------------------------------------------------



Scenario 3: SmartFabric deployment with S5232F-ON
upstream switches with legacy Ethernet uplink
The following figure shows a topology using a pair of Dell PowerSwitch S5232F-ON switches as upstream switches, but any
SmartFabric OS10 switches can be used. This section walks through configuring the S5232F-ON with the legacy Ethernet uplink
and validating the configuration.
NOTE: For information related to the same scenario using the Ethernet - No Spanning Tree uplink (recommended), see
Scenario 1: SmartFabric deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree uplink on page
179.

Figure 190. SmartFabric with Dell PowerSwitch S5232F-ON leaf switches

NOTE: See the Supported cables and optical connectors on page 223 for more information about the QSFP28-DD cables.



Dell PowerSwitch S5232F-ON configuration
This section outlines the configuration commands issued to the Dell PowerSwitch S5232F-ON switches. The switches start with
their factory default settings as indicated in the Reset SmartFabric OS10 switch to factory defaults on page 216 section.
NOTE: The MX IOMs run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs RSTP on each VLAN
while RSTP runs a single instance of spanning tree across the default VLAN. The Dell PowerSwitch S5232F-ON used in this
example runs SmartFabric OS10 and has RPVST+ enabled by default.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
There are four steps to configure the S5232F-ON upstream switches:
1. Set the switch hostname and management IP address.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the MX switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.

S5232F-ON Leaf 1:

configure terminal
hostname S5232-Leaf1

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.XX.XX/24

management route 0.0.0.0/0 100.67.XX.XX

S5232F-ON Leaf 2:

configure terminal
hostname S5232-Leaf2

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.YY.YY/24

management route 0.0.0.0/0 100.67.YY.YY

NOTE: Use the spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440 in increments of 4096. For example, to
make S5232F-ON Leaf 1 as the root bridge for VLAN 10, enter the command spanning-tree vlan 10 priority 4096.
Configure the VLT between switches using the following commands. VLT configuration involves setting a discovery interface
range and discovering the VLT peer in the VLTi. The vlt-domain command configures the peer leaf-2 switch as a backup
destination.

S5232F-ON Leaf 1:

interface range ethernet1/1/29-1/1/31
description VLTi
no shutdown
no switchport

vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29-1/1/31

S5232F-ON Leaf 2:

interface range ethernet1/1/29-1/1/31
description VLTi
no shutdown
no switchport

vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31

Configure the required VLANs on each switch. In this deployment example, the VLAN used is VLAN 10.



S5232F-ON Leaf 1 and Leaf 2 (identical configuration on both switches):

interface vlan10
description "Company A General Purpose"
no shutdown

Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG.
Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured to allow VLAN 10.

S5232F-ON Leaf 1 and Leaf 2 (identical configuration on both switches):

interface port-channel1
description "To MX Chassis"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 10
vlt-port-channel 1

interface ethernet1/1/1
description "To MX Chassis-1"
no shutdown
no switchport
channel-group 1 mode active

interface ethernet1/1/3
description "To MX Chassis-2"
no shutdown
no switchport
channel-group 1 mode active

end
write memory

Dell PowerSwitch S5232F-ON validation


This section contains validation commands for the Dell PowerSwitch S5232F-ON leaf switches.

show vlt
The show vlt command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in the
VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.

S5232-Leaf1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 1.0
Local System MAC address : 4c:76:25:e8:f2:c0
VLT MAC address : 4c:76:25:e8:f2:c0
IP address : fda5:74c8:b79e:1::1
Delay-Restore timer : 90 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


--------------------------------------------------------------------------------
2 4c:76:25:e8:e8:40 up fda5:74c8:b79e:1::2 1.0



show lldp neighbors
The show lldp neighbors command provides information about connected devices. In this case, ethernet1/1/1 and
ethernet1/1/3 connect to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, ethernet1/1/29 and
ethernet1/1/31, represent the VLTi connection.

S5232-Leaf1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
----------------------------------------------------------------
ethernet1/1/1 C160A2 ethernet1/1/41 20:04:0f:00:a1:9e
ethernet1/1/3 C140A1 ethernet1/1/41 20:04:0f:00:cd:1e
ethernet1/1/29 S5232-Leaf2 ethernet1/1/29 4c:76:25:e8:e8:40
ethernet1/1/31 S5232-Leaf2 ethernet1/1/31 4c:76:25:e8:e8:40

show spanning-tree brief


The show spanning-tree brief command validates that STP is enabled on the leaf switches. All the interfaces are
forwarding (FWD), as shown in the Sts column.

S5232-Leaf1# show spanning-tree brief


Spanning tree enabled protocol rapid-pvst
VLAN 1
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 32768, Address 2004.0f00.a19e
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32769, Address 4c76.25e8.f2c0
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 432
Flush Indication threshold 0 (MAC flush optimization is disabled)
Interface Designated
Name PortID Prio Cost Sts Cost Bridge ID PortID
--------------------------------------------------------------------------------
port-channel1 128.2517 128 50 FWD 0 32768 2004.0f00

Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 0 AUTO No

VLAN 10
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 32778, Address 4c76.25e8.e840
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32778, Address 4c76.25e8.f2c0
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 5
Flush Indication threshold 0 (MAC flush optimization is disabled)
Interface Designated
Name PortID Prio Cost Sts Cost Bridge ID PortID
--------------------------------------------------------------------------------
port-channel1 128.2517 128 50 FWD 1 32768 2004.0f00
Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 1 AUTO No



Scenario 4: SmartFabric connected to Cisco Nexus
3232C switches with legacy Ethernet uplink
The figure below shows a topology using a pair of Cisco Nexus 3232C as leaf switches, but other Cisco Nexus switches may be
used. This section details configuration of the Cisco Nexus 3232Cs and creation of a SmartFabric with the corresponding legacy
Ethernet uplinks.
NOTE: For information related to the same scenario using Ethernet - No Spanning Tree uplink, see Scenario 2: SmartFabric
connected to Cisco Nexus 3232C switches with Ethernet - No Spanning Tree uplink on page 183.

Figure 191. SmartFabric with Cisco Nexus 3232C leaf switches

NOTE: See Supported cables and optical connectors on page 223 for more information about the QSFP28-DD cables.

Cisco Nexus 3232C switch configuration


This section outlines the configuration commands that are issued to the Cisco Nexus 3232C leaf switches.
NOTE: While this configuration example is specific to the Cisco Nexus 3232C switch, the same concepts apply to other
Cisco Nexus and IOS switches.



The switches start at their factory default settings, as described in the Reset Cisco Nexus 3232C to factory defaults on page
216 section.
NOTE: The MX IOMs run Rapid per-VLAN Spanning Tree Plus (RPVST+) by default. Ensure the Cisco and Dell switches
are configured to use compatible STP protocols. The mode of STP on the Cisco switch can be set using the command
spanning-tree mode, which is shown below. In this deployment example, default VLAN is VLAN 1 and the created VLAN is
VLAN 10. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for more details.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
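
On the Cisco Nexus upstream pair, the equivalent bridge priority command is shown below (a sketch; the VLAN ID and priority value are illustrative):

3232C-Leaf1(config)# spanning-tree vlan 10 priority 4096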
There are four steps to configure the 3232C upstream switches:
1. Set switch hostname, management IP address, enable features and spanning tree.
2. Configure vPC between the switches.
3. Configure the VLANs.
4. Configure the downstream port channels to connect to the MX switches.
Enter the following commands to set the hostname, enable required features, and enable RPVST spanning tree mode. Configure
the management interface and default gateway.

Cisco Nexus 3232C Leaf 1:

configure terminal
hostname 3232C-Leaf1
feature vpc
feature lldp
feature lacp
spanning-tree mode rapid-pvst

interface mgmt0
vrf member management
ip address 100.67.XX.XX/24

vrf context management
ip route 0.0.0.0/0 100.67.XX.XX

Cisco Nexus 3232C Leaf 2:

configure terminal
hostname 3232C-Leaf2
feature vpc
feature lldp
feature lacp
spanning-tree mode rapid-pvst

interface mgmt0
vrf member management
ip address 100.67.YY.YY/24

vrf context management
ip route 0.0.0.0/0 100.67.YY.YY

Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive destination to the peer
switch management IP. Then create a port channel for the vPC peer link and assign the appropriate switchport interfaces.

Cisco Nexus 3232C Leaf 1:

vpc domain 255
peer-keepalive destination 100.67.YY.YY

interface port-channel255
switchport
switchport mode trunk
vpc peer-link

interface Ethernet1/29
description vPC Interconnect
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown

interface Ethernet1/31
description vPC Interconnect
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown

Cisco Nexus 3232C Leaf 2:

vpc domain 255
peer-keepalive destination 100.67.XX.XX

(The remaining commands are identical to Leaf 1.)

Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs. Then, exit
configuration mode and save the configuration.

Cisco Nexus 3232C Leaf 1 and Leaf 2 (identical configuration on both switches):

interface port-channel1
description To MX Chassis
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
vpc 255

interface Ethernet1/1
description To MX Chassis 1
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
channel-group 1 mode active
no shutdown

interface Ethernet1/3
description To MX Chassis 2
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
channel-group 1 mode active
no shutdown

end
copy running-configuration startup-configuration

NOTE: If the connections to the MX switches do not come up, see SmartFabric Troubleshooting on page 162 for
troubleshooting steps.
Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent across trunk ports
to all the switches, even to switches on which that VLAN is not configured, consuming network bandwidth with unnecessary
traffic. VLAN or VTP pruning can be used to eliminate this unnecessary traffic by pruning the VLANs.
Pruning restricts the flooded traffic to only those trunk ports with associated VLANs, optimizing the use of network
bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the Cisco upstream switches
are configured appropriately. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for additional information.
NOTE: Do not use switchport trunk allowed vlan all on the Cisco interfaces. The VLANs must be explicitly
assigned to the interface.

Configuration validation
This section covers the validation of the Cisco Nexus 3232C leaf switches. For information about the Dell Networking MX
switch validation commands, see Common CLI troubleshooting commands for Full Switch and SmartFabric modes on page 157.



show vpc
The show vpc command validates the vPC configuration status. The peer adjacency should be formed, and the peer should
show as alive. The end of the output shows which VLANs are active across the vPC.

NX3232C-Leaf1# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 255


Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 inconsistency reason : Consistency Check Not Performed
vPC role : secondary, operational primary
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)

vPC Peer-link status


---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po255 up 1,10

vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
255 Po1 up success success 1,10

show vpc consistency-parameters


The show vpc consistency-parameters command displays the configured values on all interfaces in the vPC. Only
those configurations that can prevent the vPC peer link and vPC from coming up are displayed.

NX3232C-Leaf1# show vpc consistency-parameters vpc 255


Legend:
Type 1 : vPC will be suspended in case of mismatch

Name Type Local Value Peer Value


------------- ---- ---------------------- -----------------------
STP Port Type 1 Normal Port Normal Port
STP Port Guard 1 Default Default
STP MST Simulate PVST 1 Default Default
lag-id 1 [(1000, [(1000,
20-4-f-0-cd-1e, 1, 0, 20-4-f-0-cd-1e, 1, 0,
0), (7f9b, 0), (7f9b,
0-23-4-ee-be-ff, 80ff, 0-23-4-ee-be-ff, 80ff,
0, 0)] 0, 0)]
mode 1 active active
delayed-lacp 1 disabled disabled
Speed 1 100 Gb/s 100 Gb/s
Duplex 1 full full
Port Mode 1 trunk trunk
Native Vlan 1 1 1
MTU 1 1500 1500
Dot1q Tunnel 1 no no
Switchport Isolated 1 0 0
vPC card type 1 N9K TOR N9K TOR
Allowed VLANs - 1,10 1,10
Local suspended VLANs - - -



show lldp neighbors
The show lldp neighbors command provides information about LLDP neighbors. In this example, Eth1/1 and Eth1/3 are
connected to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, Eth1/29 and Eth1/31, form the vPC
interconnect.

NX3232C-Leaf1(config)# show lldp neighbors


Capability codes:
(R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
(W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other
Device ID Local Intf Hold-time Capability Port ID
S3048-ON mgmt0 120 PBR ethernet1/1/45
C160A2 Eth1/1 120 PBR ethernet1/1/41
C140A1 Eth1/3 120 PBR ethernet1/1/41
NX3232C-Leaf2 Eth1/29 120 BR Ethernet1/29
NX3232C-Leaf2 Eth1/31 120 BR Ethernet1/31
Total entries displayed: 5

show spanning-tree summary


The show spanning-tree summary command validates that STP is enabled on the leaf switches. All interfaces are shown
as forwarding.

NX3232C-Leaf1# show spanning-tree summary


Switch is in rapid-pvst mode
Root bridge for: VLAN0010
Port Type Default is disable
Edge Port [PortFast] BPDU Guard Default is disabled
Edge Port [PortFast] BPDU Filter Default is disabled
Bridge Assurance is enabled
Loopguard Default is disabled
Pathcost method used is short
STP-Lite is disabled

Name Blocking Listening Learning Forwarding STP Active


---------------------- -------- --------- -------- ---------- ----------
VLAN0001 0 0 0 2 2
VLAN0010 0 0 0 2 2
---------------------- -------- --------- -------- ---------- ----------
2 vlans 0 0 0 4 4



Scenario 5: Connect MX9116n FSE to Fibre Channel
storage - NPIV Proxy Gateway mode
This section discusses a method for connecting the MX9116n FSE to an FC storage array that is attached to existing FC
switches, using NPIV Proxy Gateway (NPG) mode for the connection. NPG mode allows for larger SAN deployments that
aggregate I/O traffic at the NPG switch.

Figure 192. FC (NPG) network to Dell PowerStore 1000T

SmartFabric mode
This scenario shows attachment to an existing FC switch infrastructure. Configuration of the existing FC switches is beyond the
scope of this document.

NOTE: The MX5108n Ethernet Switch does not support this feature.

This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation on page 91.
To configure NPG mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the MX9116n FSE to the FC SAN.
CAUTION: Ensure that the cables do not criss-cross between the switches.

Make sure that the chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. Define FCoE VLANs to use in the fabric. See Define VLANs on page 91 for instructions on defining the VLANs.
3. If necessary, create the Identity Pools. See Create identity pools on page 111 for more information about how to create
identity pools.
4. Configure the physical switch ports for FC operation. See Configure Fibre Channel universal ports on page 101 for
instructions.
5. Create the FC Gateway uplinks. See Create Fibre Channel uplinks on page 101 for steps on creating uplinks.
6. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for setting up storage logical unit
numbers (LUNs).
NOTE: For information related to use cases and configuring Ethernet – No Spanning Tree uplink with different tagged and
untagged VLANs, see Ethernet – No Spanning Tree uplink on page 84.

NOTE: When MX9116n FSEs are in NPG mode, connecting to more than one SAN is possible by creating multiple vFabrics,
each with its own NPG gateway; this is supported only in Full Switch mode. However, an individual server can only connect
to one vFabric at a time, so one server cannot see both SANs.



Full switch mode
This section contains the Full Switch mode switch configuration of MX I/O modules in NPG mode. Configuration of the existing
FC switches is beyond the scope of this document.

NOTE: The MX5108n Ethernet Switch does not support this feature.

To configure the MX IOMs in Full Switch mode through the CLI, follow the steps below:
1. Verify that the MX9116n FSE is in Full Switch mode by running the show switch-operating-mode command.
2. Connect the MX9116n FSE to the FC SAN.
CAUTION: Ensure that the cables do not criss-cross between the switches.

Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for setting up storage logical unit
numbers (LUNs).
NOTE: When MX9116n FSEs are in NPG mode, connecting to more than one SAN is possible by creating multiple vFabrics,
each with its own NPG gateway; this is supported only in Full Switch mode. However, an individual server can only connect
to one vFabric at a time, so one server cannot see both SANs.
Configure global switch settings
Run the following commands to configure the switch hostname, OOB management IP address, and OOB management default
gateway.

MX9116-B1 MX9116-B2

configure terminal configure terminal

hostname MX9116-B1 hostname MX9116-B2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24
no shutdown no shutdown

management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY

Configure FC port group and speed


Configure the port group for the FC interfaces used to connect to storage. In the deployment example here, port-group 1/1/16
is configured for breakout from 1x64 GFC to 4x16 GFC.

MX9116-B1 MX9116-B2

configure terminal configure terminal


port-group 1/1/16 port-group 1/1/16
mode FC 16g-4x mode FC 16g-4x
exit exit
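
To confirm that the breakout took effect, a quick check is to list the port groups (a sketch; the exact output format varies by
OS10 version):

MX9116-B1# show port-group

Port-group 1/1/16 should be listed with mode FC 16g-4x, and the resulting FC breakout interfaces (fibrechannel 1/1/44:1
through 1/1/44:4 in this example) become available for the uplink configuration later in this section.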

Configure VLTi
Configure VLTi on ports 37 through 40 on the MX9116n FSE. This establishes the connection between the two MX IOMs.

MX9116-B1 MX9116-B2

interface range ethernet 1/1/37-1/1/40 interface range ethernet 1/1/37-1/1/40


description VLTi description VLTi
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX

discovery-interface ethernet 1/1/37-1/1/40 discovery-interface ethernet 1/1/37-1/1/40


peer-routing peer-routing
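
Before moving on, the VLT domain state can be checked from either IOM (a sketch; only the command is shown, and the
output fields vary by OS10 version):

MX9116-B1# show vlt 1

Once both IOMs are configured, the VLTi link status should show as up and peer-routing as enabled.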

NPG FC or FCoE configuration


For each IOM, define the VLANs and virtual fabrics. The global feature fc npg command enables NPG mode on the switch.
Create the FCoE VLANs and the vFabric for the SAN.

MX9116-B1 MX9116-B2

dcbx enable dcbx enable


feature fc npg feature fc npg

interface vlan 30 interface vlan 40


description FC_B1 description FC_B2
no shutdown no shutdown

vfabric 101 vfabric 102


vlan 30 vlan 40
fcoe fcmap 0xEFC00 fcoe fcmap 0xEFC01

Configure upstream interfaces


Configure the IOM FC uplink connections to the existing FC switch. In the deployment example here, FC ports 1/1/44:1 and
1/1/44:2 are configured for upstream FC switch connections.

MX9116-B1 MX9116-B2

interface fibrechannel 1/1/44:1 interface fibrechannel 1/1/44:1


description uplink1_to_FC_switch description uplink1_to_FC_switch
vfabric 101 vfabric 102
no shutdown no shutdown

interface fibrechannel 1/1/44:2 interface fibrechannel 1/1/44:2


description uplink2_to_FC_switch description uplink2_to_FC_switch
vfabric 101 vfabric 102
no shutdown no shutdown

Configure downstream interfaces


Configure the IOM ports connected to the MX compute sleds. In the deployment example here, ports 1/1/1 and 1/1/3 are
configured for downstream connections.

MX9116-B1 MX9116-B2

interface ethernet 1/1/1 interface ethernet 1/1/1


description MX_ComputeSled_1 description MX_ComputeSled_1
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

interface ethernet 1/1/3 interface ethernet 1/1/3


description MX_ComputeSled_2 description MX_ComputeSled_2
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.



MX9116-B1 MX9116-B2

uplink-state-group 1 uplink-state-group 1
name "UFD_Group_1" name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3 downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/44:1-1/1/44:2 upstream fibrechannel1/1/44:1-1/1/44:2
enable enable
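
Once the group is enabled, the UFD state can be verified (a sketch, assuming the group number used above):

MX9116-B1# show uplink-state-group 1 detail

The output should list the FC uplinks as upstream interfaces and the server-facing Ethernet ports as downstream interfaces,
with the group status shown as enabled and up.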

Configuration validation
show fcoe sessions
The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

NOTE: Due to the width of the command output, each line of output is shown on two lines below.

C140A1# show fcoe sessions


Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE
MAC FC-ID PORT WWPN PORT WWNN
-----------------------------------------------------------------------------------------
----------------------------------------------------------------
06:c3:f9:a4:cd:03 Eth 1/71/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:01:00 01:01:00 20:01:06:c3:f9:a4:cd:00 20:00:06:c3:f9:a4:cd:00
06:3c:f9:a4:cd:01 Eth 1/1/1 20:04:0f:21:d5:7f Fc 1/1/43:2 30
0e:fc:00:01:04:01 01:04:01 20:01:06:3c:f9:a4:cd:01 20:00:06:3c:f9:a4:cd:01

show vfabric
The show vfabric command output provides information including the default zone mode, the active zone set, and the
interfaces that are members of the vfabric.

C140A1# show vfabric


Fabric Name New vfabric
Fabric Type NPG
Fabric Id 101
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Members
fibrechannel1/1/43:1
fibrechannel1/1/43:2
ethernet1/1/1

show fc switch
The show fc switch command verifies the switch mode (for example, F_Port) for FC traffic.

C140A1# show fc switch


Switch Mode : NPG
Switch WWN : 10:00:20:04:0f:21:d4:80



Scenario 6: Connect MX9116n FSE to Fibre Channel
storage - FC Direct Attach
This chapter discusses a method for connecting an FC storage array directly to the MX9116n FSE.
On the PowerEdge MX platform, the difference between configuring NPG mode and FC Direct Attach mode on the MX9116n
FSE is the uplink type selected.

Figure 193. Fibre Channel (F_Port) Direct Attach to Dell PowerStore 1000T

SmartFabric mode
This example shows directly attaching a Dell PowerStore 1000T storage array to the MX9116n FSE using universal ports 44:1 and
44:2.

NOTE: The MX5108n Ethernet Switch does not support this feature.

This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation on page 91.
To configure FC Direct Attach mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the storage array to the MX9116n FSE. Each storage controller is connected to each MX9116n FSE.
● Define FCoE VLANs to use in the fabric. For instructions, see Define VLANs on page 91.
● Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. If necessary, create Identity Pools. See the Create identity pools on page 111 section for more information about how to
create identity pools.
3. Configure the physical switch ports for FC operation. See the Configure Fibre Channel universal ports on page 101 section
for instructions.
4. Create the FC Direct Attached uplinks. For more information about creating uplinks, see the Create Fibre Channel uplinks on
page 101 section.
5. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
6. Configure zones and zone sets. See the Managing Fibre Channel Zones on MX9116n FSE on page 65 section for instructions.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is now
ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for how to create host groups and map
volumes to the target host.

NOTE: The configuration of FC Zones through the CLI is supported while using SmartFabric mode.

NOTE: For information related to use cases and configuring Ethernet - No Spanning Tree uplink with different tagged and
untagged VLANs, see the Ethernet – No Spanning Tree uplink on page 84 section.



Full switch mode
This section contains the Full Switch mode switch configuration of MX I/O modules connected directly to Dell PowerStore
1000T in Direct-Attached mode.

NOTE: The MX5108n Ethernet Switch does not support this feature.

1. Verify that the MX9116n FSE is in Full Switch mode by running the show switch-operating-mode command.
2. Connect the MX9116n FSE to the FC SAN.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is now
ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 for how to create host groups and map
volumes to the target host.
Configure global switch settings
Configure the switch hostname, OOB management IP address, and OOB management default gateway.

MX9116-B1 MX9116-B2

configure terminal configure terminal

hostname MX9116-B1 hostname MX9116-B2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24
no shutdown no shutdown
management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY

Configure FC port group and speed


Configure the port group for the FC interfaces used to connect to storage. In the deployment example here, port-group 1/1/16
is configured for breakout from 1x64 GFC to 4x16 GFC.

MX9116-B1 MX9116-B2

configure terminal configure terminal


port-group 1/1/16 port-group 1/1/16
mode FC 16g-4x mode FC 16g-4x
exit exit

Configure VLTi
Configure VLTi on ports 37 through 40 on the MX9116n FSE. This establishes the connection between the two MX IOMs.

MX9116-B1 MX9116-B2

interface range ethernet 1/1/37-1/1/40 interface range ethernet 1/1/37-1/1/40


description VLTi description VLTi
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/37-1/1/40 discovery-interface ethernet 1/1/37-1/1/40
peer-routing peer-routing

Direct attached FC or FCoE configuration


For each IOM, define the VLANs and virtual fabrics. The global feature fc domain-id 1 command enables direct-attached
(F_Port) mode on the switch. Create the FCoE VLANs and the vFabric for the SAN.



MX9116-B1 MX9116-B2

dcbx enable dcbx enable


feature fc domain-id 1 feature fc domain-id 1

interface vlan 30 interface vlan 40


description FC_B1 description FC_B2
no shutdown no shutdown

vfabric 101 vfabric 102


vlan 30 vlan 40
fcoe fcmap 0xEFC00 fcoe fcmap 0xEFC01

Configure upstream interfaces


Configure the IOM FC uplink connections to the existing Dell PowerStore 1000T array. In the deployment example here, FC
ports 1/1/44:1 and 1/1/44:2 are configured for upstream storage array connections.

MX9116-B1 MX9116-B2

interface fibrechannel 1/1/44:1 interface fibrechannel 1/1/44:1


description uplink1_to_PowerStore description uplink1_to_PowerStore
vfabric 101 vfabric 102
no shutdown no shutdown

interface fibrechannel 1/1/44:2 interface fibrechannel 1/1/44:2


description uplink2_to_PowerStore description uplink2_to_PowerStore
vfabric 101 vfabric 102
no shutdown no shutdown

Configure downstream interfaces


Configure the IOM ports connected to the MX compute sleds. In the deployment example here, ports 1/1/1 and 1/1/3 are
configured for downstream connections.

MX9116-B1 MX9116-B2

interface ethernet 1/1/1 interface ethernet 1/1/1


description MX_ComputeSled_1 description MX_ComputeSled_1
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

interface ethernet 1/1/3 interface ethernet 1/1/3


description MX_ComputeSled_2 description MX_ComputeSled_2
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.

MX9116-B1 MX9116-B2

uplink-state-group 1 uplink-state-group 1
name "UFD_Group_1" name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3 downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/44:1-1/1/44:2 upstream fibrechannel1/1/44:1-1/1/44:2
enable enable

To configure the Fibre Channel zoning on MX IOMs, see the Managing Fibre Channel zones on MX9116n FSE section.
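
As a minimal sketch of that zoning flow in the OS10 CLI (the zone and zone set names below are hypothetical placeholders;
the WWNs are the storage and CNA port names from the show fc ns switch output later in this section):

MX9116-B1(config)# fc zone Zone_Sled1_SPA
MX9116-B1(config-fc-zone)# member wwn 58:cc:f0:90:49:20:0c:e7
MX9116-B1(config-fc-zone)# member wwn 20:01:f4:e9:d4:73:d0:0c
MX9116-B1(config-fc-zone)# exit
MX9116-B1(config)# fc zoneset ZoneSet_A
MX9116-B1(config-fc-zoneset)# member Zone_Sled1_SPA
MX9116-B1(config-fc-zoneset)# exit
MX9116-B1(config)# vfabric 101
MX9116-B1(config-vfabric-101)# zoneset activate ZoneSet_A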



Configuration validation
show fc ns switch
The show fc ns switch command shows all device ports that are logged into the fabric. In this deployment, four ports are
logged in to each switch: two storage ports and two CNA ports.

C140A1# show fc ns switch

Total number of devices = 3


Switch Name 10:00:20:04:0f:00:cd:1e
Domain Id 1
Switch Port fibrechannel1/1/44:1
FC-Id 01:00:00
Port Name 58:cc:f0:90:49:20:0c:e7
Node Name 58:cc:f9:90:c9:20:0c:e7
Class of Service 8
Symbolic Port Name PowerSt::::SPA::FC::::::
Symbolic Node Name PowerSt::::SPA::FC::::::
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

Switch Name 10:00:20:04:0f:00:cd:1e


Domain Id 1
Switch Port ethernet1/71/1
FC-Id 01:01:00
Port Name 20:01:06:c3:f9:a4:cd:03
Node Name 20:00:06:c3:f9:a4:cd:03
Class of Service 8
Symbolic Port Name
Symbolic Node Name
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

Switch Name 10:00:20:04:0f:00:cd:1e


Domain Id 1
Switch Port ethernet1/1/1
FC-Id 01:02:00
Port Name 20:01:f4:e9:d4:73:d0:0c
Node Name 20:00:f4:e9:d4:73:d0:0c
Class of Service 8
Symbolic Port Name QLogic qedf v8.24.8.0
Symbolic Node Name QLogic qedf v8.24.8.0
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

show fcoe sessions


The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

NOTE: Due to the width of the command output, each line of output is shown on two lines below.

C140A1# show fcoe sessions


Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE
MAC FC-ID PORT WWPN PORT WWNN
-----------------------------------------------------------------------------------------
----------------------------------------------------------------
06:c3:f9:a4:cd:03 Eth 1/71/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:01:00 01:01:00 20:01:06:c3:f9:a4:cd:03 20:00:06:c3:f9:a4:cd:03
f4:e9:d4:73:d0:0c Eth 1/1/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:02:00 01:02:00 20:01:f4:e9:d4:73:d0:0c 20:00:f4:e9:d4:73:d0:0c

show vfabric



The show vfabric command output provides information including the default zone mode, the active zone set, and the
interfaces that are members of the vfabric.

C140A1# show vfabric


Fabric Name New vfabric
Fabric Type FPORT
Fabric Id 1
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Switch Config Parameters
==========================================
Domain ID 1
==========================================
Switch Zoning Parameters
==========================================
Default Zone Mode: Allow
Active ZoneSet: None
==========================================
Members
fibrechannel1/1/44:1
ethernet1/1/1
ethernet1/71/1
ethernet1/71/2

show fc switch
The show fc switch command verifies the switch mode (for example, F_Port) for FC traffic.

C140A1# show fc switch


Switch Mode : FPORT
Switch WWN : 10:00:e4:f0:04:6b:04:42



Scenario 7: Connect MX5108n to Fibre Channel
storage - FSB
This chapter provides instructions for connecting either the MX5108n or MX9116n to a Fibre Channel SAN using native FCoE
uplinks. This connection type would be used in an environment where an existing switch such as the Dell PowerSwitch S4148U
has the capability to accept native FCoE and connect to native FC.
Dell SmartFabric OS10 uses a FIP Snooping Bridge (FSB) to detect and manage FCoE traffic and discovers the following
information:
● End nodes (E_Nodes)
● Fibre Channel forwarder (FCF)
● Connections between E_Nodes and FCFs
● Sessions between E_Nodes and FCFs
Using the discovered information, the switch installs ACL entries that provide security and point-to-point link emulation to
ensure that FCoE traffic is handled appropriately.
NOTE: The examples in this chapter use the Dell Networking MX5108n. The same instructions may also be applied and used
with the MX9116n.

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch, such as the Dell PowerSwitch S4148U shown in the figures below.
The FSB switch can connect to an upstream switch operating in NPG mode:
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
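
For example, a sketch of forcing an upstream OS10 switch to become the root bridge for one of the FCoE VLANs used later
in this scenario (the hostname and VLAN ID are assumptions taken from this example) would be:

S4148U-1(config)# spanning-tree vlan 30 priority 0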

Figure 194. FCoE (FSB) Network to Dell PowerStore 1000T through NPG mode switch

Or operating in F_Port mode:



Figure 195. FCoE (FSB) Network to Dell PowerStore 1000T through F_Port mode switch

NOTE: See the Dell SmartFabric OS10 User Guide for configuring FSB mode globally on the Dell Networking S4148U-ON
switches. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

SmartFabric mode
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation on page 91.
To configure FCoE mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the MX switch to the S4148U.
CAUTION: Verify that the cables do not criss-cross between the switches.

Make sure that the chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. Define FCoE VLANs to use in the fabric. See the Define VLANs on page 91 section for instructions on defining the VLANs.
3. If necessary, create Identity Pools. See the Create identity pools on page 111 for more information.
4. Create the FCoE uplinks. See the Create Fibre Channel uplinks on page 101 section for more information about creating
uplinks.
5. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment on page 109 for more
information.
6. Configure the S4148U switch. See the Dell Networking Fibre Channel Deployment with S4148U-ON in F_port Mode
knowledge base article for more information.
Once the server operating system loads the FCoE driver, the WWN displays on the fabric and on the FC SAN. The system
is now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 to create host groups and map
volumes to the target host.
To validate the configuration, use the same commands that are mentioned in SmartFabric Deployment Validation on page 121.

Full switch mode


This section contains the Full Switch mode switch configuration of MX I/O modules in FSB mode. Configuration of the
existing FC switches is beyond the scope of this document.
To configure the MX IOMs in Full Switch mode through the CLI, follow the steps below:
1. Verify that the MX IOM is in Full Switch mode by running the show switch-operating-mode command.
2. Connect the MX IOM to the S4148U.
3. Configure the S4148U switch. See the Dell Networking Fibre Channel Deployment with S4148U-ON in F_port Mode
knowledge base article for more information.



Once the server operating system loads the FCoE driver, the WWN displays on the fabric and on the FC SAN. The system
is now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T on page 235 to create host groups and map
volumes to the target host.
Configure global switch settings
Configure the switch hostname, OOB management IP address, and OOB management default gateway. Also configure the
breakout on the Ethernet interface used to connect to the upstream S4148U-ON switches. In this deployment example, port
1/1/11 is configured to break out from 1x 40 GbE to 4x 10 GbE.

MX5108-A1 MX5108-A2

configure terminal configure terminal

hostname MX5108-A1 hostname MX5108-A2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24
no shutdown no shutdown
management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY

interface breakout 1/1/11 map 10g-4x interface breakout 1/1/11 map 10g-4x

Configure VLTi
Configure VLTi on ports 9 and 10 on the MX5108n Ethernet switches. By default, port 9 is 40 GbE. Configure breakout on port 10
from 1x 100 GbE to 1x 40 GbE.

MX5108-A1 MX5108-A2

interface breakout 1/1/10 map 40g-1x interface breakout 1/1/10 map 40g-1x

interface range ethernet 1/1/9-1/1/10 interface range ethernet 1/1/9-1/1/10


description VLTi description VLTi
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/9-1/1/10 discovery-interface ethernet 1/1/9-1/1/10
peer-routing peer-routing

FSB FC or FCoE configuration


On each of the MX IOMs, enable FSB mode by running the feature fip-snooping with-cvl command.

NOTE: This command is mandatory for FSB cascading, port-pinning, and standalone FSB.

MX5108-A1 MX5108-A2

dcbx enable dcbx enable


feature fip-snooping with-cvl feature fip-snooping with-cvl

VLAN configuration
For each IOM, define the VLANs.

MX5108-A1 MX5108-A2

interface vlan 30 interface vlan 40


description FC-A1 description FC-A2
fip-snooping enable fip-snooping enable

no shutdown no shutdown



QoS and CoS configuration

MX5108-A1 MX5108-A2

class-map type network-qos fcoematch class-map type network-qos fcoematch


match qos-group 3 match qos-group 3

policy-map type network-qos PFC policy-map type network-qos PFC


class fcoematch class fcoematch
pause pause
pfc-cos 3 pfc-cos 3

class-map type queuing lan class-map type queuing lan


match queue 1 match queue 1
class-map type queuing san class-map type queuing san
match queue 3 match queue 3

policy-map type queuing ETS policy-map type queuing ETS


class lan class lan
bandwidth percent 70 bandwidth percent 70
class san class san
bandwidth percent 30 bandwidth percent 30

qos-map traffic-class TC-Q qos-map traffic-class TC-Q


queue 1 qos-group 0-2,4-7 queue 1 qos-group 0-2,4-7
queue 3 qos-group 3 queue 3 qos-group 3

Configure upstream interfaces

MX5108-A1 MX5108-A2

interface ethernet 1/1/11:1 interface ethernet 1/1/11:1


description "S4148U1_F-Port-1" description "S4148U2_F-Port-1"
switchport access vlan 1 switchport access vlan 1
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 30 switchport trunk allowed vlan 40
priority-flow-control mode on priority-flow-control mode on
service-policy input type network-qos PFC service-policy input type network-qos PFC
service-policy output type queuing ETS service-policy output type queuing ETS
ets mode on ets mode on
qos-map traffic-class TC-Q qos-map traffic-class TC-Q
fip-snooping port-mode fcf fip-snooping port-mode fcf
no shutdown no shutdown

Configure downstream interfaces


Configure the IOM ports connected to the MX compute sleds. In the deployment example here, ports 1/1/1 and 1/1/3 are
configured for downstream connections.

MX5108-A1 MX5108-A2

interface ethernet 1/1/1 interface ethernet 1/1/1


description MX_ComputeSled_1 description MX_ComputeSled_1
switchport access vlan 1 switchport access vlan 1
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 30 switchport trunk allowed vlan 40
service-policy input type network-qos PFC service-policy input type network-qos PFC
service-policy output type queuing ETS service-policy output type queuing ETS
qos-map traffic-class TC-Q qos-map traffic-class TC-Q
no shutdown no shutdown

interface ethernet 1/1/3 interface ethernet 1/1/3


description MX_ComputeSled_2 description MX_ComputeSled_2
switchport access vlan 1 switchport access vlan 1
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 30 switchport trunk allowed vlan 40
service-policy input type network-qos PFC service-policy input type network-qos PFC

service-policy output type queuing ETS service-policy output type queuing ETS
qos-map traffic-class TC-Q qos-map traffic-class TC-Q
no shutdown no shutdown

Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.

MX5108-A1 MX5108-A2

uplink-state-group 1 uplink-state-group 1
name "UFD_Group_1" name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3 downstream ethernet1/1/1-1/1/3
upstream ethernet1/1/11:1-1/1/11:2 upstream ethernet1/1/11:1-1/1/11:2
enable enable
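
The FSB scenario has no dedicated validation section; as a quick sketch, the FIP snooping state on the MX5108n can be
checked with the following commands (output omitted):

MX5108-A1# show fip-snooping sessions
MX5108-A1# show fip-snooping fcf

The sessions output should list the CNA E_Node MAC addresses on VLAN 30 (or VLAN 40 on MX5108-A2), and the FCF
entries should point at the upstream S4148U-ON ports.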

Scenario 8: Configure boot from SAN


The host operating system of an MX server can boot from a remote FC storage array through the IOMs. Booting to an
operating system through FC Direct Attach (F_Port), FC (NPG), and FCoE (FSB) is supported.

Figure 196. Boot from SAN

The figure below shows the example topology that is used in this chapter to demonstrate Boot from SAN. The steps provided
configure NIC partitioning, the system BIOS, an FCoE LUN, and an OS install media device for Boot from SAN.

Figure 197. FCoE boot from SAN



NOTE: See the Dell SmartFabric OS10 User Guide for configuring NPG mode globally on the S4148U-ON switches. Find the
relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

Configure NIC boot device


In this section, each QLogic CNA port is partitioned into one Ethernet and one FCoE partition.
NOTE: This is only done on CNA ports that carry converged traffic. In this example, these are the two 25 GbE QLogic CNA
ports on each server that attach to the switches internally through an orthogonal connection.
1. Connect to the server's iDRAC in a web browser and launch the virtual console.
2. In the virtual console, select BIOS Setup from the Next Boot menu.
3. Reboot the server.
4. On the System Setup Main Menu, select Device Settings.
5. Select the first CNA port.
6. Select Device Level Configuration.
7. Set the Virtualization Mode to NPAR (if not already set), and click Back.

Figure 198. Virtualization mode to NPAR


8. Choose NIC Partitioning Configuration.
9. Select Partition 1 Configuration.
10. Set NIC + RDMA Mode to Disabled.

Figure 199. Set the value of NIC and RDMA mode


11. Click Back to return.
12. Select Partition 2 Configuration.
13. Set FCoE Mode to Enabled as shown.



Figure 200. FCoE mode to Enabled
14. Click Back and select Back to go to Main Configuration Page.
15. Select NIC Configuration, then set the Boot Protocol to UEFI FCoE, and then click Back.

Figure 201. Set value of Boot Protocol to UEFI FCoE


16. If present, select Partition 3 Configuration in NIC Partitioning Configuration.
17. Set all modes to Disabled and then click Back.
18. If present, select Partition 4 Configuration in NIC Partitioning Configuration.
19. Set all modes to Disabled and then click Back.
20. Select FCoE Configuration.
NOTE: It is not required to have a Virtual LAN ID setup in the CNA, as the CNA uses FIP discovery on the untagged
VLAN to obtain the FCoE VLAN.
21. Set Connect 1 to Enabled.
22. Set World Wide Port Name Target 1 to the WWPN of the connected port on the PowerStore 1000T.

Figure 202. FCoE configuration


23. Click Back and then click Finish.
24. When prompted, answer Yes to save changes and click OK in the Success window.



25. Select the second CNA port and repeat the steps in this section for port 2.
26. Click Finish to exit to the System Setup Main Menu.

Configure BIOS settings


To allow boot from SAN, perform the following steps in the system BIOS settings to disable the PXE BIOS.
1. Select System BIOS from the System Setup Main Menu.
2. Select Network Settings.
3. Click Disable for all PXE Devices.
4. Click Back.
5. Click Finish, click Finish again, then select Yes to exit and reboot.
NOTE: As previously documented, this server configuration may be used to generate a template to deploy to other servers
with identical hardware. When a template is not used, repeat the steps in this chapter for each MX server sled that requires
access to the FC storage.

Connect FCoE LUN


The server should be provisioned to connect to an FCoE boot LUN before moving on. Follow the procedures in Dell PowerStore
1000T on page 235 to configure and connect to FCoE volumes. Once connected, continue to the steps below to complete the
Boot from SAN configuration.

Set up and install media connection


NOTE: The steps in this section were completed using the iDRAC Java Virtual Console.

1. Connect to the server’s iDRAC in a web browser and launch the virtual console.
2. In the virtual console, from the Virtual Media menu, select Virtual Media.
3. In the virtual console, from the Virtual Media menu, select Map CD/DVD.
4. Click Browse to find the location of the operating system install media then click Map Device.
5. In the virtual console, from the Next Boot menu, select Lifecycle Controller.
6. Reboot the server.

Use Lifecycle Controller to set up operating system driver for


media installation
The installation media for some operating systems do not contain the necessary FCoE drivers to boot from an FCoE LUN. Use
this procedure to create an internal operating system install media device.

NOTE: For VMware ESXi, see the Dell customized media instructions provided on the Dell Technologies Support website.

1. In Lifecycle Controller, select OS Deployment, then select Deploy OS.


2. From the Select an Operating System screen, verify that Boot mode is set to UEFI.
3. Select an operating system to install to the boot LUN.



Figure 203. Lifecycle Controller operating system deployment menu
4. Click Next.
5. Click the Manual Install check box, then click Next.
6. Click Next on the Insert OS Media screen.
7. Click Finish when prompted on the Reboot System screen.
8. The system reboots to the virtual media. Press any key to boot the install media when prompted.
9. Follow the operating system prompts to install the operating system to the FCoE storage volumes.



A
Additional Tasks
Reset SmartFabric OS10 switch to factory defaults
To reset SmartFabric OS10 switches back to the factory default configuration, enter the following commands:

OS10# delete startup-configuration

Proceed to delete startup-configuration [yes/no(default)]:yes


OS10# reload

System configuration has been modified. Save? [yes/no]:no

Proceed to reboot the system? [confirm yes/no]:yes

The switch reboots with default configuration settings.

Reset Cisco Nexus 3232C to factory defaults


To reset the Cisco Nexus 3232C switches to the factory default configuration, enter the following commands:

3232C# write erase


Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n) [n] y

After the next reboot, the switch loads with default configuration settings.

Connect to IO Module console port using RACADM


To connect to an IOM console port, first connect to the OME-Modular IP address over SSH, using the same credentials used to
log in to the OME-M UI.
Use the RACADM command from the MX9002m management module:

racadm connect [-b] -m <module>

-b is for Binary mode.


-m is the Module option. The module option can be one of the following:
● server-<n>: where n = 1 to 8
● switch-<n>: where n = 1 to 6 or <a1 | a2 | b1 | b2 | c1 | c2>
For example:
● Connect to I/O Module 1 serial console:

racadm connect -m switch-1

● Connect to Server 1 serial console:

racadm connect -m server-1



MX I/O module OS10 installation using ONIE
The Dell SmartFabric OS10 can be installed using the Open Network Install Environment (ONIE) on MX I/O modules in two
ways:
● Manual installation - Manually configure network information if a DHCP server is not available or install the OS10 software
image using USB media.
● Automatic installation - ONIE discovers network information including the Dynamic Host Configuration Protocol (DHCP)
server, connects to an image server, and downloads and installs an image automatically.

System setup
Connect the chassis management port on the management module to the network to download an image.
Before installation, verify that the system is connected correctly. To connect and access the I/O module on the MX chassis, see
the Connect to IO Module console port using RACADM section. You can also SSH directly to the IOM IP address, if one is
assigned through the management module.

Install OS10
For an ONIE-enabled switch, go to the ONIE boot menu. An ONIE-enabled switch boots with preloaded diagnostics (DIAGs) and
ONIE software.

+-------------------------------+
|*ONIE: Install OS |
| ONIE: Rescue |
| ONIE: Uninstall OS |
| ONIE: Update ONIE |
| ONIE: Embed ONIE |
| ONIE: Diag ONIE |
+-------------------------------+

Install OS Boots to the ONIE prompt and installs an OS10 image using the Automatic Discovery process. When ONIE
installs a new OS image, the previously installed image and OS10 configuration are deleted.

Rescue Boots to the ONIE prompt and enables manual installation of an OS10 image or ONIE update.
Uninstall OS Deletes the contents of all disk partitions, including the OS10 configuration, except ONIE and diagnostics.
Update ONIE Installs a new ONIE version.
Diag ONIE Runs the system diagnostics.

After the ONIE process installs an OS10 image and you reboot the switch in ONIE: Install OS mode (default), ONIE takes
ownership of the system and remains in Install mode until an OS10 image successfully installs again. To boot the switch from
ONIE for any reason other than installation, select the ONIE: Rescue or ONIE: Update ONIE option from the ONIE boot menu.
The OS10 installer image creates several partitions. After the installation is complete, the switch automatically reboots and loads
an OS10 active image. The other image becomes the standby image. Both the Active and Standby images are of the same
version.
NOTE: During an automatic or manual OS10 installation, if an error condition occurs that results in an unsuccessful
installation, perform Uninstall OS first to clear the partitions if there is an existing OS on the device. If the problem
persists, contact Dell Technologies Technical Support.

Manual installation
If you do not use the ONIE-based automatic installation of an OS10 image and if a DHCP server is not available, you can
manually install the image. Configure the Management port and the software image file to start the installation.



Manual installation using SCP, TFTP, or FTP server
1. Save the OS10 software image on an SCP, TFTP, or FTP server.
2. Power on the switch and select ONIE: Rescue for manual installation.
3. Enter the onie-discovery-stop command to stop the DHCP discovery.
4. Configure the IP addresses on the Management port, where x.x.x.x represents your internal IP address. After you configure
the Management port, the response is up.

ifconfig eth0 x.x.x.x netmask 255.255.0.0 up

5. Enter the onie-nos-install image_url command to install the software on the device.
NOTE: The installation command accesses the OS10 software from the specified SCP, TFTP, or FTP URL, creates
partitions, verifies installation, and reboots itself.
The following is an example of the installation command:

ONIE:/ # onie-nos-install ftp://a.b.c.d/PKGS_OS10-Enterprise-x.x.xx.bin

NOTE: a.b.c.d represents the location to download the image file from, and x.x.xx represents the version number
of the software to install.

Manual installation using USB drive


You can install the OS10 software image using a USB device. Verify that the USB device supports a FAT or EXT2 file system.
1. Copy the OS10 image file PKGS_OS10-Enterprise-x.x.xx.bin to the USB storage device.
2. Plug the USB storage device into the USB storage port on the switch.
3. Power on the switch to automatically boot using the ONIE: Rescue option.
4. Optionally, enter the onie-discovery-stop command to stop ONIE discovery if the device boots to ONIE: Install OS.
5. Run the mkdir /mnt/media command to create a USB mount location on the system.
6. Enter the fdisk -l command to identify the path to the USB drive.
7. Run the mount -t vfat usb-drive-path /mnt/media command to mount the USB media plugged in the USB port
on the device.
8. Enter the onie-nos-install /mnt/media/image_file command to install the software from the USB,
where /mnt/media specifies the path where the USB partition is mounted.
The ONIE auto-discovery process discovers the image file at the specified USB path, loads the software image, and reboots the
switch to OS10 active image.

Automatic installation
You can automatically install an OS10 image on a Dell ONIE-enabled device. This process is known as zero-touch install. After
the device boots to ONIE: Install OS, ONIE auto-discovery follows these steps to locate the installer file and uses the first
successful method:
1. Use a statically configured path that is passed from the boot loader.
2. Search file systems on locally attached devices, such as USB.
3. Search the exact URLs from a DHCPv4 server.
4. Search the inexact URLs based on the DHCP responses.
5. Search IPv6 neighbors.
6. Start a TFTP waterfall.
The ONIE automatic discovery process locates the stored software image, downloads and installs it, and reboots the device with
the new image. Automatic discovery repeats until a successful software image installation occurs and reboots the switch.
If a DHCPv4 server is used, ONIE auto-discovery obtains the hostname, domain name, Management interface IP address,
and the IP address of the domain name server (DNS) from the DHCP server and DHCP options. It also searches SCP, FTP, or
TFTP servers with the default DNS of the ONIE server. DHCP options are not used to provide the server IP.
If a USB storage device is used, ONIE searches only FAT or EXT2 file systems for an OS10 image.



MXG610s FC switch upgrade and downgrade

Upgrade
To upgrade the firmware on the MXG610s FC switch, perform the following steps:
1. Validate the current Fabric OS version and other build information by running the version command on the MXG610s IOM.

Figure 204. MXG610s current firmware version


2. Back up your switch configuration before downloading the firmware. Enter the supportsave command to collect all
current core files. Also, include all serial consoles and any open network connection sessions, such as TELNET, with any
troubleshooting reports.

Figure 205. Back up configuration

NOTE: The FTP protocol is being deprecated starting with Fabric OS version 9.0.1a. Uploads or downloads using FTP
may not be supported. For release notes and MXG610s software, contact Dell Technical Support.
3. Enter the firmwaredownload command to download the firmware. You will need to provide the Server Name or IP
address, File name, Username, Network Protocol (1-auto-select, 2-FTP, 3-SCP, 4-SFTP) to be used, and the password.

Figure 206. Firmware download


4. When the command is issued with the path to the directory where the firmware is stored, it automatically searches for the
correct package file type associated with the switch.
Firmware upgrades are available for customers with support service contracts and for partners on the Dell Technologies website
https://www.dell.com/support/home/en-us?app=drivers.
NOTE: When upgrading multiple switch modules, complete the steps above on each switch module before upgrading to the
next one. Do not copy one switch configuration to another switch. Save each switch configuration on file and restore each
switch with the corresponding switch configuration.

Downgrade
To downgrade the firmware on the MXG610s, perform the following steps:

NOTE: Once upgraded to Gen 7, you cannot downgrade to a Fabric OS lower than Fabric OS 9.0.0.

1. Enter the firmwareshow command to validate the current firmware on the switch.



Figure 207. Current firmware
2. Enter firmwaredownloadstatus to confirm that there is no firmware download already in progress. If there is a
download in progress, wait until that download process is complete.
3. Enter the switchshow command to verify that no ports are running as G_Ports.

Figure 208. Switch ports information


4. Enter configupload to save the configuration file to your FTP or SSH server or to a USB memory device.
5. Enter supportsave to retrieve all current core files.
NOTE: The information provided in the supportsave command is useful to troubleshoot the firmware download
process if a problem occurs.
6. Enter errclear to clear all existing messages, including internal messages.
7. Enter supportsave -R (uppercase R). This action clears all core and trace files. Continue with the firmware download.

MXG610s switch details validation


NOTE: When cabling SFP+ optical transceivers, start from port 0, then port 17, and then the other ports.

To validate that the transceivers are supported and working correctly, use the sfpshow command. It displays the port
information, transceiver information, and speed information.



Figure 209. Transceiver information

The switchshow command displays switch hostname, switch type, online status, switch role, and all other switch-related
information, as shown in the figure below.

Figure 210. Switch information

The fabricshow command displays Switch ID, Worldwide Name, and Management IP address of the switch.

Figure 211. Fabric information



B
Additional Information
PTM port mapping
The following figures show the port mapping between compute sleds and Pass-Through Module (PTM) interfaces. This mapping
applies to both 25 GbE and 10 GbE PTMs.

Figure 212. Ethernet PTM dual-port NIC mapping

Figure 213. Ethernet PTM quad-port NIC mapping

NOTE: Ports 9 through 14 are reserved for future expansion.



Supported cables and optical connectors

PowerEdge MX7000 supported optics and cables


The PowerEdge MX7000 supports various optics and cables. The sections in this appendix provide a summary of the relevant
industry standards and the use cases regarding the chassis. The following table shows the various cable types that are
supported.

NOTE: Additional information about supported cables and optics can be found in the PowerEdge MX IO Guide.

Table 26. Cable types


Cable type Description
DAC (copper) ● Direct attach copper
● Copper wires and shielding
● 2-wires/channel
AOC (optical) Active Optical Cable
MMF (optical) ● Multi-mode fiber
● Large core fiber (~50 µm)
● 100 m reach
● Transceivers are low cost
● Fiber is 3x the cost of SMF
SMF (optical) ● Single-mode fiber
● Tiny core fiber (~9 µm)
● 2/10 km reach
● Transceivers are expensive

The following table shows the different optical connectors and a brief description of the standard.

Table 27. Optical connectors


Small Form-factor Pluggable (SFP): SFP = 1 Gb, SFP+ = 10 Gb, SFP28 = 25 Gb
● 1 channel
● 2 fibers or wires
● 1-1.5 W
● Duplex LC optical connector
● MMF or SMF

Quad Small Form-factor Pluggable (QSFP): QSFP+ = 40 Gb, QSFP28 = 100 Gb
● 4 channels
● 8 fibers or wires
● 3.5-5 W
● MPO12 8 fiber parallel optical connector

Quad Small Form-factor Pluggable Double-Density (QSFP-DD): QSFP28-DD = 2x 100 Gb, QSFP56-DD = 2x 200 Gb
● 8 channels
● 16 fibers or wires
● 10 W
● MPO12DD 16 fiber parallel optical connector

The following table shows the model of IOM where each type of media is relevant.



Table 28. Media associations
Media type      MX9116n    MX7116n    MX5108n    25 GbE PTM
SFP+            -          -          -          x
SFP28           -          -          -          x
QSFP+           x          -          x          -
QSFP28          x          -          x          -
QSFP28-DD       x          x          -          -

Each type of media has a specific use case regarding the MX7000, and each has various applications. The
following sections outline where in the chassis each type of media is relevant.
NOTE: See the Dell Networking Transceivers and Cables document for more information about supported optics and
cables.

SFP+/SFP28
As seen in the preceding table, SFP+ is a 10 GbE transceiver and SFP28 is a 25 GbE transceiver, both of which can use either
fiber or copper media to achieve 10 GbE or 25 GbE communication in each direction. While the MX5108n has four 10GBase-T
copper interfaces, the focus is on optical connectors.
The SFP+ media type is typically seen in the PowerEdge MX7000 using the 25 GbE Pass-Through Module (PTM) and using
breakout cables from the QSFP+ and QSFP28 ports. The following are supported on the PowerEdge MX7000:
● Direct Attach Copper (DAC)
● LC fiber optic cable with SFP+ transceivers
The use of SFP+/SFP28 as it relates to QSFP+ and QSFP28 is discussed in those sections.

NOTE: The endpoints of the connection need to be set to 10 GbE if SFP+ media is being used.

Figure 214. SFP+/SFP28 media: Direct Attach Copper (DAC)



Figure 215. SFP+/SFP28 media: LC fiber optic cable

Figure 216. SFP+/SFP28 media: SFP+/SFP28 transceiver

The preceding figures show examples of SFP+ cables and transceivers. The SFP+ form factor is also referenced in
the QSFP+ and QSFP28 sections using breakout cables.

QSFP+
QSFP+ is a 40 Gb standard that uses either fiber or copper media to achieve communication in each direction. This standard
has four individual 10-Gb lanes that can be used together to achieve 40 GbE throughput or separately as four individual 10 GbE
connections (using breakout connections). One variant of the Dell QSFP+ transceiver is shown in the following figure.

Figure 217. QSFP+ transceiver

The QSFP+ media type has several uses in the MX7000. While the MX9116n does not have interfaces that are dedicated to
QSFP+, ports 41 through 44 can be broken out to 1x 40 GbE, which enables QSFP+ media to be used in those ports. The MX5108n
has one dedicated QSFP+ port and two QSFP28 ports that can be configured for 1x 40 GbE.



The following figures show examples of QSFP+ cables. The Direct Attach Copper (DAC) is a copper cable with a QSFP+
transceiver on either end. The Multi-fiber Push On (MPO) cable is a fiber cable that has MPO connectors on either end; these
connectors attach to QSFP+ transceivers. The third variant is an Active Optical Cable (AOC) that is similar to the DAC with a
fixed fiber optic cable in between the attached QSFP+ transceivers.

Figure 218. QSFP+ cables: Direct Attach Copper (DAC)

Figure 219. QSFP+ cables: Multi-fiber Push On (MPO) cable

Figure 220. QSFP+ cables: Active Optical Cable (AOC)

The MX7000 also supports the use of QSFP+ to SFP+ breakout cables. This offers the ability to use a QSFP+ port and connect
to four SFP+ ports on the terminating end.
The following figures show the DAC and MPO cables, which are two variations of breakout cables. The MPO cable in this
example attaches to one QSFP+ transceiver and four SFP+ transceivers.



Figure 221. QSFP+ to SFP+ Breakout cables: Direct Attach Copper (DAC) breakout

Figure 222. QSFP+ to SFP+ breakout cables: Multi-fiber Push On (MPO) breakout cable

NOTE: The MPO breakout cable uses a QSFP+ transceiver on one end and four SFP+ transceivers on the terminating end.

QSFP28
The QSFP28 standard is 100 Gb and uses either fiber or copper media to achieve communication in each direction. The QSFP28
transceiver has four individual 25-Gb lanes which can be used together to achieve 100 GbE throughput or separately as four
individual 25 GbE connections (using four SFP28 modules). One variant of the Dell QSFP28 transceiver is shown in the following
figure.

Figure 223. QSFP28 transceiver

There are three variations of cables for QSFP28 connections. The variations are shown in the following figures.



Figure 224. QSFP28 cables: Direct Attach Copper (DAC)

Figure 225. QSFP28 cables: Multi-fiber Push On (MPO) cable

Figure 226. QSFP28 cables: Active Optical Cable (AOC)

NOTE: The QSFP28 form factor can use the same MPO cable as QSFP+. The DAC and AOC cables are different in that the
attached transceiver is a QSFP28 transceiver rather than QSFP+.
QSFP28 supports the following breakout configurations:
● 1x 40 Gb with QSFP+ connections, using either a DAC, AOC, or MPO cable and transceiver.
● 2x 50 Gb with a fully populated QSFP28 end and two depopulated QSFP28 ends, each with 2x 25 GbE lanes. This product is
only available as DAC cables.
● 4x 25 Gb with a QSFP28 connection and using four SFP28 connections, using either a DAC, AOC, or MPO breakout cable
with associated transceivers.
● 4x 10 Gb with a QSFP28 connection and using four SFP+ connections, using either a DAC, AOC, or MPO breakout cable with
associated transceivers.

QSFP28 double density connectors


A key technology that enables the Scalable Fabric Architecture is the QSFP28 double-density (DD) connector. The QSFP28-DD
form factor expands on the QSFP28 pluggable form factor by doubling the number of available lanes from four to eight. With
each lane operating at 25 Gbps, the result is 200 Gbps for each connection.



The following figure shows that the QSFP28-DD connector is slightly longer than the QSFP28 connector. The added length
accommodates the second row of pads that carries the additional four 25 Gbps lanes.
NOTE: A 100 GbE QSFP28 optic can be inserted into a QSFP28-DD port, resulting in 100 GbE of available bandwidth. The
other 100 GbE of port capacity is not available.

Figure 227. QSFP28-DD and QSFP28 physical interfaces

QSFP28-DD cables and optics build on the current QSFP28 naming convention. For example, the current 100 GbE short range
transceiver has the following description:
Q28-100G-SR4: Dell Networking Transceiver, 100GbE QSFP28, SR4, MPO12, MMF
The equivalent QSFP28-DD description is easily identifiable:
Q28DD-200G-2SR4: Dell Networking Transceiver, 2x100GbE QSFP28-DD, 2SR4, MPO12-DD, MMF
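When mixing QSFP28-DD, QSFP28, and QSFP+ media, it can be helpful to confirm what the switch actually detects in each port before cabling a fabric. On OS10, the installed transceivers can be listed from the CLI; the exact output format varies by release:

OS10# show inventory media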

PowerEdge MX IOM slot support matrix


The following table shows the recommended PowerEdge MX IOM slot configurations.

Table 29. Recommended IOM slot configurations


Slot A1 Slot A2 Slot B1 Slot B2
MX9116n MX9116n — —
MX5108n MX5108n — —
MX7116n MX7116n — —
25 GbE PTM 25 GbE PTM — —
10GBase-T PTM 10GBase-T PTM — —
MX9116n MX9116n MX9116n MX9116n
MX5108n MX5108n MX5108n MX5108n
MX7116n MX7116n MX7116n MX7116n
MX9116n MX7116n — —
MX7116n MX9116n — —
MX9116n MX7116n MX9116n MX7116n
MX7116n MX9116n MX7116n MX9116n
25 GbE PTM 25 GbE PTM 25 GbE PTM 25 GbE PTM



Table 29. Recommended IOM slot configurations (continued)
Slot A1 Slot A2 Slot B1 Slot B2
10GBase-T PTM 10GBase-T PTM 10GBase-T PTM 10GBase-T PTM
MX9116n MX9116n MX7116n MX7116n
MX9116n MX7116n MX7116n MX7116n
MX7116n MX9116n MX7116n MX7116n

The following table lists all other supported IOM slot configurations. These include single-IOM configurations and mixed
configurations, such as a chassis with dual MX9116n FSEs in one fabric and a different IOM type in the other. Redundancy is
supported but not required in these configurations.

Table 30. Other supported IOM configurations


Slot A1 Slot A2 Slot B1 Slot B2
MX9116n — — —
MX5108n — — —
MX7116n — — —
25 GbE PTM — — —
10GBase-T PTM — — —
MX9116n — MX9116n —
MX5108n — MX5108n —
MX7116n — MX7116n —
25 GbE PTM — 25 GbE PTM —
10GBase-T PTM — 10GBase-T PTM —
MX9116n MX9116n MX9116n —
MX5108n MX5108n MX5108n —
MX7116n MX7116n MX7116n —
25 GbE PTM 25 GbE PTM 25 GbE PTM —
10GBase-T PTM 10GBase-T PTM 10GBase-T PTM —
MX9116n MX9116n MX5108n MX5108n
MX9116n MX9116n 25 GbE PTM 25 GbE PTM
MX9116n MX9116n 10GBase-T PTM 10GBase-T PTM
MX9116n MX7116n MX5108n MX5108n
MX7116n MX9116n MX5108n MX5108n
MX9116n MX7116n 25 GbE PTM 25 GbE PTM
MX7116n MX9116n 25 GbE PTM 25 GbE PTM
MX9116n MX7116n 10GBase-T PTM 10GBase-T PTM
MX7116n MX9116n 10GBase-T PTM 10GBase-T PTM
MX7116n MX7116n MX5108n MX5108n
MX7116n MX7116n 25 GbE PTM 25 GbE PTM
MX7116n MX7116n 10GBase-T PTM 10GBase-T PTM
MX5108n MX5108n MX9116n MX9116n
MX5108n MX5108n MX7116n MX7116n



Table 30. Other supported IOM configurations (continued)
Slot A1 Slot A2 Slot B1 Slot B2
MX5108n MX5108n MX9116n MX7116n
MX5108n MX5108n MX7116n MX9116n
MX5108n MX5108n 25 GbE PTM 25 GbE PTM
MX5108n MX5108n 10GBase-T PTM 10GBase-T PTM
25 GbE PTM 25 GbE PTM MX9116n MX9116n
25 GbE PTM 25 GbE PTM MX7116n MX7116n
25 GbE PTM 25 GbE PTM MX9116n MX7116n
25 GbE PTM 25 GbE PTM MX7116n MX9116n
25 GbE PTM 25 GbE PTM 10GBase-T PTM 10GBase-T PTM
10GBase-T PTM 10GBase-T PTM MX9116n MX9116n
10GBase-T PTM 10GBase-T PTM MX7116n MX7116n
10GBase-T PTM 10GBase-T PTM MX9116n MX7116n
10GBase-T PTM 10GBase-T PTM MX7116n MX9116n
10GBase-T PTM 10GBase-T PTM 25 GbE PTM 25 GbE PTM



C
Dell PowerSwitch S4148U-ON Configuration in Scenario 7
In Scenario 7: Connect MX5108n to Fibre Channel storage - FSB on page 207, S4148U-ON switches are connected to the
MX5108n switches in the MX7000 chassis and to the FC switches. This chapter covers the switch configuration for S4148U-ON
switches running OS10. Run the commands in the following sections to complete the configuration of both leaf switches.

Switch configuration commands


Run the following commands to configure the hostname, OOB management IP address, and default gateway.

General settings
NOTE: The MX I/O Modules run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs an RSTP
instance on each VLAN, while RSTP runs a single instance of spanning tree across all VLANs. The Dell PowerSwitch
S4148U-ON used in this example runs SmartFabric OS10 and has RPVST+ enabled by default. See the Spanning Tree Protocol
recommendations in the Dell SmartFabric OS10 User Guide for more information. Find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.

S4148U-ON Leaf 1:

configure terminal
hostname S4148U-Leaf1
interface mgmt 1/1/1
 no ip address dhcp
 no shutdown
 ip address 100.67.XX.XX/24
management route 0.0.0.0/0 100.67.XX.XX

S4148U-ON Leaf 2:

configure terminal
hostname S4148U-Leaf2
interface mgmt 1/1/1
 no ip address dhcp
 no shutdown
 ip address 100.67.YY.YY/24
management route 0.0.0.0/0 100.67.YY.YY

NOTE: Use the spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440, in increments of 4096. The switch with the
lowest bridge priority becomes the STP root.
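For example, to make Leaf 1 the preferred root bridge for VLAN 40, a low priority value such as 4096 can be assigned; the value shown here is illustrative:

S4148U-Leaf1(config)# spanning-tree vlan 40 priority 4096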

Configure VLANs
Run the commands in this section to configure the VLANs. In this deployment example, VLAN 30 and VLAN 40 are used. Set
the MTU to 9216 bytes.

S4148U-ON Leaf 1:

interface vlan40
 mtu 9216
 no shutdown

S4148U-ON Leaf 2:

interface vlan30
 mtu 9216
 no shutdown



Configure DCBx, NPG, and vFabric
Configure and enable the DCBx feature, the NPG FC feature, and the vFabric.
NOTE: Remove all FC configuration, vFabric global configuration, and vFabric configuration under interfaces or port
channels before configuring the FC feature.

S4148U-ON Leaf 1:

dcbx enable
feature fc npg
vfabric 101
 vlan 40
 fcoe fcmap 0xEFC00

S4148U-ON Leaf 2:

dcbx enable
feature fc npg
vfabric 102
 vlan 30
 fcoe fcmap 0xEFC01

Configure QoS
Configure the class maps and policy maps, and define the QoS parameters. In this example, queue 3 is defined as the output
queue in the queuing policy map, and queues 0 and 3 are each assigned 50 percent of the bandwidth. The commands are
identical on both switches; run them on S4148U-ON Leaf 1 and S4148U-ON Leaf 2:

class-map type network-qos class_Dot1p_3
 match qos-group 3
class-map type queuing map_ETSQueue_0
 match queue 0
class-map type queuing map_ETSQueue_3
 match queue 3
trust dot1p-map map_Dot1pToGroups
 qos-group 0 dot1p 0-2,4-7
 qos-group 3 dot1p 3
qos-map traffic-class map_GroupsToQueues
 queue 0 qos-group 0
 queue 3 qos-group 3
policy-map type network-qos policy_Input_PFC
 class class_Dot1p_3
  pause
  pfc-cos 3
policy-map type queuing policy_Output_BandwidthPercent
 class map_ETSQueue_0
  bandwidth percent 50
 class map_ETSQueue_3
  bandwidth percent 50
system qos
 trust-map dot1p map_Dot1pToGroups
 qos-map traffic-class map_GroupsToQueues
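Before attaching these policies to interfaces in the next section, a quick sanity check confirms that the class maps and policy maps were created as intended; for example:

S4148U-Leaf1# show class-map
S4148U-Leaf1# show policy-map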



Configure interfaces
In this topology, interfaces 1/1/1 and 1/1/3 on both leaf switches are connected to the FC switches. Interfaces 1/1/11 and 1/1/12
are connected to the MX5108n switches. Configure the interfaces as shown below. Make sure to configure port groups before
configuring the interfaces.

S4148U-ON Leaf 1:

interface fibrechannel 1/1/1
 no shutdown
 vfabric 101
interface ethernet1/1/11
 no shutdown
 switchport access vlan 1
 priority-flow-control mode on
 service-policy input type network-qos policy_Input_PFC
 service-policy output type queuing policy_Output_BandwidthPercent
 ets mode on
 vfabric 101
interface ethernet1/1/12
 no shutdown
 switchport access vlan 1
 priority-flow-control mode on
 service-policy input type network-qos policy_Input_PFC
 service-policy output type queuing policy_Output_BandwidthPercent
 ets mode on
 vfabric 101
end
write memory

S4148U-ON Leaf 2:

interface fibrechannel 1/1/1
 no shutdown
 vfabric 102
interface ethernet1/1/11
 no shutdown
 switchport access vlan 1
 priority-flow-control mode on
 service-policy input type network-qos policy_Input_PFC
 service-policy output type queuing policy_Output_BandwidthPercent
 ets mode on
 vfabric 102
interface ethernet1/1/12
 no shutdown
 switchport access vlan 1
 priority-flow-control mode on
 service-policy input type network-qos policy_Input_PFC
 service-policy output type queuing policy_Output_BandwidthPercent
 ets mode on
 vfabric 102
end
write memory
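After saving the configuration, the state of the NPG function can be checked from each leaf. The following commands are a suggested starting point; availability and output vary by OS10 release:

S4148U-Leaf1# show vfabric
S4148U-Leaf1# show fcoe sessions

show vfabric confirms the vfabric-to-VLAN and FCMAP bindings, while show fcoe sessions lists any FCoE sessions established through the switch.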



D
Dell PowerStore 1000T
About Dell PowerStore 1000T
This section shows how an administrator can configure a Dell PowerStore 1000T to create hosts, add volumes, determine the
Worldwide Port Names (WWPNs) of Converged Network Adapters (CNAs), and map storage volumes to the target host. The
WWPNs are used to connect FC storage targets to specific servers for storage access or OS boot.
NOTE: The configuration steps and screenshots in this section were taken from the current PowerStore OS version at the
time of publication. For the latest instructions, see the PowerStore 1000T Documentation Page, where you will find the
latest networking and configuration guides.

Configure PowerStore 1000T FC storage


This section covers the configuration of a Dell PowerStore 1000T storage array. See the Dell PowerStore Quick Start Guide for
more detail about setting up the storage array for the first time.
Once the initial storage array cluster configuration is complete and all the network devices are connected, perform the following
steps to create:
● FC storage array hosts
● Host groups
● Volume groups
● Volumes within the groups

Create a host
Perform the following steps to create a host.
1. Connect to the PowerStore 1000T UI in a web browser and log in using the required credentials.
2. Click on Compute and select the Hosts & Host Groups option.

Figure 228. Create host

3. Click Add Host. Enter a host name and select the Operating System. Click Next.



Figure 229. Add host name and operating system

4. Select the Protocol Type. In this example, Fibre Channel is selected. Click Next.
5. The initiator WWPNs are discovered automatically. Select the Initiator Identifier WWPN and click Next.
6. Review the selections on the Summary page and click Add Host to create the host as shown in the following figure.
The host is displayed on the Compute > Hosts & Host Groups page.

Figure 230. Fibre channel host created

Create host groups and add hosts


Perform the following steps to create host groups and add hosts to the group.
1. Once the host is created, click Add Host Group as shown in the following figure.



Figure 231. Add Host Group

2. Enter the name of the host group, select the Protocol Type, and select the appropriate host to add.
3. Click Create.

Figure 232. Host group created

Additional hosts may be added to the same host group as needed by clicking the (+ Add Host) button on the Host Groups
page.



Create volume groups
Perform the following steps to create volume groups.
1. Click on the Storage tab and select Volume Groups.
2. Enter the name for the volume group. Leave other options as default.
3. Click Create.

Figure 233. Volume group created

Create volumes
Perform the following steps to create the volumes under Volume Groups.
1. Once the volume group is created, click ADD VOLUMES.
2. Select Add New Volumes.



Figure 234. Add volumes to volume group

3. Enter the name of the volume. Select the desired quantity and size of each volume. In this example, a quantity of two
volumes, each 10 GB in size, is selected. Leave the other options as default.
4. Click Next as shown in the following figure.

Figure 235. Create volumes

5. Select the appropriate host group to map the volumes to. Leave the other options as default, as shown in the following figure.



Figure 236. Map volumes to host

NOTE: To modify a volume name or size, click Storage > Volumes, select the volume, and then click Modify to make changes.

Determine PowerStore 1000T storage array FC WWPNs
The WWPNs of FC adapters in storage arrays are also used for FC configuration. Perform the following steps to determine
WWPNs on PowerStore 1000T storage arrays.
1. Connect to the PowerStore 1000T UI in a web browser and log in.
2. Click Compute and select the Hosts & Host Groups option.
3. Select the host group and click the host. The Fibre Channel Ports page is displayed as shown in the following figure.

Figure 237. Fibre Channel Ports

Two WWNs are listed for each port. The World Wide Node Name (WWNN), outlined in black, identifies the PowerStore
1000T Node storage array. The WWPNs, outlined in blue, identify the individual ports associated with the corresponding
array node.
Record the WWPNs as shown in the following table:



Table 31. Storage array FC adapter WWPNs
Service processor Physical port WWNN WWPN
PS CTRL A FC 0 58:cc:f9:90:c9:20:0c:e7 58:cc:f9:90:49:21:0c:e7
PS CTRL A FC 1 58:cc:f9:90:c9:20:0c:e7 58:cc:f9:90:49:22:0c:e7
PS CTRL B FC 0 58:cc:f9:98:c9:20:0c:e7 58:cc:f9:98:49:21:0c:e7
PS CTRL B FC 1 58:cc:f9:98:c9:20:0c:e7 58:cc:f9:98:49:22:0c:e7

Determine CNA FCoE port WWPNs


In this example, the MX740c server's FCoE adapter WWPNs are used for the FC connection configuration. Perform the
following steps to determine the adapter WWPNs.
1. Connect to the MX compute sled's iDRAC in a web browser and log in.
2. Select System, then click Network Devices.
3. Click the CNA. In this example, NIC Mezzanine 1A is used. Under Ports and Partitioned Ports, the FCoE partition for
each port is displayed as shown in the following figure.

Figure 238. The FCoE partition displayed for each port



Figure 239. FCoE partitions in iDRAC

4. The first FCoE partition is Port 1, Partition 2. Click the (+) icon to view the MAC Addresses as shown in the following figure.

Figure 240. MAC address and FCoE WWPN for CNA port 1

5. Record the MAC Address and WWPN.

NOTE: A convenient method is to copy and paste these values into a text file.

6. Repeat steps 4 and 5 for the FCoE partition on port 2.


7. Repeat the steps in this section for the remaining MX740c servers.
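As an alternative to stepping through the iDRAC GUI on every sled, the same adapter inventory can be pulled from the iDRAC racadm CLI. This is a minimal sketch, assuming remote racadm access is enabled; the address and credentials are placeholders:

racadm -r <iDRAC-IP> -u <username> -p <password> hwinventory

The NIC entries in the output should include the MAC address and WWPN for each FCoE-capable partition.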
The FCoE WWPNs and MAC addresses used in this deployment example are shown in the following table:

Table 32. Server CNA FCoE port WWPNs and MACs


Server Port WWPN MAC
MX740c-1 1 20:01:F4:E9:D4:0C:24:F2 F4:E9:D4:0C:24:F2
MX740c-1 2 20:01:F4:E9:D4:0C:24:F3 F4:E9:D4:0C:24:F3
MX740c-2 1 20:01:34:80:0D:86:80:62 34:80:0D:86:80:62
MX740c-2 2 20:01:34:80:0D:86:80:63 34:80:0D:86:80:63



E
Hardware and Version Information
Hardware used in this guide
This section covers the rack-mounted networking switches and the storage array used in the examples in this guide.

Table 33. Hardware and roles


Hardware Role
Dell PowerSwitch S3048-ON One S3048-ON switch supports out-of-band (OOB) management traffic for all
examples.
Dell PowerSwitch S5232F-ON A pair of S5232F-ON switches are used as leaf switches in Scenario 1: SmartFabric
deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree
uplink on page 179.
Dell PowerSwitch S4148U-ON Two S4148U-ON switches support storage traffic, and are the first of two leaf switch
options.
Dell PowerSwitch Z9264F-ON This switch may be used as a leaf or spine switch in a leaf-spine topology. It is
optimized for nonblocking 100 GbE leaf/spine fabrics and high-density 25/50 GbE
in-rack server and storage connections. It provides up to 64 ports of 100 GbE QSFP28
or up to 128 ports of 1/10/25/40/50 GbE using breakout cables.
Dell PowerStore 1000T storage array This array is used for the FC connections. Additional 2U Disk Array Enclosures (DAEs)
may be added, providing twenty-five additional drives each.
Cisco Nexus 3232C A pair of Cisco Nexus 3232C switches are used as leaf switches in Scenario 2:
SmartFabric connected to Cisco Nexus 3232C switches with Ethernet - No Spanning
Tree uplink on page 183.

More detail about each of these devices is provided in the following sections.
For detailed information about hardware components related to the MX platform, see Software and firmware versions used on
page 245.

Dell PowerSwitch S3048-ON


The Dell PowerSwitch S3048-ON is a 1U switch with forty-eight 1 GbE BASE-T ports and four 10 GbE SFP+ ports.

Figure 241. PowerSwitch S3048-ON

Dell PowerSwitch S5232F-ON


The Dell PowerSwitch S5232F-ON is a 1U, multilayer switch with 32x 100 GbE QSFP28 ports and 2x 10 GbE SFP+ ports.



Figure 242. Dell PowerSwitch S5232F-ON

Dell PowerSwitch S4148U-ON


The Dell PowerSwitch S4148U-ON is a 1U switch with 48x SFP+ ports, 2x QSFP+ ports, and 4x QSFP28 ports.

Figure 243. Dell PowerSwitch S4148U-ON

Dell PowerSwitch Z9264F-ON


The Dell PowerSwitch Z9264F-ON is a 2U, multilayer switch with 64x 100 GbE QSFP28 ports and 2x 10 GbE SFP+ ports.

Figure 244. PowerSwitch Z9264F-ON

Dell PowerStore 1000T


The PowerStore 1000T is a versatile storage platform built on Intel Xeon Scalable processors and advanced storage
technologies, including end-to-end NVMe flash, dual-ported Intel Optane SSDs, and NVMe-FC. It supports NAS,
iSCSI, FC, and NVMe-FC. The base enclosure is a 2U, two-node enclosure with twenty-five 2.5" NVMe drive slots.

Figure 245. Dell PowerStore 1000T front view



Figure 246. Dell PowerStore 1000T rear view

Cisco Nexus 3232C


The Cisco Nexus 3232C is a 1U fixed form-factor 100 GbE switch with thirty-two QSFP28 ports supporting 10/25/40/50/100
GbE.

Software and firmware versions used


Scenarios 1 through 4
The following tables include the hardware components and supported software and firmware versions for Scenario 1, Scenario 2,
Scenario 3, and Scenario 4.

Dell PowerSwitch
Table 34. Dell PowerSwitch switches and OS versions – Scenarios 1 through 4
Qty Item Software version
2 Dell PowerSwitch S5232F-ON leaf switches 10.5.4.0
1 Dell PowerSwitch S3048-ON OOB management switch 10.5.4.0

Dell PowerEdge MX7000 chassis and components


Table 35. Dell PowerEdge MX7000 chassis and components – Scenarios 1 through 4
Qty Item Software version
2 Dell PowerEdge MX7000 chassis -
4 Dell PowerEdge MX740c sled See the following table
4 Dell PowerEdge MX9002m modules 2.00.00
2 Dell Networking MX9116n FSE 10.5.4.1
2 Dell Networking MX7116n FEM -

Table 36. Minimum software and firmware requirements - MX9116n


Software Minimum release version requirement
ONIE 3.35.5.1-24
BIOS 3.35.0.1-5
CPLD system 0.13



Dell PowerEdge MX740c chassis and components
Table 37. Dell PowerEdge MX740c compute sled details – Scenarios 1 through 4
Qty per sled Item Firmware version
1 Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz -
12 16 GB DDR4 DIMMs (192 GB total) -
3 600 GB SAS HDD -
1 Intel(R) Ethernet 25 G 2P XXV710 mezzanine card 20.5.13
- BIOS 2.15.1
- iDRAC with Lifecycle Controller 5.10.50.00

Cisco Nexus switches


Table 38. Nexus switches and OS versions – Scenarios 1 through 4
Qty Item Software version
2 Cisco Nexus 3232C 7.0(3)I4(1)

Scenarios 5 through 8
The tables in this section include the hardware components and supported software and firmware versions for Scenario 5
through Scenario 8 in this document.

Table 39. Minimum software and firmware requirements - MX9116n


Software Minimum release version requirement
ONIE 3.35.5.1-24
BIOS 3.35.0.1-5
CPLD system 0.13

Table 40. Minimum software and firmware requirements - MX5108n


Software Minimum release version requirement
ONIE 3.35.5.1-24
BIOS 3.35.0.1-5
CPLD system 0.13

Table 41. Dell Switches and OS versions - Scenarios 5 through 8


Qty Item Software version
1 Dell PowerSwitch S3048-ON management switch 10.5.4.0
2 Dell Networking MX9116n FSE 10.5.4.0
2 Dell Networking MX5108n 10.5.4.0
2 Dell PowerSwitch S4148U-ON 10.5.4.0
2 Dell Networking MX7116n FEM -



Table 42. Dell PowerEdge MX-series components - Scenarios 5 through 8
Qty Item Software version
4 Dell PowerEdge MX9002m modules 2.00.00
4 Dell PowerEdge MX740c compute sleds See the following table

Table 43. Dell PowerEdge MX740c compute sled details - Scenarios 5 through 8
Qty per sled Item Firmware version
1 QLogic QL41262HMKR (25 G) mezzanine CNA 15.35.06
2 Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz -
12 16 GB DDR4 DIMMs (192 GB total) -
3 600 GB SAS HDD -
- BIOS 2.15.1
- iDRAC with Lifecycle Controller 5.10.50.00



F
References
Dell Technologies documentation
The following Dell Technologies documentation provides additional and relevant information. Access to these documents may
depend on your login credentials. If you do not have access to a document, contact your Dell Technologies representative.
● Dell Networking Guides
● Dell PowerEdge MX IO Guide
● Dell SmartFabric OS10 User Guide
● Dell PowerStore Guides
● Dell Technologies Interactive Demo: OpenManage Enterprise Modular for MX solution management
● Dell PowerEdge MX SmartFabric and Cisco ACI Integration Guide
● Dell Fabric Design Center
● Manuals and documents for Dell Networking MX5108n
● Manuals and documents for Dell Networking MX9116n
● Manuals and documents for Dell PowerEdge MX7000
● Manuals and documents for Dell PowerSwitch S3048-ON
● Manuals and documents for Dell PowerSwitch S5232-ON
● Manuals and documents for Dell PowerSwitch S4148U-ON
● Fibre Channel Deployment with S4148U-ON in F_Port Mode
● FCoE-to-Fibre Channel Deployment with S4148U-ON in F_Port Mode

OME-M and OS10 compatibility and documentation


This section includes the compatibility matrix of OME-M and OS10 and provides links to OME-M and OS10 user guides and
release notes for all versions.

OME-M and OS10 compatibility


OME-M version OS10 version
1.10.00 10.5.0.1
1.10.20 10.5.0.5
1.20.00 10.5.0.7, 10.5.0.9
1.20.10 10.5.1.6, 10.5.1.7, 10.5.1.9
1.30.00 10.5.2.3 (factory only), 10.5.2.4, 10.5.2.6
1.30.10 10.5.2.6
1.40.00, 1.40.10, 1.40.20 10.5.3.1
2.00.00 10.5.4.1
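When matching an existing deployment against this table, the OS10 release running on an IOM can be read directly from the switch CLI, and the OME-M version is displayed in the OME-M web interface:

OS10# show version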

OME-M and OS10 documentation


The following OME-M and OS10 documents are available on the Documentation tab on the PowerEdge MX7000 support site.
OME-M documentation
● Dell OpenManage Enterprise-Modular Edition for PowerEdge MX7000 Chassis User's Guide

● Dell OpenManage Enterprise-Modular Edition for PowerEdge MX7000 Chassis Release Notes
OS10 documentation:
● Dell SmartFabric OS10 User Guide
● Dell SmartFabric OS10 Release Notes
NOTE: To access the OS10 Release Notes, you must log in to your Dell Digital Locker account.

Dell Technologies Networking Infrastructure Solutions documentation

The following documentation provides additional networking solutions information.
NOTE: Access to the documentation may require user credentials. If you do not have access to a document, contact your
Dell Technologies representative.
Networking solutions: https://infohub.delltechnologies.com/t/networking-solutions-57/

Support and feedback


For technical support, visit https://www.dell.com/support.
We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to
[email protected].

