DC Interview



Data Center Interview Questions and Answers

Contributors –
 Vijay Pandey
 Official Cisco Engineers.

Nexus 7000
Q. Is the FAB1 module supported with SUP2 or SUP2E?
A. Yes, supported with both supervisors.

Q. What minimum software release do I need to support SUP2 or SUP2E?
A. NX-OS 6.1.

Q. Can I still run NX-OS version 6.0 on SUP1?
A. Yes.

Q. Can I upgrade SUP2 to SUP2E?
A. Yes. You would need to upgrade both the CPU and memory on board.
[8/14/2014] update: after further investigation I found that the answer is no (the upgrade is not possible).

Q. I need to enable high availability (HA); can I use one SUP1 with one SUP2 in the same chassis?
A. No, for high availability the two supervisors must be of the same type, so you would need to use either SUP1/SUP1 or SUP2/SUP2.

Q. How many I/O modules can I have in a 7004?
A. A maximum of 2. The other 2 slots are reserved for the supervisors and cannot be used for I/O modules.

Q. FAB1 or FAB2 on the 7004?
A. The Nexus 7004 chassis does not actually use any FABs; the I/O modules are connected back to back.

Q. How many FEXs can the Nexus 7000 support?
A. 32 FEXs with SUP1 or SUP2, and 48 FEXs with SUP2E.
[8/14/2014] update: 64 FEXs with SUP2E or SUP2.
Q. How many VDCs can the Nexus 7000 support?
A. 4 VDCs (including 1 VDC for management) with SUP1 or SUP2, and 8 + 1 (management) VDCs with SUP2E.

Q. Which modules support FabricPath, FCoE, and FEX connectivity?
A. FabricPath is supported on all F1 and F2 modules. FCoE is supported on all F1 and F2 modules except the 48 x 10GE F2 (copper) module. FEX is supported on all F2 modules.
[8/14/2014] update: The F2e module supports FCoE, FEX, and FabricPath. The F3 module (12-port 40GE) supports FEX, FabricPath, FCoE, OTV, MPLS, and LISP.

Q. Which modules support LISP, MPLS, and OTV?
A. All M1 and M2 modules support MPLS and OTV. LISP is supported only on the 32 x 10GE M1 module.

Q. Does the Nexus 7004 support SUP1?
A. No, the Nexus 7004 supports only SUP2 and SUP2E.

Q. Can I place an F2 module in the same VDC with an F1 or M module?
A. No, the F2 module must be placed in a separate VDC, so if you plan to mix F2 with F1 and M modules in the same chassis you will need a VDC license.
[8/14/2014] update: The F2e and F3 (12-port 40GE) modules can interoperate with the M-series in the same VDC.

Q. Can I upgrade from FAB1 to FAB2 modules during operation without any service disruption?
A. Yes, if you replace each module within a couple of minutes. Just make sure to replace all FAB1 with FAB2 modules within a few hours. If you mix FAB1 with FAB2 modules in the same chassis for a long time, the FAB2 modules will operate in backward-compatible mode and downgrade their speed to match the FAB1 modules' speed.
Q. Can I use FAB1 modules in a 7009 chassis?
A. No, the Nexus 7009 uses only FAB2 modules.

Q. Does the Nexus 7000 support native Fibre Channel (FC) ports?
A. No, FC ports are not supported on the Nexus 7000. You would need
either the Nexus 5500 or the MDS 9000 to get FC support.

Cisco Nexus 7000 FAQs



1. What are the 7K models available?
Answer: 7004, 7009, 7010 and 7018.

2. Does the 7004 support Fabric Modules?
Answer: No, fabric modules are not present in the 7004, whereas all other Nexus 7Ks need fabric modules to work.

3. In the 7K, can we use the supervisor slots for line cards?
Answer: No, we cannot use the supervisor slots for line cards.

4. Is Sup-1 supported in all 7K models?
Answer: No, Sup-1 is not supported in the 7004, whereas all other models support Sup-1.

5. Is Fab-1 supported in all 7K models?
Answer: No, Fab-1 is not supported in the 7004 and 7009.

6. Can we use non-XL M1 modules in all 7Ks?
Answer: No, the non-XL modules are not supported in the 7004.

7. Can we use a mix of Fab-1 and Fab-2 in a single chassis?
Answer: Yes, but only one fabric version (1 or 2) is recommended in a chassis.

8. Can we use the Fabric module of a 7009 in a 7018?
Answer: No, we cannot use the Fabric module of one model in another.

9. Can we create a port-channel with one port on an M card and the other on an F card?
Answer: No, it is not possible to bundle M-series and F-series ports together.

10. Is it possible to create a port-channel with M-series ports on one end while the other end is an F card?
Answer: No, we cannot make a port-channel with M ports at one end and F ports at the other side.

11. Are FCoE and FabricPath supported on M-series cards?
Answer: No, FabricPath and FCoE are not supported on M-series line cards.

12. Is mixing I/O modules on the same side of a port channel supported?
Answer: No, mixing of I/O modules in a port-channel is not supported.

13. Can we configure LACP on a half-duplex port?
Answer: No, LACP does not support half-duplex mode. Half-duplex ports in LACP port channels are put in the suspended state.

14. Does the Nexus 7000 series support fragmentation?
Answer: No, the Nexus 7K doesn't support fragmentation and reassembly.

15. Is dense mode supported on the Nexus 7K?
Answer: No, the Nexus 7K only supports PIM sparse mode.

How to configure virtual port channel (vPC) on Nexus

What is vPC
A virtual port channel (vPC) allows you to bundle physical links that are connected to two different chassis (Nexus 7000/5000). This creates redundancy and increases bandwidth. A big advantage of using vPC is that you get redundancy without relying on spanning-tree, and a port-channel recovers faster from a link failure than spanning-tree.

Advantages of using VPC

 Allows a single device to use a port channel across two upstream devices.
 Eliminates Spanning Tree Protocol (STP) blocked ports.
 Provides a loop-free topology.
 Uses all available uplink bandwidth.
 Provides fast convergence if either the link or a device fails.
 Provides link-level resiliency.
 Assures high availability.

The terminology used in vPCs:

 vPC—The combined port channel between the vPC peer devices and the downstream device.
 vPC peer device—One of a pair of devices that are connected with the special port channel known as the vPC
peer link.
 vPC peer link—The link used to synchronize states between the vPC peer devices. Both ends must be on 10-
Gigabit Ethernet interfaces.
 vPC domain—This domain includes both vPC peer devices, the vPC peer-keepalive link, and all of the port
channels in the vPC connected to the downstream devices. It is also associated to the configuration mode that
you must use to assign vPC global parameters.
 vPC peer-keepalive link—The peer-keepalive link monitors the vitality of a vPC peer.

VPC configuration example


Nexus01:
Nexus01#config t
Nexus01(config)# feature vpc

Nexus01(config)# vpc domain 1


Nexus01(config-vpc-domain)# peer-keepalive destination 10.10.10.102
! The management VRF will be used by default

Nexus01(config)# interface ethernet 2/1-2


Nexus01(config-if-range)# switchport mode trunk
Nexus01(config-if-range)# channel-group 10 mode active
Nexus01(config-if-range)# interface port-channel 10
Nexus01(config-if)# vpc peer-link

Nexus01(config)# interface ethernet 1/1


Nexus01(config-if)# switchport mode trunk
Nexus01(config-if)# channel-group 100 mode active

Nexus01(config)# interface port-channel 100


Nexus01(config-if)# vpc 100

Nexus02:
Nexus02#config t
Nexus02(config)# feature vpc
Nexus02(config)#
Nexus02(config)# vpc domain 1
Nexus02(config-vpc-domain)# peer-keepalive destination 10.10.10.101
! The management VRF will be used by default

Nexus02(config)# interface ethernet 2/1-2


Nexus02(config-if-range)# switchport mode trunk
Nexus02(config-if-range)# channel-group 10 mode active
Nexus02(config-if-range)# interface port-channel 10
Nexus02(config-if)# vpc peer-link

Nexus02(config)# interface ethernet 1/1


Nexus02(config-if)# switchport mode trunk
Nexus02(config-if)# channel-group 100 mode active

Nexus02(config)# interface port-channel 100


Nexus02(config-if)# vpc 100

Nexus01# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 1
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
vPC role : primary

vPC Peer-link status


---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po100 up 1,100-110

Introduction

Unlike traditional Catalyst switches running IOS, Nexus switches run NX-OS. There are some similarities between IOS and NX-OS, but there are also new features and commands introduced in NX-OS.

In regard to CLI commands, there are several new commands in the Nexus NX-OS image, there are old commands you will find in a regular IOS image, and there are modified commands compared to regular IOS. Legacy commands such as write memory are not supported anymore, so you have to get used to the copy running-config startup-config command.

A nice feature of Nexus switches is that you don't have to exit configuration mode to type non-configuration commands, and you don't need to prefix them with the do command while in configuration mode. You simply type the non-configuration commands directly, whether you are in regular enable mode or configuration mode, similar to the PIX Firewall or ASA.

Switch ports on these Nexus switches support only 1 Gbps and 10 Gbps speeds. Interestingly, these gigabit ports do not show as GigabitEthernet or TenGigabitEthernet ports in the switch configuration; instead, the ports show as Ethernet interfaces. To find out which speed a port is currently running at, you can simply issue the good old show interface status or show interface command.

Along with new commands and features, there are several new concepts and technologies in place. One new technology found in Nexus switches is FEX (Fabric Extender). Typically you use FEX technology when you have Nexus 2000 and Nexus 5000 interconnectivity.

This FEX technology is similar to the Catalyst 3750 stacking technology, where the switch configuration within the same "stack" is visible through just one switch. Similar to a Catalyst 3750 stack configuration, the Nexus 5000 shows as "module 1" and the Nexus 2000 shows as "module 2".

Unlike a Catalyst 3750 stack, the Nexus switches do not use stack cables. The switch ports used to interconnect the two Nexus switches are SFP slots, and they are configured as FEX ports instead of regular trunk or access ports.

To start using the FEX feature, you have to activate FEX on the Nexus 5000, just as you have to activate telnet and tacacs+ should your network need those as well. In other words, there are some features that you have to activate before you can use them as part of your Nexus switch network topology.

Further, you have to define what the Nexus 2000 port numbering should look like. If, say, you configure the FEX as FEX 101, then the Nexus 2000 switch ports will show as interface Ethernet 101/1/x ("module 2"), while the Nexus 5000 switch ports show as the regular interface Ethernet 1/x ("module 1").
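As a minimal sketch of that setup on the Nexus 5000 (the uplink interface Ethernet 1/10 here is just an assumed example; your port will differ):

N5K# config t
N5K(config)# feature fex
N5K(config)# interface ethernet 1/10
N5K(config-if)# switchport mode fex-fabric
N5K(config-if)# fex associate 101
! Once the FEX is online, its host ports show up as Ethernet101/1/1, Ethernet101/1/2, etc.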

Note that there is no console port on the Nexus 2000; there is one on the Nexus 5000, however. Therefore you need to use the FEX technology to interconnect the Nexus 2000 and Nexus 5000 in order to have console access to the Nexus 2000.

When you need to use the management port on the Nexus 5000 (and also on the Supervisor 6E of the Catalyst 4500 series), make sure you have at least some familiarity with VRF (VPN Routing and Forwarding) technology, since these management ports are placed in a management VRF.

You can't disable the VRF or make the management (mgmt) interface part of the default VRF or global routing table, since such an action is not supported. The idea of having the management port in a different routing table is to separate the management network from the production network, in addition to integrating VRF into the Nexus switch platform and the new Catalyst 4500 Supervisor Engines.

You will notice that there is a little difference in the VRF command implementation between traditional IOS and NX-OS. You can also enter the subnet mask in CIDR format, since the Nexus platform saves all IP address info in CIDR format.
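For illustration, a minimal sketch of a mgmt0 setup (the addresses are made up for the example):

N5K# config t
N5K(config)# interface mgmt0
N5K(config-if)# ip address 192.168.10.5/24
N5K(config-if)# exit
N5K(config)# vrf context management
N5K(config-vrf)# ip route 0.0.0.0/0 192.168.10.1
! The mask is entered in CIDR format, and the default route lives
! inside the management VRF rather than in the global table.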

Unlike traditional Catalyst switches that come with a default Layer-2/3 VLAN 1, Nexus 5000 switches come with only a default Layer-2 VLAN 1. If you are considering using a non-management switch port as your customized management port, it might not work. Note that the Nexus 5000 and 2000 switches were originally designed as Layer-2 switches. The Layer-2 switch design means that you can't create a Layer-3 VLAN interface on these Nexus switches as a management VLAN (i.e., SVI VLAN interface 1, 50, or else) like you usually expect in traditional Catalyst switches. You can't convert a non-management switch port into a routed port either. In other words, there is no choice but to use the mgmt port and get used to the VRF environment if you are not used to it yet.

As of certain NX-OS releases, the Nexus 5000 switches are now Layer-3 capable, though the 2000 model remains a Layer-2 device. You may need to upgrade the NX-OS image and/or the license on the 5000 model in order to support this Layer-3 functionality.

Some management commands, like backing up your Nexus configuration to a TFTP server (the copy running-config tftp: command), are also VRF-aware. With the copy running-config tftp: command, you will be asked whether the TFTP server is located within the default VRF or elsewhere (like the management VRF).
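For example, instead of answering the prompt you can name the VRF directly on the command line (the server address and filename are hypothetical):

N5K# copy running-config tftp://192.168.10.50/n5k-backup.cfg vrf management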

Understanding the Nexus 5000

The Nexus 5000 is for those of us migrating and needing to protect an investment in 100 Mbps and 1 Gbps ports. It allows Top of Rack consolidation of cabling. That's the distributed switch model mentioned above. It's a way to buy equipment that may be used in other ways going forward, but that supports your current tangle of 1 Gbps connections.

Bear in mind there are some other uses, which may make up more of the N5K use going forward. Right now the Nexus 5000 provides a way to do Fibre Channel over Ethernet (FCoE) or Data Center Bridging (DCB). So you can lose the server HBA and use one (or two, for redundancy) 10 G connections to carry both network and SAN traffic up to the N5K. That requires a special 10 G NIC, a Converged Network Adapter or CNA.

The current approach is for you to then split out the data and SAN traffic to go to Ethernet or SAN switches (or FC-attached storage). In the future, your traffic may be all FCoE until reaching a device where the FC device is attached (or perhaps with FCoE that handles management plus SAN traffic?). That's a pretty straightforward use.

Cisco white paper on Unified Access Layer


Unified Access Layer with Cisco Nexus 5000 Series Switches and Cisco Nexus 2000
Series Fabric Extenders Solution Overview

You can also configure your FCoE-configured N5K to do N-Port Virtualization, or NPV. This is a per-switch choice: you use either Fabric or NPV mode. When the switch is in NPV mode, it does not acquire a domain ID (a limited resource). Instead, it relays SAN traffic to the core switch, in effect extending the core switch. The N5K looks like a host to the fabric. This helps the fabric scale better, and makes the N5K transparent (nearly invisible) as far as the fabric is concerned. There's a theme here: fewer boxes to configure, and that's a Good Thing!
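Enabling NPV mode is a one-liner, but note the warning in the Gotchas section below: it reloads the switch and erases the configuration, so this sketch is not something to paste into a production box casually:

N5K# config t
N5K(config)# feature npv
! The switch asks for confirmation, then reboots and comes back up
! in NPV mode with an empty configuration.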

The complementary NPIV (N-Port Identifier Virtualization) feature supports multiple servers on one interface, e.g. separate WWNs (SAN identifiers) per VM on a VMware virtual server host. This is highly attractive for security (SAN LUN masking and zoning). Note that certain Cisco MDS switches also perform NPV and NPIV, and that NPIV has been standardized.

For those doing blade servers and VMware, the Nexus 1000v virtual switch allows
aggregation onto UCS chassis 10 Gbps links instead of many separate 1 Gbps links.
The VN-Link feature allows internal logical (1000v) or (future) external physical tracking
on a per-VM (virtual machine) basis. I currently understand physical VN-Link as a tag on
the media from a VN-Link capable NIC or driver, tied to logical apparatus to have
configuration track VN-Link virtual interfaces on the N5K. The reason to do this: offload
the 1000v processing to external hardware.

VN-Link reference

Cisco UCS Integrated Infrastructure


Cisco VN-Link: Virtualization-Aware Networking
Overview of VN-Link in Cisco UCS

This is "Network Interface Virtualization"

The Nexus 5000 (N5K) and the Nexus 7000 (N7K) both support Virtual Port Channel. Think 6500 VSS, but the two "brains" (control planes) stay active. Or think PortChannel (LACP) that terminates in two separate switches, with the other end of it none the wiser. There is a fairly tight vPC limit on the N5K right now.

There are also some gotchas and design situations to avoid, e.g. mixing non-vPC and vPC VLANs on the same vPC peer link between switches. That is, if you have VLANs that aren't doing vPC PortChannel uplinks, you'll want a separate link between the distribution switches the uplinks go to. Similarly in some L3 FHRP (HSRP, VRRP, GLBP) routing situations. The issue is that traffic which comes up the "wrong" side and crosses the vPC peer link cannot be forwarded out a vPC member link on the other member of the vPC pair, which might happen in certain not-too-rare failure situations.

Understanding the Nexus 2000

There are three fabric extender ("fex") devices available, typically for Top of Rack
("ToR") use. Use two (and two N5K's) for redundancy in each rack. See also:

Cisco Nexus 2000 Series Fabric Extenders Data Sheet

The Nexus 2148 and 2248 are discussed below, under Gotchas. There is also the 2232PP, which has 32 10 Gbps Fibre Channel over Ethernet (FCoE) ports (SFP+) and 8 10 G Ethernet/FCoE uplinks (SFP+). That's 4:1 oversubscribed, which isn't bad for current server throughput and loading. If you want less oversubscription, you don't have to use all the ports (or you can arrange things your way with port pinning, I assume). If you want 1:1 oversubscription ("wire speed"), you'd probably run fiber right into the N5K, unless you want to use the N2K as a costly 1:1 10 G ToR copper-to-fiber conversion box.

Note those 10 G ports are FCoE ports. Right now, the N5K is the only Cisco switch I'm
aware of doing FCoE. The Nexus 2232 does so as an extension of the N5K.

Note that NPV and NPIV are basically some proxying in the N5K, so the N2K should
just act as server-facing FCoE ports for those functions.

Gotchas and Tips

The Nexus 5000 is by default Layer 2 only. That means any server traffic between two different VLANs will need to be routed by another box, probably your Core and/or Aggregation Nexus 7000. You'll want some darn big pipes from the N5K to the N7K, at least until the N5K can do local routing.

The Nexus 2000 does no local switching. Traffic from one port on a 2K to another port on the same 2K goes via the N5K. There should be enough bandwidth. That's why the Nexus 2000 is referred to as a fabric extender, not a switch.

The Nexus 2148T is a Gigabit-only blade, 48 ports of Gig (not 10/100/1000), with up to 4 x 10 G fabric connections. Use the newer 2248TP if you need 100/1000 capability (the data sheet does NOT list 10 Mbps).

You'll probably want to use PortChannel (LACP) for the fabric connections. Otherwise, you're pinning ports to uplinks, and if an uplink fails, the ports pinned to it stop working, much like a module failure in a 6500. You can now run the PortChannel to two N5Ks running Virtual Port Channel (vPC). See the above link for some pictures.

If you attach to the fabric extender (FEX, N2K), you can issue the show platform software redwood command. The sts, rate, and loss keywords are particularly interesting: the first shows a diagram, and the latter two show rates and oversubscription drops (or so it appears). I like being able to see internal oversubscription drops without relying on external SNMP tools, which usually show rates over relatively long periods of time, like 5 or more minutes, rather than milliseconds.
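For example (assuming the FEX is number 101), you attach to it from the N5K and run the command from the FEX prompt:

N5K# attach fex 101
fex-101# show platform software redwood sts
fex-101# show platform software redwood rate
fex-101# show platform software redwood loss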

Putting an N5K into NPV mode reboots the switch and flushes its configuration. Be careful!

Designing with the Nexus 5000 and/or 2000

I've got a couple of customers where the N5K/N2K have seemed appropriate. I thought
I'd briefly mention a couple of things that I noticed in trying to design using the boxes;
maybe fairly obvious, maybe a gotcha. I'd like to think the first story is a nice illustration
of how the N5K/N2K lets you do something you couldn't do before!

Case Study 1

The first customer situation is a site where various servers are in DMZs of various security levels. Instead of moving the servers to a physically separate data center server zone, as appears to have been originally intended (big Nortel switches from a few years back), they extended the various DMZ VLANs to the various physical server zones using small Cisco switches with optical uplinks. That gear (especially the Nortel switches) is getting rather old, and it's time to replace it.

For that, the N5K/N2K looks perfect. We can put one or a pair of N5Ks in to replace the big Nortel "DMZ overlay core" switches, and put N2Ks out in the server zones (rows or multi-row areas of racks). For redundancy, we can double everything up. Right now one can make that work in a basic way, and it sounds like Cisco will fairly soon have some nice vPC (Virtual Port Channel) features to minimize the amount of Spanning Tree in such a dual N5K/N2K design, using Multi-Chassis EtherChannel (aka vPC). Neat stuff!

The way I'm thinking of this is as a distributed or "horizontally smeared" 6500 switch (or
Internal

switch pair). The N2K Fabric Extender (FEX) devices act like virtual blades. There's no
Spanning Tree Protocol (STP) running up to the N5K (good), and no local switching
(maybe not completely wonderful, but simple and unlikely to cause an STP loop). So the
N5K/N2K design is like a 6500 with the Sup in one zone and the blades spread across
others.

From that perspective, the 40 Gbps of uplinks per N2K FEX is roughly comparable to
current 6500 backplane speeds. So the "smeared 6500" analogy holds up in that
regard.

The sleeper in all this is that the 10 G optics aren't cheap. So doing say 10-12 zones of
40 G of uplink, times optics and possibly special multi-mode fiber (MMF) patch cords,
adds say 12 x ($2000) of cost, or $24,000 total. Certainly not a show-stopper, but
something to factor into your budget. If you're considering doing it with single-mode fiber
(SMF), the cost is a bit higher. On the other hand, that sort of distributed Layer 2 switch
is a large Spanning-Tree domain if you build it with prior technology.

Case Study 2

The second customer situation is a smaller shop, not that many servers, but looking for a good Top of Rack (ToR) solution going forward. The former data center space is getting re-used (it was too blatantly empty?). And blade servers may eventually allow them to fit all the servers into one or two blade server enclosures in one rack. Right now we're looking at something like 12 back-to-back racks of stuff, including switches.

For ToR, the 3560-E, 3750-E, 4900M, and N5K/N2K all come to mind. The alternative
solution that comes to mind is a collapsed core pair of 6500s. The cabling would be
messier, but the dual chassis approach would offer more growth potential, and a nice
big backplane (fabric).

The 3560-E and 3750-E have a 20 G per chassis uplink limitation, not shabby, but not quite up to the 6500 capacity per blade. That's workable and not too limiting.

The issue is, what do you aggregate them into? A smaller 6500 chassis? In that case,
the alternatives are 6500 pair by themselves, or 6500's (maybe smaller) plus some
3560-E's or other small ToR switches, at some extra cost.

Or the N5K/N2K, one might think. The N5K/N2K is Layer 2 only right now, so you need
some way to route between the various server VLANs (gotcha!). Without Layer 3
availability, you still would need to connect the N5K/N2K's to something like 4900M's or
6500's, to get some pretty good Layer 3 switching performance between VLANs. Right
now, that external connection is either a pretty solid bottleneck, or you burn a lot of ports
doing 8 way or (future) 16 way EtherChannel off the N5K/N2K. Bzzzt! That starts feeling
rather klugey.

Some Conclusions

• The N5K/N2K right now seems to fit in better with a Nexus 7000 behind it. And I'd
much prefer local Layer 3 switching to maximize inter-VLAN switching performance.

• The initial set of Nexus line features are probably chosen for larger customers;
standalone Layer 3 N5K/N2K being something more attractive to a smaller site. And
smaller sites tend not to be early technology adopters.

• You can mitigate this to some extent by careful placement of servers in VLANs. On the
other hand, my read on current Data Center design is that the explosive growth in
numbers of servers and the need for flexibility have left "careful placement of servers" in
the historical dust. Nobody's got the time anymore.

Sample Configurations

Check out the following FAQ for illustrations.

»Cisco Forum FAQ »Sample Configuration: Nexus 5000 and Nexus 2000 with FEX

Dual Connecting Nexus 2000s to Nexus 5000s

Loading Images: Order Matters!

After previously dual-connecting one of the FEXes, we upgraded the NX-OS on the N5Ks. As I recall, we upgraded N5K-2 first, then N5K-1. This is non-optimal if N5K-1 is the vPC primary, as was the case.

When we updated N5K-2, as you might expect, N5K-2 downloaded a new image to its connected FEX. When we upgraded N5K-1, it also downloaded the same image to its connected FEX. This is the same FEX module, and each download of the image took the FEX offline for 15 minutes or so.

Cisco documents state that the NX-OS software by design will allow an upgraded dual-homed FEX to interoperate with a vPC secondary switch running the original version of Cisco NX-OS while the primary switch is running the upgraded version. You will have to have some downtime to get the image loaded.

However, the documentation doesn't say anything about what happens when you first upgrade the secondary N5K of a dual-homed FEX. My recommendation is not to do it; you may need a second image download to the FEX.

Adding Uplink to Second N5K

All of the FEXes were supposed to be dual-connected to both N5Ks. Due to timing constraints and fiber availability, some FEX modules were left single-connected for a period of time. In this case, they had only been connected to N5K-2, the vPC secondary switch, and were running the current NX-OS image.

Based on our experiences updating the image, we were not sure if connecting the uplink to N5K-1 would bring the FEX down while N5K-1 reloaded the image. I was not able to verify from the Nexus documentation what would happen, though Cisco documentation recommends connecting the primary first. However, we did find that when we brought up the never-previously-connected link to N5K-1, the FEX stayed online.

Pre-provision the FEX

You can and should pre-provision FEX modules, for example:

config t
slot 101
provision model N2K-C2248T

This allows you to pre-load the VLANs, speed, duplex, description, etc. for the host interfaces before the FEX modules are connected. Note that you need to know what type of FEX you have for this command, since the N2K-C2248T is different than the N2K-C2248TP-E-1GE, and is what you want when you have a model number N2K-C2248TP.
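With the slot provisioned, you can go ahead and pre-configure the host interfaces; a small sketch (the VLAN number and description are made-up examples):

config t
interface ethernet 101/1/1
  switchport access vlan 10
  description server01-nic1
  no shutdown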

Good Handling of Improperly Connected FEX Modules

The NX-OS appears to handle cross-connected FEX modules appropriately. At one

point, someone connected the second uplink for FEX 101 to the N5K interface

configured as port-channel 102 (FEX 102 should have been placed there). However the

NX-OS noticed the mismatch, knew that FEX 101 was mis-cabled, alerted and left the

second N5Ks FEX offline, but did not shutdown the active FEX.

The Nexus 7000 is constantly evolving, and there seem to be more and more design parameters that have to be taken into consideration when designing data center networks with these switches. I'm not going to go into each of the different areas from a technical standpoint, but rather try to point out as many of those so-called "gotchas" as possible that need to be known upfront when purchasing, designing, and deploying Nexus 7000 series switches.

Before we get started, here is a quick summary of current hardware on the market for the Nexus
7000.

1. Supervisor 1
2. Fabric Modules (FAB1, FAB2)
3. M1 Linecards (48 Port 10/100/1000, 48 Port 1G SFP, 32 Port 10G, 8 Port 10G)
4. F1 Linecards (32 Port 1G/10G)
5. F2 Linecards (48 Port 1G/10G)
6. Fabric Extenders (2148, 2224, 2248, 2232)
7. Chassis (7009, 7010, 7018)
Instead of writing about all of these design considerations, I thought I'd break it down into a Q&A format, as that's typically how I end up getting these questions anyway. I've run into all of these questions over the past few weeks (many more than once), so hopefully this will be a good starting point, for myself (as I tend to forget) and for many others out there, to check compatibility issues between the hardware, software, features, and licenses of the Nexus 7000. The goal is to keep the answers short and to the point.

Question:
What are the throughput capabilities and differences of the two fabric modules (FAB1 &
FAB2)?
Answer:
It is important to note each chassis supports up to five (5) fabric modules. Each FAB1 has a
maximum throughput of 46Gbps/slot meaning the total per slot bandwidth available when there
are five (5) FAB1s in a single chassis would be 230Gbps. Each FAB2 has a maximum
throughput of 110Gbps/slot meaning the total per slot bandwidth available when there are five
(5) FAB2s in a single chassis would be 550Gbps. The next question goes into this a bit deeper
and how the MAXIMUM theoretical per slot bandwidth comes down based on which particular
linecards are being used. In other words, the max bandwidth per slot is really dependent on the
fabric connection of the linecard being used.
Question:
What is the maximum bandwidth capacity for each linecard and does it change when using
different Fabric Modules?
Answer:

(The original post included a table of maximum per-slot bandwidth for each linecard with FAB1 versus FAB2; its footnote read "*20 linerate ports through fabric".)

It is recommended to deploy a minimum of N+1 redundancy with Fabric Modules. For example, when deploying an M1 10G module that has a max bandwidth/slot of 80G, only 2 x FAB1s are required, but 3 will allow one to maintain the full 80G should there be a fabric module failure. Nowadays with the FAB2s, Cisco is requiring a minimum of 3 either way, which is definitely good practice.

FAB1 & FAB2 modules are both forward and backward compatible. For example, a FAB1 can be deployed with the F2 48-port module, but would have a maximum throughput of 230G/slot versus the 480G/slot when using FAB2s.

Question:
What is the port to port latency of the various 7K linecards?
Answer:

(The original post included a latency comparison table here.) N5K and N3K latency was also included because many times, if you need this info for financial applications or any other latency-sensitive app, the comparison usually ends up expanding to include these platforms as well.

Note: from my research I’ve seen some conflicting information for latency for the M1 linecards.
I’ve seen general statements such as the M1 family has 9.5 µsec latency, whereas, I have seen
one document that stated 18 µsec as well. The one that had 18 was an older document, so I do
expect some of the M1 numbers could be off. If anyone has these, please feel free to share.

Question:
Which Nexus 7000 linecards can connect to a Fabric Extender (FEX)?
Answer:
Simply put, there are only two linecards that can connect to a FEX on a 7K: the 32-port M1 and the 48-port F2 linecards. If you don't have one of these, get one, or use a 5K :).
Question:
Does the F1 linecard really not support Layer 3? How is that possible?
Answer:
This is an interesting one, but the short answer is no, the F1 linecard does not natively support Layer 3. Okay, so what does that mean? First, it is important to note the 7K architecture is different than that of other platforms such as the Catalyst 4500 or 6500. These other platforms have a centralized (and distributed, using DFCs, on the 6500) forwarding architecture. Remember the 6500 has an MSFC routing engine and a PFC (where policies and the FIB are located after they are built) on the Supervisor, which are then pushed to the DFCs should they exist on the linecards. The notion of a centralized PFC goes away with the Nexus 7000; it is based on a purely distributed architecture: think the 6500 with DFCs but without a centralized PFC.
So to answer that question in the most direct way, and comparing it to what was just described, the F1 module does not have the capacity to locally store a distributed FIB table on the linecard. The F1 was purposely built for advanced Layer 2 services, including a technology called FabricPath.

Now, with all that being said, it is still POSSIBLE to run Layer 3 in a chassis that has F1 alongside M1 linecards. There is a notion of proxy Layer 3 forwarding. This allows SVIs to be created on the system (technically, they will exist on the M1s), assigned an IP address, and then ports on the F1 to be assigned to that VLAN in order to get "proxy L3 forwarding" for hosts that directly connect to the F1. It is NOT possible to configure a routed port on any F1 port. If you're curious how this happens, the F1 linecard builds a port-channel within the "backplane" that connects to up to sixteen (16) forwarding engines on M1 cards. This means the maximum capacity for proxy Layer 3 forwarding is 160G, because each forwarding engine maps back to a forwarding/port ASIC on the M1 linecards. Sometimes, when I describe this, I'll also use the analogy of thinking of the F1 as an IDF switch and the M1s as a Core switch: even though your IDF switch is L2 only, you can still route by getting switched back to the Core.

Note: it is possible to explicitly configure which forwarding engines should be used for F1 linecards.
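A minimal sketch of what proxy L3 looks like in configuration terms (the VLAN, addressing, and the F1 port Ethernet 3/1 are assumed for illustration):

N7K# config t
N7K(config)# feature interface-vlan
N7K(config)# vlan 10
N7K(config)# interface vlan 10
! The SVI's routing work is actually done on the M1 forwarding engines
N7K(config-if)# ip address 10.1.10.1/24
N7K(config-if)# no shutdown
N7K(config)# interface ethernet 3/1
! F1 port: L2 only, placed in the VLAN; a routed port here is not possible
N7K(config-if)# switchport
N7K(config-if)# switchport access vlan 10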

Question:
What are the design options available when connecting a Nexus 2000 to Nexus 7000s?
Answer:
I'm going to keep this short on the preferred method, as it is now supported as of NX-OS 5.2. If you have 2 x 7Ks, 2 x 2Ks, and a server, the recommended design would be to connect your server to both 2Ks. Each 2K would connect to ONE 7K. That is very important; you CANNOT dual-home a 2K and connect it to 2 separate 7Ks. I've heard this may NEVER be supported. It doesn't seem logical, but you don't lose anything single-homing the 2K to the 7K. Should any link or device go down, half of the bandwidth still remains up. Prior to 5.2, LACP was not supported between the 2Ks and the server, and basic active/standby NIC teaming was required.

Question:
Can different linecard types (M1, F1, F2) exist in the same chassis? If so, what functionality
is gained or lost?
Answer:
Major design caveat: F2 linecards require a dedicated VDC if they are in the same chassis as
ANY other M1 or F1 linecard. This one isn’t pretty, but it is what it is for now. This means
you’ll need the Advanced Services License to enable the VDC functionality as well.

For other critical design caveats, please take note of the following: mixing and matching F1 and M1 in the same chassis works fine. The biggest caveat is the L2/L3 support issue as described above. Remember, F1 cards do NOT support L3 routed ports and only support L3 via proxy routing through M1 linecards in the chassis. In large data centers, other details need to be examined, such as MAC table size. The supported MAC table sizes are different on each card, ranging from 16k to 256k MACs, so should the data center be increasing in size by means of virtualization, adding 100s to 1000s of VMs, this should be examined a bit further.

Question:
Does the Nexus 7000 support MPLS? If so, are there any restrictions on software and
hardware?
Answer:
Yes. The Nexus 7000 supports MPLS as of NX-OS 5.2(1) with an M1 linecard. Note: the MPLS
license is also required. F1/F2 modules DO NOT support MPLS.
Question:
What software, hardware, and licenses are required in a Nexus 7000 OTV deployment?
Answer:
OTV was introduced in 5.0(3). Any M1 linecard and the Transport Services Package license is
required. F1/F2 modules DO NOT support OTV.
Note: based on the low level design of OTV, the Advanced Services License may be required to
enable VDCs to support the OTV deployment.

Question:
What software, hardware, and licenses are required in a Nexus 7000 LISP deployment?
Answer:
LISP was introduced in 5.2(1). It requires the 32 Port M1 linecard(s) and the Transport Services
Package license. Other M1 modules and F1/F2 modules DO NOT support LISP.
Question:
What software, hardware, and licenses are required in a Nexus 7000 FCoE deployment?
Answer:
FCoE for the 7K was introduced in 5.2(1). 32-port F1 linecard(s) are required; the 48-port F2 will support FCoE sometime in 2012.
Note: One or more licenses are required for FCoE on the 7K. The FCoE license is required per linecard, which inherently offers the ability to create the storage VDC (that is required), while the SAN license is required for system Inter-VSAN routing and fabric binding. The Advanced Services License that normally enables VDCs is not required.

Question:
What software, hardware, and licenses are required in a Nexus 7000 FabricPath
deployment?
Answer:
FabricPath was introduced in 5.1(1). It requires the use of either the F1 or F2 module along with
the Enhanced Layer 2 Package License.
Note: While both F1 and F2 modules can run FabricPath, they cannot be in the same VDC, so one would probably choose one or the other for an FP deployment.

Question:
Are special racks needed for the Nexus 7000 switches?
Answer:
Four (4) post racks are required for the 7010 and 7018 chassis. There are several racks that were purpose-built for the 7K, but they are not "required." These are documented further in the hardware installation guide for the 7K. Note: if not using the purpose-built racks, be sure to measure the depth required for the N7K. I have run into situations where the depth was too short and the rack needed to be extended out further, delaying the deployment process. Not fun!
The 7009 was built to ease migrations from 6509s, so a 2-post rack works quite well for it, as it has the same exact form factor as the 6509/E.


Cisco Nexus 7k, 5k, 2k FAQ


Nexus FAQs:

Q. What are the differences between M and F series line cards? What are the differences between the F1, F2, F2e and F3 cards?
A. The initial series of line cards launched by Cisco for Nexus 7k series switches were M1 and F1. M1 series line cards are basically used for all major Layer 3 operations like MPLS, OTV, routing, etc.; however, the F1 series line cards are basically Layer 2 cards and are used for FEX, FabricPath, FCoE, etc. If there is only an F1 card in your chassis, then you cannot achieve Layer 3 routing. You need to have an M1 card installed in the chassis so that the F1 card can send the traffic to the M1 card for proxy routing. The fabric capacity of an M1 line card is 80 Gbps. Since F1 line cards don't have L3 functionality, you cannot use their interfaces in L3 mode. They provide a fabric capacity of 230 Gbps.
Later Cisco released the M2 and F2 series of line cards. An F2 series line card can also do basic Layer 3 functions, meaning you can use an interface in L3 mode; however, it cannot be used for advanced L3 features like OTV or MPLS. The M2 line card's fabric capacity is 240 Gbps, while F2 series line cards have a fabric capacity of 480 Gbps.
The problem with the F2 card is that it cannot be installed in the same VDC with any other card; the F2 card has to be in its own VDC.
So, to resolve that, Cisco introduced the F2e line cards, which can be used with other M series line cards in the same VDC. The F2e supports Layer 3, but only when it is alone in a VDC; if it is being used with another card (which, unlike the F2, it supports), it can be used in L2 mode only.
So, finally Cisco launched the F3 cards, which are full L3 cards. They support all advanced Layer 3 features like OTV, MPLS, etc., and can be mixed with other cards in the same VDC in L2 or L3 mode.

Q. What is a Fabric Module? Is it the same as a line card?
A. Fabric modules are the hardware modules which provide backplane connectivity between the I/O modules and the SUP. In traditional switches like the 6500, the crossbar switch fabric was integrated into the chassis and there was no redundancy if the crossbar switch fabric was faulty; however, in the Nexus 7k we have fabric redundancy using the switch fabric modules.
They are inserted into the back side of the chassis and are hot-swappable, same as the line card (I/O) modules; however, they don't have any ports on them to connect any external device, so they are not alternatives to line cards or I/O modules. You can see them in the "show module" command output, where they are shown as "X-bar". The Nexus 7010 and 7018 can have up to 5 fabric modules.

Q. What is the fabric module capacity?
A. There are two series of fabric modules, FAB1 and FAB2.
Each FAB1 has a maximum throughput of 46 Gbps per slot, meaning the total per-slot bandwidth available when the chassis is running at full capacity (i.e., with five FAB1s in a single chassis) would be 230 Gbps. Each FAB2 has a maximum throughput of 110 Gbps/slot, meaning the total per-slot bandwidth available when there are five FAB2s in a single chassis would be 550 Gbps. These are the FAB module capacities; however, the actual throughput of a line card really depends on the type of line card being used and the fabric connection of that line card.

Q. Does a Nexus 2k have an operating system (OS)?
A. No, the Nexus 2k or FEX is a dumb device. It doesn't have any PRE-INSTALLED operating system on it. It can only be used with its parent Nexus 5k or 7k switch. When connected to its parent switch, it downloads the operating system from the parent switch.

Q. What is the difference between shared mode and dedicated mode?
A. Explained in: Shared vs Dedicated mode of ports in Nexus 7k

Q. Can we connect a Nexus 2k to a Nexus 7k?
A. Yes; however, not on every line card. There are only a few line cards which support FEX.

Q. Can we connect a Nexus 2k or FEX to two parent switches, or can it be controlled or connected by only one switch?
A. Yes, we can connect a FEX to two parent switches; however, only to 5ks. We CANNOT connect a Nexus 2k to two Nexus 7Ks. This is the dual-homed FEX design, and it is supported (on the 5k).

Q. Can we mix different cards like M1/M2/F1/F2 in the same VDC?
A. You can mix all cards in the same VDC EXCEPT the F2 card. The F2 card has to be in its own VDC. You can't mix F2 cards with M1/M2 and F1 in the same VDC. As per Cisco, it's a hardware limitation and it creates forwarding issues.

Q. Can we do a redistribute in OSPF without a route-map?
A. No, a route-map is required in NX-OS while redistributing.
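For example, even a permit-everything route-map has to be attached (the names are illustrative, and feature ospf is assumed to be enabled already):

N7K(config)# route-map ALLOW-ALL permit 10
N7K(config-route-map)# exit
N7K(config)# router ospf 1
N7K(config-router)# redistribute static route-map ALLOW-ALL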

Q. What are the differences between vPC and VSS?
A. Explained in: VSS vs VPC (difference between VSS and vPC)

Q. What is VDC
A. Explained in detail: VDC Overview

Q. Can a device in one VDC communicate with a device in another VDC?
A. One VDC cannot communicate with another VDC over the backplane. If reachability is needed, then we need to add a physical connection (via a direct cable or through another switch) between ports in the two separate VDCs.

Q. Can 3 or more Nexus 7Ks or 5Ks become vPC peers?
A. No, you can have only two devices as vPC peers. Each device can serve as a vPC peer to only one other vPC peer.

Q. What are the differences between the vPC peer link and the vPC keepalive link?
A. The vPC peer link is a Layer 2 link that is used to check consistency parameters, synchronize state and configuration, and carry traffic flows (in some cases only). The vPC keepalive link provides L3 reachability which is used to check the peer status and for role negotiation. Role negotiation happens at the initial stage only. The vPC keepalive link must be set up first in order to bring the vPC up; the vPC peer link will not come up unless the peer-keepalive link is already up and running.

Q. What are the vPC keepalive link timers?
A. The default interval for the vPC peer-keepalive message is 1 second. This is configurable between 400 milliseconds and 10 seconds. You can configure a hold-timeout value with a range of 3 to 10 seconds; the default hold-timeout value is 3 seconds.
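If you need to tune them, the timers are options on the peer-keepalive command itself; a sketch (the destination address follows the earlier vPC example, and the values are illustrative):

N7K(config)# vpc domain 1
N7K(config-vpc-domain)# peer-keepalive destination 10.10.10.102 interval 500 timeout 5
! interval is in milliseconds, timeout in seconds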

Q. How many VDCs can the Nexus 7000 support?
A. It depends on which SUP you are using:
4 VDCs with SUP1;
4 + 1 (1 VDC for management) with SUP2;
8 + 1 (management) VDCs with SUP2E.

Q. On a Nexus 7k, when trying to perform a 'no shut' on Ethernet 1/3, the error message "ERROR: Ethernet1/3: Config not allowed, as first port in the port-grp is dedicated" is received. Why?
A. This is explained in the next section.

Shared vs Dedicated mode of ports in Nexus 7k


Hi, have you ever seen this error message while trying to configure a port?

"ERROR: Ethernet1/4: Config not allowed, as first port in the port−grp is dedicated"

To understand this, we need to understand what a port-group is. (The original post showed an image of the N7K-M132XP-12 line card here.) This line card has 32 ports, and all of them are 10 Gig ports. So what does that mean? Does it mean that each one of them is a 10 Gig port, that we can have all 32 ports connected at the same time, and that we should be able to get 320 Gbps of throughput? Not exactly!
Yes, they are 10 Gig ports; HOWEVER, that 10 Gig is shared among the 4 ports in a group. A group is basically all the ports on the same hardware ASIC.

Now take a closer look: per the diagram in the original post (not reproduced here), even or odd consecutive ports are on one side, and each group of four ports is on the same hardware ASIC. This is a port-group, and the first port of each group was marked YELLOW in that diagram.

So, given that the N7K-M132XP-12 has 32 10G ports, each port-group (a group of 4 ports on this line card) shares 10G of bandwidth among its members. YES, that is correct: the ports don't each get 10G of dedicated bandwidth. So the total capacity of the card is 80G, not 320G (as we were expecting), as there can be 8 port-groups of 4 ports each. This is designed on the premise that "chances are low that all devices are sending data at the same time". So ports 1, 3, 5, 7 will be in the same port-group, and similarly 2, 4, 6, 8, and so on.

So, the 4 ports in a group will share the total available bandwidth of 10G.

What if we have a requirement for some critical application that needs a dedicated bandwidth of 10G? In that case, the first port of a port-group can be put into DEDICATED mode; that port is always the first one of the group, i.e., the one marked in yellow. So ports 1, 2, 9, 10, 17, 18, 25, 26 can be put into dedicated mode, and if you have put a port of a port-group into dedicated mode, the other 3 ports in that group will be disabled; you cannot configure them. If you have put Eth1/2 into dedicated mode and you try to configure Eth1/4, then you will get: "ERROR: Ethernet1/4: Config not allowed, as first port in the port−grp is dedicated"

Shared mode is the default mode. The commands to configure a port into dedicated mode are below; we first need to shut down the port:
N7K# config t
N7K(config)# interface Eth1/2
N7K(config-if)# shutdown
N7K(config-if)# rate-mode dedicated
N7K(config-if)# no shutdown

VSS vs VPC (difference between VSS and vPC)


I know many of you have been looking for an answer to this question "what are the differences between
VSS and vPC? "..here are the differences between VPC and VSS in a very easy way, You just need to
read it once..

Both are used basically to support multi-chassis ether-channel that means we can create a port-channel
whose one end is device A,however, another end is physically connected to 2 different physical switches
which logically appears to be one switch.

There are certain differences as listed below:

-vPC is Nexus switch specific feature,however,VSS is created using 6500 series switches

-Once switches are configured in VSS, they get merged logicaly and become one logical switch from
control plane point of view that means single control plane is controlling both the switches in active
standby manner ,however, when we put nexus switches into vPC, their control plane are still separate.
Both devices are controlled individually by their respective SUP and they are loosely coupled with each
other.
Internal

-In VSS, only one logical switch has to be managed from a management and configuration point of view. That means, when the switches are put into VSS, there is only one IP which is used to access the switch. They are not managed as separate switches, and all configuration is done on the active switch. They are managed similarly to a stack of 3750 switches. However, in vPC the switches are managed separately. That means both switches have separate IPs by which they can be accessed, monitored and managed. They appear as a single logical switch only from the port-channel point of view of downstream devices.
-As I said, VSS is single management and single configuration, so we cannot use the pair for HSRP active and standby purposes because they are no longer 2 separate boxes. In fact, HSRP is not needed, right? One single IP can be given to an L3 interface and used as the gateway for the devices in that particular VLAN, and we still have redundancy since the same IP is served by a group of 2 switches: if one switch fails, the other can take over. However, in vPC, as I mentioned above, the devices are separately configured and managed, so we need to configure gateway redundancy in the traditional manner.

For example: take 2 switches, Switch A and Switch B. When we put them in VSS, they will be accessed by a single logical name, say X, and if all ports are Gig ports then the interfaces will be seen as GigA\0\1, GigA\0\2, ..., GigB\0\1, GigB\0\2 and so on.
However, if these are configured in vPC, then they will NOT be accessed with a single logical name. They will be accessed/managed separately; switch A will have its own ports only, and likewise switch B.

-Similarly, in VSS the same instances of STP, FHRP, IGP, BGP, etc. are used; however, in vPC there are separate control plane instances of STP, FHRP, IGP, BGP, just as in two different switches.

-In VSS, the switches are always primary and secondary in all aspects: one switch works as active and the other as standby. However, in vPC they are elected as primary and secondary from the virtual port-channel point of view only, and for everything else they work individually. Their vPC primary/secondary roles are also not a true active/standby scenario; they matter only in particular failure situations. For example, if the peer link goes down in vPC, then only the secondary switch acts and brings down the vPC for all its member ports.

-VSS can support L3 port-channels across multiple chassis; however, vPC is used for L2 port-channels only.

-VSS supports both PAgP and LACP; however, vPC only supports LACP.

-In VSS, control messages and data frames flow between active and standby via the VSL; however, in vPC, control messages are carried by CFS over the peer link, and a peer-keepalive link is used to check heartbeats and detect a dual-active condition.

I hope this was helpful. I will keep adding more as I experience more. Thank you!

Fex Identity-Mismatch (identity-mismatch error on nexus 5k)


While checking FEX links, we got the "Identity-Mismatch" error as shown below in the "sh int fex-fabric" output:

Nexus-5k-1# sh int fex-fabric

Fabric Fabric Fex FEX

Fex Port Port State Uplink Model Serial

---------------------------------------------------------------

103 Eth1/17 Active 1 N2K-C2248TP-1GE JAX1122AAA

103 Eth1/18 Active 2 N2K-C2248TP-1GE JAX1122AAA

103 Eth1/19 Active 3 N2K-C2248TP-1GE JAX1122AAA

103 Eth1/20 Active 4 N2K-C2248TP-1GE JAX1122AAA

105 Eth1/23 Active 1 N2K-C2248TP-1GE MLX1122BBB

105 Eth1/24 Active 2 N2K-C2248TP-1GE MLX1122BBB

105 Eth1/25 Identity-Mismatch 4 N2K-C2248TP-1GE PQR3344DDD <<<Notice this

105 Eth1/26 Active 4 N2K-C2248TP-1GE MLX1122BBB

Nexus-5k-2# sh int fex-fabric

Fabric Fabric Fex FEX

Fex Port Port State Uplink Model Serial

---------------------------------------------------------------

102 Eth1/17 Active 1 N2K-C2248TP-1GE LMN2244CCC

102 Eth1/18 Active 2 N2K-C2248TP-1GE LMN2244CCC



102 Eth1/19 Active 3 N2K-C2248TP-1GE LMN2244CCC

102 Eth1/20 Active 4 N2K-C2248TP-1GE LMN2244CCC

104 Eth1/23 Active 1 N2K-C2248TP-1GE PQR3344DDD

104 Eth1/24 Active 2 N2K-C2248TP-1GE PQR3344DDD

104 Eth1/25 Active 3 N2K-C2248TP-1GE PQR3344DDD

104 Eth1/26 Identity-Mismatch 3 N2K-C2248TP-1GE MLX1122BBB <<<Notice this

Basically, this error is related to incorrect cabling.

As we know, a Nexus 2k switch or FEX is connected to its parent Nexus 5k over FEX links. One FEX (2k) can be dual-homed to two Nexus 5k switches, and when a Nexus 2k is connected to a Nexus 5k, a unique FEX associate number is assigned to that particular 2k to identify it uniquely.

So, I had four Nexus 2k switches whose serial numbers are JAX1122AAA, MLX1122BBB, PQR3344DDD and LMN2244CCC. JAX1122AAA and MLX1122BBB are FEX switches for Nexus-5k-1, and PQR3344DDD and LMN2244CCC are part of Nexus-5k-2. JAX1122AAA has been given FEX associate number 103, MLX1122BBB has been given 105, LMN2244CCC is assigned 102, and PQR3344DDD is assigned 104. Each FEX is connected to its parent switch via 4 FEX links.

Ideally, all 4 FEX links which are under the same FEX ASSOCIATE NUMBER should be going to the same 2k; however, one of our onsite engineers incorrectly cabled one of the FEX links from 105 on Nexus-5k-1 to another 2k which was part of FEX number 104 on Nexus-5k-2, and we started getting the identity mismatch. As you can see in the above output, under FEX 105 on Nexus-5k-1, Eth1/25 is showing the PQR3344DDD serial number while all the other interfaces show MLX1122BBB, and vice versa on Nexus-5k-2 for Eth1/26.

In order to verify the cabling and make sure the right FEX or 2k is connected to the correct parent 5k switch with respect to its FEX associate number, we can use the "show interface fex-fabric" command and verify via the serial numbers that all the switches are correct.

Once the cables were swapped, we started getting the right serial number for Eth1/25.

Nexus-5k-1# sh int fex-fabric

Fabric Fabric Fex FEX

Fex Port Port State Uplink Model Serial



---------------------------------------------------------------

103 Eth1/17 Active 1 N2K-C2248TP-1GE JAX1122AAA

103 Eth1/18 Active 2 N2K-C2248TP-1GE JAX1122AAA

103 Eth1/19 Active 3 N2K-C2248TP-1GE JAX1122AAA

103 Eth1/20 Active 4 N2K-C2248TP-1GE JAX1122AAA

105 Eth1/23 Active 1 N2K-C2248TP-1GE MLX1122BBB

105 Eth1/24 Active 2 N2K-C2248TP-1GE MLX1122BBB

105 Eth1/25 Active 3 N2K-C2248TP-1GE MLX1122BBB ---->>>>Correct now

105 Eth1/26 Active 4 N2K-C2248TP-1GE MLX1122BBB

Nexus 5548P vs 5548UP vs 5596UP Switch


I am often asked: what is the difference between the Nexus 5548P and the 5548UP switch? In this post I am going to explain the differences between these two and will also include the 5596UP in the discussion.

First of all, all these 3 models are Nexus 5k switches, specifically 5500 series models. The "U" stands for "Unified" ports. So what does "unified port" mean? Unified means a port is capable of running as either "Ethernet" or "FC" (Fibre Channel).

For those who are not aware of SAN protocols, I would like to point out that the term "Fibre" here does not mean the fiber media (i.e., copper vs. fiber) which people refer to in terms of cabling (please note the difference in spelling: Fibre vs. Fiber).

Fibre Channel, or FC, is a protocol stack in SAN, similar to what TCP/IP is to networks. SAN switches run on FC protocol standards, not Ethernet or TCP/IP. (Just a high-level overview.)

So, coming back to the 5500 series models: all ports of the 5548UP and 5596UP models of the Nexus 5k can be used in either Ethernet or FC mode; however, ports on the 5548P do not work in FC mode. The important thing to note is that this difference is valid for the built-in fixed ports only. Both the 5548P and 5548UP switches come with 32 built-in (fixed) ports, plus one expansion module capable of 16 ports.

So, basically, the 5548P supports unified ports (Ethernet or native FC) on the expansion module only; however, in the 5548UP, all ports are unified ports.

The 5596UP comes with 48 built-in ports, plus 3 expansion slots we can use for additional ports depending on our requirements.
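On the UP models, the port personality is set per slot and port range; a sketch of converting the last two fixed ports of a 5548UP to FC (the port range is an assumed example, and a reload is required for the change to take effect):

N5K# config t
N5K(config)# slot 1
N5K(config-slot)# port 31-32 type fc
N5K(config-slot)# exit
N5K(config)# exit
! Save and reload for the new port types to become active
N5K# copy running-config startup-config
N5K# reload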

That was the main difference; other differences are:

-The 5548P and 5548UP switches are 1 RU; however, the 5596 is a 2 RU switch.
-The switching capacity of the 5548 series is 960 Gbps; however, the 5596 is 1.92 Tbps.
-The 5548P only supports front-to-back airflow; however, the 5548UP and 5596 support both front-to-back and back-to-front.
-A separate Layer 3 daughter card can also be ordered/used to get 160 Gbps of Layer 3 routing capability in the 5548P and 5548UP switches; however, the 5596UP can support an L3 routing engine through an expansion module.

Virtual Device Context (vdc) Overview and Configuration


Below is a very basic explanation of Cisco VDCs, and I hope you will be able to understand it after
reading it just once.

Cisco's Virtual Device Context, or VDC, is basically a concept of dividing a single Nexus 7000 hardware box
into multiple logical boxes in such a way that they look like different physical devices to a remote
user/operator, and each of the provisioned logical devices is configured and managed as if it were a
separate physical device.
For example, say you have deployed a Nexus 7k in your data center. Now, a few other companies who
don't have enough money to spend on setting up their own Nexus 7000 come to you to host a data center
for them. You can simply virtualize your Nexus 7000 into multiple virtual switches and assign one logical
partition (that is called a VDC) to each company. When they log in to their logical switch (which looks like
a separate physical switch to the user), they can do whatever they want; the other logical partitions, i.e.
the other VDCs, will remain unaffected. You can create VLANs with the same name/number in all VDCs
and they will not interfere with each other. A particular VDC operator will not even come to know that the
same switch is being used virtually by multiple users. Only the admin can create/delete VDCs, and only
from the admin VDC can we see the other VDCs.

Similarly, VDCs can be used to separate test and production traffic. In my previous project, we created
one VDC as a test environment, in order to try out new implementations/protocols, and another VDC for
production traffic. Only if a test was successful in the test environment would we put the change into
production.

How many VDCs can we create? It depends on which supervisor engine you are using.
- If you are using SUP1, you can create up to 4 VDCs. All of them can be used to carry data traffic, and
you create/delete VDCs from the default VDC, which can also be used for data traffic.
- If you are using SUP2, you can create 1 admin + 4 data VDCs. That means you cannot use the admin
VDC for data traffic; it is used only for admin purposes, i.e. managing the other VDCs.
- If you are using SUP2E, you can create 1 + 8 VDCs: 1 admin plus 8 production VDCs.

Each VDC contains its own unique and independent set of VLANs and VRFs. Each VDC can have
physical ports assigned to it, thus allowing the hardware data plane to be virtualized as well. Within
each VDC, a separate management domain can manage the VDC itself, thus allowing the management
plane itself to also be virtualized.

Physical interfaces cannot be shared by multiple VDCs. This one-to-one assignment of physical interfaces
to VDCs is the basis of the complete isolation among the configured contexts. However, there are two
exceptions:
• The out-of-band management interface (mgmt0) can be used to manage all VDCs. Each VDC has its
own representation for mgmt0 with a unique IP address that can be used to send syslog, SNMP and
other management information.
• When a storage VDC is configured, a physical interface can belong to one VDC for Ethernet traffic and
to the storage VDC for FCoE traffic. Traffic entering the shared port is sent to the appropriate VDC
according to the frame's EtherType. Specifically, the storage VDC will get the traffic with EtherType
0x8914 for FCoE Initialization Protocol (FIP) and 0x8906 for FCoE.
Physical interfaces can be assigned to a VDC with a high degree of freedom. However, there are
differences among I/O modules because of the way the VDC feature is enforced at the hardware
level. The easy way to learn the specific capabilities of the installed hardware is by entering the show
interface x/y capabilities command to see the port group associated with a particular interface.
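For example, a quick way to spot the port group on a given interface (the interface and the "Port Group Members" line shown here are illustrative; the exact fields vary by module and NX-OS release):

N7k# show interface ethernet 2/1 capabilities
  ...
  Port Group Members:    1,3,5,7
  ...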

Switch Resources that Can Be Allocated to a VDC:

Physical Interfaces, PortChannels, Bridge Domains and VLANs, HSRP and GLBP Group IDs, and SPAN

Switch Resources that Cannot Be Allocated to a VDC:

CPU*, Memory*, TCAM Resources such as the FIB, QoS, and Security ACLs

* Future releases may allow allocation of CPU or memory to a VDC.

VDC configuration is so easy.

Step 1 Log in to the default VDC with a username that has the network-admin role.

Step 2 Enter configuration mode and create the VDC using the default settings.

N7k# configure terminal
N7k(config)# vdc WDECAIB
Note: Creating VDC, one moment please ...
N7k(config-vdc)#

Step 3 (Optional) Allocate interfaces to the VDC.

N7k(config-vdc)# allocate interface ethernet 1/1-8

Similarly, more interfaces can be assigned.

Initially, all physical interfaces belong to the default VDC (VDC 1). When you create a new VDC, the
Cisco NX-OS software creates the virtualized services for the VDC without allocating any physical
interfaces to it. After you create a new VDC, you can allocate a set of physical interfaces from the default
VDC to the new VDC.

The interface allocation is the most important part of VDC configuration. You cannot assign ports of the
same port group to different VDCs. If you are unable to assign an interface to a particular VDC, or if some
ports are being assigned automatically, it could be a port-grouping issue. A port group is basically the set
of ports that sit on the same hardware ASIC. So, if 4 ports are on the same ASIC, they all must be in the
same VDC, as they are shared and operated by the same ASIC. How many port groups are there in my
card, and is there a fixed formula? Basically, it depends on which type of I/O module card we are using. For example:

•N7K-M202CF-22L (1 interface x 2 port groups = 2 interfaces, 100G module)—There are no restrictions
on the interface allocation between VDCs.

•N7K-M206FQ-23L (1 interface x 6 port groups = 6 interfaces, 40G module)—There are no restrictions
on the interface allocation between VDCs.

•N7K-M224XP-23L (1 interface x 24 port groups = 24 interfaces, 10G module)—There are no restrictions
on the interface allocation between VDCs.

•N7K-M108X2-12L (1 interface x 8 port groups = 8 interfaces)—There are no restrictions on the interface
allocation between VDCs.

•N7K-M148GS-11L, N7K-M148GT-11, and N7K-M148GS-11 (12 interfaces x 4 port groups = 48
interfaces)—There are no restrictions on the interface allocation between VDCs, but we recommend that
interfaces that belong to the same port group be in a single VDC.

•N7K-M132XP-12 (4 interfaces x 8 port groups = 32 interfaces)—Interfaces belonging to the same port
group must belong to the same VDC.

•N7K-M148GT-11L (same as the non-L M148) (1 interface x 48 port groups = 48 interfaces)—There are no
restrictions on the interface allocation between VDCs.

•N7K-M132XP-12L (same as the non-L M132) (4 interfaces x 8 port groups = 32 interfaces)—All M132 cards
require allocation in groups of 4 ports, and you can configure 8 port groups.

Switching between VDC's

If you have logged into the default VDC, you can use the "show vdc" command to see which other VDCs
have been created.

If you want to switch to another VDC from the default VDC, you can use the "switchto vdc <vdc name>"
command as shown below. If you have logged into the user-created VDC WDECAIB from the default VDC
using the switchto command, you can use the "switchback" command to come back to the default VDC;
however, if you have directly SSH'd/Telnetted into the user-created VDC WDECAIB, you cannot do a
"switchback" to get into the default VDC.
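A quick sketch of that workflow, reusing the WDECAIB VDC created earlier (prompts are illustrative):

N7k# show vdc
N7k# switchto vdc WDECAIB
N7k-WDECAIB# switchback
N7k#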

I hope it was helpful. You can read through my blog to know more about vdc's like vdc users etc.

Introduction to cisco nexus 7k, 5k and 2k switches



Nexus 7000
The Cisco Nexus Series switches are modular network switches designed for the data
center. The Nexus 7000 chassis family includes 4-, 9-, 10- and 18-slot chassis; we have
the Nexus 7010 deployed in data centers at the core layer.

The first chassis in the Nexus 7000 family is the Nexus 7010 switch, a 10-slot chassis
with two supervisor engine slots and eight I/O module slots at the front, as well as five
crossbar switch fabric modules at the rear.

All switches in the Nexus range run the modular NX-OS firmware/operating system. The
Cisco NX-OS software is a data center-class operating system built with modularity,
resiliency, and serviceability at its foundation. Based on the industry-proven Cisco MDS
9000 SAN-OS software, Cisco NX-OS helps ensure continuous availability and sets the
standard for mission-critical data center environments. The highly modular design of Cisco
NX-OS makes zero-impact operations a reality and enables exceptional operational
flexibility.

Nexus 7010

-10 slots: 1-4 and 7-10 are line card slots, 5-6 are supervisor slots

-21 RU height

-Supports up to 384 10 Gbit/s and/or 1 Gbit/s ports, all non-blocking

-9.9 Tbit/s system bandwidth

-480 Gbit/s and 720 Mpps per slot

-Air flow is front to back, bottom to top

-Up to 5 Crossbar Fabric Modules

-Up to 3 power supplies

Supervisor Engine

Performs control plane and management functions

Dual-core 1.66 GHz Intel Xeon processor with 4 GB DRAM

2 MB NVRAM, 32 GB internal bootdisk, compact flash slots

Out-of-band 10/100/1000 management interface

Always-on Connectivity Management Processor (CMP) for lights-out management

Console and auxiliary serial ports.



Management Interface

10/100/1000 interface used exclusively for management

It is part of the dedicated "management" VRF and cannot be moved to any other VRF, including the default VRF.

You cannot assign other (Ethernet) ports to this VRF.
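A minimal sketch of the usual mgmt0 setup (the IP addresses are illustrative; mgmt0 is a member of the management VRF by default):

N7k# configure terminal
N7k(config)# interface mgmt 0
N7k(config-if)# vrf member management
N7k(config-if)# ip address 192.0.2.10/24
N7k(config-if)# end
N7k# ping 192.0.2.1 vrf management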

Crossbar Switch Fabric Modules


A single Cisco Nexus 7000 chassis can be configured with one or more fabric modules, up to
a maximum of five for capacity as well as redundancy. Each I/O module installed in the
system will automatically connect to and use all functional installed switch fabric modules. A
failure of a switch fabric module will trigger an automatic reallocation and balancing of
traffic across the remaining active switch fabric modules. Replacement of the failed fabric
module reverses this process. Once the replacement fabric module is inserted and online,
traffic is again redistributed across all installed fabric modules, thereby restoring the
redundancy level.

The Cisco Nexus 7000 Fabric Modules for the Cisco Nexus 7000 Chassis are separate fabric
modules that provide parallel fabric channels to each I/O and supervisor module slot. The
fabric module provides the central switching element for fully distributed forwarding on the
I/O modules.

Switch fabric scalability is made possible through the support of one to five
concurrently active fabric modules for increased performance as your needs grow. All fabric
modules are connected to all module slots. The addition of each fabric module increases the
bandwidth to all module slots, up to the system limit of five modules. The architecture
supports lossless fabric failover, with the remaining fabric modules load balancing the
bandwidth to all the I/O module slots, helping ensure graceful degradation.

Two fabric generations are available – Fabric 1 and Fabric 2.

All I/O modules are compatible with both Fabric 1 and Fabric 2.
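Two commands worth knowing here (shown without output, as a sketch; verify availability on your NX-OS release): show module lists the installed supervisor, I/O and fabric modules with their status, and show hardware fabric-utilization shows how evenly traffic is spread across the fabric modules.

N7k# show module
N7k# show hardware fabric-utilization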

Cisco Nexus 5000 Series Switches


The Cisco Nexus 5000 Series switches include a family of line-rate, low-latency, lossless 10-
Gigabit Ethernet, Cisco Data Center Ethernet, Fibre Channel over Ethernet (FCoE), and
native Fibre Channel switches for data center applications. The Cisco Nexus 5000 Series
includes the Cisco Nexus 5500 Platform and the Cisco Nexus 5000 Platform.

Mainly the Nexus 5k is used for Layer 2 switching; however, it can support a Layer 3 add-in card.

-Currently there are 2 generations of 5000 series switches:

5000 series – 5010 & 5020

5500 series – 5548 & 5596

Cisco Nexus 5548P Switch

The Cisco Nexus 5548P Switch is the first of the Cisco Nexus 5500 platform switches. It is a
one-rack-unit (1RU) 10 Gigabit Ethernet and FCoE switch offering up to 960-Gbps
throughput and up to 48 ports. The switch has 32 1/10-Gbps fixed SFP+ Ethernet and FCoE
ports and one expansion slot.

The Cisco Nexus 5548UP is a 1RU 10 Gigabit Ethernet, Fibre Channel, and FCoE switch
offering up to 960 Gbps of throughput and up to 48 ports. The switch has 32 unified ports
and one expansion slot.

5500UP models support unified ports. Ports can run as Ethernet or native Fibre Channel, and
changing the role of a port requires a reboot.
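A hedged sketch of that port-role change on a 5548UP (slot/port numbers are illustrative; on the 5500UP, FC ports must be allocated as a contiguous block at the high end of the slot, and the change only takes effect after a reload):

n5k# configure terminal
n5k(config)# slot 1
n5k(config-slot)# port 31-32 type fc
n5k(config-slot)# end
n5k# copy running-config startup-config
n5k# reload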

Cisco Nexus 2000 (Fabric Extender or FEX)

Nexus 2000 Series Fabric Extenders behave logically like remote line cards for a parent
Cisco Nexus 5000 or 7000 Series Switch. They simplify data center access operations and
architecture as well as management from the parent switches. They deliver a broad range of
connectivity options, including 40 Gigabit Ethernet, 10 Gigabit Ethernet, 1 Gigabit Ethernet,
100 Megabit Ethernet, and Fibre Channel over Ethernet (FCoE).

The Cisco Nexus 2000 Series Fabric Extenders work in conjunction with a Cisco Nexus
parent switch to deliver cost-effective and highly scalable Gigabit Ethernet and 10 Gigabit
Ethernet environments while facilitating migration to 10 Gigabit Ethernet, virtual machine–
aware, and unified fabric environments.

The Cisco Nexus 2000 Series has extended its portfolio to provide more server connectivity
choices and to support Cisco Nexus switches upstream. With more flexibility and choice of
infrastructure, we gain the following benefits:

Architectural flexibility:

− Provides a comprehensive range of connectivity options—100 Megabit Ethernet,
Gigabit Ethernet, and 10 Gigabit Ethernet server connectivity and unified fabric
environments—and supports copper and fiber Gigabit Ethernet and 10 Gigabit Ethernet
connectivity options with 1GBASE-T, SFP and SFP+, and CX1 over copper and fiber cables

− Supports various server form factors: rack and blade servers

− Offers space optimized for both ToR and EoR topologies

− Provides port-density options: 24, 32, and 48 ports

− Enables quick expansion of network capacity

Highly scalable server access:

− Provides highest density per rack unit

− Allows transparent addition of network capacity as needed, reducing initial capital
expenditures (CapEx)

− Enables quick expansion of network capacity by rolling in a prewired rack of servers with
a ToR fabric extender and transparent connectivity to an upstream Cisco Nexus parent
switch

Simplified operations

− With Cisco Nexus 5000 or 7000 Series, provides a single point of management and policy
enforcement

The Cisco Nexus 2000 Series Fabric Extender forwards all traffic to its parent Cisco Nexus
5000 Series device over 10 Gigabit Ethernet fabric uplinks, allowing all traffic to be
inspected by policies established on the Cisco Nexus 5000 Series device. No software is
included with the Fabric Extender; software is automatically downloaded and upgraded from
its parent switch. The Nexus 2248T allows 100/1000 connectivity and can be dual-attached
to the Nexus 5000. Dual-attaching the Nexus 2248Ts to the 5000 allows for the most
resilient connections for single-attached servers.

The Cisco Nexus 2000 Series provides two types of ports: ports for end-host attachment
(host interfaces) and uplink ports (fabric interfaces). Fabric interfaces, which connect to the
upstream parent Cisco Nexus switch, are differentiated with a yellow color.

-Each fabric extender module should be assigned a unique number (between 100 and 199). This
unique number enables the same fabric extender to be deployed either in single-attached mode to
one Cisco Nexus 5000 Series switch only, or in fabric extender vPC mode (that is, dual-
connected to two different Cisco Nexus 5000 Series switches).

-Nexus 2000 Fabric Extenders are not independent manageable entities; the Nexus 5000
manages the fabric extender through in-band connectivity.

The Nexus 2000 Series can be attached to the Nexus 5000 Series in two different
configurations (the configuration example below uses the Port-Channel mode; a static-pinning sketch follows it):

-Static pinning: The front host ports on the module are divided across the fabric ports
(that is, the uplinks connecting to the Nexus 5000).

-Port-Channel: The fabric ports form a Port-Channel to the Cisco Nexus 5000.

FEX Configuration Example

N5K(config)#feature fex

N5K(config)#interface Ethernet 1/13-14

N5K(config-if-range)#channel-group 100

N5K(config-if-range)#no shutdown

N5K(config-if-range)#interface port-channel 100

N5K(config-if)#switchport mode fex-fabric

N5K(config-if)# fex associate 100

N5K(config-if)#fex 100

N5K(config-fex)#description FEX 100 Eth1/13-14
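The example above uses the Port-Channel mode. For comparison, a minimal static-pinning sketch (the FEX number and interfaces are illustrative; pinning max-links spreads the host ports across the individual fabric links instead of bundling them):

N5K(config)#feature fex
N5K(config)#fex 101
N5K(config-fex)#pinning max-links 4
N5K(config-fex)#exit
N5K(config)#interface ethernet 1/13
N5K(config-if)#switchport mode fex-fabric
N5K(config-if)#fex associate 101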

What are the challenges?

FabricPath is specifically targeted at data centers because of several unique challenges:

• Layer 2 Adjacency - Unlike the campus, where we've pushed Layer 3 to the
closet, data centers truly have a need for large Layer 2 domains. VMware
especially has made this even more critical, because in order to take advantage
of vMotion and DRS (two critical features), every VMware host must have
access to ALL of the same VLANs.

• Resiliency is key - the Data Center has to have the ability to be ALWAYS up.
Redundant paths make this possible.

• Spanning tree addresses issues with redundant paths, but comes with tons of
caveats. As the L2 network scales, convergence time increases, and it's
complicated (and sometimes dangerous) to configure all of the tweaks to make it
perform better (such as PortFast, UplinkFast, etc.). Also, traditional spanning tree
blocks links, which cuts bandwidth in half, crippling another data center need:
bandwidth scalability.

• vPC Limitations - vPCs are great, and they address the blocked links. But they
come with several caveats, such as complicated matching configuration, orphan
ports, no routing protocol traversal, etc. Even in a vPC scenario we still have to
run spanning tree; we're just eliminating loops, and if I were to plug a non-vPC
switch into the core, it's still going to cause a convergence. Finally, they only
scale to two core devices.

• Bandwidth scalability - Sure, the Nexus 7018 can scale tremendously large, but
it's also a massive box. If we use vPCs, we are still limited to 2 core boxes. This
sounds like overkill, but it's quickly becoming a more popular design among larger
customers. What if, in order to scale bandwidth in the core, we could just add a
third or a fourth, smaller box?

What is FabricPath?

Originally I was worried about having to learn a completely new protocol, but the
truth is that most of us already know all of the concepts that make FabricPath
work. Think about routing to the access layer and why we like that design.

• Routing protocols truly eliminate spanning tree.

• They are very quick to converge, and the addition of a single node doesn't
affect any other part of the network.

• With equal-cost multipath routing, I can scale bandwidth extremely easily by
adding another core device and simply adding links. All of the links will be active
and all of the links will be load-balanced.

There you go: you just learned FabricPath. FabricPath is based on the TRILL
standard, with a few Cisco bonuses, and builds on the concept of "what if we
could route a Layer 2 packet instead of switching it." Under the covers,
FabricPath uses the IS-IS protocol, a MAC encapsulation, and routing tables to
achieve all of the magic. In short, you now have all of the benefits of Layer 3 to
the access switch and none of the caveats of vPCs, while still being able to span
VLANs. Oh, and the configuration is extremely simple.

What do I need to use FabricPath?

F-Series line cards in a Nexus 7000, and Nexus 5500 series + 2ks in the access.
The environment doesn't have to be homogeneous; portions of the
environment could be running FabricPath while others are still running traditional vPC or
spanning tree. It's as simple as that.
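As a taste of that simplicity, a minimal sketch of enabling FabricPath on a switch with F-series cards (the switch-id, VLAN and interface numbers are illustrative):

N7k(config)# install feature-set fabricpath
N7k(config)# feature-set fabricpath
N7k(config)# fabricpath switch-id 11
N7k(config)# vlan 100
N7k(config-vlan)# mode fabricpath
N7k(config-vlan)# exit
N7k(config)# interface ethernet 1/1
N7k(config-if)# switchport mode fabricpath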

FabricPath vs. TRILL

Today there are some key differentiators between Cisco's proprietary FabricPath
technology and what the competitors could bring with TRILL. What it amounts to
is that ours is ready for deployment, while the standard still has some functional
gaps.

In short, the big ones (all of the core switches) can act as a default gateway at
the same time (using GLBP), vPC+ can be used on the access switches to
extend active-active connectivity to non-FabricPath-speaking servers, and
conversational learning allows an extremely scalable setup.

FabricPath vs. vPC

You may note that FabricPath is definitely a replacement for vPC. More than that,
it's really a replacement for traditional L2 network topologies. vPC is really
an attempt to trick a spanning-tree topology, which struggles with loop prevention
when there are multiple active paths to multiple switches.

There is one place, however, in an FP topology where you would still want to use
vPCs, and that is from the access switch to the server itself, because there aren't
any NICs or vSwitches that currently understand FP, but there are plenty that understand
LACP. For this case there is an extension of vPC called vPC+, a
FabricPath-aware vPC that bridges between an access layer switch running FP
and a server that is unaware of it but still needs multiple active uplinks.

FabricPath for Layer 2 DC Interconnect

The requirement for layer 2 interconnect between data centre sites is very
common these days. The pros and cons of doing L2 DCI have been discussed
many times in other blogs or forums so I won't revisit that here. Basically there
are a number of technology options for achieving this, including EoMPLS, VPLS,
back-to-back vPC and OTV. All of these technologies have their advantages and
disadvantages, so the decision often comes down to factors such as scalability,
skillset and platform choice.

Now that FabricPath is becoming more widely deployed, it is also starting to be
considered by some as a potential L2 DCI technology. In theory, this looks like a
good bet: easy configuration and no Spanning Tree extended between sites; it should
be a no-brainer, right? Of course, things are never that simple, so let's look at some
things you need to consider if looking at FabricPath as a DCI solution.

1. FabricPath requires direct point-to-point WAN links

A technology such as OTV uses MAC-in-IP tunnelling to transport Layer 2 frames
between sites, so you simply need to ensure that end-to-end IP connectivity is
available. As a result, OTV is very flexible and can run over practically any
network as long as it is IP enabled. FabricPath, on the other hand, requires a
direct Layer 1 link between the sites (e.g. dark fibre), so it is somewhat less
flexible. Bear in mind that you also lose some of the features associated with an
IP network; for example, there is currently no support for BFD over FabricPath.

2. Your multi-destination traffic will be "hairpinned" between sites

In order to forward broadcast, unknown unicast and multicast traffic through a
FabricPath network, a multi-destination tree is built. This tree generally needs to
"touch" each and every FabricPath node so that multi-destination traffic is
correctly forwarded. Each multi-destination tree in a FabricPath network must
elect a root switch (this is controllable through root priorities, and it's good
practice to use this), and all multi-destination traffic must flow through this root.
How does this affect things in a DCI environment? The main thing to remember
is that there will generally be a single multi-destination tree spanning both sites,
and that the root for that tree will exist on one site or the other. The following
diagram shows an example.

In the above example, there are two sites, each with two spine switches and two
edge switches. The root for the multi-destination tree is on Spine-3 in Site B. For
the hosts connected to the two edge switches in Site A, broadcast traffic could
follow the path from Edge-1 up to Spine-1, then over to Spine-3 in Site B, then to
Spine-4, and then back down to the Spine-2 and Edge-2 switches in Site A
before reaching the other host. Obviously there could be slightly different paths
depending on topology, e.g. if the spine switches are not directly interconnected.
In future releases of NX-OS, the ability to create multiple FabricPath topologies
will alleviate this issue to a certain extent, in that groups of "local" VLANs can be
constrained to a particular site, while allowing "cross-site" VLANs across the DCI
link.

3. First Hop Routing localisation support is limited with FabricPath

When stretching L2 between sites, it's sometimes desirable to implement "FHRP
localization", which usually involves blocking HSRP using port ACLs or similar, so
that hosts at each site use their local gateways rather than traversing the DCI link
and being routed at the other site. The final point to be aware of is that when
using FabricPath for Layer 2 DCI, achieving FHRP localisation is slightly more
difficult. On the Nexus 5500, FHRP localization is supported using "mismatched"
HSRP passwords at each site (you can't use port ACLs for this purpose on the
5K). However, if you have any other FabricPath switches in your domain which
aren't acting as an L3 gateway (e.g. at a third site), then that won't work and is not
supported.

This is because FabricPath will send HSRP packets from the virtual MAC
address at each site with the local switch ID as a source. Other FabricPath
switches in the domain will see the same vMAC from two source switch IDs and
will toggle between them, making the solution unusable. Also, bear in mind that
FHRP localization with FabricPath isn't (at the time of writing) supported on the
Nexus 7000.

The issues noted above do not mean that FabricPath cannot be used as a
method for extending layer 2 between sites. In some scenarios, it can be a viable
alternative to the other DCI technologies as long as you are aware of the caveats
above.

A vPC implementation on FabricPath: Introduction to vPC+

Virtual Port Channel (vPC) is a technology that has been around for a few years
on the Nexus range of platforms. With the introduction of FabricPath, an
enhanced version of vPC, known as vPC+, was released. At first glance, the two
technologies look very similar; however, there are a couple of differences
between them which allow vPC+ to operate in a FabricPath environment. So for
those of us deploying FabricPath, why can't we just use regular vPC?

Let's look at an example. The following drawing shows a simple FabricPath
topology with three switches, two of which are configured as a (standard) vPC
pair.

A single server (MAC A) is connected using vPC to S10 and S20; as a result,
traffic sourced from MAC A can potentially take either link in the vPC towards
S10 or S20. If we now look at S30's MAC address table, which switch is MAC A
reachable behind? The MAC table only allows a one-to-one mapping
between MAC address and switch ID, so which one is chosen? Is it S10 or S20?
The answer is that it could be either, and it is even possible that MAC A could
"flip flop" between the two switch IDs.

In a FabricPath implementation, such a "flip flop" situation breaks traffic flow. So
clearly we have an issue with using regular vPC to dual-attach hosts or switches
to a FabricPath domain. How do we resolve this? We use vPC+ instead.

vPC+ solves the issue above by introducing an additional element: the
"virtual switch". The virtual switch sits "behind" the vPC+ peers and is essentially
used to represent the vPC+ domain to the rest of the FabricPath environment.
The virtual switch has its own FabricPath switch ID and looks, for all intents and
purposes, like a normal FabricPath edge device to the rest of the infrastructure.

In the above example, vPC+ is now running between S10 and S20, and a virtual
switch S100 now exists behind the physical switches. When MAC A sends traffic
through the FabricPath domain, the encapsulated FabricPath frames will have a
source switch ID of the virtual switch, S100. From the point of view of S30 (and other
remote switches), MAC A is now reachable behind a single switch, S100.
This enables multi-pathing in both directions between the Classical Ethernet and
FabricPath domains. Note that the virtual switch needs a FabricPath switch ID
assigned to it (just like a physical switch does), so you need to take this into
account when you are planning your switch ID allocations throughout the
network. For example, each access "Pod" would now contain three switch IDs
rather than two; in a large environment this could make a difference.

Much of the terminology is common to both vPC and vPC+, such as Peer-Link,
Peer-Keepalive, etc., and vPC+ is also configured in a very similar way. The major
differences are:

• In vPC+, the Peer-Link is now configured as a FabricPath core port (i.e.
switchport mode fabricpath)

• A FabricPath switch ID is configured under the vPC+ domain configuration
(fabricpath switch-id <id>); remember to configure the same switch ID on both peers!

• Both the vPC+ Peer-Link and member ports must reside on F-series linecards.

vPC+ also provides the same active/active HSRP forwarding functionality
found in regular vPC; this means that (depending on where your default gateway
functionality resides) either peer can be used to forward traffic into your L3
domain. If your L3 gateway functionality resides at the FabricPath spine layer, vPC+
can also be used there to provide the same active/active functionality.
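Pulling those differences together, a minimal vPC+ sketch on one peer (the domain, switch ID and port-channel numbers are illustrative; the same fabricpath switch-id must also be configured on the other peer):

S10(config)# vpc domain 10
S10(config-vpc-domain)# fabricpath switch-id 100
S10(config-vpc-domain)# exit
S10(config)# interface port-channel 1
S10(config-if)# switchport mode fabricpath
S10(config-if)# vpc peer-link
S10(config-if)# interface port-channel 20
S10(config-if)# switchport mode trunk
S10(config-if)# vpc 20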

Mapping a FabricPath Local ID to an Outbound Interface

When a FabricPath edge switch needs to send a frame to a remote MAC
address, it performs a MAC address table lookup and finds an entry of the form
SWID.SSID.LID. The SWID represents the switch ID of the remote FabricPath
edge switch, the SSID represents the sub-switch ID (which is only used in vPC+),
and the LID (Local ID) represents the outbound port on the remote edge switch.
However, the method by which these LIDs are derived doesn't seem to be very
well documented, and this had been bugging me for a while. So I decided to dig in
and see if I could find out a bit more about the way LIDs are used on the Nexus
switches.

I found a somewhat cryptic statement along the lines of the following: "for N7K the LID is the
port index of the ingress interface; for N5K the LID most of the time will be 0". Let's
see what we can make of that.

The acronym LID stands for "Local ID" and, as the name implies, it has local
significance to the switch that a particular MAC address resides on. As such, it is
up to the implementation to determine how to derive a unique LID to represent its
ports. Apparently, the Nexus 5000 and Nexus 7000 engineering teams did not
talk to each other to agree on some consistent method of assigning the LIDs, but
each created their own platform-specific implementation.

The interface represented by the LID is an ingress interface from the perspective
of the edge switch that inserts the LID into the outer source address. For the
switch sending to that MAC address, it represents the egress port at the
destination edge switch.

For the N5K I couldn't really find more than that the LID will usually be 0, but
there may be some exceptions. For the N7K, the LID maps to the "port index" of
the ingress interface.

So I decided to get into the lab and see if I could find some commands that would
help me establish the relation between the LID and the outbound interface on the
edge switch. I created a very simple FabricPath network and performed a couple
of pings to generate some MAC address table entries.

Let's have a look at a specific entry in the MAC address table of a Nexus 7000:

N7K-1-pod5# show mac address-table dynamic vlan 100
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen, + - primary entry using vPC Peer-Link,
(T) - True, (F) - False
VLAN     MAC Address      Type     age  Secure NTFY Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 100    0005.73e9.8c81   dynamic  960  F      F    Eth3/15
  100    0005.73e9.fcfc   dynamic  960  F      F    16.0.14
  100    00c0.dd18.6ce0   dynamic  420  F      F    16.0.14
  100    00c0.dd18.6ce1   dynamic  0    F      F    16.0.14
* 100    00c0.dd18.6e08   dynamic  0    F      F    Eth3/15
* 100    00c0.dd18.6e09   dynamic  0    F      F    Eth3/15

So, for example, let's zoom in on the MAC address 0005.73e9.fcfc. According to the
table, frames for this destination should be sent to SWID.SSID.LID "16.0.14".
From the SWID part, we can see that the MAC address resides on the switch
with ID "16". To find the corresponding switch hostname, we can use the following
command:

System-ID        Primary  Secondary  Reachable  Bcast-Priority  Ftag-Root Capable  Hostn
MT-0
b414.89dc.7a44    16 [C]   0 [C]      Yes        64 [S]          Y                 N7K-2-p
f025.72a8.bf44*   15 [C]   0 [C]      Yes        64 [S]          Y                 N7K-1-p

So we jump to switch N7K-2-pod6 and perform another MAC address table lookup:

N7K-2-pod6# show mac address-table address 0005.73e9.fcfc
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen, + - primary entry using vPC Peer-Link,
(T) - True, (F) - False

VLAN     MAC Address      Type     age  Secure NTFY Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 100    0005.73e9.fcfc   dynamic  450  F      F    Eth3/15

Now we know that the outbound interface for the MAC address on the destination
edge switch is Ethernet 3/15. So how can we map the LID "14" to this interface?

Since the LID corresponds to the "port index" of the interface in question, how
can we find the port index? The port index is an internal identifier for the
interface, also referred to as the LTL, and there are some show commands to
determine these LTLs. For example, if we wanted to know the LTL for interface
E3/15, we could issue the following command:

N7K-2-pod6# show system internal pixm info interface e 3/15
LTL TYPE             LTL
========================
PHY_PORT             0xe
FLOOD_W_FPOE         0x8031
FLOOD_W_FPOE         0x8035

Here we find that the LTL for the interface is 0xe, which equals 14 in decimal.
This shows that the LID is actually the decimal representation of the LTL.
(FabricPath switch-IDs, subswitch-IDs and Local IDs are represented in decimal
by default).

This lookup can also be performed in reverse. If we take the LID and convert it to
its hexadecimal representation of 0xe, we can find the corresponding interface as
follows:

N7K-2-pod6# show system internal pixm info ltl 0xe

Member info
------------------
Type             LTL
---------------------------------
PHY_PORT         Eth3/15
FLOOD_W_FPOE     0x8035
FLOOD_W_FPOE     0x8031

So, through the use of these two commands, we can map a FabricPath LID to an
interface and vice versa on a Nexus 7000.

FabricPath Authentication in NX-OS

First and foremost, it is assumed that you now have a basic working knowledge
of FabricPath. FabricPath is Cisco's scalable Layer 2 solution that
eliminates Spanning Tree Protocol, adds some enhancements that are sorely
needed in L2 networks, like Time To Live (TTL) and Reverse Path Forwarding (RPF),
and uses IS-IS as a control plane protocol. It's the fact that FabricPath uses IS-IS
that makes it very easy and familiar for customers to enable authentication in
their fabric. If you have ever configured authentication for a routing protocol in
Cisco IOS or NX-OS, this will be similar, with all of your favorites like key chains,
key strings and hashing algorithms. Hopefully that nugget of information doesn't
send you into a tailspin of despair.

With FabricPath there are two levels of authentication that can be enabled. The
first is at the domain level for the entire switch (or VDC!). Authentication here will
prevent routes from being learned. It is important to note that IS-IS adjacencies
can still be formed at the interface level even when the domain authentication is
mismatched: this domain-level authentication covers LSP and SNP exchange, not
the hello PDUs on the interfaces.

If you are not careful, you can blackhole traffic during the implementation of
authentication, just like you would with any other routing protocol.

A quick order of operations to enable domain-level authentication would be to
define a key chain with keys which have key-strings defined underneath. The
key strings are the actual passwords, and NX-OS allows you to define multiple
key-strings so you can rotate passwords as needed; it even includes nerd
knobs for setting start and end times. After the key chains are defined, they are
applied to the FabricPath domain. Let's quit typing and let the CLI do the talking.

We start with a VDC that has FabricPath, is in a fabric with other devices but
doesn't have authentication enabled. We can see we have not learned any
routes.

N7K-2-Access2# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default
0/4/0, number of next-hops: 0
      via ---- , [60/0], 24 day/s 00:32:41, local
0/69/1, number of next-hops: 0
1/69/0, number of next-hops: 0
      via ---- , [60/0], 15 day/s 04:18:01, local
2/69/0, number of next-hops: 0
      via ---- , [60/0], 15 day/s 04:18:01, local

We can also see that we are adjacent to some other devices, but note that we do
not see their names under System ID, just their MAC addresses. This is a quick hint
that something is amiss with the control plane.

N7K-2-Access2# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA  Level  State  Hold Time  Interface
0026.980f.d9c4  N/A   1      UP     00:00:25   port-channel1
0024.98eb.ff42  N/A   1      UP     00:00:29   Ethernet3/9
0024.98eb.ff42  N/A   1      UP     00:00:27   Ethernet3/10
0026.980f.d9c2  N/A   1      UP     00:00:22   Ethernet3/20
0026.980f.d9c2  N/A   1      UP     00:00:29   Ethernet3/21

Now we'll add the authentication, starting with the key chain: we call it "domain",
then define key 0 with a key-string of "domain" (not very creative, am I?), and
then finally apply it to the FabricPath domain default.

N7K-2-Access2# config
Enter configuration commands, one per line. End with CNTL/Z.
N7K-2-Access2(config)# key chain domain
N7K-2-Access2(config-keychain)# key 0
N7K-2-Access2(config-keychain-key)# key-string domain
N7K-2-Access2(config-keychain-key)# fabricpath domain default
N7K-2-Access2(config-fabricpath-isis)# authentication key-chain domain

Now let's see what that does for us. Much happier now, aren't we?

N7K-2-Access2(config-fabricpath-isis)# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default
0/4/0, number of next-hops: 0
      via ---- , [60/0], 24 day/s 00:33:32, local
0/69/1, number of next-hops: 0
1/1/0, number of next-hops: 2
      via Eth3/20, [115/40], 0 day/s 00:00:10, isis_fabricpath-default
      via Eth3/21, [115/40], 0 day/s 00:00:10, isis_fabricpath-default
1/2/0, number of next-hops: 2
      via Eth3/9, [115/40], 0 day/s 00:00:11, isis_fabricpath-default
      via Eth3/10, [115/40], 0 day/s 00:00:11, isis_fabricpath-default
1/69/0, number of next-hops: 0
      via ---- , [60/0], 15 day/s 04:18:52, local
1/100/0, number of next-hops: 4
      via Eth3/9, [115/40], 0 day/s 00:00:11, isis_fabricpath-default
      via Eth3/10, [115/40], 0 day/s 00:00:11, isis_fabricpath-default
      via Eth3/20, [115/40], 0 day/s 00:00:10, isis_fabricpath-default
      via Eth3/21, [115/40], 0 day/s 00:00:10, isis_fabricpath-default
2/69/0, number of next-hops: 0
      via ---- , [60/0], 15 day/s 04:18:52, local

N7K-2-Access2(config-fabricpath-isis)#

The exact same sequence applies to interface-level authentication and looks like
the CLI below. We can see that we have two non-functioning states here,
INIT and LOST. INIT is from me removing the key chain and flapping the
interface (shut/no shut), and LOST is from me removing the pre-defined key chain
and the adjacency going down to N7K-1-Agg1.

N7K-2-Access2# show fab isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID      SNPA  Level  State  Hold Time  Interface
N7K-1-Access1  N/A   1      UP     00:00:27   port-channel1
N7K-2-Agg2     N/A   1      INIT   00:00:22   Ethernet3/9
N7K-2-Agg2     N/A   1      UP     00:00:23   Ethernet3/10
N7K-1-Agg1     N/A   1      LOST   00:04:57   Ethernet3/20
N7K-1-Agg1     N/A   1      UP     00:00:30   Ethernet3/21

Now we'll add our key chain and key string.


N7K-2-Access2# config
Enter configuration commands, one per line. End with CNTL/Z.
N7K-2-Access2(config)#
N7K-2-Access2(config-keychain-key)# int e3/9
N7K-2-Access2(config-if)# fabricpath isis authentication-type cleartext
N7K-2-Access2(config-if)# fabricpath isis authentication key-chain interface
N7K-2-Access2(config-if)#
N7K-2-Access2(config-if)# key chain interface
N7K-2-Access2(config-keychain)# key 0
N7K-2-Access2(config-keychain-key)# key-string 7 interface
N7K-2-Access2(config-keychain-key)#
N7K-2-Access2(config-keychain-key)# int e3/9
N7K-2-Access2(config-if)# fabricpath isis authentication-type cleartext
N7K-2-Access2(config-if)# fabricpath isis authentication key-chain interface
N7K-2-Access2(config-if)#

A quick check shows us we're happily adjacent to our switches.

N7K-2-Access2(config-keychain)# show fab isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID      SNPA  Level  State  Hold Time  Interface
N7K-1-Access1  N/A   1      UP     00:00:30   port-channel1
N7K-2-Agg2     N/A   1      UP     00:00:29   Ethernet3/9
N7K-2-Agg2     N/A   1      UP     00:00:26   Ethernet3/10
N7K-1-Agg1     N/A   1      UP     00:00:24   Ethernet3/20
N7K-1-Agg1     N/A   1      UP     00:00:31   Ethernet3/21

Finally, a quick command to check the FabricPath authentication status on your
device is below:

N7K-2-Access2# show fab isi

Fabricpath IS-IS domain : default
System ID : 0024.98eb.ff43 IS-Type : L1
SAP : 432 Queue Handle : 11
Maximum LSP MTU: 1492
Graceful Restart enabled. State: Inactive
Last graceful restart status : none
Metric-style : advertise(wide), accept(wide)
Start-Mode: Complete [Start-type configuration]
Area address(es) :
00
Process is up and running
CIB ID: 3
Interfaces supported by Fabricpath IS-IS :
port-channel1
Ethernet3/9
Ethernet3/10
Ethernet3/20
Ethernet3/21
Level 1
Authentication type: MD5
Authentication keychain: domain Authentication check specified
MT-0 Ref-Bw: 400000
Address family Swid unicast :
Number of interface : 5
Distance : 115
L1 Next SPF: Inactive

N7K-2-Access2#

With this simple exercise you've configured FabricPath authentication. Not too
bad, and very effective. As always when configuring passwords on your device,
cutting and pasting from a common text file is important to avoid empty white spaces
at the end of passwords and other nuances that can lead you down the wrong
path. In general, I would expect a company implementing FabricPath
authentication to configure both domain and interface-level
authentication.

A Way to tell the Root of the FabricPath tree

N5K-p1-1(config)# show fabricpath isis topology summary
Fabricpath IS-IS domain: default FabricPath IS-IS Topology Summary
MT-0
Configured interfaces: Ethernet1/1 Ethernet1/2 Ethernet1/3 Ethernet1/4 Ethernet
Ethernet1/6 Ethernet1/7 Ethernet1/8 port-channel5
Number of trees: 2
Tree id: 1, ftag: 1 [transit-traffic-only], root system: 0024.98e8.01c2, 1709
Tree id: 2, ftag: 2, root system: 001b.54c2.67c2, 2040

Remember, with IS-IS there are two authentication methods: the actual hello
adjacency authentication, and the LSP authentication. Here is a
sample config of both of these.

key chain cisco
  key 0
    key-string 7 070c705f4d59
!
interface Ethernet1/16
  switchport mode fabricpath
  fabricpath isis authentication-type md5
  fabricpath isis authentication key-chain cisco
!

The config, as you can see above, is quite simple. Don't forget that with key chains
you can specify an accept lifetime and a send lifetime (a sketch follows the output
below). In our case we are not going to; when you don't specify them, they are
simply assumed to be infinite.

SW2# show key chain
Key-Chain cisco
  Key 0 -- text 7 070c705f4d59
    accept lifetime (always valid) [active]
    send lifetime (always valid) [active]
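If you did want to rotate keys, a sketch of time-bounded key strings (the key number, password, and dates are illustrative):

key chain cisco
  key 1
    key-string newpassword
    accept-lifetime 00:00:00 Jun 01 2014 23:59:59 Dec 31 2014
    send-lifetime 00:00:00 Jul 01 2014 23:59:59 Dec 31 2014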

You can verify your ISIS authentication:

SW2# show fabricpath isis interface eth1/16
Fabricpath IS-IS domain: default
Interface: Ethernet1/16
  Status: protocol-up/link-up/admin-up
  Index: 0x0001, Local Circuit ID: 0x01, Circuit Type: L1
  Authentication type MD5
  Authentication keychain is cisco
  Authentication check specified
  Extended Local Circuit ID: 0x1A00F000, P2P Circuit
  Retx interval: 5, Retx throttle interval: 66 ms
  LSP interval: 33 ms, MTU: 1500
  P2P Adjs: 1, AdjsUp: 1, Priority 64
  Hello Interval: 10, Multi: 3, Next IIH: 00:00:06
  Level  Adjs  AdjsUp  Metric  CSNP  Next CSNP  Last LSP ID
  1      1     1       40      60    00:00:35   ffff.ffff.ffff.ff-ff
  Topologies enabled:
    Topology  Metric  MetricConfig  Forwarding
    0         40      no            UP

Next, if you want to actually configure the LSPs to be authenticated:

fabricpath domain default
  authentication-type md5
  authentication key-chain cisco

You can then verify this is configured:



SW2# show fabricpath isis

Fabricpath IS-IS domain : default
System ID : 547f.eec2.7d01 IS-Type : L1
SAP : 432 Queue Handle : 10
Maximum LSP MTU: 1492
Graceful Restart enabled. State: Inactive
Last graceful restart status : none
Metric-style : advertise(wide), accept(wide)
Start-Mode: Complete [Start-type configuration]
Area address(es) :
00
Process is up and running
CIB ID: 3
Interfaces supported by Fabricpath IS-IS :
Ethernet1/5
Ethernet1/6
Ethernet1/7
Ethernet1/8
Ethernet1/16
Level 1
Authentication type: MD5
Authentication keychain: cisco Authentication check specified MT-0 Ref-Bw: 400000
Address family Swid unicast :
Number of interface : 5
Distance : 115
L1 Next SPF: Inactive

A big hint that your authentication is working for hellos but not for LSPs is that the
hostnames don't come up correctly in your IS-IS adjacency.

FabricPath Load Balancing

First of all, it helps if we establish a few items of terminology. The first thing to
remember is that FabricPath supports multiple topologies, so you can actually
break out particular FabricPath-enabled VLANs to use a particular topology.
However, this is only available in certain versions of NX-OS and is quite advanced,
so we will be skipping this advanced configuration.

However, the concept of "trees" also exists in FabricPath. Trees are used for the
distribution of "multidestination" traffic, that is, traffic that does not have a single
destination; perfect examples of this would be multicast, unknown unicast and
other flooded traffic types.

The first multidestination tree, tree 1, is normally selected for unknown unicast
and broadcast frames, except when used in combination with vPC+, but we will
ignore that detail for now.

Multicast traffic is load balanced across both trees based on a hashing function
(which uses the source and destination IP addresses). You can see which
tree the traffic is going to take on a Nexus 7000 with the following command:

N7K1# show fabricpath load-balance multicast ftag-selected flow-type l3 src-ip 10.1.1
128b Hash Key generated : 00 00 02 9a 00 00 00 00 00 00 00 a0 10 12 10 00
0x1b
FTAG SELECTED IS : 2

N7K1# show fabricpath load-balance multicast ftag-selected flow-type l3 src-ip 10.1.1.
128b Hash Key generated : 00 00 02 9a 00 00 00 00 00 00 00 a0 10 12 20 00
0xda
FTAG SELECTED IS : 1

The FTAG is the important key here: the FTAG correlates to the "tree". The
FTAG is used because it's an available field in the FabricPath header that can be used
to identify the frame and tell the switches "use this tree to distribute the traffic".

Now the whole point of this option is scalability, especially with large multicast
traffic domains. Using this option you can increase link utilization for multicast
traffic by having the traffic load balance across two "root" trees (yes, this is FabricPath,
so we don't really have a root like we do in spanning tree, but for
multidestination traffic we kind of have to).

You can actually tell, using the following command, which port your switch is going
to use for a particular FTAG/MTREE:

SW2# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 04:56:33, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:26:19, isis
    Interface Ethernet1/8, uptime: 00:26:19, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 01:48:38, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:26:19, isis
    Interface Vlan666, [SVI] uptime: 01:48:38, igmp

Found total 2 route(s)

SW2# show fabricpath mroute ftag 1

(ftag/1, vlan/666, *, *), Flood, uptime: 04:56:40, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/8, uptime: 00:26:26, isis
    Interface Ethernet1/8, uptime: 00:26:26, isis

(ftag/1, vlan/666, *, *), Router ports (OMF), uptime: 01:48:45, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/8, uptime: 00:26:26, isis
    Interface Vlan666, [SVI] uptime: 01:48:45, igmp

Found total 2 route(s)

As you can see from the above, there are two separate paths that the switch
takes, one for each of the trees, based on where the root of each tree lies.

So how is the root of each tree chosen? It's based on:

• Root priority (highest wins; the default is 64)

• Switch ID (highest wins; the default is randomly chosen but it can be manually
assigned)

• System ID (tie-breaker)

There will always be two separate roots, one for each tree, but as you can imagine,
the elected root might not be the most optimal one, so you can configure
the root priority: the switch with the highest root priority will become the root for FTAG 1, and
second place will become the root for FTAG 2.

N7K1(config)# fabricpath domain default
N7K1(config-fabricpath-isis)# root-priority 254

N7K1 is now the root for this tree. You can verify this in a few ways; the
first is to look at the show fabricpath mroute ftag 1 command we used previously.
Let's just quickly get our topology clear:
SW3# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID  SNPA  Level  State  Hold Time  Interface
SW2        N/A   1      UP     00:00:23   Ethernet1/5
SW2        N/A   1      UP     00:00:27   Ethernet1/6
SW2        N/A   1      UP     00:00:30   Ethernet1/7
SW2        N/A   1      UP     00:00:30   Ethernet1/8
N7K1       N/A   1      UP     00:00:29   Ethernet1/17

As you can see from the above, we have multiple connections between SW3 and
SW2, and then a single connection from each of SW2 and SW3 up to N7K1.

SW2# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID  SNPA  Level  State  Hold Time  Interface
SW3        N/A   1      UP     00:00:30   Ethernet1/5
SW3        N/A   1      UP     00:00:21   Ethernet1/6
SW3        N/A   1      UP     00:00:29   Ethernet1/7
SW3        N/A   1      UP     00:00:27   Ethernet1/8
N7K1       N/A   1      UP     00:00:24   Ethernet1/16

Let's check out our mroutes:

SW2# show fabricpath mroute ftag 1

(ftag/1, vlan/666, *, *), Flood, uptime: 05:12:43, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:13, isis
    Interface Ethernet1/16, uptime: 00:07:13, isis

(ftag/1, vlan/666, *, *), Router ports (OMF), uptime: 02:04:48, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:13, isis
    Interface Vlan666, [SVI] uptime: 02:04:48, igmp

SW2# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 05:13:35, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:09, isis
    Interface Ethernet1/8, uptime: 00:43:21, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 02:05:39, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:09, isis
    Interface Vlan666, [SVI] uptime: 02:05:39, igmp

You can tell from the above that neither of the switches will ever send unknown
unicast (which, remember, is placed into FTAG 1) out to each other; they will
instead always forward it up the tree to N7K1, which is our root for this tree.

From N7K1's perspective:

N7K1# show fabricpath mroute ftag 1

(ftag/1, vlan/666, *, *), Flood, uptime: 04:01:00, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet4/1, uptime: 00:01:51, isis
    Interface Ethernet4/2, uptime: 01:06:04, isis

(ftag/1, vlan/666, *, *), Router ports (OMF), uptime: 04:01:01, isis igmp
  Outgoing interface list: (count: 2)
    Interface Vlan666, [SVI] uptime: 01:51:33, igmp
    Interface Ethernet4/1, uptime: 00:01:51, isis

N7K1 is responsible for forwarding it back down, so if an unknown unicast frame or a
multicast frame that was hashed to FTAG 1 comes from SW2, it will go up to
N7K1 and then back down towards SW3 through N7K1.

Let's make SW2 the root for FTAG 2 by manually configuring SW3 to have a
lower priority.

SW3# show run | sect fabricpath
fabricpath domain default
  root-priority 63

Let's take a look at the FTAG distribution now.

SW2# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 05:18:03, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:11:38, isis
    Interface Ethernet1/8, uptime: 00:47:50, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 02:10:08, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:11:38, isis
    Interface Vlan666, [SVI] uptime: 02:10:08, igmp

Found total 2 route(s)

Let's check it out on N7K1:



N7K1# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 04:12:32, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet4/1, uptime: 00:12:28, isis
    Interface Ethernet4/1, uptime: 00:12:28, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 04:12:33, isis igmp
  Outgoing interface list: (count: 2)
    Interface Vlan666, [SVI] uptime: 02:03:05, igmp
    Interface Ethernet4/1, uptime: 00:12:28, isis

Found total 2 route(s)

N7K1# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID  SNPA  Level  State  Hold Time  Interface
SW2        N/A   1      UP     00:00:27   Ethernet4/1

OK, so now SW2 is the root for FTAG 2, and any frames from N7K1 will come
down to SW2 first; SW2 in turn will distribute them to SW3. Now, there is one bit of
this that might make you say "what gives?": I have four
connections between SW2 and SW3, so why is traffic not load balancing across
those equal-cost links?

FabricPath only performs ECMP for KNOWN unicast frames.

OK, here's one more way to determine the root of an MTREE:

N7K1# show fabricpath isis trees
Fabricpath IS-IS domain: default

Note: The metric mentioned for multidestination tree is from the root of that tree to that switch-id

MT-0
Topology 0, Tree 1, Swid routing table
2, L1
  via Ethernet4/1, metric 40
3, L1
  via Ethernet4/2, metric 40

Topology 0, Tree 2, Swid routing table
2, L1
  via Ethernet4/1, metric 0
3, L1
  via Ethernet4/1, metric 40

SW2# show fabricpath isis trees
Fabricpath IS-IS domain: default

Note: The metric mentioned for multidestination tree is from the root of that tree to that switch-id

MT-0
Topology 0, Tree 1, Swid routing table
1, L1
  via Ethernet1/16, metric 0
3, L1
  via Ethernet1/16, metric 40

Topology 0, Tree 2, Swid routing table
1, L1
  via Ethernet1/16, metric 40
3, L1
  via Ethernet1/8, metric 40

SW3# show fabricpath isis trees
Fabricpath IS-IS domain: default

Note: The metric mentioned for multidestination tree is from the root of that tree to that switch-id

MT-0
Topology 0, Tree 1, Swid routing table
1, L1
  via Ethernet1/17, metric 0
2, L1
  via Ethernet1/17, metric 40

Topology 0, Tree 2, Swid routing table
1, L1
  via Ethernet1/8, metric 40
2, L1
  via Ethernet1/8, metric 0

So, the key point in this output is the note I have highlighted:

"Note: The metric mentioned for multidestination tree is from the root of that tree
to that switch-id"

What this is saying is that when you're looking at this output, you're being told the
values for the topology tree as if you were running the command on the root of
each tree itself. So let's take a closer look at a switch, Switch 3, which is not the
root for either FTAG.

SW3# show fabricpath isis trees
Fabricpath IS-IS domain: default

Note: The metric mentioned for multidestination tree is from the root of that tree to that switch-id

MT-0
Topology 0, Tree 1, Swid routing table
1, L1
 via Ethernet1/17, metric 0
2, L1
 via Ethernet1/17, metric 40

Topology 0, Tree 2, Swid routing table
1, L1
 via Ethernet1/8, metric 40
2, L1
 via Ethernet1/8, metric 0

The metric for reaching Switch-ID 1, which this switch reaches via Eth1/17, is 0, because Switch 1 _is_ the root for this FTAG.

Same again for Tree 2: the root of that tree is Switch-ID 2, which is out Eth1/8 with a metric of 0, because obviously Switch-ID 2's metric to reach itself is 0.

Let's now look at unicast load balancing.

Here is the default unicast routing table right now on a switch with multiple equal-cost links (remember, FabricPath only supports load balancing across equal-cost links):

SW3# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/3/0, number of next-hops: 0
      via ---- , [60/0], 0 day/s 01:43:35, local
1/1/0, number of next-hops: 1
      via Eth1/17, [115/40], 0 day/s 01:43:44, isis_fabricpath-default
1/2/0, number of next-hops: 4
      via Eth1/5, [115/40], 0 day/s 01:07:59, isis_fabricpath-default
      via Eth1/6, [115/40], 0 day/s 01:08:06, isis_fabricpath-default
      via Eth1/7, [115/40], 0 day/s 01:08:05, isis_fabricpath-default
      via Eth1/8, [115/40], 0 day/s 01:07:55, isis_fabricpath-default

We can see that the four links toward switch-id 2 are equal cost. How are flows balanced across them?

SW3# show fabricpath load-balance
ECMP load-balancing configuration:
L3/L4 Preference: Mixed
Hash Control: Symmetric
Use VLAN: TRUE

They are load balanced based on a combination of the values shown above; the configurable options include:

• layer-3: Include only Layer 3 input (source or destination IP address).

• layer-4: Include only Layer 4 input (source or destination TCP and UDP ports, if available).

• mixed: Include both Layer 3 and Layer 4 input (default).

• source: Use only source parameters (layer-3, layer-4, or mixed).

• destination: Use only destination parameters (layer-3, layer-4, or mixed).

• source-destination: Use both source and destination parameters (layer-3, layer-4, or mixed).

• symmetric: Sort the source and destination tuples before entering them in the hash function (source-to-destination and destination-to-source flows hash identically) (default).

• xor: Perform an exclusive OR operation on the source and destination tuples before entering them in the hash function.

• include-vlan: Include the VLAN ID of the frame (default).

• rotate-amount: Specify the number of bytes to rotate the hash string before it is entered in the hash function.

Each of these values is relatively straightforward: you can hash on the Layer 3 info, the Layer 4 info, or a mixture (the default); you can hash on only the source, only the destination, or both; and you can control whether the hash function produces the same value for source-to-destination traffic and the return destination-to-source traffic. The VLAN ID can be included in any of these combinations, and last but not least the rotate-amount controls some of the mathematics of the hash function, which we will get into below (a configuration sketch follows).
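As a quick illustration, these knobs are all set with the fabricpath load-balance unicast global configuration command. A minimal sketch with arbitrary values (keyword availability can vary slightly by platform and release, so verify with "?"):

SW2(config)# fabricpath load-balance unicast layer-4
SW2(config)# fabricpath load-balance unicast source-destination
SW2(config)# fabricpath load-balance unicast include-vlan
SW2(config)# fabricpath load-balance unicast rotate-amount 0x4

Running show fabricpath load-balance afterwards confirms the resulting policy.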

Let's use our favorite command to look at this closely:

SW2# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 3 src-mac 0 00 198.18.66.4 l4-src-port 8080 vlan 666
Missing params will be substituted by 0's.
crc8_hash: 134
This flow selects interface Eth1/7

SW2# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 3 src-mac 0000 l4-src-port 8081 vlan 666
Missing params will be substituted by 0's.
crc8_hash: 29
This flow selects interface Eth1/6

We can see that we only changed one tiny parameter, the L4 source port, and all of a sudden the traffic will load balance across another link. Great! Looks pretty good so far, right?

Let's check out what that symmetric option does for us. Check this out:

SW2# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 3 src-mac 1 111 28.66.1 l4-src-port 80 vlan 666
Missing params will be substituted by 0's.
crc8_hash: 134
This flow selects interface Eth1/7

Here we have swapped the source and destination ports and IP addressing around, and we are given exactly the same CRC hash, which leads us to exactly the same output interface!

Let's see if that is also true on the N7K:

N7K1# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 01f9029a00000c6124201c6124204005
This flow selects interface Eth4/1

N7K1# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 01f9029a00000c6124201c6124204005
This flow selects interface Eth4/1

If we change the length the hash key is rotated by, the rotate amount, our hash key will change.

N7K1# show fabricpath load-balance
ECMP load-balancing configuration:
L3/L4 Preference: Mixed
Hash Control: Symmetric
Rotate amount: 15 bytes

N7K1# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 0000000c6124201c612420400501f900

So now we have a different hash key generated, based on a longer rotate amount. This is apparently used simply to make sure that identical or near-identical traffic flows in different VDCs distribute their traffic differently from each other: it rotates the hash input (in this case, by a number of bytes taken from the VDC MAC address) to increase the likelihood that the hash will differ between the VDCs.

Check this out for size: two totally separate VDCs are shown here, and what we do is change the rotate-amount on each of them to 0 (nothing), then ask each what it thinks the hash key is.

sw1-2(config)# fabricpath load-balance unicast rotate-amount 0x0
sw1-2# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 00000c6124201c612420400501f90000

N7K1# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 00000c6124201c612420400501f90000

As you can see, the hashes are identical, which means our traffic would flow over the same paths in both VDCs, which we may not want; so we can use the rotate-amount to increase how much of the VDC MAC address is used in the hashing function.

Note that just because FabricPath only supports equal-cost load balancing doesn't mean that we can't go through intermediate switches and still have load balancing. Here is an example of this.

SW2# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

1/3/0, number of next-hops: 5
      via Eth1/5, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
      via Eth1/6, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
      via Eth1/7, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
      via Eth1/8, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
      via Eth1/16, [115/40], 0 day/s 00:00:26, isis_fabricpath-default

In the above example, we have modified the metrics so that SW2 and SW3, which have interfaces Eth1/5-8 directly to each other, also see the path via N7K1 as a valid equal-cost path between them. We did this by modifying the metrics like so:

SW2# show run int eth1/16

interface Ethernet1/16
  switchport mode fabricpath
  fabricpath isis metric 25

N7K1(config)# int eth4/1
N7K1(config-if)# fabricpath isis metric 15
N7K1(config-if)# int eth4/2
N7K1(config-if)# fabricpath isis metric 15

Notice that the total cost of the path via N7K1 is now 40 (25 + 15) for SW2, which means SW2 now considers it an equal-cost alternative path.

Over on SW3, since we have not modified the default metric, it will still load balance via the 4 links, not 5.

SW3# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/3/0, number of next-hops: 0
      via ---- , [60/0], 0 day/s 02:30:59, local
1/1/0, number of next-hops: 1
      via Eth1/17, [115/40], 0 day/s 02:31:08, isis_fabricpath-default
1/2/0, number of next-hops: 4
      via Eth1/5, [115/40], 0 day/s 01:55:23, isis_fabricpath-default
      via Eth1/6, [115/40], 0 day/s 01:55:30, isis_fabricpath-default
      via Eth1/7, [115/40], 0 day/s 01:55:29, isis_fabricpath-default
      via Eth1/8, [115/40], 0 day/s 01:55:19, isis_fabricpath-default

That is, until we change the metric:

SW3(config)# int eth1/17
SW3(config-if)# fabricpath isis metric 25
SW3(config-if)# end
SW3# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/3/0, number of next-hops: 0
      via ---- , [60/0], 0 day/s 02:31:34, local
1/1/0, number of next-hops: 1
      via Eth1/17, [115/25], 0 day/s 02:31:43, isis_fabricpath-default
1/2/0, number of next-hops: 5
      via Eth1/5, [115/40], 0 day/s 01:55:58, isis_fabricpath-default
      via Eth1/6, [115/40], 0 day/s 01:56:05, isis_fabricpath-default
      via Eth1/7, [115/40], 0 day/s 01:56:04, isis_fabricpath-default
      via Eth1/8, [115/40], 0 day/s 01:55:54, isis_fabricpath-default
      via Eth1/17, [115/40], 0 day/s 00:00:03, isis_fabricpath-default

DATA CENTER, FIVE FUNCTIONAL FACTS

FIVE FUNCTIONAL FACTS ABOUT FABRICPATH
FabricPath is Cisco’s proprietary, TRILL-based technology for encapsulating Ethernet frames
across a routed network. Its goal is to combine the best aspects of a Layer 2 network with the
best aspects of a Layer 3 network.
 Layer 2 plug and play characteristics
 Layer 2 adjacency between devices
 Layer 3 routing and path selection
 Layer 3 scalability
 Layer 3 fast convergence
 Layer 3 Time To Live field to drop looping packets
 Layer 3 failure domain isolation
An article on FabricPath could go into a lot of detail and be many pages long but I’m going to
concentrate on five facts that I found particularly interesting as I’ve learned more about
FabricPath.

#1 – FabricPath is not a network topology

When I first started learning about FabricPath, I believed that it came with a requirement that
your network topology conform to certain rules. While I now know that is not true, there is a
common topology that is discussed when talking about network fabrics. It’s called the spine+leaf
topology.

This is similar to a traditional collapsed core design with a few differences.

 When we’re talking about a fabric, all links in the network are forwarding. So unlike a traditional
network that is running Spanning Tree Protocol, each switch has multiple active paths to every other
switch.
 Because all of the links are forwarding, there are real benefits to scaling the network horizontally.
Consider if the example topology above only showed (2) spine switches instead of (3). That would
give each leaf switch (2) active paths to reach other parts of the network. By adding a third spine
switch, not only is the bandwidth scaled but so is the resiliency of the network. The network can lose
any spine switch and only drop 1/3rd of its bandwidth. In a traditional network that runs Spanning
Tree Protocol, there is no benefit to scaling horizontally like this because STP will only allow (1) link
to be forwarding at a time. The investment in an extra switch, transceivers, cables, etc, is just sitting
idle waiting for a failure before it can start forwarding packets.
So while the spine+leaf topology is commonly used when discussing FabricPath, it is not a
requirement. In fact, even having full-mesh connectivity between spine and leaf nodes as shown
in the drawing is not a requirement. You could connect each spine to every other leaf. You could
connect spines to other spines or a leaf to a leaf.

According to Cisco, there is a lot of interest from customers about using FabricPath for
connecting sites together (ie, as a data center interconnect or for connecting buildings in a
campus). An example of that might be a ring topology that connects each of the sites.

The drawing shows FabricPath being used between the switches that connect to the fiber ring.
This is obviously a very different topology than spine+leaf and yet perfectly reasonable as far as
FabricPath is concerned.

FabricPath is a method for encapsulating Layer 2 traffic across the network. It does not define or
require a specific network topology. The rule of thumb is: if the topology makes sense for
regular old IP routing, then it makes sense for FabricPath.

#2 – FabricPath introduces its own unique data plane

In order to achieve the benefits that FabricPath brings over Classical Ethernet, some significant changes needed to be implemented in the data plane of the network. These changes include:

 The introduction of a Time To Live field in the frame header which is decremented at each FabricPath
hop
 A unique addressing scheme consisting of a 12-bit switch ID which is used to switch frames through
the fabric
 A Reverse Path Forwarding check is done on each frame as it enters a FabricPath port (another loop
prevention mechanism)
 A new frame header format with these new fields

In order for the hardware platform to switch FabricPath frames without any slowdown, new
ASICs are required in the network. On the Nexus 7000, these ASICs are present on the F series
I/O modules. It’s important to understand that not only do the FabricPath core ports need to be
on an F series module but so do the Classic Ethernet edge ports which carry traffic belonging to
FabricPath VLANs. This last requirement may impact certain existing environments where
downstream devices are connected on M1 or M2 I/O modules.

FabricPath is also supported on the Nexus 5500 running NX-OS 5.1(3)N1(1) or higher. Cisco’s
documentation isn’t exactly clear how FabricPath is implemented on the 5500 series but I’ve
been told 55xx boxes do it in hardware (the original 50xx boxes do not support FabricPath).

#3 – FabricPath does not unconditionally learn every MAC in the network

One of the key issues with scaling modern data centers is that the number of MAC addresses
each switch needs to learn is growing all the time. The explosion in growth is due mostly to the
increase in virtualization. Consider a top-of-rack, 48-port Classical Ethernet switch that connects
to 48 servers. That’s 48 MAC addresses that this switch and all the other switches in the network
need to learn to send frames to those servers. Now consider that those 48 servers are really
VMware vSphere hosts and that each host has 20 virtual machines (an average number, probably
low for some environments). That’s 960 MAC addresses. Quite an increase. Now multiply that
out by however many additional ToR switches are also servicing vSphere hosts. All of a sudden
your switches’ TCAM doesn’t look so big any more.

Since FabricPath continues the Layer 2 adjacency that Classical Ethernet has, it must also rely on
MAC address learning to make forwarding decisions. The difference, however, is that FabricPath
does not unconditionally learn the MAC addresses it sees on the wire. Instead it does
“conversational learning” which means that for MACs that are reachable through the fabric, a
FabricPath switch will only learn that MAC if it’s actively conversing with a MAC that is
already present in the MAC forwarding table.

Consider Switch 2 in this example. Host A is reachable through the fabric while B and C are
reachable via Classic Ethernet ports. The MACs of B and C are learned on Switch 2 using
Classic Ethernet rules which is to say that they are learned as soon as they each send frames into
the network. The MAC for A is only learned at Switch 2 if A is sending a unicast packet to B or
C and their MAC is already in Switch 2’s forwarding table. If A sends a broadcast frame into the
network (such as when A is sending an ARP ‘who-has’ request looking for B’s MAC), Switch 2
will not learn A’s MAC (because the frame from A was not addressed to B, it was a broadcast).
Also if A sends a unicast frame for Host D, a host that Switch 2 knows nothing about, Switch 2
will not learn A’s MAC (destination MAC must be in the forwarding table to learn the source
MAC).

The conversational learning mechanism ensures that switches only learn relevant MACs and not every MAC in the entire domain, thus easing the pressure on the finite amount of TCAM in the switch. A quick way to observe this in the lab is sketched below.
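To see this behavior on a lab switch, compare the MAC table before and after two hosts exchange unicast traffic. A minimal sketch (the VLAN ID is hypothetical, and the exact output format varies by platform):

SW2# show mac address-table dynamic vlan 666
(before any conversation: only the locally attached MACs of B and C appear)

SW2# show mac address-table dynamic vlan 666
(after A sends unicast traffic to B: A's MAC now appears as well, learned against the remote switch-id through the fabric)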

#4 – FabricPath ports do not have IP addresses

One area where FabricPath gets confusing is when it’s referred to as “routing MAC addresses”
or “Layer 2 over Layer 3″. It’s easy to hear terms like “routing” and “Layer 3″ and associate that
with the most common Layer 3 protocol on the planet — IP — and assume that IP must play a
role in the FabricPath data plane. However, as outlined in #2 above, FabricPath employs its own
unique data plane and has been engineered to take on the best characteristics of Ethernet at Layer
2 and IP at Layer 3 without actually using either of those protocols. Below is a capture of a
FabricPath frame showing that neither Ethernet nor IP are in play.

Instead of using IP addresses, an address — called the “switch ID” — is automatically assigned
to every switch on the fabric. This ID is used as the source and destination address for FabricPath
frames destined to and sourced from the switch. Other fields such as the TTL can also be seen in
the capture.

#5 – FabricPath employs Equal Cost Multipath packet forwarding

In Classic Ethernet networks that utilize Spanning Tree Protocol, it’s no secret that the
bandwidth that’s been cabled up in the network is not used efficiently. STP’s only purpose in life
is to make sure that redundant links in the network are not used during steady-state operation.
That’s a poor ROI on the cost to put in those links, and from a scaling/capacity perspective it’s equally poor, since the network is limited to the capacity of that one link and cannot employ multiple parallel links. (OK, technically you can using EtherChannel, but you understand the point I’m trying to make.)

Since FabricPath doesn’t use STP in the fabric and because the fabric ports are routed interfaces
and therefore have loop prevention mechanisms built-in, all of the fabric interfaces will be in a
forwarding state capable of sending and receiving packets. Since all interfaces are forwarding it’s
possible that there are equal cost paths to a particular destination switch ID. FabricPath switches
can employ Equal Cost Multipathing (ECMP) to utilize all equal cost paths.

Here S100 has (3) equal cost paths to S300: A path to each of S10, S20, and S30 via the orange
links and then from each of those switches to S300 via the purple links.

Much like a regular etherchannel or a CEF multipathing situation, FabricPath ECMP utilizes a
hashing algorithm to determine which link a particular traffic flow should be put on. By default
the inputs to the hash are:

 Source and destination Layer 3 address


 Source and destination Layer 4 ports (if present)
 802.1Q VLAN tag
These values are all taken from the original, encapsulated Ethernet frame.

An interesting value-add that FabricPath does is to use the switch’s own MAC address as a key
for shifting the hashed bits. This shifting prevents polarization of the traffic as it passes through
the fabric (ie, prevents every switch from choosing “link #1″ all the way through the network due
to their hash outputs all being exactly the same). The benefit of this is only realized if there’s
more than (2) hops between source and destination FabricPath switch.

So there you have it.

FabricPath sample configuration:
N7K-1# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N7K-1(config)# feature-set fabricpath

N7K-1(config)# vlan 100

N7K-1(config-vlan)# mode fabricpath

N7K-1(config-vlan)# int e1/1-8

N7K-1(config-if-range)# switchport mode fabricpath

N7K-1(config-if-range)# no shutdown

N5K-1# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N5K-1(config)# install feature-set fabricpath

N5K-1(config)# feature-set fabricpath

N5K-1(config)# vlan 100

N5K-1(config-vlan)# mode fabricpath

N5K-1(config)# int e1/1-8

N5K-1(config-if-range)# switchport mode fabricpath

N5K-1(config-if-range)# no shutdown

You can influence the root selection with the root-priority command:

N7K-1(config)# fabricpath domain default

N7K-1(config-fabricpath-isis)# root-priority 255

By default, all switches are assigned a root priority of 64. Manually setting a given switch’s priority to 255, the highest value possible, ensures that it will become the primary root.

To verify:
N7K-12-1(config-if-range)# show fabricpath isis interface brief

Fabricpath IS-IS domain: default

Interface Type Idx State Circuit MTU Metric Priority Adjs/AdjsUp

--------------------------------------------------------------------------------

Ethernet1/21 P2P 2 Up/Ready 0x01/L1 1500 40 64 0/0

Ethernet1/22 P2P 4 Up/Ready 0x01/L1 1500 40 64 0/0

Ethernet1/23 P2P 8 Up/Ready 0x01/L1 1500 40 64 0/0

Ethernet1/24 P2P 5 Up/Ready 0x01/L1 1500 40 64 0/0

Ethernet1/25 P2P 1 Up/Ready 0x01/L1 1500 40 64 0/0

Ethernet1/26 P2P 6 Up/Ready 0x01/L1 1500 40 64 0/0



Ethernet1/27 P2P 3 Up/Ready 0x01/L1 1500 40 64 0/0

Ethernet1/28 P2P 7 Up/Ready 0x01/L1 1500 40 64 0/0

N7K-1(config)# show fabricpath route

N7K-12-2(config-if-range)# show fabricpath isis topology summary

Fabricpath IS-IS domain: default FabricPath IS-IS Topology Summary

MT-0

Configured interfaces: Ethernet1/21 Ethernet1/22 Ethernet1/23 Ethernet1/24

Ethernet1/25 Ethernet1/26 Ethernet1/27

Number of trees: 2

Tree id: 1, ftag: 1, root system: 18ef.63e3.cec4, 71

Tree id: 2, ftag: 2, root system: 0026.980d.3cc4, 72

Note – Configuring FabricPath to establish active/active 4-way connections between Nexus 7010 and Nexus 5548 devices:

This lab demonstrates the advantage of FabricPath over STP: when multiple parallel links interconnect 2 switches, all of those links are installed in the FabricPath routing table. This enables all links to be used actively for traffic forwarding.

By nature, FabricPath supports up to 16 active links between 2 FabricPath switches (16-way ECMP). L2, L3 and L4 flow field information can be leveraged to create the hashing value used to select the next-hop local interface.

FabricPath computes 2 multidestination trees in the background (tree 1 for broadcast, unknown unicast and multicast traffic; tree 2 for multicast). It is a best practice to position the root of tree 1 on one spine switch and the root of tree 2 on another spine switch, as sketched below.

A FabricPath network is able to interoperate with a legacy STP network. It is possible to propagate TCNs across the FabricPath domain by using the spanning-tree domain <id> command.

Be careful to set the same MTU value on both sides of a FabricPath core port link in order to avoid any unexpected forwarding behavior.
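A minimal sketch of that root-placement best practice (switch names are hypothetical; the switch with the highest root priority becomes the root of tree 1, and the next highest becomes the root of tree 2):

spine-1(config)# fabricpath domain default
spine-1(config-fabricpath-isis)# root-priority 255

spine-2(config)# fabricpath domain default
spine-2(config-fabricpath-isis)# root-priority 254

Verify with show fabricpath isis trees (or show fabricpath isis topology summary) that the two tree roots land on the two spines.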

Fabricpath with vpc+


N5K-1(config)# vpc domain 1

N5K-1 (config-vpc-domain)# role priority <...>

Warning:
!!:: vPCs will be flapped on current primary vPC switch while attempting role change ::!!

Note:
--------:: Change will take effect after user has re-initd the vPC peer-link ::--------

N5K-1 (config-vpc-domain)# fabricpath switch-id 1

Configuring fabricpath switch id will flap vPCs. Continue (yes/no)? [no] yes

N5K-1(config)# interface Ethernet1/9-10

N5K-1(config-if-range)# description vPC+ peer-link member

N5K-1(config-if-range)# channel-group 2 mode active

N5K-1(config-if-range)# no shutdown

N5K-1(config)# interface port-channel 2

N5K-1(config-if)# description vPC+ peer-link



N5K-1(config-if)# switchport mode fabricpath

The following command will assign the port channel to be a vPC+ peer-link.

N5K-1(config)# interface port-channel 2

N5K-1(config-if)# vpc peer-link

Note:
All FabricPath edge switches (also called leaf switches) playing the role of Layer 2 gateway (i.e. connected to end-host devices or STP switches) share the same bridge ID, so the FabricPath fabric appears as a single logical switch to the rest of the legacy network.

The bridge ID used by all edge switches is c84c.75fa.6000. If you are using spanning-tree domain <ID>, the bridge ID will reflect the domain <ID>: for example, spanning-tree domain 5 will generate the bridge ID c84c.75fa.6005.

1. How do I log "show" commands in the accounting log?

By default, "show accounting log" only records actual configuration commands. If you also want it to track the show commands a user types, you need to do the following:

terminal log all

2. How do I read a file in the bootflash?

show file ###

3. How do I count the lines of a show command's output?

show *** | count

4. How do I find a command when I only remember part of it?

show cli list | grep ## (enter your keyword here)


e.g.:

show cli list | grep lacp

5. How do I see my command history?

show cli history

6. How do I check usage of bootflash?

show system internal flash

7. How do I send debug output to a file?

debug logfile ##

- How large is the ARP table in a Nexus 5596?

- What is the command to check the number of MAC address entries used in an N5K, and the percentage of usage?

- What is the command to check the number of IP ARP entries used in an N5K, and the percentage of usage?

Answers:

1. According to the scalability guide
<http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5500/sw/Verified_Scalability/602N21/b_N5500_Verified_Scalability_602N21/b_N5500_Config_Limits_602N11_chapter_01.html>,
the maximum limit for the ARP table is:

* 8000 for the Cisco Nexus 5548 Layer 3 Daughter Card (N55-D160L3(=))

* 16,000 for the Cisco Nexus 5548 Layer 3 Daughter Card, version 2 (N55-D160L3-V2(=))

1) To display the number of entries currently in the MAC address table, use the show mac address-table count command.

Detail:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/layer2/n5k-l2-cr/n5k-l2_cmds_show.html#wp1438536

2) There isn't a direct command to check the percentage of usage; however, you can configure a notification for when the usage crosses a limit (a concrete example follows the reference link below).

To configure a log message notification of MAC address table events, use the mac address-table notification command:

mac address-table notification threshold [limit ## (percentage 1-100) interval ## (seconds: 10-10000)]

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/layer2/n5k-l2-cr/n5k-l2_cmds_m.html#wp1375692
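For instance, a minimal sketch using arbitrary values (log when the MAC table passes 80% utilization, checked every 120 seconds; exact keyword syntax may vary by release, so verify with "?"):

switch(config)# mac address-table notification threshold limit 80 interval 120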

2. What is the command to check the number of IP ARP entries used in an N5K, and the percentage of usage?

You can check the number of IP ARP entries with "show ip arp" or "show ip arp summary"; however, there isn't a way to check the percentage of usage yet. You will need to refer to the release notes, since the limits (both for MAC and ARP) vary depending on hardware and software versions.

Detail:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/security/n5k-sec-cr/n5k-sec_cmds_show.html#wp1742586

NOTE:

A useful command to check your TCAM table usage is:

show hardware profile status

Total LPM Entries = ##.
Used Host Entries in Host (Total) = ###
Used Host4 Entries in Host = ##

Inter-VLAN routing failure on an N5K Layer 3 switch

Issue:
A host can ping IPs in the same VLAN but can't ping hosts in other VLANs. The N5K is the core and has the SVIs as default gateways.

Troubleshooting steps:

1. Check whether the default gateway is configured properly on the host and on the N5K.

2. Check the logs for any duplicate IP/ARP messages; is another host using the default gateway's IP address? (See the verification sketch below.)
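A minimal verification sketch for those two steps (the gateway address 10.0.10.1 is hypothetical):

N5K# show ip interface brief
(confirm the SVIs for both VLANs are up and hold the expected gateway IPs)

N5K# show ip arp 10.0.10.1
(check which MAC answers for the gateway address; an unexpected or changing MAC suggests a duplicate IP)

N5K# show logging last 50
(look for duplicate IP/ARP messages in the recent log)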

Raw note: VTP on Nexus 5000 switches

* The config guide in 4.x is very brief.

This example shows how to configure VTP in transparent mode (the default mode):

switch# config t
switch(config)# feature vtp
switch(config)# vtp domain accounting
switch(config)# vtp version 2
switch(config)# exit
switch#

 ** This behavior can lead to the following issue: the Nexus 5000 does not have the same VLANs as a switch running VTP server; that is, VLANs on the Nexus 5000 are not the same as on the switch running the VTP server.

Possible Cause

The Nexus 5000 currently supports VTP only in transparent mode (4.2(1)N1(1) and later releases).

Solution

This situation indicates that VLANs must be configured locally. However, a VTP client and server can both communicate through a Nexus 5000 by using the following commands:
 A better config guide:

About the VLAN Trunking Protocol


VTP is a distributed VLAN database management protocol that synchronizes the VTP VLAN database across domains. A VTP domain includes one or more network switches that share the same VTP domain name and that are connected with trunk interfaces. Each switch can be in only one VTP domain. Layer 2 trunk interfaces, Layer 2 port channels, and virtual port channels (vPCs) support VTP functionality. Cisco NX-OS Release 5.0(2)N1(1) introduced support for VTPv1 and VTPv2. Beginning in Cisco NX-OS Release 5.0(2)N2(1), you can configure VTP in client or server mode. Prior to NX-OS Release 5.0(2)N2(1), VTP worked only in transparent mode.

There are four VTP modes:

 Server mode–Allows users to perform configurations; it manages the VLAN database version number and stores the VLAN database.
 Client mode–Does not allow user configurations and relies on other switches in the domain to provide configuration information.
 Off mode–Allows you to access the VLAN database (VTP is enabled) but not participate in VTP.
 Transparent mode–Does not participate in VTP, uses local configuration, and relays VTP packets out its other trunk ports. VLAN changes affect only the local switch. A VTP transparent network switch does not advertise its VLAN configuration and does not synchronize its VLAN configuration based on received advertisements.

Guidelines and Limitations



VTP has the following configuration guidelines and limitations:

 When a switch is configured as a VTP client, you cannot create VLANs on the switch in the range of 1 to 1005.
 VLAN 1 is required on all trunk ports used for switch interconnects if VTP is supported in the network. Disabling VLAN
1 from any of these ports prevents VTP from functioning properly.
 If you enable VTP, you must configure either version 1 or version 2. On the Cisco Nexus 5010 and Nexus 5020 switches, 512 VLANs are supported; if these switches are in a distribution network with other switches, the VLAN limit for the VTP domain remains 512. If a Nexus 5010 or Nexus 5020 client/server receives additional VLANs from a VTP server, they transition to transparent mode.
 The show running-configuration command does not show VLAN or VTP configuration information for VLANs 1 to 1000.
 When deployed with vPC, both vPC switches must be configured identically.
 VTP advertisements are not sent out on Cisco Nexus 2000 Series Fabric Extender ports.
 VTP pruning is not supported.

** Interesting discussion about a bug:

There is a bug on the N3K that causes this behavior even if the settings are currently the same: if you have ever enabled VTP, the box still thinks it's on. Perhaps that bug also exists on the N5K if show vpc status shows the same state.

In any case, since it's a type-2 inconsistency, it doesn't affect traffic flow.

* A real-life case of how VTP plus some human error can cause an outage:
DHCP snooping in Nexus


cco Overview
DHCP snooping acts like a firewall between untrusted hosts and trusted DHCP servers by doing the
following:
• Validates DHCP messages received from untrusted sources and filters out invalid response
messages from DHCP servers.
• Builds and maintains the DHCP snooping binding database, which contains information about
untrusted hosts with leased IP addresses.

• Uses the DHCP snooping binding database to validate subsequentrequests from untrusted
hosts.

DHCP snooping needs to be enabled globally as well as per VLAN.

Why?
======from Cisco======

Minimum DHCP Snooping Configuration

The minimum configuration for DHCP snooping is as follows (see the sketch after these steps):

Step 1 Enable the DHCP feature. For more information, see the "Enabling or Disabling the DHCP Feature" section.
Step 2 Enable DHCP snooping globally. For more information, see the "Enabling or Disabling DHCP Snooping Globally" section.
Step 3 Enable DHCP snooping on at least one VLAN. For more information, see the "Enabling or Disabling DHCP Snooping on a VLAN" section. By default, DHCP snooping is disabled on all VLANs.
Step 4 Ensure that the DHCP server is connected to the device using a trusted interface. For more information, see the "Configuring an Interface as Trusted or Untrusted" section.
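Put together, a minimal sketch of those four steps (VLAN 100 and interface Eth1/1 are hypothetical; Eth1/1 is the port facing the DHCP server):

switch(config)# feature dhcp
switch(config)# ip dhcp snooping
switch(config)# ip dhcp snooping vlan 100
switch(config)# interface ethernet 1/1
switch(config-if)# ip dhcp snooping trust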

what if I disable the configuration globally?

Enabling or Disabling DHCP Snooping Globally


Use this procedure to globally enable or disable the DHCP snooping.

BEFORE YOU BEGIN


Before beginning this procedure, you must know or do the following:

• By default, DHCP snooping is globally disabled.


• If DHCP snooping is globally disabled, all DHCP snooping stops and no DHCP messages are relayed.
• If you configure DHCP snooping and then globally disable it, the remaining configuration is preserved.

What is a VOQ (virtual output queue)?

Good explanation: VOQs are implemented on all ingress interfaces and represent the egress buffers; the goal is to maximize throughput on a per-class basis (each ingress port queues traffic per egress destination, so one congested egress does not hold up traffic headed elsewhere).

what is "capture enablement"?


good reference from
here: http://www.cisco.com/en/US/products/ps9402/products_tech_note09186a0080c021d5.sh
tml

Access Control List (ACL) capture provides you the ability to selectively capturetraffic on an interface
or virtual local area network (VLAN) When you enable the capture option for an ACL rule, packets that
match this rule are either forwarded or dropped based on the specified permit or deny action and can
also be copied to an alternate destination port for further analysis. An ACL rule with the capture option
can be applied:

1. In a VLAN,
2. In the ingress direction on all interfaces,
3. In the egress direction on all Layer 3 interfaces.

ACL Configuration Example

Here is an example configuration of ACL capture applied to a VLAN, also known as VLAN Access Control List (VACL) capture. Dedicated 10-Gigabit sniffers may not be feasible for all scenarios; selective traffic capture can be very useful in such cases, especially during troubleshooting when traffic volumes are high.
!! Global command required to enable ACL-capture feature (on default VDC)
hardware access-list capture

monitor session 1 type acl-capture


destination interface ethernet 2/1
no shut
exit
!!
ip access-list TEST_ACL
10 permit ip 216.113.153.0/27 any capture session 1
20 permit ip 198.113.153.0/24 any capture session 1
30 permit ip 47.113.0.0/16 any capture session 1
40 permit ip any any
!!
!! Note: Capture session ID matches with the monitor session ID
!!
vlan access-map VACL_TEST 10
match ip address TEST_ACL
action forward
statistics per-entry
!!
vlan filter VACL_TEST vlan-list 500
You can also check the ternary content addressable memory (TCAM) programming of the access list. This output is for VLAN 500 on Module 1.
You can also check the ternary content addressable memory (TCAM) programming of the access list.
This output is for the VLAN 500 for Module 1.

N7k2-VPC1# show system internal access-list vlan 500 input statistics

slot 1
=======

INSTANCE 0x0
---------------

Tcam 1 resource usage:


----------------------
Label_b = 0x802
Bank 0
------
IPv4 Class
Policies: VACL(VACL_TEST)
Netflow profile: 0
Netflow deny profile: 0
Entries:
[Index] Entry [Stats]
---------------------
[0006:0005:0005] permit ip 216.113.153.0/27 0.0.0.0/0 capture [0]
[0009:0008:0008] permit ip 198.113.153.0/24 0.0.0.0/0 capture [0]
[000b:000a:000a] permit ip 47.113.0.0/16 0.0.0.0/0 capture [0]
[000c:000b:000b] permit ip 0.0.0.0/0 0.0.0.0/0 [0]
[000d:000c:000c] deny ip 0.0.0.0/0 0.0.0.0/0 [0]

NXOS SNMPv3 Security Features


SNMPv3 provides secure access to devices by a combination of authenticating and encrypting frames over the network. The security features provided in SNMPv3 are the following (a configuration sketch follows):

• Message integrity—Ensures that a packet has not been tampered with in-transit.

• Authentication—Determines the message is from a valid source.

• Encryption—Scrambles the packet contents to prevent it from being seen by unauthorized sources.

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/sm_snmp.html
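As an illustration, a minimal sketch of creating an SNMPv3 user with both authentication and encryption (the username and passphrases are hypothetical):

switch(config)# snmp-server user netmon auth sha Auth-Pass-123 priv aes-128 Priv-Pass-456

Verify the result with show snmp user.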

PVLAN
PVLANs provide Layer 2 isolation between ports within the same broadcast domain. There are three types of PVLAN ports (a configuration sketch follows the list):

 Promiscuous— A promiscuous port can communicate with all interfaces, including the isolated and
community ports within a PVLAN.
 Isolated— An isolated port has complete Layer 2 separation from the other ports within the same
PVLAN, but not from the promiscuous ports. PVLANs block all traffic to isolated ports except
traffic from promiscuous ports. Traffic from isolated port is forwarded only to promiscuous ports.
 Community— Community ports communicate among themselves and with their promiscuous
ports. These interfaces are separated at Layer 2 from all other interfaces in other communities or
isolated ports within their PVLAN.
http://www.cisco.com/en/US/tech/tk389/tk814/tk840/tsd_technology_support_sub-protocol_home.html
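A minimal configuration sketch tying the three port types together (the VLAN IDs and interfaces are hypothetical: 100 is the primary, 101 isolated, 102 community; Eth1/1 is a host port and Eth1/2 the promiscuous port):

switch(config)# feature private-vlan
switch(config)# vlan 101
switch(config-vlan)# private-vlan isolated
switch(config-vlan)# vlan 102
switch(config-vlan)# private-vlan community
switch(config-vlan)# vlan 100
switch(config-vlan)# private-vlan primary
switch(config-vlan)# private-vlan association 101-102
switch(config-vlan)# exit
switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode private-vlan host
switch(config-if)# switchport private-vlan host-association 100 101
switch(config-if)# interface ethernet 1/2
switch(config-if)# switchport mode private-vlan promiscuous
switch(config-if)# switchport private-vlan mapping 100 101-102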

GSS failure in a cluster


In the event of a GSS failure in a cluster, there should be no disruption; the remaining GSS will respond to any new query.

Below is some info from this good link from Cisco:

Anycast is a Cisco IOS network routing feature that provides a Layer 3 network virtual address. The GSS can leverage
this network-wide virtual address to provide GSS redundancy.

A single anycast address can represent the entire GSS cluster by allowing the mapping of the GSS anycast loopback
address to the virtual network-wide anycast address.

The network-wide anycast address can represent up to 16 GSS devices in a single cluster or multiple GSS clusters.

A failure of any GSS behind the anycast address is transparent to the end user. Also, since anycast leverages the
network's routing tables, the traffic destined to the GSS is based on routing metrics.

Anycast works with the routing topology to route data to the nearest or best destination. Anycast has a one-to-
many association between network addresses and network endpoints, which means that each destination address
identifies a set of receiver endpoints, only one of which receives information from a sender at any time.

Syslog reporting a FEX power supply failure, but the power supply is actually working?

False alarm. Alerts like the following are due to a cosmetic bug, CSCtl77867. There is no functional impact, and it is fixed in 5.0(3)N2(1).

%SATCTRL-FEX100-2-SOHMS_DIAG_ERROR: FEX-100 Module 1: Runtime diag detected major event: Voltage failure on power supply: 2

%SATCTRL-FEX100-2-SOHMS_DIAG_ERROR: FEX-100 System minor alarm on power supply 2: failed

%SATCTRL-FEX100-4-SOHMS_PS_GPIO: FEX-100 System PS access failure on Power supply: 2

%SATCTRL-FEX100-2-SOHMS_DIAG_ERROR: FEX-100 Recovered: System minor alarm on power supply 2: failed

Also, here is the link to the recommended minimum release for Nexus 5000:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/recommended_releases/recommended_nx-os_releases.html

• Cisco NX-OS Release 5.2(1)N1(4) is the recommended release for general features and functions.

• Cisco NX-OS Release 5.1(3)N2(1c) is the minimum recommended release for general features and functions.

2. Before considering an upgrade, it is recommended to go through the release notes to see whether the new version suits your environment: http://www.cisco.com/en/US/products/ps9670/prod_release_notes_list.html

3. If you decide to upgrade, please follow the guidelines in the upgrade guides: http://www.cisco.com/en/US/products/ps9670/prod_installation_guides_list.html

Interface ***** is down (Error disabled. Reason: BPDUGuard)

ERROR:
Interface ***** is down (Error disabled. Reason: BPDUGuard)

Reason:
The Cisco document explains it all:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/troubleshooting/guide/n5K_ts_l2.html#wp1026440

Basically, it is because a FEX host port is connected to a non-host device that is sending out BPDUs. It is not best practice to connect a switch to a FEX.

Spanning Tree Protocol

HIFs go down with the BPDUGuard errDisable message



HIFs go down accompanied by the message BPDUGuard errDisable.

Possible Cause

By default, the HIFs are in STP edge mode with the BPDU guard enabled. This means that the HIFs are supposed to be connected to hosts or non-switching devices. If they are connected to a non-host device/switch that is sending BPDUs, the HIFs become error-disabled upon receiving a BPDU.

Solution

Enable the BPDU filter on the HIF and on the peer connecting device (a sketch follows the commands below). With the filter enabled, the HIFs do not send or receive any BPDUs. Use the following commands to confirm the details of the STP port state for the port:

• show spanning-tree interface <id> detail

• show spanning-tree interface <id>
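For reference, a minimal sketch of the HIF-side workaround (the FEX interface ID is hypothetical; remember the filter must also be enabled on the peer device):

switch(config)# interface ethernet 101/1/1
switch(config-if)# spanning-tree bpdufilter enable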


A cosmetic bug when the N5K is booting up:

"%KERN-3-SYSTEM_MSG: proton_sys_dom_info_read(): dom read failed err 0x426e000e [err resp type 0xfffe Cache Not Valid] - kernel"

This message can appear a few times while the N5K is booting up; it is a cosmetic bug and has no functional impact.

FEX port status is "down, Incompatible-Topology"

Sometimes FEX errors are due to vPC issues, and the actual cause is not obvious.

Topology:

Dual-homed FEXes with vPC.

Symptoms:

1. FEX fabric ports showed "down NO SFP"; after a shut/no shut of the port, the status is "down, Incompatible-Topology".

n5k# show int e 1/18

Ethernet1/18 is down (SFP not inserted)

after shut/no shut of the fex fabric port, below:

n5k(config-if)# int e1/18

n5k(config-if)# shut

n5k(config-if)# no shut

n5k(config-if)#

n5k(config-if)#

n5k(config-if)#

n5k(config-if)# ************* n5k %$ VDC-1 %$ %FEX-2-FEX_PORT_STATUS_CRIT: Uplink-ID 2 of Fex 112 that is connected with Ethernet1/18 changed its status from Fabric Up to Incompatible-Topology

n5k(config-if)# show int fex

Fabric Fabric Fex FEX

Fex Port Port State Uplink Model Serial

---------------------------------------------------------------

121 Eth1/10 Incompatible-Topology 4 N2K-C2248TP-1GE ******

111 Eth1/12 Incompatible-Topology 4 N2K-C2148T-1GE ******

122 Eth1/15 Configured 0

122 Eth1/16 Configured 0

112 Eth1/17 Configured 0

112 Eth1/18 Incompatible-Topology 2 N2K-C2148T-1GE ******

2. All vPCs down.

Cause:

"show vpc brief" shows this:

"vPC type-1 configuration incompatible - STP global loop guard inconsistent"

We checked the configuration on both sides: adding "spanning-tree loopguard default" on the side where it was missing brought all the vPCs back up.

Renumber a FEX/Nexus 2000 on an N5K

Problem:

Why does the FEX go offline when I change the FEX number?

Possible causes:

1. Sometimes it can take a few minutes for the FEX to transition from offline to online.
2. Did you apply the configuration on both sides if it is dual-homed?

=====my lab testing====

=======Single home FEX: ===========

N5K----FEX

The Fex comes back online within a minute after renumbering:

===before renumbering========

33.03-5548UP# show fex

FEX FEX FEX FEX

Number Description State Model Serial

------------------------------------------------------------------------

108 FEX0108 Online N2K-C2148T-1GE FOX1337GEYD

148 FEX148 Online N2K-C2248TP-1GE SSI143407R2

==========renumbering===========

33.03-5548UP(config-if)# int po148

33.03-5548UP(config-if)# no fex associate 148

33.03-5548UP(config-if)# no fex 148

33.03-5548UP(config)# int po148

33.03-5548UP(config-if)# fex associate 149

33.03-5548UP(config-if)# fex 149

========after renumbering=========

33.03-5548UP(config-fex)# show fex

FEX FEX FEX FEX

Number Description State Model Serial

------------------------------------------------------------------------

108 FEX0108 Online N2K-C2148T-1GE FOX1337GEYD


149 FEX0149 Registered N2K-C2248TP-1GE SSI143407R2

…………………..

33.03-5548UP(config-fex)# show fex

FEX FEX FEX FEX

Number Description State Model Serial

------------------------------------------------------------------------

108 FEX0108 Online N2K-C2148T-1GE FOX1337GEYD

149 FEX0149 Online N2K-C2248TP-1GE SSI143407R2

===dual homed FEX====

One fex goes to both switches running vPC.

The fex comes back online within minutes as well after applying changes on both sides.

===before change ====



33.02-5596UP# show fex

FEX FEX FEX FEX

Number Description State Model Serial

------------------------------------------------------------------------

108 FEX0108 AA Version Mismatch N2K-C2148T-1GE FOX1337GEYD  <-- please ignore this state; we were running different versions on the two N5K switches for testing

132 Eth1/14 Online N2K-C2232PP-10GE SSI1350063S

======renumbering on both sides ========

33.02-5596UP(config)# int po108

33.02-5596UP(config-if)# no fex associate

33.02-5596UP(config-if)#

33.02-5596UP(config-if)# no fex 108

33.02-5596UP(config)#

33.02-5596UP(config)#

33.02-5596UP(config)# int po108

33.02-5596UP(config-if)# fex associate 109

33.02-5596UP(config-if)#

33.02-5596UP(config-if)# fex 109

========after change=========

33.02-5596UP(config-if)# show fex

FEX FEX FEX FEX

Number Description State Model Serial

------------------------------------------------------------------------

109 FEX0109 Online Sequence N2K-C2148T-1GE FOX1337GEYD

132 Eth1/14 Online N2K-C2232PP-10GE SSI1350063S

33.02-5596UP(config-if)# show fex

FEX FEX FEX FEX

Number Description State Model Serial

------------------------------------------------------------------------

109 FEX0109 Online N2K-C2148T-1GE FOX1337GEYD

132 Eth1/14 Online N2K-C2232PP-10GE SSI1350063S


Enhanced vPC

According to the 5.1(3)N1(1) release notes:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/notes/Rel_5_1_3_N1_1/Nexus5000_Release_Notes_5_1_3_N1.html#wp387585

Enhanced vPC is supported from 5.1.3 onwards.

The most important thing to remember about Enhanced vPC is that you don't need to assign a vPC number; the system will automatically assign one.

Configuration example (from the Cisco config guide):

Deploying and Monitoring Enhanced vPC

Enhanced vPC Configuration

In the Enhanced vPC topology, the FEXes are virtual line cards and the FEX front-panel ports are mapped to virtual interfaces on a parent Cisco Nexus 5000 Series device. From the CLI perspective, the configuration of Enhanced vPC is the same as a regular port channel with member ports from two FEXes. You do not have to enter the vpc <vpc-id> CLI command to create an Enhanced vPC. An example of how to create an Enhanced vPC follows.

The following procedure uses the topology in Figure 6-10. In the figure, the number next to a line is the interface ID. Assuming all ports are base ports, interface ID 2 represents interface eth1/2 on the Cisco Nexus 5000 Series device.

Figure 6-10 Creating an Enhanced vPC Topology



Configuration on the first Cisco Nexus 5000 Series device:

N5k-1(config)# interface eth101/1/1, eth101/1/2

N5k-1(config-if)# channel-group 2 mode active

N5k-1(config-if)# interface eth102/1/1, eth102/1/2

N5k-1(config-if)# channel-group 2 mode active

Configuration on the second Cisco Nexus 5000 Series device:

N5k-2(config)# interface eth101/1/1, eth101/1/2

N5k-2(config-if)# channel-group 2 mode active

N5k-2(config-if)# interface eth102/1/1, eth102/1/2

N5k-2(config-if)# channel-group 2 mode active

Although the vpc <vpc-id> command is not required, the software assigns an internal vPC ID to each Enhanced vPC. The output of the show vpc command displays this internal vPC ID.

Step 1 Enable a vPC and LACP.

N5k-1(config)# feature vpc

N5k-1(config)# feature lacp

N5k-2(config)# feature vpc

N5k-2(config)# feature lacp

Step 2 Create VLANs.

N5k-1(config)# vlan 10-20

N5k-2(config)# vlan 10-20



Step 3 Assign the vPC domain ID and configure the vPC peer keepalive.

N5k-1(config)# vpc domain 123

N5k-1(config-vpc)# peer-keepalive destination 172.25.182.100

N5k-2(config)# vpc domain 123

N5k-2(config-vpc)# peer-keepalive destination 172.25.182.99

Step 4 Configure the vPC peer-link.

N5k-1(config)# interface eth1/1-2

N5k-1(config-if)# channel-group 1 mode active

N5k-1(config-if)# interface Po1

N5k-1(config-if)# switchport mode trunk

N5k-1(config-if)# switchport trunk allowed vlan 1, 10-20

N5k-1(config-if)# vpc peer-link

N5k-2(config)# interface eth1/1-2

N5k-2(config-if)# channel-group 1 mode active

N5k-2(config-if)# interface Po1

N5k-2(config-if)# switchport mode trunk



N5k-2(config-if)# switchport trunk allowed vlan 1, 10-20

N5k-2(config-if)# vpc peer-link

Step 5 Configure FEX 101.

N5k-1(config)# fex 101

N5k-1(config-fex)# interface eth1/3-4

N5k-1(config-if)# channel-group 101

N5k-1(config-if)# interface po101

N5k-1(config-if)# switchport mode fex-fabric

N5k-1(config-if)# vpc 101

N5k-1(config-if)# fex associate 101

N5k-2(config)# fex 101

N5k-2(config-fex)# interface eth1/3-4

N5k-2(config-if)# channel-group 101

N5k-2(config-if)# interface po101

N5k-2(config-if)# switchport mode fex-fabric

N5k-2(config-if)# vpc 101

N5k-2(config-if)# fex associate 101



Step 6 Configure FEX 102.

N5k-1(config)# fex 102

N5k-1(config-fex)# interface eth1/5-6

N5k-1(config-if)# channel-group 102

N5k-1(config-if)# interface po102

N5k-1(config-if)# switchport mode fex-fabric

N5k-1(config-if)# vpc 102

N5k-1(config-if)# fex associate 102

N5k-2(config)# fex 102

N5k-2(config-fex)# interface eth1/5-6

N5k-2(config-if)# channel-group 102

N5k-2(config-if)# interface po102

N5k-2(config-if)# switchport mode fex-fabric

N5k-2(config-if)# vpc 102

N5k-2(config-if)# fex associate 102

Step 7 Create Enhanced vPC.

N5k-1(config)# interface eth101/1/1, eth101/1/2



N5k-1(config-if)# channel-group 2 mode active

N5k-1(config-if)# interface eth102/1/1, eth102/1/2

N5k-1(config-if)# channel-group 2 mode active

N5k-1(config-if)# int po2

N5k-1(config-if)# switchport access vlan 10

N5k-2(config)# interface eth101/1/1, eth101/1/2

N5k-2(config-if)# channel-group 2 mode active

N5k-2(config-if)# interface eth102/1/1, eth102/1/2

N5k-2(config-if)# channel-group 2 mode active

N5k-2(config-if)# int po2

N5k-2(config-if)# switchport access vlan 10

As shown in the above procedure, the Enhanced vPC configuration is the same configuration as when you configure
the host port channel with channel members from the same FEX.

N5K silent reload, reason unknown

A Nexus 5500 switch might reload with a reset reason of "unknown". The most common cause of an unknown reset reason is loss of power to the switch. Make sure the switch has dual power supplies connected to different power distribution units (PDUs) and that power to the switch is stable.

If power to the switch is stable, contact TAC to see if it is this bug:
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCub11616

vPC sync failed after switch reboot?

Symptom:

When the primary Nexus is reloaded, the secondary takes over fine, but when the primary comes back up the vPC is not synced.

vPC is down, the peer link is down, and the keepalive is down.

"show vpc" will show that the peer link is down and the keepalive is down. "show vpc consistency-parameters global" will show an out-of-sync condition: the switch can't see the information on the peer switch and thus suspends the VLANs, breaking connectivity.

Cause
A common mistake is to configure a VLAN interface (SVI) as the peer keepalive and then allow only that VLAN on the peer-link; this violates the rule that the peer link and the keepalive link must be physically separated.

FIX:

It is recommended to use the mgmt0 link as the keepalive, or to use a different physical port (not the peer link), associate it with a VLAN interface, and not allow this VLAN over the peer link. See the sketch below.
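A minimal sketch of the recommended mgmt0-based keepalive (the IP addresses are hypothetical; each side points at its peer's mgmt0 address):

N5K-1(config)# vpc domain 1
N5K-1(config-vpc-domain)# peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management

N5K-2(config)# vpc domain 1
N5K-2(config-vpc-domain)# peer-keepalive destination 192.168.1.1 source 192.168.1.2 vrf management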

Nexus 5000: Storm control on FEX HIF (Host Interface)?


Q: How come I can't configure storm control under FEX interfaces?

Short answer: it is not supported yet, but it is coming.

Long answer:

1. The N5K/N2K doesn't support HIF storm control at the moment. In the configuration guide for 5.2(1)N1(2), here is the link about the limitations of storm control: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/521_n1_2/b_5k_Layer2_Config_521N12_chapter_01111.html

2. HIF storm control is on the roadmap.

3. A similar discussion can be seen here: https://supportforums.cisco.com/thread/2100010

4. This bug was filed to request storm control at the NIF level: http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCtj01900

5. NIF storm control is supported on 55XX switches in a later release: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/notes/Rel_5_2_1_N1_1/Nexus5000_Release_Notes_5_2_1_N1.html
6. The "storm-control" command is supported in 5.2.1.N1(2) at the FEX NIF (network interface / fabric port) level; however, if the physical port is in a port channel, the storm-control configuration needs to be applied at the port-channel level (see the sketch below).
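A minimal sketch of NIF-level storm control applied to the FEX fabric port channel (the port-channel number and thresholds are hypothetical):

switch(config)# interface port-channel 101
switch(config-if)# storm-control broadcast level 20.00
switch(config-if)# storm-control multicast level 20.00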

7. If you are running 5.2.1 N11, you may see the "storm-control ***" command available under a HIF; however, it will return an error when you try to apply it:
ERROR: storm control not supported for fex port/PC

You may wonder: if the command is not supported, why is it available? It appears to be just a placeholder; some hardware/software combinations actually clear this up, so the command is not even available.

From the lab, it looks like, depending on the switch and FEX versions, the "storm-control" command is sometimes available under a HIF, but under all current versions it won't take effect.

The command is not available on a Nexus 5596 running 5.2.1.12a with FEX N2K-C2232PP-10GE. It is available for the N2K-C2148T-1GE.

Introduction

What command is used to verify the "HSRP Active State" on a Nexus 7000 Series Switch?

On a Nexus 7018, when trying to perform a 'no shut' on Ethernet 1/3, the ERROR: Ethernet1/3: Config
not allowed, as first port in the port-grp is dedicated error message is received.

What is vPC and what are its benefits?

Why does vPC not block either of the vPC uplinks?

How do I create a peer link for VDC and a keepalive link for each VDC?

What does the %EEM_ACTION-6-INFORM: Packets dropped due to IDS check length consistent on
module message mean?

How do I verify the features enabled on Nexus 7000 Series Switch with NX-OS 4.2?

Is there a tool available for configuration conversion on Cisco 6500 series to the Nexus platform?

How many syslog servers can be added to a Nexus 7000 Series Switch?

Is the Nexus 7010 vPC feature (LACP enabled) compatible with the Cisco ASA EtherChannel feature and with the ACE 4710 EtherChannel?

What are orphan ports?

How many OSPF processes can be run in a virtual device context (VDC)?

Which Nexus 7000 modules support Fibre Channel over Ethernet (FCoE)?

What is the minimum NX-OS release required to support FCoE in the Nexus 7000 Series Switches?

On a Nexus, is the metric-type keyword not available in the "default-information originate" command?

How do I redistribute connected routes into an OSPF instance on a Nexus 7010 with a defined metric?

What is the equivalent NX-OS command for the "ip multicast-routing" IOS command, and does the
Nexus 7000 support PIM-Sparse mode?

When I issue the "show ip route bgp" command, I see my routes being learned via OSPF and BGP. How
can I verify on the NX-OS which one will always be used and which one is a backup?

How do I avoid receiving the "Failed to process kickstart image. Pre-Upgrade check failed" error
message when upgrading the image on a Nexus 7000 Series Switch?

How can I avoid receiving the "Configuration does not match the port capability" error message when
enabling "switchport mode fex-fabric"?

When I issue the "show interface counters errors" command, I see that one of the interfaces is
consistently posting errors. What are the FCS-Err and Rcv-Err in the output of the "show interface
counters errors" command?

How do I enable/disable logging of link status on a per-port basis on a Nexus 7000 Series Switch?

On a Nexus 7000 running NX-OS 5.1(3), can DECnet be bridged on a VLAN?

How do I check the Network Time Protocol (NTP) status on a Nexus 7000 Series Switch?

How do I capture the output of the show tech-support details?

Can a Nexus 7000 be a DHCP server and can it relay DHCP requests to different DHCP servers per
VLAN?

How do I verify if XL mode is enabled on a Nexus 7000 device?

How do I implement VTP in a Nexus 7000 Series Switch where VLANs are manually configured?

Is there a best practice for port-channel load balancing between Nexus 1000V Series and Nexus 7000
Series Switches?

During Nexus 7010 upgrade from 5.2.1 to 5.2.3 code, the X-bar module in slot 4 keeps powering off.
The %MODULE-2-XBAR_DIAG_FAIL: Xbar 4 reported failure due to Module asic(s) reported sync loss
(DevErr is LinkNum). Trying to Resync in device 88 (device error 0x0) error message is received.

What does the %OC_USD-SLOT18-2-RF_CRC: OC2 received packets with CRC error from MOD 6
through XBAR slot 5/inst 1 error message mean?

How do I verify packet drops on a Nexus 7000 Switch?

Related Information

Introduction
This document addresses the most frequently asked questions (FAQ) associated with Cisco Nexus 7000 Series
Switches.
Refer to Cisco Technical Tips Conventions for more information on document conventions.
Q. What command is used to verify the "HSRP Active State" on a Nexus 7000 Series
Switch?
A. The command is show hsrp active or show hsrp brief.
Nexus_7K# show hsrp br
P indicates configured to preempt.
|
Interface Grp Prio P State Active addr Standby addr Group addr
Vlan132 32 90 P Standby 10.101.32.253 local 10.101.32.254 (conf)
Vlan194 94 90 P Standby 10.101.94.253 local 10.101.94.254 (conf)
Vlan2061 61 110 P Active local 10.100.101.253 10.100.101.254 (conf)

Nexus_7K# show hsrp standb br


P indicates configured to preempt.
|
Interface Grp Prio P State Active addr Standby addr Group addr
Vlan132 32 90 P Standby 10.101.32.253 local 10.101.32.254 (conf)

Vlan194 94 90 P Standby 10.101.94.253 local 10.101.94.254 (conf)


Vlan196 96 90 P Standby 10.101.96.253 local 10.101.96.254 (conf)
Q. On a Nexus 7018, when trying to perform a 'no shut' on Ethernet 1/3, the ERROR:
Ethernet1/3: Config not allowed, as first port in the port-grp is dedicated error message is received.
A. The device thinks that the first port in the port-grp is in dedicated mode instead of shared mode. When the first
port of a port-grp is in dedicated mode, the other ports of the port-grp cannot be used.
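As a hedged sketch (port numbers hypothetical; on some modules the ports in the group must be shut
down before the rate mode can be changed), returning the first port of the group to shared mode frees
the other ports in the group:

switch(config)# interface ethernet 1/1
!--- Ethernet1/1 is assumed to be the first port of the port group.
switch(config-if)# shutdown
switch(config-if)# rate-mode shared
switch(config-if)# no shutdown
switch(config)# interface ethernet 1/3
switch(config-if)# no shutdown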
Q. What is vPC and what are its benefits?
A. Virtual PortChannel (vPC) is a port-channeling concept that extends link aggregation to two separate physical
switches.
Benefits of vPC include:
 Utilizes all available uplink bandwidth
 Allows the creation of resilient Layer 2 topologies based on link aggregation
 Eliminates the dependence on Spanning Tree Protocol in the Layer 2 access-distribution layer(s)
 Enables transparent server mobility and server high availability (HA) clusters
 Scales available Layer 2 bandwidth
 Simplifies network design
 Dual-homed servers can operate in active-active mode
 Faster convergence upon link failure
 Improves convergence time when a single device fails
 Reduces capex and opex

Q. Why does vPC not block either of the vPC uplinks?


A. The Nexus 7000 has a loop-prevention mechanism that drops traffic traversing the peer link when it is destined
for a vPC member port, provided there are no failed vPC ports or links. The rule is simple: if a packet crosses the vPC peer link, it may not go
out any port in a vPC, even if that vPC does not carry the original VLAN.
Q. How do I create a peer link for VDC and a keepalive link for each VDC?
A. Configure the vPC Keepalive Link and Messages
This example demonstrates how to configure the destination, source IP address, and VRF for the vPC-peer-
keepalive link:

switch# configure terminal


switch(config)# feature vpc
switch(config)# vpc domain 100
switch(config-vpc-domain)# peer-keepalive destination 172.168.1.2 source
172.168.1.1 vrf vpc-keepalive
Create the vPC Peer Link
This example demonstrates how to configure a vPC peer link:
switch# configure terminal
switch(config)# interface port-channel 20
switch(config-if)# vpc peer-link
switch(config-vpc-domain)#
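The vrf vpc-keepalive referenced above must exist before it can be used, and the result can then be
checked; a minimal sketch using the names from the example:

switch(config)# vrf context vpc-keepalive
!--- After both peers are configured, verify from exec mode:
switch# show vpc peer-keepalive
switch# show vpc brief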
Q. What does the %EEM_ACTION-6-INFORM: Packets dropped due to IDS check length consistent on
module message mean?
A. Cisco NX-OS supports Intrusion Detection System (IDS) checks that validate IP packets to ensure proper
formatting. This is an enhancement beginning in 5.x. The EEM message is being logged because a packet is
received by the switch where the Ethernet frame size is shorter than the expected length to include the IP packet
length plus the Ethernet header. The packet is dropped by the hardware due to this condition.
In order to verify that IDS drops have occurred since the last switch reboot, issue the show hardware forwarding ip
verify module [#] command.
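For example (the module number here is hypothetical):

switch# show hardware forwarding ip verify module 4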
Q. How do I verify the features enabled on Nexus 7000 Series Switch with NX-OS 4.2?
A. Issue the show feature command in order to verify.
switch-N7K# show feature
Feature Name Instance State
-------------------- -------- --------
tacacs 1 enabled
scheduler 1 enabled
isis 2 disabled
isis 3 disabled
isis 4 disabled
ospf 1 enabled
ospf 2 disabled
ospf 3 disabled

switch-N7K# show run | include feature


feature vrrp
feature tacacs+
feature scheduler
feature ospf
feature bgp
feature pim
feature pim6
feature eigrp
feature pbr
feature private-vlan
feature udld
feature interface-vlan
feature netflow
feature hsrp
feature lacp
feature dhcp
feature tunnel
Q. Is there a tool available for configuration conversion on Cisco 6500 series to the Nexus
platform?
A. Cisco has developed the IOS-NXOS Migration Tool for quick configuration conversion on Cisco 6500 series to
the Nexus series OS.
Q. How many syslog servers can be added to a Nexus 7000 Series Switch?

A. A maximum of three syslog servers can be configured.


Q. Is the Nexus 7010 vPC feature (LACP enabled) compatible with the Cisco ASA etherchannel
feature and with ACE 4710 etherchannel?
A. With respect to vPC, any device that runs LACP (which is a standard) is compatible with the Nexus 7000,
including the ASA and ACE.
Q. What are orphan ports?
A. Orphan ports are single-attached devices that are not connected via a vPC but still carry vPC VLANs. When the
peer link is shut or restored, an orphan port's connectivity may be tied to the vPC failure or
restoration process. Issue the show vpc orphan-ports command in order to identify the impacted ports and VLANs.
Q. How many OSPF processes can be run in a virtual device context (VDC)?
A. There can be up to four (4) instances of OSPFv2 in a VDC.
Q. Which Nexus 7000 modules support Fibre Channel over Ethernet (FCoE)?
A. The Cisco Nexus 7000 Series 32-Port 1 and 10 Gigabit Ethernet Module supports FCoE. The part number of the
module is N7K-F132XP-15.
Q. What is the minimum NX-OS release required to support FCoE in the Nexus 7000
Series Switches?
A. FCoE is supported on Cisco Nexus 7000 Series systems running Cisco NX-OS Release 5.2 or later.
Q. On a Nexus, is the metric-type keyword not available in the "default-information
originate" command?
A. On a Nexus, use a route-map with a set clause of metric-type type-{1|2} in order to get the same
functionality as the IOS default-information originate always metric-type {1|2} command.
For example:
switch(config)#route-map STAT-OSPF permit 10
switch(config-route-map)#match interface ethernet 1/2
switch(config-route-map)#set metric-type {external | internal | type-1 | type-2}
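The route-map is then attached when originating the default route; a sketch assuming OSPF instance 1:

switch(config)# router ospf 1
switch(config-router)# default-information originate always route-map STAT-OSPF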
Q. How do I redistribute connected routes into an OSPF instance on a Nexus 7010 with a
defined metric?
A. In NX-OS, a route-map is always required when redistributing routes into an OSPF instance, and the same
route-map is used to set the metric. Furthermore, subnets are redistributed by default, so you do not have to add
the subnets keyword.
For example:
switch(config)#access-list 101 permit ip <connected network> <wildcard> any
!--- Repeat the permit entry for each connected network.
switch(config)#access-list 101 deny ip any any
!
switch(config)#route-map direct2ospf permit 10
switch(config-route-map)#match ip address 101
switch(config-route-map)#set metric 100
switch(config-route-map)#set metric-type type-1


!
switch(config)#router ospf 1
switch(config-router)#redistribute direct route-map direct2ospf
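Note that NX-OS route-maps used for redistribution commonly match on prefix-lists rather than ACLs;
a hedged equivalent sketch (the prefix-list name and network are hypothetical):

switch(config)# ip prefix-list CONNECTED-NETS seq 5 permit 192.0.2.0/24
switch(config)# route-map direct2ospf permit 10
switch(config-route-map)# match ip address prefix-list CONNECTED-NETS
switch(config-route-map)# set metric 100
switch(config-route-map)# set metric-type type-1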
Q. What is the equivalent NX-OS command for the "ip multicast-routing" IOS command,
and does the Nexus 7000 support PIM-Sparse mode?
A. The command is feature pim. In NX-OS, multicast is enabled only after enabling the PIM or PIM6 feature on
each router and then enabling PIM or PIM6 sparse mode on each interface that you want to participate in multicast.
For example:

switch(config)#feature pim
switch(config)#interface Vlan[536]
switch(config-if)#ip pim sparse-mode
See Cisco Nexus 7000 Series NX-OS Multicast Routing Configuration Guide, Release 5.x for a complete
configuration guide.
Q. When I issue the "show ip route bgp" command, I see my routes being learned via
OSPF and BGP. How can I verify on the NX-OS which one will always be used and which
one is a backup?
A. Here is what is received:
Nexus_7010#show ip route bgp
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

172.20.62.0/23, ubest/mbest: 1/0


*via 10.194.160.2, [20/0], 18:53:35, bgp-[AS-Number], internal, tag [Number]
via 10.194.16.5, Vlan116, [110/1043], 18:43:51, ospf-1, intra
172.20.122.0/23, ubest/mbest: 1/0
*via 10.194.160.2, [20/0], 18:53:35, bgp-[AS-Number], internal, tag [Number]
via 10.194.16.5, Vlan116, [110/1041], 18:43:51, ospf-1, intra
By default, BGP selects only a single best path and does not perform load balancing. As a result, the route marked
with the * (here the BGP route, with preference 20, versus the OSPF route at 110) is always used unless it goes
down, at which point the remaining route becomes the preferred path.
Q. How do I avoid receiving the "Failed to process kickstart image. Pre-Upgrade check
failed" error message when upgrading the image on a Nexus 7000 Series Switch?
A. One potential reason for receiving this error message is if the file name specified is not correct.
For example:
switch#install all kickstart bootflash:n7000-sl-kickstart.5.1.1a.bin system
bootflash:n7000-sl-dk9.5.1.1a.bin
In this example, the file name contains "sl" (lowercase letter l) instead of "s1" (number 1).
Q. How can I avoid receiving the "Configuration does not match the port capability" error
message when enabling "switchport mode fex-fabric"?
A. This error message is generated because the port is not FEX capable:
N7K-2(config)#interface ethernet 9/5
N7K-2(config-if)#switchport mode fex-fabric
ERROR: Ethernet9/5: Configuration does not match the port capability
In order to resolve this problem, check the port capabilities by using the show interface ethernet command.
For example:
N7K-2#show interface ethernet 9/5 capabilities
Ethernet9/5
Model: N7K-M132XP-12
Type (SFP capable): 10Gbase-(unknown)
Speed: 10000
Duplex: full
Trunk encap. type: 802.1Q
Channel: yes
Broadcast suppression: percentage(0-100)
Flowcontrol: rx-(off/on),tx-(off/on)
Rate mode: shared
QOS scheduling: rx-(8q2t),tx-(1p7q4t)
CoS rewrite: yes
ToS rewrite: yes
SPAN: yes
UDLD: yes

Link Debounce: yes


Link Debounce Time: yes
MDIX: no
Pvlan Trunk capable: no
Port Group Members: 1,3,5,7
TDR capable: no
FabricPath capable: no
Port mode: Routed,Switched
FEX Fabric: no
dot1Q-tunnel mode: yes
From this output of the show interface ethernet 9/5 capabilities command, you can see FEX Fabric: no. This
verifies that the port is not FEX capable. In order to resolve this problem, upgrade the EPLD images to Cisco NX-
OS Release 5.1(1) or later.
Q. When I issue the "show interface counters errors" command, I see that one of the
interfaces is consistently posting errors. What are the FCS-Err and Rcv-Err in the output
of the "show interface counters errors" command?
A. Here is what is received:
Nexus-7000#show interface counters errors

----------------------------------------------------------------------------
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
----------------------------------------------------------------------------
Eth1/1 0 26 0 26 0 0
With FCS-Err and Rcv-Err, it is usually an indication that you are receiving corrupt packets.
Q. How do I enable/disable logging of link status on a per-port basis on a Nexus 7000 Series
Switch?
A. All interface link status (up/down) messages are logged by default. Link status events can be configured
globally or per interface. The interface command enables link status logging messages for a specific interface.
For example:
N7k(config)#interface ethernet x/x
N7k(config-if)#logging event port link-status
Q. On a Nexus 7000 running NX-OS 5.1(3), can DECnet be bridged on a VLAN?
A. All of the Nexus platforms support passing DECnet frames through the device from a Layer 2 perspective.
However, there is no support for routing DECnet on the Nexus.
Q. How do I check the Network Time Protocol (NTP) status on a Nexus 7000 Series
Switch?
A. In order to display the status of the NTP peers, issue the show ntp peer-status command:
switch#show ntp peer-status

Total peers : 1

* - selected for sync, + - peer mode(active),

- - peer mode(passive), = - polled in client mode

remote local st poll reach delay vrf

-------------------------------------------------------------------------------

*10.1.10.5 0.0.0.0 1 64 377 0.00134 default


Q. How do I capture the output of the show tech-support details?
A. Issue the tac-pac bootflash://<filename> command in order to redirect the output of the show tech command to
a file, and then gzip the file.
For example:

switch#tac-pac bootflash://showtech.switch1
Issue the copy bootflash://showtech.switch1 tftp://<server IP>/<path> command in order to copy the file from
bootflash to the TFTP server.
For example:
switch#copy bootflash://showtech.switch1 tftp://<server IP>/<path>
Q. Can a Nexus 7000 be a DHCP server and can it relay DHCP requests to different DHCP
servers per VLAN?
A. The Nexus 7000 does not support acting as a DHCP server, but it does support DHCP relay. Because relay
addresses are configured per interface, each VLAN interface can relay to a different DHCP server. For relay, use
the ip dhcp relay address x.x.x.x command under the interface.
See Cisco Nexus 7000 Series NX-OS Security Configuration Guide, Release 5.x for more information on
Dynamic Host Configuration Protocol (DHCP) on a Cisco NX-OS device.
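A minimal sketch (the VLANs and server addresses are hypothetical), relaying each VLAN to its own DHCP server:

switch(config)# feature dhcp
switch(config)# interface vlan 10
!--- SVIs additionally require feature interface-vlan to be enabled.
switch(config-if)# ip dhcp relay address 192.0.2.10
switch(config)# interface vlan 20
switch(config-if)# ip dhcp relay address 198.51.100.10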
Q. How do I verify if XL mode is enabled on a Nexus 7000 device?
A. The Scalable Feature License is the Nexus 7000 system license that enables the incremental table sizes
supported on the M-Series XL modules. Without the license, the system runs in standard mode, meaning none of
the larger table sizes are accessible. Having non-XL and XL modules in a system is supported, but the system then
runs in non-XL mode and falls back to the smallest common table sizes. For the system to run in XL mode, all
modules must be XL capable and the Scalable Feature License must be installed. If the XL and non-XL modules
are isolated using VDCs, then each VDC is considered a separate system and can run in a different mode.
In order to confirm whether the Nexus 7000 has the XL option enabled, first check whether the Scalable Feature
License is installed, and then confirm that every module in the system (or VDC) is XL capable.
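For example, the checks might look like this (the exact license package name varies by release, so treat
what you should look for as an assumption):

switch# show license usage
!--- Look for the scalable services/feature license entry.
switch# show module
!--- Confirm that the installed M-Series modules are the XL variants.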
Q. How do I implement VTP in a Nexus 7000 Series Switch where VLANs are manually
configured?
A. Cisco does not recommend running VTP in data centers. If someone attaches a switch with a higher
configuration revision number to the network without first changing its VTP mode from server, that switch's VLAN
database will override the VLAN configuration on the existing switches.
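If VTP must be enabled anyway, transparent mode avoids the database-override problem; a minimal sketch:

switch(config)# feature vtp
switch(config)# vtp mode transparent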
Q. Is there a best practice for port-channel load balancing between Nexus 1000V Series and
Nexus 7000 Series Switches?
A. There is no recommended best practice for load-balancing between the Nexus 1000V Series and Nexus 7000
Series Switches. You can choose either a flow-based or a source-based model depending on the network's
requirement.
Q. During Nexus 7010 upgrade from 5.2.1 to 5.2.3 code, the X-bar module in slot 4 keeps
powering off. The %MODULE-2-XBAR_DIAG_FAIL: Xbar 4 reported failure due to Module asic(s)
reported sync loss (DevErr is LinkNum). Trying to Resync in device 88 (device error 0x0) error message is
received.
A. This error message corresponds to a diagnostic failure reported against the crossbar module. It could be a bad
connection to the X-bar from the linecard, which results in the linecard being unable to sync. Typically with these
errors, the first step is to reseat the module. If that does not resolve the problem, reseat the fabric module and the
linecard individually.
Q. What does the %OC_USD-SLOT18-2-RF_CRC: OC2 received packets with CRC error from MOD 6
through XBAR slot 5/inst 1 error message mean?
A. These errors indicate that the octopus engine received frames that failed the CRC error checks. This can be
caused by multiple reasons. For example:
 Hardware problems:

o Bad links
o Backplane issues
o Sync losses
o Seating problems
 Software problems:
o Old FPGA images
o Frames forwarded to a linecard that it is unable to understand
Q. How do I verify packet drops on a Nexus 7000 Switch?
A. Verify the Rx Pause and TailDrops fields in the output of the show interface <x/y> and show hardware
internal errors module <module #> commands for the module with these ports.
For example:
Nexus7K#show interface e7/25
Ethernet7/25 is up

!--- Output suppressed

input rate 1.54 Kbps, 2 pps; output rate 6.29 Mbps, 3.66 Kpps
RX
156464190 unicast packets 0 multicast packets 585 broadcast packets
156464775 input packets 11172338513 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
7798999 Rx pause
TX
6365127464 unicast packets 6240536 multicast packets 2290164 broadcast packets
6373658164 output packets 8294188005962 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble
0 Tx pause
The pauses on e7/25 indicate that the server is having difficulty keeping up with the amount of traffic sent to it.
Nexus7k#show hardware internal errors module 2 | include
r2d2_tx_taildrop_drop_ctr_q3
37936 r2d2_tx_taildrop_drop_ctr_q3 0000000199022704 2-
37938 r2d2_tx_taildrop_drop_ctr_q3 0000000199942292 4-
37941 r2d2_tx_taildrop_drop_ctr_q3 0000000199002223 5-
37941 r2d2_tx_taildrop_drop_ctr_q3 0000000174798985 17 -
This indicates that the amount of traffic sent to these devices was too much for the interfaces to transmit. Since
each interface was configured as a trunk allowing all VLANs, and the multicast/broadcast traffic counters were low,
it appears that heavy unicast flooding may be causing the drops on these interfaces.
Related Information
 Cisco Nexus 7000 Series Switches: Support Page
 Fibre Channel over Ethernet (FCoE)
 Switches Product Support
 LAN Switching Technology Support
 Technical Support & Documentation - Cisco Systems

Contributed by Cisco Engineers


Product Positioning
Q. What is the Cisco Nexus® 5500 Platform?

A. The Cisco Nexus 5500 Platform is the next-generation platform of the Cisco Nexus 5000 Series Switches, helping
enable the industry’s highest density and performance purpose-built fixed form-factor switch on a multilayer,
multiprotocol, and multipurpose Ethernet-based fabric.

Q. What is the Cisco Nexus 5548P Switch?

A. The Cisco Nexus 5548P is the first switch in the Cisco Nexus 5500 Platform. It is offered in a one-rack-unit (1RU)
form factor and has 32 fixed 10 Gigabit Ethernet SFP+ ports with one expansion slot for added flexibility. At first
customer shipment (FCS), two expansion modules will be supported: a 16-port 10 Gigabit Ethernet SFP+ expansion
module and an 8-port 10 Gigabit Ethernet SFP+ plus 8-port native Fibre Channel expansion module.

Q. Where can the Cisco Nexus 5500 Platform be deployed?

A. The Cisco Nexus 5500 Platform is well suited for deployments in enterprise data center access layers and
small-scale, midmarket data center aggregation environments.

Q. Does this mean that the Cisco Nexus 5500 Platform will offer Layer 3 services?

A. Yes. The Cisco Nexus 5500 Platform, including the 5548P switch, will provide Layer 3 functionality via a
field-upgradeable module that is targeted for Q1CY11.

Q. Is Cisco announcing the end-of-sale of the current generation of Cisco Nexus 5000 Series Switches?

A. No. Cisco has no plans to announce the end-of-sale of the current Cisco Nexus 5000 Series Switches.

Fibre Channel and FCoE Support


Q. Does the Cisco Nexus 5548P support FCoE?

A. Yes. All 10 Gigabit Ethernet ports on the Cisco Nexus 5548 are capable of supporting FCoE. The Storage Protocol
Services License (SPS) is required to enable FCoE operation.

Q. How is FCoE enabled on the Cisco Nexus 5548P?

A. Similar to on the first generation Nexus 5000 Series Switches, FCoE is an optional feature delivered via the Storage
Protocol Services (SPS) license on the Nexus 5548P switch. However, unlike on the first generation Nexus 5000
Series Switches, the Nexus 5548P switch provides a license with eight-port granularity. The granularity comes from
an eight-port license that enables any eight ports on the Nexus 5548P switch to perform FCoE on 10GE ports or
Native Fibre Channel on the physical Fibre Channel ports. Up to six eight-port licenses can be installed on a Nexus
5548P switch, making it the equivalent of a full chassis license.

Q. Is the Storage Protocol Services (SPS) license enforced or honor-based?

A. The first instance of the SPS license on a system is enforced. Further instances are honor-based. However, similar
to on the current generation Nexus 5000 Series Switches, a temporary 120-day trial license goes in effect for the
entire chassis upon first use of an FC command.

Q. Can I use the Cisco Nexus 5548P Switch ports as native Fibre Channel ports?

A. The Ethernet ports on the base chassis as well as those on the expansion modules cannot be used to support native
Fibre Channel functions. However, you can use the expansion module N55-M8P8FP, which provides eight ports as

native Fibre Channel ports. The Storage Protocol Services (SPS) license is also required to enable Native Fibre
Channel operation.

Q. Does the Cisco Nexus 5548P support FCoE VE_port (Virtual E_port)?

A. Yes, the Cisco Nexus 5548P supports VE-to-VE connectivity on directly connected Data Center Bridging (DCB)
capable links. This feature will be released for the Nexus 5548 first; earlier N5K switches will gain it in a later release.

Q. What are Unified Ports?

A. Unified Ports combine the physical-layer port functionality of 1 Gigabit Ethernet, 10 Gigabit Ethernet, and 8/4/2/1G
Fibre Channel on a single physical port. The physical port can be configured as 1/10G traditional Ethernet, 10G Fibre
Channel over Ethernet, or 8/4/2/1G native Fibre Channel. The Storage Protocol Services (SPS) license is required to
enable the use of both FCoE and native FC operations on the Unified Ports.
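On platforms and releases that support Unified Ports, the port type is set per slot; a hedged sketch (the
slot and port range here are hypothetical):

switch(config)# slot 2
switch(config-slot)# port 1-8 type fc
!--- A reload is required before the port-type change takes effect.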

Q. When will Unified Ports be available on the Cisco Nexus 5548P?

A. On the Nexus 5548P, 16 Unified Ports will be offered via an expansion module targeted for Q1CY11.

Hardware and Environment


Q. What are the main technical benefits of the Cisco Nexus 5548P compared to the previous generation of the Cisco
Nexus 5000 Series Switches?

A. The main technical benefits include:

● Higher port density: The Cisco Nexus 5548 can support up to 48 10 Gigabit Ethernet ports with a 16-port 10 Gigabit
Ethernet expansion module in a single 1RU form factor.
● Lower-latency cut-through switching: Latency is reduced to about 2 microseconds.
● Better scalability: VLAN, MAC address count, Internet Group Management Protocol (IGMP) group,
PortChannel, ternary content addressable memory (TCAM), Switched-Port Analyzer (SPAN) session, and
logical interface (LIF) count scalability are increased.
● Hardware support for Cisco® FabricPath and standards-based Transparent Interconnection of Lots of Links
(TRILL): This support makes the Cisco Nexus 5500 Platform an excellent platform for building large-scale,
loop-free Layer 2 networks.
● Support for ingress and egress differentiated services code point (DSCP) marking.
● Layer 3 support: A field-upgradable routing card will be available in the future.
● Enhanced SPAN implementation: This feature protects data traffic in case of congestion resulting from SPAN.
It enables more active SPAN sessions and supports fabric extender ports as SPAN destinations.
Q. What is the architecture of Cisco Nexus 5548P?

A. The Cisco Nexus 5548P implements a switch-fabric-based architecture. It consists of a set of port application-
specific integrated circuits (ASICs) called unified port controllers (UPCs) and a switch fabric called the unified fabric
controller (UFC). The UPCs provide packet-editing, forwarding, quality-of-service (QoS), security-table-lookup,
buffering, and queuing functions. The UFC connects the ingress UPCs to the egress UPCs and has a built-in central
scheduler. The UFC also replicates packets for unknown unicast, multicast, and broadcast traffic. Each UPC supports
eight 1 and 10 Gigabit Ethernet interfaces; however, no local switching is performed on the UPCs. All packets go
through the same forwarding path, and the system helps ensure consistent latency for all flows.

Q. Does the Cisco Nexus 5548P support Cisco FabricPath?

A. Yes. The Cisco Nexus 5548P hardware supports Cisco FabricPath, which will be enabled in a future software
release.

Q. Does the Cisco Nexus 5548P support IETF TRILL?



A. Yes. The Cisco Nexus 5548P hardware supports prestandard IETF TRILL, since TRILL has not yet been completely
standardized. Software support will therefore be enabled in a future software release.

Q. Does the Cisco Nexus 5548P support Layer 3 routing?

A. Yes. The Cisco Nexus 5500 Platform has been designed with Layer 3 support from the start. At FCS, Layer 3 routing
will not be available on the Cisco Nexus 5548P; it will be enabled in the near future through a field-upgradeable
daughter card.

Q. What are considered the front and back of a Cisco Nexus 5548P Switch?

A. The front of the Cisco Nexus 5548P is where the fans, power supplies, and management ports are located. The back
of the Cisco Nexus 5548 is where the fixed Ethernet data ports and the expansion slot are located. The data ports are
located on the back of the Cisco Nexus 5548P to facilitate cabling with servers.

Q. Do the power supplies on the Cisco Nexus 5548P support both 110 and 220-volt (V) inputs?

A. Yes. The supported voltage range is from 100V to 240V.

Q. What are the additional RJ45 ports next to the management interface on the front of the Cisco Nexus 5548P?

A. The additional front panel RJ-45 ports are designed for future use. At present, the Cisco Nexus 5548P supports only
a single out-of-band management interface.

Q. Can the existing expansion modules on the Cisco Nexus 5010 and 5020 Switches be used on the Cisco Nexus 5500
Platform?

A. No. The expansion modules supported on the Cisco Nexus 5010 and 5020 are not supported on the Cisco Nexus
5500 Platform.

Q. Can the existing power supplies and fan modules on the Cisco Nexus 5010 and 5020 be used on the Cisco Nexus
5500 Platform?

A. No. The power supplies and fan modules for the Cisco Nexus 5010 and 5020 are not interchangeable with those on
the Cisco Nexus 5500 Platform.

Q. Does the Cisco Nexus 5548P run the same software image as the Cisco Nexus 5010 and 5020 Switches?

A. Yes. All Cisco Nexus 5000 Series Switches, including the Cisco Nexus 5500 Platform, support the same software
image.

Q. Does the Cisco Nexus 5548P support a USB interface?

A. Yes. There is one type-A USB interface on the front of the Cisco Nexus 5548P.

Q. What kind of CPU is used on Cisco Nexus 5548?

A. An Intel dual-core 1.73-GHz CPU with two memory channels (DDR3 at 1066 MHz) and a 4-MB cache.

Q. How much CPU memory comes with the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P comes with 8 GB of CPU DRAM.

Q. How much flash memory comes with the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P comes with 2 GB of flash memory.



Q. What are the typical and maximum power consumption amounts for the Cisco Nexus 5548P?

A. The typical power consumption of the Cisco Nexus 5548P is 390 watts (W), and the maximum power consumption is
600W.

Q. Does the Cisco Nexus 5548P support 1 Gigabit Ethernet ports?

A. All Ethernet ports on the Cisco Nexus 5548P, including the Ethernet ports on expansion modules, are hardware
capable of supporting both 1 and 10 Gigabit Ethernet speeds. Software support for 1 Gigabit Ethernet will be
available in a future software release.

Q. What types of transceivers are supported by the Cisco Nexus 5548P?

A. Please refer to the Cisco Nexus 5500 Platform data sheet for a list of supported transceivers and cable types. Data
sheets and associated collateral can be found at http://www.cisco.com/go/nexus5000.

Q. Does the Cisco Nexus 5548P support IEEE 802.1AE link-level cryptography?

A. No. The Cisco Nexus 5548P Switch hardware does not support IEEE 802.1AE.

Hardware Performance and Scalability


Q. What is the performance throughput of the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P provides up to 960-Gbps throughput. It implements a nonblocking hardware architecture
and helps achieve a line-rate throughput for all frame sizes, for both unicast and multicast traffic, across all ports.

Q. Should I expect any performance degradation when I turn on some features, such as access control lists (ACLs) and
Fibre Channel over Ethernet (FCoE), on the Cisco Nexus 5548P?

A. All ports on the Cisco Nexus 5548P provide line-rate performance regardless of the features that are turned on.

Q. The Cisco Nexus 5548P implements cut-through switching among all its 10 Gigabit Ethernet ports. Does it also
support cut-through switching for all 1 Gigabit Ethernet, native Fibre Channel, and FCoE ports?

A. Under various circumstances, the Cisco Nexus 5548P can act as either a cut-through switch or a store-and-forward
switch. Table 1 summarizes the switch behavior in various scenarios.

Table 1. Switching Mode

Source Interface        Destination Interface    Switching Mode

10 Gigabit Ethernet     10 Gigabit Ethernet      Cut-through

10 Gigabit Ethernet     1 Gigabit Ethernet       Cut-through

1 Gigabit Ethernet      1 Gigabit Ethernet       Store-and-forward

1 Gigabit Ethernet      10 Gigabit Ethernet      Store-and-forward

FCoE                    Fibre Channel            Cut-through

Fibre Channel           FCoE                     Store-and-forward

Fibre Channel           Fibre Channel            Store-and-forward

FCoE                    FCoE                     Cut-through

Whenever the ingress interface operates at 10 Gigabit Ethernet speed, cut-through switching is used.

Q. How many MAC addresses does the Cisco Nexus 5548P support?

A. The Cisco Nexus 5548P Switch hardware provides an address table for 32,000 MAC addresses. The same MAC
address table is shared between unicast and multicast traffic, and it also includes some internal entries. At FCS, 4000
MAC address entries will be reserved for multicast groups that are learned through IGMP snooping, and 25,000 MAC
address entries will be reserved for unicast traffic. The remaining 3000 MAC address entries will be used to handle
hash collisions.

Q. How many VLANs does the Cisco Nexus 5548P support?

A. The Cisco Nexus 5548P supports up to 4094 active VLANs. Of these, a few are reserved for internal use, thus
providing users with up to 4014 configurable VLANs.

Q. How many PortChannels are supported with the Cisco Nexus 5548P?

A. All ports on the Cisco Nexus 5548P can be configured as PortChannel members. The Cisco Nexus 5548P Switch
hardware can support up to 48 local PortChannels and up to 576 PortChannels on the host-facing ports of Cisco
Nexus 2000 Series Fabric Extenders.

Q. How many ports can be in a PortChannel on the Cisco Nexus 5548P?

A. One PortChannel can have up to 16 members on the Cisco Nexus 5548P.

Q. What is the TCAM table size on the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P provides a 4000-TCAM table size; however, the table is shared among port ACLs, VLAN
ACLs, QoS ACLs, SPAN ACLs, and ACLs for control traffic redirection.

Q. How many Spanning Tree Protocol logical ports are supported on the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P supports up to 12,000 logical ports, of which up to 4000 can be network ports for switch-to-
switch connection.

Fabric Extender Support


Q. Can the Cisco Nexus 2000 Series Fabric Extenders connect to the expansion module ports on the Cisco Nexus
5548P?

A. Yes. The Cisco Nexus 2000 Series Fabric Extenders can connect to any Ethernet port on the Cisco Nexus 5548P.

Q. How many Cisco Nexus 2000 Series Fabric Extenders can connect to a single Cisco Nexus 5548P Switch?

A. At FCS, one Cisco Nexus 5548P will support up to 12 Cisco Nexus 2000 Series Fabric Extenders. The scalability will
increase with future software releases.

Q. Does the Cisco Nexus 5548P support all the currently available Cisco Nexus 2000 Series Fabric Extenders?

A. Yes. The Cisco Nexus 5548P supports all four currently available Cisco Nexus 2000 Series Fabric Extenders: Cisco
Nexus 2148T, 2248TP GE, 2224TP GE, and 2232PP 10GE Fabric Extenders.

Management and Troubleshooting


Q. Does the Cisco Nexus 5548P Switch hardware support NetFlow?

A. No. The Cisco Nexus 5548P Switch hardware does not support NetFlow.

Q. How many SPAN sessions does the Cisco Nexus 5548P support?

A. The Cisco Nexus 5548P supports up to four active SPAN sessions.

Q. Does SPAN traffic affect the data traffic on the Cisco Nexus 5548P?

A. No. The Cisco Nexus 5548P Switch hardware is designed to give higher priority to data traffic during periods of
congestion when both SPAN and data traffic could contend with each other. When such congestion occurs, the Cisco
Nexus 5548P can easily be configured to protect the higher-priority data traffic while dropping the lower-priority SPAN
traffic.

Q. Can a 1 Gigabit Ethernet port on the Cisco Nexus 5548P be configured as a SPAN destination port?

A. Yes. After 1 Gigabit Ethernet mode is software enabled on the Cisco Nexus 5548P, any 1 Gigabit Ethernet port can
be configured as a SPAN destination port.

Q. Can I use SPAN to capture a Priority Flow Control (PFC) frame on the Cisco Nexus 5548P?

A. No. The PFC frame will not be mirrored from the SPAN source port to the SPAN destination port.

Q. Can a Cisco Nexus 2000 Series host-facing port be configured as a SPAN destination port on the Cisco Nexus
5548P?

A. The Cisco Nexus 5548P Switch hardware supports configuration of Cisco Nexus 2000 Series host-facing ports as
SPAN destination ports. However, the software support will be available in a future release.

Q. Does the Cisco Nexus 5548P support Encapsulated Remote SPAN (ERSPAN)?

A. In a post-FCS software release, the Cisco Nexus 5548P will support ERSPAN source sessions. The Cisco Nexus
5548P cannot de-encapsulate ERSPAN packets and therefore will not support ERSPAN destination sessions.

Q. Does the Cisco Nexus 5548P support RSPAN?

A. No. The Cisco Nexus 5548P does not support RSPAN.

Q. Does the Cisco Nexus 5548P support the IEEE 1588 Precision Time Protocol (PTP) feature?

A. The Cisco Nexus 5548P Switch hardware is capable of supporting IEEE 1588 PTP. However, software support will
be available in a future software release.

Q. Do the Cisco Data Center Network Manager (DCNM) and Cisco Fabric Manager support the Cisco Nexus 5548P?

A. Cisco DCNM and Cisco Fabric Manager support for the Cisco Nexus 5548P will be available 2 to 3 months
after FCS.

Configuration Synchronization
Q. What is the configuration synchronization feature introduced in Cisco NX-OS Release 5.0(2)N1(1) for the Cisco
Nexus 5000 Series?

A. Configuration synchronization (config-sync), when enabled, allows the configuration made on one switch to be
pushed to another switch through software. The feature is mainly used in virtual PortChannel (vPC) scenarios to
eliminate the manual configuration on both vPC peer switches. It also eliminates the possibility of human error and
helps ensure that both switches have the exact same configuration.

Q. Does config-sync require special hardware?

A. Config-sync is a software feature that is hardware independent. Starting with Cisco NX-OS Release 5.0(2)N1(1), it is
supported on all Cisco Nexus 5000 Series Switches, including the Cisco Nexus 5548P.

Q. Can Type 1 and Type 2 inconsistencies be avoided with config-sync?

A. No. vPC and config-sync are two separate features. For vPC to be operational, Type 1 and Type 2 parameters must
match. If the parameters do not match, users will continue to experience a vPC-failure scenario. Configsync allows
the user to make changes on one switch and synchronize the configuration with that on the other peer automatically.
It saves the user from having to create identical configurations on each switch.

Q. What are the three requirements for enabling the config-sync feature?

A. To enable the config-sync feature, users need to:

● Enable Cisco Fabric Services over IP on each peer


● Create identical switch profiles on each switch
● Configure the correct peer IP addresses
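
As a rough illustration, a minimal switch-profile setup covering those three steps might look like the following (the hostname N5K-1, the profile name vpc-prof, and the peer address 172.16.1.2 are placeholders, not values from this document):

! Enable Cisco Fabric Services over IP (repeat on the peer)
N5K-1(config)# cfs ipv4 distribute
! Create an identical switch profile on each switch and point it at the peer
N5K-1# config sync
N5K-1(config-sync)# switch-profile vpc-prof
N5K-1(config-sync-sp)# sync-peers destination 172.16.1.2
! Configuration added under the profile is pushed to the peer on commit
N5K-1(config-sync-sp)# commit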
Q. Which interface carries config-sync traffic?

A. Config-sync messages are carried only over the mgmt0 interface. They cannot currently be carried over the in-band
switch virtual interfaces (SVIs).

Q. If I use a direct point-to-point connection using SVIs and the default Virtual Routing and Forwarding (VRF) instance
for my peer keepalive (instead of the mgmt0 interface and the management VRF instance), will config-sync work?

A. Config-sync is independent of vPC. As long as users have mgmt0 connectivity and can reach the vPC peer, config-
sync will work.

Q. When config-sync is implemented, why are VLANs not propagated?

A. Users must make sure that the specific features are enabled on each Cisco Nexus 5548P Switch. Features are not
automatically synchronized.

Q. Is FCoE supported under config-sync?

A. No. FCoE is not supported under config-sync. The supported features for a switch profile are VLANs, ACLs,
Spanning Tree Protocol, QoS, and interface-level configurations (Ethernet, PortChannels, and vPC).

Q. What happens if the commit process fails during a config-sync operation?

A. The configuration will be rolled back to the original (default) state, resulting in no configuration changes. Neither
switch will update any configurations.

Q. What happens if the switch profile has been created but no commit command was entered, yet a reload occurs?

A. In this instance, the switch profile was not saved to the startup configuration, and as a result, no changes will be
made.

Q. If the peer is lost (config-sync transport is down) and local configuration changes are made on one switch, what
happens when the config-sync transport (mgmt0 interface) comes back up?

A. Before the mgmt0 interface comes back up, the changes that were made on the switch are applied locally when
the commit command is entered. After the mgmt0 interface comes back up, the configuration is automatically
synchronized with that of the peer.

Q. Can I commit from the vPC secondary switch?

A. Yes, the config-sync feature is independent of the vPC. The initiator does not follow the vPC primary or secondary
switch. The commit command can be entered from either of the two switches.

Q. Is there a mechanism to avoid configuration conflicts?

A. Yes. To avoid conflicts, enter the commit command from a single switch. If you simultaneously try to enter
a commit command from the other switch, the following error message will appear:

N5K-2(config-sync-sp)# commit
Failed: Session Database already locked, Verify/Commit in Progress.
Q. Where is the configuration submode to create a switch profile?

A. A new mode is introduced with config-sync. As with config t, enter the config sync command to access the switch-
profile subcommand.

Configuration Rollback
Q. What is the minimum Cisco NX-OS release that supports configuration rollback on the Cisco Nexus 5548P?

A. Starting with Cisco NX-OS Release 5.0(2)N1(1), configuration rollback is supported on all Cisco Nexus 5000 Series
Switches, including the Cisco Nexus 5548P.

Q. Is the configuration rollback feature on the Cisco Nexus 5000 Series Switches the same as that on the Cisco Nexus
7000 Series Switches?

A. Yes. However, at FCS, the Cisco Nexus 5000 Series, including the Cisco Nexus 5548P, will support only the atomic
(default) configuration.

Q. Is FCoE supported by configuration rollback?

A. No. If feature fcoe is enabled, users will not be able to use the configuration rollback feature on the Cisco Nexus
5000 Series Switches, including the Cisco Nexus 5548P.

Q. Does configuration rollback require a license?

A. No. It requires only Cisco NX-OS Release 5.0(2)N1(1) as the minimum software version.

Q. Can I use the same checkpoint names?

A. No. Each checkpoint name must be unique.

Q. How do I create a configuration rollback?

A. Enter the following:


N5k#config t
N5k(config)#checkpoint "test"

N5k(config)#show checkpoint test


Q. How do I implement configuration rollback?

A. Enter the following:


N5k#show diff rollback-path checkpoint test
N5k#rollback running-config checkpoint test
Q. In atomic configuration, rollback will be implemented if there are no errors. What is the behavior if an error occurs?

A. The rollback action will abort if it encounters an error. For example, assume the user has a saved checkpoint named
Test1. If an error occurs while the user is trying to roll back from the current running configuration to Test1, the
switch will retain the current running configuration.

Q. What is the difference between config-sync rollback and configuration rollback?

A. Config-sync rollback occurs if a commit command is entered and fails. If the commit command fails, the new
configuration is ignored, and the system reverts to the original configuration. This is an implicit rollback that takes
place automatically. In contrast, the configuration rollback feature is user defined and is controlled by a manual
configuration that is verified and applied by the user.

Q. How do I clear or remove checkpoints?

A. After the system runs a write-erase or reload operation, checkpoints are deleted. You can also enter the clear
checkpoint database command.

Quality of Service
Q. How many classes of service does the Cisco Nexus 5548P support?

A. The Cisco Nexus 5548P supports eight classes of service. Two of them are reserved for internal control traffic, and
six classes of service are available for data traffic. All six classes of service can be used for non-FCoE Ethernet traffic.

Q. How many hardware queues does the Cisco Nexus 5548P have?

A. The Cisco Nexus 5548P has 384 unicast virtual output queues (VOQs) and 128 multicast VOQs at ingress for each
Ethernet port. It has 8 queues for unicast and 8 queues for multicast at egress for each Ethernet port.

Q. How many packet buffers are present on the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P provides 680-KB packet buffers for each 10 Gigabit Ethernet port: 480 KB are allocated for
ingress, and 160 KB are allocated for egress. The default configuration has one system class - class-default - for
data traffic, and all 480 KB of the buffer space are allocated to class-default. User-defined system classes have
dedicated buffers and take buffer space from the 480-KB limit. Command-line interface (CLI) commands are available
to allow users to configure the desired buffer sizes for each system class.

Q. How does the Cisco Nexus 5548P classify incoming traffic?

A. The Cisco Nexus 5548P can classify incoming traffic based on CoS marking, DSCP marking, or user-defined ACL
rules.

Q. Does the Cisco Nexus 5548P trust CoS and DSCP markings by default?

A. Yes. The Cisco Nexus 5548P trusts CoS and DSCP markings by default. The switch will not modify CoS or DSCP
values unless modification is configured by the user. Although the Cisco Nexus 5548P trusts the CoS and DSCP
values, it will not classify and queue the packets based on those values. By default, all traffic will be assigned

to class-default and mapped to one queue. Users will need to define their own policy maps to classify and queue
packets based on CoS or DSCP values.
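
As an illustrative sketch (the class and policy names are invented here, and the matching network-qos and queuing policies that a complete configuration would also need are omitted), a CoS-based classification policy could look like this:

! Classify CoS 5 traffic into its own system class (qos-group 2)
N5k(config)# class-map type qos match-any VOICE
N5k(config-cmap-qos)# match cos 5
N5k(config)# policy-map type qos CLASSIFY-IN
N5k(config-pmap-qos)# class VOICE
N5k(config-pmap-c-qos)# set qos-group 2
! Apply the classification policy system-wide at ingress
N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input CLASSIFY-IN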

Q. Does the Cisco Nexus 5548P support ingress and egress policing?

A. The Cisco Nexus 5548P Switch hardware supports both ingress and egress policing. However, software support will
be available in a future software release.

Q. Does the Cisco Nexus 5548P support traffic shaping?

A. No. The Cisco Nexus 5548P does not support traffic shaping.

Q. Does the Cisco Nexus 5548P support DSCP marking?

A. Yes. The Cisco Nexus 5548P supports both ingress and egress DSCP marking.

Q. Does the Cisco Nexus 5548P support explicit congestion notification (ECN)?

A. The Cisco Nexus 5548P Switch hardware supports ECN. However, software support will be enabled in a future
software release.

Multicast
Q. How many IGMP groups can the Cisco Nexus 5548P support?

A. At FCS, the Cisco Nexus 5548P will support up to 4000 IGMP groups.

Q. How are multicast packets replicated in the Cisco Nexus 5548P?

A. Multicast packets are replicated by the switch fabric. The ingress ports send one copy of the multicast packets to the
switch fabric, and the switch fabric replicates the packets for all the egress ports in the multicast group. No ingress or
egress replication takes place for the multicast packets. However, the SPAN traffic is replicated by the port ASICs
(the UPC); the receive SPAN traffic is replicated at the ingress ports, and transmit SPAN traffic is replicated at the
egress ports.

Q. How is the forwarding decision made for IP multicast packets on the Cisco Nexus 5548P?

A. The Cisco Nexus 5548P intercepts the IGMP join and leave messages from hosts and keeps track of the ports that
send join and leave messages. The IGMP group is converted to a multicast MAC address with the format
0100.5EXX.XXXX and stored in the MAC address table (sometimes referred to as a station table). Subsequently, the
IP multicast packet forwarding decision is made by checking the destination MAC address against the multicast MAC
table. For other features, such as QoS and security, the multicast IP address is used for table lookup.

Q. What happens if the Cisco Nexus 5548P receives an IP multicast packet whose group address is not yet learned by
the switch?

A. If the destination MAC address is in the range 0100.5E00.00XX, the packets will be flooded in the VLAN. Otherwise,
the IP multicast packets will be dropped if the IGMP group is unknown to the Cisco Nexus 5548P.

Virtualization
Q. What is network interface virtualization (NIV) on the Cisco Nexus 5548P?

A. NIV is a technology that allows any adapter to be virtualized in multiple virtual network interface cards (vNICs) or
virtual host bus adapters (vHBAs). Virtualized adapters can be used to provide multiple interfaces on a single server,
enabling consolidation and flexibility in both physical and virtualized server environments. Each individual vNIC and

vHBA is identified by a tag called a VNTag. When an NIV-capable adapter is connected to the Cisco Nexus 5548P,
the Cisco Nexus 5548P can use the VNTag to distinguish and forward frames for the individual vNICs and vHBAs that share the same physical port.

Q. Does support for NIV mean that I can use the Cisco Nexus 5548P as an external switch for virtual machine traffic
instead of the software hypervisor switch?

A. NIV is one of the building blocks necessary to implement virtual machine traffic switching using an external hardware
switch, but it is not the only one. The full set of features is referred to as Cisco VN-Link and will be enabled on the
Cisco Nexus 5548P in subsequent releases.

Q. How do NIV and VNTag interoperate with existing standards?

A. VNTag and IEEE 802.1Qbh Port Extension provide the same capabilities, functions, and management interface. The
on-the-wire formats are somewhat different between the two. However, Cisco expects to deliver IEEE 802.1Qbh
standards-compliant products in the future that can translate between the on-the-wire formats, enabling full
interoperability of a heterogeneous VNTag and IEEE 802.1Qbh environment.

Q. What is LIF on the Cisco Nexus 5548P?

A. The logical interface, or LIF, is a data structure on the Cisco Nexus 5548P Switch hardware that allows a physical
interface on the Cisco Nexus 5548P to emulate multiple logical or virtual interfaces. The LIF data structure carries
certain properties, such as the VLAN membership, interface ACL labels, and Spanning Tree Protocol states. For NIV
support, the LIF is derived from the VNTag values carried in the packet. With a LIF data structure, the Cisco Nexus
5548P can process and forward frames on a per-LIF basis. For instance, each Cisco Nexus 2000 Series host-facing
port or virtual interface created for the vNICs could be mapped to a LIF data structure on the Cisco Nexus 5548P
Switch hardware.

Q. What is the advantage of supporting more LIFs?

A. After NIV becomes available, if you have more LIFs, you can configure more vNICs on a virtualized adapter. The
Cisco Nexus 5548P Switch hardware can support up to 8000 LIFs per UPC.

Q. Does the Cisco Nexus 5548P support virtual device contexts (VDCs)?

A. No. The Cisco Nexus 5548P does not support VDCs.

Port Profiles
Q. Describe the port-profile feature offered with Cisco NX-OS Release 5.0(2)N1(1) on the Cisco Nexus 5000 Series
Switches, including the Nexus 5548P.

A. A port profile is a preconfigured template that allows repetitive interface commands to be grouped together and
applied to an interface range.

Q. What are the benefits of port profiles?

A. Port profiles provide ease-of-configuration. The switch administrator can manage one simple interface configuration
template and apply it to a large range of ports as needed.

Q. What types of interfaces are supported with port profiles?

A. Port profiles can be configured for Ethernet, PortChannel, and VLAN interfaces.

Q. How do I configure and apply a port profile to an interface?



A. The procedures for defining and applying port profiles are as follows:

● To create or delete a port profile, enter the following commands:


N5k(config)# [no] port-profile type [eth|fc|port-channel|tunnel|interface-vlan]
<name>
N5k(config-port-prof)#
● To enable or disable a port profile, enter the following commands:
N5k(config-port-prof)# [no] state enabled
● To assign or unassign a port profile to interfaces, enter the following commands:
N5k(config)# interface eth1/1-10
N5k(config-if)# [no] inherit port-profile <name>
Q. What happens to a port-profile inherited interface when the port profile is deleted?

A. When a port profile is deleted, the commands configured in the port profile are removed from the interfaces that had
inherited it.

Q. Which takes precedence: the interface default, the port profile, or the interface configuration?

A. The interface configuration takes precedence over the port profile, and the port profile takes precedence over the
interface defaults.

Q. What happens if a failure occurs while a port profile is being inherited?

A. Whenever a port profile is to be inherited or enabled, a checkpoint is created through interaction with the
configuration rollback feature. Upon detection of a failure, the software rolls back the configuration to the checkpoint
created before the operation was started. For the rollback, only the commands in interface mode are considered for a
diff computation. This approach helps ensure that a port profile is never partially applied, rendering the system
inconsistent because of port-profile application.

Q. Can I add a command to a port profile after it is inherited by an interface?

A. Yes. You can add commands, and they will also be inherited by the interface.

Q. Can port profiles be combined through inheritance of one port profile by another?

A. Yes. For instance, assume that a port profile named p2 inherits a port profile named p1. In this example, profile p1 is
called the superclass profile, and profile p2 is called the subclass profile. Inheritance allows the subclass port profile to
inherit all the commands of the superclass port profile that do not conflict with its own command list. If a conflict occurs,
the configuration in the subclass port profile overrides the configuration in the superclass port profile. For example,
assume that port profile p2 inherits p1, and the configurations are as shown here:

port-profile p1
  speed 1000
port-profile p2
  inherit port-profile p1
  speed 10000
  switchport access vlan 100
When p2 is applied to an interface, the interface would receive speed 10000 and not speed 1000 as defined in p1.

Q. What types of interface commands are available in the port-profile mode?

A. Any command that is supported in the interface mode will also be supported in the corresponding port-profile mode.

Fabric Extender and Expansion Module Preprovisioning


Q. What is the preprovisioning feature?

A. Preprovisioning allows users to configure the Cisco Nexus 2000 Series switch ports and the expansion modules on
the Cisco Nexus 5000 Series Switches, including the Nexus 5548P, without requiring the Cisco Nexus 2000 Series
Fabric Extenders or the expansion modules to be connected to the Cisco Nexus 5000 Series chassis. With this
feature, users can also check the configuration when the Cisco Nexus 2000 Series Fabric Extenders are offline or
copy a configuration file to a running configuration.

Q. What version of Cisco NX-OS supports the preprovisioning feature?

A. Starting with Cisco NX-OS Release 5.0(2)N1(1), the preprovisioning feature is supported on all Cisco Nexus 5000
Series Switches, including the Cisco Nexus 5548P.

Q. Which Cisco Nexus 2000 Series Fabric Extenders support preprovisioning?

A. Preprovisioning is supported on all currently available Cisco Nexus 2000 Series Fabric Extenders, including the
Cisco Nexus 2148T, 2248TP, 2224TP, and 2232PP.

Q. Can I make configuration changes for offline modules?

A. Yes. Users can make configuration changes to offline modules that have been preprovisioned before.

Q. What are the implications for the preprovisioning feature when Cisco NX-OS needs to be upgraded, downgraded, or
reloaded?

A. If the upgrade or downgrade is between images that support preprovisioning, any preprovisioned configuration will
be retained across the upgrade. When downgrading from an image that supports preprovisioning to an image that
does not, users will be asked to remove any preprovisioned configuration. When the switch is reloaded, all
configurations will be retained just as they were before the reload operation, as long as the copy running-config
startup-config or install all command was entered before the reload.
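
For illustration, preprovisioning an offline fabric extender might look like the following (the FEX number 110 and the module identifier are example values):

! Define the FEX slot before the hardware is attached
N5k(config)# slot 110
N5k(config-slot)# provision model N2K-C2232P
! Host-facing ports such as Ethernet110/1/1 can now be configured offline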

What is Admin VDC on Nexus 7000s/7700s?

With Admin VDC, network administrators can perform common, system-wide tasks in a context that is not
handling data plane traffic. Admin VDC also allows customers another option to secure their Nexus 7000, as they
can more easily restrict access to the Admin VDC than might be possible with a traditional Ethernet or Storage
VDC. The tasks that can be performed only in Admin VDC are below:
In Service Software Upgrade/Downgrade (ISSU/ISSD)
Erasable Programmable Logic Devices (EPLD) upgrades
Control Plane Policing (CoPP) configuration
Licensing operations
VDC Configuration including creation, suspension, deletion and resource allocation
System-wide QoS policy and port channel load balancing configuration

Generic Online Diagnostics (GOLD) configuration



The Admin VDC is responsible only for managing the other Ethernet/Storage VDCs on your Nexus 7000. Line-card
interfaces are not visible or configurable in the Admin VDC, and no data-plane protocol configuration (e.g., vPC,
FabricPath, OTV) is possible there. You can create, edit, or delete Ethernet/Storage VDCs and allocate the necessary
resources, like interfaces and feature sets, to the individual VDCs, but you cannot use the Admin VDC the way you
would use the default VDC.


Fabricpath FAQs
1. What is the unique MAC address used for unknown unicast?
Answer:- 01:0F:FF:C1:01:C0

2. What is the STP bridge ID used by all FabricPath edge devices?


Answer:- C84C.75FA.6000

3. What is the maximum number of vPC+ port channels supported?


Answer: - 244

Note: - On F2/F2E line cards, we can increase the maximum number of supported vPC+
port channels by using the no port-channel limit command.

4. What is the default root priority value?


Answer: - 64 (it can be between 0 and 255)

5. What is the default TTL value set for all frames?


Answer: 32.

Note:- We can use the fabricpath ttl command to configure the TTL value.

6. Does vPC+ support static port channels?


Answer: - Yes, it supports both LACP and static port channels.

7. Is FabricPath supported on M cards?


Answer:- No. FabricPath is only supported on the F series.

8. Which license is required for FabricPath?


Answer:- Enhanced Layer 2 Package

9. What is the EtherType value of a FabricPath frame?


Answer:- 0x8903

10. What is the order of preference for root election?


Answer:- Root priority -> System ID -> Switch ID

Note:- Higher is better.

11. Are MAC addresses advertised by FabricPath IS-IS, as they are in OTV?

Answer :- No, FabricPath IS-IS does not advertise any MAC addresses.
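
As a minimal sketch of turning FabricPath on (the VLAN range, interfaces, and switch-id are example values; the Enhanced Layer 2 license from question 8 is assumed to be installed):

! Install and enable the FabricPath feature set
switch(config)# install feature-set fabricpath
switch(config)# feature-set fabricpath
! Optionally set the switch-id manually (otherwise it is auto-assigned)
switch(config)# fabricpath switch-id 11
! Put the VLANs and core-facing links into FabricPath mode
switch(config)# vlan 100-110
switch(config-vlan)# mode fabricpath
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# switchport mode fabricpath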

Default Vs. Admin VDC in Nexus 7000


Both are used to manage the complete switch and to assign interfaces to
other non-default VDCs. Also, global parameters like CoPP
are configured only in the default/admin VDC. So what is the difference?

Below is the difference between default and admin VDC.

Default VDC:-

In Nexus, the default VDC (VDC 1) performs the below two functions:-

1. The default VDC can be used for the management of all the VDCs in the chassis.
From the default VDC, a network-admin user creates, deletes, or modifies other non-
default VDCs. It can allocate the interfaces to the other non-default VDCs.

2. Interfaces can be allocated to the default VDC, and it can then handle user traffic
just like a non-default VDC.

Admin VDC:-

The admin VDC can be created from the initial configuration wizard. It is used only
for the management of the complete chassis and the associated non-default VDCs.
No interface can be allocated to the admin VDC, and hence it cannot handle user
traffic.

Before 6.2(2), it was not available on SUP-1. From version 6.2(2), it is available on all


supervisor modules.

Note: - The default and admin VDCs cannot coexist at the same time. VDC 1 can
be configured as either default or admin.

We can convert the default VDC to admin by using the below two commands:-

 System admin-vdc :- When applied on the default VDC, all the non-global
(VDC-specific) configuration is removed. It therefore needs to be applied with
caution; otherwise the default VDC user traffic will be impacted. It is generally
applied during the initial configuration.

 System admin-vdc migrate new-vdc-name :- It creates a new VDC and then
migrates all configuration specific to the default VDC to the new VDC, except
a few items such as the management IP address, NTP configuration, etc.

All global configurations, like CoPP, load-balancing methods, etc., will
remain in the admin VDC.
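
For example, a hypothetical migration (prod is just a name chosen here for the new VDC) would be:

! Create a new VDC named prod and move the default VDC's non-global config into it
switch(config)# system admin-vdc migrate prod
! Verify the VDC list afterwards
switch# show vdc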

F1 Vs. F2 Vs. F2E Vs. F3 - Cisco Nexus 7000


There are four types of F line cards available. Below is
the difference between F1, F2, F2e and F3.

F1 Card:-
 Only performs Layer 2 tasks.
 No interface can be converted to Layer 3.
 M and F1 cards can coexist in a chassis.
F2 line card:-
 Interfaces can be used as L2 or L3.
 M and F2 cards cannot coexist in a chassis.
 Does not support OTV, MPLS, and LISP.
F2E line card:-
 Interfaces can be used as L2 or L3.
 M and F2E cards can coexist in a chassis, but in L2 mode only.
 Does not support OTV, MPLS, and LISP.
F3 line cards:-
 Interfaces can be used as L2 or L3.
 M and F3 cards can coexist in a chassis.
 Support OTV, MPLS, and LISP features.

VPC FAQs
1. Can the vPC port-channel number be different on the peer switches?
Answer: - Yes, it can be different.

2. Is a single VPC domain between two VDCs on the same


physical Cisco Nexus 7000 device supported?
Answer: - No, it is not supported.

3. What are the default parameters of vPC?


Answer:- Below are the parameters.

Parameter                        Default
vPC system priority              32667
vPC peer-keepalive interval      1 second
vPC peer-keepalive timeout       5 seconds
vPC peer-keepalive UDP port      3200

4. Are jumbo frames enabled by default on the vPC peer link?


Answer: Yes, jumbo frames are enabled by default.

5. What license is required for vPC?


Answer:- No license is required for vPC.

6. Can we create both Layer 2 and Layer 3 vPC port channels?


Answer:- No, only Layer 2 port channels can be configured in a vPC.

7. In a vPC peer link, is F1 on one side and M1 on the peer switch supported?


Answer:- No, the module type on both ends should be identical.
Please refer to the below table.

vPC Primary      vPC Secondary    Supported/Not supported

F1 I/O module    F1 I/O module    Supported

F1 I/O module    M1 I/O module    Not supported

M1 I/O module    M1 I/O module    Supported

M1 I/O module    F1 I/O module    Not supported

8. Can we use a physical interface as the vPC peer link?


Answer: No, the vPC peer link can only be configured on a port-
channel containing 10 Gig interfaces. 1 Gig interfaces cannot
be used for the vPC peer link.

9. Can we configure the system MAC for vPC?


Answer: Yes, we can configure the system MAC for vPC with the
below commands:

Nexus(config)# vpc domain 5

Nexus(config-vpc-domain)# system-mac 0000.0000.000a

10. What is the default role priority?


Answer: It can be from 1 to 65535, and the default value is
32667.

Note: - Lower is better.

11. What is the default vPC domain ID?

Answer: - There is no default domain ID; it can be configured
from 1 to 1000.

Why do we need vPC?


Initially, when I heard of vPC, I understood neither its
advantage nor its difference from VSS. Below, I have tried
to explain the difference between vPC, VSS, and the
legacy setup where STP is used to prevent L2 loops.


But STP has many limitations, which are discussed below:-

1. Suboptimal path:- To understand it, take a look at the
below topology, where three switches are connected to
provide a completely redundant path.

The problem with this design is that STP will block port
Gi0/3 of SW-2. Hence traffic from SW-2, instead of taking
the direct route to SW-3, will reach SW-3 via SW-1; this is
known as a suboptimal path. It adds an extra hop to the path
and reduces the efficiency of the network.

2. Underutilization of uplink bandwidth:-

STP prevents the Layer 2 loop by blocking the redundant
path, which is an advantage, but it also reduces the uplink
bandwidth, which sometimes creates congestion in the
network.

Referring to the below diagram, traffic from SW-3 to the
internet has two paths, but due to spanning tree, Gig0/3 of
SW-3 is in the blocking state. This reduces the uplink
bandwidth available to SW-3.

3. Inefficiency: - Let's assume traffic is load-shared between
SW-1 and SW-2, and both switches advertise the user subnet
with the same metric. There is no problem when the return
traffic hits SW-1, but what happens when the very first
return packet hits SW-2?

Does SW-2 have the MAC address of PC-1? Generally NO!

SW-2 will flood the frame as unknown unicast, and if there
are many users sitting in the LAN, the unknown unicast
flooding not only creates unnecessary traffic but also
impacts the CPU utilization of the switches.

By using VSS on the 6500, both switches virtually become one.
One supervisor is active at a time and controls the
forwarding of both chassis. VSS not only removes the Layer 2
loop from the network but also removes the suboptimal-path
and inefficiency problems we had in the legacy environment.

As you can see, there is neither a suboptimal path nor a
problem of reduced uplinks. It also removes the unnecessary
unknown unicast issue.

But in VSS, the control plane is active on only one switch,
whereas the data plane is active on both switches. As only
one supervisor is active, the overall throughput is limited
and the other supervisor's capacity goes to waste.

The advantage of vPC is that it not only removes the above-
stated problems but also keeps the control and data planes
of both chassis active at the same time. This increases the
overall throughput of the system.

In the below design, traffic from PC-1 can directly reach
PC-2 without adding any hop.

Also, in the below design, traffic from PC-1 can go to the
internet via SW-1 or SW-2, depending upon the hashing
algorithm of SW-3. It also removes the problem of unknown
unicast in the case of asymmetric routing, as both switches
appear as one.
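
To make this concrete, here is a minimal vPC sketch under assumed values (domain 5, keepalive addresses 10.0.0.1/10.0.0.2, port-channel 1 as the peer link, port-channel 10 toward the access switch); the mirror-image configuration is needed on the other peer:

! Enable the feature and define the vPC domain
SW-1(config)# feature vpc
SW-1(config)# vpc domain 5
SW-1(config-vpc-domain)# peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
! Dedicate a port-channel as the peer link
SW-1(config)# interface port-channel 1
SW-1(config-if)# switchport mode trunk
SW-1(config-if)# vpc peer-link
! Bind the member port-channel toward the downstream switch to vPC 10
SW-1(config)# interface port-channel 10
SW-1(config-if)# switchport mode trunk
SW-1(config-if)# vpc 10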

Migration from FAB- 1 to FAB-2 in 7000 Nexus switch


Before migrating the fabric modules, please check the data sheets for
both fabric modules in order to compare their features and limitations.

Please use the below link to check the difference between FAB-1 and FAB-2.

http://netterrene.blogspot.in/2014/08/fabric-module-in-cisco-nexus-7k-
switches.html

Fabric cards can be replaced one by one without any disruption. Both card types can
work together, but it is not recommended for a long time.

If all fabric modules are not replaced within 12 hours of the first card
installation, then the switch will generate syslog warning messages to complete
the migration.

Difference between 5548P and 5548UP?


Both the 5548P and 5548UP have 32 ports in fixed slot 1
and 16 ports in the expansion module, i.e., slot 2.

In the 5548P, we can have only 16 FC ports, which are on the
expansion module. We cannot convert the Ethernet ports in the
fixed slot to FC, whereas in the 5548UP all 48 ports (32
fixed ports + 16 expansion-module ports) can be converted to
native FC ports.

Each time we convert an Ethernet port in the fixed module to
FC, or vice versa, a switch reboot is required, whereas ports
on the expansion module can be converted by rebooting only
the expansion module, without impacting the traffic of the
fixed-module ports.

Below are the commands to convert the expansion-module ports
and reboot only the expansion module:-

slot 2

port 1-16 type fc

copy run start

poweroff module 2

no poweroff module 2

NOTE: - It will take a few minutes for the FC ports to show up
after conversion. All 48 ports on both switches can support
FCoE.

The 5548P is now end-of-life (EOL), and the replacement model is the 5672UP.

Cisco 7700 VS 7000 Nexus switch

The 7700 is the advanced version of the 7000 switch with
more capacity. Below are the three 77K series models:-

7708
 Up to 21 Tbps
 9 rack-unit
7710
 42 terabits per second (Tbps)
 14–rack-unit form factor
7718
 83 terabits per second (Tbps)
 26 rack-unit
What is the difference between the Nexus 7000 and 7700?

Below is the comparison between the 7000 and 7700 Nexus switches.



Note:- 7K and 77K line cards, fabric modules, and supervisor
modules are not interchangeable. We cannot use the 7000 SUP-2E
in the 77K, and vice versa.

XL vs non XL M cards- 7000 Nexus


M1 cards are primarily used for L3 functionality. M1 cards can also perform basic
Layer 2 functions, but they cannot perform advanced Layer 2 features like FabricPath
and FCoE.

The M1 line card comes in two versions: XL and non-XL. Both have the same
architecture; the only difference between them is the amount of memory available for
the TCAM, FIB, and MAC address tables.

The below table shows the comparison between the XL and non-XL cards. The XL card
needs a license in order to unlock its full capacity. Without the license, there is no
performance difference between the XL and non-XL cards.

The below information is from a Cisco document.



How to recognize the XL line card?

By looking at the line card model number, you can identify whether it is an XL or
non-XL card. An XL card has the letter "L" at the end of the model number, like
N7K-M132XP-12L. But without the license it will have the same capability as a
non-XL card.

SCALABLE_SERVICES_PKG is required to unlock the full capacity of the card.


It is installed per system and enables all the XL-capable cards in the chassis.
It increases the scale of the following features:-

 IPv4 routes

 IPv6 routes

 ACL entries

 Mac address table

Note:- To work in XL mode, all cards in a VDC must be XL capable; otherwise the


VDC will work in non-XL mode only.

M series card architecture - Cisco Nexus 7000

The M-series card is used for L3 purposes like routing and ACLs. We must have at least one M1
card in the chassis to get routing or L3 capability; otherwise we cannot create SVIs or
do inter-VLAN communication.

To check the available models and their features, please refer to

http://netterrene.blogspot.in/2014/09/difference-between-m-series-and-f.html

Below is the architecture of the M-series card. Please note that different M-series cards can
have different architectures.

Below is the architecture diagram of N7K-M132XP-12, N7K-M132XP-12L.

The components are explained below:-



FABRIC:- This is not the fabric in the chassis; each card has its own fabric, which connects
the module to the backplane fabric cards. The number of fabrics present varies per card.

The more fabrics present on the card, the more backplane throughput the card has. Each
fabric has five interfaces to connect to the chassis fabric cards.

FORWARDING ENGINE:- All packet forwarding decisions on the card are taken by the
forwarding engine. It stores the FIB and TCAM tables and makes the packet flow decisions.

REPLICATION ENGINE:- It is used to replicate packets as and when required. It is used not
only for port mirroring but also when the card receives multicast, broadcast,
or unknown unicast traffic.

Since the same replication engine is responsible for multicast, there is a limit on the
packet replication that a card can handle, so extremely high multicast replication
can choke the replication engine, although this will never happen in normal
circumstances.

VOQs:- VOQ stands for VIRTUAL OUTPUT QUEUE. It is high-speed memory used to queue
packets so that they do not overrun the fabric. VOQs are controlled by the central
arbiter sitting in the supervisor module.

Its basic function is to provide buffering and queuing.

EOBC:- EOBC stands for ETHERNET OUT-OF-BAND CHANNEL. The supervisor module has a 24-
port local switch, and through this it is connected to each line card and fabric module. It
is of 1 Gig capacity.

EOBC is used to connect the local CPU on each line card to both supervisor modules and the
other line cards. Each line card has two EOBC connections to the supervisor modules.

LC CPU:- Each line card has its own small built-in CPU, which is connected to the supervisor
CPU via EOBC.

10G MAC:- It receives packets from the interface, encodes the data, and sends
it to the replication engine.

4:1 MUX + LINKSEC:- It multiplexes and de-multiplexes the data coming in or out of the
four front-panel ports onto one 10 Gig connection to the backplane. This oversubscription
ratio varies per card model. It also performs the LinkSec function, encoding and decoding
the data.

Central arbitration:- It controls the traffic going in and out of the crossbar fabric
based on priority and available bandwidth.

Crossbar fabric:- It provides dedicated, high-bandwidth interconnects


between ingress and egress I/O modules.

Difference between M-series and F-series cards in Nexus 7000.

Cisco Nexus line cards are of two types: M-series and F-series.

The M card is basically used for L3 purposes like routing, whereas the F-series card
was originally a Layer 2 card. The new third generation of the F-series can support
features like MPLS, OTV, etc.

Below is the list of M-series and F-series cards:-

M-series:-

Few key points


 All M-series cards support OTV.
 FCoE and FabricPath are not supported on M-series cards.
 vPC is supported on all cards.
 All M2-series cards, and only one M1 card (N7K-M132XP-12L), support FEX;
the other M1 cards do not support FEX connectivity.
 Below are the current M-series cards.
a) M1 cards:-

 N7K-M148GS-11L
 N7K-M148GT-11L
 N7K-M108X2-12L
 N7K-M132XP-12L
b) M2 card:-

 N7K-M224XP-23L
F series:-

Few key points :


 All F-series cards support vPC.
 FEX is also supported on all F cards.
 FCoE and FabricPath are also supported on all cards.
 OTV, LISP, and MPLS are only supported on F3 cards.
 M-series interoperability in the same VDC is only supported with F2e and F3 cards.
 Below are the current F-series cards.
a) F1 card:- The F1 card is now end-of-sale and end-of-life.
b) F2 card:-
 N7K-F248XP-25
c) F2e cards:-
 N7K-F248XP-25E
 N7K-F248XT-25E
d) F3 card:-
 N7K-F312FQ-25
There are various models available for both M and F-series, and the question is:
can we get any details just by looking at the model number, like what is the 12
in N7K-M132XP-12L?

Below is the chart which explains the fields in the model number of the line
cards. By looking at it, you can derive the fields in all available line cards.

Note:

1. There is a separate fabric on the line card as well.
Don't confuse it with the fabric present in the chassis.

2. The number of fabric cards required to support a card can be calculated as follows:-

For example, this card supports 480 Gbps of backplane bandwidth, so to get the full
bandwidth we need 5 fabric cards: 4 x 110 Gbps (the per-slot bandwidth of a FAB-2
card) = 440 Gbps is not enough, whereas 5 x 110 Gbps = 550 Gbps covers the 480 Gbps
requirement. Hence 5 fabric cards are required in the chassis.

What is 25E in N7K-F248T-25E?

In the below chart, I have tried to explain the fields present in the Nexus line card
model number. I have taken N7K-F248XT-25E as an example, but you can derive
the details of any line card using it.

What is GGSN?
GGSN (Gateway GPRS Support Node) is the mobility anchor point within the
mobile packet core network. It provides connectivity to the SGSN (Serving GPRS
Support Node) and the PDN (Packet Data Network). Session state information for the
subscriber is always maintained at the GGSN. It also maintains the necessary
information required to route the user traffic towards the SGSN and PDN.

The GGSN is mostly located in the home service provider network, so even if the

subscriber is in a roaming location or in the home network, he will be connected
to the GGSN located in the home network.

Key functions of GGSN

 Process PDP requests from SGSNs in both the home and foreign PLMN networks.
After the subscriber attaches to the network, it will initiate the PDP
activation procedure.
 Assign an IP address to the subscriber - A subscriber can have a
maximum of 11 PDP contexts (primary and secondary). Each subscriber must
have at least one primary PDP context in order to access services within the
PDN network. A secondary PDP context is created depending on the type of
application the subscriber is accessing, when the application needs more
bandwidth than was negotiated in the primary PDP context. The GGSN assigns an
IP address for every primary PDP context; since a secondary PDP context is
associated with a primary PDP context, the GGSN does not assign an IP address
to a secondary PDP context.
 Negotiate QoS - For any given subscriber session, the GGSN negotiates the
QoS parameters with the SGSN as part of the PDP activation procedure and during
any PDP modification procedure.
 Dynamic policy control - The GGSN has a Gx interface towards the PCRF. This is
used for policy control and charging rule functions. This function helps the GGSN
charge the subscriber as per the QoS policy. Depending upon the type of
subscription, the PCRF can negotiate various QoS policies for the subscriber
and install different charging rules.
 Performs prepaid/postpaid billing - Using the Gy interface, the GGSN
performs prepaid billing via the OCS (Online Charging Server) and
performs postpaid billing towards the charging gateway function.
 The GGSN also authenticates users, performing the authentication using the AAA,
OCS, and PCRF, since all of these maintain a database with the user subscription.
 The GGSN also provides secure VPN tunnel connectivity for corporate
subscribers towards the corporate PDN network. Tunneling mechanisms such as
GRE, IPsec, and L2TP can be used for setting up the tunnel interface
on the Gi interface.

GGSN interface types –

 Gn/Gp interface - Used by the GGSN to communicate with SGSNs within the

home PLMN network. This interface is based on the GPRS Tunneling Protocol
(GTP). The Gp interface is used towards SGSNs within a foreign PLMN
network. This interface carries both data-plane and signaling-plane traffic for a
subscriber PDP session. It uses GTP-C for control signaling and GTP-U for user
data traffic.

 Gx interface - It is used to communicate with the PCRF and is based on
the Diameter protocol.
 Gy interface - This interface is used between the GGSN and the OCS. It is based
on the Diameter protocol and is used for prepaid billing.
 Ga/Gz interface - As per the 3GPP standard, the Gz interface sits between the
CTF (charging trigger function) and the CDF (charging data function). The CDF is a
proxy between the GGSN and the CGF. The interface between the CDF and the CGF is
known as the Ga interface.
 Gi interface - This interface is between the GGSN and the PDN. It routes the
traffic towards the PDN for the services offered within the PDN. This interface
carries both uplink and downlink subscriber data.
 DHCP interface - This interface goes towards the DHCP server. The GGSN
can use this interface if an external server is to be used for assigning IP addresses
to the subscribers.
 Gc interface - This interface goes towards the HLR via a GTP-MAP protocol
converter. It is used during the network-initiated PDP activation procedure.
 AAA - This interface goes towards the AAA server. It is based on the RADIUS
protocol and is used for authentication and accounting.

Top of Rack Vs. End of Row - Data-center Architecture

What is TOP OF RACK (TOR)?

In TOR, one or two access switches are installed at the top of each server
rack to provide the servers with network connectivity, and each access switch
then has connections towards the aggregation switch located in the
network rack. Hence only a few cables run from the server rack to the
network rack.

Advantage:-

 Cabling cost: - It reduces the cable requirement, as all server connections are
terminated within their own rack. Hence only a few cables run
between the server and network racks.

 Cable management: - Fewer resources and skills are needed to manage the
cabling infrastructure.

 Easy management and changes: - Since very few cables run
between the server and network racks, it is quite easy to locate a cable and make
changes.

Disadvantage:-

 Switch management: - As each rack requires one or two local switches,
management of the switches becomes an overhead. It requires not only extra IPs
but also a management tool that handles inventory and configuration of the
devices. Tools have their own limits on the maximum number of devices they can
monitor; more devices in the network means more license cost, etc.

 Network resources: - As there are more managed devices, more network
resources are required to manage the infrastructure.

 BW requirement: - This applies only to legacy environments where 10/40/100
Gig links are not present. As there are only a few uplinks from each access
switch, there can be issues with the available bandwidth.

 More rack space: - We require more rack space to install SAN and LAN switches
in the server racks. This in turn increases the overall rack requirement.

 More space in datacenters: - Space is a very critical and expensive criterion
in datacenter design, and we always try to make our DC compact and efficient.
As stated above, more rack space can increase the DC space requirement.

What is END OF ROW (EOR)?

In EOR, all the network switches are placed in the network rack only, and cables
from each server, located in the server racks, run towards the network rack.

Advantage:-

 Less device count: - As we need not install switches in each rack, the
number of required switches is reduced. In TOR, each rack must have a switch
whether the rack is fully loaded or not; EOR avoids this and reduces the device
count.

 Rack space: - As the overall device count is reduced, less rack space is
required.

 Cooling requirement: - Fewer devices in the datacenter means less cooling
requirement. It also reduces the electricity bills and the resources needed to
maintain the DC environment.

Disadvantage:-

 Inefficient Layer 2 traffic: - We all know that east-west traffic is greater
than north-south traffic. In the EOR design, if two servers in the same rack and
VLAN need to talk to each other, the traffic goes to the aggregation switch in
the network rack and then comes back, which reduces efficiency.

Similarly, in the case of TOR, such traffic could easily and efficiently be
switched by the local switch present in the server rack. That would not only
reduce the traffic on the uplinks but also save CPU and memory consumption on
the aggregation or core switches.

 Cable requirement: - As cables run between each server and the network switches,
located in different racks, the cable requirement increases, adding cost to the
deployment and maintenance.

 Cable management: - More resources and skills are required for cable
management. It increases the overall budget of the project.

 Time to make changes: - As more cabling infrastructure is involved,
modifications not only become tedious but also require more time.

VDC user Roles



Network-admin: - It exists only in the default VDC. A user with network-admin access

can perform all the chassis-level operations, like reload, creation/deletion
of VDCs, allocation of interfaces to non-default VDCs, etc.

A network-admin user uses the switchto vdc vdc_name command to access other

non-default VDCs from the default VDC. Network-admin maps to the vdc-admin role in
the non-default VDC.

We can configure more than one network-admin user, but as per the
recommendation the number should be kept to a minimum.

Network-operator: - Exists only in the default VDC. A network-operator user can

access non-default VDCs using the switchto command from the default VDC and will
have vdc-operator access in the non-default VDC.

A user in this role can only view the configuration and will not be able to make any
changes.

VDC-ADMIN: - A vdc-admin user can make configurations within the VDC. Vdc-admin

and network-admin users can create, delete, or modify user accounts within the VDC.

Vdc-admin can change the configuration of its own VDC; it cannot make any
changes in other VDCs or to physical-level operations like reload, etc.

We can also assign the vdc-admin role to a user within the default VDC. By doing
so, we can restrict the user's access to the default VDC only; he will not be able
to make any changes in other non-default VDCs.

VDC-Operator: - It provides read-only access limited to the VDC only;

hence a vdc-operator user cannot make any configuration changes.
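
For illustration, assigning a role to a local user and moving between VDCs might look like this (the username, password, and VDC name are placeholders):

! Create a user restricted to vdc-admin rights
switch(config)# username dcops password S3cur3Pass role vdc-admin
! From the default VDC, a network-admin can hop into a non-default VDC and back
switch# switchto vdc Prod
switch-Prod# switchback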

Nexus 7000 License


1. Enterprise Services Package - LAN_ENTERPRISE_SERVICES_PKG
- To enable routing protocols like BGP, OSPF, EIGRP, etc.
2. Advanced Services Package:- LAN_ADVANCED_SERVICES_PKG
- Without it, only the one default VDC can be in use. By
installing the Advanced Services license, 4 VDCs can be created on
SUP1/SUP2 and SUP2E.
In the case of SUP-2E, we need additional VDC licenses to support
eight VDCs.
3. Transport Services Package:- LAN_TRANSPORT_SERVICES_PKG
- To enable OTV and LISP.
4. Scalable Services Package:- SCALABLE_SERVICES_PKG
- A single license per system enables all XL-capable I/O
modules to operate in XL mode.
5. Enhanced Layer 2 Package:- ENHANCED_LAYER2_PKG
- To enable FabricPath on F modules.
6. MPLS Services Package:- MPLS_PKG
- It is used to enable advanced features like MPLS VPNs, EoMPLS,
etc.
7. Storage Enterprise Package:- STORAGE_ENT
- It is required to enable IVR.
8. FCoE Services Package:- FCOE_PKG
It is the only license which is enabled on a per-module basis.
There are two different licenses for the F1 and F2 modules:
FCOE_PKG - For F1 cards
FCOE_F2 - For F2 series
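
For reference, installing and verifying a license generally follows this pattern (the server, path, and file name below are placeholders):

! Copy the license file to bootflash and install it
switch# copy scp://admin@10.1.1.10/licenses/n7k_lan.lic bootflash:
switch# install license bootflash:n7k_lan.lic
! Check which features each installed license is enabling
switch# show license usage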

Why do we need the Nexus 2K (FEX)?


To understand the need for the Nexus 2000, we must know the datacenter
architecture designs.

There are two types of design architecture:-

1. TOR (Top of Rack)

2. EOR (End of Row)

Each of the above methods has its own pros and cons. Please go through the below blog
to find more details about the methods.

http://netterrene.blogspot.in/2014/09/top-of-rack-vs-end-of-row.html

Below are the disadvantages of both designs:-

TOR (Top of Rack):-

Disadvantage:-
 Switch management: - As each rack requires one or two switches, management
of the switches becomes an overhead. It requires not only extra IPs but also
configuration of a management tool, which has its own limit on the maximum
number of devices it can monitor. More devices in the network means more
license cost, etc.

EOR (End of Row):-

Disadvantage:-
 Cable requirement: - As cables run between each server and the network switches,
located in different racks, the cable requirement increases, adding cost to the
deployment and maintenance.
 Cable management: - More resources and skills are required for cable
management. It increases the overall budget of the project.
 Time to make changes: - As more cabling infrastructure is involved,
modifications not only become tedious but also require more time.

The N2K not only increases the number of access ports for end-host connections but also
addresses the major disadvantages of both TOR and EOR, as discussed below:-

1. Unlike EOR, it reduces the number of cables between the network and server racks,
as there are only a few uplinks between the 2K and its parent switch, i.e., the 5K/7K.
Fewer cables mean lower cable-management and procurement costs. This in turn
increases efficiency.
2. The Cisco Nexus 2000 cannot work standalone. It needs either an N5K or an N7K as the
parent, and hence it reduces the management burden, unlike TOR. Less management
requires fewer IP addresses and network resources, as well as fewer inventory and
configuration management server licenses.

Apart from the above advantages, the Cisco 2K has a few disadvantages as well, which
are mentioned below:-
1. It does not perform local switching. Two servers connected to the same FEX cannot
communicate directly: traffic from server-1 will go to the parent switch, i.e., the
5K/7K, and then come back to server-2 connected to the same FEX. A basic FEX
bring-up is sketched below.
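
As a minimal sketch on an N5K parent (the FEX number 100 and the fabric interface are example values):

! Enable FEX support and define the fabric extender
N5k(config)# feature fex
N5k(config)# fex 100
N5k(config-fex)# description rack-10-top
! Bind the uplink toward the FEX
N5k(config)# interface ethernet 1/1
N5k(config-if)# switchport mode fex-fabric
N5k(config-if)# fex associate 100
! Host ports then appear as ethernet100/1/1 ... on the parent switch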

OTV FAQs

1. Can the OTV VDC be configured with an SVI of an extended VLAN?

Answer:- No, the OTV VDC cannot have SVIs of the extended VLANs.

2. Is OTV supported on all series of line cards?

Answer:- No, OTV is not supported on F1, F2, or F2e. It is only

supported on M-series and F3 line cards.

3. Does OTV advertise MAC addresses?

Answer: - Yes. Unlike FabricPath, OTV advertises MAC addresses.

4. What is the size of the OTV header?

Answer: 42 bytes

5. How is the authoritative edge device role negotiated?

Answer:- The edge device with the lower system ID becomes
authoritative for all even extended VLANs, and the edge device
with the higher system ID is elected for all odd VLANs.

6. What are the CoS and DSCP values of OTV control packets?

Answer:- CoS = 6 / DSCP = 48

7. Can multiple overlay interfaces share the same join interface?

Answer:- Yes, one join interface can be shared between
multiple overlay interfaces.

8. How many overlay interfaces can be configured on the edge devices?

Answer:- A maximum of 10 overlay interfaces can be configured.



9. How many sites can be paired in OTV?

Answer:- A maximum of 6 sites can be configured.

10. How many edge devices per site can exist?

Answer:- A maximum of two edge devices can be configured per site.

11. How many VLANs can be extended via OTV?

Answer:- A maximum of 256 VLANs can be extended.

12. What license is required for OTV?

Answer:- The Transport Services license.

13. Can we configure a loopback interface as the join interface?

Answer:- No, only a physical interface, sub-interface, port-
channel, or port-channel sub-interface can be configured as the
join interface.

SVIs and loopback interfaces cannot be configured as the join interface.

15. Can we configure a 1 Gig port as the join interface?

Answer:- Yes, there is no restriction requiring 10 Gig.

16. Does OTV support fragmentation?

Answer:- No, fragmentation and reassembly are not supported in
OTV. All control and data traffic is sent with the DF bit set.
OTV adds a 42-byte header to the IP packet.

17. Are STP BPDUs sent across the OTV link by default?

Answer: - No, STP BPDUs are blocked by default.

18. Is unknown unicast sent across the OTV link?

Answer:- No, it is also not permitted to cross the OTV link. OTV

assumes that there are no silent hosts in the environment.
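
Putting several of these answers together, a minimal OTV edge-device sketch could look like this (a multicast-enabled transport is assumed; the VLANs, groups, and join interface are example values, and the Transport Services license from question 12 is required):

! Enable OTV and identify the site
switch(config)# feature otv
switch(config)# otv site-identifier 0x1
switch(config)# otv site-vlan 99
! Build the overlay on top of the join interface
switch(config)# interface Overlay1
switch(config-if-overlay)# otv join-interface Ethernet1/1
switch(config-if-overlay)# otv control-group 239.1.1.1
switch(config-if-overlay)# otv data-group 232.1.1.0/28
switch(config-if-overlay)# otv extend-vlan 100-150
switch(config-if-overlay)# no shutdown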

What is Fabric module in Cisco Nexus 7000 switches ?


Fabric modules provide connectivity between the supervisor module and the line cards.
A 7K chassis supports up to 5 fabric cards.

Fabric cards must be present in all 7K Nexus switches for them to work, except
the 7004, as it does not support fabric cards.

Fabric cards are hot swappable; we can remove one from the chassis,
and the other fabric cards will take over without any impact to the traffic.

There are two types of fabric cards available, listed below. The migration from FAB-1
to FAB-2 is non-disruptive, but keeping both in the chassis for a long duration is not
recommended by Cisco.

Fabric module version 1:-

 46 Gbps per slot.

 Maximum performance per slot with 5 fabric modules is 46 * 5 = 230 Gbps.
 Not supported in the 7009 chassis.
 Up to 5 fabric modules are supported.

Fabric module version 2:-

 110 Gbps per slot.

 Maximum performance per slot with 5 fabric modules is 110 * 5 = 550 Gbps.
 Supported on all 7K series.
 Up to 5 fabric modules are supported.

Cisco Nexus 7000 Supervisor module comparison - Sup1 Vs Sup2 Vs Sup2E
1. Supervisor 1:-
 Before 6.2, a maximum of 4 VDCs (3 non-default + 1 default)
are supported.
 In 6.2(2), SUP-1 also supports the admin VDC. It gives you
the option to create either 1 x default VDC and 3 x non-default
VDCs, or 1 x admin VDC and 4 x non-default VDCs.
 Maximum 32 FEX are supported.
 CMP supported.
 CPU - dual-core Xeon
 Speed - 1.66 GHz
 Memory - It comes with 4 GB RAM; an upgrade to 8 GB is
needed for MPLS and VDC features.
 CPU share not supported.
2. Supervisor 2:-
 Maximum 4 + 1 admin VDCs supported. In the initial
configuration wizard, we get an option to create the admin VDC.
If we choose NO, then we can create 1 x default VDC and 3 x
non-default VDCs.
 Maximum 32 FEX are supported.

 CMP is not supported.
 CPU - quad-core
 Speed - 2.13 GHz
 Memory - 8 GB
 CPU share is supported.
3. Supervisor 2E:-
 Maximum 8 + 1 admin VDCs supported.
 Maximum 64 FEX are supported.
 CMP is not supported.
 CPU - dual quad-core
 Speed - 2.13 GHz
 Memory - 32 GB
 CPU share is supported.

Note:-
1. A license, LAN_ADVANCED_SERVICES_PKG (N7K-ADV1K9), is
needed to create more than one VDC, up to 4 VDCs. Without the
license, you can only use VDC 1 (admin or default, whichever is
chosen in the initial wizard).
2. For SUP-2E, the "VDC Licenses (N7K-VDC1K9)" license is
needed to add licenses for 4 more VDCs and hence support 8
VDCs. Each license increments the VDC count by 4.
3. CPU share is the way by which we can allocate specific CPU
resources to the important VDCs.

Cisco Nexus 7000 Model comparison.



7004:-

 Fabric module is not present.

 SUP1 is not supported; only SUP2 and SUP2E are supported.

 All XL versions of M1-series modules, M2-series modules, and F2-series modules
are supported. It does not support the F1-series module or non-XL M1-series modules.

 Maximum 2 line cards supported, with 2 dedicated supervisor slots which cannot
be used for line cards.

 Maximum BW per slot is 440 Gig.

 Throughput - more than 1.92 Tbps.

 Supervisor module slots - 1 and 2

7009:-

 Only FAB-2 supported.

 All supervisors and line cards supported.

 Maximum 7 line cards supported, with 2 dedicated supervisor slots.

 Maximum BW per slot is 550 Gig.

 Throughput - more than 8 Tbps.

 Rack space - 14 RU

 Supervisor module slots - 1 and 2

7010:-

 Maximum 8 line cards supported, with 2 dedicated supervisor slots.

 All supervisors, fabric modules, and line cards supported.

 More than 15 Tbps throughput.

 Rack space - 21 RU

 Maximum BW per slot is 550 Gig.

 Supervisor module slots - 5 and 6

7018:-

 All supervisors, fabric modules, and line cards supported.

 Maximum 16 line cards supported, with 2 dedicated supervisor slots.

 More than 15 Tbps throughput.

 Rack space - 25 RU

 Maximum BW per slot is 550 Gig.

 Supervisor module slots - 9 and 10

