DC Interview
DC Interview
DC Interview
Contributor –
Vijay Pandey
Official Cisco Engineers.
Nexus 7000
Q. Is the FAB1 module supported with SUP2 or SUP2E?
A. Yes, supported with both supervisors.
[8/14/2014] update: The F2e and F3 (12 port 40GE) modules can
interoperate with the M-series in the same VDC.
Q. Does the Nexus 7000 support native Fibre Channel (FC) ports?
A. No, FC ports are not supported on the Nexus 7000. You would need
either the Nexus 5500 or the MDS 9000 to get FC support.
9. Can we create port-channel with one M-card port and other in F-card port?
Answer:- No, it is not possible to bundle M-series and F-port.
10. Is it possible to create port-channel with M-series on one end an other end
is F card?
Answer:- We cannot make port-channel with M port at one end and F at other
side.
What is VPC
A Virtual port channel (VPC) allows you to bundle physical links that are connected to two different chassis (Nexus 7000 /
5000). This creates redundancy and increase bandwidth. A big advantage of using VPC is that you have redundancy without
the using of spanning-tree, a port-channel covers faster from a link failure than spanning-tree.
Allows a single device to use a port channel across two upstream devices.
Eliminates Spanning Tree Protocol (STP) blocked ports.
Provides a loop-free topology.
Uses all available uplink bandwidth.
Provides fast convergence if either the link or a device fails.
Provides link-level resiliency.
Assures high availability.
vPC—The combined port channel between the vPC peer devices and the downstream device.
vPC peer device—One of a pair of devices that are connected with the special port channel known as the vPC
peer link.
vPC peer link—The link used to synchronize states between the vPC peer devices. Both ends must be on 10-
Gigabit Ethernet interfaces.
vPC domain—This domain includes both vPC peer devices, the vPC peer-keepalive link, and all of the port
channels in the vPC connected to the downstream devices. It is also associated to the configuration mode that
you must use to assign vPC global parameters.
vPC peer-keepalive link—The peer-keepalive link monitors the vitality of a vPC peer.
Nexus01:
Nexus02#config t
Nexus02(config)# feature vpc
Nexus02(config)#
Nexus02(config)# vpc domain 1
Nexus02(config-vpc-domain)# peer-keepalive destination 10.10.10.101
! The management VRF will be used by default
vPC domain id : 1
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status: success
vPC role : primary
Introduction
Unlike traditional Catalyst switches running IOS, Nexus switches run NX-OS. There are
some similarity between IOS and NX-OS. Also there are new features and commands
introduced in NX-OS.
In regards of CLI commands, there are several new commands on Nexus NX-OS
image. There are also old commands you find on regular IOS image, and there are
modified command compared to the regular IOS. Legacy command such as write
memory is not supported anymore, therefore you have to get used to the copy running-
config startup-config command.
A nice feature in Nexus switch is that you don't have to exit configuration mode to type
in any non-configuration commands. You don't type in the do command when you are
on configuration mode to type in any non-configuration commands. You simply type in
the non-configuration commands directly whether you are on regular enable mode or
configuration mode, similar to PIX Firewall or ASA.
All switch ports in Nexus switches only support 1 Gbps and 10 Gbps speed.
Interestingly, these gigabit ports do not show as GigabitEthernet ports or
TenGigabitEthernet ports on switch configurations. Instead the ports show as Ethernet
interfaces. To find out which speed the ports are acting current, you can simply issue
the good old show interface status or simply show interface command.
Along with new commands and features, there are several new concept and technology
in place. One new technology found in Nexus switch is FEX (Fabric Extender). Typically
you use this FEX technology when you have Nexus 2000 and Nexus 5000
interconnectivity.
This FEX technology is similar to the Catalyst 3750 stacking technology where switch
configuration within the same "stack" is visible through just one switch. Similar to
Catalyst 3750 stack switch configuration, the Nexus 5000 shows as the "module 1" and
the Nexus 2000 shows as the "module 2".
Unlike Catalyst 3750 stack switch, the Nexus do not use stack cable. The switch port to
interconnect the two Nexus switches are SFP slot. In order to interconnect the two
Nexus switches, the switch ports are configured as FEX ports instead of regular trunk or
access ports.
To start using this FEX feature, you have to activate FEX on the Nexus 5000. As you
will see, you have to activate telnet and tacacs+ should your network need to use those
as well. In other words, there are some features that you have to active when you plan
to use the features as part of your Nexus switch network topology.
Further, you have to define how the Nexus 2000 port number should look like. If let's
say you configure the FEX port as FEX 101, then all Nexus 2000 switch port will show
Internal
as interface Ethernet 101 (the "module 2") while the Nexus 5000 switch port show as
the regular interface Ethernet 1 (the "module 1").
Note that there is no console port on Nexus 2000. There is console port however on
Nexus 5000. Therefore you need to use the FEX technology to interconnect Nexus
2000 and Nexus 5000 in order to have console access to Nexus 2000.
When you need to use the management port on the Nexus 5000 (and also Supervisor
6E of Catalyst 4500 series), make sure you have at least some familiarity with VRF
(VPN Routing and Forwarding) technology since these management ports are using
involving VRF.
You can't disable the VRF or make the management (mgmt) interface as part of default
VRF or global routing table since such action is not supported. The idea of having
management port in different routing table is to separate management network and
production network, in addition to integrate VRF into Nexus switch platform and new
Catalyst 4500 Supervisor Engines.
You will notice that there is a little difference in VRF command implementation between
traditional IOS and NX-OS. You can also put in subnet mask in CIDR format, since
Nexus platform saves any IP address info in CIDR format.
Unlike traditional Catalyst switches that come with default Layer-2/3 VLAN 1, Nexus
5000 switches only come with default Layer-2 VLAN 1. If you are considering of using
non-management switch port as your customized management port, it might not work.
Note that Nexus 5000 and 2000 switches are designed as Layer-2 switches originally.
The Layer-2 switch design means that you can't create Layer-3 VLAN on Nexus
switches as management VLAN (i.e. SVI VLAN interfaces 1, 50, or else) like you usually
expect in traditional Catalyst switches. You can't convert any non-management switch
port as routing port either. In other words, there is no choice but to use the mgmt port
and get used to VRF environment when you are not used to it yet.
After certain NX-OS releases, the Nexus 5000 switches are now Layer-3 capable
though the 2000 model remains Layer-2 switch. You may need to upgrade the NX-OS
image and/or upgrade the license on the 5000 model in order to support this Layer-3
functionality.
Some management command like backing up your Nexus configuration to TFTP server
(copy running-config tftp: command) is also considering VRF. With copy running-config
tftp: command, you will be asked if the TFTP server is located within the default VRF or
else (like the management VRF).
The Nexus 5000 is for those of us migrating and needing to protect investment in 100 M
and 1 Gbps ports. It allows Top of Rack consolidation of cabling. Thats the distributed
Internal
switch model mentioned above. Its a way to buy equipment that may be used in other
ways going forward, but that supports your current tangle of 1 Gbps connections.
Bear in mind there are some other uses, which may make up more of the N5K use
going forwards. Right now the Nexus 5000 provides a way to do Fiber Channel over
Ethernet (FCoE) or Data Center Bridging (DCB). So you can lose the server HBA and
use one (or two, for redundant) 10 G connections to carry both network and SAN traffic
up to the N5K. That requires the special 10 G NIC, a Converged Network Adapter or
CNA.
The current approach is for you to then split out the data and SAN traffic to go to
Ethernet or SAN switches (or FC-attached storage). In the future, your traffic may be all
FCoE until reaching a device where the FC device is attached (or perhaps with FCoE
that handles management plus SAN traffic?).Thats a pretty straight-forward use.
You can also configure your FCoE configured N5K to do Network Port Virtualization, or
NPV. This is a per-switch choice, you use either Fabric or NPV mode. When the switch
is in NPV mode, it does not acquire a domain ID (a limited resource). Instead, it relays
SAN traffic to the core switch, in effect extending the core switch. The N5K looks like a
host to the fabric. This helps the fabric scale better, and makes the N5K transparent
(nearly invisible) as far as the fabric. There's a theme here: fewer boxes to configure,
thats a Good Thing!
For those doing blade servers and VMware, the Nexus 1000v virtual switch allows
aggregation onto UCS chassis 10 Gbps links instead of many separate 1 Gbps links.
The VN-Link feature allows internal logical (1000v) or (future) external physical tracking
on a per-VM (virtual machine) basis. I currently understand physical VN-Link as a tag on
the media from a VN-Link capable NIC or driver, tied to logical apparatus to have
configuration track VN-Link virtual interfaces on the N5K. The reason to do this: offload
the 1000v processing to external hardware.
VN-Link reference
The Nexus 5000 (N5K), as well as the Nexus 7000 (N7K) both support Virtual Port
Channel. Think 6500 VSS but the two "brains" (control planes) stay active. Or think
PortChannel (LACP) that terminates in two separate switches, with the other end of it
none the wiser. There is a fairly tight vPC limit on the N5K right now.
There are also some gotchas and design situations to avoid, e.g. mixing non-vPC and
vPC VLANs using the same vPC link between switches.That is, if you have VLANs that
aren't doing vPC PortChannel uplinks, you'll want a separate link between the
distribution switches the uplinks go to. Similarly in some L3 FHRP (HSRP, VRRP,
GLBP) routing situations. The issue is traffic that comes up the "wrong" side and goes
across the vPC peer link cannot be forward out a vPC member link on the other
component of the vPC pair, which might happen in certain not-too-rare failure situations.
There are three fabric extender ("fex") devices available, typically for Top of Rack
("ToR") use. Use two (and two N5K's) for redundancy in each rack. See also:
The Nexus 2148 and 2248 are discussed below, under Gotchas. There is also the 2232
PP, which is 32 10 Gbps Fiber Channel over Etherent (FCoE) ports (SFP+) and 8 10 G
Ethernet / FCoE uplinks (SFP+). That's 4:1 oversubscribed, which isn't bad for current
server throughput and loading. If you want less oversubscription, you don't have to use
all the ports (or you can arrange things your way with port pinning, I assume). If you
want 1:1 oversubscription ("wire speed"), you'd probably run fiber right into the N5K,
unless you want to use the N2K as a costly 1:1 10 G ToR copper to fiber conversion
box.
Note those 10 G ports are FCoE ports. Right now, the N5K is the only Cisco switch I'm
aware of doing FCoE. The Nexus 2232 does so as an extension of the N5K.
Note that NPV and NPIV are basically some proxying in the N5K, so the N2K should
just act as server-facing FCoE ports for those functions.
The Nexus 5000 is by default Layer 2 only. That means any server VLAN to VLAN
traffic between two different VLANs will need to be routed by another box, probably your
Core and / or Aggregation Nexus 7000. You'll want some darn big pipes from the N5K
to the N7K, at least until the N5K can do local routing.
The Nexus 2000 does no local switching. Traffic from one port on a 2K to another on
Internal
the same 2K goes via the N5K. There should be enough bandwidth. Thats why the
Nexus 2000 is referred to as a fabric extender, not a switch.
The Nexus 2148 T is a Gigabit-only blade, 48 ports of Gig (not 10/100/1000) with up to
4 x 10 G fabric connections. Use the new 2248 TP if you need 100/1000 capability (the
data sheet does NOT list 10 Mbps).
You'll probably want to use PortChannel (LACP) for the fabric connections. Otherwise,
you're pinning ports to uplinks, and if the uplink fails, your ports pinned to it don't work;
probably like a module failure in a 6500. You can now do the PortChannel to two N5K's
running Virtual Port Channel (vPC). See the above link for some pictures.
If you attach to the fabric extender (fex, N2K), you can issue show platform software
redwood command. The sts, rate and loss keywords are particularly interesting. The
former shows a diagram, the latter show rates and oversubscription drops (or so it
appears). I like being able to see internal oversubscription drops without relying on
external SNMP tools; which usually show rates over relatively long periods of time, like
5 or more minutes, rather than milliseconds.
Putting a N5K into NPV mode reboots the switch and flushes its configuration. Be
careful!
I've got a couple of customers where the N5K/N2K have seemed appropriate. I thought
I'd briefly mention a couple of things that I noticed in trying to design using the boxes;
maybe fairly obvious, maybe a gotcha. I'd like to think the first story is a nice illustration
of how the N5K/N2K lets you do something you couldn't do before!
Case Study 1
The first customer situation is a site where various servers are in DMZ's of various
security levels. Instead of moving the servers to a physically separate data center
server zone, as appears to have been originally intended (big Nortel switches from a
few years back), they extended the various DMZ VLANs to the various physical server
zones using small Cisco switches with optical uplinks. That gear (especially the Nortel
switches) is getting rather old, and it's time to replace it.
For that, the N5K/N2K looks perfect. We can put one or a pair of N5K's in to replace the
big Nortel "DMZ overlay core" switches, and put N5K's out in the server zones (rows or
multi-row areas of racks). For redundancy, we can double everything up. Right now one
can make that work in a basic way, and it sounds like Cisco will fairly soon have some
nice VPC (Virtual Port Channel) features to minimize the amount of Spanning Tree in
such a dual N5K/N2K design, using Multi-Chassis EtherChannel (aka VPC). Neat stuff!
The way I'm thinking of this is as a distributed or "horizontally smeared" 6500 switch (or
Internal
switch pair). The N2K Fabric Extender (FEX) devices act like virtual blades. There's no
Spanning Tree Protocol (STP) running up to the N5K (good), and no local switching
(maybe not completely wonderful, but simple and unlikely to cause an STP loop). So the
N5K/N2K design is like a 6500 with the Sup in one zone and the blades spread across
others.
From that perspective, the 40 Gbps of uplinks per N2K FEX is roughly comparable to
current 6500 backplane speeds. So the "smeared 6500" analogy holds up in that
regard.
The sleeper in all this is that the 10 G optics aren't cheap. So doing say 10-12 zones of
40 G of uplink, times optics and possibly special multi-mode fiber (MMF) patch cords,
adds say 12 x ($2000) of cost, or $24,000 total. Certainly not a show-stopper, but
something to factor into your budget. If you're considering doing it with single-mode fiber
(SMF), the cost is a bit higher. On the other hand, that sort of distributed Layer 2 switch
is a large Spanning-Tree domain if you build it with prior technology.
Case Study 2
The second customer situation is a smaller shop, not that many servers but looking for a
good Top of Rack (ToR) solution going forward. The former Data Center space is
getting re-used (it was too blatantly empty?). And blade servers may eventually allow
them to fit all the servers into one or two blade server enclosures in one rack. Right now
were looking at something like 12 back-to-back racks of stuff, including switches.
For ToR, the 3560-E, 3750-E, 4900M, and N5K/N2K all come to mind. The alternative
solution that comes to mind is a collapsed core pair of 6500s. The cabling would be
messier, but the dual chassis approach would offer more growth potential, and a nice
big backplane (fabric).
The 3560-E and 3750-E have a 20 G of uplink per chassis limitation, not shabby, not
quite up to the 6500 capacity per blade. That's workable and not too limiting.
The issue is, what do you aggregate them into? A smaller 6500 chassis? In that case,
the alternatives are 6500 pair by themselves, or 6500's (maybe smaller) plus some
3560-E's or other small ToR switches, at some extra cost.
Or the N5K/N2K, one might think. The N5K/N2K is Layer 2 only right now, so you need
some way to route between the various server VLANs (gotcha!). Without Layer 3
availability, you still would need to connect the N5K/N2K's to something like 4900M's or
6500's, to get some pretty good Layer 3 switching performance between VLANs. Right
now, that external connection is either a pretty solid bottleneck, or you burn a lot of ports
doing 8 way or (future) 16 way EtherChannel off the N5K/N2K. Bzzzt! That starts feeling
rather klugey.
Some Conclusions
Internal
• The N5K/N2K right now seems to fit in better with a Nexus 7000 behind it. And I'd
much prefer local Layer 3 switching to maximize inter-VLAN switching performance.
• The initial set of Nexus line features are probably chosen for larger customers;
standalone Layer 3 N5K/N2K being something more attractive to a smaller site. And
smaller sites tend not to be early technology adopters.
• You can mitigate this to some extent by careful placement of servers in VLANs. On the
other hand, my read on current Data Center design is that the explosive growth in
numbers of servers and the need for flexibility have left "careful placement of servers" in
the historical dust. Nobody's got the time anymore.
Sample Configurations
»Cisco Forum FAQ »Sample Configuration: Nexus 5000 and Nexus 2000 with FEX
After previously dual-connecting one of the FEXes, we upgraded the NX-OS on the
N5Ks. As I recall, we upgraded N5K-2 first, then N5K-1. This is non-optimal, if N5K-1 is
the vPC primary as was the case.
When we updated N5K-2, as you might expect, N5K-2 downloaded a new image to its
connected FEX. When we upgraded N5K-1, it also downloaded the same image to its
connected FEX. This is the same FEX module, and each download of the image took
the FEX offline for 15 minutes or so.
Cisco documents state that the NX-OS software by design will allow an upgraded dual-
home FEX to interoperate with the vPC secondary switches running the original version
of Cisco NX-OS while the primary switch is running the upgrade version. You will have
to have some downtime to get the image loaded.
However, the documentation doesn't say anything about what happens when you first
upgrade the secondary N5K of dual-home FEX. My recommendation is not to do it, you
may need a second image download to the FEX.
All of the FEXes were supposed to be dual connected to both N5Ks. Due to timing
constraints and fiber availability, some FEX modules were left single connected for a
period of time. In this case, they had only been connected to N5K-2, the vPC secondary
Internal
Based on our experiences updating the image, we were not sure if connecting the
uplink to the N5K-1 would bring the FEX down while N5K-1 reloaded the image. I was
not able to verify from the Nexus documentation what would happen though Cisco
documentation recommends connecting the primary first. However, we did find that
when we brought up the never-previously connected link to the N5K-1, the FEX stayed
on line.
1 config t
2 slot 101
3 provision model N2K-C2248T
This allows you to pre-load the VLANs, speed, duplex, description etc for the host
interfaces before the FEX modules are connected. Note that you need to know what
type of FEX you have for this command since the N2K-C2248T is different than the
N2K-C2248TP-E-1GE, and is what you want when you have a model number N2K-
C2248TP.
point, someone connected the second uplink for FEX 101 to the N5K interface
configured as port-channel 102 (FEX 102 should have been placed there). However the
NX-OS noticed the mismatch, knew that FEX 101 was mis-cabled, alerted and left the
second N5Ks FEX offline, but did not shutdown the active FEX.
Internal
The Nexus 7000 is constantly evolving and there seems to be more and more design parameters
that have to be taken into consideration when designing Data Center networks with these
switches. I’m not going to go into each of the different areas from a technical standpoint, but
rather try and point out as many of those so called “gotchas” that need to be known upfront when
purchasing, designing, and deploying Nexus 7000 series switches.
Before we get started, here is a quick summary of current hardware on the market for the Nexus
7000.
1. Supervisor 1
2. Fabric Modules (FAB1, FAB2)
3. M1 Linecards (48 Port 10/100/1000, 48 Port 1G SFP, 32 Port 10G, 8 port 10G)
4. F1 Linecards (32 Port 1G/10G, F2 linecards, 48 Port 1G/10G)
5. Fabric Extenders (2148, 2224, 2248, 2232)
6. Chassis (7009, 7010, 7018)
Instead of writing about all of these design considerations, I thought I’d break it down into a Q &
A format, as that’s typically how I end up getting these questions anyway. I’ve ran into all of
these questions over the past few weeks (many more than once), so hopefully this will be a good
starting point, for myself as I tend to forget, and many others out there, to check compatibility
issues between the hardware, software, features, and licenses of the Nexus 7000. The goal is to
keep the answers short and to the point.
Question:
What are the throughput capabilities and differences of the two fabric modules (FAB1 &
FAB2)?
Answer:
It is important to note each chassis supports up to five (5) fabric modules. Each FAB1 has a
maximum throughput of 46Gbps/slot meaning the total per slot bandwidth available when there
are five (5) FAB1s in a single chassis would be 230Gbps. Each FAB2 has a maximum
throughput of 110Gbps/slot meaning the total per slot bandwidth available when there are five
(5) FAB2s in a single chassis would be 550Gbps. The next question goes into this a bit deeper
and how the MAXIMUM theoretical per slot bandwidth comes down based on which particular
linecards are being used. In other words, the max bandwidth per slot is really dependent on the
fabric connection of the linecard being used.
Question:
What is the maximum bandwidth capacity for each linecard and does it change when using
different Fabric Modules?
Answer:
FAB1 & FAB2 modules are both forward and backward compatible. For example, a FAB1 can
be deployed with the F2-48port module, but would have a maximum throughput of 230G/slot
versus the 480G/slot when using FAB2s.
Question:
What is the port to port latency of the various 7K linecards?
Answer:
N5K and N3K latency has also been included because many times if you need this info for
financial applications or any other latency sensitive app, the comparison usually ends up
expanding to include these platforms as well.
Note: from my research I’ve seen some conflicting information for latency for the M1 linecards.
I’ve seen general statements such as the M1 family has 9.5 µsec latency, whereas, I have seen
one document that stated 18 µsec as well. The one that had 18 was an older document, so I do
expect some of the M1 numbers could be off. If anyone has these, please feel free to share.
Question:
Which Nexus 7000 linecards can connect to a Fabric Extender (FEX)?
Answer::
Simply put, there are only two linecards that can connect to a FEX on a 7K. They are the 32 port
M1 and 48 port F2 linecards. If you don’t have one of these, get one, or use a 5K :).
Question:
Does the F1 linecard really not support Layer 3? How is that possible?
Answer:
This is an interesting one, but the short Answer: is no, the F1 linecard does not natively support
Layer 3. Okay, so what does that mean? First, it is important to note the 7K architecture is
different than that of other platforms such as the Catalyst 4500 or 6500. These other platforms
have a centralized (and distributed using DFCs on the 6500) forwarding architecture. Remember
Internal
the 6500 has a MSFC routing engine and PFC (where policies and FIB are located after they are
built) located on the Supervisor and then pushed to the DFCs should they exist on the linecards.
The notion of a centralized PFC goes away with the Nexus 7000 and it is based off a purely
distributed architecture – think the 6500 with DFCs without a centralized PFC.
So to answer that question in the most direct way, and comparing it to what was just described,
the F1 module does not have the capacity to locally store a distributed FIB table on the linecard.
The F1 was purposely built for advanced Layer 2 services including a technology called Fabric
Path.
Now, with all that being said, it is still POSSIBLE to run layer 3 in a chassis that has F1
alongside M1 linecards. There is a notion of proxy layer 3 forwarding that exists. This allows
SVIs to be created on the system (technically, that will exist on the M1s), assigned an IP address,
and then ports on the F1 to be assigned to that VLAN in order get “proxy L3 forwarding” for
hosts that directly connect to the F1. It is NOT possible to configure a routed port on any F1 port.
If you’re curious on how this happening, the F1 linecard is building a Port-Channel within the
“backplane” that will connect to up to sixteen (16) forwarding engines on M1 cards. This means
the maximum capacity for proxy layer 3 forwarding is 160G because each forwarding engine
maps back to a forwarding/port ASIC on the M1 linecards. Sometimes, when I describe this, I’ll
also use the analogy of thinking about the F1 as an IDF switch and M1s as a Core switch, so
even though your IDF switch is L2 only, you can still route by getting switched back to the Core.
Note: it is possible to explicitly configure which forwarding engine’s should be used for F1
linecards.
Question:
What are the design options available when connecting a Nexus 2000 to Nexus 7000s?
Answer:
I’m going to keep this short on the preferred method as it is now supported as of NX-OS 5.2. If
you have 2 x 7Ks, 2 x 2Ks, and a server, the recommended design would be connect your server
to both 2Ks. Each 2K would connect to ONE 7K. That is very important; you CANNOT dual-
home a 2K and connect it to 2 separate 7Ks. I’ve heard this may NEVER be supported. It doesn’t
seem logical, but you don’t lose anything single-homing the 2K to 7K. Should any link or device
go down, half of the bandwidth still remains up. Prior to 5.2, LACP was not supported between
the 2Ks and server and basic active/standby NIC teaming was required.
Question:
Can different linecard types (M1, F1, F2) exist in the same chassis? If so, what functionality
is gained or lost?
Answer:
Major design caveat: F2 linecards require a dedicated VDC if they are in the same chassis as
ANY other M1 or F1 linecard. This one isn’t pretty, but it is what it is for now. This means
you’ll need the Advanced Services License to enable the VDC functionality as well.
For other critical design caveats, please take note of the following: Mixing and matching F1 and
M1 in the same chassis works fine. The biggest caveat is the L2/L3 support issue as described
above. Remember, F1 cards do NOT support L3 routed ports and only support L3 via proxy
routing through M1 linecards in the chassis. In large data centers, other details need to be
Internal
examined such as the MAC table size. The supported MAC table sizes are different on each card
ranging from 16k to 256k MACs, so should the data center be increasing in size by means of
virtualization adding 100s to 1000s of VMs, this should be examined a bit further.
Question:
Does the Nexus 7000 support MPLS? If so, are there any restrictions on software and
hardware?
Answer:
Yes. The Nexus 7000 supports MPLS as of NX-OS 5.2(1) with an M1 linecard. Note: the MPLS
license is also required. F1/F2 modules DO NOT support MPLS.
Question:
What software, hardware, and licenses are required in a Nexus 7000 OTV deployment?
Answer:
OTV was introduced in 5.0(3). Any M1 linecard and the Transport Services Package license is
required. F1/F2 modules DO NOT support OTV.
Note: based on the low level design of OTV, the Advanced Services License may be required to
enable VDCs to support the OTV deployment.
Question:
What software, hardware, and licenses are required in a Nexus 7000 LISP deployment?
Answer:
LISP was introduced in 5.2(1). It requires the 32 Port M1 linecard(s) and the Transport Services
Package license. Other M1 modules and F1/F2 modules DO NOT support LISP.
Question:
What software, hardware, and licenses are required in a Nexus 7000 FCOE deployment?
Answer:
FCOE for the 7K was introduced in 5.2(1). 32 Port F1 linecard(s) are required; 48 port F2 will
support FCOE sometime in 2012.
Note: One or more licenses are required for FCOE on the 7K. The FCOE license is required per
linecard which inherently offers the ability to create the storage VDC (that is required), while the
SAN License is required for the system Inter-VSAN routing and fabric binding. The Advanced
Services License that normally enables VDCs is not required.
Question:
What software, hardware, and licenses are required in a Nexus 7000 FabricPath
deployment?
Answer:
FabricPath was introduced in 5.1(1). It requires the use of either the F1 or F2 module along with
the Enhanced Layer 2 Package License.
Note: While both F1 and F2 modules can run Fabric Path, they cannot be in the same VDC, so
one would probably choose one or the other for a FP deployment.
Question:
Are special racks needed for the Nexus 7000 switches?
Answer:
Four (4) post racks are required for the 7010 chassis and 7018 chassis. There are several racks
that were purpose built for the 7K, but are not “required.” These are documented further in the
Internal
hardware installation guide for the 7K. Note: if not using the purpose built racks, be sure to
measure the depth required for the N7K. I have ran into situations where the depth was too short
and the rack needed to be extended out further delaying the deployment process. Not fun!
The 7009 was built to ease migrations from 6509s, so a 2-post rack works quite well for these, as
it is the same exact form factor as the 6509/E.
Q. What are the differences between M and F series line cards? What are the differences in F1, F2,
F2e and F3 cards?
A. The initial series of line cards launched by cisco for Nexus 7k series switches were M1 and F1. M1
series line cards are basicaly used for all major layer 3 operations like MPLS, OTV, routing etc,however,
the F1 series line cards are basically layer 2 cards and used for for FEX, FabricPath, FCoE etc. If there is
only F1 card in your chassis, then you can not achieve layer 3 routing. You need to have a M1 card
installed in chassis so that F1 card can send the traffic to M1 card for proxy routing. The fabric capacity of
M1 line card is 80 Gbps. Since F1 line card dont have L3 functionality, that means you can not use same
interface in L3 mode. They are provide a fabric capacity of 230 Gbps.
Later cisco released M2 and F2 series of line cards. A F2 series line card can also do basic Layer 3
functions means you can use interface in L3 mode,however,can not be used for advance L3 feature like
OTV or MPLS. M2 line card's fabric capacity is 240 Gbps while F2 series line cards have fabric capacity
of 480 Gbps.
The problem with F2 card is that they can not be installed in same vdc with any other card.F2 card has to
be in its own vdc.
So, to resolve that, Cisco introduced F2E line cards which can be used with other M series line cards in
same VDC. It supports layer 3 but if it is alone in a single vdc. If it is being used with another card, it
supports (unlike F2) but then it can be used in L2 mode only.
So, finaly cisco launched F3 cards which are full L3 card. Support all advance layer 3 feature like otv,
mpls etc. can be mixed with other cards in same vdc in L2 or L3 mode.
Q. Can we connect a Nexus 2k or FEX to two parent switches or it can be controlled or connected
by only one switch?
A. Yes, we can connect a fex to two parent switches,however, only 5ks. we CANNOT connect a nexus 2k
to two Nexus 7Ks. This is dual-homed FEX design and it is supported.
Q. What is VDC
A. Explained in detail: VDC Overview
Q. What are the difference between vPC-peer link and vPC keep-alive link?
A. vPC-peer link is a layer 2 link that is used to check the consistancy parameters,states and config sync
and traffic flow(in some cases only). vpc keep-alive link is L3 reachability which is used to check the peer
status and role negotiation. Role negotiation happens at the initial stage only. vpc keep-alive link must be
setup first in order to bring vpc up. vPC peer link will not come up unless the peer-keepalive link is
already up and running.
Q. On a Nexus 7k, when trying to perform a 'no shut' on Ethernet 1/3,the ERROR: Ethernet1/3:
Config not allowed, as first port in the port-grp is dedicated error message is received.
"ERROR: Ethernet1/4: Config not allowed, as first port in the port−grp is dedicated"
To understand this, we need to understand what is port-group? Below is the image of N7K-M132XP-12
ine card. This line card has 32 ports and all are 10 Gig port. So what does that mean? Does it mean thate
ach one of them is a 10 Gig port and we can have all of these 32 ports connected at the same time and
we should be able to get 320 Gbps speed? Not exactly...!!
Yes, they are 10 Gig ports,HOWEVER, that 10 Gig is shared among 4 ports in a group. That group is
basically all the ports on same hardware ASIC.
So, being said that N7K-M132XP-12 has 32 10G ports, it means that each port-group (group of 4 ports for
this line card) share 10G speed among themselves. YES!! that is correct. All ports dont get 10G
dedicated bandwidth. So, the total capacity of the card is 80G, not 320 (as we were expecting) as there
can be 8 port-grp of 4 ports each. This is designed on the concept that "Chances are less that all devices
are sending data at the same time". So, 1,3,5,7 will be in same port-grp and similary 2,4,6,8 and so on...!!
So, 4 ports in a group will share the total available bandwidth of 10G.
Internal
What if we have requirement for some critical application that we need dedicated bandwidth of 10 G? In
that case, first port of a port-group can be put into "DEDICATED" mode and that port will always be the
first one of the group..ie. marked in yellow as shown in above pic. So, 1,2,9,10,17,18,25,26 can be put
into dedicated mode and if you have put a port in a port-grp into dedicated mode, all other 3 ports in that
group will get disabled. You can not configure them. If you have put Eth1/2 into dedicated mode, and if
you try to configure Eth1/4 then you will get : "ERROR: Ethernet1/4: Config not allowed, as first port in the
port−grp is dedicated"
Shared mode is the default mode. Command to configure port into dedicated mode is:
We first need to shutdown the port
N7K# config t
N7K(config)#interface Eth1/2
N7K(config-if)#rate-mode dedicated
Both are used basically to support multi-chassis ether-channel that means we can create a port-channel
whose one end is device A,however, another end is physically connected to 2 different physical switches
which logically appears to be one switch.
-vPC is Nexus switch specific feature,however,VSS is created using 6500 series switches
-Once switches are configured in VSS, they get merged logicaly and become one logical switch from
control plane point of view that means single control plane is controlling both the switches in active
standby manner ,however, when we put nexus switches into vPC, their control plane are still separate.
Both devices are controlled individually by their respective SUP and they are loosely coupled with each
other.
Internal
-In VSS, only one logical switch has be managed from management and configuration point of view. That
means, when the switches are put into VSS, now, there is only one IP which is used to access the switch.
They are not managed as separate switches and all configuration are done on active switch. They are
managed similar to what we do in stack in 3750 switches,however, in vPC, the switches are managed
separately. That means both switches will have separate IP by which they can be accessed,monitored
and managed. Virtually they will appear a single logical switch from port-channel point of view only to
downstream devices.
-As i said, VSS is single management and single configuration, we can not use them for HSRP active and
standby purpose because they are no longer 2 seperate boxes. Infact HSRP is not needed, right?
one single IP can be given to L3 interface and that can be used as gateway for the devices in that
particular vlan and we will still have redundancy as being same ip assigned on a group of 2 switches. If
one switch fails, another can take over.,however, in vPC as i mentioned above devices are separately
configured and managed, we need to configure gateway redundancy same as in traditional manner.
For example: We have 2 switches in above diagram. Switch A and B, when we put them in VSS, they will
be accessed by a single logical name say X and if all are Gig ports then interfaces will be seen as
GigA\0\1, GigA\0\2....GigB\0\1,GigB\0\2 and so on...
however,if these are configured in vPC, then they will NOT be accessed with single logical name. They
will be accessed/managed separately. Means, switch A will have its own port only and so on B.
-Similary, in VSS same instances of stp,fhrp,igp,bgp etc will be used,however, in vPC there will be
separate control plane instances for stp,fhrp,igp,bgp just like they are being used in two different switches
-in VSS, the switches are always primary and secondary in all aspects and one switch will work as active
and another as standby,however, in vPC they will be elected as primary and secondary from virtual port-
channel point of view and for all other things,they work individualy and their role of being
primary/secondary regarding vpc is also not true active standby scenario,however, it is for some
particular failure situation only. For example, if peer-link goes down in vpc, then only secondary switch will
act and bring down vpc for all its member ports.
Internal
-VSS can support L3 port-channels across multiple chassis,however, vpc is used for L2 port-channels
only.
-VSS supports both PAgP and LACP,however, VPC only supports LACP.
-In VSS, Control messages and Data frames flow between active and standby via VSL,however, in
VPC,Control messages are carried by CFS over Peer Link and a Peer keepalive link is used to check
heartbeats and detect dual-active condition.
I hope this was helpful. I will keep adding more as i experience more.Thank you!!
---------------------------------------------------------------
---------------------------------------------------------------
As we know that a nexus 2k switch or FEX is connected to its parent Nexus 5k over fex links.
One Fex (2k) can be dual homed to two Nexus 5k switches. and when a nexus 2k is connected to Nexus
5k, a unique fex associate number is assigned to that particular 2k to identify it uniquely.
So, i had four nexus 2k switches whose serial numbers are JAX1122AAA,MLX1122BBB,
PQR3344DDD and LMN2244CCC. JAX1122AAA and ,MLX1122BBB are FEX switches for Nexus5k1.
and PQR3344DDD and LMN2244CCC are part of Nexus-5k-2. JAX1122AAA has been given FEX
associate number 103 and MLX1122BBB has been given 105,LMN2244CCC is assigned 102 and
PQR3344DDD is assigned 104. Each fex is connected to its parent switch via 4 fex links.
Idealy, all 4 fex links which are under same FEX ASSOCIATE NUMBER should be going to same
2k,however, one of our onsite engineer incorrectly cabled one of the fex link from 103 on Nexus-5k-1 to
another 2k which was part of FEX number 104 on Nexus-5k-2 and we started getting identity mismatch.
As you can see in above output,under FEX 105 on Nexus-5k-1, the Eth1/25 is
showing PQR3344DDD serial number,however, all other interfaces showing MLX1122BBB and vice
versa on Nexus-5k-2 for Eth1/26.
In order to verify cabling and make sure right fex or 2k is connected to correct parent 5k switch with
respective to its FEX associate number, we can use "show interface fex-fabric" command and verify the
same using serial number that all are correct switches.
once the cable were swapped, we started getting right serial number for Eth1/25.
---------------------------------------------------------------
First of all, all these 3 models are Nexus 5k Switches and basically 5500 series models.
"U" stands for "Unified" ports, so what does that "unified port" mean? Unified means a port is capable of
running into either "Ethernet" or "FC" (Fibre Channel).
For those who are not aware of SAN protocols, i would like to inform you that term "Fibre" here does not
mean the "Fiber" Media ( ie. copper vs fiber) which people refer in terms of cable, [ please note the
difference in spelling, Fibre vs Fiber).
Fibre Channel or FC is a protocol stack in SAN, similar to what TCP/IP is to Networks. SAN switches run
on FC protocol standards, not Ethernet or TCP/IP.(Just a highlevel overview)
So coming back to 5500 series models, all ports of 5548UP and 5596UP models of Nexus 5k, can be
used in ether Ethernet or FC mode,however, ports on 5548P do not work in FC mode. But the
****important thing to note is that this difference is valid for "In-built fixed" ports only******. That means,
both 5548P and 5548UP switch comes with 32-port "in-built"or Fixed ports, plus one expansion module
capable of 16 ports.
So, basicaly 5548P support Unified Port (Ethernet or native FC ) on the expansion module only,however,
in 5548UP, all ports are unified ports.
5596UP comes with built-in 48 Ports, plus we can use 3 expansion slots for additional ports depending
on our requirement.
Cisco's virtual device context or vdc is basically a concept of dividing a single Nexus 7000 hardware box
into multiple logical boxes in such a way that they look like different physical device to a remote
user/operator and each of the provisioned logical devices is configured and managed as if it were a
separate physical device.
For example, you have a data center where you have deployed Nexus 7k in datacenter. Now, there are
few other companies who don't have enough money to expend in setting up Nexus 7000 so they come to
you to host a data center for them. You can simply virtualize your nexus 7000 into multiple virtual
switches and can assign one logical portion(that is called vdc) to one company. When they will login to
their logical switch (looks like a separate physical switch to user) they can do whatever they want, other
logical partition i.e. other vdc will remain unaffected. You can create vlans with same name/number in all
vdc's and they will not interfere with each other. A particular vdc operator will not even come to know that
same switch is being used by multiple user virtually. Only Admin can create/delete vdc's and from Admin
vdc only, we can see other vdcs.
Similary, vdc can be used to create different test and production traffic. In my previous project, we created
one vdc for test environment in order to test new implementation/protocol etc and another vdc for
production traffic. If our test used to successful in our test environment, then only we used to put them
into production.
How many vdc we can create?? hmm...it depends which supervisor engine you are using.
-If you are using SUP1, then you can create upto 4 vdc's. All of them can be used to carry data traffic and
you can create/delete vdcs from default vdc which can also be used for data traffic.
-if you are using SUP2, then you can create 1 admin + 4 data vdc. That means, you can not use admin
vdc for data traffic. That will be used for only admin purpose i.e. managing other vdc's.
-if you are using SUP2E, then you can create 1+8 vdc, where 1 admin plus 8 production vdc.
Within VDC it can contain its own unique and independent set of VLANs and VRFs. Each VDC can have
assigned to it physical ports, thus allowing for the hardware data plane to be virtualized as well. Within
each VDC, a separate management domain can manage the VDC itself, thus allowing the management
plane itself to also be virtualized.
physical interfaces cannot be shared by multiple VDCs. This one-to-one assignment of physical interfaces
to VDCs is at the basis of complete isolation among the configured contexts. However, there are two
exceptions:
• The out-of-band management interface (mgmt0) can be used to manage all VDCs. Each VDC has its
own representation for mgmt0 with a unique IP address that can be used to send syslog, SNMP and
other management information.
• When a storage VDC is configured, a physical interface can belong to one VDC for Ethernet traffic and
to the storage VDC for FCoE traffic. Traffic entering the shared port is sent to the appropriate VDC
according to the frame's EtherType. Specifically, the storage VDC will get the traffic with EtherType
0x8914 for FCoE Initialization Protocol (FIP) and 0x8906 for FCoE.
Physical interfaces can be assigned to a VDC with a high degree of freedom. However, there are
differences among different I/O modules because of the way the VDC feature is enforced at the hardware
Internal
level. The easy way to learn the specific capabilities of the installed hardware is by entering the show
interface x/y capabilities command to see the port group associated with a particular interface.
Physical Interfaces, PortChannels, Bridge Domains and VLANs, HSRP and GLBP Group IDs, and SPAN
CPU*, Memory*, TCAM Resources such as the FIB, QoS, and Security ACLs
step 1 Log in to the default VDC with a username that has the network-admin role.
Step 2 Enter configuration mode and create the VDC using the default settings.
switch(config-vdc)#
similarly more interfaces can be assigned. below is the screenshot of a vdc configuration.
Initially, all physical interfaces belong to the default VDC (VDC 1). When you create a new VDC, the
Cisco NX-OS software creates the virtualized services for the VDC without allocating any physical
interfaces to it. After you create a new VDC, you can allocate a set of physical interfaces from the default
VDC to the new VDC.
Internal
The interface allocation is the most important part of vdc configuration. You can not assign ports of same
port-group to different vdc.If you are unable to assign any interface to particular vdc or some ports are
being automatically being assigned, then it could be port-grouping issue. Port group is basicaly how many
parts are on same hardware ASIC. So, if 4 ports are on same ASIC, then they all must be in same vdc as
they are sharing and being operated by same asic. How many port-groups are there in my card or is
there a fix formula? Basically it depends which type of I/O module card we are using. for example:
•N7K-M132XP-12L (same as non-L M132) (1 interface x 8 port groups = 8 interfaces)—All M132 cards
require allocation in groups of 4 ports and you can configure 8 port groups.
================
Switching between VDC's
If you have logged into default VDC, you can use “Show VDC” command to see what all other vdc’s have
been created.
IF you want to switch to any other vdc from default vdc, you can use “switchto vdc <vdc name>”
command as shown below and if you have logged into user created vdc WDECAIB from default vdc
using switchto command, you can use “switchback” command to come back to default vdc, however, if
you have directly ssh/telnet into user created vdc WDECAIB here, you can not do a “switchback” to come
into default vdc.
I hope it was helpful. You can read through my blog to know more about vdc's like vdc users etc.
Nexus 7000
The Cisco Nexus Series switches are modular network switches designed for the data
center. Nexus 7000 chassis includes 4, 9, 10 and 18 slot chassis, however, we have nexus
7010 deployed in data centers at Core layers.
The first chassis in the Nexus 7000 family is Nexus 7010 switch which is a 10-slot chassis
with two supervisor engine slots and eight I/O module slots at the front, as well as five
crossbar switch fabric modules at the rear.
All switches in the Nexus range run the modular NX-OS firmware/operating system. The
Cisco NX-OS software is a data center-class operating system built with modularity,
resiliency, and serviceability at its foundation. Based on the industry-proven Cisco MDS
9000 SAN-OS software, Cisco NX-OS helps ensure continuous availability and sets the
standard for mission-critical data center environments. The highly modular design of Cisco
NX-OS makes zero-effect operations a reality and enables exceptional operational
flexibility.
Nexus 7010
-10 slots: 1-4 and 7-10 are line card slots, 5-6 are supervisor slots
-21 RU height
Supervisor Engine
Management Interface
It is part of dedicated “management”vrf and can not be moved to any other or default vrf.
The Cisco Nexus 7000 Fabric Modules for the Cisco Nexus 7000 Chassis are separate fabric
modules that provide parallel fabric channels to each I/O and supervisor module slot. The
fabric module provides the central switching element for fully distributed forwarding on the
I/O modules.
Switch fabric scalability is made possible through the support of from one to five
concurrently active fabric modules for increased performance as your needs grow. All fabric
modules are connected to all module slots. The addition of each fabric module increases the
bandwidth to all module slots up to the system limit of five modules. The architecture
supports lossless fabric failover, with the remaining fabric modules load balancing the
bandwidth to all the I/O module slots, helping ensure graceful degradation.
Mainly Nexus 5k is used for layer 2 switching,however, it can support L2 add-in card.
Internal
The Cisco Nexus 5548P Switch is the first of the Cisco Nexus 5500 platform switches. It is a
one-rack-unit (1RU) 10 Gigabit Ethernet and FCoE switch offering up to 960-Gbps
throughput and up to 48 ports. The switch has 32 1/10-Gbps fixed SFP+ Ethernet and FCoE
ports and one expansion slot.
The Cisco Nexus 5548UP is a 1RU 10 Gigabit Ethernet, Fibre Channel, and FCoE switch
offering up to 960 Gbps of throughput and up to 48 ports. The switch has 32 unified ports
and one expansion slot.
5500UP models support unified ports. Ports can run as Ethernet or native Fibre channel and
if you are changing the role of a port, then it requires a reboot.
Nexus 2000 Series Fabric Extenders behave logically like remote line cards for a parent
Cisco Nexus 5000 or 7000 Series Switch. They simplify data center access operations and
architecture as well as management from the parent switches. They deliver a broad range of
connectivity options, including 40 Gigabit Ethernet, 10 Gigabit Ethernet, 1 Gigabit Ethernet,
100 MB and Fibre Channel over Ethernet (FCoE).
The Cisco Nexus 2000 Series Fabric Extenders work in conjunction with a Cisco Nexus
parent switch to deliver cost-effective and highly scalable Gigabit Ethernet and 10 Gigabit
Ethernet environments while facilitating migration to 10 Gigabit Ethernet, virtual machine–
aware, and unified fabric environments.
The Cisco Nexus 2000 Series has extended its portfolio to provide more server connectivity
choices and to support Cisco Nexus switches upstream. With more flexibility and choice of
infrastructure, we gain the following benefits:
Architectural flexibility :
Internal
− Enables quick expansion of network capacity by rolling in a prewired rack of servers with
a ToR fabric extender and transparent connectivity to an upstream Cisco Nexus parent
switch
Simplified operations
− With Cisco Nexus 5000 or 7000 Series, provides a single point of management and policy
enforcement
The Cisco Nexus 2000 Series Fabric Extender forwards all traffic to its parent Cisco Nexus
5000 Series
device over 10-Gigabit Ethernet fabric uplinks, allowing all traffic to be inspected by policies
established
on the Cisco Nexus 5000 Series device. No software is included with the Fabric Extender.
Software is
automatically downloaded and upgraded from its parent switch. The Nexus 2248T will allow
100/1000
connectivity and can be dual attached to the Nexus 5000. By dual attaching the Nexus
2248Ts to the 5000, it will allow for the most resilient connections for single attached
servers.
The Cisco Nexus 2000 Series provides two types of ports: ports for end-host attachment
(host interfaces) and uplink ports (fabric interfaces). Fabric Interfaces are differentiated
with a yellow color(as shown in above figure) for connectivity to the upstream parent Cisco
Nexus switch.
Internal
-Each fabric extender module should be assigned a unique number (between 100-199). This
unique number enables the same fabric extender to be deployed in single-attached mode to
one CiscoNexus 5000 Series Switch only or in fabric extender vPC mode (that is, dual-
connected to different Cisco Nexus 5000 Series Switches).
-Nexus 2000 Fabric Extenders are not independent manageable entities; the Nexus 5000
manages the fabric extender through in-band connectivity.
Nexus 2000 Series can be attached to the Nexus 5000 Series in two different
configurations:
-Static pinning: The front host ports on the module are divided across the fabric ports
(that is, the uplinks connecting to the Nexus 5000).
-Port-Channel: The fabric ports form a Port-Channel to the Cisco Nexus 5000.
N5K(config)#feature fex
N5K(config-if-range)#channel-group 100
N5K(config-if-range)#no shutdown
N5K(config-if)#fex 100
• Layer 2 Adjacency - Unlike the campus where we've pushed Layer 3 to the
closet, Data Centers truly have a need for large layer 2 domains. VMWare
especially has made this even more critical because in order to take advantage
of VMotion and DRS (two critical features), every VMWare host must have
access to ALL of the same VLANs.
• Resiliency is key - the Data Center has to have the ability to be ALWAYS up.
Redundant paths make this possible.
• Spanning tree addresses issues with redundant paths, but comes with tons of
caveats. As the L2 network scales, convergence time increases, and it's
complicated (and sometimes dangerous) to configure all of the tweaks to make it
perform better (such as portfast, uplinkfast, etc.). Also, traditional spanning
blocks links which cuts bandwidth in half crippling another need in Data Centers
for bandwidth scalability.
• vPC Limitations vPC's are great, and they address the blocked links. But they
come with several caveats such as complicated matching configuration, orphan
ports, no routing protocol traversal, etc. Even in a vPC scenario, we still have to
run spanning tree, we're just eliminating loops, and if i were to plug a non-vPC
switch into the core, it's still going cause a convergence. Finally, they are only
scalable to two core devices.
• Bandwidth scalability sure the Nexus 7018 can scale tremendously large, but
it's also a massive box. If we use vPC's we are still limited to 2 core boxes. This
sounds like overkill, but it's quickly becoming a more popular design in larger
customers. What if in order to scale bandwidth in the core, we could just add a
third or a fourth, smaller box.
What is FabricPath?
Originally I was worried about having to learn a completely new protocol, but the
truth is that most of us already know all of the concepts that make FabricPath
work. Think about routing to the access layer and why we like that design.
• They are very quick to converge, and the addition of a single node doesn't
affect any other part of the network.
Internal
There you go you just learned FabricPath. FabricPath is based on the TRILL
standard with a few Cisco bonuses which builds on the concept of "what if we
could route a layer 2 packet instead of switching it." Under the covers of
FabricPath it uses the ISIS protocol, a MAC encapsulation, and routing tables to
achieve all of the magic. In short, you now have all of the benefits of Layer 3 to
the access switch, none of the caveats of vPCs, while still be able to span
VLANs. Oh, and the configuration is extremely simple.
FabricPath runs on F-Series line cards in a Nexus 7000, and on Nexus 5500 series + 2Ks in the access.
The environment doesn't have to be homogeneous: portions of the
environment can run FabricPath while others are still traditional vPC or
spanning tree. It's as simple as that.
Today there are some key differentiators between Cisco's proprietary FabricPath
technology, and what the competitors could bring with TRILL. What it amounts to
is that ours is ready for deployment, and the standard still has some functional
gaps.
In short, the big ones (all of the core switches) can act as a default gateway at
the same time (using GLBP). vPC+ can be used on the access switches to
extend Active-Active behaviour to servers that don't speak FabricPath, and conversational
learning allows an extremely scalable setup.
You may note that FabricPath is definitely a replacement for vPC. More than that,
it's really a replacement for traditional L2 network topologies. vPC is really
an attempt to work around spanning tree, which struggles with loop prevention
when there are multiple active paths to multiple switches.
There is one place, however, in a FP topology that you would still want to use
vPCs and that is from the access switch to the server itself because there aren't
any NICs or vSwitches that currently understand FP, but plenty that understand
LACP. In this case, there is an extension of vPC called vPC+ which is a
FabricPath aware vPC that bridge between an access layer switch running FP
and a server that is unaware but still needs multiple active uplinks.
The requirement for layer 2 interconnect between data centre sites is very
common these days. The pros and cons of doing L2 DCI have been discussed
many times in other blogs or forums so I won't revisit that here. Basically there
are a number of technology options for achieving this, including EoMPLS, VPLS,
back-to-back vPC and OTV. All of these technologies have their advantages and
disadvantages, so the decision often comes down to factors such as scalability,
skillset and platform choice.
One consideration when using FabricPath for DCI is that there will generally be a single
multi-destination tree spanning both sites, and that the root for that tree will exist on
one site or the other. The following diagram shows an example.
In the above example, there are two sites, each with two spine switches and two
edge switches. The root for the multi-destination tree is on Spine-3 in Site B. For
the hosts connected to the two edge switches in site A, broadcast traffic could
follow the path from Edge-1 up to Spine-1, then over to Spine-3 in Site B, then to
Spine-4, and then back down to the Spine-2 and Edge-2 switches in Site A
before reaching the other host. Obviously there could be slightly different paths
depending on topology, e.g. if the Spine switches are not directly interconnected.
In future releases of NX-OS, the ability to create multiple FabricPath topologies
will alleviate this issue to a certain extent, in that groups of "local" VLANs can be
constrained to a particular site, while allowing "cross-site" VLANs across the DCI
link.
FHRP localization is another consideration. This is because FabricPath will send HSRP
packets from the virtual MAC address at each site with the local switch ID as a source.
Other FabricPath switches in the domain will see the same vMAC from two source switch
IDs and will toggle between them, making the solution unusable. Also, bear in mind that
FHRP localization with FabricPath isn't (at the time of writing) supported on the
Nexus 7000.
The issues noted above do not mean that FabricPath cannot be used as a
method for extending layer 2 between sites. In some scenarios, it can be a viable
alternative to the other DCI technologies as long as you are aware of the caveats
above.
Virtual Port Channel (vPC) is a technology that has been around for a few years
on the Nexus range of platforms. With the introduction of FabricPath, an
enhanced version of vPC, known as vPC+, was released. At first glance the two
look very similar, but there is an important difference in how MAC addresses are
learned when hosts are dual-attached. Consider the following example.
A single server (MAC A) is connected using vPC to S10 and S20, so as a result
traffic sourced from MAC A can potentially take either link in the vPC towards
S10 or S20. If we now look at S30's MAC address table, which switch is MAC A
accessible behind? The MAC table only allows for a one to one mapping
between MAC address and switch ID, so which one is chosen? Is it S10 or S20?
The answer is that it could be either, and it is even possible that MAC A could
"flip flop" between the two switch IDs.
In a FabricPath implementation, such a "flip-flop" situation breaks traffic flow. So
clearly we have an issue with using regular vPC to dual-attach hosts or switches
to a FabricPath domain. How do we resolve this? We use vPC+ instead.
The vPC+ solves the issue above by introducing an additional element, the
"virtual switch". The virtual switch sits "behind" the vPC+ peers and is essentially
used to represent the vPC+ domain to the rest of the FabricPath environment.
The virtual switch has its own FabricPath switch ID and looks, for all intents and
purposes, like a normal FabricPath edge device to the rest of the infrastructure.
In the above example, vPC+ is now running between S10 and S20, and a virtual
switch S100 now exists behind the physical switches. When MAC A sends traffic
through the FabricPath domain, the encapsulated FabricPath frames will have a
source switch ID of the virtual switch, S100. From S30's (and other remote
switches) point of view, MAC A is now accessible behind a single switch S100.
This enables multi-pathing in both directions between the Classical Ethernet and
FabricPath domains. Note that the virtual switch needs a FabricPath switch ID
assigned to it (just like a physical switch does), so you need to take this into
account when you are planning your switch ID allocations throughout the
network. For example, each access "Pod" would now contain three switch IDs
rather than two; in a large environment this could make a difference.
Much of the terminology is common to both vPC and vPC+, such as Peer-Link,
Peer-Keepalive, etc and is also configured in a very similar way. The major
differences are:
• Both the vPC+ Peer-Link and member ports must reside on F series linecards.
vPC+ also provides the same active/active HSRP forwarding functionality
found in regular vPC. This means that (depending on where your default gateway
functionality resides) either peer can be used to forward traffic into your L3
domain. If your L3 gateway functionality resides at the FabricPath spine layer, vPC+
can also be used there to provide the same Active/Active functionality.
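To make the configuration differences concrete, here is a minimal vPC+ sketch (the domain ID, virtual switch ID, keepalive address and port-channel numbers are just assumptions for illustration):

feature vpc
vpc domain 10
  fabricpath switch-id 100              ! the virtual switch ID shared by both vPC+ peers
  peer-keepalive destination 192.168.0.2
interface port-channel 1
  switchport mode fabricpath            ! peer-link member ports must be on F-series linecards
  vpc peer-link
interface port-channel 20
  switchport mode trunk
  vpc 20                                ! vPC+ toward the dual-attached server or CE switch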
I found a somewhat cryptic statement along the following lines: "for N7K the LID is the
port index of the ingress interface, for N5K the LID will most of the time be 0". Let's
see what we can make of that.
The acronym LID stands for "Local ID" and, as the name implies, it has local
significance to the switch that a particular MAC address resides on. As such, it is
up to the implementation to determine how to derive a unique LID to represent its
ports. Apparently, the Nexus 5000 and Nexus 7000 engineering teams did not
talk to each other to agree on some consistent method of assigning the LIDs, but
each created their own platform-specific implementation.
The interface represented by the LID is an ingress interface from the perspective
of the edge switch that inserts the LID into the outer source address. For the
switch sending to the MAC address it represents the egress port at the
destination edge switch.
For the N5K I couldn't really find more than that the LID will usually be 0, but
there may be some exceptions. For the N7K, the LID maps to the "port index" of
the ingress interface.
So I decided to get into the lab and see if I could find some commands that would
help me establish the relation between the LID and the outbound interface on the
edge switch. I created a very simple FabricPath network and performed a couple
of pings to generate some MAC address table entries.
Let's have a look at a specific entry in the MAC address table of a Nexus 7000:
N7K-1-pod5# show mac address-table dynamic vlan 100
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link,
        (T) - True, (F) - False
   VLAN     MAC Address      Type      age     Secure NTFY  Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 100      0005.73e9.8c81    dynamic   960      F      F    Eth3/15
  100      0005.73e9.fcfc    dynamic   960      F      F    16.0.14
  100      00c0.dd18.6ce0    dynamic   420      F      F    16.0.14
  100      00c0.dd18.6ce1    dynamic   0        F      F    16.0.14
* 100      00c0.dd18.6e08    dynamic   0        F      F    Eth3/15
* 100      00c0.dd18.6e09    dynamic   0        F      F    Eth3/15
So for example, let's zoom in on the MAC address 0005.73e9.fcfc. According to the
table, frames for this destination should be sent to SWID.SSID.LID "16.0.14".
From the SWID part, we can see that the MAC address resides on the switch
with ID "16". To find the corresponding switch hostname we can use the following
command:
N7K-2-pod6# show mac address-table address 0005.73e9.fcfc
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link,
        (T) - True, (F) - False
   VLAN     MAC Address      Type      age     Secure NTFY  Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 100      0005.73e9.fcfc    dynamic   450      F      F    Eth3/15
Now we know that the outbound interface for the MAC address on the destination
edge switch is Ethernet 3/15. So how can we map the LID "14" to this interface?
Since the LID corresponds to the "port index" for the interface in question, how
can we find the port index? The port index is an internal identifier for the
interface, also referred to as the LTL, and there are some show commands to
determine these LTLs. For example, if we wanted to know the LTL for interface
E3/15, we could issue the following command:
Here we find that the LTL for the interface is 0xe, which equals 14 in decimal.
This shows that the LID is actually the decimal representation of the LTL.
(FabricPath switch-IDs, subswitch-IDs and Local IDs are represented in decimal
by default).
This lookup can also be performed in reverse. If we take the LID and convert it to
its hexadecimal representation of 0xe, we can find the corresponding interface as
follows:
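For reference, the lookups I used in the lab were along these lines; treat the exact syntax as an assumption, since these internal show commands vary by platform and release. The first form maps an interface to its LTL, the second maps an LTL back to an interface:

N7K-2-pod6# show system internal pixm info interface ethernet 3/15
N7K-2-pod6# show system internal pixm info ltl 0xe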
First and foremost, it is assumed that you now have a basic working knowledge
of FabricPath. FabricPath is Cisco's scalable Layer 2 solution that
eliminates Spanning Tree Protocol and adds some enhancements that are sorely
needed in L2 networks like Time To Live (TTL), Reverse Path Forwarding (RPF)
and uses IS-IS as a control plane protocol. It's the fact that FabricPath uses IS-IS
that makes it very easy and familiar for customers to enable authentication in
their fabric. If you have ever configured authentication for a routing protocol in
Cisco IOS or NX-OS, this will be similar with all of your favorites like key chains,
key strings and hashing algorithms. Hopefully that nugget of information doesn't
send you into a tail spin of despair.
With FabricPath there are two levels of authentication that can be enabled. The
first is at the domain level for the entire switch (or VDC!). Authentication here will
prevent routes from being learned. It is important to note that IS-IS adjacencies can
still be formed at the interface level even when the domain authentication is
mismatched; this domain-level authentication covers LSP and SNP exchange, not
the hello PDUs on the interfaces.
If you are not careful, you can blackhole traffic during the implementation of
authentication, just like you would with any other routing protocol.
We start with a VDC that has FabricPath, is in a fabric with other devices but
doesn't have authentication enabled. We can see we have not learned any
routes.
We can also see that we are adjacent to some other devices, but note that we do
not see their names under System ID, just the MAC addresses. This is a quick hint
that something is amiss with the control plane, as shown in the output below.
N7K-2-Access2# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
0026.980f.d9c4  N/A             1      UP     00:00:25   port-channel1
0024.98eb.ff42  N/A             1      UP     00:00:29   Ethernet3/9
0024.98eb.ff42  N/A             1      UP     00:00:27   Ethernet3/10
0026.980f.d9c2  N/A             1      UP     00:00:22   Ethernet3/20
0026.980f.d9c2  N/A             1      UP     00:00:29   Ethernet3/21
Now we'll add the authentication. We start with a key chain called "domain", then
define key 0 with a key-string of "domain" (not very creative, am I?), and finally
apply it to the fabricpath domain default.
N7K-2-Access2# config
Enter configuration commands, one per line. End with CNTL/Z.
N7K-2-Access2(config)# key chain domain
N7K-2-Access2(config-keychain)# key 0
N7K-2-Access2(config-keychain-key)# key-string domain
N7K-2-Access2(config-keychain-key)# fabricpath domain default
N7K-2-Access2(config-fabricpath-isis)# authentication key-chain domain
Now let's see what that does for us. Much happier now, aren't we?
The exact same sequence applies to interface-level authentication and looks like
the CLI below. We can see two non-functioning states here: INIT and LOST. INIT
is from me removing the key chain and flapping the interface (shut/no shut), and
LOST is from me removing the pre-defined key chain and the adjacency going
down to N7K-1-Agg1.
N7K-2-Access2# show fab isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
N7K-1-Access1   N/A             1      UP     00:00:27   port-channel1
N7K-2-Agg2      N/A             1      INIT   00:00:22   Ethernet3/9
N7K-2-Agg2      N/A             1      UP     00:00:23   Ethernet3/10
N7K-1-Agg1      N/A             1      LOST   00:04:57   Ethernet3/20
N7K-1-Agg1      N/A             1      UP     00:00:30   Ethernet3/21
N7K-2-Access2(config-keychain)# show fab isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
N7K-1-Access1   N/A             1      UP     00:00:30   port-channel1
N7K-2-Agg2      N/A             1      UP     00:00:29   Ethernet3/9
N7K-2-Agg2      N/A             1      UP     00:00:26   Ethernet3/10
N7K-1-Agg1      N/A             1      UP     00:00:24   Ethernet3/20
N7K-1-Agg1      N/A             1      UP     00:00:31   Ethernet3/21
With this simple exercise you've configured FabricPath authentication. Not too
bad and very effective. As always when configuring passwords on a device,
cutting and pasting from a common text file helps avoid trailing white space
at the end of passwords and other nuances that can lead you down the wrong
path. In general, I would expect a company implementing FabricPath
authentication to configure both domain- and interface-level authentication.
N5K-p1-1(config)# show fabricpath isis topology summary
Fabricpath IS-IS domain: default FabricPath IS-IS Topology Summary
MT-0
  Configured interfaces: Ethernet1/1 Ethernet1/2 Ethernet1/3 Ethernet1/4 Ethernet1/5
                         Ethernet1/6 Ethernet1/7 Ethernet1/8 port-channel5
  Number of trees: 2
    Tree id: 1, ftag: 1 [transit-traffic-only], root system: 0024.98e8.01c2, 1709
    Tree id: 2, ftag: 2, root system: 001b.54c2.67c2, 2040
Remember that with IS-IS there are two authentication methods: hello (adjacency)
authentication and LSP authentication. Here is a sample config of both of these.
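A minimal sketch of that config, using a key chain named "cisco" to match the show output further down (the interface number is an assumption):

key chain cisco
  key 0
    key-string cisco

! Hello (adjacency) authentication, per interface
interface ethernet 1/16
  fabricpath isis authentication-type md5
  fabricpath isis authentication key-chain cisco
  fabricpath isis authentication-check

! LSP/SNP authentication, per domain
fabricpath domain default
  authentication-type md5
  authentication key-chain cisco
  authentication-check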
The config, as you can see above, is quite simple. Don't forget that with key chains
you can specify an accept lifetime and a send lifetime; in our case we are not
going to, and when you don't specify them they are simply assumed to be infinite.
SW2# show fabricpath isis interface eth1/16
Fabricpath IS-IS domain: default
Interface: Ethernet1/16
  Status: protocol-up/link-up/admin-up
  Index: 0x0001, Local Circuit ID: 0x01, Circuit Type: L1
  Authentication type MD5
  Authentication keychain is cisco
  Authentication check specified
  Extended Local Circuit ID: 0x1A00F000, P2P Circuit
  Retx interval: 5, Retx throttle interval: 66 ms
  LSP interval: 33 ms, MTU: 1500
  P2P Adjs: 1, AdjsUp: 1, Priority 64
  Hello Interval: 10, Multi: 3, Next IIH: 00:00:06
  Level   Adjs   AdjsUp   Metric   CSNP   Next CSNP   Last LSP ID
  1       1      1        40       60     00:00:35    ffff.ffff.ffff.ff-ff
  Topologies enabled:
    Topology Metric MetricConfig Forwarding
    0        40     no           UP
SW2# show fabricpath isis

Fabricpath IS-IS domain : default
  System ID : 547f.eec2.7d01  IS-Type : L1
  SAP : 432  Queue Handle : 10
  Maximum LSP MTU: 1492
  Graceful Restart enabled. State: Inactive
  Last graceful restart status : none
  Metric-style : advertise(wide), accept(wide)
  Start-Mode: Complete [Start-type configuration]
  Area address(es) :
    00
  Process is up and running
  CIB ID: 3
  Interfaces supported by Fabricpath IS-IS :
    Ethernet1/5
    Ethernet1/6
    Ethernet1/7
    Ethernet1/8
    Ethernet1/16
  Level 1
    Authentication type: MD5
    Authentication keychain: cisco
    Authentication check specified
  MT-0
    Ref-Bw: 400000
  Address family Swid unicast :
    Number of interface : 5
    Distance : 115
  L1 Next SPF: Inactive
A big hint that your auth is working for hello but not for LSP is that the hostnames
don't come up correctly in your isis adjacency.
First of all it helps if we establish a few items of terminology. The first thing to
remember is that FabricPath supports multiple topologies, so you can actually
break out particular FabricPath-enabled VLANs to use a particular topology.
However, this is only available in certain versions of NX-OS and is quite advanced,
so we will be skipping this advanced configuration.
The concept of "trees" also exists in FabricPath. Trees are used for the
distribution of "multidestination" traffic, that is, traffic that does not have a single
destination; perfect examples of this would be multicast, unknown unicast and
other flooded traffic types.
The first multidestination tree, tree 1 is normally selected for unknown unicast
and broadcast frames except when used in combination with vpc+, but the detail
of that we will ignore for now.
N7K1# show fabricpath load-balance multicast ftag-selected flow-type l3 src-ip 10.1.1
128b Hash Key generated : 00 00 02 9a 00 00 00 00 00 00 00 a0 10 12 10 00
0x1b
FTAG SELECTED IS : 2

N7K1# show fabricpath load-balance multicast ftag-selected flow-type l3 src-ip 10.1.1.
128b Hash Key generated : 00 00 02 9a 00 00 00 00 00 00 00 a0 10 12 20 00
0xda
FTAG SELECTED IS : 1
The FTAG is the important key here: the FTAG correlates to the "tree". The FTAG
is used because it's an available field in the FabricPath header that can identify
the frame and tell the switches "use this tree to distribute the traffic".
The whole point of this option is scalability, especially with large multicast
traffic domains: using it you can increase link utilization for multicast
traffic by having the traffic load balance across two "root" trees (yes, this is
FabricPath, so we don't really have a root like we do in spanning tree, but for
multidestination traffic we kind of have to).
You can actually tell using the following command what port your switch is going
to use for that particular FTAG/MTREE:
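The command in question is show fabricpath mroute ftag <n>, which is walked through in more detail further down; a trimmed example looks like this:

SW2# show fabricpath mroute ftag 1
(ftag/1, vlan/666, *, *), Flood, uptime: 05:12:43, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:13, isis

SW2# show fabricpath mroute ftag 2
(ftag/2, vlan/666, *, *), Flood, uptime: 05:13:35, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:09, isis
    Interface Ethernet1/8, uptime: 00:43:21, isis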
As you can see from the above, there are two separate paths that the switch is
taking, one for each of the trees, based on where the root of each tree lies.
Each tree has its own root, but as you can imagine, the elected root might not be
the most optimal choice, so you can configure the root priority: the switch with the
highest root priority becomes the root for FTAG 1, and second place becomes the
root of the tree for FTAG 2.
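A sketch of that root-priority configuration (the value is an assumption; the default priority is 64):

N7K1(config)# fabricpath domain default
N7K1(config-fabricpath-isis)# root-priority 255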
N7K1 is now the root for this tree. You can verify this in a few ways; the
first is to look at the show fabricpath mroute ftag 1 command we used previously.
But first, let's quickly get our topology clear:
SW3# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
SW2             N/A             1      UP     00:00:23   Ethernet1/5
SW2             N/A             1      UP     00:00:27   Ethernet1/6
SW2             N/A             1      UP     00:00:30   Ethernet1/7
SW2             N/A             1      UP     00:00:30   Ethernet1/8
As you can see from the above, we have multiple connections between SW3 and
SW2, and then a single connection from each of SW2 and SW3 up to N7K1.
SW2# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
SW3             N/A             1      UP     00:00:30   Ethernet1/5
SW3             N/A             1      UP     00:00:21   Ethernet1/6
SW3             N/A             1      UP     00:00:29   Ethernet1/7
SW3             N/A             1      UP     00:00:27   Ethernet1/8
N7K1            N/A             1      UP     00:00:24   Ethernet1/16
SW2# show fabricpath mroute ftag 1

(ftag/1, vlan/666, *, *), Flood, uptime: 05:12:43, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:13, isis
    Interface Ethernet1/16, uptime: 00:07:13, isis

(ftag/1, vlan/666, *, *), Router ports (OMF), uptime: 02:04:48, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:13, isis
    Interface Vlan666, [SVI] uptime: 02:04:48, igmp

SW2# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 05:13:35, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:09, isis
    Interface Ethernet1/8, uptime: 00:43:21, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 02:05:39, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:07:09, isis
    Interface Vlan666, [SVI] uptime: 02:05:39, igmp
You can tell from the above that neither of the switches will ever send unknown
unicast (which, remember, is placed into FTAG 1) out to each other, but will
instead always forward it up the tree to N7K1, which is our root for this tree.
N7K1# show fabricpath mroute ftag 1

(ftag/1, vlan/666, *, *), Flood, uptime: 04:01:00, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet4/1, uptime: 00:01:51, isis
    Interface Ethernet4/2, uptime: 01:06:04, isis

(ftag/1, vlan/666, *, *), Router ports (OMF), uptime: 04:01:01, isis igmp
  Outgoing interface list: (count: 2)
    Interface Vlan666, [SVI] uptime: 01:51:33, igmp
    Interface Ethernet4/1, uptime: 00:01:51, isis
SW2# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 05:18:03, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:11:38, isis
    Interface Ethernet1/8, uptime: 00:47:50, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 02:10:08, isis igmp
  Outgoing interface list: (count: 2)
    Interface Ethernet1/16, uptime: 00:11:38, isis
    Interface Vlan666, [SVI] uptime: 02:10:08, igmp

Found total 2 route(s)
N7K1# show fabricpath mroute ftag 2

(ftag/2, vlan/666, *, *), Flood, uptime: 04:12:32, isis
  Outgoing interface list: (count: 2)
    Interface Ethernet4/1, uptime: 00:12:28, isis
    Interface Ethernet4/1, uptime: 00:12:28, isis

(ftag/2, vlan/666, *, *), Router ports (OMF), uptime: 04:12:33, isis igmp
  Outgoing interface list: (count: 2)
    Interface Vlan666, [SVI] uptime: 02:03:05, igmp
    Interface Ethernet4/1, uptime: 00:12:28, isis

Found total 2 route(s)

N7K1# show fabricpath isis adj
Fabricpath IS-IS domain: default Fabricpath IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
SW2             N/A             1      UP     00:00:27   Ethernet4/1
OK, so now SW2 is the root for FTAG 2 and any frames from N7K1 will come
down to it first, and it in turn will distribute them to SW3. Now there is one bit of
that output that might make you say "what gives?": I have four connections
between SW2 and SW3, so why is traffic not load balancing across those equal
cost links?
OK, here's one more way you can use to determine the root of an MTREE.
What this output is saying is that you are being shown the values for the topology
tree as if you were running the command on the root of each tree itself. So let's
take a closer look at Switch 3, which is not the root for either FTAG.
SW3# show fabricpath isis trees
Fabricpath IS-IS domain: default

Note: The metric mentioned for multidestination tree is from the root of that tree to

MT-0
Topology 0, Tree 1, Swid routing table
1, L1
  via Ethernet1/17, metric 0
2, L1
  via Ethernet1/17, metric 40

Topology 0, Tree 2, Swid routing table
1, L1
  via Ethernet1/8, metric 40
2, L1
  via Ethernet1/8, metric 0
The metric for reaching Switch-ID 1, which this switch reaches via Eth1/17, is 0,
because Switch-ID 1 _is_ the root for this FTAG.
Same again for Tree 2: the root of the tree is Switch-ID 2, which is out Eth1/8
with a metric of 0, because obviously Switch-ID 2's metric to reach itself is 0.
So let's look at our default unicast load balancing right now on our switches
with multiple equal cost links (remember, FabricPath only supports load
balancing across equal cost links). We can see that our links are being equally
balanced. How are they balanced? The load-balancing options are:
• layer-4: Include only Layer 4 input (source or destination TCP and UDP ports, if
available)
• symmetric: Sort the source and destination tuples before entering them in the
hash function (source-to-destination and destination-to-source flows hash
identically) (default).
• rotate-amount: Specify the number of bytes to rotate the hash string before it is
entered in the hash function.
Each of these values is relatively straightforward: you can specify whether to
look at the Layer 3 or Layer 4 source/destination info, or a mixture (which is the default);
you can specify that you only want to look at the source, the destination, or both;
and you can control whether the hash function will produce the same value for
source-to-destination traffic and the return destination-to-source traffic. The VLAN ID
can also be included in your combinations, and last but not least the rotate-amount
controls some of the mathematics of the hash function, which we will get into below.
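As a rough sketch, these knobs are set globally with the fabricpath load-balance unicast command; exact keyword availability varies by platform and release, so treat these lines as assumptions:

SW2(config)# fabricpath load-balance unicast layer-4
SW2(config)# fabricpath load-balance unicast rotate-amount 0x4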
We can see that we changed only one tiny parameter, the port number, and all of a
sudden the traffic will load balance across another link. Great! Looks pretty good
so far, right?
Let's check out what that symmetric option does for us. Check this out:
SW2# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 3 src-mac 1 111
8.66.1 l4-src-port 80 vlan 666
Missing params will be substituted by 0's.

crc8_hash: 134
This flow selects interface Eth1/7
Here we have swapped the source and destination ports and IP addressing
around, and we are provided with exactly the same CRC hash, which leads us to
exactly the same output interface!
If we change the rotate amount, the amount by which the hash string is rotated
before being fed into the hash function, our hash key will change.
Two totally separate VDCs are shown here; what we do is change the
rotate-amount on each of them to 0 (nothing), then ask each of them to show
what it thinks the hash key is.
sw1-2(config)# fabricpath load-balance unicast rotate-amount 0x0
sw1-2# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 00000c6124201c612420400501f90000

N7K1# show fabricpath load-balance unicast forwarding-path ftag 1 switchid 2 flow-type l4
128b Hash Key generated : 00000c6124201c612420400501f90000
As you can see, the hash is identical, which means our traffic would flow over the
same paths between these VDCs, which we may not want; so we can use the
rotate-amount to control how much of the VDC MAC address is used in the
hashing function.
Note that just because FabricPath only supports equal cost load balancing,
doesn't mean that we can't go through intermediate switches and still have load
balancing. Here is an example of this.
SW2# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

1/3/0, number of next-hops: 5
  via Eth1/5, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
  via Eth1/6, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
  via Eth1/7, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
  via Eth1/8, [115/40], 0 day/s 00:00:02, isis_fabricpath-default
  via Eth1/16, [115/40], 0 day/s 00:00:26, isis_fabricpath-default
In the above example, we have modified the metrics on N7K1 so that SW2 and
SW3, which have interfaces Eth1/5 - 8 to each other, also see the route via N7K1
as a valid path between each other. We did this by modifying the metrics like
so:
SW2# show run int eth1/16

interface Ethernet1/16
  switchport mode fabricpath
  fabricpath isis metric 25

N7K1(config)# int eth4/1
N7K1(config-if)# fabricpath isis metric 15
N7K1(config-if)# int eth4/2
N7K1(config-if)# fabricpath isis metric 15
Notice that the total cost of this path is now 40 (25 + 15) for SW2, which means
SW2 now considers it an alternative path.
Over on SW3, since we have not modified the default metric, it will still load
balance via the 4 links, not 5.
SW3# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/3/0, number of next-hops: 0
  via ---- , [60/0], 0 day/s 02:30:59, local
1/1/0, number of next-hops: 1
  via Eth1/17, [115/40], 0 day/s 02:31:08, isis_fabricpath-default
1/2/0, number of next-hops: 4
  via Eth1/5, [115/40], 0 day/s 01:55:23, isis_fabricpath-default
  via Eth1/6, [115/40], 0 day/s 01:55:30, isis_fabricpath-default
  via Eth1/7, [115/40], 0 day/s 01:55:29, isis_fabricpath-default
  via Eth1/8, [115/40], 0 day/s 01:55:19, isis_fabricpath-default
SW3(config)# int eth1/17
SW3(config-if)# fabricpath isis metric 25
SW3(config-if)# end
SW3# show fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/3/0, number of next-hops: 0
  via ---- , [60/0], 0 day/s 02:31:34, local
1/1/0, number of next-hops: 1
  via Eth1/17, [115/25], 0 day/s 02:31:43, isis_fabricpath-default
1/2/0, number of next-hops: 5
  via Eth1/5, [115/40], 0 day/s 01:55:58, isis_fabricpath-default
  via Eth1/6, [115/40], 0 day/s 01:56:05, isis_fabricpath-default
  via Eth1/7, [115/40], 0 day/s 01:56:04, isis_fabricpath-default
  via Eth1/8, [115/40], 0 day/s 01:55:54, isis_fabricpath-default
  via Eth1/17, [115/40], 0 day/s 00:00:03, isis_fabricpath-default
When I first started learning about FabricPath, I believed that it came with a requirement that
your network topology conform to certain rules. While I now know that is not true, there is a
common topology that is discussed when talking about network fabrics. It’s called the spine+leaf
topology.
When we’re talking about a fabric, all links in the network are forwarding. So unlike a traditional
network that is running Spanning Tree Protocol, each switch has multiple active paths to every other
switch.
Because all of the links are forwarding, there are real benefits to scaling the network horizontally.
Consider if the example topology above only showed (2) spine switches instead of (3). That would
give each leaf switch (2) active paths to reach other parts of the network. By adding a third spine
switch, not only is the bandwidth scaled but so is the resiliency of the network. The network can lose
any spine switch and only drop 1/3rd of its bandwidth. In a traditional network that runs Spanning
Tree Protocol, there is no benefit to scaling horizontally like this because STP will only allow (1) link
to be forwarding at a time. The investment in an extra switch, transceivers, cables, etc, is just sitting
idle waiting for a failure before it can start forwarding packets.
So while the spine+leaf topology is commonly used when discussing FabricPath, it is not a
requirement. In fact, even having full-mesh connectivity between spine and leaf nodes as shown
in the drawing is not a requirement. You could connect each spine to every other leaf. You could
connect spines to other spines or a leaf to a leaf.
According to Cisco, there is a lot of interest from customers about using FabricPath for
connecting sites together (ie, as a data center interconnect or for connecting buildings in a
campus). An example of that might be a ring topology that connects each of the sites.
The drawing shows FabricPath being used between the switches that connect to the fiber ring.
This is obviously a very different topology than spine+leaf and yet perfectly reasonable as far as
FabricPath is concerned.
FabricPath is a method for encapsulating Layer 2 traffic across the network. It does not define or
require a specific network topology. The rule of thumb is: if the topology makes sense for
regular old IP routing, then it makes sense for FabricPath.
In order to achieve the benefits that FabricPath brings over Classical Ethernet, some significant
changes needed to be implemented in the data plane of the network. Among these changes are:
• The introduction of a Time To Live field in the frame header which is decremented at each
FabricPath hop
• A unique addressing scheme consisting of a 12-bit switch ID which is used to switch frames
through the fabric
• A Reverse Path Forwarding check done on each frame as it enters a FabricPath port (another
loop prevention mechanism)
• A new frame header format with these new fields
In order for the hardware platform to switch FabricPath frames without any slowdown, new
ASICs are required in the network. On the Nexus 7000, these ASICs are present on the F series
I/O modules. It’s important to understand that not only do the FabricPath core ports need to be
on an F series module but so do the Classic Ethernet edge ports which carry traffic belonging to
FabricPath VLANs. This last requirement may impact certain existing environments where
downstream devices are connected on M1 or M2 I/O modules.
FabricPath is also supported on the Nexus 5500 running NX-OS 5.1(3)N1(1) or higher. Cisco’s
documentation isn’t exactly clear how FabricPath is implemented on the 5500 series but I’ve
been told 55xx boxes do it in hardware (the original 50xx boxes do not support FabricPath).
One of the key issues with scaling modern data centers is that the number of MAC addresses
each switch needs to learn is growing all the time. The explosion in growth is due mostly to the
increase in virtualization. Consider a top-of-rack, 48-port Classical Ethernet switch that connects
to 48 servers. That’s 48 MAC addresses that this switch and all the other switches in the network
need to learn to send frames to those servers. Now consider that those 48 servers are really
VMware vSphere hosts and that each host has 20 virtual machines (an average number, probably
low for some environments). That’s 960 MAC addresses. Quite an increase. Now multiply that
out by however many additional ToR switches are also servicing vSphere hosts. All of a sudden
your switches’ TCAM doesn’t look so big any more.
Since FabricPath continues the Layer 2 adjacency that Classical Ethernet has, it must also rely on
MAC address learning to make forwarding decisions. The difference, however, is that FabricPath
does not unconditionally learn the MAC addresses it sees on the wire. Instead it does
“conversational learning” which means that for MACs that are reachable through the fabric, a
FabricPath switch will only learn that MAC if it’s actively conversing with a MAC that is
already present in the MAC forwarding table.
Consider Switch 2 in this example. Host A is reachable through the fabric while B and C are
reachable via Classic Ethernet ports. The MACs of B and C are learned on Switch 2 using
Classic Ethernet rules which is to say that they are learned as soon as they each send frames into
the network. The MAC for A is only learned at Switch 2 if A is sending a unicast packet to B or
C and their MAC is already in Switch 2’s forwarding table. If A sends a broadcast frame into the
network (such as when A is sending an ARP ‘who-has’ request looking for B’s MAC), Switch 2
will not learn A’s MAC (because the frame from A was not addressed to B, it was a broadcast).
Also if A sends a unicast frame for Host D, a host that Switch 2 knows nothing about, Switch 2
will not learn A’s MAC (destination MAC must be in the forwarding table to learn the source
MAC).
The conversational learning mechanism ensures that switches only learn relevant MACs and not
every MAC in the entire domain, thus easing the pressure on the finite amount of TCAM in the
switch.
One area where FabricPath gets confusing is when it’s referred to as “routing MAC addresses”
or “Layer 2 over Layer 3”. It’s easy to hear terms like “routing” and “Layer 3” and associate that
with the most common Layer 3 protocol on the planet — IP — and assume that IP must play a
role in the FabricPath data plane. However, as outlined in #2 above, FabricPath employs its own
unique data plane and has been engineered to take on the best characteristics of Ethernet at Layer
2 and IP at Layer 3 without actually using either of those protocols. Below is a capture of a
FabricPath frame showing that neither Ethernet nor IP are in play.
Instead of using IP addresses, an address — called the “switch ID” — is automatically assigned
to every switch on the fabric. This ID is used as the source and destination address for FabricPath
frames destined to and sourced from the switch. Other fields such as the TTL can also be seen in
the capture.
In Classic Ethernet networks that utilize Spanning Tree Protocol, it’s no secret that the
bandwidth that’s been cabled up in the network is not used efficiently. STP’s only purpose in life
is to make sure that redundant links in the network are not used during steady-state operation.
That’s a poor ROI on the cost to put in those links and from a scaling/capacity perspective, it’s
equally as poor since the network is limited to whatever the capacity is of that one link and
cannot employ multiple parallel links. (OK, you technically can by using EtherChannel, but you
understand the point I’m trying to make.)
Since FabricPath doesn’t use STP in the fabric and because the fabric ports are routed interfaces
and therefore have loop prevention mechanisms built-in, all of the fabric interfaces will be in a
forwarding state capable of sending and receiving packets. Since all interfaces are forwarding it’s
possible that there are equal cost paths to a particular destination switch ID. FabricPath switches
can employ Equal Cost Multipathing (ECMP) to utilize all equal cost paths.
Here S100 has (3) equal cost paths to S300: A path to each of S10, S20, and S30 via the orange
links and then from each of those switches to S300 via the purple links.
Much like a regular EtherChannel or a CEF multipathing situation, FabricPath ECMP utilizes a
hashing algorithm to determine which link a particular traffic flow should be put on. By default,
the inputs to the hash are the Layer 3 and Layer 4 source and destination fields plus the VLAN ID
(the same fields described in the load-balancing options earlier).
An interesting value-add that FabricPath does is to use the switch’s own MAC address as a key
for shifting the hashed bits. This shifting prevents polarization of the traffic as it passes through
the fabric (ie, prevents every switch from choosing “link #1” all the way through the network due
to their hash outputs all being exactly the same). The benefit of this is only realized if there’s
more than (2) hops between source and destination FabricPath switch.
So there you have it. Are you currently using or planning a FabricPath deployment? Please share
your thoughts in the comments below.
fabricpath
sample configuration:
N7K-1# conf t
N7K-1(config-if-range)# no shutdown
N5K-1# conf t
N5K-1(config-if-range)# no shutdown
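As a sketch, a minimal FabricPath configuration on the N7K side looks roughly like this (interface and VLAN numbers are assumptions; the equivalent commands apply on the N5K side):

N7K-1(config)# install feature-set fabricpath
N7K-1(config)# feature-set fabricpath
N7K-1(config)# vlan 10
N7K-1(config-vlan)# mode fabricpath
N7K-1(config)# interface ethernet 1/1-2
N7K-1(config-if-range)# switchport mode fabricpath
N7K-1(config-if-range)# no shutdown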
You can influence the root selection with the root-priority command:
By default, all switches are assigned a root priority of 64. Manually setting a given switch’s
priority to 255, the highest value possible, ensures that it will become the primary root.
To verify:
N7K-12-1(config-if-range)# show fabricpath isis interface brief
--------------------------------------------------------------------------------
MT-0
Number of trees: 2
This LAB demonstrates the advantage of FabricPath over STP: when multiple parallel links
interconnect 2 switches together, all those links are integrated in the FabricPath routing table.
By nature, FabricPath supports up to 16 active links between 2 FabricPath switches (16 way
ECMP).
L2, L3 and L4 flow field information can be leveraged to create the hashing value used to load
balance traffic across those links. FabricPath builds two multidestination trees (tree 1 carries
broadcast, unknown unicast and multicast traffic; tree 2 carries multicast). It is a best practice to
position the roots of these trees at the spine layer using the root-priority command.
Be careful to set the same MTU value on both sides of a FabricPath core port link in order to avoid
IS-IS adjacency problems.
When you configure the fabricpath switch-id inside the vPC domain, the switch warns you that the
vPCs will flap:
Warning:
!!:: vPCs will be flapped on current primary vPC switch while attempting
Note:
--------:: Change will take effect after user has re-initd the vPC peerlink ::--------
Configuring fabricpath switch id will flap vPCs. Continue (yes/no)? [no] yes
N5K-1(config-if-range)# no shutdown
The following command will assign the port channel to be a vPC+ peer-link.
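As a sketch (the port-channel number is an assumption):

N5K-1(config)# interface port-channel 1
N5K-1(config-if)# switchport mode fabricpath
N5K-1(config-if)# vpc peer-link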
Note:
All FabricPath edge switches (also called leaf switches) playing the role of Layer 2 gateway (i.e.
connected to end-host devices or STP switches) share the same bridge ID, so the FabricPath
fabric appears as a single logical switch to the rest of the legacy network.
If you are using spanning-tree domain <ID>, the bridge ID will then reflect the domain <ID>.
For example, spanning-tree domain 5 will generate the following bridge ID: c84c.75fa.6005
by default, "show accounting log" only shows the actual configuration commands, if you also want it
to track the show commands the user type, you need do the following:
eg:
5.
7. debug to a file
debug logfile ##
- What is the command to check the number of MAC address entries used on the N5K and the
percentage of usage?
- What is the command to check the number of IP ARP entries used on the N5K and the percentage of usage?
Answers:
According to the Cisco Nexus 5500 verified scalability guide (b_N5500_Verified_Scalability_602N21),
the maximum limit for the ARP table is:
* 8000 for the Cisco Nexus 5548 Layer 3 Daughter Card (N55-D160L3(=))
* 16,000 for the Cisco Nexus 5548 Layer 3 Daughter Card, version 2 (N55-D160L3-V2(=))
1) To display the number of entries currently in the MAC address table, use the show mac address-
table count command.
Detail:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/layer2
/n5k-l2-cr/n5k-l2_cmds_show.html#wp1438536
2) There isn't a direct command to check the percentage of usage; however, you can configure
a notification when the usage goes over a limit.
To configure a log message notification of MAC address table events, use the mac address-table
notification command:
mac address-table notification threshold [limit <percentage 1-100> interval <seconds 10-10000>]
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/layer2
/n5k-l2-cr/n5k-l2_cmds_m.html#wp1375692
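For example (the threshold values below are assumptions within the documented ranges):

N5K# show mac address-table count
N5K(config)# mac address-table notification threshold limit 80 interval 120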
2. What is the command to check the number of IP ARP entries used on the N5K and the percentage of usage?
You can check the number of IP ARP entries with "show ip arp" or "show ip arp summary"; however,
there isn't a way to check the percentage of usage yet. You will need to refer to the release notes,
since the limits (both for MAC and ARP) vary depending on the hardware and software versions.
Detail:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/securi
ty/n5k-sec-cr/n5k-sec_cmds_show.html#wp1742586
NOTE: the troubleshooting steps below are very brief.
This example shows how to configure VTP in transparent mode (the default mode):
switch# config t
switch(config)# feature vtp
switch(config)# vtp domain accounting
switch(config)# vtp version 2
switch(config)# exit
switch#
Nexus 5000 does not have the same VLANs as switch running VTP server
VLANs for the Nexus 5000 are not the same as for the switch running the VTP server.
Possible Cause
The Nexus 5000 currently supports VTP only in transparent mode (4.2(1)N1(1) and later releases).
Solution
This situation indicates that VLANs must be configured locally. However, a VTP client and server can still
communicate through a Nexus 5000, which relays VTP packets in transparent mode. The VTP modes are:
Server mode—Allows users to perform configurations, manages the VLAN database version number, and stores the
VLAN database.
Client mode—Does not allow user configurations and relies on other switches in the domain to provide configuration
information.
Off mode—Allows you to access the VLAN database (VTP is enabled) but not participate in VTP.
Transparent mode—Does not participate in VTP, uses local configuration, and relays VTP packets out its other
forwarding ports. VLAN changes affect only the local switch. A VTP transparent network switch does not advertise its
VLAN configuration and does not synchronize its VLAN configuration based on received advertisements.
Guidelines and Limitations
When a switch is configured as a VTP client, you cannot create VLANs on the switch in the range of 1 to 1005.
VLAN 1 is required on all trunk ports used for switch interconnects if VTP is supported in the network. Disabling VLAN
1 from any of these ports prevents VTP from functioning properly.
If you enable VTP, you must configure either version 1 or version 2.
On the Cisco Nexus 5010 and Nexus 5020 switches, 512 VLANs are supported. If these switches are in a distribution
network with other switches, the VLAN limit for the VTP domain remains 512. If a Nexus 5010 or Nexus 5020
client/server receives additional VLANs from a VTP server, they transition to transparent mode.
The show running-configuration command does not show VLAN or VTP configuration information for VLANs 1 to 1000.
When deployed with vPC, both vPC switches must be configured identically.
VTP advertisements are not sent out on Cisco Nexus 2000 Series Fabric Extender ports.
VTP pruning is not supported.
There is a bug on the N3K that causes this behavior even if both sides are currently set the same:
if you have ever enabled VTP, the box still thinks it is on. Perhaps that bug also exists on the N5K
if show vpc status shows the same state.
In any case, since it is a type 2 inconsistency, it does not affect traffic flow.
• Uses the DHCP snooping binding database to validate subsequent requests from untrusted
hosts.
why?
======from cisco======
Step 1 Enable the DHCP feature. For more information, see the "Enabling or Disabling the DHCP
Feature" section.
Step 2 Enable DHCP snooping globally. For more information, see the"Enabling or Disabling DHCP
Snooping Globally" section.
Step 3 Enable DHCP snooping on at least one VLAN. For more information, see the "Enabling or
Disabling DHCP Snooping on a VLAN" section.
By default, DHCP snooping is disabled on all VLANs.
Step 4 Ensure that the DHCP server is connected to the device using a trusted interface. For more
information, see the "Configuring an Interface as Trusted or Untrusted" section.
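A minimal sketch following those four steps (the VLAN and interface numbers are assumptions, with ethernet 1/10 assumed to be the uplink toward the DHCP server):

switch(config)# feature dhcp
switch(config)# ip dhcp snooping
switch(config)# ip dhcp snooping vlan 100
switch(config)# interface ethernet 1/10
switch(config-if)# ip dhcp snooping trust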
Access Control List (ACL) capture provides you the ability to selectively capture traffic on an interface
or virtual local area network (VLAN). When you enable the capture option for an ACL rule, packets that
match this rule are either forwarded or dropped based on the specified permit or deny action and can
also be copied to an alternate destination port for further analysis. An ACL rule with the capture option
can be applied:
1. In a VLAN,
2. In the ingress direction on all interfaces,
3. In the egress direction on all Layer 3 interfaces.
statistics per-entry
!!
vlan filter VACL_TEST vlan-list 500
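For context, a fuller VACL configuration around the fragment above might look like this sketch (the ACL name and the permit rule are assumptions):

ip access-list ACL_TEST
  permit ip any any
vlan access-map VACL_TEST 10
  match ip address ACL_TEST
  action forward
  statistics per-entry
vlan filter VACL_TEST vlan-list 500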
You can also check the ternary content addressable memory (TCAM) programming of the access list.
This output is for VLAN 500 on module 1 (slot 1, INSTANCE 0x0).
• Message integrity—Ensures that a packet has not been tampered with in-transit.
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/sm_sn
mp.html
PVLAN
PVLANs provide layer 2 isolation between ports within the same broadcast domain. There are three
types of PVLAN ports:
Promiscuous— A promiscuous port can communicate with all interfaces, including the isolated and
community ports within a PVLAN.
Isolated— An isolated port has complete Layer 2 separation from the other ports within the same
PVLAN, but not from the promiscuous ports. PVLANs block all traffic to isolated ports except
traffic from promiscuous ports. Traffic from isolated port is forwarded only to promiscuous ports.
Community— Community ports communicate among themselves and with their promiscuous
ports. These interfaces are separated at Layer 2 from all other interfaces in other communities or
isolated ports within their PVLAN.
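A minimal PVLAN sketch tying the three port types together (VLAN and interface numbers are assumptions):

feature private-vlan
vlan 101
  private-vlan isolated
vlan 100
  private-vlan primary
  private-vlan association 101
interface ethernet 1/1
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101
interface ethernet 1/2
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 100 101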
http://www.cisco.com/en/US/tech/tk389/tk814/tk840/tsd_technology_support_sub-
protocol_home.html
Below is some info from a good Cisco link:
Anycast is a Cisco IOS network routing feature that provides a Layer 3 network virtual address. The GSS can leverage
this network-wide virtual address to provide GSS redundancy.
A single anycast address can represent the entire GSS cluster by allowing the mapping of the GSS anycast loopback
address to the virtual network-wide anycast address.
The network-wide anycast address can represent up to 16 GSS devices in a single cluster or multiple GSS clusters.
A failure of any GSS behind the anycast address is transparent to the end user. Also, since anycast leverages the
network's routing tables, the traffic destined to the GSS is based on routing metrics.
Anycast works with the routing topology to route data to the nearest or best destination. Anycast has a one-to-
many association between network addresses and network endpoints, which means that each destination address
identifies a set of receiver endpoints, only one of which receives information from a sender at any time.
Syslog reporting a FEX power supply failure but the power supply is actually working?
False alarm. Alerts like the following are due to a cosmetic bug, CSCtl77867; there is no functional
impact, and it is fixed in 5.0(3)N2(1).
on power supply: 2
Also, here is the link to the recommended minimum release for Nexus 5000:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/recommended_releases/recommended_nx-os_releases.html
• Cisco NX-OS Release 5.2(1)N1(4) is the recommended release for general features and functions.
• Cisco NX-OS Release 5.1(3)N2(1c) is the minimum recommended release for general features and functions.
2. Before considering an upgrade, it is recommended to go through the release notes to see if the new version is
suitable for your environment: http://www.cisco.com/en/US/products/ps9670/prod_release_notes_list.html
ERROR:
Interface ***** is down (Error disabled. Reason:BPDUGuard)
Reason:
the cisco document explains it all:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/troubleshoot
ing/guide/n5K_ts_l2.html#wp1026440
basically, it is because a FEX host port is connected to a non-host device that is
sending out BPDUs. It is not a best practice to connect a switch to a FEX.
Possible Cause
By default, the HIFs are in STP edge mode with the BPDU guard enabled.
This means that the HIFs are supposed to be connected to hosts or
non-switching devices. If they are connected to a non-host device/switch
that is sending BPDUs, the HIFs become error-disabled upon receiving a
BPDU.
Solution
Enable the BPDU filter on the HIF and on the peer connecting device.
With the filter enabled, the HIFs do not send or receive any BPDUs. Use
the following commands to confirm the details of the STP port state for
the port:
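As a sketch (the HIF interface number is an assumption):

switch(config)# interface ethernet 101/1/1
switch(config-if)# spanning-tree bpdufilter enable
switch# show spanning-tree interface ethernet 101/1/1 detail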
This message could appear a few times when the N5K is booting up; it is a
cosmetic bug and has no functional impact.
Sometimes FEX errors can be due to vPC issues, and the actual cause is not
obvious:
Topology:
Symptoms:
1. FEX fabric ports showed "down NO SFP"; after a shut/no shut of the port, the status is
"down, Incompatible-Topology".
n5k(config-if)# shut
n5k(config-if)# no shut
Cause:
Why does the FEX go offline when I change the FEX number?
Possible causes:
1. Sometimes it can take a few minutes for the FEX to transition from offline to online.
2. If the FEX is dual-homed, did you apply the configuration change on both sides?
N5K----FEX
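A sketch of the renumbering itself, changing FEX 100 to 101 on both vPC peers (interface and FEX numbers are assumptions):

N5K-1(config)# interface ethernet 1/1
N5K-1(config-if)# fex associate 101

N5K-2(config)# interface ethernet 1/1
N5K-2(config-if)# fex associate 101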
The fex comes back online within minutes as well after applying changes on both sides.
enhanced vPC
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/release/notes/Rel_5_1_3_N1_
1/Nexus5000_Release_Notes_5_1_3_N1.html#wp387585
The most important thing to remember about Enhanced vPC is that you don't need to assign a vPC
number; the system will automatically assign one:
In the Enhanced vPC topology, the FEXs are virtual line cards and the FEX front-panel ports are mapped to virtual
interfaces on a parent Cisco Nexus 5000 Series device. From the CLI perspective, the configuration of Enhanced vPC
is the same as a regular port channel with member ports from two FEXs. You do not have to enter the vpc <vpc-id>
command to create an Enhanced vPC.
The procedure in the Cisco documentation uses the topology in Figure 6-10, where the number next to each line is
the interface ID; assuming all the ports are base ports, interface ID 2 represents interface eth1/2 on the Cisco
Nexus 5000 Series device.
Although the vpc <vpc-id> command is not required, the software assigns an internal vPC ID for each Enhanced vPC.
The output of the show vpc command displays this internal vPC ID.
Step 3: Assign the vPC domain ID and configure the vPC peer-keepalive.
As shown in the procedure, the Enhanced vPC configuration is the same as when you configure a host port channel
with channel members from the same FEX.
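A sketch of that host port channel, with one member port on each FEX (FEX, port-channel and VLAN numbers are assumptions); the same configuration is repeated on the second Nexus 5000, and no vpc <id> is needed:

N5K-1(config)# interface ethernet 110/1/1, ethernet 111/1/1
N5K-1(config-if-range)# channel-group 10 mode active
N5K-1(config)# interface port-channel 10
N5K-1(config-if)# switchport access vlan 20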
A Nexus 5500 switch might reload with a reset reason of "unknown". The most common cause of an
unknown reset reason is loss of power to the switch. Make sure the switch has dual power
supplies connected to different power distribution units (PDUs) and that power to the switch is stable.
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId
=CSCub11616
Symptom:
When the primary Nexus is reloaded, the secondary takes over fine, but when the primary comes back
up the vPC is not synced.
Cause
A common mistake is to configure a VLAN interface (SVI) as the peer-keepalive and just allow this
VLAN on the peer-link; this violates the rule that the peer-link and the keepalive link should be
physically separated.
FIX: use a physically separate path for the peer-keepalive, for example the mgmt0 interface or a
dedicated link.
Long answer:
1. The N5K/N2K doesn't support HIF storm control at the moment. In the configuration guide for
521N12, here is the link about the limitations of storm control:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/521_n1_2/b_5k_Layer2_Config_521N12_chapter_01111.html
7. If you are running 5.2.1 N11, you may see the "storm-control ***" command available under a HIF;
however, it will return an error when you try to apply it:
ERROR: storm control not supported for fex port/PC
You may wonder why the command is available if it is not supported. I guess it is just a placeholder;
some hardware/software combinations actually clear this up, so the command is not even available.
From the lab, it looks like, depending on the switch and FEX version, the "storm-control" command is
sometimes available under a HIF, but under all current versions it won't take effect.
The command is not available on a Nexus 5596 running 5.2.1.12a with FEX N2K-C2232PP-10GE. It
is available for the N2K-C2148T-1GE.
Introduction
What is the command used to verify the "HSRP Active State" on a Nexus 7000 Series Switch?
On a Nexus 7018, when trying to perform a 'no shut' on Ethernet 1/3, the ERROR: Ethernet1/3: Config
not allowed, as first port in the port-grp is dedicated error message is received.
How do I create a peer link for VDC and a keepalive link for each VDC?
What does the %EEM_ACTION-6-INFORM: Packets dropped due to IDS check length consistent on
module message mean?
How do I verify the features enabled on Nexus 7000 Series Switch with NX-OS 4.2?
Is there a tool available for configuration conversion on Cisco 6500 series to the Nexus platform?
How many syslog servers can be added to a Nexus 7000 Series Switch?
Is the Nexus 7010 vPC feature (LACP enabled) compatible with the Cisco ASA etherchannel feature and
with ACE 4710 etherchannel?
How many OSPF processes can be run in a virtual device context (VDC)?
Which Nexus 7000 modules support Fibre Channel over Ethernet (FCoE)?
What is the minimum NX-OS release required to support FCoE in the Nexus 7000 Series Switches?
On a Nexus, is the metric-type keyword not available in the "default-information originate" command?
How do I redistribute connected routes into an OSPF instance on a Nexus 7010 with a defined metric?
What is the equivalent NX-OS command for the "ip multicast-routing" IOS command, and does the
Nexus 7000 support PIM-Sparse mode?
When I issue the "show ip route bgp" command, I see my routes being learned via OSPF and BGP. How
can I verify on the NX-OS which one will always be used and which one is a backup?
How do I avoid receiving the "Failed to process kickstart image. Pre-Upgrade check failed" error
message when upgrading the image on a Nexus 7000 Series Switch?
How can I avoid receiving the "Configuration does not match the port capability" error message when
enabling "switchport mode fex-fabric"?
When I issue the "show interface counters errors" command, I see that one of the interfaces is
consistently posting errors. What are the FCS-Err and Rcv-Err in the output of the "show interface
counters errors" command?
How do I enable/disable logging link status per port basis on a Nexus 7000 Series Switch?
On a Nexus 7000 running NX-OS 5.1(3), can the DecNet be bridged on a VLAN?
How do I check the Network Time Protocol (NTP) status on a Nexus 7000 Series Switch?
Can a Nexus 7000 be a DHCP server and can it relay DHCP requests to different DHCP servers per
VLAN?
How do I implement VTP in a Nexus 7000 Series Switch where VLANs are manually configured?
Is there a best practice for port-channel load balancing between Nexus 1000V Series and Nexus 7000
Series Switches?
During Nexus 7010 upgrade from 5.2.1 to 5.2.3 code, the X-bar module in slot 4 keeps powering off.
The %MODULE-2-XBAR_DIAG_FAIL: Xbar 4 reported failure due to Module asic(s) reported sync loss
(DevErr is LinkNum). Trying to Resync in device 88 (device error 0x0) error message is received.
What does the %OC_USD-SLOT18-2-RF_CRC: OC2 received packets with CRC error from MOD 6
through XBAR slot 5/inst 1 error message mean?
Related Information
Introduction
This document addresses the most frequently asked questions (FAQ) associated with Cisco Nexus 7000 Series
Switches.
Refer to Cisco Technical Tips Conventions for more information on document conventions.
Q. What command is used to verify the "HSRP Active State" on a Nexus 7000 Series
Switch?
A. The command is show hsrp active or show hsrp brief .
Nexux_7K# show hsrp br
P indicates configured to preempt.
|
Interface Grp Prio P State Active addr Standby addr Group addr
Vlan132 32 90 P Standby 10.101.32.253 local 10.101.32.254 (conf)
Vlan194 94 90 P Standby 10.101.94.253 local 10.101.94.254 (conf)
Vlan2061 61 110 P Active local 10.100.101.253 10.100.101.254 (conf)
Q. What is the equivalent NX-OS command for the "ip multicast-routing" IOS command, and does the Nexus 7000 support PIM-Sparse mode?
A. There is no single equivalent command; multicast routing is enabled by enabling the PIM feature, and PIM sparse mode is supported:
switch(config)#feature pim
switch(config)#interface Vlan[536]
switch(config-if)#ip pim sparse-mode
See Cisco Nexus 7000 Series NX-OS Multicast Routing Configuration Guide, Release 5.x for a complete
configuration guide.
Q. When I issue the "show ip route bgp" command, I see my routes being learned via
OSPF and BGP. How can I verify on the NX-OS which one will always be used and which
one is a backup?
A. Here is what is received:
Nexus_7010#show ip route bgp
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
Q. When I issue the "show interface counters errors" command, I see that one of the interfaces is consistently posting errors. What are the FCS-Err and Rcv-Err in the output of the "show interface counters errors" command?
A. Here is an example of the output:
----------------------------------------------------------------------------
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
----------------------------------------------------------------------------
Eth1/1 0 26 0 26 0 0
With FCS-Err and Rcv-Err, it is usually an indication that you are receiving corrupt packets.
Q. How do I enable/disable logging link status per port basis on a Nexus 7000 Series
Switch?
A. All interface link status (up/down) messages are logged by default. Link status events can be configured
globally or per interface. The interface command enables link status logging messages for a specific interface.
For example:
N7k(config)#interface ethernet x/x
N7k(config-if)#logging event port link-status
Q. On a Nexus 7000 running NX-OS 5.1(3), can the DecNet be bridged on a VLAN?
A. All of the Nexus platforms support passing DecNet frames through the device from a layer-2 perspective.
However, there is no support for routing DecNet on the Nexus.
Q. How do I check the Network Time Protocol (NTP) status on a Nexus 7000 Series
Switch?
A. In order to display the status of the NTP peers, issue the show ntp peer-status command:
switch#show ntp peer-status
Total peers : 1
-------------------------------------------------------------------------------
To collect a show tech-support file for TAC, issue the tac-pac command:
switch#tac-pac bootflash://showtech.switch1
Issue the copy bootflash://showtech.switch1 tftp://<server IP>/<path> command in order to copy the file from
bootflash to the TFTP server.
For example:
switch#copy bootflash://showtech.switch1 tftp://<server IP>/<path>
Q. Can a Nexus 7000 be a DHCP server and can it relay DHCP requests to different DHCP
servers per VLAN?
A. The Nexus 7000 does not support a DHCP server, but it does support DHCP relay. For relay, use the ip dhcp
relay address x.x.x.x interface command.
See Cisco Nexus 7000 Series NX-OS Security Configuration Guide, Release 5.x for more information on
Dynamic Host Configuration Protocol (DHCP) on a Cisco NX-OS device.
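A minimal sketch of DHCP relay, assuming the relay is applied under SVI VLAN 10 and the server address is a placeholder:
N7K(config)# feature dhcp
N7K(config)# interface vlan 10
N7K(config-if)# ip dhcp relay address 192.168.1.10
! A different relay address can be configured under each VLAN interface, which gives per-VLAN DHCP servers.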
Q. How do I verify if XL mode is enabled on a Nexus 7000 device?
A. The Scalable Feature License is the new Nexus 7000 system license that enables the incremental table sizes
supported on the M-Series XL modules. Without the license, the system will run in standard mode, meaning none of
the larger table sizes will be accessible. Having non-XL and XL modules in a system is supported, but for the
system to run in XL mode all modules need to be XL capable, and the Scalable Feature license needs to be installed.
Mixing modules is supported, with the system running in the non-XL mode. If the modules are in the same system,
the entire system falls back to the common smallest value. If the XL and non-XL are isolated using VDCs, then each
VDC is considered a separate system and can be run in different modes.
In order to confirm whether the Nexus 7000 has the XL option enabled, you first need to check if the Scalable
Feature License is installed. Also, having non-XL and XL modules in a system is supported, but in order for the
system to run in XL mode, all modules need to be XL capable.
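A quick way to check both conditions (a hedged sketch; output formats and license package names vary by release):
N7K# show license usage
! Look for the Scalable Feature license (e.g. SCALABLE_SERVICES_PKG) in the "In use" state.
N7K# show module
! Module part numbers ending in "L" (for example N7K-M132XP-12L) indicate XL-capable cards.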
Q. How do I implement VTP in a Nexus 7000 Series Switch where VLANs are manually
configured?
A. Cisco does not recommend running VTP in data centers. If someone attaches a switch with a higher configuration
revision number to the network without first changing its VTP mode from server, it can overwrite the VLAN
configuration across the domain.
Q. Is there a best practice for port-channel load balancing between Nexus 1000V Series and
Nexus 7000 Series Switches?
A. There is no recommended best practice for load-balancing between the Nexus 1000V Series and Nexus 7000
Series Switches. You can choose either a flow-based or a source-based model depending on the network's
requirement.
Q. During Nexus 7010 upgrade from 5.2.1 to 5.2.3 code, the X-bar module in slot 4 keeps
powering off. The %MODULE-2-XBAR_DIAG_FAIL: Xbar 4 reported failure due to Module asic(s)
reported sync loss (DevErr is LinkNum). Trying to Resync in device 88 (device error 0x0) error message is
received.
A. This error message corresponds to diagnostic failures on the module. It could be a bad connection between the
linecard and the X-bar, which results in the linecard being unable to sync. Typically with these errors, the first step is to
reseat the module. If that does not resolve the problem, reseat the fabric module as well as the linecard individually.
Q. What does the %OC_USD-SLOT18-2-RF_CRC: OC2 received packets with CRC error from MOD 6
through XBAR slot 5/inst 1 error message mean?
A. These errors indicate that the octopus engine received frames that failed the CRC error checks. This can be
caused by multiple reasons. For example:
Hardware problems:
o Bad links
o Backplane issues
o Sync losses
o Seating problems
Software problems:
o Old FPGA versions
o Frames forwarded to a linecard (LC) that it is unable to understand
Q. How do I verify packet drops on a Nexus 7000 Switch?
A. Verify the Rx Pause and TailDrops fields in the output of the show interface <slot/port> and show hardware
internal errors module <module #> commands for the module with these ports.
For example:
Nexus7K#show interface e7/25
Ethernet7/25 is up
input rate 1.54 Kbps, 2 pps; output rate 6.29 Mbps, 3.66 Kpps
RX
156464190 unicast packets 0 multicast packets 585 broadcast packets
156464775 input packets 11172338513 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
7798999 Rx pause
TX
6365127464 unicast packets 6240536 multicast packets 2290164 broadcast packets
6373658164 output packets 8294188005962 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble
0 Tx pause
The pauses on e7/25 indicate that the server is having difficulty keeping up with the amount of traffic sent to it.
Nexus7k#show hardware internal errors module 2 | include r2d2_tx_taildrop_drop_ctr_q3
37936 r2d2_tx_taildrop_drop_ctr_q3 0000000199022704 2-
37938 r2d2_tx_taildrop_drop_ctr_q3 0000000199942292 4-
37941 r2d2_tx_taildrop_drop_ctr_q3 0000000199002223 5-
37941 r2d2_tx_taildrop_drop_ctr_q3 0000000174798985 17 -
This indicates that the amount of traffic sent to these devices was too much for the interface to transmit. Since
each interface was configured as a trunk allowing all VLANs, and the multicast/broadcast traffic counters were low, it
appears that a large amount of unicast flooding may be causing drops on these interfaces.
Related Information
Cisco Nexus 7000 Series Switches: Support Page
Fibre Channel over Ethernet (FCoE)
Switches Product Support
LAN Switching Technology Support
Technical Support & Documentation - Cisco Systems
A. The Cisco Nexus 5500 Platform is the next-generation platform of the Cisco Nexus 5000 Series Switches, helping
enable the industry’s highest density and performance purpose-built fixed form-factor switch on a multilayer,
multiprotocol, and multipurpose Ethernet-based fabric.
A. The Cisco Nexus 5548P is the first switch in the Cisco Nexus 5500 Platform. It is offered in a one-rack-unit (1RU)
form factor and has 32 fixed 10 Gigabit Ethernet SFP+ ports with one expansion slot for added flexibility. At first
customer shipment (FCS), two expansion modules will be supported: a 16-port 10 Gigabit Ethernet SFP+ expansion
module and an 8-port 10 Gigabit Ethernet SFP+ plus 8-port native Fibre Channel expansion module.
A. The Cisco Nexus 5500 Platform is well suited for deployments in enterprise data center access layers and
small-scale, midmarket data center aggregation environments.
Q. Does this mean that the Cisco Nexus 5500 Platform will offer Layer 3 services?
A. Yes. The Cisco Nexus 5500 Platform, including the 5548P switch, will provide Layer 3 functionality via a
field-upgradable module that is targeted for Q1CY11.
Q. Is Cisco announcing the end-of-sale of the current generation of Cisco Nexus 5000 Series Switches?
A. No. Cisco has no plans to end of sale the current Cisco Nexus 5000 Series Switches.
A. Yes. All 10 Gigabit Ethernet ports on the Cisco Nexus 5548 are capable of supporting FCoE. The Storage Protocol
Services License (SPS) is required to enable FCoE operation.
A. Similar to on the first generation Nexus 5000 Series Switches, FCoE is an optional feature delivered via the Storage
Protocol Services (SPS) license on the Nexus 5548P switch. However, unlike on the first generation Nexus 5000
Series Switches, the Nexus 5548P switch provides a license with eight-port granularity. The granularity comes from
an eight-port license that enables any eight ports on the Nexus 5548P switch to perform FCoE on 10GE ports or
Native Fibre Channel on the physical Fibre Channel ports. Up to six eight-port licenses can be installed on a Nexus
5548P switch, making it the equivalent of a full chassis license.
A. The first instance of the SPS license on a system is enforced. Further instances are honor-based. However, similar
to on the current generation Nexus 5000 Series Switches, a temporary 120-day trial license goes in effect for the
entire chassis upon first use of an FC command.
Q. Can I use the Cisco Nexus 5548P Switch ports as native Fibre Channel ports?
A. The Ethernet ports on the base chassis as well as those on the expansion modules cannot be used to support native
Fibre Channel functions. However, you can use the expansion module N55-M8P8FP, which provides eight ports as
native Fibre Channel ports. The Storage Protocol Services (SPS) license is also required to enable Native Fibre
Channel operation.
Q. Does the Cisco Nexus 5548P support FCoE VE_port (Virtual E_port)?
A. Yes, the Cisco Nexus 5548P supports VE-to-VE connectivity on directly connected Data Center Bridging (DCB)-
capable links. This feature will be released for the Nexus 5548 first and for earlier N5K models in a later release.
A. Unified Ports combine the physical layer port functionality of 1 Gigabit Ethernet, 10 Gigabit Ethernet, and 8/4/2/1G
Fibre Channel onto a physical port. The physical port can be configured as 1/10G Traditional Ethernet, 10G Fibre
Channel over Ethernet, or 8/4/2/1G Native Fibre Channel. The Storage Protocol Services (SPS) license is required to
enable the use of both FCoE and Native FC operations on the Unified Ports.
A. On the Nexus 5548P, 16 Unified Ports will be offered via an expansion module targeted for Q1CY11.
● Higher port density: The Cisco Nexus 5548 can support up to 48 10 Gigabit Ethernet with a 16-port 10 Gigabit
Ethernet expansion module in a single 1RU form factor.
● Lower-latency cut-through switching: Latency is reduced to about 2 microseconds.
● Better scalability: VLAN, MAC address count, Internet Group Management Protocol (IGMP) group,
PortChannel, ternary content addressable memory (TCAM), Switched-Port Analyzer (SPAN) session, and
logical interface (LIF) count scalability are increased.
● Hardware support for Cisco® FabricPath and standards-based Transparent Interconnection of Lots of Links
(TRILL): This support makes the Cisco Nexus 5500 Platform an excellent platform for building large-scale,
loop-free Layer 2 networks.
● Support for ingress and egress differentiated services code point (DSCP) marking.
● Layer 3 support: A field-upgradable routing card will be available in the future.
● Enhanced SPAN implementation: This feature protects data traffic in case of congestion resulting from SPAN.
It enables more active SPAN sessions and supports fabric extender ports as SPAN destinations.
Q. What is the architecture of Cisco Nexus 5548P?
A. The Cisco Nexus 5548P implements a switch-fabric-based architecture. It consists of a set of port application-
specific integrated circuits (ASICs) called unified port controllers (UPCs) and a switch fabric called the unified fabric
controller (UFC). The UPCs provide packet-editing, forwarding, quality-of-service (QoS), security-table-lookup,
buffering, and queuing functions. The UFC connects the ingress UPCs to the egress UPCs and has a built-in central
scheduler. The UFC also replicates packets for unknown unicast, multicast, and broadcast traffic. Each UPC supports
eight 1 and 10 Gigabit Ethernet interfaces; however, no local switching is performed on the UPCs. All packets go
through the same forwarding path, and the system helps ensure consistent latency for all flows.
A. Yes. The Cisco Nexus 5548P hardware supports Cisco FabricPath, which will be enabled in a future software
release.
A. Yes. The Cisco Nexus 5548P hardware supports prestandard IETF TRILL, since TRILL has not been completely
standardized. Support will therefore be enabled in a future software release.
A. Yes. The Cisco Nexus 5500 Platform has been designed with Layer 3 support from the start. At FCS, Layer 3 routing
will not be available on the Cisco Nexus 5548P and will be enabled in the near future through a field-upgradable
daughter card.
Q. What are considered the front and back of a Cisco Nexus 5548P Switch?
A. The front of the Cisco Nexus 5548P is where the fans, power supplies, and management ports are located. The back
of the Cisco Nexus 5548 is where the fixed Ethernet data ports and the expansion slot are located. The data ports are
located on the back of the Cisco Nexus 5548P to facilitate cabling with servers.
Q. Do the power supplies on the Cisco Nexus 5548P support both 110 and 220-volt (V) inputs?
Q. What are the additional RJ45 ports next to the management interface on the front of the Cisco Nexus 5548P?
A. The additional front panel RJ-45 ports are designed for future use. At present, the Cisco Nexus 5548P supports only
a single out-of-band management interface.
Q. Can the existing expansion modules on the Cisco Nexus 5010 and 5020 Switches be used on the Cisco Nexus 5500
Platform?
A. No. The expansion modules supported on the Cisco Nexus 5010 and 5020 are not supported on the Cisco Nexus
5500 Platform.
Q. Can the existing power supplies and fan modules on the Cisco Nexus 5010 and 5020 be used on the Cisco Nexus
5500 Platform?
A. No. The power supplies and fan modules for the Cisco Nexus 5010 and 5020 are not interchangeable with those on
the Cisco Nexus 5500 Platform.
Q. Does the Cisco Nexus 5548P run the same software image as the Cisco Nexus 5010 and 5020 Switches?
A. Yes. All Cisco Nexus 5000 Series Switches, including the Cisco Nexus 5500 Platform, support the same software
image.
A. Yes. There is one type-A USB interface on the front of the Cisco Nexus 5548P.
A. Intel Dual-Core 1.73 GHz with 2 memory channels, DDR3 at 1066 MHz, with 4 MB cache.
Q. How much CPU memory comes with the Cisco Nexus 5548P?
Q. How much flash memory comes with the Cisco Nexus 5548P?
Q. What are the typical and maximum power consumption amounts for the Cisco Nexus 5548P?
A. The typical power consumption of the Cisco Nexus 5548P is 390 watts (W), and the maximum power consumption is
600W.
A. All Ethernet ports on the Cisco Nexus 5548P, including the Ethernet ports on expansion modules, are hardware
capable of supporting both 1 and 10 Gigabit Ethernet speeds. Software support for 1 Gigabit Ethernet will be
available in a future software release.
A. Please refer to the Cisco Nexus 5500 Platform data sheet for a list of supported transceivers and cable types. Data
sheets and associated collateral can be found at http://www.cisco.com/go/nexus5000 .
Q. Does the Cisco Nexus 5548P support IEEE 802.1ae link-level cryptography?
A. No. The Cisco Nexus 5548P Switch hardware does not support IEEE 802.1ae.
A. The Cisco Nexus 5548P provides up to 960-Gbps throughput. It implements a nonblocking hardware architecture
and helps achieve a line-rate throughput for all frame sizes, for both unicast and multicast traffic, across all ports.
Q. Should I expect any performance degradation when I turn on some features, such as access control lists (ACLs) and
Fibre Channel over Ethernet (FCoE), on the Cisco Nexus 5548P?
A. All ports on the Cisco Nexus 5548P provide line-rate performance regardless of the features that are turned on.
Q. The Cisco Nexus 5548P implements cut-through switching among all its 10 Gigabit Ethernet ports. Does it also
support cut-through switching for all 1 Gigabit Ethernet, native Fibre Channel, and FCoE ports?
A. Under various circumstances, the Cisco Nexus 5548P can act as either a cut-through switch or a store-and-forward
switch. Table 1 summarizes the switch behavior in various scenarios.
Whenever the ingress interface operates at 10 Gigabit Ethernet speed, cut-through switching is used.
Q. How many MAC addresses does the Cisco Nexus 5548P support?
A. The Cisco Nexus 5548P Switch hardware provides an address table for 32,000 MAC addresses. The same MAC
address table is shared between unicast and multicast traffic, and it also includes some internal entries. At FCS, 4000
MAC address entries will be reserved for multicast groups that are learned through IGMP snooping, and 25,000 MAC
address entries will be reserved for unicast traffic. The remaining 3000 MAC address entries will be used to handle
hash collision.
A. The Cisco Nexus 5548P supports up to 4094 active VLANs. Of these, a few are reserved for internal use, thus
providing users with up to 4014 configurable VLANs.
Q. How many PortChannels are supported with the Cisco Nexus 5548P?
A. All ports on the Cisco Nexus 5548P can be configured as PortChannel members. The Cisco Nexus 5548P Switch
hardware can support up to 48 local PortChannels and up to 576 PortChannels on the host-facing ports of Cisco
Nexus 2000 Series Fabric Extenders.
A. The Cisco Nexus 5548P provides a 4000-TCAM table size; however, the table is shared among port ACLs, VLAN
ACLs, QoS ACLs, SPAN ACLs, and ACLs for control traffic redirection.
Q. How many Spanning Tree Protocol logical ports are supported on the Cisco Nexus 5548P?
A. The Cisco Nexus 5548P supports up to 12,000 logical ports, of which up to 4000 can be network ports for switch-to-
switch connection.
A. Yes. The Cisco Nexus 2000 Series Fabric Extenders can connect to any Ethernet port on the Cisco Nexus 5548P.
Q. How many Cisco Nexus 2000 Series Fabric Extenders can connect to a single Cisco Nexus 5548P Switch?
A. At FCS, one Cisco Nexus 5548P will support up to 12 Cisco Nexus 2000 Series Fabric Extenders. The scalability will
increase with future software releases.
Q. Does the Cisco Nexus 5548P support all the currently available Cisco Nexus 2000 Series Fabric Extenders?
A. Yes. The Cisco Nexus 5548P supports all four currently available Cisco Nexus 2000 Series Fabric Extenders: Cisco
Nexus 2148T, 2248TP GE, 2224TP GE, and 2232PP 10GE Fabric Extenders.
A. No. The Cisco Nexus 5548P Switch hardware does not support NetFlow.
Q. How many SPAN sessions does the Cisco Nexus 5548P support?
Q. Does SPAN traffic affect the data traffic on the Cisco Nexus 5548P?
A. No. The Cisco Nexus 5548P Switch hardware is designed to give higher priority to data traffic during periods of
congestion when both SPAN and data traffic could contend with each other. When such congestion occurs, the Cisco
Nexus 5548P can easily be configured to protect the higher-priority data traffic while dropping the lower-priority SPAN
traffic.
Q. Can a 1 Gigabit Ethernet port on the Cisco Nexus 5548P be configured as a SPAN destination port?
A. Yes. After 1 Gigabit Ethernet mode is software enabled on the Cisco Nexus 5548P, any 1 Gigabit Ethernet port can
be configured as a SPAN destination port.
Q. Can I use SPAN to capture a Priority Flow Control (PFC) frame on the Cisco Nexus 5548P?
A. No. The PFC frame will not be mirrored from the SPAN source port to the SPAN destination port.
Q. Can a Cisco Nexus 2000 Series host-facing port be configured as a SPAN destination port on the Cisco Nexus
5548P?
A. The Cisco Nexus 5548P Switch hardware supports configuration of Cisco Nexus 2000 Series host-facing ports as
SPAN destination ports. However, the software support will be available in a future release.
Q. Does the Cisco Nexus 5548P support Encapsulated Remote SPAN (ERSPAN)?
A. In a post-FCS software release, the Cisco Nexus 5548P will support ERSPAN source sessions. The Cisco Nexus
5548P cannot de-encapsulate ERSPAN packets and therefore will not support ERSPAN destination sessions.
Q. Does the Cisco Nexus 5548P support the IEEE 1588 Precision Time Protocol (PTP) feature?
A. The Cisco Nexus 5548P Switch hardware is capable of supporting IEEE 1588 PTP. However, software support will
be available in a future software release.
Q. Do the Cisco Data Center Network Manager (DCNM) and Cisco Fabric Manager support the Cisco Nexus 5548P?
A. Cisco DCNM and Cisco Fabric Manager support for the Cisco Nexus 5548P will be available 2 to 3 months
after FCS.
Configuration Synchronization
Q. What is the configuration synchronization feature introduced in Cisco NX-OS Release 5.0(2)N1(1) for the Cisco
Nexus 5000 Series?
A. Configuration synchronization (config-sync), when enabled, allows the configuration made on one switch to be
pushed to another switch through software. The feature is mainly used in virtual PortChannel (vPC) scenarios to
eliminate the manual configuration on both vPC peer switches. It also eliminates the possibility of human error and
helps ensure that both switches have the exact same configuration.
A. Config-sync is a software feature that is hardware independent. Starting with Cisco NX-OS Release 5.0(2)N1(1), it is
supported on all Cisco Nexus 5000 Series Switches, including the Cisco Nexus 5548P.
A. No. vPC and config-sync are two separate features. For vPC to be operational, Type 1 and Type 2 parameters must
match. If the parameters do not match, users will continue to experience a vPC failure scenario. Config-sync allows
the user to make changes on one switch and synchronize the configuration with the other peer automatically.
It saves the user from having to create identical configurations on each switch.
Q. What are the three requirements for enabling the config-sync feature?
A. Config-sync messages are carried only over the mgmt0 interface. They cannot currently be carried over the in-band
switch virtual interfaces (SVIs).
Q. If I use a direct point-to-point connection using SVIs and the default Virtual Routing and Forwarding (VRF) instance
for my peer keepalive (instead of the mgmt0 interface and the management VRF instance), will config-sync work?
A. Config-sync is independent of vPC. As long as users have mgmt0 connectivity and can reach the vPC peer, config-
sync will work.
A. Users must make sure that the specific features are enabled on each Cisco Nexus 5548P Switch. Features are not
automatically synchronized.
A. No. FCoE is not supported under config-sync. The supported features for a switch profile are VLANs, ACLs,
Spanning Tree Protocol, QoS, and interface-level configurations (Ethernet, PortChannels, and vPC).
A. The configuration will be rolled back to the original (default) state, resulting in no configuration changes. Neither
switch will update any configurations.
Q. What happens if the switch profile has been created but no commit command was entered, yet a reload occurs?
A. In this instance, the switch profile was not saved to the startup configuration, and as a result, no changes will be
made.
Q. If the peer is lost (config-sync transport is down) and local configuration changes are made on one switch, what
happens when the config-sync transport (mgmt0 interface) comes back up?
A. Before the mgmt0 interface comes back up, the changes that were made on the switch are applied locally when
the commit command is entered. After the mgmt0 interface comes back up, the configuration is automatically
synchronized with that of the peer.
A. Yes, the config-sync feature is independent of the vPC. The initiator does not follow the vPC primary or secondary
switch. The commit command can be entered from either of the two switches.
A. Yes. To avoid conflicts, enter the commit command from a single switch. If you simultaneously try to enter
a commit command from the other switch, the following error message will appear:
N5K-2(config-sync-sp)# commit
Failed: Session Database already locked, Verify/Commit in Progress.
Q. Where is the configuration submode to create a switch profile?
A. A new mode is introduced with config-sync. As with config t, enter the config sync command to access the switch-
profile subcommand.
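A minimal switch-profile sketch, assuming mgmt0 reachability between the peers (the profile name and peer address are placeholders); the same profile name must be created on both vPC peers:
N5K-1# config sync
N5K-1(config-sync)# switch-profile DC-PROFILE
N5K-1(config-sync-sp)# sync-peers destination 10.10.10.102
! Add the configuration to be synchronized (VLANs, port channels, vPCs, ...) here.
N5K-1(config-sync-sp)# commit
! Verify with: show switch-profile status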
Configuration Rollback
Q. What is the minimum Cisco NX-OS release that supports configuration rollback on the Cisco Nexus 5548P?
A. Starting with Cisco NX-OS Release 5.0(2)N1(1), configuration rollback is supported on all Cisco Nexus 5000 Series
Switches, including the Cisco Nexus 5548P.
Q. Is the configuration rollback feature on the Cisco Nexus 5000 Series Switches the same as that on the Cisco Nexus
7000 Series Switches?
A. Yes. However, at FCS, the Cisco Nexus 5000 Series, including the Cisco Nexus 5548P, will support only the atomic
(default) configuration.
A. No. If feature fcoe is enabled, users will not be able to use the configuration rollback feature on the Cisco Nexus
5000 Series Switches, including the Cisco Nexus 5548P.
A. No. It requires only Cisco NX-OS Release 5.0(2)N1(1) as the minimum software version.
A. The rollback action will abort if it encounters an error. For example, assume the user has a saved checkpoint named
Test1. If an error occurs while the user is trying to roll back from the current running configuration to Test1, the
switch will retain the current running configuration.
A. Config-sync rollback occurs if a commit command is entered and fails. If the commit command fails, the new
configuration is ignored, and the system reverts to the original configuration. This is an implicit rollback that takes
place automatically. In contrast, the configuration rollback feature is user defined and is controlled by a manual
configuration that is verified and applied by the user.
A. After the system runs a write-erase or reload operation, checkpoints are deleted. You can also enter the clear
checkpoint database command.
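A brief sketch of checkpoint and rollback usage (the checkpoint name is a placeholder):
N5K# checkpoint Test1
! ... make configuration changes ...
N5K# show checkpoint summary
N5K# rollback running-config checkpoint Test1
! The atomic (default) rollback aborts and retains the running configuration if any error is encountered.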
Quality of Service
Q. How many classes of service does the Cisco Nexus 5548P support?
A. The Cisco Nexus 5548P supports eight classes of service. Two of them are reserved for internal control traffic, and
six classes of service are available for data traffic. All six classes of service can be used for non-FCoE Ethernet traffic.
Q. How many hardware queues does the Cisco Nexus 5548P have?
A. The Cisco Nexus 5548P has 384 unicast virtual output queues (VOQs) and 128 multicast VOQs at ingress for each
Ethernet port. It has 8 queues for unicast and 8 queues for multicast at egress for each Ethernet port.
Q. How many packet buffers are present on the Cisco Nexus 5548P?
A. The Cisco Nexus 5548P provides 640 KB of packet buffer for each 10 Gigabit Ethernet port: 480 KB are allocated for
ingress, and 160 KB are allocated for egress. The default configuration has one system class, class-default, for
data traffic, and all 480 KB of the ingress buffer space are allocated to class-default. User-defined system classes have
dedicated buffers and take buffer space from the 480-KB limit. Command-line interface (CLI) commands are available
to allow users to configure the desired buffer sizes for each system class.
A. The Cisco Nexus 5548P can classify incoming traffic based on CoS marking, DSCP marking, or user-defined ACL
rules.
Q. Does the Cisco Nexus 5548P trust CoS and DSCP markings by default?
A. Yes. The Cisco Nexus 5548P trusts CoS and DSCP markings by default. The switch will not modify CoS or DSCP
values unless modification is configured by the user. Although the Cisco Nexus 5548P trusts the CoS and DSCP
values, it will not classify and queue the packets based on those values. By default, all traffic will be assigned
to class-default and mapped to one queue. Users will need to define their own policy maps to classify and queue
packets based on CoS or DSCP values.
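As a rough sketch of such a user-defined policy, assuming traffic marked CoS 5 should land in its own qos-group (class, policy, and value names are placeholders; separate network-qos and queuing policies are still needed to give the class its own queue and buffer):
N5K(config)# class-map type qos match-any CM-GOLD
N5K(config-cmap-qos)# match cos 5
N5K(config)# policy-map type qos PM-GOLD
N5K(config-pmap-qos)# class CM-GOLD
N5K(config-pmap-c-qos)# set qos-group 3
N5K(config)# system qos
N5K(config-sys-qos)# service-policy type qos input PM-GOLD
! This handles classification only; without it, all traffic stays in class-default.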
Q. Does Cisco Nexus 5548 support ingress policing and egress policing?
A. The Cisco Nexus 5548P Switch hardware supports both ingress and egress policing. However, software support will
be available in a future software release.
A. No. The Cisco Nexus 5548P does not support traffic shaping.
A. Yes. The Cisco Nexus 5548P supports both ingress and egress DSCP marking.
Q. Does the Cisco Nexus 5548P support explicit congestion notification (ECN)?
A. The Cisco Nexus 5548P Switch hardware supports ECN. However, software support will be enabled in a future
software release.
Multicast
Q. How many IGMP groups can the Cisco Nexus 5548P support?
A. At FCS, the Cisco Nexus 5548P will support up to 4000 IGMP groups.
A. Multicast packets are replicated by the switch fabric. The ingress ports send one copy of the multicast packets to the
switch fabric, and the switch fabric replicates the packets for all the egress ports in the multicast group. No ingress or
egress replication takes place for the multicast packets. However, the SPAN traffic is replicated by the port ASICs
(the UPC); the receive SPAN traffic is replicated at the ingress ports, and transmit SPAN traffic is replicated at the
egress ports.
Q. How is the forwarding decision made for IP multicast packets on the Cisco Nexus 5548P?
A. The Cisco Nexus 5548P intercepts the IGMP join and leave messages from hosts and keeps track of the ports that
send join and leave messages. The IGMP group is converted to a multicast MAC address with the format
0100.5EXX.XXXX and stored in the MAC address table (sometimes referred to as a station table). Subsequently, the
IP multicast packet forwarding decision is made by checking the destination MAC address against the multicast MAC
table. For other features, such as QoS and security, the multicast IP address is used for table lookup.
Q. What happens if the Cisco Nexus 5548P receives an IP multicast packet whose group address is not yet learned by
the switch?
A. If the destination MAC address is in the range 0100.5E00.00XX, the packets will be flooded in the VLAN. Otherwise,
the IP multicast packets will be dropped if the IGMP group is unknown to the Cisco Nexus 5548P.
Virtualization
Q. What is network interface virtualization (NIV) on the Cisco Nexus 5548P?
A. NIV is a technology that allows any adapter to be virtualized in multiple virtual network interface cards (vNICs) or
virtual host bus adapters (vHBAs). Virtualized adapters can be used to provide multiple interfaces on a single server,
enabling consolidation and flexibility in both physical and virtualized server environments. Each individual vNIC and
vHBA is identified by a tag called a VNTag. When an NIV-capable adapter is connected to the Cisco Nexus 5548P,
the Cisco Nexus 5548P can use the VNTag to forward frames that belong to the same physical port.
Q. Does support for NIV mean that I can use the Cisco Nexus 5548P as an external switch for virtual machine traffic
instead of the software hypervisor switch?
A. NIV is one of the building blocks necessary to implement virtual machine traffic switching using an external hardware
switch, but it is not the only one. The full set of features is referred to as Cisco VN-Link and will be enabled on the
Cisco Nexus 5548P in subsequent releases.
A. VNTag and IEEE 802.1Qbh Port Extension provide the same capabilities, functions, and management interface. The
on-the-wire formats are somewhat different between the two. However, Cisco expects to deliver IEEE 802.1Qbh
standards-compliant products in the future that can translate between the on-the-wire formats, enabling full
interoperability of a heterogeneous VNTag and IEEE 802.1Qbh environment.
A. The logical interface, or LIF, is a data structure on the Cisco Nexus 5548P Switch hardware that allows a physical
interface on the Cisco Nexus 5548P to emulate multiple logical or virtual interfaces. The LIF data structure carries
certain properties, such as the VLAN membership, interface ACL labels, and Spanning Tree Protocol states. For NIV
support, the LIF is derived from the VNTag values carried in the packet. With a LIF data structure, the Cisco Nexus
5548P can process and forward frames on a per-LIF basis. For instance, each Cisco Nexus 2000 Series host-facing
port or virtual interface created for the vNICs could be mapped to a LIF data structure on the Cisco Nexus 5548P
Switch hardware.
A. After NIV becomes available, if you have more LIFs, you can configure more vNICs on a virtualized adapter. The
Cisco Nexus 5548P Switch hardware can support up to 8000 LIFs per UPC.
Q. Does the Cisco Nexus 5548P support virtual device contexts (VDCs)?
A. No. VDCs are supported only on the Cisco Nexus 7000 Series.
Port Profiles
Q. Describe the port-profile feature offered with Cisco NX-OS Release 5.0(2)N1(1) on the Cisco Nexus 5000 Series
Switches, including the Nexus 5548P.
A. A port profile is a preconfigured template that allows repetitive interface commands to be grouped together and
applied to an interface range.
A. Port profiles provide ease-of-configuration. The switch administrator can manage one simple interface configuration
template and apply it to a large range of ports as needed.
A. The procedures for defining and applying port profiles are as follows:
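(A minimal sketch, assuming standard NX-OS port-profile syntax; the profile name, VLAN, and interface range are placeholders.)
N5K(config)# port-profile type ethernet ACCESS-PP
N5K(config-port-prof)# switchport mode access
N5K(config-port-prof)# switchport access vlan 100
N5K(config-port-prof)# state enabled
N5K(config)# interface ethernet 1/1-10
N5K(config-if-range)# inherit port-profile ACCESS-PP
! The interfaces now inherit every command defined in the profile.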
A. When a port profile is deleted, the commands configured in the port profile are removed from the interfaces that had
inherited the port profile.
Q. Which takes precedence: the interface default, the port profile, or the interface configuration?
A. The interface configuration takes precedence over the port profile, and the port profile takes precedence over the
interface defaults.
A. Whenever a port profile is to be inherited or enabled, a checkpoint is created through interaction with the
configuration rollback feature. Upon detection of a failure, the software rolls back the configuration to the checkpoint
created before the operation was started. For the rollback, only the commands in interface mode are considered for a
diff computation. This approach helps ensure that a port profile is never partially applied, rendering the system
inconsistent because of port-profile application.
A. Yes. You can add commands, and they will also be inherited by the interface.
Q. Can port profiles be combined through inheritance of one port profile by another?
A. Yes. For instance, assume that a port profile named p2 inherits a port profile named p1. In this example, profile p1 is
called the superclass profile, and profile p2 is called the subclass profile. Inheritance allows the subclass port profile to
inherit all the commands of the superclass port profile that do not conflict with its own command list. If a conflict occurs,
the configuration in the subclass port profile overrides the configuration in the superclass port profile. For example,
assume that port-profile p2 inherits p1, and the configurations are as shown here:
port-profile p1
speed 1000
port-profile p2
inherit port-profile p1
speed 10000
switchport access vlan 100
When p2 is applied to an interface, the interface would receive speed 10000 and not speed 1000 as defined in p1.
A. Any command that is supported in the interface mode will also be supported in the corresponding port-profile mode.
A. Preprovisioning allows users to configure the Cisco Nexus 2000 Series switch ports and the expansion modules on
the Cisco Nexus 5000 Series Switches, including the Nexus 5548P, without requiring the Cisco Nexus 2000 Series
Fabric Extenders or the expansion modules to be connected to the Cisco Nexus 5000 Series chassis. With this
feature, users can also check the configuration when the Cisco Nexus 2000 Series Fabric Extenders are offline or
copy a configuration file to a running configuration.
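A hedged sketch of preprovisioning an offline FEX, assuming FEX ID 110 and a 2232PP model (both placeholders; the model keyword must match one of the strings offered by the CLI on your release):
N5K(config)# slot 110
N5K(config-slot)# provision model N2K-C2232P
N5K(config-slot)# exit
N5K(config)# interface ethernet 110/1/1
N5K(config-if)# switchport access vlan 100
! The configuration is accepted even though FEX 110 is not yet connected.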
A. Starting with Cisco NX-OS Release 5.0(2)N1(1), the preprovisioning feature is supported on all Cisco Nexus 5000
Series Switches, including the Cisco Nexus 5548P.
A. Preprovisioning is supported on all currently available Cisco Nexus 2000 Series Fabric Extenders, including the
Cisco Nexus 2148T, 2248TP, 2224TP, and 2232PP.
A. Yes. Users can make configuration changes to offline modules that have been preprovisioned before.
Q. What are the implications for the preprovisioning feature when Cisco NX-OS needs to be upgraded, downgraded, or
reloaded?
A. If the upgrade or downgrade is between images that support preprovisioning, any preprovisioned configuration will
be retained across the upgrade. When downgrading from an image that supports preprovisioning to an image that
does not, users will be asked to remove any preprovisioned configuration. When the switch is reloaded, all
configurations will be retained just as they were before the reload as long as the copy running-config startup-config or
install all command was entered before the reload.
With Admin VDC, network administrators can perform common, system-wide tasks in a context that is not
handling data plane traffic. Admin VDC also allows customers another option to secure their Nexus 7000, as they
can more easily restrict access to the Admin VDC than might be possible with a traditional Ethernet or Storage
VDC. The tasks that can be performed only in Admin VDC are below:
In Service Software Upgrade/Downgrade (ISSU/ISSD)
Erasable Programmable Logic Devices (EPLD) upgrades
Control Plane Policing (CoPP) configuration
Licensing operations
VDC Configuration including creation, suspension, deletion and resource allocation
System-wide QoS policy and port channel load balancing configuration
The Admin VDC is responsible only for managing the other Ethernet/Storage VDCs on your Nexus 7000. The
line-card interfaces are not visible or configurable in the Admin VDC, and you cannot do any other protocol
configuration there (e.g. vPC, FabricPath, OTV). You can create, edit, change, or delete Ethernet/Storage VDCs,
and you can allocate the necessary resources, such as interfaces and feature sets, to the individual VDCs, but you
cannot use the Admin VDC in the same way as the default VDC.
Please refer to this link for more details about configuring the Admin VDC, and this link for a better understanding
of the idea behind the Admin VDC.
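A hedged sketch (assuming a Supervisor 2/2E and a release that supports Admin VDC): the default VDC can be converted to an Admin VDC, and data VDCs are then created and given resources from it. The VDC name and interface range are placeholders:
N7K(config)# system admin-vdc
! Converts VDC 1 to an Admin VDC; existing non-global configuration must be removed or migrated first.
N7K(config)# vdc PROD
N7K(config-vdc)# allocate interface ethernet 3/1-8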
Fabricpath FAQs
1. What is the reserved MAC address used for flooding unknown unicast traffic in FabricPath?
Answer: 01:0F:FF:C1:01:C0
Note: On F2/F2E line cards, we can increase the maximum number of supported vPC+ port channels by using the
no port-channel limit command.
Note: We can use the fabricpath ttl command to configure the TTL value.
11. Are MAC addresses advertised by FabricPath IS-IS, as they are in OTV?
Answer: No, FabricPath IS-IS does not advertise any MAC addresses.
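A minimal FabricPath enablement sketch (the VLAN and interface are placeholders; an F-series module and the appropriate license are assumed, and the feature set must be installed before it can be enabled in the VDC):
N7K(config)# install feature-set fabricpath
N7K(config)# feature-set fabricpath
N7K(config)# vlan 100
N7K(config-vlan)# mode fabricpath
N7K(config)# interface ethernet 1/1
N7K(config-if)# switchport mode fabricpath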
Default VDC:-
1. The default VDC can be used for the management of all the VDCs in the chassis.
From the default VDC, a network-admin user can create, delete, or modify other non-default VDCs and can
allocate interfaces to the non-default VDCs.
2. Interfaces can be allocated to the default VDC, and it can then handle user traffic just like a non-default VDC.
Admin VDC:-
The Admin VDC can be created from the initial configuration wizard. It is used only for the management of the
complete chassis and the associated non-default VDCs. No interfaces can be allocated to the Admin VDC, so it
cannot handle user traffic.
Note: The default and admin VDC cannot coexist at the same time; VDC 1 can be configured as either the default
VDC or the Admin VDC.
All global configuration, such as CoPP and port-channel load-balancing methods, remains in the Admin VDC.
F1 card:
Performs Layer 2 tasks only.
No interface can be converted to Layer 3.
M and F1 cards can coexist in a chassis.
F2 line card:
Interfaces can be used as L2 or L3.
M and F2 cards cannot coexist in a chassis.
Does not support OTV, MPLS, or LISP.
F2E line card:
Interfaces can be used as L2 or L3.
M and F2E cards can coexist in a chassis, but in L2 mode only.
Does not support OTV, MPLS, or LISP.
F3 line cards:
Interfaces can be used as L2 or L3.
M and F3 cards can coexist in a chassis.
Support OTV, MPLS, and LISP features.
VPC FAQs
1. Can the vPC port-channel number be different on each peer switch?
Answer: Yes, the port-channel number can be different on each peer (the vPC number itself must match).
Parameter                     Default
vPC system priority           32667
vPC peer-keepalive interval   1 second
vPC peer-keepalive timeout    5 seconds
vPC peer-keepalive UDP port   3200
The problem with this design is that STP will block port Gi0/3 on SW-2. Traffic from SW-2 to SW-3, instead of
taking the direct path, will therefore reach SW-3 via SW-1; this is known as a suboptimal path. It adds an extra hop
to the path and reduces the efficiency of the network.
Please use the below link to check the difference between FAB-1 and FAB-2:
http://netterrene.blogspot.in/2014/08/fabric-module-in-cisco-nexus-7k-switches.html
Fabric cards can be replaced one by one without any disruption. Both types of card can work together, but running
them mixed for a long time is not recommended. If all fabric modules are not replaced within 12 hours of the first
card installation, the switch will generate syslog warning messages asking you to complete the migration.
To power a module in slot 2 off and back on:
switch(config)# poweroff module 2
switch(config)# no poweroff module 2
Model   Switching capacity   Form factor
7708    Up to 21 Tbps        9 rack units (RU)
7710    42 Tbps              14 RU
7718    83 Tbps              26 RU
What is the difference between the Nexus 7000 and 7700?
The M1 line card comes in two versions: XL and non-XL. Both have the same architecture; the only difference
between them is the amount of memory available for the TCAM, FIB, and MAC address tables.
The table below compares the XL and non-XL cards. The XL card needs a license in order to use its increased
capacity; without the license there is no performance difference between the XL and non-XL cards.
You can identify whether a card is XL or non-XL from the line card model number: an XL card has an "L" at the
end of the model number, such as N7K-M132XP-12L. Without the license, however, it has the same capability as a
non-XL card.
Comparison categories: IPv4 routes, IPv6 routes, ACL entries.
The M-series card is used for Layer 3 features such as routing and ACLs. With F1 modules,
we must have at least one M-series card in the chassis to get routing (L3) functionality;
otherwise we cannot create SVIs or do inter-VLAN routing.
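As a quick illustration of the SVI-based inter-VLAN routing mentioned above, a minimal sketch with assumed VLAN and IP values:
N7K(config)# feature interface-vlan
N7K(config)# interface vlan 10
N7K(config-if)# ip address 10.1.10.1/24
N7K(config-if)# no shutdown
N7K(config-if)# interface vlan 20
N7K(config-if)# ip address 10.1.20.1/24
N7K(config-if)# no shutdown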
http://netterrene.blogspot.in/2014/09/difference-between-m-series-and-f.html
Below is the architecture of the M-series card. Please note that different M-series
cards can have different architectures.
FABRIC:- This is not the chassis fabric; each card has its own fabric ASICs, which
connect the module to the backplane fabric cards. The number of fabric ASICs varies
per card. The more fabric ASICs present on the card, the more backplane throughput
the card has. Each fabric ASIC has five interfaces to connect to the chassis fabric cards.
REPLICATION ENGINE:- It replicates packets as and when required, not only for port
mirroring (SPAN) but also when the card receives multicast, broadcast, or unknown
unicast traffic.
Since the same replication engine is responsible for multicast, there is a limit on the
packet replication a card can handle. If multicast replication is extremely high it can
choke the replication engine, although this should not happen under normal
circumstances.
VOQ:- VOQ stands for Virtual Output Queue. It is high-speed memory used to queue
packets so that they do not overrun the fabric. VOQs are controlled by the central
arbiter sitting in the supervisor module.
EOBC:- EOBC stands for Ethernet Out-of-Band Channel. The supervisor module has a
24-port local switch through which it connects to each line card and fabric module;
the channel has 1 Gbps capacity.
EOBC connects the local CPU on each line card to both supervisor modules and to the
other line cards. Each line card has two EOBC connections, one to each supervisor module.
LC CPU:- Each line card has its own small built-in CPU, which is connected to the
supervisor CPU via the EOBC.
10G MAC:- It receives packets from the interface, encodes the data, and sends it to
the replication engine.
4:1 MUX + LINKSEC:- It multiplexes and de-multiplexes the data coming in or out of
four front-panel ports onto one 10-Gig connection to the backplane. This
oversubscription ratio varies per card model. It also performs the LinkSec function,
encrypting and decrypting the data.
Central arbitration:- It controls the traffic going into and out of the crossbar fabric
based on priority and available bandwidth.
M-series:-
a) M1 cards:-
N7K-M148GS-11L
N7K-M148GT-11L
N7K-M108X2-12L
N7K-M132XP-12L
b) M2 cards:-
N7K-M224XP-23L
F-series:-
Note:
In the chart below I have tried to explain the fields present in a Nexus line card
model number. I have taken N7K-F248XT-25E as an example, but you can derive
the details of any line card using it.
What is GGSN?
GGSN (Gateway GPRS Support Node) is the mobility anchor point within the
mobile packet core network. It provides connectivity to the SGSN (Serving GPRS
Support Node) and the PDN (Packet Data Network). Session state information for the
subscriber is always maintained at the GGSN. It also maintains the information
required to route user traffic towards the SGSN and the PDN.
Processes PDP requests from SGSNs in both the home and foreign PLMN networks.
After the subscriber attaches to the network, it initiates the PDP activation
procedure.
Assigns an IP address to the subscriber - A subscriber can have a
maximum of 11 PDP contexts (primary and secondary combined). Each subscriber must
have at least one primary PDP context in order to access services in the PDN.
Secondary PDP contexts are created depending on the type of application the
subscriber is accessing, typically when the application needs more bandwidth than
was negotiated in the primary PDP context. The GGSN assigns an IP address for every
primary PDP context; a secondary PDP context is associated with its primary PDP
context, so the GGSN does not assign it a separate IP address.
Negotiates QoS - For any given subscriber session, the GGSN negotiates
the QoS parameters with the SGSN as part of the PDP activation procedure and during
any PDP modification procedure.
Dynamic policy control - The GGSN has a Gx interface towards the PCRF, which is
used for policy control and charging rule functions. This helps the GGSN
charge the subscriber as per the QoS policy. Depending on the type of
subscription, the PCRF can push various QoS policies for the subscriber
and install different charging rules.
Performs prepaid/postpaid billing - Using the Gy interface, the GGSN
performs prepaid billing with the OCS (Online Charging System) and
performs postpaid billing towards the Charging Gateway Function.
The GGSN also authenticates users via AAA, OCS, and PCRF, since all of these
maintain a database with the user's subscription.
The GGSN also provides secure VPN tunnel connectivity for corporate
subscribers towards the corporate PDN. Tunneling mechanisms such as
GRE, IPsec, and L2TP can be used to set up the tunnel on the Gi interface.
In TOR (Top of Rack), one or two access switches are installed at the top of each
server rack to provide network connectivity to the servers; those access switches
then connect to the aggregation switches located in the network rack. Hence only
a few cables run from each server rack to the network rack.
Advantage:-
Cabling cost: - It reduces the cable requirement because all server connections are
terminated within their own rack, so only a few cables run between the server and
network racks.
Cable management: - Fewer resources and skills are needed to manage the cabling
infrastructure.
Easy management and changes: - Since very few cables run between the server and
network racks, it is easy to locate a cable and make changes.
Disadvantage:-
Switch management: - As each rack requires one or two local switches, managing the
switches becomes an overhead. It requires not only extra IP addresses but also a
management tool for inventory and configuration of the devices. Each tool can monitor
only a limited number of devices, so more devices in the network mean more license cost.
More rack space: - We require more rack space to install SAN and LAN switches
in the server rack, which in turn increases the overall rack requirement.
In EOR (End of Row), all the network switches are placed in the network rack, and a
cable from each server, located in the server racks, runs to the network rack.
Advantage:-
Rack space: - The overall device count is lower, so less rack space is required.
Disadvantage:-
Inefficient Layer 2 traffic: - East-west traffic is greater than north-south traffic.
In an EOR design, if two servers in the same rack and VLAN need to talk to each other,
the traffic goes to the aggregation switch in the network rack and then comes back,
which reduces efficiency.
Cable requirement: - Since a cable runs between each server and the network switches
located in different racks, the cable requirement increases, adding cost to
deployment and maintenance.
We can configure more than one network-admin user, but as per the
recommendation the number should be kept to a minimum.
Network-operator: - A user in this role can only view the configuration and cannot
make any changes.
VDC-admin: - Can change the configuration of its own VDC; it cannot make any
changes in other VDCs or at the physical/chassis level (for example, reload).
We can also assign the vdc-admin role to a user within the default VDC. By doing
so we restrict the user's access to the default VDC only; the user will not be able to
make any changes in other non-default VDCs.
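A short sketch of mapping users to these roles; the usernames and password below are made-up examples:
N7K(config)# username monitor1 password S3cur3Pa55 role network-operator
N7K(config)# username dcops1 password S3cur3Pa55 role vdc-admin
N7K# show user-account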
Each of the above methods has its own pros and cons. Please go through the blog
below to find more details about the methods.
http://netterrene.blogspot.in/2014/09/top-of-rack-vs-end-of-row.html
Disadvantage:-
Switch management: - As each rack requires one or two switches, managing the
switches becomes an overhead. It requires not only extra IP addresses but also
configuration of a management tool, which can monitor only a limited number of
devices; more devices in the network mean more license cost.
Disadvantage:-
Cable requirement: - Since a cable runs between each server and the network switches
located in different racks, the cable requirement increases, adding cost to
deployment and maintenance.
Cable management: - More resources and skill are required for cable
management, which increases the overall budget of the project.
Time to make changes: - As more cabling infrastructure is involved,
modifications not only become tedious but also require more time.
The N2K not only increases the number of access ports for end-host connections but also
reduces the major disadvantages of both TOR and EOR, as discussed below:
1. Unlike EOR, it reduces the number of cables between the network and server racks,
because there are only a few uplinks between the 2K and its parent switch (5K/7K).
Fewer cables mean lower cable-management and procurement costs, and in turn better
efficiency.
2. The Cisco Nexus 2000 cannot work standalone; it needs either an N5K or an N7K as its
parent, which reduces the management burden compared to TOR. Less management means
fewer IP addresses and network resources, as well as fewer inventory and configuration
management server licenses.
Apart from the above advantages, the Cisco 2K has a few disadvantages as well, which
are mentioned below:
1. It does not perform local switching. Two servers connected to the same FEX cannot
communicate directly; traffic from server-1 goes to the parent switch (5K/7K) and then
comes back to server-2 connected to the same FEX (see the FEX attachment sketch below).
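As referenced above, a minimal sketch of attaching an N2K (FEX) to an assumed N5K parent; the FEX ID, uplink interfaces and port-channel number are illustrative:
N5K(config)# feature fex
N5K(config)# interface ethernet 1/1-2
N5K(config-if-range)# channel-group 100
N5K(config)# interface port-channel 100
N5K(config-if)# switchport mode fex-fabric
N5K(config-if)# fex associate 100
! host-facing FEX ports then show up as ethernet 100/1/x
N5K(config)# interface ethernet 100/1/1
N5K(config-if)# switchport access vlan 10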
OTV FAQs
1. How many bytes of overhead does OTV encapsulation add to each frame?
Answer:- 42 bytes
2. What CoS/DSCP marking does OTV apply to its control-plane traffic?
Answer:- CoS = 6 / DSCP = 48
Fabric cards must be present in all Nexus 7000 switches for them to work, except the
7004, which does not support fabric cards.
Fabric cards are hot-swappable: we can remove one from the chassis and the other
fabric cards will take over without any impact to the traffic.
There are two types of fabric cards available, listed below. The migration from FAB-1
to FAB-2 is non-disruptive, but keeping both in the chassis for a long duration is not
recommended by Cisco.
Note:-
1. The LAN_ADVANCED_SERVICES_PKG license (N7K-ADV1K9) is needed to create
more than one VDC, up to 4 VDCs. Without the license you can only use VDC 1
(admin or default, whichever is chosen in the initial wizard).
2. For Sup2, the VDC license (N7K-VDC1K9) is needed to add licenses for 4 more
VDCs, allowing support for up to 8 VDCs. Each license increments the VDC count by 4.
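To verify which of these packages are installed before creating additional VDCs, a minimal check (the license file name is a placeholder):
N7K# show license usage
! install the VDC license if it is not already present
N7K# install license bootflash:n7k_license.lic
N7K# show vdc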
7004:-
A maximum of 2 line cards is supported, along with 2 dedicated supervisor slots that
cannot be used for line cards. M2 and F2/F2E series modules are supported. It does not
support the F1 series modules or non-XL M1 series modules.
7009:-
Rack Space - 14 RU
7010:-
Rack Space - 21 RU
7018:-
Rack Space - 25 RU