BRKDCT-2951
Session Abstract
This session is targeted at Network Engineers, Network Architects and IT administrators who have deployed or are considering the deployment of the Nexus 7000. The session begins with a Nexus 7000 hardware overview and primarily focuses on Data Centre related features and implementation best practices. The session covers recent hardware enhancements to the Nexus 7000 product line, such as the new Nexus 7004 chassis, the new supervisor modules (Sup2/Sup2E) and the new high-performance 10/40/100G linecards (M2 and F2e).
The session also presents Data Centre design examples, and its best-practices section covers features such as VDC, vPC, Cisco FabricPath, Layer 2, Layer 3, Fabric Extenders (FEX), etc.
Attendees should have a basic knowledge of the Nexus 7000 hardware platform and software features, as well as a good understanding of L2 and L3 protocols.
Agenda
Evolution of Data Centre Trends & Observations
Changes to the Data Centre Fabric
Nexus 7000 Switching Hardware Overview
Features Overview & Best Practices
Data Centre Design Examples
CAPACITY
Do I have the right performance to scale?
COMPLEXITY
How do I simplify deployments?
COST
How can I be operationally efficient?
What has changed? Almost everything: hypervisors, cloud (IaaS, PaaS, SaaS), MSDC, ultra low latency.
[Diagram: classic three-tier design with a Layer 3 core, services in the aggregation layer, and a Layer 2 access layer.]
Workload Virtualisation
Flexibility & provisioning
Partitioning: physical devices partitioned into virtual devices
[Diagram: virtual machines (App + OS) consolidated onto physical servers.]
[Diagram: rows of blade chassis, each with eight blade slots.]
Ultra Low Latency: High Frequency Trading; Layer 3 & multicast; no virtualisation; limited physical scale; Nexus 3000 & UCS; 10G edge moving to 40G
HPC/Grid: Layer 3 & Layer 2; no virtualisation; Nexus 2000, 3000, 5500, 7000 & UCS; 10G moving to 40G
MSDC: Layer 3 edge (iBGP, ISIS); 1000s of racks; homogeneous environment; no hypervisor virtualisation; 1G edge moving to 10G; Nexus 2000, 3000, 5500, 7000 & UCS
SP and Enterprise: hypervisor virtualisation; shared infrastructure; heterogeneous; 1G edge moving to 10G; Nexus 1000v, 2000, 5500, 7000 & UCS
L2/L3
Active workload migration (e.g. vMotion) is currently constrained by the latency requirements associated with storage synchronisation. A tightly coupled workload domain has specific network, storage, virtualisation and services requirements.
Asynchronous Storage
Burst workload (adding temporary processing capacity) and Disaster Recovery leverage out-of-region facilities. A loosely coupled workload domain has a different set of network, storage, virtualisation and services requirements.
Nexus hardware provides a solid toolset for these designs:
FabricPath: 16-way ECMP & 16-way port channels x 10G links
Nexus 3000: 32-way ECMP
Scaling port-channel bandwidth: 8 links to 16 links; Virtual Port Channels and FabricPath on Nexus 7K, Nexus 5K
FEX
End of Row (EoR)
Benefits:
De-coupling and optimisation of Layer 1 and Layer 2 topologies
Simplified Top of Rack cabling with an End of Row management paradigm
Support for rack and blade server connectivity
Reduced number of management points compared with a ToR model; fewer devices to manage
[Diagram: spanning-tree blocked links become active links.]
Features: overcomes spanning-tree blocked links
Benefits: double the bandwidth of the existing infrastructure
Cisco FabricPath: Extend VLANs Within the Data Centre
[Diagram: a Nexus FabricPath network spanning POD 1, POD 2 and POD 3, with VLAN 1 extended to every pod.]
Features: span VLANs within the data centre, across racks/pods; high cross-sectional bandwidth; servers in a single domain
Benefits: extend VLANs across the data centre; leverage compute resources across the data centre for any workload; simplify scale-out by adding compute resources for any app, anywhere in the data centre
OTV: Extend VLANs Across Data Centres
[Diagram: DC 1, DC 2 and DC 3 interconnected, with VLAN 1 extended to every site.]
Features: Ethernet LAN extension between data centres
Benefits: many physical sites, one logical data centre design; leverage and optimise compute resources across data centres for any workload; enables disaster avoidance and simplifies recovery
I/O Modules
Supervisor Engine
Chassis
Fabrics
Highest 10GE Density in Modular Switching: Nexus 7004, Nexus 7009, Nexus 7010, Nexus 7018
Nexus 7009: 14 RU; 550 Gbps/slot; 336/42/14 max 10/40/100GE ports; side-to-side airflow; 2 x 6 kW AC/DC or 2 x 7.5 kW AC power supplies; application: data centre and campus core
Nexus 7018: 25 RU; 550 Gbps/slot; 768/96/32 max 10/40/100GE ports; side-to-side airflow; 4 x 6 kW AC/DC or 4 x 7.5 kW AC power supplies; application: large-scale data centre
Nexus 7004
2 supervisors + 2 I/O modules; no fabric modules required
Up to four 3 kW AC/DC power supplies
Airflow: side-to-rear (air exhaust through the fan tray)
Use cases: DC edge, small core/aggregation
Supports FabricPath, OTV, LISP, etc.
Supported Modules
M1-XL M2-XL F2/F2e SUP2/SUP2E Sup1 F1 M1-NonXL
7RU
Supervisor slots (1-2) I/O module slots (3-4) Power supplies
Available as of an NX-OS 6.1 maintenance release
Front
Rear
Supervisor 1 front panel: console port, management Ethernet, AUX port, CMP Ethernet, reset button.
Supervisor Comparison
CPU: Sup1 Dual-Core Xeon 1.66 GHz; Sup2 Quad-Core Xeon 2.13 GHz; Sup2E 2 x Quad-Core Xeon 2.13 GHz
Memory: Sup1 8 GB; Sup2 12 GB; Sup2E 32 GB
Flash memory: Sup1 Compact Flash; Sup2 USB; Sup2E USB
CMP: Sup1 supported; Sup2 not supported; Sup2E not supported
NX-OS release: Sup1 4.0 or later; Sup2 6.1 or later; Sup2E 6.1 or later
VDCs: Sup1 4; Sup2 4+1; Sup2E 8+1
FEX: Sup1 32 FEX/1536 ports; Sup2 32 FEX/1536 ports; Sup2E 48 FEX/2048 ports
Fabric Modules
Fabric 1 (N7K-C7018-FAB-1): each module provides 46 Gbps per I/O module slot; up to 230 Gbps per slot with 5 fabric modules
Fabric 2 (N7K-C7018-FAB-2, new): each module provides 110 Gbps per I/O module slot; up to 550 Gbps per slot with 5 fabric modules
I/O modules leverage different amounts of fabric bandwidth
Fabric access is controlled using QoS-aware central arbitration with VOQ
N7K-M224XP-23L
N7K-M148GS-11/N7K-M148GS-11L
N7K-M206FQ-23L
N7K-M202CF-22L
F family: low cost, high performance, low latency, low power and a streamlined feature set
FabricPath
FCoE
Advanced Features
[Diagram: 40G/100G uplinks towards the core / ISP.]
Integration of FabricPath with LISP & MPLS by providing M/F2e VDC interoperability support*; better scaling by utilising the larger M-series tables; ports 41-48 capable of wire-rate encryption with MACsec*
Interop options Software required F2e Behaviour
Ideal for EoR and MoR designs; enables cost-effective MoR/EoR deployments
Incremental features in F2e:
Interoperability with M1-XL/M2* (F1 interop not planned)
MACsec (802.1AE)*
Bidir PIM*
SVI stats*
IPv6 DSCP-to-queue mapping
48-port 1G/10G copper module
FabricPath
IEEE 1588 PTP
FEX Support
System scale
Nexus 7000
Up to 2048 host ports
FEX supported with Sup1, Sup2 and Sup2E
M132XP, M224XP & F2 series modules support FEX
Up to 48 FEX modules (both 1GE and 10GE FEX) supported with Sup2E (6.1)
Choice of 1G/10G interfaces with FEX
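A minimal sketch of bringing up a FEX on a parent Nexus 7000 (the FEX number, module slot and port numbers are examples):
! Enable the FEX feature-set, then bind fabric ports to the FEX
Nexus7K(config)# install feature-set fex
Nexus7K(config)# feature-set fex
Nexus7K(config)# interface ethernet 3/1-2
Nexus7K(config-if-range)# channel-group 101
Nexus7K(config)# interface port-channel 101
Nexus7K(config-if)# switchport mode fex-fabric
Nexus7K(config-if)# fex associate 101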
Core
Layer 3 protocols: OSPF, BGP, EIGRP, PIM
Agg
VDC 1
GLBP
Admin VDC
VDC 2
VDC 8
DMZ
Internet
Features: flexible separation/distribution of hardware resources and software components; complete data plane and control plane separation; complete software fault isolation; securely delineated administrative contexts
Benefits: device consolidation, both vertical and horizontal; reduced number of devices means lower power usage, reduced footprint and lower CapEx/OpEx; fewer devices to manage; optimised investment
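A minimal sketch of creating a VDC and allocating interfaces to it (the VDC name and ports are examples):
! Create the VDC from the default/admin VDC and move ports into it
Nexus7K(config)# vdc Agg1
Nexus7K(config-vdc)# allocate interface Ethernet4/1-8
Nexus7K# switchto vdc Agg1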
Interface allocation granularity per module type:
M148 (48-port 1GE) and M108 (8-port 10GE) M1 modules: any individual port (e.g. port 1, port 2)
M2 modules (24-port 10GE, 6-port 40GE, 2-port 100GE): any individual port (e.g. port 1, port 2)
M132 (32-port 10GE M1): all ports in the same port-group (e.g. 1,3,5,7 or 2,4,6,8)
F1 and F2/F2e (32/48-port 10GE): all ports in a SoC (port-group) (e.g. ports 1,2 ... 7,8 on F1; ports 1-4 ... 13-16 on F2/F2e)
The default VDC mode allows M1 / F1 / M1-XL / M2-XL modules; other dedicated modes (e.g. F1-only, M1-only, M1-XL-only, M2-XL-only and F2-only) are configurable:
Nexus7K(config)# vdc inet
Nexus7K(config-vdc)# limit-resource module-type m2-xl
It is recommended to allocate whole modules per VDC; this helps with better hardware resource scaling.
Communication between VDCs must use front-panel ports; there is no soft cross-connect or backplane inter-VDC communication.
Admin VDC
Purely Administrative Context
Available on Supervisor 2/2E Provides pure administrative context
CoPP configuration / HWRL configuration
ISSU and EPLD upgrades
VDC creation, suspension and deletion; interface allocation
Show tech-support, tac-pac, debugs, GOLD diagnostics
System-wide QoS, port-channel load-balancing
Power-off & out-of-service of modules
License management
Admin
Management Functions
CoPP ISSU GOLD Licensing EPLD
Admin
VDC
Infrastructure Kernel
Doesn't require the Advanced or VDC license
Can use 1 Admin VDC + 1 Data VDC (1+1)
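A sketch of enabling the Admin VDC on a Sup2/Sup2E system; the migrate option moves the existing default-VDC configuration into a new data VDC (the name Agg1 is an example):
! Convert the default VDC into the Admin VDC, migrating its config to a new data VDC
Nexus7K(config)# system admin-vdc migrate Agg1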
VDC1 shares = 2; VDC2 shares = 4; VDC3 shares = 1; VDC4 shares = 8; VDC5 shares = 10; VDC6 shares = 5
CPU shares are controlled by the NX-OS scheduler in the kernel. Processes that do not need the CPU do not affect the CPU time of other processes. *Sup2 and Sup2E require NX-OS 6.1
limit-resource module-type m1 f1 m1xl m2xl
allow feature-set ethernet
allow feature-set fabricpath
allow feature-set fex
cpu-share 5
allocate interface Ethernet4/1-8
boot-order 1
<snip>
N7K-1# show vdc Agg1 det
vdc id: 2
vdc name: Agg1
vdc state: active
vdc mac address: 00:26:98:0f:d9:c2
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
CPU Share: 5
CPU Share Percentage: 20%
vdc create time: Mon Apr 23 15:13:39 2012
vdc reload count: 0
vdc restart count: 0
vdc type: Ethernet
vdc supported linecards: m1 f1 m1xl m2xl
Uses all available uplink bandwidth; enables dual-homed servers to operate in active-active mode
Provides fast convergence upon link/device failure
If HSRP is enabled, both vPC devices are active/active on the forwarding plane
Available since NX-OS 4.1(3) on the Nexus 7000 & NX-OS 4.1(3)N1 on the N5K
! Enable vPC on the switch
dc11-7010-1(config)# feature vpc
! Check the feature status
dc11-7010-1(config)# show feature | include vpc
vpc        1        enabled
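Beyond enabling the feature, a vPC deployment needs a domain, a peer-keepalive, a peer-link and vPC member ports; a minimal sketch (domain ID, addresses and port-channel numbers are examples):
! vPC domain and peer-keepalive over the management VRF
dc11-7010-1(config)# vpc domain 10
dc11-7010-1(config-vpc-domain)# peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
! Peer-link (trunk port-channel between the two peers)
dc11-7010-1(config)# interface port-channel 1
dc11-7010-1(config-if)# switchport mode trunk
dc11-7010-1(config-if)# vpc peer-link
! Downstream vPC member port-channel towards an access switch or server
dc11-7010-1(config)# interface port-channel 20
dc11-7010-1(config-if)# switchport mode trunk
dc11-7010-1(config-if)# vpc 20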
MCEC
Enable STP port type edge and port type edge trunk on host ports
Enable STP BPDU guard globally on access switches. Selectively allow VLANs on trunks.
BPDU-guard
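A sketch of these STP best practices on an access switch (interface and VLAN numbers are examples):
! BPDU guard enabled globally for all edge ports
switch(config)# spanning-tree port type edge bpduguard default
! Host-facing trunk with a pruned VLAN list
switch(config)# interface ethernet 1/10
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 10,20
switch(config-if)# spanning-tree port type edge trunk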
Enable peer-gateway and vPC auto-recovery under the vPC domain on both peers (agg1a/agg1b).
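Both options are configured under the vPC domain; a sketch on agg1a (the domain ID is an example):
agg1a(config)# vpc domain 10
agg1a(config-vpc-domain)# peer-gateway
agg1a(config-vpc-domain)# auto-recovery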
Utilise diverse 10GE modules to form the vPC peer-link (it must be a 10GE port-channel).
The peer-link port-channel requires identical modules on both sides and can use any 10GE module (M1, M2, F1, F2/F2e). Dedicated mode (for M132) is recommended; shared mode is supported but not recommended. The vPC peer-link must be configured as a trunk.
Po1
vPC_PL
Trunk allowed VLANs = vPC VLANs
Core
Orphan port
Isolated!!
Acc1 Acc2 Acc3
If the vPC peer-link fails, the secondary vPC peer suspends its local vPCs and shuts down the SVIs of vPC VLANs.
agg1a
agg1b
Acc1a
Acc1b
agg1b vPC_PL
vPC member ports on S1 and S2 should have identical parameters (MTU, speed, ...). Any inconsistency in such parameters is Type-1: all VLANs on both vPC legs are brought down on such an inconsistency. With the graceful Type-1 check, only the secondary peer's vPC members are brought down.
Type-1 Inconsistency
vPC 1
po1 CE-1
S1(config-vpc-domain)# graceful consistency-check
S2(config-vpc-domain)# graceful consistency-check
Graceful Type-1 check is enabled by default.
Orphan-Port Suspend
vPC Active / Standby NIC teaming support
A vPC orphan port is a non-vPC interface on a switch where other ports in the same VLAN are configured as vPC interfaces. Prior to release 5.0(3)N2 on the Nexus 5000/5500 and 5.2 on the Nexus 7000, an orphan port was not shut down on loss of the vPC peer-link. With the supported releases, the orphan ports on the vPC secondary peer can also be shut down (configurable), triggering NIC-teaming recovery for all teaming configurations. The configuration is applied to the physical port*.
N5K-2(config)# int eth 100/1/1
N5K-2(config-if)# vpc orphan-port suspend
A vPC-attached server fails over correctly.
vPC
eth 100/1/1
An active/standby server does not fail over correctly, since the orphan port is still active.
* orphan-port suspend with a FEX host interface requires the 6.1(2) release due to CSCua35190
5k1
5k2
Nexus 5K
Multicast Traffic
Dynamic Layer 3 peering support over vPC with F2 modules on the N7K is targeted for the 6.2 release (1HCY13).
Benefits of FabricPath
Eliminates spanning-tree limitations
Multi-pathing across all links, high cross-sectional bandwidth
High resiliency, faster network re-convergence
Any VLAN, anywhere in the fabric: eliminates VLAN scoping
Nexus7K(config)# install feature-set fabricpath
Nexus7K(config)# feature-set fabricpath
Nexus7K(config)# fabricpath switch-id <#>
Nexus7K(config)# interface ethernet 1/1
Nexus7K(config-if)# switchport mode fabricpath
FabricPath
FabricPath Terminology
FP core ports:
Interface connected to another FabricPath device
Sends/receives traffic with the FabricPath header
Does not perform MAC learning; no STP
Exchanges topology info through an L2 IS-IS adjacency
Forwarding based on the Switch-ID table
Ethernet frames transmitted on a Cisco FP core port always carry an IEEE 802.1Q tag, and as such a core port can conceptually be considered a trunk port.
Spine Switch
FabricPath (FP)
S100 S200 S300
FabricPath
Conversational Learning & VPC+
[Diagram: hosts A and B attached to FabricPath edge switches; each per-port MAC table entry for a remote host points to the remote switch-ID and port (e.g. s8, e1/2); devices attach active/active via VPC+ across VLANs X, Y and Z.]
The per-port MAC address table only needs to learn the peers that are reached across the fabric
A virtually unlimited number of hosts can be attached to the fabric
Allows extending VLANs with no limitation (no risk of loops)
Devices can be attached active/active (VPC+) to the fabric using IEEE standard port channels, without resorting to STP
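A minimal vPC+ sketch: the vPC domain is given a FabricPath switch-ID (the emulated switch) and the peer-link runs as a FabricPath core port (the IDs are examples):
S100(config)# vpc domain 1
S100(config-vpc-domain)# fabricpath switch-id 1000
S100(config)# interface port-channel 1
S100(config-if)# switchport mode fabricpath
S100(config-if)# vpc peer-link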
FabricPath switch-id
Configure the switch-ID manually for all switches in the network: fabricpath switch-id 1. Make sure the switch-ID (as well as the vPC+ emulated switch-ID) is unique in the whole FP fabric.
Routing At Aggregation
Centralised routing: an evolutionary extension of current design practices
Design benefits:
Simplified configuration; removal of STP
L2/L3 boundary
SVIs
L3
SVIs
Routed core
Aggregation
Traffic distribution over all uplinks without vPC port-channels; active/active gateways; VLAN anywhere at the access layer; topological flexibility
FabricPath
Access
Scalability considerations
Today: 16K unique host MACs across all routed VLANs
Routing At Aggregation
Option to Scale-out the Spine Layer
VLAN 100-200 VLAN 300-400
VLAN 100-400
VLAN 100-400
FabricPath
FabricPath
FabricPath
Split VLANs
GLBP
A host is pinned to a single gateway; less granular load balancing.
Centralised Routing
Removing Routing from the FP Spine Layer
Centralised Routing Design Alternate View
L3
FabricPath spine L2/L3 boundary Layer 3 services leaf switches
FabricPath
FabricPath spine
FabricPath
L3
Server access leaf switches
All VLANs available at all leaf switches; FHRP runs between the L3 services leaf switches
Centralised Routing
Key Design Highlights Traditional aggregation layer becomes pure FabricPath spine
Provides uniform any-to-any connectivity between leaf switches Only FabricPath bridging occurs in spine
Two or more leaf switches provide the L2/L3 boundary, inter-VLAN routing and North-South routing (border leaves)
Other (or same) leaf switches provide access to L4-7 services or have L4-7 services personality (future)
16K unique host MACs today; 128K MACs with the 6.2 release and Nexus 6K (at FCS)
DC2
Unified Fabric
[Diagram: physical SAN, HFT/HPC and NAS workloads converging onto the Cisco Unified Fabric.]
Use cases: inter- and intra-DC connectivity across L3; use all data centre capacity; backup data centre, rapid recovery; reduced data centre maintenance outages
Resulting in: scalability across multiple data centres; seamless overlay, no network redesign required; single-touch site configuration; high resiliency; maximised bandwidth
OTV at a Glance
Ethernet traffic between sites is encapsulated in IP: MAC-in-IP
IP A IP B
MAC1 MAC2
OTV
IP A
IP B
Server 1 MAC 1
Server 2 MAC 2
OTV Terminology
Edge Device (ED): connects the site to the (WAN/MAN) core; responsible for performing all the OTV functions.
Authoritative Edge Device (AED): elected ED that performs traffic forwarding for a set of VLANs.
Internal interfaces: interfaces of the ED that face the site.
Join interface: interface of the ED that faces the core.
Overlay interface: logical multi-access, multicast-capable interface; it encapsulates Layer 2 frames in IP unicast or multicast headers.
OTV Overlay Interface
L2
L3
Internal Interfaces
Join Interface
Core
OTV VDC as an appliance: single L2 internal interface and single Layer 3 join interface
[Diagram: OTV MAC learning: new MACs (MAC A, MAC B, MAC C) learned on VLAN 100 at the West site are advertised to the East and South sites, which install them in their MAC tables with the West edge device's join-interface address (IP A) as the reachability information.]
OTV Configuration
OTV over a Multicast Transport Minimal configuration required to get OTV up and running
Site 1 (join interface e1/1):
feature otv
otv site-identifier 0x1*
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

Site 2 (join interface Po16):
feature otv
otv site-identifier 0x2*
otv site-vlan 99
interface Overlay100
  otv join-interface Po16
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150

Site 3 (join interface e1/1.10):
feature otv
otv site-identifier 0x3*
otv site-vlan 99
interface Overlay100
  otv join-interface e1/1.10
  otv control-group 239.1.1.1
  otv data-group 232.192.1.0/24
  otv extend-vlan 100-150
OTV
IP B
East
User
x.x.x.x
y.y.y.y
z.z.z.z
DC 1 VLAN1 10.10.10.2
DC 2 VLAN2
DC 3 VLAN3
Features: IP address portability across subnets; auto-detection and re-routing of traffic/sessions; highly scalable technology
Benefits: seamless workload mobility between data centres and cloud; direct path (no triangulation), connections maintained during a move; no routing re-convergence, no DNS updates required; transparent to the hosts and users
Hosts
FCoE
Storage Targets
Features: industry's highest-performance director-class SAN platform; lossless Ethernet (DCB); multi-hop FCoE support spanning Nexus 7000, Nexus 5000 and MDS 9500
Benefits: wire-once flexibility over a single Ethernet fabric; reduced network sprawl (switches, cables, adapters, etc.); up to 45% access-layer CapEx savings; seamlessly integrate converged networks with existing MDS FC SANs
FCoE ON F2 MODULE
High Performance Director Class Convergence
Requires: Sup2/2E, and Fabric 2 modules for full bandwidth
F2 module: N7K-F248XP-25
[Diagram: VDC and module-type combinations: an F2 VDC, a storage VDC and an F1 or F1-M1 VDC, with F2 and non-F2 modules kept in separate VDCs.]
Notes
F1 and F2 cannot co-exist in the same VDC. Only one storage VDC per chassis.
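A sketch of creating the storage VDC for FCoE (the VDC name and ports are examples):
! Enable FCoE and create a dedicated storage VDC
Nexus7K(config)# install feature-set fcoe
Nexus7K(config)# vdc fcoe-vdc type storage
Nexus7K(config-vdc)# allocate interface Ethernet4/1-8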
Software Licensing
Features are enabled by installing individual licenses or by enabling the license grace period (120 days)
Grace period not recommended
Installation is non-disruptive to features already running under the grace period. Back up the license after it is installed. The system generates periodic Syslog, SNMP or Call Home messages.
Feature licenses and the features they cover:
Enterprise LAN: OSPF, EIGRP, BGP
Advanced LAN: CTS, VDC
Scalable Feature: M1-XL TCAM
Transport Services: OTV
Enhanced L2 Package: FabricPath
Ins   Lic Count   Status    Expiry Date   Comments
Yes   -           In use    Never
No    -           In use                  Grace 119D 22H
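A sketch of installing a license file and checking its status (the file name and server are examples):
Nexus7K# copy scp://admin@server/N7K-LAN1K9.lic bootflash:
Nexus7K# install license bootflash:N7K-LAN1K9.lic
Nexus7K# show license usage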
Software Upgrade
Synchronise the kickstart image with the system image. Utilise the cold-start upgrade procedure for non-production devices:
Nexus7K(config)# boot system bootflash:<system-image>
Nexus7K(config)# boot kickstart bootflash:<kickstart-image>
Nexus7K# copy run startup-config
Nexus7K# reload
Utilise "install all" to perform ISSU with zero service interruption. Issue "show install all impact" to determine the upgrade impact:
Nexus7K# install all kickstart bootflash:<kickstart-image> system bootflash:<system-image>
Avoid disruption to the system during the ISSU upgrade (STP topology change, module removal, power interruption, etc.)
EPLD Upgrade
EPLD upgrades are used to enhance hardware functionality or to resolve known issues
EPLD upgrade is an independent process from software upgrade and is not dependent on NX-OS
EPLD upgrades are typically not required
Performed on all field-replaceable modules
Nexus7K# sh ver <type> <#> epld
Nexus7K# sh ver mod 3 epld
EPLD Device           Version
-----------------------------
Power Manager         4.008
IO                    1.016
Forwarding Engine     1.006
FE Bridge(1)          186.006
FE Bridge(2)          186.006
Linksec Engine(1)     2.006
...deleted...
Linksec Engine(8)     2.006
When performing a supervisor EPLD upgrade on a dual-supervisor system, first upgrade the standby supervisor, then switch over and upgrade the previously active supervisor.
Make sure the EPLD image is on both supervisors' flash:
Nexus7K# install module <module> epld bootflash:<EPLD_Image_name>
In a redundant system, only the EPLD upgrade of I/O modules can disrupt traffic, since the module needs to be power-cycled.
Nexus 7000
Core1
Core2
L3
vPC
SW-1b VDC3
SW-2a VDC2
vPC
SW-2b VDC2
SW-2b VDC3
L3 L2 L2
active
active
Core1
Core2
L3
aggNa
VPC
agg1b
..
aggNb
L3 L2
Nexus 2000
vPC
VPC
VPC
vPC
Nexus 2000
Active/Standby
Active/Active
Active
Active
active
Nexus 7004
N7004
N7004
SW-1b VDC2
Core
SW-1a VDC2
L3
Nexus 7004
Aggregation
SW-1a VDC3
vPC
SW-1b VDC3
L3 L2
vPC
Nexus 5500
Access
active standby
active
L2
active
L3
Aggregation
L2/L3 boundary
FabricPath Access
FabricPath
FabricPath spine
FabricPath
FabricPath
SVI 100 SVI 100 SVI 100
SVI 100
SVI 100
SVI 100
VPC+
SVI 40
SVI 30 SVI 30
SVI 10
SVI 20
VPC+
SVI 10
SVI 20
SVI 40
SVI 50
VPC+
SVI 50
L3
INTER-VLAN ROUTED FLOWS
Rack 1 VLAN 10
Rack 2 VLAN 20
Rack 3 VLAN 30
Rack 4 VLAN 40
Rack 5 VLAN 50
Rack 6 VLAN 30
L3
Fabric Path Core
Vlans 100 - 199 Vlans 200 -299 Vlans 300 - 399 Vlans 2000 - 2099
Aggregation
L2/L3 boundary
FabricPath Access
POD 1
Vlans 100 - 199 Vlans 2000 - 2099
POD 2
Vlans 200 - 299 Vlans 2000 - 2099
POD 3
Vlans 300 - 399 Vlans 2000 - 2099
Q&A
Don't forget to activate your Cisco Live 365 account for access to all session material, communities, and on-demand and live activities throughout the year. Log into your Cisco Live portal and click the "Enter Cisco Live 365" button. www.ciscoliveaustralia.com/portal/login.ww