Best Practice Network Design For The Data Center: Mihai Dumitru, CCIE² #16616
A Few Words about Cronus eBusiness:

39 employees, 3 national offices
Focus on large enterprise customers from banking and retail, plus education
Specializing in:
System integration (consulting, project management, network equipment sale and deployment, maintenance)
Managed services (operational support, network management, server hosting and business continuity)
What We Will Cover In This Session:
Hierarchical Design Network Layers: Defining the Terms

Data Center Core
Routed layer which is distinct from the enterprise network core
Provides scalability to build multiple aggregation blocks

Aggregation Layer
Provides the boundary between layer-3 routing and layer-2 switching
Point of connectivity for service devices (firewall, SLB, etc.)

Access Layer
Provides point of connectivity for servers and shared resources
Typically layer-2 switching

[Figure: enterprise network above the data center core, layer-3 links down to the aggregation layer, and layer-2 trunks to the access layer]
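To make the aggregation-layer boundary concrete, here is a minimal IOS-style sketch (VLAN number and addressing are assumptions): the server VLAN rides a layer-2 trunk toward the access layer while its routing terminates on an SVI at the aggregation switch.

  ! Layer-2 trunk toward an access switch
  interface TenGigabitEthernet1/1
   switchport
   switchport mode trunk
   switchport trunk allowed vlan 10
  !
  ! SVI terminates layer-3 routing for the server VLAN
  interface Vlan10
   ip address 10.1.10.2 255.255.255.0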
Scaling the Topology With a Dedicated Data Center Core

A dedicated Data Center Core provides layer-3 insulation from the rest of the network
Switch port density in the DC Core is reserved for scaling additional DC Aggregation blocks or pods
Provides a single point of DC route summarization (see the sketch below)

[Figure: enterprise network attached to a dedicated data center core feeding multiple aggregation blocks/pods]
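As a hedged sketch of that summarization point: if the data center sits in its own OSPF area behind the DC core, the core ABRs can advertise a single summary toward the enterprise network (area number and prefix are illustrative).

  ! DC core acting as OSPF ABR between area 0 and DC area 10
  router ospf 1
   ! Advertise one summary for all DC subnets instead of individual routes
   area 10 range 10.1.0.0 255.255.0.0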
Mapping Network Topology to the Physical Design

Design the Data Center topology in a consistent, modular fashion for ease of scalability, support, and troubleshooting
Use a pod definition to map an aggregation block or other bounded unit of the network topology to a single pod
The server access connectivity model can dictate port count requirements in the aggregation layer and affect the entire design

[Figure: network rack housing the aggregation switches and server racks housing the access switches]
Traditional Data Center Server Access Models

End-of-Row (EoR)
High-density chassis switch at the end or middle of a row of racks; fewer overall switches
Provides port scalability and local switching; may create cable management challenges

Top-of-Rack (ToR)
Small fixed or modular switch at the top of each rack; more devices to manage
Significantly reduces bulk of cable by keeping connections local to the rack or an adjacent rack

Integrated Switching
Switches integrated directly into the blade server chassis enclosure
Maintaining feature consistency is critical to network management; sometimes pass-through modules are used instead
Impact of New Features and Products on Hierarchical Design for Data Center Networks
Building the Access Layer Using Virtualized Switching

Virtual Access Layer
Still a single logical tier of layer-2 switching
Common control plane with virtual hardware- and software-based I/O modules

[Figure: data center core with layer-3 links to the aggregation layer and layer-2 trunks into the virtual access layer]
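One common realization of this model pairs a Nexus 5000 parent switch with Nexus 2000 fabric extenders acting as remote I/O modules; a minimal NX-OS sketch, with the FEX number and ports as assumptions:

  ! On the parent switch: enable fabric extender support
  feature fex
  !
  ! Uplink to the fabric extender, which becomes remote module 100
  interface Ethernet1/10
   switchport mode fex-fabric
   fex associate 100
  !
  ! Host ports on the FEX then appear as local interfaces
  interface Ethernet100/1/1
   switchport access vlan 10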
Migration to a Unified Fabric at the Access Supporting Data and Storage

Nexus 5000 Series switches support integration of both IP data and Fibre Channel over Ethernet at the network edge
FCoE traffic may be broken out on native Fibre Channel interfaces from the Nexus 5000 to connect to the Storage Area Network (SAN)
Servers require Converged Network Adapters (CNAs) to consolidate this communication over one interface, saving on cabling and power

[Figure: server access layer carrying Ethernet plus FCoE from CNA-equipped servers, splitting into Ethernet toward the LAN and native Fibre Channel toward the SAN]
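A hedged NX-OS sketch of the edge configuration on a Nexus 5000, mapping an FCoE VLAN to a VSAN and binding a virtual Fibre Channel interface to the server-facing port (all numbers are illustrative):

  feature fcoe
  !
  ! Dedicated VLAN carries the FCoE traffic for VSAN 100
  vlan 100
   fcoe vsan 100
  !
  ! Server-facing trunk carries data VLANs plus the FCoE VLAN
  interface Ethernet1/5
   switchport mode trunk
   switchport trunk allowed vlan 10,100
  !
  ! Virtual Fibre Channel interface bound to the CNA-facing port
  interface vfc5
   bind interface Ethernet1/5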
Cisco Unified Computing System (UCS)

A cohesive system including a virtualized layer-2 access layer supporting unified fabric, with central management and provisioning
Optimized for greater flexibility and ease of rapid server deployment in a server virtualization environment
From a topology perspective, similar to the Nexus 5000 and 2000 series

[Figure: UCS fabric uplinked to both the LAN and the SAN]
Nexus 7000 Series Virtual Device Contexts (VDCs)

Virtualization of the Nexus 7000 Series chassis
Up to 4 separate virtual switches from a single physical chassis with common supervisor module(s)
Separate control plane instances and management/CLI for each virtual switch
Interfaces belong to only one of the active VDCs in the chassis; external connectivity is required to pass traffic between VDCs of the same switch
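A minimal NX-OS sketch of carving out a VDC and assigning interfaces to it (the VDC name and port range are assumptions):

  ! From the default VDC: create a new virtual switch
  vdc aggregation1
   allocate interface Ethernet1/1-8
  !
  ! Attach to the new VDC's own CLI and configuration
  switchto vdc aggregation1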
Virtual Device Context Example: Services VDC Sandwich

Multiple VDCs used to sandwich services between switching layers
Allows services to remain transparent (layer-2) with routing provided by the VDCs
Aggregation blocks communicate only through the core layer

[Figure: enterprise network and core above VDC pairs sandwiching the service devices]
Data Center Service Insertion
Data Center Service Insertion: Direct Services Appliances

Appliances directly connected to the aggregation switches
Service device type and Routed or Transparent mode can affect physical cabling and traffic flows

Transparent-mode ASA example:
Each ASA is dependent on one aggregation switch
Separate links for fault tolerance and state traffic run either through the aggregation layer or directly between the appliances
Dual-homing with the interface redundancy feature is an option (see the sketch below)
Currently no EtherChannel supported on the ASA

[Figure: data center core and aggregation layers with a services tier of ASA appliances above the access layer]
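A hedged ASA sketch of the interface redundancy option noted above (interface names are assumptions; the first member interface to come up becomes the active link):

  ! Pair two physical interfaces into one logical redundant interface
  interface Redundant1
   member-interface GigabitEthernet0/0
   member-interface GigabitEthernet0/1
   nameif inside
   security-level 100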
Data Center Service Insertion: External Services Chassis

Dual-homed Catalyst 6500 services chassis
Services do not depend on a single aggregation switch

[Figure: services chassis dual-homed to both aggregation switches below the data center core]
Using Virtualization and Service Insertion to Build Logical Topologies

Logical topology example: a VDC served by a single set of VLANs through the services modules

[Figure: enterprise network with VLAN 180 carried through the services modules into a VDC]
Using Virtualization and Service Insertion to Build Logical Topologies

Logical topology to support a multi-tier application traffic flow

[Figure: client-server flow entering from the enterprise network through the data center core]
Layer 3 Features and Best Practices
Layer-3 Feature Configuration in the Data Center

Summarize IP routes at the DC Aggregation or Core to advertise fewer destinations to the enterprise core
Avoid IGP peering of aggregation switches through the access layer by setting VLAN interfaces as passive
Use routing protocol authentication to help prevent unintended peering
If using OSPF, set a consistent auto-cost reference bandwidth across the routing domain (a combined sketch follows)
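A hedged IOS-style sketch combining these practices on an aggregation switch (process ID, area, VLANs, and key values are assumptions):

  router ospf 1
   ! Consistent reference bandwidth (in Mb/s) so 10G links cost correctly;
   ! must match on every router in the domain
   auto-cost reference-bandwidth 10000
   ! Do not form adjacencies over server-facing VLAN interfaces
   passive-interface Vlan10
   ! Require MD5 authentication within the DC area
   area 10 authentication message-digest
  !
  interface TenGigabitEthernet1/1
   ! Per-link key on the routed uplink toward the DC core
   ip ospf message-digest-key 1 md5 S3cr3tKey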
IGP Hello and Dead/Hold Timers: Behavior Over a Shared Layer 2 Domain

Routing protocols insert destinations into the routing table and maintain peer state based on receipt of continuous Hello packets
When peers meet over a shared layer-2 domain, a failed neighbor is detected only after the Dead/Hold timer expires, since the local interface stays up

[Figure: Node 1 and Node 2 maintaining an adjacency for Network A across a shared layer-2 domain]
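Where adjacencies do cross a shared layer-2 segment, detection speed is governed by these timers, which can be tuned down; a hedged IOS-style sketch (values are illustrative and must match on both peers):

  interface Vlan10
   ! OSPF broadcast-network defaults are 10 s hello / 40 s dead
   ip ospf hello-interval 1
   ip ospf dead-interval 4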
IGP Hello and Dead/Hold Timers: Behavior Over Layer-3 Links

Upon device or link failure, the routing protocol immediately removes routes from the failed peer based on interface down state
Tuning the IGP Hello and Dead/Hold timers lower is not required for convergence due to link or device failure
Transparent-mode services or using static routing with HSRP can help ensure all failover cases are based on point-to-point links (see the sketch below)
Note that static routing with HSRP is not a supported approach for IP multicast traffic

[Figure: Nodes 1-4 connected by point-to-point routed links; when Node 1's one direct link to Node 3 goes down, it immediately removes Node 3's routes from the table]
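A hedged sketch of the static-routing-with-HSRP option (addresses and group numbers are assumptions): the aggregation pair presents one virtual gateway on the services VLAN, and static routes point at the service device, so no failover case depends on an IGP adjacency over shared layer-2.

  ! On each aggregation switch: shared virtual IP for the services VLAN
  interface Vlan100
   ip address 10.1.100.2 255.255.255.0
   standby 1 ip 10.1.100.1
   standby 1 priority 110
   standby 1 preempt
  !
  ! Static route for the server subnet via the appliance's inside address
  ip route 10.1.10.0 255.255.255.0 10.1.100.10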
Layer 2 Features, Enhancements and Best Practices
Classic Spanning Tree Topology: Looped Triangle Access

Layer-2 protocols are designed to be plug-and-play and forward traffic without configuration
Stability is enhanced by controlling root placement and port behavior with the configuration features on the following slides

[Figure: looped-triangle access topology with normal, network, and root-guarded ports at the aggregation layer]
Spanning Tree Configuration Features: Rootguard, Loopguard, PortFast, BPDUguard

These features allow STP to behave with more intelligence, but require manual configuration:
Rootguard prevents a port from accepting a better path to root where this information should not be received
Loopguard prevents a port from transitioning to a designated forwarding role in the absence of received BPDUs
PortFast (Edge Port) allows STP to skip the listening and learning stages on ports connected to end hosts
BPDUguard shuts down a port that receives a BPDU where none should be found; typically also used on ports facing end hosts
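A hedged IOS-style sketch of where these features typically land (interface numbers are illustrative):

  ! Aggregation downlink: never accept a superior BPDU from below
  interface GigabitEthernet1/1
   spanning-tree guard root
  !
  ! Access uplink: do not transition to designated-forwarding without BPDUs
  interface GigabitEthernet1/2
   spanning-tree guard loop
  !
  ! Server-facing edge port: skip listening/learning, err-disable on any BPDU
  interface GigabitEthernet1/10
   spanning-tree portfast
   spanning-tree bpduguard enable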
Updated STP Features: Bridge Assurance

Specifies transmission of BPDUs on all ports of type network (see the sketch below)
Protects against unidirectional links

[Figure: aggregation pair exchanging BPDUs in both directions on network ports]
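A brief NX-OS sketch; Bridge Assurance runs only on ports set to type network and must be enabled on both ends of a link:

  ! Default all switch-to-switch ports to type network (global option)
  spanning-tree port type network default
  !
  ! Or enable per inter-switch link
  interface Ethernet1/1
   spanning-tree port type network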
STP Configuration Feature Placement in the Data Center

Bridge Assurance replaces the requirement for Loopguard on supported switches

[Figure: layer-3 links from the data center core down to an HSRP active/standby aggregation pair holding the primary and backup STP roots; inter-switch links run STP with Bridge Assurance on network ports, Rootguard on aggregation downlinks, Loopguard on access uplinks lacking Bridge Assurance support, and BPDUguard on edge ports facing hosts. Legend: N = network port, E = edge port, - = normal port type, B = BPDUguard, R = Rootguard, L = Loopguard]
Redundant Paths Without STP Blocking: Basic EtherChannel

Bundles several physical links into a logical one
No blocked ports (redundancy is not handled by STP)
Per-frame (not per-VLAN) load balancing

[Figure: without bundling, STP leaves switch B's alternate port blocked toward root A; the channel looks like a single link to STP, so all member links forward]
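A minimal IOS-style LACP sketch (channel number and ports are assumptions; the partner switch needs a matching bundle):

  interface range GigabitEthernet1/1 - 2
   ! Negotiate the bundle with LACP
   channel-group 10 mode active
  !
  interface Port-channel10
   switchport mode trunk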
Designs Not Relying on STP: Virtual Switching System (VSS)

Merges two bridges into one, allowing Multi-Chassis EtherChannels
Also merges layer-3 and overall switch management
Does not rely on STP for redundancy
Limited to a pair of switches

[Figure: the aggregation pair appears as one logical VSS switch, so access uplinks form Multi-Chassis EtherChannels with no blocked ports; BPDUguard-protected edge ports still face the hosts. Legend: N = network port, E = edge port, - = normal port type, B = BPDUguard, R = Rootguard, L = Loopguard]
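A hedged Catalyst 6500 sketch of forming a VSS (domain number and VSL ports are assumptions; the final conversion step reloads the chassis):

  ! On each chassis: same virtual domain, unique switch number
  switch virtual domain 100
   switch 1
  !
  ! Dedicated Virtual Switch Link (VSL) between the two chassis
  interface Port-channel1
   switch virtual link 1
  !
  interface TenGigabitEthernet5/4
   channel-group 1 mode on
  !
  ! Convert to virtual mode (triggers a reload)
  switch convert mode virtual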
Designs Not Relying on STP: Virtual Port Channel (vPC)

Appears as a single EtherChannel to the access layer
Two independent control planes
Active/active HSRP, separate layer-3 and management
Still no STP blocked ports

[Figure: the aggregation switches form a vPC domain with HSRP active/standby; access uplinks terminate in vPCs with no blocked ports, and BPDUguard-protected edge ports face the hosts. Legend: N = network port, E = edge port, - = normal port type, B = BPDUguard, R = Rootguard, L = Loopguard]
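A minimal NX-OS vPC sketch on one aggregation switch (domain, addresses, and port numbers are assumptions; the peer mirrors this with the same domain and vPC numbers):

  feature vpc
  !
  vpc domain 1
   ! Keepalive runs out-of-band, e.g. between mgmt0 interfaces
   peer-keepalive destination 192.168.1.2 source 192.168.1.1
  !
  ! Peer-link between the two aggregation switches
  interface Port-channel1
   switchport mode trunk
   vpc peer-link
  !
  ! Member port-channel toward an access switch
  interface Port-channel10
   switchport mode trunk
   vpc 10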
Summary