Active-Active Data Centre Strategies

BRKDCT-2615

www.ciscolivevirtual.com
Housekeeping
We value your feedback: don't forget to complete your online
session evaluations after each session, and complete the Overall
Conference Evaluation
Visit the World of Solutions
Please remember this is a 'non-smoking' venue.
Please switch off your mobile phones
Please make use of the recycling bins provided
Please remember to wear your badge at all times

Agenda
Active-Active Data Centre: Business Drivers and Solutions Overview
Host Mobility using LISP
Active / Active Data Centre Design Considerations
Storage Extension
Data Centre Interconnect (LAN Extension Deployment Scenarios)
Ethernet Based
MPLS Based
IP Based
Network Services and Applications (Path optimisation)

Summary and Conclusions


Q&A
Data Centre Evolution
Cloud Network Fabric

Business drivers: increased dependence on the network
East-west traffic in the Data Centre
Virtualisation drives 10G at the edge
Clustered applications
Multi-tenancy
Business continuity
Workload mobility and non-disruptive migration

Network evolution
Automated, policy-driven provisioning and management
Low cost, standard protocols, open architectures
High-density 10G at the edge; 40G & 100G in core/aggregation; Unified Fabric
Non-blocking FabricPath/TRILL fabrics with predictable, lower latency
Secure segmentation for multi-tenancy
DCI: L2 or L3 connectivity, storage extensions, large L2 domains
Distributed Data Centres Trends
Building the Data Centre Cloud
Distributed Data Centre Goals
Seamless workload mobility
Distributed applications
Pool and maximise global compute resources
Business continuity

Interconnect Challenges
Complex operations
Transport dependent
Bandwidth management
Failure containment

[Diagram: geographically dispersed Data Centres]
Data Centre Interconnect (DCI)
Business Drivers
Data Centres are extending beyond traditional boundaries
Virtualisation applications are driving DCI across PODs
(aggregation blocks) and Data Centres

Business Driver | Business Solution | Constraints | IT Technology
Business Continuity | Disaster Recovery, HA Framework | Stateless, Network Service Sync, Process Sync | GSLB, Geo-clusters, HA Cluster
Operation Cost Containment | Data Centre Maintenance / Migration / Consolidation | Host Mobility | Distributed Virtual Data Centre
Business Resource Optimisation | Disaster Avoidance, Workload Mobility | Stateful, Bandwidth & Latency | VLAN Extension, VM Mobility
Cloud Services | Inter-Cloud Networking, XaaS | Flexibility, Automation | VM Mobility, Application Mobility
Driver Business Continuance
High Availability Clusters - Local Enterprise
[Diagram: local HA cluster in the Enterprise DC: cluster VIP on the extended LAN, heartbeat 1 and heartbeat 2 on a private network, Active and Standby nodes, extended SAN with fabrics SAN A and SAN B]

Cluster applications such as:
Microsoft MSCS
VMware Cluster (local)
Solaris Sun Cluster
Oracle RAC
IBM HACMP
Typically Active/Standby cluster failover; a failure transfers storage ownership
Inter-server heartbeats, status & control are synchronised through the private network, and the cluster VIP through the public network
Requires a Layer 2 path between hosts
Client reconnection is transparent (shared IP address), so Layer 2 must be extended
Geo Clusters for Disaster Recovery
Multi-site HA Clusters

[Diagram: Cluster A Node 1 and Node 2 at two sites; public LAN carrying the cluster VIP, private LAN carrying the heartbeat]

Enhances HA clusters to protect against catastrophic site-level failures
Clustering applications typically require stretched L2 VLANs between peer DC sites
Some applications support clustering using L3 for inter-site routing
Cisco-VMware Disaster Prevention/Avoidance
With EMC & NetApp: validated design & certification for long-distance VMotion
Core Network
DCI LAN extension

DC 1 DC 2
ESX-A source ESX-B target

Long-distance VMotion across a stretched VLAN for disaster prevention
Disaster recovery applications, e.g. VMware Site Recovery Manager (SRM)
Uses either a stretched VLAN or Layer 3 for inter-site communications
Interdependency of Network HA & Application HA
A Journey to Alleviate DC-to-Network Constraints

[Timeline 1997-2015: from DC-to-network coupling (server infrastructure, cold HA clusters) to DC-to-network independence (hot HA, systems virtualisation, x86 cloud)]
Data Centre Interconnect (DCI)
Solution Components

[Diagram: data centres interconnected over an MPLS / IP core]

DCI Function | Purpose
Storage Extensions | Provide applications access to storage locally, as well as remotely, with desirable storage attributes
LAN Extensions | Extend the same VLAN across Data Centres, to virtualise servers and applications
Inter-DC Routing | Provide routed connectivity between data centres (used for L3 segmentation/virtualisation, etc.)
Path Optimisation | Route users to the data centre where the application resides, while keeping routing symmetrical for IP services (e.g. firewalls)
Data Centre Interconnect
SAN Extension

Synchronous replication implies a strict distance limitation
Localisation of active storage is key
Distance can be improved using an I/O accelerator or caching
Virtual LUNs allow Active/Active

[Diagram: DC 1 (ESX-A source) to DC 2 (ESX-B target) SAN extension]

Data Centre Interconnect
LAN Extension
STP isolation is the key element
Multipoint
Loop avoidance + storm control
Unknown unicast & broadcast control
Link sturdiness
Scale & convergence

[Diagram: DC 1 (ESX-A source) to DC 2 (ESX-B target) LAN extension]

Data Centre Interconnect
Path Optimisation
Options
Egress: addressed by FHRP filtering
Ingress:
1. DNS redirection with ACE/GSS
2. Route Injection
3. LISP

[Diagram: DC 1 (ESX-A source) and DC 2 (ESX-B target)]
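Egress localisation with FHRP filtering is typically implemented as a VLAN ACL that drops FHRP hellos before they cross the DCI link, so each site keeps an active local default gateway with the same virtual IP/MAC. A minimal NX-OS sketch, assuming HSRPv1/v2 hellos and extended VLANs 100-150 (ACL names and the VLAN range are illustrative):

ip access-list HSRP_HELLOS
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
ip access-list ALL_IPS
  10 permit ip any any
vlan access-map FHRP_ISOLATION 10
  match ip address HSRP_HELLOS
  action drop
vlan access-map FHRP_ISOLATION 20
  match ip address ALL_IPS
  action forward
vlan filter FHRP_ISOLATION vlan-list 100-150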

Agenda
Active-Active Data Centre: Business Drivers and Solutions Overview
Host Mobility using LISP
Active / Active Data Centre Design Considerations
Storage Extension
Data Centre Interconnect (LAN Extension Deployment Scenarios)
Ethernet Based
MPLS Based
IP Based
Network Services and Applications (Path optimisation)

Summary and Conclusions


Q&A
Cloud services mobility
Without LAN extension
[Diagram: a LISP site (xTR) and the mapping DB reach the West-DC and East-DC LISP-VM (xTR) edge devices over IP; the DR location or cloud provider is another LISP site]

IP mobility across subnets
Disaster Recovery
Cloud Bursting
LISP Mapping Database
The Basics Registration and Resolution
[Diagram: LISP site with an iTR; Map Server / Resolver at 1.1.1.1; eTRs A and B at West-DC (10.1.0.0/16), eTRs C and D at East-DC (10.2.0.0/16); hosts X, Y, Z; host 10.1.0.2 in West-DC]

Database mapping entry (on the West eTRs): 10.1.0.0/16 -> (A, B)
Database mapping entry (on the East eTRs): 10.2.0.0/16 -> (C, D)
The iTR resolves via the Map Server/Resolver, receives Map-Reply 10.1.0.0/16 -> (A, B), and caches it
Mapping cache entry (on the iTR): 10.1.0.0/16 -> (A, B)
Host-Mobility and Multi-homing
eTR updates across LISP sites
Null0 host routes indicate the host is away

1. Host 10.1.0.2 moves from West-DC to East-DC
2-4. The receiving eTR detects the move and installs 10.1.0.2/32 as a local route (East routing table: 10.2.0.0/16 local, 10.1.0.2/32 local)
5. The eTR sends Map-Register 10.1.0.2/32 <C,D> to the mapping DB (1.1.1.1, 2.2.2.2)
6. The mapping DB now holds 10.1.0.0/16 -> RLOC A, B* plus the more specific 10.1.0.2/32 -> RLOC C, D
7-8. Map-Notify 10.1.0.2/32 <C,D> reaches the original eTRs, which install 10.1.0.2/32 -> Null0 (West routing table: 10.1.0.0/16 local, 10.1.0.2/32 Null0)
9-10. Map-Notify 10.1.0.2/32 <C,D> is also propagated to the other edge devices at each site
Refreshing the Map Caches

Map cache @ iTR: 10.1.0.0/16 -> RLOC A, B; then 10.1.0.2/32 -> RLOC C, D

1. iTRs and PiTRs with cached mappings continue to send traffic to the old locators
2. The old xTR knows the host has moved (Null0 route)
3. The old xTR sends Solicit Map Request (SMR) messages to any encapsulators sending traffic to the moved host
4. The iTR then initiates a new map-request process
5. An updated Map-Reply is issued from the new location
6. The iTR map cache is updated; traffic is now re-directed

SMRs are an important integrity measure to avoid unsolicited map responses and spoofing

[Diagram: LISP site iTR, mapping DB, LISP-VM (xTR) devices A/B at West-DC (10.1.0.0/16) and C/D at East-DC (10.2.0.0/16); hosts X, Y, Z; moved host 10.1.0.2]
LISP Host-Mobility - First Hop Routing
Without Extended Subnets
SVI (interface VLAN x) and HSRP configured as usual
Consistent GWY-MAC configured across all dynamic subnets
The "lisp mobility <dyn-eid-map>" command enables proxy-ARP functionality on the SVI
The LISP-VM router services first-hop routing requests for both local and roaming subnets
Hosts can move anywhere and always talk to a local gateway with the same MAC
Totally transparent to the moving hosts

West-DC:
interface vlan 100
 ip address 10.1.0.5/24
 lisp mobility roamer
 ip proxy-arp
 hsrp 101
  mac-address 0000.0e1d.010c
  ip 10.1.0.1

East-DC:
interface vlan 200
 ip address 192.2.0.7/24
 lisp mobility roamer
 ip proxy-arp
 hsrp 201
  mac-address 0000.0e1d.010c
  ip 192.2.0.1

[Diagram: LISP-VM (xTR) A/B at West-DC (10.1.0.0/24) and C/D at East-DC (192.2.0.0/24); HSRP active at each site, with the same GWY-MAC answering ARP locally]
LISP Host-Mobility Configuration
No LAN Extensions (Across LISP sites) For Your Reference

West-DC xTR:
ip lisp itr-etr
ip lisp database-mapping 10.1.0.0/16 <RLOC-A>
ip lisp database-mapping 10.1.0.0/16 <RLOC-B>
lisp dynamic-eid roamer
 database-mapping 10.1.0.0/24 <RLOC-A>
 database-mapping 10.1.0.0/24 <RLOC-B>
 map-server 1.1.1.1 key abcd
 map-notify-group 239.1.1.1
interface vlan 100
 ip address 10.1.0.10/16
 lisp mobility roamer
 ip proxy-arp
 hsrp 101
  mac-address 0000.0e1d.010c
  ip 10.1.0.1

East-DC xTR:
ip lisp itr-etr
ip lisp database-mapping 10.2.0.0/16 <RLOC-C>
ip lisp database-mapping 10.2.0.0/16 <RLOC-D>
lisp dynamic-eid roamer
 database-mapping 10.1.0.0/24 <RLOC-C>
 database-mapping 10.1.0.0/24 <RLOC-D>
 map-server 1.1.1.1 key abcd
 map-notify-group 239.2.2.2
interface vlan 200
 ip address 10.2.0.11/16
 lisp mobility roamer
 ip proxy-arp
 hsrp 201
  mac-address 0000.0e1d.010c
  ip 10.2.0.1

[Diagram: mapping DB; xTRs A/B at West-DC (10.1.0.0/16) and C/D at East-DC (10.2.0.0/16); hosts X, Y, Z]
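To confirm that a roamer has been detected and registered with the configuration above, these NX-OS show commands are commonly used (a sketch; command availability and output format vary by release):

show lisp dynamic-eid summary   ! roamers detected on the "lisp mobility" interfaces
show ip lisp database           ! locally registered EID-to-RLOC mappings
show ip lisp map-cache          ! mappings this xTR has resolved as an ITR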
Agenda
Active-Active Data Centre: Business Drivers and Solutions Overview
Host Mobility using LISP
Active / Active Data Centre Design Considerations
Storage Extension
Data Centre Interconnect (LAN Extension Deployment Scenarios)
Ethernet Based
MPLS Based
IP Based
Network Services and Applications (Path optimisation)

Summary and Conclusions


Q&A
Data Centre Interconnect
SAN Extension

Synchronous replication implies a strict distance limitation
Localisation of active storage is key
Distance can be improved using an I/O accelerator or caching
Virtual LUNs allow Active/Active

[Diagram: DC 1 (ESX-A source) to DC 2 (ESX-B target) SAN extension]

SAN Extension
Synchronous vs. Asynchronous Data Replication
Synchronous Data replication: The Application receives the acknowledgement for I/O complete when both
primary and remote disks are updated. This is also known as Zero data loss data replication method (or Zero
RPO)
Metro distances (depending on the application, typically 50-300 km max)

Asynchronous Data replication: The Application receives the acknowledgement for I/O complete as soon as
the primary disk is updated while the copy continues to the remote disk.
Unlimited distances

[Diagram: synchronous data replication: (1) write to primary, (2) replicate to remote, (3) remote acknowledgement, (4) I/O complete to the application; asynchronous data replication: (1) write to primary, (2) I/O complete to the application, (3) copy to remote continues in the background]
Synchronous Data Replication
Network Latency
The speed of light is about 300,000 km/s
In fibre it is reduced to about 200,000 km/s, i.e. 5 µs per km (8 µs per mile)
That gives an average of 1 ms for light to cross 200 km of fibre

Synchronous replication: an FC (SCSI) write takes two round trips (four one-way trips)
Each round trip costs about 10 µs per km, so about 20 µs per km for a synchronous write

Example over 50 km (250 µs per one-way trip), 1 ms total:
1. 250 µs: Rec_Ready?
2. 250 µs: wait for response
3. 250 µs: send data
4. 250 µs: wait for Ack

[Diagram: local storage array to remote storage array, 50 km apart]
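As a rule-of-thumb sketch derived from the figures above (assuming 5 µs/km in fibre and two FC round trips, i.e. four one-way trips, per write):

write latency ≈ distance (km) × 5 µs × 4
e.g. 100 km: 100 × 5 µs × 4 = 2 ms added per write, before any device overhead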
Assisted Disk Failover
Ex.: Geocluster Deployment

Disk failover to the active node location:
EMC SRDF/Cluster Enabler (CE)
EMC Legato AutoStart (AAM)
HP Continental Clusters
IBM Geographically Dispersed Parallel Sysplex (GDPS)

1. Failover the application (i.e. MSCS moves a group)
2. Failover the disk (i.e. symrdf -g disk_for_ha_cluster failover)

[Diagram: public and private L2 networks over an L3 core; the disk pair flips from write-disabled (WD) to read-write (RW) at the surviving site]
Storage Deployment in DCI
Option 1 - Shared Storage

Core Network
L2 extension for VMotion Network

DC 1 DC 2

Initiator
ESX-A source ESX-B target
Virtual Centre

Volumes

Target
Storage Deployment in DCI
Shared Storage Improvement Using Cisco IOA

Core Network
L2 extension for VMotion Network

DC 1 DC 2

ESX-A source ESX-B target


Virtual Centre

Improve latency using the Cisco I/O Accelerator (Write Acceleration) feature on the MDS fabric
Helps meet synchronous replication latency requirements
http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns836/white_paper_c11-557822.pdf
Storage Deployment in DCI
Option 2 NetApp FlexCache (Active/Cache)

Core Network
L2 extension for VMotion Network

DC 1 DC 2
[Diagram: reads are served from cached data at DC 2; writes are forwarded to the origin NAS at DC 1 (1-2), acknowledged by the origin (3), and only then acknowledged to the host (4)]

FlexCache does NOT act as a write-back cache
FlexCache responds to the host only if/when the original subsystem has acknowledged the write
No imperative need to protect a FlexCache from a power failure
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-591960.pdf
Storage Deployment in DCI
Option 3 EMC VPLEX Metro (Active/Active)

Hosts at both sites instantly access the Distributed Virtual Volume
Synchronisation starts at Distributed Volume creation
Synchronous latency requirements apply
WRITEs are protected on storage at both Site A and Site B
READs are serviced from the VPLEX cache or local storage

[Diagram: DC A and DC B sharing a Distributed Virtual Volume over Fibre Channel]
Storage Deployment in DCI
Option 3 EMC VPLEX Metro (Active/Active)

Core Network
L2 extension for VMotion Network

DC 1 DC 2

Initiator
ESX-A source ESX-B target
Virtual Centre
From the host: the VPLEX virtual layer is the target
From the storage: VPLEX is the initiator
A virtual LUN (LUNv) is presented at each site from arrays (e.g. EMC VMAX, CLARiiON) behind the VPLEX Engines
Synchronous latency requirements: ~100 km max
http://media.vceportal.com/documents/WhitePaper_Application_Mobility.pdf
Agenda
Active-Active Data Centre: Business Drivers and Solutions Overview
Host Mobility using LISP
Active / Active Data Centre Design Considerations
Storage Extension
Data Centre Interconnect (LAN Extension Deployment Scenarios)
Ethernet Based
MPLS Based
IP Based
Network Services and Applications (Path optimisation)

Summary and Conclusions


Q&A
Cloud services mobility
with LAN extension
Non-LISP site LISP site

xTR

Mapping DB

IP/MPLS/Ethernet
LAN Extension

LISP-VM (xTR)

West-DC East-DC

Routing for extended subnets


Ex.: Active-Active Data Centres & Distributed Clusters

VLAN Extension with DCI
VLAN Types
Type T0: limited to a single access layer
Type T1: extended inside an aggregation block (POD)
Type T2: extended between PODs that are part of the same DC site
Type T3: extended between twin DC sites connected via dedicated dark fibre links
Type T4: extended between twin DC sites using a non-5*9s connection
Type T5: extended between remote DC sites

Data Centre InterconnectLAN Extension
Technology Selection Criteria
Ethernet: over dark fibre or protected DWDM
 VSS & vPC: dual-site interconnection
 FabricPath (TRILL)

MPLS transport
 EoMPLS: transparent point-to-point
 A-VPLS: Enterprise-style MPLS
 H-VPLS: large scale & multi-tenant

IP transport
 OTV: Enterprise-style inter-site MAC routing
 VXLAN: intra-site MAC bridging in a totally virtualised environment
DCI key selection criteria For Your Reference

Transport
 Fibre: LOS report / protected DWDM
 L2 SP offer (HA = 99.7%+)
 IP

Scale
 # of sites
 VLANs (10^2, 10^3, or 10^4)
 MACs (10^3, 10^4, or 10^5)

Multi-tenancy
 Tagging (VLAN / QinQ / VRF)
 Overlapping / translation

Multipoint or point-to-point
Greenfield vs. brownfield
Dual Sites Interconnection
Leveraging Etherchannel between Sites
On the DCI EtherChannel:
 STP isolation (BPDU filtering)
 Broadcast storm control
 FHRP isolation

interface port-channel10
 description DCI point-to-point connection
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 100-600
 spanning-tree port type edge trunk
 spanning-tree bpdufilter enable
 storm-control broadcast level 1
 storm-control multicast level x
 vpc 10

Link utilisation with Multi-Chassis EtherChannel
DCI port-channel: 2 or 4 links; requires protected DWDM or direct fibres
vPC does not support L3 peering: use dedicated L3 links for inter-DC routing!
Validated design: 200 Layer 2 VLANs + 100 VLAN SVIs; 1000 VLANs + 1000 SVIs (static routing)

[Diagram: two sites, each with its primary STP root, interconnected by an L2 Multi-Chassis EtherChannel over the WAN plus dedicated L3 links; server cabinet pairs 1..N at each site]
FabricPath
Basic Data Plane Operation
[Diagram: host MAC A sends a classical Ethernet frame (DMAC B, SMAC A, payload) on a CE interface to ingress FabricPath switch S10; S10 imposes a FabricPath header (destination switch ID 20, source switch ID 10) and the frame is routed across the FabricPath core to egress switch S20, which strips the header and delivers the frame to MAC B; STP stays at the CE edge]
Ingress FabricPath switch determines destination Switch ID and imposes FabricPath header
Destination Switch ID used to make routing decisions through FabricPath core
No MAC learning or lookups required inside core
Egress FabricPath switch removes FabricPath header and forwards to CE
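The forwarding behaviour above needs only minimal configuration. A minimal NX-OS sketch (the switch-ID, VLAN, and interface values are illustrative):

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 10
!
vlan 100
  mode fabricpath
!
interface Ethernet1/1
  switchport mode fabricpath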
FabricPath
Building the Routing Table
[Diagram: spine switches S10, S20, S30, S40 and edge switches S100, S101, S200 interconnected by links L1-L12. Each switch computes a switch-ID routing table, e.g. on S100: S10 via L1, S20 via L2, S30 via L3, S40 via L4, with S101 and S200 reachable over the four equal-cost paths L1-L4; on S10: S100 via L1, S101 via L5, S200 via L9, and the other spines via L1, L5, L9]
FabricPath
Conversational MAC Learning
FabricPath MAC table on S100: A -> e1/1 (local); B -> S200 (remote)
FabricPath MAC table on S200: A -> S100 (remote); B -> e12/1 (local); C -> S300 (remote)
FabricPath MAC table on S300: B -> S200 (remote); C -> e7/10 (local)

[Diagram: hosts MAC A on S100, MAC B on S200, MAC C on S300 across the FabricPath core; each edge switch learns only the remote MACs involved in its active conversations]

FabricPath for Interconnecting Multiple Sites
Partial-Meshed Topology for different models of DC
Conversational MAC learning
vPC+ offers a full HA DCI solution with native STP isolation
vPC+ provides easy integration with brownfield DCs
FabricPath (pre-TRILL) F1/F2 end to end for an optimal design
Requires point-to-point connections
Relies on flooding for unknown unicast traffic
No broadcast suppression currently
L2 multipath only for equal-cost paths
STP can be leveraged at the Classical Ethernet edge, optimised using vPC+ (i.e. sites A-B or C-D)

[Diagram: sites A, B, C, D partially meshed through the FabricPath core; Classical Ethernet, VSS, and vPC+ at the cloud edge]
EoMPLS
port mode xconnect
In Cisco's EoMPLS implementation there is no need for RSVP or TE (the LSP can be LDP-only or TE)

[Diagram: PE1 and PE2 connected across an LDP/RSVP LSP]

interface g1/1
 description EoMPLS port mode connection interface
 no switchport
 no ip address
 xconnect 2.2.2.2 vcid 1 encapsulation mpls

Frame format: DA | SA | 0x8847 | LSP Label | VC Label | Ethernet Header | Ethernet Payload | FCS
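A quick way to confirm the pseudowire came up, assuming the xconnect above (a sketch; output abbreviated, exact columns vary by release):

PE1# show mpls l2transport vc
Local intf   Local circuit   Dest address   VC ID   Status
Gi1/1        Ethernet        2.2.2.2        1       UP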
EoMPLS Usage with DCI
End-to-End Loop Avoidance using Edge to Edge LACP
On DCI Etherchannel:
STP Isolation (BPDU Filtering)
Broadcast Storm Control
FHRP Isolation
On the DCI EtherChannel:
 STP isolation (BPDU filtering)
 Broadcast storm control
 FHRP isolation

[Diagram: aggregation layers in DC1 and DC2 joined across the MPLS core; one active pseudowire per edge pair]

Encryption services with 802.1AE require a full-meshed vPC (4 PWs)
EoMPLS Usage with DCI
Over IP core

Active PW

IP Core

DCI Active PW DCI


Aggregation Aggregation
Layer DC1 Layer DC2
crypto ipsec profile MyProfile
set transform-set MyTransSet

interface Tunnel100
ip address 100.11.11.11 255.255.255.0
ip mtu 9216
mpls ip
tunnel source Loopback100
tunnel destination 12.11.11.21
tunnel protection ipsec profile MyProfile
Multi-Point Topologies
What is VPLS?

[Diagram: a VLAN at each site is bridged into an SVI attached to a VFI; a full mesh of pseudowires (PW) connects the VFIs across the MPLS core]

MAC address table population: the VFI is a pure learning bridge
One extended bridge domain built using:
 VFI = Virtual Forwarding Instance (a.k.a. VSI = Virtual Switch Instance)
 PW = Pseudowire
 SVI = Switch Virtual Interface, cross-connected (xconnect) to the VLAN
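A minimal classic-IOS sketch of one such bridge domain, reusing the VLAN and peer-loopback values that appear in the A-VPLS example later in this session (the VFI name is illustrative):

l2 vfi DCI_VFI manual
 vpn id 610
 neighbor 10.100.2.2 encapsulation mpls
 neighbor 10.100.3.3 encapsulation mpls
!
interface Vlan610
 no ip address
 xconnect vfi DCI_VFI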
VPLS Redundancy
Core LDP Link Failure
(Sup2T has embedded VPLS support on any port)

mpls ldp session protection
mpls ldp router-id Loopback100 force

LDP session protection & loopback usage allow the PW state to be unaffected
LDP + IGP convergence in sub-second
Fast failure detection based on carrier-delay / BFD
Immediate local fast protection
Traffic exits directly from the egress VSS node

Bridged traffic convergence (msec): failover 258 / fallback 218; failover 162 / fallback 174
VPLS Redundancy
VSS Node Failure (or Ingress Link)

[Diagram: failure of a VSS node, or of the ingress link, at the VPLS edge]

mpls ldp graceful-restart

If the slave node fails: PW state is unaffected
If the master node fails:
 PW forwarding is ensured via SSO
 PW state is maintained on the other side using LDP graceful restart
Edge EtherChannel convergence in sub-second
Traffic goes directly to the working VSS node
Traffic exits directly from the egress VSS node

Bridged traffic convergence (msec): failover 224 / fallback 412; failover 326 / fallback 316
A-VPLS Redundancy/Dual-Homing Using VSS
Enable A-VPLS

interface Virtual-Ethernet1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 610-619
 neighbor 10.100.2.2 pw-class Core
 neighbor 10.100.3.3 pw-class Core

pseudowire-class Core
 encapsulation mpls

#sh mpls l2 vc
Local intf  Local circuit   Dest address  VC ID  Status
----------  --------------  ------------  -----  ------
VFI         VFI_610_ VFI    10.100.2.2    610    UP
VFI         VFI_610_ VFI    10.100.3.3    610    UP
VFI         VFI_611_ VFI    10.100.2.2    611    UP
VFI         VFI_611_ VFI    10.100.3.3    611    UP

Rem: one PW per VLAN per destination

Any card type facing the edge (SUP-720)
SIP-400 facing the core (5 Gbps)
ES-40 (20/40 Gbps) support with 12.2(33)SXJ
Support of routed PW from 12.2(33)SXJ
A-VPLS Label Paths
Traffic load Balancing
pseudowire-class A-VPLS_remote_PE
encapsulation mpls
load-balance flow ! enable ML-PW load-balancing based on ECMP

Si Si

Si Si

ML-PW: Multi-Link Pseudowire
 Balances traffic between multiple ECMP paths on one VSS member

EtherChannel:
 RBH (Result Bundle Hash) EtherChannel balancing
 Polarisation of traffic within a VSS member
Dual Homing Attachment Circuit
Using mLACP / MC-LAG
Multi-Chassis LACP synchronisation (Redundancy Group):
 LACP BPDUs (01:80:C2:00:00:00) are exchanged on each link
 System attributes: priority + bundle MAC address
 Port attributes: key + priority + number + state

redundancy
 iccp
  group <ig-id>
   mlacp node <node id>
   mlacp system mac <system mac>
   mlacp system priority <sys_prio>
   member
    neighbor <mpls device>

interface <bundle>
 mlacp iccp-group <ig-id>
 mlacp port-priority <port prio>

interface <physical interface>
 bundle id <bundle id> mode active

[Diagram: DHD dual-homed to an active POA and a standby POA; ICCP runs between the POAs across the MPLS network]

Terminology:
mLACP: Multi-Chassis Link Aggregation Control Protocol
MC-LAG: Multi-Chassis Link Aggregation Group
ICCP: Inter-Chassis Communication Protocol
DHD: Dual-Homed Device (Customer Edge)
DHN: Dual-Homed Network (Customer Edge)
POA: Point of Attachment (Provider Edge)
MC-LAG to VPLS Testing
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/vpls/vpls_asr9k.html

[Diagram: failure points 1-8 from the DHD uplinks through the POAs into the MPLS core]

Only errors 2/3/4 lead to ICCP convergence (Rem: 2 & 4 are dual errors)
500 VLANs, unicast: link error sub-1 s & node error sub-2 s
1200 VLANs, unicast: link error sub-2 s & node error sub-4 s
MPLS DCI Conclusion
A Mature Solution
EoMPLS is an easy point-to-point solution
VPLS DCI comes in two flavours:
1. A-VPLS, based on node clustering
 Simplicity
 Very fast convergence
 Only available today with the Catalyst 6500
2. H-VPLS, based on mLACP attachment
 High-end devices (7600 / ASR 9K, ...)
 Multi-tenant features
 High scale
 High SLA features
 Standards-based
Data Centre InterconnectLAN Extension
Technology Selection Criteria
Ethernet: over dark fibre or protected DWDM
 VSS & vPC: dual-site interconnection
 FabricPath (TRILL)

MPLS transport
 EoMPLS: transparent point-to-point
 A-VPLS: Enterprise-style MPLS
 H-VPLS: large scale & multi-tenant

IP transport
 OTV: Enterprise-style inter-site MAC routing
 VXLAN: intra-site MAC bridging in a totally virtualised environment
LAN Extensions Evolution
From Circuits to Packets
Traditional L2 VPNs (circuits + data-plane flooding) vs. MAC routing (packet switching + control protocol):
 Full mesh of circuits (pseudowires) vs. packet-switched connectivity
 MAC learning based on flooding vs. MAC learning by control protocol
 Failure propagation vs. failure containment
 Limited information vs. rich information
 Operationally challenging vs. operational simplification
 Loop prevention and multi-homing must be provided separately vs. automatic loop prevention & multi-homing
Why do we really need LAN Extensions?
Moving Workloads

Not necessarily for moving workloads:
 Hypervisor control traffic is routable over the IP network
 Workload moves can be solved with IP mobility solutions: LISP host mobility

Application high availability: distributed clusters
 e.g. node discovery & heartbeats in clustered applications (GeoClusters)
 Non-IP application traffic (heartbeats) requires a LAN extension (OTV)

[Diagram: hypervisors exchanging routable control traffic across an IP network; distributed application nodes (App/OS) relying on the LAN extension (OTV)]
Overlay Transport Virtualisation (OTV)
Simplifying LAN Extensions
Ethernet LAN Extension over any Network
Works over dark fibre, MPLS, or IP
Multi-data-centre scalability: many physical sites, one logical data centre
Simplified configuration & operation
 Seamless overlay: no network re-design
 Single-touch site configuration
High resiliency
 Failure domain isolation
 Seamless multi-homing
Maximises available bandwidth
 Automated multi-pathing
 Optimal multicast replication

Any workload, anytime, anywhere: unleashing the full potential of compute virtualisation
OTV Data Plane
Inter-Site Packet Flow
1. Layer 2 lookup on the destination MAC: MAC 3 is reachable through IP B
2. The Edge Device encapsulates the frame
3. The transport delivers the packet to the Edge Device on site East
4. The Edge Device on site East receives and decapsulates the packet
5. Layer 2 lookup on the original frame: MAC 3 is a local MAC
6. The frame is delivered to the destination

[Diagram: the West site MAC table maps VLAN 100 MAC 1/MAC 2 to local ports and MAC 3/MAC 4 to IP B; the East site MAC table maps MAC 1 to IP A and MAC 3/MAC 4 to local ports Eth 3/Eth 4; the MAC 1 -> MAC 3 frame crosses the transport encapsulated as IP A -> IP B]
Overlay Transport Virtualisation
The OTV Control Plane
Neighbor discovery and adjacency over
Multicast
Unicast (Adjacency Server Mode available with NX-OS 5.2 release)

OTV proactively advertises/withdraws MAC reachability (control-plane learning)

IS-IS is the OTV Control Protocol between edge devices - No specific configuration required

[Diagram: OTV edge devices at West (IP A), East (IP B), and South (IP C) exchanging MAC address advertisements over the overlay]
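A minimal multicast-mode sketch of an OTV edge device on NX-OS (the VLAN numbers, groups, and interfaces are illustrative; the otv site-identifier command appears from the NX-OS 5.2 release):

feature otv
otv site-vlan 1999
otv site-identifier 0x1
!
interface Overlay100
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-150
  no shutdown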
OTV Failure Domain Isolation
Spanning-Tree Site Independence
Site transparency: no changes to the STP topology
Total isolation of the STP domain

Default behaviour: no configuration is required


BPDUs sent and received ONLY on Internal Interfaces

[Diagram: OTV edge devices at the L2/L3 boundary of each site; the BPDUs stop at the OTV edge device on both sides]

OTV Failure Domain Isolation
Preventing Unknown Unicast Storms
No requirement to forward unknown unicast frames
Assumption: end hosts are not silent or uni-directional
Default behaviour: no configuration is required

[Diagram: the West MAC table holds VLAN 100 MAC 1 -> Eth1 and MAC 2 -> IP B; with no MAC 3 entry, unknown unicast towards MAC 3 is not flooded across the overlay]

OTV: Join and Internal Interfaces
Deployment Guidelines
Both currently supported only on M1 line cards
The OTV Internal Interfaces should carry the VLANs to be extended plus the OTV site-vlan
Only one Join Interface (physical or logical) can currently be specified per Overlay
Multiple physical interfaces can be deployed as L3 uplinks
For higher resiliency the use of a port-channel is encouraged, but it is not mandatory
There are no requirements in terms of 1GE vs 10GE, nor in terms of dedicated vs shared mode

Supported Join Interface types*: Layer 3 routed physical interface and sub-interface; Layer 3 port-channel interface and sub-interface
* Loopback interfaces and SVI support planned for future releases


OTV and Multi-homing
VLAN Splitting between Edge Devices
Authoritative Edge Device (AED) role negotiated between the two OTV VDCs (on a per-VLAN basis)
Internal IS-IS peering on the site VLAN, used for AED election
VLANs are split between the OTV edge devices belonging to the same site
Achieved via a very deterministic algorithm (not configurable); future functionality will allow tuning the behaviour
The System-ID determines which AED will handle ODD/EVEN VLANs; highest System-ID = AED for ODD VLANs

OTV-ED# show otv site
Site Adjacency Information (Site-VLAN: 1999) (* - this device)
Overlay100 Site-Local Adjacencies (Count: 2)
Hostname            System-ID        Ordinal
-----------------   ---------------- -------
dc2a-agg-7k2-otv    001b.54c2.e142   0
* dc2a-agg-7k1-otv  0022.5579.0f42   1

Placement of the OTV Edge Device
Option 1 OTV in the DC Core

Easy deployment for brownfield
 L2-L3 boundary remains at aggregation
 DC core devices perform the L3 and OTV functions
 May use a pair of dedicated Nexus 7000
VLANs extended from the aggregation layer ("octopus" design)
 Recommended to use separate physical links for L2 & L3 traffic
STP and L2 broadcast domains are not isolated between PODs

[Diagram: vPC/VSS aggregation blocks with SVIs extend their VLANs over vPC to the OTV-capable DC core]

OTV at the Aggregation Layer
Option 2 - OTV at the Aggregation with L3 boundary on the FW
The Firewalls host the Default Gateway
No SVIs at the Aggregation Layer
No Need for the OTV VDC

[Diagram: OTV edge devices towards the core; the firewalls below the aggregation host the default gateway, so the L2/L3 boundary sits at the firewall]

OTV and SVI Coexistence
Introducing the OTV VDC

Currently, on the Nexus 7000, traffic belonging to a given VLAN can either be routed (associated with an SVI) or extended using OTV
 This would theoretically require a dual-system solution
 The VDC feature allows deploying a dual-VDC solution on the same physical device
Different OTV VDC deployment options:
 Single-homed OTV VDC model
 Dual-homed OTV VDC model
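Carving out the OTV VDC is done from the default VDC. A minimal sketch (the VDC name and interface allocation are illustrative):

vdc OTV id 2
  allocate interface Ethernet1/9-10
!
switchto vdc OTV
! then enable "feature otv" and configure the Overlay,
! Join, and Internal interfaces inside the new VDC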
Single Homed OTV VDC
Simple Model

May use a single physical link for the Join and Internal interfaces
 Minimises the number of ports required to interconnect the VDCs
Single link or physical node (or VDC) failures lead to AED re-election
 50% of the extended VLANs affected
Failure of the routed link to the core is not OTV-related
 Recovery is based on IP convergence

[Diagram: N7K-A and N7K-B each host a routing VDC and an OTV VDC; logical view with Link-1/Link-2 and Po1, physical view adding Link-3/Link-4; Layer 3 vs Layer 2 links]

Dual Homed OTV VDC
Improving the Design Resiliency

Logical port-channels used for the Join and the Internal interfaces
 Increases the number of physical interfaces required to interconnect the VDCs
Traffic recovery after a single link failure event is based on port-channel re-hashing
 No need for AED re-election
Physical node (or VDC) failure still requires AED re-election
 In the current implementation this may cause a few seconds of outage (for 50% of the extended VLANs)

[Diagram: N7K-A and N7K-B each host a routing VDC and an OTV VDC; logical view with Links 1-2/3-4 and Po1, physical view adding Links 5-8; Layer 3 vs Layer 2 links]

Placement of the OTV Edge Device
Option 3 OTV in the DC Aggregation

L2-L3 boundary at aggregation
 DC core performs only the L3 role
 STP and L2 broadcast domains isolated between PODs
Intra-DC and inter-DC LAN extension provided by OTV
 Requires the deployment of dedicated OTV VDCs
Ideal for single aggregation block topologies
Recommended for greenfield deployments
 Nexus 7000 required in aggregation

Placement of the OTV Edge Device
Option 4 OTV over Dark Fibre Deployments
Data Centres directly connected at the Aggregation
Currently mandates the deployment of dedicated OTV VDCs
OTV Control Plane messages must always be received on the Join Interface
Requires IGP/PIM peering between aggregation devices (via peer-link)
Advantages over VSS-vPC solution:
Provision of Layer 2 and Layer 3 connectivity leveraging the same dark fibre connections
Native STP isolation: no need to explicitly configure BPDU filtering
ARP Optimisation with the OTV ARP Cache
Simplified provisioning of FHRP isolation
Limits the fault domain to each site
Easy addition of sites

[Diagram: Site 1 and Site 2 aggregation devices with OTV VDCs and SVIs, vPC towards the access; legend: Layer 2 link, Layer 3 link, OTV virtual link]

OTV Summary

Extensions over any transport (IP, MPLS)
Failure boundary preservation and site independence (the fault domain stays within each data centre)
Optimal BW utilisation (no head-end replication)
Automated built-in multi-homing with end-to-end loop prevention
Scalability: sites, VLANs, MACs
Operational simplicity: only a few CLI commands
Agenda
Active-Active Data Centre: Business Drivers and Solutions Overview
Host Mobility using LISP
Active / Active Data Centre Design Considerations
Storage Extension
Data Centre Interconnect (LAN Extension Deployment Scenarios)
Ethernet Based
MPLS Based
IP Based
Network Services and Applications (Path optimisation)

Summary and Conclusions


Q&A
VMotion Service Normally in Left DC
Optimised Multi-Tier Framework and Active Network Services

[Diagram: DC A advertises 144.254.100.0/25 & 144.254.100.128/25, DC B advertises 144.254.100.0/24 as backup for Data Centre A; EEM or RHI can be used to get very granular; ISP A and ISP B at the Layer 3 core; VLAN A extended between the access layers; L2 and L3 links (GE or 10GE)]

Ingress path optimisation: clients to server
Server-to-server path optimisation: move the whole application tier (front end to database)
Egress path optimisation: server to client
Optimise the whole path: client to server, server to server, and server to client
1a VMotion - Primary Service in Left DC
GSS and ACE with KAL-AP

[Diagram: the GSS uses KAL-AP probes against the ACEs and changes the IP it hands out, from VIP 144.254.1.100 in DC A to 144.254.200.100 in DC B; SNAT at each aggregation layer; VM = 10.1.1.100, default GW = 10.1.1.1 on the extended VLAN A]
1b VMotion - Primary Service in Left DC
Movement of the VM announced via vCenter

[Diagram: 144.254.1.0/24 is advertised into L3 from Data Centre A; once the MAC moves to Data Centre B, the service IP changes from 144.254.1.100 to 144.254.200.100; SNAT at each site; VM = 10.1.1.100, default GW = 10.1.1.1]
2a VMotion - Primary Service in Left DC
Detection of VM movement using ACE probes: ingress path optimisation

[Diagram: DC A advertises 144.254.100.0/25 & 144.254.100.128/25, DC B advertises 144.254.100.0/24 as backup for Data Centre A; EEM or RHI can be used to get very granular; the ACE in DC B probes "Is 10.1.1.100 OK?" and the probe fails while the VM is still in DC A; App VM = 10.1.1.100, default GW = 10.1.1.1]
2a VMotion - Primary Service in Left DC
Detection of VM movement using ACE probes: ingress path optimisation

[Diagram: once the probe to 10.1.1.100 succeeds in DC B, 144.254.100.100/32 is advertised into L3 using RHI, while DC A keeps advertising 144.254.100.0/25 & 144.254.100.128/25 and DC B the 144.254.100.0/24 backup; HSRP group 1 (10.1.1.1) at both sites; App VM = 10.1.1.100, default GW = 10.1.1.1]
3 VMotion - Ingress Routing Optimisation with LISP

[Diagram: the mapping database holds prefix-to-RLOC entries: 10.10.10.1 -> A, B (then C, D after the move), 10.10.10.2 -> A, B, 10.10.10.5 -> C, D, 10.10.10.6 -> C, D. (1) The Ingress Tunnel Router (ITR) encapsulates packets for IP_DA 10.10.10.1 towards RLOC A in DC A; (2) after the move the map cache is updated; (3) packets are encapsulated towards RLOC D, where the Egress Tunnel Router (ETR) decapsulates and delivers to 10.10.10.1. VM = 10.10.10.1, default GW = 10.10.10.100]
Agenda
Active-Active Data Centre: Business Drivers and Solutions Overview
Host Mobility using LISP
Active / Active Data Centre Design Considerations
Storage Extension
Data Centre Interconnect (LAN Extension Deployment Scenarios)
Ethernet Based
MPLS Based
IP Based
Network Services and Applications (Path optimisation)

Summary and Conclusions


Q&A
DCI Architectures
OTV

[Diagram: greenfield Nexus 7000 DCs and brownfield Catalyst DCs interconnected via OTV on the Nexus 7000 or ASR 1K; legend: L3 link, L2 CE, L2 FP, OTV virtual link]

Leverage OTV capabilities on the Nexus 7000 (greenfield) and ASR 1K (FCS 2HCY11)
Scalability and convergence improvements on the N7K planned for 1HCY12
Built on top of the traditional DC L3 switching model (L2-L3 boundary in aggregation, core is pure L3)
Integration with the FabricPath/TRILL model
DCI Architectures
Large Scale VPLS
[Diagram: servers, access, aggregation (FabricPath), core, and ASR 9K WAN edge attached to the MPLS core; L2/L3 boundary shown]

Leverage VPLS on the ASR 9K for high scale and multi-tenancy support
Targeted at large (SP-like) Enterprise customers
Two possible models:
 FabricPath up to the DC core, DCI on the ASR 9K in the WAN edge
 ASR 9K as a collapsed WAN edge/core fusion
DCI Architectures
Enterprise Model
[Diagram: Catalyst 6500 VSS pairs at the core/WAN edge of each DC interconnected with A-VPLS over the WAN; L2-L3 boundary at aggregation]

Leverage the existing Catalyst 6500 installed base to perform LAN extension
A-VPLS deployed from the DC core/WAN edge or the aggregation layer (leveraging the upcoming IRB functionality)
Data Centre Interconnect - DCI Model
Connecting Virtualised Data Centres

LAN extensions (multipoint, OTV): STP isolation is the key element; loop avoidance + storm control; unknown unicast & broadcast control; link sturdiness; scale & convergence

IP localisation and optimal routing: route portability; server-client and server-server flows; path optimisation options: egress addressed by FHRP filtering, ingress via (1) DNS redirection with ACE/GSS, (2) route injection, or (3) LISP

L2 domain elasticity considerations: network and security services deployment; fabric path; service localisation (any service anywhere)

Fabric consolidation: Unified Fabric & I/O; device virtualisation; segmentation; VN-Link

Storage elasticity (SAN extensions): sync or async replication modes are driven by the applications, hence distance/latency is a key component of the choice; localisation of active storage is key; distance can be improved using IO accelerators or caching; virtual LUNs allow Active/Active

VM-awareness: VN-Link intelligence and OTV notifications
LISP empowering DCI

Efficient multi-homing
 IP portability
 Ingress traffic engineering without BGP
 Reduced CapEx/OpEx

IPv6 transition support
 v6-over-v4, v6-over-v6, v4-over-v6, v4-over-v4

Multi-tenancy and VPNs
 Large-scale segmentation

Mobility
 Cloud / Layer 3 workload moves, with segmentation

[Diagram: LISP routers bridging the IPv4 and IPv6 Internets for v6 services; LISP sites with West-DC and East-DC over IP networks]

Data Centre Interconnect
Where to Go for More Information

http://www.cisco.com/go/dci
Q&A
Complete Your Online Session Evaluation

Complete your session evaluation:
 Directly from your mobile device, by visiting www.ciscoliveaustralia.com/mobile and logging in with your username and password
 Visit one of the Cisco Live internet stations located throughout the venue
 Open a browser on your own computer to access the Cisco Live onsite portal

Don't forget to activate your Cisco Live Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any internet station or visit www.ciscolivevirtual.com.
