LTRSPG 3005
XRv-9000 in Evolved
Programmable Network (EPN)
Osman Hashmi, Technical Leader
Nicolas Fevrier, Technical Leader
Agenda
Introduction
Installation in KVM/QEMU and ESXi environments
Configuration of the virtualized OS with a focus on VNF applications
Troubleshooting
Demo
Introduction
IOS XRv 9000 Vision Statement
Common SP operating system across physical and virtual data planes
Abstracted network services and automation
[Figure: IOS XRv spanning physical high-touch data planes, the virtual XR data plane, and physical cost-optimized / merchant-silicon data planes]
DISCLAIMER: Many of the products and features described herein remain in varying stages of development and will be offered on a when-and-if-available basis. This roadmap is subject to change
at the sole discretion of Cisco, and Cisco will have no liability for delay in the delivery or failure to deliver any of the products or features set forth in this document.
Vision: Operationally Consistent Infrastructure
Both physical and virtual infrastructure run IOS XR, with common IOS XR-based northbound APIs and a common SDN-enabled data plane.
Ease of adoption: familiar IOS XR/XE, easy migration
Rapid creation and deployment of new services: create new routers in specific roles in seconds, cloud orchestrated, single-tenant or multi-tenant
Common hardware platform (x86)
Wide range of applications: customer premise, service provider access/aggregation, data center, core/edge, Internet / 3rd party
[Figure: physical and virtual IOS XR solutions deployed across access/aggregation and core, on common x86 hardware]
Universal Forwarder Architecture: Common Code & Feature Leverage
The XR-based virtual router runs on a server as three cooperating domains:
Admin plane (LXC): lightweight and optimized; infra management, SMU management, VM/LXC management, XR install, and lifecycle management for the CP and DP (start/stop/restart, upgrade/downgrade, liveness/heartbeat monitoring)
XR control plane (VM): combined RP + LC functionality
Data plane (LXC): referred to as the DPA; connects to the hypervisor vswitch and exposes the XR interfaces (MgmtEth 0/0/1, TenGigE 0/0/0, GigE 0/0/1, GigE 0/0/2 in the figure)
Nomenclature:
Data Plane Controller: DPC
Data Plane Agent: DPA
Universal Virtual Forwarder: UVF
IOS-XRv 9000
Full IOS-XR protocol stack: RPL, HA (process restart, NSR)
64-bit Linux kernel for higher scale
Flexible CPU/memory allocation for scale-out
Control plane runs in an LXC for fault isolation
IOS-XR 64-bit OS with a modular and lightweight admin plane
SW-based HW assists
Task 1: Accessing the Lab
Task 1
Objectives
Launch student POD Virtual Machine
Launch AnyConnect to dCloud
Connect to router consoles
Task 2: Installing the Cisco IOS XRv 9000 Router
Installation Overview
Installing in VMware ESXi Environments
Installing in KVM Environments
Smart Licensing
Accessing the Console Port
Installation Overview
[Figure: XRv 9000 software architecture. An Admin LXC (CM, SM, FM) sits alongside the XR control-plane LXC (XR system infra, L2/L3/multicast, ACL, QoS, FIB, LC components) and the data-plane LXC (DP Controller, DP Agent, vAntares datapath, PD/PRM and port drivers, DPDK). All run over the host Linux kernel (e1000, virtio, Linux bridge, management Ethernet) on bare metal, KVM, or ESXi (future: Xen, Hyper-V), with VF/PF attachment for the 10G ports.]
Installation Overview
When you boot the ISO, it installs to a virtual hard disk in a newly created folder called workdir-XXX under the directory you are in.
By default, sunstone.sh creates 6 tap interfaces:
Management Ethernet: this will be the MgmtEth port in XR
Ctrl Ethernet: reserved for future use, currently unused
3 traffic interfaces: these show up as GigEth interfaces in XR; the number is configurable via the data-nics option
The bridges can be seen on the Linux host using the brctl command; the tap interfaces show up in the ifconfig output.
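As a sketch, assuming a hypothetical ISO filename and that the data-nics option is passed as a command-line flag (check the script's help for the exact syntax), a launch plus host-side inspection might look like:

```shell
# Hypothetical invocation; the exact flags of sunstone.sh may differ
sudo ./sunstone.sh --iso xrv9k-fullk9-x.iso --data-nics 3

# Inspect the bridges and tap interfaces the script created
brctl show                  # bridges with their attached tap interfaces
ifconfig -a                 # the tap interfaces appear here as tapN
ip link show type bridge    # iproute2 equivalent of brctl show
```

The tap names and bridge names are chosen by the script, so expect them to differ between hosts.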
Requirements
Minimum ESXi version: 5.5
Virtual RAM: minimum 16 GB
Virtual HD: minimum 55 GB
Supported vNICs: E1000 and VMXNET3
Number of vNICs: from 4 to 11
1 for management
2 reserved (vNICs 2 and 3 are for internal use only)
from 1 to 8 for traffic
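The following pages walk through the vSphere Client GUI. As an alternative sketch, VMware's ovftool can deploy an OVA from the command line; the host, credentials, and filenames below are placeholders:

```shell
# Placeholder host and filenames; adjust to your environment
ovftool --datastore=datastore1 --name=xrv9000 \
  xrv9k-fullk9-x.ova 'vi://root@esxi-host.example.com/'
```

Either way, the VM still needs the CD/DVD drive and serial-port edits described in the following slides.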
Let's Get Started
Right-click on datastore1
Upload the .iso file. The file is now present in our datastore.
We are now ready to build the virtual machine.
The second and third vNICs are not available (used internally).
We edit the VM settings before finishing: we will add a CD/DVD drive and serial ports.
This CD/DVD drive will point to our .iso image and will be used to boot the system
We use the .iso image uploaded to datastore1 earlier.
Now that the image is prepared, we will add the serial ports.
Serial Ports
If you used a VGA image (xrv9k-fullk9_vga-x.iso or xrv9k-fullk9_vga-x.ova), the XR console port is mapped to the VGA console (available in the Console tab of the vSphere Client), and the serial ports map as follows:
XR auxiliary: 1st serial port
Admin console: 2nd serial port
Admin auxiliary: 3rd serial port
Click on Add and repeat the operation for 3 or 4 serial ports
Use the ESXi server IP address and some available ports
Repeat 3 more times with other available ports, then click Finish.
It takes about 3 minutes to prepare.
Start the Virtual Machine
Now select the VM in the vSphere Client and start it.
At the same time, you can connect to the console and follow the boot process.
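With network-backed serial ports, each console is reached by telnetting to the ESXi host on the port chosen when that serial port was added. The host address and ports below are placeholders, and the port-to-console mapping depends on whether a VGA or serial image was used:

```shell
# Placeholder ESXi host IP and the ports entered when adding the serial ports
telnet 198.51.100.10 9001   # 1st serial port
telnet 198.51.100.10 9002   # 2nd serial port
```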
Task 3: Configuration for the Virtualized OS with Focus on VNF Applications
Task 3
Objectives - Configuration for the virtualized OS with focus on VNF applications
Understand the new virtualization environment
Verify VMs resources (CPU and memory)
Verify definition and state of VMs
Verify networking elements of VMs
Understand SysAdmin basics
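On a KVM host, several of these checks can also be made from outside the guest with libvirt's virsh. The domain name xrv9000 is a placeholder:

```shell
# Placeholder domain name "xrv9000"
virsh list --all            # definition and state of the VMs
virsh dominfo xrv9000       # vCPU count, memory, autostart
virsh vcpuinfo xrv9000      # per-vCPU state and host-CPU placement
virsh domiflist xrv9000     # interfaces and the bridges they attach to
```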
Task 4: Troubleshooting
Cheat Sheet
Datapath Architecture
On XRv 9000, the control and data planes are separated into different Linux containers. The dataplane resides in the UVF (Universal Virtual Forwarder) container.
The datapath is a single process, composed of 2 parts:
1. An agent (DPA) that communicates with a controller (DPC) in the XR container to get route/feature configuration and program it into the datapath
2. The datapath forwarding code, comprised of DPDK, VPP, and the XRv 9000 dataplane code
Punt/inject packets travel between SPP (in the XR container) and the DPA.
[Figure: XR container holding the DataPath Controller (DPC) and SPP, connected to the UVF container holding the DataPath Agent (DPA) and the packet-forwarding code]
The Life of the DPA Thread
The DPA thread continuously runs in a loop servicing background tasks as well as the punt/inject and control sockets:
VPP tasks (stats, CLI, link up/down processing)
Service the punt ring: send to the punt socket
Read from the inject socket: enqueue to the datapath
Datapath Core Usage
The XRv 9000 datapath can scale from 2 to 16 CPU cores.
The high-level organization of work varies depending on the number of cores given to the datapath. There are 2 cases:
2/3 cores: all work on one core
4+ cores: IO on one core, TM on one core, workers on the rest
One core is always used for the DPA (main) thread.
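Datapath performance depends on those cores not being scheduled away by the host. On a KVM host this is commonly handled by pinning each guest vCPU to a dedicated host core; the domain name and core numbers below are placeholders:

```shell
# Placeholder domain "xrv9000"; pin each guest vCPU to its own host core
virsh vcpupin xrv9000 0 2   # vCPU 0 (DPA/main thread) -> host core 2
virsh vcpupin xrv9000 1 3   # vCPU 1 (datapath)        -> host core 3
virsh vcpupin xrv9000 2 4
virsh vcpupin xrv9000 3 5
virsh vcpuinfo xrv9000      # verify the placement
```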
Demo
Conclusions
References
Cisco IOS XRv 9000 Virtual Router
Cisco IOS XRv 9000 Router Data Sheet
Cisco XRv 9000: Carrier-Class Virtual Router (PIW video)
Cisco IOS-XRv 9000 Virtual Route Reflector v1 (demo on Cisco dCloud)
Call to Action
Visit the World of Solutions for:
Cisco Campus
Walk-in Labs
Technical Solution Clinics