Intercloud Data Center ACI 1.0
Implementation Guide
February 20, 2015
CCDE, CCENT, CCSI, Cisco Eos, Cisco Explorer, Cisco HealthPresence, Cisco IronPort, the Cisco logo, Cisco Nurse Connect, Cisco Pulse, Cisco SensorBase,
Cisco StackPower, Cisco StadiumVision, Cisco TelePresence, Cisco TrustSec, Cisco Unified Computing System, Cisco WebEx, DCE, Flip Channels, Flip for Good, Flip
Mino, Flipshare (Design), Flip Ultra, Flip Video, Flip Video (Design), Instant Broadband, and Welcome to the Human Network are trademarks; Changing the Way We Work,
Live, Play, and Learn, Cisco Capital, Cisco Capital (Design), Cisco:Financed (Stylized), Cisco Store, Flip Gift Card, and One Million Acts of Green are service marks; and
Access Registrar, Aironet, AllTouch, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Lumin, Cisco Nexus, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, Continuum, EtherFast, EtherSwitch, Event Center, Explorer, Follow Me Browsing, GainMaker, iLYNX, IOS, iPhone, IronPort, the
IronPort logo, Laser Link, LightStream, Linksys, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, PCNow, PIX, PowerKEY,
PowerPanels, PowerTV, PowerTV (Design), PowerVu, Prisma, ProConnect, ROSA, SenderBase, SMARTnet, Spectrum Expert, StackWise, WebEx, and the WebEx logo are
registered trademarks of Cisco and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (1002R)
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT
SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE
OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public
domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH
ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO
OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Cisco Confidential Partners ONLY
Intercloud Data Center ACI 1.0, Implementation Guide
Service Provider Segment
2015 Cisco Systems, Inc. All rights reserved.
Preface
The Cisco Intercloud Data Center ACI 1.0 (ICDC ACI 1.0) system provides design and implementation guidance for building cloud infrastructures, both for Enterprises deploying private cloud services and for Service Providers building public cloud and virtual private cloud services. With the goal of providing an end-to-end system architecture, ICDC ACI 1.0 integrates Cisco and third-party products in the cloud computing ecosystem. This preface explains the objectives and intended audience of the Cisco Intercloud Data Center ACI 1.0 solution and this implementation guide.
The Intercloud Data Center system is a continuation of the Virtualized Multi-Service Data Center (VMDC) systems, and this implementation guide is based on the recently released Cisco Application Centric Infrastructure (ACI) technology. This first release of the implementation guide focuses on showing how to build complex tenancy constructs using ACI.
Product screen shots and other similar material in this guide are used for illustrative purposes only and
show trademarks of EMC Corporation (VMAX), NetApp, Inc. (NetApp FAS3250), and VMware, Inc.
(vSphere). All other marks and names mentioned herein may be trademarks of their respective
companies.
Use of the word partner or partnership does not imply a legal partnership relationship between Cisco
and any other company.
Audience
This guide is intended for, but not limited to, system architects, network design engineers, system
engineers, field consultants, advanced services specialists, and customers who want to understand how
to deploy a Public or Private cloud data center infrastructure using ACI. This guide assumes that you are
familiar with the basic concepts of Infrastructure as a Service, the Cisco Virtualized Multi-Service Data Center (VMDC) solution, IP protocols, Quality of Service (QoS), and High Availability (HA), and that
you are aware of general system requirements and data center technologies.
This implementation guide provides guidance for cloud service providers building cloud infrastructures using Cisco Application Centric Infrastructure (ACI) technology, and is part of the Cisco reference design for cloud infrastructures in the Cisco Intercloud Data Center ACI 1.0 release.
CHAPTER 1
Solution Overview
The goal of implementing cloud infrastructures is to provide highly scalable, efficient, and elastic services accessed on demand over the Internet or an intranet. In the cloud, compute, storage, and network hardware are abstracted and delivered as a service to run the workloads that provide value to their users. The end users, also called tenants, use the functionality and value provided by the service as and when needed, without having to build and manage the underlying data center infrastructure. A cloud deployment model differs from traditional deployments in that the focus is on deploying applications by consuming a service from a provider, which results in business agility and lower cost because only the resources needed are consumed, and only for the duration needed. For the provider of the cloud infrastructure, the compute, storage, networking, and services infrastructure in the data center is pooled together as a common shared fabric of resources, hosted at the provider's facility, and consumed by tenants through automation via APIs or portals. The key requirements for cloud service providers are multi-tenancy, high scale, automation to deploy tenant services, and operational ease.
With the availability of ACI, powerful technology is now available to build highly scalable and programmable data center infrastructures. ACI brings software-defined networking principles to the data center, using a centralized policy controller to configure, deploy, and manage the infrastructure, including services appliances. It scales vastly by implementing overlay technology in hardware, yielding high performance and enhanced visibility into the network. It also introduces a different paradigm for designing and running applications in a multi-tenant data center environment, with enhanced security.
ACI is supported on the new Nexus 9000 series switches, and the centralized policy controller is called the Application Policy Infrastructure Controller (APIC). A limited First Customer Shipment (FCS) release of this software was made in the summer of 2014, with General Availability (GA) in November 2014.
This guide documents the implementation of reference Infrastructure as a Service (IaaS) containers using the FCS ACI software release and includes detailed configurations and findings based on solution validation in Cisco labs. The focus is on using ACI-based constructs to build reference IaaS containers similar to those shown in past Cisco Virtualized Multi-Service Data Center (VMDC) Cisco Validated Designs (CVDs), to help Cisco customers understand how ACI can be applied to build cloud infrastructures. Because this is the first release of this system, the focus has been on showing the functional capabilities of ACI and how they apply to building reference containers. Validation of ACI scalability will be covered in subsequent updates or releases of this solution and implementation guide.
This release of Intercloud Data Center ACI 1.0 includes a VMware vSphere-based hypervisor and uses the Cisco Application Virtual Switch (AVS) to extend ACI integration all the way to the virtual access layer, with configuration of the virtual switch port-groups also done via APIC. Additionally, an OpenStack-based compute pod is validated using the Nexus 1000V for KVM platform; this implementation is targeted at providing lower-cost hosting services. It was implemented using the Canonical distribution of the OpenStack Icehouse release with Ubuntu 14.04 LTS, and does not include APIC integration.
Previous Cisco cloud reference designs were named Virtualized Multi-Service Data Center (VMDC); going forward, starting with this system release, these systems are named Intercloud Data Center. For reference, there have been several iterations of the VMDC solution (for example, VMDC 2.3), with each phase encompassing new platforms, versions, and technologies, and details are available in the previously released VMDC design and implementation guides.
This implementation guide introduces several ACI-based design elements and technologies:
Scaling with VXLAN-based overlays: ACI uses VXLAN internally to scale beyond the 4000-VLAN limit when implementing Layer 2 (L2) segments.
Clos-based data center fabric: allows large cross-sectional bandwidth and smaller failure domains using dedicated spines. All servers and external networks attach to the leaf nodes.
Centralized policy control: an SDN-style, programmable data center network in which the whole ACI fabric can be configured using the APIC GUI or the REST API.
Integration with Virtual Machine Managers (VMM): vSphere 5.1, using the Application Virtual Switch.
Service integration of firewalls and server load balancers using the ACI service graph technology.
The Intercloud ACI 1.0 solution addresses the following key requirements for cloud infrastructure providers:
1. Tenancy scale: Multi-tenant cloud infrastructures require multiple Layer 2 segments per tenant, and each tenant needs a Layer 3 context for isolation, to support security as well as overlapping IP address spaces. These are typically implemented as VLANs and VRFs on the data center access and aggregation layers, extending the Layer 3 isolation all the way to the DC provider edge. Because of the 4000-VLAN limit, overlays are required, and ACI uses VXLAN technology within the fabric to scale to a very high number of bridge domains. The number of tenants supported is similarly very high, with plans to support 64000 tenants in future releases. The implementation of VXLAN in hardware provides large scale, high performance and throughput, innovative visibility into tenant traffic, and new security models.
2. Programmable DC network: The data center network is configured using APIC, which is the central policy control element. The DC fabric and tenant configurations can be created via the APIC GUI or via REST API calls, allowing for a highly programmable and automatable data center. Integration with Virtual Machine Managers (currently VMware vSphere 5.1) using the Application Virtual Switch (AVS) allows tenant L2 segments to be created via APIC.
3. Integration of services: Deploying services for tenants, such as firewalls and server load balancers, normally requires separate configuration of those devices via orchestration tools. With ACI, these devices can also be configured via APIC, providing a single point of configuration for data center services. Each service platform publishes its supported configuration items via a device package, which APIC then exposes via its user interface. Currently Cisco ASA firewalls and Citrix NetScaler server load balancers (SLB) are among the supported devices, and a number of other vendors are building their own device packages to allow integration with ACI.
In summary, the Intercloud ACI 1.0 solution provides the following benefits to cloud providers:
Increased L2 segment scale: VXLAN overlays in the fabric provide higher L2 scale and also normalize the encapsulation on the wire.
A single Clos-based data center fabric that scales horizontally by adding more leafs.
Large cross-sectional bandwidth using the Clos fabric, smaller failure domains, and enhanced HA using ACI virtual port channels from two leaf nodes to each external device.
SDN: software-defined network and services integration, with all configuration done through a centralized policy controller.
APIC integration to the virtual access layer using the Application Virtual Switch for VMware vSphere 5.1 hypervisor-based virtual machines; no additional configuration is required.
The Intercloud ACI 1.0 solution (as validated) is built around Cisco UCS, AVS, Nexus 9000 ACI
switches, APIC, ASR 9000, Adaptive Security Appliance (ASA), Cisco NetScaler 1000V, VMware
vSphere 5.1, Canonical OpenStack, KVM, Nexus 1000V, NetApp FAS storage arrays and Ceph storage.
Figure 1-1 shows the functional infrastructure components comprising the Intercloud ACI 1.0 solution.
Figure 1-1    Intercloud ACI 1.0 Solution Components
Data Center PE: ASR 9000. ACI Fabric: Nexus 9508 with 9736PQ (spine), Nexus 9396PX and Nexus 93128TX (leafs). Virtual Access: Application Virtual Switch. Compute: Cisco UCS servers (B-Series blades and C-Series rack servers). Storage: NetApp FAS, EMC VMAX, VNX, or any other. Hypervisors: vSphere, OpenStack KVM. Services: ASA 5585-X, ASAv, NetScaler 1000V. Management: APIC, VMware vCenter, OpenStack Horizon, Cisco UCSM.
Implementation Overview
The Intercloud ACI 1.0 solution utilizes a Clos design for a large-capacity DC fabric with High Availability (HA) and scalability. All external devices are connected to the leaf nodes. This design uses multiple Nexus 9500 series spine switches; at least two spine switches are required, with four spines preferred to provide smaller failure domains. Each Nexus 9300 series leaf node is connected to all spines using 40-Gbps connections, and the paths between leafs are highly available via any of the spines.
The external devices attached to the leaf nodes include integrated compute stacks, services appliances such as firewalls and server load balancers, and the WAN routers that form the DC provider edge. These devices are attached to two Nexus 9300 series leaf nodes using virtual port channels to protect against a single leaf or link failure. Each service appliance also supports high availability using redundant appliances, in either active/standby or active/active cluster mode, to provide HA and scale. The fabric normalizes the encapsulation used toward each external device and re-encapsulates traffic using enhanced VXLAN within the fabric; this allows highly flexible connectivity options and horizontal scaling. By allowing all types of devices to connect to a common fabric, and interconnecting them using overlays, data centers can be built in a highly scalable and flexible manner and expanded by adding more leaf nodes as needed.
Using the Application Virtual Switch (AVS) for VMware vSphere-based workloads extends the ACI fabric to the virtual compute layer, with the port-groups for the different tenant segments and endpoint groups created via the APIC.
BGP or static routing is used to connect the ACI fabric to the ASR 9000 DC Edge for Layer-3 external
connectivity models, while L2 external connectivity to the ASR 9000 is used for some tenant containers.
Solution Architecture
The Intercloud Data Center ACI 1.0 architecture comprises the ACI fabric, WAN, compute, and services layers. All layers attach to the ACI fabric leaf nodes, and the choice of which devices attach to which leafs is driven by physical constraints as well as per-leaf scale considerations. Figure 1-2 shows a logical representation of the Intercloud ACI 1.0 solution architecture.
Figure 1-2    Intercloud ACI 1.0 Solution Architecture
Tenant sites connect over L3VPN to the DC Edge (ASR 9000 nV); iBGP, static routing, or L2 external is used per tenant between the ACI fabric and the DC-PE, and any leaf can be a border leaf connecting to the DC-PE. L2 external connections are carried over vPC. Tenant VM default gateways are on the ACI fabric or on the ASA firewall. The ACI fabric also connects the compute, storage, services, and management layers (OOB-Mgmt, vCenter, MaaS, Juju, Ceph nodes).
ACI Fabric
Spine: Nexus 9508 switches with 9736PQ line cards are used as the ACI fabric spine. In this implementation, four Nexus 9508 spine nodes are used. Each 9736PQ line card has 36 40-Gbps ports, and up to eight such line cards can be installed in each Nexus 9508 chassis. Only leaf nodes attach to the spines. Leafs can be connected to each spine via a single 40G link or via multiple 40G links. Since each leaf connects to every spine, the number of spine ports determines the total size of the fabric, and additional line cards can be installed in the spine nodes to increase the number of leaf nodes supported. Additional form factors of the Nexus 9500 ACI spine will be released in the future.
Leafs: Nexus 9396PX or Nexus 93128TX leaf switches can be used. These switches have 12x 40G ports used to connect to the spines. All connections external to the ACI fabric are made using the 1G/10GE edge ports on the leaf nodes; this includes connections to the ICS, the WAN/provider edge, services appliances, and storage devices. Scale considerations based on the consumption of hardware resources per leaf node determine the per-leaf scale for MAC addresses, endpoint groups, bridge domains, and security policy filters. The fabric allows very high scaling by adding more leaf nodes as needed.
Services: Network and security services, such as firewalls, server load balancers, intrusion prevention systems, application-based firewalls, and network analysis modules, attach directly to the Nexus 9300 series leaf switches. Virtual port channels to two different leaf nodes are used for HA. In this implementation, ASA 5585-X physical firewalls are used, connected via vPC to a pair of Nexus 9300 Top-of-Rack switches. Virtual appliances such as the ASAv virtual firewall and the NetScaler 1000V virtual SLB are also used, but these run on the VMware vSphere hypervisor on the integrated compute stack.
Integrated Compute Stack using VMware vSphere: This is an ICS stack such as FlexPod or Vblock. These typically consist of racks of UCS-based compute and storage devices, and attach to a pair of Nexus 9300 series leaf nodes. Storage can be accessed over IP transport such as NFS, iSCSI, or CIFS. Alternatively, FC/FCoE-based SANs can be used by connecting the UCS 6200 Fabric Interconnects to a pair of SAN fabrics implemented using MDS switches. The compute and storage layer in the Intercloud ACI 1.0 solution has been validated with a FlexPod-aligned implementation using the following components:
Compute: Cisco UCS 6296 Fabric Interconnect switches with UCS 5108 blade chassis populated with UCS B200 and B230 half-width blades. VMware vSphere 5.1 ESXi is the hypervisor for virtualizing the UCS blade servers.
Storage: IP-based storage connected directly to the Nexus 9300 series leaf switches. NetApp FAS storage devices (10G interfaces) are connected directly to the leaf nodes, and NFS-based storage is used for tenant workloads.
Virtual Access: The Cisco Application Virtual Switch (AVS) is used on VMware vSphere 5.1 with full APIC integration. APIC creates a port-group for each EPG and maps it to a VLAN on the wire.
OpenStack Compute Pod: OpenStack is set up as an alternative for tenants that want to use OpenStack-based virtualization. The Canonical OpenStack Icehouse release with Ubuntu 14.04 LTS Linux is used, with a three-node high availability configuration. Both control nodes and compute nodes are Cisco UCS C-Series servers connected to the ACI fabric using virtual port channels. The virtual access switch is the Nexus 1000V for KVM, using the Nexus 1000V Neutron plugin. For this implementation, the OpenStack compute pod is validated with the Copper container only, and hence the default gateway for all tenant VMs is the ASA firewall. Each tenant gets an ASA sub-interface, which is extended via the ACI fabric to the compute layer hosting the tenant VMs. This release with OpenStack Icehouse does not have integration between APIC and OpenStack, so the tenant EPGs are statically mapped to VLANs.
Compute: Cisco UCS C-Series servers. These are also Ceph nodes, so local disks are configured and used by Ceph as OSDs. The compute nodes also have access to traditional storage using NetApp.
Storage:
Traditional storage using NetApp NFS shares. Cinder is set up to mount the NFS shares on compute nodes and use them for running instances.
Software-defined storage using Ceph. Compute nodes use the built-in RBD client to access the Ceph OSDs.
Virtual Access: Tenant networks are created via the Horizon dashboard and published to the Nexus 1000V VSM; the Nexus 1000V Neutron plugin is used.
Figure 1-3 provides a logical representation of the OpenStack Pod.
Figure 1-3    Logical Representation of OpenStack Pod
Nexus 93128 leafs connect the Management PoD (UCS C240-M3/C220-M3 servers hosting Juju, MaaS, the OpenStack control nodes plus RADOS gateway, and the Nexus 1000V VSM nodes) and the workload PoDs (PoD 1, PoD 2), whose UCS C-Series compute nodes also act as Ceph OSDs and MONs. NetApp NFS storage is accessed via Cinder.
The OpenStack implementation is targeted at smaller deployments of up to 256 hosts, per the current Nexus 1000V for KVM verified scalability, and can scale higher in future releases. High availability for the OpenStack control plane is implemented with three nodes running all OpenStack services in an active/active cluster configuration. Canonical-recommended High Availability (HA) designs call for running each OpenStack service on a separate node for production and scaled-up environments, or alternatively running services on independent virtual machines during staging. For this implementation, a three-node HA cluster was set up, and Linux containers (LXC) are used to isolate the individual OpenStack services on these nodes (Figure 1-4).
Figure 1-4    OpenStack Control Plane High Availability
The Management Pod contains the build node (MaaS, Juju bootstrap) and three controller nodes running the OpenStack services (Keystone, Glance, Neutron, Nova, Cinder, Horizon, RADOS-GW) behind HAProxy, with a RabbitMQ cluster and a MySQL Percona/Galera cluster, plus Nexus 1000V VSM nodes. The Workload Pod(s) contain the compute/KVM nodes, which also run Ceph OSD/MON.
End Point Group (EPG): A set of endpoints, either VMs or hosts, to be treated similarly from a policy perspective. From the perspective of the ACI fabric, each endpoint is a MAC address and IP address. In virtualized environments (currently only VMware vSphere), EPGs are extended all the way to the virtual switch with the Cisco Application Virtual Switch, and port-groups are created by the APIC on vCenter so that VMs can be attached to the port-group for the specific EPG. Currently, EPGs can be mapped to VMM domains, wherein the APIC automatically assigns a VLAN (from a pool) and creates a port-group with the name of the EPG to indicate to server admins where to attach VMs. The alternative for non-integrated external devices is to statically map an EPG to a certain VLAN on an interface. Multiple such VLANs are allowed at different points in the fabric, allowing flexibility in stitching together a tenant container.
Note
Some protocols are not filtered by contracts; see the following URL:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/release/notes/aci_nxos_rn_1102.html
Application Profile: A set of EPGs and the contracts between them, implementing a specific multi-tier application. For example, a three-tier web/app/db application might have three EPGs, with contracts for outside-to-web, web-to-app, and app-to-db traffic. Together these form the application profile.
APIC Tenants: APIC is multi-tenant by design, and policies and configuration are created on a per-tenant basis. Role-based access control allows each tenant admin to configure policies for that specific tenant.
Bridge Domains: Bridge domains are L2 segments overlaid on the fabric. At the edges, tenant bridge domains are mapped to VLANs or VXLANs on the wire, and traffic is carried over the fabric with enhanced VXLAN encapsulation.
Private Networks: Private networks (contexts) are similar to VRFs on traditional routers. Each private network has its own addressing space and routing space.
Subnets: Subnets are IP subnets attached to bridge domains. There can be one or more subnets attached to a bridge domain, similar to primary and secondary addresses. SVIs are created on the fabric for these subnets and exist on all of the leaf nodes where the bridge domain exists, providing a proxy default gateway for these subnets at the local leaf.
External Routing Options: Currently, iBGP sessions or static routing can be used between a border leaf and an external router, on a per-tenant basis. The current scale of external routing adjacencies is 32 iBGP sessions per leaf, and only one session per tenant is allowed per leaf. A contract is required to allow destinations outside of the fabric to be reached from inside, and an external EPG is created to represent the outside destinations.
L2 External: When Layer 2 connections are extended outside of the ACI fabric, L2 external connections can be configured, with a contract to secure traffic between external endpoints and internal endpoints.
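To illustrate how these constructs map onto APIC objects, the following is a minimal sketch of a REST request that creates a tenant with one private network, one bridge domain with a subnet, and one EPG mapped to a VMM domain. The tenant, VRF, BD, EPG, subnet, and VMM domain names are illustrative examples only, not values from this implementation.

POST https://{apic_ip_or_hostname}/api/mo/uni.xml

<fvTenant name="ExampleTenant">
    <fvCtx name="Example_VRF"/>
    <fvBD name="Example_BD">
        <!-- Associate the bridge domain with the private network (VRF) -->
        <fvRsCtx tnFvCtxName="Example_VRF"/>
        <!-- Subnet for which the fabric provides the default gateway SVI -->
        <fvSubnet ip="10.10.1.254/24"/>
    </fvBD>
    <fvAp name="Example_AP">
        <fvAEPg name="Example_EPG">
            <!-- Place the EPG in the bridge domain -->
            <fvRsBd tnFvBDName="Example_BD"/>
            <!-- Map the EPG to a vCenter VMM domain; APIC then creates the port-group -->
            <fvRsDomAtt tDn="uni/vmmp-VMware/dom-ExampleVMM"/>
        </fvAEPg>
    </fvAp>
</fvTenant>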
From the perspective of the IaaS services offered by cloud service providers, the following considerations apply to this implementation:
1. CSPs can use APIC tenancy constructs to provide multi-tenant, role-based access control to the configuration. The APIC and ACI fabric can scale to a large number of tenants; however, the currently released software has a verified scalability of 100 tenants.
2. Cloud service providers offering IaaS want to provide logical containers for hosting VMs without being aware of application specifics. On ACI this maps to the CSP providing bridge domains (L2 segments) to tenants and creating one EPG per bridge domain to host any number of applications. The contracts would need to allow access to all application services hosted in that L2 segment. While multiple EPGs can be mapped to the same BD to save hardware resources in the leaf VLAN table, a separate BD per EPG is used in this implementation to isolate multicast and broadcast traffic.
3. Use of L3 versus L2 based containers: Currently the ACI fabric verified scalability is 100 VRFs (called private networks in ACI), and hence using a VRF per tenant allows that many tenants. To scale beyond that limit, for some tenancy models, instead of creating a per-tenant APIC tenant and APIC VRF, just an L2 segment is created and the default gateway is set up on an external device. This is a particularly good choice for low-end tenants with no features/services, such as the Bronze and Copper tenancy models, and it allows scaling the number of such tenants to a very high number.
4. Use of service graphs: Service graphs allow APIC to configure services devices such as firewalls and load balancers. In the current software release there is no redirection capability, so all traffic has to be routed or switched to the services appliance explicitly. Additionally, there is no routing within the fabric, which restricts the stitching of services to a subset of scenarios. In this implementation, one-arm routed mode is used for the server load balancer, with the default gateway on the ACI fabric. For the ASA firewall in routed mode, however, the default gateway has to be on the ASA firewall and not on the ACI fabric, and hence that model is implemented in this release.
5. Additional restrictions on service graphs are covered in detail in later chapters of this implementation guide.
Service Tiers
Cloud providers, whether Service Providers or Enterprises, want an IaaS offering that has multiple
feature tiers and pricing levels. To tailor workload or application requirements to specific customer
needs, the cloud provider can differentiate services with a multi-tiered service infrastructure and Quality
of Service (QoS) settings. The Cisco Intercloud architecture allows customers to build differentiated
service tiers and service level agreements that support their tenant or application requirements. Such
services can be used and purchased under a variable pricing model. Infrastructure and resource pools can
be designed so that end users can add or expand services by requesting additional compute, storage, or
network capacity. This elasticity allows the provider to maximize the user experience by offering a
custom, private Data Center in virtual form.
The Intercloud ACI 1.0 solution supports a reference multi-tier IaaS service model of Gold, Silver,
Bronze, and Copper tiers, very similar to what was shown in the previous Cisco VMDC reference
designs. These service tiers (or network containers) define resource and service levels for compute,
storage, and network performance. This is not meant to be a strict definition of appliance and resource
allocation, but to demonstrate how differentiated service tiers could be built. These are differentiated
based on the following features:
Application Tiers: In some instances, applications may require several tiers of VMs (for example, web, application, database, and so on). Intercloud ACI 1.0 Gold and Silver class tenant containers are defined with three application tiers on three separate bridge domains and three separate EPGs to host web, application, and database services on different VMs. The Bronze and Copper services are defined with one bridge domain and one EPG only, so multi-tiered applications must reside on the same L2 segment, or potentially on the same VM (a Linux or Windows Apache, MySQL, PHP/Perl/Python (LAMP/WAMP) stack).
Access Methods and Security: The Gold and Silver service tiers are defined with separate service appliances per tenant to provide security and isolation. The Gold tier offers the most flexible access methods: through the Internet, L3VPN, and secure VPN access over the Internet. The Gold tier also has multiple security zones for each tenant. The Silver and Bronze tiers do not support any perimeter firewall service and provide access through L3VPN only. The Copper tier supports access over the Internet only, along with perimeter firewall service and NAT. In this release, the goal was to have all of the services implemented through the APIC using the service graph feature. However, the device package support for integrating with APIC was not yet available for certain functionality at the time of testing, notably NAT and RA-VPN/secure VPN access in the ASA device package. These services can still be implemented, albeit by directly configuring the service appliance itself, and in the future they will be supported via APIC.
Stateful Services: Tenant workloads can also be differentiated by the services applied to each tier. The Expanded Gold tier is defined with an ASA-based perimeter firewall and dual security zones: a PVT zone and a DMZ zone. Both the physical ASA 5585-X and the ASAv were validated, and either option can be used depending on customer requirements. The ASA 5585-X based implementation uses multi-context mode, with each tenant getting a context on a pair of physical ASAs, whereas if the virtual ASAv is used, each tenant gets a pair of dedicated single-context ASAv appliances. Support for configuring policies inside an ASA context on a multi-context ASA through APIC will come in a future release; in this implementation, beta code was used to validate this functionality. The Gold and Silver tiers are defined with a NetScaler 1000V SLB service. The Bronze tier is defined with no firewall or SLB services. The Copper tier provides NAT and perimeter firewall services with a context shared among all Copper tenants on the ASA 5585 firewall.
QoS: Bandwidth guarantees and traffic treatment can be a key differentiator. QoS policies can provide different traffic classes to different tenant types and prioritize bandwidth by service tier. The Gold tier supports VoIP/real-time traffic, call signaling, and data classes, while the Silver, Bronze, and Copper tiers have only a data class. Additionally, Gold and Silver tenants are guaranteed bandwidth, with Gold getting more bandwidth than Silver. In this release, ACI does not support rate limiting. Additionally, deploying different classes of traffic for the same tenant requires either separating the traffic by EPGs or trusting the DSCP set by the tenant VM.
VM Resources: Service tiers can vary based on the size of specific VM attributes such as CPU, memory, and storage capacity. The Gold service tier is defined with VM characteristics of 4 vCPUs and 16 GB memory. The Silver tier is defined with VMs of 2 vCPUs and 8 GB, while the Bronze and Copper tier VMs have 1 vCPU and 4 GB each.
Storage Resources: Storage multi-tenancy on NetApp FAS storage arrays using clustered Data ONTAP was implemented to provide dedicated NetApp storage virtual machines (SVMs) to Gold class tenants, whereas Silver tenants share a single SVM but use dedicated volumes, and Bronze and Copper tenants share volumes as well. Storage performance can also be differentiated; for example, the Gold tier is defined with 15000-rpm FC disks, the Silver tier with 10000-rpm FC disks, and the Bronze tier with Serial AT Attachment (SATA) disks. Additionally, to meet data store protection, recovery point, or recovery time objectives, service tiers can vary based on provided storage features such as Redundant Array of Independent Disks (RAID) levels, disk types and speeds, and backup and snapshot capabilities.
Table 1-1 lists the four service tiers or network container models defined and validated in the Intercloud
ACI 1.0 solution. Cloud providers can use this as a basis and define their own custom service tiers, based
on their own deployment requirements. For similar differentiated offerings for Compute and Storage,
reference service tiers can be found in the previously published Cisco VMDC VSA 1.0 Implementation Guide.
Table 1-1    Service Tiers

Secure Zones
    E-Gold: Two (PVT and DMZ)    Silver: None    Bronze: None    Copper: One (on shared firewall)
Perimeter Firewalls
    E-Gold: Two    Silver: None    Bronze: None    Copper: Shared ASA context
Access Methods
    E-Gold: L3VPN and Internet (plus secure VPN over the Internet)    Silver: L3VPN    Bronze: L3VPN    Copper: Internet
Public IP/NAT
    E-Gold: Yes    Silver: n/a    Bronze: n/a    Copper: Yes
VM L2 Segments
    E-Gold: 3 in PVT zone, 1 in DMZ    Silver: 3 in PVT    Bronze: 1 in PVT    Copper: 1 (static VLANs)
Default Gateway
    E-Gold: ASA    Silver: ACI fabric    Bronze: ACI fabric    Copper: ASA
Security Between L2 Segments
    E-Gold: ASA    Silver: ACI fabric    Bronze: Not available    Copper: OpenStack security groups
Services
    E-Gold: ASA-based Internet firewall for the DMZ zone; NetScaler 1000V based SLB, one per zone; NAT on ASA (not via service graphs); RA-VPN with ASAv (not tested)
    Silver: NetScaler 1000V based SLB
    Bronze: None
    Copper: NAT and perimeter firewall on the shared ASA context
QoS
    E-Gold: Real-time (VoIP), call signaling, and premium data classes, with guaranteed bandwidth
    Silver: Premium data class, with guaranteed bandwidth
    Bronze: Standard data class, available BW service (best effort)
    Copper: Standard data class, available BW service (best effort)
Figure 1-5    Reference Network Container Models
Zinc, Copper, Bronze, Silver, Gold, Expanded Gold, and Expanded Palladium containers, showing per-container combinations of L3 and L2 segments, dedicated or shared firewalls and firewall contexts (FW/vFW), load balancers (LB/vLB), protected front-end and back-end segments, public and private zones, and tenant VMs.
This document provides implementation details for the Expanded Gold, Silver, Bronze, and Copper containers. A high-level overview of implementing these containers with ACI is provided here, and the specific implementation and configuration details are provided in the subsequent chapters on each container type. The simplest container, Bronze, is explained first, followed by Silver and E-Gold, and lastly the Copper container, which has a shared firewall and Internet-based access for a low-cost tenancy model.
Bronze
The Bronze reference container is a simple, low-cost tenancy container.
Each Bronze tenant container has one Layer 2 segment for tenant VMs, implemented with one ACI
BD/EPG. There is one VRF on the Data Center provider edge for each Bronze tenant, and tenants access
their cloud service over L3VPN.
The Bronze Tenant traffic is mapped into the standard data class and can use available bandwidth (best
effort), that is, no bandwidth guarantee.
There are two options to implement Bronze with ACI, with different scaling considerations.
L3-Bronze: This option places the default gateway for the VMs on the ACI fabric. L3 external routing, either iBGP or static, is used between the ACI fabric and the DC-PE for each L3-Bronze container. On the Data Center provider edge router, a VRF for each L3-Bronze tenant is used, with a sub-interface toward the ACI fabric. Two independent L3 links are configured to two different leafs to provide redundancy for high availability, and each leaf runs an iBGP session or has static routing configured. (A configuration sketch of the per-tenant L3 external connection follows this list.)
Figure 1-6    L3-Bronze Container (redundant boxes not shown)
The customer VRF on the ASR 9000 nV connects over a VLAN, using iBGP or static routing, to the Bxx tenant in ACI (BXX_VRF with Bxx_BD/Bxx_EPG and the default gateway on the fabric). The APIC creates the port-group on AVS to which the tenant VMs in the compute layer attach.
L2-Bronze: In this design, ACI provides only a BD/EPG for each tenant, and the BD is configured without unicast routing. Tenant VMs have their default gateway on the Data Center provider edge ASR 9000 tenant VRF. An L2 external configuration on the BD is used, and ACI contracts can be set up to protect the tenant VMs for outside-to-inside traffic. The connection between the Data Center provider edge ASR 9000 nV cluster and the ACI fabric is a virtual port channel (vPC) connecting to two different ACI leaf nodes and to two different chassis on the ASR 9000 nV side.
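The following is a minimal sketch of how the per-tenant L3 external connection for an L3-Bronze container might be defined through the REST API. The tenant, node, path, VLAN, and address values are illustrative assumptions only (not values from this implementation), and the BGP peer definition is omitted.

POST https://{apic_ip_or_hostname}/api/mo/uni/tn-B01_Tenant.xml

<l3extOut name="B01_L3Out">
    <!-- Bind the L3 external network to the tenant private network (VRF) -->
    <l3extRsEctx tnFvCtxName="B01_VRF"/>
    <l3extLNodeP name="borderLeafs">
        <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="10.0.0.101"/>
        <l3extLIfP name="toAsr9k">
            <!-- Routed sub-interface toward the ASR 9000 DC-PE -->
            <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"
                ifInstT="sub-interface" encap="vlan-101" addr="192.168.1.2/30"/>
        </l3extLIfP>
    </l3extLNodeP>
    <!-- External EPG representing outside destinations, referenced by contracts -->
    <l3extInstP name="B01_ExtEPG">
        <l3extSubnet ip="0.0.0.0/0"/>
    </l3extInstP>
    <!-- Enable BGP for this external network -->
    <bgpExtP/>
</l3extOut>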
Silver
Figure 1-7 shows the Silver container logical topology. The Silver tenant accesses its cloud service via L3VPN. Each Silver tenant container has three EPGs for tenant workloads, mapped to three different BDs, allowing for three-tier applications. Additionally, the Silver tenant has a server load balancer implemented with the NetScaler 1000V, configured via the APIC using a service graph and the NS1000V device package. Contracts on the ACI fabric can be used to enforce security policy between tiers as well as between the outside and the tiers.
This Silver service tier provides the following services:
Routing (iBGP) from the ACI fabric to the Data Center Edge ASR 9000 router.
Default gateway on the ACI fabric, with contracts and filters to implement policy between tiers.
SLB on the NetScaler 1000V to provide L4-L7 load balancing and SSL offload services to tenant workloads (the service graph attachment is sketched below).
Medium QoS SLA with one traffic class: premium data class for in-contract traffic.
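As an illustration of how a service graph ties into tenant policy, the following is a minimal sketch of a contract whose subject references a load balancer service graph. The filter name is an illustrative assumption, and the abstract graph (SG-lb) and the device cluster it uses must already be defined; those definitions are not shown here.

POST https://{apic_ip_or_hostname}/api/mo/uni/tn-Si01_Tenant.xml

<vzBrCP name="out-to-t1">
    <vzSubj name="web-traffic">
        <!-- Match traffic using a filter defined in this tenant (or in tenant common) -->
        <vzRsSubjFiltAtt tnVzFilterName="allow-http"/>
        <!-- Attach the load balancer service graph to this contract subject -->
        <vzRsSubjGraphAtt tnVnsAbsGraphName="SG-lb"/>
    </vzSubj>
</vzBrCP>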
Figure 1-7    Silver Container (redundant boxes not shown)
The customer VRF on the ASR 9000 connects via an L3 external connection to the SiXX tenant in ACI (SiXX_VRF). Per-tenant resources: APIC tenant: 1; ACI VRF (private network): 1; DC-PE VRF: 1; L3-ext: 1 (iBGP or static); BDs: 4; subnets: 5 (3 tenant tiers, 1 VIP, 1 SNIP); EPGs: 4 + 1 external EPG; contracts: 3 (outside to t1, t1 to t2, t2 to t3); server leaf VLANs: 8 (EPG + BD); border leaf VLANs: 1; service graph: 1 (SG-lb) with 3 instances (SG-lb-t1, SG-lb-t2, SG-lb-t3). The NetScaler 1000V uses the VIP and SNIP subnets, and tenant VMs attach via AVS to T1_EPG, T2_EPG, and T3_EPG on the T1, T2, and T3 subnets.
E-Gold
Figure 1-8 shows the Expanded Gold (E-Gold) container logical topology.
Figure 1-8    Expanded Gold Container (redundant boxes not shown)
The GoXX tenant uses two L2 external connections toward the DC-PE, along with a pubout BD/EPG, a pvtdmz BD/EPG, and the GoXX_PVT_VRF and GoXX_DMZ_VRF private networks. Per-tenant resources: APIC tenant: 1; ACI VRFs (private networks): 2 (reserved for future use); DC-PE VRF: 1; L2-ext: 2; BDs: 9; subnets: 0 (L2-only model used); EPGs: 6 + 2 external EPGs; server leaf VLANs: 8 (EPG + BD); border leaf VLANs: 2; service graphs: 3, with instances SG-fw-slb: 2 (SG-fw-slb-pvt, SG-fw-slb-dmz), SG-fw-fw: 1, and SG-fw: 4 (SG-fw-pvt-t1, SG-fw-pvt-t2, SG-fw-pvt-t3, SG-fw-dmz). A NetScaler 1000V instance serves each zone, and tenant VMs attach via AVS to T1_EPG, T2_EPG, and T3_EPG in the PVT zone and to DMZ_EPG in the DMZ.
This E-Gold service tier provides the highest level of sophistication by including the following services:
Default gateway for the VMs on their respective zone firewall; that is, for the PVT zone BD/EPGs the default gateway is on the tenant's PVT firewall instance, and the DMZ BD/EPG VMs have their default gateway on the tenant's DMZ firewall instance. A default gateway on the ASA is required in this design to use the APIC integration for configuring the ASA firewalls in routed mode.
Two zones, PVT and DMZ, in which to place workloads. Each zone has its own BD/EPGs, which are basically L2 segments.
Either a physical ASA 5585-X in multi-context mode, with each tenant getting dedicated contexts, or a dedicated virtual ASAv per tenant can be used.
IPsec remote-access VPN using the ASA or ASAv, to provide Internet-based secure connectivity for end users to their virtual data center resources. This was not implemented because the device package support to configure it via APIC is not yet available.
Stateful perimeter and inter-zone firewall services to protect the tenant workloads via ASA or ASAv.
Network Address Translation (NAT) on the ASA/ASAv, to provide static and dynamic NAT services to RFC 1918 addressed VMs. However, configuring NAT via the APIC/device package has limitations that do not allow it at this time; enhancements are in progress and will be supported in future releases.
SLB on the NetScaler 1000V to provide L4-L7 load balancing and SSL offload services to tenants, with one NetScaler 1000V instance for each zone.
Higher QoS SLA and three traffic classes: real-time (VoIP), call signaling, and premium data. Note that within the data center, the call signaling and premium data traffic travel in the same ACI class; however, in the MPLS WAN, three separate classes are used, one each for VoIP, call signaling, and data.
The two zones can be used to host different types of applications to be accessed through different
network paths.
The two zones are discussed in detail below.
PVT Zone: The PVT (private) zone and its VMs can be used for cloud services accessed through the customer MPLS-VPN network. The customer sites connect to the provider MPLS core, and the customer has its own MPLS-VPN (Customer-VRF). The Data Center Edge router (ASR 9000 provider edge) connects to the customer sites through the MPLS-VPN (via the Customer-VRF). This Customer-VRF is connected through a VLAN on a virtual port channel to a pair of ACI leafs and configured as an L2 external connection in ACI. This L2 external connection extends a bridge domain that also contains the EPG for the outside interface of the PVT firewall (ASA). From the perspective of the ASA, the next hop is the ASR 9000 sub-interface, which is in the Customer-VRF. The ASA is either a dedicated ASAv or an ASA 5585 context. The PVT BDs are L2-only BDs, that is, with no unicast routing, and the default gateway for the VMs in the PVT zone BD/EPGs is on the PVT ASA.
DMZ: The Intercloud ACI 1.0 E-Gold container supports a DMZ where tenants can place VMs, to isolate and secure the DMZ workloads from the PVT workloads and to enable users on the Internet to access the DMZ-based cloud services. The ASR 9000 provider edge WAN router is also connected to the Internet, and a shared (common) VRF instance (usually the global routing table) exists for all E-Gold tenants to connect to (either encrypted or unencrypted). The ASR 9000 Internet table is connected via an ASR 9000 sub-interface to the tenant's dedicated DMZ firewall; the sub-interface VLAN is trunked over vPC to the ACI fabric and mapped to an L2 external connection on the DMZ-external BD. On this DMZ-external BD, an EPG exists that is mapped to the external interface of the DMZ ASA firewall. Thus, the DMZ firewall outside interface and the ASR 9000 sub-interface in the global table are L2 adjacent and IP peers. The ASR 9000 has a static route for the tenant public addresses pointing to the DMZ ASA firewall outside interface address, and redistributes static routes into BGP for advertisement toward the Internet. The DMZ ASA firewall has a static default route pointing back to the ASR 9000 sub-interface, as well as static routes toward the L3VPN and PVT subnets pointing back to the PVT firewall.
The DMZ can be used to host applications like proxy servers, Internet-facing web servers, email servers, and so on. The DMZ consists of one L2 segment implemented using a BD and an EPG, with the default gateway on the DMZ ASA firewall. For SLB service in the DMZ, there is a NetScaler 1000V. For RA-VPN service, the integration with APIC to configure this service does not currently exist, so manual configuration of the ASAv is required.
As an option, the E-Gold container may be deployed in a simplified manner with only one zone: either the PVT zone only, with the PVT firewall and an L3VPN connection (previous VMDC designs called this the Gold container), or the DMZ only, with the DMZ firewall, access via the Internet only, and additional secure access via RA-VPN (similar to the Zinc container in the previously released VMDC VSA 1.0 solution).
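For reference, the following is a minimal sketch of how an L2 external connection such as the ones used in this container might be expressed through the REST API. The tenant, BD, VLAN, and vPC path names are illustrative assumptions only, and the underlying vPC interface policies must already exist.

POST https://{apic_ip_or_hostname}/api/mo/uni/tn-Go01_Tenant.xml

<l2extOut name="Go01_PVT_L2Out">
    <!-- Extend this bridge domain (shared with the ASA outside EPG) toward the DC-PE -->
    <l2extRsEBd tnFvBDName="Go01_pvtout_BD" encap="vlan-2001"/>
    <l2extLNodeP name="borderLeafs">
        <l2extLIfP name="toAsr9k-vPC">
            <!-- vPC path toward the ASR 9000 nV cluster -->
            <l2extRsPathL2OutAtt tDn="topology/pod-1/protpaths-101-102/pathep-[vpc_to_asr9k]"/>
        </l2extLIfP>
    </l2extLNodeP>
    <!-- External EPG representing the DC-PE side, referenced by contracts -->
    <l2extInstP name="Go01_PVT_ExtEPG"/>
</l2extOut>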
Copper
Figure 1-9 shows the Copper container logical topology. The Copper tenant gets one zone in which to place workloads and just one L2 segment for tenant VMs, implemented with one ACI BD/EPG; the default gateway is on the shared ASA firewall. Multiple Copper tenants share the same firewall, with each tenant getting a different inside interface but sharing the same outside/Internet-facing interface. The ASA security policy restricts access to the tenant container from outside or from other tenants, and provides NAT to reduce public address consumption.
Routing (static or eBGP) is used from the shared ASA firewall to the Data Center provider edge ASR 9000, to connect all of the Copper tenant virtual data centers to the global table (Internet) instance on the ASR 9000 router and to advertise all of the tenants' public IP addresses toward the Internet.
The ASA firewall security policy allows only restricted services and public IPs to be accessed from outside.
The shared ASA context is configured manually; that is, ACI service graphs are not utilized. (A sketch of the static EPG-to-VLAN mapping used for Copper tenants follows.)
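Because there is no APIC integration with the OpenStack compute pod, Copper tenant EPGs are bound statically to VLANs on the leaf ports. The following is a minimal sketch of such a static binding through the REST API; the tenant, EPG, BD, path, and VLAN values are illustrative assumptions only.

POST https://{apic_ip_or_hostname}/api/mo/uni/tn-Copper_Tenant.xml

<fvAp name="Copper_AP">
    <fvAEPg name="Cu1_EPG">
        <fvRsBd tnFvBDName="Cu1_BD"/>
        <!-- Static binding of the EPG to a VLAN on the vPC toward the compute nodes -->
        <fvRsPathAtt tDn="topology/pod-1/protpaths-103-104/pathep-[vpc_to_openstack]"
            encap="vlan-3001" mode="regular" instrImmedcy="immediate"/>
    </fvAEPg>
</fvAp>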
Figure 1-9    Copper Container (redundant boxes not shown)
The ASR 9000 nV global (Internet) table peers over eBGP (or static routing) with a shared ASA context, which is the default gateway for all Copper tenants; no service graphing is used for this context. A Cu_VRF is created in ACI to hold the Copper bridge domains in their own VRF but is not used for routing. The Cuout-bd/Cuout-EPG faces the ASA outside interface, and per-tenant bridge domains (CuXX-bd, CuOS-bd) carry Cu1_EPG, Cu2_EPG, and CuOS_EPG using static VLANs. In the compute layer, port-groups are configured manually on the Nexus 1000V for KVM (no APIC integration). Many tenants (Cu1 to Cuxxxx) each get an L2 segment whose VMs use a tenant-specific sub-interface on the ASA as their default gateway.
Solution Components
Table 1-2 and Table 1-3 list the Cisco and third-party product components for this solution, respectively.

Table 1-2    Cisco Products

Product                       Hardware                                             Software
ASR 9000                      ASR9010-nV, A9K-RSP440-SE, A9K-24x10GE-SE,           IOS-XR 5.1.2
                              A9K-MOD80-SE, A9K-MPA-4X10GE
APIC                          APIC-CLUSTER-L                                       1.0(2j)
Nexus 9500                    Nexus 9508, 9736PQ                                   11.0(2j)
Nexus 9300                    Nexus 9396, Nexus 93128                              11.0(2j)
UCS 6200                      UCS-FI-6296UP                                        2.2(1d)
UCS B-Series Blade Servers    UCS-5108, B200-M3, UCS VIC 1240/1280, UCS 2204XP     2.2(1d)
UCS C-Series Rack Servers     C240-M3, C220-M3                                     CIMC 2.0(1a)
Nexus 2000 FEX                Nexus 2232PP                                         11.0(2j)
ASA Firewall                  ASA-5585-X, ASAv                                     9.3.1; device package 1.0.1
NetScaler 1000V               Virtual appliance                                    10.1; device package 10.5
Cisco AVS                     Virtual switch                                       4.2(1)SV2(2.3)

Table 1-3    Third-Party Products

Product             Description                 Hardware    Software
VMware ESXi         Hypervisor                  N/A         vSphere 5.1
VMware vCenter      Management tool             N/A         vSphere 5.1
NetApp FAS3250      Storage array               FAS3250     8.2.2 cDOT
Linux               Tenant VMs                              CentOS
Linux               OpenStack nodes                         Ubuntu 14.04 LTS
OpenStack           Cloud platform                          Icehouse release
Ceph                Software-defined storage                0.80.5
CHAPTER 2
APIC Policy
Figure 2-1    APIC Policy Model
Bridge Domains (BD); Contracts, the rules that govern the interactions of EPGs and determine how applications use the network; Contexts (VRFs), unique L3 forwarding domains; and their relation to application profile(s) and their policies.
In the ACI framework, a tenant is a logical container (or a unit of isolation from a policy perspective)
for application policies that enable an administrator to exercise domain-based access control. Tenants
can represent a customer in a service provider setting, an organization or domain in an enterprise setting,
or just a convenient grouping of objects and policies. Figure 2-3 provides an overview of the tenant portion of the management information tree (MIT). The tenant managed object is the basis for the Expanded Gold tenant container.
Figure 2-3    Tenant Portion of the MIT
The Tenant object and its contained and related objects: Outside Network, Application Profile, Bridge Domain, Context (VRF), Contract, Filter, Subnet, Endpoint Group, and Subject. Legend: solid lines indicate that objects contain the ones below; dotted lines indicate a relationship; 1:n indicates one to many; n:n indicates many to many.
Note
In the JSON or XML data structure, the colon after the package name is omitted from class names and
method names. For example, in the data structure for a managed object of class zzz:Object, label the
class element as zzzObject.
Managed objects can be accessed with their well-defined address, the REST URLs, using standard HTTP
commands. The URL format used can be represented as follows:
{http|https}://host[:port]/api/{mo|class}/{dn|className}.{json|xml}[?options]
Where:

host            Specifies the hostname or IP address of the APIC.
port            Optionally specifies the port number used to communicate with the APIC.
api             Indicates that the message is directed to the API.
mo|class        Specifies whether the target of the operation is a managed object or an object class.
dn|className    Specifies the DN of the targeted managed object, or the name of the targeted class.
json|xml        Specifies whether the encoding format of the command or response HTML body is JSON or XML.
?options        Optionally specifies one or more filters, selectors, or modifiers for the query.

Note

By default, only HTTPS is enabled on APIC. HTTP or HTTP-to-HTTPS redirection, if desired, must be explicitly enabled and configured. HTTP and HTTPS can coexist on APIC.
The API supports HTTP POST, GET, and DELETE request methods as follows:
An API command to create or update a managed object, or to execute a method, is sent as an HTTP
POST message.
An API query to read the properties and status of a managed object, or to discover objects, is sent
as an HTTP GET message.
An API command to delete a managed object is sent as either an HTTP POST or DELETE message.
In most cases, a managed object can be deleted by setting its status to deleted in a POST operation.
The HTML body of a POST operation must contain a JSON or XML data structure that provides the
essential information necessary to execute the command. No data structure is required with a GET or
DELETE operation.
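As an illustration of the URL format, the following are hypothetical examples that read a managed object by its DN and query all objects of a class (the tenant name is an example value; query-target is one of the standard query options):

GET https://{apic_ip_or_hostname}/api/mo/uni/tn-ExampleTenant.xml?query-target=children

GET https://{apic_ip_or_hostname}/api/class/fvTenant.json

The first request returns the child objects of the tenant named ExampleTenant in XML format; the second returns all managed objects of class fvTenant in JSON format.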
Note
The API is case sensitive. When sending an API command with 'api' option in the URL, the maximum
size of the HTML body for the POST request is 1 MB.
The API model documentation is embedded within APIC, accessible with the following URL:
https://{apic_ip_or_hostname}/doc/html/
User authentication and session maintenance use the following API methods:
aaaLogin: Sent as a POST message to log in a user and open a session. The message body contains an aaa:User object with the name and password attributes, and the response contains a session token and cookie.
aaaRefresh: Sent as a GET message with no message body, or as a POST message with the aaaLogin message body, this method resets the session timer. The response contains a new session token and cookie.
aaaLogout: Sent as a POST message to log out the user and close the session. The message body contains an aaa:User object with the name attribute. The response contains an empty data structure.
The example below shows a user login message that uses an XML data structure. The example uses a user ID with a login domain, in the following format:
apic#{loginDomain}\{userID}
POST https://{apic_ip_or_hostname}/api/aaaLogin.xml
<aaaUser name="apic#my_login_domain\my_user_id" pwd="my_pA5sW0rd" />
After the API session is authenticated and established, retrieve and send the token or cookie with all subsequent requests for the session.
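For example, a subsequent query might carry the token returned by aaaLogin in the APIC-cookie HTTP cookie (the tenant name is an example value):

GET https://{apic_ip_or_hostname}/api/mo/uni/tn-ExampleTenant.xml
Cookie: APIC-cookie=<token from the aaaLogin response>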
Figure 2-4    Service Graph Inserted Between EPGs
A service graph containing firewall (FW) and SSL function nodes and an output node, inserted between a source EPG and a destination EPG.
A service graph is inserted between source/provider EPG and destination/consumer EPG by a contract.
After the service graph is configured on APIC, APIC automatically configures the services according to
the service function requirements that are specified in the service graph. APIC also automatically
configures the network according to the needs of the service function that is specified in the service
graph. A physical or virtual service appliance/device performs the service function within the service
graph. A service appliance, or several service appliances, render the services required by a service graph.
A single service device can perform one or more service functions.
APIC offers a centralized touch point for configuration management and automation of L4-L7 services
deployment, using the device package to configure and monitor service devices via the southbound
APIs. A device package manages a class of service devices, and provides APIC with information about
the devices so that the APIC knows what the device is and what the device can do. A device package is
a zip file that contains the following:
Device Script: The device script, written in Python, manages communication between the APIC and the service device. It defines the mapping between APIC events and the function calls that are defined in the device script. The device script converts the L4-L7 service parameters to the configuration that is downloaded onto the service device.
Device Specification XML: An XML file that describes the specification for the service device, including device information and the functions provided by the device, along with the declaration of the L4-L7 service parameters needed by the device.
Figure 2-5 shows the APIC service automation and insertion architecture through the device package.
Figure 2-5    APIC Service Automation and Insertion Through the Device Package
The device package (device specification XML, device scripts, and supporting files) is uploaded to the APIC; the policy manager parses the device specification XML, and the package contents run in a Linux namespace created for the device package.
After a unique device package is uploaded to APIC, APIC creates a namespace for it. The content of the device package is unzipped and copied to the namespace. The device specification XML is parsed, and the managed objects defined in the XML are added to the APIC's managed object tree, which is maintained by the policy manager. The Python scripts defined in the device package are launched within a script wrapper process in the namespace. Access by the device script to the APIC file system is restricted.
Multiple versions of a device package can coexist on the APIC, because each device package version
runs in its own namespace. Administrators can select a specific version for managing a set of devices.
The following REST request uploads the device package on APIC. The body of the POST request should
contain the device package zip file being uploaded. Only one package is allowed in a POST request:
POST https://{apic_ip_or_hostname}/ppi/mo.xml
Note
When uploading a device package file with 'ppi' option in the URL, the maximum size of the HTML
body for the POST request is 10 MB.
L4 to L7 Service Parameters
The XML file within the device package describes the specification for the service device. This
specification includes device information as well as various functions provided by the service device.
This XML specification contains the declaration for the L4-L7 service parameters needed by the service
device. The L4-L7 service parameters are needed to configure various functions that are provided by the
service device during service graph instantiation.
You can configure the L4-L7 service parameters as part of the managed objects such as bridge domains,
EPGs, application profiles, or tenant. When the service graph is instantiated, APIC passes the parameters
to the device script that is within the device package. The device script converts the parameter data to
the configuration that is downloaded onto the service device. Figure 2-6 shows the L4-L7 service
parameters hierarchy within a managed object.
Figure 2-6    L4-L7 Service Parameters [Figure not reproduced: the parameter hierarchy within a managed object; a vnsFolderInst can contain vnsParamInst objects, nested vnsFolderInst objects, and vnsCfgRelInst references.]
The vnsFolderInst is a group of configuration items that can contain vnsParamInst and other nested
vnsFolderInst. A vnsFolderInst has the following attributes:
Key: Defines the type of the configuration item. The key is defined in the device package and can
never be overwritten. The key is used as a matching criterion as well as for validation.
Name: Defines the user-defined string value that identifies the folder instance.
The value of this field can be any to allow this vnsFolderInst to be used for all nodes in a service
graph.
The vnsParamInst is the basic unit of configuration that defines a single configuration
parameter. A vnsParamInst has the following attributes:
Key: Defines the type of the configuration item. The key is defined in the device package and can
never be overwritten. The key is used as a matching criterion as well as for validation.
Name: Defines the user-defined string value that identifies the parameter instance.
Value: Holds the value for a given configuration item. The value of this attribute is service-device
specific and dependent on the Key. The value of this attribute is case sensitive.
The vnsCfgRelInst allows one vnsFolderInst to refer to another vnsFolderInst. A vnsCfgRelInst has the
following attributes:
Key: Defines the type of the configuration item. The key is defined in the device package and can
never be overwritten. The key is used as a matching criterion as well as for validation.
Name: Defines the user-defined string value that identifies the configuration relationship/reference
instance.
targetName: Holds the path for the target vnsFolderInst. The value of this attribute is case
sensitive.
Note
By default, if the L4-L7 service parameters are configured on an EPG, APIC picks up only the parameters
configured on the provider EPG; parameters configured on the consumer EPG are ignored. The
vnsRsScopeToTerm relational attribute of a function node or a vnsFolderInst specifies the terminal node
where APIC picks up the parameters.
When a service graph is instantiated, APIC resolves the configuration parameters for a service graph by
looking up the L4-L7 service parameters from various MOs. After resolution completes, the parameter
values are passed to the device script. The device script uses these parameter values to configure the
service on the service device. Figure 2-7 shows the L4-L7 service parameter resolution flow.
Figure 2-7    [Figure not reproduced: the L4-L7 service parameter resolution flow from start to end.]
Note
By default, the scopedBy attribute of an L4-L7 service parameter is set to epg; APIC starts the parameter
resolution from the EPG, walking up the MIT to the application profile and then to the tenant to resolve
the service parameter.
The flexibility of being able to configure L4-L7 service parameters on various MOs allows an
administrator to configure a single service graph and then use it as a template for instantiating different
service graph instances for different tenants or EPGs, each with its own L4-L7 service parameters.
A simple service graph template has a function node representing a single service function, and two
terminal nodes that connect the service graph to the contract. When mapped to a service device, the
service graph results in a service device with one external interface and one internal interface. Utilizing
multiple logical device contexts, and the flexible configuration of L4-L7 service parameters on various
MOs with the ctrctNameOrLbl, graphNameOrLbl, and nodeNameOrLbl attributes set to the appropriate
contract, service graph, and function node, a service appliance with more than two interfaces can be
modeled by mapping multiple service graph instances onto the same service device.
Figure 2-8    [Figure not reproduced: a single ASA firewall (asa_fw) with an outside interface toward outside_epg and three inside interfaces (inside1_if, inside2_if, inside3_if) toward inside1_epg, inside2_epg, and inside3_epg, instantiated through three contracts and service graph instances.]
Figure 2-8 shows the setup where a single service graph template (with a single ASA firewall function
node) is used to instantiate three service graph instances onto a single ASA security appliance. The
L4-L7 service parameters for each service graph instance could be configured on the inside1_epg,
inside2_epg, and inside3_epg provider EPG respectively, with the ctrctNameOrLbl attribute set to
contract1, contract2 and contract3 respectively.
The L4-L7 service parameters for modeling the outside_if related configurations could either be
repeated three times on inside1_epg, inside2_epg, and inside3_epg EPGs; or more conveniently
configured on the application profile or tenant managed object, with the ctrctNameOrLbl attribute set to
any for APIC to be able to use the parameters for all three service graph instances.
Note
The L4-L7 service parameters for modeling the outside_if related configuration are not configured on
the outside_epg consumer EPG because, by default, APIC does not pick up parameters configured on the
consumer EPG.
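As an illustration, the following sketch attaches L4-L7 service parameters to the inside1_epg provider EPG for contract1; the tenant, application profile, and graph names, as well as the folder and parameter keys, are hypothetical because the actual keys are defined by the ASA device package.
<fvTenant name="tenant1">
    <fvAp name="app1">
        <fvAEPg name="inside1_epg">
            <!-- Hypothetical folder/parameter keys; real keys come from the ASA device package specification -->
            <vnsFolderInst key="InIntfConfig" name="internal_if" ctrctNameOrLbl="contract1"
                graphNameOrLbl="asa_graph" nodeNameOrLbl="asa_fw">
                <vnsParamInst key="security_level" name="security_level" value="100" />
            </vnsFolderInst>
        </fvAEPg>
    </fvAp>
</fvTenant>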
CHAPTER 3
1. Four (4) Nexus 9508 spine nodes are used. A minimum of two spine nodes is needed for high
availability (HA), and adding more nodes increases resiliency by reducing the impact when a
single spine node fails.
2. Each Nexus 9300 series leaf node attaches to each spine node via a single 40G link. With four spine
nodes, each leaf node has 4 x 40 Gbps = 160 Gbps of bandwidth to the fabric. Each leaf has twelve
40G uplink ports on the N9K-M12PQ daughter card, or six 40G uplink ports on the N9K-M6PQ.
3. The number of ports on the spine nodes defines the size of the fabric in terms of the number of leaf
nodes. This can scale quite high with N9K-X9736PQ spine line cards; each card has 36 40G ports.
4. The spine nodes themselves connect only to the leaf nodes, and each leaf node connects to all spine
nodes. Leaf-to-spine connections are implemented with QSFP+ BiDi optics and use dual-strand
fiber connections.
7. All data center devices are attached to Nexus 9300 leaf nodes using the 10 Gbps ports. This includes
compute, storage, service nodes, and the data center provider edge (PE) ASR 9000.
8. Some leaf nodes can be dedicated to certain roles, such as border leaf or service leaf, but in this
implementation every leaf node can be used in any role. The technology itself does not require
dedicating leaf nodes to specific roles, although it may be desirable for operational reasons.
9. Devices attach to the leaf nodes using different resiliency models. For L2 connections, virtual
port channels (vPC) are used.
10. For L3 connections, two leaf nodes run open-jaw connections (no diagonal connections) with
routing adjacency to two external routers for high availability, or, as in this implementation, to the
same ASR 9000 nV cluster with links to different ASR 9000 chassis. The leaf nodes can run iBGP,
OSPF, or static routing for each tenant to the external router over an interface, sub-interface, or SVI.
In this implementation, routing is set up over SVIs on port channel trunks, with each tenant on a
VLAN. On the external router side, sub-interfaces are used for each of these VLANs and are placed
in the tenant-specific VRF for tenant separation.
11. Blade servers attach through fabric interconnects, which connect to the leaf nodes via vPC. Rack
servers attach either directly or through FEX. The Nexus 93128TX is oversubscribed, and 96 hosts
can be attached to a pair of 93128TXs with RJ-45 based 10GBASE-T connections. The hosts need
to have matching port types for the 10G connection; with Cisco C-Series servers, VIC 1225T
adapters are used.
12. Storage, a NetApp FAS 3250 cluster, is attached to the leaf nodes as well. The NetApp controllers
in a cluster are distributed; in this implementation, four NetApp controllers are used. Each pair of
FAS 3250s attaches to a Nexus 9300 leaf node pair using vPC.
13. ASA 5585 clusters are attached to the Nexus 9300 leaf nodes. The cluster control link (CCL) is
attached using vPC to a pair of leaf nodes, and the spanned EtherChannel data connections go to
the same Nexus 9300 leaf node pair.
External Connectivity to PE
The ACI Fabric is connected to the ASR 9000 nV using a single link, a port channel (PC), or a virtual port
channel (vPC), depending on the tenant container model. On the border leaf nodes, these links are
configured as Layer 2 ports to carry the VLAN that is used by the tenant for external connectivity. These
VLANs terminate on sub-interfaces configured on the ASR 9000 Bundle-Ethernet interfaces.
Note
Unlike the traditional vPC architecture on Nexus switches, the ACI implementation of vPC does not
require a peer link between the vPC peers.
Figure 3-2    [Figure not reproduced: border leaf pairs v6-l3a/v6-l3b and v6-l2a/v6-l2b (Node-105/Node-106) connect to the ASR 9000 nV over vPCs carrying VLAN-xxx, using leaf ports 1/15 and 1/33; the vPC member links terminate on Bundle-Ethernet sub-interfaces BE-9.xxx and BE-10.xxx on the ASR 9000 nV.]
VLAN-xxx corresponds to the VLAN used for external connectivity for a given tenant. These VLANs
are typically defined in a VLAN pool in APIC and associated with a physical domain prior to creating a
tenant. Since the ASR 9000 is in nV cluster mode, it is recommended to distribute the port channel
member links across different ASR 9000 chassis to provide chassis level redundancy.
Note
In Figure 3-2, a single 10G link is used on each border leaf switch; however, you can assign two or more
ports per leaf switch to provide additional link-level redundancy.
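A minimal REST sketch of such a VLAN pool and physical domain is shown below, posted to the APIC REST API (for example, /api/mo/uni.xml); the pool name, VLAN range, and domain name are illustrative only.
<polUni>
    <infraInfra>
        <!-- VLAN pool for tenant external-connectivity VLANs (illustrative range) -->
        <fvnsVlanInstP name="l3_ext_vlans" allocMode="static">
            <fvnsEncapBlk from="vlan-1101" to="vlan-1150" />
        </fvnsVlanInstP>
    </infraInfra>
    <!-- Physical domain that ties the VLAN pool to the border leaf ports -->
    <physDomP name="asr9k_ext_phy">
        <infraRsVlanNs tDn="uni/infra/vlanns-[l3_ext_vlans]-static" />
    </physDomP>
</polUni>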
The following show commands display the physical connectivity between the border leaf and the ASR
9000 for the Gold and L2-Bronze container setup.
RP/0/RSP1/CPU0:v6-pe-NV#sh run int bundle-ether 9
Tue Nov 4 16:02:00.885 EST
interface Bundle-Ether9
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te0/0/0/0
Tue Nov 4 16:04:19.424 EST
interface TenGigE0/0/0/0
description v6-leaf5-9396::e1/35
bundle id 9 mode active
cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te0/0/0/1
Tue Nov 4 16:04:39.875 EST
interface TenGigE0/0/0/1
description v6-leaf6-9396::e1/35
bundle id 9 mode active
!
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE9
Tue Nov 4 09:38:04.700 EST
v6-l3a          Te0/0/0/0[BE9]         120        B,R         Eth1/35
v6-l3b          Te0/0/0/1[BE9]         120        B,R         Eth1/35
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE10
Tue Nov 4 09:38:46.509 EST
v6-l2a          Te1/0/1/2[BE10]        120        B,R         Eth1/33
v6-l2b          Te1/1/1/2[BE10]        120        B,R         Eth1/33
RP/0/RSP1/CPU0:v6-pe-NV#
The following section describes the procedure for creating a vPC in APIC. Border leaf nodes 105 and
106 are used in this example for illustration purposes. On each border leaf, interface e1/35 is used to
create the vPC as shown in Figure 3-2. This vPC connects to Bundle-Ether 9 on the ASR 9000.
Step 1
In the pop-up window enter relevant information as shown in Figure 3-4 and submit the configuration.
Figure 3-4
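Step 1 typically corresponds to creating the vPC explicit protection group that pairs the two border leaves; a minimal REST sketch under that assumption is shown below (the group name and domain ID are illustrative).
<fabricProtPol pairT="explicit">
    <!-- vPC domain pairing border leaf nodes 105 and 106 -->
    <fabricExplicitGEp name="vpc_n105_n106" id="10">
        <fabricNodePEp id="105" />
        <fabricNodePEp id="106" />
    </fabricExplicitGEp>
</fabricProtPol>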
Step 2
LACP Policy
In the pop-up window enter a name for this policy, select "Active" mode and submit the configuration
(Figure 3-6).
Figure 3-6
Step 3
In the pop-up window, enter a name for the policy group and select the LACP policy that was created in
the previous step (Figure 3-8). You may also enable LLDP or CDP as needed.
Figure 3-8
Step 4
Interface Profile
In the pop-up window, enter a name for the interface profile and click on the "+" sign in the Interface
Selector box (Figure 3-9).
Figure 3-10
Provide a name for the access port selector and enter the interface used for the vPC port channel. Select
the interface policy group that was created in the previous step and click OK to close this window. Click
on submit button to finish the interface profile creation (Figure 3-11).
Figure 3-11
Step 5
Figure 3-12
In the next screen, select the interface profile that was created in step 4 and finish the configuration
(Figure 3-13).
Figure 3-13
This completes the vPC configuration from an infrastructure perspective. The vPC can be utilized by all
tenants in the ACI infrastructure.
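The same vPC access policies can also be pushed through the REST API. The following is a minimal sketch covering only the LACP policy and the vPC interface policy group; all names are illustrative, and the interface and switch profiles would be defined in the same way as shown for the compute attachments later in this chapter.
<infraInfra>
    <!-- LACP policy in active mode -->
    <lacpLagPol name="lacp_active" mode="active" />
    <infraFuncP>
        <!-- vPC interface policy group for the link to the ASR 9000 (lagT="node" makes it a vPC) -->
        <infraAccBndlGrp name="vpc_n105_n106_asr9k" lagT="node">
            <infraRsLacpPol tnLacpLagPolName="lacp_active" />
        </infraAccBndlGrp>
    </infraFuncP>
</infraInfra>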
You can issue the following show commands on the leaf switches to see the status of vPC port channel.
Po3          up           success      success                       2901
v6-l3a#
Notice that there are no active VLANs on vPC port channel 2. To see active VLANs, you need to
associate the bridged external EPG of a tenant with the vPC. This is typically done during the creation
of a tenant. For more information, refer to the L2-Bronze and Gold chapters.
[Figure not reproduced: border leaves v6-l3a (Node-105) and v6-l3b (Node-106) connect to the ASR 9000 nV over separate port channels on leaf ports 1/33-34, with SVIs svi01 and svi02; the port channels terminate on Bundle-Ethernet sub-interfaces (BE-5.xxx and BE-6.xxx) on the ASR 9000 nV.]
The following show commands display the bundle Ethernet configuration on the ASR 9000.
RP/0/RSP1/CPU0:v6-pe-NV#sh run int bundle-ether 5
Tue Nov 4 10:04:24.584 EST
interface Bundle-Ether5
mtu 9000
mac-address 4055.3943.f93
load-interval 30
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/0/0/0
Wed Nov 5 10:51:04.750 EST
interface TenGigE1/0/0/0
description v6-leaf5-9396::e1/33
bundle id 5 mode active
cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/1/0/0
Wed Nov 5 10:51:08.356 EST
interface TenGigE1/1/0/0
description v6-leaf5-9396::e1/34
bundle id 5 mode active
cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh run int bundle-ether 6
Tue Nov 4 10:04:30.214 EST
interface Bundle-Ether6
mtu 9000
mac-address 4055.3943.1f93
load-interval 30
!
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/0/0/1
Wed Nov 5 10:53:40.960 EST
interface TenGigE1/0/0/1
description v6-leaf6-9396::e1/33
bundle id 6 mode active
cdp
!
Note
It is recommended to configure a separate MAC address on each bundle Ethernet interface to prevent MAC
address flapping when the same VLAN encapsulation is used on both border leaves for external
connectivity to the ASR 9000 nV.
Step 1
Step 2
The interface policy group configuration step is similar to the steps described in vPC configuration. The
difference is that a Port Channel is selected instead of a vPC as highlighted in Figure 3-16. CDP is
enabled by default.
Figure 3-16
The policy group can be reused on node-106, or a new policy group can be configured as shown in
Figure 3-16.
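In REST terms, the only difference from the vPC case is the lagT attribute of the bundle group. A minimal sketch of a port channel policy group is shown below; the names are illustrative.
<infraInfra>
    <infraFuncP>
        <!-- lagT="link" creates a regular port channel instead of a vPC (lagT="node") -->
        <infraAccBndlGrp name="pc_n105_asr9k" lagT="link">
            <infraRsLacpPol tnLacpLagPolName="lacp_active" />
            <infraRsCdpIfPol tnCdpIfPolName="cdp_enabled" />
        </infraAccBndlGrp>
    </infraFuncP>
</infraInfra>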
Step 3
Figure 3-17
Step 4
Figure 3-19
At this time, the port channel interfaces should come up as displayed in the show command output below.
v6-l3a# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
--------------------------------------------------------------------------------
Group Port-Channel  Type     Protocol  Member Ports
--------------------------------------------------------------------------------
1     Po1(SU)       Eth      LACP      Eth1/33(P)   Eth1/34(P)
2     Po2(SU)       Eth      LACP      Eth1/35(P)
3     Po3(SU)       Eth      NONE      Eth1/3(P)
4     Po4(SU)       Eth      LACP      Eth1/2(P)    Eth1/4(P)
5     Po5(SU)       Eth      NONE      Eth1/1(P)
v6-l3a#
v6-l3b# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
--------------------------------------------------------------------------------
Group Port-Channel  Type     Protocol  Member Ports
--------------------------------------------------------------------------------
1     Po1(SU)       Eth      LACP      Eth1/33(P)   Eth1/34(P)
2     Po2(SU)       Eth      LACP      Eth1/2(P)    Eth1/4(P)
3     Po3(SU)       Eth      LACP      Eth1/35(P)
4     Po4(SU)       Eth      NONE      Eth1/1(P)
5     Po5(SU)       Eth      NONE      Eth1/3(P)
v6-l3b#
The following show commands display the status of the bundle Ethernet interfaces on the ASR 9000.
RP/0/RSP1/CPU0:v6-pe-NV#show bundle brief
Wed Nov 5 10:35:48.201 EST
Name   | IG       | State         | LACP | BFD |     Links     | Local b/w, |
       |          |               |      |     | act/stby/cfgd |    kbps    |
-------|----------|---------------|------|-----|---------------|------------|
BE5    | -        | Up            | On   | Off |   2 / 0 / 2   |  20000000  |
BE6    | -        | Up            | On   | Off |   2 / 0 / 2   |  20000000  |
BE9    | -        | Up            | On   | Off |   2 / 0 / 2   |  20000000  |
BE10   | -        | Up            | On   | Off |   2 / 0 / 2   |  20000000  |
BE11   | -        | Up            | On   | Off |   1 / 0 / 1   |  10000000  |
BE12   | -        | Up            | On   | Off |   2 / 0 / 2   |  20000000  |
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE5
Tue Nov 4 09:40:30.626 EST
v6-l3a          Te1/0/0/0[BE5]         120        B,R         Eth1/33
v6-l3a          Te1/1/0/0[BE5]         120        B,R         Eth1/34
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE6
Tue Nov 4 09:40:34.400 EST
v6-l3b          Te1/0/0/1[BE6]         120        B,R         Eth1/33
v6-l3b          Te1/1/0/1[BE6]         120        B,R         Eth1/34
RP/0/RSP1/CPU0:v6-pe-NV#
Connectivity to Compute
Cisco Integrated Compute Stacks (ICS) can be attached to the ACI fabric directly. Bare metal servers
can be attached directly or using Cisco Nexus 2000 Fabric Extenders. This section details the physical
connectivity and APIC configuration to attach the compute infrastructure to ACI fabric.
[Figure not reproduced: ICS3 and ICS4 UCS 6296 fabric interconnects (ICS3-6296-P1A/P1B and ICS4-6296-P1A/P1B) connect to ACI leaf switches Leaf101-Leaf104 on ports E1/1-4, E1/5-8, E2/1-4, and E2/5-8; UCS chassis ICS3-C1F1-P1, ICS3-C2F1-P1, ICS4-C1F1-P1, and ICS4-C2F1-P1 attach to their respective fabric interconnects.]
The following XML creates the vPC interface policy groups, interface profiles, and switch profile that attach the ICS3 fabric interconnects to leaf nodes 101 and 102.
<infraInfra>
<infraFuncP>
<!-- Access interface policy group: this creates the vPC definition and its protocol policies -->
<infraAccBndlGrp name="vpc_n101_n102_ics3_fi_a" lagT="node">
<infraRsLldpIfPol tnLldpIfPolName="lldp_disabled" />
<infraRsCdpIfPol tnCdpIfPolName="cdp_enabled" />
<infraRsStpIfPol tnStpIfPolName="spt_no_bpdu" />
<infraRsLacpPol tnLacpLagPolName="lacp_active" />
<infraRsAttEntP tDn="uni/infra/attentp-{{vmmAEP}}" />
</infraAccBndlGrp>
<infraAccBndlGrp name="vpc_n101_n102_ics3_fi_b" lagT="node">
<infraRsLldpIfPol tnLldpIfPolName="lldp_disabled" />
<infraRsCdpIfPol tnCdpIfPolName="cdp_enabled" />
<infraRsStpIfPol tnStpIfPolName="spt_no_bpdu" />
<infraRsLacpPol tnLacpLagPolName="lacp_active" />
<infraRsAttEntP tDn="uni/infra/attentp-{{vmmAEP}}" />
</infraAccBndlGrp>
</infraFuncP>
<!-- Access interface profile: this specifies the interfaces to use for the vPC -->
<infraAccPortP name="vpc_n101_n102_ics3_fi_a">
<infraHPortS name="port_members" type="range">
<infraPortBlk name="block2" fromPort="1" toPort="4" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-vpc_n101_n102_ics3_fi_a" />
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="vpc_n101_n102_ics3_fi_b">
<infraHPortS name="port_members" type="range">
<infraPortBlk name="block2" fromPort="5" toPort="8" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-vpc_n101_n102_ics3_fi_b" />
</infraHPortS>
</infraAccPortP>
<!-- Access switch profile: this specifies the leaf switches to use for the vPC -->
<infraNodeP name="vpc_n101_n102_ics3_fi">
<infraLeafS name="101_102" type="range">
<infraNodeBlk name="block0" from_="101" to_="102" />
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-vpc_n101_n102_ics3_fi_a" />
<infraRsAccPortP tDn="uni/infra/accportprof-vpc_n101_n102_ics3_fi_b" />
</infraNodeP>
</infraInfra>
The following APIC screen captures (Figure 3-21 and Figure 3-22) show sample vPC interface policy
groups and interface profiles in the APIC GUI.
Figure 3-21
Another profile (vpc_n101_n102_ics3_fi_b) is created using ports 1/5-8. These interface profiles can be
reused on other ICS, but new interface profiles for connecting ICS4 FI on nodes 103 and 104 have been
created. A vPC interface policy is shown in Figure 3-22.
Figure 3-22
Notice the Attachable Entity profile (AEP) tied to this interface policy. An AEP is configured to deploy
VLAN pools on the leaf switches. A particular VLAN from this pool is enabled on the vPC based on
VM events from VMware vCenter.
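A minimal sketch of such an AEP, associated with the vCenter VMM domain, is shown below; the AEP name is illustrative, and the domain name assumes the ics3_prod_vc VMM domain used elsewhere in this implementation.
<infraInfra>
    <infraAttEntityP name="vmm_aep">
        <!-- Associates the AEP with the VMware VMM domain so its VLAN pool can be deployed on the attached ports -->
        <infraRsDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
    </infraAttEntityP>
</infraInfra>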
Note
The physical topology diagram is shown in Figure 3-23. In this topology, the Cisco Nexus 2000 series
(N2K-C2232PP-10GE) fabric extender is directly attached to ports 31 and 32 on leaf switches 105 and 106.
Figure 3-23    [Figure not reproduced: two N2K-C2232PP fabric extenders (FEX101) attach to N9K-C9396PX leaf switches Leaf105 and Leaf106 on ports E1/31-32; a UCS C-Series server connects to both FEXes in active-backup mode.]
The FEX can be attached to the fabric using one of the following methods:
1. Quick Start
2. Profile Configuration
3. REST API
The quick start configuration method uses a template to attach the FEX to the fabric. This method is
useful for users who are new to the APIC GUI. The profile configuration method is useful for creating FEX
profiles and reusing them across multiple switches when the configuration is identical.
Provisioning can also be done using the REST API, which helps with large-scale provisioning.
These methods are described in the following sections.
Step 1
Figure 3-24
Step 2
Click on the "+" sign to enter the configuration information for FEX uplink ports.
Figure 3-25
Step 3
In the pop-up window, select "Advanced" mode. Configure the switch ID in the box by clicking on the
"+" sign. Enter a name for the switch profile. Choose "96 ports" since this is a Nexus 9396. Enter the
FEX ID and interface selectors.
Figure 3-26
Step 4
Figure 3-28
Profile Configuration
The following steps describe how to attach the FEX to the fabric by configuring FEX profiles and switch
profiles independently. The major steps are:
1. Create a FEX profile.
2. Create an interface selector for the FEX uplink ports and associate it with the FEX profile.
3. Create a switch profile and associate it with the interface selector profile.
Step 1
Select "Create FEX Profiles" from the list. In the pop-up window, enter a name for the FEX Profile and
click on submit button.
Figure 3-30
Notice that this step creates a FEX Policy Group with the same name as the FEX Profile.
Figure 3-31
Step 2
Click on the "+" sign in the Interface Selector box. In the pop-up window, enter a name for the port
selector. Enter the interface IDs, FEX ID and select the FEX Profile that was created in the previous step.
Figure 3-33
Click OK to close this window and submit button to finish the configuration.
Sample XML Code
<infraInfra>
<infraAccPortP descr="PortP Profile: FEX101"
dn="uni/infra/accportprof-FEX101_ifselector" name="FEX101_ifselector" ownerKey=""
ownerTag="">
<infraHPortS descr="" name="FexCard101" ownerKey="" ownerTag="" type="range">
<infraRsAccBaseGrp fexId="101"
tDn="uni/infra/fexprof-FEX101_FexP101/fexbundle-FexBndleP101"/>
<infraPortBlk fromCard="1" fromPort="31" name="block1" toCard="1"
toPort="32"/>
</infraHPortS>
</infraAccPortP>
</infraInfra>
Step 3
Figure 3-34
Provide a name for the switch profile. Click on "+" sign in the Switch Selector box. Provide a name for
the switch selector and select the switch ID from the drop-down list.
Figure 3-35
Click NEXT to go to the next window and select the Interface selector profile that was created in the
previous step. Click FINISH to submit the configuration.
Figure 3-36
The XML code for creating the FEX profile, interface selector, and switch profile is shown in the previous
section. In this section, the power of the REST API to provision an additional FEX in the fabric is
highlighted without repeating steps 1 and 2.
Assume that an identical FEX needs to be connected to ports 31 and 32 on another leaf switch. The FEX
profile and FEX interface selectors do not need to be reconfigured. All that is needed is to configure a new
switch profile with the node ID of the new device and associate it with the interface selector profile that
was already created. In this example, Node-106 is used.
<infraInfra>
<infraNodeP descr="Switch Profile: FEX101" dn="uni/infra/nprof-Node106-FEX101"
name="Node106-FEX101" ownerKey="" ownerTag="">
<infraLeafS descr="" name="FEX101_selector_node106" ownerKey="" ownerTag=""
type="range">
<infraNodeBlk from_="106" name="single0" to_="106"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-FEX101_ifselector"/>
</infraNodeP>
</infraInfra>
The following interface configuration is taken from the bare metal server running Ubuntu 12.04. The
server has two 1G and two 10G interfaces. Bonding is configured in Active-backup mode.
root@v6-bm-1:/etc/network# more interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface, used for management
auto eth2
iface eth2 inet static
address 10.0.35.103
netmask 255.255.255.0
network 10.0.35.0
broadcast 10.0.35.255
gateway 10.0.35.253
dns-nameservers 64.102.6.247
#eth0 is manually configured, and slave to the "bond0" bonded NIC
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
#eth1 ditto, thus creating a 2-link bond.
auto eth1
iface eth1 inet manual
bond-master bond0
# bond0 is the bonding NIC and can be used like any other normal NIC.
# bond0 is configured using manual network information.
# bond0 does not need IP since IP is configured on STCA application
# since bond0 does not have IP, the static route below is not needed
auto bond0
iface bond0 inet manual
address 10.1.1.101
netmask 255.255.255.0
gateway 10.1.1.253
#static route
up route add -net 10.0.0.0/16 gw 10.0.35.253 dev eth2
up route add -net 172.18.0.0/16 gw 10.0.35.253 dev eth2
up route add -net 0.0.0.0/0 gw 10.1.1.253 dev bond0
# pre-up ip link set $IFACE up
# post-down ip link set $IFACE down
bond-mode active-backup
bond-miimon 100
#bond-lacp-rate 1
bond-slaves eth0 eth1
root@v6-bm-1:/etc/network#
[Figure not reproduced: UCS C-Series bare metal servers attach to the ACI fabric through leaf switches Leaf103, Leaf104, Leaf107, and Leaf108 (N9K-C9396PX and N9K-C93128TX models).]
Each server is connected to the fabric via vPC and the ports are bonded to work in active-active mode.
In APIC, a vPC interface policy group is created for each bare metal server. The policy is attached to an
AEP (open_stack_aep) from where the VLANs are assigned. A sample policy is shown in Figure 3-39.
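The policy group for one such server might look like the following sketch; the policy group name is illustrative, while open_stack_aep is the AEP referenced above.
<infraInfra>
    <infraFuncP>
        <!-- vPC interface policy group for one bare metal OpenStack server -->
        <infraAccBndlGrp name="vpc_n107_n108_os_server1" lagT="node">
            <infraRsLacpPol tnLacpLagPolName="lacp_active" />
            <infraRsAttEntP tDn="uni/infra/attentp-open_stack_aep" />
        </infraAccBndlGrp>
    </infraFuncP>
</infraInfra>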
Figure 3-39
In APIC, an interface profile is created with interface selectors and corresponding policy group as shown
in Figure 3-40. In this case servers 1 to 8 are connected to ports 1/38 to 1/44 on nodes 107 and 108.
Figure 3-40
Figure 3-41
Figure 3-42
[Figure not reproduced: ASA 5585-SSP60 cluster units ASA-1 and ASA-2 connect to leaf switches Leaf105 and Leaf106; TenGigabitEthernet 0/6 and 0/8 on each ASA form the cluster control link port channel (Po1), and TenGigabitEthernet 0/7 and 0/9 form the spanned data EtherChannel (Po2), landing on leaf ports 1/1-1/4.]
ASA cluster integration with the ACI Fabric involves the following major steps: configure clustering on
the ASA units, and create virtual port channels (vPC) on APIC to attach the CCL and data port channels.
For procedural guidance on how to configure ASA clustering, refer to the VMDC 3.0.1 Implementation
Guide.
A sample Configuration from System and Admin contexts on the ASA is shown below. The
configuration is identical on both units except for the highlighted parameters.
System Context
!
hostname asa-1
mac-address auto prefix 1
lacp system-priority 1
!
interface Management0/1
!
interface TenGigabitEthernet0/6
channel-group 1 mode on
!
interface TenGigabitEthernet0/7
channel-group 2 mode active vss-id 1
!
interface TenGigabitEthernet0/8
channel-group 1 mode on
!
interface TenGigabitEthernet0/9
channel-group 2 mode active vss-id 2
!
interface Port-channel1
description CCL Interface
lacp max-bundle 8
port-channel load-balance vlan-src-dst-ip-port
!
interface Port-channel2
description Spanned Etherchannel
lacp max-bundle 8
port-channel load-balance src-dst-ip-port
port-channel span-cluster vss-load-balance
!
!
boot system disk0:/asa931-smp-k8.bin
ftp mode passive
cluster group ACI_10
key *****
local-unit ASA-2
# Configure ASA-1 on peer unit
cluster-interface Port-channel1 ip 98.1.1.2 255.255.255.0 # 98.1.1.1 on peer
priority 2
# Configure Priority 1 on peer unit
console-replicate
health-check holdtime 3
clacp system-mac auto system-priority 1
enable
pager lines 24
mtu cluster 1600
no failover
asdm image disk0:/asdm-731.bin
no asdm history enable
arp timeout 14400
no arp permit-nonconnected
!
no ssh stricthostkeycheck
console timeout 0
!
tls-proxy maximum-session 1000
!
admin-context admin
context admin
allocate-interface Management0/1
config-url disk0:/aci-admin.cfg
!
ntp server 172.18.114.20 prefer
username apic password mCrzqDeDrHuidnJf encrypted
username admin password rwXrwfFLI2xesBa/ encrypted
prompt hostname context state
!
jumbo-frame reservation
!
Admin Context
!
hostname admin
names
ip local pool mgmt 10.0.32.69-10.0.32.70 mask 255.255.255.0
!
interface Management0/1
management-only
nameif management
security-level 100
ip address 10.0.32.71 255.255.255.0 cluster-pool mgmt
!
pager lines 24
mtu management 1500
icmp unreachable rate-limit 1 burst-size 1
no asdm history enable
arp timeout 14400
route management 0.0.0.0 0.0.0.0 10.0.32.1 1
user-identity default-domain LOCAL
aaa authentication ssh console LOCAL
aaa authentication http console LOCAL
http server enable
http 10.0.0.0 255.255.0.0 management
http 172.18.0.0 255.255.0.0 management
The next step is to configure vPCs on the APIC. The procedure to bring up vPCs on the leaf switches
is already explained earlier in this chapter, so it is not covered here.
Table 3-1 shows the parameters configured on APIC to bring up a separate vPC to each ASA for
cluster control link (CCL) connectivity.
Table 3-1
Type                         ASA-1                       ASA-2
Interface Policy Group       vpc_n105_n106_asa_ccl1      vpc_n105_n106_asa_ccl2
Interface Profile            n105_n106_asa_ccl1          n105_n106_asa_ccl2
Interface Selector           asa_ccl1_ports              asa_ccl2_ports
Port Members                 e1/1                        e1/3
Switch Profile               vpc_n105_n106_asa5585_ccl
Switch Block                 105-106
AEP                          asa_ccl_aep
Domain (VMM/Physical)        asa_ccl_phy
VLAN Pool                    asa_ccl_vlan_pool
VLAN                         10, 2901
Table 3-2 shows the parameters configured on APIC to bring up a vPC for spanned EtherChannel
connectivity to the ASA cluster.
Table 3-2
Type                         ASA-1/ASA-2
Interface Policy Group       vpc_n105_n106_asa5585_data
Port Members                 1/2, 1/4
Switch Profile               vpc_n105_n106_asa5585_data
Switch Block                 105-106
Connectivity to Storage
In this implementation, shared storage is implemented with NFS. Both VMware and OpenStack servers
use NFS shares implemented on a NetApp FAS 3250 cluster. The following sections describe the
connectivity between the NFS cluster, the servers, and the ACI Fabric.
[Figure not reproduced: NetApp FAS 3250a, 3250b, 3250c, and 3250d controllers connect to ACI leaf switches Leaf101-Leaf104 via ports E3a/E4a and E3b/E4b; UCS-B Series blades reach the storage through the ICS3-6296-P1A/P1B fabric interconnects and chassis ICS3-C1F1-P1 and ICS3-C2F1-P1.]
ACI Fabric configuration for the NetApp controllers is discussed in detail in the next sections.
VPC Configuration
All connections from the NetApp controllers to the ACI Fabric are configured as vPCs; therefore, four
separate vPCs are configured in this implementation. The following are the steps to create vPC port
channels to a pair of NetApp controllers (3250a/3250b).
Step 1
Create a VLAN pool with single VLAN for NFS traffic encapsulation.
<infraInfra>
<fvnsVlanInstP name="nfs_storage" allocMode="dynamic">
<fvnsEncapBlk from="vlan-1000" to="vlan-1000" />
</fvnsVlanInstP>
</infraInfra>
Step 2
Create a physical domain and associate it with the VLAN pool.
<physDomP name="netapp_nfs_phy">
<infraRsVlanNs tDn="uni/infra/vlanns-[nfs_storage]-dynamic" />
</physDomP>
Step 3
Create an AEP.
<infraInfra>
<infraAttEntityP name="netapp_nfs_aep">
<infraRsDomP tDn="uni/phys-netapp_nfs_phy" />
</infraAttEntityP>
</infraInfra>
Step 4
Step 5
Step 6
Figure 3-44 of the APIC GUI shows the switch profile configuration.
Figure 3-44
Step 2
Step 3
<fvTenant name="storage">
<fvAp name="ip_storage">
<fvAEPg name="nfs">
<fvRsBd tnFvBDName="ip_storage" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>
The XML configuration snippet above assumes the Virtual Machine Manager (VMM) vCenter domain
profile 'ics3_prod_vc' is already defined and associates it with this EPG.
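If the NetApp controllers are attached through the physical domain instead, the EPG can reference the physical domain and a static vPC path binding; the sketch below assumes the netapp_nfs_phy domain and NFS VLAN 1000 defined earlier, while the vPC path name is hypothetical.
<fvTenant name="storage">
    <fvAp name="ip_storage">
        <fvAEPg name="nfs">
            <fvRsDomAtt tDn="uni/phys-netapp_nfs_phy" />
            <!-- Static binding of the EPG to the vPC toward a NetApp controller pair, encapsulated in VLAN 1000 -->
            <fvRsPathAtt tDn="topology/pod-1/protpaths-101-102/pathep-[vpc_n101_n102_netapp_a]" encap="vlan-1000" />
        </fvAEPg>
    </fvAp>
</fvTenant>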
OpenStack compute host NFS Access
Refer to the NFS storage access configuration section under the copper tenant chapter for detailed
description.
Step 4
Step 5
Add a contract to the application EPG and the external bridged network. In this implementation, the
built-in contract "default" under the built-in tenant "common" is used. The default contract does not
enforce any filtering and allows all traffic through.
<fvTenant name="storage">
<fvAp name="ip_storage">
<fvAEPg name="nfs">
<fvRsProv tnVzBrCPName="default" />
</fvAEPg>
</fvAp>
<l2extOut name="l2_outside">
<l2extInstP name="outside_network">
<fvRsCons tnVzBrCPName="default" />
</l2extInstP>
</l2extOut>
</fvTenant>
Step 1
1. Create a managed node connectivity group.
2. Create an out-of-band (OOB) management zone under the group and associate it with the default OOB EPG.
3. Create an IP address pool and associate it with the managed node connectivity group.
In this screen, you have the option of configuring OOB address for the switch.
<infraInfra>
<!-- One for each Switch -->
<mgmtNodeGrp name="node101">
<mgmtRsGrp tDn="uni/infra/funcprof/grp-node101"/>
<infraNodeBlk name="default" from_="101" to_="101"/>
</mgmtNodeGrp>
</infraInfra>
Step 2
In this window, you have the option of creating an IP address pool for the managed node or you could
do it separately as in the next step.
<infraInfra>
<!-- One for each Switch -->
<infraFuncP>
<mgmtGrp name="node101">
<mgmtOoBZone name="">
<mgmtRsOobEpg tDn="uni/tn-mgmt/mgmtp-default/oob-default"/>
<!-- The IP address pool is associated in advance -->
<mgmtRsAddrInst tDn="uni/tn-mgmt/addrinst-node101oobaddr"/>
</mgmtOoBZone>
</mgmtGrp>
</infraFuncP>
</infraInfra>
Step 3
Create IP address pool and associate with the managed node connectivity group.
In the navigation pane, right click on "IP Address Pools". Enter the IP address information. Once the
pool is created, go to the managed node connectivity group that was created in the previous step and
associate the IP address pool.
Figure 3-47
<fvTenant name="mgmt">
<fvnsAddrInst name="node101oobaddr" addr="10.0.32.1/24">
<fvnsUcastAddrBlk from="10.0.32.15" to="10.0.32.15"/>
</fvnsAddrInst>
</fvTenant>
Deployment Considerations
The following considerations are recommended.
It is recommended to use 3 or more APIC controllers in the APIC cluster and spread them out to
multiple leaf nodes for resiliency.
Gold, Copper and L2-Bronze containers use L2 extension between ACI Fabric and ASR 9000 nV
cluster. vPC is the recommended configuration.
vPC implementation on ACI Fabric does not require dedicated peer links unlike traditional vPC
implementation on Nexus switches.
On ASR 9000 nV Edge, it is recommended to distribute port channel members across both Chassis
for chassis level redundancy.
On ASR 9000 nV, it is recommended to configure separate MAC addresses on the Bundle Ethernet
interfaces connecting to the ACI Fabric via separate L3 port channels. This prevents MAC address
flapping when the same VLAN encapsulation is used on both border leaves for External Routed
Connectivity to the ASR 9000 nV.
In the ACI environment, vPC connections are not supported on FEX Network Interfaces (NIF).
In the ACI environment, FEX Host Interfaces (HIF) do not support port channel or vPC.
When attached to FEX, bare metal server interfaces can be configured for active-backup
connectivity.
For vPC connectivity to ACI Fabric, bare metal servers can be directly attached to leaf switches.
You can use in-band or out-of-band connectivity in APIC for managing the fabric nodes. In this
implementation, out-of-band is used for management connectivity.
CHAPTER 4
Reference Architecture
Figure 4-1 represents the end-to-end reference architecture for the system with VMware vSphere built
on FlexPod components and network connections for NFS protocol.
Figure 4-1    [Figure not reproduced: NetApp FAS 3250a-d controllers connect to leaf switches Leaf101-Leaf104 via ports E3a/E4a and E3b/E4b; UCS-B Series blades connect through the ICS3 and ICS4 UCS 6296 fabric interconnects (P1A/P1B) and chassis ICS3-C1F1-P1, ICS3-C2F1-P1, ICS4-C1F1-P1, and ICS4-C2F1-P1.]
Two UCS 5108 chassis are connected to two pairs of Nexus 9396 leaf switches via UCS 6296 Fabric
Interconnects.
The uplinks on the FI switches are bundled into port-channels to upstream Nexus 9396 leaf switches
and to management switches with disjoint L2 networks.
The FI switches connect to two Nexus 9396 leaf switches using Virtual Port Channel (vPC) links
that carry both NFS storage traffic and tenant data traffic.
UCS Setup
[Figure not reproduced: UCS setup; two UCS 5108 chassis connect to the UCS 6296 fabric interconnects ICS3-6296-P1A and ICS3-6296-P1B over port channels Po11 and Po22, the fabric interconnect uplinks form Po1 and Po2 to Leaf101 and Leaf102 for tenant data and NFS traffic, and separate management Ethernet links go to the management Nexus 7009 and the DC management network (TACACS, vCenter, DNS, DHCP/TFTP, and vCenter Auto Deploy servers).]
For redundancy, each blade server is configured with four Virtual Network Interface Cards (vNIC) for
access to two disjoint upstream Layer 2 (L2) networks. One pair was used for management and the other
pair was used for all data and NFS storage traffic. On the UCSM, fabric failover for each vNIC is not
enabled. Service profile templates and vNIC templates of updating template type are used to ensure that
the configurations across multiple blade servers are consistent and up-to-date. Figure 4-3 shows the
service-profile configuration for one of the blade servers on the UCSM.
Figure 4-3
Forwarding Modes
Cisco AVS supports two modes of traffic forwarding namely Local Switching (LS) mode and
Non-Switching (NS) mode. These modes are also known as Fabric Extender (FEX) disable mode and
FEX enable mode respectively.
Figure 4-4    [Figure not reproduced: Cisco AVS vLeaf instances on the hypervisors forwarding traffic for VMs in EPG App, with the leaf switch upstream.]
With the FEX disable mode, all intra-EPG traffic is locally forwarded by the Cisco AVS as shown in
Figure 4-4. All inter-EPG traffic is sent to the leaf switch, which in turn forwards it to the appropriate
EPG based on the forwarding policy. This mode supports VLAN or VXLAN encapsulation for
forwarding traffic to the leaf and back. The VLAN encapsulation is locally significant to the Cisco AVS
and leaf switch. If VXLAN encapsulation is used, only the infra-VLAN needs to be available between
the Cisco AVS and the leaf switch.
With the FEX enable mode, both inter and intra EPG traffic is forwarded to the leaf switch. In this mode,
VXLAN is the only allowed encapsulation.
Note
In ICDC ACI 1.0 solution, FEX disable mode (Local Switching) with VLAN encapsulation is validated.
VXLAN encapsulation is not used in this solution since it does not support service graphs.
[Figure not reproduced: APIC uses OpFlex as the out-of-band control channel to both the physical leaf switches and the virtual leaves (vLeaf/AVS) that host the application VMs; VMware vCenter is the hypervisor manager, with separate network admin and server admin roles.]
When the vCenter domain is created, APIC automatically creates a VTEP port-group in vCenter. When
AVS is installed on a host and the host is added to the DVS in vCenter, the VTEP is automatically bound
to the Virtual Machine Kernel (VMK) 1 interface on the host, enabling the OpFlex channel for control
plane communication.
Figure 4-6 shows how vCenter domain is created in APIC. The AVS switching mode and Encapsulation
are set during this process.
Figure 4-6
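For reference, a vCenter VMM domain of this kind can also be created through the REST API. The following is a minimal sketch; the controller address, data center name, credentials, and VLAN pool are placeholders, and the AVS-specific switching mode and encapsulation choices shown in Figure 4-6 are not reproduced here.
<polUni>
    <vmmProvP vendor="VMware">
        <vmmDomP name="ics3_prod_vc">
            <!-- Dynamic VLAN pool used for EPG port-group encapsulation (placeholder name) -->
            <infraRsVlanNs tDn="uni/infra/vlanns-[avs_vlan_pool]-dynamic" />
            <vmmUsrAccP name="vcenter_creds" usr="administrator" pwd="example_password" />
            <vmmCtrlrP name="vcenter1" hostOrIp="10.0.32.50" rootContName="ICS3-DC">
                <vmmRsAcc tDn="uni/vmmp-VMware/dom-ics3_prod_vc/usracc-vcenter_creds" />
            </vmmCtrlrP>
        </vmmDomP>
    </vmmProvP>
</polUni>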
Cisco AVS software release 4.2(1) SV2 (2.3), and later releases.
VMware ESXi hypervisor installed on a physical server (Release 5.1 and 5.5).
Cisco AVS download instructions for VMware ESXi deployments is located at the following URL:
http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/nexus1000/avs/install-upgrade/avs-do
wnloads/Cisco-AVS-Download-Instructions-VMware-ESXi.pdf
Cisco AVS can be installed manually using the ESXi command line interface or using VMware Update
Manager (VUM). After installation, the Cisco AVS hosts need to be added to the distributed virtual
switch. Notice that you can add only one host to the DVS at a time. It is also required to configure a node
policy for Cisco AVS and create a VMware vCenter domain in APIC.
For more information on installation and configuration of AVS, refer to the following URL:
http://www.cisco.com/c/en/us/support/switches/application-virtual-switch/products-installation-guides
-list.html
Step 2
Open the folder under the data center name and click on the virtual switch.
Step 3
Click the Hosts tab and look at the VDS Status and Status fields.
Figure 4-7 is a sample screen shot taken from VMware vCenter. The DVS status is "Up" indicating that
the OpFlex communication has been established with the Leaf switches.
Figure 4-7
Figure 4-8
In this implementation, VMK2 VMK NIC is used for vMotion traffic. This vNIC is mapped to the
vMotion EPG created under VMI tenant.
Figure 4-9 shows vMotion EPG created under tenant VMI.
Figure 4-9
Figure 4-10 shows the mapping between VMK2 vNIC in vCenter and the EPG in APIC. Notice that the
vNIC uses port-group vmotion_epg.
Figure 4-10
The VMK3 VMK NIC is used for ip storage (NFS) traffic and is mapped to an EPG in the storage tenant.
This interface has a static IP configured.
The VMK4 interface is used for VMware fault tolerance and is mapped to vm_ft_epg in the VMI tenant.
Fault Tolerance logging is enabled on this interface and it has a static IP.
Do not enable vMotion on the VMK NIC used for the OpFlex channel.
3. Do not delete or change any parameters for the VMK NIC created for the OpFlex channel.
4. If you delete the OpFlex VMK NIC by mistake, recreate it with the VTEP port-group and configure it for
a dynamic IP address.
[Figure not reproduced: a clustered NetApp storage array built from HA pairs, presented through Storage Virtual Machines SVM1 and SVM2.]
This large pool of data can be presented to hosts as if it was a single storage array, or as many, apparently
independent, storage arrays through secure logical containers known as Storage Virtual Machines, or
SVMs. Each SVM has its own set of storage and logical interfaces to which only it has access, and its
own unique configuration. An SVM may own resources on one, several, or all nodes within the cluster,
and those resources can be moved without disruption between individual cluster nodes.
Figure 4-12 shows the physical layout of the NetApp FAS3250 based storage array connectivity
validated during this implementation.
Figure 4-12    [Figure not reproduced: NetApp FAS 3250a, 3250b, 3250c, and 3250d controllers connect via ports E3a/E4a and E3b/E4b to ACI leaf switches Leaf101-Leaf104, with Nexus 5596 A and Nexus 5596 B switches shown alongside the NetApp FAS3250 storage.]
In this implementation, each tenant category, such as Gold, Silver, Bronze, and Copper, shares a single NFS
SVM per category. Each tenant in the Gold and Silver categories has a separate volume on these SVMs.
Additionally, each Gold tenant gets a dedicated NFS SVM; this is provided only to Gold tenants, since
the Gold service tier is a premium service. It is expected that Gold tenants will run their more secure
workloads from data stores on the dedicated SVM and normal workloads in data stores from shared
SVMs.
Figure 4-13 and Figure 4-14 show a summary of SVM provisioning for Gold and Silver tenants.
Figure 4-13    Gold Tenant Storage Volume Layout [Figure not reproduced: Gold1 and Gold2 tenant VMs use dedicated datastores backed by Gold1 and Gold2 volumes on the Gold1 and Gold2 dedicated SVMs.]
Figure 4-14    [Figure not reproduced: Silver1 and Silver2 tenant VMs use dedicated datastores backed by Silver1 and Silver2 volumes.]
All Bronze tenants share the same volume and data store on a single shared SVM. Copper tenants use
the same layout as Bronze as shown in Figure 4-15.
Figure 4-15    [Figure not reproduced: Bronze tenant VMs share a datastore backed by a Bronze volume on the Bronze shared SVM, and Copper1 and Copper2 tenant VMs share a datastore backed by a Copper volume on the Copper shared SVM.]
Step 1
Create the base SVM. As part of this step, the SVM name, data protocols, client services, root volume
aggregate, and root volume security style are all defined.
vserver create -vserver svm_aci_gold_tenant2 -rootvolume svm_aci_gold_tenant2_rootvol
-aggregate aggr_gold_dedicated_SAS_flash -ns-switch file -nm-switch file
-rootvolume-security-style unix
Step 2
Enable NFS on the SVM. The required parameters include protocols, versions enabled.
vserver nfs create -vserver svm_aci_gold_tenant2 -access true -v3 enabled
Step 3
Create the logical interfaces (LIFs). The required parameters include LIF name, home node, home port,
failover-group, IP address, and netmask. Refer to VMDC VSA 1.0.1 implementation guide for details
on creating fail over groups.
network interface create -vserver svm_aci_gold_tenant2 -lif nfs1 -role data
-data-protocol nfs -home-node vmdc-3250a -home-port a0a-1000 -address 10.0.40.212
-netmask 255.255.255.0 -status-admin up -use-failover-group enabled -failover-group
data-1000
Step 4
Create export-policy and rules to allow access from NFS subnets to the NFS SVMs
vserver export-policy create -policyname vmware_mounts -vserver svm_aci_gold_tenant2
vserver export-policy rule create -vserver svm_aci_gold_tenant2 -policyname
vmware_mounts -clientmatch 10.0.40.0/24 -rorule sys -rwrule sys -superuser sys
-protocol nfs
Step 5
Create volume of size 1TB, set the permissions and turn off the volume snapshots.
volume create -vserver svm_aci_gold_tenant2 -volume svm_aci_gold_tenant2_vol01
-aggregate aggr_gold_dedicated_SAS_flash -size 1TB -state online -type RW -policy
vmware_mounts -security-style unix -unix-permissions ---rwxr-xr-x -junction-path
/svm_aci_gold_tenant2 -space-guarantee none -percent-snapshot-space 0%
Step 6
NFS Resiliency
In this implementation, link/path level resiliency between server and storage has been validated. There
are a total of 16 paths to the storage from the server. Following the best practices for NFS on VMware,
redundancy is built into the links (connecting server to storage), the storage switches (redundant ACI
leaf switches), and the storage controllers (there are 4 nodes in the NetApp cluster). In this topology there
are four nodes, and any set of disk drives is controlled by a pair of nodes (an HA pair).
A LIF is a logical network interface that virtualizes SAN or NAS network connections. LIFs are tied to
an SVM and mapped to physical network ports, interface groups, or VLANs (when tagging is used) on
the controller. Because LIFs are virtualized, a LIF address remains the same even when a LIF is migrated
to another physical port on the same or a different node within the cluster. NAS LIFs can automatically
fail over if the current physical interface to which it is assigned fails (whether due to a cable, switch port,
interface port, or interface card failure), or can work in conjunction with storage failover of an HA pair
if the cluster node hosting the LIF goes down. LIFs can also be manually migrated to another physical
port within the cluster.
The state of the LIF used in this implementation in normal operation is shown below
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250a a0a-1000
vmdc-3250-cluster::>
vmdc-3250-cluster::>
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a
Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: full
Network Ports: e3a, e3b, e4a, e4b
Up Ports: e3a, e3b, e4a, e4b
Down Ports: -
Notice that while a port has failed, the interface group remains up and the LIF has not been migrated.
Configuring both interface groups and failover groups provides for the maximum resiliency of NAS
LIFs.
After Multiple Port Failure
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a
Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: none
Network Ports: e3a, e3b, e4a, e4b
Up Ports: -
Down Ports: e3a, e3b, e4a, e4b
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250b a0a-1000
Notice that the interface group has no ports in a state of up, and the LIF has migrated to a port on the
other node of the HA pair.
HA Pair Failure
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250b -ifgrp a0a
Node: vmdc-3250b
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:3f:b2:b0
Port Participation: none
Network Ports: e3a, e3b, e4a, e4b
Up Ports: -
Down Ports: e3a, e3b, e4a, e4b
vmdc-3250-cluster::>
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a
Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: none
Network Ports: e3a, e3b, e4a, e4b
Up Ports: -
Down Ports: e3a, e3b, e4a, e4b
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250c a0a-1000
The interface group has no ports in a state of up in the HA pair consisting of nodes
vmdc-3250a/vmdc-3250-b, and the LIF has migrated to a port on the node of the second HA pair.
Figure 4-16
Figure 4-17 shows NFS tenant data stores created on each ESXi host in this implementation.
Figure 4-17
CHAPTER 5
Figure 5-1    [Figure not reproduced: OpenStack compute nodes with Ceph run KVM and the Cisco Nexus 1000V VEM; the Nexus 1000V plugin integrates with OpenStack, and MaaS and Juju infrastructure nodes orchestrate the deployment.]
Figure 5-2    [Figure not reproduced: physical topology; spine nodes Spine202-Spine204 connect to leaf switches Leaf101-Leaf108 over 40G links, while the NetApp cluster (FAS 3250a-d), OpenStack C-Series servers, the ASA 5585 cluster (with its 10G CCL), the ASR 9000 nV edge router, and the MaaS/Juju servers behind Nexus 7009 A/B attach to the leaves over 10G links.]
In this implementation, Canonical Ubuntu MaaS is used to manage the Cisco C-series servers that host
OpenStack control, compute, and Cisco Nexus 1000v VSM nodes.
MaaS makes deploying services faster, more reliable, repeatable, and scalable by using the service
orchestration tool Juju. MaaS and Juju services were hosted on separate Cisco C-series servers in the
provider management segments as shown in Figure 5-1.
Figure 5-3    [Figure not reproduced: OpenStack C-Series server connectivity; servers attach to Nexus 9396 leaves (Leaf103/Leaf104) with VIC 1225 adapters and to Nexus 93128 leaves (Leaf107/Leaf108) with VIC 1225-T adapters for CIMC, management, and NFS; 3x C240M3 servers host Ceph Monitor/OSD/MDS plus Nova compute, 1x C220M3 hosts Nova compute, 5x C220M3 host three all-in-one OpenStack nodes and two Nexus 1000V VSMs (primary/secondary), and the MaaS/Juju servers sit on the provider backend management network behind Nexus 7009 switches.]
Each OpenStack server has three types of connectivity:
1. CIMC connectivity
2. Management connectivity
3. NFS
In this implementation, a single 1G interface was used for NFS connectivity. It is possible to use a bonded
1G interface for redundancy and higher bandwidth if required.
Figure 5-4 shows the NIC connectivity in more detail.
Figure 5-4    [Figure not reproduced: per-node NIC layout; Eth0 and Eth1 are bridged into br0, which carries the LXC containers (for example Cinder, Nova Cloud Controller, and Neutron on 10.0.45.x addresses), while separate interfaces are used for CIMC/management (10.0.35.x) and NFS (10.0.40.x).]
[Figure not reproduced: OpenStack logical topology; OS instances (10.21.1-4.x) sit behind the Nexus 1000V (LACP active) on compute/Ceph nodes 1-3 and all-in-one HA nodes 1-3 (LXC containers) in the ACI fabric; an ASA firewall acts as the OS instances' gateway and NATs Swift/rados gateway traffic (source 10.21.1.x, destination 192.168.100.100) to the Rados GW HA proxy VIP 10.0.45.78 over 10.0.46.x; NFS data (10.0.40.x) goes to the NetApp cluster (FAS 3250a-d); Spirent management uses 10.0.47.0/24, OOB uses 172.18.x.x, and the MaaS/Juju servers sit behind the management router.]
Table 5-1
Name                               Network            Purpose
OpenStack API/Control/Management   10.0.45.0/24
OpenStack instances                10.21.x.0/24
Out-of-band (OOB) management       172.18.116.0/24
Spirent management                 10.0.47.0/24
NFS data                           10.0.40.0/24
Swift/rados gateway access         10.0.46.0/24
Swift/rados gateway NAT network    192.168.100.0/24   NAT network presented to instances for Swift/rados gateway access.
Region controller
Cluster controller(s)
Nodes
The nodes are the physical servers managed using MaaS. These can range from just a handful to many
thousands of systems.
Nodes can be controlled in a hierarchical way to facilitate different control policies as shown in
Figure 5-6. A Region controller is responsible for managing the cluster and consists of a web user
interface, an API, the metadata server for cloud-init and an optional DNS server.
A cluster controller is responsible for provisioning and consists of a TFTP server and an optional DHCP
server. It is also responsible for powering servers on and off via IPMI.
Regional controllers can be used to separate clusters of nodes that belong to different subnets. In this
implementation both regional and cluster controllers are hosted in a single server.
Figure 5-6    MaaS Hierarchy [Figure not reproduced: a highly available region controller (web UI and API) manages multiple cluster controllers, each of which provides TFTP (PXE) and DHCP to the nodes it provisions.]
Juju
Juju is a service orchestration tool that lets users quickly deploy OpenStack on Ubuntu. Its libraries
of charms make it simple to deploy, configure, and scale out cloud services with only a few simple
commands. The magic behind Juju is a collection of software components called charms that
encapsulate the knowledge of how to properly deploy and configure services on resources.
Juju needs a separate bootstrap node from which all of the Juju orchestration is run. This bootstrap
node should be provided as part of the MaaS-managed hosts. Figure 5-7 shows the Juju client deploying a
service on a node provisioned by MaaS, based on MaaS API calls made by the Juju client.
Figure 5-7    [Figure not reproduced: the Juju client makes MaaS API calls to the MaaS region controller, which provisions MaaS nodes in cluster AZ1 (DHCP, DNS, PXE); Juju then runs its job service and state node on the provisioned nodes.]
Charms
Each charm is a structured bundle of files. Conceptually, charms are composed of metadata,
configuration data, and hooks with some extra support files.
Hooks are executable files in a charm's hooks directory; hooks with particular names (see below) will
be invoked by the Juju unit agent at particular times.
There are 5 "unit hooks" with predefined names that can be implemented by any charm:
Install: the install hook runs just once, before any other hook. It should be used to perform one-time setup
operations only.
Start: the start hook runs immediately after the first config-changed hook. It should be used to ensure the
charm's software is running.
Stop: the stop hook runs immediately before the end of the unit's destruction sequence. It should be used to
ensure that the charm's software is not running, and will not start again on reboot.
Figure 5-8
[Figure not reproduced: the Juju controller/bootstrap node deploys OpenStack services (MySQL, RabbitMQ, Ceph, Ceph Rados GW, Cinder, Glance, Keystone, Horizon, Nova, and the Neutron plugin with its REST API) and Nexus 1000V components (VSM, VEM, VXLAN gateway packages) from charm archives onto compute nodes (Nova, libvirtd, KVM, OVS, VEM) and network nodes (L2/L3 agents).]
The Nexus 1000v has the following major components, shown in Figure 5-8:
Virtual Supervisor Module (VSM), which can run as a VM on KVM or on a Nexus 1110-x cloud services
appliance.
Virtual Ethernet Module (VEM), which runs on each KVM host.
The Cisco Virtual Networking Solution for OpenStack is available in two editions. The Essential Edition
is available at no cost for up to 20 physical hosts and includes all the basic switching features. The
Advanced Edition adds the Cisco VXLAN Gateway to the base functionality of the Essential Edition. In this
release, the Essential Edition has been used.
Figure 5-9    VSM High Availability (an active and a standby VSM synchronize state over MTS/AIPC and manage the VEMs running on KVM hosts 1 through N)
Applications on the standby VSM keep their runtime context in sync with the active VSM and remain in a
ready-to-run state, so that at switchover the standby is ready to take over as active. When started,
standby services get an initial state snapshot from the active VSM using Sysmgr; this snapshot of
persistent context from the active peer includes the running configuration and runtime information.
Subsequent syncing of persistent context is done by the standby receiving persistent/log messages as
sync events whenever an application on the active VSM sends, receives, or drops a message (over the TCP
sync connection). The standby service receives only updates not already included in the initial snapshot.
Figure 5-10    Nexus 1000V and Neutron Integration (Neutron talks to the VSM's REST API, behind which sit the policy manager, segment manager, and port manager; the VSM controls the VEMs and the VXLAN gateway on the hosts, with VLANs separating Tenant A and Tenant B traffic)
The key components of the system are explained in the following sections.
Figure 5-11    VEM Components on a KVM Host (the data path agent runs in user space, while the data path providing the fast path runs in kernel space)
The sub-component responsible for packet processing is called the data path (DP). The DP registers as a
controller to Open vSwitch and switches incoming packets through an efficient DP scheduler. The VEM
works in tandem with Open vSwitch (OVS), which notifies the Nexus 1000v VEM of port events. Together
they provide a distributed virtual switch solution for KVM. OVS is not engaged in the fast switching
path; the Nexus 1000v VEM kernel module is responsible for fast-path switching.
The VEM does not directly interact with any host-side OpenStack module (the Nova compute agent or the
Neutron agent) or with the VM management entity, libvirt. It interacts with the OVS DB for port events.
The VEM includes feature code that runs in the DP, such as ACL and NetFlow, and is a user-space process
with multiple threads.
The sub-component that interacts with the VSM is called the data path agent (DPA). The DPA communicates
with the VSM and downloads configuration. It also sends notifications of port attach and detach events
to the VSM, and applies policies, such as ACL and NetFlow, on the DP.
A VSM and a group of VEMs together form one distributed switch system. In the current model, the VSM
communicates with the VEMs over an L3 network, and there is no control plane communication among the
VEMs. Unlike a physical switch, which uses a dedicated, ultra-reliable backplane channel for
communication, the VSM uses the network as its communication fabric. Only L3-based communication
modes are supported in the current release of the Nexus 1000v.
Figure 5-12    Tenant Workflow Through the Nexus 1000V Neutron Plugin (a tenant uses Horizon or a custom dashboard portal to create networks and to create VMs and bind their ports to networks; Nova and the Nexus 1000V Neutron plugin on the controller node create the networks and ports, and the VMs run on KVM compute nodes switched by the Cisco Nexus 1000V VEM)
Normal Priority: for all other traffic, for example, an unknown MAC address or a missing flow.
Figure 5-13    VEM Data Path (the user-space VEM-DP programs L2 and feature flows into the Nexus 1000V kernel module, which switches traffic between the VM tap interfaces tap1/tap2 and the physical interface carrying traffic from outside)
VSM Charm
The VSM is required to be installed as a virtual machine on a bare metal server. Both the primary and
secondary VSM use the same charm. Figure 5-14 shows the three hooks implemented in the VSM charm. In
summary, the VSM charm installs OVS, creates the VSM virtual machine, and brings it up.
Figure 5-14    VSM Charm Hooks (across the install, config-changed, and start hooks, the charm installs openvswitch 1.10 and the prerequisite packages for VSM VM bringup, configures OVS, creates the VSM VM with virsh, and starts the VSM VM if it is not running)
The following parameters are defined and passed to the VSM charm during installation via a configuration
yaml file.
VEM Charm
The VEM charm is designed to be a subordinate charm. It should be deployed on all Nova compute and
quantum-gateway (Neutron network) nodes.
Similar to the VSM, configuration parameters are defined by the administrator and passed to the VEM
charm, and are applied to all compute/Neutron hosts.
Part of the configuration is a string option that takes the content of a mapping file (in yaml format).
This mapping file can be used by the administrator to specify host-specific configuration, such as the
interfaces used for VEM/VSM communication.
After install, when the config-changed hook runs (Figure 5-15), it picks up the parameters entered by
the administrator for each host. If no configuration is specified for a host in the mapping file, the
general configuration is applied. The following is one of the parameters that can be passed to the VEM
charm:
n1kv-source: "ppa:cisco-n1kv/icehouse-updates"
If the VSM is charm-based, adding a relation between the VSM and VEM configures the VEM nodes to connect
to the VSM. With an appliance-based (non-charm) VSM, configuration parameters can instead be set on the
VEM charm to connect it to the VSM.
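For example, with the charm-based VSM used later in this chapter, the relation can be added from the Juju client; with an appliance-based VSM, the VSM address is set directly on the VEM charm instead. The n1kv-vsm-ip option shown below matches the deployment yaml in this chapter; treat the commands as a sketch rather than the exact procedure used.

# Charm-based VSM: relation data carries the VSM connectivity details to the VEM units.
juju add-relation vsm-p vem
# Appliance-based VSM: point the VEM charm at the external VSM.
juju set vem n1kv-vsm-ip=10.0.45.208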
Figure 5-15    Nexus 1000V VEM Charm Hooks (install dependencies, read the host-specific configuration, read the VSM connectivity details in the add-relation hook, update n1kv.conf in the config-changed hook, and start the VEM)
openstack-origin: ppa:cisco-n1kv/icehouse-updates
n1kv-restrict-policy-profiles: if disabled, all tenants are able to access all policy profiles; if
enabled, tenants can access only those policy profiles that are explicitly assigned to them by the admin
cisco-policy-profile
cisco-network-profile
Table 5-2
Neutron Object     Nexus 1000V Object
Network            Network segment
Subnet             IP pool template
Port               Network Veth port

Figure 5-16    OpenStack and Nexus 1000V Interaction (1: policy profiles are created on the VSM through network management; 3: tenants, networks, subnets, and VMs are created through Horizon/cloud management and the Nova and Neutron services on the OpenStack controller; 4: configuration data and policies are sent to the Cisco Nexus 1000V VEM on the KVM compute servers)
Create Networks
A network represents an L2 segment, with the segment ID chosen from the range defined in the
corresponding network profile, and is created from the Horizon dashboard. The following is an example
of the configuration created on the Nexus 1000v for a network created in the dashboard:
nsm network segment e9e3e757-8647-47d3-9f25-69a52d735cf7
description copper2_data
uuid e9e3e757-8647-47d3-9f25-69a52d735cf7
member-of network segment pool 54e0418a-0ca3-4670-b954-4c7399283cac
switchport mode access
switchport access vlan 502
ip pool import template bd9a3387-dddd-4942-94d7-45b5eaffaf4b uuid
bd9a3387-dddd-4942-94d7-45b5eaffaf4b
publish network segment
Create Subnet
A subnet represents a block of IPv4 addresses, with an option to enable or disable DHCP for that IP
range, and is created from the Horizon dashboard. The following is an example of the configuration
created on the Nexus 1000v for a subnet created in the dashboard:
nsm ip pool template 054ef0e3-1549-41c9-a91a-092ba3b93110
description 10.0.47_net
ip address 10.0.47.1 10.0.47.252
network 10.0.47.0 255.255.255.0
default-router 10.0.47.253
dhcp
dns-server 64.102.6.247
Create Port
A Neutron port represents a specific instance of a network segment plus policy profile combination, and
is created from the Horizon dashboard or from the Python Neutron CLI.
A corresponding network vEthernet object and port are created on the VSM.
nsm network vethernet
vmn_1357022c-9112-4908-a045-4e0a94577ecc_749d699a-51c3-48f9-9ee8-df97ead49d17
import port-profile copper2_data uuid 1357022c-9112-4908-a045-4e0a94577ecc
allow network segment 749d699a-51c3-48f9-9ee8-df97ead49d17 uuid
749d699a-51c3-48f9-9ee8-df97ead49d17
state enabled
port uuid 73d5635b-dd75-4e32-af54-7744a6db77ff mac fa:16:3e:c4:e3:83
port uuid 81995fcf-d027-4304-b485-f451786f6b53 mac fa:16:3e:4a:1b:16
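For reference, a hedged sketch of the equivalent Neutron CLI calls is shown below; when no policy profile is specified, the default policy profile is applied, and plugin-specific arguments for selecting a different policy profile are omitted:

# Create a port on the tenant network created earlier.
neutron port-create copper2_data
# Inspect the resulting port; its UUID and MAC should also appear on the VSM.
neutron port-show <port-uuid>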
OpenStack Installation
OpenStack software controls large pools of compute, storage, and networking resources throughout a
datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular
enterprise and open source technologies, making it ideal for heterogeneous infrastructure.
This document describes a deployment of OpenStack Icehouse with High Availability for all services,
using the Cisco Nexus 1000v for networking. The services are deployed with Ubuntu MaaS, which manages
the commissioning of the physical servers, and Ubuntu Juju, which instantiates the services using Juju
charms. LXC containers (https://linuxcontainers.org/) are used to better manage resource usage on nodes
where multiple services are co-located. As part of the scope of this project, two storage backends were
configured to work with the deployment: object-based Ceph storage and the more traditional NetApp
storage have both been configured to work with Cinder and Glance.
High Availability
Table 5-3
Node #    Services                                    Node Type
1         Ubuntu MaaS                                 MaaS Node
2         Ubuntu Juju (bootstrap)                     Juju Node
3-5       Neutron Gateway plus LXC-hosted services    Control Node
6-8       Nova-Compute and Ceph                       Compute Node
9         Nexus 1000v VSM (primary)                   VSM Node
10        Nexus 1000v VSM (secondary)                 VSM Node
Node 1
Ubuntu MaaS runs on its own node and will provision servers for use with Juju. This is the only service
that will run on this node and shall be known as the MaaS Node in this document.
Node 2
Ubuntu Juju will be bootstrapped to this node. This allows Juju to be run in conjunction with MaaS
to provision new servers with a fresh Ubuntu OS installation and then configure and install services,
such as the OpenStack components. This is the only service that will run on this node, and it shall be
known as the Juju Node in this document.
Nodes 3-5
These nodes will contain the Neutron Gateway on the bare-metal OS installation, while RabbitMQ Server,
MySQL (Percona XtraDB Cluster), Ceph RADOS Gateway, Keystone, Glance, Cinder, and Horizon are
created and run in LXC containers. Placing the other services in containers allows their operations
to be compartmentalized; Ubuntu currently refers to co-locating some services directly as
"hulk-smashing," which could lead to unintended errors. The three nodes will have the same services on
them, establishing High Availability with some extra configuration. These nodes shall be known as
Control Nodes in this document.
Note
Ceph RADOS gateway does not currently support HA via the hacluster charm. A bug has been opened
in Launchpad to request this feature https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1328927
Nodes 6-8
These nodes will contain the Nova-Compute and Ceph services on the bare-metal OS installation.
Nova-Compute and Ceph can be co-located according to Ubuntu's recommendations. The three nodes will have
the same services on them, establishing High Availability with some extra configuration. These nodes
shall be known as Compute Nodes in this document.
Nodes 9-10
These nodes will contain the Nexus 1000v component to control the networking aspect of the setup.
Neutron-Gateway functionality will not be used. Node 9 will be the primary VSM and node 10 will be
the secondary VSM node in case the primary VSM fails. This will allow the Nexus 1000v to be in High
Availability and continue to perform during any single failures. These nodes shall be known as the VSM
Nodes in this document.
Table 5-4
Package / OS     Version
Ubuntu OS
maas             1.5.4+bzr2294-0ubuntu1.1
maas-dns         1.5.4+bzr2294-0ubuntu1.1
maas-dhcp        1.5.4+bzr2294-0ubuntu1.1
juju-core        1.20.11-0ubuntu1~14.04.1~juju1
juju-deployer    0.3.6-0ubuntu2
cloud-init       0.7.5-0ubuntu1.3
Pre-clustering
Leaders are elected by selecting the oldest peer within a given service deployment. This service unit
will undertake activities such as creating underlying databases, issuing usernames and passwords, and
configuring HA services prior to full clustering.
Post-clustering
Once a set of service units has been clustered using Corosync and Pacemaker, leader election is
determined by which service unit holds the VIP through which the service is accessed. This service
unit then takes ownership of singleton activity within the cluster.
hacluster is the charm used to configure HA for many services; the following is a quick rundown of its
behavior:
The hacluster charm deals with installing and configuring Corosync and Pacemaker based on the relation
data provided by its related principal charm. This includes the services to control from the cluster,
shared block devices from Ceph, file systems on those block devices, and VIPs.
If you need to check the cluster status of any service that utilizes the hacluster charm (Glance in this
example):
juju ssh glance/0
sudo crm status
sudo corosync-quorumtool -s
sudo corosync-cfgtool -s
This will output the current status of resources controlled by Corosync and Pacemaker.
There are two HA Models:
Stateful Server
For services where state must be stored, such as for MySQL or RabbitMQ, state is stored on a shared
block device provided by Ceph; this is mapped on one (and only one) server at a time using the Ceph
RBD kernel module.
The device, and its associated file system and contents, are placed under the control of Corosync
and Pacemaker using the hacluster charm; this ensures that the persistent data is only writable from
one service unit within a service at any point in time.
Services of this type are described as active or passive.
Table 5-5
Service                                       HA Model               Description
MySQL (Percona XtraDB Cluster)                Stateful Server
RabbitMQ Server                               Stateful Server
Keystone                                      Stateless API Server
Nova Cloud Controller                         Stateless API Server
Glance                                        Stateless API Server
Cinder                                        Stateless API Server
Neutron Gateway (charm is named
Quantum Gateway)                              See Description
Nova Compute
Horizon                                       API Server (although this service is not an API service, it uses the same model for HA)
Step 1
Step 2
Configure DNS.
Step 3
b.
c.
Note
Step 4
Step 5
Install MaaS and other needed packages such as NTP and a Juju tool.
sudo apt-get install maas maas-dhcp maas-dns cloud-init ntp
Step 6
Step 7
Step 8
Create MaaS Profile and log into the MaaS API using key created in Step 10. The MaaS session
identifies this login with MaaS commands (Figure 5-17).
maas login <maas-session> http://<MaaS-Server-IP>/MAAS/api/1.0 <Generated-API-Key>
Figure 5-17
Step 9
Note
Step 10
Set the network configuration through the MaaS GUI. Select the Settings option on the right side of the
top menu bar, scroll down to Network Configuration, set the values, and save (Figure 5-18).
Figure 5-18
Step 11
2.
Figure 5-19
Note
Step 12
There is no way, as of this writing, to check the status of the import. Reload the page occasionally to
check on it. In our system under test, it was observed to take more than 30 minutes to download all
images for the latest architectures of the Ubuntu operating systems.
Create an SSH key and add to MaaS Server.
a.
b.
c.
d.
Select root (or your own username) -> preferences from the top right of the web page (Figure 5-20).
Figure 5-20
Step 13
Screen Capture of the Clusters Tab showing Import boot images Button
e.
f.
Paste the created key into the field and select + Add key.
b.
Select the Cluster Master cluster to be taken to the Edit Cluster Controller page.
c.
Enter the DNS zone name and select Save cluster controller.
Figure 5-21
d.
Select the interface which will serve DHCP and DNS under the Interfaces section and edit that
interface (Figure 5-22).
Figure 5-22    Screen Capture of Selecting Edit for DHCP and DNS Management Interface
e.
Enter the correct information on the Edit Cluster Interface page for the interface that will serve DHCP
and DNS (Figure 5-23).
Figure 5-23
Step 14
Add nodes to the MaaS server. Repeat the following for each node to be added to MaaS.
a.
b.
c.
d.
Select an Ubuntu OS release from the Release drop down to be installed on the node.
e.
f.
Select the architecture of the node from the Architecture drop down.
g.
Select the Power type from the drop-down; this option will bring up another area to enter
information based on the selection. This is how MaaS will control the node.
If IPMI is selected, an IP address, power user and power password must be entered.
h.
Enter the Mac address of each network interface of the node, selecting + Add additional MAC
address if more spaces are needed.
i.
Prepare an environments yaml file (http://www.yaml.org/) with the specific configuration information.
This allows the Juju installation to connect to the MaaS server.
Within the user's home directory on the MaaS server, ensure that .juju/environments.yaml is created with
the following information:
default: maas
environments:
maas:
type: maas
maas-server: 'http://172.18.112.140:80/MAAS'
maas-oauth: '9KyGP99ZQaffEaD7cp:uSdhpkHRL9gGRV4aXm:5ZRvWCWQPTnRCgYVmKbE7aqLWreRJVHX'
admin-secret: Cisco12345
default-series: trusty
bootstrap-timeout: 3600
http-proxy: http://proxy-wsa.esl.cisco.com:80
https-proxy: http://proxy-wsa.esl.cisco.com:80
no-proxy: 10.0.45.1,10.0.45.0/24,localhost,192.168.125.0/24,192.168.125.10,10.10.10.0/24,10.10.10.10
Note
The maas-oauth configuration option's value is the API key created in Step 7 of the Ubuntu MaaS
Installation section.
Step 2
MaaS can tag servers with an identifying string; these identifiers must be utilized to ensure that
selected services are deployed on specific hardware (https://maas.ubuntu.com/docs/tags.html). The
following tags were used in this documentation:
Table 5-6    MaaS Tags
Node Type               Tag
Juju Node               juju
Compute Node            compute
Control Node            control
VSM Node (primary)      vsmp
VSM Node (secondary)    vsms
b.
b.
Run the command below to get a list of all the MaaS nodes and copy the system_id of the nodes to
tag in MaaS (Figure 5-24):
Figure 5-24
c.
Run the following command to add a tag to a specific machine; repeat this step for each machine
that needs a tag:
maas <maas-session> tag update-nodes <tag-name> add=<system_id>
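If the tags do not exist yet, they can be created first; the sequence below is a hedged example for one node, with the system_id as a placeholder:

# Create the tag once (repeat for juju, compute, control, vsmp, and vsms).
maas <maas-session> tags new name=compute
# List the nodes to find their system_id values.
maas <maas-session> nodes list
# Attach the tag to a node.
maas <maas-session> tag update-nodes compute add=<system_id>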
Step 3
Bring up the Juju bootstrap. This installs the Juju state service on a node provided by the MaaS server,
using the configuration in the environments.yaml file. Proxies must be set, if needed, to correctly
bootstrap Juju.
sudo -i
export http_proxy=http://proxy-wsa.esl.cisco.com:80
export https_proxy=http://proxy-wsa.esl.cisco.com:80
export no_proxy="10.0.45.1,10,10.0.45.0/24,localhost"
add-apt-repository ppa:juju/stable
exit
export http_proxy=http://proxy-wsa.esl.cisco.com:80
export https_proxy=http://proxy-wsa.esl.cisco.com:80
export no_proxy="10.0.45.1,10,10.0.45.0/24,localhost"
sudo apt-get install juju-core
juju sync-tools --debug
juju bootstrap --constraints tags=juju --debug
juju status
The juju status output should look something like the following:
environment: maas
machines:
"0":
agent-state: started
agent-version: 1.20.11
dns-name: vmdc-juju1.icdc.sdu.cisco.com
instance-id: /MAAS/api/1.0/nodes/node-f23a8d48-691c-11e4-9921-4403a74abe42/
series: trusty
hardware: arch=amd64 cpu-cores=16 mem=196608M tags=use-fastpath-installer,juju
state-server-member-status: has-vote
services: {}
If it does, Juju has been successfully bootstrapped and charms can now be deployed.
Note
The following information is specific to this system. It is important to verify the information and
modify it to match your own requirements. This configuration is specific to the HA scenario explained in
the previous sections.
Table 5-7
Charm                    Revision    Source
ceph                     86          Charm Store
ceph-radosgw             30          Charm Store
cinder                   56          Charm Store
glance                   78          Charm Store
hacluster                37          Charm Store
haproxy                  86          Charm Store
keystone                 87          Charm Store
mysql                    130         Charm Store
nova-cloud-controller    70          N1kv
nova-compute             59          N1kv
openstack-dashboard      27          N1kv
percona-cluster          39          Charm Store
quantum-gateway          47          N1kv
rabbitmq-server          68          Charm Store
vem                      155         N1kv
vsm                      46          N1kv
OpenStack_HA_N1kv.yaml
# OpenStack Options
openstack-common:
series: trusty
services:
nova-compute:
charm: nova-compute
options:
config-flags:
"auto_assign_floating_ip=False,compute_driver=libvirt.LibvirtDriver"
enable-live-migration: True
enable-resize: True
migration-auth-type: 'ssh'
virt-type: kvm
openstack-origin: ppa:cisco-n1kv/icehouse-updates
neutron-gateway:
charm: quantum-gateway
options:
instance-mtu: 1350
ext-port: eth3
plugin: n1kv
openstack-origin: ppa:cisco-n1kv/icehouse-updates
mysql:
charm: percona-cluster
options:
root-password: ubuntu
sst-password: ubuntu
vip: 10.0.45.201
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
max-connections: 500
mysql-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.1
corosync_key:
"3r8Y1zILzqADvJB7eLJGPrCI4g5Tg+uZ0+qq1kXNe0273yZlee9k2VT1twsyaSx3tNDDIcfuM/ykQNFRLw6dO
WdXPbzgqIM5M5FExYQlXv2+s3kowRL0xuanVWXucaKu+t3jDDxmVnhj0SY/ixl3Gg0XrW4qXFoK05uMoIhK8Js
="
rabbitmq-server:
charm: rabbitmq-server
options:
vip: 10.0.45.202
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
ha-vip-only: True
keystone:
charm: keystone
options:
admin-password: openstack
debug: 'True'
log-level: DEBUG
enable-pki: 'False'
vip: 10.0.45.203
ha-bindiface: eth0
keystone-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.5
corosync_key:
"6aVson6XvaprzAppLB6UA4OUgZIyNtW+qVwbanQta0aLMagwbPNomTniLr3ZyVGtEL7A0c48tJvaA+lafL2Hz
Gq+43/aKnUbG5k7d4sKaQXP/sKLhCpyj+04DddBRAVsBJ6r9tG45CGF+H+qUykL1rOT0EesZhDqBiBGrV+DXes
="
openstack-dashboard:
charm: openstack-dashboard
expose: true
options:
profile: cisco
secret: openstack
vip: 10.0.45.204
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
openstack-origin: ppa:cisco-n1kv/icehouse-updates
dashboard-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.9
corosync_key:
"9aNUFk+o0Hqt/6i46ltcycMogHm+bgOkhsIwBwuXX3YQZfvioZZZqggi9R9Ccj1OqIrxLA+GTstghYcc/hjUL
hIl3BIX6HAdePhX7sI8khTCiPTN/w4MIy3nW1CjFaeWW31CIhrXnTcq11l0MEB3vKNlN5/b7/kqvagB6oSjw4s
="
nova-cloud-controller:
charm: nova-cloud-controller
options:
network-manager: Neutron
neutron-external-network: Public_Network
quantum-security-groups: 'False'
n1kv-vsm-ip: 10.0.45.208
n1kv-vsm-username: admin
n1kv-vsm-password: Cisco12345
openstack-origin: ppa:cisco-n1kv/icehouse-updates
quantum-plugin: n1kv
vip: 10.0.45.205
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
ncc-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.6
corosync_key:
"xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKDqkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXuP
qQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27AC1w
="
cinder:
charm: cinder
options:
block-device: "None"
overwrite: 'True'
ceph-osd-replication-count: 3
glance-api-version: 2
vip: 10.0.45.206
ha-bindiface: eth0
cinder-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.8
corosync_key:
"wllBMGAfdCsotmXGbCbJ0LhAuOPQ9ZEIIAXIWWeNLwrmC7C9jmm92RSL1kYGCRRWaL7W7AziA6aBy//rZxeZ3
z0YkM0QFD+4Vg7vtM6JaBoOFlJgVd6mbYUfVbI6IMqGiUDJ8hh5sKmN7kwQLNNwASGlJiMo5s9ErWviVM6/OrQ
="
glance:
charm: glance
options:
ceph-osd-replication-count: 3
vip: 10.0.45.207
ha-bindiface: eth0
glance-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.7
corosync_key:
"eO34WuxbQ/FaQvYb/ffTtX+0phNfNZlmhRrC8gLYJMf/b52Ny3cRXjgp5P1lEfZFHjrhQ3lWQOqENuBVcejS1
OYt574Xq2l1XLEHoEPbktovDhaS9yxIU7SYULdlx7j/BNtW7evY0pRBr23MYWEI3hETHVdtOeqgW1IB3zgoyco
="
ceph:
charm: ceph
options:
monitor-count: 3
fsid: 6547bd3e-1397-11e2-82e5-53567c8d32dc
monitor-secret: AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==
osd-devices: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
/dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn
osd-reformat: "yes"
ceph-radosgw:
charm: ceph-radosgw
haproxy:
charm: haproxy
vsm-p:
charm: vsm
options:
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-domain-id: 500
n1kv-vsm-password: "Cisco12345"
n1kv-vsm-mgmt-ip: "10.0.45.208"
n1kv-phy-intf-bridge: "eth0"
n1kv-vsm-mgmt-gateway: "10.0.45.253"
n1kv-vsm-mgmt-netmask: "255.255.255.0"
n1kv-vsm-role: "primary"
n1kv-vsm-name: "vsm-p"
vsm-s:
charm: vsm
options:
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-domain-id: 500
n1kv-vsm-password: "Cisco12345"
n1kv-phy-intf-bridge: "eth0"
n1kv-vsm-role: "secondary"
n1kv-vsm-name: "vsm-s"
vem:
charm: vem
options:
host_mgmt_intf: eth2
n1kv-vsm-domain-id: 500
uplink_profile: phys eth3 profile sys-uplink
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-ip: 10.0.45.208
relations:
- [ haproxy, ceph-radosgw ]
- [ nova-cloud-controller, mysql ]
- [ nova-cloud-controller, rabbitmq-server ]
- [ nova-cloud-controller, glance ]
- [ nova-cloud-controller, keystone ]
- [ nova-compute, nova-cloud-controller ]
- [ nova-compute, mysql ]
- [ nova-compute, 'rabbitmq-server:amqp' ]
- [ nova-compute, glance ]
- [ nova-compute, ceph ]
- [ glance, mysql ]
- [ glance, keystone ]
- [ glance, ceph ]
- [ glance, cinder ]
- [ glance, rabbitmq-server ]
- [ cinder, mysql ]
- [ cinder, rabbitmq-server ]
- [ cinder, nova-cloud-controller ]
- [ cinder, keystone ]
- [ cinder, ceph ]
- [ neutron-gateway, mysql ]
- [ neutron-gateway, rabbitmq-server ]
- [ neutron-gateway, nova-cloud-controller ]
- [ openstack-dashboard, keystone ]
- [ ceph, ceph-radosgw ]
- [ ceph-radosgw, keystone ]
- [ mysql, mysql-hacluster ]
- [ keystone, keystone-hacluster ]
- [ nova-cloud-controller, ncc-hacluster ]
- [ glance, glance-hacluster ]
- [ cinder, cinder-hacluster ]
- [ openstack-dashboard, dashboard-hacluster ]
- [ keystone, mysql ]
trusty-icehouse-ha-lxc:
inherits: openstack-common
series: trusty
services:
neutron-gateway:
num_units: 3
constraints: "tags=control"
nova-compute:
num_units: 3
constraints: "tags=compute"
vsm-p:
num_units: 1
constraints: "tags=vsmp"
vsm-s:
num_units: 1
constraints: "tags=vsms"
vem:
num_units: 1
nova-cloud-controller:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
rabbitmq-server:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
mysql:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
openstack-dashboard:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
keystone:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
cinder:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
glance:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
ceph-radosgw:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
haproxy:
num_units: 1
to:
- lxc:neutron-gateway=0
ceph:
num_units: 3
to:
- nova-compute=0
- nova-compute=1
- nova-compute=2
Download the OpenStack charms edited for the Cisco Nexus 1000v (Source: N1kv).
Enter the following commands:
sudo add-apt-repository -y ppa:cisco-n1kv/icehouse-updates
sudo apt-get update
sudo apt-get install jujucharm-n1k
tar zxf /opt/cisco/n1kv/charms/jujucharm-n1k-precise_5.2.1.sk3.1.1.YYYYMMDDhhmm.tar.gz
Once the file is untarred, copy the trusty directory (/jujucharm-n1k/charms/trusty) into the home
directory.
Step 2
Download the rest of the charms from the charm store at https://manage.jujucharms.com/charms
(Source: Charm Store).
Place the downloaded charms into the trusty folder in the home directory.
For a NetApp deployment, the cinder charm needs to be customized; see the Block Storage with NetApp
section below for more information. The charm will be in the trusty directory.
a.
Change this:
{% if volume_driver -%}
volume_driver = {{ volume_driver }}
{% endif -%}
{% if rbd_pool -%}
rbd_pool = {{ rbd_pool }}
host = {{ host }}
rbd_user = {{ rbd_user }}
{% endif -%}
to this:
{% if rbd_pool -%}
[ceph]
{% if volume_driver -%}
volume_driver = {{ volume_driver }}
{% endif -%}
volume_backend_name=ceph
rbd_pool = {{ rbd_pool }}
host = {{ host }}
rbd_user = {{ rbd_user }}
{% endif -%}
b.
Step 3
Note
Change the current working directory to the home directory where the yaml file and the charms for
Juju-Deployer are located.
The script requires manual intervention to complete successfully. Read the following instructions before
continuing.
a.
Table 5-8    Juju-Deployer Arguments
Argument          Behavior
-c CONFIGS
-d
-w REL_WAIT       Number of seconds to wait before checking for relation errors after all relations
                  have been added and subordinates started. (default: 60)
-r RETRY_COUNT
-t TIMEOUT

Step 4
Note
The script must be manually stopped with Ctrl-C after the services requiring bare metal servers to be
commissioned by MaaS have been deployed. It is necessary to wait for the nodes to come up and start
successfully; commands must then be run on the machines before the rest of the services can be
deployed. Interfaces on these nodes must be set up to correctly interact with the LXC containers and the
Nexus 1000v modules. To ensure the correct functionality of the LXC containers, eth0 must be set up as
the management interface. Additionally, some packages must be installed for the services to work.
This section is a recommendation to eliminate possible issues. In our testing, issues were encountered
with interfaces on some machines during deployment, and this workaround consistently avoided them.
With the previous yaml file, the output should look like the following when breaking out of the script.
The vsm-s service in this case is the last service to be deployed directly to a bare metal server; any
service after it will either be co-located on bare metal with a previously deployed service or be
deployed in an LXC container:
With the script stopped, wait for the machines to come up and get into the 'started' state. The status of
the commissioned machines can be viewed with the command 'juju status.'
Step 5
Once all machines are in the started state, the following two scripts are used to set necessary interfaces
and install packages on the compute and control nodes for this setup.
Here, the 'control node' refers to the node with quantum-gateway on bare metal and LXC containers
holding the rest of the services. The 'compute node' refers to the node with Nova-compute and Ceph
co-located on bare metal. The scripts may need to be edited to match an alternate setup. Ultimately, the
control nodes should have br0 as a bridge for the management interface (in our case, eth2); this allows
the LXC containers to communicate over the management network through their host machines. Also on the
control nodes, eth0 and eth1 will be up as data interfaces (they do not need IP addresses or other
information). The compute nodes need to have eth0 and eth1 up as data interfaces; eth2 will already be
acting as the management interface.
The scripts can be run in two ways:
juju ssh <Juju Machine #> 'bash -s' < <Script Path>.sh
set_compute.sh
sudo su
echo -e "\nauto eth0\niface eth0 inet manual\n\nauto eth1\niface eth1 inet manual" >>
/etc/network/interfaces
ifup eth0
ifup eth1
exit
In this script, eth0 and eth1 are the data interfaces that OpenStack instances use to communicate with
each other, and eth2 is the management interface. The data interfaces are port-channeled via the Nexus
1000v. They do not carry the management traffic that the OpenStack services use to operate; data and
management traffic are kept separate. If run with the juju ssh <Juju machine #> 'bash -s' command, press
Ctrl-C to exit and return control to the MaaS node.
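For example, assuming the compute nodes came up as Juju machines 6, 7, and 8 (machine numbers are placeholders; read the real ones from juju status), the script can be pushed to each of them as follows:

for m in 6 7 8; do
    juju ssh $m 'bash -s' < set_compute.sh
done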
set_control.sh
sudo su
apt-get -y update > /dev/null
apt-get -y install lxc bridge-utils debootstrap ntp > /dev/null
sed -i "8,9d" /etc/network/interfaces
echo -e "auto eth2\niface eth2 inet manual\n\nauto br0\niface br0 inet
dhcp\nbridge_ports eth2\nbridge_stp off\nbridge_fd 0\nbridge_maxwait 0\n\nauto
eth0\niface eth0 inet manual\n\nauto eth1\niface eth1 inet manual" >>
/etc/network/interfaces
brctl addbr br0
ufw disable
reboot
This script sets up the interfaces and a bridge for LXC to run correctly. If the machine already has a
br0, it is unnecessary to add the br0 lines to the interfaces file. In some setups, if eth0 is not the
default management interface (in this case eth2 is the management interface), then br0 will not be
created automatically, but it is still needed. The line sed -i "8,9d" /etc/network/interfaces removes the
previous management interface information; it is replaced with a manual interface that is placed as a
port under br0. It is important to make sure these lines are adjusted for a specific setup. In this
script, eth0 and eth1 are again the data interfaces for OpenStack and eth2 is the management interface.
The reboot of the machine brings up all the changes made. The script should make the interfaces file look
something like the following:
ubuntu@neutron-node:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
auto eth2
iface eth2 inet manual
auto br0
iface br0 inet dhcp
bridge_ports eth2
bridge_stp off
bridge_fd 0
bridge_maxwait 0
auto eth0
iface eth0 inet manual
auto eth1
iface eth1 inet manual
If run with the juju ssh <Juju machine #> 'bash -s' command, the script may need to be run twice on the
target machine to successfully reboot it and apply all the settings. If the script hangs, press Ctrl-C to
return to the MaaS node and rerun the command; this should reboot the machine and return control to the
MaaS node.
Step 6
Once all of the interfaces are successfully set and the machines are all in the started state according to
juju status, run the following command on every node to ensure NTP is installed and running:
sudo apt-get install -y ntp
Step 7
The Juju-deployer script can now be run again until it finishes without any manual intervention:
sudo juju-deployer -c ./OpenStack_HA_N1kv.yaml -d -s 30 -w 300 -r 3 -t 9999
trusty-icehouse-ha-lxc
Use juju status to check that all services and machines are in the started state. If the following
commands return nothing, the services should have come up correctly; if not, see the Troubleshooting
section below:
juju status | grep hook
juju status | grep warning
juju status | grep down
It is only possible to ssh into each machine using Juju ssh commands; this can be done in the following
ways:
juju ssh <machine #>
ex: juju ssh 1
juju ssh <service name>/<service #>
ex: juju ssh nova-compute/0
Post Juju-Deployer
The OpenStack installation is not yet finished; there are more steps and workarounds to complete before
the environment is fully functional.
Step 1
The VEM service also needs some additional configuration to connect to each machine. The Nexus 1000v
needs information about each host's specific data interfaces (uplink_profile), in addition to which
interface will be used for management (host_mgmt_intf). The host-specific configuration is placed in
a file called mapping.yaml:
Table 5-9
Hostname           Node Type       Data Interfaces (uplink_profile)    Management Interface (host_mgmt_intf)
vmdc-OpenStack7    Control Node    eth0 / eth1                         br0
vmdc-OpenStack3    Control Node    eth0 / eth1                         br0
vmdc-OpenStack4    Control Node    eth0 / eth1                         br0
vmdc-ceph1         Compute Node    eth0 / eth1                         eth2
vmdc-ceph2         Compute Node    eth0 / eth1                         eth2
vmdc-ceph3         Compute Node    eth0 / eth1                         eth2
vmdc-OpenStack1    VSM Node        N/A                                 br0
vmdc-OpenStack2    VSM Node        N/A                                 br0
mapping.yaml
vmdc-OpenStack7:
    host_mgmt_intf: br0
    uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-OpenStack3:
    host_mgmt_intf: br0
    uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-OpenStack4:
    host_mgmt_intf: br0
    uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-ceph1:
    host_mgmt_intf: eth2
    uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-ceph2:
    host_mgmt_intf: eth2
    uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-ceph3:
    host_mgmt_intf: eth2
    uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-OpenStack1:
    host_mgmt_intf: br0
vmdc-OpenStack2:
    host_mgmt_intf: br0
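The mapping file content is then passed to the VEM charm as a configuration value. A hedged sketch is shown below; the option name mapping is an assumption for illustration and should be checked against the VEM charm's config.yaml:

# Push the per-host mapping into the VEM charm configuration.
juju set vem mapping="$(cat mapping.yaml)"
# Verify that the value was applied.
juju get vem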
Step 2
Step 3
Step 4
Log into the Nexus 1000v using ssh admin@<vsm_ip> and run the following commands in
configuration mode to set up the port profiles:
feature lacp
port-profile type ethernet sys-uplink
switchport mode trunk
switchport trunk allowed vlan 500-549
channel-group auto mode active
no shutdown
mtu 9216
state enabled
publish port-profile
port-profile type vethernet default-pp
no shutdown
mtu 9216
state enabled
publish port-profile
This should complete the Nexus 1000v setup and all the nodes should be successfully up and recognized
by the VSM.
Step 5
The RADOS gateway needs to have a keyring copied from a Ceph node to work correctly.
Copy /etc/ceph/ceph.client.admin.keyring from any Ceph machine and ensure that
/etc/ceph/ceph.client.admin.keyring exists on each RADOS gateway machine with the same data.
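One hedged way to do this with Juju's built-in file copy is sketched below; unit numbers are placeholders, and the keyring's permissions on the Ceph unit may need to be relaxed before it can be read:

# Fetch the keyring from one of the Ceph units.
juju scp ceph/0:/etc/ceph/ceph.client.admin.keyring .
# Push it to each RADOS gateway unit and move it into place.
for u in 0 1 2; do
    juju scp ceph.client.admin.keyring ceph-radosgw/$u:/tmp/
    juju ssh ceph-radosgw/$u 'sudo mv /tmp/ceph.client.admin.keyring /etc/ceph/'
done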
Troubleshooting
Juju has built-in commands that can help troubleshoot errors found in the environment. By using a
combination of the following, it may be possible to solve issues without fully redeploying the
environment from scratch.
If a service has a hook error, first run the following command to rerun the hook that failed:
juju resolved -r <service-name>
The -r option tells Juju to rerun the entire hook. If run without -r, the failed hook is skipped, which
may lead to unexpected behavior.
Note
Its been observed that juju resolved r may sometimes return text saying the hook/error has been
resolved while juju status still shows a hook error. This issue has been opened as a bug here https://bugs.launchpad.net/juju-core/+bug/1393434.
Sometimes a hook will fail between two services (for example, keystone and mysql). It is possible that
the timing of the hook runs caused the issue. Removing and re-adding the relations between the
services is a possible fix. Configuration that is shared between services is fully re-established by
Juju if a relation is removed and then added again. If an error is seen between services, the following
commands can be run to try to re-establish the correct relations:
juju destroy-relation keystone mysql
juju add-relation keystone mysql
These relations are created for the user by Juju-deployer; they are defined in the relations:
section of the yaml file for the deployment.
If a machine gets stuck in the pending state, it may have to be removed from the environment and
added again to get it started correctly. The services on that machine will also need to be
destroyed, due to the deployment configuration. The following commands show how to do this:
juju destroy-machine <machine-number> --force
juju destroy-service <service-on-machine>
Once these commands have run, rerun the Juju-deployer command and press Ctrl-C once the service that
resides on the destroyed machine has been deployed. Once every machine is started, the installation can
continue as normal.
If there are unsolvable issues, the entire environment may have to be redeployed. Juju will have
to be bootstrapped again afterwards. The following command removes the Juju bootstrap from the
environment:
juju destroy-environment maas
MySQL has an issue where an extremely large number of connections can build up and disrupt
services if left unattended for too long. This issue has been opened as a bug in Launchpad
(https://bugs.launchpad.net/charms/+source/mysql/+bug/1389837). A workaround for this issue has
been produced by Canonical and keeps the connections at workable levels.
On every Nova-compute node, add the workaround settings to the end of /etc/nova/nova.conf and then run
the associated commands. After running these commands, the MySQL connections should be limited to
workable levels.
Run the following commands to install the OpenStack command line clients:
export http_proxy=http://proxy-wsa.esl.cisco.com:80/
export https_proxy=http://proxy-wsa.esl.cisco.com:80/
sudo apt-get install python-keystoneclient python-glanceclient python-novaclient
python-keystoneclient
Step 2
Create an admin.rc file with the OpenStack admin credentials in order to run OpenStack commands.
Additionally, a file can be created with the specific information for each project/tenant to run
commands within a specific project. Run the following command with the example admin.rc to set these
variables in your terminal:
source admin.rc
admin.rc
export OS_NO_CACHE=true
export OS_TENANT_NAME=admin     # Change to match tenant/project name
export OS_USERNAME=admin        # Change to match tenant/project user name
export OS_PASSWORD=OpenStack    # Change to match tenant/project user password
export OS_AUTH_URL=http://<Horizon-VIP>:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
export OS_REGION_NAME=RegionOne
export CINDER_ENDPOINT_TYPE=publicURL
export GLANCE_ENDPOINT_TYPE=publicURL
export KEYSTONE_ENDPOINT_TYPE=publicURL
export NOVA_ENDPOINT_TYPE=publicURL
export QUANTUM_ENDPOINT_TYPE=publicURL
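After sourcing the file, a quick sanity check that the credentials and endpoints work is, for example:

source admin.rc
keystone token-get     # should return a token for the admin tenant
nova list              # should return the (possibly empty) instance list without errors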
OpenStack Configuration
The following sections describe the important implementation details of the OpenStack environment built
in this implementation.
Tenant Configurations
In this implementation, each Copper tenant is mapped to an OpenStack project. Each tenant is assigned
a single VLAN for all of its instances. Each tenant has its own tenant admin, with access rights
restricted to the resources of that tenant.
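As an illustration, a Copper project and its tenant admin could be created from the Keystone CLI as sketched below; the names, password, and role are placeholders (the role available in a given deployment may be Member, _member_, or a custom tenant-admin role):

keystone tenant-create --name copper1 --description "Copper tenant 1"
keystone user-create --name copper1-admin --tenant copper1 --pass <password>
# Grant a role scoped to the copper1 project only.
keystone user-role-add --user copper1-admin --role Member --tenant copper1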
Networking Configuration
In this implementation of OpenStack, VLAN-backed provider networking was used. The Nova compute nodes
as well as the Neutron gateway nodes were connected to the ACI fabric for the data path. However, all
OpenStack instances had their default gateway set to the ASA firewall, and therefore use the ACI fabric
to go straight out to the Internet. The Neutron gateway was used only for L2 DHCP and metadata provider
functionality. The logical connectivity of the OpenStack networking is shown in Figure 5-25.
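Purely as an illustrative sketch of VLAN-backed tenant networking (in this implementation the networks were created through Horizon as shown later in this section, and the Nexus 1000V plugin-specific extension attributes are omitted), the equivalent Neutron CLI calls look roughly like this:

# Tenant network and subnet; the VLAN itself comes from the tenant's network profile.
neutron net-create copper1_data
neutron subnet-create copper1_data 10.21.1.0/24 --name vlan501 \
    --gateway 10.21.1.254 --dns-nameserver 64.102.6.247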
Figure 5-25    OpenStack Logical Networking (OS instances on the compute nodes connect through the ACI fabric with their default gateway at 10.21.1.254; the Neutron gateway nodes provide only DHCP and instance metadata; hosts use eth0/eth1 for data and eth2 under the br0 bridge for management, with the LXC containers, including the Cinder LXC and Nova Cloud Controller, attached to br0)
Note
It is required to turn on multicast receiving support on the OpenStack API/management network segment
for HA components such as corosync to work correctly. In this implementation IP PIM sparse mode was
configured on the management network segment's SVI.
role network-admin
The configuration above is generated by the VSM charm based on the following parameters set in the
yaml configuration file described in the OpenStack installation section.
vsm-p:
charm: vsm
options:
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-domain-id: 500
n1kv-vsm-password: "Cisco12345"
n1kv-vsm-mgmt-ip: "10.0.45.208"
n1kv-phy-intf-bridge: "eth0"
n1kv-vsm-mgmt-gateway: "10.0.45.253"
n1kv-vsm-mgmt-netmask: "255.255.255.0"
n1kv-vsm-role: "primary"
n1kv-vsm-name: "vsm-p"
Step 2
Note
When the MTU is changed on Nexus 1000v uplink port profiles, the physical uplink interfaces flap; this
is expected behavior.
Note
In this implementation, all VEMs use LACP port channels in mode active, and the ACI vPC port channels
are configured accordingly.
The uplink profile name used in this configuration should match the one specified for the VEM in the
configuration yaml file, similar to the following.
vem:
charm: vem
options:
uplink_profile: phys eth3 profile sys-uplink
Step 3
Create Nexus 1000v default port profile and tenant port profiles.
In this implementation, each Copper tenant has its own port profile. This allows different policies to
be applied to different tenants. A default port profile is supported on the Nexus 1000v for when a tenant
network is created without specifying a port profile.
Note
In this Nexus 1000v release, policy profile UUID is now optional. If no policy profile UUID is entered
while creating a port, a default policy profile will be used. This behavior is in line with ports created for
dhcp and routers.
port-profile type vethernet copper_template
mtu 9216
no shutdown
state enabled
port-profile type vethernet copper1_data
inherit port-profile copper_template
no shutdown
guid 66916079-5e3f-43a2-bef8-2fece1efad49
description copper tenant 1
state enabled
publish port-profile
port-profile type vethernet default-pp
mtu 9216
no shutdown
guid 1650ddbe-dca8-4948-81e9-f92194de2b7d
state enabled
publish port-profile
Figure 5-27, a screen capture from the OpenStack dashboard, shows that the port profiles, once created
in the Nexus 1000v, appear in the dashboard.
Figure 5-27
Step 4
Create a network profile for each tenant which specifies which VLANs are assigned to each tenant
(Figure 5-28).
Figure 5-28
The following configuration is created on the Nexus 1000v by the network segmentation manager,
corresponding to the dashboard configuration above.
nsm network segment 542ed831-aac2-43f0-9c8f-6c9007d26f5d
description copper1_data
uuid 542ed831-aac2-43f0-9c8f-6c9007d26f5d
member-of network segment pool b382d17b-2a1b-4972-a00e-51f6b6360aa7
switchport mode access
switchport access vlan 501
publish network segment
Step 5
Figure 5-29
The dashboard configuration above creates the following NSM configuration in the Nexus 1000v
VSM.
nsm logical network b382d17b-2a1b-4972-a00e-51f6b6360aa7_log_net
description copper1_data
nsm network segment pool b382d17b-2a1b-4972-a00e-51f6b6360aa7
description copper1_data
uuid b382d17b-2a1b-4972-a00e-51f6b6360aa7
member-of logical network b382d17b-2a1b-4972-a00e-51f6b6360aa7_log_net
Step 6
Figure 5-30
Adding the subnet to the network creates the following configuration in the VSM:
nsm ip pool template 8b3d61a1-64fc-424c-8355-7b02c298e29b
description vlan501
ip address 10.21.1.1 10.21.1.253
network 10.21.1.0 255.255.255.0
default-router 10.21.1.254
dhcp
dns-server 64.102.6.247
nsm network segment 542ed831-aac2-43f0-9c8f-6c9007d26f5d
description copper1_data
ip pool import template 8b3d61a1-64fc-424c-8355-7b02c298e29b uuid
8b3d61a1-64fc-424c-8355-7b02c298e29b
Once the steps above are completed, the newly created network is ready to be used by the tenant instances.
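For example, an instance can then be booted on the new network from the Nova CLI; the flavor, image, and network IDs below are placeholders:

# Find the UUID of the tenant network.
neutron net-list
# Boot an instance attached to it.
nova boot --flavor m1.small --image <image-id> --nic net-id=<network-uuid> copper1-vm1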
Note
As of the time of this verification, both management traffic (Nova, Cinder, and Juju) and Ceph storage
traffic (replication and CRUSH data) are bundled onto a single network called management. A new feature
that allows these two types of traffic to be separated into two networks is available in the Juno/Utopic release.
Ceph is installed and configured entirely through Juju, using the options set in the configuration yaml
file. There must be a minimum of three nodes for Ceph to be started; this is an official Ceph requirement
(http://ceph.com/). The following commands and their outputs show the status of the Ceph cluster if it
was created successfully:
sudo ceph -s
sudo ceph df
ubuntu@vmdc-ceph2:~$ sudo ceph -s
cluster 6547bd3e-1397-11e2-82e5-53567c8d32dc
health HEALTH_OK
monmap e1: 3 mons at
{vmdc-ceph1=10.0.45.20:6789/0,vmdc-ceph2=10.0.45.21:6789/0,vmdc-ceph3=10.0.45.22:6789/
0}, election epoch 6, quorum 0,1,2 vmdc-ceph1,vmdc-ceph2,vmdc-ceph3
osdmap e132: 39 osds: 39 up, 39 in
pgmap v1424: 2832 pgs, 10 pools, 770 MB data, 158 objects
4015 MB used, 36224 GB / 36228 GB avail
2832 active+clean
ubuntu@vmdc-ceph2:~$ sudo ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    36228G     36224G     4015M        0.01
POOLS:
    NAME             ID     USED       %USED     OBJECTS
    data             0      0          0         0
    metadata         1      0          0         0
    rbd              2      0          0         0
    glance           3      706M       0         94
    .rgw.root        4      840        0         3
    .rgw.control     5      0          0         8
    .rgw             6      0          0         0
    .rgw.gc          7      0          0         32
    cinder           8      65536k     0         21
    .users.uid       9      0          0         0
Ceph can now be selected through Horizon. When creating a volume, select Ceph as the volume type to
use Ceph storage (Figure 5-31).
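The equivalent can also be done from the Cinder CLI. The sketch below assumes a Ceph volume type is being created and uses volume_backend_name=ceph, matching the modified cinder.conf template shown earlier in this chapter; the type and volume names are placeholders:

cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph
# Create a 10 GB test volume on the Ceph backend.
cinder create --volume-type ceph --display-name ceph-vol1 10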
Figure 5-31
Figure 5-32    Block Storage with NetApp (the Cinder LXC containers on the control nodes and the Nova compute nodes access the Copper shared NFS SVM on the NetApp cluster)
On the Cinder hosts, change the AppArmor rules to allow the LXC containers to mount NFS.
By default, AppArmor does not allow LXC containers to mount NFS shares, so it is necessary to explicitly
allow it by adding the following lines to /etc/apparmor.d/abstractions/lxc/container-base:
mount fstype=nfs,
mount fstype=nfs4,
mount fstype=rpc_pipefs,
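After editing the abstraction file, the AppArmor profiles must be reloaded (or the affected LXC container restarted) for the change to take effect; one hedged way to do this from the MaaS node is:

juju ssh <control-machine-#> 'sudo service apparmor reload'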
Step 2
Step 3
Step 4
Step 5
Step 6
The configuration above refers to the IP address of the NetApp cluster management interface. It also
requires the login credentials for the SVM. The file /etc/cinder/nfs.share1 contains the actual NFS
mount point information, as shown below. This file contains the IP address of the SVM and the junction
path for the NFS volume.
root@juju-machine-122-lxc-1:/tmp# cat /etc/cinder/nfs.share1
10.0.40.203:/svm_aci_copper_shared_tenant1
Once the /etc/cinder/cinder.conf and /etc/cinder/nfs.share1 files are configured, the Cinder volume
service should be restarted as shown below:
service cinder-volume restart
Once this is done the NFS mounts appear under the /var/lib/cinder/mnt/ as shown below:
root@juju-machine-121-lxc-1:~# mount
10.0.40.203:/svm_aci_copper_shared_tenant1 on
/var/lib/cinder/mnt/446d1154b7aded6d478f79de396fd513 type nfs (rw,addr=10.0.40.203)
10.0.40.203:/svm_aci_copper_shared_tenant2 on
/var/lib/cinder/mnt/fa9d93b729574cd40cb02b1b42739f70 type nfs (rw,addr=10.0.40.203)
root@juju-machine-121-lxc-1:~# ls -ltr
/var/lib/cinder/mnt/446d1154b7aded6d478f79de396fd513/
total 17397624
drwxr-xr-x 2 root root
4096 Oct 8 00:17 nfs_test
-rw-rw-rw- 1 root root
10737418240 Oct 8 00:36
img-cache-8d094419-43fa-4d5d-b5bf-1fbd680e4430
-rw-rw-rw- 1 root root
10737418240 Oct 8 00:41
volume-eda4e2c3-9d39-463f-84fb-9c6537f55a33
-rw-rw-rw- 1 root root
10737418240 Oct 16 10:43
img-cache-c62498e8-c14c-4554-9436-821822831300
-rw-rw-rw- 1 root root
21474836480 Oct 16 11:52
volume-ddea005c-10b7-40a6-bf52-4d70dec58821
-rw-rw-rw- 1 root root
10737418240 Oct 16 11:55
volume-7e739c91-aa8e-4457-8f19-8fda477a4826
-rw-rw-rw- 1 root root
10737418240 Oct 21 19:53
volume-b054002b-45a6-4efc-be61-dd3054ee44e9
Nova compute nodes mount these NFS shares as needed. When an instance has a volume created on NFS, Nova
compute mounts the NFS share while launching the instance.
root@vmdc-ceph1:~# ls -l /var/lib/nova/mnt/
total 8
drwxr-xr-x 3 root root 4096 Oct 23 05:14 446d1154b7aded6d478f79de396fd513
drwxr-xr-x 2 nova nova 4096 Oct 22 21:31 fa9d93b729574cd40cb02b1b42739f70
root@vmdc-ceph1:~# ls -l /var/lib/nova/mnt/446d1154b7aded6d478f79de396fd513/
total 17397624
-rw-rw-rw- 1 root
root 10737418240 Oct 8 00:36
img-cache-8d094419-43fa-4d5d-b5bf-1fbd680e4430
-rw-rw-rw- 1 root
root 10737418240 Oct 16 10:43
img-cache-c62498e8-c14c-4554-9436-821822831300
drwxr-xr-x 2 root
root
4096 Oct 8 00:17 nfs_test
-rw-rw-rw- 1 libvirt-qemu kvm 10737418240 Nov 6 19:22
volume-0bf63ad4-178f-4053-b358-0fbe6fa0f3ec
-rw-rw-rw- 1 root
root 1073741824 Oct 23 05:14
volume-0d7f5c76-05ae-46ea-8e66-e5867d0be7de
-rw-rw-rw- 1 root
root 10737418240 Oct 23 05:14
volume-27e2bd5a-6c62-4f33-aced-4275dfe3ebba
-rw-rw-rw- 1 root
root 10737418240 Oct 22 21:00
volume-3512a0c9-134d-4957-a385-ed1df013f4d3
Create a volume type for the NFS share with the Cinder Python client.
cinder type-create nfs
cinder type-key nfs set volume_backend_name=cdot-nfs
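A volume on the NetApp share can then be created against that type, for example:

cinder create --volume-type nfs --display-name nfs-vol1 10
cinder list      # the new volume should reach the 'available' status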
NetApp NFS shares can now be selected through Horizon. When creating a volume, select NFS as the
volume type to use NetApp storage.
Figure 5-33
Step 2
Step 3
Step 4
Restart Nova-compute with service nova-compute restart. From now on, all instances will have their
ephemeral disks pointing to this shared location.
Image Storage
In this implementation, Glance image storage uses Ceph as the backend. This configuration is built when
the relations between the charms are created. The following output shows the relations between Ceph and
Glance.
vmdc-admin@vmdc-maas1:~$ juju status ceph
services:
ceph:
charm: local:trusty/ceph-105
exposed: false
relations:
client:
- cinder
- glance
- nova-compute
mon:
- ceph
radosgw:
- ceph-radosgw
Object Storage
In this implementation, the Ceph RADOS gateway provides the object storage services. The RADOS gateway
implements the Swift API and allows object manipulation through it.
Note
As of this writing, the RADOS gateway charm does not have hacluster support, therefore no Juju charm-based
automatic HA is available. In this implementation, a single haproxy charm has been placed
manually in front of the three ceph-radosgw LXC nodes. Launchpad bug 1328927 tracks this
enhancement request.
Perform the following procedure to configure RADOS gateway once the charm installation is complete.
Note
During this implementation, object storage access and creation through the Horizon dashboard encountered
errors, and all configuration was done through the Python CLI. This is tracked by existing Launchpad
bug 1271570.
Step 1
Copy the Ceph keys onto the RADOS gateway nodes.
Currently, the Ceph client.admin keyring is not automatically copied onto the RADOS gateway nodes during
charm deployment. Copy the ceph.client.admin.keyring file from /etc/ceph on a Ceph node onto all three
RADOS gateway nodes.
Step 2
Step 3
Step 4
The following shows the content of RADOS gateway containers stored in Ceph:
root@vmdc-ceph2:~# rados --pool=.rgw ls
.bucket.meta.trusty-server-cloudimg-amd64-disk1.img:default.4358.2
.bucket.meta.copper1_container:default.4358.1
copper1_container
.bucket.meta.copper1:default.4355.1
.bucket.meta.copper1_objects:default.4355.2
copper1_objects
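Containers and objects such as those listed above can be created with the Swift CLI (python-swiftclient) against the RADOS gateway endpoint; the credentials file and object below are placeholders:

source copper1.rc                          # tenant credentials (placeholder file)
swift post copper1_container               # create a container
swift upload copper1_container trusty-server-cloudimg-amd64-disk1.img
swift list copper1_container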
Instance Migration
Two types of instance migration in OpenStack are described: cold and live.
Cold Migration
Cold migration in OpenStack allows a user to move instances within a host aggregate. Before the
migration begins, the instance is shut down; it is then started on the new host once the process is
completed. Access to the instance is lost while the migration is occurring. Once the migration is
finished, OpenStack prompts the user to verify the move: canceling moves the instance back to its
original host, while accepting completes the migration.
Cold migration works via command line or through Horizon.
Command Line
Run the following command to cold migrate an instance within a host aggregate. The ID of the
instance must be retrieved using the nova list command:
nova migrate <instance-id>
Once the migration is complete, you need to confirm it with another command. Use
nova migration-list to check whether the migration has reached the finished status. Once it has, run the
following command:
nova resize-confirm <instance-id>
Horizon
Cold migration can also be done through Horizon by navigating to the Instances tab. In the pull-down
menu, select Migrate Instance and confirm. When the instance has finished migrating, another button
appears asking the user to confirm the resize/migration. Once this is confirmed, the migration
is complete.
Live Migration
Live migration in OpenStack creates minimal downtime for instances as they are moved between hosts.
The instance will only be down for a few seconds while it is transferred over the network and
started on the specified host. If a host is going down for maintenance or another reason, live
migration is the best choice to move instances and reduce downtime.
Live migration works via command line by using the following command (once again use nova list to
retrieve the ID of the instance to migrate):
nova live-migration <instance-id> <destination-host-name>
This command moves the specified instance to the destination host with minimal downtime.
Note
For NFS-backed instances, shared storage must be configured as described in the Block Storage with
NetApp section above.
Note
There are issues using live migration from Horizon; using the CLI gives consistently working behavior. A
Launchpad bug has been opened here:
https://bugs.launchpad.net/charms/+source/openstack-dashboard/+bug/1393445.
Compute Nodes
Taking a compute node down, even with High Availability (HA), requires some manual intervention to
ensure no information is lost. The following link contains the Canonical description of Nova-compute
HA.
https://wiki.ubuntu.com/ServerTeam/OpenStackHA#Compute_.28Nova.29
Fully automated HA and instance migration are not possible with Nova Compute services out of the
box. Live migration is the recommended way to maintain Nova compute hosts and keep
instances running with minimal downtime. A process to ensure instances are always available can be
scripted via OpenStack CLI commands. As of this implementation, if a compute host goes down without
notice, the following manual intervention is necessary to get instances back up:
Restart compute host. Instances will come up in shutdown state, and will need a hard reboot to start
up again.
The Ceph/NFS volume can be detached from the shutdown instance and reattached to a new instance
on a running host. The administrator needs to change any static information on the volume, such as
IP addresses in /etc/network/interfaces, to match the new instance information allocated by OpenStack.
(The new IP address is visible via the instances tab.)
Use 'nova evacuate <instance-id> <host-id>' to move instances from the dead host to a live host via the
CLI; the commands nova list and nova host-list can be used to get both of those parameters (see the
sketch after this list).
If the Nova-compute host shutdown is planned, Canonical recommends using live migration to move
instances to hosts that will stay running while that host is shut down.
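As referenced above, the following is a minimal sketch of recovering instances from a dead compute host with nova evacuate; the instance ID and host name are placeholders, and shared storage (Ceph/NFS) is assumed so the existing instance disk is reused.

# Find the affected instances and pick a running target host
nova list
nova host-list

# Rebuild each affected instance on a live host; --on-shared-storage reuses
# the existing instance disk instead of rebuilding it from the image
nova evacuate --on-shared-storage <instance-id> <live-host-name>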
Control Nodes
A control node failing should not affect the stability of the OpenStack environment. There should be no
loss of information or access to OpenStack services due to a node being removed or failed. The following
commands can be used to get the status of a cluster:
sudo crm_mon -1
sudo corosync-quorumtool -s
sudo corosync-cfgtool -s
The following are outputs of the commands above and an example of a healthy keystone cluster:
root@juju-machine-3-lxc-3:~# crm_mon -1
Last updated: Fri Nov 14 18:04:20 2014
Last change: Thu Nov 13 17:10:44 2014 via crmd on juju-machine-1-LXC-4
Stack: corosync
Current DC: juju-machine-2-LXC-3 (167783727) - partition with quorum
Version: 1.1.10-42f2063
3 Nodes configured
4 Resources configured
Online: [ juju-machine-1-LXC-4 juju-machine-2-LXC-3 juju-machine-3-LXC-3 ]
Resource Group: grp_ks_vips
    res_ks_eth0_vip    (ocf::heartbeat:IPaddr2):    Started juju-machine-1-LXC-4
Clone Set: cl_ks_haproxy [res_ks_haproxy]
    Started: [ juju-machine-1-LXC-4 juju-machine-2-LXC-3 juju-machine-3-LXC-3 ]
root@juju-machine-3-LXC-3:~# corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Nov 14 18:04:21 2014
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          167783718
Ring ID:          16
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
 167783708          1 10-0-45-28.icdc.sdu.cisco.com
 167783718          1 10-0-45-38.icdc.sdu.cisco.com (local)
 167783727          1 10-0-45-47.icdc.sdu.cisco.com
root@juju-machine-3-LXC-3:~# corosync-cfgtool -s
Printing ring status.
Local node ID 167783718
RING ID 0
        id      = 10.0.45.38
        status  = ring 0 active with no faults
If one node fails, the behavior of the environment is as expected - there is no loss, and the juju status
command shows no errors.
However, during testing we did sometimes observe some adverse behavior; using updated charms
produced better results. Upon returning the machine to the started state, errors can be seen with
corosync/pacemaker, which are the two services that are installed via the hacluster charm. There are
issues returning the node to the corosync cluster in addition to triggering hook errors shown by juju
status that cannot be resolved. OpenStack services are still available after the restore, but multiple
consecutive failures would increase the chance of an irreversible failure or reduction of service
availability. A workaround to this issue was found and is documented in the following Launchpad bug.
(https://bugs.launchpad.net/charms/+source/hacluster/+bug/1392438).
The following procedure should correctly cluster the services again if they failed to automatically do so
after a restore:
Step 1
Stop each corosync and pacemaker service on the nodes that contain the service whose cluster failed to
rejoin correctly. For example, if keystone failed to cluster correctly the following commands would be
run on each machine running keystone:
sudo service corosync stop
sudo service pacemaker stop
Step 2
Once the services are stopped on each machine, start the corosync service on each node using the
following command:
sudo service corosync start
Step 3
Once every corosync service is started, start pacemaker on each node to complete the workaround.
sudo service pacemaker start
Step 4
Run the following commands to ensure the cluster has been formed correctly once again:
sudo crm_mon -1
sudo corosync-quorumtool -s
sudo corosync-cfgtool -s
Step 5
Run the following command to try to fix the hook errors seen in juju status; as of right now, the hook
errors may be incorrectly shown. This issue has been opened in Launchpad
(https://bugs.launchpad.net/juju-core/+bug/1393434). The most important thing is ensuring the cluster
is created successfully; the hook error issue seems to be only cosmetic.
juju resolved -r <service-with-hook-error>
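The whole workaround can also be scripted; the following is a minimal sketch under the assumption that the operator has SSH access to the three nodes running the affected service (node names, the ubuntu user, and the service name are placeholders).

#!/bin/bash
# Nodes running the service whose corosync/pacemaker cluster failed to re-form
NODES="juju-machine-1-lxc-4 juju-machine-2-lxc-3 juju-machine-3-lxc-3"

# Step 1: stop corosync and pacemaker on every node
for node in $NODES; do
    ssh ubuntu@$node "sudo service corosync stop; sudo service pacemaker stop"
done

# Step 2: start corosync on every node; Step 3: then start pacemaker on every node
for node in $NODES; do ssh ubuntu@$node "sudo service corosync start"; done
for node in $NODES; do ssh ubuntu@$node "sudo service pacemaker start"; done

# Step 4: verify the cluster has formed correctly again
ssh ubuntu@${NODES%% *} "sudo crm_mon -1; sudo corosync-quorumtool -s; sudo corosync-cfgtool -s"

# Step 5: clear the (cosmetic) hook errors reported by juju status
juju resolved -r <service-with-hook-error>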
CHAPTER
Benefits
The following benefits are provided by Network Virtualization (nV) Edge on the ASR 9000.
Simplifying management (operate two Cisco ASR 9000 router platforms as a single virtual Cisco
ASR 9000 Series system).
Eliminate the need for complex protocol-based High Availability (HA) schemes.
Devices attaching can dual home to both racks. For example, a device can have a bundled Ethernet
to the ASR 9000, and the member of bundle in the ASR 9000 can be in two racks; only one routing
peer and no Equal Cost Multi Path (ECMP) needed.
Requirements
The following hardware and software requirements are detailed.
Hardware: Cisco ASR 9000 Series SPA Interface Processor-700 and Cisco ASR 9000 Enhanced
Ethernet line cards are supported. Cisco ASR 9000 Enhanced Ethernet line card 10 GbE links are
used as inter rack links. The individual racks must be of same type (both 10-slots or 6-slots, and so
on). Mixed chassis types are not supported.
Restrictions
The following restrictions are emphasized for the Cisco ASR 9001 Series nV Edge System.
Refer to Restrictions of the Cisco ASR 9001 Series nV Edge System for more information.
Figure 6-1 ASR 9000 nV Edge System: two racks, each with an active RSP, a standby RSP, and line cards (LC), joined by the inter-rack links.
Control-plane Extension
The Route-Switch Processor (RSP) communicates using a Layer 1 (L1) Ethernet Out-of-Band Channel
(EOBC) extension to create a single, virtual control plane. The control-plane packets are forwarded from
chassis to chassis in these EOBC links. Each chassis has two RSPs, and each RSP has two EOBC ports,
with four connections between the chassis, to provide high redundancy. If any of the links go down, there
are three possible backup links. Only one of the links is used for forwarding control plane data, and all
of the other links are in the standby state. The EOBC links run at 1 Gigabit only; the SFP has to be a
1 Gigabit SFP, as 10 Gigabit SFPs are not supported.
Link Distribution
In the case of an ECMP or bundle-ether scenario, the ASR 9000 nV by default will use "Source IP,
Destination IP, Source port (TCP/UDP only), Destination port (TCP/UDP only), Router ID" to determine
which link it will take if it is IPv4 traffic or MPLS traffic (less than four labels).
Refer to ASR 9000 nV Edge Guide for more details.
The inter-rack data links terminate on TenGigE1/0/0/3, TenGigE1/0/1/3, TenGigE1/1/0/3, and TenGigE1/1/1/3 of rack 1, each paired with the corresponding port on rack 0.
Provider edge-Customer edge connection with default gateway for L2 Bronze tenants
ASR 9000 Data Center Provider Edge Implementation Toward MPLS Core
The ASR 9000 nV cluster is a Provider Edge (PE) router in the MPLS network. It is a best
practice to have devices dual home to the ASR 9000. In this implementation, Cisco CRS-1 core routers
connect to the ASR 9000 using bundle-ether interfaces. All of these interfaces are dual homed to the
ASR 9000.
The routing protocol used in MPLS core networks is usually OSPF or ISIS, and in this implementation
OSPF was used with core in area 0. LDP is configured for exchanging labels, and best practices for fast
convergence are implemented.
Multiprotocol Interior BGP (IBGP) is configured to peer with the remote Provider edge for both the IPv4
address family for Internet prefixes, and VPNv4 address family for MPLS-VPNs. Normally a route
reflector is used to distribute prefixes between all Provider edges and, therefore, is highly recommended.
Figure 6-2 ASR 9000 nV Edge Data Center PE (ASR 9000-PE1/PE2) dual-homed to the CRS core routers (CRS-P1 and CRS-P2) over bundle-ether interfaces (Bundle-Ether11 and Bundle-Ether12).
interface Bundle-Ether11
mtu 4114
ipv4 address 10.254.11.1 255.255.255.0
mac-address 4055.3943.f93
load-interval 30
!
interface Bundle-Ether12
mtu 4114
ipv4 point-to-point
ipv4 address 10.254.12.1 255.255.255.0
mac-address 4055.3934.f92
load-interval 30
!
router ospf 1
nsr
router-id 10.255.255.1
mpls ldp auto-config
nsf cisco
area 0
interface Bundle-Ether11
!
interface Bundle-Ether12
!
interface Loopback0
!
!
!
router bgp 200
address-family ipv4 unicast
!
address-family vpnv4 unicast
!
neighbor 10.255.255.201
remote-as 200
update-source Loopback0
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
!
address-family vpnv4 unicast
route-policy allow-all in
route-policy allow-all out
!
!
mpls ldp
nsr
interface Bundle-Ether11
!
interface Bundle-Ether12
!
!
Note
In this implementation, Interior BGP (IBGP) is used as Provider edge-Customer edge protocol since the
Fabric does not support Exterior BGP (EBGP). Currently IOS-XR on ASR 9000 does not support RFC
6368 to use ATTR_SET to send customer BGP attributes to remote Provider edge. However, prefixes
are advertised from Data Center Provider edge towards remote Provider edge. This will be supported in
a future release.
Note
If ASR 1000 is used as a Provider edge, local or remote, it is recommended to use internal-vpn-client
towards Customer edge along with route-reflector-client to forward the prefixes using IBGP as Provider
edge-Customer edge.
Note
If the ASR 1000 remote Provider edge uses EBGP to connect to remote Customer edge,
internal-vpn-client and route-reflector-client configurations are not required to advertise a route
originated from remote Customer edge.
From the ASR 9000 perspective, there are two sub-interfaces, one per Nexus 9300 leaf node. Please note
the BGP sessions are per tenant, and each tenant needs a sub-interface pair from the ASR 9000 side and
mapped to the VLAN on Nexus 9300 leaf node side where the external connection is configured as L2
with SVI mode.
Please see the Data Center ACI Fabric implementation chapter for more details on configuring external
L3 and L2 connections on the ACI Nexus 9300 leaf nodes.
Please note that loopback interfaces are created in the tenant VRF, and used for IBGP peering between
the ASR 9000 and ACI Fabric border leafs. The loopback on the ACI Fabric side needs to be reachable
from ASR 9000, and hence a static route is needed towards the ACI Fabric from ASR 9000. Similar
configuration is required on the ACI Fabric side as well. The actual IBGP peering is between the loopback
on the ASR 9000 and the loopback on each of the Nexus 9300 leaf nodes. Update source loopback is
configured on the ASR 9000. Figure 6-3 shows the ASR 9000 running BGP over the port channels per
tenant.
Figure 6-3 Per-tenant IBGP peering between the ASR 9000 loopback and the loopbacks on the ACI Fabric border leaves (for example, Border Leaf-2 loopback interface 10.2.200.106), with static routes on each side pointing to the remote loopback over the per-tenant sub-interfaces (10.2.201.x and 10.2.202.x).
ASR 9000 Tenant Configuration for IBGP as Provider edge-Customer edge Routing Protocol
Silver Tenant as well as L3-Bronze tenant types use this implementation.
!# VRF definition for silver tenant s001
vrf s001
address-family ipv4 unicast
import route-target
2:417
export route-target
2:417
!
!#LoopBack interface on asr9k for IBGP peering
interface loopback 411
vrf s001
ipv4 address 10.2.200.1/32
!
!#sub-interface for portchannel1
interface Bundle-Ether 5.411
vrf s001
ipv4 address 10.2.201.1 255.255.255.0
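The per-tenant IBGP neighbor and the static route to reach the border-leaf loopback, described above, are not included in the snippet. The following is a minimal sketch for tenant s001; the border-leaf loopback 10.2.200.106 is taken from Figure 6-3, while the next-hop address, the rd value, the redistribution choice, and the route-policy names are assumptions for illustration only.

!#Static route to reach the border-leaf loopback via the leaf end of the sub-interface
router static
 vrf s001
  address-family ipv4 unicast
   10.2.200.106/32 10.2.201.2
!
!#Per-tenant IBGP peering, sourced from the tenant loopback
router bgp 200
 vrf s001
  rd 2:417
  address-family ipv4 unicast
   redistribute connected
  !
  neighbor 10.2.200.106
   remote-as 200
   update-source Loopback411
   address-family ipv4 unicast
    route-policy allow-all in
    route-policy allow-all out
   !
  !
 !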
L3 Bronze Configuration
For the Bronze tenant, the border leaves on the ACI Fabric have IBGP peering with the ASR 9000 nV Edge
device. Each border leaf has an L2 port channel carrying the external VLAN for each tenant, and it
terminates on a port channel sub-interface on the ASR 9000. Loopback interfaces configured on both the
border leaves and ASR 9000 act as BGP Router ID. The bronze tenant BGP configuration on the ASR
9000 is identical to the Silver tenant configuration specified in ASR 9000 Tenant Configuration for
IBGP as Provider edge-Customer edge Routing Protocol, page 6-6.
For redundancy purposes, each Nexus 9300 border leaf has the following:
1. A default route pointing to the upstream ASR 9000 nV Edge device for north-bound traffic.
2. The ASR 9000 has static routes pointing to the SVI interfaces in the Fabric as the next hop for south-bound (Data Center-bound) traffic.
Static routes for tenant subnets that are to be reachable from the Internet are configured on the ASR 9000
with the next hop set to the ASA external sub-interface on the tenant context for the corresponding gold
tenant, which then forwards the traffic to the tenant VMs on its inside interfaces; all tenant VMs have their
default gateway on the tenant ASA context (or alternately ASAv). The Copper container has a similar
configuration; the only difference is that in the Copper case there is only one ASA context shared by all
copper tenants, and only one external connection to the ASR 9000 Data Center Provider edge. These static
routes for tenant subnets are redistributed into IPv4 IBGP towards the Service Provider's route reflectors,
to distribute to all other Internet routers in the network.
E-Gold Tenant Internet Connection Configuration on ASR 9000 Data Center Provider Edge
Figure 6-4 shows the E-Gold Tenant Internet Connection.
Figure 6-4 E-Gold Tenant Internet Connection: the ASR 9000 (11.1.8.254) connects over bridge domain dmz_external_bd to the dmz_asa context (11.1.8.253), which fronts the DMZ VMs on subnet 11.1.8.0/29; the ASR 9000 also provides the MPLS L3 VPN and Internet connectivity.
interface Bundle-Ether10.3080
description g008 internet
vrf internet
ipv4 address 11.1.8.254 255.255.255.252
encapsulation dot1q 3080
!
router static
vrf internet
address-family ipv4 unicast
10.1.6.101/32 11.1.4.5
11.1.4.0/29 11.1.4.5
11.1.4.0/29 11.1.4.253
11.1.6.0/29 11.1.6.253
11.1.8.0/29 11.1.8.253
12.1.1.3/32 11.1.3.5
12.1.1.4/32 11.1.4.253
12.1.1.6/32 11.1.6.5
12.1.1.6/32 11.1.6.253
12.1.1.7/32 11.1.7.5
12.1.1.8/32 11.1.8.253
!
router bgp 200
!
vrf internet
rd 9:999
address-family ipv4 unicast
redistribute static
!
!
The ASR 9000 nV Edge provides the copper tenants with access to and from the Internet. Figure 6-5 shows the
connectivity of the ASR 9000 to the ASA cluster.
Figure 6-5 Copper Internet connectivity: the ASR 9000 (AS 200, 10.4.101.1 on Po10.500) peers over eBGP with the ASA cluster (AS 65101, 10.4.101.2); the DMZ subnet is 11.1.8.0/29.
Interface Configuration
The ASA outside interface is mapped to an ASR 9000 bundle Ethernet sub-interface. All copper tenants'
traffic leaving the ASA to and from the Internet shares the same sub-interface. Copper traffic is then placed
into a VRF, which carries the traffic out to the MPLS core, separating it from other tenants' traffic.
The following IOS-XR configuration snippet shows the interface-related configuration.
interface Bundle-Ether10.500
description copper tenants asa outside interface
mtu 9216
vrf internet
ipv4 address 10.4.101.1 255.255.255.0
encapsulation dot1q 500
!
interface Bundle-Ether10
description copper_vpc_103_104
mtu 9216
!
interface TenGigE0/0/1/2
bundle id 10 mode active
!
interface TenGigE1/1/1/2
bundle id 10 mode active
Routing Configuration
The ASR 9000 configures static routes for the NAT subnets used on the ASA for traffic coming from the
Internet into the copper container.
router static
vrf internet
address-family ipv4 unicast
111.21.0.0/16 10.4.101.2
!
EBGP is used to exchange routes between the ASR 9000 and ASA. A default route is injected by the
ASR 9000 towards the ASA.
router bgp 200
bgp router-id 200.200.200.1
address-family ipv4 unicast
!
vrf internet
rd 9:999
address-family ipv4 unicast
redistribute connected
!
neighbor 10.4.101.2
remote-as 65101
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
default-originate
!
!
Deployment Considerations
The following considerations are recommended.
Provider edge-Customer edge implementation in the data center uses IBGP as the ACI Fabric does
not support EBGP at this time.
In a typical implementation, the remote Provider edge-Customer edge connection uses EBGP.
Currently IOS-XR on ASR 9000 does not support RFC 6368 to use ATTR_SET to send Customer
edge attributes to remote Provider edge using IBGP. This support will be available in a future
release. Although the RFC 6368 support is not there, the prefixes are advertised from Data Center
Provider edge towards remote Provider edges.
If the ASR 1000 remote Provider edge uses EBGP to connect to remote Customer edge,
internal-vpn-client and route-reflector-client configurations are not required to advertise a route
originated from remote Customer edge.
CHAPTER
1. Edge policies enforce the agreed-upon contractual limits. The Service Provider identifies traffic
based on agreed-upon markings, and classifies and enforces contractual limits at customer
attachment points.
2. Customer Attachment (CA) is a location, or multiple locations, where customer sites attach to the
MPLS-VPN WAN network of the provider. For remote sites, the customer edge QoS enforcement is
implemented at the remote Provider Edge (PE) devices from where tenants attach to the SP network
using Layer 3 VPN (L3VPN).
3. Within the data center, the tenant Virtual Machines (VMs) and bare metal servers attach to the Data
Center network based on the Application Centric Infrastructure (ACI) Fabric. This is another edge
enforcement point for per-VM/vNIC limits enforcement; however, on the ACI Fabric,
policing/rate-limiting is not supported in this release, so in this implementation only classification
based on the customers' DSCP and mapping to the ACI Traffic class is done. Tenant Virtual Machines
attach via Application Virtual Switch (AVS), bare metals attach directly or via the Fabric Extender
(FEX). North bound traffic reaches the Data Center PE and on the Data Center PE tenant policy
enforcement is done for tenant aggregate bandwidth limits per class and traffic needs to be identified
using IP/DSCP.
4. The ACI fabric offers 3 classes of traffic for tenants, and depending on the tenant type, tenant traffic
is mapped to one or more of these classes. These 3 classes are configured so that one class (Level-1)
is low latency switched, Level-2 is given the bulk of bandwidth remaining to carry premium data,
and Level-3 is the best effort class, also called standard data, with a small amount of bandwidth
reserved. All east-west tenant traffic rides on the appropriate ACI Traffic class.
5. ACI Fabric does not mark dot1p bits on the wire. IP/DSCP is used to map tenant traffic to different
or IaaS) is in its own QoS domain and implements policies independently from the Data Center and
WAN network. This implementation does not cover this topic.
2. The MPLS-Core network (for example, Service Provider Next Generation Network (SP-NGN) or
an Enterprise-wide MPLS-Core WAN) implements a QoS that supports the different offered
services for WAN transport. The end tenant customer traffic is mapped to one of the WAN/SPNGN
service classes based on the contractual SLA between the tenant and the Enterprise-WAN or
SP-NGN.
3. Inside the Data Center, another QoS domain exists to support Data Center service offerings. The
tenant customer's traffic is mapped into one of the Data Center classes of service to implement the
contractual SLA.
The remote provider equipment is the boundary between the tenant network and the provider network
(that is, the WAN/SP-NGN); it classifies and marks traffic incoming into the WAN/SP-NGN from the
tenant. This is also the enforcement point for traffic entering the WAN/SP-NGN, and hence traffic is
conditioned to enforce the contractual agreement and support the agreed-upon SLA by
policing/rate-limiting and mark-down. Traffic that is allowed into an SP Traffic class is marked with a
WAN/SP-NGN class marking, so that the rest of the Enterprise/SP-NGN QoS domain and the Data
Center QoS domain can trust this marking and use it to classify and provide appropriate treatment.
The Data Center-PE is the boundary between the WAN/SP-NGN and the Data Center. While the
WAN/SP-NGN and Data Center can also be two independent Service Providers/Operators, in this
implementation, they are assumed to be one. For the ingress direction from WAN/NGN to the Data
Center, the Data Center-PE trusts the WAN/NGN markings and classifies traffic into similar classes
within the Data Center. The meaning of the markings in the MPLS network that use the MPLS Traffic
Class (MPLS-TC) field is kept consistent with the dot1p Class of Service (CoS) markings used within
the Data Center. In the egress direction, i.e., from the Data Center to the MPLS network, the Data
Center-PE implements tenant aggregate policy enforcement, as well as mapping from the Data Center
classes to the WAN/NGN classes. Figure 7-1 shows the end-to-end QoS domains.
Figure 7-1 End-to-End QoS Domains: the enterprise customer/tenant network, the SP-NGN or enterprise MPLS WAN (with the remote PE), and the SP data center or SP-hosted IaaS (the ACI Data Center behind the DC PE); DSCP is used in the tenant domain, MPLS TC in the MPLS core, and DSCP/ACI classes in the data center.
QoS Transparency
QoS transparency is an important requirement for many customers; it means that the QoS labels used by
end users are preserved when traffic transits a provider network. Typically, the IP/DSCP or Type of
Service (ToS) bits are used to mark QoS labels by end users; however when this traffic transits a Cloud
Service Provider or WAN provider network, the SP classifies the tenant traffic and marks SP QoS labels.
To provide full QoS transparency, the SP QoS Labels should be independent of the Tenant QoS labels.
This can be achieved by using outer header QoS markings for SP/WAN QoS labels.
In the MPLS WAN network, the SP classifies traffic coming in from customer sites, and after
classification and traffic conditioning, the SP class is marked using MPLS-TC bits in the MPLS Core.
The SP MPLS network trusts the MPLS-TC field and provides the PHB of the class that is marked.
When the traffic reaches the Data Center, the data center provider equipment (DC-PE) maps the MPLS
QoS domain labels in to Data Center domain labels. This is done using the Ethernet Dot1p bits on the
Data Center PE towards the data center.
From the Data Center PE, traffic enters the ACI Fabric, and the ACI Fabric classifies it into ACI classes
based on the EPG type. If a tenant type has multiple classes of traffic, classification based on dot1p bits
can be used to select one of the 3 ACI Traffic classes via Custom QoS; however, this is only supported
on L3-external connections (used in the L3-Bronze and Silver containers). On L2 external-based
connections (used in E-Gold, Copper, and L2 Bronze), all of the traffic is mapped to the same ACI
Traffic class, as Custom QoS is not available at this time.
Note
You can't configure Custom QoS policy at the "External Bridged Network" level. An enhancement
(CSCur79905) is filed to enable this capability.
In the opposite direction, traffic originating from the tenant Virtual Machines is mapped to ACI classes
using DSCP and EPGs. Traffic that is northbound, that is, moving towards remote customer sites, exits
the ACI fabric and reaches the Data Center PE. The Data Center PE then does the edge enforcement
based on the tenant type and the IP/DSCP bits, selects the appropriate SP class, and marks the traffic
with the MPLS-TC bits.
This implementation does not support AVS policing for edge conditioning and rate limiting at the virtual
access.
Trust Boundaries
The SP network identifies the traffic at its edges, and marks with SP QoS labels. These labels are trusted
from that point on within the SP network. The edges of the SP WAN network form a trust boundary of
traffic entering the SP WAN network. Similarly, there is another trust boundary at the Data Center PE.
At the trust boundaries, contract SLA enforcement and admission control is done. This is typically done
using policing: traffic under the agreed-upon bandwidth limit is allowed and marked to the correct class,
while exceeding traffic is either dropped or marked down. The PHB in terms of bandwidth guarantee or
low latency is implemented on every node in the SP network based on the markings selected.
To identify the tenant traffic, for instance the VoIP traffic, the customer's DSCP can be used; this is
agreed upon between the tenant and the SP. For instance, up to a certain bandwidth with DSCP=EF is
accepted, marked with MPLS-TC=5, and given low latency behavior.
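As an illustration only (the class-map name, policy-map name, interface, and the 50 Mbps rate are assumptions, not part of this implementation), such an edge policy on an IOS XR PE could look like the following sketch.

!#Illustrative ingress edge policy: accept DSCP=EF up to a rate, mark MPLS-TC=5
class-map match-any VOIP-IN
 match dscp ef
 end-class-map
!
policy-map TENANT-EDGE-IN
 class VOIP-IN
  police rate 50 mbps
   conform-action set mpls experimental imposition 5
   exceed-action drop
  !
 !
 class class-default
  set mpls experimental imposition 0
 !
 end-policy-map
!
interface Bundle-Ether11.100
 service-policy input TENANT-EDGE-IN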
Figure 7-2 shows the enforcement point for customer to Data Center traffic and the trust boundary.
Figure 7-2 Customer to Data Center traffic: the tenant network marks DSCP (untrusted); the trust boundary is at the WAN/NGN edge PE, which applies traffic conditioning and marks MPLS-TC; from there, the MPLS core, the DC PE, and the ACI data center (DSCP/ACI classes, Cisco AVS and VMs) are trusted.
Figure 7-3 shows the enforcement point for Data Center to customer premises.
Figure 7-3 Data Center to Customer Premises: traffic from the VMs behind the Cisco AVS carries trusted DSCP/ACI markings; the DC PE applies tenant aggregate enforcement and conditioning, classifies on DSCP, and marks MPLS-TC towards the MPLS core and the remote PE.
Table 7-1  Dot1P Bits in the Data Center, MPLS TC in the WAN, and ACI Traffic Class per Traffic Type
Management and VoIP traffic map to the Low Latency ACI traffic class; Storage/Infra and Call Signaling map to the Bandwidth Guarantee class; Premium Data (Dot1P/MPLS-TC 2 and 1) maps to the Bandwidth Guarantee class with WRED applied in the WAN; Standard Data is the remaining, best-effort class.
Note
COS=6 and COS=7: Tenant EPGs cannot send traffic in this class. Only Cloud Service Provider in-band
management and backend EPGs can send traffic with this class, to protect traffic going in Traffic class 1.
Note
Tenant VoIP traffic is also sent in the low latency ACI class; at this time the AVS does not support rate
limiting this traffic. CoS 4 is used for Network File System (NFS) traffic, and this traffic is seen in the
ACI fabric between UCS/compute and storage. This marking is also used for any Data Center
infrastructure traffic such as vMotion.
Note
CoS 3 is used for tenant traffic for call signaling. It is also used for Fibre Channel over Ethernet (FCoE)
traffic inside the UCS where both call signaling and FCoE traffic share the same queue. This
implementation uses IP-based storage instead of Fibre Channel/Fibre Channel over Ethernet (FC/FCoE).
If using FC/FCoE from UCS to a separate SAN, then UCS queue for COS=3 needs to be configured for
FC/nodrop treatment.
Note
Provider-marked IP/DSCP can be used if QoS transparency is not required; this rewrites the IP/DSCP to SP
markings. This implementation preserves the customers' original IP/DSCP markings, except for the
traffic flowing through the SLB Citrix NetScaler 1000v. For traffic flowing through the NS1000V, the
connection opening and closing SYN-ACK and FIN-ACK packets do not have IP/DSCP preserved.
1. Low Latency Switched Traffic. For real-time apps such as VoIP.
2. Call Signaling Class. Bandwidth is guaranteed for signaling for VoIP and other multimedia.
3.
4.
To make the offerings simple, these traffic classes are bundled and mapped to tenant types.
Table 7-1 shows that Gold tenants can send traffic with DSCP=EF, and the Data Center/MPLS network
classifies it as VoIP in their domain to provide low latency guarantee by switching it in the priority
queue. Call control is also allowed for Gold tenants and recognized by marking DSCP=AF31. All other
traffic from Gold tenants is treated as premium data QoS service class traffic. For Silver tenants, all
traffic is treated as premium data, and there is no VoIP or call signaling class offered. For Bronze tenants,
all traffic is treated as standard data class.
Table 7-2  Data Center and WAN Traffic Classes per Tenant Type

Data Center and WAN Traffic Classes   Customer Marking   Gold   Silver   Bronze/Copper
VoIP (Low Latency)                    DSCP=EF or CS5     x
Call Signaling                        DSCP=AF31          x
Premium Data                          Any                x      x
Standard Data                         Any                                x
On the ASR 9000 Data Center PE, the tenant aggregate SLA can be implemented to allow per tenant
aggregate bandwidth and rate limiting.
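As an illustration of how such a tenant aggregate policer could be expressed on the ASR 9000 (a sketch only: the class and policy names are assumptions, the rates are the Gold premium-data values from Table 7-3, and the conform/exceed markings follow the CoS 2/1 convention of Table 7-5):

!#Illustrative 2R3C aggregate policer for a Gold tenant's premium data
policy-map GOLD-TENANT-AGG-IN
 class PREMIUM-DATA
  police rate 500 mbps peak-rate 3 gbps
   conform-action set mpls experimental imposition 2
   exceed-action set mpls experimental imposition 1
   violate-action drop
  !
 !
 end-policy-map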
Note
Per-VM rate limiting is not possible with this implementation as the AVS does not support rate-limiting
or policing at this time.
Table 7-3  Per-Tenant Aggregate Policing per Traffic Type

Traffic Type    Gold                                Silver                   Bronze/Copper
VoIP            1R2C, 50 Mbps
Call control    1R2C, 10 Mbps
Premium data    2R3C, CIR=500 Mbps, PIR=3 Gbps      2R3C, CIR=250 Mbps,
                (WRED in the MPLS core drops        PIR=2 Gbps
                out-of-contract traffic before
                dropping CIR traffic)
Standard data                                                                1R2C, rate limited to
                                                                             100 Mbps, no CIR per
                                                                             tenant
Note
1. IFC Class: Strict Priority class. All the IFC originated or destined traffic is classified into this class. In Flowlet Prioritization mode, prioritized packets use the IFC class.
2.
3. Span Class: Best Effort class, DWRR mode, least possible weight; can be starved. All the SPAN and ERSPAN traffic is classified into this class.
This implementation uses the 3 user configurable traffic classes to permit three types of PHB. Tenant
EPGs are mapped to this QoS level directly for Silver, Bronze, and Copper because the traffic from these
tenants is mapped to one traffic class. In the case of E-Gold/Gold tenants, the custom QoS option is used
to map traffic based on DSCP to classify into VoIP class versus non-VoIP class. In this case, a single
EPG can generate traffic of different Traffic classes, and tenant IP/DSCP marking is used to pick the
traffic class to be used. For DSCP=EF, VoIP traffic, ACI traffic class Level-1 is selected, which has low
latency PHB. However, please note at this time there is no rate-limiting capability on the fabric and low
latency queue packets are switched first before switching Level-2 and Level-3 class packets based on
their bandwidth remaining weights.
Note
In this implementation, the Level-2 ACI traffic class is used for bandwidth-guaranteed tenant traffic as
well as all infrastructure traffic such as IP storage (NFS) and vMotion.
Also note that there is no policing or rate limiting at the AVS virtual edge currently, which should be a
consideration for offering any level 1 tenant traffic class.
This implementation does not provide remarking for any of the traffic at ACI fabric edge. Incoming
DSCP is trusted across the fabric.
The traffic classes in APIC map to specific port classes on the fabric.
Table 7-4 shows QoS classes in APIC and corresponding classes at the switch port.
Table 7-4  QoS Classes in APIC and Corresponding Classes at the Switch Port

APIC QoS Class         Switch Port Class
Level3 / Unspecified   Class 0
Level2                 Class 1
Level1                 Class 2
-                      Class 3 (IFC)
-                      SPAN
-                      SUP
Table 7-5 shows sample bandwidth allocation in the fabric for various tenant containers and traffic
classes.
Table 7-5  Sample Bandwidth Allocation per ACI Traffic Class

ACI Traffic Class   Type                   DSCP                             Dot1P COS   Bandwidth Weight   Tenant EPGs          Remarks
Level-1             Low Latency            EF                                           -NA-               E-Gold/Gold          No rate limiting at ACI ingress
Level-2             Bandwidth Guaranteed   Any (including call signaling)   3 for CS,   85%                E-Gold/Gold/Silver   Bandwidth weight kept high to guarantee this class over Level-3
                                                                            2/1
Level-3             Best Effort            any                              any         15%                Bronze/Copper        Bandwidth weight kept low to allow Level-2 to go first during congestion
Bandwidth allocation in APIC for traffic classes is done under QoS Class Policies. This is a global
configuration setting that affects the whole fabric. The total bandwidth available on a fabric switch port
is divided among the three traffic classes as shown in Figure 7-4. The scheduling algorithm is set as
Strict Priority for Level-1 and WRR for other classes.
Figure 7-4
Note
The GUI allows bandwidth percentage configuration for Level-1 class when Strict Priority is selected.
The Strict Priority algorithm does not look at the bandwidth percentage and rate-limiting is not allowed.
An enhancement is filed to grey out bandwidth allocated field when Strict priority is configured
(CSCur84469).
Classification
In the ACI Fabric, you can configure classification policies using QoS Class drop-down list at the EPG
level or External Network level. When you set the QoS class at the EPG level or External Network level,
all incoming traffic will be classified into the specified level (Level-1, Level-2 or Level-3) within the
fabric. Figure 7-5 shows an EPG level classification policy using "QoS Class" configuration for Bronze
tenant.
Figure 7-5
For EPGs and External Routed Networks, you can also configure Custom QoS policies. The Custom
QoS policies allow classification of traffic into different classes based on incoming CoS or DSCP. With a
Custom QoS policy, you can also remark the DSCP if needed.
The External Bridged Networks configuration does not support Custom QoS at this time. This means
that all incoming traffic needs to be classified into the same class irrespective of the type of traffic. In
the case of Gold and L2 Bronze tenants, Bridged External Networks is used for external connectivity.
All Gold traffic entering the Fabric from ASR9K-NV edge is classified into Level-2 at the ingress of the
fabric. This means that the low latency voice traffic and the data traffic go into the same Level-2 class
in the North-South direction. An enhancement bug is filed to address this limitation.
CSCur79905ENH: Need to support custom QoS for External Bridged Networks
Figure 7-6 shows all traffic received on the external bridged network is classified into Level-2 class.
Figure 7-6
At the EPG level, a custom QoS policy can be applied to separate VOIP traffic and data traffic into
different traffic classes. Figure 7-7 shows a custom QoS policy at the EPG level.
Figure 7-7
Figure 7-8 shows a custom QoS policy for classifying Gold tenant's south-north traffic. In this example,
the VoIP traffic is classified into Level-1 based on DSCP CS5 and EF. Traffic that falls into DSCP range
CS1 to AF43 is classified into Level-2 bucket.
Figure 7-8
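For reference, a policy like the one in Figure 7-8 can also be pushed through the APIC REST API. The following is a sketch only: it assumes the qosCustomPol and qosDscpClass managed objects with from/to/prio attributes, and the tenant, policy, and EPG names are illustrative, not taken from this implementation.

<fvTenant name="g001">
  <!-- Custom QoS policy: DSCP CS5-EF to Level-1 (VoIP), CS1-AF43 to Level-2 (data) -->
  <qosCustomPol name="gold_custom_qos">
    <qosDscpClass from="CS5" to="EF" prio="level1" />
    <qosDscpClass from="CS1" to="AF43" prio="level2" />
  </qosCustomPol>
  <fvAp name="app01">
    <fvAEPg name="epg01">
      <!-- Attach the custom QoS policy to the EPG -->
      <fvRsCustQosPol tnQosCustomPolName="gold_custom_qos" />
    </fvAEPg>
  </fvAp>
</fvTenant>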
Trust
This implementation uses a simplified trust model in which the tenant EPG is mapped to a single class
or set of classes.
Marking
Traffic inside the fabric is expected to use the traffic class assigned by configuring the EPG, external routed
network, or external bridged network to an ACI traffic class or level. Additionally, marking of the
DSCP bits can be done if needed.
Note
In this implementation the Customer IP/DSCP is not modified to provide QoS Transparency.
Traffic exiting the ACI Fabric towards the UCS fabric interconnects or external devices has dot1p set to 0.
An enhancement request has been filed to provide dot1p marking corresponding to the ACI Traffic
Level.
UCS QoS
The UCS unified fabric unifies LAN and SAN traffic on a single Ethernet transport for all blade servers
within a UCS instance. In a typical compute implementation, UCS uses FCoE to carry Fibre Channel
and Ethernet traffic on the same physical Ethernet connection between the Fabric Interconnect and the
server. This connection terminates at a converged network adapter on the server, and the unified fabric
terminates on the uplink ports of the Fabric Interconnect. Separate uplink ports are used for Ethernet and
FC/FCoE traffic on the uplink side, and separate vEth and vHBA interfaces are created on the server.
Note
This implementation uses only IP-based storage not FC/FCoE; all traffic is on Ethernet NICs.
AVS Encapsulation
Tenant VMs attach to the AVS, and the ACI fabric extends up to the virtual port-group where the VM is
connected. VMs are considered endpoints by the ACI fabric and belong to an EPG. AVS is used with
local switching enabled and VLAN encapsulation in this implementation. EPGs are mapped to virtual
port-groups on VMM and a VLAN from the vlan pool is used to transport traffic based on the policy
configured on the APIC. Currently the AVS does not support marking Dot1p QoS corresponding to the
EPG QoS. Hence all traffic will be marked with dot1p bits set to 0.
The following enhancement is filed to track marking capabilities in AVS.
Note
For this implementation, the dot1p bits are not set by AVS, hence all the traffic will use the default queue.
It is recommended to set the bandwidth weight for this queue to a high value.
QoS Policy
The QoS policy determines the QoS treatment for the outgoing traffic from a vNIC of the UCS blade
server. For UCS servers deployed with the AVS, a QoS policy with the Host Control Full setting is
attached to all vNICs on the service profile (logical blade server). The policy allows the UCS to preserve
the CoS markings assigned by the AVS in the future, when supported. If the egress packet has a valid CoS
value assigned by the host (that is, marked by AVS QoS policies), then the UCS uses that value.
Otherwise, the UCS uses the CoS value associated with the Best Effort priority selected in the Priority
drop-down list. Figure 7-9 shows the QoS policy configuration.
Figure 7-9
Deployment Considerations
The following considerations are recommended.
The ACI Fabric currently allows 3 traffic classes that are user configurable. These classes are referred
to as Level-1, Level-2, and Level-3. Level-1 is typically configured as the strict priority class.
You can classify the traffic to Level-1, Level-2 or Level-3 class at the EPG, "External Bridged
Network" and "External Routed Network" level.
Custom QoS policy configuration at the EPG and External Routed Network level allows
classification based on incoming CoS or DSCP. This policy also allows remarking DSCP based on
incoming CoS/DSCP; however, this capability is not implemented in this solution, to allow DSCP-based
QoS transparency.
If the incoming traffic has a CoS marking, the ACI Fabric resets it to CoS 0. An enhancement
(CSCuq78913) is filed to preserve CoS across the ACI fabric.
You can't configure Custom QoS policy at the "External Bridged Network" level. An enhancement
(CSCur79905) is filed to enable this capability.
In this implementation, premium data traffic shares the Level-2 class with IP storage (NFS) and
vMotion.
Cisco AVS does not offer marking capabilities. An enhancement CSCuq74957 is filed to track this.
The global QoS policy in APIC allows bandwidth configuration for Strict Priority class. Since there
is no rate-limiting capability for strict priority class, bandwidth modification should not be allowed.
CSCur84469 is filed to grey out this field.
In the current implementation with UCS, all traffic falls into the default queue since dot1p marking
is not supported on AVS and Leaf switches.
CHAPTER
Figure 8-1 (chapter topology): the ACI Fabric (Spine201-204 and Leaf101-106) with the APIC cluster (APIC1, APIC2, APIC3), UCS 6296 Fabric Interconnects A and B hosting the tenant VMs and NetScaler 1000V virtual appliances, and NetApp FAS3200 series storage.
The Expanded Gold Tenant Container made use of the Cisco ASA 5585 adaptive security appliance as
the perimeter firewall for the protected workload virtual machines (VMs) and virtual appliances. Cisco
ASA 5585 delivers superior scalability and effective, always-on security designed to meet the needs of
an array of deployments. The Cisco ASA 5585 security appliance is partitioned into multiple virtual
devices, known as security contexts. Each ASA security context is an independent virtual device, with
its own security policies, interfaces, and administrators. For each Expanded Gold Tenant Container, two
ASA security contexts are deployed.
The Citrix NetScaler 1000v is an application delivery controller that offers web and application load
balancing, acceleration, security and offload feature set in a simple and easy to install virtual appliance
form factor. The NetScaler 1000v is used to intelligently distribute and optimize Layer 4 to Layer 7
(L4L7) network traffic for the generic 3-tier application with web, application, and database tiers. The
Expanded Gold Tenant Container has two NetScaler 1000v pairs, each configured in active-standby high
availability mode.
The Application Centric Infrastructure (ACI) fabric provides the robust network fabric to create a highly
flexible, scalable, and resilient architecture of low-latency, high-bandwidth links. The ACI Fabric
simplifies and flattens the data center network, with centralized automation and policy-driven
application profiles. The Cisco Application Policy Infrastructure Controller (APIC) is the unifying point
of automation and management for the ACI Fabric. APIC provides centralized access to all fabric
information, optimizes the application lifecycle for scale and performance, and supports flexible
application provisioning across physical and virtual resources.
The ASR 9000 Aggregation Services Router provides access to external networks for the Expanded Gold
Tenant Container, typically via IP or Multi Protocol Label Switching (MPLS) based connectivity to the
private intranet and Internet. The ASR 9000 is utilized in network virtualization (nV) mode, where two
or more physical ASR 9000 chassis are combined to form a single logical switching or routing entity.
The ASR 9000 is used as an MPLS Provider Edge (PE) router, providing Layer 3 VPN (L3VPN) and
Internet connectivity to the service provider IP/MPLS network for the Expanded Gold Tenant Container.
The workload VMs and virtual appliances for the Expanded Gold Tenant Container are hosted on
VMware vSphere infrastructure. The VMware vSphere ESXi hypervisors are hosted on the hardware
platform provided by the Cisco Unified Computing System (UCS) System with B-Series blade servers.
VMware vSphere is VMware's cloud operating system that virtualizes, aggregates, and manages a large
collection of infrastructure resources (CPUs, memory, networking, and so on) to provide pools of virtual
resources to the data centers, transforming them into dramatically simplified cloud computing
infrastructures. VMware vSphere consists of several technologies that provide live migration, disaster
recovery protection, power management and automatic resource balancing for data centers.
Cisco UCS is the next generation data center platform that unites compute, network, storage access,
virtualization and management software into a single, highly available, cohesive, and energy efficient
system. UCS is designed and optimized for various layers of virtualization to provide an environment in
which applications run on one or more uniform pools of server resources. The system integrates
low-latency, lossless Ethernet unified network fabric with x86-architecture servers.
The Cisco Application Virtual Switch (AVS), which is a hypervisor-resident virtual network edge switch
designed for the ACI Fabric, serves as the distributed virtual network switch on the VMware vSphere
ESXi hypervisors. Cisco AVS provides consistent virtual networking across multiple hypervisors to
simplify network operations and provide consistency with the physical infrastructure.
Persistent disk storage for the workload VMs and virtual appliances are provided by NetApp FAS3200
series storage system. The NetApp FAS3200 family offers robust enterprise-grade feature set with tools
to make management easier over the life of the system. The NetApp storage system is configured in
cluster-mode (or c-mode), which supports multi-controller configurations with a global namespace and
clustered file system. Access to the storage system is via the Network File System (NFS) protocol.
Figure 8-2 Expanded Gold Tenant Container logical topology: the ASR 9000 (MPLS L3 VPN and Internet) connects to the pvt_asa and dmz_asa contexts; the private zone comprises the Web 10.1.1.0/24, App 10.1.2.0/24, and Database 10.1.3.0/24 subnets plus the SLB subnet 10.1.4.0/24 with a NetScaler 1000V HA pair; the DMZ comprises the 11.1.8.0/29 subnet for the DMZ VMs with its own NetScaler 1000V HA pair.
The private zone has three private subnets, designed for generic 3-tier application profile with web,
application and database tiers, plus a dedicated subnet for the Citrix NetScaler 1000v operating in one
arm mode. The tenant connects to the ASR 9000 PE to access the private intranet over the service
provider MPLS L3VPN network. Each tenant has its own Virtual Routing and Forwarding (VRF)
instance on the ASR 9000. The tenant VRF instance on the ASR 9000 is extended to the ASA firewall
via a dedicated tenant VLAN carried over the ACI Fabric.
The DMZ has one public subnet for DMZ workload VMs, as well as a dedicated subnet for the Citrix
NetScaler 1000v. The DMZ allows the tenant to isolate and secure access from Internet to services on
workload VMs hosted on the DMZ. The workload VMs on the DMZ have restricted access to the private
zone workload VMs, for services (such as business logic processing on the application tier) not offered
on the DMZ; there is no direct access from the Internet to the private zone. For each tenant, the ASA firewall connects
to the ASR 9000 on a point-to-point VLAN sub-interface.
Both ASA firewalls are configured in L3 routing mode, serving as default gateway for the workload
VMs and virtual appliances in each zone. The ACI Fabric only provides L2 switching functionality; all
L3 routing functions are disabled.
Note
As of ACI version 1.0, traffic redirection mechanism is not supported; as such, when inserting ASA in
L3 routing mode via the service graph, the ACI Fabric must be configured in L2 switching mode, with
the ASA serving as default gateway for the workload VMs and virtual appliances.
High Availability
Application availability is inversely proportional to the total application downtime in a given time period
(typically a month), and the total downtime is simply the sum of the duration of each outage. To increase
a system's availability, and achieve high availability (HA), the duration of the outages, the frequency of
outages, or both must be decreased.
The ACI Fabric, MPLS core network, UCS compute platform, and storage systems are designed and
deployed with HA features and capabilities. Table 8-1 defines the HA features incorporated into the
components that made up the Expanded Gold Tenant Container.
Table 8-1  HA Implementation per Component

Component                                                   HA Implementation
ASA Security Context                                        ASA clustering on the physical ASA security appliances;
                                                            Spanned EtherChannel with Link Aggregation Control Protocol (LACP)
NetScaler 1000v                                             NetScaler HA pair
Cisco UCS B-Series blade servers and Fabric Interconnects   VMware vSphere HA
Traffic Flows
The following traffic flows are defined:
Private Zone
Figure 8-3 shows the traffic flow from private intranet to the 3-tier application on the private zone. In
this traffic flow, the web and application tiers VMs are fronted by the NetScaler 1000v server load
balancer, while the database VM is accessed directly.
Note
The ASA and NetScaler 1000v are depicted multiple times to keep the diagram simple and uncluttered.
Both the workload VMs and the NetScaler 1000v virtual appliances have only one data vNIC/interface.
Figure 8-3 Private zone traffic flow: the ASR 9000 routes the flow from the MPLS L3 VPN to the tenant pvt_asa, which filters it and routes it to the web SLB (pvt_ns); the NetScaler makes the SLB decision and routes the flow back through pvt_asa to the Web VMs; the web tier then reaches the App SLB and App VMs through pvt_asa, the App VMs apply the application logic, and database requests are filtered and routed by pvt_asa to the Database VMs.
Demilitarized Zone
Figure 8-4 shows the traffic flow from Internet to DMZ VM on the DMZ zone. In this traffic flow, the
DMZ NetScaler 1000v server load balancer fronts the DMZ VMs. Upon receiving the user request from
the Internet, the DMZ VMs send requests to the private zone VMs for further processing; the private VMs
are fronted by the NetScaler 1000v on the private zone.
Note
The ASA and NetScaler are depicted multiple times to keep the diagram simple and uncluttered. Both
the workload VMs and the NetScaler 1000v virtual appliances have only one data vNIC/interface.
Figure 8-4 DMZ traffic flow: the ASR 9000 routes the flow from the Internet to the tenant dmz_asa, which filters it and routes it to the DMZ SLB fronting the DMZ VMs; the DMZ VMs process the web request and send requests through dmz_asa and pvt_asa to the private SLB, which routes them to the private VMs for processing.
Figure 8-5 ACI logical model of the Expanded Gold Tenant Container: EPGs (epg01, epg02, epg03, pvt_ns_epg, dmz_epg, dmz_ns_epg) on bridge domains (bd01, bd02, bd03, pvt_ns_bd, dmz_ns_bd, dmz_bd, inter_asa_bd, dmz_external_bd) are connected through contracts (contract01, contract02, contract03, pvt_ns_contract, dmz_ns_contract, dmz_contract) to the logical interfaces of pvt_asa, dmz_asa, pvt_ns, and dmz_ns (pvt_inside1-3, dmz_inside1, pvt_outside, dmz_outside, pvt_inter_asa, dmz_inter_asa, and the NetScaler inside/outside interfaces), with the external bridge domains facing the ASR 9000.
The L2 segment for each ASA interface is modeled as a bridge domain. All bridge domains for the tenant
are placed under one private network/context (or VRF). Only one context is required for the tenant
on the ACI Fabric, since the fabric is operating in L2 mode, with L2 segments providing the
IP address space isolation. The EPGs for the tenant are organized into two application profiles, one for
the private zone and one for the DMZ. The IP subnets on the private zone are allocated from private IP address
space from RFC 1918; the DMZ uses a mixture of both public and private IP address space.
The tenant container connects to outside networks (private intranet over L3VPN and Internet) with the
external bridged network logical construct, which bridges the external interface of the ASAs to the
VLAN sub-interfaces on the ASR 9000 router. On the ASR 9000 router, each tenant has its own VRF
for connection to the L3VPN private intranet. The Internet connection is set up as a common VRF shared
among multiple tenants on the ASR 9000 router.
Figure 8-6 shows the managed objects (MOs) that are constructed/configured for the Expanded Gold
Tenant Container; the name of the tenant container is g008. Generic MO names are used within the tenant
container.
Figure 8-6
Prerequisites
The following are prerequisites to constructing/configuring the Expanded Gold Tenant Container:
1. Configure the APICs and Nexus 9000 Series switches to operate in ACI Fabric mode.
2. Configure the ASR 9000 to serve as PE router. The ASR 9000 PE should have bundle Ethernet
interfaces that are connected to the ACI Fabric virtual port-channel (VPC) interfaces.
3. Configure the ASA physical security appliances into an active/active highly available cluster operating
in multi-context mode. The data interfaces of the ASA physical appliances should be bundled and
connected to the ACI Fabric VPC interface.
4. Configure the UCS B-Series blade servers for hosting the VMware vSphere infrastructure. The UCS
fabric interconnects should have port-channel bundles that connect to the ACI Fabric VPC interfaces.
5. Configure VMware vSphere for hosting workload VMs and virtual appliances.
6. Configure the NetApp storage system to provide persistent disk storage for the workload VMs and
virtual appliances.
7. APIC should have either in-band or out-of-band management access to the management network.
Specifically, APIC should have management access to vSphere vCenter, the ASA security appliance and
the virtual service appliances via the management network.
8. Configure Virtual Machine Management (VMM) domains for the vSphere virtual datacenters
(vDCs) that would host the workload VMs and virtual appliances.
9.
10. Configure Cisco AVS or VMware vSphere Distributed Switch (VDS) to provide distributed virtual
network switch for workload VMs and virtual appliances hosted on the VMware vSphere
infrastructure.
11. Upload the device packages for Cisco ASA and Citrix NetScaler 1000v to APIC.
Summary of Steps
Table 8-2 provides an overview of steps required to construct/configure the Expanded Gold Tenant
Container.
Table 8-2  Configuration Procedure

Task                                         Notes/Remarks
Create Tenant
Create Private Network (or Context),         A context is a unique L3 forwarding and application policy domain
and Bridge Domains                           (a private network or VRF) that provides IP address space isolation
                                             for tenants. A bridge domain represents a L2 forwarding construct
                                             within the fabric. A bridge domain must be linked to a context.
Create Application Profiles and EPGs
Associate Contracts to EPGs
                                             The VLAN pools define the range of VLAN IDs assigned to each ASA
                                             logical device.
13 Create Logical Device for                 A logical device (also known as a device cluster) is one or more
   private zone ASA                          concrete devices that act as a single device. A logical device is
                                             addressed and managed through a management IP address that is
                                             assigned to the cluster.
14 Create Concrete Device for                See above.
   private zone ASA
                                             See above.
                                             See above.
                                             Four NetScaler 1000v virtual appliances are deployed out of band per
                                             Expanded Gold Tenant Container, one HA-pair for the private zone,
                                             and one HA-pair for the DMZ. Each NetScaler 1000v should have
                                             basic initial configuration to allow APIC access.
                                             See above.
                                             See above.
                                             See above.
                                             These are the ASA network and service objects/groups. These objects
                                             identify the IP subnets, SLB virtual IPs, and services that are used in
                                             both private and DMZ ASA configurations.
29 Modeling the private zone ASA with        Configure L4-L7 service parameters for ASA:
   Configure L4-L7 service parameters        - Interfaces IP address and security level
                                             - Static routes
                                             - Security access control lists
                                             - Attach access control lists to interfaces
30 Modeling the private zone NetScaler       Configure L4-L7 service parameters for NetScaler 1000v:
   1000v with L4-L7 service parameters       - Subnet IP Address
                                             - Static Routes
                                             - Service monitor, service groups, and virtual servers
                                             See above.
                                             See above.
                                             See above.
Detailed Steps
The following sections detail the steps to construct/configure the ACI logical model of the Expanded
Gold Tenant Container. The API requests make use of XML data structure, instead of JSON.
For steps involving APIC API requests, it is possible to merge some of the steps and hence the XML data
structures, instead of sending multiple API requests with separate XML data structures; for example, the
API requests for creating tenant, context, bridge domains, application profiles and EPGs can be
combined into one API request, with the XML data structures merged. The API requests are shown
separately, to make the documentation clearer and easier to digest.
Unless specified otherwise, all the API requests detailed in the following sections make use of the HTTP
POST method with the following normalized URL:
http://{apic_ip_or_hostname}/api/mo/uni.xml
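For reference, these requests can be sent with curl: first authenticate against the aaaLogin endpoint to obtain the APIC session cookie, then POST the XML payload to the normalized URL. This is only a sketch; the credentials and the payload file name are placeholders.

# Authenticate and store the APIC session cookie (credentials are placeholders)
curl -s -X POST -c cookie.txt -d '<aaaUser name="admin" pwd="password"/>' \
  http://{apic_ip_or_hostname}/api/aaaLogin.xml

# Post an XML data structure, for example the tenant definition saved in tenant.xml
curl -s -X POST -b cookie.txt -d @tenant.xml \
  http://{apic_ip_or_hostname}/api/mo/uni.xml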
Step 1
Note
Step 2
A tenant (fv:Tenant) is a logical container for application policies that enable an administrator to
exercise domain-based access control. The security domain is associated with the tenant to scope the
access of users to the tenant. The following shows the XML data structure to create a tenant, and
associate a security domain to the tenant.
<fvTenant name="g008" descr="gold container with asa">
<aaaDomainRef name="g008_sd" />
</fvTenant>
Step 3
Note
Step 4
On the Expanded Gold Tenant Container, unicast routing is disabled for each bridge domain, since the
ACI Fabric only provides L2 forwarding functionality. ARP and unknown unicast flooding are enabled;
the flooding settings are required for the ACI Fabric to operate properly with the ASAs inserted by the
service graphs.
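These bridge domain settings can be expressed in an XML data structure similar to the following sketch; the bridge domain and context names are illustrative, and the attribute names assume the fv:BD managed object.

<fvTenant name="g008">
  <!-- L2-only bridge domain: unicast routing off, ARP and unknown unicast flooding on -->
  <fvBD name="bd01" unicastRoute="no" arpFlood="yes" unkMacUcastAct="flood">
    <fvRsCtx tnFvCtxName="g008_ctx" />
  </fvBD>
</fvTenant>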
Create Application Profiles and EPGs.
An application profile (fv:Ap) models application requirements. Application profiles contain one or
more EPGs. An EPG (fv:AEPg) is a managed object that is a named logical entity that contains a
collection of endpoints. Endpoints are virtual or physical devices that are connected to the network
directly or indirectly.
For the Expanded Gold Tenant Container, two application profiles are created, one for the private zone
and one for the DMZ. Each EPG is associated with a bridge domain, constructing/configuring a 1:1
relationship between EPG and bridge domain to provide isolation of IP address space and traffic between
the EPGs. For EPGs that will have VM endpoints attached, the VMM domain where the VMs reside
is associated with the EPG.
The XML data structure below creates the application profiles and the EPGs for each application
profile.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" resImedcy="immediate" instrImedcy="immediate" />
</fvAEPg>
<fvAEPg name="epg02">
<fvRsBd tnFvBDName="bd02" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" resImedcy="immediate" instrImedcy="immediate" />
</fvAEPg>
<fvAEPg name="epg03">
<fvRsBd tnFvBDName="bd03" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" resImedcy="immediate" instrImedcy="immediate" />
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<!-- no vmm domain for private ns epg -->
<fvRsBd tnFvBDName="pvt_ns_bd" />
</fvAEPg>
</fvAp>
<fvAp name="app02">
<fvAEPg name="dmz_epg">
<fvRsBd tnFvBDName="dmz_bd" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" resImedcy="immediate" instrImedcy="immediate" />
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<!-- no vmm domain for dmz ns epg -->
<fvRsBd tnFvBDName="dmz_ns_bd" />
</fvAEPg>
</fvAp>
</fvTenant>
Note
The pvt_ns_epg and dmz_ns_epg EPGs do not have a VMM domain association; APIC will insert the
NetScaler 1000v virtual appliances when the service graphs are deployed.
For each EPG with a VMM domain association, APIC creates a VDS port-group on vSphere vCenter. The
vSphere administrator attaches the vNIC of the workload VM to the VDS port-group to make it an
endpoint of the EPG. The EPG-backed VDS port-group has the following naming convention:
{tenant_name}|{app_profile_name}|{epg_name}
For example, the epg01 EPG above is backed by the port-group g008|app01|epg01.
Step 5
The external network managed object (l2ext:Out or l3ext:Out) controls connectivity to outside networks.
An external bridged network (l2ext:Out) specifies the relevant L2 properties that control communication
between an outside network and the ACI Fabric.
For the Expanded Gold Tenant Container, two external bridged networks are configured, one for access
to/from private intranet (via L3VPN on ASR 9000 PE), and one for access to/from Internet (via ASR
9000 PE).
The following XML data structure creates the external bridged networks. The pvt_external MO bridges
the pvt_external_bd bridge domain (connected to the outside interface of the private zone ASA) to the
VLAN 1008 sub-interface on the ASR 9000 PE, while the dmz_external MO bridges the dmz_external_bd
bridge domain (connected to the outside interface of the DMZ ASA) to the VLAN 2998 sub-interface on the
ASR 9000 PE.
<fvTenant name="g008">
<l2extOut name="pvt_external">
<l2extRsEBd tnFvBDName="pvt_external_bd" encap="vlan-1008" />
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interface">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asr9k]" />
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="pvt_external" />
</l2extOut>
<l2extOut name="dmz_external">
<l2extRsEBd tnFvBDName="dmz_external_bd" encap="vlan-2998" />
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interface">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-103-104/pathep-[vpc_n103_n104_asr9knv]" />
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="dmz_external" />
</l2extOut>
</fvTenant>
Note
Step 6
For this implementation, two virtual port-channels (VPCs) are configured from the ACI Fabric to the
ASR 9000; the VPCs are configured as Ethernet bundles on the ASR 9000 PE. It is also possible to use
only one VPC.
Configure ASR 9000 PE.
The ASR 9000 PE is not part of the ACI Fabric; it is considered the external network in the ACI
framework. The configuration of the ASR 9000 is included here for completeness. The following shows
the configuration of the VRF, VLAN sub-interfaces, MP-BGP, static routes, and so on, on the ASR 9000
router. Each Expanded Gold Tenant Container has one dedicated VRF for connection to the private intranet
via L3VPN, and a shared Internet VRF. Private IP addresses are used in the private zone; the DMZ uses
public IP addresses.
vrf g008
address-family ipv4 unicast
import route-target
1:1008
export route-target
1:1008
!
interface Bundle-Ether 9.1008
description g008 private
vrf g008
ipv4 address 10.1.11.254 255.255.255.0
Note
Step 7
Static routes are used between ASR 9000 PE and the ASA firewalls. ASA device package version 1.0(1)
does not support dynamic routing protocols.
Create Filters and Contracts.
A contract (vz:BrCP) governs the communication between EPGs that are labeled providers, consumers, or
both. EPGs can only communicate with other EPGs according to contract rules. Contracts make use of
filters (vz:Filter), which are organized into one or more subjects (vz:Subj), to specify the type of traffic
that can be communicated and how it occurs (unidirectional or bidirectional). Filters match Layer 2 to
Layer 4 fields (TCP/IP header fields such as the L3 protocol type, Layer 4 ports, and so forth) used to
categorize traffic flows.
For this implementation, four filters are used on the contracts to allow incoming traffic from the external
bridged networks: the http, https, and ssh filters are created below, while the icmp filter is inherited from
the common tenant. SSH is only allowed from the private intranet.
<fvTenant name="g008">
<vzFilter name="http">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http"
/>
</vzFilter>
<vzFilter name="https">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="https"
dToPort="https" />
</vzFilter>
<vzFilter name="ssh">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="22" dToPort="22" />
</vzFilter>
<vzBrCP name="contract01">
<vzSubj name="subject01">
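Each contract contains one or more subjects, and each subject references the filters that the contract permits through vzRsSubjFiltAtt relations. The fragment below is a sketch of this pattern for contract01 only; the exact filter-to-contract assignments (for example, ssh being permitted only on the intranet-facing contracts) follow the description above.

<fvTenant name="g008">
    <vzBrCP name="contract01">
        <vzSubj name="subject01">
            <vzRsSubjFiltAtt tnVzFilterName="http" />
            <vzRsSubjFiltAtt tnVzFilterName="https" />
            <vzRsSubjFiltAtt tnVzFilterName="ssh" />
            <vzRsSubjFiltAtt tnVzFilterName="icmp" />
        </vzSubj>
    </vzBrCP>
</fvTenant>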
Step 8
Table 8-3
Contract Relationships

Contract: contract01; consumer: pvt_external; provider: epg01
Contract: contract02; consumer: pvt_external; provider: epg02
Contract: contract03; consumer: pvt_external; provider: epg03
Contract: pvt_ns_contract; consumer: pvt_external; provider: pvt_ns_epg
Contract: dmz_contract; consumer: dmz_external; provider: dmz_epg
Contract: dmz_ns_contract; consumer: dmz_external; provider: dmz_ns_epg
Contract: inter_asa_contract; consumer: pvt_external; provider: dmz_epg
The following XML data structure sets up the consumer/provider relationships between the EPGs, the
external networks, and the contracts.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="contract01" />
</fvAEPg>
<fvAEPg name="epg02">
<fvRsProv tnVzBrCPName="contract02" />
</fvAEPg>
<fvAEPg name="epg03">
<fvRsProv tnVzBrCPName="contract03" />
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<fvRsProv tnVzBrCPName="pvt_ns_contract" />
</fvAEPg>
</fvAp>
<fvAp name="app02">
<fvAEPg name="dmz_epg">
<fvRsProv tnVzBrCPName="dmz_contract" />
<fvRsProv tnVzBrCPName="inter_asa_contract" />
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<fvRsProv tnVzBrCPName="dmz_ns_contract" />
</fvAEPg>
</fvAp>
<l2extOut name="pvt_external">
<l2extInstP name="pvt_external">
<fvRsCons tnVzBrCPName="contract01" />
<fvRsCons tnVzBrCPName="contract02" />
<fvRsCons tnVzBrCPName="contract03" />
<fvRsCons tnVzBrCPName="pvt_ns_contract" />
<fvRsCons tnVzBrCPName="inter_asa_contract" />
</l2extInstP>
</l2extOut>
<l2extOut name="dmz_external">
<l2extInstP name="dmz_external">
<fvRsCons tnVzBrCPName="dmz_contract" />
<fvRsCons tnVzBrCPName="dmz_ns_contract" />
</l2extInstP>
</l2extOut>
</fvTenant>
Note
Step 9
The contract relationships do not adhere to the generic 3-tier application traffic profile (external → web
tier → app tier → database tier). As of APIC version 1.0, a traffic redirection mechanism is not supported,
and the ASA and NetScaler 1000v service devices are not aware of the ACI contracts. The generic 3-tier
application traffic profile is enforced by the firewall rules on the ASAs. The ACI contracts are enforced
only on traffic entering and leaving the pvt_external_bd and dmz_external_bd bridge domains from/to
the ASR 9000 PE.
Attach vNIC to EPG Port-Group.
APIC will not attach the vNICs of workload VMs to the EPG-backed VDS port-groups on vSphere; the
operation has to be done manually or programmatically on vSphere. The following shows the vSphere
PowerCLI cmdlets to attach the first vNIC of a workload VM to an EPG-backed VDS port-group.
$vdsPG = Get-VirtualSwitch -Distributed -Name "ics3_prod_vc" | Get-VirtualPortGroup
-Name "g008|app01|epg01"
Get-VM -name "g008-vm01" | Get-NetworkAdapter -name "Network adapter 1" |
Set-NetworkAdapter -NetworkName $vdsPG.Name -confirm:$false
Step 10
Each ASA security context must have its own management interface and IP address; the
management interface must be reachable by APIC.
3. All data interfaces allocated to an ASA security context must be VLAN sub-interfaces belonging to
the same physical main interface.
Some features and configurations are not supported in an ASA security context; refer to the ASA
documentation,
http://www.cisco.com/c/en/us/td/docs/security/asa/asa90/configuration/guide/asa_90_cli_config/ha_contexts.html#91406,
for more details.
6. If clustering or active/standby failover is required for the physical ASA, the configuration must be
performed out of band; APIC will have no awareness that the ASA is clustered or has failover
enabled.
7. APIC is not aware of the ASA security contexts on the physical ASA; on APIC, each ASA security
context is modeled as a physical ASA operating in single context mode.
interface port-channel2.3011
description g008-pvt
vlan 3011
!
interface port-channel2.3012
description g008-pvt
vlan 3012
!
interface port-channel2.3013
description g008-pvt
vlan 3013
!
interface port-channel2.3014
description g008-pvt
vlan 3014
!
interface port-channel2.3015
description g008-pvt
vlan 3015
!
interface port-channel2.3016
description g008-pvt
vlan 3016
!
interface port-channel2.3017
description g008-dmz
vlan 3017
!
interface port-channel2.3018
description g008-dmz
vlan 3018
!
interface port-channel2.3019
description g008-dmz
vlan 3019
!
interface port-channel2.3020
description g008-dmz
vlan 3020
!
context g008-pvt
allocate-interface Management0/1 management0
allocate-interface port-channel2.3011-port-channel2.3016
config-url disk0:/contexts/g008-pvt.cfg
!
context g008-dmz
allocate-interface Management0/1 management0
allocate-interface port-channel2.3017-port-channel2.3020
config-url disk0:/contexts/g008-dmz.cfg
!
end
write memory
!
changeto context g008-pvt
conf t
crypto key generate rsa modulus 2048
!
ip local pool mgmt-pool 10.0.32.114-10.0.32.115 mask 255.255.255.0
!
interface management0
management-only
nameif management
ip address 10.0.32.113 255.255.255.0 cluster-pool mgmt-pool
!
The management interfaces of the ASA security contexts share the physical management interface with
the system context. The interface name of the physical Management0/1 interface is mapped to
management0 within the ASA security context. All data (sub-)interfaces allocated to the ASA security
contexts are from the same physical main interface. The interface names of the data interfaces are not
mapped.
Note
Step 11
It is advisable to store the configurations of the ASA security contexts in a sub-directory of the flash device.
On some ASA models, if the ASA security context configurations are stored in the root directory of the
flash device, the root directory might run out of directory entries, even though there is available
space on the flash device. This is because some ASA models use the FAT16 file system for the internal flash
device. See http://support.microsoft.com/kb/120138/en-us for more details.
Create VLAN Pools.
The VLAN pools (fvns:VlanInstP) define the range of VLAN IDs assigned to each ASA logical device.
Two VLAN pools are used, one pool for the VLANs allocated to the private zone ASA, and another pool
for the VLANs allocated to the DMZ ASA. The following XML data structure creates the VLAN pools,
the VLAN ranges (fvns:EncapBlk) should match those configured for the ASA security contexts in the
previous steps.
<infraInfra>
<fvnsVlanInstP name="g008_pvt_asa_pool" allocMode="dynamic">
<fvnsEncapBlk from="vlan-3011" to="vlan-3016" />
</fvnsVlanInstP>
<fvnsVlanInstP name="g008_dmz_asa_pool" allocMode="dynamic">
<fvnsEncapBlk from="vlan-3017" to="vlan-3020" />
</fvnsVlanInstP>
</infraInfra>
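The DMZ ASA physical domain is created in Step 15; by analogy, a physical domain for the private zone ASA associates the g008_pvt_asa_pool VLAN pool with a physical domain. The sketch below assumes the domain name g008_pvt_asa_phy:

<physDomP name="g008_pvt_asa_phy">
    <infraRsVlanNs tDn="uni/infra/vlanns-[g008_pvt_asa_pool]-dynamic" />
</physDomP>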
Step 12
Step 13
Note
A logical device has logical interfaces (vns:LIf), which describe the interface information for the logical
device. During service graph instantiation, function node connectors are associated with logical
interfaces. The XML data structure below does not create the logical interfaces for the logical device;
the logical interfaces are created with the XML data structure in the next section. This is done to
minimize the number of faults APIC raises when the logical device is created.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa" contextAware="single-Context" devtype="PHYSICAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
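The logical device also carries the management address and credentials that APIC uses to reach the service device (see the Note below). A minimal sketch of these attributes for the private zone ASA, by analogy with the NetScaler 1000v logical device in Step 20 and using the g008-pvt context management IP address and credentials configured in the surrounding steps:

<fvTenant name="g008">
    <vnsLDevVip name="pvt_asa">
        <vnsCMgmt host="10.0.32.113" port="443" />
        <vnsCCred name="username" value="apic" />
        <vnsCCredSecret name="password" value="Cisco12345" />
    </vnsLDevVip>
</fvTenant>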
Note
Step 14
The username/password credential configured should have read/write administrative access to the ASA
security context, as APIC will use the credential to push the configuration to the ASA.
Create Concrete Device for Private Zone ASA.
A concrete device (vns:CDev) identifies an instance of a service device, which can be physical or virtual.
Each concrete device has its own management IP address for configuration and monitoring through the
APIC. A concrete device has concrete interfaces (vns:CIf); when a concrete device is added to a logical
device, concrete interfaces are mapped to the logical interfaces. During service graph instantiation,
VLANs (from the VLAN pool) are programmed on concrete interfaces based on their association with
logical interfaces.
The XML data structure below creates the concrete device and the logical interfaces of the logical
device for the private zone ASA. With the physical ASA, the concrete device has only one concrete
interface (port-channel2 in this case); all logical interfaces are mapped to the same concrete interface.
APIC creates VLAN sub-interfaces on the concrete interface for each logical interface, using the
VLAN IDs specified in the assigned VLAN pool.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsCDev name="asa01">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.113" port="443" />
<vnsCIf name="port-channel2">
<!-- this cif is the main interface, apic will take care of
sub-interface base on vlan allocation -->
<vnsRsCIfPathAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
</vnsCIf>
</vnsCDev>
<vnsLIf name="pvt_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inside2">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inside3">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>
Note
Step 15
The ASA device package does not support a physical ASA operating in multiple context mode; see
CSCuq96552 for more details. When operating the physical ASA in multiple context mode with APIC, each
ASA security context is modeled as a physical ASA operating in single context mode. The ASA device
is deployed in standalone mode, that is, the logical device contains only one concrete device. The
management IP address for the logical device and the concrete device is the same.
Create Physical Domain for DMZ ASA.
The XML data structure below creates the physical domain for DMZ ASA, and associates the VLAN
pool to the physical domain.
<physDomP name="g008_dmz_asa_phy">
<infraRsVlanNs tDn="uni/infra/vlanns-[g008_dmz_asa_pool]-dynamic" />
</physDomP>
Step 16
Step 17
</vnsCDev>
<vnsLIf name="dmz_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="dmz_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="dmz_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="dmz_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>
Step 18
APIC pushes the following configuration to each of the ASA service devices:
domain-name aci.icdc.sdu.cisco.com
dns server-group DefaultDNS
domain-name aci.icdc.sdu.cisco.com
!
logging enable
Note
Step 19
NTP server configuration is not supported within an ASA security context; the NTP server for the physical
ASA must be configured on the ASA system context out of band.
Deploy NetScaler 1000v Virtual Appliances on vSphere.
Four NetScaler 1000v virtual appliances are deployed per Expanded Gold Tenant Container, one
HA-pair for the private zone and one HA-pair for the DMZ. APIC will not deploy the NetScaler 1000v
virtual appliances on vSphere; the virtual appliances must be deployed out of band.
Once the NetScaler 1000v virtual appliances are deployed, the following initial configuration should be
made to allow management access by APIC.
add route 10.0.0.0 255.255.0.0 10.0.39.253
add route 172.18.0.0 255.255.0.0 10.0.39.253
rm route 0.0.0.0 0.0.0.0 10.0.39.253
!
set system user nsroot Cisco12345
add system user apic Cisco12345
bind system user apic superuser 100
add system user admin Cisco12345
bind system user admin superuser 100
!
save ns config
Note
Step 20
The management subnets for this implementation are 10.0.0.0/16 and 172.18.0.0/16. Static routes are
configured on the NetScaler 1000v to allow access to/from the management network. The default route
0.0.0.0/0 is added during the deployment of the NetScaler 1000v virtual appliance and is set to point
toward the management network; this default route should be removed.
Create Logical Device for Private Zone NetScaler 1000v.
The following XML data structure configures the logical device for the private zone NetScaler 1000v.
The logical device is associated with a VMM domain, which specifies the vSphere virtual datacenter
where the NetScaler 1000v virtual appliances reside.
<fvTenant name="g008">
<vnsLDevVip name="pvt_ns" contextAware="single-Context" devtype="VIRTUAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5" />
<vnsCMgmt host="10.0.39.225" port="80" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
<vnsDevFolder key="enableMode" name="modes">
<vnsDevParam key="L3" name="l3mode" value="ENABLE" />
<vnsDevParam key="DRADV" name="dradv" value="ENABLE" />
<vnsDevParam key="USNIP" name="usnip" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="enableFeature" name="features">
<vnsDevParam key="SSL" name="ssl" value="ENABLE" />
<vnsDevParam key="LB" name="lb" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="ntpserver" name="ntpserver1">
<vnsDevParam key="serverip" name="ip" value="172.18.114.20" />
<vnsDevParam key="preferredntpserver" name="preferred" value="YES" />
</vnsDevFolder>
</vnsLDevVip>
</fvTenant>
Note
The NetScaler features, modes, and NTP configurations are performed here to avoid a race condition. If
those configurations are performed right after the concrete devices are created (as was done for the ASAs),
some of them might not get pushed to the NetScaler 1000v service devices.
The XML data structure above also configures modes, features, and NTP parameters for the NetScaler
1000v logical device, which is equivalent to the following configuration on the NetScaler 1000v:
enable ns feature LB SSL
enable ns mode L3 DRADV USNIP
add ntp server 172.18.114.20
set ntp server 172.18.114.20 -preferredNtpServer YES
Note
Step 21
The NetScaler device package does not push the NTP configuration to the NetScaler 1000v; see Citrix
BUG0503304.
Create Concrete Devices for Private Zone NetScaler 1000v
The XML data structure below creates two concrete devices and the logical interfaces of the logical
device for the private zone NetScaler 1000v. The private zone NetScaler 1000v operates in one-arm
mode; both the inside and outside logical interfaces are mapped to the same concrete interface (interface
1/1 in this case, referred to as 1_1 by the Citrix NetScaler 1000v device package).
<fvTenant name="g008">
<vnsLDevVip name="pvt_ns">
<vnsCDev name="ns01" vcenterName="ics3_vc_tenant_cluster" vmName="g008-ns01">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.39.221" port="80" />
<vnsCIf name="1_1" vnicName="Network adapter 2"/>
</vnsCDev>
<vnsCDev name="ns02" vcenterName="ics3_vc_tenant_cluster" vmName="g008-ns02">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.39.222" port="80" />
<vnsCIf name="1_1" vnicName="Network adapter 2"/>
</vnsCDev>
<vnsLIf name="outside">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-outside"
/>
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns01/cIf-[1_1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns02/cIf-[1_1]" />
</vnsLIf>
<vnsLIf name="inside">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-inside" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns01/cIf-[1_1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns02/cIf-[1_1]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>
The name of the VMM controller (the vcenterName attribute above is not the vCenter hostname or IP
address, but rather the VMM controller name of the VMM domain), the VM name, and the vNIC name of
the NetScaler 1000v virtual appliance are part of the concrete device configuration, so that APIC can
attach the appliance's vNIC to the shadow EPG-backed VDS port-group created when the service graphs
are deployed.
Step 22
[Figure: the private zone NetScaler 1000v HA pair. ns01 (10.0.39.221) and ns02 (10.0.39.222) are HA peers of each other over interface 0/1; ns02 is the active unit and ns01 the standby unit.]
The XML data structure below configures the two NetScaler 1000v concrete devices as HA peers of each
other.
<fvTenant name="g008">
<vnsLDevVip name="pvt_ns">
<vnsCDev name="ns01">
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.222" />
<vnsDevParam key="id" name="id" value="1" />
</vnsDevFolder>
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="snip" name="snip" value="10.0.39.225" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="ns02">
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.221" />
<vnsDevParam key="id" name="id" value="1" />
</vnsDevFolder>
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="snip" name="snip" value="10.0.39.225" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>
The XML data structure above causes APIC to push the following configuration to ns01:
add HA node 1 10.0.39.222
add ns ip 10.0.39.225 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED
And to ns02:
add HA node 1 10.0.39.221
add ns ip 10.0.39.225 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED
Step 23
Step 24
Step 25
The XML data structure above causes APIC to push the following configuration to ns03:
add HA node 1 10.0.39.224
add ns ip 10.0.39.226 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED
And to ns04:
add HA node 1 10.0.39.223
add ns ip 10.0.39.226 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED
Step 26
Note
Figure 8-8
Figure 8-9 Service Graph Template with ASA Firewall & Citrix Load Balancing Function Nodes
Figure 8-10
ASA device package version 1.0(1) specifies/supports only the firewall function. The NetScaler 1000v
device package version 10.5 specifies a number of functions, but only the LoadBalancing function is
officially supported.
The following XML data structure creates the service graph template with one ASA firewall function
node. The connection on the service graph is configured with L2 adjacency type and unicast routing
disabled, since the ACI Fabric is only providing L2 forwarding service.
<fvTenant name="g008">
<vnsAbsGraph name="single_asa_graph">
<vnsAbsNode name="asa_fw" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsTermNodeCon name="consumer">
<vnsAbsTermConn name="1" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeCon>
<vnsAbsTermNodeProv name="provider">
<vnsAbsTermConn name="2" />
The following XML data structure creates the service graph template with an ASA firewall function
node and a NetScaler load balancing function node.
<fvTenant name="g008">
<vnsAbsGraph name="asa_ns_graph">
<vnsAbsNode name="asa_fw" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsNode name="slb" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing" />
</vnsAbsNode>
<vnsAbsTermNodeCon name="consumer">
<vnsAbsTermConn name="1" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeCon>
<vnsAbsTermNodeProv name="provider">
<vnsAbsTermConn name="2" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeProv>
<vnsAbsConnection name="connection1" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-asa_fw/AbsFConn-external" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsTermNodeCon-consumer/AbsTConn" />
</vnsAbsConnection>
<vnsAbsConnection name="connection2" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-asa_fw/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-slb/AbsFConn-external" />
</vnsAbsConnection>
<vnsAbsConnection name="connection3" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-slb/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsTermNodeProv-provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>
The following XML data structure creates the service graph template with two ASA firewall function
nodes.
<fvTenant name="g008">
<vnsAbsGraph name="dual_asa_graph">
<vnsAbsNode name="pvt_asa" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsNode name="dmz_asa" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsTermNodeCon name="consumer">
<vnsAbsTermConn name="1" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeCon>
<vnsAbsTermNodeProv name="provider">
<vnsAbsTermConn name="2" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeProv>
<vnsAbsConnection name="connection1" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-pvt_asa/AbsFConn-external" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsTermNodeCon-consumer/AbsTConn" />
</vnsAbsConnection>
<vnsAbsConnection name="connection2" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-pvt_asa/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-dmz_asa/AbsFConn-external" />
</vnsAbsConnection>
<vnsAbsConnection name="connection3" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-dmz_asa/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsTermNodeProv-provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>
Note
The XML data structures for the service graphs of the Expanded Gold Tenant Container were created on
APIC version 1.0(1k). For APIC version 1.0(2j) and above, APIC requires the service graph to be associated
with a function profile.
The XML data structures presented above (without the function profile association) will still work with
APIC version 1.0(2x) when configured using the REST API, but using a function profile going forward is
recommended, and it is necessary when configuring via the APIC GUI.
Refer to the Silver tenant configuration in ASR 9000 Tenant Configuration for IBGP as Provider
edge-Customer edge Routing Protocol, page 6-6, for using the function profile based service graphs.
Step 27
Configure ASA Network and Service Objects.
The XML data structure above configures the following CLI equivalent on the ASA service devices:
object network inside1_subnet
subnet 10.1.1.0 255.255.255.0
object network inside2_subnet
subnet 10.1.2.0 255.255.255.0
object network inside3_subnet
subnet 10.1.3.0 255.255.255.0
object network dmz_subnet
subnet 11.1.8.0 255.255.255.248
object network epg01_vip
host 10.1.4.111
object network epg02_vip
host 10.1.4.112
object network epg03_vip
host 10.1.4.113
object network dmz_vip
host 10.1.7.111
object network public_dmz_vip
host 12.1.1.8
!
object-group service web_https
service-object tcp destination eq www
service-object tcp destination eq https
object-group service web_https_mysql
service-object tcp destination eq www
service-object tcp destination eq https
service-object tcp destination eq 3306
Step 28
Table 8-4

Contract/Graph/Node: contract01/single_asa_graph/asa_fw; Logical Device: pvt_asa; external: pvt_outside/pvt_external_bd; internal: pvt_inside1/bd01
Contract/Graph/Node: contract02/single_asa_graph/asa_fw; Logical Device: pvt_asa; external: pvt_outside/pvt_external_bd; internal: pvt_inside2/bd02
Contract/Graph/Node: contract03/single_asa_graph/asa_fw; Logical Device: pvt_asa; external: pvt_outside/pvt_external_bd; internal: pvt_inside3/bd03
Contract/Graph/Node: pvt_ns_contract/asa_ns_graph/asa_fw; Logical Device: pvt_asa; external: pvt_outside/pvt_external_bd; internal: pvt_ns/pvt_ns_bd
Contract/Graph/Node: pvt_ns_contract/asa_ns_graph/slb; Logical Device: pvt_ns; external: outside/pvt_ns_bd; internal: inside/pvt_ns_bd
Note
Table 8-4 shows the mapping of multiple logical device contexts (and hence multiple service graph instances)
to the same logical device, with the connectors of each logical device context mapped to different logical
interfaces, effectively creating a service device with more than two interfaces.
Note
The connectors in Table 8-4 are the names of the function node connectors configured in the respective
service graph templates.
The following XML data structure creates the logical device contexts specified in the table above.
<fvTenant name="g008">
<vnsLDevCtx ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-bd01" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inside1" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-bd02" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inside2" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
The name attribute for each of the logical device contexts is not specified in the XML data structure
above. Unlike other MOs, in which the name attribute of the MO is used as part of the DN (such as
uni/tn-g008, where g008 is the name of the tenant), the name attribute is not mandatory for the logical
device context MO; instead, APIC automatically constructs the DN of the logical device context MO in
the following format:
uni/tn-{tenant}/ldevCtx-c-{contract}-g-{service_graph}-n-{function_node}
For example, the first logical device context above has the DN
uni/tn-g008/ldevCtx-c-contract01-g-single_asa_graph-n-asa_fw.
Step 29
Figure 8-11
[Figure summary: the ASR 9000 PE (10.1.11.254) connects to the private zone ASA (pvt_asa) outside interface pvt_outside_if (10.1.11.253, security level 30), whose ingress ACL permits ICMP, SSH, and HTTP/HTTPS to the web and DMZ subnets/VIPs. The inside interfaces are pvt_inside1_if 10.1.1.253 (security level 50, Web 10.1.1.0/24), pvt_inside2_if 10.1.2.253 (security level 60, App 10.1.2.0/24), pvt_inside3_if 10.1.3.253 (security level 70, Database 10.1.3.0/24), and pvt_ns_if 10.1.4.253 (security level 40, SLB 10.1.4.0/24). Ingress ACLs on the inside and SLB interfaces permit ICMP and HTTP/HTTPS/MySQL toward the adjacent tiers' subnets/VIPs. A static route for 10.0.0.0/8 points to the ASR 9000.]
The private zone ASA has one outside interface and three inside interfaces to support a generic 3-tier
application, as well as an interface for hosting the private zone NetScaler 1000v operating in one-arm
mode. Private IP addresses, taken from subnets of the 10.1.0.0/16 supernet, are assigned to all ASA
interfaces. It is assumed that the private intranet reachable via the ASR 9000 PE consists of subnets of the
10.0.0.0/8 supernet. Static routing is used, as the ASA device package does not support dynamic routing
protocols. Security access control lists (ACLs) are attached to all ASA interfaces in the ingress direction
to filter application traffic in accordance with the generic 3-tier application traffic profile: external → web
tier → app tier → database tier. To ease troubleshooting, ICMP packets are allowed to/from all ASA
interfaces. The following L4-L7 service parameters are configured:
Interface IP addresses and security levels
Static routes
Security access control lists
Access groups (attaching the access control lists to the interfaces)
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_ns_if" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The ASA outside interface, pvt_outside_if, is the common interface for all the service graph instances.
The L4-L7 service parameters for the outside interface are configured on the application profile MO, with
the contract (ctrctNameOrLbl attribute) set to any to allow any contract to pick up the parameters.
The L4-L7 service parameters for the other ASA interfaces are configured on the EPG MOs, with the
ctrctNameOrLbl attribute set to the contract that instantiates the service graph.
The XML data structure above configures the following CLI equivalent on the ASA service device:
interface Port-channel2.3013
nameif pvt_outside_if
security-level 30
ip address 10.1.11.253 255.255.255.0
!
interface Port-channel2.3012
nameif pvt_inside1_if
security-level 50
ip address 10.1.1.253 255.255.255.0
!
interface Port-channel2.3011
nameif pvt_inside2_if
security-level 60
ip address 10.1.2.253 255.255.255.0
!
interface Port-channel2.3015
nameif pvt_inside3_if
security-level 70
ip address 10.1.3.253 255.255.255.0
!
interface Port-channel2.3014
nameif pvt_ns_if
security-level 40
ip address 10.1.4.253 255.255.255.0
Note
APIC randomly assigns VLAN IDs to the ASA named interfaces during service graph instantiation, using
the VLAN IDs in the VLAN pool assigned to the ASA logical device. For example, the interface
pvt_inside2_if is assigned VLAN ID 3014 when the service graph is instantiated; if the service graph is
re-instantiated (by disassociating the graph from the contract and re-associating it), the interface
pvt_inside2_if might be assigned a different VLAN ID.
Static Routes
The XML data structure below configures the L4-L7 service parameters to model the ASA static route
on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="Interface" name="pvt_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="StaticRoute" name="staticRoute" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
The XML data structure creates the following CLI equivalent on the ASA service device:
route pvt_outside_if 10.0.0.0 255.0.0.0 10.1.11.254 1
The XML data structure below configures the L4-L7 service parameters to model the ASA security
access control lists on APIC. The configuration makes use of the network and service objects/groups
created in Step 27, Configure ASA Network and Service Objects, page 8-34.
<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="AccessList" name="pvt_outside_if_acl"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp" ctrctNameOrLbl="contract01"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_ssh"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="tcp" />
</vnsFolderInst>
<vnsFolderInst key="destination_service" name="destination_service"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="22" />
<vnsParamInst key="high_port" name="high_port" value="22" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract01"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside2_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg02_vip"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract01"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg02_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg02">
<vnsFolderInst key="AccessList" name="pvt_inside2_if_acl"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg03"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract02"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside3_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg03_vip"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract02"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg03_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg03">
<vnsFolderInst key="AccessList" name="pvt_inside3_if_acl"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="AccessList" name="pvt_ns_if_acl"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg02"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
The XML data structure creates the following CLI equivalent on the ASA service device:
access-list pvt_outside_if_acl extended permit icmp any any
access-list pvt_outside_if_acl extended permit tcp any any eq ssh
access-list pvt_outside_if_acl extended permit object-group web_https any object
inside1_subnet
access-list pvt_outside_if_acl extended permit object-group web_https any object
epg01_vip
access-list pvt_outside_if_acl extended permit object-group web_https any object
dmz_subnet
access-list pvt_outside_if_acl extended permit object-group web_https any object
dmz_vip
!
access-list pvt_inside1_if_acl extended permit icmp any any
access-list pvt_inside1_if_acl extended permit object-group web_https_mysql any object
inside2_subnet
access-list pvt_inside1_if_acl extended permit object-group web_https_mysql any object
epg02_vip
!
access-list pvt_inside2_if_acl extended permit icmp any any
access-list pvt_inside2_if_acl extended permit object-group web_https_mysql any object
inside3_subnet
access-list pvt_inside2_if_acl extended permit object-group web_https_mysql any object
epg03_vip
!
access-list pvt_inside3_if_acl extended permit icmp any any
!
access-list pvt_ns_if_acl extended permit icmp any any
access-list pvt_ns_if_acl extended permit object-group web_https_mysql any object
inside1_subnet
access-list pvt_ns_if_acl extended permit object-group web_https_mysql any object
inside2_subnet
access-list pvt_ns_if_acl extended permit object-group web_https_mysql any object
inside3_subnet
The XML data structure below configures the L4-L7 service parameters to attach security access control
lists to ASA interfaces. Each ASA interface has an ingress security access control list attached.
<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="Interface" name="pvt_outside_if"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_outside_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="epg01">
<vnsFolderInst key="Interface" name="pvt_inside1_if"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inside1_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg02">
<vnsFolderInst key="Interface" name="pvt_inside2_if"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inside2_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg03">
<vnsFolderInst key="Interface" name="pvt_inside3_if"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inside3_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Interface" name="pvt_ns_if"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_ns_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the ASA service device:
access-group pvt_outside_if_acl in interface pvt_outside_if
access-group pvt_inside1_if_acl in interface pvt_inside1_if
access-group pvt_inside2_if_acl in interface pvt_inside2_if
access-group pvt_inside3_if_acl in interface pvt_inside3_if
access-group pvt_ns_if_acl in interface pvt_ns_if
Step 30
The NetScaler 1000v in the private zone is configured to load balance application traffic such as HTTP
and MySQL. The private zone NetScaler 1000v operates in one-arm mode with only a single data
interface, and it load balances application traffic for the web, application, and database tiers; each tier has
its own vServer IP. The following L4-L7 service parameters are configured:
Subnet IP address
Static routes
Service monitor, service groups, and virtual servers
Note
SSL offload configuration of the NetScaler 1000v is not implemented here. Refer to Silver Tenant
Container for details of modeling SSL offload on the NetScaler 1000v with L4-L7 service parameters.
Subnet IP Address
Only a single SNIP is configured for the data interface of the private zone NetScaler 1000v. The SNIP
is used for health monitoring and as the source IP address to proxy client connections to the real servers/VMs.
The XML data structure below configures the L4-L7 service parameters to model the NetScaler 1000v
SNIP on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="nsip" name="snip" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ipaddress" name="ip" value="10.1.4.21" />
<vnsParamInst key="netmask" name="netmask" value="255.255.255.0"
/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="internal_network" name="snip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add vlan 1508
add ns ip 10.1.4.21 255.255.255.0 -vServer DISABLED
bind vlan 1508 -ifnum 1/1
bind vlan 1508 -IPAddress 10.1.4.21 255.255.255.0
Note
The NetScaler 1000v is a virtual appliance hosted on the VMM domain configured on the logical device.
When the service graphs are instantiated, APIC randomly selects a VLAN ID (1508 in this case) from the
VLAN pool assigned to the VMM domain. The interface ID 1/1 is the concrete interface configured on the
concrete device.
Static Routes
The private zone NetScaler 1000v is configured in one-arm mode with only a single data
interface, so a single default route is sufficient in this setup. The XML data structure below configures the
L4-L7 service parameters to model the NetScaler 1000v default route on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="gateway" name="gateway" value="10.1.4.253" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="network" name="network" value="0.0.0.0" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="external_route" name="ext_route"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="external_route_rel" name="ext_route_rel"
targetName="network/route01" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add route 0.0.0.0 0.0.0.0 10.1.4.253
Note
The NetScaler 1000v virtual appliance actually has two interfaces: one for management-only traffic and
one for tenant data traffic. The default route is configured for the data interface. Static routes for the
management interface are configured out of band.
Service Groups and vServers
Table 8-5 shows the vServer IPs, service groups and real servers for the private zone NetScaler 1000v.
The same vServer IP address is used for both HTTP and MySQL services on each tier of the 3-tier
application.
Table 8-5

vServer IP and Port: 10.1.4.111 HTTP/80; Service Group: web_service_grp1; Real Servers: 10.1.1.11, 10.1.1.12, 10.1.1.13
vServer IP and Port: 10.1.4.112 HTTP/80; Service Group: web_service_grp2; Real Servers: 10.1.2.11, 10.1.2.12, 10.1.2.13
vServer IP and Port: 10.1.4.113 HTTP/80; Service Group: web_service_grp3; Real Servers: 10.1.3.11, 10.1.3.12, 10.1.3.13
The XML data structure below configures the L4-L7 service parameters to model the server load
balancing of MySQL service on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="lbmonitor" name="mysql_mon"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="monitorname" name="monitorname" value="mysql_mon"
/>
<vnsParamInst key="type" name="type" value="TCP" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="mysql_service_grp1"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="mysql_service_grp1" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.11" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.12" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="msql_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.13" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="mysql_service_grp2"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="mysql_service_grp2" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.11" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.12" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="msql_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.13" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="mysql_service_grp3"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="mysql_service_grp3" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.11" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.12" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="msql_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.13" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg01_mysql_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg01_mysql_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.111" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsParamInst key="port" name="port" value="3306" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="mysql_service_grp1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="mysql_service_grp1" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg02_mysql_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg02_mysql_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.112" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsParamInst key="port" name="port" value="3306" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="mysql_service_grp2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="mysql_service_grp2" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg03_mysql_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg03_mysql_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.113" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsParamInst key="port" name="port" value="3306" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="mysql_service_grp3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="mysql_service_grp3" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="mFCnglbmonitor" name="mysql_mon_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="mysql_service1_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="mysql_service_grp1" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="mysql_service2_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="mysql_service_grp2" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="mysql_service3_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="mysql_service_grp3" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg01_mysql_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg01_mysql_vip" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg02_mysql_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg02_mysql_vip" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg03_mysql_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg03_mysql_vip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structures create the following CLI equivalent on the NetScaler 1000v service device:
add lb monitor http_mon HTTP
add lb monitor mysql_mon TCP
!
add server 10.1.1.11 10.1.1.11
add server 10.1.1.12 10.1.1.12
add server 10.1.1.13 10.1.1.13
add server 10.1.2.11 10.1.2.11
add server 10.1.2.12 10.1.2.12
add server 10.1.2.13 10.1.2.13
add server 10.1.3.11 10.1.3.11
add server 10.1.3.12 10.1.3.12
add server 10.1.3.13 10.1.3.13
!
add serviceGroup web_service_grp1 HTTP
add serviceGroup web_service_grp2 HTTP
add serviceGroup web_service_grp3 HTTP
add serviceGroup mysql_service_grp1 TCP
add serviceGroup mysql_service_grp2 TCP
add serviceGroup mysql_service_grp3 TCP
!
bind serviceGroup web_service_grp1 10.1.1.11 80
bind serviceGroup web_service_grp1 10.1.1.12 80
bind serviceGroup web_service_grp1 10.1.1.13 80
bind serviceGroup web_service_grp1 -monitorName http_mon
bind serviceGroup web_service_grp2 10.1.2.11 80
bind serviceGroup web_service_grp2 10.1.2.12 80
bind serviceGroup web_service_grp2 10.1.2.13 80
bind serviceGroup web_service_grp2 -monitorName http_mon
bind serviceGroup web_service_grp3 10.1.3.11 80
bind serviceGroup web_service_grp3 10.1.3.12 80
bind serviceGroup web_service_grp3 10.1.3.13 80
bind serviceGroup web_service_grp3 -monitorName http_mon
bind serviceGroup mysql_service_grp1 10.1.1.11 3306
bind serviceGroup mysql_service_grp1 10.1.1.12 3306
bind serviceGroup mysql_service_grp1 10.1.1.13 3306
bind serviceGroup mysql_service_grp1 -monitorName mysql_mon
bind serviceGroup mysql_service_grp2 10.1.2.11 3306
bind serviceGroup mysql_service_grp2 10.1.2.12 3306
bind serviceGroup mysql_service_grp2 10.1.2.13 3306
bind serviceGroup mysql_service_grp2 -monitorName mysql_mon
bind serviceGroup mysql_service_grp3 10.1.3.11 3306
bind serviceGroup mysql_service_grp3 10.1.3.12 3306
bind serviceGroup mysql_service_grp3 10.1.3.13 3306
bind serviceGroup mysql_service_grp3 -monitorName mysql_mon
!
add lb vserver epg01_mysql_vip TCP 10.1.4.111 3306 -persistenceType NONE -lbMethod
ROUNDROBIN
add lb vserver epg02_mysql_vip TCP 10.1.4.112 3306 -persistenceType NONE -lbMethod
ROUNDROBIN
add lb vserver epg03_mysql_vip TCP 10.1.4.113 3306 -persistenceType NONE -lbMethod
ROUNDROBIN
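Because these vServers are plain TCP virtual servers (see the Note that follows), a basic functional check is simply a TCP connection to each VIP on port 3306. The sketch below is an assumption-laden example; it presumes it is run from a host in the private zone with reachability to the 10.1.4.0/24 VIP subnet.

# Illustrative sketch: confirm that the TCP vServers accept connections on
# port 3306. Assumes reachability to the 10.1.4.0/24 VIP subnet.
import socket

VIPS = ["10.1.4.111", "10.1.4.112", "10.1.4.113"]

for vip in VIPS:
    try:
        with socket.create_connection((vip, 3306), timeout=3) as s:
            # The back-end MySQL server sends its handshake first, which shows
            # the NetScaler is proxying the TCP session end to end.
            banner = s.recv(64)
            print(f"{vip}:3306 reachable, first bytes: {banner[:16]!r}")
    except OSError as exc:
        print(f"{vip}:3306 unreachable: {exc}")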
Note
The NetScaler 1000v is configured to load balance the MySQL service as simple TCP sockets, instead of
as a MySQL application. Citrix NetScaler 1000v device package version 10.5 does not officially support
the DataStream function required for load balancing the MySQL application.
Step 31
Create Logical Device Contexts for DMZ ASA.
Table 8-6 shows the logical device contexts for the DMZ ASA of the Extended Gold Tenant Container.
Table 8-6
Contract / Service Graph / Node         Logical Device   Connector   Logical Interface / Bridge Domain
dmz_contract/single_asa_graph/asa_fw    dmz_asa          external    dmz_outside/dmz_external_bd
                                                         internal    dmz_inside1/dmz_bd
dmz_ns_contract/asa_ns_graph/asa_fw     dmz_asa          external    dmz_outside/dmz_external_bd
                                                         internal    dmz_ns/dmz_ns_bd
dmz_ns_contract/asa_ns_graph/slb        dmz_ns           external    outside/dmz_ns_bd
                                                         internal    inside/dmz_ns_bd
The following XML data structure creates the logical device contexts specified in the table above.
<fvTenant name="g008">
<vnsLDevCtx ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_inside1" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_ns" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_ns" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_ns/lIf-inside" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_ns/lIf-outside" />
</vnsLIfCtx>
</vnsLDevCtx>
</fvTenant>
Step 32
[Figure: Extended Gold tenant DMZ topology — the ASR 9000 (11.1.8.254) fronts the DMZ ASA; ASA interfaces 11.1.8.253 (security level 10), 11.1.8.6 (security level 50, DMZ subnet 11.1.8.0/29), and 10.1.7.253 (security level 40, DMZ SLB subnet 10.1.7.0/24); static routes provide a default route via the ASR 9000 and reach dmz_public_vip via the dmz_ns SNIP; ingress ACLs permit ICMP, SSH, HTTP/HTTPS to the DMZ subnet/VIP, and MySQL to the epg01 subnet/VIP.]
The DMZ ASA has one outside interface, one inside interface that hosts the workload VMs accessible
from the Internet, and an interface for hosting the NetScaler 1000v operating in one-arm mode.
Both public and private IP addresses are used on the ASA interfaces. Public IP addresses are used on
subnets that require access to/from the Internet. Static routing is used, as the ASA device package does
not support dynamic routing protocols. The ASA has a default route to the ASR 9000 for access to/from
the Internet, and a static route to the public vServer IP on the NetScaler 1000v.
Note
NAT is not configured, as the ASA device package version 1.0(1) has limited support for configuring
NAT rules on APIC; see CSCuq16294 for more details.
Security access control lists are attached to all ASA interfaces in the ingress direction to filter
application traffic. The ingress ACL of the dmz_inside1_if interface allows the DMZ servers/VMs to
initiate requests to the application servers/VMs in the private zone. To ease troubleshooting, ICMP
packets are allowed to/from all ASA interfaces. The following L4-L7 service parameters are configured:
interface name, IP address, and security level; static routes; and security access control lists.
The XML data structure below configures the L4-L7 service parameters to model the ASA interface
name, IP address and security level on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="Interface" name="dmz_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="11.1.8.253/255.255.255.252" />
<vnsParamInst key="security_level" name="security_level" value="10" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="ExIntfConfigRelFolder" name="extConfig"
ctrctNameOrLbl="any" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="ExIntfConfigRel" name="extConfigRel"
targetName="dmz_outside_if" />
</vnsFolderInst>
<fvAEPg name="dmz_epg">
<vnsFolderInst key="Interface" name="dmz_inside1_if"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="11.1.8.6/255.255.255.248" />
<vnsParamInst key="security_level" name="security_level"
value="50" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="dmz_inside1_if" />
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Interface" name="dmz_ns_if"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.7.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level"
value="40" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="dmz_ns_if" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The ASA outside interface, dmz_outside_if, is the common interface for all the service graph instances.
The L4-L7 service parameters for the outside interface are configured on the application profile MO, with
the contract (ctrctNameOrLbl attribute) set to any so that any contract can pick up the parameters.
The L4-L7 service parameters for the other ASA interfaces are configured on the EPG MO, with the
ctrctNameOrLbl attribute set to the contract that instantiates the service graph.
The XML data structure above configures the following CLI equivalent on the ASA service device:
interface port-channel2.3018
nameif dmz_outside_if
security-level 10
ip address 11.1.8.253 255.255.255.252
!
interface port-channel2.3017
nameif dmz_inside1_if
security-level 50
ip address 11.1.8.6 255.255.255.248
!
interface port-channel2.3019
nameif dmz_ns_if
security-level 40
ip address 10.1.7.253 255.255.255.0
Static Routes
The XML data structure below configures the L4-L7 service parameters to model the ASA static routes
on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="Interface" name="dmz_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="StaticRoute" name="staticRoute" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="route" name="route01" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="network" name="network" value="0.0.0.0" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="gateway" name="gateway" value="11.1.8.254" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Interface" name="dmz_ns_if"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="StaticRoute" name="staticRoute"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="network" name="network" value="12.1.1.8" />
<vnsParamInst key="netmask" name="netmask"
value="255.255.255.255" />
<vnsParamInst key="gateway" name="gateway" value="10.1.7.21"
/>
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the ASA service device:
route dmz_outside_if 0.0.0.0 0.0.0.0 11.1.8.254 1
route dmz_ns_if 12.1.1.8 255.255.255.255 10.1.7.21 1
The XML data structure below configures the L4-L7 service parameters to model the ASA security
access control lists on APIC. The configuration makes use of the network and service objects/groups
created in Step 27, Configure ASA Network and Service Objects, page 8-34.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="AccessList" name="dmz_outside_if_acl"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz_vip"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_service"
name="destination_service" ctrctNameOrLbl="dmz_contract"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="3306" />
<vnsParamInst key="high_port" name="high_port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="AccessList" name="dmz_ns_if_acl"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the ASA service device:
access-list dmz_outside_if_acl extended permit icmp any any
access-list dmz_outside_if_acl extended permit object-group web_https any object
dmz_subnet
access-list dmz_outside_if_acl extended permit object-group web_https any object
public_dmz_vip
!
access-list dmz_inside1_if_acl extended permit icmp any any
access-list dmz_inside1_if_acl extended permit tcp any object inside1_subnet eq 3306
access-list dmz_inside1_if_acl extended permit tcp any object epg01_vip eq 3306
!
access-list dmz_ns_if_acl extended permit icmp any any
access-list dmz_ns_if_acl extended permit object-group web_https any object dmz_subnet
The XML data structure below configures the L4-L7 service parameters to attach security access control
lists to ASA interfaces. Each ASA interface has an ingress security access control list attached.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="Interface" name="dmz_outside_if"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_outside_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="dmz_epg">
<vnsFolderInst key="Interface" name="dmz_inside1_if"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_inside1_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Interface" name="dmz_ns_if"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_ns_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the ASA service device:
access-group dmz_outside_if_acl in interface dmz_outside_if
access-group dmz_inside1_if_acl in interface dmz_inside1_if
access-group dmz_ns_if_acl in interface dmz_ns_if
Step 33
The following L4-L7 service parameters are configured for the DMZ NetScaler 1000v: subnet IP address
and static routes.
Subnet IP Address
Only a single SNIP is configured for the data interface of the DMZ NetScaler 1000v. The SNIP is used
for health monitoring, and as the source IP address to proxy client connections to real servers/VMs. The
XML data structure below configures the L4-L7 service parameters to model the NetScaler 1000v SNIP
on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="nsip" name="snip" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ipaddress" name="ip" value="10.1.7.21" />
<vnsParamInst key="netmask" name="netmask" value="255.255.255.0" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="internal_network" name="snip"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add vlan 1865
add ns ip 10.1.7.21 255.255.255.0 -vServer DISABLED
bind vlan 1865 -ifnum 1/1
bind vlan 1865 -IPAddress 10.1.7.21 255.255.255.0
Static Routes
The DMZ NetScaler 1000v is configured for one-arm mode operation with only a single data interface, so
a single default route is sufficient in this setup. The XML data structure below configures the L4-L7
service parameters to model the NetScaler 1000v default route on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="gateway" name="gateway" value="10.1.7.253" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="network" name="network" value="0.0.0.0" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="external_route" name="ext_route"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="external_route_rel" name="ext_route_rel"
targetName="network/route01" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add route 0.0.0.0 0.0.0.0 10.1.7.253
Table 8-7 shows the vServer IPs, service groups, and real servers for the DMZ NetScaler 1000v. The
DMZ does not have MySQL servers/VMs, only web servers/VMs. There are two vServer IP addresses
configured for the same set of real servers: one for access from the Internet, and one for local private use.
Table 8-7
vServer IP             Service Group      Real Server
10.1.7.111 HTTP / 80   web_service_grp1   11.1.8.1, 11.1.8.2, 11.1.8.3
12.1.1.8 HTTP / 80     web_service_grp1   11.1.8.1, 11.1.8.2, 11.1.8.3
The XML data structure below configures the L4-L7 service parameters to model server load balancing
of HTTP service on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="lbmonitor" name="http_mon"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="monitorname" name="monitorname" value="http_mon" />
<vnsParamInst key="type" name="type" value="http" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="web_service_grp1"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="web_service_grp1" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding1" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="11.1.8.1" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding2" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="11.1.8.2" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding3" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="11.1.8.3" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="dmz_private_vip"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="dmz_private_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.7.111" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsParamInst key="port" name="port" value="80" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsParamInst key="persistencetype" name="persistencetype"
value="COOKIEINSERT" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="web_service_grp1" ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
The XML data structures create the following CLI equivalent on the NetScaler 1000v service device:
add lb monitor http_mon HTTP
!
add server 11.1.8.1 11.1.8.1
add server 11.1.8.2 11.1.8.2
add server 11.1.8.3 11.1.8.3
!
add serviceGroup web_service_grp1 HTTP
!
bind serviceGroup web_service_grp1 11.1.8.1 80
bind serviceGroup web_service_grp1 11.1.8.2 80
bind serviceGroup web_service_grp1 11.1.8.3 80
bind serviceGroup web_service_grp1 -monitorName http_mon
!
add lb vserver dmz_private_vip HTTP 10.1.7.111 80 -lbMethod ROUNDROBIN
-persistenceType COOKIEINSERT
add lb vserver dmz_public_vip HTTP 12.1.1.8 80 -lbMethod ROUNDROBIN -persistenceType
COOKIEINSERT
!
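A quick way to exercise the HTTP vServer and the COOKIEINSERT persistence is to request the private VIP and inspect the Set-Cookie header that the NetScaler returns (a persistence cookie, typically named NSC_*). The sketch below is illustrative and assumes the test host can reach 10.1.7.111 in the DMZ SLB subnet.

# Illustrative sketch: request the DMZ private HTTP vServer and print any
# persistence cookie inserted by the NetScaler. Assumes reachability to
# the 10.1.7.0/24 DMZ SLB subnet.
import urllib.request

with urllib.request.urlopen("http://10.1.7.111/", timeout=5) as resp:
    print("status:", resp.status)
    for name, value in resp.getheaders():
        # With -persistenceType COOKIEINSERT the first response carries a
        # Set-Cookie header (typically NSC_*) used for client persistence.
        if name.lower() == "set-cookie":
            print("cookie:", value)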
Step 34
Contract / Service Graph / Node             Logical Device   Connector   Logical Interface / Bridge Domain
inter_asa_contract/dual_asa_graph/pvt_asa   pvt_asa          external    pvt_outside/pvt_external_bd
                                                             internal    pvt_inter_asa/inter_asa_bd
inter_asa_contract/dual_asa_graph/dmz_asa   dmz_asa          external    dmz_inter_asa/inter_asa_bd
                                                             internal    dmz_inside1/dmz_bd
The following XML data structure creates the logical device contexts specified in the table above.
<fvTenant name="g008">
<vnsLDevCtx ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-inter_asa_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inter_asa" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_asa" />
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-inter_asa_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_inter_asa" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_inside1" />
</vnsLIfCtx>
</vnsLDevCtx>
</fvTenant>
Step 35
Figure 8-13
[Inter-zone connectivity between pvt_asa and dmz_asa over the 10.1.5.0/24 link — pvt_inter_asa_if at 10.1.5.253 (security level 20) and the dmz_asa peer at 10.1.5.252; static routes reach the dmz_ns subnet and the default route via dmz_asa, and the private intranet via pvt_asa; ingress ACLs permit ICMP, SSH, HTTP/HTTPS to the DMZ subnet/VIP, and HTTP/HTTPS/MySQL to the epg01 subnet/VIP.]
One additional interface each is added to pvt_asa and dmz_asa to facilitate inter-zone communication.
Static routes are configured on each ASA firewall to route the traffic flows to the correct destinations.
Ingress security access control lists are attached to the inter-zone ASA interfaces to filter the inter-zone
traffic. The following L4-L7 service parameters are configured: interface name, IP address, and security
level; static routes; and security access control lists.
The XML data structure below configures the L4-L7 service parameters to model the ASA interface
name, IP address and security level on APIC. The configuration is applied to both pvt_asa and dmz_asa
service devices.
<fvTenant name="g008">
<vnsFolderInst key="Interface" name="pvt_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.5.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level" value="20" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_inter_asa_if" />
</vnsFolderInst>
<vnsFolderInst key="Interface" name="dmz_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
The L4-L7 service parameters are configured on the tenant MO, since they are applicable to both the
app01 and app02 application profiles.
Note
The service graph of the inter-zone setup, dual_asa_graph, has four logical interfaces: pvt_outside_if,
pvt_inter_asa_if, dmz_inter_asa_if, and dmz_inside1_if. The XML data structure above only configures
the L4-L7 service parameters for the pvt_inter_asa_if and dmz_inter_asa_if interfaces; the L4-L7 service
parameters for the other two interfaces are already configured by other service graph instances.
The XML data structure above configures the following CLI equivalent on pvt_asa service device:
interface port-channel2.3016
nameif pvt_inter_asa_if
security-level 20
ip address 10.1.5.253 255.255.255.0
Static Routes
The XML data structure below configures the L4-L7 service parameters to model the ASA static routes
on APIC. The configuration is applied to both pvt_asa and dmz_asa service devices.
<fvTenant name="g008">
<vnsFolderInst key="Interface" name="pvt_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="StaticRoute" name="StaticRoute"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="network" name="network" value="10.1.7.0" />
<vnsParamInst key="netmask" name="netmask" value="255.255.255.0" />
<vnsParamInst key="gateway" name="gateway" value="10.1.5.252" />
</vnsFolderInst>
<vnsFolderInst key="route" name="route02"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
The XML data structure above configures the following CLI equivalent on pvt_asa service device:
route pvt_inter_asa_if 10.1.7.0 255.255.255.0 10.1.5.252 1
route pvt_inter_asa_if 0.0.0.0 0.0.0.0 10.1.5.252 1
The XML data structure below configures the L4-L7 service parameters to model the ASA security
access control lists on APIC. The configuration makes use of the network and service objects/groups
created in Step 27, Configure ASA Network and Service Objects, page 8-34. The configuration is
applied to both the pvt_asa and dmz_asa service devices.
<fvTenant name="g008">
<vnsFolderInst key="AccessList" name="pvt_inter_asa_if_acl"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="source_address" name="source_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01_vip"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="source_address" name="source_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg01_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessList" name="dmz_inter_asa_if_acl"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_ssh"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="name_number" name="name" value="tcp" />
</vnsFolderInst>
The XML data structure above configures the following CLI equivalent on pvt_asa service device:
access-list pvt_inter_asa_if_acl extended permit icmp any any
access-list pvt_inter_asa_if_acl extended permit object-group web_https_mysql object
dmz_subnet object inside1_subnet
access-list pvt_inter_asa_if_acl extended permit object-group web_https_mysql object
dmz_subnet object epg01_vip
The XML data structure below configures the L4-L7 service parameters to attach security access control
lists to ASA interfaces. Each ASA interface has an ingress security access control list attached. The
configuration is applied to both pvt_asa and dmz_asa service devices.
<fvTenant name="g008">
<vnsFolderInst key="Interface" name="pvt_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inter_asa_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="Interface" name="dmz_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_inter_asa_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvTenant>
The XML data structure above configures the following CLI equivalent on pvt_asa service device:
access-group pvt_inter_asa_if_acl in interface pvt_inter_asa_if
Step 36
Contract             Service Graph
contract01           single_asa_graph
contract02           single_asa_graph
contract03           single_asa_graph
pvt_ns_contract      asa_ns_graph
dmz_contract         single_asa_graph
dmz_ns_contract      asa_ns_graph
inter_asa_contract   dual_asa_graph
The XML data structure below associates the service graphs to the respective contract.
<fvTenant name="g008">
<vzBrCP name="contract01">
<vzSubj name="subject01">
Once the service graphs are deployed, APIC creates shadow EPG-backed VDS port-groups for the virtual
service appliances; the port-groups have the following naming convention:
{tenant}|{logical_device}ctx{context}{bridge_domain}|{connector}
APIC automatically attaches the data vNICs of the virtual service appliances (in this case, the NetScaler
1000v virtual appliances) to these port-groups once the service graphs are deployed.
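As an illustration of the naming convention above, a small helper can compute the expected port-group name for a given connector. The tenant, logical device, bridge domain, and connector values below come from this chapter; the context (VRF) name g008_ctx is a placeholder assumption.

# Illustrative sketch: build the expected shadow port-group name from the
# convention {tenant}|{logical_device}ctx{context}{bridge_domain}|{connector}.
def shadow_portgroup(tenant, logical_device, context, bridge_domain, connector):
    return f"{tenant}|{logical_device}ctx{context}{bridge_domain}|{connector}"

# Example values from this chapter; "g008_ctx" is a placeholder VRF name.
print(shadow_portgroup("g008", "pvt_ns", "g008_ctx", "pvt_ns_bd", "inside"))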
Note
It is assumed that the VSS port-group dummy already exists on all vSphere ESXi hosts.
Step 2
De-associate Service Graphs from Contracts.
The service graphs must be de-associated from the contracts before the tenant is decommissioned. When
a service graph is de-associated from a contract, APIC (via the device packages) removes the configuration
pushed by the device script when the service graph was deployed. If the tenant is decommissioned
without de-associating the service graphs from the contracts, the configuration might not be removed;
see CSCur05367 and CSCuq90719 for more details.
The following XML data structure de-associates the service graphs from the contracts.
<fvTenant name="g008">
<vzBrCP name="contract01">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract02">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract03">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="pvt_ns_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_ns_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="inter_asa_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
</fvTenant>
Step 3
Decommission Tenant.
The following XML data structure decommissions the tenant from the APIC MIT. When the tenant is
decommissioned, all MOs contained within the tenant container are deleted from the APIC MIT.
<polUni>
<fvTenant name="g008" status="deleted" />
</polUni>
The XML data structure below deletes the security domain, which is no longer required.
<aaaUserEp>
<aaaDomain name="g008_sd" status="deleted" />
</aaaUserEp>
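The ordering requirement described above can also be scripted. The sketch below is illustrative only: the APIC address, credentials, and the three local file names holding the payloads above are placeholder assumptions.

# Illustrative sketch: decommission in the required order -- de-associate the
# service graphs, delete the tenant, then delete the security domain.
# APIC address, credentials, and file names are placeholder assumptions.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
             ).raise_for_status()

for xml_file in ("deassociate_graphs.xml",     # Step 2 payload
                 "delete_tenant.xml",          # Step 3 payload (polUni wrapper)
                 "delete_security_domain.xml"):
    with open(xml_file) as f:
        session.post(f"{APIC}/api/mo/uni.xml", data=f.read()).raise_for_status()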
Step 4
Step 5
The XML data structure below deletes the second physical domain.
<physDomP name="g008_dmz_asa_phy" status="deleted" />
Step 6
Step 7
no interface port-channel2.3017
no interface port-channel2.3018
no interface port-channel2.3019
no interface port-channel2.3020
end
Step 2
Step 3
ASAv Interface   Interface Name     Logical Interface   vNIC
Gig0/0           pvt_outside_if     pvt_outside         Network adapter 2
Gig0/1           pvt_inside1_if     pvt_inside1         Network adapter 3
Gig0/2           pvt_inside2_if     pvt_inside2         Network adapter 4
Gig0/3           pvt_inside3_if     pvt_inside3         Network adapter 5
Gig0/4           pvt_ns_if          pvt_ns              Network adapter 6
Gig0/5           pvt_inter_asa_if   pvt_inter_asa       Network adapter 7
Gig0/7           failover_lan       failover_lan        Network adapter 9
Gig0/8           failover_link      failover_link       Network adapter 10
Note
The first vNIC of the ASAv virtual appliance, Network adapter 1, is for management purposes only; APIC
does not model the management interface of the ASAv virtual appliance.
The XML data structure below creates two concrete devices, and the logical interfaces of the logical
device for the private zone ASA.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsCDev name="asa01" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa01">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.241" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/4" vnicName="Network adapter 6" />
<vnsCIf name="Gig0/5" vnicName="Network adapter 7" />
<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsCDev name="asa02" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa02">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.242" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/4" vnicName="Network adapter 6" />
<vnsCIf name="Gig0/5" vnicName="Network adapter 7" />
The name of the VMM controller (the vcenterName attribute above is not the vCenter hostname or IP
address, but rather the VMM controller name of the VMM domain), the VM name, and the vNIC names of
the ASAv virtual appliance are part of the concrete device configuration, so that APIC can attach the
appliance's vNICs to the shadow EPG-backed VDS port-groups created when the service graphs are
deployed.
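Because the interface-to-vNIC mapping is regular (Gig0/N maps to Network adapter N+2, with Network adapter 1 reserved for management), the vnsCIf elements can be generated rather than typed by hand. The following is a minimal sketch under that assumption.

# Illustrative sketch: generate the vnsCIf elements for an ASAv concrete
# device from the Gig0/N-to-vNIC mapping shown in the table above
# (Network adapter 1 is the management vNIC and is not modeled by APIC).
GIG_IDS = [0, 1, 2, 3, 4, 5, 7, 8]   # data and failover interfaces on pvt_asa

for gig in GIG_IDS:
    print(f'<vnsCIf name="Gig0/{gig}" vnicName="Network adapter {gig + 2}" />')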
Step 4
Figure 8-14
[Active/standby failover pair for the private zone ASA: asa01 is the primary unit and asa02 (10.0.32.242) is the secondary unit.]
The XML data structure below configures the HA implementation in Figure 8-14.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsCDev name="asa01">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="primary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.242" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.1"
/>
<vnsDevParam key="netmask" name="netmask" value="255.255.255.248"
/>
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip" value="10.255.8.2"
/>
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>
<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="asa02">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="secondary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.242" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.1"
/>
Note
ASA device package version 1.0(1) does not support using the same vNIC for the failover LAN and the
failover link.
Step 5
Create Logical Device for DMZ ASA.
The following XML data structure configures the logical device for the DMZ ASA. The logical device
is associated with a VMM domain, which specifies the vSphere virtual datacenter where the ASAv
virtual appliances reside.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa" contextAware="single-Context" devtype="VIRTUAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
<vnsCMgmt host="10.0.32.243" port="443" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
</vnsLDevVip>
</fvTenant>
Step 6
ASAv Interface   Interface Name     Logical Interface   vNIC
Gig0/0           dmz_outside_if     dmz_outside         Network adapter 2
Gig0/1           dmz_inside1_if     dmz_inside1         Network adapter 3
Gig0/2           dmz_ns_if          dmz_ns              Network adapter 4
Gig0/3           dmz_inter_asa_if   dmz_inter_asa       Network adapter 5
Gig0/7           failover_lan       failover_lan        Network adapter 9
Gig0/8           failover_link      failover_link       Network adapter 10
The XML data structure below creates two concrete devices, and the logical interfaces of the logical
device for the DMZ ASA.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa">
<vnsCDev name="asa03" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa03">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.243" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsCDev name="asa04" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa04">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.244" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsLIf name="dmz_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/0]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/0]" />
</vnsLIf>
<vnsLIf name="dmz_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/1]" />
</vnsLIf>
<vnsLIf name="dmz_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/2]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/2]" />
</vnsLIf>
<vnsLIf name="dmz_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/3]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/3]" />
</vnsLIf>
<vnsLIf name="failover_lan">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-failover_lan" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/7]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/7]" />
</vnsLIf>
<vnsLIf name="failover_link">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-failover_link" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/8]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/8]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>
Step 7
The XML data structure below configures the active/standby HA setup for the DMZ ASA consisting of
two ASAv virtual appliances.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa">
<vnsCDev name="asa03">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="primary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.244" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.11"
/>
<vnsDevParam key="netmask" name="netmask" value="255.255.255.248"
/>
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.255.8.12" />
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>
<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="asa04">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="secondary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.244" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.11"
/>
<vnsDevParam key="netmask" name="netmask" value="255.255.255.248"
/>
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.255.8.12" />
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>
<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>
Step 8
CHAPTER 9
[Figure: Silver tenant container overview showing the MPLS L3 VPN, the ASR 9000, the ACI Fabric, Tier01/Tier02/Tier03 application VMs, and the NetScaler 1000V HA pair]
Each tenant can host different applications based on the requirements of the customer. This may require a number of application tiers of Virtual Machines (VMs) to be implemented, such as web, application, and database. In this implementation, the Silver tenant is defined with three application tiers. Each tier has a unique VLAN assigned and hosts web, application, and database services. The Silver tenant also provides load-balancing services for the application tiers using the Citrix NetScaler 1000v. The NetScaler VMs sit on a different VLAN, thus maintaining a logical separation from the other application tiers. The number of application tiers can be expanded easily by assigning a new VLAN to a new tier, providing a multi-tier service. This section covers the following topics:
Physical Topology
Logical Topology
Tenant Construction
Physical Topology
The Silver tenant physical topology is shown in Figure 9-2.
Figure 9-2
[Figure: APIC cluster (APIC1-APIC3), spines 201-204, leaves 101-106, UCS 6296 fabric interconnects, NetApp FAS3200 series storage, and the NetScaler 1000V and tenant VMs]
All the tiers hosting applications and the NetScaler VMs are deployed on UCS B-Series servers. The ASR 9000 provides the external connectivity to the applications. Leaves 101 and 102 are access leaves, and the other leaves are used to connect to the edge device.
Logical Topology
In this section, the physical topology is translated into a logical layout. Figure 9-3 depicts how the Silver container is constructed logically. The logical topology can be divided into two sections: first, ACI Fabric to application servers and, second, ACI Fabric to the external MPLS cloud.
Figure 9-3
[Figure: Silver tenant logical topology - the ACI Fabric connects through two border leaves (loopback interfaces in 10.2.200.x, for example 10.2.200.106 on border leaf 2) and SVIs toward the ASR 9000 loopback 10.2.200.1 and the MPLS L3 VPN; the NetScaler 1000V HA pair on the UCS chassis provides SLB with VIPs 11.2.1.0/24 (web) and 10.2.4.128/25 (app and DB) and SNIP 10.2.4.0/25; the application tiers use Web 10.2.1.0/24, App 10.2.2.0/24, and Database 10.2.3.0/24]
A unique VRF is assigned to each Silver tenant and is defined on the access leaves in the fabric. Each application tier and the load balancer are assigned a specific VLAN, all of which are part of the VRF assigned to the Silver tenant. The fabric serves as the default gateway for each of the tiers and for the NetScaler. Because the ACI Fabric is the default gateway, it can route packets from one tier to another for both load-balanced and non-load-balanced flows. For external connectivity, two leaves in the fabric are used as border leaves to connect to the ASR 9000 nV edge router using port channels. Switched virtual interfaces (SVIs) are configured on the leaf switches, and static routes help route the packets to the edge router. Interior BGP (IBGP) is configured between the two devices to advertise the routes so that traffic can reach the application tiers. Loopback interfaces are configured for this purpose.
Figure 9-4
[Silver tenant logical construct in APIC: VRF net01; bridge domains slb_bd, bd01, bd02, and bd03; EPGs epg01, epg02, and epg03; contracts contract01, contract02, and contract03 using the http, https, ftp-data, ftp-control, mysql, and icmp filters; consumer/provider relationships; port channels toward the ASR 9000]
Tenant Construction
Step 1
Create a security domain.
b.
Right-click on the Security Domain tab and click on Create Security Domain.
Figure 9-5
Figure 9-6
Step 2
Right-click on Local Users and click Create Local User. The first step is to add the user to a security domain; in this case, the security domain is s001_sd.
b.
Next, assign access roles. Because the user accesses only tenant s001, the tenant-admin role is assigned and the user information is entered. Select Submit.
Figure 9-7
Figure 9-8
Figure 9-9
Create Tenant
All the service tier configurations are done inside the tenant container. A tenant (fvTenant) is a logical container for application policies that represents a customer, an organization, or simply a group of policies. Adding a security domain to this container enables the use of domain-based access control. Using such a construct helps maintain isolation between the policies of different customers.
Create tenant.
a.
b.
Enter the required fields and make sure you select the security domain s001_sd.
c. The GUI prompts you to add a private network. This is an optional step that can be done later.
d.
Figure 9-10
Create Tenant 1
Figure 9-11
Create Tenant 2
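The same task can also be done through the APIC REST API. A minimal XML sketch is shown below; it reuses the tenant name s001 and the security domain s001_sd from the previous steps.
<fvTenant name="s001">
    <aaaDomainRef name="s001_sd" />
</fvTenant>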
Private Network
A private network (fvCtx) is an L3 context, more aptly termed a Virtual Routing and Forwarding (VRF) instance. It provides IP address space isolation for the different tenants defined in the ACI Fabric. IP addresses can overlap between tenants because tenants do not share VRF space in the ACI Fabric.
Figure 9-12
Step 2
Figure 9-13
Step 3
If you wish to create a bridge domain, select the check box Create a Bridge Domain. The bridge
domain can be created later as well.
Figure 9-14
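A minimal XML sketch for the private network is shown below. The context name net01 matches the VRF shown in Figure 9-4; treat it as illustrative for your own deployment.
<fvTenant name="s001">
    <fvCtx name="net01" />
</fvTenant>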
Bridge Domain
The bridge domain (fvBD) is a Layer 2 (L2) forwarding construct defined in the fabric. To define a subnet under the bridge domain, the bridge domain must be linked to the L3 context (private network). A bridge domain has a unique L2 MAC address space and, if enabled, an L2 flooding domain. The private network or VRF can have multiple subnets in the given address space, which in turn can be part of one or more bridge domains. A subnet (fvSubnet) defined inside a bridge domain is contained within the bridge domain itself.
Expand the Networking folder, right-click on Bridge Domains and select Create Bridge Domain.
b.
c.
Figure 9-15
Step 2
Enter the gateway IP address and the subnet mask. If the subnet needs to be advertised to the external world, mark the scope as public; to limit the subnet to the tenant itself, mark the scope as private.
Figure 9-16
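The bridge domain and its subnet can also be pushed through XML. A minimal sketch for the web tier bridge domain is shown below; the web tier uses 10.2.1.0/24 in this implementation, and the gateway address shown is a placeholder that should match your addressing plan.
<fvTenant name="s001">
    <fvBD name="bd01">
        <fvRsCtx tnFvCtxName="net01" />
        <fvSubnet ip="10.2.1.254/24" scope="public" />
    </fvBD>
</fvTenant>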
Application Profile
An application profile (fvAp) can be considered a logical container for endpoint groups (EPGs) and caters to the application requirements. Depending on the capability of the application, the number of endpoint groups in an application profile can vary. For the Silver tenant implementation, there are three EPGs, one each for the web, application, and database servers. Based on requirements, there can be multiple application profiles.
b.
c. The dialog box also provides the capability to create contracts and EPGs, but this can be done as a
later step.
d.
Click Finish.
Figure 9-17
Figure 9-18
b.
c.
d.
e.
Figure 9-19
Figure 9-20
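A minimal XML sketch of the application profile and its three EPGs is shown below. The profile, EPG, and bridge domain names are those used for this tenant; the association of each EPG with the VMM domain is omitted and follows the GUI steps above.
<fvTenant name="s001">
    <fvAp name="app01">
        <fvAEPg name="epg01">
            <fvRsBd tnFvBDName="bd01" />
        </fvAEPg>
        <fvAEPg name="epg02">
            <fvRsBd tnFvBDName="bd02" />
        </fvAEPg>
        <fvAEPg name="epg03">
            <fvRsBd tnFvBDName="bd03" />
        </fvAEPg>
    </fvAp>
</fvTenant>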
Filters
Filters contain Layer 2 to Layer 4 fields, such as TCP/IP header fields, Layer 3 protocol type, allowed Layer 4 ports, and so on. Filters are associated with the contracts defined for EPG communication. Traffic is handled based on the matching criteria defined in the filters.
b.
Figure 9-21
Step 2
Create Filter 1
Enter the name for the filter and click on Entries to add a rule. A list of EtherType and IP Protocol values is given in the GUI. Source and destination ports are user defined based on the application under consideration; for example, port 80 can be configured for a web server and port 3306 for a database. The APIC GUI provides a predefined list of common ports.
Figure 9-22
Create Filter 2
Figure 9-23
Create Filter 3
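The filters can also be created through XML. A minimal sketch for two of the filters used by this tenant is shown below; the remaining filters (https, ftp-data, ftp-control, icmp) follow the same pattern, and the port values shown are assumptions based on the applications described above.
<fvTenant name="s001">
    <vzFilter name="http">
        <vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http" />
    </vzFilter>
    <vzFilter name="mysql">
        <vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="3306" dToPort="3306" />
    </vzFilter>
</fvTenant>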
Contracts
A contract (vzBrCP) is needed for inter-EPG communication. Subjects defined within the contract use filters to dictate the traffic that can pass between the EPGs. Subjects can define whether the filters are unidirectional or bidirectional. Contracts can have multiple rules, one for HTTP, another for HTTPS, and so on. When a contract is assigned to an EPG, the EPG needs to be labeled as either consumer or provider. When an EPG consumes a contract, the endpoints that are part of the consumer EPG initiate the communication (as clients) with endpoints in the provider EPG. An EPG can consume and provide the same contract. When a contract is not established between two EPGs, communication between them is disabled.
b.
Figure 9-24
Step 2
Create Contract 1
Enter the contract name and add subjects. For example, to add the http filter to the contract, click on Add Subject, which opens a new dialog box. Select the http filter from the drop-down option and set the direction of the filter.
Figure 9-25
Create Contract 2
Figure 9-26
Create Contract 3
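A minimal XML sketch for one of the contracts and its binding to the web EPG is shown below. The subject name and the exact filter-to-contract groupings are illustrative; in this implementation, epg01 provides contract01 toward the external network and consumes contract02 toward the application tier.
<fvTenant name="s001">
    <vzBrCP name="contract01">
        <vzSubj name="subject01">
            <vzRsSubjFiltAtt tnVzFilterName="http" />
            <vzRsSubjFiltAtt tnVzFilterName="https" />
            <vzRsSubjFiltAtt tnVzFilterName="icmp" />
        </vzSubj>
    </vzBrCP>
    <fvAp name="app01">
        <fvAEPg name="epg01">
            <fvRsProv tnVzBrCPName="contract01" />
            <fvRsCons tnVzBrCPName="contract02" />
        </fvAEPg>
    </fvAp>
</fvTenant>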
Figure 9-27
[External connectivity: the two border leaves of the ACI Fabric (for example, border leaf 2 with loopback interface 10.2.200.106) reach the ASR 9000 loopback through static routes via next hops 10.2.201.1 and 10.2.202.1]
BGP sessions are established to the ASR 9000 nV edge router through two port channels. As seen in Figure 9-27, two border leaves are used. Three loopback interfaces are configured: one on the ASR 9000 router and one on each border leaf. Static routing is used to reach the peer loopbacks. The L3 external routed outside network is created for the private network defined in the tenant.
<l3extOut name="l3_outside">
<bgpExtP descr="enable bgp" />
<l3extRsEctx tnFvCtxName="net01" />
</l3extOut>
If any address space needs to be advertised to the outside network, the external routed network needs to
be bound to the corresponding bridge domain where the subnet is defined.
<fvBD name="bd01">
#Tier1 BridgeDomain
<fvRsBDToOut tnL3extOutName="l3_outside" />
</fvBD>
Once the BGP sessions are established successfully, the routes on the edge router are visible. It is
important to note that the private network (VRF) is dynamically instantiated and is deployed on the leaf
nodes when an endpoint is attached to the EPG. The VRF is deployed on the border leaves only when
the L3 external network is associated with the private network.
b.
Figure 9-28
Step 2
In the dialog box, enter the name and select the protocol BGP.
a.
From the drop-down list for Private Network, select the private network to associate the external
network with. In this implementation, it is net01.
b.
c.
Figure 9-29
Step 3
Enter a name for the node profile. As seen in Figure 9-30, there are two leaves connecting to the core router. A node (with a loopback interface) is configured on each leaf, and the ASR 9000 peer information is also provided. This also configures the static routes to reach the peer. Interface profiles are used to configure SVIs on the leaf nodes.
Figure 9-30
Step 4
Figure 9-31
Create Nodes 1
Figure 9-32
Figure 9-33
Figure 9-34
Step 5
Figure 9-35
Figure 9-36
Configure the external EPG networks. These are the networks that the EPG members can see and reach. Since the client network subnet can be anything, leave it as 0.0.0.0/0, which allows all subnets.
Figure 9-37
Step 6
Step 7
Figure 9-39
Note
While using the XML script, make sure you remove the #comments.
<fvTenant name="s001">
#TenantName
<l3extOut name="l3_outside">
#External Routed network
<bgpExtP descr="enable bgp" />
#Select BGP Protocol
<l3extRsEctx tnFvCtxName="net01" />
#PrivateNetwork / VRF
<l3extLNodeP name="bgp_nodes">
#BGP nodes for Peering
<bgpPeerP addr="10.2.200.1" />
#PeerNode IP on asr9k
<l3extRsNodeL3OutAtt rtrId="10.2.200.105"
#BGP node1 on ACI fabric
tDn="topology/pod-1/node-105">
<ipRouteP ip="10.2.200.1/32">
#static-route to Peer
<ipNexthopP nhAddr="10.2.201.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.2.200.106"
#Node2
tDn="topology/pod-1/node-106">
<ipRouteP ip="10.2.200.1/32">
#static-route
<ipNexthopP nhAddr="10.2.202.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="svi01">
#svi for portchannel1
<l3extRsPathL3OutAtt addr="10.2.201.2/24"
encap="vlan-411" ifInstT="ext-svi"
tDn="topology/pod-1/paths-105/pathep-[pc_n105_asr9k]" />
</l3extLIfP>
<l3extLIfP name="svi02">
#svi for port-channel2
<l3extRsPathL3OutAtt addr="10.2.202.2/24"
encap="vlan-411" ifInstT="ext-svi"
tDn="topology/pod-1/paths-106/pathep-[pc_n106_asr9k]" />
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="outside_network">
#Layer3 ext-EPG
<fvRsCons tnVzBrCPName="contract01" />
#consume Tier1 contract
<l3extSubnet ip="10.2.201.0/24" />
#external allowed subnet
<l3extSubnet ip="10.2.202.0/24" />
#external allowed subnet
<l3extSubnet ip="100.2.201.0/24" />
#external allowed subnet
</l3extInstP>
</l3extOut>
</fvTenant>
3.
!
!# neighbor peer information neighbor <ip-address> update-source <interface>
router bgp 200
vrf s001
rd 2:417
address-family ipv4 unicast
!
neighbor 10.2.200.105
remote-as 200
update-source loopback411
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
!
neighbor 10.2.200.106
remote-as 200
update-source loopback411
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
commit
end
For all the load-balanced flows, traffic hits the VIP configured on the NetScaler and, based on the load-balancing algorithm, the request is forwarded to the corresponding real server.
Figure 9-40
[Load-balanced request flow: traffic from the MPLS L3 VPN enters through the ASR 9000 and the border leaves (leaf105/leaf106), is routed to the web SLB VIP on the NetScaler (pvt_ns), and is then load balanced to a Tier01 web VM through the access leaves (leaf101/leaf102)]
Figure 9-41
[Non-load-balanced traffic flow between the MPLS L3 VPN, the ASR 9000, the border leaves (leaf105/leaf106), the access leaves (leaf101/leaf102), and the Tier01 web VMs]
Figure 9-42
[Traffic flow between the Tier01 web VMs and the Tier02 app VMs through the access leaves (leaf101/leaf102) and the NetScaler (pvt_ns)]
Figure 9-43
[Data retrieval flow between the Tier02 app VMs and the database VMs through the access leaves (leaf101/leaf102) and the NetScaler (pvt_ns)]
One-Arm Mode
In one-arm mode, a single data interface on the NetScaler 1000v is used for both internal and external communication. A VLAN is assigned to the arm attached to the load balancer. In such a configuration, the default gateway for the NetScaler is the upstream router. From a traffic-flow standpoint, traffic destined for a web server sitting in the server farm hits the load-balanced server VIP on the NetScaler device. Once the load-balancing algorithm is applied, the request is forwarded from the load balancer through the same interface to the upstream router and then on to the real server.
Figure 9-44
[One-arm mode: the NetScaler 1000V attaches to the ACI Fabric with a single data interface and load balances traffic to the server farm (VM1-VM4)]
Figure 9-45
[NetScaler 1000V HA cluster: active and standby nodes exchanging heartbeats and synchronizing configuration]
Network Setup
Setting up the network for the NetScaler 1000v appliance includes configuring the NetScaler IP (NSIP) for management connectivity, the subnet IP address (SNIP) for communication along the data plane, static routes for reachability to external subnets and the default gateway, the source NAT IP, VLANs, and so on. When implementing with ACI, all of these are automated except for the NSIP. When the NetScaler appliance is created from an OVA template, configuring the NSIP is a mandatory step. Once the NSIP is configured, management access to the instance is established. All of the remaining network configuration is done through APIC.
[Figure: Load-balancing concept - client traffic hits the load-balanced server VIP, source NAT is applied, and health monitors track the back-end servers (VM1-VM3)]
The first step in creating a load-balanced service (LBS) is to define the application server. The only information needed is the IP address of the server. To create an LBS across a group of servers, all the server information needs to be added.
add server <server-name> <server-ipaddress>
Health Monitoring
To check the connectivity to the servers, monitor probes are used. Probes are sent at regular intervals to check the health of each server. Based on the result, the service or service group state is marked as up or down. Depending on the application, there can be different types of health monitors, such as ping, tcp, http, and so on.
Service Graph
The device description includes the model, version, and package version. Interface labels are used by APIC to bind an interface with a connector for specific functions that are provided by the device. The inside interface can be considered an interface used for secure internal communication, whereas the outside interface is used for less secure external communication. Interface labels are mapped to the physical interfaces on the registered device. The L4-L7 service functions provided by the NetScaler device package are shown in Figure 9-48. The two main functions that are used are LoadBalancing and SSLOffload.
Figure 9-48
Figure 9-49
These functions are called function nodes, which apply the functions defined in the node to the traffic flowing through them. Each function node has two connectors, called meta-connectors, that define the input and output connections for the node. For the load-balancing function node, the connectors are called external and internal.
Figure 9-50
Device Package -- Logical Interface Labels
[The Inside and Outside interface labels from the device package map to the custom-named logical interfaces of the logical device, which map to the concrete interfaces (1/1) on each concrete device]
Concrete Devices
Concrete devices (vnsCDev) are the actual devices running the application; they can be virtual or physical. In this implementation, two concrete devices are defined as an HA cluster. While configuring a concrete device, provide information such as the access credentials, the VM name, the virtual interface details, and so on. When concrete devices are added to the logical device cluster, the physical interfaces (concrete interfaces, vnsCIf) on the concrete device are mapped to the logical interfaces.
Logical Interfaces
For each logical device, there are two logical interfaces (vnsLIf), internal and external, that are mapped to the logical interface labels defined in the device package. During service graph rendering, the function node connectors are mapped to these logical interfaces. Figure 9-51 shows the mapping between the interfaces at the different levels. The service graph connectivity is explained in detail in the next section.
Figure 9-51
Interface Mapping
[The service graph function node connectors (vnsAbsFuncConn) map to the logical device's logical interfaces (vnsLIf external/internal), which map to the concrete interfaces (vnsCIf) on each concrete device and to the device package interface labels (mIfLbl Inside/Outside)]
Note
The default gateway for the NetScaler device is the ACI Fabric that is added as a part of the service graph
configuration.
add route 10.0.0.0 255.255.0.0 10.0.39.253
add route 172.0.0.0 255.0.0.0 10.0.39.253
rm route 0.0.0.0 0.0.0.0 10.0.39.253
add system user apic Cisco12345
bind system user apic superuser 100
add system user admin Cisco12345
bind system user admin superuser 100
save ns config
b.
Figure 9-52
Step 2
Under the General tab, enter the name for the logical device.
a.
Select the device package from the drop-down list. The mode can be set to HA Cluster and the device type to Virtual; this is a single-context implementation. Once the device type is selected as Virtual, the connectivity is set to the VMM domain.
b.
c.
Enter the access credentials. These are common to both the logical and the concrete device. For the concrete device configuration, provide the management IP address and management port, and select the VM from the list of VMs. The data interface 1_1 is used for data-plane communication. The direction for the interface can be provider, consumer, or provider and consumer. Since the NetScaler is deployed in one-arm mode, the same interface is used for both provider and consumer communication.
Note
APIC does not support the syntax 1/1 for a virtual interface; replace / with _, that is, 1_1.
Note
The Direction parameter simply indicates the type of logical interface to which the virtual concrete interface is mapped. Provider refers to the internal interface and Consumer refers to the external interface. As seen in Figure 9-53, 1_1 is mapped to both external and internal.
Figure 9-53
Step 3
Configure HA on the devices and enable the required modes and features. The management interface 0_1 is used for HA heartbeat exchange. Because management access is enabled over the SNIP, the user can access the cluster IP through the user interface or SSH. Once the device-specific parameters are configured, the cluster features can be enabled. The features enabled are SSL, SSLOffload, LoadBalancing, and LB.
Figure 9-54
Figure 9-55
2.
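A minimal XML sketch of the logical device cluster and one of its concrete devices is shown below. It follows the same pattern as the DMZ ASA example earlier in this guide; the cluster name pvt_ns, the VM name, the vCenter name, and the management address and port are placeholders, and the binding of each logical interface to the device package interface labels (vnsRsMetaIf) is omitted because it depends on the installed NetScaler device package.
<fvTenant name="s001">
    <vnsLDevVip name="pvt_ns">
        <vnsCDev name="ns01" vcenterName="ics3_vc_tenant_cluster" vmName="s001-ns01">
            <vnsCCred name="username" value="apic" />
            <vnsCCredSecret name="password" value="Cisco12345" />
            <vnsCMgmt host="10.0.39.11" port="80" />
            <vnsCIf name="1_1" vnicName="Network adapter 2" />
        </vnsCDev>
        <vnsLIf name="external">
            <vnsRsCIfAtt tDn="uni/tn-s001/lDevVip-pvt_ns/cDev-ns01/cIf-[1_1]" />
        </vnsLIf>
        <vnsLIf name="internal">
            <vnsRsCIfAtt tDn="uni/tn-s001/lDevVip-pvt_ns/cDev-ns01/cIf-[1_1]" />
        </vnsLIf>
    </vnsLDevVip>
</fvTenant>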
Note
Service Graph
In ACI, services are inserted using a service graph that is instantiated on the ACI Fabric by APIC. The user defines the services, while the service graph caters to the network and service functions that are needed by the application. A service graph can provide a single function or can concatenate several functions together. The basic functionality can be thought of as a firewall sitting between two application tiers: traffic running between the two EPGs has to pass through the function defined in the service graph, which is established when the graph is rendered. The service graph can be configured either using XML scripts or through the APIC GUI, where the function required for the service can be chosen. From a device package point of view, the function node can be termed a meta device; the meta device is associated with the actual device when the graph is rendered. Figure 9-56 shows two examples of a service graph. The first supports just a single firewall function, whereas in the second one two functions, firewall and load balancer, are concatenated together. The main components that make up the graph are the function nodes, the terminal nodes (provider and consumer), and the function node connectors.
A service graph stitches functions together, not actual network devices.
Figure 9-56
[Two service graph examples: (1) consumer terminal node, firewall function node, provider terminal node; (2) consumer terminal node, firewall function node, load balancer function node, provider terminal node]
Note
Figure 9-57
The function node represents the service function and contains the parameters to be configured for the service. These parameters can be configured at the EPG, application profile, or tenant level, or as part of the service graph itself. The function node connectors have a VLAN or VXLAN associated with them and can be considered an EPG. A service graph is inserted for traffic between two application EPGs; these consumer and provider EPGs are referred to as terminal nodes.
b.
Right-click on Function Profiles and select Create L4-L7 Services Function Profile.
Figure 9-58
Step 2
Since simple load-balancing for applications like HTTP, HTTPS, FTP, and MySQL is performed, a
single profile can be created.
b.
Since there are multiple profiles based on the function node, a profile can be created and added to
the function profile.
Figure 9-59
c.
Make sure you uncheck the Copy Existing Profile Parameters field. There are predefined profile parameters; however, when creating a custom profile, the existing ones do not need to be copied.
d.
Select the device function from the drop-down menu. The list contains all the functions provided in
the device package. Select Load Balancing.
e.
Figure 9-60
Note
None of the parameters under the function profile are configured; this can be done at a later stage. An empty profile is useful when the service graph parameters are configured at the EPG, application profile, or tenant level.
Note
Function profiles and function profile groups that are created within a tenant cannot be used in another tenant. All such configurations are local to the tenant.
Step 3
Create the Service Graph. There are two options available for creating the Service Graph. Option one
is creating a graph using pre-existing templates. Option two is to create a custom template to select the
function node and define the properties. The following steps explore option one first.
a.
b.
Figure 9-61
c.
Enter a name for the graph and, from the Type drop-down field, select a pre-defined template. In this implementation, there is a single load balancer between two EPGs, configured in one-arm mode.
d.
Figure 9-62
e.
f.
Figure 9-63
g. This creates a graph as shown in Figure 9-64. Note that the function node name is given as ADC by default.
Step 4
The second option uses the advanced mode to create the Service Graph.
a.
Figure 9-65
b.
From the list of functions under the device package on the left hand side, drag and drop the
load-balancing function. This creates three nodes on the screen, the function node and two terminal
nodes provider and consumer.
c.
d.
Once the function node is inserted, a screen pops up to associate a function profile to the node. Select
the function profile created in step 2. Since there are no parameters configured under the profile, the
function node will not have any parameters configured.
Figure 9-66
Figure 9-67
e. Add connections between the terminal nodes and the function node. Once the connections are made, the connection properties can be set (Figure 9-68).
g.
Select Ok.
Figure 9-68
Connection Properties
h. While creating a service graph in the advanced mode, the function node and connector names can
be renamed. To change them, double-click on the name and enter any custom name. Then select the
Submit button.
Figure 9-69
2.
<vnsRsAbsConnectionConns
tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeProv-Provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>
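A minimal XML sketch of the abstract graph for this tenant is shown below, reusing the tenant, graph, and function node names from this implementation. The connection and terminal node names are illustrative, and the node's binding to the device package function (vnsRsNodeToMFunc) and other package-specific attributes are omitted; they must come from the installed NetScaler device package.
<fvTenant name="s001">
    <vnsAbsGraph name="lb_epg_graph">
        <vnsAbsTermNodeCon name="Consumer">
            <vnsAbsTermConn name="1" />
        </vnsAbsTermNodeCon>
        <vnsAbsNode name="LoadBalancing">
            <vnsAbsFuncConn name="external" />
            <vnsAbsFuncConn name="internal" />
        </vnsAbsNode>
        <vnsAbsTermNodeProv name="Provider">
            <vnsAbsTermConn name="1" />
        </vnsAbsTermNodeProv>
        <vnsAbsConnection name="C1">
            <vnsRsAbsConnectionConns tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeCon-Consumer/AbsTConn" />
            <vnsRsAbsConnectionConns tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsNode-LoadBalancing/AbsFConn-external" />
        </vnsAbsConnection>
        <vnsAbsConnection name="C2">
            <vnsRsAbsConnectionConns tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsNode-LoadBalancing/AbsFConn-internal" />
            <vnsRsAbsConnectionConns tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeProv-Provider/AbsTConn" />
        </vnsAbsConnection>
    </vnsAbsGraph>
</fvTenant>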
Click on the Function Profile associated with the service graph and click on the pencil icon. This enters
into the edit mode.
Figure 9-70
Step 2
As mentioned earlier, there are two main folders to be configured: Device Config and Function Config.
Figure 9-71
Step 3
Open up the Device config folder. IP addresses are defined under the network folder.
a.
Double-click on the network folder, assign a name to this parent folder, and click Update.
Figure 9-72
Step 4
Browse the Network folder to locate the folder, nsip. This is where all the SNIP addresses are
configured.
a.
Double-click on the nsip folder, name it, and select the Update button.
Figure 9-73
Step 5
Double-click on these parameters, and assign names and values. Make sure each name is unique and does not overlap with any other name already present. The two value fields are the actual IP address and netmask to be configured.
Figure 9-74
Step 6
Configure the function parameters. Once the device parameters are configured, the function parameters
can be configured.
a. As shown in Figure 9-73, there are predefined folders for a specific function; that is, the folder for
the external communication is called external_network, the folder for the internal communication
is called internal_network, and so on. The SNIP address configured in the previous step is used
for internal communication between the NetScaler device and real servers.
b.
Step 7
Once the parameters are configured, they are visible under Function profile>All parameters.
Figure 9-76
Figure 9-77
Step 1
b.
c.
d.
Select the contract, graph, and node name. As seen in the function profile, there are two configuration folders: the device configuration and the function configuration.
Figure 9-78
Figure 9-79
Step 2
Open up Device config folder. IP addresses are defined under the Network folder.
a.
b.
Select Update.
Figure 9-80
Step 3
Browse the Network folder and locate the folder nsip. This is where all the SNIP addresses are
configured.
a.
b.
Select Update.
Figure 9-81
Step 4
Double-click on these parameters, and assign names and values. Make sure each name is unique and does not overlap with any other name already present. The two value fields are the actual IP address and netmask to be configured.
b.
Select Update.
Figure 9-82
Step 5
Configure the function parameters. Once the device parameters are configured, the function parameters
can be configured.
a. As shown in Figure 9-83, there are predefined folders for a specific function; that is, the folder for
the external communication is called external_network, the folder for the internal communication
is called internal_network, and so on. The SNIP address configured in the previous step is used
for internal communication between the NetScaler device and real servers.
b.
Figure 9-84
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="nsip" name="snip">
<vnsParamInst key="ipaddress" name="ip1" value="10.2.4.21" />
<vnsParamInst key="netmask" name="netmask1" value="255.255.255.128"
/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="internal_network"
locked="no" name="snipip">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
Right-click on Device Selection Policy and select Create Logical Device Context.
Figure 9-85
Step 2
b.
Next, select the device cluster to which the service graph configurations are pushed. While mapping the logical interface contexts, the connector names refer to the function node connector names in the service graph. If the names were left as the defaults, they are external and internal.
Figure 9-86
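A minimal XML sketch of the device selection policy (logical device context) is shown below. The contract, graph, and node names are those used in this implementation; the device cluster name pvt_ns and the logical interface names are placeholders that must match your logical device configuration.
<fvTenant name="s001">
    <vnsLDevCtx ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing">
        <vnsRsLDevCtxToLDev tDn="uni/tn-s001/lDevVip-pvt_ns" />
        <vnsLIfCtx connNameOrLbl="external">
            <vnsRsLIfCtxToLIf tDn="uni/tn-s001/lDevVip-pvt_ns/lIf-external" />
        </vnsLIfCtx>
        <vnsLIfCtx connNameOrLbl="internal">
            <vnsRsLIfCtxToLIf tDn="uni/tn-s001/lDevVip-pvt_ns/lIf-internal" />
        </vnsLIfCtx>
    </vnsLDevCtx>
</fvTenant>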
Select the Contract and then select the Subject to which the Service Graph is being applied.
b.
c.
Once the Graph is deployed successfully, entries appear under the Deployed Graph Instances and
Deployed Devices.
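Attaching the service graph to the contract subject can also be done with XML. A minimal sketch is shown below; the subject name is illustrative.
<fvTenant name="s001">
    <vzBrCP name="contract01">
        <vzSubj name="subject01">
            <vzRsSubjGraphAtt tnVnsAbsGraphName="lb_epg_graph" />
        </vzSubj>
    </vzBrCP>
</fvTenant>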
Figure 9-87
Figure 9-88
Load-Balancing Implementation
This validation tested four applications: HTTP and HTTPS web services, an FTP service, and a MySQL service, with load balancing provided for each of them. In the Cisco NetScaler device package, load balancing for the HTTPS application is provided using the SSLOffload function node; load-balancing functionality for the remaining applications is provided using the LoadBalancing function node. Applications can be added to the NetScaler as single services when there is only one server hosting the application, or as a service group when multiple servers host the application. Both implementations are covered in the following sections.
Application HTTP
Two servers are configured to host the HTTP web application. These servers are added as a service group, and a VIP is configured to load balance the application. An HTTP monitor is created to check the health of the servers. Figure 9-89 summarizes the configuration; the following configuration is done through the service graph.
add server 10.2.1.1 10.2.1.1
add server 10.2.1.2 10.2.1.2
add serviceGroup servicegroup_web HTTP
bind serviceGroup servicegroup_web 10.2.1.1 80
bind serviceGroup servicegroup_web 10.2.1.2 80
add lb monitor aci_http HTTP
bind serviceGroup servicegroup_web -monitorName aci_http
add lb vserver http_11.2.1.1 HTTP 11.2.1.1 80 -persistenceType COOKIEINSERT
bind lb vserver http_11.2.1.1 servicegroup_web
Figure 9-89
[HTTP load balancing: VIP 11.2.1.1 port 80 on the NetScaler, service group containing Server1 10.2.1.1 and Server2 10.2.1.2, monitored by lb monitor aci_http]
XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbvserver" name="VServer1" scopedBy="epg">
<vnsParamInst name="name" key="name" value="http_11.2.1.1"/>
<vnsParamInst name="ipv46" key="ipv46" value="11.2.1.1"/>
<vnsParamInst name="TCP" key="servicetype" value="HTTP"/>
<vnsParamInst name="port" key="port" value="80"/>
<vnsParamInst name="persistencetype" key="persistencetype"
value="COOKIEINSERT"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="lbvserver_servicegroup_binding" name="lbService1">
<vnsCfgRelInst key="servicename"
name="WebServiceGroup1" targetName="ServiceGroup1"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="servicegroup" name="ServiceGroup1" scopedBy="epg">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="servicegroup_web"/>
<vnsParamInst key="servicetype" name="servicetype"
value="HTTP"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="servicegroup_servicegroupmember_binding" name="servbind1" scopedBy="epg">
<vnsParamInst key="ip" name="ip1" value="10.2.1.1"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="mFCnglbmonitor"
locked="no" name="lbmonitor">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key"
targetName="lbMon1" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCngservicegroup" locked="no"
name="servicegroup_cfg">
<vnsCfgRelInst key="servicegroup_key" name="servicegroup_key"
targetName="ServiceGroup1" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="mFCnglbvserver"
locked="no" name="lbvserver_cfg">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="VServer1"/>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
Application FTP
Tier02 servers host the FTP application. Traffic from the tier01 VMs to the tier02 VMs is governed using contracts. Passive FTP is configured on the real servers, and the port range 10100-10500 is used for FTP data communication.
Note
To support passive FTP, configure the FTP port range on the primary NetScaler 1000v. Go to System > Settings > Global Setting Parameters and configure the FTP port range with start 10100 and end 10500.
add server 10.2.2.1 10.2.2.1
add service service_ftp 10.2.2.1 FTP 21 -gslb NONE -healthMonitor NO -maxClient 0
-maxReq 0 -cip DISABLED -usip NO -useproxyport NO -sp ON -cltTimeout 120 -svrTimeout
120 -CKA NO -TCPB YES -CMP NO
add lb monitor aci_ftp FTP username aci password Cisco12345
bind service service_ftp -monitorName aci_ftp
add lb vserver ftp_10.2.4.132 FTP 10.2.4.132 21
bind lb vserver ftp_10.2.4.132 service_ftp
Figure 9-90
[FTP load balancing: VIP 10.2.4.132 with control port 21 and data ports 10100-10500 on the NetScaler, service bound to server 10.2.2.1, monitored by lb monitor aci_ftp of type ftp]
XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg02">
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbvserver" name="VServer3" scopedBy="epg">
<vnsParamInst name="name" key="name" value="ftp_10.2.4.132"/>
<vnsParamInst name="ipv46" key="ipv46" value="10.2.4.132"/>
<vnsParamInst name="TCP" key="servicetype" value="FTP"/>
<vnsParamInst name="port" key="port" value="21"/>
<vnsFolderInst ctrctNameOrLbl="contract02"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="lbvserver_service_binding" name="lbService2">
<vnsCfgRelInst key="servicename" name="ftpService2"
targetName="Service3"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="service" name="Service3" scopedBy="epg">
<vnsParamInst name="name" key="name" value="service_ftp"/>
<vnsParamInst name="ip" key="ip" value="10.2.2.1"/>
<vnsParamInst name="TCP" key="servicetype" value="FTP"/>
<vnsParamInst name="port" key="port" value="21"/>
<vnsParamInst name="maxclient" key="maxclient" value="0"/>
<vnsParamInst name="maxreq" key="maxreq" value="0"/>
<vnsParamInst name="cip" key="cip" value="DISABLED"/>
<vnsParamInst name="usip" key="usip" value="NO"/>
<vnsParamInst name="useproxyport" key="useproxyport"
value="YES"/>
<vnsParamInst name="sp" key="sp" value="ON"/>
<vnsParamInst name="clttimeout" key="clttimeout" value="180"/>
<vnsParamInst name="svrtimeout" key="svrtimeout" value="360"/>
<vnsParamInst name="cka" key="cka" value="NO"/>
<vnsParamInst name="tcpb" key="tcpb" value="NO"/>
<vnsParamInst name="cmp" key="cmp" value="NO"/>
<vnsFolderInst ctrctNameOrLbl="contract02"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="service_lbmonitor_binding" name="servMon1" scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon3"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbmonitor" name="lbMon3" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_ftp"/>
<vnsParamInst name="type" key="type" value="ftp"/>
<vnsParamInst name="username" key="username" value="aci"/>
<vnsParamInst name="password" key="password" value="Cisco12345"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCnglbmonitor" locked="no" name="lbmonitor">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key_ftp"
targetName="lbMon3" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCngservice" locked="no" name="service_cfg">
<vnsCfgRelInst key="service_key" name="service_key_ftp" targetName="Service3"
/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCnglbvserver" locked="no" name="lbvserver_cfg">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key_ftp"
targetName="VServer3" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>
Application MySQL
Tier03 servers host the MySQL application. Traffic from the tier02 VMs to the tier03 VMs is governed using contracts. To support a service of type MySQL, the database user on the NetScaler needs to be configured manually.
Note
For MySQL, from the NetScaler GUI, go to System > User Administration > Database Users and add a new entry. The login credentials are the ones used to log into the back-end server (root/Cisco12345).
add server 10.2.3.1 10.2.3.1
add service service_mysql 10.2.3.1 MYSQL 3306 -gslb NONE -healthMonitor NO -maxClient
0 -maxReq 0 -cip DISABLED -usip NO -useproxyport NO -sp ON -cltTimeout 120
-svrTimeout 120 -CKA NO -TCPB YES -CMP NO
add lb monitor aci_mysql MYSQL username root password Cisco12345 database tenant
sqlquery Select * from tenant
bind service service_mysql -monitorName aci_mysql
add lb vserver mysql_10.2.4.133 MYSQL 10.2.4.133 3306
bind lb vserver mysql_10.2.4.133 service_mysql
Figure 9-91
[MySQL load balancing: VIP 10.2.4.133 port 3306 on the NetScaler, service bound to server 10.2.3.1, monitored by lb monitor aci_mysql of type MYSQL]
XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg03">
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbvserver" name="VServer4" scopedBy="epg">
<vnsParamInst name="name" key="name"
value="mysql_10.2.4.133"/>
<vnsParamInst name="ipv46" key="ipv46" value="10.2.4.133"/>
<vnsParamInst name="TCP" key="servicetype" value="MYSQL"/>
<vnsParamInst name="port" key="port" value="3306"/>
<vnsFolderInst ctrctNameOrLbl="contract03"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="lbvserver_service_binding" name="lbService2">
<vnsCfgRelInst key="servicename" name="mysqlService"
targetName="Service4"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="service" name="Service4" scopedBy="epg">
<vnsParamInst name="name" key="name" value="service_mysql"/>
<vnsParamInst name="ip" key="ip" value="10.2.3.1"/>
<vnsParamInst name="TCP" key="servicetype" value="MYSQL"/>
<vnsParamInst name="port" key="port" value="3306"/>
<vnsParamInst name="maxclient" key="maxclient" value="0"/>
<vnsParamInst name="maxreq" key="maxreq" value="0"/>
<vnsParamInst name="cip" key="cip" value="DISABLED"/>
<vnsParamInst name="usip" key="usip" value="NO"/>
<vnsParamInst name="useproxyport" key="useproxyport"
value="YES"/>
<vnsParamInst name="sp" key="sp" value="ON"/>
<vnsParamInst name="clttimeout" key="clttimeout" value="180"/>
<vnsParamInst name="svrtimeout" key="svrtimeout" value="360"/>
<vnsParamInst name="cka" key="cka" value="NO"/>
<vnsParamInst name="tcpb" key="tcpb" value="NO"/>
<vnsParamInst name="cmp" key="cmp" value="NO"/>
<vnsFolderInst ctrctNameOrLbl="contract03"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="service_lbmonitor_binding" name="servMon1" scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon4"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbmonitor" name="lbMon4" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_mysql"/>
<vnsParamInst name="type" key="type" value="MYSQL"/>
SSLOffload Implementation
A simple SSL offloading setup terminates SSL traffic (HTTPS), decrypts the SSL records, and forwards
the clear text (HTTP) traffic to the back-end web servers. However, the clear text traffic is vulnerable to
being spoofed, read, stolen, or compromised by individuals who succeed in gaining access to the
back-end network devices or web servers. You can, therefore, configure SSL offloading with end-to-end
security by re-encrypting the clear text data and using secure SSL sessions to communicate with the
back-end Web servers. To configure SSL Offloading with end-to-end encryption, add SSL based services
that represent secure servers with which the NetScaler appliance will carry out end-to-end encryption.
Then create an SSL based virtual server, and create and bind a valid certificate-key pair to the virtual
server. Bind the SSL services to the virtual server to complete the configuration.
Note
Before configuring the SSL services and virtual server, create the SSL key and certificate on the NetScaler. The certificate-key pair object itself is created by APIC.
create ssl rsakey /nsconfig/ssl/acikey.pem 2048 -exp F4
create ssl certReq /nsconfig/ssl/acireq.pem -keyFile /nsconfig/ssl/acikey.pem
-countryName US -stateName NC -organizationName Cisco
create ssl cert /nsconfig/ssl/acicert.pem /nsconfig/ssl/acireq.pem ROOT_CERT -keyFile
/nsconfig/ssl/acikey.pem -days 365
XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="lbvserver" name="VServer2" scopedBy="epg">
<vnsParamInst name="name" key="name" value="https_11.2.1.2"/>
9-68
Implementation Guide
Chapter 9
<vnsParamInst name="ipv46" key="ipv46" value="11.2.1.2"/>
<vnsParamInst name="TCP" key="servicetype" value="SSL"/>
<vnsParamInst name="port" key="port" value="443"/>
<vnsParamInst name="persistencetype" key="persistencetype"
value="COOKIEINSERT"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload"key="lbvserver_service_binding" name="lbService2">
<vnsCfgRelInst key="servicename" name="Service2"
targetName="Service2"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="service" name="Service2" scopedBy="epg">
<vnsParamInst name="name" key="name" value="service_https"/>
<vnsParamInst name="ip" key="ip" value="10.2.1.11"/>
<vnsParamInst name="TCP" key="servicetype" value="SSL"/>
<vnsParamInst name="port" key="port" value="443"/>
<vnsParamInst name="maxclient" key="maxclient" value="0"/>
<vnsParamInst name="maxreq" key="maxreq" value="0"/>
<vnsParamInst name="cip" key="cip" value="DISABLED"/>
<vnsParamInst name="usip" key="usip" value="NO"/>
<vnsParamInst name="useproxyport" key="useproxyport"
value="YES"/>
<vnsParamInst name="sp" key="sp" value="ON"/>
<vnsParamInst name="clttimeout" key="clttimeout" value="180"/>
<vnsParamInst name="svrtimeout" key="svrtimeout" value="360"/>
<vnsParamInst name="cka" key="cka" value="NO"/>
<vnsParamInst name="tcpb" key="tcpb" value="NO"/>
<vnsParamInst name="cmp" key="cmp" value="NO"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="ssl_epg_graph" nodeNameOrLbl="SSLOffload"
key="service_lbmonitor_binding" name="servMon1" scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon2"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="lbmonitor" name="lbMon2" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_https"/>
<vnsParamInst name="type" key="type" value="tcp"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="sslcertkey" name="sslcertkey" scopedBy="epg">
<vnsParamInst key="certkey" name="certkey" value="acisslcert"/>
<vnsParamInst key="cert" name="certfile" value="acicert.pem"/>
<vnsParamInst key="key" name="keyfile" value="acikey.pem"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="sslvserver" name="WebSSLVServer2" scopedBy="epg">
<vnsParamInst key="clientauth" name="clienthauth" value="ENABLED"/>
<vnsParamInst key="vservername" name="vservername"
value="https_11.2.1.2"/>
<vnsParamInst key="sendclosenotify" name="sendclosenotify" value="NO"/>
References
The following references are provided for your convenience.
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/2-2/collateral/vmdcConsumerModels.html
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/2-0/large_pod_design_guide/vmdc20Lpdg/VMDC_2-0_DG_1.html
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/2-3/implementation_guide/VMDC_2-3_IG/VMDC2-3_IG1.html#wp2270214
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACIFundamentals.html
http://www.cisco.com/c/en/us/products/collateral/switches/citrix-netscaler-1000v/datasheet-c78-731508.pdf
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-732493.html
CHAPTER 10
Overview
The Bronze tenant container is one of the simplest container models defined in the Cisco Virtualized Multi-Service Data Center (Cisco VMDC) architecture. This container provides a single subnet per tenant for resource placement, such as virtual machines or bare metal servers. This container can be implemented in two different ways:
Layer 3 Bronze
Layer 2 Bronze
These two differ in the way the default gateway is implemented within the container, which has scale implications. The L2 Bronze uses only Layer 2 constructs in ACI and scales much higher than the L3 Bronze, which is limited to the verified scalability of 100 tenants in the current release.
The following sections detail the implementation of the L3 and L2 Bronze tenant containers in the ACI Fabric.
Layer 3 Bronze
The L3 Bronze container has the virtual machine default gateway configured on the ACI Fabric. The
ACI Fabric, in turn, routes to an upstream ASR 9000 network virtualization (nV) edge device using
routing protocols or static routes.
Note
The ACI Fabric utilizes Interior Border Gateway Protocol (IBGP), Open Shortest Path First (OSPF), and
static routing for L3 external connectivity.
Physical Topology
Figure 10-1 details the L3 Bronze container physical topology. The Cisco Integrated Compute Stack (ICS) is connected to a pair of leaf switches over virtual port channels (vPCs). The border leaf switches (105 and 106) connect to the edge device over vPCs.
Note
ACI leaf switches do not support L3 port channel interface or port channel sub-interface.
Figure 10-1
[L3 Bronze physical topology: Nexus 9508 spines (Spine202-Spine204), Nexus 9396 leaves (Leaf101-Leaf106, with Leaf105 and Leaf106 as border leaves connecting to the edge device over bundle Ethernet BE-5 and BE-6), APIC1-APIC3, and UCS 6296 fabric interconnects]
Logical Topology
Figure 10-2 details the L3 Bronze container logical topology. In this figure, the virtual machines reside
in the 10.3.1.0/24 subnet and the ACI Fabric acts as the default gateway (def gwy) for this subnet. The
ACI Fabric connects to the ASR 9000 nV over L3 paths via leaf 105 and leaf 106. On each border leaf,
an L3 logical interface is defined and mapped to an external VLAN that is carried over the L2 port channel
to the ASR 9000. On the ASR 9000, the L3 bundle Ethernet interface with sub-interfaces separates the
tenant traffic. On each border leaf, IBGP or static routes implement routing to the external network.
Figure 10-2
[L3 Bronze logical topology: the ASR 9000 nV (router ID 10.3.200.1) connects to border leaves Node-105 (router ID 10.3.200.105) and Node-106 (router ID 10.3.200.106) through bundle Ethernet sub-interfaces (BE-5.421/BE-6.421 and BE-5.422/BE-6.422), using either IBGP peering or static routes (ip route 10.3.1.0/24 via 10.3.201.2 and 10.3.202.2); the server subnet 10.3.1.0/24 uses gateway 10.3.1.253 on the fabric (Node-101/Node-102) for the Web/App/Database VMs]
Figure 10-3 shows the L3 Bronze logical construct in Cisco Application Policy Infrastructure Controller
(Cisco APIC).
Figure 10-3
[Figure: tenant b001 logical construct in APIC - context net01, bridge domain bd01, EPG epg01, contract contract01 with http, https, and icmp filters, and logical SVI interfaces svi01/svi02 toward the ASR 9000]
Each tenant is identified by a name in the APIC. The tenant has a private network (net01) that corresponds to an L3 context, or Virtual Routing and Forwarding (VRF) instance, in a traditional network. The bridge domain (bd01) that identifies the boundary of the bridged traffic is similar to a VLAN in a traditional network. The bridge domain has an endpoint group (EPG), epg01, that identifies a collection of endpoints such as virtual machines. A subnet defined as part of the bridge domain configures the default gateway within the fabric. An application profile (app01) defines the policies associated with the EPG. The tenant container connects to the outside network over an external routed network. On each of the border leaf switches, a logical switch virtual interface (SVI) routes to external networks. A contract (contract01) is defined between the EPG and the external routed network; epg01 is the provider and outside_network is the consumer of this contract. Filters such as HTTP or ICMP define the traffic allowed by the contract.
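The steps below show the XML for each of these objects individually. Pulled together, a minimal sketch of the complete b001 container looks like the following; the filter entries are abbreviated to http, the remaining filters follow the same pattern, and the subject name and external routed network binding are omitted for brevity.
<fvTenant name="b001">
    <fvCtx name="net01" />
    <fvBD name="bd01">
        <fvRsCtx tnFvCtxName="net01" />
        <fvSubnet ip="10.3.1.253/24" scope="public" />
    </fvBD>
    <fvAp name="app01">
        <fvAEPg name="epg01">
            <fvRsBd tnFvBDName="bd01" />
            <fvRsProv tnVzBrCPName="contract01" />
        </fvAEPg>
    </fvAp>
    <vzFilter name="http">
        <vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http" />
    </vzFilter>
    <vzBrCP name="contract01">
        <vzSubj name="subject01">
            <vzRsSubjFiltAtt tnVzFilterName="http" />
        </vzSubj>
    </vzBrCP>
</fvTenant>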
Prerequisites
For implementing this solution, these are the prerequisites:
1. Virtual Port Channels (vPCs) should be configured and connectivity established to the ACI Fabric.
2.
3.
4.
5.
6. Create filters.
7. Create a contract.
8.
9.
Note
Steps 8 and 9 are different for the Interior Border Gateway Protocol (IBGP) and static route implementations of the L3 Bronze container.
Step 1
Create a security domain.
a.
b.
c.
In the navigation pane, right-click on Security Management and choose Create Security Domain.
d.
Enter a name for the security domain and click on Submit (Figure 10-4).
Figure 10-4
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<aaaUserEp>
<aaaDomain name="b001_sd" />
</aaaUserEp>
Note
A security domain is required for the tenant administrator to log into APIC and manage the tenant's resources.
Step 2
Create a tenant container.
a. To create a logical tenant container, click on Tenants in the main menu bar and from the submenu,
In the pop-up window, provide a name, and select the Security Domain that was created in the
previous step.
c.
Click Next button to go to the next screen and click Submit to finish the task. The tenant
configuration window opens.
d.
Note that the APIC GUI allows configuration of private network information before submitting the
task (Figure 10-5).
Figure 10-5
Create a Tenant
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001" descr="bgp routed external">
<aaaDomainRef name="b001_sd" />
</fvTenant>
Step 3
In the Tenant navigation pane, right-click on the Networking folder and select Create Private
Network (Figure 10-6).
b.
c. To minimize these steps, you may choose to configure a bridge domain in the next window.
Figure 10-6
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
Step 4
c.
d.
Click on the + next to the Subnets field (Figure 10-7), which opens a window to enter subnet-specific information.
e.
Enter the default gateway and select the Public Scope. This scope allows advertising the subnet
outside the fabric.
Figure 10-7
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvBD name="bd01">
<fvSubnet ip="10.3.1.253/24" scope="public" />
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
</fvTenant>
Step 5
Create an application profile and an EPG.
b.
c.
In the Associated Domain Profile box (Figure 10-8), click on the + and select the VMM domain
where the virtual machine resides.
d.
e.
Click on Update and then click the Finish button to finish the configuration.
Figure 10-8
Sub-step d creates a port profile on the VMware vCenter Server. You can assign the port profile to a virtual machine that resides in the same EPG (Figure 10-9).
Figure 10-9
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>
Step 6
Create filters.
a.
Expand Security Policies and right-click on Filters to create one or more filters.
b.
Provide a name for the filter and update the Entries box for parameters, such as EtherType, IP
Protocol, L4 port numbers, and so on (Figure 10-10).
Figure 10-10
Create Filters
You can use the following XML to do the same task. The value of the variable is highlighted in bold. In
this example, multiple filters are created.
<fvTenant name="b001">
<vzFilter name="http">
<vzEntry name="rule01" etherT="ip" prot="tcp" />
</vzFilter>
<vzFilter name="https">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="https" dToPort="https" />
</vzFilter>
<vzFilter name="ftp-data">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="10100" dToPort="10500" />
</vzFilter>
<vzFilter name="ftp-control">
<vzEntry name="rule01" etherT="ip" prot="tcp" />
</vzFilter>
<vzFilter name="mysql">
<vzEntry name="rule01" etherT="ip" prot="tcp" />
</vzFilter>
<vzFilter name="ssh">
<vzEntry name="rule01" etherT="ip" prot="tcp" />
</vzFilter>
</fvTenant>
Step 7
Create a contract.
a. Expand Security Policies and right-click on Contracts to create a contract.
b. Provide a name for the contract. The default scope is Context (Figure 10-11).
Figure 10-11
Create a Contract
c. Click on the + next to Subjects to add a subject to the contract.
d. Provide a name for the subject.
e. Click on the + in the filter chain field, select the filters created in Step 6, and click Update (Figure 10-12).
Figure 10-12
You can use the following XML to do the same task. The value of the variable is highlighted in bold. In
this example, multiple filters are used with the contract.
<fvTenant name="b001">
<vzBrCP name="contract01">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
<vzRsSubjFiltAtt tnVzFilterName="ftp-data" />
<vzRsSubjFiltAtt tnVzFilterName="ftp-control" />
<vzRsSubjFiltAtt tnVzFilterName="mysql" />
</vzSubj>
</vzBrCP>
</fvTenant>
Step 8
Configure the ASR 9000 nV edge device for the tenant.
In this example, tenant b001 uses IBGP between ACI Fabric and ASR 9000 while tenant b002 uses
static routes.
IBGP Configuration: The ASR 9000 nV edge device has two Bundle-Ethernet interfaces with sub-interfaces for IBGP peering to the border leaf nodes.
route-policy allow-all in
route-policy allow-all out
commit
end
!
Static Routing to ACI Border Leaf: The ASR 9000 has static routes pointing to the SVIs on the border leaf switches to reach the server subnet. The connected and static routes are redistributed into BGP so that the remote provider edge (PE) device can reach the tenant server subnets.
Step 9
Creating an external routed network with IBGP consists of the following major tasks:
1. Create a Routed Outside, associate it with the tenant private network, and enable BGP.
2. Create a node profile. This includes configuring the loopback address, the BGP peer connectivity profile, and static routes for each border node. The loopback address is required for fabric route reflection.
3. Create interface profiles. This includes configuring a logical SVI interface on each border leaf and mapping it to the port channel connecting to the ASR 9000.
4. Configure the external network such that any external source IP can reach it.
In this example, border leaf nodes 105 & 106 are configured as IBGP nodes.
d. In the Tenant navigation pane, right-click on the Networking folder.
e. Select Create Routed Outside. In the pop-up window, enter a name for the Routed Outside (see Figure 10-13).
f.
Select the private network from the drop-down list, and then select BGP.
Figure 10-13
g. Click on the + in the previous screen, which opens another window to create a node profile. Enter a name for the node profile.
h. Click on the + next to Nodes. Select each border leaf node from the drop-down list, configure its Router ID, and add a static route to reach the ASR 9000 loopback address.
i. Enter the ASR 9000 loopback interface address in the BGP peer connectivity profile. You need to enter the next hop address to reach the ASR 9000 loopback address when you configure each node.
Figure 10-14
j.
Click on the + in the Interface Profiles to create an interface profile (Figure 10-14).
k.
Enter the configuration for the SVI interface on each node. This includes the name of the port
channel, IP address for the SVI interface and VLAN. You may configure separate interface profiles
for each node.
Figure 10-15
l.
Click the OK button in the Interface Profile and Node Profile configuration windows. This navigates back to the Create Routed Outside window (Figure 10-15).
m.
Click on the + under External EPG Networks (Figure 10-16). Provide a name for the external
network.
n.
Open the subnet box and enter the subnet 0.0.0.0/0 that is allowed to come in.
Figure 10-16
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<l3extOut name="l3_outside">
<bgpExtP descr="this node enable bgp" />
<l3extRsEctx tnFvCtxName="net01" />
<l3extLNodeP name="bgp_nodes">
<bgpPeerP addr="10.3.200.1" />
<l3extRsNodeL3OutAtt rtrId="10.3.200.105" tDn="topology/pod-1/node-105">
<ipRouteP ip="10.3.200.1/32">
<ipNexthopP nhAddr="10.3.201.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.3.200.106" tDn="topology/pod-1/node-106">
<ipRouteP ip="10.3.200.1/32">
<ipNexthopP nhAddr="10.3.202.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="svi01">
<l3extRsPathL3OutAtt addr="10.3.201.2/24" encap="vlan-421"
ifInstT="ext-svi" tDn="topology/pod-1/paths-105/pathep-[pc_n105_asr9k]" />
</l3extLIfP>
<l3extLIfP name="svi02">
<l3extRsPathL3OutAtt addr="10.3.202.2/24" encap="vlan-421"
ifInstT="ext-svi" tDn="topology/pod-1/paths-106/pathep-[pc_n106_asr9k]" />
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="outside_network">
<l3extSubnet ip="0.0.0.0/0" />
<!-- allows any external source IP to come in -->
</l3extInstP>
</l3extOut>
</fvTenant>
o.
External Network Configuration with Static Routes: Creating an external routed network with
static routes consists of the following major tasks:
1. Create a Routed Outside and associate it with the tenant private network.
2.
Create a node profile. This includes configuring loopback address, next hop address and static
routes. The loopback address is required for fabric route reflection.
3.
Create interface profiles. This includes configuring a logical SVI interface on each border leaf
and mapping to the port channel connecting to ASR 9000.
4.
Configure the external network such that any external source IP can reach it.
p.
In this example, border leaf nodes 105 & 106 are configured to use static routes to reach external
networks.
q.
In the Tenant navigation pane, right-click on the Networking folder and select Create Routed
Outside. In the pop-up window, enter a name for Routed Outside.
r.
Select the private network from the drop-down list. Figure 10-17 shows how to configure a Routed Outside policy.
Figure 10-17
s.
Click on the + in the previous screen which would open another window to create a node profile.
Enter a name for the node profile.
t.
Click on the + next to Nodes and configure the nodes (Figure 10-18).
u.
Select the node from the drop down list and configure the Router ID to identify the node. Configure
a static route to reach the outside networks.
Figure 10-18
Note
The node profile window does not display the next hop address associated with a static route. Currently
you need to open each node configuration entry to see the next hop address. An enhancement defect
CSCur46784 is filed to address this issue.
v. Click on the + in the Interface Profiles to create an interface profile (Figure 10-19).
w. Enter the configuration for the SVI interface on each node. This includes the name of the port channel, the IP address for the SVI interface, and the VLAN.
Figure 10-19
x.
Click the OK button in the Create Interface Profile window and again in the Node Profile configuration window. This takes you back to the Create Routed Outside window (Figure 10-20).
y.
Click on the + under External EPG Networks. This opens a pop-up window. Provide a name for
the external network. Open the subnet box and enter the subnet 0.0.0.0/0 that is allowed to come in.
z.
Click OK to close this window, or click the Finish button to submit the configuration.
Figure 10-20
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b002">
<l3extOut name="l3_outside">
<l3extRsEctx tnFvCtxName="net01" />
<l3extLNodeP name="static_nodes">
<l3extRsNodeL3OutAtt rtrId="10.3.200.105" tDn="topology/pod-1/node-105">
<ipRouteP ip="0.0.0.0/0">
<ipNexthopP nhAddr="10.3.201.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.3.200.106" tDn="topology/pod-1/node-106">
<ipRouteP ip="0.0.0.0/0">
<ipNexthopP nhAddr="10.3.202.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="svi01">
<l3extRsPathL3OutAtt addr="10.3.201.2/24" encap="vlan-422"
ifInstT="ext-svi" tDn="topology/pod-1/paths-105/pathep-[pc_n105_asr9k]" />
</l3extLIfP>
<l3extLIfP name="svi02">
<l3extRsPathL3OutAtt addr="10.3.202.2/24" encap="vlan-422"
ifInstT="ext-svi" tDn="topology/pod-1/paths-106/pathep-[pc_n106_asr9k]" />
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="outside_network">
<l3extSubnet ip="0.0.0.0/0" />
<!-- allows any external source IP to come in -->
</l3extInstP>
</l3extOut>
</fvTenant>
Step 10
Associate the L3 outside policy with the bridge domain.
a. Expand the Networking folder and select the bridge domain that was created in the previous step.
b.
Click the + next to Associated L3 Outs and add the L3 outside policy (Figure 10-21).
Figure 10-21
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvBD name="bd01">
<fvRsBDToOut tnL3extOutName="l3_outside" />
</fvBD>
</fvTenant>
Step 11
Apply the contract to the EPG and to the external network.
a. Expand the Application Profiles folder and, under the EPG, right-click Contracts.
b. Choose Add Provided Contract and, in this window, select the contract from the pull-down list (Figure 10-22).
c. Expand the Networking and External Routed Networks folders, and select the outside network (outside_network).
d. Click on the + next to Consumed Contracts and select the contract.
The following screen shot shows how to add a provided contract to the EPG.
Figure 10-22
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="contract01" />
</fvAEPg>
</fvAp>
<l3extOut name="l3_outside">
<l3extInstP name="outside_network">
<fvRsCons tnVzBrCPName="contract01" />
</l3extInstP>
</l3extOut>
</fvTenant>
Verify Configuration
To verify the tenant subnet reachability from the ASR 9000 nV edge device, use the following show
CLIs.
RP/0/RSP1/CPU0:v6-pe-NV# show vrf b001
Fri Oct 17 09:58:01.790 EDT
VRF                  RD                  RT                      AFI   SAFI
b001                 3:421               import 3:421            IPV4  Unicast
                                         export 3:421            IPV4  Unicast
RP/0/RSP1/CPU0:v6-pe-NV# show interface loopback 421
Fri Oct 17 09:58:02.101 EDT
Loopback421 is up, line protocol is up
Interface state transitions: 1
Hardware is Loopback interface(s)
Internet address is 10.3.200.1/32
MTU 1500 bytes, BW 0 Kbit
reliability Unknown, txload Unknown, rxload Unknown
Encapsulation Loopback, loopback not set,
Last input Unknown, output Unknown
Last clearing of "show interface" counters Unknown
Input/output data rate is disabled.
RP/0/RSP1/CPU0:v6-pe-NV# show interface Bundle-Ether 5.421
Fri Oct 17 09:58:02.604 EDT
Bundle-Ether5.421 is up, line protocol is up
Interface state transitions: 1
Hardware is VLAN sub-interface(s), address is 4055.3943.0f93
Internet address is 10.3.201.1/24
MTU 9004 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)
reliability 255/255, txload 0/255, rxload 0/255
Encapsulation 802.1Q Virtual LAN, VLAN Id 421, loopback not set,
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:21, output 00:00:21
Last clearing of "show interface" counters never
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
5486 packets input, 370024 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 4 broadcast packets, 0 multicast packets
10883 packets output, 1421734 bytes, 0 total output drops
Output 3 broadcast packets, 5365 multicast packets
RP/0/RSP1/CPU0:v6-pe-NV# show interface Bundle-Ether 6.421
Fri Oct 17 09:58:02.693 EDT
Bundle-Ether6.421 is up, line protocol is up
Interface state transitions: 1
Hardware is VLAN sub-interface(s), address is 4055.3943.1f93
Internet address is 10.3.202.1/24
MTU 9004 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)
reliability 255/255, txload 0/255, rxload 0/255
Encapsulation 802.1Q Virtual LAN, VLAN Id 421, loopback not set,
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:21, output 00:00:21
Last clearing of "show interface" counters never
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
5500 packets input, 371543 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 4 broadcast packets, 0 multicast packets
10819 packets output, 1416950 bytes, 0 total output drops
Output 3 broadcast packets, 5365 multicast packets
RP/0/RSP1/CPU0:v6-pe-NV# show ip route vrf b001
Fri Oct 17 09:58:03.003 EDT
Figure 10-24 and Figure 10-25 are taken from the APIC INVENTORY submenu.
Figure 10-24 shows the BGP adjacency on the border leaf.
Figure 10-24
Layer 2 Bronze
Physical Topology
The physical topology is the same as shown in Figure 10-1 except for the connection between the border
leaves and the ASR 9000. A vPC is configured between the border leaf switches and ASR 9000 nV to
extend the L2 subnet. The vPC connectivity is shown in Figure 10-26. A bundle-Ethernet interface,
labeled BE-9, is configured on the ASR 9000 nV and it terminates on a vPC between leaf 105 and leaf
106.
Figure 10-26    vPC Connectivity Between the ASR 9000 nV (BE-9) and the ACI Fabric Border Leaves
Logical Topology
The L2 Bronze logical topology is shown in Figure 10-27. The ACI Fabric is in L2 mode and the ASR
9000 is the default gateway for the tenant server subnet.
Figure 10-27    L2 Bronze Logical Topology (MPLS L3 VPN on the ASR 9000 nV; BE-9.430 with gateway 10.3.1.254/24 for the Web/App/Database VMs in tenant subnet 10.3.1.0/24 behind border leaf nodes 105 and 106)
Figure 10-28 shows the APIC construct for L2 Bronze container b010. A bridge domain and EPG are
defined and are mapped to the server subnet. An external bridged network connects the ACI Fabric to
the upstream ASR 9000. A logical interface maps to the vPC connection between the border leaves and
the ASR 9000. In this example, a default contract that is defined under the common tenant is used. This
contract allows all traffic between epg01 and outside network.
Figure 10-28    APIC Constructs for L2 Bronze Container b010 (Tenant b010, Context net01, Bridge Domain bd01, EPG epg01, external bridged interface L2_interface, and the default contract)
The configuration of the L2 Bronze container follows the same general sequence as the L3 Bronze container; the steps that differ are described below.
Step 2
Create a tenant container (b010) and associate it with a security domain.
Step 3
Create a private network (net01).
Step 4
Create a bridge domain (bd01) and associate it with the private network.
Step 5
Create an application profile and an EPG (epg01), and associate the EPG with the VMM domain where the tenant virtual machines reside.
Step 6
The VLAN used for the external connection to the ASR 9000 must be defined in a VLAN pool and is later assigned when the external bridged network is created. In this example, VLAN 430 is added to asr9k_vlan_pool, which belongs to the as9k_phy domain (Figure 10-29).
Figure 10-29
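The VLAN pool and physical domain can also be defined through the REST API. The following is a minimal sketch, assuming a static allocation mode and a single-VLAN encapsulation block; the pool is posted under uni/infra and the physical domain under uni.
<fvnsVlanInstP name="asr9k_vlan_pool" allocMode="static">
<fvnsEncapBlk from="vlan-430" to="vlan-430" />
</fvnsVlanInstP>
<physDomP name="as9k_phy">
<infraRsVlanNs tDn="uni/infra/vlanns-[asr9k_vlan_pool]-static" />
</physDomP>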
Step 7
This implementation uses the default contracts and filters defined under the common tenant. If you would like to create unique contracts and filters, refer to the L3 Bronze section. In this step, the default contracts and filters previously defined under tenant common are used while setting up a new tenant.
Figure 10-30 shows the default contract in APIC GUI. By default, all traffic is permitted.
Figure 10-30
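If the default contract does not already exist under the common tenant, a minimal sketch of an allow-all contract is shown below; reusing the predefined default filter (which permits any traffic) is an assumption, and the contract scope is left at its default value.
<fvTenant name="common">
<vzBrCP name="default">
<vzSubj name="default">
<!-- the default filter under tenant common permits any traffic -->
<vzRsSubjFiltAtt tnVzFilterName="default" />
</vzSubj>
</vzBrCP>
</fvTenant>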
Step 8
Configure the ASR 9000 nV edge device for tenant b010. As shown in the following configuration snippet, Bundle-Ether 9.430 is configured as the default gateway for tenant b010. The subnet is redistributed into BGP so that it can be advertised to the remote PE device. The ASR 9000 configuration for tenant b010 is shown below.
!
conf t
vrf b010
# VRF for tenant b010
address-family ipv4 unicast
import route-target
3:430
export route-target
3:430
!
interface loopback 430
vrf b010
ipv4 address 10.3.200.1/32
!
interface Bundle-Ether 9.430
vrf b010
ipv4 address 10.3.1.254 255.255.255.0
# Default gateway for tenant subnet
encapsulation dot1q 430
!
router bgp 200
vrf b010
rd 3:430
address-family ipv4 unicast
redistribute connected
commit
end
!
Step 9
Create an external bridged network.
a. In the Tenant navigation pane, right-click on the Networking folder and select Create Bridged Outside. In the pop-up window, enter a name for the Bridged Outside.
b. Select the bridge domain and the external VLAN from the drop-down lists.
c. Notice that VLAN-430 is used as the external VLAN connecting to the ASR 9000.
Figure 10-32
d.
Click on the + under Nodes and Interfaces Protocol Profiles (Figure 10-32). The Create Node
Profile window opens (Figure 10-33).
e. Enter a name for the node profile.
Figure 10-33
f.
Click on the + sign under the Interface Profiles to configure an interface profile. The vPC interface
is chosen as the outside interface to connect to ASR 9000 (Figure 10-33). The Create Interface
Profile window opens (Figure 10-34).
Figure 10-34
g.
Next you need to configure an external EPG network so that a contract can be assigned
(Figure 10-35).
Figure 10-35
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b010">
<l2extOut name="l2_outside">
<l2extRsEBd tnFvBDName="bd01" encap="vlan-430" />
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interface">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asr9k]" />
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="outside_network" />
</l2extOut>
</fvTenant>
Step 10
Apply the default contract to the EPG and to the external network.
a. Expand the Application Profiles folder, right-click on Contracts under the Application EPG (epg01), and select Add Provided Contract (Figure 10-36).
b. Select the default contract from the pull-down list.
Figure 10-36
c.
Expand the Networking and External Bridged Networks folders, and select outside_network
under Networks.
d.
Click on the + next to Consumed Contracts and select the Default contract (Figure 10-37).
Figure 10-37
You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b010">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="default" />
</fvAEPg>
</fvAp>
<l2extOut name="l2_outside">
<l2extInstP name="outside_network">
<fvRsCons tnVzBrCPName="default" />
</l2extInstP>
</l2extOut>
</fvTenant>
Verify Configuration
To verify the tenant subnet reachability from the ASR 9000 nV, use the following show CLIs. In this
example, tenant b010 is used.
RP/0/RSP1/CPU0:v6-pe-NV# show vrf b010
Tue Oct 21 15:25:40.464 EDT
VRF                  RD                  RT                      AFI   SAFI
b010                 3:430               import 3:430            IPV4  Unicast
                                         export 3:430            IPV4  Unicast
RP/0/RSP1/CPU0:v6-pe-NV# show interface loopback 430
Tue Oct 21 15:25:40.762 EDT
Loopback430 is up, line protocol is up
Interface state transitions: 1
Hardware is Loopback interface(s)
Internet address is 10.3.200.1/32
MTU 1500 bytes, BW 0 Kbit
reliability Unknown, txload Unknown, rxload Unknown
Encapsulation Loopback, loopback not set,
Last input Unknown, output Unknown
Last clearing of "show interface" counters Unknown
Input/output data rate is disabled.
RP/0/RSP1/CPU0:v6-pe-NV# show interface Bundle-Ether 9.430
Tue Oct 21 15:25:41.064 EDT
Bundle-Ether9.430 is up, line protocol is up
Interface state transitions: 1
Hardware is VLAN sub-interface(s), address is f025.72a9.b274
Internet address is 10.3.1.254/24
MTU 1518 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)
reliability 255/255, txload 0/255, rxload 0/255
Encapsulation 802.1Q Virtual LAN, VLAN Id 430, loopback not set,
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:00, output 00:00:00
Last clearing of "show interface" counters never
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1873 packets input, 177988 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 20 broadcast packets, 0 multicast packets
3007 packets output, 517966 bytes, 0 total output drops
Output 1 broadcast packets, 2208 multicast packets
RP/0/RSP1/CPU0:v6-pe-NV# show ip route vrf b010
Tue Oct 21 15:25:41.370 EDT
Codes: C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
I - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, su - IS-IS summary null, * - candidate default
U - per-user static route, o - ODR, L - local, G - DAGR
A - access/subscriber, a - Application route, (!) - FRR Backup path
Gateway of last resort is not set
C    10.3.1.0/24 is directly connected, 18:18:28, Bundle-Ether9.430
L    10.3.1.254/32 is directly connected, 18:18:28, Bundle-Ether9.430
L    10.3.200.1/32 is directly connected, 18:18:28, Loopback430
B    100.3.201.0/24 [200/0] via 10.255.255.201 (nexthop in vrf default), 18:18:26
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV# ping vrf b010 10.3.1.1    # Ping a VM in EPG01
Tue Oct 21 15:26:43.277 EDT
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.3.1.1, timeout is 2 seconds:
!!!!!
Deployment Considerations
The following deployment considerations apply:
There are two ways to implement the Cisco Bronze container:
1. L3 Bronze, in which the ACI Fabric provides the tenant default gateway and connects to the ASR 9000 over a routed external network.
2. L2 Bronze, in which the ACI Fabric operates in L2 mode and the ASR 9000 is the default gateway for the tenant subnet.
The L2 Bronze model currently provides a higher tenancy scale in the ACI Fabric than the L3
Bronze provides. For L3 Bronze, the verified scale is 100 tenants.
Refer to the scalability matrix in the following document for more details on supported scale
numbers:
http://mishield-bld.insieme.local/documentation/pdf/ACI_Verified_Scalability_Guide.pdf
As of APIC software version 1.0.2, the L3 routed external policy may use IBGP, OSPF, or static
connectivity to upstream ASR 9000 edge device.
L3 over vPC is not supported between the border leaves and the upstream router. To work around this limitation, you can configure separate L3 paths from each border leaf to the ASR 9000 nV.
On the border leaf node, L3 port channel or port channel sub-interfaces are not supported as of 1.0.2
release.
For L3 Bronze, only one L3 external connection per border leaf node per tenant is currently
supported.
CHAPTER 11
The underlying network infrastructure for the Copper tenant container is provided by the Cisco ACI fabric.
The Cisco ASA 5585-based ASA cluster technology provides a proven security solution.
The Cisco ASR 9000 nV edge offers highly-available connectivity outside of the data center.
In this implementation, OpenStack is based on Canonical Ubuntu 14.04/Icehouse and provides the
compute services to the tenants.
The Cisco Unified Computing System (UCS) C-Series servers build the compute pods that consist of
both compute and storage resources.
This implementation allows Copper tenants access to traditional block storage via NetApp storage as well as to Red Hat's Ceph software-defined storage.
The Nexus 1000v for KVM provides the virtual switching capabilities as well as virtual
machine-level security to tenants.
Logical Topology
Figure 11-1 shows the Copper container logical topology with respect to the IP addressing, routing,
security, and Network Address Translation (NAT).
Figure 11-1    Copper Container Logical Topology (the ASR 9000 in AS 200 peers over EBGP with the ASA cluster in AS 65101 across the outside subnet 10.4.101.0/24; static NAT maps tenant private addresses behind the ASA inside gateways 10.21.1.254, 10.21.2.254, and 10.21.3.254 to public addresses in 111.21.1.x and 111.21.2.x; the OS management subnet 10.0.46.0/24 reaches the OpenStack Swift/Rados GW servers)
All Copper tenants share the same context on the ASA cluster and have their own VLAN. Since all Copper tenants share the same context, non-overlapping IP addressing has been used for each tenant. The connectivity from each tenant to the ASA inside is based on sub-interfaces created on the ASA cluster data port channel. Traffic going out to the Internet shares the same outside interface.
External BGP (EBGP) is used to exchange the routing information between the ASR 9000 edge router
and the ASA cluster. Static routes on the ASR 9000 force traffic from the Internet that is destined to the NAT addresses to be directed to the ASA. BGP-learned routes on the ASA redirect the traffic out of the container onto the ASR 9000 edge router.
Static NAT is used to allow tenant access to and from the Internet while maintaining private addresses
in the tenant address space. Depending on the Internet access requirements of the OpenStack instances, each tenant may require multiple static NAT IPs.
Static NAT allows tenants access to the provider-backend OpenStack object storage. This enables the
provider to conceal the backend servers from the tenants while still allowing access to the OpenStack
SWIFT/RADOS services hosted in the provider backend.
Figure 11-2    Copper Container Physical Topology (the ASR 9000 nV edge router, the ASA 5585 cluster, and the OpenStack C-Series servers hosting the OpenStack instances attach to ACI leaf switches 103-108, which connect to spine switches 202-204)
Overview
The Copper container uses the ACI fabric as an L2 transport medium. Each Copper tenant has a corresponding application EPG and its own bridge domain. A unique L3 private network per tenant is not needed in a pure L2 environment, so all tenants share the same private network. This implementation does not use contracts and applies all security policing at the ASA cluster.
The Copper container uses the ASA cluster configuration instead of the ACI fabric service graph. The ASA cluster control links (CCL) and the data links use the ACI fabric to reach both the
tenant instances and the ASR 9000 nV edge router.
All connections use static binding, including the connections from C-Series servers, ASA Cluster, ASR
9000, and NetApp NFS storage.
The process for configuring the Copper tenant container is divided into two main sections, as follows:
1. ACI Link Configuration
You must configure physical connectivity from the various devices to the ACI fabric first so that it can be referred to in the tenant configurations. This includes mapping physical ports to vPC port channels and defining which protocols (such as LACP, CDP, LLDP, STP, and so on) to configure on each vPC.
In addition to VLAN pools and physical domains, you need to configure each connection mapping beforehand so that it can be used in the tenant configurations.
2. ACI Tenant Configuration
Step 1
Create a security domain, which can be tied to AAA authentication/authorization methods for securing tenant access to the APIC configuration.
<aaaUserEp>
<aaaDomain name="copper_sd" />
</aaaUserEp>
Step 2
Create the tenant container and associate it with the security domain created in Step 1.
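A minimal sketch of this step, following the same aaaDomainRef pattern used for the Bronze tenants and the copper_sd security domain created above:
<fvTenant name="copper">
<aaaDomainRef name="copper_sd" />
</fvTenant>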
Step 3
Create the application profile that will contain the tenant's endpoint groups (EPGs).
<fvTenant name="copper">
<fvAp name="copper_app_profile">
</fvAp>
</fvTenant>
Server-to-ASA Configuration
Each Copper tenant in this implementation has a unique bridge domain where their OpenStack instances
reside. These ACI bridge domains have their unicast routing disabled because the ACI fabric only
provides Layer 2 forwarding functionality to Copper tenants. Address Resolution Protocol (ARP) and
unknown unicast flooding are enabled for the proper operation of the bridge domains. Subnets are not
required for each bridge domain because the OpenStack instances have their default gateway pointing
to the ASA cluster.
Each bridge domain must be associated with a L3 context or a private network which provides IP
addressing isolation and an attachment point for L3 policies for ACI tenants. In this implementation, all
Copper tenants share a single private network and context.
Each Copper tenant has a unique EPG. The EPG controls which links are permitted to carry the tenant's VLAN.
Once the base configuration is completed, the following steps create a private network, bridge domain,
and EPG for a given Copper tenant. The private network is configured only once and it is shared by all
tenants.
Step 1
Create the private network that is shared by all Copper tenants.
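A minimal sketch of this step; the context name copper_tenant_network is the one referenced later by the management bridge domain.
<fvTenant name="copper">
<fvCtx name="copper_tenant_network" />
</fvTenant>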
Step 2
Create a bridge domain for the tenant, with unicast routing disabled and ARP and unknown unicast flooding enabled, and associate it with the shared private network.
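A minimal sketch of this step, assuming the same flooding settings shown later for the external bridge domain; bd01 is the bridge domain referenced by the EPG in Step 3.
<fvTenant name="copper">
<fvBD name="bd01" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="copper_tenant_network" />
</fvBD>
</fvTenant>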
Figure 11-3
Step 3
Create an EPG under the previously created application profile and associate it with the tenant bridge
domain.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
</fvAEPg>
</fvAp>
</fvTenant>
Step 4
Create static bindings for the tenant VLAN on the links connecting to the OpenStack servers and to the ASA cluster.
Figure 11-4    Copper Tenant Server and ASA Connectivity (OpenStack C-Series servers and their instances on leaf pairs 103/104 and 107/108, and the ASA 5585 cluster on leaf pair 105/106)
There are two types of static bindings: static paths and static leaves. Static paths allow the VLANs per vPC, while static leaves permit the VLANs on the whole switch. With static leaves, the number of entries required per VLAN is minimal compared to static paths.
The following XML REST API snippet shows how to use static paths to create static binding of VLAN
501:
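A minimal sketch of such a static path binding, following the fvRsPathAtt form used later for the management VLAN; the vPC path-endpoint name below is a hypothetical placeholder, and one entry is required per server vPC.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<!-- hypothetical vPC path name; repeat one fvRsPathAtt per server vPC -->
<fvRsPathAtt encap="vlan-501" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-107-108/pathep-[vpc_n107_n108_os_server1]" />
</fvAEPg>
</fvAp>
</fvTenant>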
Figure 11-5
The following XML REST API snippet shows how to use static leaves to create a static binding of VLAN 501 to the leaf pair 107/108. Leaf switch pair 107/108 has eight OpenStack servers, which would require eight static path entries; using static leaf binding instead requires only two entries.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsNodeAtt encap="vlan-501" instrImedcy="immediate"
tDn="topology/pod-1/node-107"/>
<fvRsNodeAtt encap="vlan-501" instrImedcy="immediate"
tDn="topology/pod-1/node-108"/>
</fvAEPg>
</fvAp>
</fvTenant>
Step 5
Associate the physical domains for the OpenStack servers and the ASA data links with the EPG.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/phys-asa_data_phy" instrImedcy="immediate"
resImedcy="immediate" />
<fvRsDomAtt tDn="uni/phys-OpenStack_phy" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>
Figure 11-8    Copper Outside Connectivity over VLAN 500 (port-channel sub-interfaces Po1.500 and Po2.500 between the ASA 5585 cluster and the ASR 9000 nV edge router, bridged through leaves 103-106 toward the Internet)
The following steps detail how to configure the connectivity between the ASA data path and the ASR
9000 edge router.
Step 1
Create a private network for the external connectivity between the ASA and the ASR 9000.
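A minimal sketch of this step; the context name copper_ext_network is the one referenced by the external bridge domain in Step 2.
<fvTenant name="copper">
<fvCtx name="copper_ext_network" />
</fvTenant>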
Step 2
Create the bridge domain and associate it with the private network. As in the server-to-ASA configuration, the bridge domain requires ARP flooding and unknown unicast flooding to be enabled. The subnet is not required since the EPG is bridged to the external network and the VM default gateway points to the ASR 9000.
<fvTenant name="copper">
<fvBD name="copper_ext_bd" arpFlood="yes" unkMacUcastAct="flood"
unicastRoute="no">
<fvRsCtx tnFvCtxName="copper_ext_network" />
</fvBD>
</fvTenant>
Step 3
Create an EPG under the previously created application profile and associate it with the tenant bridge
domain.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="copper_ext_epg">
<fvRsBd tnFvBDName="copper_ext_bd" />
<fvRsDomAtt tDn="uni/phys-asa_data_phy" instrImedcy="immediate"
resImedcy="immediate" />
<fvRsDomAtt tDn="uni/phys-asr9k_copper_phy" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>
Step 4
Create static bindings for the tenant's VLAN and associate them to the ASA data links vPC and ASR
9000 vPC.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="copper_ext_epg">
<fvRsPathAtt encap="vlan-500" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
<!-- a second fvRsPathAtt binds vlan-500 to the vPC connecting to the ASR 9000;
     that path-endpoint name is not shown here -->
</fvAEPg>
</fvAp>
</fvTenant>
Step 5
Figure 11-9    Management Connectivity over VLAN 549 (the ASA 5585 cluster and OpenStack C-Series servers on leaves 105-108 reach the Nexus 7009-A/B switches and management servers)
The following steps detail how to configure the ACI tenant to access the Nexus 7009 management
gateways.
Step 1
Create the bridge domain and associate it with the private network. The management EPG shares the same private network defined for the tenant EPGs. The bridge domain requires ARP flooding and unknown unicast flooding to be enabled.
<fvTenant name="copper">
<fvBD name="mgmt_bd" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="copper_tenant_network" />
</fvBD>
</fvTenant>
Step 2
Create an EPG under the previously created application profile and associate it with the tenant bridge
domain.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="mgmt_epg">
<fvRsBd tnFvBDName="mgmt_bd" />
</fvAEPg>
</fvAp>
</fvTenant>
Step 3
Create the static bindings for the management VLAN and associate them with the ASA data links vPC
and Nexus 7009 Switch vPCs.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="mgmt_epg">
<fvRsPathAtt encap="vlan-549" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
<fvRsPathAtt encap="vlan-549" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-107-108/pathep-[vpc_n107_n108_vmi]" />
<fvRsNodeAtt encap="vlan-549" instrImedcy="immediate"
tDn="topology/pod-1/node-107"/>
</fvAEPg>
</fvAp>
</fvTenant>
Step 4
Configure the Copper tenant-specific EPG under the storage tenant for access to NFS storage.
The NetApp controller physical connectivity is described under the storage tenant section in Chapter 2. The
following configuration snippet describes the Copper tenant-specific EPG configuration under the
storage tenant.
This snippet creates an EPG and associates it to the bridge domain "ip_storage." The static path bindings
are created for the four OpenStack Nova compute hosts as well as for the three OpenStack control nodes
hosting the OpenStack cinder service. These ports are configured as access ports with "untagged"
encapsulation type. These servers are connected to leaf pair 107/108 as shown in Figure 2. Next, the
corresponding physical domain is associated with the EPG.
<fvAEPg name="os_nfs_hosts">
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/31]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-107/pathep-[eth1/29]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-107/pathep-[eth1/30]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/29]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/30]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/32]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-107/pathep-[eth1/31]"/>
<fvRsDomAtt instrImedcy="immediate" resImedcy="immediate"
tDn="uni/phys-OpenStack_phy"/>
<fvRsBd tnFvBDName="ip_storage"/>
</fvAEPg>
Figure 11-10    Copper Container NAT Detail (static routes on the ASR 9000 in AS 200 point to the NAT subnets; the ASA cluster in AS 65101 statically translates tenant traffic, for example Src 10.21.1.x, Dest 192.168.100.100, to Src 10.0.46.x, Dest 10.0.45.78, the HA proxy VIP for the Rados GW; Internet-facing static NAT uses public addresses in 111.21.1.x and 111.21.2.x)
Interface Configuration
The following configuration snippet details the interface configuration inside the system context.
interface TenGigabitEthernet0/7
channel-group 2 mode active vss-id 1
!
interface TenGigabitEthernet0/9
channel-group 2 mode active vss-id 2
!
interface Port-channel2
description Data Uplinks
lacp max-bundle 8
port-channel load-balance src-dst-ip-port
port-channel span-cluster vss-load-balance
!
interface Port-channel2.500
description copper outside
vlan 500
!
interface Port-channel2.501
description tenant 1 inside
vlan 501
!
interface Port-channel2.502
description tenant 2 inside
vlan 502
!
interface Port-channel2.503
description tenant 3 inside
vlan 503
!
interface Port-channel2.504
description tenant 4 inside
vlan 504
!
interface Port-channel2.541
description Spirent traffic from 2nd nic
vlan 541
!
interface Port-channel2.549
description To Management for Swift/rados
vlan 549
The interfaces from this snippet are then used by the Copper context as follows:
context copper
allocate-interface Management0/1 management0
allocate-interface Port-channel2.500-Port-channel2.504 ethernet1-ethernet5
allocate-interface Port-channel2.541 ethernet7
allocate-interface Port-channel2.549 ethernet6
config-url disk0:/aci-copper1.cfg
!
BGP Configuration
The ASA BGP configuration requires all contexts to share the same BGP autonomous system number; therefore,
this is configured inside the system context as shown below:
router bgp 65101
bgp log-neighbor-changes
Once the autonomous system number is configured in the system context, additional configurations can be
added under the individual contexts.
Base Configuration
The following configuration snippet shows the inside interfaces configured for four tenants and the
interfaces for management and outside.
interface ethernet1
description asa outside to asr9k po10.500
nameif outside
security-level 0
BGP Configuration
The following configuration snippet shows the BGP configuration for Copper tenants.
router bgp 65101
address-family ipv4 unicast
neighbor 10.4.101.1 remote-as 200
neighbor 10.4.101.1 activate
no auto-summary
no synchronization
exit-address-family
None of the internal Copper tenant VLANs are advertised out to the ASR 9000 because the static routes
are configured on the ASR 9000 for the NAT subnets.
NAT Configuration
The following sections detail the NAT configurations for Copper tenants.
Deployment Considerations
The following deployment considerations apply:
The Copper container uses the ACI fabric as a Layer 2 transport. The default gateway for tenant VMs is configured on the ASA context.
Each tenant has a unique BD/EPG in the ACI fabric, with unicast routing disabled and ARP and unknown unicast flooding enabled.
For OpenStack integration with ACI fabric, VLAN to EPG mapping is done statically.
The Copper container implementation does not use contracts for policy enforcement; instead, it uses security policies defined on the ASA context.
Non-overlapping IP addressing is used for Copper tenants since all tenants share the same ASA context.
Static NAT is used on the ASA to provide access from the Internet to the tenant subnets.
Static NAT is used on the ASA to provide OpenStack instances access to object storage services.