SDN Transport API Interoperability Demonstration
OIF/ONF Whitepaper
February 10, 2017
www.oiforum.com
www.opennetworking.org
1 Executive Summary
The Transport SDN market is driven by the need to accelerate service
provisioning to meet dynamic and on-demand characteristics of cloud and edge
applications. Market adoption of transport SDN is steady and determined.
Carriers are planning, trialing and deploying transport SDN, mostly in contained,
greenfield domains. With accelerated service provisioning as the driving
application, market focus is aligning on network automation and programmability
capabilities and the expected operational simplification benefits (versus Capex
savings). In turn, this is driving a need for open systems/software, flexible and
disaggregated functional components as well as flexible transport mechanisms
and programmable wavelength modulation schemes.
The trigger point for significant commercial deployments (i.e. mass adoption) of
transport SDN may depend on defining, testing and assuring interoperability of
key network functions and interfaces. To that end, the OIF developed and
published a Transport SDN Framework that defines key functions and interfaces.
In 2014, the OIF partnered with the Open Networking Foundation (ONF) to
conduct an interop demo that tested pre-standard ONF OpenFlow
extensions for the Southbound Interfaces or Application Programming Interfaces
(APIs) and prototype transport Northbound APIs to support Service and Topology
requests. That work led to the initiation of standards work in ONF on the
Northbound Transport API (T-API) and approval of T-API specs in 2H2016.
In the 2016 OIF SDN Transport API Interoperability Demonstration the OIF and
ONF partnered to lead the industry toward the wide scale deployment of
commercial SDN by testing ONF T-API standards. The interoperability test and
demonstration, managed by the OIF, addressed multi-layer and multi-domain
environments in global carrier labs located in Asia, Europe and North America.
Interop testing in carrier labs under the OIF umbrella gives carriers direct
visibility into T-API implementations and assures interoperability of different
vendors' implementations across the T-API interface. Additional use cases based upon the API
standards are clarified in the testing and may be defined through OIF
implementation agreements to provide a common set of requirements.
Participants in the OIF Transport SDN interoperability event also submitted a
proof of concept demo proposal to ETSI NFV. The proposal, “Mapping ETSI-
NFV onto Multi-Vendor, Multi-Domain Transport SDN”, was accepted.
2 Introduction
Operators today are looking towards Software Defined Networks (SDN) to enable
programmability of their networks for efficiency, speed of deployment and new
revenue-generating network services. Widespread adoption of the
programmability paradigm depends on the availability of common or
standardized APIs that allow access to domain specific attributes and
mechanisms without requiring the API itself to be specific to the vendor or
technology. The Transport API (T-API) is designed to allow network operators to
deploy SDN across a multi-domain, multi-vendor transport infrastructure,
extending programmability across their networks end-to-end.
By abstracting the details of the lower level Domain, T-API supports integration of
Domains of different technology and different vendor equipment into a single
virtualized network infrastructure.
3 Demonstration Set-up
Participating Carriers:
• Asia: China Telecom, China Unicom, SK Telecom
• Europe: Telefonica
• North America: Verizon
One of the key characteristics of the demonstration was the ability to test
applications, controller implementations and optical network elements
implemented by different organizations, interoperating through prototype
standard or common interfaces.
The ONF Transport API (T-API) functional requirements are defined in the ONF
TR-527 (Functional Requirements for Transport API, June 2016). For the initial
version of T-API this includes 5 functional modules supporting different services:
• Topology Service
• Connectivity Service
• Path Computation Service
• Virtual Network Service
• Notification Service
Figure 3 presents the top-level decomposition of the T-API defined by [TR-527]. For
the 2016 Demo, three main modules were supported: Topology, Connectivity and
Notification.
Figure 3: Top level decomposition of the T-API Services [1].
The following sections discuss the specific module functions used in testing.
More information about the T-API work at ONF is contained in Appendix E to this
document.
The Topology Service is the first of the five T-API services defined by ONF [TR-527].
It allows the client to retrieve the most basic data about the controlled networks: an
abstract view of the devices and of the way they interconnect. Beyond providing
important network inventory information, it gives a foundation to:
• the other parts of T-API - Path Computation, Connectivity and Virtual Network,
• potential applications that define services on top of the collected topological
data (such as monitoring, planning and optimization).
Figure 4 presents the key objects of the Topology module with their attributes and
dependencies [T-API SDK,
http://github.com/OpenNetworkingFoundation/Snowmass-ONFOpenTransport/].
For simplicity, attributes of composite types (also referred to as
“topology-pacs”) are hidden.
Figure 4: Key objects of Topology module.
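The object relationships in Figure 4 can be summarized in a small illustrative sketch. The Python classes below are not the normative T-API YANG model; the class names and attributes are simplified assumptions that only reflect the containment described above (a Topology contains Nodes and Links, a Node owns NodeEdgePoints, and a Link references the NodeEdgePoints it connects).

# Illustrative sketch only; the normative definitions are the T-API YANG models
# in the OSSDN Snowmass repository. Names and attributes are simplified assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeEdgePoint:
    uuid: str                    # unique identifier of the edge point
    layer_protocol: str = "ETH"  # e.g. ETH, ODU, OCH (assumed attribute)

@dataclass
class Node:
    uuid: str
    owned_node_edge_points: List[NodeEdgePoint] = field(default_factory=list)

@dataclass
class Link:
    uuid: str
    node_edge_point_refs: List[str] = field(default_factory=list)  # endpoint uuids

@dataclass
class Topology:
    uuid: str
    nodes: List[Node] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)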
The Topology module enables retrieval of the objects in a Context shared
between provider and client. Typical functional operations are defined as follows
(a retrieval sketch follows the list):
• get list of topologies – a collection of references to Topology objects
contained in the shared Context
• get details of a given topology – attributes of the Topology object and a
collection of references to the Node and Link objects contained in a specific
Topology
• get list of nodes – a collection of Node objects in a given Topology
• get details of a given node – attributes of the Node object
• get list of links – a collection of references to Link objects contained in a
specific Topology
• get details of a given link – attributes of the Link object
• get list of node edge points – a collection of references to NodeEdgePoint
objects contained in a specific Topology and Node
• get details of a given node edge point – attributes of the NodeEdgePoint
object
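A minimal sketch of how a client might exercise two of these retrieval operations over a RESTConf/JSON binding is shown below. The base URL, resource paths and JSON member names are assumptions made for illustration; the exact resource tree is defined by the T-API YANG model in the OSSDN Snowmass repository.

# Sketch of Topology retrieval over a RESTConf/JSON binding.
# The base URL, paths and JSON member names are assumed for illustration.
import requests

BASE = "http://controller.example.net:8181/restconf/data"  # hypothetical endpoint

def get_topologies():
    """Return the Topology objects in the shared context."""
    resp = requests.get(f"{BASE}/context/topology", timeout=10)
    resp.raise_for_status()
    return resp.json().get("topology", [])

def get_nodes(topology_uuid):
    """Return the Node objects contained in a given Topology."""
    resp = requests.get(f"{BASE}/context/topology/{topology_uuid}/node", timeout=10)
    resp.raise_for_status()
    return resp.json().get("node", [])

if __name__ == "__main__":
    for topo in get_topologies():
        print(topo["uuid"], "nodes:", [n["uuid"] for n in get_nodes(topo["uuid"])])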
The T-API server responds to the request for Connectivity Service with a unique
identifier for the service, the lifecycle state of the service, plus details of the
service such as the constraints that have been met and optionally the identifiers
and details of supporting transport connections.
The initial response from the server may be returned before the Connectivity
Service is fully implemented, for example, in cases where photonic connections
are needed and some latency for tuning and balancing the photonic elements is
incurred. In this case the initial lifecycle state indicates that the service is still in
Potential rather than Installed state. Determination of when the service
transitions to Installed state can be done either by polling for the service’s
lifecycle state using the retrieval functions discussed below, or by using the
Notification service to request notification of state changes in the service.
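One simple way for a client to track the Potential-to-Installed transition is to poll the service's lifecycle state, as in the sketch below. The endpoint path, field names and state strings are illustrative assumptions; a production client would more often rely on the Notification service.

# Sketch: poll a connectivity service until its lifecycle state becomes Installed.
# The path, field names and state strings are assumptions, not the normative API.
import time
import requests

BASE = "http://controller.example.net:8181/restconf/data"  # hypothetical endpoint

def wait_until_installed(service_uuid, interval_s=5, timeout_s=300):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.get(f"{BASE}/context/connectivity-service/{service_uuid}",
                            timeout=10)
        resp.raise_for_status()
        if resp.json().get("lifecycle-state") == "INSTALLED":
            return True
        time.sleep(interval_s)  # e.g. photonic tuning and balancing still in progress
    return False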
For the Interop demo, only Ethernet point-to-point private line services were
tested as this was supported by all vendors and their domains. Internally, the
Ethernet service was transported over packet, OTN ODU or OTN OCh switched
networks depending on the particular vendor and domain.
In the demonstration, a controller that did not support notification would need to
be periodically polled for any change in state or topology. A controller supporting
notification would provide an address to which the client would connect via
websocket to receive any subsequent notifications for which it had created a
subscription.
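That notification flow can be sketched as follows: the client creates a subscription, receives a websocket address from the controller, and then connects to that address to receive subsequent events. The subscription path, payload fields and the choice of the third-party websockets package are assumptions made for illustration only.

# Sketch: subscribe to notifications and listen on the returned websocket address.
# Paths, payload fields and the event format are illustrative assumptions.
import asyncio
import json

import requests
import websockets  # third-party package: pip install websockets

BASE = "http://controller.example.net:8181/restconf/data"  # hypothetical endpoint

async def listen_for_state_changes():
    # 1. Create a subscription; the controller returns a websocket address.
    resp = requests.post(f"{BASE}/notification-subscription",
                         json={"notification-types": ["OBJECT_STATE_CHANGE"]},
                         timeout=10)
    resp.raise_for_status()
    ws_address = resp.json()["websocket-address"]

    # 2. Connect to the websocket and process incoming notifications.
    async with websockets.connect(ws_address) as ws:
        async for message in ws:
            print("notification:", json.loads(message))

if __name__ == "__main__":
    asyncio.run(listen_for_state_changes())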
Since the transport networks generally reside at the lower layers of the
networking infrastructure hierarchy, the Transport SDN use cases are often more
relevant when considered in the context of a larger service and user ecosystem.
For this interoperability demonstration, the use cases tested were framed in the
context of the ETSI-NFV architecture as depicted below.
ETSI-NFV use cases describe multiple sites hosting NFVI-PoPs, which are
interconnected over a Wide Area Network (WAN) infrastructure. As shown in
Figure 5, the network is architecturally configured by a network controller
interfacing with the WAN Infrastructure Manager (WIM) and the WAN
infrastructure. The WAN interconnects multiple ETSI-NFV sites. The use cases in
this interoperability test aim to demonstrate connectivity lifecycle management
using SDN Network Controllers for geographically distributed ETSI-NFV site
interconnections.
One of the main Use Cases tested was Multi-Domain Connectivity Service with
local and end-to-end path constraints and local and end-to-end recovery. This
Use Case involved several steps executed by a Multi-Domain Controller working
with multiple lower level Domain Controllers as shown in the following figure:
In the initial stage of the Use Case, the MD Controller queries its Domain
Controllers for their topology information using the Topology API. Based on the
retrieved information and internal knowledge of the inter-domain links, the MD
Controller then builds a multi-domain topology that can be used to compute paths
for new services.
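Conceptually, this multi-domain topology can be built as a simple graph merge: the nodes and links reported by each Domain Controller are collected, and the inter-domain links known only to the MD Controller are added on top. The data shapes in the sketch below (dictionary keys, the node-refs field, the form of the inter-domain link list) are assumptions for illustration.

# Sketch: merge per-domain Topology retrievals with locally configured
# inter-domain links into one adjacency map. Data shapes are assumptions.
def build_multidomain_graph(domain_topologies, inter_domain_links):
    """domain_topologies: {domain: {"node": [...], "link": [...]}} as retrieved
    over the Topology API; inter_domain_links: [((dom_a, node_a), (dom_b, node_b))].
    Returns an adjacency map keyed by (domain, node uuid)."""
    graph = {}

    # Intra-domain nodes and links, as reported by each Domain Controller.
    for domain, topo in domain_topologies.items():
        for node in topo.get("node", []):
            graph.setdefault((domain, node["uuid"]), set())
        for link in topo.get("link", []):
            a, b = link["node-refs"]  # assumed field: the two node uuids of the link
            graph.setdefault((domain, a), set()).add((domain, b))
            graph.setdefault((domain, b), set()).add((domain, a))

    # Inter-domain links known only to the MD Controller.
    for (dom_a, node_a), (dom_b, node_b) in inter_domain_links:
        graph.setdefault((dom_a, node_a), set()).add((dom_b, node_b))
        graph.setdefault((dom_b, node_b), set()).add((dom_a, node_a))
    return graph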
In the next stage of the Use Case, the MD Controller builds a multi-domain
connectivity service using the Connectivity Service API to create the required
services in each domain. At first, a service is built without specifying path
constraints, allowing each Domain Controller to perform internal path
computation per its local optimization, e.g., using shortest path. After each
Domain Controller indicated that its portion of the service was installed, the
data plane connectivity was tested where possible (for services spanning domains in
different labs, data plane connectivity could not be tested as there was no actual
capacity available between the labs).
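The per-domain creation step can be sketched as one Connectivity Service request to each Domain Controller along the chosen multi-domain route. The endpoint path, payload fields and the segment representation below are assumptions for illustration; the lifecycle of each returned service identifier would then be tracked as described in the previous section.

# Sketch: create one point-to-point Ethernet service per domain along a
# multi-domain route. Paths, payload fields and data shapes are assumptions.
import requests

def create_domain_service(controller_base, src_edge_point, dst_edge_point,
                          constraints=None):
    """Ask one Domain Controller to create a point-to-point Ethernet service."""
    payload = {"end-point": [src_edge_point, dst_edge_point],
               "service-layer": "ETH"}
    if constraints:                      # e.g. include/exclude path constraints
        payload["routing-constraint"] = constraints
    resp = requests.post(f"{controller_base}/context/connectivity-service",
                         json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["uuid"]           # service identifier returned by the server

def create_multidomain_service(segments):
    """segments: [(controller_base, src_edge_point, dst_edge_point), ...] in order."""
    return [create_domain_service(base, src, dst) for base, src, dst in segments]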
Figure 8: Connectivity Service with Local Path Diversity Constraints
Finally, local and end-to-end recovery was specified in the Use Case, involving
the following procedures (a sketch of the end-to-end case follows the list):
• For local recovery, simulating a failure within a domain and triggering that
domain’s internal recovery functions so that the service is restored within
that domain without disturbing the other domains in the service.
• For end-to-end recovery, simulating a failure that was not fixable within the
associated domain and using the MD Controller to provision restoration of
the service across a path bypassing the affected domain.
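The end-to-end case can be sketched as follows: when a failure cannot be repaired inside the affected domain, the MD Controller provisions replacement segments along a bypass route and then removes the segments of the failed path. The function signatures and the service record layout are assumptions that continue the hypothetical conventions of the earlier sketches.

# Sketch: MD Controller end-to-end recovery around a domain it cannot repair.
# Function signatures and the service record layout are illustrative assumptions.
import requests

def delete_domain_service(controller_base, service_uuid):
    resp = requests.delete(
        f"{controller_base}/context/connectivity-service/{service_uuid}", timeout=10)
    resp.raise_for_status()

def recover_end_to_end(service, failed_domain, compute_bypass_path, create_service):
    """service: {"segments": {domain: (controller_base, service_uuid)}}.
    compute_bypass_path(service, failed_domain) yields (domain, base, src, dst)
    tuples avoiding failed_domain; create_service(base, src, dst) returns a new
    service uuid (e.g. the create_domain_service helper from the earlier sketch)."""
    # 1. Provision the restoration path that bypasses the affected domain.
    new_segments = {}
    for domain, base, src, dst in compute_bypass_path(service, failed_domain):
        new_segments[domain] = (base, create_service(base, src, dst))

    # 2. Tear down the segments of the failed path once the bypass is in place.
    for domain, (base, uuid) in service["segments"].items():
        delete_domain_service(base, uuid)

    service["segments"] = new_segments
    return service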
Figure 10: Local and End-to-End Recovery
Figure 11: Transport SDN Controller Hierarchy
Testing of connectivity through a representative core network was accomplished
using multiple vendor domains including optical and packet switched topologies
as shown in Figure 13 below. The controller hierarchy is shown in Figure 11.
The Multi-Domain Controller queries the domain controllers for topology using
the Topology API, and uses this information to calculate a path across the
network.
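Over the merged multi-domain graph, path calculation can be as simple as a breadth-first search, as in the sketch below, which operates on the adjacency map from the earlier topology-merge sketch. Real controllers apply richer constraints (layer, capacity, diversity, policy) that are not modelled here.

# Sketch: breadth-first search for a route across the merged multi-domain graph.
# 'graph' is the adjacency map from the topology-merge sketch; keys are
# (domain, node uuid) tuples. Constraint handling is deliberately omitted.
from collections import deque

def find_path(graph, src, dst):
    """Return a list of (domain, node) hops from src to dst, or None."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in graph.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None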
6 Findings
SDN is not only changing the way we control and manage the network, but is
also changing the manner and speed of standardization. More emphasis is
being placed on following an agile process for faster and incremental
implementation and deployment of the technology. Keeping this in mind, the
objective of the interop was not certification, but validation of the ONF T-API as a
tool to enable programmability of the transport network.
At the same time, some functional as well as protocol-related issues and gaps were
identified with the transport APIs. Examples include:
o Example: Some domain controllers preferred to use the RPC-style
encoding of the RESTConf specification (of the T-API YANG
model), while others preferred the SCRUD flavor
o Resolution: The T-API was amended to define a common data model
that is shared by both the RPC and the SCRUD envelopes. This
made it easier for the multi-domain controller to implement both API
mechanisms (the two envelope styles are contrasted in the sketch below).
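The two envelope styles can be contrasted roughly as in the sketch below. Both request forms and the payload are simplified assumptions made for illustration: the RPC style invokes a named RESTConf operation, while the SCRUD (create/retrieve/update/delete, plus subscribe) style manipulates resources in the RESTConf data tree directly, with the same data model carried in either envelope.

# Rough illustration of the two envelope styles over a shared data model.
# Both URL forms and the payload are simplified assumptions, not the normative API.
import requests

service_body = {               # the shared data model rides in either envelope
    "end-point": ["ep-A", "ep-Z"],
    "service-layer": "ETH",
}

# RPC style: invoke a named RESTConf operation that carries the service body.
requests.post("http://ctrl.example.net:8181/restconf/operations/"
              "create-connectivity-service",
              json={"input": service_body}, timeout=10)

# SCRUD style: create the same service as a resource in the RESTConf data tree.
requests.post("http://ctrl.example.net:8181/restconf/data/"
              "context/connectivity-service",
              json={"connectivity-service": [service_body]}, timeout=10)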
Many of the findings from the interop are already being incorporated into the next
version of the ONF T-API specification. It should be acknowledged that these
important findings could only have been obtained through such interop testing
amongst multiple vendor and controller systems.
7 Benefits
7.2 T-API Benefits
SDN and virtualization promise to simplify transport network control, add
management flexibility, and allow the rapid development of new service offerings
by enabling programmatic control of transport networks and equipment. Open,
well-defined Transport APIs are required for services to become programmable.
They expose the resource view and provide network functionality per service
level agreements. A standards-based T-API enables infrastructure-agnostic
service provisioning and facilitates the integration of carriers' greenfield and
brownfield domains into a single virtualized Transport SDN infrastructure.
8 Conclusion
In the 2016 OIF/ONF SDN Transport API Interoperability Demonstration the OIF
and ONF partnered to lead the industry toward the wide scale deployment of
commercial SDN by testing key Transport API standards. The interoperability test
and demonstration, managed by the OIF, addressed multi-layer and multi-
domain environments in global carrier labs located in Asia, Europe and North
America. The testing successfully demonstrated that T-API enables real-time
orchestration of on-demand connectivity setup, control and monitoring across
diverse multi-layer, multi-vendor, multi-carrier networks. Some functional as well as
protocol-related issues and gaps were identified with the Transport API. The
experiences of the testing will be shared across the industry to help develop
critical implementation agreements and specifications. The goal is to accelerate
service provisioning to meet dynamic and on-demand characteristics of cloud
and edge applications.
1 Appendix A: List of Contributors
Dave Brown (Editor) – Nokia
4 Appendix D: Glossary
API Application Programming Interface
CDPI Control to Data Plane Interface
CIM Common Information Model
COTS Commercial off-the-shelf
CVNI Control Virtual Network Interface
DCN Data Communication Networks
E-NNI External Network-Network Interface
ETSI European Telecommunications Standards Institute
GFP Generic Framing Procedure
IA Implementation Agreement
ITU-T Telecommunication Standardization Sector of the International Telecommunication Union
JSON JavaScript Object Notation
MEF Metro Ethernet Forum
MD Multi-Domain
NBI Northbound Interface
OCh Optical Channel
ODU Optical channel Data Unit
OF OpenFlow
OIF Optical Internetworking Forum
ONF Open Networking Foundation
OpEx Operational Expenditure
OSSDN Open Source SDN
OTN Optical Transport Network
OTU Optical channel Transport Unit
PoP Point of Presence
QoE Quality of Experience
5 Appendix E: The ONF Transport API (T-API)
T-API is a product of the ONF Open Transport Working Group (OTWG) with
input from the OIF and joint interoperability testing. T-API is closely based on the
ONF’s Common Information Model [CIM] developed by the ONF Information
Modeling project. The CIM provides a common, technology-independent
representation of data plane resources for management-control by the operator
that is derived from industry models from TMF and ITU-T. The CIM effort has in
turn been adopted by organizations like TMF, ITU-T, OIF and MEF as the basis
for definition work.
T-API derives its Information Model by pruning and refactoring the CIM Core
Information Model as a purpose-specific realization for Transport Networks. In
progressing from model to realization, T-API also incorporates work from multiple
ONF Open Source SDN (OSSDN) projects.
The T-API SDK is available through the OSSDN SNOWMASS project under the
Apache 2 license.