Documentation
Release Beryllium
OpenDaylight Project
CHAPTER 1
1.1 Introduction
The OpenDaylight project is an open source platform for Software Defined Networking (SDN) that uses open protocols
to provide centralized, programmatic control and network device monitoring. Like many other SDN controllers,
OpenDaylight supports OpenFlow, as well as offering ready-to-install network solutions as part of its platform.
Much as your operating system provides an interface for the devices that comprise your computer, OpenDaylight provides an interface that allows you to connect network devices quickly and intelligently for optimal network performance.
It’s extremely helpful to understand that setting up your networking environment with OpenDaylight is not a single
software installation. While your first chronological step is to install OpenDaylight, you install additional functionality
packaged as Karaf features to suit your specific needs.
Before walking you through the initial OpenDaylight installation, this guide presents a fuller picture of OpenDaylight’s
framework and functionality so you understand how to set up your networking environment. The guide then takes you
through the installation process.
Major distinctions of OpenDaylight’s SDN compared to traditional SDN options are the following:
• A microservices architecture, in which a “microservice” is a particular protocol or service that a user wants to
enable within their installation of the OpenDaylight controller, for example:
– A plugin that provides connectivity to devices via the OpenFlow or BGP protocols
– An L2-Switch or a service such as Authentication, Authorization, and Accounting (AAA).
• Support for a wide and growing range of network protocols beyond OpenFlow, including SNMP, NETCONF,
OVSDB, BGP, PCEP, LISP, and more.
• Support for developing new functionality comprised of additional networking protocols and services.
Note: A thorough understanding of the microservices architecture is important for experienced network developers
who want to create new solutions in OpenDaylight. If you are new to networking and OpenDaylight, you most likely
won’t design solutions, but you should comprehend the microservices concept to understand how OpenDaylight works
and how it differs from other SDN programs.
To set up your environment, you first install OpenDaylight followed by the Apache Karaf features that offer the functionality you require. The OpenDaylight Getting Started Guide covers feature descriptions, OpenDaylight installation procedures, and feature installation.
The Getting Started Guide also includes other helpful information, with the following organization:
1. An overview of OpenDaylight and common use models
2. Who should use this guide?
3. OpenDaylight concepts and tools
4. Explanations of OpenDaylight Apache Karaf features and other features that extend network functionality
5. OpenDaylight Beryllium system requirements and Release Notes
6. OpenDaylight installation instructions
7. Feature tables with installation names and compatibility notes
1.2 Overview
OpenDaylight is for users considering open options in network programming. This guide provides information for the
following types of users:
1. Those new to OpenDaylight who want to install it and select the features they need to run their network environment using only the command line and GUI. Such users include:
(a) Students
(b) Network administrators and engineers.
2. Network engineers and network application developers who want to use OpenDaylight’s REST APIs to manage
their network programmatically.
3. Network engineers and network application developers who want to write their own OpenDaylight services and
plugins for greater functionality. This group of users needs a significant level of expertise in the following areas,
which is beyond the scope of this document:
(a) The YANG modeling language
(b) The Model-Driven Service Abstraction Layer (MD-SAL)
(c) Maven build tool
(d) Management of the shared data store
(e) How to handle notifications and/or Remote Procedure Calls (RPCs)
4. Developers who would like to join the OpenDaylight community and contribute code upstream. People in this group design offerings such as applications/services, protocol implementations, and so on, to increase OpenDaylight functionality for the benefit of all end-users.
Note: If you develop code to build new functionality for OpenDaylight and push it upstream (not required), it can
become part of the OpenDaylight release. Users can then install the features to implement the solution you’ve created.
1.3 OpenDaylight Concepts and Tools
In this section we discuss some of the concepts and tools you encounter with basic use of OpenDaylight. The guide walks you through the installation process in a subsequent section, but for now familiarize yourself with the information below.
• To date, OpenDaylight developers have formed more than 50 projects to address ways to extend network functionality. The projects are a formal structure for developers from the community to meet, document release plans, code, and release the functionality they create in an OpenDaylight release.
The typical OpenDaylight user will not join a project team, but you should know what projects are, as we refer to their activities and the functionality they create. The Karaf features that install that functionality often share the project team's name.
• Apache Karaf provides a lightweight runtime to install the Karaf features you want to implement and is included
in the OpenDaylight platform software. By default, OpenDaylight has no pre-installed features.
• After installing OpenDaylight, you install your selected features using the Karaf console to expand networking
capabilities. In the Karaf feature list below are the ones you’re most likely to use when creating your network
environment.
As a short example of installing a Karaf feature, OpenDaylight Beryllium offers Application Layer Traffic
Optimization (ALTO). The Karaf feature to install ALTO is odl-alto-all. On the Karaf console, the command to
install it is:
feature:install odl-alto-all
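You can then confirm from the same console that the feature registered as installed, for example:
feature:list -i | grep alto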
• DLUX is a web-based interface that OpenDaylight provides for you to manage your network. Its Karaf feature
installation name is “odl-dlux-core”.
1. DLUX draws information from OpenDaylight’s topology and host databases to display the following in-
formation:
(a) The network
(b) Flow statistics
(c) Host locations
1.5 OpenDaylight Karaf Features
This section provides brief descriptions of the most commonly used Karaf features developed by OpenDaylight project teams. They are presented in alphabetical order. OpenDaylight installation instructions and a feature table that lists installation commands and compatibility follow.
• AAA
• ALTO
• Border Gateway Protocol (including Link-state Distribution) (BGP)
• BGP Monitoring Protocol (BMP)
• Control and Provisioning of Wireless Access Points (CAPWAP)
• Controller Shield
• Device Identification and Driver Management (DIDM)
• DLUX
1.5.1 AAA
Standards-compliant Authentication, Authorization and Accounting (AAA) services. RESTCONF is the most common consumer of AAA and installs the AAA features automatically. AAA provides:
• Support for persistent data stores
• Federation and SSO with OpenStack Keystone
The Beryllium release of AAA includes experimental support for having the database of users and credentials stored
in the cluster-aware MD-SAL datastore.
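For example, installing the RESTCONF feature pulls the AAA features in automatically:
feature:install odl-restconf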
1.5.2 ALTO
Implements the Application-Layer Traffic Optimization (ALTO) base IETF protocol to provide network information
to applications. It defines abstractions and services to enable simplified network views and network services to guide
application usage of network resources and includes five services:
1. Network Map Service - Provides batch information to ALTO clients in the forms of ALTO network maps.
1.5.3 Border Gateway Protocol (BGP)
Is a southbound plugin that provides support for Border Gateway Protocol (including Link-state Distribution) as a source of L3 topology information.
1.5.4 BGP Monitoring Protocol (BMP)
Is a southbound plugin that provides support for BGP Monitoring Protocol as a monitoring station.
1.5.5 Control and Provisioning of Wireless Access Points (CAPWAP)
Enables OpenDaylight to manage CAPWAP-compliant wireless termination point (WTP) network devices. Intelligent applications, e.g., radio planning, can be developed by tapping into the operational states made available via the REST APIs of WTP network devices.
1.5.6 Controller Shield
Creates a repository called the Unified-Security Plugin (USecPlugin) to provide controller security information to northbound applications, such as the following:
• Collating the source of different attacks reported in southbound plugins
• Gathering information on suspected controller intrusions and trusted controllers in the network
Information collected at the plugin may also be used to configure firewalls and create IP blacklists for the network.
1.5.7 Device Identification and Driver Management (DIDM)
Provides device-specific functionality, which means that code enabling a feature understands the capability and limitations of the device it runs on. For example, configuring VLANs and adjusting FlowMods are features, and there may be different implementations for different device types. Device-specific functionality is implemented as Device Drivers.
1.5.8 DLUX
Provides the web-based interface, described earlier in this chapter, for managing your network.
1.5.9 Fabric as a Service (FaaS)
Creates a common abstraction layer on top of a physical network so northbound APIs or services can be more easily
mapped onto the physical network as a concrete device configuration.
1.5.10 Group Based Policy (GBP)
Defines an application-centric policy model for OpenDaylight that separates information about application connectivity requirements from information about the underlying details of the network infrastructure. Provides support for:
• Integration with OpenStack Neutron
• Service Function Chaining
• OFOverlay support for NAT, table offsets
1.5.11 Internet of Things Data Management (IoTDM)
Develops a data-centric middleware to act as a oneM2M-compliant IoT Data Broker (IoTDB) and enable authorized applications to retrieve IoT data uploaded by any device.
1.5.12 Link Aggregation Control Protocol (LACP)
LACP can auto-discover and aggregate multiple links between an OpenDaylight-controlled network and LACP-enabled endpoints or switches.
1.5.13 Location Identifier Separation Protocol (LISP) Flow Mapping Service (LISP)
LISP (RFC6830) enables separation of Endpoint Identity (EID) from Routing Location (RLOC) by defining an overlay
in the EID space, which is mapped to the underlying network in the RLOC space.
LISP Mapping Service provides the EID-to-RLOC mapping information, including forwarding policy (load balancing,
traffic engineering, and so on) to LISP routers for tunneling and forwarding purposes. The LISP Mapping Service can
serve the mapping data to data plane nodes as well as to OpenDaylight applications.
To leverage this service, a northbound API allows OpenDaylight applications and services to define the mappings and
policies in the LISP Mapping Service. A southbound LISP plugin enables LISP data plane devices to interact with
OpenDaylight via the LISP protocol.
1.5.14 NEMO
Is a Domain Specific Language (DSL) for the abstraction of network models and identification of operation patterns.
NEMO enables network users/applications to describe their demands for network resources, services, and logical
operations in an intuitive way that can be explained and executed by a language engine.
1.5.15 NETCONF
1.5.16 NetIDE
Enables portability and cooperation inside a single network by using a client/server multi-controller architecture. It provides an interoperability layer allowing SDN applications written for other SDN controllers to run on OpenDaylight. NetIDE details:
• Architecture follows a client/server model: other SDN controllers represent clients with OpenDaylight acting as
the server.
• OpenFlow v1.0/v1.3 is the only southbound protocol supported in this initial release. We are planning for other
southbound protocols in later releases.
• The developer documentation contains the protocol specifications required for developing plugins for other
client SDN controllers.
• The NetIDE Configuration file contains the configurable elements for the engine.
1.5.17 Neutron Support
Several services and plugins in OpenDaylight work together to provide simplified integration with the OpenStack Neutron framework. These services enable OpenStack to offload network processing to OpenDaylight while enabling OpenDaylight to provide enhanced network services to OpenStack.
OVSDB Services are at parity with the Neutron Reference Implementation in OpenStack, including support for:
• L2/L3
– The OpenDaylight Layer-3 Distributed Virtual Router is fully on par with what OpenStack offers and now
provides completely decentralized Layer 3 routing for OpenStack. ICMP rules for responding on behalf
of the L3 router are fully distributed as well.
– Full support for distributed Layer-2 switching and distributed IPv4 routing is now available.
• Clustering - Full support for clustering and High Availability (HA) is available in the OpenDaylight Beryllium
release. In particular, the OVSDB southbound plugin supports clustering that any application can use, and the
Openstack network integration with OpenDaylight (through OVSDB Net-Virt) has full clustering support. While
there is no specific limit on cluster size, a 3-node cluster has been tested extensively as part of the Beryllium
release.
• Security Groups - Security Group support is available and implemented using OpenFlow rules that provide
superior functionality and performance over OpenStack Security Groups, which use IPTables. Security Groups
also provide support for ConnTrack with stateful tracking of existing connections. Contract-based Security
Groups require OVS v2.5 with contract support.
• Hardware Virtual Tunnel End Point (HW-VTEP) - Full HW-VTEP schema support has been implemented in the
OVSDB protocol driver. Support for HW-VTEP via OpenStack through the OVSDB-NetVirt implementation
has not yet been provided as we wait for full support of Layer-2 Gateway (L2GW) to be implemented within
OpenStack.
• Service Function Chaining
• Open vSwitch southbound support for quality of service and Queue configuration Load Balancer as service
(LBaaS) with Distributed Virtual Router, as offered in the Lithium release
1.5.18 OF-CONFIG
Provides a process for an Operation Context containing an OpenFlow Switch that uses OF-CONFIG to communicate with an OpenFlow Configuration Point, enabling remote configuration of OpenFlow datapaths.
1.5.19 OpenFlow Plugin
Supports connecting to OpenFlow-enabled network devices via the OpenFlow specification. It currently supports OpenFlow versions 1.0 and 1.3.2.
In addition to support for the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support
for the Table Type Patterns and OF-CONFIG specifications.
1.5.20 Path Computation Element Protocol (PCEP)
Is a southbound plugin that provides support for performing Create, Read, Update, and Delete (CRUD) operations on Multiprotocol Label Switching (MPLS) tunnels in the underlying network.
1.5.21 Secure Network Bootstrapping Infrastructure (SNBI)
Leverages manufacturer-installed IEEE 802.1AR certificates to secure initial communications for a zero-touch approach to bootstrapping using Docker. SNBi devices and controllers automatically do the following:
1. Discover each other, which includes:
(a) Revealing the physical topology of the network
(b) Exposing each type of a device
(c) Assigning the domain for each device
2. Get assigned an IP-address
3. Establish secure IP connectivity
SNBi creates a basic infrastructure to host, run, and lifecycle-manage multiple network functions within a network
device, including individual network element services, such as:
• Performance measurement
• Traffic-sniffing functionality
• Traffic transformation functionality
SNBi also provides a Linux-side abstraction layer for forwarding elements, as well as enhancements to the feature abstraction and bootstrapping infrastructure. You can also use the device type and domain information to initiate controller federation processes.
1.5.22 Service Function Chaining (SFC)
Provides the ability to define an ordered list of network services (e.g., firewalls, load balancers) that are then "stitched" together in the network to create a service chain. SFC provides the chaining logic and APIs necessary for OpenDaylight to provision a service chain in the network, as well as an end-user application for defining such chains.
1.5.23 SNMP Plugin
The SNMP southbound plugin allows applications acting as an SNMP manager to interact with devices that support an SNMP agent. The SNMP plugin offers a general SNMP implementation, which differs from SNMP4SDN in that the latter leverages only select SNMP features to implement the specific use case of making an SNMP-enabled device emulate some features of an OpenFlow-enabled device.
1.5.24 SNMP4SDN
Provides a southbound SNMP plugin to optimize delivery of SDN controller benefits to traditional/legacy ethernet
switches through the SNMP interface. It offers support for flow configuration on ACLs and enables flow configuration
via REST API and multi-vendor support.
1.5.25 Secure tag eXchange Protocol (SXP)
Enables creation of a tag that allows you to filter traffic instead of using protocol-specific information like addresses and ports. Via SXP, an external entity creates the tags, assigns them to traffic appropriately, and publishes information about the tags to network devices so they can enforce the tags appropriately.
More specifically, SXP is an IETF-published control protocol designed to propagate the binding between an IP address and a source group, which has a unique source group tag (SGT). Within the SXP protocol, source groups with common network policies are endpoints connecting to the network. SXP updates the firewall with SGTs, enabling the firewalls to create topology-independent Access Control Lists (ACLs) and provide ACL automation.
SXP source groups have the same meaning as endpoint groups in OpenDaylight's Group Based Policy (GBP), which is used to manipulate policy groups, so you can use OpenDaylight GBP with SXP SGTs. The SXP topology-independent policy definition and automation can be extended through OpenDaylight for other services and networking devices.
1.5.26 Topology Processing Framework
Provides a framework for simplified aggregation and topology data query to enable a unified topology view, including multi-protocol, underlay, and overlay resources.
1.5.27 Time Series Data Repository (TSDR)
Creates a framework for collecting, storing, querying, and maintaining time series data in OpenDaylight. You can leverage various data-driven applications built on top of TSDR when you install a datastore and at least one collector.
Functionality of TSDR includes:
• Data Query Service - For external data-driven applications to query data from TSDR through REST APIs
• NBI integration with Grafana - Allows visualization of data collected in TSDR using Grafana
• Data Purging Service - Periodically purges data from TSDR
• Data Collection Framework - Data Collection framework to allow plugging in of various types of collectors
• HSQL data store - Replacement of H2 data store to remove third party component dependency from TSDR
• Enhancement of existing data stores including HBase to support new features introduced in Beryllium
• Cassandra data store - Cassandra implementation of TSDR SPIs
• NetFlow data collector - Collect NetFlow data from network elements
• SNMP Data Collector - Integrates with SNMP plugin to bring SNMP data into TSDR
• Syslog data collector - Collects syslog data from network elements
TSDR has multiple features to enable the functionality above. To begin, select one of these data stores:
• odl-tsdr-hsqldb-all
• odl-tsdr-hbase
• odl-tsdr-cassandra
Then select any “collectors” you want to use:
• odl-tsdr-openflow-statistics-collector
• odl-tsdr-netflow-statistics-collector
• odl-tsdr-controller-metrics-collector
• odl-tsdr-snmp-data-collector
• odl-tsdr-syslog-collector
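For example, a minimal TSDR setup pairing the HSQLDB store with the OpenFlow statistics collector would be:
feature:install odl-tsdr-hsqldb-all
feature:install odl-tsdr-openflow-statistics-collector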
See these TSDR_Directions for more information.
1.5.28 Unified Secure Channel (USC)
Provides a central server to coordinate encrypted communications between endpoints. Its client-side agent informs the controller about its encryption capabilities and can be instructed to encrypt select flows based on business policies.
A possible use case is encrypting controller-to-controller communications; however, the framework is very flexible,
and client side software is available for multiple platforms and device types, enabling USC and OpenDaylight to
centralize the coordination of encryption across a wide array of endpoint and device types.
1.5.29 VPN Service
Implements the infrastructure services required to support L3 VPN service. It initially leverages open source routing applications as pluggable components. L3 services include:
• The L3 VPN Manager
1.5.30 Virtual Tenant Network (VTN)
Provides a multi-tenant virtual network on an SDN controller, allowing you to define the network with the look and feel of a conventional L2/L3 network. Once the network is designed on VTN, it automatically maps onto the underlying physical network and is then configured on the individual switches, leveraging the SDN control protocol.
By defining a logical plane with VTN, you can conceal the complexity of the underlying network and better manage
network resources to reduce network configuration time and errors.
1.6 Other Features
• Messaging4Transport
• Network Intent Composition (NIC)
• UNI Manager Plug-in (Unimgr)
• YANG-PUBSUB
1.6.1 Messaging4Transport
Adds AMQP bindings to the MD-SAL, which makes all MD-SAL APIs available via that mechanism. AMQP bindings
integration exposes the MD-SAL datatree, rpcs, and notifications via AMQP, when installed.
1.6.2 Network Intent Composition (NIC)
Offers an interface with an abstraction layer for you to communicate "intentions," i.e., what you expect from the network. The Intent model, which is part of NIC's core architecture, describes your networking services requirements and transforms the details of the desired state to OpenDaylight. NIC has four features:
• odl-nic-core-hazelcast: Provides the following:
– A distributed intent mapping service implemented using Hazelcast, which stores metadata needed to process Intents correctly
– An Intent REST API to external applications for Create, Read, Update, and Delete (CRUD) operations on intents, conflict resolution, and event handling
• odl-nic-core-mdsal: Provides the following:
– A distributed Intent mapping service implemented using MD-SAL, which stores metadata needed to process Intents correctly
– An Intent REST API to external applications for CRUD operations on Intents, conflict resolution, and event handling
• odl-nic-console: Provides a Karaf CLI extension for Intent CRUD operations and mapping service operations
• Four renderers to provide specific implementations to render the Intent:
– Virtual Tenant Network Renderer
– Group Based Policy Renderer
– OpenFlow Renderer
– Network MOdeling Renderer
1.6.3 UNI Manager Plug-in (Unimgr)
Formed to initiate the development of data models and APIs that facilitate the ability of OpenDaylight software applications and/or service orchestrators to configure and provision connectivity services.
1.6.4 YANG-PUBSUB
An experimental plugin that allows subscriptions to be placed on targeted subtrees of YANG datastores residing on remote devices. Changes in YANG objects within the remote subtree can be pushed to OpenDaylight as specified and don't require OpenDaylight to make continuous fetch requests. YANG-PUBSUB is developed as a Java project. Development requires Maven version 3.1.1 or later.
1.7.1 OpFlex
Provides the OpenDaylight OpFlex Agent, which is a policy agent that works with Open vSwitch (OVS) to enforce network policy, e.g., from Group-Based Policy, for locally attached virtual machines or containers.
1.7.2 NeXt
NeXt capabilities include:
• Map overlays
• Preset user-friendly interactions
NeXt can work with DLUX to build OpenDaylight applications. NeXt does not support Internet Explorer. Check out the NeXt_demo for more information on the interface.
1.8 API
We are in the process of creating automatically generated API documentation for all of OpenDaylight. The following
are links to the preliminary documentation that you can reference. We will continue to add more API documentation
as it becomes available.
• mdsal
• odlparent
• yangtools
1.9 Installing OpenDaylight
You complete the following steps to install your networking environment, with specific instructions provided in the subsections below.
Before detailing the instructions for these, we address the following topics:
• Java Runtime Environment (JRE) and operating system information
• Target environment
• Known issues and limitations
The default distribution can be found on the OpenDaylight software download page: http://www.opendaylight.org/software/downloads
The Karaf distribution has no features enabled by default. However, all of the features are available to be installed.
Note: For compatibility reasons, you cannot enable all the features simultaneously. We try to document known
incompatibilities in the Install the Karaf features section below.
$ ls distribution-karaf-0.4.0-Beryllium.zip
distribution-karaf-0.4.0-Beryllium.zip
$ unzip distribution-karaf-0.4.0-Beryllium.zip
Archive: distribution-karaf-0.4.0-Beryllium.zip
creating: distribution-karaf-0.4.0-Beryllium/
creating: distribution-karaf-0.4.0-Beryllium/configuration/
creating: distribution-karaf-0.4.0-Beryllium/data/
creating: distribution-karaf-0.4.0-Beryllium/data/tmp/
creating: distribution-karaf-0.4.0-Beryllium/deploy/
creating: distribution-karaf-0.4.0-Beryllium/etc/
creating: distribution-karaf-0.4.0-Beryllium/externalapps/
...
inflating: distribution-karaf-0.4.0-Beryllium/bin/start.bat
inflating: distribution-karaf-0.4.0-Beryllium/bin/status.bat
inflating: distribution-karaf-0.4.0-Beryllium/bin/stop.bat
$ cd distribution-karaf-0.4.0-Beryllium
$ ./bin/karaf
To install a feature, use the following command, where feature1 is the feature name listed in the table below:
feature:install <feature1>
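You can also pass several feature names at once. For example, a common starter set (assuming these features suit your environment) is:
feature:install odl-restconf odl-l2switch-switch odl-dlux-core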
Note: For compatibility reasons, you cannot enable all Karaf features simultaneously. The table below documents feature installation names and known incompatibilities. Compatibility values indicate the following:
Uninstalling features
To uninstall a feature, you must shut down OpenDaylight, delete the data directory, and start OpenDaylight up again.
Important: Uninstalling a feature using the Karaf feature:uninstall command is not supported and can cause unexpected and undesirable behavior.
To find the complete list of Karaf features, run the following command:
feature:list
To list only the features that are already installed, run:
feature:list -i
Features to implement networking functionality provide release notes, which you can find in the Project-specific
Release Notes section.
Windows 10 cannot be identified by Karaf (Equinox). The issue occurs during installation of Karaf features, e.g.:
opendaylight-user@root>feature:install odl-restconf
Error executing command: Can't install feature odl-restconf/0.0.0:
Could not start bundle mvn:org.fusesource.leveldbjni/leveldbjni-all/1.8-odl in feature(s) odl-akka-leveldb-0.7: The bundle "org.fusesource.leveldbjni.leveldbjni-all_1.8.0 [300]" could not be resolved. Reason: No match found for native code:
The workaround is to add the following line to the Karaf file etc/system.properties:
org.osgi.framework.os.name = Win32
The workaround and further info are in this thread: http://stackoverflow.com/questions/35679852/karaf-exception-is-thrown-while-installing-org-fusesource-leveldbjni
The following functionality is labeled as experimental in OpenDaylight Beryllium and should be used accordingly. In
general, it is not supposed to be used in production unless its limitations are well understood by those deploying it.
Most components that offer REST APIs will automatically load the RESTCONF API Support component, but if for
whatever reason they seem to be missing, install the “odl-restconf” feature to activate this support.
For Execution
The OpenDaylight Karaf container, OSGi bundles, and Java class files are portable and should run on any Java 7- or Java 8-compliant JVM. Certain projects and certain features of some projects may have additional requirements.
Those are noted in the project-specific release notes.
Projects and features which have known additional requirements are:
• TCP-MD5 requires 64-bit Linux
• TSDR has extended requirements for external databases
• Persistence has extended requirements for external databases
• SFC requires additional features for certain configurations
• SXP depends on TCP-MD5 and thus requires 64-bit Linux
• SNBI has requirements for Linux and Docker
• OpFlex requires Linux
• DLUX requires a modern web browser to view the UI
• AAA when using federation has additional requirements for external tools
• VTN has components which require Linux
For Development
OpenDaylight is written primarily in Java and uses Maven as a build tool. Consequently, the two main requirements to develop projects within OpenDaylight are:
• A Java 8-compliant JDK
• Maven 3.1.1 or later
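A quick way to verify the development toolchain:
java -version
mvn -version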
Applications and tools built on top of OpenDaylight using its REST APIs should have no special requirements beyond whatever is needed to run the application or tool and make the REST calls.
In some places, OpenDaylight makes use of the Xtend language. While Maven will download the appropriate tools to
build this, additional plugins may be required for IDE support.
The projects with additional requirements for execution typically have similar or more extensive additional requirements for development. See the project-specific release notes for details.
Other than as noted in project-specific release notes, we know of the following limitations:
• Migration from Helium, Lithium and Beryllium to Boron has not been extensively tested. The per-project
release notes include migration and compatibility information when it is known.
• There are scales beyond which the controller has been unreliable when collecting flow statistics from OpenFlow switches. In tests, these issues became apparent when managing thousands of OpenFlow switches; however, this may vary depending on deployment and use cases.
For the release notes of individual projects, please see the following pages on the OpenDaylight Wiki.
TBD: add Boron release notes
• Authentication, Authorization and Accounting (AAA)
• ALTO
• BGPCEP
• Controller
• Control And Provisioning of Wireless Access Points (CAPWAP)
• Device Identification and Driver Management (DIDM)
• DLUX
• FaaS
• Group_Based_Policy (GBP)
• Internet of Things Data Management (IoTDM)
• L2_Switch
• Link Aggregation Control Protocol (LACP)
• LISP_Flow_Mapping
• MDSAL
• NEMO
• NETCONF
• NetIDE
• NeXt
• Network Intent Composition (NIC)
• Neutron_Northbound
• OF-Config
• OpFlex
• OpenFlow_Plugin
• OpenFlow_Protocol_Library
• OVSDB_Netvirt
• Packet_Cable / PCMM
• SDN_Interface_Application
• Secure Network Bootstrapping Infrastructure (SNBI)
• SNMP4SDN
• SNMP_Plugin
• Secure tag eXchange Protocol (SXP)
• Service Function Chaining (SFC)
• TCPMD5
The following projects participated in Boron, but intentionally do not have release notes.
• Documentation Project produced this and the other downloadable documentation
• Integration Group hosted the OpenDaylight-wide tests and main release distribution
• Release Engineering - autorelease was used to build the Boron release artifacts, including the main release download.
This page details changes and bug fixes between the Beryllium Stability Release 2 (Beryllium-SR2) and the Beryllium
Stability Release 3 (Beryllium-SR3) of OpenDaylight.
AAA
• ed72f9 Fix for BUG-6082 - idpmapping will failed for the case sensitivity
• ee6b8d Modify idmtool insecure option to work with older versions of requests
• 77d2cb Enhance idmtool to allow disabling https certificate verification
• 935bab Git ignore .checkstyle file create by Eclipse Checkstyle plugin
BGP PCEP
• 72cba2 BUG-6264: Beryllium SR3 Build Unstable - when waiting for expected number of messages, try every
50 msecs till a max of 10 secs
• c2572b BUG-6120: Fix intermitent test fail
• b07d1c BUG-5742: Race condition when creating Application Peer on clustering
• d12fd5 BUG-5610: PCRpt message with vendor specific object throws exception
• a2c7a4 BUG-6108: Fix IAE on ApplicationPeer
• 5f7b28 BUG-6084: get restart time from open message error - fix size of left-shift while calculating graceful
restart capability restart time
• 7b9026 Fix failing unit tests
• 59f858 Fix unit test regression after netty version bump
• 95768e removed precondition checks for v4/v6 next-hops for v4/v6 routes
• d14d8d BUG-6005: PCErr generated while parsing of received PCRpt message is not sent out - use Channel#writeAndFlush instead of ChannelHandlerContext#write when sending out PCEP error message so that decode handler is invoked - added listener to ChannelFuture to log result of send operation
• 694404 BUG-6019: Wrong path name for route distinguisher
• dc7453 BUG-6001: Injecting route with missing next-hop value causes exception in reachability topology
builder - added check in reachability topology builder to handle scenario when next-hop value is null - entry
will be skipped from topology processing in such cases
• f18e4f BUG-5978: Unrecognized attribute flagged Well Known - set optional bit when serializing unrecognized
attributes - updated unit-test
• cbb0ca BUG-5763 Disallow redelegation for PCC-initiated LSP
• b1550e BUG-5548: NH serializer removal
• ad543a BUG-5548: Wrong NH handler picked up
• 7840f5 Remove nexusproxy property as it is inherited via odlparent
• 996f5f BUG-5855: Transitive Unrecognized Attribute not transiting - added missing serializer for BGP unrecognized attributes - added unit-test to test the unrecognized attributes serializer
• 413133 Remove eclipse project files, add more extensions to gitignore
• 486a89 BUG-5731: Send Error Message if LSP-IDENTIFIERS TLV is missing
• 205e54 BUG-5689: Unhandled message is causing failure
• 3cfdd3 BUG-5612: ODL(PCEP) infinitely waits for the response from peer for addlsp (cherry-pick) - added
configurable timeout (default 30 secs) while processing RPC requests which need response from PCC - updated
unit-test
Controller
DLUX
Documentation
Integration/Distribution
L2 Switch
MD-SAL
NETCONF
NetIDE
Network Virtualization
• 2e01de BUG-5813: Vxlan ports should not be removed in table 110 flow entry unless last VM instance removed
from the openstack node. * Before deleting the Vxlan port in flow entry it should check whether the deleted vm
instance is last or not. If it is the last vm instance Vxlan port should be delete from source node in flow entry
else vxlan port shouldn’t be delete.
• 0c34a9 BUG-5614: Ovsdb should not flood the packets to compute nodes unless tenant network exists in the
compute node * Before adding tunnel rules, checking the network present or not in src and dst node. If network
present in both nodes adding the Vxlan port in flood entry in src and dst.Else do not add vxlan ports.
• b80dc1 Remove ovsdb related in resources
• d8aba5 postman: use 1 for netvirt table offset
• 08e1d0 Added isPortSecurityEnabled check to enable/disable SG.
• 461693 Quick (-Pq) should skip running tests, but not building them
• 68e5f6 Use CustomBundleURLStreamHandlerFactory from features-test
• e1af02 Use more specific dependencies than karaf-maven-plugin
• cbaf17 Introduce “mvn -Pq install” to just build JAR, but no tests, QA etc.
• 7e1d3a Upgrade Netty 4.0.33.Final -> 4.0.37.Final
• 1b688e BUG-4692: Add Netty’s native epoll transport
OVSDB Integration
OpenFlow Plugin
• d13d3e BUG-5636: making table features configurable for the He plugin. DEFAULT will be ON, can be
TURNED OFF.
• 3a666e BUG-5839 - Removing stale-marked groups/meters durng reconciliation [cherry-pick from master]
• cdcfe9 BUG-5914 - Flow statistics don’t show up on the same flow id, if flow uses IPv6 match with subnet mask
• d96bdb BUG-5841: Enhances bulk-o-matic to stress test Data Store and Openflowplugin RPC Added asciidoc
for the same
• 0adac8 added table features skip flag
• d3f498 Add custom compare for ArpMatch objects
• 6f3b6d BUG-4099: Group pointing to ports during reconciliation will be provisioned only after the ports come
up after configurable number of retries
SNMP4SDN
SXP
• daac8f BUG-6190 - Export task is not send to remote Peer due to partitioning error
• b975d9 BUG-5975 - SXP may leave opened ChannelHandlerContext for Both mode
TCP-MD5
• b880d2 merge the fix of the uni it test instability which was fixed by donald in the master to the stable/berilium
branch.
VPN Service
YANG Tools
This document describes how to install the artifacts needed to use Centinel functionality in OpenDaylight by enabling the default Centinel feature. Centinel is a distributed, reliable framework for collection, aggregation, and analysis of streaming data, added in the OpenDaylight Beryllium release.
Overview
The Centinel project aims at providing a distributed, reliable framework for efficiently collecting, aggregating and
sinking streaming data across Persistence DB and stream analyzers (e.g., Graylog, Elasticsearch, Spark, Hive). This
framework enables SDN applications/services to receive events from multiple streaming sources (e.g., Syslog, Thrift,
Avro, AMQP, Log4j, HTTP/REST).
In Beryllium, we develop a "Log Service" and a plug-in for a log analyzer (e.g., Graylog). The Log Service processes real-time events coming from the log analyzer. Additionally, we provide stream collectors (Flume- and Sqoop-based) that collect logs from OpenDaylight and sink them to the persistence service (integrated with TSDR). Centinel also includes a RESTCONF interface to inject events into northbound applications for real-time analytics/network configuration. Further, a Centinel User Interface (web interface) will be available to operators to enable rules/alerts/dashboards, etc.
There are some additional prerequisites for Centinel, which involve integrating the Graylog server, Apache Drill, Apache Flume, and HBase.
• Install MongoDB
– import the MongoDB public GPG key into apt:
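The command from the MongoDB 2.x era this guide dates from was along these lines (the keyserver and key ID are assumptions that may differ for newer releases):
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10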
• Install Elasticsearch
– Graylog2 v0.20.2 requires Elasticsearch v.0.90.10. Download and install it with these commands:
cd ~; wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.10.deb
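Then install the downloaded package (assuming a Debian/Ubuntu host, as the .deb implies):
sudo dpkg -i elasticsearch-0.90.10.deb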
– We need to change the Elasticsearch cluster.name setting. Open the Elasticsearch configuration file:
sudo vi /etc/elasticsearch/elasticsearch.yml
– Find the section that specifies cluster.name. Uncomment it, and replace the default value with graylog2:
cluster.name: graylog2
– Find the line that specifies network.bind_host and uncomment it so it looks like this:
network.bind_host: localhost
– Also add the following line to disable dynamic scripting:
script.disable_dynamic: true
– Save and quit. Next, restart Elasticsearch to put our changes into effect:
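On the Debian/Ubuntu-style install assumed by this guide, that would be:
sudo service elasticsearch restart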
– After a few seconds, run the following to test that Elasticsearch is running properly:
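A common check is to query the Elasticsearch cluster-health endpoint:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'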
– Let’s create a symbolic link to the newly created directory, to simplify the directory name:
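For example, assuming the server was unpacked as graylog2-server-0.20.2 in the working directory:
sudo ln -s graylog2-server-0.20.2 graylog2-server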
– Now you must configure the admin password and secret key. The password secret key is configured in graylog2.conf by the password_secret parameter. Generate a random key and insert it into the Graylog2 configuration with the following two commands:
SECRET=$(pwgen -s 96 1)
sudo -E sed -i -e 's/password_secret =.*/password_secret = '$SECRET'/' /etc/graylog2.conf
– Also set the following parameters in /etc/graylog2.conf:
rest_transport_uri = http://127.0.0.1:12900/
elasticsearch_shards = 1
– Now let’s install the Graylog2 init script. Copy graylog2ctl to /etc/init.d:
– Update the startup script to put the Graylog2 logs in /var/log and to look for the Graylog2 server JAR file in /opt/graylog2-server by running two sed commands against /etc/init.d/graylog2.
• Download the OVA image from the link given below and save it to your disk locally: https://github.com/Graylog2/graylog2-images/tree/master/ova
• Run the OVA in a virtualization system such as VMware or VirtualBox.
HBase Installation
• Download hbase-0.98.15-hadoop2.tar.gz
• Unzip the tar file and move it under /usr/lib/hbase using the below commands:
tar -xvf hbase-0.98.15-hadoop2.tar.gz
mv hbase-0.98.15-hadoop2 /usr/lib/hbase/
cd /usr/lib/hbase
mv hbase-0.98.15-hadoop2 hbase
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
export HBASE_HOME=/usr/lib/hbase/hbase
export PATH=$PATH:$HBASE_HOME/bin
HBASE_PATH$ bin/start-hbase.sh
• Create the centinel table in HBase with stream, alert, dashboard, and stringdata as column families, using the below command:
create 'centinel','stream','alert','dashboard','stringdata'
HBASE_PATH$ bin/stop-hbase.sh
• Download apache-flume-1.6.0.tar.gz
• Copy the downloaded file to the directory where you want to install Flume.
• Extract the contents of the apache-flume-1.6.0.tar.gz file using the below command (use sudo if necessary):
tar -xvzf apache-flume-1.6.0.tar.gz
• Starting flume
– Navigate to the Flume installation directory.
– Issue the following command to start flume-ng agent:
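The agent name and configuration file below are placeholders; substitute the ones you configured:
bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name agent1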
• Download apache-drill-1.1.0.tar.gz
• Copy the downloaded file to the directory where you want to install Drill.
• Extract the contents of the apache-drill-1.1.0.tar.gz file using the below command:
tar -xvzf apache-drill-1.1.0.tar.gz
• Starting Drill:
– Navigate to the Drill installation directory.
– Issue the following command to launch Drill in embedded mode:
bin/drill-embedded
Deploying plugins
• Navigate to the installation directory and build the code using Maven by running the below command:
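The exact invocation is assumed here to be the standard Maven build:
mvn clean install
This produces the following plugin jars: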
centinel/plugins/centinel-alertcallback/target/centinel-alertcallback-0.1.0-SNAPSHOT.jar
centinel/plugins/centinel-output/target/centinel-output-0.1.0-SNAPSHOT.jar
Configure rsyslog
• Add the following line to /etc/rsyslog.conf:
$ActionFileDefaultTemplate RSYSLOG_SyslogProtocol23Format
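Then restart rsyslog so the change takes effect (assuming a SysV-style service manager):
sudo service rsyslog restart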
Finally, from the Karaf console install the Centinel feature with this command:
feature:install odl-centinel-all
If the feature install was successful you should be able to see the following Centinel commands added:
centinel:list
centinel:purgeAll
Troubleshooting
Check ../data/log/karaf.log for any exceptions related to the Centinel features.
Because Beryllium is the first release supporting Centinel functionality, only a fresh installation is possible.
Uninstalling Centinel
To uninstall the Centinel functionality, you need to do the following from Karaf console:
feature:uninstall odl-centinel-all
It's recommended to restart the Karaf container after uninstalling the Centinel functionality.
Required Packages
You’ll need to set up your VM host uplink interface. You should ensure that the MTU of the underlying network is
sufficient to handle tunneled traffic. We will use an example of setting up eth0 as your uplink interface with a vlan of
4093 used for the networking control infrastructure and tunnel data plane.
We just need to set the MTU and disable IPv4 and IPv6 autoconfiguration. The MTU needs to be large enough to allow
both the VXLAN header and VLAN tags to pass through without fragmenting for best performance. We’ll use 1600
bytes which should be sufficient assuming you are using a default 1500 byte MTU on your virtual machine traffic.
If you already have any NetworkManager connections configured for your uplink interface find the connection name
and proceed to the next step. Otherwise, create a connection with (be sure to update the variable UPLINK_IFACE as
needed):
UPLINK_IFACE=eth0
nmcli c add type ethernet ifname $UPLINK_IFACE
CONNECTION_NAME="ethernet-$UPLINK_IFACE"
nmcli connection mod "$CONNECTION_NAME" connection.autoconnect yes \
ipv4.method link-local \
ipv6.method ignore \
802-3-ethernet.mtu 9000 \
ipv4.routes '224.0.0.0/4 0.0.0.0 2000'
Next, create the infrastructure interface using the infrastructure VLAN (4093 by default). We'll need to create a VLAN subinterface of your uplink interface, then configure DHCP on that interface. Run the following commands, replacing the variable values as needed. If you're not using NIC teaming, replace team0 below with your uplink interface:
UPLINK_IFACE=team0
INFRA_VLAN=4093
nmcli connection add type vlan ifname $UPLINK_IFACE.$INFRA_VLAN dev $UPLINK_IFACE id $INFRA_VLAN
If you were successful, you should be able to see an IP address when you run:
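For example (one of several equivalent checks):
ip addr show dev $UPLINK_IFACE.$INFRA_VLAN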
We’ll need to configure an OVS bridge which will handle the traffic for any virtual machines or containers that are
hosted on the VM host. First, enable the openvswitch service and start it:
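On a systemd-based host, a typical sequence is (the service name may vary by distribution):
systemctl enable openvswitch
systemctl start openvswitch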
Next, we can create an OVS bridge (you may wish to use a different bridge name):
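A sketch consistent with the ovs-vsctl show output below (the bridge and port names are examples):
ovs-vsctl add-br br0
ovs-vsctl add-port br0 br0_vxlan0 -- set Interface br0_vxlan0 type=vxlan options:dst_port=8472 options:key=flow options:remote_ip=flow
ovs-vsctl show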
Port "br0_vxlan0"
Interface "br0_vxlan0"
type: vxlan
options: {dst_port="8472", key=flow, remote_ip=flow}
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.3.90"
Agent Configuration
Before enabling the agent, we'll need to edit its configuration file, which is located at "/etc/opflex-agent-ovs/opflex-agent-ovs.conf".
First, we’ll configure the Opflex protocol parameters. If you’re using an ACI fabric, you’ll need the OpFlex domain
from the ACI configuration, which is the name of the VMM domain you mapped to the interface for this hypervisor.
Set the “domain” field to this value. Next, set the “name” field to a hostname or other unique identifier for the VM
host. Finally, set the "peers" list to contain the fixed static anycast peer address of 10.0.0.30 and port 8009. Here is an example of a completed section (the [CHANGE ME] text marks areas you'll need to modify):
"opflex": {
// The globally unique policy domain for this agent.
"domain": "[CHANGE ME]",
"ssl": {
// SSL mode. Possible values:
// disabled: communicate without encryption
// encrypted: encrypt but do not verify peers
// secure: encrypt and verify peer certificates
"mode": "encrypted",
Next, configure the appropriate policy renderer for the ACI fabric. You’ll want to use a stitched-mode renderer. You’ll
need to configure the bridge name and the uplink interface name. The remote anycast IP address will need to be
obtained from the ACI configuration console, but unless the configuration is unusual, it will be 10.0.0.32:
// Renderers enforce policy obtained via OpFlex.
"renderers": {
"mac": "00:22:bd:f8:19:ff"
}
},
The agent is now running and ready to enforce policy. You can add endpoints to the local VM hosts using the OpFlex
Group-based policy plugin from OpenStack, or manually.
Overview
This guide is geared towards installing OpenDaylight to use the OVSDB project to provide Neutron support for OpenStack.
Open vSwitch (OVS) is generally accepted as the unofficial standard for virtual switching in open, hypervisor-based solutions. For information on OVS, see Open vSwitch.
With OpenStack within the SDN context, controllers and applications interact using two channels: OpenFlow and
OVSDB. OpenFlow addresses the forwarding-side of the OVS functionality. OVSDB, on the other hand, addresses
the management-plane. A simple and concise overview of Open Virtual Switch Database (OVSDB) is available at:
http://networkstatic.net/getting-started-ovsdb/
Note: By default the ODL OVSDB L3 forwarding is disabled. Enable the functionality by editing the
ovsdb.l3.fwd.enabled setting and setting it to yes:
vi etc/custom.properties
ovsdb.l3.fwd.enabled=yes
feature:install odl-ovsdb-openstack
Note that fully installing the feature requires other dependent features, which are installed automatically and may take 30–60 seconds. The Karaf prompt will return before the feature is fully installed.
To verify that the installation was successful, use the following commands in Karaf and verify that the required features
have been installed:
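A typical check filters the installed-feature list, for example:
feature:list -i | grep ovsdb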
Use the following command in Karaf to view the logs, and verify that there are no error logs relating to odl-ovsdb-openstack:
log:display
Look for the following log to indicate that the odl-ovsdb-openstack feature has been fully installed:
Troubleshooting
Reference the following link to the OVSDB NetVirt project wiki. The link has very helpful information for understanding the OVSDB Network Virtualization project:
• a link to a tutorial describing the project, its goals, features, and architecture.
• a link to a VirtualBox OVA file containing an all-in-one setup that you can simply import and see the OpenDaylight and OpenStack integration in action.
• slides describing how to use the OVA, run the demo, and debug.
• a link to a YouTube presentation covering the slides and demo.
https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization
feature:uninstall odl-ovsdb-openstack
system:shutdown
Use the following command to clean and reset the working state before starting OpenDaylight again:
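As noted under Uninstalling features above, this amounts to deleting the data directory from the distribution folder, for example:
rm -rf data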
This document describes how to install the artifacts needed to use Time Series Data Repository (TSDR) functionality in the ODL controller by enabling either an HSQLDB, HBase, or Cassandra data store.
Overview
The Time Series Data Repository (TSDR) project in OpenDaylight (ODL) creates a framework for collecting, storing,
querying, and maintaining time series data in the OpenDaylight SDN controller. Please refer to the User Guide for the
detailed description of the functionality of the project and how to use the corresponding features provided in TSDR.
The software requirements for the TSDR data stores are as follows:
• If you choose the HBase or Cassandra data store, then besides the software that ODL requires, you also need the HBase or Cassandra database running in a single-node deployment scenario.
No additional software is required for the HSQLDB data store.
• When using HBase data store, download HBase from the following website:
http://archive.apache.org/dist/hbase/hbase-0.94.15/
• When using Cassandra data store, download Cassandra 2.1 from the following website:
https://cassandra.apache.org/download/
• No additional steps are required to install the TSDR HSQL Data Store.
Once the OpenDaylight distribution is up, install the HSQLDB data store from the Karaf console using the following command:
feature:install odl-tsdr-hsqldb-all
This will install the HSQLDB-related dependency features (which can take some time) as well as the OpenFlow statistics collector before returning control to the console.
mkdir /usr/lib/hbase
Add the following configuration to the HBase conf/hbase-site.xml file (a single-node setup using the directory created above):
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///usr/lib/hbase/data</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/lib/hbase/zookeeper</value>
</property>
</configuration>
mv apache-cassandra-2.1.10-bin.tar.gz cassandra/
cd cassandra
tar -xvzf apache-cassandra-2.1.10-bin.tar.gz
feature:install odl-tsdr-cassandra
After the TSDR data store is installed, no matter whether it is HBase data store, Cassandra data store, or HSQLDB
data store, the user can verify the installation with the following steps.
1. Verify if the following two tsdr commands are available from Karaf console:
tsdr:list
tsdr:purgeAll
2. From the Karaf console, the user should be able to retrieve OpenFlow statistics data:
tsdr:list FLOWSTATS
Troubleshooting
The feature installation takes care of automated configuration of the datasource by installing a file in <install folder>/etc named org.ops4j.datasource-metric.cfg. This contains the default location, <install folder>/tsdr, where the HSQLDB datastore files are stored. If you want to change the default location of the datastore files, update the last portion of the url property in org.ops4j.datasource-metric.cfg and then restart the Karaf container.
The HBase data store was supported in the previous release as well as in this release. However, data store upgrade is not supported for the HBase data store. You need to reinstall TSDR and start collecting data in the TSDR HBase data store after the installation.
HSQLDB and Cassandra are new data stores introduced in this release. Therefore, upgrading from previous release
does not apply in these two data store scenarios.
To uninstall the TSDR functionality with the default store, you need to do the following from karaf console:
feature:uninstall odl-tsdr-hsqldb-all
feature:uninstall odl-tsdr-core
feature:uninstall odl-tsdr-hsqldb
feature:uninstall odl-tsdr-openflow-statistics-collector
It is recommended to restart the Karaf container after the uninstallation of the TSDR functionality with the default
store.
feature:uninstall odl-tsdr-hbase
feature:uninstall odl-tsdr-core
cd <hbase-installation-directory>
./stop-hbase.sh
• remove the file directory that contains the HBase server installation:
rm -r <hbase-installation-directory>
It is recommended to restart the Karaf container after the uninstallation of the TSDR data store.
feature:uninstall odl-tsdr-cassandra
feature:uninstall odl-tsdr-core
rm -r <cassandra-installation-directory>
It is recommended to restart the Karaf container after uninstallation of the TSDR data store.
Overview
OpenDaylight Virtual Tenant Network (VTN) is an application that provides multi-tenant virtual network on an SDN
controller.
Conventionally, huge investments in network systems and operating expenses are needed because the network is configured as a silo for each department and system, so various network appliances must be installed for each tenant and cannot be shared with others. Designing, implementing, and operating the entire complex network is heavy work.
The uniqueness of VTN is its logical abstraction plane, which enables the complete separation of the logical plane from the physical plane. Users can design and deploy any desired network without knowing the physical network topology or bandwidth restrictions.
VTN allows users to define the network with the look and feel of a conventional L2/L3 network. Once the network is designed on VTN, it is automatically mapped onto the underlying physical network and then configured on the individual switches, leveraging the SDN control protocol. The definition of a logical plane makes it possible not only to hide the complexity of the underlying network but also to better manage network resources, reducing the reconfiguration time of network services and minimizing network configuration errors. VTN provides an API for creating a common virtual network irrespective of the physical network.
It is implemented as two major components:
• VTN Manager
• VTN Coordinator
VTN Manager
An OpenDaylight plugin that interacts with other modules to implement the components of the VTN model. It also provides a REST interface to create, update, and delete VTN components in OpenDaylight. The user command in VTN Coordinator is translated into a REST API call to VTN Manager by the OpenDaylight driver component. In addition to the above role, VTN Manager also provides an implementation of the OpenStack L2 Network Functions API.
VTN Coordinator
The VTN Coordinator is an external application that provides a REST interface for a user to use OpenDaylight VTN virtualization. It interacts with the VTN Manager plugin to implement the user configuration and is capable of orchestrating multiple OpenDaylight instances, realizing VTN provisioning across them. In the OpenDaylight architecture, VTN Coordinator is part of the network application, orchestration, and services layer. It uses the REST APIs exposed by VTN Manager to construct the virtual network in OpenDaylight instances, and it provides REST APIs for northbound VTN applications, supporting virtual networks that span multiple OpenDaylight instances by coordinating across them.
VTN Manager
VTN Coordinator
1. Arrange a physical/virtual server with any one of the supported 64-bit OS environments:
• RHEL 7
• CentOS 7
• Fedora 20 / 21 / 22
2. Install these packages:
Installing VTN
VTN Manager
Install the feature:
feature:install odl-vtnmanager-all
Note: The above command installs all features of VTN Manager. You can also install only the REST or Neutron feature.
VTN Coordinator
cd distribution-karaf-0.4.0-Beryllium/externalapps
• Run the below command to extract VTN Coordinator from the tar.bz2 file in the externalapps directory. The name of the tar.bz2 file varies depending on the version, so use the file name present in your directory (shown as a placeholder here):
tar -C / -jxvf <vtn-coordinator>.tar.bz2
This will install VTN Coordinator to the /usr/local/vtn directory.
/usr/local/vtn/sbin/db_setup
/usr/local/vtn/bin/vtn_start
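One way to confirm the Coordinator is running is to query its version API, which returns the response below (the port and credentials are the Coordinator defaults and are assumptions here):
curl --user admin:adminpass -H 'content-type: application/json' -X GET 'http://127.0.0.1:8083/vtn-webapi/api_version.json'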
{"api_version":{"version":"V1.2"}}
VTN Manager
• In the Karaf prompt, type the below command to ensure that the VTN packages are installed:
feature:list -i | grep vtn
VTN Coordinator
Uninstalling VTN
VTN Manager
feature:uninstall odl-vtnmanager-all
VTN Coordinator
1. Stop VTN:
/usr/local/vtn/bin/vtn_stop
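2. Remove the installation directory that the install step placed at /usr/local/vtn:
rm -rf /usr/local/vtn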
This section introduces you to the OpenDaylight User Experience (DLUX) application.
DLUX provides a number of different Karaf features, which you can enable and disable separately. In Boron they are:
1. odl-dlux-core
2. odl-dlux-node
3. odl-dlux-yangui
4. odl-dlux-yangvisualizer
Logging In
Note: OpenDaylight’s default credentials are admin for both the username and password.
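Assuming default settings, DLUX is served by the controller at http://<controller-ip>:8181/index.html once the odl-dlux-core feature is installed.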
After you log in to DLUX, if you have enabled only the odl-dlux-core feature, you will see only the topology application in the left pane.
Note: To make sure topology displays all the details, enable the odl-l2switch-switch feature in Karaf.
DLUX has other applications, such as Nodes and Yang UI; those apps won't show up until you enable their features, odl-dlux-node and odl-dlux-yangui respectively, in the Karaf distribution.
Note: If you install your own application in DLUX, it will also show up in the left-hand navigation after a browser page
refresh.
The Nodes module on the left pane enables you to view the network statistics and port information for the switches in
the network.
To use the Nodes module:
1. Select Nodes on the left pane. The right pane displays a table that lists all the nodes, node connectors, and their
statistics.
2. Enter a node ID in the Search Nodes tab to search by node connectors.
3. Click on the Node Connector number to view details such as port ID, port name, number of ports per switch,
MAC Address, and so on.
4. Click Flows in the Statistics column to view Flow Table Statistics for the particular node, such as table ID, packet
match, active flows, and so on.
5. Click Node Connectors to view Node Connector Statistics for the particular node ID.
Note: DLUX does not allow for editing or adding topology information. The topology is generated and edited in
other modules, e.g., the OpenFlow plugin. OpenDaylight stores this information in the MD-SAL datastore where
DLUX can read and display it.
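The node and statistics data that DLUX renders can also be read directly from the operational datastore over RESTCONF; a minimal sketch, assuming the default port and credentials:
curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes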
The Yang UI module enables you to interact with the YANG-based MD-SAL datastore. For more information about
YANG and how it interacts with the MD-SAL datastore, see the Controller and YANG Tools section of the OpenDay-
light Developer Guide.
To use Yang UI:
1. Select Yang UI on the left pane. The right pane is divided in two parts.
2. The top part displays a tree of APIs, subAPIs, and buttons to call possible functions (GET, POST, PUT, and
DELETE).
Note: Not every subAPI can call every function. For example, subAPIs in the operational store have GET
functionality only.
Inputs can be filled in from OpenDaylight (when existing data is displayed) or filled in by the user on the page and
sent to OpenDaylight.
The buttons under the API tree vary depending on the subAPI specification. Common buttons are:
• GET to get data from OpenDaylight,
• PUT and POST for sending data to OpenDaylight for saving
• DELETE for sending data to OpenDaylight for deleting.
You must specify the XPath for all of these operations. This path is displayed in the same row, before the buttons, and
may include text inputs for specific path element identifiers.
3. The bottom part of the right pane displays inputs according to the chosen subAPI.
• Lists are handled as a special case. For example, a device can store multiple flows. In this case, "flow" is the
name of the list, and every list element is identified by a unique key value. Elements of a list can, in turn,
contain other lists.
• In Yang UI, each list element is rendered with the name of the list it belongs to, its key, its value, and a
button for removing it from the list.
4. After filling in the relevant inputs, click the Show Preview button under the API tree to display the request that will
be sent to OpenDaylight. A pane with the request text is displayed on the right side once input has been provided.
To display topology:
1. Select the subAPI network-topology <topology revision number> -> operational -> network-topology.
2. Get data from OpenDaylight by clicking on the “GET” button.
3. Click Display Topology.
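The same topology data can be fetched directly over RESTCONF; a minimal sketch, assuming the default port and credentials:
curl -u admin:admin http://localhost:8181/restconf/operational/network-topology:network-topology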
Lists in Yang UI are displayed as trees. To expand or collapse a list, click the arrow before the name of the list. To
configure list elements in Yang UI:
1. To add a new list element with empty inputs, use the plus icon-button (+) that is provided after the list name.
2. To remove several list elements, use the X button that is provided after every list element.
3. In the YANG-based data store all elements of a list must have a unique key. If you try to assign two or more
elements the same key, a warning icon ! is displayed near their name buttons.
4. When the list contains at least one list element, after the + icon, there are buttons to select each individual list
element. You can choose one of them by clicking on it. In addition, to the right of the list name, there is a button
which will display a vertically scrollable pane with all the list elements.
Clustering Overview
Clustering is a mechanism that enables multiple processes and programs to work together as one entity. For example,
when you search for something on google.com, it may seem like your search request is processed by only one web
server. In reality, your search request is processed by many web servers connected in a cluster. Similarly, you can have
multiple instances of OpenDaylight working together as one entity.
Advantages of clustering are:
• Scaling: If you have multiple instances of OpenDaylight running, you can potentially do more work and store
more data than you could with only one instance. You can also break up your data into smaller chunks (shards)
and either distribute that data across the cluster or perform certain operations on certain members of the cluster.
• High Availability: If you have multiple instances of OpenDaylight running and one of them crashes, you will
still have the other instances working and available.
• Data Persistence: You will not lose any data stored in OpenDaylight after a manual restart or a crash.
The following sections describe how to set up multi-node clusters in OpenDaylight.
Deployment Considerations
• To achieve high availability, it is recommended that you run a cluster of at least three nodes; a two-node cluster
cannot tolerate the loss of either node.
Note: This is because clustering in OpenDaylight requires a majority of the nodes to be up, and one node cannot
be a majority of two nodes.
• Every device that belongs to a cluster needs to have an identifier. OpenDaylight uses the node’s role for
this purpose. After you define the first node’s role as member-1 in the akka.conf file, OpenDaylight uses
member-1 to identify that node.
• Data shards are used to contain all or a certain segment of OpenDaylight's MD-SAL datastore. For example,
one shard can contain all the inventory data while another shard contains all of the topology data.
If you do not specify a module in the modules.conf file and do not specify a shard in
module-shards.conf, then (by default) all the data is placed in the default shard (which must also be
defined in module-shards.conf file). Each shard has replicas configured. You can specify the details of
where the replicas reside in module-shards.conf file.
• If you have a three node cluster and would like to be able to tolerate any single node crashing, a replica of every
defined data shard must be running on all three cluster nodes.
Note: This is because OpenDaylight’s clustering implementation requires a majority of the defined shard
replicas to be running in order to function. If you define data shard replicas on two of the cluster nodes and one
of those nodes goes down, the corresponding data shards will not function.
• If you have a three node cluster and have defined replicas for a data shard on each of those nodes, that shard will
still function even if only two of the cluster nodes are running. Note that if one of those remaining two nodes
goes down, the shard will not be operational.
• It is recommended that you have multiple seed nodes configured. After a cluster member is started, it sends
a message to all of its seed nodes. The cluster member then sends a join command to the first seed node that
responds. If none of its seed nodes reply, the cluster member repeats this process until it successfully establishes
a connection or it is shut down.
• After a node becomes unreachable, it remains down for a configurable period of time (10 seconds, by default). Once a
node goes down, you need to restart it so that it can rejoin the cluster. Once a restarted node joins a cluster, it
will synchronize with the lead node automatically.
4. Open the configuration/initial/akka.conf file. Find the following lines and replace 127.0.0.1 with the hostname
or IP address of the machine on which this OpenDaylight instance will run:
netty.tcp {
hostname = "127.0.0.1"
Note: The value you need to specify will be different for each node in the cluster.
5. Find the following lines and replace 127.0.0.1 with the hostname or IP address of any of the machines that
will be part of the cluster:
cluster {
seed-nodes = ["akka.tcp://[email protected]:2550"]
6. Find the following section and specify the role for each member node. Here we assign the first node with the
member-1 role, the second node with the member-2 role, and the third node with the member-3 role:
roles = [
"member-1"
]
7. Open the configuration/initial/module-shards.conf file and update the replicas so that each shard is replicated to
all three nodes:
replicas = [
"member-1",
"member-2",
"member-3"
]
10. Enable clustering by running the following command at the Karaf command line:
feature:install odl-mdsal-clustering
OpenDaylight should now be running in a three node cluster. You can use any of the three member nodes to access
the data residing in the datastore.
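If the odl-jolokia feature is installed, shard status can be checked over HTTP as a sanity test; a minimal sketch, assuming default credentials and the conventional MBean name for member-1's inventory config shard:
curl -u admin:admin http://<member-1-ip>:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore
A healthy response includes the shard's RaftState (for example, Leader or Follower).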
A sample configuration/initial/akka.conf file for the first member (member-1) of a three-node cluster is shown below:
odl-cluster-data {
bounded-mailbox {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.
˓→MeteredBoundedMailbox"
mailbox-capacity = 1000
mailbox-push-timeout-time = 100ms
}
metric-capture-enabled = true
akka {
loglevel = "DEBUG"
loggers = ["akka.event.slf4j.Slf4jLogger"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
serializers {
java = "akka.serialization.JavaSerializer"
proto = "akka.remote.serialization.ProtobufSerializer"
}
serialization-bindings {
"com.google.protobuf.Message" = proto
}
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.194.189.96"
port = 2550
maximum-frame-size = 419430400
send-buffer-size = 52428800
receive-buffer-size = 52428800
}
}
cluster {
seed-nodes = ["akka.tcp://[email protected]:2550"]
auto-down-unreachable-after = 10s
roles = [
"member-1"
]
}
}
}
odl-cluster-rpc {
bounded-mailbox {
mailbox-type = "org.opendaylight.controller.cluster.common.actor.
˓→MeteredBoundedMailbox"
mailbox-capacity = 1000
mailbox-push-timeout-time = 100ms
}
metric-capture-enabled = true
akka {
loglevel = "INFO"
loggers = ["akka.event.slf4j.Slf4jLogger"]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.194.189.96"
port = 2551
}
}
cluster {
seed-nodes = ["akka.tcp://[email protected]:2551"]
auto-down-unreachable-after = 10s
}
}
}
A sample configuration/initial/module-shards.conf file that replicates every shard to all three nodes:
module-shards = [
{
name = "default"
shards = [
{
name="default"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "topology"
shards = [
{
name="topology"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "inventory"
shards = [
{
name="inventory"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
},
{
name = "toaster"
shards = [
{
name="toaster"
replicas = [
"member-1",
"member-2",
"member-3"
]
}
]
}
]
Clustering Scripts
Note: Scripts are stored in the OpenDaylight distribution/bin folder, and maintained in the distribution project repos-
itory in the folder distribution-karaf/src/main/assembly/bin/.
This script, configure_cluster.sh, is used to configure the cluster parameters (e.g. akka.conf, module-shards.conf) on a
member of the controller cluster. The user should restart the node to apply the changes.
Note: The script can be used at any time, even before the controller is started for the first time.
Usage:
./bin/configure_cluster.sh <index> <seed_nodes_list>
• index: Integer within 1..N, where N is the number of seed nodes. This indicates which controller node (1..N) is
configured by the script.
• seed_nodes_list: List of seed nodes (IP address), separated by comma or space.
The IP address at the provided index should belong to the member executing the script. When running this script on
multiple seed nodes, keep the seed_node_list the same, and vary the index from 1 through N.
Optionally, shards can be configured in a more granular way by modifying the file “custom_shard_configs.txt” in the
same folder as this tool. Please see that file for more details.
Example:
./bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3
The above command will configure member 2 (IP address 192.168.0.2) of a cluster made of 192.168.0.1,
192.168.0.2, and 192.168.0.3.
This script is used to enable or disable the config datastore persistence. The default state is enabled but there are cases
where persistence may not be required or even desired. The user should restart the node to apply the changes.
Note: The script can be used at any time, even before the controller is started for the first time.
Usage:
bin/set_persistence.sh <on/off>
Example:
bin/set_persistence.sh off
XSQL Overview
XSQL is an XML-based query language that describes simple stored procedures which parse XML data, query or
update database tables, and compose XML output. XSQL allows you to query tree models like a sequential database.
For example, you could run a query that lists all of the ports configured on a particular module and their attributes.
The following sections cover the XSQL installation process, supported XSQL commands, and the way to structure
queries.
Installing XSQL
To run commands from the XSQL console, you must first install XSQL on your system:
1. Navigate to the directory in which you unzipped OpenDaylight
2. Start Karaf:
./bin/karaf
3. Install XSQL:
feature:install odl-mdsal-xsql
The following commands are supported in this OpenDaylight release.
Supported XSQL Console Commands
• r: Repeats the last command you executed.
• list vtables: Lists the schema node containers that are currently installed. Whenever an OpenDaylight
module is installed, its YANG model is placed in the schema context. At that point, XSQL
receives a notification, confirms that the module's YANG model resides in the schema context,
and then maps the model to XSQL by setting up the necessary vtables and vfields. This
command is useful when you need to determine vtable information for a query.
• list vfields <vtable name>: Lists the vfields present in a specific vtable. This command is useful when you need to
determine vfield information for a query.
• jdbc <ip address>: When the ODL server is behind a firewall and the JDBC client cannot connect to the JDBC
server, run this command to start the client as a server and establish a connection.
• exit: Closes the console.
• tocsv: Enables or disables the forwarding of query output as a .csv file.
• filename <filename>: Specifies the .csv file to which the query data is exported. If you do not specify a value for this
option when the tocsv option is enabled, the filename for the query data file is generated
automatically.
XSQL Queries
You can run a query to extract information that meets the criteria you specify, using the information provided by the
list vtables and list vfields <vtable name> commands. Any query you run should be structured as follows:
select <vfields you want to search for, separated by a comma and a space> from <vtables you want to search in,
separated by a comma and a space> where <criteria> <criteria operator>;
For example, if you want to search the nodes/node ID field in the nodes/node-connector table and find every instance
of the Hardware-Address object that contains BA in its text string, enter the following query:
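A hedged sketch of such a query; the exact vtable and vfield names here are assumptions and should be confirmed with list vtables and list vfields:
select nodes/node.ID from nodes/node-connector where Hardware-Address like '%BA%';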
If you are looking at the following structure and want to determine all of the ports that belong to a YY type module:
• Network Element 1
– Module 1, Type XX
* Module 1.1, Type YY
* Port 1
* Port 2
If you specify Module.Type='YY' in your query criteria, the ports associated with module 1.1 will not be returned,
because its parent module (module 1) is type XX. Instead, enter Module.Type='YY' or skip Module!='YY'. This tells
XSQL to disregard any parent module data that does not meet the type YY criteria and collect results for any matching
child modules. In this example, you are instructing the query to skip module 1 and collect the relevant data from
module 1.1.
Overview
This feature allows NETCONF/RESTCONF users to determine the version of OpenDaylight they are communicating
with.
To use this feature, start Karaf and install it:
./bin/karaf
feature:install odl-distribution-version
The version can then be read over RESTCONF from the config datastore; the resource path ends with:
modules/module/odl-distribution-version:odl-version/odl-distribution-version
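A minimal sketch of the full request; the URL prefix is reconstructed here and should be checked against your deployment (default port and credentials assumed):
curl -u admin:admin http://127.0.0.1:8181/restconf/config/config:modules/module/odl-distribution-version:odl-version/odl-distribution-version
A successful call returns the JSON shown below.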
{
"module": [
{
"type": "odl-distribution-version:odl-version",
"name": "odl-distribution-version",
"odl-distribution-version:version": "0.5.0-SNAPSHOT"
}
]
}
This document discusses the various security issues that might affect OpenDaylight. The document also lists specific
recommendations to mitigate security risks.
This document also contains information about the corrective steps you can take if you discover a security issue with
OpenDaylight, and if necessary, contact the Security Response Team, which is tasked with identifying and resolving
security threats.
There are many different kinds of security vulnerabilities that could affect an OpenDaylight deployment, but this guide
focuses on those where (a) the servers, virtual machines or other devices running OpenDaylight have been properly
physically (or virtually in the case of VMs) secured against untrusted individuals and (b) individuals who have access,
either via remote logins or physically, will not attempt to attack or subvert the deployment intentionally or otherwise.
While those attack vectors are real, they are out of the scope of this document.
What remains in scope are attacks launched from a server, virtual machine, or device other than the one running
OpenDaylight, where the attacker does not have valid credentials to access the OpenDaylight deployment.
The rest of this document gives specific recommendations for deploying OpenDaylight in a secure manner, but first
we highlight some high-level security advantages of OpenDaylight.
• Separating the control and management planes from the data plane (both logically and, in many cases, physi-
cally) allows possible security threats to be forced into a smaller attack surface.
• Having centralized information and network control gives network administrators more visibility and control
over the entire network, enabling them to make better decisions faster. At the same time, centralization of
network control can be an advantage only if access to that control is secure.
Note: While both previous advantages improve security, they also make an OpenDaylight deployment an
attractive target for attack, making it even more important to understand these security considerations.
• The ability to more rapidly evolve southbound protocols and how they are used provides more and faster mech-
anisms to enact appropriate security mitigations and remediations.
• OpenDaylight is built from OSGi bundles and the Karaf Java container. Both Karaf and OSGi provide some
level of isolation with explicit code boundaries, package imports, package exports, and other security-related
features.
• OpenDaylight has a history of rapidly addressing known vulnerabilities and a well-defined process for reporting
and dealing with them.
• If you have any security issues, you can send a mail to [email protected].
• For the list of current OpenDaylight security issues that are either being fixed or resolved, refer to https://wiki.
opendaylight.org/view/Security_Advisories.
• To learn more about the OpenDaylight security issues policies and procedure, refer to https://wiki.opendaylight.
org/view/Security:Main
We recommend that you follow the deployment guidelines in setting up OpenDaylight to minimize security threats.
• The default credentials should be changed before deploying OpenDaylight.
• OpenDaylight should be deployed in a private network that cannot be accessed from the internet.
• Separate the data network (that connects devices using the network) from the management network (that con-
nects the network devices to OpenDaylight).
Note: Deploying OpenDaylight on a separate, private management network does not eliminate threats, but only
mitigates them. By construction, some messages must flow from the data network to the management network,
e.g., OpenFlow packet_in messages, and these create an attack surface even if it is a small one.
• Implement an authentication policy for devices that connect to both the data and management network. These
are the devices which bridge, likely untrusted, traffic from the data network to the management network.
OSGi is a Java-specific framework that improves the way Java classes interact within a single JVM. In terms of
security, it provides an enhanced version of java.lang.SecurityManager (ConditionalPermissionAdmin).
Java provides a security framework that allows a security policy to grant permissions, such as reading a file or opening
a network connection, to specific code. The code may be classes from a jar file loaded from a specific URL, or a class
signed by a specific key. OSGi builds on the standard Java security model to add the following features:
• A set of OSGi-specific permission types, such as one that grants the right to register an OSGi service or get an
OSGi service from the service registry.
• The ability to dynamically modify permissions at runtime. This includes the ability to specify permissions by
using code rather than a text configuration file.
• A flexible predicate-based approach to determining which rules are applicable to which ProtectionDomain. This
approach is much more powerful than the standard Java security policy which can only grant rights based on
a jarfile URL or class signature. A few standard predicates are provided, including selecting rules based upon
bundle symbolic-name.
• Support for bundle local permissions policies with optional further constraints such as DENY operations. Most
of this functionality is accessed by using the OSGi ConditionalPermissionAdmin service which is part of the
OSGi core and can be obtained from the OSGi service registry. The ConditionalPermissionAdmin API replaces
the earlier PermissionAdmin API.
For more information, refer to http://www.osgi.org/Main/HomePage.
Apache Karaf is an OSGi-based runtime platform that provides a lightweight container for OpenDaylight and
applications. Apache Karaf uses either the Apache Felix or Eclipse Equinox OSGi framework, and provides
additional features on top of it.
Apache Karaf provides a security framework based on Java Authentication and Authorization Service (JAAS) in
compliance with OSGi recommendations, while providing RBAC (Role-Based Access Control) mechanism for the
console and Java Management Extensions (JMX).
The Apache Karaf security framework is used internally to control the access to the following components:
• OSGi services
• console commands
• JMX layer
• WebConsole
Remote management capabilities are present in Apache Karaf by default; however, they can be disabled through
various configuration changes. These configuration options may be applied to the OpenDaylight Karaf distribution.
Note: Refer to the Apache Karaf documentation for more information on implementing security for the Karaf
container.
You can lock down your deployment post installation. Set karaf.shutdown.port=-1 in
etc/custom.properties or etc/config.properties to disable the remote shutdown port.
Many individual southbound plugins provide mechanisms to secure their communication with network devices. For
example, the OpenFlow plugin supports TLS connections with bi-directional authentication and the NETCONF plugin
supports connecting over SSH. Meanwhile, the Unified Secure Channel plugin provides a way to form secure, remote
connections for supported devices.
When deploying OpenDaylight, you should carefully investigate the secure mechanisms to connect to devices using
the relevant plugins.
AAA stands for Authentication, Authorization, and Accounting. All three can help improve the security posture
of an OpenDaylight deployment. In this release, only authentication is fully supported, while authorization is an
experimental feature and accounting remains a work in progress.
The vast majority of OpenDaylight's northbound APIs (and all RESTCONF APIs) are protected by AAA by default
when the odl-restconf feature is installed. Where APIs are not protected by AAA, this is noted in the
per-project release notes.
By default, OpenDaylight has only one user account, with admin as both the username and password. This should be
changed before deploying OpenDaylight.
While OpenDaylight clustering provides many benefits including high availability, scale-out performance, and data
durability, it also opens a new attack surface in the form of the messages exchanged between the various instances of
OpenDaylight in the cluster. In the current OpenDaylight release, these messages are neither encrypted nor
authenticated, meaning that anyone with access to the management network where OpenDaylight exchanges these
clustering messages can forge and/or read them. If clustering is enabled, it is therefore even more important that
the management network be kept secure from any untrusted entities.