Applying Software-defined Networks to Cloud
Computing
Abstract
Network virtualization and network management for cloud computing systems have become quite active research areas in recent years. More recently, the advent of Software-Defined Networks (SDNs) introduced new concepts for tackling these issues, fostering new research initiatives oriented to the development and application of SDNs in the cloud. The goal of this course is to analyze these opportunities, showing how SDN technology can be employed to develop, organize and virtualize cloud networking. Besides discussing the theoretical aspects related to this integration, as well as the ensuing benefits, we present a practical case study based on the integration between the OpenDaylight SDN controller and the OpenStack cloud operating system.
1.1. Introduction
The present section introduces the main topics of the course, providing an evolutionary view of network virtualization in cloud computing and distributed systems. We present the main changes that have occurred in the field in recent years, focusing on the advent of Software-Defined Networks (SDN) and their implications for the current research scenario.
• Resource sharing: when a device has more resources than what can be consumed by a single entity, those resources can be shared among different users or processes for better usage efficiency. For example, the different user applications or VMs running on a server can share its multiple processors, storage disks or network links. If properly executed, the savings achieved by consolidating small servers onto VMs, for example, can range from 29% to 64% [Menascé 2005];
• Resource aggregation: devices with a low availability of resources can be com-
bined to create a larger-capacity virtual resource. For example, with an adequate file
management system, small-size magnetic disks can be combined to create the impression
of a large virtual disk.
• Ease of management: one of the main advantages of virtualization is that it facilitates the maintenance of virtual hardware resources. One reason is that virtualization usually provides standard software interfaces that abstract the underlying hardware (except for para-virtualization). In addition, legacy applications placed in virtualized environments can keep running even after being migrated to a new infrastructure, as the hypervisor becomes responsible for translating old instructions into those comprehensible by the underlying physical hardware.
• Dynamics: with the constant changes to application requirements and workloads, rapid resource reallocation or new resource provisioning becomes essential for fulfilling these new demands. Virtualization is a powerful tool for this task, since virtual resources can be easily expanded, reallocated, moved or removed without concerns about which physical resources will support the new demands. As an example, when a user provisions a dynamic virtual disk, the underlying physical disk does not need to have that capacity available at provisioning time: it just needs to be available when the user actually needs to use it (see the disk-provisioning sketch after this list).
• Isolation: multi-user environments may contain users that do not trust each other. Therefore, it is essential that all users have their resources isolated from the others, even if this is done logically (i.e., in software). When this happens, malicious users are unable to monitor and/or interfere with other users’ activities, preventing a vulnerability or attack on a given machine from affecting other users.
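To make the dynamic-provisioning idea above concrete, the commands below create a thinly provisioned virtual disk. This is a minimal sketch assuming a Linux host with QEMU’s qemu-img tool installed; the file name and the 100 GB size are arbitrary examples:
$ qemu-img create -f qcow2 dynamic-disk.qcow2 100G
$ ls -lsh dynamic-disk.qcow2
Although the guest sees a 100 GB disk, the qcow2 file initially occupies only a few hundred kilobytes on the physical disk, growing on demand as the guest writes data.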
Different scenarios can also originate different groups of security threats in cloud networking. Consequently, different groups of security solutions built upon network virtualization mechanisms should be applied to ensure the secure provision of cloud services. Also according to [Barros et al. 2015], the sources of security threats in cloud networking scenarios are described as follows.
1. Active Networks (from the mid-1990s to the early 2000s): This phase follows the historical advent of the Internet, a period in which the demands for innovation in the computer networks area were met mainly by the development and testing of new protocols in laboratories with limited infrastructure and simulation tools. In this context, the so-called “active networks” appeared as a first initiative aiming to turn network devices (e.g., switches and routers) into programmable elements and, thus, allow further innovations in the area. This programmability could then allow a separation between the two main functionalities of networking elements: the control plane, which refers to the device’s ability to decide how each packet should be dealt with; and the data plane, which is responsible for forwarding packets at high speed following the decisions made by the control plane. Specifically, active networks introduced a new paradigm for dealing with the network’s control plane, in which the resources (e.g., processing, storage, and packet queues) provided by the network elements could be accessed through application programming interfaces (APIs). As a result, anyone could develop new functionalities for customizing the treatment given to the packets passing by each node composing the network, promoting innovations in the networking area. However, the criticism received due to the potential complexity it would add to the Internet itself, allied to the fact that the distributed nature of the Internet’s control plane was seen as a way to avoid single points of failure, reduced the interest in and diffusion of the active network concept in the industry.
2. Control- and data-plane separation (from around 2001 to 2007): After the Internet became a much more mature technology in the late 1990s, the continuous growth in the volume of traffic turned the attention of the industry and academic communities to requirements such as reliability, predictability and performance of computer networks. The increasing complexity of network topologies, together with concerns regarding the performance of backbone networks, led different hardware manufacturers to develop embedded protocols for packet forwarding, promoting the high integration between the control and data planes seen in today’s Internet. Nevertheless, network operators and Internet Service Providers (ISPs) would still seek new management models to meet the needs of ever larger and more complex network topologies. The importance of a centralized control model became more evident, as well as the need for a separation between the control and data planes. Among the technological innovations arising from this phase, we can cite the creation of open interfaces for communication between the control and data planes, such as ForCES (Forwarding and Control Element Separation) [Yang et al. 2004], whose goal was to enable a logically centralized control over the hardware elements distributed along the network topology [Caesar et al. 2005, Lakshman et al. 2004]. To ensure the efficiency of centralized control mechanisms, the consistent replication of the control logic among the data plane elements would play a key role. The development of such distributed state management techniques is also among the main technological contributions from this phase. There was, however, considerable resistance from equipment suppliers to implementing open communication interfaces, which were seen as a factor that would facilitate the entry of new competitors into the network market. This ended up hindering the widespread adoption of the separation of data and control planes, limiting the number and variety of applications developed for the control plane in spite of the possibility of doing so.
3. OpenFlow and Network Operating System (from 2007 to 2010): The ever-growing demand for open interfaces in the data plane led researchers to explore different clean slate architectures for logically centralized network control [Casado et al. 2007, Greenberg et al. 2005, Chun et al. 2003]. In particular, the Ethane project [Casado et al. 2007] created a centralized control solution for enterprise networks, reducing switch control units to programmable flow tables. The operational deployment of Ethane in the Stanford computer science department, focusing on network experimentation inside the campus, was indeed a huge success, and resulted in the creation of the OpenFlow protocol [McKeown et al. 2008]. OpenFlow enables fully programmable networks by providing a standard data plane API for existing packet switching hardware. The creation of the OpenFlow API, in turn, allowed the emergence of SDN control platforms such as NOX [Gude et al. 2008], thus enabling the creation of a wide range of network applications. OpenFlow provided a unified abstraction of network devices and their functions, defining forwarding behavior through traffic flows based on 13 different instructions. OpenFlow also led to the vision of a network operating system that, unlike the node-oriented model advocated by active networks, organizes the network’s operation into three layers: (1) a data plane with an open interface; (2) a state management layer that is responsible for maintaining a consistent view of the overall network state; and (3) control logic that performs various operations depending on its view of the network state [Koponen et al. 2010]. The need for integrating and orchestrating multiple controllers for scalability, reliability and performance purposes also led to significant enhancements in distributed state management techniques. Following these advances, solutions such as Onix [Koponen et al. 2010] and its open-source counterpart, ONOS (Open Network Operating System) [Berde et al. 2014], introduced the idea of a network information base, which consists of a representation of the network topology and other control state shared by all controller replicas, while incorporating past work in distributed systems to satisfy state consistency and durability requirements.
Analyzing this historical perspective and the needs recognized in each phase, it
becomes easier to see that the SDN concept emerged as a tool for allowing further net-
work innovation, helping researchers and network operators to solve longstanding prob-
lems in network management and also to provide new network services. SDN has been
successfully explored in many different research fields, including areas such as network
virtualization and cloud networking.
That is the main reason why TCP/IP’s simplicity is sometimes accused of being responsible for the “ossification of the Internet” (see Figure 1.2): without the ability to add intelligence to the core of the network itself, many applications had to take corrective actions on other layers; many such patches would be sub-optimal, imposing certain restrictions on the applications that could be deployed with the required levels of security, performance, scalability, mobility, maintainability, etc. Therefore, even though the TCP/IP model displays a reasonably good level of efficiency and is able to meet many of the original requirements of the Internet, many believe it may not be the best solution for the future [Alkmim et al. 2011].
Many of the factors pointed out as the cause of the Internet’s ossification are related to the strong coupling between the control and data planes, whereby the decision on how to treat the data flow and the execution of this decision are both handled by the same device. In such an environment, new network applications or features have to be deployed directly into the network infrastructure, a cumbersome task given the lack of standard interfaces for doing so in a market dominated by proprietary solutions. Actually, even when a vendor does provide interfaces for setting and implementing policies in the network infrastructure, the presence of heterogeneous devices with incompatible interfaces ends up hindering such seemingly trivial tasks.
This ossification issue has led to the creation of dedicated appliances for tasks seen as essential for the network’s correct operation, such as firewalls, intrusion detection systems (IDS) and network address translators (NAT), among others [Moreira et al. 2009]. Since such solutions are often seen as palliative, studies aimed at changing this state of ossification became more prominent, focusing especially on two approaches. The first, more radical, involved the proposal of a completely new architecture that could replace the current Internet model, based on past experiences and identified limitations. This “clean slate” strategy has not received much support, however, not only due to the high costs involved in its deployment, but also because it is quite possible that, after years of effort to build such a specification, it might become outdated after a few decades due to the appearance of new applications with unanticipated requirements. The second approach suggests evolving the current architecture without losing compatibility with current and future devices, thus involving lower costs. By separating the data and control planes, and thus adding flexibility to how the network is operated, the SDN paradigm supports this second strategy [Feamster et al. 2014].
According to [Open Networking Foundation 2012], the formal definition of an
SDN is: “an emerging architecture that is dynamic, manageable, cost-effective, and adapt-
able, making it ideal for the high-bandwidth, dynamic nature of today’s applications. This
architecture decouples the network control and forwarding functions enabling the net-
work control to become directly programmable and the underlying infrastructure to be
abstracted for applications and network services.” This definition is quite comprehensive,
making it clear that the main advantage of the SDN paradigm is to allow different policies
to be dynamically applied to the network by means of a logically centralized controller,
which has a global view of the network and, thus, can quickly adapt the network con-
figuration in response to changes [Kim and Feamster 2013]. At the same time, it enables
independent innovations in the now decoupled control and data planes, besides facilitat-
ing the network state visualization and the consolidation of several dedicated network
appliances into a single software implementation [Kreutz et al. 2014]. This flexibility is
probably among the main reasons why companies from different segments (e.g., device
manufacturers, cloud computing providers, among others) are increasingly adopting the
SDN paradigm as the main tool for managing their resources in an efficient and cost-
effective manner [Kreutz et al. 2014].
The data plane corresponds to the switching circuitry that interconnects all de-
vices composing the network infrastructure, together with a set of rules that define which
actions should be taken as soon as a packet arrives at one of the device’s ports. Examples
of common actions are to forward the packet to another port, rewrite (part of) its header,
or even to discard the packet.
The control plane, in turn, is responsible for programming and managing the data plane, controlling how the routing logic should work. This is done by one or more software controllers, whose main task is to set the routing rules to be followed by each forwarding device through standardized interfaces, called southbound interfaces. These interfaces can be implemented using protocols such as OpenFlow 1.0 and 1.3 [OpenFlow 2009, OpenFlow 2012], OVSDB [Pfaff and Davie 2013] and NETCONF [Enns et al. 2011]. The control plane thus concentrates the intelligence of the network, using information provided by the forwarding elements (e.g., traffic statistics and packet headers) to decide which actions should be taken by them [Kreutz et al. 2014].
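As a small illustration of the southbound side, the commands below point an Open vSwitch bridge at an SDN controller and restrict the OpenFlow versions it speaks. This is a hedged sketch: the bridge name br0 and the controller address 192.168.56.20:6633 are example values, not taken from any specific deployment:
$ sudo ovs-vsctl set-controller br0 tcp:192.168.56.20:6633
$ sudo ovs-vsctl set bridge br0 protocols=OpenFlow10,OpenFlow13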
Finally, developers can take advantage of the protocols provided by the control plane through the northbound interfaces, which abstract the low-level operations for controlling the hardware devices, similarly to what is done by operating systems in computing devices such as desktops. These interfaces can be provided by remote procedure calls (RPC), RESTful services and other cross-application interface models. This greatly facilitates the construction of different network applications that, by interacting with the control plane, can control and monitor the underlying network. This allows them to customize the behavior of the forwarding elements, defining policies for implementing functions such as firewalls, load balancers and intrusion detection, among others.
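For instance, a RESTful northbound interface can be queried with plain HTTP tools. The sketch below assumes an OpenDaylight Hydrogen-era controller running locally with its default admin credentials, whose AD-SAL northbound API exposes the network topology as a resource:
$ curl -u admin:admin http://localhost:8080/controller/nb/v2/topology/default
The response is a JSON document describing the links currently known to the controller, which an application could use, for example, to compute paths.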
1.3.4. The OpenFlow Protocol
The OpenFlow protocol is one of the most commonly used southbound interfaces, being widely supported both in software and hardware, and standardized by the Open Networking Foundation (ONF). It works with the concept of flows, defined as groups of packets matching a specific (albeit non-standard) header [McKeown et al. 2008], which may be treated differently depending on how the network is programmed. OpenFlow’s simplicity and flexibility, allied to its high performance at low cost, its ability to isolate experimental traffic from production traffic, and its ability to cope with vendors’ need for closed platforms [McKeown et al. 2008], are probably among the main reasons for this success.
Whereas other SDN approaches take into account other network elements, such as routers, OpenFlow focuses mainly on switches [Braun and Menth 2014]. Its architecture then comprises three main concepts [Braun and Menth 2014]: (1) the network’s data plane is composed of OpenFlow-compliant switches; (2) the control plane consists of one or more controllers using the OpenFlow protocol; (3) the connection between the switches and the control plane is made through a secure channel.
An OpenFlow switch is basically a forwarding device endowed with a Flow Table, whose entries define the packet forwarding rules to be enforced by the device. To accomplish this goal, each entry of the table comprises three elements [McKeown et al. 2008]: match fields, counters, and actions. The match fields refer to pieces of information that identify the input packets, such as fields of their headers or their ingress ports. The counters, in turn, are reserved for collecting statistics about the corresponding flow. They can, for example, be used for keeping track of the number of packets/bytes matching that flow, or of the time since the last packet belonging to that flow was seen (so inactive flows can be easily identified) [Braun and Menth 2014]. Finally, the actions specify how the packets from the flow must be processed, the most basic options being: (1) forward the packet to a given port, so it can be routed through the network; (2) encapsulate the packet and deliver it to a controller so the latter can decide how it should be dealt with (in this case, the communication is done through the secure channel); or (3) drop the packet (e.g., for security reasons).
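These three elements can be observed directly on a software switch. The commands below are a minimal sketch assuming an Open vSwitch bridge named br0; the ports, addresses and priorities are arbitrary examples. The first rule forwards a flow to a port, the second sends ARP packets to the controller, the third drops telnet traffic, and dump-flows displays the per-entry packet/byte counters:
$ sudo ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.0.2,actions=output:2"
$ sudo ovs-ofctl add-flow br0 "priority=50,arp,actions=CONTROLLER:65535"
$ sudo ovs-ofctl add-flow br0 "priority=200,tcp,tp_dst=23,actions=drop"
$ sudo ovs-ofctl dump-flows br0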
There are two models for the implementation of an OpenFlow switch [McKeown et al. 2008]. The first consists of a dedicated OpenFlow switch, which is basically a “dumb” device that only forwards packets according to the rules defined by a remote controller.
In this case (see Figure 1.4), the flows can be broadly defined by the applications, so the network capabilities are only limited by how the Flow Table is implemented and which actions are available. The second, which may be preferable for legacy reasons, is a classic switch that supports OpenFlow but also keeps its ability to make its own forwarding decisions. In such a hybrid scenario, it is more complicated to provide a clear isolation between OpenFlow and “classical” traffic. To be able to do so, there are basically two alternatives: (1) to add one extra action to the OpenFlow Table, which forwards packets to the switch’s normal processing pipeline, or (2) to define different VLANs for each type of traffic.
Figure 1.4. OpenFlow switch proposed by [McKeown et al. 2008].
Whichever the case, the behavior of the switch’s OpenFlow-enabled portion may be either reactive or proactive. In the reactive mode, whenever a packet arrives at the switch, it tries to find an entry in its Flow Table matching that packet. If such an entry
switch, it tries to find an entry in its Flow Table matching that packet. If such an entry
is found, the corresponding action is executed; otherwise, the flow is redirected to the
controller, which will insert a new entry into the switch’s Flow Table for handling the
flow and only then the packet is forwarded according to this new rule. In the proactive
mode, on the other hand, the switch’s Flow Table is pre-configured and, if an arriving flow
does not match any of the existing rules, the corresponding packets are simply discarded
[Hu et al. 2014a].
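As a hedged sketch of the proactive model (again assuming an Open vSwitch bridge named br0 with example addresses and ports), an operator can pre-install the expected forwarding rules and then add a lowest-priority entry that discards anything matching none of them, instead of punting it to the controller:
$ sudo ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.0.2,actions=output:2"
$ sudo ovs-ofctl add-flow br0 "priority=0,actions=drop"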
Although operating in the proactive mode may require installing a large number of rules on the switches beforehand, one advantage over the reactive mode is that flows are not delayed by the controller’s flow configuration process. Another relevant aspect is that, if the switch is unable to communicate with the controller in the reactive mode, then the switch’s operation will remain limited to the existing rules, which may not be enough for dealing with all flows. In comparison, if the network is designed to work in the proactive mode from the beginning, it is more likely that all flows will be handled by the rules already installed on the switches.
As a last remark, it is interesting to notice that implementing the controller as a centralized entity can provide a global and unique view of the network to all applications, potentially simplifying the management of rules and policies inside the network. However, like any physically centralized server, it also becomes a single point of failure, potentially impairing the network’s availability and scalability. This issue can be solved by implementing a physically distributed controller, so if one controller is compromised, only the switches under its responsibility are affected. In this case, however, it would be necessary to implement synchronization protocols to allow a unique view of the whole network and avoid inconsistencies. Therefore, to take full advantage of the benefits of a distributed architecture, such protocols must be efficient enough not to impact the overall network’s performance.
1.3.5. SDN Controllers
An SDN controller, also called a network operating system, is a software platform where
all the network control applications are deployed. SDN controllers commonly contain
a set of modules that provide different network services for the deployed applications,
including routing, multicasting, security, access control, bandwidth management, traffic
engineering, quality of service, processor and storage optimization, energy usage, and
all forms of policy management, tailored to meet business objectives. The network services provided by the SDN controller consist of network applications running upon the controller platform, and can be classified into different categories.
Open source controllers have been an important vector of innovation in the SDN field. The dynamics of the open source community has led to the development of many SDN projects, including software-based switches and SDN controllers [Casado 2015]. To evaluate and compare different open-source controller solutions and their suitability to each deployment scenario, one can employ metrics such as those summarized in Table 1.1.
Table 1.1. Summary of the main characteristics of open source SDN controllers
                      NOX       POX      Ryu            Floodlight  ODL
Language              C++       Python   Python         Java        Java
Performance           High      Low      Low            High        High
Distributed           No        No       Yes            Yes         Yes
OpenFlow support      1.0       1.0      1.0, 1.2–1.4   1.0, 1.3    1.0, 1.3
Multi-tenant clouds   No        No       Yes            Yes         Yes
Learning curve        Moderate  Easy     Moderate       Steep       Steep
• Control plane and data plane separation: The separation between control and data planes in SDN architectures, as well as the standardization of interfaces for the communication between those layers, made it possible to conceptually unify network devices from different vendors under the same control mechanisms. For network virtualization purposes, the abstraction provided by the control plane and data plane separation facilitates deploying, configuring, and updating devices across virtualized network infrastructures. The control plane separation also introduces the idea of network operating systems, which consist of scalable and programmable platforms for managing and orchestrating virtualized networks.
• Network programmability: Programmability of network devices is one of the
main contributions from SDN to network virtualization. Before the advent of SDN, net-
work virtualization was limited to the static implementation of overlay technologies (such
as VLAN), a task delegated to network administrators and logically distributed among
the physical infrastructure. The programming capabilities introduced by SDN provide the
dynamics necessary to rapidly scale, maintain and configure new virtual networks. More-
over, network programmability also allows the creation of custom network applications
oriented to innovative network virtualization solutions.
• Logically centralized control: The abstraction of data plane devices provided by
SDN architecture gives the network operating system, also known as SDN orchestration
system, a unified view of the network. Therefore, it allows custom control applications to
access the entire network topology from a logically centralized control platform, enabling
the centralization of configurations and policy management. This way, the deployment
and management of network virtualization technologies becomes easier than in early dis-
tributed approaches.
• Automated management: the SDN architecture enhances network virtualization
platforms by providing support for automation of administrative tasks. The centralized
control and the programming capabilities provided by SDN allow the development of
customized network applications for virtual network creation and management. Auto-
scaling, traffic control and QoS are examples of automation tools that can be applied to
virtual network environments.
Among the variety of scenarios where SDN can improve network virtualization implementations, we can mention campus network testbeds [Berman et al. 2014], enterprise networks [Casado et al. 2007], multitenant data centers [Koponen et al. 2014] and cloud networking [Jain and Paul 2013b]. Despite the successful application of SDN technologies in such network virtualization use cases, however, much work is still needed both to improve the existing network infrastructure and to explore SDN’s potential for solving problems in network virtualization. Examples include SDN applications in scenarios such as home networks, enterprise networks, Internet exchange points, cellular networks, Wi-Fi radio access networks, and the joint management of end-host applications.
• Identity and Access Management (IAM): refers to controls for identity verifica-
tion and access management.
• Data Loss Prevention: related to monitoring, protecting and verifying the secu-
rity of data at rest, in motion and in use.
• Web Security: real-time protection offered either on-premise, through soft-
ware/appliance installation, or via the cloud, by proxying or redirecting web traffic to
the cloud provider.
• Email Security: control over inbound and outbound email, protecting the organization from phishing or malicious attachments, as well as enforcing corporate policies (e.g., acceptable use and spam prevention), and providing business continuity options.
• Security assessments: refers to third-party audits of cloud services or assess-
ments of on-premises systems.
• Intrusion Management: using pattern recognition to detect and react to statisti-
cally unusual events. This may include reconfiguring system components in real time to
stop or prevent an intrusion.
• Security Information and Event Management (SIEM): analysis of logs and event information, aiming to provide real-time reporting and alerting on incidents that may require intervention. The logs are likely to be kept in a manner that prevents tampering, thus enabling their use as evidence in any investigations.
• Encryption: providing data confidentiality by means of encryption algorithms.
• Business Continuity and Disaster Recovery: refers to measures designed and
implemented to ensure operational resiliency in the event of service interruptions.
• Network Security: security services that allocate, access, distribute, monitor,
and protect the underlying resource services.
Security service solutions for the Internet can be commonly found nowadays, in what constitutes a segmentation of the Software as a Service (SaaS) market. This can be verified, for example, in sites that provide credit card payment services, in services that offer online security scanning (e.g., anti-malware/anti-spam) for a user’s personal computer, or even in Internet access providers that offer firewall services to their users. These solutions are closely related to the above-mentioned Web Security, Email Security and Intrusion Management categories, and their main vendors include Cisco, McAfee, Panda Software, Symantec, Trend Micro and VeriSign [Rouse 2010].
However, this kind of service has been deemed insufficient to attract the trust of many security-aware end-users, especially those that have knowledge of cloud inner workings or are in search of IaaS services. Aiming to attract this audience and, especially, to improve cloud internal security requirements, organizations have been investing in SDN solutions capable of improving security on (cloud) virtual networks. As a recent example of a cloud-oriented SDN firewall, we can mention FlowGuard [Hu et al. 2014b]. Besides basic firewall features, FlowGuard also provides a comprehensive framework for facilitating the detection and resolution of firewall policy violations in dynamic OpenFlow-based networks: security policy violations can be detected in real time, when the network status is updated, allowing (tenant or cloud) administrators to decide whether to adopt distinct security strategies for each network state [Hu et al. 2014b].
Another recent security solution is Ananta [Patel et al. 2013], an SDN-based load balancer for large-scale cloud computing environments. In a nutshell, the solution consists of a layer-4 load balancer that, by placing one agent in every host, allows packet modification tasks to be distributed along the network, thus improving scalability. Finally, for the purpose of detecting or preventing intrusions, one recent solution is the one introduced in [Xiong 2014], which can be seen as an SDN-based defensive system for the detection, analysis, and mitigation of anomalies. Specifically, the proposed solution takes advantage of the flexibility, compatibility and programmability of SDN to propose a framework with a Customized Detection Engine, Network Topology Finder, Source Tracer and further user-developed security appliances, including protection against DDoS attacks.
It is interesting to notice that, although the NFV and SDN concepts are considered highly complementary, they do not depend on each other [ETSI 2012]. Instead, both approaches can be combined to promote innovation in the networking context: the capability of SDN to abstract and programmatically control network resources fits well with the need of NFV to create and manage a dynamic, on-demand network environment with adequate performance. This synergy between the concepts has led ONF and ETSI to work together with the common goal of evolving both approaches and providing a structured environment for their development. Table 1.2 provides a comparison between the SDN and NFV concepts.
Table 1.2. Comparison between SDN and NFV (Adapted from [Jammal et al. 2014]).
• Motivation: for SDN, decoupling of the control and data planes, providing a centralized controller and network programmability; for NFV, abstraction of network functions from dedicated hardware appliances to commercial off-the-shelf (COTS) servers.
• Network location: data centers (SDN); service provider networks (NFV).
• Network devices: servers and switches for both SDN and NFV.
• Protocols: OpenFlow (SDN); not applicable (NFV).
• Applications: cloud orchestration and networking (SDN); firewalls, gateways and content delivery networks (NFV).
• Standardization committee: Open Networking Foundation (ONF) for SDN; the ETSI NFV group for NFV.
1.5.2.1. Components
The Neutron service comprises several components. To explain how these components
are deployed in an OpenStack environment, it is useful to consider a typical deployment
scenario with dedicated nodes for network, compute and control services, as shown in
Figure 1.9. The roles of each Neutron component illustrated in this figure are:
• neutron-server: The Neutron server component provides the APIs for all network services implemented by Neutron. In the deployment shown in Figure 1.9, this component is located inside the cloud controller node, the host responsible for providing the APIs for all the OpenStack services running inside the cloud through the API network. The controller node can provide API access both for the other OpenStack services and for the end users of the cloud, respectively through the management network and the Internet.
• neutron-*-plugin-agent: Neutron plug-in agents implement the network services provided by the Neutron API, such as layer 2 connectivity and firewalls. The plug-ins are distributed among network and compute nodes and provide different levels of network services for the OpenStack cloud infrastructure.
• neutron-l3-agent: The Neutron L3 agent is the component that implements
Neutron API’s layer 3 connectivity services. It connects tenant VMs via layer 3 net-
works, including internal and external networks. The Neutron L3 agent is located on the
network node, which is connected to the Internet via the External network.
• neutron-dhcp-agent: The Neutron DHCP agent provides dynamic IP distribution for tenant networks. It also implements the floating IP service, which provides external IP addresses for tenant VMs, enabling Internet connectivity. It is also located on the network node and connected to the Internet via the External network.
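As a quick way to see these components in a running deployment, the list of active agents and the hosts where they run can be queried through the Neutron API. This is a minimal sketch assuming the neutron command-line client is installed and admin credentials have been loaded into the environment:
$ neutron agent-list
The output should include entries such as neutron-l3-agent and neutron-dhcp-agent on the network node, and the layer 2 plug-in agents on the compute nodes.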
Figure 1.9 also depicts a standard deployment architecture for the physical data center networks interconnecting the nodes, including the API, management and external networks mentioned above.
Besides these components and physical networks, Neutron makes use of virtualized network elements such as virtual switches and virtual network interfaces to provide connectivity to tenant VMs. The concept of bridges is particularly important here: bridges are instances of virtual switches implemented by software such as Open vSwitch [Open vSwitch 2015, Pfaff et al. 2009] and used to deploy network virtualization for tenant VMs. There are three types of bridges created and managed by Neutron in an OpenStack deployment: the integration bridge (br-int), to which the VMs’ virtual interfaces are attached; the tunnel bridge (br-tun), which connects the nodes through overlay tunnels; and the external bridge (br-ex), which connects the network node to the Internet.
The connectivity architecture provided by Neutron, with the relationships between VMs, virtual bridges and physical nodes, is illustrated in Figure 1.10. This figure assumes the same deployment scenario presented in Figure 1.9, focusing on the network and compute nodes of the OpenStack infrastructure.
Figure 1.10. Implementation of virtual networks using Neutron and virtual switches.
Two other important components of Neutron networking are the network node’s router and DHCP components (see Figure 1.10). They are implemented by means of network namespaces, a kernel facility that allows groups of processes to have a network stack (interfaces, routing tables, iptables rules) distinct from that of the underlying host. More precisely, a Neutron router is a network namespace with a set of routing tables and iptables rules that handle the routing between subnets, while the DHCP server is an instance of the dnsmasq software running inside a network namespace, providing dynamic IP distribution or floating IPs for tenant networks.
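These namespaces can be inspected directly on the network node. The sketch below assumes a host where Neutron has already created a router; qrouter-<router-id> is a placeholder for the name that ip netns actually reports:
$ sudo ip netns
$ sudo ip netns exec qrouter-<router-id> ip route
$ sudo ip netns exec qrouter-<router-id> iptables -t nat -L
The first command lists the namespaces, while the other two show the routing table and the NAT rules that implement a Neutron router inside its namespace.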
The ML2 plugin allows OpenStack to support new layer 2 networking technologies, requiring less initial and ongoing effort than adding a new monolithic core plugin.
Figure 1.12 presents the overall architecture of the Neutron ML2 plugin. As the name implies, the ML2 framework has a modular structure, composed of two different sets of drivers: one for the different network types (TypeDrivers) and another for the different mechanisms for accessing each network type, as multiple mechanisms can be used simultaneously to access different ports of the same virtual network (MechanismDrivers). ML2 can access L2 agents via remote procedure calls (RPC) and/or use mechanism drivers to interact with external devices or controllers.
• Type drivers: Each available network type is managed by an ML2 TypeDriver. TypeDrivers maintain any needed type-specific network state, and perform provider network validation and tenant network allocation. The ML2 plugin currently includes drivers for the local, flat, VLAN, GRE and VXLAN network types.
• Mechanism drivers: The MechanismDriver is responsible for taking the information established by the TypeDriver and ensuring that it is properly applied, given the specific networking mechanisms that have been enabled. The MechanismDriver interface currently supports the creation, update, and deletion of network resources. For every action taken on a resource, the mechanism driver exposes two methods: a precommit method (called within the database transaction context) and a postcommit method (called after the database transaction is complete). The precommit method is used by mechanism drivers to validate the action being taken and make any required changes to the mechanism driver’s private database, while the postcommit method is responsible for appropriately pushing the change to the resource or to the entity responsible for applying that change.
The ML2 plugin architecture thus allows the type drivers to support multiple networking technologies, while the mechanism drivers apply the networking configuration in a transactional model.
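As an illustration of how these drivers are selected in practice, the snippet below shows the relevant options of the ML2 configuration file. This is a hedged sketch assuming the common /etc/neutron/plugins/ml2/ml2_conf.ini location and a deployment combining the Open vSwitch and OpenDaylight mechanism drivers:
$ cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,opendaylight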
Figure 1.13. Overview of OpenDaylight functions and benefits.
The OpenDaylight platform also implements a service abstraction layer (SAL), which provides a high-level view of the data plane protocols to facilitate the development of control plane applications.
• Southbound Interfaces and Protocol Plugins: Southbound interfaces contain
the plugins that implement the protocols used for programming the data plane.
• Data Plane Elements: Physical and virtual network devices that compose the
data plane and are programmed via the southbound protocol plugins. The variety of
southbound protocols supported by the OpenDaylight controller allows the deployment
of network devices from different vendors in the underlying network infrastructure.
The service abstraction layer (SAL) is one of the main innovations of the OpenDaylight architecture. To enable communication between plugins, this message exchange mechanism ignores the role of southbound and northbound plugins and builds upon the definition of Consumer and Provider plugins (see Figure 1.15): providers are plugins that expose features to applications and other plugins through their northbound APIs, whereas consumers are components that make use of the features provided by one or more providers. This means that every plugin inside OpenDaylight can be seen as both a provider and a consumer, depending only on the messaging flow between the plugins involved.
In OpenDaylight, SAL is responsible for managing the messaging between all the applications and underlying plugins. Figure 1.16 shows the life of a packet inside the OpenDaylight architecture, depicting the following steps:
1. A packet arriving at a switch is sent to the appropriate southbound protocol plugin;
2. The plugin parses the packet and generates an event for SAL;
3. SAL dispatches the packet to the service plugins listening for DataPacket;
4. The module handles the packet and sends it out via the IDataPacketService;
5. SAL dispatches the packet to the southbound plugins listening for DataPacket;
6. An OpenFlow message is sent to the appropriate switch.
Table 1.4. Hardware and software requirements for the OpenStack nodes.
As explained before, this experiment will be performed based on two VMs configured as described in Table 1.4. Any virtualization system can be adopted to perform the experiment in a virtualized environment, which means running the compute and network nodes as VMs. The only important restriction is that both nodes should be connected to the same local network. For the purpose of this demo, we assume the network configurations presented in Figure 1.19. Before proceeding with the experiment, it is important to execute ping requests between the nodes, with the corresponding IP addresses, to ensure there is connectivity between them.
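For example, assuming the addressing of Figure 1.19, in which the network and controller node is reachable at 192.168.56.20 (the same address used later to access the controller GUI), connectivity can be checked from the compute node as follows; the test should then be repeated in the opposite direction with the compute node’s address:
$ ping -c 3 192.168.56.20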
To proceed with the setup of OpenStack, we should start Open vSwitch [Open vSwitch 2015], the switch virtualization software used to create the virtual bridges on both the compute and network nodes, as discussed in Section 1.5.2. During the OpenStack startup process, Neutron makes use of the running Open vSwitch software to build the necessary bridges in the server nodes. To run the Open vSwitch software in the experiment’s servers, the following command should be executed on the terminal of both the compute and network nodes.
$ sudo /sbin/service openvswitch start
To verify that there are no existing bridges so far, the following command should be run on the terminal of both the compute and network nodes:
$ sudo ovs-vsctl show
The expected result is an empty list, indicating that the nodes have no bridge
configured. If that is not the case, the existing bridges should be removed. This can be
accomplished by running the following command, which deletes the bridge named br0
and all of its configurations:
$ sudo ovs-vsctl del-br br0
1.5.6.3. Running the OpenDaylight Controller
We are now ready to run the OpenDaylight SDN controller, which will be used by Neutron
to dynamically program the created virtual network bridges. As explained in Section
1.5.5, the SDN controller receives REST requests from the ODL driver, which implements
the methods called by ML2 plugin for implementing the layer 2 network services provided
by the Neutron API. To run OpenDaylight, the following commands should be executed on the network and controller node:
$ cd odl/opendaylight/
$ ./run.sh -XX:MaxPermSize=384m -virt ovsdb -of13
Now that we have Open vSwitch and the OpenDaylight SDN controller running, we can start the OpenStack services on the compute node and on the network and controller node. These nodes will then make use of these software resources to create the entire virtual network infrastructure.
In this experiment, we run OpenStack through the Devstack project
[Devstack 2015], which consists of an installation script to get the whole stack of Open-
Stack services up and running. Created for development purposes, Devstack provides
a non-persistent cloud environment that supports the deployment, experimentation, de-
bugging and test of cloud applications. To start the necessary OpenStack services in our
deployment, the following commands should be executed on both compute and network
nodes.
$ cd devstack
$ ./stack.sh
The initialization can take a few minutes. The reason is that the script “stack.sh”
contains the setup commands to start all the specified OpenStack services in the local
node. For the purpose of this demo, the Devstack scripts located inside the nodes are
pre-configured to run network, controller and compute services inside the network node
and only compute services inside the compute node. For didactic purposes, we also start
compute services inside the network node. This allows us to have two compute nodes
in our deployment infrastructure, so we can distribute the instantiated VMs in different
servers over the layer 2 configuration. To verify that we have two compute nodes up and
running, the following command should be run on the network node:
$ . ./openrc admin demo
$ nova hypervisor-list
As a result of this command, the IDs and hostnames of the two hypervisors running on the compute and network nodes should be shown.
After executing the Devstack script, we should see new log messages in the Open-
Daylight terminal, inside the network node. The messages correspond to the creation of
two virtual networks by Neutron using the virtual bridges. The networks created corre-
spond to the default private and public OpenStack networks, used to connect the VMs of
the default tenant in a private virtual LAN and to provide connectivity to the Internet for
those VMs. By running the following command inside each OpenStack node, we should be able to visualize the virtual bridges created by Neutron during the setup process with the OpenDaylight controller.
$ sudo ovs-vsctl show
The results obtained should be different for the compute and the network nodes.
This happens because Neutron creates the external bridge (br-ex) only for the network
node, enabling the network node to provide Internet connectivity for cloud VMs.
In this step, we instantiate two VMs for the default tenant of the OpenStack cloud. Then, we analyze their communication through the virtual network architecture created by Neutron. The following commands instantiate the VMs, named demo-vm1 and demo-vm2:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --nic
net-id=$(neutron net-list | grep private | awk '{print $2}') demo-vm1
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --nic
net-id=$(neutron net-list | grep private | awk '{print $2}') demo-vm2
To check the status of both VMs and verify if they were successfully created, the
following command should be used. The command should show that both VMs are active.
$ nova list
To verify on which node each VM is running, the following command can be used on each node:
$ sudo virsh list
Now, to make sure that the created VMs are reachable in the network created by
Neutron, we can ping them from both the router and the dhcp servers created as compo-
nents of the virtual network infrastructure deployed in the network node. To be able to do
that, we should first run the following command on the network node:
$ ip netns
The output of this command lists the network namespaces running inside the network node, allowing us to identify those of the qrouter and qdhcp services, for example:
qdhcp-3f0cfbd2-f23c-481a-8698-3b2dcb7c2657
qrouter-992e450a-875c-4721-9c82-606c283d4f92
These namespace names are necessary to ping the VMs from both the qrouter and the qdhcp namespaces using the VMs’ private network IPs.
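For instance, using the router namespace listed above (a sketch in which <vm-private-ip> is a placeholder for the private address reported by nova list for each VM):
$ sudo ip netns exec qrouter-992e450a-875c-4721-9c82-606c283d4f92 ping -c 3 <vm-private-ip>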
Finally, we can visualize the network topology created by Neutron via the OpenDaylight GUI (Graphical User Interface). To do that, the following URL should be accessed from any of the server nodes:
http://192.168.56.20:8080
The OpenDaylight GUI should show all the network bridges (represented as network devices) created and configured by Neutron using the controller. It is possible to visualize the flow tables of each virtual bridge, as well as to insert and delete flows. The OpenDaylight GUI also acts as a control point for all the data plane elements supported by the southbound API protocols, enabling monitoring, management and operation functions over the network.
References
[Al-Shaer and Al-Haj 2010] Al-Shaer, E. and Al-Haj, S. (2010). FlowChecker: Configuration analysis and verification of federated OpenFlow infrastructures. In Proc. of the 3rd ACM Workshop on Assurable and Usable Security Configuration (SafeConfig’10), pages 37–44, New York, NY, USA. ACM.
[Alkmim et al. 2011] Alkmim, G., Batista, D., and Fonseca, N. (2011). Mapeamento de
redes virtuais em substratos de rede. In Anais do Simpósio Brasileiro de Redes de Com-
putadores e Sistemas Distribuídos – SBRC’2011, pages 45–58. Sociedade Brasileira de
Computação – SBC.
[Amazon 2014] Amazon (2014). Virtualization Types – Amazon Elastic Com-
pute Cloud. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/
virtualization_types.html. Accessed: 2015-03-13.
[Anderson et al. 2005] Anderson, T., Peterson, L., Shenker, S., and Turner, J. (2005).
Overcoming the Internet impasse through virtualization. Computer, 38(4):34–41.
[Autenrieth et al. 2013] Autenrieth, A., Elbers, J.-P., Kaczmarek, P., and Kostecki, P.
(2013). Cloud orchestration with SDN/OpenFlow in carrier transport networks. In
15th Int. Conf. on Transparent Optical Networks (ICTON), pages 1–4.
[Aznar et al. 2013] Aznar, J., Jara, M., Rosello, A., Wilson, D., and Figuerola, S. (2013).
OpenNaaS based management solution for inter-data centers connectivity. In IEEE 5th
Int. Conf. on Cloud Computing Technology and Science (CloudCom), volume 2, pages
75–80.
[Barros et al. 2015] Barros, B., Iwaya, L., Andrade, E., Leal, R., Simplicio, M., Car-
valho, T., Mehes, A., and Näslund, M. (2015). Classifying security threats in cloud
networking. In Proc. of the 5th Int. Conf. on Cloud Computing and Services Science
(CLOSER’2015) (to appear). Springer.
[Bavier et al. 2006] Bavier, A., Feamster, N., Huang, M., Peterson, L., and Rexford, J.
(2006). In VINI veritas: Realistic and controlled network experimentation. In Proc. of
the 2006 Conf. on Applications, Technologies, Architectures, and Protocols for Com-
puter Communications (SIGCOMM’06), pages 3–14, New York, NY, USA. ACM.
[Berde et al. 2014] Berde, P., Gerola, M., Hart, J., Higuchi, Y., Kobayashi, M., Koide, T.,
Lantz, B., O’Connor, B., Radoslavov, P., Snow, W., and Parulkar, G. (2014). ONOS:
towards an open, distributed SDN OS. In Proc. of the 3rd Workshop on Hot topics in
software defined networking, pages 1–6. ACM.
[Berman et al. 2014] Berman, M., Chase, J., Landweber, L., Nakao, A., Ott, M., Ray-
chaudhuri, D., Ricci, R., and Seskar, I. (2014). GENI: A federated testbed for innova-
tive network experiments. Computer Networks, 61:5–23.
[Bilger et al. 2013] Bilger, B., Boehme, A., Flores, B., Schweitzer, J., and Islam, J.
(2013). Software Defined Perimeter. Cloud Security Alliance – CSA. https:
//cloudsecurityalliance.org/research/sdp/. Accessed: 2015-03-13.
[Braun and Menth 2014] Braun, W. and Menth, M. (2014). Software-defined network-
ing using OpenFlow: Protocols, applications and architectural design choices. Future
Internet, 6(2):302–336.
[Caesar et al. 2005] Caesar, M., Caldwell, D., Feamster, N., Rexford, J., Shaikh, A., and
van der Merwe, J. (2005). Design and implementation of a routing control platform.
In Proc. of the 2nd Symposium on Networked Systems Design & Implementation, vol-
ume 2, pages 15–28. USENIX Association.
[Canini et al. 2012] Canini, M., Venzano, D., Perešíni, P., Kostić, D., and Rexford, J. (2012). A NICE way to test OpenFlow applications. In Proc. of the 9th USENIX Conf. on Networked Systems Design and Implementation (NSDI’12), pages 10–10.
[Carapinha and Jiménez 2009] Carapinha, J. and Jiménez, J. (2009). Network virtual-
ization: a view from the bottom. In Proc. of the 1st ACM workshop on Virtualized
infrastructure systems and architectures, pages 73–80. ACM.
[Casado et al. 2007] Casado, M., Freedman, M., Pettit, J., Luo, J., McKeown, N., and
Shenker, S. (2007). Ethane: Taking control of the enterprise. In Proc. of the 2007
Conference on Applications, Technologies, Architectures, and Protocols for Computer
Communications (SIGCOMM’07), pages 1–12, New York, NY, USA. ACM.
[Cheng et al. 2014] Cheng, Y., Ganti, V., Lubsey, V., Shekhar, M., and Swan, C.
(2014). Software-Defined Networking Rev. 2.0. White paper, Open Data Center
Alliance, Beaverton, OR, USA. http://www.opendatacenteralliance.org/
docs/software_defined_networking_master_usage_model_rev2.pdf.
Accessed: 2015-03-13.
[Chun et al. 2003] Chun, B., Culler, D., Roscoe, T., Bavier, A., Peterson, L., Wawrzo-
niak, M., and Bowman, M. (2003). PlanetLab: an overlay testbed for broad-coverage
services. ACM SIGCOMM Computer Communication Review, 33(3):3–12.
[Costa et al. 2012] Costa, P., Migliavacca, M., Pietzuch, P., and Wolf, A. (2012). NaaS:
Network-as-a-service in the cloud. In Proc. of the 2nd USENIX Conf. on Hot Topics in
Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE’12),
pages 1–1, Berkeley, CA, USA. USENIX Association.
[CSA 2011] CSA (2011). SecaaS: Defined categories of service 2011. Technical report,
Cloud Security Alliance. https://downloads.cloudsecurityalliance.
org/initiatives/secaas/SecaaS_V1_0.pdf.
[Devstack 2015] Devstack (2015). DevStack – an OpenStack Community Production. http://docs.openstack.org/developer/devstack/. Accessed: 2015-03-13.
[Farinacci et al. 2000] Farinacci, D., Li, T., Hanks, S., Meyer, D., and Traina, P. (2000).
RFC 2784 – generic routing encapsulation (GRE). https://tools.ietf.org/
html/rfc2784.
[Feamster et al. 2007] Feamster, N., Gao, L., and Rexford, J. (2007). How to lease
the internet in your spare time. ACM SIGCOMM Computer Communication Review,
37(1):61–64.
[Feamster et al. 2013] Feamster, N., Rexford, J., and Zegura, E. (2013). The road to
SDN. Queue, 11(12):20–40.
[Feamster et al. 2014] Feamster, N., Rexford, J., and Zegura, E. (2014). The road to
SDN: An intellectual history of programmable networks. SIGCOMM Comput. Com-
mun. Rev., 44(2):87–98.
[Floodlight 2015] Floodlight (2015). Floodlight OpenFlow Controller. http://www.
projectfloodlight.org/floodlight/. Accessed: 2015-03-01.
[Gember et al. 2012] Gember, A., Prabhu, P., Ghadiyali, Z., and Akella, A. (2012). To-
ward software-defined middlebox networking. In Proc. of the 11th ACM Workshop on
Hot Topics in Networks, HotNets-XI, pages 7–12, New York, NY, USA. ACM.
[Greenberg et al. 2005] Greenberg, A., Hjalmtysson, G., Maltz, D. A., Myers, A., Rex-
ford, J., Xie, G., Yan, H., Zhan, J., and Zhang, H. (2005). A clean slate 4d approach
to network control and management. ACM SIGCOMM Computer Communication Re-
view, 35(5):41–54.
[Gude et al. 2008] Gude, N., Koponen, T., Pettit, J., Pfaff, B., Casado, M., McKeown,
N., and Shenker, S. (2008). NOX: towards an operating system for networks. ACM
SIGCOMM Computer Communication Review, 38(3):105–110.
[Handigol et al. 2012a] Handigol, N., Heller, B., Jeyakumar, V., Lantz, B., and McKe-
own, N. (2012a). Reproducible network experiments using container-based emulation.
In Proc. of the 8th Int. Conf. on Emerging networking experiments and technologies,
pages 253–264. ACM.
[Handigol et al. 2012b] Handigol, N., Heller, B., Jeyakumar, V., Maziéres, D., and McK-
eown, N. (2012b). Where is the debugger for my software-defined network? In Proc.
of the 1st Workshop on Hot Topics in Software Defined Networks (HotSDN’12), pages
55–60, New York, NY, USA. ACM.
[Hu et al. 2014a] Hu, F., Hao, Q., and Bao, K. (2014a). A survey on software-defined
network and OpenFlow: From concept to implementation. IEEE Communications
Surveys & Tutorials, 16(4):2181–2206.
[Hu et al. 2014b] Hu, H., Han, W., Ahn, G.-J., and Zhao, Z. (2014b). FlowGuard: Build-
ing robust firewalls for software-defined networks. In Proc. of the 3rd Workshop on
Hot Topics in Software Defined Networking (HotSDN’14), pages 97–102, New York,
NY, USA. ACM.
[IEEE 2012a] IEEE (2012a). 802.1BR-2012 – IEEE standard for local and metropolitan
area networks–virtual bridged local area networks–bridge port extension. Technical
report, IEEE Computer Society.
[IEEE 2012b] IEEE (2012b). IEEE standard for local and metropolitan area networks–
media access control (MAC) bridges and virtual bridged local area networks–
amendment 21: Edge virtual bridging. IEEE Std 802.1Qbg-2012, pages 1–191.
[IEEE 2014] IEEE (2014). IEEE standard for local and metropolitan area networks–
bridges and bridged networks. IEEE Std 802.1Q-2014, pages 1–1832.
[Jafarian et al. 2012] Jafarian, J., Al-Shaer, E., and Duan, Q. (2012). OpenFlow random host mutation: Transparent moving target defense using software defined networking. In Proc. of the 1st Workshop on Hot Topics in Software Defined Networks (HotSDN’12), pages 127–132, New York, NY, USA. ACM.
[Jain and Paul 2013a] Jain, R. and Paul, S. (2013a). Network virtualization and software
defined networking for cloud computing: a survey. Communications Magazine, IEEE,
51(11):24–31.
[Jain and Paul 2013b] Jain, R. and Paul, S. (2013b). Network virtualization and software
defined networking for cloud computing: a survey. Communications Magazine, IEEE,
51(11):24–31.
[Jammal et al. 2014] Jammal, M., Singh, T., Shami, A., Asal, R., and Li, Y. (2014).
Software-defined networking: State of the art and research challenges. CoRR,
abs/1406.0124.
[Khurshid et al. 2013] Khurshid, A., Zou, X., Zhou, W., Caesar, M., and Godfrey, P.
(2013). VeriFlow: Verifying network-wide invariants in real time. In Proc. of the 10th
USENIX Conference on Networked Systems Design and Implementation (NSDI’13),
pages 15–28, Berkeley, CA, USA. USENIX Association.
[Kim et al. 2013] Kim, D., Gil, J.-M., Wang, G., and Kim, S.-H. (2013). Integrated SDN and non-SDN network management approaches for future Internet environment. In Multimedia and Ubiquitous Engineering, pages 529–536. Springer.
[Kim and Feamster 2013] Kim, H. and Feamster, N. (2013). Improving network management with software defined networking. IEEE Communications Magazine, 51(2):114–119.
[Koponen et al. 2014] Koponen, T., Amidon, K., Balland, P., Casado, M., Chanda, A., Fulton, B., Ganichev, I., Gross, J., Gude, N., Ingram, P., et al. (2014). Network virtualization in multi-tenant datacenters. In Proc. of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI’14). USENIX Association.
[Koponen et al. 2010] Koponen, T., Casado, M., Gude, N., Stribling, J., Poutievski, L., Zhu, M., Ramanathan, R., Iwata, Y., Inoue, H., Hama, T., et al. (2010). Onix: A distributed control platform for large-scale production networks. In Proc. of the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI’10), pages 1–6. USENIX Association.
[Kreutz et al. 2013] Kreutz, D., Ramos, F., and Verissimo, P. (2013). Towards secure and
dependable software-defined networks. In Proc. of the 2nd ACM SIGCOMM Workshop
on Hot Topics in Software Defined Networking (HotSDN’13), pages 55–60, New York,
NY, USA. ACM.
[Kreutz et al. 2014] Kreutz, D., Ramos, F. M. V., Veríssimo, P., Rothenberg, C. E.,
Azodolmolky, S., and Uhlig, S. (2014). Software-defined networking: A compre-
hensive survey. CoRR, abs/1406.0440.
[Lakshman et al. 2004] Lakshman, T., Nandagopal, T., Ramjee, R., Sabnani, K., and Woo, T. (2004). The SoftRouter architecture. In Proc. of the 3rd ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets-III).
[Lantz et al. 2010] Lantz, B., Heller, B., and McKeown, N. (2010). A network in a lap-
top: rapid prototyping for software-defined networks. In Proc. of the 9th ACM SIG-
COMM Workshop on Hot Topics in Networks, page 19. ACM.
[Lin et al. 2014] Lin, Y., Pitt, D., Hausheer, D., Johnson, E., and Lin, Y. (2014).
Software-defined networking: Standardization for cloud computing’s second wave.
Computer, 47(11):19–21.
[Mahalingam et al. 2014a] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger, L., Sridhar, T., Bursell, M., and Wright, C. (2014a). Virtual extensible local area network (VXLAN): A framework for overlaying virtualized layer 2 networks over layer 3 networks. RFC 7348, IETF.
[Mahalingam et al. 2014b] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger, L., Sridhar, T., Bursell, M., and Wright, C. (2014b). VXLAN: A framework for overlaying virtualized layer 2 networks over layer 3 networks. Internet-Draft draft-mahalingam-dutt-dcops-vxlan-08, IETF.
[McKeown et al. 2008] McKeown, N., Anderson, T., Balakrishnan, H., Parulkar, G., Pe-
terson, L., Rexford, J., Shenker, S., and Turner, J. (2008). OpenFlow: enabling in-
novation in campus networks. ACM SIGCOMM Computer Communication Review,
38(2):69–74.
[Mechtri et al. 2013] Mechtri, M., Houidi, I., Louati, W., and Zeghlache, D. (2013).
SDN for inter cloud networking. In IEEE SDN for Future Networks and Services
(SDN4FNS), pages 1–7.
[Medved et al. 2014] Medved, J., Varga, R., Tkacik, A., and Gray, K. (2014). OpenDaylight: Towards a model-driven SDN controller architecture. In Proc. of the 2014 IEEE 15th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), pages 1–6. IEEE.
[Mell and Grance 2011] Mell, P. and Grance, T. (2011). The NIST definition of cloud computing. Technical Report 800-145, National Institute of Standards and Technology (NIST).
[Moreira et al. 2009] Moreira, M. D. D., Fernandes, N. C., Costa, L. H. M. K., and Duarte, O. C. M. B. (2009). Internet do futuro: Um novo horizonte [The future Internet: a new horizon]. In Minicursos do Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC’2009), pages 1–59.
[Nayak et al. 2009] Nayak, A., Reimers, A., Feamster, N., and Clark, R. (2009). Reso-
nance: Dynamic access control for enterprise networks. In Proc. of the 1st ACM Work-
shop on Research on Enterprise Networking (WREN’09), pages 11–18, New York, NY,
USA. ACM.
[Open vSwitch 2015] Open vSwitch (2015). Production Quality, Multilayer Open Vir-
tual Switch. http://openvswitch.org/. Accessed: 2015-02-28.
[OpenStack 2015] OpenStack (2015). OpenStack: Open source cloud computing soft-
ware. https://www.openstack.org/. Accessed: 2015-02-28.
[Pan et al. 2011] Pan, J., Paul, S., and Jain, R. (2011). A survey of the research on future internet architectures. IEEE Communications Magazine, 49(7):26–36.
[Pan and Wu 2009] Pan, L. and Wu, H. (2009). Smart trend-traversal: A low delay and energy tag arbitration protocol for large RFID systems. In Proc. of IEEE INFOCOM 2009, pages 2571–2575. IEEE.
[Patel et al. 2013] Patel, P., Bansal, D., and Yuan, L. (2013). Ananta: Cloud scale load
balancing. In Proc. of SIGCOMM 2013, pages 207–218, New York, NY, USA. ACM.
[PCI-SIG 2010] PCI-SIG (2010). Single Root I/O Virtualization and Sharing Specification, Revision 1.1. PCI-SIG.
[Pfaff and Davie 2013] Pfaff, B. and Davie, B. (2013). The Open vSwitch Database Management Protocol. RFC 7047, RFC Editor.
[Pfaff et al. 2009] Pfaff, B., Pettit, J., Amidon, K., Casado, M., Koponen, T., and Shenker, S. (2009). Extending networking into the virtualization layer. In Proc. of the 8th ACM Workshop on Hot Topics in Networks (HotNets-VIII). ACM.
[Porras et al. 2015] Porras, P., Cheung, S., Fong, M., Skinner, K., and Yegneswaran, V.
(2015). Securing the Software-Defined Network Control Layer. In Proc. of the 2015
Network and Distributed System Security Symposium (NDSS).
[Porras et al. 2012] Porras, P., Shin, S., Yegneswaran, V., Fong, M., Tyson, M., and Gu, G. (2012). A security enforcement kernel for OpenFlow networks. In Proc. of the 1st Workshop on Hot Topics in Software Defined Networks (HotSDN’12), pages 121–126, New York, NY, USA. ACM.
[Richardson and Ruby 2008] Richardson, L. and Ruby, S. (2008). RESTful Web Services. O’Reilly Media, Inc.
[Rouse 2010] Rouse, M. (2010). Security as a Service (SaaS). http://searchsecurity.techtarget.com/definition/Security-as-a-Service. Accessed: 2015-03-01.
[Scott-Hayward et al. 2013] Scott-Hayward, S., O’Callaghan, G., and Sezer, S. (2013). SDN security: A survey. In IEEE SDN for Future Networks and Services (SDN4FNS), pages 1–7.
[Sezer et al. 2013] Sezer, S., Scott-Hayward, S., Chouhan, P., Fraser, B., Lake, D., Finnegan, J., Viljoen, N., Miller, M., and Rao, N. (2013). Are we ready for SDN? Implementation challenges for software-defined networks. IEEE Communications Magazine, 51(7):36–43.
[Sherwood et al. 2010] Sherwood, R., Gibb, G., Yap, K.-K., Appenzeller, G., Casado, M., McKeown, N., and Parulkar, G. M. (2010). Can the production network be the testbed? In Proc. of the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI’10), pages 1–6. USENIX Association.
[Shin et al. 2013] Shin, S., Porras, P., Yegneswaran, V., Fong, M., Gu, G., and Tyson,
M. (2013). FRESCO: Modular composable security services for software-defined net-
works. In 20th Annual Network and Distributed System Security Symposium (NDSS).
The Internet Society.
[Sridharan et al. 2011] Sridharan, M., Greenberg, A., Venkataramiah, N., Wang, Y., Duda, K., Ganga, I., Lin, G., Pearson, M., Thaler, P., and Tumuluri, C. (2011). NVGRE: Network virtualization using Generic Routing Encapsulation. IETF draft.
[Turner and Taylor 2005] Turner, J. and Taylor, D. (2005). Diversifying the Internet. In Proc. of the IEEE Global Telecommunications Conference (GLOBECOM’05), volume 2, 6 pp. IEEE.
[Wen et al. 2012] Wen, X., Gu, G., Li, Q., Gao, Y., and Zhang, X. (2012). Comparison
of open-source cloud management platforms: OpenStack and OpenNebula. In 2012
9th Int. Conf. on Fuzzy Systems and Knowledge Discovery, pages 2457–2461.
[Yang et al. 2004] Yang, L., Dantu, R., Anderson, T., and Gopal, R. (2004). RFC 3746 – Forwarding and Control Element Separation (ForCES) Framework. https://tools.ietf.org/html/rfc3746.
[Yeganeh et al. 2013] Yeganeh, S., Tootoonchian, A., and Ganjali, Y. (2013). On scalability of software-defined networking. IEEE Communications Magazine, 51(2):136–141.
[YuHunag et al. 2010] YuHunag, C., MinChi, T., YaoTing, C., YuChieh, C., and YanRen,
C. (2010). A novel design for future on-demand service and security. In 12th IEEE
Int. Conf. on Communication Technology (ICCT), pages 385–388.