
FlowStalker: Comprehensive Traffic Flow Monitoring on the Data Plane Using P4


Lucas Castanheira, Ricardo Parizotto, Alberto E. Schaeffer-Filho
Institute of Informatics, Federal University of Rio Grande do Sul, Porto Alegre, Brazil
{lbcastanheira, rparizotto, alberto}@inf.ufrgs.br

Abstract—Programmability has been extensively investigated to enable a more flexible operation of computer networks, and in this context the P4 language was designed entirely for programming the data plane. With programmable data planes comes the possibility of revisiting many inefficient approaches to networking problems, for example how we use the data plane to create an understanding of the network state. Traditional switches expose only a bare minimum of what happens in their forwarding plane, forcing us to resort to inefficient methods, such as snapshotting, to acquire the state of the network. We advocate that, by combining the programmable hardware on switches with each switch's specific view over its traffic, we are able to accomplish the same tasks in a more efficient and comprehensive manner. In this paper we present an efficient monitoring mechanism using programmable data planes. Our mechanism capitalizes on data plane programmability to perform tasks that are usually performed solely by the control plane (e.g., traffic monitoring and information gathering). Firstly, we present a monitoring system based on a two-phase monitoring scheme that runs directly on the data plane. Secondly, we introduce a flexible method for network data gathering, enabling "control-plane-free" consolidation of data from switches. Finally, we show techniques for using both the monitor and the gathering system to create constant, snapshot-free analysis of network traffic.
I. INTRODUCTION
Efforts to bring programmability to the data plane are becoming a viable alternative to traditional switches. The P4 programming language [1] is a step forward in reconciling the theory of programmable data planes with the needs of the industry. These efforts, however, suffer from the fact that application-specific integrated circuits (ASICs) are nowadays nearly unbeatable in terms of speed. To make programmability attractive we need a suitable trade-off between flexibility and efficiency. This has led to some compromises which restrict programmability on the data plane. On the plus side of these compromises, data plane programmability gives us a degree of control and proximity to the switches that was previously impossible. Even in Software-Defined Networking (SDN), our access to the inner workings of the switches was limited to their interface with the controller, meaning that most of the passing meta-data is lost. The fact that we discard relevant meta-data at switches means that we have to go to much greater lengths to recover it when we need it (e.g., for QoE prediction [2], anomaly detection [3]).

Usually, non-programmable data planes lack the resources to implement an in-depth analysis of network state in a scalable manner. Traditionally this is done by sampling packets [4], which may result in low accuracy of the sampled data. However, with the advent of the P4 language came the opportunity to revisit this problem with a more decentralized approach using a programmable data plane [5]. While P4 offers us the possibility of analyzing every packet of every flow, such a method is inadequate for the data plane and does not scale (as seen with Cisco's NetFlow [6], which proved too cumbersome for high-speed networking). Thus, we need to filter out irrelevant flows to minimize the impact, in terms of both memory and CPU, and track only the necessary information, which can later be derived into more meaningful features.

In this work we present FlowStalker, a comprehensive monitoring system that operates entirely on the data plane, relying on P4 switches to gather traffic data in a distributed manner. Firstly, we introduce an extensible and scalable monitoring approach that consists of two phases: (1) a proactive, lightweight phase that detects target flows based on heavy hitter detection strategies; and (2) a reactive, heavyweight phase, which captures and stores specific meta-data about these flows. Secondly, we propose a data gathering system that leverages data plane telemetry [5] to ascertain the in-locus state of a network by gathering monitored data from the switches and consolidating it at the controller. The core idea of our gathering system is to subdivide a network into clusters and enable cluster data to be collected quickly should we need it. To do so, we created a special packet, which we call the Crawler Packet (Cp), that enables information gathering to take place as a cluster-specific event, without the controller needing to interact with all of the cluster members. As opposed to the usual method for doing this in SDN, which is to poll the switches individually and wait for their responses, our method acts with very little intervention from the controller.

Our results in monitoring a reduced set of features show that we can achieve meaningful monitoring within the data plane with relatively low impact on the throughput of monitored flows. We also evaluated our strategy on different topologies with different cluster sizes, and it proved very efficient in reducing message exchanges and connection overhead on the control path between the controller and the switches.

The rest of this paper is structured as follows. In Section II we present an overview of programmable networks and the P4 language. Section III gives an in-depth analysis of FlowStalker. Section IV presents our experimental evaluations. Section V discusses related work, and finally Section VI presents concluding remarks and future work.



II. BACKGROUND

Software-based paradigms for networking enable decoupling software solutions from the hardware on which they execute, making the management and operation of the network infrastructure more flexible and adaptive. Software-Defined Networking (SDN) promotes the separation of the control and data planes, transforming the switch into a bare packet forwarding device [7]. Emerging needs for both operating and managing networks, however, can benefit from enhanced functionalities on the data plane, such as ease of deployment for new services or protocol extensibility. This has motivated the idea of using P4 to create a programmable network core.

P4 [1] is a programming language to describe the behavior of network switches. It builds its interface upon a set of features common to many existing switch architectures. This ensures flexibility for deployment and the high level of abstraction needed to make the language compatible with current designs. In P4, a parser handles every incoming packet, mapping packet headers into P4 data structures. Packets are sent to a match+action table, which matches the packets against control-plane-defined rules and acts upon them based on P4-programmed actions. These actions can manipulate headers and store information in persistent registers. P4 programs can be compiled to specific target switches, with each construct of the program being mapped into target-specific primitives and their architectural counterparts.
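As a concrete, if simplified, illustration of this pipeline, the following is a minimal P4_16 program for the v1model architecture, written for this text rather than taken from the paper. It shows the three elements just described: a parser that maps bytes into headers, a match+action table populated by the control plane, and an action that keeps persistent state in a register.

```p4
#include <core.p4>
#include <v1model.p4>

header ethernet_t {
    bit<48> dstAddr;
    bit<48> srcAddr;
    bit<16> etherType;
}

struct headers_t  { ethernet_t ethernet; }
struct metadata_t { }

parser MyParser(packet_in pkt, out headers_t hdr,
                inout metadata_t meta, inout standard_metadata_t std) {
    state start {
        pkt.extract(hdr.ethernet);     // map raw bytes into a P4 header
        transition accept;
    }
}

control MyVerify(inout headers_t hdr, inout metadata_t meta) { apply { } }

control MyIngress(inout headers_t hdr, inout metadata_t meta,
                  inout standard_metadata_t std) {
    register<bit<32>>(1) seen_packets;  // persistent state across packets

    action forward(bit<9> port) {
        std.egress_spec = port;
        bit<32> n;
        seen_packets.read(n, 0);        // registers survive between packets
        seen_packets.write(0, n + 1);
    }

    table l2_fwd {                      // rules are installed by the control plane
        key = { hdr.ethernet.dstAddr : exact; }
        actions = { forward; NoAction; }
        default_action = NoAction();
    }

    apply { l2_fwd.apply(); }
}

control MyEgress(inout headers_t hdr, inout metadata_t meta,
                 inout standard_metadata_t std) { apply { } }

control MyCompute(inout headers_t hdr, inout metadata_t meta) { apply { } }

control MyDeparser(packet_out pkt, in headers_t hdr) {
    apply { pkt.emit(hdr.ethernet); }
}

V1Switch(MyParser(), MyVerify(), MyIngress(), MyEgress(),
         MyCompute(), MyDeparser()) main;
```

Against such a program, the control plane installs entries in l2_fwd at runtime (e.g., via BMv2's CLI), while seen_packets persists entirely on the data plane.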
III. FLOWSTALKER

In this section, we present a novel mechanism based on P4 to monitor stateful data on the data plane, called FlowStalker. Next, we discuss the design of our solution and highlight the advantages it brings over traditional monitoring strategies.

A. Data Plane Monitoring

Network traffic analysis at line rate cannot cope with inspecting and operating on every packet. To make our system feasible for data plane deployment, we separated monitoring into two phases: the Proactive phase filters out irrelevant flows, and the Reactive phase more closely monitors the relevant ones. This requires defining low and high thresholds on switches. Thresholds are application-specific and depend on factors such as network traffic (and should be set accordingly).

Proactive Phase: The proactive phase intercepts all incoming packets in the switch. Because it operates on every packet, we need a simple, lightweight procedure. We perform this by identifying every packet's respective flow and incrementing a packet counter, which is checked against the low threshold. When a counter crosses the threshold, the flow is classified as a target, and the reactive phase starts to handle its monitoring.
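The paper does not publish its source, so the fragment below is our P4_16 sketch of how the proactive phase could look inside the ingress control of a v1model program such as the skeleton in Section II. It assumes an IPv4 header has been parsed; FLOW_SLOTS and LOW_THRESHOLD are illustrative names and values, the latter standing for the application-specific low threshold.

```p4
const bit<32> FLOW_SLOTS    = 1024;  // size of the per-flow state arrays (illustrative)
const bit<32> LOW_THRESHOLD = 100;   // application-specific; set according to traffic

register<bit<32>>(FLOW_SLOTS) proactive_count;  // lightweight per-flow packet counters
register<bit<1>>(FLOW_SLOTS)  is_target;        // 1 = flow promoted to the reactive phase

apply {
    // identify the packet's flow by hashing its source/destination addresses
    bit<32> idx;
    hash(idx, HashAlgorithm.crc32, 32w0,
         { hdr.ipv4.srcAddr, hdr.ipv4.dstAddr }, FLOW_SLOTS);

    // increment the flow's packet counter
    bit<32> n;
    proactive_count.read(n, idx);
    n = n + 1;
    proactive_count.write(idx, n);

    // once the low threshold is crossed, classify the flow as a target
    if (n > LOW_THRESHOLD) {
        is_target.write(idx, 1w1);
    }
}
```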
Reactive Phase: The reactive phase is where our heavyweight monitoring occurs. Since it is applied only to flows already identified as targets, the heavy monitoring employed has a reduced footprint on both memory and CPU. It consists of extracting and storing per-flow and per-packet data from flows:

• per-flow: metrics are generated once and updated for each packet of the target flow during the reactive phase. For instance, beginning of monitoring, packet and byte counts, moving average of RTT;
• per-packet: a new instance of data is generated and stored for each packet pertaining to a target flow. These metric values are inherently larger than per-flow data. Hence, a more in-depth record of the flow is possible, maintaining data, such as timestamps, from which many other interesting features can later be derived for use in controller applications such as classification algorithms or QoS monitoring.

The reactive phase keeps a hash table in the switch to encode data, in which a flow (defined by its source and destination IP addresses¹) is assumed to be the lookup key. Each entry in the hash table is composed of an array of P4 registers. The rationale behind this abstract representation is to encode new values of a given metric of interest using a lightweight allocation procedure, which manages both the allocation of entries to flows and of the registers inside an entry. The allocation procedure necessary to encode each type of metric varies, but can generally be implemented through bitwise operations. We advocate that this offers a space-efficient data encoding abstraction, while also presenting low computational overhead (as shown in Section IV). This data structure can be used to store a history of per-flow and per-packet metrics, such as packet sizes and packet arrival timestamps. A small sample of raw features that can be tracked, and the corresponding derivable metrics, is presented in Table I.

¹ Although our implementation uses only source and destination IP addresses, it would be possible to further differentiate the flows, for example, by considering source and destination ports if necessary.

TABLE I
DERIVATIONS OF RAW METRICS

             Raw            Derivables
Per-Flow     Byte Counts    Bytes/Second
             Packet Counts  Packet Drops, Flow Error Rate
             Flow Start     Flow Duration
Per-Packet   Packet Size    Packet Length Variance
             Timestamps     Processing Latency, Inter-Packet Arrival Times
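The paper leaves the allocation procedure abstract. One realization consistent with the description, sketched below with illustrative names, gives every flow slot a fixed block of cells inside a single long register array and keeps a per-flow write cursor, so that appending a per-packet value costs one hash, one indexed write, and a couple of bitwise operations (the wrap-around AND assumes CELLS_PER_FLOW is a power of two).

```p4
const bit<32> CELLS_PER_FLOW = 16;   // per-packet history kept per flow (illustrative)

register<bit<32>>(FLOW_SLOTS) cursor;                    // next free cell, per flow
register<bit<32>>(FLOW_SLOTS * CELLS_PER_FLOW) history;  // each entry = a block of cells

apply {
    // idx is the flow's slot, computed by hashing as in the proactive phase
    bit<32> c;
    cursor.read(c, idx);

    // cell address = flow block base + cursor, wrapped with a bitwise AND
    bit<32> cell = (idx << 4) | (c & (CELLS_PER_FLOW - 1));  // << 4 == * 16
    history.write(cell, std.packet_length);                  // e.g. per-packet size

    cursor.write(idx, c + 1);
}
```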
Fig. 1. FlowStalker overview: monitored metrics are maintained at the switches. After a warning is sent to the controller (step 1), Crawler Packets are sent to each cluster to collect stateful data from individual switches (steps 2-3). After a Cp traverses the cluster, it returns with the gathered information (steps 4-5).

We describe next how the abstraction mentioned above could be used to encode packet arrival timestamps of a particular flow, which is relevant information for SDN controller applications [8]. Given the static and thus limited length of P4 data structures, we need the collected timestamps to be stored efficiently. For this, we save in the table only the difference between the beginning of monitoring (i.e., the Base timestamp) and each arriving packet's timestamp. For example, Figure 2 presents an overview of how it is possible to record packet arrival timestamps. The representation of the array reflects the implementation through a sequence of P4 registers. At switch time 15, the first monitored packet arrives and the per-flow metric Base is set. When the next packet arrives, at t1, only its offset from Base is stored in the register. The same applies to the packet arriving at timestamp t2 = 24, for which the offset from the base timestamp is calculated and bit 9 is marked; t3 follows the same pattern.

Fig. 2. Tracking packet arrival timestamps (Base = 15; each subsequent arrival ti is encoded by marking the bit at position ti − Base).
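A hedged P4_16 rendering of this encoding, continuing the illustrative skeleton above: we use v1model's 48-bit ingress timestamp and a 64-bit per-flow bitmap in which bit i means "a packet arrived i time units after Base"; these widths are our choices, not the paper's.

```p4
register<bit<48>>(FLOW_SLOTS) base_ts;   // per-flow Base timestamp (0 = not yet set)
register<bit<64>>(FLOW_SLOTS) arrivals;  // bitmap of arrival offsets from Base

apply {
    bit<48> now = std.ingress_global_timestamp;
    bit<48> base;
    base_ts.read(base, idx);             // idx: the flow's slot, as before

    if (base == 0) {
        base_ts.write(idx, now);         // first monitored packet: set Base
    } else {
        bit<48> offset = now - base;     // store only the difference from Base
        if (offset < 64) {               // the bitmap covers a bounded window
            bit<64> bits;
            arrivals.read(bits, idx);
            arrivals.write(idx, bits | ((bit<64>)1 << (bit<8>)offset));
        }
    }
}
```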
After extracting data from the flow, the reactive phase increments the packet counter of this flow (previously maintained by the proactive phase) and compares it against the high threshold which, once crossed, triggers a warning (Figure 1, step 1) to the controller about the violating flow; the controller in turn starts the data gathering process (described next).
B. Aggregate Data Plane Information Gathering

In this section, we discuss our strategy for efficiently collecting the data that the SDN controller will need. As opposed to traditional polling strategies, in which the controller obtains data snapshots from individual switches, we present a decentralized gathering mechanism in the data plane. We perform network telemetry to consolidate the stateful data stored on switches.

Next, we discuss how we divide the network into clusters and set up routes inside them to keep telemetry efficient and less prone to bottlenecking at both the controller and the link between a switch and its controller (the control path). We then show how information is aggregated from the switches within a specific cluster and sent back to the control plane.

Clustering Method: We divide the data plane switches into logical groups, called clusters, to modularize data collection. This creates the neighborhoods in which data-collecting packets (Crawler Packets, described next) circulate. The process of partitioning the network into clusters is accomplished by a Markov clustering algorithm, described in [9], that analyzes the "closeness" of nodes (i.e., switches) according to their connections (i.e., links between switches). The weights of these connections should be set according to a metric that produces the best results for our application, e.g., network latency. After clusters are formed, we use a DFS algorithm on each cluster to determine a single route that spans the whole cluster. This creates forwarding rules that match crawler packets for every switch in a cluster. Since a switch never receives more than one crawler packet from the same neighbor, the rules act by matching the source IP and forwarding the packet to the next hop as determined by the DFS.
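On each switch, these DFS-derived rules can be ordinary match+action entries. The sketch below is ours, not the paper's: it assumes a custom EtherType marks Cps and, because our sketch's Cp carries only an L2 header, it matches the previous hop's MAC address where the paper matches the source IP.

```p4
const bit<16> ETHERTYPE_CP = 0x88B5;   // assumed marker for Crawler Packets

action cp_forward(bit<9> port) {
    std.egress_spec = port;            // next hop on this cluster's DFS route
}

table cp_route {
    // a switch never receives two Cps from the same neighbor, so the
    // previous hop's source address identifies the next hop uniquely
    key = { hdr.ethernet.srcAddr : exact; }
    actions = { cp_forward; NoAction; }
    default_action = NoAction();
}

apply {
    if (hdr.ethernet.etherType == ETHERTYPE_CP) {
        cp_route.apply();              // entries installed per cluster by the controller
    }
}
```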
Fig. 3. Structure of a Crawler Packet: an L2 header; a CP-Info header carrying the InfoType, FlowOffset, and DataOffset fields; and a read/write area with one segment per switch (Switch 1 ... Switch N).

Crawler Packet: For consolidating information from each cluster, we propose an abstraction called the Crawler Packet (Cp). Crawler Packets are created by the controller in response to a warning indicating that a high threshold has been exceeded, as described in Section III-A (and shown in Figure 1, step 1). They are then injected through a control path into predefined entry points in each cluster from which the controller wants to get information (Figure 1, step 2). This abstraction allows the controller to trigger a distributed data gathering mechanism that efficiently consolidates data from the data plane. Once inserted, a Cp is routed through the cluster on the switches' usual data path (fast path), with each hop appending information into specific data segments of the crawler packet, effectively increasing its size by the size of a block of data (Figure 1, steps 3 and 4). Once the Cp completes its tour through the cluster, it returns to the controller with all the information gathered from the cluster (Figure 1, step 5), having traversed the SDN control path only twice (first at its insertion and second at its recovery).
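In P4_16 terms, the Cp could be declared roughly as below. Only the field names (InfoType, FlowOffset, DataOffset) come from the paper's Figure 3; all widths, the block layout, and the push-based append are our assumptions.

```p4
header cp_info_t {
    bit<8>  infoType;    // type of measurement carried; fixed when the Cp is injected
    bit<16> flowOffset;  // which monitored flow/entry the switches should read
    bit<16> dataOffset;  // how many per-switch blocks have been appended
}

header cp_block_t {      // one chunk of the read/write area
    bit<48> data;        // e.g. one 6-byte timestamp per switch
}

// inside headers_t: cp_info_t cp_info; cp_block_t[16] cp_blocks;
// (the stack bound reflects the MTU / cluster-size trade-off discussed below)

// Ingress fragment: each hop grows the Cp by one block, then cp_route forwards it.
action cp_append(bit<48> local_value) {
    hdr.cp_blocks.push_front(1);          // make room at the front of the write area
    hdr.cp_blocks[0].setValid();          // pushed elements start out invalid
    hdr.cp_blocks[0].data = local_value;  // this switch's measurement
    hdr.cp_info.dataOffset = hdr.cp_info.dataOffset + 1;
}
```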
The purpose of doing so is to avoid the many limitations that the control path has when compared to normal forwarding on the fast path [10]. By mainly using the fast path, we not only reduce the time it takes for telemetry to be accomplished, but we also avoid stressing the low-throughput control path of the cluster switches. Additionally, the P4 language offers the possibility of shedding communication overhead by keeping operational information locally on switches (as custom protocols) rather than explicitly on packets. As such, we created the Cp as a customizable abstraction that drastically reduces connection overhead when compared to the usual TCP/IP-based manner in which the SDN controller polls switches. The Cp has a Layer 2 header, a Cp header that stores control information for the switches, and a read/write area to be filled by the switches within the cluster (Figure 3, showing a Cp for N switches, N being the number of switches in a cluster). This read/write area is divided into equal chunks, and each fragment will contain the information of a single switch. Note that the larger the value of N, the smaller the chunks can be, because as N grows the Cp has to accommodate information from more switches into the same MTU. In our implementation, the read/write area can only store the type of measurement specified in the InfoType field, which cannot be changed after the insertion of the Cp. We avoid mixing measurements inside the Cp so that we can homogeneously divide the blocks in the read/write area. A 120-byte read/write area, for example, could be used to gather 4 timestamps (6 bytes each) from each switch in a cluster with 5 switches.

IV. STRATEGY EVALUATION

To evaluate FlowStalker, we observed both the monitoring and gathering systems on the BMv2 P4 software switch². The experiments were performed on a Linux virtual machine with 6 logical CPUs at 3.20 GHz and 8 GB of RAM.

² http://github.com/p4lang/behavioral-model

A. Monitoring

Since the monitoring system runs on the data plane, its impact on throughput is a defining factor in evaluating FlowStalker. We measured the achievable throughput of a TCP connection monitored by the heavyweight phase by running iPerf³ between two hosts connected by two BMv2 switches. Additionally, since our system can monitor a wide range of metrics, we incrementally added more items to be measured on a per-packet basis. The measurements were made with both switches actively monitoring the flow with FlowStalker. The reason for measuring the monitoring delay twice is to better replicate a real-life scenario where only the two border switches actively monitor traffic, whereas there can be many core switches in between (not present in our experiment) that are not running FlowStalker and thus cause no further monitoring delays.

³ http://iperf.fr/

Fig. 4. Throughput of monitored vs. unmonitored flows (Unmonitored vs. FlowStalker; throughput in Mbps as the number of monitored items grows from 1 to 10).

Figure 4 shows the impact of FlowStalker in terms of throughput between the two end hosts as we increase the number of monitored and maintained metrics in each switch. As we can see, there was an overall drop of about 12% in throughput for the FlowStalker-monitored flows. The change in the number of monitored items caused no perceptible degradation in throughput. This experiment suggests that the impact on the throughput of the switch is due to the fixed cost of the procedure that accesses the hash table and discovers where to store the information about a specific flow; conversely, increasing the number of monitored items (and the number of operations to compute and store them) did not cause any perceptible impact in terms of throughput.
B. Gathering

In our gathering system, most of the information heavy lifting is done by the data plane (i.e., the majority of links a crawler packet traverses are intra-cluster, not involving the control path). Given that our goals are to minimize the information exchanged between the planes and to minimize the use of the control path [10], the ideal benchmark is to measure how much we use the control path in our strategy and how many switches are affected by doing so. Therefore, our experiment is as follows. We defined ring topologies with a varying number of switches inside them, with a 1 ms artificial delay on each and every link. We created two experiments, to measure communication overhead and latency.
Communication Overhead: Assume we want to acquire data from every switch in a topology; we can do this either by (A) FlowStalker or (B) BMv2's Control Plane API (similar to an OpenFlow agent). We evaluate how many bytes are exchanged between the planes through the control paths of the switches for both strategies. Additionally, we do this for three different sizes of data: a light workload (4 bytes), a medium workload (512 bytes), and a heavy workload (8192 bytes). It is important to note that while the OpenFlow-like strategy polls every single switch with a request through the control path (therefore stressing the link and the switch), the FlowStalker strategy stresses only two switches, the input and the output ones.

Fig. 5. Bytes exchanged through the control path (BMv2 Thrift vs. Crawler Packet, for the light, medium, and heavy workloads, over topologies of up to 10 switches).

As we can see in Figure 5, the overhead of our crawler packet is considerably smaller than that of the OpenFlow-like strategy. This strengthens our claim that usual gathering methods (e.g., OpenFlow) not only depend unnecessarily on heavyweight application protocols over TCP/IP to gather data from the data plane, but are also inefficient. Instead, a much more adequate strategy is to leverage programmability and put most of the information required for communication not in packet headers or application-layer protocols, but in the P4 program itself.

Latency: We wanted to measure the end-to-end delay of the Cp under different cluster sizes. For this, we considered gathering the workloads above (i.e., 4 bytes, 512 bytes and 8192 bytes) from each switch in a test cluster. We repeated the experiment with the number of switches in said cluster ranging from one to ten. Figure 6 shows the linear increase in Cp end-to-end delay when inserted into these different test clusters. We can achieve considerably fast communication with little overhead on the control path (as seen in the previous experiment). Should an application need a faster response from the Cp, smaller cluster sizes can be defined, as this reduces the tour latency for the Cp (while still keeping the overhead small).

Fig. 6. Tour latency (end-to-end delay) measured according to the number of switches, for the heavy, medium, and light workloads, with cluster sizes from 1 to 10.

V. RELATED WORK

Monitoring strategies vary, but more recent approaches tend to be more sophisticated systems reliant on the programmability of both control and data planes.

Usual Monitoring Strategies: sFlow [4], short for "sampled flow", is a monitoring strategy that exports data about the network based on a sampling rate. This sampling rate determines when a random packet will be exported to be analyzed along with all relevant meta-data. NetFlow [6], unlike sFlow, aggregates the monitoring under the concept of flows. NetFlow was designed to operate by monitoring all packets and sending digests to an external server through the NetFlow protocol. However, NetFlow resorts to sampling when performance issues arise from the continuous monitoring, thus becoming similar to sFlow.

Monitoring Strategies on Non-Traditional Data Planes: In-band Network Telemetry (INT) [5], [11] is a system that works by appending meta-data to the headers of transient packets. INT headers are manipulated by switches along the route, which append and modify meta-data. The header is then extracted from the packet prior to its delivery and sent to a sink. If this sink is the controller, this leads to a constant involvement of the control plane, as opposed to FlowStalker, where we involve the control plane only when necessary, promoting more separation between the planes.
Our work also shares similarities with Marple, introduced in [12]. Both revolve around the idea that the network core needs to be better equipped to track more complex aspects of its internals than current switches do, and both propose doing so at line rate. The two approaches share many similarities in features and expressiveness for monitoring. While Marple has a more robust orchestration of its monitoring, to be implemented on theoretical hardware, FlowStalker is more bare-bones, being implemented on existing hardware.

Finally, another monitoring solution similar to ours is StreaMon [13], which can also be used to monitor the data plane by relying on programming abstractions. However, because FlowStalker is implemented in P4, it has a much stricter separation of concerns when it comes to processing data on the data plane. While this processing is common practice in StreaMon, sometimes with the help of hardware accelerators, FlowStalker's approach is to delegate more intensive processing to the control plane through the warning step.

VI. CONCLUSION & FUTURE WORK

Network programmability promotes a more flexible operation of computer networks and makes it possible to revisit many inefficient approaches to networking problems, for example, how to gather data plane information to create a better understanding of the network state. In this paper, we proposed a mechanism to comprehensively monitor relevant flows passing by a switch and expose this information, allowing for a much more transparent data plane than that achieved with black-box switches and sampling strategies. We also created a gathering system to efficiently consolidate the monitored information when necessary. This gathering system relies on customized protocols and thus minimizes the communication overheads present in OpenFlow-like agents. These two subsystems comprise FlowStalker. We prototyped FlowStalker with the P4 language.

Through simulations, we found that: (1) the monitoring system enables accurate monitoring of the data plane with relatively low overhead, achieved by monitoring only relevant flows; and (2) FlowStalker's gathering system considerably reduces the number of bytes exchanged on the control path between the SDN controller and the switches, while operating almost entirely on the fast path (leading to many benefits, as seen for example in [10]).

As future work, we intend to create a control plane counterpart for FlowStalker, which will rely on data monitored by the data plane FlowStalker (i.e., the one presented in this paper). We expect to create a synergy between these two systems by allowing each to separately perform the tasks more suited to it, for example, allowing more sophisticated data processing algorithms to be run on the control plane, and periodically exchanging information through the gathering system.

ACKNOWLEDGMENTS

Alberto E. Schaeffer-Filho would like to thank CNPq for research grants ref. 311088/2015-5 and 407899/2016-2. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, and also by NSF CNS-1740911 and RNP/CTIC (P4Sec) grants.

REFERENCES

[1] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese, and D. Walker, "P4: Programming protocol-independent packet processors," SIGCOMM Comput. Commun. Rev., vol. 44, no. 3, pp. 87-95, Jul. 2014. [Online]. Available: http://doi.acm.org/10.1145/2656877.2656890
[2] V. Vasilev, J. Leguay, S. Paris, L. Maggi, and M. Debbah, "Predicting QoE factors with machine learning," in 2018 IEEE International Conference on Communications (ICC), May 2018, pp. 1-6.
[3] F. Yang, Y. Jiang, T. Pan, and X. E., "Traffic anomaly detection and prediction based on SDN-enabled ICN," in 2018 IEEE International Conference on Communications Workshops (ICC Workshops), May 2018, pp. 1-5.
[4] sFlow, "sFlow - making the network visible," https://sflow.org/, last accessed Feb. 17, 2019.
[5] C. Kim, A. Sivaraman, N. Katta, A. Bas, A. Dixit, and L. J. Wobker, "In-band network telemetry via programmable dataplanes," in ACM SIGCOMM, 2015.
[6] Cisco, "Introduction to Cisco IOS NetFlow - a technical overview," https://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-netflow/prod_white_paper0900aecd80406232.html, last accessed Feb. 17, 2019.
[7] N. Feamster, J. Rexford, and E. Zegura, "The road to SDN: An intellectual history of programmable networks," SIGCOMM Comput. Commun. Rev., vol. 44, no. 2, pp. 87-98, Apr. 2014. [Online]. Available: http://doi.acm.org/10.1145/2602204.2602219
[8] T. Mizrahi and Y. Moses, "The case for data plane timestamping in SDN," in 2016 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). IEEE, 2016, pp. 856-861.
[9] S. Van Dongen, "Graph clustering via a discrete uncoupling process," SIAM J. Matrix Anal. Appl., vol. 30, no. 1, pp. 121-141, Feb. 2008. [Online]. Available: http://dx.doi.org/10.1137/040608635
[10] A. Wang, Y. Guo, F. Hao, T. Lakshman, and S. Chen, "Scotch: Elastically scaling up SDN control-plane using vswitch based overlay," in Proceedings of the 10th ACM International Conference on Emerging Networking Experiments and Technologies (CoNEXT '14). New York, NY, USA: ACM, 2014, pp. 403-414. [Online]. Available: http://doi.acm.org/10.1145/2674005.2675002
[11] N. Van Tu, J. Hyun, and J. W.-K. Hong, "Towards ONOS-based SDN monitoring using in-band network telemetry," in 2017 19th Asia-Pacific Network Operations and Management Symposium (APNOMS). IEEE, 2017, pp. 76-81.
[12] S. Narayana, A. Sivaraman, V. Nathan, P. Goyal, V. Arun, M. Alizadeh, V. Jeyakumar, and C. Kim, "Language-directed hardware design for network performance monitoring," in SIGCOMM 2017, Los Angeles, CA, Aug. 2017.
[13] M. Bonola, G. Bianchi, G. Picierro, S. Pontarelli, and M. Monaci, "StreaMon: A data-plane programming abstraction for software-defined stream monitoring," IEEE Transactions on Dependable and Secure Computing, vol. 14, no. 6, pp. 664-678, Nov. 2017.
