Edge Computing for 5G Networks
Version 1.0
DOI 10.5281/zenodo.3698117
URL https://doi.org/10.5281/zenodo.3698117
Table of Contents
Executive Summary
1. Introduction - Why Edge Computing is key for 5G and beyond
   1.1 What is Edge Computing
   1.2 Why is Edge Computing critical for 5G
   1.3 Where is the Edge of the Network
   1.4 What does the Edge look like?
   1.5 Introduction to the 5G Edge Cloud Ecosystem
2. Key Technologies for 5G on Edge Computing
   2.1 Resources Virtualization framework
      2.1.1 Virtual Machines and Containerization
      2.1.2 Lightweight virtualization
   2.2 Orchestration framework
      2.2.1 Kubernetes
      2.2.2 OSM
      2.2.3 ONAP
   2.3 Networking programmability framework
      2.3.1 SDN for Edge Computing
      2.3.2 Data plane programmability
   2.4 Acceleration at the Edge: The Need for High Performance at the Edge
      2.4.1 FPGA as a Platform to Accelerate Edge Computing
      2.4.2 Direct Memory Access on FPGA
      2.4.3 Seamless Virtualized Acceleration Layer
   2.5 Operations at the Edge
      2.5.1 From DevOps to Dev-for-Operations
      2.5.2 DevSecOps and Edge Computing
      2.5.3 Monitoring
3. Edge Computing and Security
   3.1 Key security threats induced by virtualization
   3.2 Security of the MEC infrastructure
      3.2.1 MEC specific threats
      3.2.2 E2E slice security in the context of MEC
   3.3 New trends in virtualization techniques
   3.4 Integrity and remote attestation
   3.5 Remediation to introspection attacks by trusted execution environments
   3.6 Conclusions on security
4. The Battle for the Edge
   4.1 Edge Computing Ecosystem
   4.2 Coopetitive Landscape
      4.2.1 Competitive Scenarios
      4.2.2 Partially collaborative scenarios
      4.2.3 Fully Collaborative Scenario
      4.2.4 Complementary players and mixed scenarios
Executive Summary
The EU-funded research projects under the 5G PPP initiative 1 started back in 2015, when
the so-called Phase 1 of research activities was launched to provide the first 5G concepts.
This was followed by a second phase in 2017, in which the first mechanisms were
designed and significant technological breakthroughs were achieved. Those projects
laid the basis for the architecture and services of 5G and beyond systems. With
Phase 3 2, a new set of projects was launched in 2018, starting with the three infrastructure
projects, followed by the three cross-border automotive projects, the advanced
validation trials across multiple vertical industries, and the projects dealing with the
longer-term 5G vision. 5G PPP is currently onboarding the newest projects, the latest of
which are expected to start in January 2021 and deal with smart connectivity beyond 5G
networks.
It is therefore a good time to review how 5G PPP projects have been using and enhancing
Edge Computing for 5G and beyond systems, based on the information shared by the
projects themselves. But before delving into that analysis, this whitepaper presents a
rationale for why Edge Computing and 5G go hand in hand, and how the latter can benefit
most from the former.
Section 1 of this whitepaper presents a brief introduction to the Edge Computing concept,
with some perspective linking it to the explosion of data usage driven by other technologies
such as Artificial Intelligence (AI) and the relevance of Data Gravity. It also elaborates on
how Edge Computing supports the 5G value proposition. It then goes over Edge locations
and what an Edge deployment could look like, and finishes with the Edge Cloud ecosystem,
introducing the roles of the main actors in the value chain.
Section 3 analyses the role of security in Edge Computing, reviewing key security threats,
how they can be remediated, and how some 5G PPP projects have addressed these
problems.
Section 4 presents the so-called Battle for the Edge that many companies are currently
fighting, trying to gain the best possible position in the ecosystem and value chain. It
describes the different actors and roles for these companies, and then describes the
1 https://5g-ppp.eu/
2 https://5g-ppp.eu/5g-ppp-phase-3-projects/
“Coopetitive Landscape”, analysing both scenarios where one actor can take the dominant
role and other more collaborative scenarios.
These sections of the whitepaper provide the context and motivation for using Edge
Computing for 5G, the technology and security landscape, and the options for building an
ecosystem around Edge Computing for mobile networks, preparing the reader for the
main section of the whitepaper.
Section 5 turns to the main focus of the whitepaper, describing the approach of 5G PPP
projects to Edge Computing and 5G. This analysis is based on 17 answers from Phase 2
and Phase 3 5G PPP projects to an Edge Computing questionnaire created specifically for
this whitepaper. The questionnaire asked about the type of infrastructure deployed, the
location of the Edge used in the project, the main technologies used for these
deployments, the use cases and vertical applications deployed at the Edge, and the
drivers used to select them. As the reader will see, Edge Computing solutions have
been used extensively by many 5G PPP projects and for diverse use cases. The analysis
of the received answers provides the reader with useful insight into the value
of Edge Computing in real networks.
We are confident that this whitepaper will be of interest to the whole 5G research
community and will serve as a useful guideline and reference for best practices used by
5G PPP projects.
So, Edge Computing reduces the distance between Users (Applications) and Services
(Data). But the question remains: why has Edge Computing become such a popular
technology trend in recent years?
[Figure content omitted: a table of the major Big Data phases and their characteristic techniques, ranging from RDBMS & data warehousing, Extract-Transform-Load, Online Analytical Processing, dashboards & scoreboards and data mining & statistical analysis, through information retrieval and extraction, opinion mining, question answering, web analytics, social media analytics and social network analysis, to location-aware, person-centred, context-relevant and spatial-temporal analysis, mobile visualization and Human-Computer Interaction.]
Figure 2: Big Data major phases from the Enterprise Big Data Professional Guide 4
3 https://github.com/lf-edge/glossary/blob/master/edge-glossary.md
4 https://www.bigdataframework.org/short-history-of-big-data/
While the beginning of Big Data can be set in the 1990s, it is really in the last decade that
the Data explosion took place.
The application of AI to Big Data increased the need for larger data sets to train inference
models. The public cloud has played an instrumental role in this space, but the more a
data set grows, the more difficult it is to move the data.
That is why Dave McCrory introduced the concept of “Data Gravity” in 2010 5. The idea
is that data and applications are attracted to each other, similarly to the attraction between
objects as described by the Law of Gravity.
In such an environment Edge Computing plays a key role as the enabling technology to
shorten the distance between Users (Apps) and Services (Data) and enable guaranteed
Latencies and Throughputs, as required by services and applications. These requirements
have become apparent especially with the digitization of Verticals such as Industry 4.0,
Collaborative and Automated Driving, E-Health etc.6
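The latency benefit of shortening the user-to-service distance can be bounded from below by propagation delay alone. The following sketch illustrates this; the distances and the fibre propagation speed are illustrative assumptions, and real round-trip times add queuing, serialization and routing delays on top of this bound:

```python
# Rough lower bound on round-trip time: RTT >= 2 * distance / signal speed.
# Light in fibre travels at roughly 2/3 of c (~200,000 km/s); this is an
# approximation used only to show orders of magnitude.

SPEED_IN_FIBRE_KM_S = 200_000  # assumed propagation speed in km/s

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1000

# Illustrative distances (assumptions): an on-premise site, a metro edge
# site and a distant centralized cloud region.
for site, km in [("on-premise edge", 1), ("metro edge", 50), ("central cloud", 2000)]:
    print(f"{site}: >= {min_rtt_ms(km):.2f} ms")
```

Even this idealized bound shows why a service with a single-digit-millisecond latency budget cannot be served from a region thousands of kilometres away, regardless of how the rest of the network performs.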
5 https://datagravitas.com/2010/12/07/data-gravity-in-the-clouds/
6 5G PPP, White paper, “Empowering Vertical Industries, Through 5G Networks”, https://5g-ppp.eu/wp-
content/uploads/2020/09/5GPPP-VerticalsWhitePaper-2020-Final.pdf
7 https://buildfire.com/app-statistics/
• 4G Networks were designed for Data services, modelling Voice service as Data
(VoLTE), while most of the traffic in 4G Networks is Video (Video will represent
82% of all IP traffic in 2021)8.
Had the Telco industry known that Video was to account for over 80% of traffic,
the design of 4G Networks would most probably have been different, e.g., introducing
Content Delivery Networks (CDNs) into the architecture.
The reality is that it is impossible to predict how users will drive the usage of
newly introduced mobile networks. Therefore, for 5G Networks, 3GPP has taken a
service-oriented approach, introducing new key concepts such as Network Slicing and a
Service-Based Architecture built on microservices, offering the possibility to create a Virtual
Network for a specific Service so as to deliver the best user experience to customers.
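The idea of dedicating a virtual network to a specific service can be illustrated with a toy slice selector. The slice categories loosely mirror the usual 5G usage scenarios, but the numeric thresholds below are invented for illustration and are not 3GPP-defined values:

```python
# Toy model: pick a network slice type from a service's requirements.
# The thresholds are illustrative assumptions, not standardized limits.

def select_slice(max_latency_ms: float, peak_rate_mbps: float, devices: int) -> str:
    if max_latency_ms <= 10:
        return "URLLC"   # ultra-reliable low-latency communication
    if peak_rate_mbps >= 100:
        return "eMBB"    # enhanced mobile broadband
    if devices >= 10_000:
        return "mMTC"    # massive machine-type communication
    return "default"

print(select_slice(5, 10, 1))         # remote control -> URLLC
print(select_slice(50, 500, 1))       # UHD video -> eMBB
print(select_slice(1000, 1, 50_000))  # sensor network -> mMTC
```

In a real network this mapping is far richer (isolation, reliability, coverage, charging), but the principle is the same: the service's requirements, not a one-size-fits-all design, determine the virtual network it receives.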
The 5G Network value proposition relies on three pillars or capabilities, usually displayed
as in Figure 4, associated with the most relevant use cases:
[Figure content omitted: the “5G Usage Scenarios” diagram, illustrating capabilities such as Enhanced Mobile Broadband (gigabits in a second, 3D video, UHD screens, Augmented Reality) alongside use cases such as Industry Automation.]
8 https://www.businessinsider.com/heres-how-much-ip-traffic-will-be-video-by-2021-2017-6?IR=T
9 https://www.itu.int/en/ITU-
D/Conferences/GSR/Documents/GSR2017/IMT2020%20roadmap%20GSR17%20V1%202017-06-21.pdf
In order to deliver the above-mentioned value proposition, Edge Computing plays
a fundamental role, as Compute resources are critical to bring those three capabilities to
the Network and so finally deliver a satisfactory E2E experience.
Figure 5 elaborates on the main enhancements to some key system capabilities
when moving from a 4G network to a 5G one.
Edge Computing is the concept of placing computing resources closer to users' locations. Almost any
device with computational power that is near or at the user's location can act as an Edge
Computing device, as long as it can process a computational workload.
Edge Computing is typically placed between users' devices and centralized computing
datacenters, whether these are Public Clouds or Telco Cloud facilities.
Device computing resources are hard to manage because of their heterogeneity and the
network environments they are connected to (typically LAN environments).
We can mention several Edge Computing deployment examples that help us to identify
different Edge Computing Locations:
• On Premise: Companies deploying 4G/5G Private Networks deploy a full
Network Core on their premises, connected to business applications 12
• RAN/Base station: some companies are deploying infrastructure collocated with
RAN in the streets, using Cabinets / MiniDatacenters (e.g., see Figure 7
5GCity/Vapor.io13)
• Central Offices (COs): COs are at the Cloud Service Provider (CSP) network
edge, which serves as the aggregation point for fixed and mobile traffic to and
from end users. All traffic is aggregated at the CO, which creates a bottleneck
12 https://www.daimler.com/innovation/production/factory-56.html
13 https://www.vapor.io/36-kinetic-edge-cities-by-2021/
that can cause a variety of problems. Throughput and latency suffer greatly in
the traditional access network, essentially cancelling out much of the gain
from technologies such as optical line terminals (OLTs), fiber-to-the-home
(FTTH), and 5G networks.
[Figure content omitted: comparison of a Traditional CO and a Virtual CO, showing mobile, residential and business access aggregated at the CO towards OTT and core datacenters. In the Virtual CO, proprietary hardware appliances (GGSN/SGSN, BBU, IMS, S-GW) are replaced by servers running virtualized control and data planes (vBNG/vEPC/vFW), with hardware acceleration added as a means of meeting customers' service expectations (speed, latency, jitter) economically.]
While the Edge can be placed at different locations, these locations are not exclusive,
and several of them can be used in a single network deployment.
Fog Computing, as defined by the National Institute of Standards and
Technology18, is a layered model for enabling ubiquitous
14 https://www.opennetworking.org/cord/
15 https://www.opnfv.org/wp-content/uploads/sites/12/2017/09/OPNFV_VCO_Oct17.pdf
16 https://aws.amazon.com/cloudfront/features/
17 http://map-cdn.buildazure.com
18 https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.500-325.pdf
access to a shared continuum of scalable computing resources. The model facilitates the
deployment of distributed, latency-aware applications and services, and consists of fog
nodes (physical or virtual), residing between smart end-devices and centralized (cloud)
services.
[Figure content omitted: an example Edge deployment with two spine switches, compute nodes hosting VNF1-VNF4, and an access node providing network I/O.]
These solutions are typically designed for full 42U racks. Smaller-footprint solutions have
recently become available from open organizations such as the Open Compute Project,
whose Open Edge project has released the Open Edge specifications with 2U and 3U form
factors 19.
19 http://files.opencompute.org/oc/public.php?service=files&t=32e6b8ffca7e964ec65de17ec435a9fc&download
Looking at the evolution of Public Cloud solutions and services, they have been driven
by a few actors that have grown into big global players, now often called over-the-top
(OTT) providers, offering services “on top of” today's Internet, or hyperscalers. The latter
term refers to their capability to seamlessly provision and add compute, memory,
networking, and storage resources to their infrastructure and make those available as
scalability properties of the services offered. In addition, local IT and cloud providers
have offered more tailor-made solutions and services with properties and added
value beyond commodity services.
With the emergence of Edge Cloud Services (leveraging Edge Computing technologies
and solutions) we anticipate a richer set of actors entering the market, competing and
collaborating at the same time. The illustration below identifies this wider set of players.
In such a “coopetitive” (cooperative and competitive) landscape we can identify the Global
OTTs or Hyperscalers and the Local IT & Cloud Providers. On the same side we can also
highlight the Global IT Solution Providers such as IBM, Oracle, HPE, etc. On the other
side we have Telecom Operators, e.g., Telcos/Mobile Network Operators (MNOs), and
Communication Service Providers (CSPs). Moreover, telco vendors, e.g., Network
Equipment Providers (NEPs), are increasingly also offering managed services, thus acting
as Managed Service Providers (MSPs). With 5G and network capabilities addressing
various Industry 4.0 use cases, the global industry device & solution providers (e.g.,
Siemens, Bosch, ABB, etc.) will also address the Edge Computing and Edge Cloud
Services space.
In the midst of these actors, we also point out the so-called Neutral Host (provider),
potentially managing local or private spectrum and offering services that allow physical or
virtual assets to be shared by multiple service providers, thereby improving the
economic efficiency at locations where other actors acting individually do not see an
effective business case.
To introduce some example configurations, functional roles and potential actor positions
the following illustration is provided.
Figure 11: Example configurations, functional roles and potential actor positions for Edge Cloud
The “Hyperscaler-based edge” on the left shows how a hyperscaler provides most of the
Edge Cloud stack, just building its infrastructure on top of the transport network
infrastructure offered by a network operator.
The next three example stacks show various configurations where the Telcos play a
significant role.
In the “Connectivity Wholesaler” case the Telco wholesale role is adapted to become a
provider of Edge infrastructure such as Edge Cloud datacentre resources. On top of this
a layer of Open Telco Edge Cloud (OTEC) capabilities is provided.
Both stacks in the middle of Figure 11 can be considered as various ways for Telcos
to share Edge Cloud resources. The Edge Cloud capabilities can be offered to the vertical
enterprise customer by a specialized service provider.
Finally, the right-most stack shows an individual Telco providing the full stack by itself.
In this section, we introduce some key technologies in four areas: the virtualisation,
orchestration, network programmability and operations frameworks. In the last sub-section, we
introduce some typical Edge Apps and Services through the description of some Edge
Connectivity scenarios.
According to Docker, a container is a unit of software that packages up code and
all its dependencies so that the application runs quickly and reliably from one computing
environment to another22. A developer can create, from scratch or starting from another
container image, a standalone and lightweight package of software containing the
operating system environment, libraries, tools, configurations and code needed to
run the specific service. All of this is then compiled and packed into
a Docker container image.
20 https://ninenines.eu/docs/en/cowboy/2.3/guide/rest_principles/#_rest_architecture
21 https://www.cncf.io/
22 https://www.docker.com/resources/what-container
environments for running software services and security by design. Containers can
provide better service agility, performance, time to run, quick deployments and updates,
scaling when necessary, portability and better security23.
The service agility and performance of a software container stem from its ability
to run directly on the host. A container runs in different namespaces, or different
parts of namespaces; the only things shared are some kernel features, which are
not completely isolated. Regarding resources, kernel quota mechanisms (such as
cgroups) can be used to protect containers from the noisy-neighbour problem that is
also present in virtual infrastructures with VMs in place.
25 A. Madhavapeddy et al., “Unikernels: Library Operating Systems for the Cloud,” ACM SIGPLAN Notices, vol.
48, no. 4. 2013, pp. 461–72.
together with the application code into an image (no division between kernel and user
spaces) that can be run on top of a hypervisor or directly on a hardware layer. Different
library OSs (e.g., IncludeOS, UKL, MirageOS, OSv, Rumprun, runtime.js) can be used
to develop unikernels, with slightly different security profiles, programming languages
(some of them aiming to avoid programming directly in C), and legacy compatibility.
Among other advantages, unikernels improve security over other virtualization paradigms
since (i) they have no other functions/ports apart from the specific application they were
built for, thus the attack surface is minimal, and (ii) they achieve a degree of isolation
similar to VMs and much higher than containers, since the latter share a common kernel.
Besides, due to their specialization, unikernels come with the benefit of faster boot times
and smaller image sizes than containers, as well as a similar degree of memory consumption
when running.
Still, unikernels have some drawbacks that stem mainly from their immaturity. The most
critical one relates to long development times, as (i) kernel functionalities have to
be carefully selected and configured for the specific application, (ii) there is a lack of
tools designed for debugging unikernels, and (iii) to be updated they have to be shut down,
recompiled and re-instantiated, a set of operations that cannot be performed on the
fly. Besides, their performance shows room for improvement, as initial tests have shown
that completion time for (some particular) processes is higher in unikernels due to lower
efficiency of memory management and hypervisor overhead 26. This technology is more
powerful in applications with high context switching between kernel and user spaces 27.
The nature of unikernels makes them suitable for deploying stateless, high-response, low-
latency VNFs located at Edge nodes. General algorithms (e.g., compression, encryption,
data aggregation) and specific functions for Vehicular Edge Computing (VEC), Edge
Computing for smart cities and Augmented Reality (AR)28 are use cases in which
unikernels can be of utility. The UNICORE project29, which aims at providing a toolchain
for facilitating the development of secure, portable, scalable, lightweight and high-
performance unikernels, foresees their potential application in 5G-RAN, vCPE and
serverless computing, among other fields. As current Virtualized Infrastructure Managers
(VIMs) support unikernels, some H2020 5G-PPP projects (such as 5G-MEDIA 30,
5GCity31, Superfluidity32, 5G-Complete33, etc.) are using them jointly with VMs and
26 R. Behravesh, E. Coronado and R. Riggio, "Performance Evaluation on Virtualization Technologies for NFV
Deployment in 5G Networks," 2019 IEEE Conference on Network Softwarization (NetSoft), Paris, France, 2019,
pp. 24-29.
27 T. Goethals, M. Sebrechts, A. Atrey, B. Volckaert and F. De Turck, "Unikernels vs Containers: An In-Depth
Benchmarking Study in the Context of Microservice Applications," 2018 IEEE 8th International Symposium on
Cloud and Service Computing (SC2), Paris, 2018, pp. 1-8
28 R. Morabito, V. Cozzolino, A. Y. Ding, N. Beijar and J. Ott, "Consolidate IoT EDGE Computing with
Lightweight Virtualization," in IEEE Network, vol. 32, no. 1, pp. 102-111, Jan.-Feb. 2018.
29 http://unicore-project.eu
30 http://www.5gmedia.eu
31 https://www.5gcity.eu
32 http://superfluidity.eu
33 https://5gcomplete.eu
On the other hand, serverless computing is a paradigm for virtualized environments that
appeared during the past decade and has attracted great interest among service customers
and providers. In this paradigm, developers focus on writing the code of their
applications as a set of stateless, event-triggered functions, in a Function-as-a-Service
(FaaS) model, without having to manage aspects related to infrastructure (e.g., resource
allocation, placement, scaling), since the platform is in control of those tasks. Despite
being a relatively novel concept, most major vendors have a FaaS offering, AWS Lambda
being one of the most popular ones. Still, there are several open-source solutions for
building a serverless computing platform on a Kubernetes cluster in any
public/private cloud or on bare metal. Among them, one can find solutions such as Apache
OpenWhisk34, OpenLambda35, Knative36, Kubeless37, Fission38 and OpenFaaS 39. Apart
from the computing service, serverless architectures usually require other services, such as
data storage or Application Programming Interface (API) gateways, to be functional.
Edge Computing can benefit from some of the features of the serverless paradigm,
although it may not be an optimal choice for some services of the virtualized networking
domain, such as packet flow management or firewalls 41, since the required start-up
latencies can affect their overall performance. An option to minimize this drawback is to
use unikernels as the underlying runtime engines but, as mentioned above, this
technology is still immature, and most serverless architectures currently work with containers.
In any case, serverless computing can be considered at Edge nodes for performing
anomaly detection or data processing services. ETSI foresees its utility for 5G mMTC in
34 https://openwhisk.apache.org
35 https://github.com/open-lambda/open-lambda/blob/master/README.md
36 https://knative.dev
37 https://kubeless.io
38 https://fission.io
39 https://www.openfaas.com
40 Kratzke, N. A Brief History of Cloud Application Architectures. Appl. Sci. 2018, 8, 1368.
41 P. Aditya et al., "Will Serverless Computing Revolutionize NFV?," in Proceedings of the IEEE, vol. 107, no. 4,
pp. 667-678, April 2019
MEC deployments 42, and the 5G-PPP 5G-MEDIA project has adopted this paradigm for
developing VNFs for immersive media, and for remote and smart media production in
broadcasting and CDN use cases. We recall here an important distinction between Edge
Computing and MEC: Edge Computing is a concept, while MEC is an ETSI standard
architecture.
Typical architectures of VMs, containers and unikernels are depicted in Figure 12.
Serverless functions would leverage these architectures transparently to end users,
although in the case of unikernels the provider should bake the function code with the
minimal required OS services and then deploy the resulting unikernel on top of a
hypervisor. It should be mentioned that, depending on the type of hypervisor, they can
work either with or without an underlying host OS.
42 https://www.etsi.org/images/files/ETSIWhitePapers/etsi_wp28_mec_in_5G_FINAL.pdf
43 Microsoft Azure, https://azure.microsoft.com
44 Heroku, https://www.heroku.com
45 Cloud Foundry, https://www.cloudfoundry.org
OpenShift46 that allow users to run their own PaaS (on-premise or in the cloud).
• The current third generation of PaaS includes platforms like Flynn and Tsuru 47,
which are built on Docker from scratch and are deployable on one's own servers or on
public IaaS clouds.
In the following we introduce the three main orchestration platforms, for both VMs and/or
containers, suitable for the Edge domain.
2.2.1 Kubernetes
Over the last few years, Kubernetes 48 (often abbreviated K8s) has become the de facto standard for
container orchestration. An important thing to recognize about Kubernetes is that it is a
very smart intent-based orchestration engine, a fact that is overlooked by the current
standard approach, NFV Management and Orchestration (MANO), which
treats Kubernetes as a “dumb” NFV Infrastructure (NFVI). Essentially, the common
approach is to provide a Kubernetes VIM that is used by an orchestration-engine “brain”
to interact with Kubernetes. The short-term advantage of this approach is clear: it provides
a low-effort, standard way of integrating existing MANO frameworks with Kubernetes.
However, the long-term advantages of this approach are much less clear.
First, insulating developers and operators from Kubernetes Native Infrastructure (KNI)
prevents them from acquiring the cloud-native skills and mindset required to
drive innovation in the telecom industry. As the container transformation unfolds in the
telecom industry, VM-based VNFs give way to Container Network Functions (CNFs),
which are a natural fit for Kubernetes-based orchestration. In fact, CNFs are the primary
motivation for shifting the centre of gravity of the management and orchestration plane to
Kubernetes itself. It should also be noted that, by virtue of the Custom Resource
Definition (CRD) mechanism, non-Kubernetes resources can easily be added to the
Kubernetes ecosystem. Thus, a control and management plane grounded in Kubernetes
can orchestrate not just containers, but also resources in other NFVIs (VMs and PNFs
alike). At the same time, it is straightforward to reuse legacy orchestration, such as Heat
templates, by triggering them from Kubernetes.
Thirdly, treating Kubernetes as just one more NFVI means forgoing very strong
features such as intent-driven management, which continuously reconciles the observed state
of a service with the desired one (i.e., an intended, declared state). A best practice for
consuming this intent-management mechanism is the Operator pattern 50. This pattern
46 OpenShift, https://www.openshift.com
47 Flynn, https://flynn.io. Tsuru, https://tsuru.io
48 https://kubernetes.io
49 https://kubevirt.io/
50 https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
can be used to develop Kubernetes-native S(pecialized)-VNFMs for network services. The
same pattern can be used to develop G(eneric)-VNFMs and NFVOs.
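The intent-driven reconciliation underlying the Operator pattern can be illustrated with a toy loop that compares a declared state with an observed one and derives corrective actions. The resource model below is invented for illustration and is far simpler than Kubernetes' own controllers:

```python
# Toy reconciliation: compare a declared (desired) state with the observed
# state and emit the actions needed to close the gap. This is the core idea
# behind Kubernetes controllers and the Operator pattern.
# Function names (upf, amf, smf) are illustrative placeholders.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions that drive observed replica counts to desired."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(f"scale-up {name} by {want - have}")
        elif have > want:
            actions.append(f"scale-down {name} by {have - want}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"upf": 3, "amf": 2}    # declared intent
observed = {"upf": 1, "smf": 1}   # what the cluster currently reports
print(reconcile(desired, observed))
# ['scale-up upf by 2', 'scale-up amf by 2', 'delete smf']
```

A real Operator would watch resource events and apply such actions through the Kubernetes API, re-running the comparison until observed and declared states converge.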
Finally, MANO today is workflow-oriented rather than Operator-oriented. While
Operators and workflows are radically different patterns, Kubernetes-native workflow
orchestration engines, such as Argo51, use the Operator approach to reconcile the actual
state of a workflow execution with the desired execution state (i.e., the required workflow
steps). Thus, Kubernetes also natively provides the workflow capabilities needed in many
practical orchestration situations where the pure reconciliation cycles of the Operator pattern
might be too slow.
CSPs need to deploy Kubernetes at large scale, with hundreds of thousands of
instances at the edge. However, this distributed cloud architecture imposes challenges in
terms of resource management and application orchestration. In this perspective, k3s, a
lightweight K8s distribution, has been put forward by Rancher 52 to address the increasing
demand for small, easy-to-manage Kubernetes clusters running in resource-constrained
environments such as the edge. It is straightforward to see that k3s will enable the rolling
out of new 5G services relying on multi-access Edge Computing deployments.
All the k3s components are bundled into combined processes, presented as a simple
server and agent model, which facilitates deployment in the edge environment.
51 https://argoproj.github.io/
52 https://rancher.com/
Figure: k3s server and agent processes at the edge (scheduler and controller manager in the server; kubelet and containerd in the agent)
2.2.2 OSM
Open Source MANO (OSM) is an ETSI-hosted open source community delivering a
production-quality MANO stack for NFV, capable of consuming openly published
information models, available to everyone, suitable for all VNFs, operationally
significant and VIM-independent. OSM is aligned with the ETSI NFV ISG information
models, while providing first-hand feedback based on its implementation experience53,54.
OSM Release EIGHT brings several improvements over previous releases. It allows you
to combine within the same Network Service the flexibility of cloud-native applications
with the predictability of traditional virtual and physical network functions (VNFs and
PNFs) and all the advanced networking required to build complex E2E telecom
services. OSM Release EIGHT is at the forefront of Edge and 5G operations technology,
deploying and operating containerized network functions on Kubernetes with complete
lifecycle management and automated integration.
In addition, OSM extends the SDN framework to support the next generation of SDN
solutions providing higher level primitives and increasing the number of available options
for supporting I/O-intensive applications. Furthermore, the plugin models for intra and
inter-datacenter SDN have been consolidated, and the management, addition, and
maintenance of SDN plugins significantly simplified.
OSM Release EIGHT also brings major enhancements designed to improve the overall
user experience and interoperability choices. This includes an improved workflow for
VNF configuration, which allows much faster and more complex operations, and the support of
additional types of infrastructures, such as Azure and VMware's vCD 10, complementing
the previously available choices (OpenStack-based VIMs, VMware VIO, VMware vCD,
AWS, Fog05 and OpenVIM). It improves the orchestration of diverse virtualization
environments, including PNFs, a number of different VIMs for VNFs, and Kubernetes
for CNFs.
53 https://osm.etsi.org/docs/user-guide/02-osm-architecture-and-functions.html
54 https://www.etsi.org/technologies/open-source-mano
2.2.3 ONAP
Open Network Automation Platform (ONAP) is an open-source project hosted by the Linux
Foundation55, officially launched in 2017, that enables telco networks to become
increasingly autonomous. ONAP provides real-time, policy-driven service orchestration
and automation, enabling telco operators and application developers to instantiate and
configure network functions. Across its releases, ONAP supports features such as a)
multi-site and multi-vendor automation capabilities, b) service and resource deployment,
and c) instantiation of cloud network elements and services in a dynamic, real-time and
closed-loop manner for several major telco activities (e.g., design, deployment and
operation of services at design-time and run-time).
Various edge cloud architectures have already emerged from different communities and
potentially can be plugged into the ONAP architecture for service orchestration. The
ONAP community analyses the orchestration requirements of services over various edge
clouds and how these requirements impact ONAP components in terms of data collection,
processing, policy management, resource management, control loop models, security, as
well as application & network function deployment and control. More details can be
found via this link 56.
The SDN controller is also referred to as the "Network Operating System" (NOS) because
its role is similar to that of a computer operating system: it acts as a middle layer between
the network applications and the network resources (i.e., the switches).
The northbound interface of the controller is responsible for the interaction with the
network applications and provides all the programming APIs that can be exploited by the
network developer. The level of abstraction provided by such APIs can be very different,
from very low-level details (i.e., describing each single processing and forwarding
operation of the switch) to a very high level (i.e., describing only what the application
should do, and not how, as in the Intent-Based approach). The southbound interface is
instead responsible for the interaction with the network switches and supports all the
protocols necessary to program the forwarding behaviour of the switch. OpenFlow has
become a reference protocol for the southbound interface, but many other protocols
have been defined as alternatives or complements to OpenFlow (e.g., NETCONF, OVSDB).
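The forwarding behaviour that a southbound protocol installs on a switch can be illustrated with a toy match-action flow table. The match fields, priorities and table-miss behaviour below are a simplified sketch, not the OpenFlow wire format:

```python
# Toy match-action flow table, mimicking the forwarding rules a
# southbound protocol such as OpenFlow installs on an SDN switch.
# Field names and action strings are illustrative only.

class Switch:
    def __init__(self):
        self.flow_table = []  # entries: (priority, match-dict, action)

    def install_rule(self, priority, match, action):
        # The controller programs rules via the southbound interface.
        self.flow_table.append((priority, match, action))
        self.flow_table.sort(key=lambda r: -r[0])  # highest priority first

    def forward(self, packet):
        # First matching rule wins; a table miss goes to the controller
        # (the classical reactive SDN behaviour).
        for _prio, match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"

sw = Switch()
sw.install_rule(10, {"dst": "10.0.0.2"}, "output:port2")
sw.install_rule(100, {"dst": "10.0.0.2", "tcp_dport": 22}, "drop")

assert sw.forward({"dst": "10.0.0.2", "tcp_dport": 80}) == "output:port2"
assert sw.forward({"dst": "10.0.0.2", "tcp_dport": 22}) == "drop"
assert sw.forward({"dst": "10.0.0.9"}) == "send-to-controller"
```

The same structure also illustrates the abstraction gap discussed above: a low-level API exposes rules like these directly, whereas an Intent-Based API would let the application state only the desired connectivity and leave rule synthesis to the controller.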
The reference architecture for SDN is typically network-based, in the sense that the
controller interacts with the switches along the path of a data flow in order to process and
route the traffic correctly. An alternative SDN architecture is source-based, in the sense
that the controller interacts only with the source switch (i.e., the first SDN switch along
the path of a flow), which piggybacks the route information onto the packets.
This information describes how packets should be processed and switched at each
traversed switch, extending the classical concept of source routing. This source-based
architecture offers wide flexibility in programming the network and has been
implemented through the Segment Routing (SR) protocol 58. Notably, SR is compatible
with hybrid architectures in which a standard IP network coexists with SDN networks,
since the route information for SDN is encapsulated within standard IP packets, switched
as usual between legacy non-SDN switches.
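The difference introduced by the source-based model can be sketched as follows. Node names and packet fields are illustrative, and real SR encodes the segment list in MPLS labels or IPv6 extension headers rather than a Python list:

```python
# Sketch of source-based forwarding in the style of Segment Routing:
# the controller programs only the ingress (source) switch, which pushes
# the whole path onto the packet; every traversed node just consumes the
# next segment and needs no per-flow state.

def ingress(packet: dict, segment_list: list) -> dict:
    # Only the source switch is programmed with the path.
    packet["segments"] = list(segment_list)
    return packet

def forward(packet: dict, node: str) -> str:
    # Each node reads (pops) its own segment and learns the next hop
    # from the packet itself, not from locally installed flow rules.
    assert packet["segments"][0] == node
    packet["segments"].pop(0)
    return packet["segments"][0] if packet["segments"] else "delivered"

pkt = ingress({"payload": "x"}, ["R1", "R3", "R5"])
hops, node = [], "R1"
while node != "delivered":
    nxt = forward(pkt, node)
    hops.append(node)
    node = nxt
assert hops == ["R1", "R3", "R5"]  # path followed exactly as encoded
```

Because the path travels inside the packet, intermediate legacy routers can ignore it entirely, which is what makes SR compatible with hybrid IP/SDN deployments.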
SDN controllers can implement advanced traffic engineering schemes, able to cope
autonomously with network impairments (e.g., link congestion, node/link failure). The
adoption of AI enables the operation of "self-managing" networks.
Another dimension of SDN usage relates to user mobility, which implies that services
should migrate from one Edge to another in a fashion that is seamless for the end user.
Migrating a service is very challenging, since it requires moving the corresponding VM
to a remote server, after synchronizing its internal state and rerouting the associated
traffic to the new server. The complexity of such a migration requires strict control over
traffic routing, as enabled by SDN 59.
58 RFC 8402
59 Baktir, A. C., Ozgovde, A., & Ersoy, C. , "How can EDGE computing benefit from software-defined networking:
A survey, use cases, and future directions", IEEE Communications Surveys & Tutorials, 2017
to network applications (e.g., load balancing, in-band network telemetry, etc.) while
providing high performance and efficiency. Such applications may be implemented in
software-based switches using commodity CPUs or hardware-accelerated devices such
as programmable switches, SmartNICs, etc. Programming these network elements to
support complex network functions is achieved by defining finite state machines directly
within the processing pipeline60 or by defining primitives through a domain-specific
language, e.g., P461.
The current specification of the language P4 introduces the concept of the P4-
programmable blocks; it essentially provides the interface to program the target via its set
of P4-programmable components, externs and fixed components. Along with the
corresponding P4 compiler, it enables programming the P4 target.
60 Pontarelli, Salvatore, et al. "FlowBlaze: Stateful packet processing in hardware." , NSDI 2019
61 P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G.
Varghese, and D. Walker, “P4: Programming protocol-independent packet processors,” SIGCOMM Comput.
Commun. Rev., vol. 44, no. 3, pp. 87–95, Jul. 2014.
62 https://p4.org/
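As a toy illustration of the finite-state-machine abstraction mentioned above, the following sketch tracks a TCP handshake per flow and drops data packets from flows that never completed it. The flow keys, state names and forwarding policy are illustrative, not taken from FlowBlaze or the P4 specification:

```python
# Toy per-flow finite state machine of the kind that stateful data
# planes embed in the processing pipeline: a per-flow register plus a
# transition table, evaluated at line rate for every packet.

TRANSITIONS = {
    ("CLOSED", "SYN"): "SYN_SEEN",
    ("SYN_SEEN", "SYNACK"): "ESTABLISHING",
    ("ESTABLISHING", "ACK"): "ESTABLISHED",
}

flow_state = {}  # per-flow state register, as kept inside the pipeline

def process(flow: str, tcp_flag: str) -> str:
    state = flow_state.get(flow, "CLOSED")
    nxt = TRANSITIONS.get((state, tcp_flag))
    if nxt:
        flow_state[flow] = nxt
    # Forward handshake packets and any packet of an established flow;
    # drop data packets from flows that never completed the handshake.
    if nxt or flow_state.get(flow) == "ESTABLISHED":
        return "forward"
    return "drop"

assert process("10.0.0.1->10.0.0.2", "SYN") == "forward"
assert process("10.0.0.1->10.0.0.2", "SYNACK") == "forward"
assert process("10.0.0.1->10.0.0.2", "ACK") == "forward"
assert process("10.0.0.1->10.0.0.2", "DATA") == "forward"  # established
assert process("10.0.0.9->10.0.0.2", "DATA") == "drop"     # no handshake
```

The point of the abstraction is that this logic runs entirely in the data plane, without punting packets to the controller for every state change.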
Towards that end, making stateful data plane algorithms programmable, complementing
programmable forwarding plane solutions, can be beneficial in terms of meeting QoS
requirements (e.g., low-latency communications) and enhancing network flexibility.
Programmable data plane solutions, such as P4 and the architectures that support it,
provide an excellent way to define the packet forwarding behaviour of network devices.
However, most programmable devices still typically have non-programmable traffic managers.
Towards that end, 5GROWTH 63,64 investigates fully programmable and customized data
planes, through the introduction of simple data-plane abstractions and primitives beyond
forwarding, enabling optimized traffic management per slice, depending on the
application profile and corresponding Service Level Agreement (SLA).
An alternative solution is to leverage a stateful data plane, e.g., based on P4. This implies
that the state replication is offloaded from the VNFs to the P4 switches, which take the
responsibility of coordinating the exchange of replication messages between VNFs, with
a beneficial effect on the VNF load and thus on the overall scalability.
The easiest way to achieve such high performance at the network Edge is to move the
data processing and forwarding closer to the end users. There is no room for monolithic,
single-function ASIC-based appliances to provide the necessary performance, so a
solution is needed that can take advantage of existing networking equipment while
accelerating the data path.
The solution must provide the performance of an ASIC with the agility of software. The
answer is to offload the virtual functions to hardware, providing the necessary
acceleration while maintaining flexibility.
Figure 14: Top: a standard NIC on an x86 server, with control and data paths both crossing the PCIe bus to the CPU cores; bottom: an FPGA-based SmartNIC, with data processing offloaded to the NIC (10/25/100G ports)
In order to enable the Edge Computing infrastructure with high bandwidth, low latency,
and a highly secure data path, we need hardware-based acceleration at the network Edge.
There are various hardware technologies capable of offloading certain network functions,
however none accomplish full offload of the data plane as well as field-programmable
gate arrays (FPGAs).
FPGA SmartNICs are also very effective in reducing latency and jitter. By using an FPGA
to handle data processing, it is feasible to achieve a latency of a few microseconds (µs),
because the data path avoids the CPU entirely. Instead, the data is fully offloaded from
the CPU to the FPGA on the NIC. By comparison, when software on a CPU is used for
networking, a latency below 50-100 µs is considered a very good achievement.
Another important advantage of offloading the data path from CPUs is in the area of
cybersecurity. If the data never needs to reach the CPU, the networking is entirely
separated from the computation. Should the CPU, which is much more vulnerable to
breaches than an FPGA, be hacked, the data path (handled by the FPGA) is still protected.
The FPGA also can efficiently handle security functions such as encryption and
decryption, Access Control List (ACL), and firewall, thereby reducing the load on the
CPU.
Beyond meeting the bandwidth, latency, and security requirements of challenging Edge
Computing implementations, FPGAs also have the benefit of being open, programmable
and configurable hardware, and a perfect complement to commercial off-the-shelf servers
in that they are general purpose and agile. Their full reprogrammability means that they
are futureproof, i.e., hardware does not need to be replaced or upgraded when new
functionalities and features emerge. The FPGA SmartNIC can be reprogrammed as
needed instead of replacing the whole card if the applications or use cases change.
66 https://ethernitynet.com/cornerstones/fpga-smartnics-for-network-acceleration/
appliances. Ideally, a preferred solution should provide all the required features without
compromising on performance and without cost trade-offs. However, such high-end
server solutions are very expensive, as they contain a large number of CPU cores and
vast amounts of memory in order to achieve such performance.
An FPGA that also includes an embedded PCIe Direct Memory Access (DMA) engine
allows NFV performance to be boosted by accelerating several virtual software
appliances in the hardware. Two main technologies can best use the DMA capabilities:
SR-IOV and PCI Passthrough. By using these two technologies on a single FPGA board,
the traffic can bypass various server bottlenecks around the hypervisors and gain direct
access through PCIe to the many networking tasks performed in hardware. If the ability to use
DPDK67 is added to the DMA functionality, it is possible to receive even greater
acceleration and further improvement to the NIC performance. The combined result is a
boost to the performance of multiple virtual networking functions to the level of that of
dedicated hardware appliances.
A server that incorporates FPGA-based SmartNICs, that are capable of combining DMA
functionality with hardware forwarding offload engines, provides a highly performing,
cost-optimized alternative to a costly server and/or to dedicated hardware appliances. The
FPGA can perform different networking functions in hardware as if they were in a virtual
software environment. This capability can replace multiple VMs, which then reduces the
number of CPU cores and provides the required performance without any cost tradeoff.
The suggested approach for hardware offload is transparent control flow mode, in which
the FPGA configuration is transparent to the DPDK application. In this mode, the FPGA
is not a separate controlled element. The DPDK application sees a single SmartNIC entity
that combines the Ethernet controller and FPGA. The benefit of this control flow mode is
that the application does not need to write any specific code to use the FPGA acceleration
and is therefore agnostic to the underlying hardware.
The objective of DevOps is to break down the barriers between development and operations
teams in the software engineering and usage stages69. This is usually done by assigning
certain operation tasks to developers and vice versa. However, the whole concept goes much
further and is best summarised as implementing a continuous cross-functional mode of
working with focus on automation and alignment with the business objectives; this is
commonly represented by a kind of “infinite” loop such as the one in Figure 15:
Figure 15: The DevOps infinite loop: Plan → Code → Build → Test → Release → Deploy → Monitor
69 Erich F., Amrit C., Daneva M., “A Mapping Study on Cooperation between Information System Development and
Operations”, In: Jedlitschka A., Kuvaja P., Kuhrmann M., Männistö T., Münch J., Raatikainen M. (eds) Product-
Focused Software Process Improvement. PROFES 2014. Lecture Notes in Computer Science, vol. 8892. Springer,
Cham, 2014.
70 Ignacio Labrador, Aurora Ramos and Aljosa Pasic, “Next Generation Platform-as-a-Service (NGPaaS) From
DevOps to Dev-for-Operations”, White Paper, available online: https://atos.net/wp-content/uploads/2019/07/white-
paper_ari_NGPaaS.pdf
The Dev-for-Operations model developed in the NGPaaS project differs from and
enhances DevOps in several aspects, for instance:
71 Kahle, J. (2018). Why Are Some Industries So Far Behind on DevOps? - Highlight. [online] Highlight: The world
of enterprise IT is changing, fast. Keep up. Available at: https://www.ca.com/en/blog-highlight/why-are-some-
industries-so-far-behind-ondevops.html [Accessed 26 Apr. 2018]
72 http://ngpaas.eu/
a) It should be possible to execute a vendor-specific CI/CD loop at the vendor’s site,
so that the service can be iteratively developed and debugged before being
delivered to the operator’s side.
b) The Dev-for-Operations model should make it possible to communicate operator
insights to the vendor’s environment. This should enable vendors to gain a deep
understanding of the operational environment, so that they can perform a kind of
“operation-aware” testing on their own. This requirement has a strong impact in
the Edge domain. Unlike the cloud, the Edge can be unstable and even
disconnected by design, and there can be many points of failure in an Edge
solution. Building an Edge-native application requires being ready to scale back
to the cloud at any point. This means vendors should perform CI/CD processes
using test batteries that already integrate the relevant features of the operational
environment.
c) DevOps delivers the application, but Dev-for-Operations should make it possible
to deliver a fully realized service, including the core application, monitoring and
analytics, as well as deployment and adaptation capabilities.
d) As in the regular DevOps approach, there should also be a specific feedback
loop to propagate information from the operator’s side towards the vendor
environment; in this case, however, the feedback should integrate information not
only from the software application itself, but also from the associated monitoring
and analytics, as well as the deployment and adaptation indicators.
e) The feedback mechanism takes on a different character in Dev-for-Operations: it
should respect the separation between vendor and operator while keeping the
automatic or semi-automatic mechanisms needed to provide the feedback in a
timely manner.
The Dev-for-Operations model is well suited to developing applications and services at
the Edge, which is characterized by particular nuances such as scaling, types of devices,
application footprint, operating speed, and disconnection.
Figure: Comparison of DevOps and Dev-for-Operations: Dev-for-Operations adds vertical testing, operator-aware testing, operator-specific insights and feedback integration between the application (vendor) and infrastructure (operator) domains
Security (at both the software and system levels) and privacy need to be of utmost
importance throughout activities (1)-(5) in order to deliver trustworthy software.
DevSecOps in the Edge context is challenging because: (a) design requires tools for
describing the dynamic behaviour of application components in Edge environments; (b)
development needs to implement quality-assured software based on the designed model;
(c) testing requires real-time simulation of the application behaviour in the runtime
conditions of a heterogeneous Fog environment; (d) deployment requires mechanisms
to seamlessly redeploy the software in the Fog at runtime; and (e) maintenance and
analysis require extraction of process logs to provide recommendations for redesigning
the model. The transient nature of the environment and the massively distributed
geographic resources make DevSecOps challenging. Additionally, a DevSecOps
framework that can undertake activities such as automated management, including
software adaptivity to respond to the changing environment, would alleviate the burden
of managing serverless functions. Currently, there are no DevSecOps platforms that can
manage the activities from modelling to (re)deployment of a Fog application that is
designed via serverless
computing. A few example platforms are available for the Cloud 73 74 75. However, they
do not address adaptivity (i.e., they do not provide tools for modelling and enacting self-*
properties) and are not designed for serverless environments.
73 J. Wettinger et al. "Middleware-oriented Deployment Automation for Cloud Applications." IEEE Trans. on Cloud
Computing, Vol. 6, Issue 4, pp. 1054-1066, 2016.
74 G. Pallis et al. "DevOps as a Service: Pushing the Boundaries of Microservice Adoption." IEEE Internet
Computing, Vol. 22, Issue 3, 2018, pp. 65-71.
75 N. Ferry et al. “CloudMF: Model-Driven Management of Multi-Cloud Applications.” ACM Transactions on
Internet Technology, Vol. 18, No. 2, pp. 16:1-16:24, 2018.
Edge continuum and the consequent distribution of data that needs to be managed from
privacy breaches arising from unwanted and unforeseen data affinities.
2.5.3 Monitoring
Observability and analysis, consisting of monitoring, logging and tracing, are crucial
requirements of any service deployment, and particularly for VNFs 76.
In this section we elaborate on how these requirements apply to the network functions
that reside at the Edge of the network. But before we embark on this, let us define what
each of these capabilities is and why they are critical for DevOps.
In general, observability involves gathering data about the operation of services, typically
referred to as “telemetry”. Modern service platforms, infrastructures and frameworks
have observability systems in place that gather three types of telemetry:
● Metrics: Time-series data that typically measure the four “golden signals” of
monitoring: latency, traffic, errors, and saturation. Analysis is done in monitoring
dashboards that summarize these metrics, providing aggregations, slicing & dicing,
statistical analysis, outlier detection and alerting capabilities. DevOps depends on
these metrics to understand the performance, throughput, reliability and scale of the
services. They also monitor Service Level Indicators (SLIs) to detect any deviations
from Service Level Objectives (SLOs), ideally before they lead to SLA violations.
● Logs: As traffic flows into a service, this is the capability to generate a full record of
each request, including source and destination metadata. This information enables
DevOps to audit service behaviour down to the individual service instance level.
Analysis is typically done via search UIs that filter logs based on queries and patterns,
indispensable for troubleshooting and root cause analysis of operational issues.
● Traces: Timestamped records about the handling of requests, or “calls”, by service
instances. As a result of the decomposition of network services into many VNFs and
of monoliths into numerous micro-services, and the creation of service chains/meshes
that route calls between them, modern service infrastructures offer distributed tracing
capabilities. They generate trace spans for each service, providing DevOps with
detailed visibility of call flows and service dependencies within a chain/mesh.
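To make the metrics bullet concrete, the following sketch derives two SLIs from raw telemetry and checks them against SLOs, as a DevOps alerting rule would do before an SLA violation occurs. The thresholds and sample data are illustrative:

```python
# Sketch: derive SLIs from "golden signal" telemetry and compare them
# against SLOs. Thresholds and sample values are illustrative only.

def availability_sli(requests_total: int, requests_failed: int) -> float:
    """SLI: fraction of successful requests over the measurement window."""
    return 1.0 - requests_failed / requests_total

def latency_sli(latencies_ms: list, threshold_ms: float) -> float:
    """SLI: fraction of requests served below the latency threshold."""
    fast = sum(1 for l in latencies_ms if l < threshold_ms)
    return fast / len(latencies_ms)

SLO_AVAILABILITY = 0.999   # "three nines" availability objective
SLO_LATENCY = 0.95         # 95% of requests under the threshold

avail = availability_sli(requests_total=100_000, requests_failed=200)
lat = latency_sli([12, 8, 45, 230, 9, 11, 7, 16, 5, 14], threshold_ms=50)

alerts = []
if avail < SLO_AVAILABILITY:
    alerts.append(f"availability SLI {avail:.4f} below SLO")
if lat < SLO_LATENCY:
    alerts.append(f"latency SLI {lat:.2f} below SLO")

# Both SLIs miss their SLOs in this sample window, so two alerts fire.
assert len(alerts) == 2
```

In practice the same comparison runs continuously against time-series data in the monitoring dashboards described above, with the SLO gap expressed as a remaining error budget.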
On the surface, the approaches towards delivering the observability capabilities have been
quite different between the NFV and Cloud Native Computing Foundation (CNCF)
“ecosystems”. Before the softwarization of network functions, each PNF had to offer its
own monitoring, logging and tracing functions, ideally through (de facto) standard
protocols (SNMP, syslog, IPFIX/NetFlow, etc.). Moreover, specialized network
appliances, such as Probes, DPIs and Application Delivery Controllers (ADCs) offered
more advanced network visibility capabilities, in terms of gathering deep network
telemetry, both in-band (inline) or out-of-band (via port-mirroring).
When PNFs were transformed into VNFs deployed as VMs, they started to leverage the
telemetry capabilities initially of the VIM and subsequently of the NFVO/MANO stack
of choice. This resulted in a proliferation of relevant projects:
76 https://5g-ppp.eu/wp-content/uploads/2019/09/5GPPP-Software-Network-WG-White-Paper-2019_FINAL.pdf
● OpenStack: The set of projects under OpenStack Telemetry, with Ceilometer being
the one most widely adopted 77.
● OPNFV: The Barometer 78 and VES79 projects.
● OSM: The OSM MON module and respective Performance Management
capabilities 80.
● ONAP: The Data Collection Analytics and Events (DCAE) project 81.
On the deep network visibility front, there have been efforts to enable network monitoring
in a programmable fashion 82 (see 2.3.2) and ongoing standardization activities under
IETF83.
On the CNCF side, there is a separate set of projects under the Observability & Analysis
section of the landscape84, with Prometheus 85, fluentd86 and Jaeger87 as the graduated
monitoring, logging and tracing projects, respectively, and with
OpenMetrics/OpenTelemetry aiming to establish open standards and protocols. The open
APM ecosystem is even broader 88.
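As a concrete illustration, a service or VNF can expose Prometheus-compatible metrics by serving the text exposition format on a /metrics endpoint. The sketch below renders that format with the standard library only; the metric name and label values are illustrative:

```python
# Minimal, stdlib-only sketch of the Prometheus text exposition format
# (# HELP / # TYPE header lines followed by labelled samples), as a VNF
# could serve on /metrics for scraping.

def render_metric(name, mtype, help_text, samples):
    """samples: list of (labels-dict, value) pairs."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {mtype}"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

exposition = render_metric(
    "vnf_http_requests_total", "counter",
    "Total HTTP requests handled by the VNF.",
    [({"method": "GET", "code": "200"}, 1027),
     ({"method": "POST", "code": "500"}, 3)],
)
print(exposition)
```

A real deployment would use the official Prometheus client library rather than hand-rolling the format, but the output above is what the scraper actually consumes.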
In addition, the specialized appliances mentioned earlier (e.g., ADCs), which have since
embraced or reinforced their softwarization, virtualization and cloudification, will be
enhanced with capabilities that better position them in a hybrid, multi-cloud world of
cloud-native applications and services.
The enhancements towards cloud native and PaaS are discussed in ETSI IFA02989, where
the concept of VNF common and dedicated services has been introduced. These VNFs
are instantiated inside the PaaS and expose capabilities that are consumed by the network
services (composed of consumer VNFs) that run over the PaaS:
• VNF Common Service: common services or functions for multiple consumers.
Instantiated independently of any consumer.
77 https://wiki.openstack.org/wiki/Telemetry
78 https://wiki.opnfv.org/display/fastpath/Barometer+Home
79 https://wiki.opnfv.org/display/ves/VES+Home
80 https://osm.etsi.org/wikipub/index.php/OSM_Performance_Management
81 https://wiki.onap.org/display/DW/Data+Collection+Analytics+and+Events+Project
82 https://p4.org/p4/inband-network-telemetry/
83 https://datatracker.ietf.org/doc/draft-ietf-opsawg-ntf/
84 https://landscape.cncf.io/category=observability-and-analysis
85 https://prometheus.io
86 https://www.fluentd.org
87 https://www.jaegertracing.io
88 https://openapm.io/landscape
89 https://www.etsi.org/deliver/etsi_gr/NFV-IFA/001_099/029/03.03.01_60/gr_NFV-IFA029v030301p.pdf
• VNF Dedicated Service: required by a limited set of consumers with a specific scope.
Instantiated on demand by their consumers (when required by a consumer) and destroyed
when no relation with any consumer remains90.
For example, ONF Edge Cloud91 platforms, i.e., Aether, CORD and XOS, have already
adopted the pattern of offering logging and monitoring as platform micro-services,
leveraging projects from the CNCF observability and open APM ecosystems (Kafka,
Prometheus/Grafana and ELK/Kibana).
This trend is strengthened further by the approach pursued by the Hyperscalers to expand
their cloud services into the Edge of the network. AWS Outposts, Azure Stack, Google
Anthos, IBM Cloud Satellite (will) all offer Kubernetes on the Edge. There is some
fragmentation in how observability is implemented by each cloud provider, because of
the different cloud services that support the monitoring aspects (AWS CloudWatch,
Azure Monitor and Google Stackdriver). But Istio 92 is acting as a unifying service mesh
technology, since it implements the observability functions in a common way, without
additional burden on the service developers. It remains to be seen if and how the service
mesh expands to the Edge offerings of the Hyperscalers.
The monitoring features of serverless frameworks are similarly early-stage and fragmented.
Most of them provide or support eventing frameworks as standard, which can be used for
building metrics and telemetry capabilities, but the approaches and tools are not common.
90 https://5g-ppp.eu/wp-content/uploads/2020/02/5G-PPP-SN-WG-5G-and-Cloud-Native.pdf
91 https://www.opennetworking.org/onf-edge-cloud-platforms/
92 https://istio.io
93 https://k3s.io
94 https://kubeedge.io/
95 https://www.acumos.org
96 www.monb5g.eu
Edge computing inherits its paradigm and key technical building blocks from
virtualization and cloud-native processing. When deployed for 5G networking, edge
computers will be one more computing resource in the network, able to receive certified
payloads (VNFs or CNFs) from the orchestrator, check their validity by running the
security procedures, and execute the code. Edge computing also implicitly inherits the
security threats brought by virtualisation and containerization, with a special emphasis,
however, on where it differs from core network computing. Edge computing workloads
are typically processed in isolated cabinets close to users. Small processing units cannot
match the stringent security policy rules and standards of the massive single-site
processing delivered by core network infrastructure operators. Nevertheless, when
verticals such as autonomous cars rely on cabinet-hosted edges, security is a major
concern at the Edge too. It is important to assess on which flank Edge Computing is, or
could be, more vulnerable to the attacks that are most likely to occur. At a high level,
the main security needs can be defined as:
i) Protecting a payload (container or VM) from the application inside it
ii) Inter payload (container or VM) protection
iii) Protecting the host from a payload (container or VM)
iv) Protecting the payload (container or VM) against the host (aka, introspection)
Simply put, the attack path may originate from the container or the VM and be directed
at the host (with the intent to break the isolation barrier of a targeted VM or container),
or conversely be initiated at the host with full introspection means to access the memory
space of a VM or container. The former threat is remediated by VM or container isolation
techniques which act at several levels (i.e., limiting the types of interactions (system calls)
with the host, memory segregation into payload-isolated partitions, and payload resource
consumption control). For the latter (i.e., introspection), the remediation comes with the
concept of trusted execution and the associated technologies (e.g., Intel SGX enclaves),
which ensure that even a malicious host OS or operator cannot tamper with or inspect any
managed payload's memory space.
The MEC platform manager has privileged access to all the managed MEC hosts where
MEC applications are running, and should therefore be protected against unauthorized
access using access control best practices, e.g., the least-privilege principle, separation of
duties, and RBAC/ABAC policy enforcement, to name a few. In particular, the MEC platform
manager should strongly authenticate requests (e.g. with X.509 certificate) on its
management interfaces (Mm2/Mm3), to verify they originate from an authorized MEC
orchestrator or OSS. Similarly, the underlying VIM, which manages the virtualization
infrastructure of the MEC hosts (where the data plane runs), should strongly authenticate
requests on its management interfaces (Mm4/Mm6) as coming from an authorized MEC
platform manager if not in the same trust domain (e.g. co-located), or an authorized MEC
orchestrator.
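The least-privilege and RBAC practices above can be sketched as a simple role-to-permission mapping with deny-by-default semantics. The roles, Mm interface names and operation names below are illustrative, not taken from the ETSI MEC specification:

```python
# Sketch of least-privilege RBAC enforcement on MEC management
# interfaces: each authenticated caller maps to a role, and each role
# to the minimal set of (interface, operation) pairs it may invoke.
# Anything not explicitly granted is denied by default.

ROLE_PERMISSIONS = {
    "mec-orchestrator": {("Mm2", "configure-platform"),
                         ("Mm3", "manage-app-lcm")},
    "mec-platform-manager": {("Mm5", "configure-rules"),
                             ("Mm6", "manage-vnf-lcm")},
    "oss": {("Mm2", "configure-platform")},
}

def authorize(caller_role: str, interface: str, operation: str) -> bool:
    """Allow only operations explicitly granted to the caller's role."""
    return (interface, operation) in ROLE_PERMISSIONS.get(caller_role, set())

# An authenticated orchestrator may drive application lifecycle management...
assert authorize("mec-orchestrator", "Mm3", "manage-app-lcm")
# ...but an OSS may not, and unknown roles are denied by default.
assert not authorize("oss", "Mm3", "manage-app-lcm")
assert not authorize("device-app", "Mm2", "configure-platform")
```

In a real deployment the role would be bound to the strongly authenticated identity (e.g., the X.509 certificate subject mentioned above), so that authentication and authorization are enforced together on each management request.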
The MEC hosts must be secured according to best practices of server security and
virtualization infrastructure security.
• NFV recommendations: for MEC systems based on the NFV architecture and
running sensitive workloads, the ETSI NFV-SEC 003 specification 98 defines
specific security requirements for isolation of such workloads (e.g. security
functions) from non-sensitive ones and describes different technologies to
enhance the security of the host system (e.g. MEC host) in this regard: system
hardening techniques, system-level authentication and access control, physical
controls, communications security, software integrity protection, Trusted
Execution Environments, Hardware Security Modules, etc.
• MEC-specific recommendations: the MEC platform should strongly authenticate
requests on its Mm5 interface as coming from an authorized MEC platform
manager. Similarly, the virtualisation infrastructure should strongly authenticate
requests on its Mm7 interface to make sure each one is a valid request from an
authorized VIM. Furthermore, inside the MEC host, isolation of both resources
and data must be guaranteed between the MEC apps, since they may belong to
different tenants, users, or network slices in a 5G context. In particular, the MEC
platform is shared by the various MEC apps and must therefore use fine-grained
access control mechanisms to guarantee such isolation, i.e., allow a given MEC app
to access only the services and information it has been authorized for.
At the MEC system level, the MEC orchestrator is not only critical because it has
privileged access to the MEC platform manager and VIM, but also because it is
particularly exposed to end-user devices via the User app Life Cycle Management proxy.
Indeed, this proxy allows device applications to create and terminate (and possibly more)
user applications in the MEC system, via the MEC orchestrator.
Besides the threats identified by ENISA 99, the ETSI MEC 002 specification 100 states
several security requirements in section 8.1:
• [Security-01] The MEC system shall provide a secure environment for running
services for the following actors: the user, the network operator, the third-party
application provider, the application developer, the content provider, and the
platform vendor.
• [Security-02] The MEC platform shall only provide a MEC application with the
information for which the application is authorized.
99 https://www.enisa.europa.eu/publications/enisa-threat-landscape-for-5g-networks
100 ETSI GS MEC 002 v2.1.1 (Phase 2 : Use cases and requirements)
101 NGMN Alliance: "5G White Paper", February 2015
102 ETSI GR MEC 024 v2.1.1 (Support for network slicing)
103 ETSI GS NFV-SEC 013 V3.1.1 (Security management and monitoring specification)
Both meet the cost effectiveness needed at the Edge. Two emerging techniques compete
to address security, limited storage requirements and instant payload start-up: on the one
hand, lightweight hardware-level virtualization (aka lightweight virtual machines),
embedding a bare minimal guest kernel; on the other hand, operating-system-level
virtualization (aka containers). Both technologies are backed by intense research and
industrial deployment by IT leaders (Intel, IBM, Amazon, Google), resulting from
internal developments and first production deployments. Amazon and Google are already
exploiting these technologies in their live operations to improve security, running costs
and quality of service.
The relative strengths of the two techniques are generally accepted as follows: VMs bring
stronger process isolation and deployment flexibility, but at higher memory cost (i.e.,
replication of a feature-rich guest operating system in each VM), and are much slower to
start. Lightweight virtualization designs (such as Amazon's Firecracker) aim at
maintaining the security advantage while significantly thinning out the above-mentioned
drawbacks, although some of the flexibility advantage is lost too, as the guest OS is
reduced, optimized and unique.
Often regarded as less secure, containers have recently seen improvements aimed at
enhancing security and process isolation, to bridge the security gap with virtualization.
Linux container isolation has been significantly improved in the recent past with new
frameworks (see below), all building on the same core Linux OS container security
enablers (cgroups, namespaces, seccomp, ...).
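These kernel primitives are directly observable: every process's namespace membership is exposed under `/proc/<pid>/ns`, which container runtimes manipulate to isolate workloads. A minimal, Linux-only inspection sketch (returning an empty dict on systems without `/proc`) could look like:

```python
import os

def list_namespaces(pid: str = "self") -> dict:
    """Return the namespace links of a process, e.g. {'pid': 'pid:[4026531836]', ...}.

    Each entry under /proc/<pid>/ns is a symlink whose target names the
    namespace type and its inode; two processes sharing a namespace see the
    same target. Returns {} on systems without procfs (e.g. non-Linux).
    """
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}
```

Comparing the output for two processes shows which isolation boundaries (mount, PID, network, ...) a container runtime has actually put in place.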
For the interested reader, four initiatives are likely to pave the way for the future of
(Edge Computing) virtualization: IBM Nabla containers, Google gVisor containers,
Amazon's Firecracker lightweight VMs and the OpenStack Foundation's Kata
lightweight VMs.
IBM researcher James Bottomley reached an atypical conclusion (versus the commonly
accepted opinion), discerning from his research that containers are more secure than
VMs. Simply put, he estimates the number of lines of kernel code (assumed to have a
linear relationship with the number of vulnerabilities residing there) that interact with the
payload. The container engine (a kernel module that interacts with all containers)
exposes less code than a VM hypervisor combined with the full OS code resident in each
VM. An extra benefit is that if the container engine is found vulnerable, its replacement
directly benefits all supported containers without requiring any changes to the containers
themselves. This contrasts with a flawed VM hypervisor, which in the majority of cases
entails the replacement of all guest OSes. This quantitative approach has the merit of
shedding light on potential kernel code vulnerabilities and the much larger size of virtual
machine kernel code. However, a complementary qualitative approach would be
beneficial to evaluate the security gains brought by hardware-based Intel Virtualization
Technology (or its AMD equivalent), as well as the gains brought by the barrier erected
by the guest OS of VMs, creating a walled garden for the attacker.
[Figure: three isolation approaches: isolation through a userspace kernel sandboxing the
container, isolation through lightweight MicroVMs, and isolation through a lightweight
MicroVM wrapping the container.]
IBM Nabla and Google's gVisor are two similar container technologies, offered for
enhanced container security. Both add userspace kernel code to sandbox the container's
system calls (relying on the seccomp functionality). This code handles most of the system
calls inside the container, so that the system calls still reaching the host OS are limited in
type and quantity. Both technologies need their specific runtime module (runnc and runsc,
respectively) to be installed on the machine.
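The filtering idea behind these userspace kernels can be illustrated with a deliberately simplified, conceptual sketch: only "system calls" on an allowlist reach the (simulated) host, and everything else is denied. Real gVisor/Nabla deployments implement this with a userspace kernel plus seccomp BPF filters; the names below are illustrative only:

```python
# Conceptual simulation of a seccomp-style allowlist. This is NOT how gVisor
# or Nabla are implemented; it only illustrates limiting host-bound syscalls
# in type and quantity.

ALLOWED = {"read", "write", "exit"}

class SyscallDenied(Exception):
    """Raised when a simulated syscall is not on the allowlist."""

def guarded_syscall(name: str, handler=lambda n: f"host handled {n}"):
    """Forward an allowed 'syscall' to the host handler; deny everything else."""
    if name not in ALLOWED:
        raise SyscallDenied(name)
    return handler(name)
```

The smaller the allowlist, the smaller the host kernel attack surface exposed to a compromised container, which is precisely the security argument for these runtimes.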
Amazon Firecracker and the OpenStack Foundation's Kata are two similar lightweight
VM technologies, delivering a feature-restricted, agile guest OS for instant start-up and
low footprint. Both are developed in a different language for security reasons, and both
can be used for direct applications or containerized applications. Both are derived from
or directly use the KVM hypervisor and leverage Intel VT hardware virtualization.
[Figure: remote attestation of a target platform: a remote trust verifier performs
attestation of the VM (via a vTPM exposed by the hypervisor) and attestation of the
hypervisor (via the compute node's TPM), runs an integrity assessment, and returns the
verification result.]
3GPP's adoption of the Service Based Architecture (SBA) and the microservices
approach for 5G networks has generated a lot of interest in container technology (e.g.,
Docker), mainly owing to its efficiency in resource demand and instantiation speed.
Precisely because this lightweight virtualization technology shares kernel functions, its
security exposure demands technologies to provide trust. Initiatives are already in
progress to extend remote attestation to container technology to address this trust
problem. One of the most attractive aspects of remote attestation technology is that it is
based on the TPM standard (currently in version 2), led by the Trusted Computing Group,
and does not depend on proprietary implementations such as Intel SGX enclaves or AMD
TrustZone.
Software- and hardware-based attestation: the difference
• A software solution can provide an authentication service. Before starting a process, a
call is made to a verification routine which computes the hash, decrypts the signature
(associated with the code package) and compares the two. Tampered code will not
launch, unless the attacker also strips out the authentication routine. It is a first layer
of security.
• TPM-based authentication prevents such tampering and, in addition, creates a secure
communication channel to safely deliver to a remote place (the security management
location) unalterable evidence (using a Diffie-Hellman-based key agreement protocol)
that the code is original. TPM-based attestation delivers more security locally, plus
remote evidence of code correctness.
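The software-based check in the first bullet reduces to a measure-and-compare step before launch. The sketch below simplifies by standing in a precomputed reference hash for the value a real system would recover by decrypting the vendor's signature; the sample code bytes are illustrative:

```python
import hashlib

# Simplified software attestation sketch. A real implementation would decrypt
# a signature with the vendor's public key to obtain the reference hash; here
# the reference hash stands in for that decrypted value.

def measure(code: bytes) -> str:
    """Compute the SHA-256 measurement of a code package."""
    return hashlib.sha256(code).hexdigest()

def verify_before_launch(code: bytes, reference_hash: str) -> bool:
    """Refuse to launch code whose measurement does not match the reference."""
    return measure(code) == reference_hash

GOOD = b"print('hello edge')"          # illustrative payload
REF = hashlib.sha256(GOOD).hexdigest()  # stands in for the signed reference
```

As the second bullet notes, the weakness of this purely software check is that the routine itself can be stripped; a TPM anchors the same measurement in hardware and can report it remotely.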
While TEEs are strong security enablers to consider, there are strong operational obstacles
to putting them into practice. These relate to the performance overhead, setup effort,
compilation requirements and the need for source-level code changes. Most importantly,
TEE technologies are not compatible with one another. TEE-enabled software deployment
must be done carefully (on the targeted processors only): an Intel TEE-enabled VNF will
not run on an AMD board (TEE-enabled or not).
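This incompatibility makes TEE-aware placement mandatory: a TEE-enabled VNF must only be scheduled onto hosts exposing the matching CPU feature. A minimal scheduler filter might look like the sketch below, where the flag names follow Linux /proc/cpuinfo conventions ("sgx" for Intel SGX, "sev" for AMD SEV) and the host inventories are hypothetical:

```python
# Hedged sketch: filter candidate hosts by required TEE capability before
# placing a TEE-enabled VNF. Host names and flag sets are illustrative.

HOSTS = {
    "node-intel-1": {"sse4_2", "vmx", "sgx"},   # Intel host with SGX
    "node-amd-1":   {"sse4_2", "svm", "sev"},   # AMD host with SEV
    "node-plain-1": {"sse4_2", "vmx"},          # no TEE support
}

def eligible_hosts(required_tee: str) -> list:
    """Return the sorted list of hosts exposing the required TEE feature flag."""
    return sorted(h for h, flags in HOSTS.items() if required_tee in flags)
```

A real orchestrator would feed this filter from node feature discovery rather than a static table, but the placement constraint is the same.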
[Figure: a TEE alongside the normal OS: trusted applications run on a trusted kernel,
supported by secure provisioning, secure attestation and secure storage, all anchored in a
root of trust.]
Each processor vendor has its own definition, with only some overlap on a restricted
functional area from one solution to another.
When considering SDN-NFV (and the 5G core network and Edge Computing), Intel
SGX104 and AMD SME-SEV105 are the first two TEEs to consider, as they are brought to
standard x86 architecture processors, which capture the entirety or a very large share (at
the time of writing) of the cloud blade market. In its comparison of SGX and SEV, Wayne
State University106 and their presentation at HASP, June 2018107 reflected two diverging
approaches which rely on two opposing architectural designs. Intel SGX is depicted as a
means to secure a small payload, which should preferably be an extracted, reduced part
of a larger code base, whereas SEV is a whole-VM encryption with no code extraction or
selection to be made. Moreover, Intel's SGX interacts with user code (ring 3) while SEV
operates at ring 0. Whereas SGX imposes code changes (typically to remove all system
calls) and a recompilation through Intel's SGX user SDK, SEV is totally transparent to
the payload. Reading these elements, it is difficult to imagine more diverging techniques.
In all respects (required code changes, size of the Trusted Computing Base, from a single
security-sensitive function to a complete VM with its operating system, and the security
guarantees offered), SGX and SEV differ.
Because Intel SGX enclave implementation is relatively complex (and relatively
intimidating for a general-purpose developer with no particular security expertise),
several frameworks have emerged, such as Panoply, SCONE and SGX-LKL. These
frameworks simplify the setup workflow, all sharing the same design idea of placing a
micro-kernel inside the SGX enclave to limit and control all interactions with the external
world. The motivation is to spare the developer all the work related to system calls, as
they are not permitted inside the TEE. The frameworks also remove the burden of
selecting the correct section of code, as the complete application is placed inside the
enclave. However, the overhead is at least 30%. From a pure security point of view, these
frameworks deviate from Intel's recommendation of the smallest possible TCB (i.e., the
code inside the TEE), as they insert not only a complete, untouched application but also
an associated micro-kernel. They thereby expose a large flank to vulnerability
exploitation.
104 https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html
105 https://developer.amd.com/sev/
106 https://caslab.csl.yale.edu/workshops/hasp2018/HASP18_a9-mofrad_slides.pdf
107 http://webpages.eng.wayne.edu/~fy8421/paper/sgxsev-hasp18.pdf
Before describing the actor role model, it is important to highlight some fundamental
business modelling concepts:
• A stakeholder is a party that holds an interest in the Edge Computing and
5G-and-beyond ecosystem.
• An actor is a party that consumes services or contributes to the service
provisioning.
• An actor role is a specific, well-defined function performed by an actor. An actor
may perform multiple actor roles, while an actor role can be adopted by several
actors.
• A business relationship is an association or interaction between two actor roles.
Figure 20 presents the main actor roles involved in Edge Computing-enabled services
provisioning. The actor roles (blue rectangles) are grouped into “actor role clusters”
(dotted rectangles) of several colors, while the potential business relationships are
identified with blue arrows. Solid arrows reflect the money flow, while open arrows the
service flow.
108 5G-VINNI report D5.1 “Ecosystem analysis and specification of B&E KPIs”, June 2019.
109 3GPP TR 28.801. Telecommunications management; Study on management and orchestration of network slicing
for next generation networks.
Figure 20: Actor Role Model for the Edge Computing Ecosystem
We first introduce three value chain scenarios where one actor controls the customer
relationship and supply chain. Next, we introduce two collaborative ecosystem scenarios
where the actors are inter-dependent in their value creation and supply, and the customer
interface and operation are many-faceted. The scenarios serve to illustrate how Edge
Computing affects the possible evolution of market dynamics, ways of organizing
services and roles, and how current actors may position themselves in those roles; in
future markets the different introduced scenarios may exist in parallel.
110 Operator Platform Concept, Phase 1: Edge Cloud Computing, GSMA, January 2020.
We also assume that all three key actors, i.e., MNO A, Hyperscaler A, and Local Cloud
Provider A, maintain a physical infrastructure. In particular, each of them maintains its
own physical infrastructure (i.e., datacenter resources) deployed at location(s) owned
either by itself, by another key actor or by Venue Owner A. Given that MNOs traditionally
interact with venue owners in order to deploy their equipment in the appropriate
geographic locations and structures, MNO A naturally adopts the role of Venue
Aggregator and thus has access to and control over multiple venues. Taking advantage of
this business opportunity, we assume that MNO A serves as facilitator for the deployment
of the other key actors' infrastructure in the venues it controls, of course with the
appropriate charging. All three key actors play the role of Infrastructure Aggregator, i.e.,
they aggregate physical resources that may belong to different Infrastructure Operators.
In the Virtual Infrastructure layer, we assume that all key actors adopt the VISP role, that
is, they build and operate virtualized infrastructure over the physical resources they
control. When it comes to the aggregation of virtualized infrastructure that may be located
in different geographic regions, MNO A, which also plays the Network Operator role,
again has a competitive advantage, such as an existing presence in multiple locations
along with transport network infrastructure already in place. In this scenario, we assume
that MNO A exploits this competitive advantage to take over the VIA role and serve as
the intermediary between the Service and Virtual Infrastructure layers, in fact
“displacing” Hyperscaler A and Local Cloud Provider A from the Service layer.
MNO A controls the Service layer by adopting the SA role. That is, we assume that a
strength of the MNO A is that it operates a global platform, where Edge Computing-
enabled services are offered to the Enterprise Customer A (having the VSC role). The
global reach and coverage are achieved by anticipated future federation and
interconnection among partner MNOs. Note that the services offered by the SA may
include Edge-provided applications developed by Application Provider A and
communication services (i.e., network slices) provisioned by MNO A. Hence, the service
We now assume that Hyperscaler A is more aggressive at the Venue layer, by also
adopting the Venue Aggregator role. This means that Hyperscaler A can now also
aggregate venues from different Venue Owners and then provide collocation rights to
other actors over multiple locations. However, as discussed in Scenario 1, MNOs have a
competitive advantage when it comes to interaction with venue owners, so we expect
that both MNO A and Hyperscaler A will remain active in this role. Thus, competition
between these two actors may arise when it comes to venue aggregation.
At the Service layer, the SA role is now performed by Hyperscaler A, leveraging the
Hyperscalers' experience in offering self-service cloud services to end-customers. In
general, we foresee that Hyperscalers will push towards the adoption of a platform/service
model similar to traditional cloud computing services. MNO A still contributes to and
complements the Service layer/platform through the CSP role and by offering network
slices (potentially across domains) that enable Edge-provisioned services/applications in
UEs on its network. However, MNO A does not directly interact with the Enterprise
Customer A.
We assume that Local Cloud Provider A adopts the Venue Aggregator role, aggregating
multiple venues in a specific geographic region by having agreements with multiple local
venue owners. Then, Local Cloud Provider A could also offer collocation rights to other
actors. However, MNO A and Hyperscaler A still maintain the role of Venue Aggregator
covering multiple regions. Thus, competition over venue aggregation may arise both at a
global and a local level. We also assume that Local Cloud Provider A takes over the VIA
role, aggregating and operating virtualised infrastructure at a local level. In such a
scenario, Local Cloud Provider A could take control over MNO A's and Hyperscaler A's
virtualised infrastructure in a certain region.
We foresee a potential for local platforms operated by local/regional Cloud Operators to
emerge. Hence, we assume that Local Cloud Operator A is actively involved at the
Service layer, operating a local platform that utilizes Edge-provisioned applications
provided by Application Provider A and communication services provided by MNO A.
Nevertheless, it may be difficult for a local platform to attract DSPs, and when it comes
to services that involve multiple locations, collaboration with other actors should be
established.
Figure 23: Local Cloud Provider A maintains the prominent position, which is only possible at the
local level.
A scenario where MNO A collaborates with Local Cloud Provider A could also make
sense, if local MNOs succeed in joining forces with local Cloud Operators and take
advantage of local presence in customer relationships. Also, Hyperscaler A and Local
Cloud Provider A can be complementary and join forces to serve local customers;
however, they would always be dependent on the contribution of MNO A for
applications/services that require communication services to be established.
Figure 25 illustrates an example where MNO A, Hyperscaler A and Local Cloud Provider
A all play the roles of Venue Aggregator, Infrastructure Aggregator and Virtualized
Infrastructure Aggregator. This means that none of the key actors follows an aggressive
strategy taking full control of the customer relationships or of one central platform. This
leaves space for all actors to enter the market and address customers with LNaaS, provide
resources, and pursue collaborations.
In the Service layer, apart from the three key actors, we assume that Consultation Service
Provider A may also adopt the SA role, serving mostly as a reseller towards the Enterprise
Customer A, while there is also a potential for Application Provider A to play the SA role
and be the contact point for the Enterprise Customer A.
Figure 25: Fully collaborative scenario, where multiple aggregator roles are shared among the
different actors.
While the above scenarios are largely motivated by the ability and strengths of these
actors to cater for and effectively enable customer management and service support, the
bullet points below introduce and discuss complementary factors and actors that can have
a large impact on the Edge cloud ecosystem.
• Telco Network Equipment Providers (e.g., Ericsson, Nokia, Huawei, etc.) offering
managed network and cloud services to the MNOs and Telcos. These actors have
the opportunity to leverage their managed services for managing operator
networks and evolve these capabilities into managed services for managing Edge
cloud infrastructures and general services.
• IT solution providers (e.g. IBM, HPE, Dell, Oracle, etc.) offer cloud services,
solutions and/or support services that have the potential to impact the ecosystem.
These actors are strong in the enterprise office IT and software market, and we
see their solutions addressing the emerging Edge cloud space.
Both of the above types of actors can help either MNOs/Telcos or Local Cloud (IT and
hosting) providers in strengthening their position towards the vertical enterprise
customers. However, the fact that Telco NEPs currently collaborate with MNOs/Telcos,
whereas the global IT providers have a strong relationship with the Local Cloud (IT and
hosting) providers, can influence how these business relationships will evolve.
The venue location, condition and context of the Edge datacentre will also play a key role.
Three categories of venues have distinct properties and contexts and will require separate
analysis of strengths and conditions from a multi-actor point of view:
i) MNO RAN and base station venue
ii) Enterprise office indoor venue (enterprise office park), and
iii) Industry or factory venue
While i) facilitates and is part of the public network, iii) on the other hand is focused on
non-public networks, while ii) in particular is required to facilitate a mix of public and
private logical networks. In particular, iii) will support industrial, production and
operational technologies, solutions and networks, which typically include and rely on
strict time-sensitive and deterministic networking, which in turn sets strict requirements
on 5G and beyond services.
In the area of the Factory of the Future (FoF) / Industry 4.0, there are large global players
in industrial equipment and solutions that will play an important role in the establishment
of industrial, indoor and non-public Edge Computing (technology orientation) and Edge
cloud (service orientation) solutions. The importance of time-sensitive networking
indicates that both these industry players, as well as Telcos putting emphasis on these
capabilities, can become important players in this field. The new OT networks enabled
by 5G and adjacent technologies, and the new operator dashboards needed for such a
next-generation OT industry, appear to be a key area of technology and business
development.
Looking further up the value stack, to the upper part of the general Actor Role Model
for the Edge Computing Ecosystem, and addressing again the SA role, one may argue
that this will not simply be a role played by a single player. One may speak of multiple
roles needed in this area, both addressing the aggregation of general Edge Cloud Services
and, even more certainly, a need for specific Vertical Service Aggregator players to
address the dedicated needs of the specific verticals.
However, along with the need for collaboration, cooperation, and industry development,
we expect alliances or multi-actor partnerships to appear. While the global hyperscalers
have the strength to drive such partnerships individually and might shape how they
collaborate with Telcos and other players, traditionally local Telcos will need a more
collaborative and structured approach to establish one strong alliance to enter the global
scene. Again, in the field of Industry 4.0, the preferences and choices of the large and
global players within the device, solutions and applications industry will have a large
influence on the evolution of multi-actor partnerships and alliances. The traditional
strengths of Telcos in enabling interconnection, and their potential strength and ability to
develop future-oriented collaborative solutions (and offerings based on global reach and
standardized global solutions), again imply uncertainty in how the various ecosystems
and business models around the Edge cloud and the verticals will evolve.
Initiatives driven by the public side should also be noted. Recently, the European
Commission sent out a press release welcoming the political intention expressed by all
27 Member States on the next generation cloud for Europe. It is pointed out that "Cloud
computing enables data-driven innovation and emerging technologies, such as 5G/6G,
artificial intelligence and Internet of Things. It allows European businesses and the
public sector to run and store their data safely, according to European rules and
standards."115 Alongside the expression of these goals, we also recognize the GAIA-X
initiative, driven first by France and Germany, which wants to create the next generation
of data infrastructure for Europe, its states, its companies and its citizens116. Other
initiatives, for instance the one driven by the BDVA association117, are shaping the
convergence of Data, AI and Robotics in the networks of the future, where Edge
capabilities will play a pivotal role and will be instrumental in achieving a smooth
integration of the different technologies.
111 https://stlpartners.com/research/telco-edge-computing-how-to-partner-with-hyperscalers/
112 Operator Platform Concept, Phase 1: Edge Cloud Computing, GSMA, January 2020.
113 Operator Platform Telco Edge Proposal, Version 1.0, GSMA Whitepaper, 22 October 2020
114 Telco Edge Cloud: Edge Service Description and Commercial Principles, GSMA Whitepaper, October 2020
115 https://ec.europa.eu/digital-single-market/en/news/towards-next-generation-cloud-europe
116 https://www.data-infrastructure.eu/GAIAX/Redaktion/EN/FAQ/faq-projekt-gaia-x.html?cms_artId=1825136
117 https://www.bdva.eu/.
the industry and support the EU’s vision by building a pan-European cloud “federation”
of interconnected cloud capabilities.118 Furthermore, ETNO underlines – “A resilient,
efficient digital infrastructure is the necessary backbone of any trusted data sharing
architecture. Cloud infrastructure will need widespread 5G and fibre networks that
support data processing closer to the user, including edge computing. … European
telecom companies have a key role in investing and operating edge computing
capabilities over their networks. This will offer a major alternative to the centralised
cloud computing model operated by Big Tech.”.
Evidently, the industry will see increased investments in the years ahead in cloud
solutions in general, and more specifically in enabling solutions for Edge Computing.
This whitepaper provides a holistic overview of all the technical topics to consider, and
insights into the maturity and evolution of the different technical areas. The assessment
of technical maturity, and judgements on what the smartest technical roadmap will be,
are crucial topics to be analysed in order to drive and settle the business-level decisions,
action plans and agreements needed in the years ahead.
118 https://etno.eu/news/all-news/683:eu-telcos-welcome-cloud-declaration.html
In order to provide the right context, we start by summarizing in Section 5.1 the main use
cases addressed by those research projects. For more details, one can refer to the
deliverables and project websites listed in Annex 1. In addition, this section provides a
project taxonomy/clustering according to the key functionalities deployed at the Edge
(e.g., AR/VR/video processing/analytics, 4G/5G core functionalities) for the various use
cases. In subsequent sections, we analyze the specific implementations carried out by the
projects in terms of the type of Edge Computing infrastructure deployed (e.g., ETSI MEC,
CORD-like; Section 5.2), the location of such computing infrastructure (e.g., on-premise,
street cabinets; Section 5.3), the technologies used (e.g., server type, acceleration
technology; Section 5.4), and the applications/VNFs hosted at the Edge (vEPC, vertical
applications; Section 5.5). Each section reports details at the project level and discusses
the rationale behind technological decisions. For each section, we also provide a brief
analysis of the survey results.
[Figure content: the 5G PPP projects (5G Transformer, 5G Mobix, 5G CroCo, 5G
Carmen, 5G Heart, 5Growth, 5G Zorro, 5G Eve, 5Genesis, 5G!Drones, 5G Victori, 5G
Vinni, SaT5G, MonB5G, 5G Dive, 5G Picture, Slicenet) grouped into clusters such as
geo-dependent computation, IoT GW/data management, autonomous edge, multi-link
aggregation, vRAN at edge and AI at edge.]
Figure 26: Clustering of projects according to the specific key components in their respective use
cases.
The use case clustering revolves around the following nine key functionalities:
• AR/VR/Video processing/analytics and caching: Any kind of video processing or
caching performed at the Edge, with the aim of faster AR/VR computation,
reduction of backhaul load, or other kinds of video-related processing requiring
low latency.
• Low latency computation: Non-video applications located at the Edge in order to
reduce the latency between the user and the application server.
• 4G/5G-Core functionalities at edge (e.g., PGW, UPF): Hosting at the Edge parts
(typically, from the data plane) of the 4G or 5G core functions.
• IoT GW/Data management: Virtualized versions of IoT GWs hosted at the Edge
as a mechanism to reduce load or to pre-process data.
• Geo-dependent computations: Championed by the automotive scenarios, this
cluster includes the use cases which place functions at the Edge to serve a certain
geographical region.
• Multi-link aggregation: The Edge as an aggregation point where multiple
technologies can be used to connect to the core network.
• Autonomous Edge: The Edge as a mechanism to operate with low or non-existing
backhaul, therefore typically hosting core functions to work in an autonomous
way.
• AI functions at Edge: the Edge used to run AI functions leveraging contextual
information available in the vicinity of the user.
• Virtual RAN (vRAN) at Edge: The Edge as a hosting platform for Virtual RAN
functions.
Table 1 presents the use cases that are being considered by each of the 17 projects. As
can be seen, the 5G PPP projects have used Edge Computing solutions in multiple
vertical sectors (e.g., Automotive and Transport, Industry 4.0, eHealth, smart cities,
energy, etc.). This is to be expected, as Edge Computing is identified as one of the most
promising solutions to meet the vertical requirements (e.g., reduced delay).
The SliceNet120 infrastructure is fully compliant with the ETSI MEC specifications and
has been used in an ETSI PoC. This framework manages E2E network slicing across all
the different network segments of the infrastructure, namely: (i) the enterprise network
segment, where final users and vertical businesses are located; and (ii) the RAN segment,
providing coverage to final users via the RAN fronthaul interface. Edge Computing
comprises physical devices located between the RAN and the datacenter. The Edge
Computing segment is connected to the RAN via the backhaul interface and to the
datacenter network segment via the transport network segment. Both the Edge and
datacenter locations support virtualization and containerization, and they are controlled
via a logically centralized management framework making use of multi-zone support
capabilities to decide where to deploy and migrate virtual resources. On top of this
infrastructure, the project deploys softwarized 5G architectural components as services
both at the Edge and datacenter
locations. Usually, 5G Core VNFs are deployed at the datacenter, and both 5G RAN VNF
services and ETSI MEC VNF services at the Edge.
119 http://5g-transformer.eu
120 https://slicenet.eu
Even if both RAN and MEC VNFs are deployed at the Edge, they create a logical function
chain in which the traffic going from the RAN to the core goes through the MEC nodes,
which act as a monitoring and control point for low-latency optimizations.
In 5G-PICTURE122, the emulated MEC solution had two main requirements: (i) low
latency between devices for the AR/VR application; and (ii) the creation of
high-throughput traffic between the nodes, to demonstrate the FPGA-based Time Shared
Optical Network (TSON) used to aggregate fronthaul and backhaul at the Edge of the
network and further distribute the links back to the central cloud network datacenter.
Based on these requirements, and due to the lack of ETSI MEC availability, an emulated
MEC solution was implemented in the test network. Different services and software
components of the use cases were deployed at the Edge and in the central cloud
datacenter, similar to a Fog architecture yet not compliant with any existing standard.
This solution successfully provided the project with low-latency communication between
UEs and compute resources, while also preventing saturation of the backhaul link
capacity when transferring raw video streams that were later used for analytics purposes.
121 https://www.sat5g-project.eu
122 https://www.5g-picture-project.eu
123 https://www.5g-eve.eu
5G-VINNI124 does not restrict itself to any specific Edge infrastructure type. As an
ICT-17 project, each test facility has the freedom to include Edge infrastructure or not,
and to implement it in the way that suits its targeted experimentation and intent.
5G-VINNI Architecture v1 (D1.1) included a Research Item on the Edge, which builds
on ETSI MEC principles but does not mandate that basis. In 5G-VINNI Architecture v2
(D1.4), a more prescriptive definition of the Edge implementation will be provided, but
this will again be optional at a test facility and will not mandate any specific approach.
5G-VINNI takes 3GPP work as its basis, and notes that 3GPP TS 23.501 includes MEC
natively within the 5G architecture, in particular by allowing the UPF to be distributed.
In 5G-VINNI D1.4, work in SA2, SA6 EDGEAPP and ETSI MEC will be considered.
124 https://www.5g-vinni.eu
125 https://5genesis.eu/.
easily to Local Applications (for Local Break Out) and to the PDN using simple
IP routing.
CORD supports both approaches, as ETSI MEC SW stack and EPC can both be deployed
at the Edge Computing infrastructure.
The 5GENESIS Malaga Platform has chosen the second option, because a consortium partner is an EPC provider. Moreover, a great variety of vendors and open-source solutions exist for EPCs, while only a small number of entities, mainly in the commercial space, can provide ETSI MEC SW stacks.
Similarly, in the Athens 5GENESIS platform, COSMOTE is the provider of the Edge
Computing infrastructure following the ETSI MEC approach. COSMOTE operates a
hybrid 4G/NSA 5G/MEC testbed complemented with an OpenStack-based SDN/NFV cloud infrastructure, with two flavors of MEC implementation: (i) via a second SPGW, and (ii) via SGW-LBO.
The trial site in Barcelona, operated by CTTC, I2CAT, Nextworks, and Worldsensing, is also built upon well-accepted open-source solutions, including OpenStack and ETSI Open Source MANO combined with the 5GCity Neutral Hosting Platform, SONATA, and the Service Orchestrator and Multi-domain Orchestrator for managing E2E network slice deployments across the target core and Edge domains.
The Edge positions selected are close to the gNBs in order to satisfy the strict latency criteria of the CCAM use cases. The main project focus is to analyse cross-border scenarios from a cellular network mobility viewpoint. Further, given the status of the standardization work defining the related 3GPP/MEC mobility concepts, the deployment of a unified Edge Computing platform was not considered feasible or a priority. Those aspects were discussed in the project set-up phase, and the design of the project was decided accordingly.
126 https://5gcroco.eu
127 https://www.astazero.com/the-test-site/about. Active Safety Test Area and Zero (AstaZero).
128 https://www.5g-mobix.com
At the Dutch 5Groningen platform, the M-CORD-like type of Edge Computing was chosen mainly due to the use of commodity hardware and open-source software, and the communities behind the open-source projects. The openness allows for modularity and choice between different components depending on use case needs, as well as easier switching between the choices made.
The 5G!Drones130 project is an ICT-19 (trial) project that aims at conducting trials involving drones on two of the ICT-17 trial facilities, namely 5G-EVE and the Athens Platform of 5GENESIS. The consortium also plans to experiment with drone usage and measure relevant KPIs on other 5G testbeds, 5GTN and X-Network, based in Finland.
Implementations of Edge at 5G-EVE follow ETSI MEC specifications compliant with
the 3GPP architecture. The 5GENESIS Athens Platform integrates Edge Computing
infrastructure in various locations within its topology, for the deployment of Edge
applications and Network Service components. More specifically, for the 5G!Drones trials, two Edge Computing deployments of 5GENESIS have been exploited: the first one is based on the NCSR Demokritos 5G Amarisoft solution, enhanced with lightweight Edge Computing capabilities and deployed at the Egaleo Stadium, while the second MEC deployment that supported 5G!Drones trials is operated at the COSMOTE Academy campus and is based on production-grade equipment. The 5GTN infrastructure uses Nokia vMEC, based on ETSI MEC. Finally, X-Network (ETSI MEC and Fog computing), provided by Aalto University, is composed of an ETSI-compliant MEC platform developed by Nokia and a set of Fog servers. Nokia vMEC was adopted due to its rich functionality and its compatibility with other Nokia products available in the same facility. Meanwhile, the Fog servers allow the deployment and trial of new functionalities not available in the closed-source Nokia vMEC (e.g., Edge service migration, container-based service orchestration).
5GROWTH131 considers applying a generic Edge Computing approach for the vertical pilots. The goal is to deliver traffic that requires low latency to the vertical applications running at the Edge, in order to comply with the low-latency requirements of the vertical services (e.g., Industry 4.0, railway transportation safety).
129 https://5gheart.org
130 https://5gdrones.eu
131 https://5growth.eu
5G-VICTORI132 architecture follows the ETSI NFV standards in order to provide the
required services and functionality such as network slicing. This is extended to the Edge
following the ETSI MEC principles. The Extended MEC (xMEC) hosting infrastructure
includes Edge Computing functionalities involving virtualized MEC computing,
networking and storage resources, with the MEC NFVI being its overlay. xMEC provides a set of VNFs, as well as access to communication, computing and storage resources, to service functions of multiple domains in an integrated fashion, and can accommodate all complex time-critical functions due to its physical proximity to the relevant network element. Therefore, the main drivers for choosing the ETSI MEC type of Edge architecture are: (a) compliance with the ETSI standards, and (b) provision of compute as well as networking VNFs.
132 https://www.5g-victori-project.eu
133 https://www.monb5g.eu
134 https://www.5gzorro.eu
135 https://5g-dive.eu
datacenters (DCs) on top, Edge datacenters (Edge DCs) in the middle, and Fog computing
devices (Fog CDs) that are available locally in the access area. Finally, the ITRI MEC prototype, called intelligent Mobile Edge Cloud (iMEC), will be integrated in the 5G-DIVE architecture and, later on, used for the on-site trial of the Autonomous Drone Scouting vertical pilot. In summary, the Edge concept of 5G-DIVE is an integration of ETSI MEC concepts into the OpenFog (now Industrial Internet Consortium) architecture.
[Figure: Edge architecture types adopted by the surveyed projects — Other: 11 (41%), ETSI-MEC: 9 (33%), Fog Computing: 4 (15%), CORD-like: 3 (11%)]
It is worth noting that out of the Phase 3 Infrastructure projects, 5G-EVE reported
Distributed Cloud as its Edge choice, 5G-VINNI reported that the project is Edge-type
agnostic, so any kind of Edge can be used, while 5GENESIS declared the use of a CORD-
like approach.
Finally, the prevalence of ETSI MEC over CORD and/or Fog approaches is clear in the
European projects that replied to the questionnaire.
Three different testbeds are deployed in SliceNet. The Smart City use case has as its Edge a micro-datacenter composed of only one or two nodes inside a cabinet located on top of the enterprise building. This cabinet houses both RAN and Edge equipment, and the antennas are installed directly next to the cabinet to provide coverage. The Smart Grid use case makes use of a Cloud-RAN deployment where the Edge and RAN are distributed across different locations connected by a 10-kilometer fiber cable. In this scenario, the Edge is composed of a micro-datacenter where both the 5G Centralized Unit (CU) and ETSI MEC are deployed, and it is located directly in the telco premises. The Smart Health use case is logically similar to the Smart Grid use case, with the only difference that the Edge location is physically installed in a street cabinet rather than in the telco premises.
The 5GUK Test Network deployed in 5G-PICTURE138 is hosted at locations within the Bristol City Centre, while the cloud network was placed at the University of Bristol Smart Internet Lab. The geographical spread of the nodes is within a couple of kilometers of each other, and since dark fiber is used for connectivity between sites, the location of the Edge servers made little difference to the latency observed in service delivery. For this reason, the MEC architecture was emulated with VMs spread, for convenience of power and space, across different locations, similar to a Fog deployment. For the Smart City Safety use case, the image processing was performed at the "We The Curious" hosting site's IT room, close to the end users receiving the output for monitoring purposes. It should be noted that this service was deployed as a Fog deployment, and not all functions were at the Edge of the network.
136 http://5g-transformer.eu
137 https://www.sat5g-project.eu
138 https://www.5g-picture-project.eu
139 https://www.5g-eve.eu
140 https://www.5g-vinni.eu
141 https://5genesis.eu
Office using MOCN to connect to both Cores, i.e. the Telefónica Commercial core and
the UMA Mobile core.
The Edge solution in 5G-MOBIX144 is deployed at a distributed site of the commercial network that aggregates the traffic from several radio sites and redirects it to the Core network. The system is deployed as a virtualized infrastructure comprising a full-fledged 5G EPC, with and without LBO (PGW-U) at the Edge.
The 5G!Drones146 project leverages 5G trial facilities for testing scenarios and evaluating KPIs involving drones. In these facilities, dedicated to testing and experimentation, the Edge infrastructures are deployed on premises for the following reasons:
• Availability of computing resources near the deployed eNBs/gNBs.
• Availability of dedicated high-performance transport network within the
facilities.
• Security concerns.
• Facilitating potential manual interventions.
In 5GROWTH147 Edge VNFs can be deployed either next to the access network
infrastructure offering coverage to the devices being served (e.g., base station, access
point) or within a private cloud/Edge infrastructure of the verticals at the vertical
premises. In the former case, it is mostly deployed at the base station/RAN infrastructure provided by the operators, shared between the private network of the vertical and the operator's public network, though it could also be deployed elsewhere in the operator
infrastructure as long as latency requirements are fulfilled (e.g., micro-datacenter). In the
latter case, the Edge infrastructure is a private cloud infrastructure belonging to the
verticals.
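The placement rule described above, deploying an Edge VNF only as close to the device as its latency budget requires, can be illustrated with a small sketch (the site names and latency figures below are purely illustrative assumptions, not values from the project):

```python
# Hypothetical sketch: pick the least-constrained hosting site that still
# meets a vertical service's end-to-end latency budget. Sites are listed
# from closest (lowest latency, scarcest resources) to farthest.
SITES = [
    ("vertical-premises-cloud", 1.0),    # illustrative one-way latency, ms
    ("base-station-edge", 2.0),
    ("operator-micro-datacenter", 8.0),
    ("central-cloud", 25.0),
]

def place_vnf(latency_budget_ms: float) -> str:
    """Return the farthest (least constrained) site that fits the budget."""
    best = None
    for site, latency in SITES:
        if latency <= latency_budget_ms:
            best = site  # keep the last (farthest) site that still fits
    if best is None:
        raise ValueError("no site satisfies the latency budget")
    return best
```

For instance, a 10 ms budget lands the VNF in the operator's micro-datacenter, while a 1.5 ms budget forces deployment at the vertical premises.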
The 5G VICTORI148 project comprises four different testbed facilities, i.e., 5G-VINNI
(Patras), 5GENESIS (Berlin), 5G-EVE (Alba Iulia) and 5G UK (Bristol). Each of these
facilities provides different capabilities. However, in general, each facility is equipped with an on-premise, private micro datacenter, hosted at the premises of the organization responsible for each testbed. In addition, street cabinets and base stations are used in
some of the facilities to host the Edge infrastructure even closer to the end users.
Specifically, in Patras and in Bristol, “pop-up networks-in-a-box” will be deployed at
certain locations, physically located inside street cabinets or on-prem IT rooms, which
will provide 5G RAN connected with a local micro datacenter.
146 https://5gdrones.eu
147 https://5growth.eu
148 https://www.5g-victori-project.eu
149 https://www.monb5g.eu
strictly focus on a single deployment option, but rather consider several of them in order
to support dynamic slice setup and reconfiguration in multiple scenarios.
Differently from MonB5G, in 5GZORRO150 two main types of locations are considered for the deployment of Edge infrastructure. The reasoning behind this selection is based on two main criteria: site availability and compatibility with the use cases' requirements. In particular, street cabinets and micro datacenters, with their inherently reduced Edge-grade capacities, are available as part of the smart-city IT infrastructure deployed in the 5GBarcelona facility. This deployment provides a minimal distributed Edge Computing ecosystem, where the presence of multiple stakeholders controlling different Edge resources is emulated in order to realize the considered use cases.
150 https://www.5gzorro.eu
151 https://5g-dive.eu
[Figure: Edge deployment locations reported by the surveyed projects — On premise: 10 (23%), Private Datacenter: 10 (23%), Base Station/RAN: 7 (16%), Micro Datacenter: 5 (11%), Street Cabinet: 4 (9%), Central Office: 3 (7%), Public Cloud: 2, Other: 2, Fog Devices: 1 (2%)]
The 5G-TRANSFORMER stack can also integrate any MANO project as long as a
wrapper is defined to translate between the ETSI-compliant specifications that the 5G-
TRANSFORMER service orchestrator uses and the APIs of the corresponding MANO
project. Due to functionality offered and community critical mass, two MANO platforms
were integrated, namely OSM and Cloudify.
As for the SLICENET project, it implements the same management plane for both Edge
and datacenter in all its deployments. This is significantly different from other proposals
where there is a complete management plane for the datacenter and another one for the
Edge. SLICENET testbeds are based on OSM over OpenStack with either OpenDaylight
or ONOS. The project also makes use of Kubernetes but mainly for the deployment of
the management functions. The Edge is then seen as a geographical area in the management plane. Such an area is composed of a set of x86 COTS servers that may have any type of acceleration. The Smart Grid use case requires GPU acceleration at the Edge to deal with Edge AI. The project also relies on FPGA and NPU acceleration in the network cards located in these servers. The 5G VNFs are then deployed as indicated in the previous question.
The focus of the SaT5G project is on satellite integration and not particularly on NFV/SDN development, although in the course of the project partners virtualized multiple satellite-specific functions. For this reason, partners preferred to use a platform which was not too demanding on computing facilities, straightforward to download and deploy, and with plenty of support on forums. Therefore, a decision was made to utilize OpenStack for the MEC part, although the project also used Docker containers orchestrated by Kubernetes for virtualizing some core network components. The project also aimed at developing desktop demos to prove specific principles. The demos are typically based on four Intel NUCs, each with an Intel i7 processor, 32 GB of RAM, and a 64 GB SSD. The devices are connected via a gigabit switch. The 4G base station and UE are connected via SDR (Software Defined Radio) boards, which are linked via cables instead of antennas. This was done so that the same frequencies as network operators use could be employed, while avoiding actual transmission of radio signals and their impact on commercial networks.
For the Smart City Demonstration, several x86 bare-metal servers were used in 5G-PICTURE to host the virtual infrastructure, to ensure compatibility and compute resource availability. OpenStack was widely used as the virtualization platform to host different VNFs, such as fundamental network services (DNS, DHCP, VPN, etc.) and the network services related to the use cases. OpenDayLight was used as the SDN controller in this network to offer compatibility with the switches and network resources in the testbed. Also, OSM was used as a domain orchestrator in this project to deploy different VNFs within the network.
Dissemination level: Public Page 75 / 96
5G PPP Technology Board Edge Computing for 5G Networks
The Time-Shared Optical Network (TSON) was used as a dynamic optical transport
network solution to provide high bandwidth and low latency connectivity between the
network edges and the datacenter. For this solution, an FPGA board from Xilinx (one of the consortium partners) was used to demonstrate the programmability of the TSON solution.
For the Stadium Demonstration, the project used x86 servers because they were suitable
for the compute requirements of the VNFs used during the demo and the different
controllers.
Pishahang is an NFV MANO framework that allows management and orchestration of NFV services across multiple VIM domains. A single service in Pishahang can contain VNFs that run on AWS, OpenStack and Kubernetes. This allows heterogeneous resources offered by different VIMs to be used for the same service. Unlike other MANO frameworks that run Kubernetes on a VM, the Kubernetes VIM of Pishahang runs on bare metal. This removes one layer of virtualization and improves the performance of the containers. Kubernetes was used for two main reasons: 1) it allows managing container-based VNFs, which have better performance compared to VM-based VNFs, and 2) it allows faster management and orchestration of NFV services compared to other solutions such as OpenStack.
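Pishahang's multi-VIM idea, a single service whose VNFs land on different VIMs, can be sketched as a simple dispatch from each VNF descriptor to the matching VIM driver (the driver names and descriptor fields below are illustrative assumptions, not Pishahang's actual API):

```python
# Illustrative multi-VIM dispatch: each VNF in a service names its VIM
# type, and the orchestrator routes the deployment request to the
# matching driver. Real drivers would call the VIM's API; these stubs
# just return a handle string.
def deploy_on_kubernetes(vnf): return f"k8s:{vnf['name']}"
def deploy_on_openstack(vnf): return f"os:{vnf['name']}"
def deploy_on_aws(vnf): return f"aws:{vnf['name']}"

VIM_DRIVERS = {
    "kubernetes": deploy_on_kubernetes,  # container-based VNFs, bare metal
    "openstack": deploy_on_openstack,    # VM-based VNFs
    "aws": deploy_on_aws,                # public-cloud VNFs
}

def deploy_service(service):
    """Deploy every VNF of a single service on its designated VIM."""
    handles = []
    for vnf in service["vnfs"]:
        driver = VIM_DRIVERS[vnf["vim"]]
        handles.append(driver(vnf))
    return handles
```

A service mixing a containerized UPF on Kubernetes with a VM-based EPC on OpenStack would thus be deployed by a single call to `deploy_service`.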
Finally, the Edge Computing infrastructure deployed in the 5GENESIS Málaga Platform
comprises:
1) COTS OCP X86 Servers: X86 servers from OCP (vendor agnostic) to run Edge
datacenter management and provide compute resources to Edge VNFs. X86 is the
most adopted compute resource and supported by most Open Source components
used in CORD.
2) OpenFlow whitebox switches: the servers are connected to OpenFlow whitebox switches, so that connectivity can be programmed and managed from an SDN controller. All connectivity inside the Edge datacenter, and from the datacenter to the RAN and transport network, is managed by an SDN controller.
3) ONOS SDN controller: this is the out-of-the-box SDN controller developed by the Open Networking Foundation that is included in the CORD solution. It supports OpenFlow and now P4 to manage the switching fabric of the datacenter.
4) OpenNebula datacenter VIM: OpenNebula is a lightweight VIM to manage hardware resources (compute, storage, networking and other PCI devices). It can use different hypervisors, but 5GENESIS has selected KVM as one of the most widely adopted, being part of open-source solutions like OpenStack. Compared to OpenStack, OpenNebula consumes far fewer resources, a critical feature for a small datacenter designed for Edge capabilities.
Container solutions can be deployed inside the Edge Datacenter running on VMs,
enabling the deployment of containerized VNFs and Applications if needed.
In the Athens 5GENESIS platform, COSMOTE is the provider of the Edge Computing
infrastructure. COSMOTE operates a hybrid 4G/ NSA 5G/ MEC testbed complemented
with an Openstack-based SDN/NFV Cloud infrastructure. More specifically, the
COSMOTE 4G/5G testbed is composed of:
• A 10 Gb/s broadband connection (over GRNET), serving as a backhaul link towards the internet and the NCSR Demokritos premises, where the ATHONET EPC and/or 5G Core is operated.
Overall, the choice of hardware is mainly due to the ease of procurement. The choice of
orchestration solutions and virtual infrastructure management is due to their simplicity
and large community support.
Some hardware acceleration is needed, specifically for the video analytics in the
aquaculture use case.
As 5G!Drones relies on four different trial facilities, many of the technologies listed in
Table 4 are used for the 5G!Drones Edge Computing deployment. Specifically,
• 5G-EVE relies on X86 Servers, a home-made MEC orchestrator as well as
Kubernetes for managing edge resources;
• 5GENESIS provides in the Athens platform two types of Edge Computing
infrastructures that are deployed on small form factor (SFF) x86 Servers: (i)
OpenStack and (ii) Kubernetes. Katana Slice Manager is another open-source software component that is closely aligned with the 3GPP and GSMA standards regarding network slicing. It was designed to support the activities of the 5GENESIS platforms, supporting the management of E2E network slices on top of the platform facilities.
152 https://5gcarmen.eu
• 5GTN also uses x86 servers, in addition to COTS devices, as MEC hardware, while OpenStack and Kubernetes represent the VIMs. Open Source MANO is used as the orchestration tool. The X-Network facility deploys x86 servers as Fog servers, while Nokia MEC comes as a COTS solution. LXD is used for the management of Edge services using Linux Containers (LXC); this technology was adopted because it allows live service migration between Edge servers.
As discussed earlier, 5GROWTH leverages the 5G-TRANSFORMER stack that allows integrating any kind of technology out of those listed in Table 4, as long as the
corresponding plug-in is developed. Some of the marked technologies are those for which
a plug-in/wrapper was already developed in 5G-TRANSFORMER, and some (e.g., GPU
Acceleration) will be developed in 5GROWTH. In general, X86 servers were selected
because of availability and familiarity. COTS devices were used as interfaces in the
nodes, UEs in the tests, etc. The 5GROWTH stack has integrated two MANO platforms,
namely OSM and Cloudify. Likewise, the 5GROWTH Resource Layer can integrate a
plethora of heterogeneous transport and computing technologies through the
corresponding plug-ins: ONOS, ODL, Ryu, and ABNO were the ones integrated due to
functionality offered and their availability in the labs of partners that were familiar with
them. This choice is also based on past developments reused from previous projects (e.g., 5G-Crosshaul). OpenStack and Kubernetes will also be further evaluated within the project
to explore different options to deploy VNFs.
Each of the four facilities in the 5G-VICTORI project features an on-premise, private micro datacenter. These are built using primarily COTS x86 servers, some of which have GPU acceleration, and switches (some of which are SDN-enabled). In addition, smaller form factor
devices, such as Intel NUC, which also in some instances include GPUs, are deployed in
the field (e.g. street cabinets). Most of the tools comprising the protocol stack are open
source. All the facilities utilize OSM for network management. In addition to that, Orange
Romania at the Alba Iulia site will also investigate the integration of ONAP. Last, in Patras, OpenSlice153, a tool previously developed by the University of Patras, is being exploited for service orchestration. In terms of SDN controllers, both ONOS
and ODL are deployed. In addition, the Bristol site is also equipped with the Zeetta
NetOS154, a network control and management software platform that simplifies and
automates Network Operations (NetOps). This is used with their Rapide box. The primary
VIM platform is OpenStack, providing support for VMs, which is the preferred solution
for vertical application deployment. However, Kubernetes is also deployed in some Edge
environments, because of its low resource footprint requirements. In addition, some of
the underlying tools used, such as OSM and OpenSlice, are by themselves deployed in
the form of containers.
153 http://openslice.io.
154 https://zeetta.com/netos-architecture.
however, x86 servers are likely to be adopted in future PoC deployments due to their
popularity and relatively low costs. Both OSM and ONAP platforms are currently
considered as MANO platforms. OSM follows the “de facto standard” of ETSI NFV
MANO architecture, and ONAP is commonly considered as a future solution for
automation of technical processes. Despite providing a valuable starting point from an architectural point of view, however, neither of them fully adheres to the scalable, data-driven network slice management and orchestration architecture envisioned by the MonB5G project.
Again, the discussion around SDN did not reach a consensus on the specific platform to be exploited. At the time of writing, both the ONOS and OpenDayLight controllers present some limitations in terms of scalability, due to their code size and documentation. Most likely the project will adopt lighter solutions.
The network slicing scenario considered in the MonB5G project requires the mobile network infrastructure to be highly flexible and dynamically reconfigurable. To exploit the full potential of NFV technologies and support the development of its distributed architecture, the MonB5G project will exploit both the OpenStack and Kubernetes open-source platforms. On the one side, Kubernetes is the most widely used container orchestration tool and allows for fast automation and configuration of both networking and vertical services. When compared with VM-based deployments, containers can usually provide faster setup and easier portability thanks to their lightweight nature. This might be especially useful in case of migration and/or service reconfiguration. On the other side, single VNF instances hosted on VMs have proved to be more secure, thanks to the complete isolation from the host OS kernel provided by the virtualization hypervisor. Thus, the project envisions a co-existence of these technologies to fulfil the flexibility and resilience requirements imposed by the network slicing scenario.
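The trade-off above, containers for fast setup and portability versus VMs for stronger isolation, can be expressed as a small selection rule (a sketch with assumed requirement flags, not MonB5G's actual policy):

```python
# Illustrative sketch: choose the virtualization type per network function
# from its security and agility requirements, mirroring the trade-off in
# the text (containers = fast and lightweight, VMs = stronger isolation).

def pick_virtualization(requires_strong_isolation: bool,
                        frequent_migration: bool) -> str:
    if requires_strong_isolation:
        # Hypervisor-backed VMs isolate the function from the host kernel.
        return "vm"
    if frequent_migration:
        # Containers start faster and are easier to move or reconfigure.
        return "container"
    # Default to containers for their lower resource footprint.
    return "container"
```

Security requirements dominate here: a function needing kernel-level isolation lands on a VM even if it also migrates often.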
[Figure 29: Technologies adopted by the surveyed projects, grouped by server type (x86, ARM, other), acceleration (GPU, FPGA, AMD, other), device type (COTS, constrained, other), orchestration platform (OSM, ONAP, Cloudify, Open Baton, other), SDN/NFV controller (ODL, ONOS, other) and container orchestrator (OpenStack, Kubernetes, other)]
Figure 29 reveals some clear trends. In terms of architecture, there is a clear preference for x86, with just a couple of projects looking at other architectures, i.e., ARM. Regarding acceleration, most of the projects are not using acceleration at all, and the few that do focus on GPU acceleration. This trend probably originates in projects studying AR/VR scenarios, which require GPU acceleration for the rendering of images. Regarding orchestration platforms, OSM clearly dominates. The high number of 'Other' answers in the orchestration part evidences that multiple projects are developing their own solutions instead of using well-known platforms.
In terms of SDN controller platforms, ODL is the preferred platform, although ONOS is also widely used. In the 'Other' category we mostly find deployments using Ryu as the SDN controller, or project-specific developments.
Finally, in terms of VIM (container orchestrator), OpenStack and Kubernetes are used roughly equally, followed by project-specific developments.
For Cloud RAN, it is natural to deploy the CU/BBU at the edge. For vertical applications,
firstly in the eHealth use case, SliceNet deploys an App called TeleStroke to benefit from
the low latency in the MEC platform in order to support the timely diagnosis of onboard
patients who may suffer from stroke. Secondly, in the Smart Lighting use case, SliceNet
deploys an IoT Gateway MEC App to enhance the timely processing capabilities of the
Gateway at the Edge of the network and also improve the scalability of the gateway in
supporting mMTC.
SaT5G considers a use case on DASH live streaming over satellite backhaul. Specifically, the project uses satellite communications as backhaul in a 5G network to support 4K video streaming applications with QoE assurance. SaT5G focuses on an HTTP-based live streaming scenario, where video content is generated on-the-fly at a content origin server and delivered to geographically distributed end users through a 5G network with satellite backhaul. Specifically, it presents a 5G SBA-based framework that provides QoE assurance in a context-aware manner. The project identifies the stakeholders involved in this scenario, i.e., the 5G MNO, the video content provider (CP) and the satellite network operator (SNO). In the proposed framework, the 5G MNO virtualizes its computing and storage resources and leases them to CPs, which can then deploy their own VNFs in MEC servers. Meanwhile, the SNO leases its satellite channel bandwidth to the 5G MNO, which uses it as a backhaul link in addition to the standard terrestrial backhaul. The key contributions are as follows:
• This is the first system in the literature that utilizes both an SBA-based 5G core network and satellite backhaul to support 4K HTTP-based live streaming applications with QoE assurance. Specifically, it leverages both the context awareness and flexibility enabled by the 5G SBA architecture, and the multicast capability of the satellite backhaul. It also utilizes virtualization technology to enable CPs to deploy their own VNFs in MEC servers at the 5G mobile Edge, which not only perform content operations such as transient segment holding, but also realize last-hop multicast at the application layer. Overall, the proposed system assures live users' QoE while maintaining the video quality at or above 4K; it also ensures that video content is always delivered through the backhaul in the most efficient manner.
• This is the first time that a 5G core network and a real satellite communications
link have been implemented and integrated as a holistic system, where the latter
serves as the backhaul of the 5G network. The establishment of such a system
means that it is possible to test the performance of MEC servers with content
operations (such as transient segment holding) in terms of content delivery and
QoE assurance through a real satellite backhaul.
• Autonomous Edge Node: placing the comprehensive control plane elements at the Edge, including an additional front-end for device management and for user data subscription, and using information stored in the local cache and default subscription profiles, will enable the system to act as an autonomous connectivity island that makes decisions on its own functioning. In this case the Edge side can function completely even when the backhaul connectivity is lost. However, since the subscription profiles are passed to the Edge node, increased security of these nodes has to be established. This solution should be considered only when trust in these Edge nodes is high. In this extreme case, the 5GC VNFs deployed at the Edge correspond to the AF, UPF, AMF, SMF, PCF, and potentially also the DM and UDM.
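The autonomous-island behaviour described in this bullet can be sketched as follows (the class, cache and profile names are hypothetical illustrations; the real 5GC procedures are far richer):

```python
# Illustrative sketch of an autonomous Edge node: while the backhaul is up,
# subscriber data is fetched from the central UDM and cached locally; when
# the backhaul is lost, the node keeps serving from its local cache and
# default subscription profiles, acting as a connectivity island.

DEFAULT_PROFILE = {"qos": "best-effort", "allowed": True}  # assumed default

class AutonomousEdgeNode:
    def __init__(self):
        self.local_cache = {}   # subscriber id -> cached profile
        self.backhaul_up = True

    def fetch_from_central_udm(self, imsi):
        # Placeholder for a query to the central UDM (assumed reachable).
        profile = {"qos": "guaranteed", "allowed": True}
        self.local_cache[imsi] = profile  # cache for autonomous operation
        return profile

    def get_subscription(self, imsi):
        if self.backhaul_up:
            return self.fetch_from_central_udm(imsi)
        # Backhaul lost: serve from the cache, or fall back to defaults.
        return self.local_cache.get(imsi, DEFAULT_PROFILE)
```

Known subscribers thus keep their cached profiles through a backhaul outage, while unknown devices are admitted on default terms, which is exactly why the text demands increased security and high trust in such nodes.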
The main services deployed in 5GENESIS are the 3GPP-compliant mission-critical services (MCS). The Nemergent MCS server side provides the application-level components required to deploy MCPTT, Mission Critical Video (MCVideo) and Mission Critical Data (MCData) services. The Nemergent MCS system is deployed as a
series of server components, each of them fulfilling a different functional role. Among
the required standardized components, the project offers the MCS Application Server (both Participating and Controlling roles), the MCS Management Servers (Identity, Configuration, Key and Group Management Servers), an IMS Core (with a SIP-based load-balancer), and networking-based management modules such as DNS, NAT traversal and so forth. All the above-mentioned components are VDUs that constitute an all-in-one
MCS VNF. The main reason to select this VNF for the Edge infrastructure has been the capability and great potential of this network paradigm: for instance, being able to handle crowded events and latency-sensitive MCS communications, while at the same time supporting resource-demanding services like MCVideo communications. Additionally, in order to enable LBO of user traffic to the MCS VNFs running at the Edge Computing node, 5GENESIS has deployed vEPC VNFs. Since there is a need to steer traffic coming from the RAN (S1 interface) towards the MCS VNFs, it is necessary to terminate the GTP tunnels at the Edge node; to do so, the project has deployed the S/P-GW function in a separate VNF and configured the Mobile Core accordingly. Lastly, the
Edge Computing node runs additional VNFs for infrastructure management. Two main VNFs are used for this management: ONOS, the SDN controller that manages connectivity inside the datacenter and derives from the CORD design (the project adopts a canonical ONOS SW release by the Open Networking Foundation), and OpenNebula, the VIM selected to manage resources, which also runs in VMs as VNFs. This VIM includes the interfaces to external management systems like OSM that orchestrate the deployment of VNFs from the VNF catalogue in the Edge Computing node.
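The local breakout arrangement described above, terminating the GTP tunnels at the Edge S/P-GW and steering RAN traffic towards the locally hosted MCS VNFs, can be sketched as a simple classification of decapsulated uplink packets (the subnet and labels are illustrative assumptions, not the 5GENESIS configuration):

```python
import ipaddress

# Illustrative LBO classifier: after the Edge S/P-GW removes the GTP-U
# header from uplink traffic arriving over the S1 interface, packets
# destined for locally hosted MCS VNFs break out at the Edge, while all
# other traffic is forwarded towards the central core and the PDN.
LOCAL_MCS_SUBNET = ipaddress.ip_network("10.20.0.0/24")  # assumed VNF subnet

def route_uplink_packet(dst_ip: str) -> str:
    if ipaddress.ip_address(dst_ip) in LOCAL_MCS_SUBNET:
        return "local-breakout"   # deliver directly to the Edge MCS VNF
    return "forward-to-core"      # re-enter the path towards the core/PDN
```

In a real deployment this decision is realized by the S/P-GW routing configuration rather than application code, but the principle is the same: only traffic for local services leaves the tunnel at the Edge.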
In the Athens 5GENESIS platform, the deployed services support the various requirements of UAV applications, namely the FCC virtualised units. UAVs have been brought into the scope of the latest 3GPP releases in order to study and address the related needs and requirements (e.g. TS 22.125, TS 22.261, TR 36.777).
functions possible, which will be integrated with the SMF in the case of 5G core deployments. Depending on the site, different vertical applications are hosted. The most common is an application facilitating the exchange of V2X messages between vehicles, and between infrastructure and vehicles. Several post-processing applications are also deployed, responsible for data fusion and vehicle control.
As discussed earlier, the 5G!Drones project focuses on trialling UAV scenarios on top of existing 5G facilities. UAV services rely on flying drones, which need to be controlled and commanded via a remote application for which low-latency communication is critical. Clearly, this remote command-and-control application needs to sit at the Edge, to guarantee a low-latency connection to the flying drones deployed on top of the 5G network. In addition to controlling the flying platforms, 5G!Drones also investigates use cases where the UAVs embed various services and applications, such as video monitoring and 3D mapping, which also require Edge Computing capabilities. The ETSI MEC deployment will bring many benefits to these use cases, since they are latency-sensitive and require RNIS, the Location API and video processing at the Edge. It will further improve scalability and allow the sensors and components involved in these use cases to maintain a consistent and reliable connection.
There are mainly two types of VNFs deployed in 5GROWTH, namely (i) the low-latency components of vertical applications (e.g., a virtual M3Box composed of several control applications for controlling the AGV and the CMM); and (ii) the mobile network components needed to perform LBO, so that these applications can be reached directly at the Edge without traversing the operator's core network (e.g., vEPC, UPF).
There are three primary VNF group types deployed at the Edge in 5G VICTORI. First and foremost are the vertical/end-user applications, as described for the three use cases in Table 1, as well as others being trialled in the project. 5G VICTORI will trial a number of use cases, and most of them deploy some component at the Edge as VMs and, in some cases, as containers. The rationale for deploying these applications at the Edge is to meet latency requirements and/or provide bandwidth efficiency. There are some secondary benefits, such as security, whereby data is not allowed to leave the premises of a facility, but these are not prevalent in the scenarios we evaluate. The second group is the SDN controller and the MANO system (OSM) that controls this Edge instance itself. It is possible to have a single MANO responsible for both the core and Edge cloud, but we have opted for a hierarchical architecture, where each Edge is autonomous and a common platform (5G-VIOS) provides the inter-domain (inter-
Any services required by the RAN components of the infrastructure have to be placed as close as possible to the radio equipment. In 5GZORRO these components will be the virtualized layer-3 component of the LTE stack (vL3) and vEPCs for the different operators that intend to deploy services in the network. By placing these elements at the Edge, the KPIs for low latency and short round-trip times can be met, which would not have been the case had they been placed in the core/main datacenter. Whether other VNFs or services have similar requirements, and whether they will also be deployed at the Edge, will have to be evaluated during the project.
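The latency argument can be made concrete with a back-of-the-envelope propagation-delay estimate; the distances and the fibre figure below are illustrative assumptions, not project measurements.

```python
# Light propagates through optical fibre at roughly 200 km/ms
# (about 2/3 of c), so every 100 km of fibre adds ~1 ms of RTT.
FIBRE_KM_PER_MS = 200.0

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fibre only, ignoring radio
    access, queuing and processing delays."""
    return 2 * distance_km / FIBRE_KM_PER_MS

edge_rtt = propagation_rtt_ms(10)    # Edge node ~10 km away: 0.1 ms
core_rtt = propagation_rtt_ms(500)   # central DC ~500 km away: 5.0 ms
```

Even before queuing and processing delays are counted, a vL3 or vEPC placed hundreds of kilometres away consumes a large share of a sub-10 ms latency budget, which is why these functions are pinned to the Edge.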
compared with 5G Core functions. Within the 5G Core functions, UPF is the most
commonly deployed one.
[Figure: share of VNF types deployed at the Edge across projects — Vertical Applications: 14-27%; 5G Core: 8-16%; SDN Controller: 3-6%]
From this figure, we can see that the Edge Computing infrastructure clearly serves two purposes:
1. Running infrastructure components of the 5G network, such as Cloud RAN, SDN and Core elements (above 50%), to shorten the distance to …
2. Running applications at the Edge (near 40%), collocated with Core components to get Local Break Out access to users' traffic.
This combination of Core components and applications enables delivering the 5G value proposition for URLLC, eMBB and mMTC.
6. Conclusions
Computers and Networks, Networks and Computers: two technologies that have been evolving hand in hand over the last 50 years, since the introduction of the Internet.155
We are now facing the next mobile network generation, 5G, and even though Edge Computing has been talked about since the introduction of CDNs in the 2000s, it is 5G that is driving the development of Edge Computing.
In this paper, we have seen that 5G PPP projects researching very different 5G use cases, with a variety of companies and technologies, have all embraced Edge Computing as part of their solution. Seventeen out of seventeen projects reported using some type of Edge Computing at the Edge of the network (Table 2).
We have also seen that the concept of the Edge of the Network is flexible and that, depending on the context, 5G PPP projects have used different locations for their Edge resources (Table 3), from pure on-premise infrastructure to the Public Cloud.
And we have confirmed that this infrastructure is used to implement the LBO function to
shorten the path between Users and Applications. As shown in Table 5, vEPC and 5G
Core functions are co-located with Vertical Apps at the Edge.
This approach is the way to deliver the 5G value proposition: Ultra-Reliable Low-Latency Communications, enhanced Mobile Broadband (higher bandwidth) and massive Machine-Type Communications.
But the picture is not completely clear. The Edge Computing ecosystem needs to mature so that companies can get access to commercial solutions with a clear value chain. It is not clear whether this ecosystem will be dominated by Telcos or by Hyperscalers, or whether there will be some kind of coopetition (collaborative competition). Telcos have the capillarity and dominate locality, while Hyperscalers own the cloud technologies and are typically global.
There are also uncertainties about security, privacy and regulation, to name a few, that
need to be addressed before the market matures.
Security issues have been raised by Interpol regarding “Law enforcement and judicial aspects related to 5G” 156 on topics such as Lawful Interception and the authenticity of evidence in virtualized, collaborative environments.
Privacy issues are also related to these types of environments, as guidance is needed on
what data can be shared between network elements and applications running in the same
infrastructure.
155 https://en.wikipedia.org/wiki/History_of_the_Internet#Merging_the_networks_and_creating_the_Internet_(1973–95)
156 https://www.statewatch.org/media/documents/news/2019/jun/eu-council-ctc-5g-law-enforcement-8983-19.pdf
Lastly, Telcos and Hyperscalers are regulated very differently. To offer services on 5G, new regulation is needed that harmonizes the roles of all the actors involved in the value chain and enables the development of a healthy Edge ecosystem.
5G is ready to take off; now it is time for Edge Computing to step up and mature to become its perfect partner.
5G-VICTORI (https://www.5g-victori-project.eu)
• D2.1 - 5G VICTORI Use case and requirements definition and reference
architecture for vertical services.
• D2.2 - Preliminary individual site facility planning.
5G-PICTURE (https://www.5g-picture-project.eu)
• D4.1 State of the art and initial function design. [Section 4].
• D4.2 Complete design and initial evaluation of developed functions. [Section
3.2]
• D5.3 Support for multi-version services. [Section 3]
• D6.3 Final Demo and Testbed experimentation results. [Section 8]
5G-VINNI (https://www.5g-vinni.eu)
• D1.1 – Design of infrastructure architecture and subsystems. [Section 6.1]
• D2.1 – 5G-VINNI Solution Facility-sites High Level Design (HLD). [Section
4.5]
5G-HEART (https://5gheart.org)
• D2.1: Use Case Description and Scenario Analysis. [Chapter 4]
• D2.2: User Requirements Specification, Network KPIs Definition and Analysis.
[Chapter 3]
• D3.2 Initial Solution and Verification of Healthcare Use Case Trials. [Chapters
2-6]
• D4.2 Initial Solution and Verification of Transport Use Case Trials.
• D5.2 Initial Solution and Verification of Aquaculture Use Case Trials
5G-CROCO (https://5gcroco.eu)
• D2.1 Test Case Definition and Test Site Description Part 1 [Section 3]
• D4.4 Detailed Roadmap of Test Sites- Project Year Two [Section 6; Sections
10-16]
5G-MOBIX (https://www.5g-mobix.com)
• D2.2 5G architecture and technologies for CCAM specifications. [Sections 2.2,
3.4.2, 4,5]
• D2.3 Specification of roadside and cloud infrastructure and applications to
support CCAM. [Section 3]
• D3.1 Corridor and Trial Sites Rollout Plan. [Section 2.2]
5GROWTH (http://5growth.eu)
• D1.1 Business Model Design
• D2.1 Initial design of 5G End-to-End Service Platform
• D3.1 ICT-17 facilities Gap analysis
5G-TRANSFORMER (http://5g-transformer.eu)
List of Contributors
Name Company / Institute / University Country
Editorial Team
Overall Editors
David Artuñedo Telefónica I+D Spain
Section 2 Editors
Bessem Sayadi NOKIA Bell Labs France
5G-PPP Software Network WG Chairman
Section 3 Editors
Pascal Bisson Thales Group France
Jean Phillippe Wary Orange France
Section 4 Editors
Hakon Lonsethagen Telenor Norway
Section 5 Editors
Carles Anton-Haro Centre Tecnològic Telecom. Catalunya (CTTC) Spain
Antonio Oliva Universidad Carlos III de Madrid (UC3M) Spain
Contributors
Alexandros Kaloxylos The 5GIA (Reviewer) Belgium
John Cosmas Brunel University UK
Robert Muller Fraunhofer Institute for Integrated Circuits IIS Germany
Ben Meunier Brunel University UK
Yue Zhang University of Leicester UK
Xun Zhang Institut supérieur d'électronique de Paris France
Josep Mangues CTTC (on behalf of 5G Transformer and 5Growth) Spain
Carlos Bernardos U. Carlos III Madrid (on behalf of 5G Transformer) Spain
Xi Li NEC Labs (on behalf of 5G Transformer) Germany
Qi Wang U. West Scotland (on behalf of Slicenet) UK
Maria Barros Eurescom (on behalf of Slicenet) Germany
Amelie Werbrouck SES (on behalf of Sat5G) Luxembourg
Jesus Gutierrez Teran IHP (on behalf of 5G Picture) Germany
Marc Molla Ericsson (on behalf of 5G EVE) Spain
Manuel Lorenzo Ericsson (on behalf of 5G EVE) Spain
Dan Warren SAMSUNG (on behalf of 5G VINNI) UK
Valerio Frascolla Intel (on behalf of 5Genesis, Reviewer) Germany
David Artuñedo Telefonica (on behalf of 5Genesis) Spain
Fofy Setaki COSMOTE (on behalf of 5Genesis) Greece
Dimitris Tsolkas FOGUS (on behalf of 5Genesis) Greece
George Xiloutis NCSR Democritos (on behalf of 5Genesis) Greece
Harilaos Koumaras NCSR Democritos (on behalf of 5Genesis) Greece
Maciej Muehleisen Ericsson (on behalf of 5GCroCo) Belgium
Andreas Heider T-systems (on behalf of 5G Carmen) Germany
Kostas Trichias WINGS (on behalf of 5GMobix) Greece
Pascal Bisson Thales (on behalf of 5G-Drones) France