Developing Software For Multi-Access Edge Computing: ETSI White Paper No. 20
Authors:
Alex Reznik (editor), Rohit Arora, Mark Cannon, Luca Cominardi, Walter Featherstone, Rui Frazao,
Fabio Giust, Sami Kekki, Alice Li, Dario Sabella, Charles Turyagyenda, Zhou Zheng
ETSI
06921 Sophia Antipolis CEDEX, France
Tel +33 4 92 94 42 00
[email protected]
www.etsi.org
About the authors
Alex Reznik (editor)
HPE
Rohit Arora
HPE
Mark Cannon
Virtuosys
Luca Cominardi
UC3M
Walter Featherstone
Viavi Solutions
Rui Frazao
Vasona
Fabio Giust
NEC
Sami Kekki
Huawei
Alice Li
Vodafone
Dario Sabella
Intel
Charles Turyagyenda
InterDigital
Zhou Zheng
Huawei
This idea is quite recent, although not totally new, and the ecosystem is quickly moving to enable it with systems such as AWS Greengrass (which extends Amazon's AWS Lambda to the edge), Microsoft's Azure IoT stack and GE's Predix.
Let’s take, for example, AWS Greengrass. This consists of the AWS Greengrass core (which is responsible
for providing compute capabilities closer to the devices) and the AWS IoT devices enabled with AWS IoT
Software Development Kit (SDK). Using this architecture, AWS IoT applications can respond in real time to local events while still using cloud capabilities for functions that do not require real-time processing of data. An IoT application developer targeting AWS Greengrass has to architect the application so that these edge systems handle the features that require real-time processing, or that perform some other useful task (e.g. limiting the flood of data to the central location), while the remaining features stay in the traditional cloud.
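This edge/cloud split can be made concrete with a small sketch. The code below is purely illustrative and is not part of any AWS Greengrass API; all names and the urgency threshold are hypothetical. It shows the pattern described above: urgent events are acted on locally at the edge with no cloud round trip, while routine data is batched and reduced to a summary before it ever reaches the central location.

```python
URGENT_THRESHOLD = 90.0  # hypothetical sensor limit handled locally


def handle_reading(reading, cloud_batch):
    """Process one sensor reading at the edge.

    Urgent events are acted on immediately (low latency); routine
    readings are queued so only an aggregate reaches the cloud.
    """
    if reading["value"] > URGENT_THRESHOLD:
        # React at the edge -- no cloud round trip needed.
        return {"action": "local_shutdown", "device": reading["device"]}
    # Defer non-urgent data; it will be uploaded in aggregate later.
    cloud_batch.append(reading)
    return {"action": "deferred", "device": reading["device"]}


def summarize_batch(cloud_batch):
    """Reduce the deferred readings to a compact summary for the cloud,
    limiting the data flood to the central location."""
    values = [r["value"] for r in cloud_batch]
    return {
        "count": len(values),
        "mean": sum(values) / len(values) if values else 0.0,
    }
```

The same shape applies whether the edge logic runs as a Greengrass Lambda function, an Azure IoT Edge module, or a MEC application: the developer decides per feature which side of the split it belongs on.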
To provide these new services and to make the most of MEC, it is also important for application developers and content providers to understand the main characteristics of the MEC environment and the additional services which distinguish MEC from other "edge computes": extreme user proximity, ultra-low latency, high bandwidth, real-time access to radio network and context information, and location awareness.
On this basis this white paper provides guidance for software developers on how to properly approach
architecting and developing applications with components that will run in edge clouds, such as those
compliant with ETSI’s MEC standards. The white paper will summarize the key properties of edge clouds,
as distinct from a traditional cloud point-of-presence, as well as the reasons why an application
developer should choose to design specifically for these. It will then provide high-level guidance on how
to approach such design, including interaction with modern software development paradigms, such as
micro-services based architectures and DevOps.
A MEC point-of-presence (PoP) is distinct from a traditional cloud PoP. It may offer significant
advantages to application components/services running there, while also presenting some challenges,
e.g. higher cost, relatively small compute footprint, good local but not global reachability, etc. As such, it
is crucial for an application developer to design with specific intent towards running some application
components at the network edge when developing for MEC.
This results in a new development model with three "locations": Client, Near Server and Far Server (depicted in Figure 2). The client location can be a traditional smartphone or other wireless connected compute
elements in a car, smart home or industrial location that can run dedicated client applications. The
model is quite new to the majority of software developers, and while modern development paradigms
(e.g. microservices) make it easier to adapt to it, a clear and concise summary of this new development
model and guidance on how to properly approach it will help accelerate the application development
for the network edge and thus accelerate MEC adoption.
As depicted in Figure 2, a MEC Host, usually deployed at the network edge, contains a MEC platform and
provides compute, storage and network resources. The MEC platform offers a secure environment
where MEC applications may, via RESTful APIs, discover, advertise, consume and offer services.
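As a sketch of what service discovery over Mp1 looks like from the application's side, the snippet below filters the kind of service list a MEC app might receive from the platform's service registry. The field names are loosely modelled on ETSI GS MEC 011 but should be treated as illustrative rather than normative, and the endpoint URLs are invented for the example.

```python
def find_services(registry_response, ser_name):
    """Select usable service instances by name from a (decoded JSON)
    service-registry response. Field names follow the spirit of
    ETSI GS MEC 011 but are illustrative, not normative."""
    return [
        s for s in registry_response
        if s.get("serName") == ser_name and s.get("state") == "ACTIVE"
    ]


# A mock registry response, as a MEC app might receive after a GET
# on the service registry (endpoints are made up for illustration).
sample_registry = [
    {"serName": "RNIS", "state": "ACTIVE",
     "transportInfo": {"endpoint": "http://mec-host.example/rni/v2"}},
    {"serName": "LocationService", "state": "INACTIVE",
     "transportInfo": {"endpoint": "http://mec-host.example/location/v2"}},
]
```

Having discovered a service this way, the application would then consume it through the endpoint advertised in its transport information.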
[Figure 2: a UE app, MEC apps and a remote app interacting with MEC services via the MEC platform and its service registry, over the Mp1 and Mp3 reference points]
The MEC platform is where the network and the applications can converge in a meaningful way without
giving up the key benefits of the hour-glass model. MEC can support any application and any application
can run in MEC. However, MEC can offer additional services to those applications which have been
designed to be MEC-aware. MEC Application Enablement (described in ETSI GS MEC 011) introduces
such a service environment, and this can be used to improve the user experience tremendously.
Software designed to take advantage of MEC services can leverage additional information about where
the application is supposed to run, in terms of expected latency, throughput and other available MEC
services. Simply put, with MEC the environment becomes more predictable, and environmental (i.e. contextual) information can be leveraged to actively adjust the application behaviour at run time. The
network becomes a resource used to deliver the end to end service. For example, an application is able
to not only precisely monitor the radio link via the MEC Radio Network Information Service, but also
make a bandwidth request and inform the network of other application requirements via APIs provided
by the MEC platform. This allows edge applications to benefit from low latency and high throughput in a
fairly predictable/controllable way. In addition, the network itself could also benefit from the MEC
services provided by the applications: for instance, the network scheduler could predict incoming user demand and so maximize network efficiency.
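To make the radio-awareness idea concrete, here is a minimal sketch of how an edge application might adapt its behaviour to throughput information of the kind the Radio Network Information Service exposes. The bitrate ladder, the 20% headroom factor and the function name are assumptions for illustration, not anything defined by the MEC APIs.

```python
def select_bitrate(available_kbps, ladder=(500, 1500, 4000, 8000)):
    """Pick the highest encoding bitrate (kbps) that fits within the
    throughput reported for the radio link, keeping ~20% headroom.

    The bitrate ladder and headroom factor are illustrative choices.
    """
    budget = available_kbps * 0.8  # leave headroom for other traffic
    chosen = ladder[0]             # never go below the lowest rung
    for step in ladder:
        if step <= budget:
            chosen = step
    return chosen
```

Because the radio information comes from the network itself rather than from end-to-end probing, such adaptation can react before congestion is visible at the application layer.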
A key aspect of software design for MEC is the need to split the application into terminal device
component(s), edge component(s) and remote component(s). This aspect creates an additional task for
developers with respect to a more traditional client-server architecture, since an additional processing
stage (at the edge) must be added to the application’s workflow, with well-defined characteristics and
capabilities. The terminal device can do some preliminary processing to determine the need for further
actions. Such preliminary processing requires near zero latency and it requires the terminal device to
support some computing capabilities, e.g. to receive and instantiate algorithms or instructions. The edge
component(s) include a set of operations that the application performs at the edge cloud, e.g. to offload
the computing away from the terminal device while still leveraging very low latency and predictable
performance. The remote components implement operations to be carried out in the remote data
centre, e.g. to benefit from large storage and database access.
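A crude placement rule for this three-way split might look like the following sketch. The latency thresholds and the decision criteria are hypothetical; in practice the split is an architectural decision made per component, informed by exactly these kinds of constraints.

```python
def place_component(max_latency_ms, needs_bulk_storage):
    """Decide where an application component should run, following the
    three-way split described above. Thresholds are illustrative.

    - near-zero-latency work stays on the terminal device;
    - latency-sensitive work goes to the edge cloud;
    - anything needing large storage/databases goes to the remote cloud.
    """
    if needs_bulk_storage:
        return "remote"
    if max_latency_ms < 5:
        return "terminal"
    if max_latency_ms < 50:
        return "edge"
    return "remote"
```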
An example of the concept expressed above comes from applications for video analytics from
surveillance cameras. Here, the task is to extract information for face recognition from different video
streams. In a traditional approach, the video streams are transported up to the data centre where the
software components for face recognition, databases, etc. are located. A lot of bandwidth can be saved
by splitting the job and performing some functions in the edge or even in the terminal (if the terminal
has the necessary computational power). For instance, image areas with suspected faces can be
cropped from HD images, compressed, and then sent to the remote cloud, for further processing (i.e.
final face identification) and database lookup. When cameras are active, they are constantly searching
for pre-defined objects or events. To avoid the need to stream the video constantly in real time to the
customer’s facilities (e.g. central cloud) the terminal device can do some preliminary processing. Based
on the preliminary processing, further action may be necessary on the selected objects, e.g. further
analysis of the object’s track, detailed identification of the object, etc. Such processing may be
delegated to the edge cloud component of the application. The example is illustrated in Figure 3.
[Figure: terminal component(s) communicate with edge component(s) over a low-latency link, while the remote component(s) are reached over a high-latency link]
Figure 3: Example of splitting an application into "terminal", "edge" and "remote" components
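The terminal-side step of the surveillance example can be sketched as a simple pre-filter: the camera keeps only candidate face regions worth escalating, so full video frames never leave the device. The detection record format and confidence threshold below are invented for illustration.

```python
def prefilter_frame(detections, min_confidence=0.6):
    """Terminal-side pre-processing for the surveillance example.

    `detections` is a list of candidate-face records produced by a
    lightweight on-device detector (format is illustrative). Only
    sufficiently confident regions are escalated to the edge
    component, instead of streaming the whole frame.
    """
    crops = [d["bbox"] for d in detections if d["score"] >= min_confidence]
    # Escalate to the edge component only when something was found.
    return {"escalate": bool(crops), "regions": crops}
```

The edge component would then run heavier analysis (tracking, detailed identification) on the escalated crops, forwarding only compressed candidates to the remote cloud for final face identification and database lookup.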
What is worth highlighting here is that MEC can be exploited to implement computation offloading techniques across all of the application's components. In fact, the server can be programmed to dynamically shift processing among the terminal, edge and remote component(s) for a number of reasons, ranging from adapting to network conditions to improving application-specific KPIs.
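A run-time offloading policy of this kind might be sketched as follows. All thresholds and inputs here are made up for illustration; a real policy would be driven by measured KPIs and the radio information discussed earlier.

```python
def choose_tier(edge_rtt_ms, kpi_target_ms, terminal_cpu_free):
    """Illustrative run-time offloading decision: shift processing
    between terminal, edge and remote cloud as conditions change.

    Prefer the edge when its round-trip time meets the application's
    latency KPI; fall back to the terminal when it has spare CPU;
    otherwise degrade gracefully to the remote cloud.
    """
    if edge_rtt_ms <= kpi_target_ms:
        return "edge"
    if terminal_cpu_free >= 0.3:  # hypothetical spare-capacity floor
        return "terminal"
    return "remote"
```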
[Figure: microservices deployed at the edge — UEs #1 to #3 reach an API gateway at Edge #1, which routes API calls #1 to #N to micro-services backed by NoSQL and SQL databases]
DevOps is considered to be a business-driven software delivery approach. It is based on lean and agile
principles in which cross-functional teams collaborate to deliver software in a continuous manner. Such
teams consist of representatives from all disciplines responsible for developing and deploying software
services, including the business owners, developers, operations and quality assurance. This continuous
delivery (CD) based approach enhances a business’ ability to exploit new market opportunities and
quickly adapt to customer feedback. The MEC ecosystem is evolving and presents opportunities to develop new and innovative services, which makes the DevOps approach ideally suited to MEC and a genuine path to deriving new business value.
Adopting the microservices approach results in services that are modularized and small in nature. Given
their size, each individual service is easier to develop, test and maintain. The services may also be
developed and deployed in parallel. This facilitates continuous integration (CI) and delivery, which are
key principles of the DevOps approach. DevOps teams are also well placed to take the individual pieces
of functionality offered through microservices and use those as the building blocks for larger
applications and systems. Those systems can then be easily expanded by adding new microservices
without unnecessarily affecting other parts of the application, thereby offering flexibility and achieving
scalability. The consequence is that software development organizations already employing DevOps practices to develop microservices will be well placed to embrace the MEC software development paradigm. An overall MEC solution will comprise multiple software components (i.e. microservices), each providing different capabilities and functions, potentially from different providers, and those components are expected to continue to expand and evolve, thereby benefiting from DevOps CI and CD processes. This expansion will happen at both a high and a low level: at a high level as the MEC platform's overall capabilities are enhanced to support multiple access technologies and full application mobility, and at a lower level as specific applications are further developed to exploit enhanced API functionality, for instance as the Radio Network Information Service capabilities expand into the multi-access domain.
An activity closely related to implementing the DevOps approach is the effort underway in the MEC ISG to define a test framework covering aspects such as conformance and interoperability. The framework will deliver a test suite well suited to automated testing. This is considered an integral DevOps
component, since automated testing supports the delivery pipeline in creating processes that are
iterative, frequent, repeatable, and reliable.
Mobility
The terminal is likely to be a mobile device, and due to user mobility the current MEC host may cease to be the best choice for the user. The MEC system therefore makes it possible to relocate the user context from one application instance to another running in a MEC host closer to the user. Furthermore, the MEC system implements a mechanism to relocate the application instance as a whole if necessary. Several use cases for application relocation are documented in ETSI GR MEC 018 "End to End Mobility Aspects". In this regard, programmers may design their applications with capabilities that leverage and optimize the UE relocation procedure, such as monitoring application KPIs to assist the relocation decision, mechanisms to determine the right relocation timing, data synchronization, etc.
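One such capability, data synchronization during relocation, can be sketched as a simple export/import of per-user state between application instances. The snapshot format below is entirely illustrative; ETSI GR MEC 018 discusses the relocation use cases, not any particular wire format, and the sequence-number guard is just one way to keep out-of-order transfers from rolling the user back.

```python
import json


def export_user_context(session, seq):
    """Serialize per-user state on the source MEC host so it can be
    handed to the target application instance (format illustrative)."""
    return json.dumps({"schema": 1, "seq": seq, "session": session})


def import_user_context(blob, last_seq):
    """Apply a received context snapshot on the target host, ignoring
    stale snapshots so an out-of-order transfer cannot regress state.

    Returns the restored session, or None if the snapshot is stale.
    """
    doc = json.loads(blob)
    if doc["seq"] <= last_seq:
        return None  # stale snapshot; keep the current state
    return doc["session"]
```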
This White Paper is issued for information only. It does not constitute an official or agreed position of ETSI, nor of its Members. The
views expressed are entirely those of the author(s).
ETSI declines all responsibility for any errors and any loss or damage resulting from use of the contents of this White Paper.
ETSI also declines responsibility for any infringement of any third party's Intellectual Property Rights (IPR), but will be pleased to
acknowledge any IPR and correct any infringement of which it is advised.
Copyright Notification
Copying or reproduction in whole is permitted if the copy is complete and unchanged (including this copyright statement).
© European Telecommunications Standards Institute 2017. All rights reserved.
DECT™, PLUGTESTS™, UMTS™, TIPHON™, IMS™, INTEROPOLIS™, FORAPOLIS™, and the TIPHON and ETSI logos are Trade Marks of ETSI
registered for the benefit of its Members.
3GPP™ and LTE™ are Trade Marks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
GSM™, the Global System for Mobile communication, is a registered Trade Mark of the GSM Association.