Edge Computing

specific applications, this survey paper provides a comprehensive overview of the existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues for analyzing and designing an edge computing system are also studied in this survey.

[Fig. 1. Categorization of edge computing systems: innovative system architectures (e.g., SpanEdge, Cloud-Sea Computing, AirBox, Firework), programming models, supported applications, and open-source systems (Akraino Edge Stack, CORD, EdgeX Foundry, Apache Edgent, Azure IoT Edge).]

I. INTRODUCTION

In the post-Cloud era, the proliferation of the Internet of Things (IoT) and the popularization of 4G/5G are gradually changing the public's habits of accessing and processing data, and challenging the linearly increasing capability of cloud computing. Edge computing is a new computing paradigm in which data are processed at the edge of the network. Promoted by the fast-growing demand and interest in this area, edge computing systems and tools are blooming, even though some of them may not yet be widely used.

There are many perspectives from which to classify edge computing systems. To understand why edge computing appeared, as well as its necessity, we pay particular attention to the basic motivations. Specifically, based on different design demands, existing edge computing systems can roughly be classified into three categories, together yielding innovations in system architecture, programming models and various applications, as shown in Fig. 1.

• Push from cloud. In this category, cloud providers push services and computation to the edge in order to leverage locality, reduce response time and improve user experience. Representative systems include Cloudlet, Cachier, AirBox, and CloudPath. Many traditional cloud computing service providers are actively pushing cloud services closer to users, shortening the distance between customers and cloud computing, so as not to lose market share to mobile edge computing. For example, Microsoft launched AzureStack in 2017, which allows cloud computing capabilities to be integrated into the terminal, so that data can be processed and analyzed on the terminal device.

• Pull from IoT. Internet of Things (IoT) applications pull services and computation from the faraway cloud to the near edge to handle the huge amount of data generated by IoT devices. Representative systems include PCloud, ParaDrop, FocusStack and SpanEdge. Advances in embedded Systems-on-a-Chip (SoCs) have given rise to many IoT devices that are powerful enough to run embedded operating systems and complex algorithms. Many manufacturers integrate machine learning and even deep learning capabilities into IoT devices. Using edge computing systems and tools, IoT devices can effectively share computing, storage, and network resources while maintaining a certain degree of independence.

• Hybrid cloud-edge analytics. Integrating the advantages of cloud and edge provides a solution that facilitates both globally optimal results and minimum response time in modern advanced services and applications. Representative systems include Firework and Cloud-Sea Computing Systems. Such edge computing systems utilize the processing power of IoT devices to filter, pre-process, and aggregate IoT data, while employing the power and flexibility of cloud services to run complex analytics on those data. For example, Alibaba Cloud launched its first IoT edge computing product, LinkEdge, in 2018, which extends its advantages in cloud computing, big data and artificial intelligence to the edge to build a cloud/edge integrated collaborative computing system; Amazon released AWS Greengrass in 2017, which extends AWS seamlessly to devices so that devices can

F. Liu is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou, Guangdong, China (e-mail: [email protected]). G. Tang is with the Key Laboratory of Science and Technology on Information System Engineering, National University of Defense Technology, Changsha, Hunan, China (e-mail: [email protected]). Y. Li is with the School of Computer Science and Technology, Hangzhou Dianzi University, China. Z. Cai and T. Zhou are with the College of Computer, National University of Defense Technology, Changsha, Hunan, China (e-mail: [email protected], [email protected]). X. Zhang is with the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, China.
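The hybrid cloud-edge division of labor described above — IoT devices filter, pre-process and aggregate data while the cloud runs the heavy analytics — can be sketched in a few lines. This is a minimal illustration only; the thresholds, window size and function names are hypothetical and not taken from any of the systems cited:

```python
from statistics import mean

def edge_preprocess(readings, lo=10.0, hi=80.0, window=4):
    """Edge side: drop out-of-range (noisy) samples, then aggregate
    fixed-size windows into summaries so only a fraction of the raw
    data has to travel to the cloud."""
    valid = [r for r in readings if lo <= r <= hi]
    summaries = []
    for i in range(0, len(valid) - window + 1, window):
        chunk = valid[i:i + window]
        summaries.append({"mean": mean(chunk), "max": max(chunk)})
    return summaries

def cloud_analytics(summaries, alert_above=70.0):
    """Cloud side: global analysis over the pre-aggregated summaries,
    e.g. flagging windows whose peak value is abnormal."""
    return [s for s in summaries if s["max"] > alert_above]

raw = [12.0, 99.5, 15.0, 14.0, 13.0, 71.0, 72.0, 75.0, 74.0]  # 99.5 is noise
summaries = edge_preprocess(raw)   # 9 raw samples -> 2 small summaries
alerts = cloud_analytics(summaries)
```

The point of the sketch is the data-volume asymmetry: the edge ships two small summaries instead of nine raw samples, while the globally informed decision (which windows are alarming) stays in the cloud.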
[Figure: organization of this survey — edge computing systems & tools (Sec. II), open source edge computing projects (Sec. III), and energy efficiency (Sec. IV), presented from both a system view and an application view.]

[Fig. 9. Cachier System [16].]
TABLE I
SUMMARY OF EDGE COMPUTING SYSTEMS

| Application Scenarios | Edge Computing Systems | End Devices | Edge Nodes | Computation Architecture | Features/Targets |
|---|---|---|---|---|---|
| General Usage Scenario | Cloudlet | Mobile devices | Cloudlet | Hybrid (3-tier) | Lightweight VM migration |
| | PCloud | Mobile devices | Mobile devices, local server, PC | Hybrid (3-tier) | Resource integration, dynamic allocation |
| | ParaDrop | IoT devices | Home gateway | Hybrid (3-tier) | Hardware, developer support |
| | Cachier & Precog | Mobile devices | Mobile devices, local server, PC | Hybrid (3-tier) | Figure recognition, identification |
| | FocusStack | IoT devices | Router, server | Hybrid (3-tier) | Location-based info, OpenStack extension |
| | SpanEdge | IoT devices | Local cluster, Cloudlet, Fog | Hybrid (2-tier) | Streaming processing, local/global task |
| | AirBox | IoT devices | Mobile devices, local server, PC | Hybrid (3-tier) | Security |
| | CloudPath | Mobile devices | Multi-level data centers | Hybrid (multi-tier) | Path computing |
| | Firework | Firework.Node | Firework.Node | Two-layer scheduling | Programming model |
| | Cloud-Sea | Sea | Seaport | Hybrid (3-tier) | Minimal extension, transparency |
| Vehicular Data Analytics | OpenVDAP | CAVs | XEdge | Hybrid (2-tier) | General platform |
| | SafeShareRide | Smartphones | Smartphones and vehicles | Hybrid (2-tier) | In-vehicle security |
| Smart Home | Vigilia | Smart home devices | Hubs | Edge only | Home security |
| | HomePad | Smart home devices | Routers | Edge only | Home security |
| Video Stream Analytics | LAVEA | ∼ | ∼ | Edge or cloud | Low latency response |
| | VideoEdge | Cameras | Cameras and private clusters | Hybrid (3-tier) | Resource-accuracy tradeoff |
| | Video on drones | Autonomous drones | Portable edge computers | Edge only | Bandwidth saving |
| Virtual Reality | MUVR | Smartphones | Individual households | Edge only | Resource utilization efficiency optimization |
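Most of the systems in Table I adopt a hybrid 2- or 3-tier computation architecture (end device, edge node, cloud). The escalation logic such a hierarchy implies can be sketched as a toy placement function; the tier capacities and function name below are invented purely for illustration:

```python
def place_task(task_cost, device_cap=1.0, edge_cap=10.0):
    """Pick the lowest tier that can serve the task: the end device
    first, then the edge node, falling back to the cloud.  Lower
    tiers mean lower latency; the cloud is treated as unbounded."""
    if task_cost <= device_cap:
        return "device"
    if task_cost <= edge_cap:
        return "edge"
    return "cloud"

# A light, a medium and a heavy task land on successive tiers.
tiers = [place_task(c) for c in (0.5, 4.0, 50.0)]
```

Real systems such as SpanEdge or CloudPath make this decision with far richer inputs (network state, data locality, task dependencies), but the tiered fallback structure is the common skeleton.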
tematic mechanisms, including varied wireless interfaces, to utilize the heterogeneous computation resources of nearby CAVs, edge nodes, and the cloud. For optimal utilization of the resources, a dynamic scheduling interface is also provided to sense the status of available resources and to offload divided tasks in a distributed way for computation efficiency. SafeShareRide is an edge-based attack detection system addressing in-vehicle security for ridesharing services [22]. Its three detection stages leverage the smartphones of both drivers and passengers as the edge computing platform to collect multimedia information in vehicles. Specifically, the speech recognition and driving behavior detection stages are first carried out independently to capture in-vehicle danger, and the video capture and uploading stage is activated when abnormal keywords or dangerous behaviors are detected, to collect videos for cloud-based analysis. By using such an edge-cloud collaborative architecture, SafeShareRide can accurately detect in-vehicle attacks with low bandwidth demand.

Another scenario in which edge computing can play an important role is the management of IoT devices in the smart home environment, where the privacy of a wide range of home devices is a popular topic. In [23], the Vigilia system is proposed to harden smart home systems by restricting the network access of devices. A default access-deny policy and an API-granularity device access mechanism for applications are adopted to enforce access at the network level. Runtime checking implemented in the routers permits only declared communications, thus helping users secure their home devices. Similarly, the HomePad system in [24] also proposes to execute IoT applications at the edge and introduces a privacy-aware hub to mitigate security concerns. HomePad allows users to specify privacy policies that regulate how applications access and process their data. By enforcing applications to use explicit information flow, HomePad can use Prolog rules to verify at install time whether applications have the ability to violate the defined privacy policy.

Edge computing has also been widely used in the analysis of video streams. LAVEA is an edge-based system built for latency-aware video analytics near the end users [25]. In order to minimize the response time, LAVEA formulates an optimization problem to determine which parts of tasks should be offloaded to the edge computer, and uses a task queue prioritizer to minimize the makespan. It also proposes several task placement schemes to enable the collaboration of nearby edge nodes, which can further reduce the overall task completion time. VideoEdge is a system that identifies the most promising video analytics implementation across a hierarchy of clusters in the city environment [26]. A 3-tier computation architecture is considered, with deployed cameras and private clusters as the edge and a remote server as the cloud. The hierarchical edge architecture is also adopted in [27] and is believed to be promising for processing live video streams at scale. Technically, VideoEdge searches thousands of combinations of computer vision component implementations, knobs, and placements, and finds a configuration that balances accuracy and resource demands using an efficient heuristic. In [28], a video analytics system for autonomous drones is proposed, where edge computing is introduced to save bandwidth. Portable edge computers are required here to support dynamic transportation during a mission. In total, four different video transmission strategies are presented to build an adaptive and efficient computer vision pipeline. In addition to the analytics work (e.g., object recognition), the edge nodes also train filters for the drones to avoid uploading uninteresting video frames.

In order to provide flexible virtual reality (VR) on untethered smartphones, edge computing can be useful for transporting the heavy workload from smartphones to their nearby edge cloud [29]. However, the rendering task of panoramic VR frames (i.e., 2 GB per second) would saturate individual households, the common edge in the house. In [29], the MUVR system is designed to support multi-user VR with efficient bandwidth and computation resource utilization. MUVR is built on the basic observation that the VR frames being rendered and transmitted to different users are highly redundant. For computation efficiency, MUVR maintains a two-level hierarchical cache of the invariant background at the edge and the user end, to reuse frames whenever necessary. Meanwhile, MUVR transmits only a part of all frames in full and delivers just the distinct portion of the remaining frames, to further reduce transmission costs.

III. OPEN SOURCE EDGE COMPUTING PROJECTS

Besides edge computing systems designed for specific purposes, some open source edge computing projects have also been launched recently. The Linux Foundation published two projects, EdgeX Foundry in 2017 and Akraino Edge Stack [30] in 2018. The Open Networking Foundation (ONF) launched a project named CORD (Central Office Re-architected as a Datacenter) [31]. The Apache Software Foundation published Apache Edgent. Microsoft published Azure IoT Edge in 2017 and open-sourced it in 2018.

Among them, CORD and Akraino Edge Stack focus on providing edge cloud services; EdgeX Foundry and Apache Edgent focus on IoT and aim to solve problems that hinder practical applications of edge computing in IoT; Azure IoT Edge provides hybrid cloud-edge analytics, which helps migrate cloud solutions to IoT devices.

A. CORD

CORD is an open source project of ONF, initiated by AT&T and designed for network operators. The current network infrastructure is built with closed proprietary integrated systems provided by network equipment providers. Because of this closed property, the network capability cannot scale up and down dynamically, and the lack of flexibility results in inefficient utilization of computing and networking resources. CORD plans to reconstruct the edge network infrastructure to build datacenters with SDN [32], NFV [33] and Cloud technologies. It attempts to slice the computing, storage and network resources so that these datacenters can act as clouds at the edge, providing agile services for end users.

CORD is an integrated system built from commodity hardware and open source software. Fig. 14 shows the hardware
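The MUVR frame-reuse idea described above — cache the invariant background and ship only the distinct portion of redundant frames — can be sketched as a delta-encoding pair. This is a simplified model: real MUVR operates on rendered panoramic VR frames, and the dictionary-based structures and names here are illustrative only:

```python
def delta_encode(frame, cache):
    """Edge side: if this frame's background was sent before, transmit
    only the changed (foreground) part; otherwise send the frame in
    full and remember the background for later frames."""
    key = frame["background_id"]
    if key in cache:
        return {"type": "delta", "background_id": key,
                "foreground": frame["foreground"]}
    cache[key] = frame["background"]
    return {"type": "full", **frame}

def delta_decode(msg, cache):
    """User side: rebuild full frames, reusing the cached background."""
    if msg["type"] == "full":
        cache[msg["background_id"]] = msg["background"]
        return msg["background"] + msg["foreground"]
    return cache[msg["background_id"]] + msg["foreground"]

edge_cache, user_cache = {}, {}
frames = [{"background_id": "room", "background": [0] * 8, "foreground": [1]},
          {"background_id": "room", "background": [0] * 8, "foreground": [2]}]
sent = [delta_encode(f, edge_cache) for f in frames]
shown = [delta_decode(m, user_cache) for m in sent]
```

Only the first frame carries the full background over the link; every later frame with the same background pays just the foreground cost, which is the bandwidth saving MUVR exploits across redundant per-user frames.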
as machine learning, image recognition and other tasks about artificial intelligence.

Azure IoT Edge consists of three components: IoT Edge modules, the IoT Edge runtime and a cloud-based interface, as depicted in Fig. 19. The first two components run on edge devices, and the last one is an interface in the cloud. IoT Edge modules are containerized instances running the customer code or Azure services. The IoT Edge runtime manages these modules. The cloud-based interface is used to monitor and manage the former two components; in other words, to monitor and manage the edge devices.

[Fig. 19. Diagram of Azure IoT Edge: on the Azure IoT edge device, the IoT Edge runtime (IoT Edge agent and IoT Edge hub) deploys and instantiates modules (e.g., data analytics, machine learning, Azure Functions), ensures they are running, and reports module status; modules exchange data with the cloud-side IoT Hub through the IoT Edge cloud interface; sensors and devices feed data to the edge device.]

IoT Edge modules are the units of execution that run specific applications. A module image is a Docker image containing the user code. A module instance, as a Docker container, is a unit of computation running the module image. If the resources at the edge devices are sufficient, these modules can run the same Azure services or custom applications as in the cloud, because of the same programming model. In addition, these modules can be deployed dynamically, as Azure IoT Edge is scalable.

The IoT Edge runtime acts as a manager on the edge devices. It consists of two modules: the IoT Edge hub and the IoT Edge agent. The IoT Edge hub acts as a local proxy for IoT Hub, which is a managed service and a central message hub in the cloud. As a message broker, the IoT Edge hub helps modules communicate with each other and transports data to IoT Hub. The IoT Edge agent is used to deploy and monitor the IoT Edge modules. It receives the deployment information about modules from IoT Hub, instantiates these modules, and ensures they are running, for example, restarting crashed modules. In addition, it reports the status of the modules to the IoT Hub.

The IoT Edge cloud interface is provided for device management. Through this interface, users can create edge applications, then send these applications to the device and monitor the running status of the device. This monitoring function is useful for use cases with massive numbers of devices, where users can deploy applications to devices on a large scale and monitor these devices.

A simple deployment procedure for an application is as follows: users choose an Azure service or write their own code as an application, build it as an IoT Edge module image, and deploy this module image to the edge device with the help of the IoT Edge interface. Then the IoT Edge receives the deployment information, pulls the module image, and instantiates the module instance.

Azure IoT Edge has wide application areas. It already has application cases in intelligent manufacturing, irrigation systems, drone management systems and so on. It is worth noting that Azure IoT Edge itself is open source, but Azure services like Azure Functions, Azure Machine Learning and Azure Stream Analytics are charged.

F. Comparative Study

We summarize the features of the above open source edge computing systems in Table II and compare them from different aspects, including the main purpose of the systems, application area, deployment, target user, virtualization technology, system characteristic, limitations, scalability and mobility. We believe such comparisons give a better understanding of current open source edge computing systems.

[TABLE II. Comparison of open edge system characteristics.]

1) Main purpose: The main purpose shows the target problem that a system tries to fix, and it is a key factor for us in choosing a suitable system to run edge applications. As an interoperability framework, EdgeX Foundry aims to communicate with any sensor or device in IoT. This ability is necessary for edge applications with data from various sensors and devices. Azure IoT Edge offers an efficient solution for moving existing applications from cloud to edge, and for developing edge applications in the same way as cloud applications. Apache Edgent helps to accelerate the development process of data analysis in IoT use cases. CORD aims to reconstruct the current edge network infrastructure to build datacenters, so as to provide agile network services for end-user customers. From the view of edge computing, CORD provides multi-access edge services. Akraino Edge Stack provides an open source software stack to support high-availability edge clouds.

2) Application area: EdgeX Foundry and Apache Edgent both focus on the IoT edge; EdgeX Foundry is geared toward communication with various sensors and devices, while Edgent is geared toward data analysis. They are suitable for intelligent manufacturing, intelligent transportation and smart cities, where various sensors and devices generate data all the time. Azure IoT Edge can be thought of as the expansion of Azure Cloud. It has an extensive application area but depends on the compute resources of edge devices. Besides, it is very convenient to deploy edge applications about artificial intelligence, such as machine learning and image recognition, to Azure IoT Edge with the help of Azure services. CORD and Akraino Edge Stack support edge cloud services, which have no restriction on application area. If the edge devices of users do not have sufficient computing capability, these two systems are suitable for running resource-intensive and interactive applications in connection with the operator network.

3) Deployment: As for the deployment requirements, EdgeX Foundry, Apache Edgent and Azure IoT Edge are deployed on edge devices such as routers, gateways, switches and so on. Users can deploy EdgeX Foundry by themselves,
add or reduce microservices dynamically, and run their own edge applications. In contrast, users need the help of the cloud-based interface to deploy Azure IoT Edge and develop their edge applications. CORD and Akraino Edge Stack are designed for network operators, who need fabric switches, access devices, network cards and other related hardware apart from compute machines. Customers have no need to consider the hardware requirements or the management of the hardware; they simply rent the services provided by the network operators, like renting a cloud service instead of managing a physical server.

4) Target user: Though these open source systems all focus on edge computing, their target users are not the same. EdgeX Foundry, Azure IoT Edge and Apache Edgent place no restriction on target users; any developer can deploy them onto local edge devices like gateways, routers and hubs. In contrast, CORD and Akraino Edge Stack are created for network operators because they focus on edge infrastructure.

5) Virtualization technology: Virtualization technologies are widely used nowadays. Virtual machine technology can provide better management and higher utilization of resources, stability, scalability and other advantages. Container technology can provide services with isolation and agility at negligible overhead, and can be used on edge devices [39]. Using OpenStack and Docker as software components, CORD and Akraino Edge Stack use both of these technologies to support the edge cloud. Different edge devices may have different hardware and software environments. For edge systems deployed on edge devices, containers are a good way for services to remain independent of the environment. Therefore, EdgeX Foundry and Azure IoT Edge choose to run as Docker containers. As for Edgent, Edgent applications run on the JVM.

6) System characteristic: System characteristics show the unique features of a system, which may help users develop, deploy or monitor their edge applications. Making good use of these characteristics saves a lot of workload and time. EdgeX Foundry provides a common API to manage devices, which brings great convenience to deploying and monitoring edge applications at large scale. Azure IoT Edge provides powerful Azure services to accelerate the development of edge applications. Apache Edgent provides a series of functional APIs for data analytics, which lowers the difficulty of, and reduces the time spent on, developing edge analytics applications. CORD and Akraino Edge Stack provide multi-access edge services on the edge cloud: one only needs to keep a connection with the operator network to apply for these services, without deploying an edge computing system on edge devices oneself.

7) Limitation: This subsection discusses the limitations of the latest versions of these systems for deploying edge applications. The latest version of EdgeX Foundry does not provide a programmable interface in its architecture for developers to write their own applications. Although EdgeX allows custom implementations to be added, this demands more workload and time. As for Azure IoT Edge, though it is open source and free, the Azure services are chargeable as commercial software. Apache Edgent is lightweight and focuses only on data analytics. As for CORD and Akraino Edge Stack, these two systems demand a stable network between the data sources and the operators, because the edge applications run at the edge of the operator network rather than on local devices.

8) Scalability: The increasing number of applications at the edge makes the network architecture more complex and application management more difficult, so scalability is one major concern in edge computing. Among these edge computing systems, Azure IoT Edge, CORD and Akraino Edge Stack apply Docker technology or virtual machine technology to support users to
As an alternative to cloud computing, does the edge/fog computing paradigm consume more or less energy? Different points of view have been given: some claim that the decentralized data storage and processing supported by the edge computing architecture are more energy efficient [41], [42], while others show that such distributed content delivery may consume more energy than the centralized way [43].

The authors in [44] give a thorough energy analysis for applications running over a centralized DC (i.e., under cloud mode) and decentralized nano DCs (i.e., under fog mode), respectively. The results indicate that the fog mode may have a higher energy efficiency, depending on several system design factors (e.g., type of application, type of access network, and ratio of active time), and that applications that generate and distribute a large amount of data in end-user premises achieve the best energy savings under the fog mode.

Note that the decentralized nano DCs or fog nodes in our context are different from the traditional CDN datacenters [45], [46], [47]. The latter are also designed in a decentralized way, but usually with much more powerful computing/communicating/storage capacities.

B. At the Middle Edge Server Layer

At the middle layer of the edge computing paradigm, energy is also regarded as an important aspect, as the edge servers can be deployed in a domestic environment or powered by battery (e.g., a desktop or a portable WiFi router, as shown in Fig. 21). Thus, to provide higher availability, many power management techniques have been applied to limit the energy consumption of edge servers while still ensuring their performance. We review two major strategies used at the edge server layer in recent edge computing systems.

1) Low-power system design and power management: In [48], the tactical cloudlet is presented, and its energy consumption when performing VM synthesis is evaluated in particular, under different cloudlet provisioning mechanisms. The results show that the largest amount of energy is consumed by i) VM synthesis, due to the large payload size, and ii) on-demand VM provisioning, due to the long application-ready time. Such results lead to a high-energy-efficiency policy: combining cached VMs with cloudlet push for cloudlet provisioning.

A service-oriented architecture for fog/edge computing, Fog Data, is proposed and evaluated in [49]. It is implemented with an embedded computer system and performs data mining and data analytics on the raw data collected from wearable sensors (in telehealth applications). With Fog Data, the data to be transmitted are reduced by orders of magnitude, leading to enormous energy savings. Furthermore, Fog Data has a low-power architecture design, and even consumes much less energy than a Raspberry Pi.

In [50], a performance-aware orchestrator for Docker containers, named DockerCap, is developed to meet the power consumption constraints of the edge server (fog node). Following the observe-decide-act loop structure, DockerCap is able to manage container resources at run time and provide soft-level power capping strategies. The experiments demonstrate that the results obtained with DockerCap are comparable to those of the power capping solution provided by the hardware (Intel RAPL).

An energy-aware edge computer architecture designed to be portable and usable in fieldwork scenarios is presented in [51]. Based on this architecture, a high-density cluster prototype is built using compact general-purpose commodity hardware. Power management policies are implemented in the prototype to enable real-time energy awareness. Various experiments show that both the load balancing strategies and the cluster configurations have big impacts on the system's energy consumption and responsiveness.

2) Green-energy powered sustainable computing: Dual energy sources are employed to support the running of a fog computing based system in [52], where solar power is utilized as the primary energy supply of the fog nodes. A comprehensive analytic framework is presented to minimize the long-term cost of energy consumption. Meanwhile, the framework also enables an energy-efficient data offloading (from fog nodes to the cloud) mechanism to help provide a high quality of service.

In [53], a rack-scale green-energy powered edge infrastructure, InSURE (in-situ server system using renewable energy), is implemented for data pre-processing at the edge. InSURE can be powered by standalone (solar/wind) power, with batteries as the energy backup. Meanwhile, an energy buffering mechanism and a joint spatio-temporal power management scheme are applied to enable efficient energy flow control from the power supply to the edge server.

C. At the Bottom Device Layer

As a well-recognized fact, the IoT devices in edge computing usually have strict energy constraints, e.g., limited battery life and energy storage. Thus, it remains a key challenge to power a great number (up to tens of billions) of IoT devices at the edge, especially for resource-intensive applications or services [54]. We review the energy saving strategies adopted at the device layer of the edge computing paradigm. Specifically, we go through three major approaches to achieving high energy efficiency in different edge/fog computing systems.

1) Computation offloading to edge servers or cloud: As a natural idea to solve the energy poverty problem, computation offloading from IoT devices to edge servers or the cloud has long been investigated [55], [56], [57]. It has also been demonstrated that, for some particular applications or services, offloading tasks from IoT devices to more powerful ends can reduce the total energy consumption of the system, since the task execution time on powerful servers or the cloud can be much shortened [58]. Although offloading increases the energy consumption of (wireless) data transmission, the tradeoff favors the offloading option as the computational demand increases [59].

Having realized that battery life is the primary bottleneck of handheld mobile devices, the authors in [57] present MAUI, an architecture for mobile code offloading and remote execution. To reduce the energy consumption of the smartphone program, MAUI adopts a fine-grained program partitioning mechanism and minimizes the code changes required at the remote server or cloud. The ability of MAUI
in energy reduction is validated by various experiments upon macro-benchmarks and micro-benchmarks. The results show that MAUI's energy saving for a resource-intensive mobile application is up to one order of magnitude, along with a significant performance improvement.

Like MAUI [57], the authors in [59] design and implement CloneCloud, a system that helps partition mobile application programs and performs strategic offloading for fast and elastic execution at the cloud end. As the major difference from MAUI, CloneCloud involves less programmer help during the whole process and only offloads particular program partitions on demand of execution, which further speeds up the program execution. Evaluation shows that CloneCloud can improve the energy efficiency of mobile applications (along with their execution efficiency) by up to 20 times. Similarly, in [60], by continuously updating software clones in the cloud with a reasonable overhead, the offloading service can lead to a substantial energy reduction at the mobile end. For computation-intensive applications on resource-constrained edge devices, execution usually needs to be offloaded to the cloud. To reduce the response latency of the image recognition application, Precog, which has been introduced in Sec. II-G, is presented. With its on-device recognition caches, Precog greatly reduces the number of images offloaded to the edge server or cloud, by predicting and prefetching the future images to be recognized.

2) Collaborated device control and resource management: For energy saving of the massive number of devices at the edge, besides offloading their computational tasks to more powerful ends, there is also great potential in sophisticated collaboration and cooperation among the devices themselves. Particularly, when the remote resources from the edge server or cloud are unavailable, it is critical and non-trivial to complete the edge tasks without violating the energy constraint.

PCloud is presented in [10] to enhance the capability of individual mobile devices at the edge. By seamlessly using available resources from nearby devices, PCloud forms a personal cloud to serve end users whenever the cloud resources are difficult to access, where device participation

cation performance through an LTE-optimized protocol, where Device-to-Device (D2D) communication is applied as an important supporting technology (to pull memory replicas from IoT devices). The total energy consumption of REPLISOM is generally worse than in the conventional LTE scenario, as it needs more active devices. However, the energy consumed per device during a single replica transmission is much less. Further evaluation results show that REPLISOM has an energy advantage over the conventional LTE scenarios as long as the size of the replica is sufficiently small.

For IoT devices distributed at the edge, the authors in [64] leverage software agents running on the IoT devices to establish an integrated multi-agent system (MAS). By sharing data and information among the mobile agents, edge devices are able to collaborate with each other and improve the system's energy efficiency in executing distributed opportunistic applications. On an experimental platform with 100 sensor nodes and 20 smartphones as edge devices, the authors show the great potential of data transmission reduction with MAS. This leads to significant energy savings, from 15% to 66%, under different edge computing scenarios. As another work applying data reduction for energy saving, CAROMM [65] employs a change detection technique (the LWC algorithm) to control the data transmission of IoT devices while maintaining data accuracy.

V. DEEP LEARNING OPTIMIZATION AT THE EDGE

In the past decades, we have witnessed the burgeoning of machine learning, especially deep learning based applications, which have changed human beings' lives. With complex structures of hierarchical layers to capture features from raw data, deep learning models have shown outstanding performance in novel applications such as machine translation, object detection, and smart question answering systems.

Traditionally, most deep learning based applications are deployed on a remote cloud center, and many systems and tools are designed to run deep learning models efficiently on the cloud. Recently, with the rapid development of edge computing, deep learning functions are being offloaded
is guided in a privacy-preserving manner. The authors show to the edge. Thus it calls for new techniques to support the
that, by leveraging multiple nearby device resources, PCloud deep learning models at the edge. This section classifies these
can much reduce the task execution time as well as energy technologies into three categories: systems and toolkits, deep
consumption. For example, in the case study of neighborhood learning packages, and hardware.
watch with face recognition, the results show a 74% reduction
in energy consumption on a PCloud vs. on a single edge
device. Similar to PCloud, the concept of mobile device cloud A. Systems and Toolkits
(MDC) is proposed in [61], where computational offloading Building systems to support deep learning at the edge
is also adopted among the mobile devices. It shows that the is currently a hot topic for both industry and academy.
energy efficiency (gain) is increased by 26% via offloading There are several challenges when offloading state-of-the-
in MDC. The authors of [62] propose an adaptive method art AI techniques on the edge directly, including computing
to dynamically discovery available nearby resource in het- power limitation, data sharing and collaborating, and mismatch
erogeneous networks, and perform automatic transformation between edge platform and AI algorithms. To address these
between centralized and flooding strategies to save energy. challenges, OpenEI is proposed as an Open Framework for
As current LTE standard is not optimized to support a large Edge Intelligence [66]. OpenEI is a lightweight software
simultaneous access of IoT devices, the authors in [63] propose platform to equip edges with intelligent processing and data
an improved memory replication architecture and protocol, sharing capability. OpenEI consists of three components: a
REPLISON, for computation offloading of the massive IoT package manager to execute the real-time deep learning task
devices at the edge. REPLISON improves the memory repli- and train the model locally, a model selector to select the
19
most suitable model for different edge hardware, and a library including a RESTful API for data sharing. The goal of OpenEI is that any edge hardware will have intelligent capability after deploying it.

In the industry, some top-leading tech giants have published several projects to move deep learning functions from the cloud to the edge. Besides Azure IoT Edge published by Microsoft, which has been introduced in Sec. III-E, Amazon and Google also build their services to support deep learning on the edge. Table III summarizes the features of the systems which will be discussed below.

Amazon Web Services (AWS) has published IoT Greengrass ML Inference [67] after IoT Greengrass. AWS IoT Greengrass ML Inference is software to support machine learning inference on local devices. With AWS IoT Greengrass ML Inference, connected IoT devices can run AWS Lambda functions and have the flexibility to execute predictions based on deep learning models created, trained, and optimized in the cloud. AWS IoT Greengrass consists of three software distributions: AWS IoT Greengrass Core, AWS IoT Device SDK, and AWS IoT Greengrass SDK. Greengrass is flexible for users as it includes pre-built TensorFlow, Apache MXNet, and Chainer packages, and it can also work with Caffe2 and Microsoft Cognitive Toolkit.

Cloud IoT Edge [68] extends Google Cloud's data processing and machine learning to edge devices by taking advantage of Google AI products, such as TensorFlow Lite and Edge TPU. Cloud IoT Edge can run on either Android or Linux-based operating systems. It is made up of three components: Edge Connect ensures the connection to the cloud and the updates of software and firmware, Edge ML runs ML inference with TensorFlow Lite, and Edge TPU is specifically designed to run TensorFlow Lite ML models. Cloud IoT Edge can satisfy the real-time requirements of mission-critical IoT applications, as it takes advantage of Google AI products (such as TensorFlow Lite and Edge TPU) and optimizes their performance collaboratively.

B. Deep Learning Packages

Many deep learning packages have been widely used to deliver deep learning algorithms and are deployed in cloud data centers, including TensorFlow [69], Caffe [70], PyTorch [71], and MXNet [72]. Due to the limited computing resources at the edge, the packages designed for the cloud are not suitable for edge devices. Thus, to support data processing with deep learning models at the edge, several edge-based deep learning frameworks and tools have been released. In this section, we introduce TensorFlow Lite, Caffe2, PyTorch, MXNet, CoreML [73], and TensorRT [74], whose features are summarized in Table IV.

TensorFlow Lite [75] is TensorFlow's lightweight solution designed for mobile and edge devices. TensorFlow was developed by Google in 2016 and has become one of the most widely used deep learning frameworks in cloud data centers. To enable low-latency inference of on-device deep learning models, TensorFlow Lite leverages many optimization techniques, including kernels optimized for mobile apps, pre-fused activations, and quantized kernels that allow smaller and faster (fixed-point math) models.

Facebook published Caffe2 [76] as a lightweight, modular, and scalable framework for deep learning in 2017. Caffe2 is a new version of Caffe, which was first developed by UC Berkeley AI Research (BAIR) and community contributors. Caffe2 provides an easy and straightforward way to experiment with deep learning and to leverage community contributions of new models and algorithms. Compared with the original Caffe framework, Caffe2 merges many new computation patterns, including distributed computation, mobile deployment, reduced-precision computation, and more non-vision use cases. Caffe2 supports multiple platforms, which enables developers to use the power of GPUs in the cloud or at the edge with cross-platform libraries.

PyTorch [71] is published by Facebook. It is a Python package that provides two high-level features: tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. Maintained by the same company (Facebook), PyTorch and Caffe2 have their own advantages. PyTorch is geared toward research, experimentation, and trying out exotic neural networks, while Caffe2 supports more industrial-strength applications with a heavy focus on mobile. In 2018, the Caffe2 and PyTorch projects merged into a new one named PyTorch 1.0, which combines the user experience of the PyTorch frontend with the scaling, deployment, and embedding capabilities of the Caffe2 backend.

MXNet [72] is a flexible and efficient library for deep learning. It was initially developed by the University of Washington and Carnegie Mellon University to support CNNs and long short-term memory (LSTM) networks. In 2017, Amazon announced MXNet as its choice of deep learning framework. MXNet places a special emphasis on speeding up the development and deployment of large-scale deep neural networks. It is designed to support multiple platforms (either cloud platforms or edge ones) and can execute both training and inference tasks. Furthermore, besides devices based on the Windows, Linux, and OS X operating systems, it also supports the Ubuntu Arch64 and Raspbian ARM based operating systems.

CoreML [73] is a deep learning framework optimized for on-device performance in terms of memory footprint and power consumption. Published by Apple, it lets users integrate trained machine learning models into Apple products such as Siri, Camera, and QuickType. CoreML supports not only deep learning models, but also some standard models such as tree ensembles, SVMs, and generalized linear models. Built on top of low-level technologies, CoreML aims to make full use of CPU and GPU capability and to ensure the performance and efficiency of data processing.

The TensorRT [74] platform acts as a deep learning inference engine to run models trained with TensorFlow, Caffe, and other frameworks. Developed by NVIDIA, it is designed to reduce latency and increase throughput when executing inference tasks on NVIDIA GPUs. To achieve computing acceleration, TensorRT leverages several techniques, including weight and activation precision calibration, layer and tensor fusion, kernel auto-tuning, dynamic tensor memory, and multi-stream execution.
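Both TensorFlow Lite's quantized kernels and TensorRT's precision calibration rest on the same basic idea: mapping floating-point tensors to 8-bit integers through a scale and a zero point. A minimal sketch of such affine quantization in pure Python (illustrative only; the real frameworks use calibrated per-tensor or per-channel parameters and optimized integer kernels):

```python
# Illustrative 8-bit affine quantization: q = round(x / scale) + zero_point,
# clamped to the int8 range, with x recovered as (q - zero_point) * scale.

def quant_params(xs, qmin=-128, qmax=127):
    """Derive scale and zero point mapping the value range of xs onto int8."""
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)  # range must include 0.0
    scale = (hi - lo) / (qmax - qmin) or 1.0       # avoid zero scale
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zp, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(x / scale) + zp)) for x in xs]

def dequantize(qs, scale, zp):
    return [(q - zp) * scale for q in qs]

weights = [-1.5, -0.2, 0.0, 0.7, 2.1]              # toy float weights
s, zp = quant_params(weights)
q = quantize(weights, s, zp)
recovered = dequantize(q, s, zp)
# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
assert max_err <= s / 2 + 1e-9
```

The closing assertion checks the defining property of the scheme: once the quantization grid covers the tensor's value range, the round-trip error stays within half a quantization step, which is why 8-bit models can remain accurate while being smaller and faster.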
TABLE III
COMPARISON OF DEEP LEARNING SYSTEMS ON EDGE

Features       | AWS IoT Greengrass                           | Azure IoT Edge                       | Cloud IoT Edge
Developer      | Amazon                                       | Microsoft                            | Google
Components     | IoT Greengrass Core, IoT Device SDK,         | IoT Edge modules, IoT Edge runtime,  | Edge Connect, Edge ML, Edge TPU
               | IoT Greengrass SDK                           | Cloud-based interface                |
OS             | Linux, macOS, Windows                        | Windows, Linux, macOS                | Linux, macOS, Windows, Android
Target device  | Multiple platforms (GPU-based, Raspberry Pi) | Multiple platforms                   | TPU
Characteristic | Flexible                                     | Windows friendly                     | Real-time
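Choosing among these systems and packages means weighing several metrics at once. A toy comparison in Python (all framework names and numbers are hypothetical) illustrates why measurement studies such as pCAMP [77] find that no single framework wins on every metric:

```python
# Toy multi-metric comparison of deep learning packages on one edge
# device. All values are hypothetical; a real pCAMP-style study would
# measure latency, memory footprint, and energy on actual hardware.

benchmarks = {
    # framework: (latency_ms, memory_mb, energy_mj) -- lower is better
    "A": (120.0, 85.0, 40.0),
    "B": (95.0, 140.0, 55.0),
    "C": (150.0, 60.0, 35.0),
}

metrics = ("latency_ms", "memory_mb", "energy_mj")

def winner(metric_idx):
    """Framework with the lowest value for the given metric."""
    return min(benchmarks, key=lambda name: benchmarks[name][metric_idx])

winners = {metrics[i]: winner(i) for i in range(len(metrics))}
# Different metrics crown different frameworks, so no package dominates:
assert len(set(winners.values())) > 1
```

Because each metric can crown a different framework, the practical choice depends on which constraint (latency, memory, or energy) binds on the target device.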
Considering the different performance of the packages and the diversity of edge hardware, it is challenging to choose a suitable package to build edge computing systems. To evaluate the deep learning frameworks at the edge and provide a reference for selecting appropriate combinations of package and edge hardware, pCAMP [77] is proposed. It compares the packages' performance (w.r.t. latency, memory footprint, and energy) on five edge devices and observes that no framework wins over all the others in all aspects. This indicates that there is much room to improve the frameworks at the edge. Currently, developing a lightweight, efficient, and highly scalable framework to support diverse deep learning models at the edge could not be more important and urgent.

In addition to these single-device based frameworks, more researchers focus on distributed deep learning models over the cloud and edge. DDNN [78] is a distributed deep neural network architecture across the cloud, the edge, and edge devices. DDNN maps the sections of a deep neural network onto different computing devices to minimize communication and resource usage for devices and maximize the usefulness of features extracted from the cloud.

Neurosurgeon [79] is a lightweight scheduler which can automatically partition DNN computation between mobile devices and datacenters at the granularity of neural network layers. By effectively leveraging the resources in the cloud and at the edge, Neurosurgeon achieves low computing latency, low energy consumption, and high traffic throughput.

C. Hardware System

The hardware designed specifically for deep learning can strongly support edge computing. Thus, we further review relevant hardware systems and classify them into three categories: FPGA-based hardware, GPU-based hardware, and ASIC.

1) FPGA-based Hardware: A field-programmable gate array (FPGA) is an integrated circuit that can be configured by the customer or designer after manufacturing. FPGA-based accelerators can achieve high-performance computing with low energy, high parallelism, high flexibility, and high security [80].

The work in [81] implements a CNN accelerator on a VC707 FPGA board. The accelerator focuses on solving the problem that the computation throughput does not match the memory bandwidth well. By quantitatively analyzing the two factors using various optimization techniques, the authors provide a solution with better performance and lower FPGA resource requirements, and their solution achieves a peak performance of 61.62 GOPS under a 100 MHz working frequency.

Following the above work, Qiu et al. [82] propose a CNN accelerator built upon an embedded FPGA, the Xilinx Zynq ZC706, for large-scale image classification. It presents an in-depth analysis of state-of-the-art CNN models and shows that convolutional layers are computation-centric while fully-connected layers are memory-centric. The average performance of the CNN accelerator is 187.8 GOPS at the convolutional layers and 137.0 GOPS for the full CNN under a 150 MHz working frequency, which outperforms previous approaches significantly.

An efficient speech recognition engine (ESE) is designed to speed up predictions and save energy when applying the LSTM deep learning model. ESE is implemented on a Xilinx XCKU060 FPGA operating at 200 MHz. For the sparse LSTM network, it can achieve 282 GOPS, corresponding to 2.52 TOPS on the dense LSTM network. Besides, energy efficiency improvements of 40x and 11.5x are achieved compared with CPU and GPU based solutions, respectively.

2) GPU-based Hardware: A GPU can execute parallel programs at a much higher speed than a CPU, which makes it fit the computational paradigm of deep learning algorithms. Thus, to run deep learning models at the edge, building the hardware platform with GPUs is a natural choice. Specifically, NVIDIA Jetson TX2 and DRIVE PX2 are two representative GPU-based hardware platforms for deep learning.

NVIDIA Jetson TX2 [83] is an embedded AI computing device designed to achieve low latency and high power efficiency. It is built upon an NVIDIA Pascal GPU with 256 CUDA cores, an HMP dual-core Denver CPU, and a quad-core ARM Cortex-A57 CPU. It is loaded with 8 GB of memory and 59.7 GB/s of memory bandwidth, and its power consumption is about 7.5 watts. The GPU is used to execute deep learning tasks and the CPUs are used to maintain general tasks. It also supports the NVIDIA JetPack SDK, which includes libraries for deep learning, computer vision, GPU computing, and multimedia processing.

NVIDIA DRIVE PX [84] is designed as an AI supercomputer for autonomous driving. The architecture is available in a variety of configurations, from a mobile processor operating at 10 watts to a multi-chip AI processor delivering 320 TOPS. It can fuse data from multiple cameras, as well as lidar, radar, and ultrasonic sensors.

3) ASIC: An Application-Specific Integrated Circuit (ASIC) is an integrated circuit customized for a particular application rather than general-purpose use. ASICs are suitable for the edge scenario as they usually have a smaller size, lower power consumption, higher performance, and higher security than many other circuits. Researchers and developers design ASICs to meet the computing pattern of deep
TABLE IV
COMPARISON OF DEEP LEARNING PACKAGES ON EDGE
[22] L. Liu, X. Zhang, M. Qiao, and W. Shi, “Safeshareride: edge-based attack detection in ridesharing services,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018.
[23] R. Trimananda, A. Younis, B. Wang, B. Xu, B. Demsky, and G. Xu, “Vigilia: securing smart home edge computing,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 74–89.
[24] I. Zavalyshyn, N. O. Duarte, and N. Santos, “Homepad: a privacy-aware smart hub for home environments,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 58–73.
[25] S. Yi, Z. Hao, Q. Zhang, Q. Zhang, W. Shi, and Q. Li, “Lavea: latency-aware video analytics on edge computing platform,” in Proceedings of the Second ACM/IEEE Symposium on Edge Computing (SEC). ACM, 2017, p. 15.
[26] C.-C. Hung, G. Ananthanarayanan, P. Bodik, L. Golubchik, M. Yu, P. Bahl, and M. Philipose, “Videoedge: processing camera streams using hierarchical clusters,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 115–131.
[27] L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for mobile computing,” in Proceedings of the 35th Annual IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2016, pp. 1–9.
[28] J. Wang, Z. Feng, Z. Chen, S. George, M. Bala, P. Pillai, S.-W. Yang, and M. Satyanarayanan, “Bandwidth-efficient live video analytics for drones via edge computing,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 159–173.
[29] Y. Li and W. Gao, “Muvr: supporting multi-user mobile virtual reality with resource constrained edge cloud,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 1–16.
[30] “Akraino edge stack,” 2018, https://www.akraino.org.
[31] “Cord,” 2018, https://www.opennetworking.org/cord.
[32] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, “A survey of software-defined networking: Past, present, and future of programmable networks,” IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.
[33] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, “Nfv: state of the art, challenges, and implementation in next generation mobile networks (vepc),” IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.
[34] A. W. Manggala, Hendrawan, and A. Tanwidjaja, “Performance analysis of white box switch on software defined networking using open vswitch,” in International Conference on Wireless and Telematics, 2016.
[35] K. C. Okafor, I. E. Achumba, G. A. Chukwudebe, and G. C. Ononiwu, “Leveraging fog computing for scalable iot datacenter using spine-leaf network topology,” Journal of Electrical and Computer Engineering, vol. 2017, no. 2363240, pp. 1–11, 2017.
[36] “Edgex foundry,” 2018, https://www.edgexfoundry.org.
[37] “Apache edgent,” 2018, http://edgent.apache.org.
[38] “Azure iot,” 2018, https://azure.microsoft.com/en-us/overview/iot/.
[39] R. Morabito, “Virtualization on internet of things edge devices with container technologies: a performance evaluation,” IEEE Access, vol. 5, pp. 8835–8850, 2017.
[40] T. Mastelic, A. Oleksiak, H. Claussen, I. Brandic, J.-M. Pierson, and A. V. Vasilakos, “Cloud computing: Survey on energy efficiency,” ACM Computing Surveys (CSUR), vol. 47, no. 2, p. 33, 2015.
[41] V. Valancius, N. Laoutaris, L. Massoulié, C. Diot, and P. Rodriguez, “Greening the internet with nano data centers,” in Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies. ACM, 2009, pp. 37–48.
[42] S. Sarkar and S. Misra, “Theoretical modelling of fog computing: A green computing paradigm to support iot applications,” IET Networks, vol. 5, no. 2, pp. 23–29, 2016.
[43] A. Feldmann, A. Gladisch, M. Kind, C. Lange, G. Smaragdakis, and F.-J. Westphal, “Energy trade-offs among content delivery architectures,” in Telecommunications Internet and Media Techno Economics (CTTE), 2010 9th Conference on. IEEE, 2010, pp. 1–6.
[44] F. Jalali, K. Hinton, R. Ayre, T. Alpcan, and R. S. Tucker, “Fog computing may help to save energy in cloud computing,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 5, pp. 1728–1739, 2016.
[45] G. Tang, H. Wang, K. Wu, and D. Guo, “Tapping the knowledge of dynamic traffic demands for optimal cdn design,” IEEE/ACM Transactions on Networking (TON), vol. 27, no. 1, pp. 98–111, 2019.
[46] G. Tang, K. Wu, and R. Brunner, “Rethinking cdn design with distributed time-varying traffic demands,” in INFOCOM. IEEE, 2017, pp. 1–9.
[47] G. Tang, H. Wang, K. Wu, D. Guo, and C. Zhang, “When more may not be better: Toward cost-efficient cdn selection,” in INFOCOM WKSHPS. IEEE, 2018, pp. 1–2.
[48] G. Lewis, S. Echeverría, S. Simanta, B. Bradshaw, and J. Root, “Tactical cloudlets: Moving cloud computing to the edge,” in Military Communications Conference (MILCOM), 2014 IEEE. IEEE, 2014, pp. 1440–1446.
[49] H. Dubey, J. Yang, N. Constant, A. M. Amiri, Q. Yang, and K. Makodiya, “Fog data: Enhancing telehealth big data through fog computing,” in Proceedings of the ASE BigData & SocialInformatics 2015. ACM, 2015, p. 14.
[50] A. Asnaghi, M. Ferroni, and M. D. Santambrogio, “Dockercap: A software-level power capping orchestrator for docker containers,” in Computational Science and Engineering (CSE) and IEEE Intl Conference on Embedded and Ubiquitous Computing (EUC) and 15th Intl Symposium on Distributed Computing and Applications for Business Engineering (DCABES), 2016 IEEE Intl Conference on. IEEE, 2016, pp. 90–97.
[51] T. Rausch, C. Avasalcai, and S. Dustdar, “Portable energy-aware cluster-based edge computers,” in 2018 IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 260–272.
[52] Y. Nan, W. Li, W. Bao, F. C. Delicato, P. F. Pires, Y. Dou, and A. Y. Zomaya, “Adaptive energy-aware computation offloading for cloud of things systems,” IEEE Access, vol. 5, pp. 23947–23957, 2017.
[53] C. Li, Y. Hu, L. Liu, J. Gu, M. Song, X. Liang, J. Yuan, and T. Li, “Towards sustainable in-situ server systems in the big data era,” in ACM SIGARCH Computer Architecture News, vol. 43, no. 3. ACM, 2015, pp. 14–26.
[54] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.
[55] A. Rudenko, P. Reiher, G. J. Popek, and G. H. Kuenning, “Saving portable computer battery power through remote process execution,” ACM SIGMOBILE Mobile Computing and Communications Review, vol. 2, no. 1, pp. 19–26, 1998.
[56] R. Kemp, N. Palmer, T. Kielmann, F. Seinstra, N. Drost, J. Maassen, and H. Bal, “eyedentify: Multimedia cyber foraging from a smartphone,” in 2009 11th IEEE International Symposium on Multimedia. IEEE, 2009, pp. 392–399.
[57] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, and P. Bahl, “Maui: making smartphones last longer with code offload,” in Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services. ACM, 2010, pp. 49–62.
[58] K. Ha, “System infrastructure for mobile-cloud convergence,” Ph.D. dissertation, Carnegie Mellon University, 2016.
[59] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, “Clonecloud: elastic execution between mobile device and cloud,” in Proceedings of the Sixth Conference on Computer Systems. ACM, 2011, pp. 301–314.
[60] M. V. Barbera, S. Kosta, A. Mei, and J. Stefa, “To offload or not to offload? the bandwidth and energy costs of mobile cloud computing,” in INFOCOM, 2013 Proceedings IEEE. IEEE, 2013, pp. 1285–1293.
[61] A. Fahim, A. Mtibaa, and K. A. Harras, “Making the case for computational offloading in mobile device clouds,” in Proceedings of the 19th Annual International Conference on Mobile Computing & Networking. ACM, 2013, pp. 203–205.
[62] W. Liu, T. Nishio, R. Shinkuma, and T. Takahashi, “Adaptive resource discovery in mobile cloud computing,” Computer Communications, vol. 50, pp. 119–129, 2014.
[63] S. Abdelwahab, B. Hamdaoui, M. Guizani, and T. Znati, “Replisom: Disciplined tiny memory replication for massive iot devices in lte edge cloud,” IEEE Internet of Things Journal, vol. 3, no. 3, pp. 327–338, 2016.
[64] S. Abdelwahab, B. Hamdaoui, M. Guizani, T. Znati, T. Leppänen, and J. Riekki, “Energy efficient opportunistic edge computing for the internet of things,” Web Intelligence and Agent Systems, in press, 2018.
[65] P. P. Jayaraman, J. B. Gomes, H. L. Nguyen, Z. S. Abdallah, S. Krishnaswamy, and A. Zaslavsky, “Cardap: A scalable energy-efficient context aware distributed mobile data analytics platform for the fog,” in East European Conference on Advances in Databases and Information Systems. Springer, 2014, pp. 192–206.
[66] X. Zhang, Y. Wang, S. Lu, L. Liu, L. Xu, and W. Shi, “OpenEI: An Open Framework for Edge Intelligence,” in 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), July 2019.
[67] (2019) AWS IoT Greengrass. [Online]. Available: https://aws.amazon.com/greengrass/
[68] (2019) Cloud IoT Edge: Deliver Google AI capabilities at the edge. [Online]. Available: https://cloud.google.com/iot-edge/