Abstract—Due to the ever-growing demands of modern cities, unreliable and inefficient power transportation has become a critical issue in today's power grid. This makes power grid monitoring one of the key modules in the power grid system, playing an important role in preventing severe safety accidents. However, traditional manual inspection cannot efficiently achieve this goal due to its low efficiency and high cost. The smart grid, as a new generation of the power grid, sheds new light on constructing an intelligent, reliable and efficient power grid with advanced information technology. In the smart grid, automated monitoring can be realized by applying advanced deep learning algorithms on powerful cloud computing platforms together with IoT (Internet of Things) devices such as smart cameras. The performance of cloud monitoring, however, can still be unsatisfactory, since the large amount of data transmitted over the Internet leads to high delay and a low frame rate. In this paper, we note that the edge computing paradigm can well complement the cloud and significantly reduce the delay to improve the overall performance. To this end, we propose an edge computing framework for real-time monitoring, which moves the computation away from the centralized cloud to near-device edge servers. To maximize the benefits, we formulate a scheduling problem to further optimize the framework and propose an efficient heuristic algorithm based on the simulated annealing strategy. Both real-world experiments and simulation results show that our framework can increase the monitoring frame rate by up to 10 times and reduce the detection delay by up to 85% compared to the cloud monitoring solution.

Index Terms—Edge Computing; Smart Grid; Deep Learning.

I. INTRODUCTION

The electric power grid has become an important infrastructure in daily life, delivering electricity from large power stations to customers. Though it has existed for a long period of time, the current power grid is largely built and operated following theories and technologies from 100 years ago [1], which have been shown to be less capable of fulfilling the requirements of modern society. For example, one of the important components of the power grid system, the regular monitoring of high voltage power lines, still adopts a traditional manual monitoring scheme, causing a number of problems: low efficiency, difficulty of inspection during off-work periods, and inability to provide real-time monitoring. As a result, the electricity system has frequently suffered from blackouts and grid failures in the past decades, leading to security risks and inconvenience, as well as huge economic losses for both power providers and consumers.

In recent years, the concept of the smart grid has been proposed to resolve a variety of operational and energy problems in the electrical power grid. By combining advanced technologies like the Internet of Things (IoT) [1] and Artificial Intelligence (AI) [2], the smart grid can create an automated energy delivery network which is more secure, reliable and intelligent than the current grid. In the smart grid, regular monitoring is expected to be performed with all kinds of IoT devices and processing servers instead of manpower: IoT devices such as sensors and cameras collect and upload real-time videos and other information about the power line to the processing servers, and this information is then automatically processed by the processing servers running deep learning algorithms to detect potential threats and, if necessary, trigger appropriate actuations, achieving timely and intelligent monitoring with automatic threat identification. Since deep learning algorithms are extremely data intensive, computation intensive and hardware-dependent, the processing servers of the smart grid are expected to be equipped with abundant computation resources. This makes cloud computing a widely proposed, natural choice to host such servers [3]. However, transferring large volumes of data to the cloud puts significant pressure on the network and generates huge communication costs. In addition, from the power provider's perspective, moving data to the remote cloud may also raise privacy concerns. Moreover, the network latency can become a severe performance bottleneck due to the latency sensitivity of real-time monitoring.

Recently, the concept of edge computing has been proposed as a complement to cloud computing, attracting great interest from both academia and industry. In contrast to the cloud, the edge usually refers to a geographical concept which is in close proximity to the end devices in the network. By pushing applications, data and services away from the centralized cloud to edge servers, the computing paradigm is extended to edge-cloud collaborative computing, which has shown outstanding performance on communication latency and traffic reduction [4], and eases the privacy concerns of users as well.

In this paper, we for the first time introduce edge computing to the smart grid scenario and propose a five-layer edge computing framework to achieve high-performance real-time monitoring for the smart grid, where, by moving deep learning algorithms to edge servers, the monitoring performance can be greatly improved in terms of detection latency, frame rate, etc. To maximize the potential benefits, we further formulate a scheduling problem and propose an efficient heuristic algorithm based on the simulated annealing strategy.
[Fig. 1: An edge computing framework for real-time monitoring in smart grid. From bottom to top: the IoT device layer (cameras), the edge network layer (Wi-Fi, 5G), the edge server layer, the Internet layer, and the cloud layer.]

…cost [13], inference latency and energy consumption [14]. This motivates us to investigate the opportunity of introducing edge computing to the smart grid scenario and to propose an edge computing framework that achieves high-performance real-time monitoring for the smart grid.

III. INTRODUCING EDGE TO REAL-TIME MONITORING FOR SMART GRID

In this section, we present our framework that introduces edge computing to smart grid real-time monitoring. We first describe the details of the framework and then make a comparison with the cloud and traditional monitoring schemes.

A. Real-time Edge Monitoring Framework

Power line monitoring is one of the important tasks in power grids. The common solution in the traditional grid is to delegate professional staff to inspect whether the high voltage power lines are damaged, while in the smart grid, all monitoring tasks are taken over by IoT devices attached to the grid. The centralized cloud then gathers information such as pictures or videos captured by the IoT devices through the Internet and processes it with deep learning algorithms to detect potential threats.

We propose a novel framework which introduces edge computing into the smart grid, as illustrated in Fig. 1. Our framework contains five main layers: the IoT device layer, edge network layer, edge server layer, Internet layer and cloud layer. The monitoring procedure goes through these five layers from the bottom to the top.

IoT device layer: The IoT device layer consists of IoT devices, such as smart cameras, that are used to monitor the smart grid. These devices are not only responsible for capturing high resolution pictures or video streams, but are also installed with a communication module and local storage to assist the transmission of the captured information to the edge server.

Edge network layer: This layer connects IoT devices to edge servers. For connectivity, since each edge server may take charge of multiple IoT devices that may be attached to the grid arbitrarily, wireless connections are often preferred over wired connections due to their convenience. Also, the transmission of images and videos requires a high data rate. This makes Wi-Fi and 5G networks better choices than ZigBee and Bluetooth.

Edge server layer: One key module in this layer is the threat identification module, which executes the inference algorithm based on trained deep learning models. Once it finds a threat in the received pictures or videos, such as a person, a car or a bird getting close to the power line, the module uploads the monitoring data with warning information to the cloud. Based on the edge network connectivity type, edge servers can be deployed at Wi-Fi access points, 5G base stations, or both. These servers should be equipped with high performance GPU resources to support the execution of the threat identification module.

Internet layer: This layer connects edge servers to the cloud server. Different from the edge network layer, wired connections are often used here as backbones in the public network domain.

Cloud layer: This layer is the central controller of the monitoring system. It receives warning information from all the edge servers and reacts to those warnings according to the smart grid's strategies. For instance, if the cloud consistently receives warnings from one specific area, the smart grid will recognize that there are potential threats in that area, and further checking and restoration processes will be executed.
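To make the edge server layer concrete, here is a minimal sketch (in Python) of the threat-identification loop an edge server could run. It is an illustration only: the camera address, the cloud endpoint and the detect_threats stub are hypothetical names, and the real detector would be a trained model such as the SSD network used in Section V.

    import time
    import cv2        # OpenCV, for reading the camera stream and encoding frames
    import requests   # for uploading warnings to the cloud over HTTP

    CAMERA_URL = "rtsp://192.168.1.10/stream"                  # hypothetical camera
    CLOUD_WARNING_ENDPOINT = "https://cloud.example.com/warn"  # hypothetical endpoint
    THREAT_CLASSES = {"person", "car", "bird"}                 # threats named in Sec. III

    def detect_threats(frame):
        """Stub for the trained detector (e.g., an SSD model [15]).
        Returns a list of (class_name, confidence) detections."""
        return []  # plug the actual model inference in here

    def monitor():
        cap = cv2.VideoCapture(CAMERA_URL)
        while True:
            ok, frame = cap.read()
            if not ok:
                time.sleep(1.0)  # camera hiccup: wait and retry
                continue
            hits = [d for d in detect_threats(frame) if d[0] in THREAT_CLASSES]
            if hits:
                # Only frames containing threats leave the edge, which is what
                # keeps the Internet traffic and privacy exposure low.
                _, jpg = cv2.imencode(".jpg", frame)
                requests.post(CLOUD_WARNING_ENDPOINT,
                              files={"frame": jpg.tobytes()},
                              data={"detections": str(hits)})

    if __name__ == "__main__":
        monitor()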
B. Comparison Between Edge, Cloud and Traditional Monitoring Approaches

Traditional monitoring approaches adopt manual inspection, which involves enormous human labor and cannot satisfy real-time requirements, not to mention the high employment cost it introduces. Even in developing countries, the monetary cost of one climb up an electric tower can reach USD 100. Compared to new monitoring schemes using advanced information technologies, the traditional approaches should not be adopted in the smart grid environment.
TABLE I: The comparison between edge, cloud and traditional monitoring approaches

Manual or automated — Edge: automated. Cloud: automated. Traditional: manual.
Monitoring frequency — Edge: real-time. Cloud: real-time. Traditional: long-term periodic inspection.
Threat detection — Edge: deep learning based approaches. Cloud: deep learning based approaches. Traditional: manual inspection.
Server hardware — Edge: moderate storage and computation resources with one or several GPUs. Cloud: large storage and computation resources with powerful GPU clusters. Traditional: no server required.
Server location — Edge: co-located with wireless access points, e.g., Wi-Fi routers and 5G base stations. Cloud: at specific places chosen by the cloud service provider, with the size of several stadiums. Traditional: no server required.
Network latency — Edge: less than tens of milliseconds. Cloud: usually between 50 and 500 ms. Traditional: no network.
Network traffic — Edge: small, since only warning information is transmitted through the Internet, and infrequently. Cloud: large, because all picture and video information is transmitted through the Internet. Traditional: no network.
Data privacy — Edge: more secure; most data is stored at IoT devices and edge servers without going through the Internet. Cloud: captured pictures and videos may leak on the Internet. Traditional: no data leak concern.
Reliability — Edge: high, with distributed edge servers; a single edge server going down does not affect the whole monitoring system. Cloud: low; a severe issue with the centralized cloud terminates the whole system. Traditional: low, since the inspection cycle is extremely long compared with automated real-time monitoring.
Price — Edge: moderate; most of the cost is the one-time charge for buying and setting up the IoT devices and edge servers. Cloud: high; cloud monitoring requires huge network bandwidth for large-scale data transmission, which is charged heavily by the Internet Service Provider (ISP). Traditional: extremely high; each maintenance requires high-quality staff, and the labor cost is high.
Cloud monitoring systems, in contrast, are intelligent, automated and able to achieve real-time monitoring, and the management cost is significantly reduced compared to traditional monitoring. However, one critical issue is network performance. The traffic created on the Internet is enormous, because all the picture and video information captured by every single IoT device must be uploaded to the cloud through the Internet. This occupies a large amount of network bandwidth and is likely to congest the network. Also, the high network latency increases the threat detection latency and degrades the detection performance.

Edge monitoring systems, on the other hand, can well complement the cloud. Since edge servers are distributed near Wi-Fi routers and/or 5G base stations, each edge server only needs to take charge of a limited number of IoT devices and is less likely to be overloaded. The network traffic generated over the Internet is also greatly reduced, since data is only transmitted when the edge server finds a potential threat, which usually happens with only a small probability. Also, the edge network latency is much smaller than the Internet latency, which makes real-time threat detection more feasible. Besides these advantages, edge monitoring excels over cloud monitoring from the perspectives of data privacy and system reliability. We summarize the comparison between edge, cloud and traditional monitoring approaches in TABLE I.

IV. EDGE SCHEDULING OPTIMIZATION

As edge servers are distributed, an IoT device may be eligible to connect to multiple edge servers, and each feasible connection will possess a different network bandwidth. Thus it is important to appropriately arrange the connections to obtain the optimal performance. In this section, we formulate this as a scheduling problem and further propose an algorithm to optimize it.

A. Problem Formulation

In the edge monitoring system, there can be hundreds of IoT devices and a small number of edge servers. Though the edge servers and IoT devices are located in fixed places, the connections between IoT devices and edge servers are still flexible, and they influence the monitoring performance of the overall system. One important performance metric is the effective frame rate, defined as the number of picture frames the whole system can capture and process in the threat detection module within one second. The higher the effective frame rate, the better the monitoring quality the system can achieve. The delay is another important metric we need to consider: the potential threat in a picture frame should be detected with low delay after the frame is captured by the device. Based on these two optimization objectives, we mathematically formulate a scheduling problem for our edge monitoring system.

Suppose there are n IoT devices, m edge servers and a centralized cloud in the smart grid. We use the set C to represent the set of IoT devices and c_i to represent each IoT device, where 1 ≤ i ≤ n. Also, we use the set E to represent the set of edge servers and e_j to represent each edge server, where 1 ≤ j ≤ m.

We first introduce the constants involved in our problem formulation. For simplicity, we consider each IoT device to be the same type of smart camera, and each edge server to possess the same computation resources. For each IoT device c_i, each picture it captures is in the same format with size s. The maximum frame rate at which each IoT device can capture is f_u. Since the probability of detecting threats varies among locations, we define p_i as the probability that a threat is detected in one picture captured by device c_i; these probabilities can be estimated from real-world data. For each edge server, we define the edge processing rate v_e as the number of picture frames that can be processed on one edge server within one second. For the cloud, the processing rate is denoted as v_c. For the connections, the uplink bandwidth from each IoT device c_i to the cloud is denoted as b_{ci}, and the uplink bandwidth from each edge server e_j to the cloud is denoted as b_{ej}.
The uplink bandwidth between each IoT device and each edge server varies with the geographical distance; we define the uplink bandwidth between device c_i and edge server e_j as b_{i,j}.

Then we introduce the variables. We define the set X = {x_{i,j} | 1 ≤ i ≤ n, 1 ≤ j ≤ m} to indicate the connections between IoT devices and edge servers: if c_i is chosen to connect to e_j, x_{i,j} is set to 1, and otherwise 0. Since each IoT device can only connect to one edge server,

\sum_{j=1}^{m} x_{i,j} = 1, \quad \forall i \qquad (1)

x_{i,j} \in \{0, 1\}, \quad \forall i, j \qquad (2)

For IoT device c_i, the effective frame rate f_i is the actual number of picture frames processed within one second. f_i can be greater than neither the number of frames the edge network can transmit within one second nor the maximum frame rate of the IoT device, so that

x_{i,j} \cdot f_i \le \min\left(\frac{b_{i,j}}{s}, f_u\right), \quad \forall i, j \qquad (3)

Also, for one specific edge server e_j, the total number of frames it receives in one second should not be greater than the number of frames it can process, thus

\sum_{i=1}^{n} x_{i,j} \cdot f_i \le v_e, \quad \forall j \qquad (4)

We are now able to calculate some important metrics. We define the effective frame rate for edge monitoring as F_e. It can be calculated as the sum of each device's effective frame rate:

F_e = \sum_{i=1}^{n} f_i \qquad (5)

According to equations (1)–(4), F_e can be transformed to:

F_e = \sum_{j=1}^{m} \min\left(\sum_{i=1}^{n} x_{i,j} \cdot \min\left(\frac{b_{i,j}}{s}, f_u\right), \; v_e\right) \qquad (6)

For cloud monitoring, the effective frame rate F_c is constrained by either the uplink bandwidth from the IoT devices to the cloud or the cloud's processing rate:

F_c = \min\left(\frac{b_{ci} \cdot n}{s}, \; v_c\right) \qquad (7)

We are also able to calculate the average detection delay D_e of one picture frame for edge monitoring. D_e can be divided into three parts: the uploading time from the IoT device to the edge server, the edge processing time, and the potential uploading time from the edge server to the cloud when the picture frame is detected with threats:

D_e = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} x_{i,j} \left(\frac{s}{b_{i,j}} + \frac{1}{v_e} + \frac{s}{b_{ej}} \cdot p_i\right)}{n} \qquad (8)

For comparison, the average detection delay for cloud monitoring, D_c, is calculated as:

D_c = \frac{\sum_{i=1}^{n} \left(\frac{s}{b_{ci}} + \frac{1}{v_c}\right)}{n} \qquad (9)

For better system performance, our two optimization objectives are increasing the effective frame rate and reducing the average detection delay. Since cloud monitoring's effective frame rate and average detection delay are fixed values, we transform the first objective into reducing the effective frame rate ratio between cloud and edge monitoring, and the second objective into reducing the average detection delay ratio between edge and cloud monitoring. To unify the two objectives, we set coefficients before the two ratios, and the final optimization objective O minimizes the sum of the weighted ratios. O can be defined as a function of the scheduling X:

O(X) = A \cdot \frac{F_c}{F_e} + B \cdot \frac{D_e}{D_c} \qquad (10)

where the coefficients A and B satisfy

A + B = 1, \quad A, B \ge 0 \qquad (11)

Thus our problem can be generalized as:

\arg\min_{X} O(X) \qquad (12)

subject to the constraints in equations (1), (2), (6) to (11).
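To make the formulation concrete, the following small Python sketch evaluates F_e, F_c, D_e, D_c and the objective O(X) for a given assignment. It is an illustrative reading of equations (6)–(10), not the authors' implementation; in particular, how per-device frame rates are throttled when a server saturates (constraint (4)) is not specified in the paper, so the proportional scaling below is an assumption, and all names are local to the sketch (bandwidths in MB/s so that s/b is in seconds).

    def device_frame_rates(X, b, s, fu, ve):
        """f_i under constraints (3) and (4).
        X[i]: index of the edge server device i connects to;
        b[i][j]: device-to-edge uplink bandwidth in MB/s."""
        n = len(X)
        f = [min(b[i][X[i]] / s, fu) for i in range(n)]   # constraint (3)
        for j in set(X):                                  # constraint (4)
            devs = [i for i in range(n) if X[i] == j]
            load = sum(f[i] for i in devs)
            if load > ve:                                 # server saturated:
                for i in devs:                            # scale its devices down
                    f[i] *= ve / load                     # (one simple policy)
        return f

    def objective(X, b, s, fu, ve, vc, bc, be, p, A=0.5, B=0.5):
        """O(X) of equation (10)."""
        n = len(X)
        Fe = sum(device_frame_rates(X, b, s, fu, ve))     # eqs. (5)/(6)
        Fc = min(bc * n / s, vc)                          # eq. (7)
        De = sum(s / b[i][X[i]] + 1.0 / ve + (s / be) * p[i]
                 for i in range(n)) / n                   # eq. (8)
        Dc = s / bc + 1.0 / vc                            # eq. (9)
        return A * Fc / Fe + B * De / Dc                  # eq. (10)

For instance, with s = 0.9 MB and f_u = 60 fps, a 20 MB/s (160 Mbps) edge link caps a device at min(20/0.9, 60) ≈ 22 fps, which is the kind of per-device ceiling that constraint (3) expresses.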
B. A Simulated Annealing Scheduling Solution

Our optimization goal is to solve an NP-hard scheduling problem. In this subsection, we give a heuristic scheduling algorithm based on the simulated annealing strategy.

Simulated annealing is a probabilistic strategy for approximating the optimum of a given function. Starting from an initial solution, simulated annealing continuously revises the solution to approach the optimum. Like greedy search, it accepts a move that yields a better value of the given function; at the same time, it also accepts moves that yield a worse value with a dynamic probability, since such moves may create space for future moves that find the global optimum. Each time a worse move is accepted, the probability of accepting the next worse move decreases.

The algorithm's pseudo code is shown as Algorithm 1. The algorithm can be divided into two parts. The first part uses greedy search to find an initial scheduling: we first sort the set of IoT devices C in non-increasing order of the threat detection probability p_i, and then connect each IoT device to the edge server with the largest uplink edge network bandwidth.
[Fig. 2: Detection delay (ms) vs. probability to be detected with threats. (a) Detection delay comparison between edge and cloud monitoring; (b) detection delay composition of edge monitoring (data transmission to the edge, data processing on the edge, data transmission to the cloud); (c) detection delay composition of cloud monitoring (data transmission to the cloud, data processing on the cloud).]

[Fig. 3: (a) Internet traffic (MB) and (b) frame rate (fps) of edge and cloud monitoring vs. probability to be detected with threats.]
The second part uses the simulated annealing strategy to optimize the current scheduling. The simulated annealing rescheduling process lasts for a fixed number of rounds, and in each round every IoT device gets the chance to reconnect to another edge server; for the reconnection, the IoT device may reconnect to any of the edge servers. If the new scheduling scheme achieves a better return, meaning the calculated value of O is smaller than for the previous scheduling, we consider it a better move and always accept it. Otherwise, we consider it a worse move, and the algorithm accepts it according to the current acceptance probability. Besides the parameters involved in the problem definition in Section IV, we also define rl as the number of rounds the simulated annealing rescheduling process lasts, and pc and pd as the two parameters that control the acceptance probability for worse moves.

TABLE II: The values of simulation parameters

s: 0.9 MB
fu: 60 fps
ve: 115 frames per second
vc: 1550 frames per second
bej: 17.76 Mbps
bci: 15.76 Mbps
bi,j: 10 - 320 Mbps, or no connection
pc: 0.5
pd: 0.05

V. SYSTEM EVALUATION

In this section, we first evaluate our edge monitoring system's performance through real-world experiments. Since the experiment resources are limited, we further design a simulation test to show our system's efficiency with multiple edge servers and multiple smart cameras.

A. Real-world experiment

The first part is the real-world experiment. We built a prototype following the proposed edge monitoring framework in Section III and deployed it in real-world experiments. The experiments use one smart camera, one edge server and one cloud platform with a powerful GPU cluster. The smart camera is a D-Link DCS-936L HD Wi-Fi camera. The edge server is a Dell server (OPTIPLEX 7010) equipped with an Intel Core i7-3770 3.4 GHz quad core CPU, 16 GB 1333 MHz DDR3 RAM, and an NVIDIA GeForce GTX 1080 Ti GPU. The cloud platform is the Google Cloud Platform, using an instance with 13 GB RAM and an NVIDIA Tesla K80 GPU. The deep learning application for threat detection uses single-shot detection (SSD) [15].
Algorithm 1 Simulated Annealing Rearranging Algorithm

Input: X; b_{i,j}, ∀i, j; simulated annealing parameters rl, pc, pd.
Output: the best scheduling scheme produced by the simulated annealing strategy, X_best.

1:  Set all x_{i,j} = 0, ∀1 ≤ i ≤ n, 1 ≤ j ≤ m
2:  Sort the IoT device set C = {c_i | 1 ≤ i ≤ n} in non-increasing order of p_i
3:  for i from 1 to n do
4:      Find the index j0 such that b_{i,j0} ≥ b_{i,j}, ∀1 ≤ j ≤ m
5:      x_{i,j0} ← 1
6:  end for
7:  X_best, X_current ← X
8:  obj_best ← O(X_best)
9:  for r from 1 to rl do
10:     Initialize the visited device set: visited ← {}
11:     for k from 1 to n do
12:         i ← a random number from 1 to n, not in visited
13:         visited ← visited ∪ {i}
14:         for j from 1 to m do
15:             Set x_{i,j} ← 1
16:             Calculate the objective difference δ ← O(X) − O(X_current)
17:             if δ < 0 then
18:                 Update X_best, obj_best
19:                 Accept this move by setting X_current ← X
20:             else if pc > a random number in (0, 1) then
21:                 Accept this move by setting X_current ← X
22:                 Adjust the parameter pc ← pd · pc
23:             end if
24:             Set x_{i,j} ← 0
25:         end for
26:     end for
27: end for
28: return X_best

[Fig. 4: Simulation evaluation (part 1). (a) Frame rate (fps) of edge and cloud monitoring vs. amount of devices.]
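For readers who prefer running code, here is a compact Python sketch of Algorithm 1. It assumes an objective evaluator like the one sketched in Section IV-A, passed in as a callable, and follows the text's reading of pc as the worse-move acceptance probability, decayed by pc ← pd · pc after each accepted worse move; it is an illustration, not the authors' implementation.

    import random

    def sa_schedule(objective, n, m, b, p, rl, pc=0.5, pd=0.05):
        """Greedy initialization plus simulated-annealing refinement.
        b[i][j]: device-to-edge uplink bandwidth (None if unreachable);
        p[i]: threat probability of device i; objective(X) is minimized."""
        # Part 1: greedy init -- devices with larger p_i pick their best server
        # first (assumes every device can reach at least one server)
        X = [0] * n
        for i in sorted(range(n), key=lambda i: -p[i]):
            feasible = [j for j in range(m) if b[i][j] is not None]
            X[i] = max(feasible, key=lambda j: b[i][j])
        best, best_val = X[:], objective(X)
        cur, cur_val = X[:], best_val
        # Part 2: rl annealing rounds; every device may try every server
        for _ in range(rl):
            for i in random.sample(range(n), n):      # devices in random order
                for j in range(m):
                    if b[i][j] is None:
                        continue
                    cand = cur[:]
                    cand[i] = j                       # tentative reconnection
                    val = objective(cand)
                    if val < cur_val:                 # better move: accept
                        cur, cur_val = cand, val
                        if val < best_val:
                            best, best_val = cand[:], val
                    elif random.random() < pc:        # worse move: accept w.p. pc
                        cur, cur_val = cand, val
                        pc = pd * pc                  # shrink acceptance prob.
        return best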
The dataset consists of pictures captured by the camera beside the high voltage power line. We prepared 4 sets of pictures; each set contains 2,000 pictures, of which 10%, 20%, 30% and 40%, respectively, will be detected with threats.

We first evaluate the detection latency for each picture frame; see Fig. 2. Fig. 2(a) presents the delay comparison between edge monitoring and cloud monitoring. The cloud monitoring delay, measured from the picture being sent by the IoT device to the cloud finishing the detection processing, is about 533 to 545 ms per picture frame. Our edge monitoring delay ranges from 113 to 255 ms; compared to cloud monitoring, the delay is largely reduced, by 53% - 79%. Though the edge monitoring traffic increases with the probability of being detected with threats, the probability that a threat is detected beside a high voltage power line in a real-world environment is extremely low, usually not beyond 10%. This demonstrates the excellent detection delay efficiency of edge monitoring.

We further investigate the composition of the delay. Fig. 2(b)(c) shows the composition of the edge monitoring delay and the cloud monitoring delay, respectively. We can see that the biggest difference between edge computing and cloud computing is the data transmission time between the IoT device and the processing server. In the cloud monitoring system, the pictures captured by the IoT device are sent directly to the cloud over the Internet, and this process costs more than 500 ms. In our edge monitoring system, the pictures are sent to the near-end edge server, which has a superior network condition and larger bandwidth, and the transmission takes only about 30 ms to finish. This is the most significant advantage of edge monitoring.
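As a back-of-the-envelope check on these two transmission times (a sketch assuming the 0.9 MB frame size from TABLE II, the 15.76 Mbps device-to-cloud uplink, and an illustrative 240 Mbps edge uplink within the table's 10 - 320 Mbps range):

    t_edge  ≈ s / b_{i,j} = (0.9 MB × 8) / 240 Mbps  = 30 ms
    t_cloud ≈ s / b_{ci}  = 7.2 Mb / 15.76 Mbps      ≈ 457 ms

which is consistent with the roughly 30 ms edge transmission measured above, and with the more-than-500 ms cloud transmission once Internet latency is added on top of the raw transfer time.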
Then we evaluate the traffic produced on the Internet. As shown in Fig. 3(a), the total traffic produced by cloud monitoring is 1.8 GB, while the traffic produced by edge monitoring is less than 750 MB. The edge traffic does increase with the probability of being detected with threats, since more and more processed pictures with detection information are transmitted to the cloud, but it remains far less than the cloud monitoring traffic. We also evaluate the effective frame rate; the result is shown in Fig. 3(b). It shows that the frame rate has no relation to the probability of being detected with threats, and that the frame rate of our edge monitoring is more than 10 times higher than that of cloud monitoring.

From the above real-world evaluation results, our edge monitoring framework significantly outperforms cloud monitoring in detection delay, traffic and frame rate. In the next part, we evaluate the system at scale through simulations.
[Fig. 5: Simulation evaluation (part 2), comparing the simulated annealing, greedy and random algorithms by objective value. (a) Delay (p_i = 5%); (b) delay (p_i = 20%); (c) delay (normally distributed probabilities).]

[Fig. 6: Simulation evaluation (part 3), comparing the simulated annealing, greedy and random algorithms by objective value. (a) Delay (power-law distributed probabilities) vs. number of IoT devices; (b) objective comparison (20 edge servers) vs. A; (c) objective comparison (100 edge servers) vs. A.]
…the base of the numerator (the frame rate of edge monitoring) is so large that even a little increase in cloud monitoring leads to a reduction of the calculated objective O. We also notice that our simulated annealing algorithm performs better than the other algorithms when the number of IoT devices is small, while the performance of the algorithms becomes similar when the number of IoT devices is large. This is caused by the bottleneck of the limited computation capacity of the edge servers: because the number of edge servers is fixed, there exists a maximum effective frame rate for edge computing. Once the number of IoT devices reaches a certain level, the only ways to increase the effective frame rate are to enlarge the computation resources on each edge server or to increase the number of accessible edge servers.

We then evaluate the detection delay. By setting A = 0, B = 1 in equation (10), we are able to calculate the average detection delay for each picture frame. Fig. 5(a)(b) present results evaluated under similar environments except for the probability of being detected with threats: the probability of each IoT device is set to 5% in Fig. 5(a) and 20% in Fig. 5(b). Similar to the effective frame rate evaluation above, our edge monitoring system finishes detection using only 12% - 25% of the time that cloud monitoring requires. Concerning the algorithms, our simulated annealing algorithm has the same performance as the greedy algorithm. This is because the greedy algorithm always looks for the maximum network bandwidth for the edge network connection, and this is the only variable in measuring the detection delay; the greedy algorithm therefore always achieves the optimal solution, and so does our algorithm, since it is an enhancement of the greedy strategy. With an increasing number of IoT devices, there is no significant change in the objective function value, which demonstrates the scalability of the edge monitoring system. Comparing the two graphs, we find that the objective function value increases as p_i increases. This is natural, since more and more pictures are uploaded to the cloud, which increases the average detection delay.

We also conduct another two experiments where the probabilities of being detected with threats for the IoT devices follow two specific distributions, i.e., the normal distribution and the power-law distribution. As shown in Fig. 5(c) and Fig. 6(a), in each of the graphs there is no evident trend in how the objective function value changes with an increasing number of IoT devices. This phenomenon shows that the probability is not the determining factor in edge monitoring, compared to the connectivity of the edge network.

We finally run two simulations focusing on the coefficients of our formulated problem; see Fig. 6(b)(c), where the x-axis is the value of A. We find that as the value of A increases, the objective function gets smaller values. This shows that our system is more efficient at increasing the effective frame rate than at reducing the detection delay.

VI. RELATED WORK

The smart grid was first proposed in 2005. It enables a new era for the electrical power grid system, constructed with more reliable and efficient services by combining information technologies. Key research on the smart grid investigates all kinds of IoT devices and sensors and builds the communication network between these devices. Yun et al. [16] propose an architecture of the smart grid and list several key IoT technologies that will be involved. Gungor et al. [17] investigate the challenges and opportunities of wireless sensor networks in the smart grid environment and conduct statistical characterization experiments under different electric-power-system environments. Sagiroglu et al. [18] present their vision on dealing with issues featuring big data and the smart grid.

Edge computing enables computation to be performed at the near-end and deals with the high-latency issue of cloud computing. Zhu et al. [19] investigate the advantages of mobile edge computing and propose a content optimization infrastructure. Habak et al. [8] propose FemtoClouds to enable multiple mobile devices to share computation resources and configure a coordinated edge-cloud collaborative computing service, leveraging mobile devices to provide cloud services at the edge. Li et al. [20] propose an adaptive mobile object recognition framework called DeepCham, which solves the issue of recognition accuracy degradation due to context variations across locations and time by collaboratively training a domain-aware adaptation model together with a domain-constrained deep model, with the introduction of an intermediate edge master. Chen et al. [9] focus on the distributed computation offloading problem and propose an offloading model for mobile edge computing. He et al. [21] propose heuristic strategies to protect the user's location privacy by mimicking the user's mobility.

VII. CONCLUSION

In this paper, we introduce edge computing into the smart grid monitoring system. Edge monitoring has a huge advantage in low latency and is able to largely improve the monitoring quality compared to cloud monitoring. We also formulate a scheduling problem to optimize the performance of our edge monitoring system. We further build a prototype of the edge monitoring system and conduct real-world experiments and simulations to prove its efficiency. Our edge monitoring framework has been accepted by the State Grid Corporation of China, and we are deploying over 1,000 devices and edge servers in the power system of Liaoning Province, China.

ACKNOWLEDGMENT

This research is supported by an Industrial Canada Technology Demonstration Program (TDP) grant, an NSERC Discovery Grant, and a MITACS grant.

REFERENCES

[1] S. E. Collier, "Ten steps to a smarter grid," in Rural Electric Power Conference (REPC '09). IEEE, 2009, pp. B2–B2.
[2] Y. He, G. J. Mendis, and J. Wei, "Real-time detection of false data injection attacks in smart grid: A deep learning-based intelligent mechanism," IEEE Transactions on Smart Grid, vol. 8, no. 5, pp. 2505–2516, 2017.
[3] M.-P. Hosseini, H. Soltanian-Zadeh, K. Elisevich, and D. Pompili, "Cloud-based deep learning of big EEG data for epileptic seizure prediction," in Signal and Information Processing (GlobalSIP), 2016 IEEE Global Conference on. IEEE, 2016, pp. 1151–1155.
[4] A. Ahmed and E. Ahmed, "A survey on mobile edge computing," in Proceedings of 10th International Conference on Intelligent Systems and Control (ISCO). IEEE, 2016.
[5] E. Mocanu, P. H. Nguyen, M. Gibescu, and W. L. Kling, "Deep learning for estimating building energy consumption," Sustainable Energy, Grids and Networks, vol. 6, pp. 91–99, 2016.
[6] H. Tan, Z. Han, X.-Y. Li, and F. C. Lau, "Online job dispatching and scheduling in edge-clouds," in INFOCOM 2017 – IEEE Conference on Computer Communications. IEEE, 2017, pp. 1–9.
[7] T. Li, C. S. Magurawalage, K. Wang, K. Xu, K. Yang, and H. Wang, "On efficient offloading control in cloud radio access network with mobile edge computing," in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 2017, pp. 2258–2263.
[8] K. Habak, M. Ammar, K. A. Harras, and E. Zegura, "Femto clouds: Leveraging mobile devices to provide cloud service at the edge," in Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on. IEEE, 2015, pp. 9–16.
[9] X. Chen, L. Jiao, W. Li, and X. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808, 2016.
[10] U. Drolia, K. Guo, J. Tan, R. Gandhi, and P. Narasimhan, "Cachier: Edge-caching for recognition applications," in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 2017, pp. 276–286.
[11] Y. Huang, X. Song, F. Ye, Y. Yang, and X. Li, "Fair caching algorithms for peer data sharing in pervasive edge computing environments," in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 2017, pp. 605–614.
[12] L. Wang, L. Jiao, J. Li, and M. Mühlhäuser, "Online resource allocation for arbitrary user mobility in distributed edge clouds," in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 2017, pp. 1281–1290.
[13] S. Teerapittayanon, B. McDanel, and H. Kung, "Distributed deep neural networks over the cloud, the edge and end devices," in Proceedings of 37th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2017, pp. 328–339.
[14] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge," in Proceedings of 21st International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, 2017, pp. 615–629.
[15] S. Leonardo, "Multibox single shot detector (SSD)," https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/single-shot-detectors/ssd.html.
[16] M. Yun and B. Yuxin, "Research on the architecture and key technology of internet of things (IoT) applied on smart grid," in Advances in Energy Engineering (ICAEE), 2010 International Conference on. IEEE, 2010, pp. 69–72.
[17] V. C. Gungor, B. Lu, and G. P. Hancke, "Opportunities and challenges of wireless sensor networks in smart grid," IEEE Transactions on Industrial Electronics, vol. 57, no. 10, pp. 3557–3564, 2010.
[18] S. Sagiroglu, R. Terzi, Y. Canbay, and I. Colak, "Big data issues in smart grid systems," in Renewable Energy Research and Applications (ICRERA), 2016 IEEE International Conference on. IEEE, 2016, pp. 1007–1012.
[19] J. Zhu, D. S. Chan, M. S. Prabhu, P. Natarajan, H. Hu, and F. Bonomi, "Improving web sites performance using edge servers in fog computing architecture," in Proceedings of 7th International Symposium on Service Oriented System Engineering (SOSE). IEEE, 2013, pp. 320–323.
[20] D. Li, T. Salonidis, N. V. Desai, and M. C. Chuah, "DeepCham: Collaborative edge-mediated adaptive deep learning for mobile object recognition," in Edge Computing (SEC), IEEE/ACM Symposium on. IEEE, 2016, pp. 64–76.
[21] T. He, E. N. Ciftcioglu, S. Wang, and K. S. Chan, "Location privacy in mobile edge clouds: A chaff-based approach," IEEE Journal on Selected Areas in Communications, 2017.