
CONTRIBUTED PAPER

The Cybersecurity Landscape in Industrial Control Systems
This paper surveys the state of the art in industrial control system (ICS) security,
identifies outstanding research challenges in this emerging area, and explains the key
concepts and principles for deployment of cybersecurity methods and tools to ICSs.
By Stephen McLaughlin, Charalambos Konstantinou, Xueyang Wang, Lucas Davi,
Ahmad-Reza Sadeghi, Michail Maniatakos, and Ramesh Karri

ABSTRACT | Industrial control systems (ICSs) are transitioning from legacy electromechanical-based systems to modern information and communication technology (ICT)-based systems, creating a close coupling between cyber and physical components. In this paper, we explore the ICS cybersecurity landscape, including: 1) the key principles and unique aspects of ICS operation; 2) a brief history of cyberattacks on ICSs; 3) an overview of ICS security assessment; 4) a survey of “uniquely-ICS” testbeds that capture the interactions between the various layers of an ICS; and 5) current trends in ICS attacks and defenses.

KEYWORDS | Computer security; industrial control; networked control systems; power system security; SCADA systems; security

I. INTRODUCTION

Modern industrial control systems (ICSs) use information and communication technologies (ICTs) to control and automate the stable operation of industrial processes [1], [2]. ICSs interconnect, monitor, and control processes in a variety of industries such as electric power generation, transmission and distribution, chemical production, oil and gas refining, and water desalination. The security of ICSs is receiving attention due to their increasing connections to the Internet [3]. ICS security vulnerabilities can be attributed to several factors: the use of microprocessor-based controllers, the adoption of communication standards and protocols, and complex distributed network architectures. The security of ICSs has come under particular scrutiny owing to attacks on critical infrastructures [4], [5].

Traditional IT security solutions fail to address the coupling between the cyber and physical components of an ICS [6]. According to NIST [1], ICSs differ from traditional IT systems in the following ways. 1) The primary goal of ICSs is to maintain the integrity of the industrial process. 2) ICS processes are continuous and hence need to be highly available; unexpected outages for repair must be planned and scheduled. 3) In an ICS, interactions with physical processes are central and oftentimes complex. 4) ICSs target specific industrial processes and may not have resources for additional capabilities such as security. 5) In ICSs, timely response to human reaction and physical sensors is critical. 6) ICSs use proprietary communication protocols to control field devices. 7) ICS components are replaced infrequently (15–20 years or longer). 8) ICS components are distributed and isolated, and hence difficult to physically access for repair and upgrade.

Attacks on ICSs are happening at an alarming pace, and the cost of these attacks is substantial for both governments and industries [7]. Cyberattacks against oil and gas infrastructure are estimated to cost companies $1.87 billion by 2018 [8]. Until 2001, most attacks originated internal to a company. Recently, attacks

Manuscript received August 31, 2015; revised November 19, 2015; accepted December 19, 2015. Date of publication March 16, 2016; date of current version April 19, 2016. This work was supported in part by the German Science Foundation as part of Project S2 within the CRC 1119 CROSSING; by the European Union’s Seventh Framework Programme under Grant 609611, PRACTICE project; and by the Intel Collaborative Research Institute for Secure Computing (ICRI-SC). The NYU researchers were also supported in part by Consolidated Edison, Inc., under Award 4265141; by the U.S. Office of Naval Research under Award N00014-15-1-2182; and by the NYU Center for Cyber Security (New York and Abu Dhabi).
S. McLaughlin is with KNOX Security, Samsung Research America, Mountain View, CA 94043 USA (e-mail: [email protected]).
C. Konstantinou, X. Wang, and R. Karri are with the Polytechnic School of Engineering, New York University, Brooklyn, NY 11201 USA (e-mail: [email protected]; [email protected]).
L. Davi and A.-R. Sadeghi are with Technische Universität Darmstadt, Darmstadt 64289, Germany (e-mail: [email protected]; [email protected]).
M. Maniatakos is with the Electrical and Computer Engineering Department, New York University Abu Dhabi, Abu Dhabi, UAE (e-mail: [email protected]).
Digital Object Identifier: 10.1109/JPROC.2015.2512235
0018-9219 © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Vol. 104, No. 5, May 2016 | Proceedings of the IEEE 1039
Authorized licensed use limited to: University of Tartu Estonia. Downloaded on September 30,2020 at 13:11:51 UTC from IEEE Xplore. Restrictions apply.

external to a company are becoming frequent. This is due to the use of commercial off-the-shelf (COTS) devices, open applications and operating systems, and the increasing connection of the ICS to the Internet.

In an effort to keep up with the cyberattacks, cybersecurity researchers are investigating the attack surface and defenses for critical infrastructure domains such as the smart grid [9], oil and gas [10], and water SCADA [11]. This survey will focus on the general ICS cybersecurity landscape by discussing attacks and defenses at various levels of abstraction in an ICS, from the hardware to the process.

Fig. 1. General structure of an ICS. The industrial process data collected at remote sites are sent by field devices, such as remote terminal units (RTUs), intelligent electronic devices (IEDs), and programmable logic controllers (PLCs), to the control center through wired and wireless links. The control server allows clients to access data using standard protocols. The human–machine interface (HMI) presents processed data to a human operator by querying the time-stamped data accumulated in the data historian. The gathered data are analyzed, and control commands are sent to remote controllers.

A. Industrial Control Systems

The general architecture of an ICS is shown in Fig. 1. The main components of an ICS include the following.
· Programmable logic controller (PLC): A PLC is a digital computer used to automate industrial electromechanical processes. PLCs control the state of output devices based on the signals received from the sensors and the stored programs. PLCs operate in harsh environmental conditions, such as excessive vibration and high noise [12]. PLCs control standalone equipment and discrete manufacturing processes.
· Distributed control system (DCS): A DCS is an automated control system in which the control elements are distributed throughout the system [13]. The distributed controllers are networked to remotely monitor processes. The DCS can remain operational even if a part of the control system fails. DCSs are often found in continuous and batch production processes which require advanced control and communication with intelligent field devices.
· Supervisory control and data acquisition (SCADA): SCADA is a computer system used to monitor and control industrial processes. SCADA monitors and controls field sites spread out over a geographically large area. SCADA systems gather data in real time from remote locations. Supervisory decisions are then made to adjust controls.

B. History of ICS Attacks

In an ICS, stable operation could be disrupted not only by an operator error or a failure at a production unit, but also by a software error/bug, malware, or an intentional cybercriminal attack [14]. In 2014 alone, the ICS Cyber Emergency Response Team (ICS-CERT) responded to 245 incidents. Numerous cyberattacks on ICSs are summarized in Fig. 2. We elaborate on four ICS attacks that caused physical damage.

In 2007, Idaho National Laboratory staged the Aurora attack in order to demonstrate how a cyberattack could destroy physical components of the electric grid [15]. The attacker gained access to the control network of a diesel generator. Then a malicious computer program was run to rapidly open and close the circuit breakers of the generator, out of phase from the rest of the grid, resulting in an explosion of the diesel generator. Since most of the grid equipment uses legacy communications protocols that did not consider security, this vulnerability is especially a concern [16].

In 2008, a pipeline in Turkey was hit by a powerful explosion, spilling over 30,000 barrels of oil in an area above a water aquifer. Further, it cost British Petroleum $5 million a day in transit tariffs. The attackers entered the system by exploiting vulnerabilities in the wireless camera communication software, and then moved deep into the internal network. The attackers tampered with the units used to alert the control room about malfunctions and leaks, and compromised PLCs at valve stations to increase pressure in the pipeline, causing the explosion.

In 2010, the Stuxnet computer worm infected PLCs in 14 industrial sites in Iran, including a uranium enrichment plant [4], [17]. It was introduced to the target system via an infected USB flash drive. Stuxnet then stealthily propagated through the network by infecting removable drives, copying itself to network shared resources, and exploiting unpatched vulnerabilities.


Fig. 2. Timeline of cyberattacks on ICS and their physical impacts.

The infected computers were instructed to connect to an external command and control server. The central server then reprogrammed the PLCs to modify the operation of the centrifuges, causing the compromised PLCs to tear the centrifuges apart [18].

In 2015, two hackers demonstrated remote control of a vehicle [19]. The zero-day exploit gave the hackers wireless control of the vehicle. Software vulnerabilities in the vehicle entertainment system allowed the hackers to remotely control it, including dashboard functions, steering, brakes, and transmission, enabling malicious actions such as controlling the air conditioner and audio, disabling the engine and the brakes, and commandeering the wheel [20]. This is a harbinger of attacks in an automated manufacturing environment where intelligent robots cohabitate and coordinate with humans.

C. Roadmap of This Paper

Cybersecurity assessment can reveal the obvious and nonobvious physical implications of ICS vulnerabilities on the target industrial processes. Cybersecurity assessment of ICSs for physical processes requires capturing the different layers of an ICS architecture. The challenges of creating a vulnerability assessment methodology are discussed in Section II. Cybersecurity assessment of an ICS requires the use of a testbed. The ICS testbed should help identify cybersecurity vulnerabilities as well as the ability of the ICS to withstand various types of attacks that exploit these vulnerabilities. In addition, the testbed should ensure that critical areas of the ICS are given adequate attention. This way, one can lessen the costs of fixing cybersecurity vulnerabilities emerging from flaws in the design of ICS components and the ICS network. ICS testbeds are discussed in Section II. Discussion on how one can construct attack vectors appears in Section III. Attacks on ICSs can have devastating physical consequences. Therefore, ICSs need to be designed for security robustness and tested prior to deployment. Control protocols should be fitted with security features and policies. ICSs should be reinforced by isolating critical operations and by removing unnecessary services and applications from ICS components. Extensive discussion on vulnerability mitigation appears in Section IV, followed by final remarks in Section V.

II. ICS VULNERABILITY ASSESSMENT

In this section, we review the different layers in an ICS and the vulnerability assessment process, outline the cybersecurity assessment strategy, and discuss ICS testbeds for accurate vulnerability analyses in a lab environment.

A. The ICS Architecture and Vulnerabilities

The different layers of the ICS architecture are shown in Fig. 3.

1) Hardware Layer: Embedded components such as PLCs and RTUs are hardware modules executing software. Hardware attacks such as fault injection and backdoors can be introduced into these modules. These vulnerabilities in the hardware can be exploited by adversaries to gain access to stored information or to deny services. The hardware-level vulnerabilities concern the entire lifecycle of an ICS, from design to disposal. Security in the processor supply chain is a major issue, since hardware trojans can be injected at any stage of the supply chain, introducing potential risks such as loss of reliability and security [21], [22]. Unauthorized users can use JTAG ports—used for in-circuit test—to steal intellectual property, modify firmware, and reverse engineer logic [23]–[25]. Peripherals introduce vulnerabilities. For example, malicious USB drives can redirect communications by changing DNS settings or destroy the circuit board [26], [27]. Expansion cards, memory units,


Fig. 3. Layered ICS architecture and the vulnerable components in the ICS stack.

and communication ports pose a security threat as well [28]–[30].

2) Firmware Layer: The firmware resides between the hardware and software. It includes data and instructions able to control the hardware. The functionality of firmware ranges from booting the hardware and providing runtime services to loading an operating system (OS). Due to the real-time constraints related to the operation of ICSs, firmware-driven systems typically adopt a real-time operating system (RTOS) such as VxWorks. In any case, vulnerabilities within the firmware could be exploited by adversaries to abnormally affect the ICS process. A recent study exploited vulnerabilities in a wireless access point and in recloser controller firmware [31]. Malicious firmware can be distributed from a central system in an advanced metering infrastructure (AMI) to smart meters [32]. Clearly, vulnerabilities in firmware can be used to launch DoS attacks that disrupt the ICS operation.
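Since malicious firmware images are a common distribution vector, a first-line mitigation is to check the integrity of an image before it is flashed. The sketch below is a minimal, hypothetical illustration (the helper name `verify_firmware` and the placeholder image bytes are ours, not from the paper); a production updater would verify a vendor's cryptographic signature rather than a bare digest.

```python
import hashlib

def verify_firmware(image: bytes, expected_sha256: str) -> bool:
    # Compare the image digest against the digest published by the vendor.
    # A mismatch indicates corruption or tampering; the update is rejected.
    return hashlib.sha256(image).hexdigest() == expected_sha256

fw_image = b"\x7fELF" + b"recloser-fw"            # placeholder firmware bytes
published = hashlib.sha256(fw_image).hexdigest()  # digest published out of band

print(verify_firmware(fw_image, published))            # True: image accepted
print(verify_firmware(fw_image + b"\x00", published))  # False: tampered image rejected
```

A bare hash only detects accidental or in-transit modification; if the attacker controls the distribution server, as in the AMI scenario above, only a signature rooted in a key the device trusts prevents malicious updates.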

3) Software Layer: ICSs employ a variety of software platforms and applications, and vulnerabilities in the software base may range from simple coding errors to poor implementation of access control mechanisms. According to ICS-CERT, the highest percentage of vulnerabilities in ICS products is improper input validation by ICS software, also known as the buffer overflow vulnerability [33]. Poor management of credentials and authentication weaknesses are second and third, respectively. These vulnerabilities in the implementation of software interfaces (e.g., HMI) and server configurations may have fatal consequences on the control functionality of an ICS. For instance, a proprietary industrial automation software for historian servers had a heap buffer overflow vulnerability that could potentially lead to a Stuxnet-type attack [34].

Sophisticated malware often incorporates both hardware and software. WebGL vulnerabilities are an example of hardware-enabled software attacks: access to graphics processing unit (GPU) hardware by a least-privileged remote party results in the exposure of GPU memory contents from previous workloads [35]. The implementation of the software layer in a HIL testbed should reflect how each component added to the ICS increases the attack surface.

4) Network Layer: Vulnerabilities can be introduced into the ICS network in different ways [1]: a) firewalls (that protect devices on a network by monitoring and controlling communication packets using filtering policies); b) modems (that convert between serial digital data and a signal suitable for transmission over a telephone line to allow devices to communicate); c) the fieldbus network (that links sensors and other devices to a PLC or other controller); d) communications systems and routers (that transfer messages between two networks); e) remote access points (that remotely configure ICSs and access process data); and f) protocols and the control network (that connect the supervisory control level to lower level control modules). DCS and SCADA servers, communicating with lower level control devices, often are not configured properly and not patched systematically, and hence are vulnerable to emerging threats [36].

When designing a network architecture for an ICS, one should separate the ICS network from the corporate network. In case the networks must be connected, only minimal connections should be allowed, and the connection must be through a firewall and a DMZ.

5) Process Layer: All the aforementioned ICS layers interact to implement the target ICS processes. The observed dynamic behavior of the ICS processes must follow the dynamic process characteristics based on the designed ICS model [37]. ICS process-centric attacks may inject spurious/incorrect information (through specially crafted messages) to degrade performance or to hamper the efficiency of the controlled process [33]. Process-centric attacks may also disturb the process state (e.g., crash or halt) by modifying runtime process variables or the control logic. These attacks can deny service or change the industrial process without operator knowledge. Therefore, it is imperative to determine if variations in the system process are nominal consequences of an expected operation or signal an anomaly/attack. Process-centric/process-aware vulnerability analysis can contribute to practices that enable ICS processes to function in a secure manner. The vulnerabilities related to the information flow (e.g., dependencies on hardware/software/network equipment with a single point of failure) must be determined. The HIL testbed should properly emulate the target process in order to effectively assess and mitigate process-centric attacks [38].

Fig. 4. Security assessment of ICS.

B. ICS Vulnerability Assessment

Fig. 4 presents the steps in the security assessment process, whose aim is to identify security weaknesses and potential risks in ICSs. Due to the real-world consequences of ICS failures, the security assessment of ICSs must account for all possible operating conditions of each ICS component. Additionally, since ICS equipment can be more fragile than standard IT systems, the security assessment should take into consideration the sensitive ICS dependencies and connectivity [39].

1) Document Analysis: The first step in assessing any ICS is to characterize the different parts of its architecture. This includes gathering and analyzing information in order to understand the behavior of each ICS component. For example, analyzing the features of IEDs used in power systems, such as a relay controller, entails collecting information about its communication,


functionality, default configuration passwords, and supported protocols [40].

2) Mission and Asset Prioritization: Prioritizing the missions and assets of the ICS is the next step in security assessment. Resources must be allocated based on the purpose and sensitivity of each function. Demilitarized zones (DMZs), for instance, can be used to add a layer of security to the ICS network by isolating the ICS and corporate networks [41]. Selecting DMZs is an important task in this phase.

3) Vulnerability Extrapolation: Next, the ICS should be examined for security vulnerabilities, to identify sources of vulnerability, and to establish attack vectors [42]. Design weaknesses and security vulnerabilities in critical authentication, application, and communication security components should be investigated. The attack vectors should comprehensively explain the targeted components and the attack technique.

4) Assessment Environment: Depending on the type of industry and level of abstraction, assessment actions must be defined [37]. For example, in the case when only software is used, the test vectors should address as many physical and cyber characteristics of the ICS as possible. By modeling and simulating individual ICS modules, the behavior of the system is emulated with regard to how the ICS and its internal functions react.

Due to the complexity and real-time requirements of ICSs, hardware-in-the-loop (HIL) simulation is more efficient for testing system resiliency against the developed attack vectors [43]. HIL simulation adds the ICS complexity to the assessment platform by adding the control system in a loop, as shown in Fig. 5(b). To capture the system dynamics, the physical process is replaced with a simulated plant, including sensors, actuators, and machinery. A well-designed HIL simulator will mimic the actual process behavior as closely as possible. A detailed discussion of developing an assessment environment appears in Section II-C.

Fig. 5. (a) Real ICS environment versus (b) HIL simulation of ICSs.

5) Testing and Impact: The ICS will be tested on the testbed to demonstrate the outcomes of the attacks, including the potential effect on the physical components of the ICS [44]. In addition, the system-level response and the consequences to the overall network can be observed. The results can be used to assess the impact of a cyberattack on the ICS.

6) Vulnerability Remediation: Any weaknesses discovered in the previous steps should be carefully mitigated. This may involve working with vendors [45] and updating network policies [46]. If there is no practical mitigation strategy to address a vulnerability, guidelines should be developed to allow sufficient time to effectively resolve the issue.

7) Validation Testing: The mitigation actions designed to resolve security issues must then be tested. A critical part of this step is to reexamine the ICS and identify any remaining weaknesses.

8) Monitoring: Implementing all the previous steps is half the battle. Continuous monitoring and reassessment of the ICS to maintain security is important [47]. Intrusion detection systems (IDSs) can assist in continuously monitoring network traffic and discovering potential threats and vulnerabilities.

C. ICS Testbeds

The assessment environment, i.e., the testbed, affects all the stages of the assessment methodology. Assessment methodologies that involve the production environment, or that test only individual components of the ICS, are not suitable. Although these methodologies are effective for IT systems, the uniquely-ICS nature of using “data to manipulate physics” makes these approaches inherently hazardous. Therefore, we focus on lab-based ICS testbeds. A HIL testbed offers numerous benefits by balancing accuracy and feasibility. In addition, HIL testbeds can be used to train employees and ensure interoperability of the diverse components used in the ICS.

The cyber–physical nature of ICSs presents several challenges in the design and operation of an ICS testbed. The testbed must be able to model the complex behavior of the ICS for both operational and nonoperational conditions. It should address scaling, since the testbed is a scaled-down model of the actual physical ICS. Furthermore, the testbed must accurately represent the ICS in order to support the protocols and standards as well as to generate accurate data. It is also important for the testbed to capture the interaction between legacy and


modern ICS. This interaction is important for both security assessment and compatibility testing of the ICS. Numerous other factors should be considered when designing an ICS testbed, including flexibility, interface with IT systems, configuration settings, and testing for extreme conditions.

Assessment of an ICS using software-only testbeds and techniques is not frequently adopted. Software models and simulations cannot recreate real-world conditions, since they include only one layer of the complex ICS architecture. Furthermore, software models cannot include every possible cyber–physical system state of the ICS [48]. Software-only testbeds are also limited by the supported hardware. Moreover, the limitations of the computational features supported by the software simulator might introduce delays, simplifying assumptions, and simple heuristics in the simulator engine (e.g., theoretical implementation of network protocols). Finally, in most cases, a software-only testbed gives users a false sense of security regarding the accuracy of the simulation results. On the other hand, software-only assessment is advantageous in that one can study the behavior of a system without building it. Scilab and Scicos are two open-source software platforms for the design, simulation, and realization of ICSs [49], [50].

It is clear that an ICS testbed requires real hardware in the simulation loop. Such HIL simulation symbiotically relates cyber and physical components [51]. A HIL testbed can simulate real-world interfaces, including interoperable simulations of control infrastructures, distributed computing applications, and communication network protocols.

1) Security Objectives of HIL Testbeds: The primary objective of HIL testbeds is to guide the implementation of cybersecurity within ICSs. In addition, HIL testbeds are essential to determine and resolve security vulnerabilities. The individual components of an appropriate testbed should capture all the ICS layers and the interactions shown in Fig. 3.

Equipment and network vulnerabilities can be tested in a protected environment that can facilitate multiple types of ICS scenarios highlighting the several layers of the ICS architecture. For instance, the cybersecurity testbed developed by NIST covers several ICS application scenarios [52]. The Tennessee Eastman scenario covers continuous process control in a chemical plant. The robotic assembly scenario covers discrete dynamic processes with embedded control. The enclave scenario covers wide-area industrial networks in an ICS, such as SCADA.

2) Benefits of a HIL Assessment Methodology: The HIL assessment methodology has the following advantages:
· flexibility: HIL systems provide reconfigurable architectures for testing several ICS application scenarios (incorporating legacy and modern equipment);
· simulation: ICS phenomena are simulated faster than complex physical ICS events;
· accuracy: HIL simulators provide results comparable in terms of accuracy with the live ICS environment;
· repeatability: the controlled settings in the testbed increase repeatability;
· cost effectiveness: combining real hardware with HIL simulation software reduces the implementation costs of the testbed;
· safety: HIL simulation avoids the hazards present when testing in a live ICS setting;
· comprehensiveness: it is often possible to assess ICS scenarios over a wider range of operating conditions;
· modularity: HIL testbeds facilitate linkages with other interfaces and testbeds, integrating multiple types of control components;
· network integration: protocols and standards can be evaluated, creating an accurate map of networked units and their communication links;
· nondestructive test: destructive events can be evaluated (e.g., the Aurora generator test [53]) without causing damage to the real system;
· hardware security: a HIL testbed allows one to study the hardware security of an ICS, which has become a major concern over the past decade (e.g., side-channel and firmware attacks [44]).

3) Example ICS Testbeds: Over 35 smart grid testbeds have been developed in the United States [54]. The ENEL SPA testbed analyzes attack scenarios and their impact on power plants [55]. It includes a scaled-down physical process, corporate and control networks, DMZs, PLCs, industrial standard software, etc. The Idaho National Laboratory (INL) SCADA Testbed is a large-scale testbed dedicated to ICS cybersecurity assessment, standards improvements, and training [56]. The PowerCyber testbed integrates communication protocols, industry control software, and field devices combined with virtualization platforms, real-time digital simulators (RTDSs), and ISEAGE WAN emulation in order to provide an accurate representation of cyber–physical grid interdependencies [57]. Digital Bond’s Project Basecamp demonstrates the fragility and insecurity of SCADA and DCS field devices, such as PLCs and RTUs [58]. New York University (NYU) has developed a smart grid testbed to model the operation of circuit breakers and demonstrate firmware modification attacks on relay controllers [44]. Many hybrid laboratory-scale ICS testbeds exist in research centers and universities [54]. Besides laboratory-scale ICS testbeds with real equipment, many virtual testbeds are also being developed, able to “create”


ICS components, including virtual devices and process simulators [59].

Summarizing, given that many ICS attacks exploit vulnerabilities in one or more layers of an ICS, HIL ICS testbeds are becoming standard for security assessment, allowing development and testing of advanced security methods. Additionally, HIL ICS testbeds have been quantitatively shown to produce results close to real-world systems.

III. ATTACKS ON ICSs

An important part of the assessment process is the identification of vulnerabilities in the ICS under test. In this section, we present the current and emerging threat landscapes for ICSs.

A. Current ICS Threat Landscape

ICSs are vulnerable to traditional computer viruses [60]–[62], remote break-ins [63], insider attacks [64], and targeted attacks [65]. Industries affected by ICS attacks include nuclear power and refinement of fissile material [62], [65], transportation [63], [66], electric power delivery [67], manufacturing [60], building automation [64], and space exploration [61].

One class of attacks against ICS involves compromising one or more of its components using traditional attacks, e.g., memory exploits, to gain control of the system's behavior or to access sensitive data related to the process. We consider three classes of studies on ICS vulnerabilities. The first considers the security and security readiness of ICS systems and their operators. The second considers security vulnerabilities in PLCs. The third considers vulnerabilities in sensors, in this case focusing on smart electric meters, an important component of the smart grid infrastructure.

1) ICS Security Posture: There have been studies of the ICS security posture [68], and the conclusion is that there is substantial room for improvement. First, it was found that ICSs frequently rely on security through obscurity, owing to their history as proprietary systems isolated from the Internet. However, the use of commodity OSs (e.g., Microsoft Windows) and open, standard network protocols has left ICSs open not only to malicious attacks, but also to coincidental infiltration by Internet malware. For example, the Slammer worm infected machines belonging to an Ohio nuclear power generation facility. These studies also showed a significant rise in ICS cybersecurity incidents; while only one incident was reported in 2000, ten incidents were reported in 2003. Penetration tests of over 100 real-world ICSs over the course of ten years, with emphasis on power control systems, corroborate these findings [69]. Besides identifying vulnerabilities throughout the ICS, the study shows that in most cases, these ICSs are at least a year behind the standard patch cycle. In some cases, the DMZ separating the ICS from the corporate network had not been updated in years, leaving DoS attacks trivial. For example, in their evaluation of a network-connected PLC, it was found that ping flooding with 6-kB packets was sufficient to render the PLC inoperable, causing all state to be lost and forcing it to be power cycled.

Another hurdle in improving ICS security is a set of three commonly held myths [70]: 1) that security can be achieved through obscurity; 2) that blindly deploying security technologies improves security (the naive application of firewalls, cryptography, and antivirus software often leaves system operators with a false sense of security); and 3) that standards compliance yields a secure system (the North American Electric Reliability Corporation's Critical Infrastructure Protection standards [71] have been criticized for giving a false sense of security [72]).

2) Attacks on PLCs: PLCs monitor and manipulate the state of a physical system. A popular Siemens PLC was shown to have vulnerabilities. The ISO-TSAP protocol used by these PLCs admits a replay attack due to lack of proper session freshness [73]. It was also possible to bypass the PLC authentication, sufficient to upload payloads as described in Section III-B, and to execute arbitrary commands on the PLC. The Siemens PLCs used in correctional facilities have vulnerabilities allowing manipulation of cell doors [74].

3) Attacks on Sensors: Another critical element in an ICS is the sensors that gather data and relay it back to the control units. Consider smart meters, a widely deployed element of the evolving smart electric grid [75]. A smart meter has the same form factor as a traditional analog electric meter, with a number of enhanced features: time-of-use pricing [76], automated meter reading, power quality monitoring, and remote power disconnect. A security assessment of a real-world smart metering system considered energy theft via tampered measurement values [77]. The system under test allowed for undetectable tampering of measurement values, both in the meter's persistent storage and in flight, due to a replay attack against the meter-to-utility authentication scheme. A follow-up study examined meters from multiple vendors and found vulnerabilities allowing for a single-packet denial-of-service attack against an arbitrary meter, and full control of the remote disconnect switch, enabling a targeted disconnect of the service to a customer [78].
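Both replay weaknesses above (the ISO-TSAP sessions and the meter-to-utility scheme) come down to missing freshness: a captured valid message remains valid later. The sketch below is our own illustration, not the wire format of either protocol; the key, command, and API names are invented. It shows how binding a MAC to a per-session challenge nonce makes a recorded message worthless.

```python
import hmac, hashlib, os

KEY = b"shared-device-key"  # hypothetical pre-shared key

def tag(nonce: bytes, command: bytes) -> bytes:
    # The MAC binds the command to this session's challenge nonce.
    return hmac.new(KEY, nonce + command, hashlib.sha256).digest()

class PLC:
    def challenge(self) -> bytes:
        self.nonce = os.urandom(16)  # fresh value for every session
        return self.nonce

    def accept(self, command: bytes, t: bytes) -> bool:
        return hmac.compare_digest(t, tag(self.nonce, command))

plc = PLC()

# Legitimate session: the operator answers the current challenge.
n1 = plc.challenge()
cmd = b"OPEN_VALVE_7"
t1 = tag(n1, cmd)
print(plc.accept(cmd, t1))   # True

# Replay: the attacker resends the recorded pair in a later session.
plc.challenge()
print(plc.accept(cmd, t1))   # False: the tag is bound to the old nonce
```

With no nonce in the authentication (or none at all), the second check would succeed, which is exactly the replay condition described above.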


B. Emerging Threats

Here we introduce two new directions for attacks on ICSs. The first constructs payloads targeting an ICS to which an adversary may not have full access. The second class of attacks manipulates sensor inputs to misguide the decisions made by the PLCs.

1) Payload Construction: One type of attack aims to gather intelligence about the victim ICS. For example, the Duqu worm seems to have gathered information about victim systems [79] before relaying it to command and control servers. The other type of attack against ICS aims to influence the physical behavior of the victim system. The best known example of such an attack is the Stuxnet worm, which manipulated the parameters of a set of centrifuges used for uranium enrichment. Such an attack has two stages: the compromise and the payload. Traditionally, once an adversary has compromised an information system, delivering a preconstructed payload is straightforward, because the attacker usually has a copy of the software being attacked. However, for ICSs, this is not necessarily the case. Depending on the type of attack the adversary mounts, construction of the payload may be either error prone or nearly impossible. A payload is either indiscriminate or targeted.

An indiscriminate payload performs random attacks causing malicious actions within the machinery of a victim ICS. There are several ways malware can automatically construct indiscriminate payloads upon gaining access to one or more of the victim ICS PLCs [80]. The assumption here is that if the malware is able to write to the PLC code area, then it must also be able to read from the PLC's code area.1 Given the ability to read the PLC code, several methods may be used to construct indiscriminate payloads.

1) The malware infers basic safety properties known as interlocks [82] and generates a payload which sequentially violates as many safety properties as possible.
2) The malware identifies the main timing loop in the system. Consider the example of a traffic light, where the main loop ensures that each color of light is active in sequence for a specific period of time. The malware can then construct a payload that violates the timing loop, e.g., by allowing certain lights to overlap.
3) In the bus enumeration technique, the malware uses standardized identifiers such as Profibus IDs to find specific devices within a victim system [83].

While these indiscriminate payload construction methods are generic, they have a number of shortcomings. First, in the case where the payload is unaware of the actual devices in the victim ICS, one cannot guarantee that the resulting payload will cause damage (or achieve any other objective). Second, they cannot guarantee that the resulting payload will be stealthy. Thus, the malicious behavior may be discovered before it becomes problematic. Finally, there is no guarantee that a payload can be constructed at all. If the malware is unable to infer safety properties, timing loops, or the types of physical machinery present, then it is not possible to construct a payload that exploits them.

A targeted payload, on the other hand, attempts to achieve a specific goal within the physical system, such as causing a specific device to operate beyond its safe limits. The simple case is a targeted payload where the adversary is able to arbitrarily inspect the system under attack, i.e., he has a copy of the exploited software. For autonomous, malware-driven attacks against ICS, this is not the case. Embedded controllers used in ICS may be air-gapped, meaning that once malware infects them, it may no longer be able to contact its command and control servers. Additionally, possessing the control logic for a given PLC may not be sufficient to analyze the system manually, as the assembly-language-like control logic does not reveal which physical devices are controlled by which program variables. Malware can nonetheless construct such a targeted payload against a compromised ICS [84]. This work assumes that the adversary launching the malware has imperfect knowledge of the physical machinery in the ICS, and is mostly aware of their interactions. However, the adversary lacks two key pieces of information: 1) the complete and precise behavior of the ICS; and 2) the mapping between the memory addresses of the victim PLC and the physical devices in the ICS. This mapping is important, as often the variable names in PLC code reveal nothing about the devices they control.

Assuming that the attacker can encode his limited knowledge of the victim plant into a temporal logic, a program analysis tool called SABOT can analyze the PLC code and map the behaviors of the memory addresses in the code to those in the adversary's temporal logic description of the system. The results show that by carefully constructing the temporal logic description of the system, the adversary can provide the malware with enough information to construct a targeted payload against most ICS devices.

These advances in payload generation defeat one of the main forms of security through obscurity: the inaccessibility and low-level nature of PLC code. The ability to generate a payload for a system without ever seeing its code represents a substantial lowering of the bar for ICS attackers, and thus should be a factor in any assessment methodology.

1 This assumption was confirmed in a study of PLC security measures placed as an ancillary section in an evaluation of a novel security mechanism [81]. The conclusion was that PLC access control policies are all or nothing, meaning that write access implies read access.
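The interlock-violation method in step 1) above can be made concrete with a toy model; this is our illustration, not the algorithm of [80], and the output names and interlock rules are invented. Interlocks recovered from the control logic become Boolean predicates over the PLC outputs, and an indiscriminate payload simply enumerates output states that falsify them.

```python
from itertools import product

OUTPUTS = ["pump_on", "valve_open", "heater_on"]  # hypothetical PLC outputs

# Interlocks the malware would infer from the logic, as predicates that
# must hold in every safe output state (both rules are invented):
INTERLOCKS = [
    lambda s: not (s["pump_on"] and not s["valve_open"]),  # no pumping into a closed valve
    lambda s: not (s["heater_on"] and not s["pump_on"]),   # no heating without flow
]

def unsafe_states():
    """Enumerate output states violating at least one interlock."""
    for bits in product([False, True], repeat=len(OUTPUTS)):
        state = dict(zip(OUTPUTS, bits))
        if not all(ok(state) for ok in INTERLOCKS):
            yield state

payload = list(unsafe_states())
print(len(payload))  # → 4: half of the 8 output states break an interlock
```

A real payload would write these states to the PLC's output image sequentially, which is why the text can only guarantee that some safety property is violated, not that a specific physical objective is achieved.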


2) False Data Injection (FDI): In an FDI attack, the adversary selects a set of sensors that feed into one or more controllers. The adversary then supplies carefully crafted malicious values to these sensors, thus achieving a desired result from the controller. For example, if the supplied malicious values tell the controller that a temperature is becoming too low, it will increase the setting on a heating element, even if the actual temperature is fine. This will then lead to undetected overheating.

The earliest FDI attack targeted power system state estimation [85]. State estimation is an important step in distributed control systems, where the actual physical state is estimated based on a number of observables. Power system state estimation determines how electric load is distributed across various high-voltage lines and substations within the power transmission network. Compromising a subset of phasor measurement units (PMUs) can result in incorrect state estimation. This work addressed two questions: 1) Which sensors should be compromised, and how few are sufficient to achieve the desired result? 2) How can the maliciously crafted sensor values bypass the error correction mechanisms built into the state estimation algorithm? By compromising only tens of sensors (out of hundreds or thousands), it is possible to produce inaccurate state estimations in realistic power system bus topologies.

FDI attacks on Kalman-filtering-based state estimation have been reported in [86]. Kalman filters are a more general form of state estimation than the linear, direct current (dc) system model. The susceptibility of a Kalman-filter-based state estimator to FDI attacks depends on inherent properties of the designed system [86]. The system is only guaranteed to be controllable via FDI attack if the underlying state transition matrix contains an unstable eigenvalue, among other conditions. This has important implications not only for attacks, but also for defenses against FDI attacks, since a system lacking an unstable eigenvalue may not be perfectly attacked.
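The core of the attack on linear (dc) state estimation can be reproduced in a few lines for a toy estimator; the matrix and values below are ours, chosen only for illustration, and this is only the algebra behind [85], not its system model. If the attacker adds a = Hc to the measurements, the least-squares estimate shifts by exactly c while the residual, which is all that bad data detection sees, is unchanged.

```python
# Toy dc state estimator with residual-based bad data detection.

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):  # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def estimate(H, z):
    """Least-squares state estimate x = (H^T H)^-1 H^T z (2-state system)."""
    Ht = transpose(H)
    x_col = matmul(inv2(matmul(Ht, H)), matmul(Ht, [[v] for v in z]))
    return [row[0] for row in x_col]

def residual_norm(H, z, x):
    """Norm of z - H*x, the quantity that bad data detection thresholds."""
    r = [zi - sum(h * xj for h, xj in zip(row, x)) for row, zi in zip(H, z)]
    return sum(ri * ri for ri in r) ** 0.5

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]  # 4 sensors, 2 states
x_true = [1.0, 2.0]
z = [sum(h * xj for h, xj in zip(row, x_true)) for row in H]
z[0] += 0.02                                  # a little measurement noise

x_hat = estimate(H, z)
r0 = residual_norm(H, z, x_hat)

c = [0.5, -0.3]                               # attacker's desired state shift
a = [sum(h * cj for h, cj in zip(row, c)) for row in H]
z_att = [zi + ai for zi, ai in zip(z, a)]     # inject a = H*c

x_att = estimate(H, z_att)
r1 = residual_norm(H, z_att, x_att)

# The estimate moves by exactly c, yet the residual is unchanged:
print(abs(r1 - r0) < 1e-9, [round(xa - xh, 6) for xa, xh in zip(x_att, x_hat)])
# → True [0.5, -0.3]
```

Mounting this in practice requires knowing (enough of) H, which is why [85] focuses on how few compromised sensors suffice.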
IV. MITIGATING ATTACKS ON ICSs

In this section, we review the following ICS defenses: software-based mitigation, secure controller architectures to detect intrusions, and theoretical frameworks to understand the limits of mitigation.

A. Software Mitigations

Embedded systems software is programmed using native (unsafe) programming languages such as C or assembly language. As a consequence, it suffers from memory exploits, such as buffer overflows. After gaining control over the program flow, the adversary can inject malicious code to be executed (code injection [87]), or use existing pieces (gadgets) already residing in program memory (e.g., in linked libraries) to implement the desired malicious functionality (return-oriented programming [88]). Moreover, return-oriented programming is Turing complete, i.e., it allows an attacker to execute arbitrary malicious code. The latter attacks are often referred to as code-reuse attacks, since they use benign code of existing ICS software. Code-reuse attacks are prevalent and are applicable to a wide range of computing platforms. Stuxnet is known to have used code-reuse attacks [89].

Defenses against these attacks focus on either enforcing control-flow integrity (CFI) or randomizing the memory layout of an application by means of fine-grained code randomization. We elaborate on these two defenses. Both assume an adversary who is able to overwrite control-flow information in the data area of an application. There is a large body of work that prevents this initial overwrite; a discussion of these approaches is beyond the scope of this paper.

1) Control-Flow Integrity: This defense technique against code reuse ensures that an application only executes according to a predetermined control-flow graph (CFG) [90]. Since code injection and return-oriented programming result in a deviation from the CFG, CFI detects and prevents the attack. CFI can be realized as a compiler extension [91] or as a binary rewriting module [90].

CFI has performance overhead caused by control-flow validation instructions. To reduce this overhead, a number of proposals have been made: kBouncer [92], ROPecker [93], CFI for COTS binaries [94], ROPGuard [95], and CCFIR [96]. These schemes enforce so-called coarse-grained integrity checks to improve performance. For instance, they only constrain function returns to instructions following a call instruction, rather than checking the return address against a list of valid return addresses held on a shadow stack. Unfortunately, this tradeoff between security and performance allows for advanced code-reuse attacks that stitch together gadgets from call-preceded sequences [97]–[100]. Some runtime CFI techniques leverage low-level hardware events [101]–[103]. Another host-based CFI check injects intrusion detection functionality into the monitored program [104].

Until now, the majority of research on CFI has focused on software-based solutions. However, hardware-based CFI approaches are more efficient. Further, dedicated hardware CFI instructions allow for system-wide CFI protection. The first hardware-based CFI approach [105] realized the original CFI proposal [90] as a CFI state machine in a simulation environment of the Alpha processor. HAFIX proposes hardware-based CFI instructions and has been implemented on real hardware targeting Intel Siskiyou Peak and SPARC [106], [107]. It incurs 2% performance overhead across different embedded benchmarks by focusing on preventing return-oriented programming attacks exploiting function returns.
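The gap between a coarse-grained return check and a full shadow stack can be seen in a small model; this is our sketch, not a real enforcement tool, and the addresses and call-preceded set are invented. A return to any call-preceded address passes the coarse check, while a shadow stack demands the exact caller.

```python
CALL_PRECEDED = {0x401005, 0x402010}  # addresses right after some call site

shadow = []  # shadow stack of expected return addresses

def on_call(return_addr):
    shadow.append(return_addr)

def coarse_ok(target):
    # Coarse-grained policy: any call-preceded address is a valid return target.
    return target in CALL_PRECEDED

def shadow_ok(target):
    # Shadow stack policy: the return must match the most recent call site.
    return bool(shadow) and shadow.pop() == target

on_call(0x401005)          # legitimate call pushes its return address
gadget = 0x402010          # attacker overwrites the return address with this

print(coarse_ok(gadget))   # True: the gadget is call-preceded, so it passes
print(shadow_ok(gadget))   # False: the shadow stack expected 0x401005
```

This is exactly the call-preceded gadget stitching of [97]–[100]: the attack stays inside the set of targets the coarse policy allows.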


Remaining Challenges: Most proposed CFI defenses focus on the detection and prevention of return-oriented programming attacks, but do not protect against return-into-libc attacks. This is only natural, because the majority of code-reuse attacks require a few return-oriented gadgets to initialize registers and prepare memory before calling a system call or critical function. However, Schuster et al. [108] have demonstrated that code-reuse attacks based on only calling a chain of virtual methods allow arbitrary malicious program actions. In addition, it has been demonstrated that pure return-into-libc attacks can achieve Turing completeness [109]. Detecting such attacks is challenging: modern programs link to a large number of libraries, and require dangerous API and system calls to operate correctly [97]. Hence, for these programs, dangerous API and system calls are legitimate control-flow targets for indirect and direct call instructions, even if fine-grained CFI policies are enforced. In order to detect code-reuse attacks that exploit these functions, CFI needs to be combined with additional security checks, e.g., dynamic taint analysis and techniques that perform argument validation. Developing such CFI extensions is an important future research direction.

2) Fine-Grained Code Randomization: A widely deployed countermeasure against code-reuse attacks is the randomization of the application's memory layout. The key idea here is one of software diversity [110]. The key observation is that an adversary typically attempts to compromise many systems using the same attack vector. To mitigate this, one can diversify a program implementation into multiple, semantically equivalent instances [110]. The goal is to force the adversary to tailor the attack vector for each software instance, making the attack prohibitively expensive. Different approaches can be taken to realize software diversity, e.g., memory randomization [111], [112], compiler-based techniques [110], [113], [114], or binary rewriting and instrumentation [115]–[118].

A well-known instance of code randomization is address space layout randomization (ASLR), which randomizes the base address of shared libraries and the main executable [112]. Unfortunately, ASLR is often bypassed in practice due to its low randomization entropy and memory disclosure attacks that enable prediction of code locations. To tackle this limitation, a number of fine-grained ASLR schemes have been proposed [115]–[120]. The underlying idea is to randomize the code structure, for instance, by shuffling functions, basic blocks, or instructions (ideally for each program run [117], [118]). With fine-grained ASLR enabled, an adversary cannot reliably determine the addresses of interesting gadgets based on disclosing a single runtime address.

However, a recent just-in-time return-oriented programming (JIT-ROP) attack circumvents fine-grained ASLR by finding gadgets and generating the return-oriented payload on the fly [121]. As for any other real-world code-reuse attack, it only requires a memory disclosure of a single runtime address. However, unlike code-reuse attacks against ASLR, JIT-ROP only requires the runtime address of a valid code pointer, without knowing to which precise code part or function it points. Hence, JIT-ROP can use any code pointer, such as return addresses on the stack, to instantiate the attack. Based on the leaked address, JIT-ROP can disclose the content of multiple memory pages and generate the return-oriented payload at runtime. The key insight of JIT-ROP is that a leaked code pointer will reside on a 4-kB aligned memory page. This can be exploited by leveraging a scripting engine (e.g., JavaScript) to determine the affected page's start and end address. Afterwards, the attacker can start disassembling the randomized code page from its start address and identify useful return-oriented gadgets.

To tackle this class of code-reuse attacks, defenses have been proposed [122]–[124]. Readactor leverages a hardware-based approach to enable execute-only memory [124]. For this, it exploits Intel's extended page tables to conveniently mark code pages as execute-only. In addition, an LLVM-based instrumenting compiler 1) permutes functions; 2) strictly separates code from data; and 3) hides code pointers. As a consequence, a JIT-ROP attacker can no longer disassemble a page (i.e., the code pages are set to nonreadable). In addition, one cannot abuse code pointers located on the application's stack and heap to identify return-oriented gadgets, since Readactor performs code pointer hiding.

Remaining Challenges: CFI provides provable security [125]. That is, one can formally verify that CFI enforcement is sound. In particular, the explicit control-flow checks inserted by CFI into an application provide strong assurance that a program's control flow cannot be arbitrarily hijacked by an adversary. In contrast, code randomization does not put any restriction on the program's control flow. In fact, the attacker can provide any valid memory address as an indirect branch target. Another related problem of protection schemes based on code randomization is side-channel attacks [126], [127]. These attacks exploit timing and fault analysis side channels to infer randomization information. Recently, several defenses have started to combine CFI with code randomization. For instance, Mohan et al. [128] presented opaque CFI (O-CFI). This solution leverages coarse-grained CFI checks and code randomization to prevent return-oriented exploits. For this, O-CFI identifies a unique set of possible target addresses for each indirect branch instruction. Afterwards, it uses the per-indirect-branch set to restrict the target address of the indirect branch to only its minimal and maximal members. To further reduce the set of possible addresses, it arranges basic blocks belonging to an indirect branch set into clusters (so that they are located near each other), and also randomizes their location. However, O-CFI relies on precise static analysis. In particular, it statically determines valid branch addresses for return instructions, which typically leads to coarse-grained policies. Nevertheless, Mohan et al. [128] demonstrate that combining CFI with code randomization is a promising research direction.
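Function-level permutation, the simplest of the fine-grained schemes above, can be sketched as follows; this is our toy, with invented function names, sizes, and base address, and a real scheme would draw the permutation from a cryptographic RNG per instance or per run. Every instance keeps the same functions, but at instance-specific addresses, so a gadget address harvested from one instance misses in another.

```python
from itertools import permutations

FUNCS = [("login", 0x120), ("parse", 0x340), ("log", 0x80), ("update", 0x200)]
ORDERS = list(permutations(FUNCS))  # a real diversifier picks one at random

def layout(order, base=0x400000):
    """Assign consecutive load addresses to functions in the given order."""
    addrs, cur = {}, base
    for name, size in order:
        addrs[name] = cur
        cur += size
    return addrs

instance_a = layout(ORDERS[0])
instance_b = layout(ORDERS[7])   # a differently diversified instance

# Both instances contain the same functions (semantically equivalent)...
print(sorted(instance_a) == sorted(instance_b))   # → True
# ...but a gadget address harvested from one is wrong in the other.
print(instance_a["parse"] == instance_b["parse"])  # → False
```

JIT-ROP sidesteps exactly this defense by rediscovering the shuffled layout at runtime through memory disclosure, which is what motivates the execute-only designs discussed next.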


B. Novel/Secure Control Architectures

In this section, we consider mitigations for the problem raised in Section III-B1. The threat here is that an adversary may tamper with a controller's logic code, thus subverting its behavior. This can be generalized to the notion of an untrusted controller. We consider four novel architectures for this problem: TSV, a tool for statically checking controller code; C2, a dynamic reference monitor for a running controller; S3A, a controller architecture that represents a middle ground between TSV and C2; and finally, an approach for providing a trusted computing base (TCB) for a controller, so that PLCs may dependably enforce safety properties on themselves.

1) Trusted Safety Verifier (TSV) [81]: As previously discussed, one method for tampering with an ICS process is to upload malicious logic to a PLC. This was demonstrated by the Stuxnet attack. TSV prevents uploading of malicious control logic by statically verifying that logic a priori [81]. TSV sits on an embedded device next to a PLC, intercepts all PLC-bound code, and statically verifies it against a set of designer-supplied safety properties. TSV does this in a number of steps. First, the control logic is symbolically executed to produce a symbolic scan cycle. A symbolic scan cycle represents all possible single-scan-cycle executions of the control logic. TSV then finds feasible transitions between subsequent symbolic scan cycles to form a temporal execution graph (TEG). The TEG is then fed into a model checker, which verifies that a set of linear temporal logic safety properties holds under the TEG model. If the control logic violates any safety property, the model checker returns a counterexample input that would cause the violation, and the control logic is blocked from running on the PLC. The main drawback of TSV is that often the TEG is a tree structure of bounded depth. Thus, systems beyond a certain complexity cannot be effectively checked by TSV in a reasonable amount of time.

2) C2 Architecture [129]: C2 provides a dynamic reference monitor for sequential and hybrid control systems. Like TSV, C2 enforces a set of engineer-supplied safety properties. However, enforcement in C2 is done at runtime, by an external module positioned between a PLC and the ICS hardware devices. At the end of each PLC scan cycle, a new set of control signals is sent to the ICS devices. C2 checks these signals, along with the current ICS state, against the safety properties. Any unsafe modifications of the plant state are denied. If at any step an attempted control signal is denied by C2, it enacts one of a number of deny disciplines to deal with the potentially dangerous operation. One of the main results from the C2 evaluation was that all deny disciplines should support notifying the PLC of the denial, so that it knows the plant did not receive the control signal. A key shortcoming of C2 is that it can only detect violations immediately before they occur. What is preferable is a system that can give advance warnings, as TSV's static analysis does, but that works for complex ICSs, as C2 does.

3) Secure System Simplex Architecture (S3A) [130]: Similar to how TSV requires a copy of the control logic, S3A requires the high-level system control flow and execution time profiles for the system under observation. Similar to how C2 performs real-time monitoring, S3A aims to detect when the system is approaching an unsafe state. However, different from C2, S3A aims to give a deterministic time buffer before the system potentially enters the unsafe state [131]. While S3A has the advantage of more advanced detection, it cannot operate on arbitrarily complex systems as C2 can. However, it is appropriate for more complex systems than TSV.

Remaining Challenges: In this review of TSV, C2, and S3A, we see a tradeoff forming: complexity of the monitored system versus amount of advance warning. TSV sits at one end of this spectrum, offering the most advance warning for systems of bounded complexity, while C2 sits at the other end, offering last-second detection of unsafe states on arbitrarily complex systems. The S3A approach represents a compromise between the two; however, the more complex the system is, the more detailed the control flow and timing information fed to S3A must be. In the future, computational power for verification may be substantial enough to allow full TSV analysis of arbitrarily complex systems; for current, practical solutions, however, this is not a reasonable assumption.

Part of the reason none of the existing architectures can win both ends of the tradeoff is that they all exist outside the PLC. This also adds significant cost and complexity, as they must be physically integrated with an existing control system. An alternative approach is to construct future PLCs to provide a minimal trusted computing base (TCB). One such TCB, with the goal of restricting the ability to manipulate physical machinery to a small set of privileged code blocks within the PLC memory, is proposed in [132]. This TCB is not itself aware of the ICS physical safety properties. Instead, its goal is to protect a privileged set of code blocks that are able to affect the plant, i.e., via control signals. The privileged code blocks then contain the safety properties. Thus, C2- or S3A-like checks are done from within these blocks. This approach has the added benefit that a TSV-like verification of safety properties in the privileged blocks is substantially simpler than verifying an entire system, thus allowing for a static analysis of more complex systems than TSV.
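The runtime check shared by C2 and the privileged-block TCB can be condensed into a few lines; this is our simplification, and the state variables, properties, and deny discipline are invented, not taken from [129] or [132]. Each scan cycle's control signals are validated against engineer-supplied safety properties before they reach the plant, and a denied write is reported back to the PLC.

```python
# Engineer-supplied safety properties over (plant state, proposed signals).
# Both rules are hypothetical examples:
SAFETY = [
    lambda st, sig: not (sig.get("heater") == "on" and st["temp_c"] > 90),
    lambda st, sig: not (sig.get("valve") == "closed" and st["pressure_kpa"] > 400),
]

def monitor(state, signals):
    """Reference monitor sitting between the PLC and the plant devices."""
    if all(prop(state, signals) for prop in SAFETY):
        return ("forward", signals)
    # Deny discipline: drop the write and notify the PLC of the denial,
    # so it knows the plant never received the signal.
    return ("deny_and_notify", {})

print(monitor({"temp_c": 50, "pressure_kpa": 100}, {"heater": "on"})[0])
# → forward
print(monitor({"temp_c": 95, "pressure_kpa": 100}, {"heater": "on"})[0])
# → deny_and_notify
```

Note that this monitor, like C2, can only reject a write at the last moment; giving earlier warning is exactly what pushes S3A and TSV toward timing profiles and static analysis.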


C. Detection of Control and Sensor Injection Attacks

When considering attacks against ICS, there are two important channels that must be considered: the control channel and the sensor channel. A control channel attack compromises a computer, a controller, or an individual upstream from the physical process. The compromised entity then injects malicious commands into the system. A sensor channel attack corrupts sensor readings coming from the physical plant in order to cause bad decision making by the controllers receiving those sensor readings.2 In this section, we review detection of control channel attacks and FDI. Techniques for control channel attacks inherit from the existing body of work in network and host intrusion detection, whereas FDI detection stems largely from the theory of state estimation and control.

1) Detecting Control Channel Attacks: A survey of SCADA intrusion detection between 2004 and 2008 can be found in [133]. It presents a taxonomy in which detection systems are categorized based on the following.

· Degree of SCADA specificity: How well does the solution leverage some of the unique aspects of SCADA systems?
· Domain: Does the solution apply to any SCADA system, or is it restricted to a single domain, e.g., water?
· Detection principle: What method is used to categorize events: behavioral, specification, anomaly, or a combination?
· Intrusion-specific: Does the solution only address intrusions, or is it also useful for fault detection?
· Time of detection: Is the threat detected and reported in real time, or only as an offline operation?
· Unit of analysis: Does the solution examine network packets, API calls, or other events?

We find that among the categorized systems there are some deficiencies. First, they lack a well-defined threat model. Second, they do not account for the degree of heterogeneity found in real-world ICS, e.g., the use of multiple protocols. Finally, the proposed systems were not sufficiently evaluated for false positives, and insufficient strategies for dealing with false positives were given.

We review recent work that aims at greater feasibility [134]. In this approach, a specification is derived for traffic behavior over smart meter networks, and formal verification is used to ensure that any network trace conforming to the specification will not violate a given

normal operation, error conditions, etc., as well as the network topology; and 3) a set of constraints on allowed behavior. An evaluation of a prototype implementation showed that no more than 1.6% of CPU usage was needed for monitoring the specification at meters. One potential limitation of this approach is the need for expert-provided information in the form of the system model and constraints on allowed behavior.

An alternative approach, which is not dependent on specifications, is given in [135]. This solution builds a model of good behavior through observation of three types of quantities visible to PLCs: sensor measurements, control signals, and events such as alarms. The behavioral model uses autoregression to predict the next system state. To avoid low-and-slow attacks that autoregression may not catch, upper and lower Shewhart control limits are used as absolute bounds on process variables that may not be crossed. The evaluation on one week of network traces from a prototype control system showed that most normal behaviors were properly modeled by the autoregression. There were, however, several causes of deviations, including nearly constant signals that occasionally deviated briefly before returning to their prior constant value, and a counter variable that experienced a delayed increment. Such cases would represent false positives for the autoregression model, but would not necessarily trip the Shewhart control limits.
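The two complementary checks of the behavioral approach can be sketched together; the AR coefficient, limits, and threshold below are invented parameters of ours, not those of [135]. The autoregressive residual catches sudden jumps, while the Shewhart limits catch low-and-slow drifts that the regression tracks too well to flag.

```python
ALPHA = 0.99           # assumed AR(1) coefficient for this process variable
LIMITS = (10.0, 80.0)  # lower and upper Shewhart control limits
THRESH = 5.0           # allowed one-step prediction error

def check(prev, value):
    """Return the alarms raised by a new sensor reading."""
    alarms = []
    if abs(value - ALPHA * prev) > THRESH:
        alarms.append("ar_residual")      # reading far from the AR prediction
    if not (LIMITS[0] <= value <= LIMITS[1]):
        alarms.append("control_limit")    # absolute Shewhart bound crossed
    return alarms

print(check(50.0, 51.0))  # → []: a normal step
print(check(50.0, 70.0))  # → ['ar_residual']: a sudden jump
print(check(84.0, 85.0))  # → ['control_limit']: slow drift past the bound
```

The third case is the low-and-slow attack: each step is well predicted by the autoregression, so only the absolute control limit fires.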
2) Detecting FDI: While control channel attacks directly target controllers with malicious commands, FDI attacks can be more subtle, as they use forged sensor data to cause the controller to make misguided decisions. Detection of FDI attacks is thus deeply rooted in the existing discipline of state estimation discussed earlier. This will require a) a measurement model that relates the measured quantity to the physical value that caused it, e.g., heat propagation; and b) an error detection method to allow faulty measurements to be discarded.

We described one attack against power grid state estimation in which estimation errors could be caused by tampering with a relatively small number of PMUs [85]. In one approach to detecting such an attack, one can use a small, strategically selected set of tamper-resistant meters to provide independent measurements [136]. These out-of-band measurements are used to determine the accuracy of the remaining majority of measurements contributing to the state estimation.

In a second approach [137], two security indices are computed for a given state estimator. The first index measures how well the state estimator's bad data detec-
security policy. The specification is formed based on: tor can handle attacks where the adversary is limited to a
1) the smart meter protocols (in this case, the ANSI C12 few tampered measurements. The second index measures
family); 2) a system model consisting of state machines how well the bad data detector can handle attacks where
that describe a meter’s lifetime, e.g., provisioning, the adversary only makes small changes to measurement
magnitudes. Along with the grid topology information,
2
More information about FDI can be found in Section III-B2. these indices can be useful in determining how to

Vol. 104, No. 5, May 2016 | Proceedings of the IEEE 1051


Authorized licensed use limited to: University of Tartu Estonia. Downloaded on September 30,2020 at 13:11:51 UTC from IEEE Xplore. Restrictions apply.
McLaughlin et al.: The Cybersecurity Landscape in Industrial Control Systems

allocate security functionality such as retrofitted encryption to various measurement devices.

The third approach differs from the above two in that it does not attempt to select a set of meters for security enhancements, but instead places weights on the grid topology to reflect the trustworthiness of various PMUs [138]. The trust weights are integrated into a distributed Kalman filtering algorithm to produce a state estimation that more accurately reflects the trustworthiness of the individual PMUs. In the evaluation of the distributed Kalman filters, it was found that they converged to the correct state estimate in approximately 20 steps.

From these approaches to detecting both control channel and FDI attacks, one can see that an effective method toward detecting ICS intrusions involves monitoring the physical process itself, as well as its interactions with the controller and sensors.

D. Theoretical ICS Security Frameworks
A number of recent advances have generalized the increasing body of results to provide theoretical frameworks. In this section, we review such theoretical frameworks based on the following three approaches: 1) modeling attacker behavior and identifying likely attack scenarios; 2) defining the general detection and identification of attacks against ICSs; and 3) the distribution of security enhancements in ICSs where controllers share network infrastructure. A common theme in these frameworks is the optimal distribution of security protections in large, legacy ICSs.

Adding protections like cryptographic communications to legacy equipment is expensive. Thus, it is preferable to secure the most vulnerable portions of an ICS. Teixeira et al. describe a risk management framework for ICSs [139]. Starting with the notion of security indices [137], this work looks at methods for identifying the most vulnerable measurements in both static and dynamic control systems. For static control systems, it is assumed that adversaries wish to execute the minimum-cost attack. In the static case, the k index described in Section IV-C is sufficient, and methods are given for efficiently computing k for large systems.

In the case of dynamic systems, maximum-impact, minimum-resource attacks are defined as a multiobjective optimization problem. For such a problem, the basic security indices do not suffice. Instead, the multiobjective problem is transformed into a maximum-impact, resource-constrained problem. An example is given where this is used to calculate the attack vectors for a quadruple-tank system. The resulting optimal attack strategy can be used to allocate defenses such as data encryption in the ICS.

Another framework considers generalizing attacks against ICSs and describes the fundamental limitations of monitors against these attacks [140]. Assuming that an attack monitor is consistent, i.e., does not generate false positives, it is shown that some attacks are undetectable if there is an initial state which produces the same final state as an attack. Additionally, it is shown that some attacks cannot be distinguished from others. These results are applicable to stealthy [141], replay [142], and FDI attacks.

The previous two approaches considered ways of modeling attacks and attack likelihoods against individual control loops. However, in some systems, a number of otherwise independent control processes are actually somewhat dependent due to the shared network. In this case, a distributed denial-of-service (DDoS) attack against one controller may affect others. The problem of interdependent control systems is addressed using a game-theoretic approach in [143]. The noncooperative game consists of two stages: 1) each control loop (player) chooses whether to apply security enhancements; and 2) each player applies the optimal control input to its plant.

In the nonsocial form of this game, players only attempt to minimize their own cost, which consists of the operating costs of the plant plus the cost of adding and maintaining security measures. For this form of the game, with M players, there is shown to exist a unique equilibrium solution. The solutions to this game may not be globally optimal, due to externalities imposed by players that opt out of security enhancements. To solve this, penalties are introduced for players that do not select security enhancements, leading to a guaranteed unique solution that is also globally optimal. While such a game-theoretic approach is useful in distributing the cost of security enhancements, actually achieving robust control in distributed systems is more difficult, especially when the system in question is nonlinear. To this end, a modification to a traditional model-predictive control (MPC) problem has been suggested [144]. Adding a robustness constraint to the MPC problem can bound the values of future states.

These theoretical frameworks offer opportunities to understand and improve defenses. However, it is important to understand the assumptions behind the frameworks. For example, the above approaches assume the following.
1) Attackers are omniscient, knowing the exact measurement, control, and process matrices for each system, as well as all system states.
2) Attackers are nearly omnipotent, with the ability to compromise any measurement and control vector. There is an important exception here, which is that detectors are assumed to be immune to attackers.
3) Detectors do not create false positives, and systems are completely deterministic (first two approaches).
4) Security enhancements can significantly mitigate DDoS attacks on various network architectures (third approach).
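The role of assumption 1) can be made concrete with a small numerical sketch. Below, a toy DC state estimator with a residual-based bad data detector flags a naive single-meter injection, while a stealthy injection of the form a = Hc, which requires the attacker to know the measurement matrix H as in the attacks of [85], leaves the residual unchanged. The matrices, threshold, and code are our own illustrative construction, not taken from the cited works:

```python
import numpy as np

# Toy DC state estimation: z = H x + e, with H the measurement matrix.
# (Illustrative 4-measurement, 2-state example; not from the cited papers.)
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
R = 0.01 * np.eye(4)  # measurement noise covariance

def estimate(z):
    """Weighted least-squares estimate x_hat = (H^T W H)^-1 H^T W z."""
    W = np.linalg.inv(R)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

def residual_norm(z):
    """Norm of the residual r = z - H x_hat, the bad data statistic."""
    return np.linalg.norm(z - H @ estimate(z))

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.5])
z = H @ x_true + 0.01 * rng.standard_normal(4)

tau = 0.5  # illustrative detection threshold on ||r||

# Naive FDI: tamper with one meter -> the residual grows and the attack is flagged.
z_naive = z + np.array([1.0, 0.0, 0.0, 0.0])

# Stealthy FDI: a = H c lies in the column space of H, so the estimate shifts
# by exactly c while the residual is unchanged -- the detector is blind to it.
c = np.array([0.3, -0.2])  # attacker-chosen state perturbation
z_stealth = z + H @ c

print(residual_norm(z) < tau)        # clean data passes
print(residual_norm(z_naive) > tau)  # naive attack detected
print(np.isclose(residual_norm(z_stealth), residual_norm(z)))  # stealthy attack invisible
```

Breaking the attacker's complete knowledge of the effective measurement model, for example with a few tamper-resistant out-of-band meters as in [136], is precisely what restores detectability of the a = Hc construction.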


In any assessment procedure, the actual set of assumptions should be considered and compared with those of the theoretical framework being used in the assessment.

V. CONCLUSION
ICT-based ICSs can deliver real-time information, resulting in automatic and intelligent control of industrial processes. Inherently dangerous processes, however, are no longer immune to cyber threats, as vulnerable devices, formats, and protocols are not hosted on dedicated infrastructure due to cost pressures. Consequently, ICS infrastructure has become increasingly exposed, either by direct connection to the Internet or via interfaces to utility IT systems. Therefore, inflicting substantial damage or widespread disruption may be possible with a comprehensive analysis of the target systems. Publicly available information, combined with default and well-known ICS configuration details, could potentially allow a resource-rich adversary to mount a large-scale attack.

This paper surveyed the state of the art in ICS security, identified outstanding research challenges in this emerging area, and motivated the deployment of cybersecurity methods and tools to ICSs. All levels of the multilayered ICS architecture can be targeted by sophisticated cyberattacks that disturb the control process of the ICS. Assessing the vulnerabilities of an ICS requires the development of a uniquely-ICS multilayered testbed that establishes as many pathways as possible between the cyber and physical components in the ICS. These pathways can assist in determining the real-world consequences in terms of the technical impacts and the severity of the outcomes. An important direction of research is to develop effective methods to detect ICS intrusions that involve monitoring the physical processes themselves, as well as their interactions with the controller and sensors.

Acknowledgment
The authors from New York University (NYU) would like to thank S. Lee, P. Robison, P. Stergiou, and S. Kim from Consolidated Edison for their continuous support on the project, Platform Profiling in Legacy and Modern Control and Monitoring Systems.

REFERENCES
[1] K. Stouffer, J. Falco, and K. Scarfone, "Guide to industrial control systems (ICS) security," NIST Special Publication 800-82, 2011. [Online]. Available: http://csrc.nist.gov/publications/nistpubs/800-82/SP800-82-final.pdf
[2] E. Hayden, M. Assante, and T. Conway, "An abbreviated history of automation & industrial controls systems and cybersecurity," 2014. [Online]. Available: https://ics.sans.org/media/An-Abbreviated-History-of-Automation-and-ICS-Cybersecurity.pdf
[3] European Network and Information Security Agency (ENISA), "Protecting industrial control systems—Recommendations for Europe and member states," 2011. [Online]. Available: https://www.enisa.europa.eu/
[4] T. M. Chen and S. Abu-Nimeh, "Lessons from Stuxnet," Computer, vol. 44, no. 4, pp. 91–93, 2011.
[5] P. Muncaster, "Stuxnet-like attacks beckon as 50 new SCADA threats discovered," 2011. [Online]. Available: http://www.v3.co.uk/v3-uk/news/2045556/stuxnet-attacks-beckon-scada-threats-discovered
[6] J. Weiss, "Assuring industrial control system (ICS) cyber security." [Online]. Available: http://csis.org/files/media/csis/pubs/080825_cyber.pdf
[7] R. Anderson et al., "Measuring the cost of cybercrime," in The Economics of Information Security and Privacy. Berlin, Germany: Springer-Verlag, 2013, pp. 265–300.
[8] Willis Group, "Energy market review 2014—Cyber-attacks: Can the market respond?" 2014. [Online]. Available: http://www.willis.com/
[9] Y. Mo et al., "Cyber-physical security of a smart grid infrastructure," Proc. IEEE, vol. 100, no. 1, pp. 195–209, Jan. 2012.
[10] P. Radmand, A. Talevski, S. Petersen, and S. Carlsen, "Taxonomy of wireless sensor network cyber security attacks in the oil and gas industries," in Proc. 24th Int. Conf. Adv. Inf. Netw. Appl., 2010, pp. 949–957.
[11] S. Amin, X. Litrico, S. Sastry, and A. M. Bayen, "Cyber security of water SCADA systems—Part I: Analysis and experimentation of stealthy deception attacks," IEEE Trans. Control Syst. Technol., vol. 21, no. 5, pp. 1963–1970, 2013.
[12] "What is a programmable logic controller (PLC)?" [Online]. Available: http://www.wisegeek.org/what-is-a-programmable-logic-controller.htm
[13] A. Scott, "What is a distributed control system (DCS)?" [Online]. Available: http://blog.cimation.com/blog/bid/198186/What-is-a-Distributed-Control-System-DCS
[14] Kaspersky, "Cyberthreats to ICS systems," 2014. [Online]. Available: http://media.kaspersky.com/en/business-security/critical-infrastructure-protection/Cyber_A4_Leaflet_eng_web.pdf
[15] J. Meserve, "Mouse click could plunge city into darkness, experts say," CNN, 2007. [Online]. Available: http://www.cnn.com/2007/US/09/27/power.at.risk/index.html
[16] Security Matters, "The Aurora attack." [Online]. Available: http://www.secmatters.com/casestudy10
[17] D. Kushner, "The real story of Stuxnet," 2013. [Online]. Available: http://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet
[18] M. B. Line, A. Zand, G. Stringhini, and R. Kemmerer, "Targeted attacks against industrial control systems: Is the power industry prepared?" in Proc. 2nd Workshop Smart Energy Grid Security, 2014, pp. 13–22.
[19] C. Miller and C. Valasek, "Remote exploitation of an unaltered passenger vehicle," 2015. [Online]. Available: https://www.defcon.org/html/defcon-23/dc-23-speakers.html#Miller
[20] K. Thomas, "Hackers demo Jeep security hack," 2015. [Online]. Available: http://www.welivesecurity.com/2015/07/22/hackers-demo-jeep-security-hack/
[21] N. G. Tsoutsos, C. Konstantinou, and M. Maniatakos, "Advanced techniques for designing stealthy hardware trojans," in Proc. 51st Design Autom. Conf., 2014, pp. 1–4.
[22] Y. Jin, M. Maniatakos, and Y. Makris, "Exposing vulnerabilities of untrusted computing platforms," in Proc. 30th IEEE Int. Conf. Comput. Design, 2012, pp. 131–134.
[23] K. Rosenfeld and R. Karri, "Attacks and defenses for JTAG," 2010.
[24] M. F. Breeuwsma, "Forensic imaging of embedded systems using JTAG (boundary-scan)," Digital Investigation, vol. 3, no. 1, pp. 32–42, 2006.
[25] J. Barnaby, "Exploiting embedded systems," Black Hat 2006, 2006. [Online]. Available: http://www.blackhat.com/presentations/bh-europe-06/bh-eu-06-Jack.pdf
[26] D. Schneider, "USB flash drives are more dangerous than you think." [Online]. Available: http://spectrum.ieee.org/tech-talk/computing/embedded-systems/usb-flash-drives-are-more-dangerous-than-you-think
[27] USB Killer. [Online]. Available: http://kukuruku.co/hub/diy/usb-killer
[28] bunnie and xobs, "The exploration and exploitation of an SD memory card," 30C3. [Online]. Available: http://bunniefoo.com/bunnie/sdcard-30c3-pub.pdf
[29] S. Skorobogatov, "Flash memory 'bumping' attacks," in Proc. Cryptogr. Hardware Embedded Syst., 2010, pp. 158–172.
[30] R. Templeman and A. Kapadia, "Gangrene: Exploring the mortality of flash memory," HotSec, vol. 12, p. 1, 2012.
[31] X. Wang, C. Konstantinou, M. Maniatakos, and R. Karri, "ConFirm: Detecting firmware modifications in embedded systems using hardware performance counters," in Proc. 34th IEEE/ACM Int. Conf. Comput.-Aided Design, 2015, pp. 544–551.
[32] CRitical Infrastructure Security AnaLysIS, "Crisalis Project EU, Deliverable D2.2 Final Requirement Definition." [Online]. Available: http://www.crisalis-project.eu/
[33] DHS, "Common cybersecurity vulnerabilities in industrial control systems," 2011. [Online]. Available: http://ics-cert.us-cert.gov/
[34] D. Beresford, "The Sauce of Utter pwnage," 2011. [Online]. Available: http://thesauceofutterpwnage.blogspot.com/
[35] Common Vulnerabilities and Exposures, "CVE-2011-2367." [Online]. Available: http://cve.mitre.org/
[36] C. Nan, I. Eusgeld, and W. Kröger, "Hidden vulnerabilities due to interdependencies between two systems," in Proc. Critical Inf. Infrastructures Security, 2013, pp. 252–263.
[37] T. Macaulay and B. L. Singer, Cybersecurity for Industrial Control Systems: SCADA, DCS, PLC, HMI, SIS. Boca Raton, FL, USA: CRC Press, 2011.
[38] S. Shetty, T. Adedokun, and L.-H. Keel, "Cyberphyseclab: A testbed for modeling, detecting and responding to security attacks on cyber physical systems," in Proc. 3rd ASE Int. Conf. Cyber Security, 2014.
[39] Center for the Protection of National Infrastructure (CPNI), "Cyber security assessments of industrial control systems," 2010. [Online]. Available: https://ics-cert.us-cert.gov/sites/default/files/documents/Cyber_Security_Assessments_of_Industrial_Control_Systems.pdf
[40] C1 Working Group Members of Power System Relaying Committee, "Cyber security issues for protective relays," 2008. [Online]. Available: https://www.gedigitalenergy.com/smartgrid/May08/7_Cyber-Security_Relays.pdf
[41] R. E. Mahan et al., "Secure data transfer guidance for industrial control and SCADA systems," 2011. [Online]. Available: http://www.pnnl.gov/main/publications/external/technical_reports/PNNL-20776.pdf
[42] B. Rolston, "Attack methodology analysis: Emerging trends in computer-based attack methodologies and their applicability to control system networks," 2005. [Online]. Available: http://www5vip.inl.gov/technicalpublications/Documents/3494179.pdf
[43] W. Grega, "Hardware-in-the-loop simulation and its application in control education," in Proc. 29th Annu. Front. Edu. Conf., 1999, vol. 2, pp. 12B6–12B7.
[44] C. Konstantinou and M. Maniatakos, "Impact of firmware modification attacks on power systems field devices," in Proc. 6th IEEE Int. Conf. Smart Grid Commun., 2015, pp. 1–6.
[45] M. Souppaya and K. Scarfone, "Guide to enterprise patch management technologies," 2013. [Online]. Available: http://dx.doi.org/10.6028/NIST.SP.800-40r3
[46] European Network and Information Security Agency (ENISA), "Good practice guide for CERTs in the area of industrial control systems," 2013. [Online]. Available: https://www.enisa.europa.eu/
[47] A. Nicholson, H. Janicke, and A. Cau, "Position paper: Safety and security monitoring in ICS/SCADA systems," in Proc. 2nd Int. Symp. ICS SCADA Cyber Security Res., 2014, pp. 61–66.
[48] C. Konstantinou, M. Maniatakos, F. Saqib, S. Hu, J. Plusquellic, and Y. Jin, "Cyber-physical systems: A security perspective," in Proc. 20th IEEE Eur. Test Symp., 2015, pp. 1–8.
[49] Scilab, "Open source and cross-platform platform." [Online]. Available: http://www.scilab.org/
[50] Scicos, "Block diagram modeler/simulator." [Online]. Available: http://www.scicos.org/
[51] H. K. Fathy, Z. S. Filipi, J. Hagena, and J. L. Stein, "Review of hardware-in-the-loop simulation and its prospects in the automotive area," Ann Arbor, vol. 1001, pp. 48109–2125, 2006.
[52] National Institute of Standards and Technology, "A cybersecurity testbed for industrial control systems," 2014. [Online]. Available: http://www.nist.gov/manuscript-publication-search.cfm?pub_id=915876
[53] M. Zeller, "Myth or reality—Does the Aurora vulnerability pose a risk to my generator?" in Proc. 64th Annu. Conf. Protective Relay Eng., 2011, pp. 130–136.
[54] National Institute of Standards and Technology, "Measurement challenges and opportunities for developing smart grid testbeds workshop," 2014. [Online]. Available: http://www.nist.gov/smartgrid/upload/SG-Testbed-Workshop-Report-FINAL-1-2-8-2014.pdf
[55] I. N. Fovino, M. Masera, L. Guidi, and G. Carpi, "An experimental platform for assessing SCADA vulnerabilities and countermeasures in power plants," in Proc. 3rd Int. Conf. Human Syst. Interactions, 2010, pp. 679–686.
[56] Idaho National Laboratory, "National SCADA Test Bed (NSTB) Program." [Online]. Available: https://www.inl.gov/
[57] A. Hahn, A. Ashok, S. Sridhar, and M. Govindarasu, "Cyber-physical security testbeds: Architecture, application, evaluation for smart grid," IEEE Trans. Smart Grid, vol. 4, no. 2, pp. 847–855, 2013.
[58] Digital Bond, "Project Basecamp." [Online]. Available: http://www.digitalbond.com/tools/basecamp/
[59] B. Reaves and T. Morris, "An open virtual testbed for industrial control system security research," Int. J. Inf. Security, vol. 11, no. 4, pp. 215–229, 2012.
[60] P. F. Roberts, "Zotob, PnP worms slam 13 DaimlerChrysler plants," 2008. [Online]. Available: http://www.eweek.com
[61] "Computer viruses make it to orbit," BBC News, Aug. 2008. [Online]. Available: http://news.bbc.co.uk/2/hi/7583805.stm
[62] B. Krebs, "Cyber incident blamed for nuclear power plant shutdown," The Washington Post, Jun. 2008. [Online]. Available: http://www.washingtonpost.com/wp-dyn/content/article/2008/06/05/AR2008060501958.html
[63] S. Grad, "Engineers who hacked into L.A. traffic signal computer, jamming streets, sentenced," Los Angeles Times. [Online]. Available: http://latimesblogs.latimes.com/lanow/2009/12/engineers-who-hacked-in-la-traffic-signal-computers-jamming-traffic-sentenced.html
[64] N. Leall, "Lessons from an insider attack on SCADA systems," 2009. [Online]. Available: http://blogs.cisco.com/security/lessons_from_an_insider_attack_on_scada_systems/
[65] K. Zetter, "Clues suggest Stuxnet virus was built for subtle nuclear sabotage," Wired, 2010. [Online]. Available: http://www.wired.com/threatlevel/2010/11/stuxnet-clues/
[66] J. Leyden, "Polish teen derails tram after hacking train network," The Register, 2008. [Online]. Available: http://www.theregister.co.uk/2008/01/11/tram_hack/
[67] J. Meserve, "Sources: Staged cyber attack reveals vulnerability in power grid," CNN, 2007. [Online]. Available: http://articles.cnn.com/2007-09-26/us/power.at.risk_1_generator-cyber-attack-electric-infrastructure?_s=PM:US
[68] E. Byres and J. Lowe, "The myths and facts behind cyber security risks for industrial control systems," in Proc. ISA Process Control Conf., 2003.
[69] J. Pollet, "Electricity for free? The dirty underbelly of SCADA and smart meters," in Proc. Black Hat USA, 2010.
[70] L. Piètre-Cambacédès, M. Trischler, and G. N. Ericsson, "Cybersecurity myths on power control systems: 21 misconceptions and false beliefs," IEEE Trans. Power Delivery, vol. 26, no. 1, pp. 161–172, 2011.
[71] National Energy Regulatory Commission, "NERC CIP 002 1—Critical cyber asset identification," 2006.
[72] J. Weiss, "Are the NERC CIPs making the grid less reliable," in Proc. Control Global, 2009.
[73] D. Beresford, "Exploiting Siemens Simatic S7 PLCs," in Proc. Black Hat USA, 2011.
[74] T. Newman, T. Rad, and J. Strauchs, "SCADA & PLC vulnerabilities in correctional facilities," white paper, 2011.
[75] S. Blumsack and A. Fernandez, "Ready or not, here comes the smart grid," Energy, vol. 37, no. 1, pp. 61–68, 2012.
[76] C. S. King, "The economics of real-time and time-of-use pricing for residential consumers," American Energy Institute, Tech. Rep., 2001.
[77] S. McLaughlin, D. Podkuiko, and P. McDaniel, "Energy theft in the advanced metering infrastructure," in Proc. 4th Int. Workshop Critical Infrastructure Protection, 2009.
[78] S. McLaughlin, D. Podkuiko, S. Miadzvezhanka, A. Delozier, and P. McDaniel, "Multi-vendor penetration testing in the advanced metering infrastructure," in Proc. 26th Annu. Comput. Security Appl. Conf., 2010.
[79] Laboratory of Cryptography and System Security (CrySyS), "Duqu: A Stuxnet-like malware found in the wild," Budapest Univ. Technol. Econ., Budapest, Hungary, Tech. Rep., 2011.
[80] S. McLaughlin, "On dynamic malware payloads aimed at programmable logic controllers," in Proc. 6th USENIX Workshop Hot Topics Security, 2011.
[81] S. McLaughlin, D. Pohly, P. McDaniel, and S. Zonouz, "A trusted safety verifier for process controller code," in Proc. 21st Annu. Netw. Distrib. Syst. Security Symp., 2014.
[82] C. Chevillat, D. Carrington, P. Strooper, J. Süß, and L. Wildman, "Model-based generation of interlocking controller software from control tables," in Model Driven Architecture—Foundations and Applications, Lecture Notes in Computer Science, vol. 5095, I. Schieferdecker and A. Hartman, Eds. Berlin, Germany: Springer-Verlag, 2008, pp. 349–360.
[83] PROFIBUS, "IMS research estimates top position for PROFINET," 2010. [Online]. Available: http://www.profibus.com/news-press/detail-view/article/ims-research-estimates-top-position-for-profinet/
[84] S. McLaughlin and P. McDaniel, "SABOT: Specification-based payload generation for programmable logic controllers," in Proc. 19th ACM Conf. Comput. Commun. Security, 2012.
[85] Y. Liu, P. Ning, and M. K. Reiter, "False data injection attacks against state estimation in electric power grids," in Proc. 16th ACM Conf. Comput. Commun. Security, 2009.
[86] Y. Mo and B. Sinopoli, "False data injection attacks in control systems," in Proc. 1st Workshop Secure Control Syst., 2010.
[87] A. One, "Smashing the stack for fun and profit," Phrack Mag., vol. 49, no. 14, 2000.
[88] R. Roemer, E. Buchanan, H. Shacham, and S. Savage, "Return-oriented programming: Systems, languages, and applications," ACM Trans. Inf. Syst. Security, vol. 15, no. 1, pp. 2:1–2:34, 2012.
[89] A. Matrosov, E. Rodionov, D. Harley, and J. Malcho, "Stuxnet under the microscope," 2011.
[90] M. Abadi, M. Budiu, Ú. Erlingsson, and J. Ligatti, "Control-flow integrity: Principles, implementations, and applications," ACM Trans. Inf. Syst. Security, vol. 13, no. 1, 2009.
[91] J. Pewny and T. Holz, "Compiler-based CFI for iOS," in Proc. 29th Annu. Comput. Security Appl. Conf., 2013.
[92] V. Pappas, M. Polychronakis, and A. D. Keromytis, "Transparent ROP exploit mitigation using indirect branch tracing," in Proc. 22nd USENIX Security Symp., 2013.
[93] Y. Cheng, Z. Zhou, Y. Miao, X. Ding, and R. H. Deng, "ROPecker: A generic and practical approach for defending against ROP attacks," in Proc. 21st Annu. Netw. Distrib. Syst. Security Symp., 2014.
[94] M. Zhang and R. Sekar, "Control flow integrity for COTS binaries," in Proc. 22nd USENIX Security Symp., 2013.
[95] I. Fratric, "ROPGuard: Runtime prevention of return-oriented programming attacks," 2012.
[96] C. Zhang et al., "Practical control flow integrity & randomization for binary executables," in Proc. 34th IEEE Symp. Security Privacy, 2013.
[97] E. Göktas, E. Athanasopoulos, H. Bos, and G. Portokalidis, "Out of control: Overcoming control-flow integrity," in Proc. 35th IEEE Symp. Security Privacy, 2014.
[98] L. Davi, D. Lehmann, A.-R. Sadeghi, and F. Monrose, "Stitching the gadgets: On the ineffectiveness of coarse-grained control-flow integrity protection," in Proc. 23rd USENIX Security Symp., 2014.
[99] N. Carlini and D. Wagner, "ROP is still dangerous: Breaking modern defenses," in Proc. 23rd USENIX Security Symp., 2014.
[100] F. Schuster et al., "Evaluating the effectiveness of current anti-ROP defenses," in Research in Attacks, Intrusions and Defenses, Lecture Notes in Computer Science, vol. 8688. Berlin, Germany: Springer-Verlag, 2014, pp. 88–108.
[101] J. Demme et al., "On the feasibility of online malware detection with performance counters," ACM SIGARCH Comput. Architect. News, vol. 41, no. 3, pp. 559–570, 2013.
[102] A. Tang, S. Sethumadhavan, and S. J. Stolfo, "Unsupervised anomaly-based malware detection using hardware features," in Research in Attacks, Intrusions and Defenses. Berlin, Germany: Springer-Verlag, 2014, pp. 109–129.
[103] X. Wang and R. Karri, "NumChecker: Detecting kernel control-flow modifying rootkits by using hardware performance counters," in Proc. 50th Design Autom. Conf., 2013, pp. 1–7.
[104] A. Cui and S. J. Stolfo, "Defending embedded systems with software symbiotes," in Recent Advances in Intrusion Detection. Berlin, Germany: Springer-Verlag, 2011, pp. 358–377.
[105] M. Budiu, Ú. Erlingsson, and M. Abadi, "Architectural support for software-based protection," in Proc. 1st Workshop Architect. Syst. Support Improving Softw. Dependability, 2006, pp. 42–51.
[106] L. Davi, P. Koeberl, and A.-R. Sadeghi, "Hardware-assisted fine-grained control-flow integrity: Towards efficient protection of embedded systems against software exploitation," in Proc. 51st Design Autom. Conf., Special Session: Trusted Mobile Embedded Computing, 2014.
[107] O. Arias et al., "HAFIX: Hardware-assisted flow integrity extension," in Proc. 52nd Design Autom. Conf., 2015.
[108] F. Schuster et al., "Counterfeit object-oriented programming: On the difficulty of preventing code reuse attacks in C++ applications," in Proc. 36th IEEE Symp. Security Privacy, 2015.
[109] M. Tran et al., "On the expressiveness of return-into-libc attacks," in Proc. 14th Int. Conf. Recent Adv. Intrusion Detection, 2011.
[110] F. B. Cohen, "Operating system protection through program evolution," Comput. Security, vol. 12, no. 6, 1993.
[111] S. Forrest, A. Somayaji, and D. Ackley, "Building diverse computer systems," in Proc. 6th Workshop Hot Topics Oper. Syst., 1997.
[112] PaX Team, "PaX Address Space Layout Randomization (ASLR)." [Online]. Available: http://pax.grsecurity.net/docs/aslr.txt
[113] M. Franz, "E unibus pluram: Massive-scale software diversity as a defense mechanism," in Proc. Workshop New Security Paradigms, 2010.
[114] T. Jackson et al., "Compiler-generated software diversity," in Moving Target Defense, Advances in Information Security, vol. 54. Berlin, Germany: Springer-Verlag, 2011, pp. 77–98.
[115] V. Pappas, M. Polychronakis, and A. D. Keromytis, "Smashing the gadgets: Hindering return-oriented programming using in-place code randomization," in Proc. 33rd IEEE Symp. Security Privacy, 2012.
[116] J. D. Hiser, A. Nguyen-Tuong, M. Co, M. Hall, and J. W. Davidson, "ILR: Where'd my gadgets go?" in Proc. 33rd IEEE Symp. Security Privacy, 2012.
[117] R. Wartell, V. Mohan, K. W. Hamlen, and Z. Lin, "Binary stirring: Self-randomizing instruction addresses of legacy x86 binary code," in Proc. 19th ACM Conf. Comput. Commun. Security, 2012.
[118] L. Davi, A. Dmitrienko, S. Nürnberger, and A.-R. Sadeghi, "Gadge me if you can—Secure and efficient ad-hoc instruction-level randomization for x86 and ARM," in Proc. 8th ACM Symp. Inf. Comput. Commun. Security, 2013.
[119] S. Bhatkar, R. Sekar, and D. C. DuVarney, "Efficient techniques for comprehensive protection from memory error exploits," in Proc. 14th USENIX Security Symp., 2005.
[120] C. Kil, J. Jun, C. Bookholt, J. Xu, and P. Ning, "Address space layout permutation (ASLP): Towards fine-grained randomization of commodity software," in Proc. 22nd Annu. Comput. Security Appl. Conf., 2006.
[121] K. Z. Snow et al., "Just-in-time code reuse: On the effectiveness of fine-grained address space layout randomization," in Proc. 34th IEEE Symp. Security Privacy, 2013.
[122] M. Backes and S. Nürnberger, "Oxymoron: Making fine-grained memory randomization practical by allowing code sharing," in Proc. 23rd USENIX Security Symp., 2014.
[123] M. Backes et al., "You can run but you can't read: Preventing disclosure exploits in executable code," in Proc. 21st ACM Conf. Comput. Commun. Security, 2014.
[124] S. Crane et al., "Readactor: Practical code randomization resilient to memory disclosure," in Proc. 36th IEEE Symp. Security Privacy, 2015.
[125] M. Abadi, M. Budiu, Ú. Erlingsson, and J. Ligatti, "A theory of secure control-flow," in Proc. 7th Int. Conf. Formal Methods Softw. Eng., 2005.
[126] R. Hund, C. Willems, and T. Holz, "Practical timing side channel attacks against kernel space ASLR," in Proc. 34th IEEE Symp. Security Privacy, 2013.
[127] J. Seibert, H. Okhravi, and E. Söderström, "Information leaks without memory disclosures: Remote side channel attacks on diversified code," in Proc. 21st ACM SIGSAC Conf. Comput. Commun. Security, 2014.
[128] V. Mohan, P. Larsen, S. Brunthaler, K. W. Hamlen, and M. Franz, "Opaque control-flow integrity," in Proc. 22nd Annu. Netw. Distrib. Syst. Security Symp., 2015.
[129] S. McLaughlin, "Stateful policy enforcement for control system device usage," in Proc. 29th Annu. Comput. Security Appl. Conf., 2013.
[130] S. Mohan et al., "S3A: Secure system simplex architecture for enhanced security of cyber-physical systems," in Proc. 2nd ACM Int. Conf. High Confidence Netw. Syst., 2013.
[131] S. Bak, K. Manamcheri, S. Mitra, and M. Caccamo, "Sandboxing controllers for cyber-physical systems," in Proc. 2nd IEEE/ACM Int. Conf. Cyber-Phys. Syst., 2011.
[132] S. McLaughlin, "Blocking unsafe behaviors in control systems through static and dynamic policy enforcement," in Proc. 52nd Design Autom. Conf., 2015.
[133] B. Zhu and S. Sastry, "SCADA-specific intrusion detection/prevention systems: A

Vol. 104, No. 5, May 2016 | Proceedings of the IEEE 1055


McLaughlin et al.: The Cybersecurity Landscape in Industrial Control Systems


ABOUT THE AUTHORS


Stephen McLaughlin received the Ph.D. degree in computer science and engineering from the Pennsylvania State University, State College, PA, USA, in 2014.
He is a Senior Engineer at Samsung Research America, Mountain View, CA, USA. His work is concerned with the ongoing security and safe operation of control systems that have already suffered a partial compromise. His results in this area have been published in the ACM Conference on Computer and Communications Security (CCS), the Internet Society Network and Distributed System Security Symposium (NDSS), and IEEE SECURITY AND PRIVACY MAGAZINE. As part of the Samsung KNOX security team, he identifies and responds to vulnerabilities in Samsung's smartphones, mobile payments, and other devices.

Charalambos Konstantinou (Student Member, IEEE) received the five-year diploma degree in electrical and computer engineering from the National Technical University of Athens, Athens, Greece. He is currently working toward the Ph.D. degree in electrical engineering at the Polytechnic School of Engineering, New York University, Brooklyn, NY, USA.
His interests include hardware security with a particular focus on embedded systems and smart grid technologies.

Xueyang Wang received the B.S. degree in automation from Zhejiang University, Zhejiang, China, in 2008 and the M.S. and Ph.D. degrees in computer engineering and electrical engineering from the Tandon School of Engineering, New York University, Brooklyn, NY, USA, in 2010 and 2015, respectively.
His research interests include secure computing architectures, virtualization and its application to cybersecurity, hardware support for software security, and hardware security.

Lucas Davi received the Ph.D. degree in computer science from the Technische Universität Darmstadt, Darmstadt, Germany, in 2015.
He is an independent Claude Shannon research group leader of the Secure and Trustworthy Systems group at Technische Universität Darmstadt. He is also a researcher at the Intel Collaborative Research Institute for Secure Computing (ICRI-SC). His research focuses on software exploitation techniques and defenses. In particular, he explores exploitation attacks such as return-oriented programming (ROP) on ARM- and Intel-based systems.

Ahmad-Reza Sadeghi received the Ph.D. degree in computer science, with a focus on privacy-protecting cryptographic systems, from the University of Saarland, Saarbrücken, Germany, in 2003.
He is a full Professor of Computer Science at the Technische Universität Darmstadt, Darmstadt, Germany. He is the head of the System Security Lab at the Center for Advanced Security Research Darmstadt (CASED), and the Director of the Intel Collaborative Research Institute for Secure Computing (ICRI-SC) at TU Darmstadt. Prior to academia, he worked in research and development at telecommunications enterprises, among others Ericsson Telecommunications. His research includes systems security, mobile and embedded systems security, cyberphysical systems, trusted and secure computing, applied cryptography, and privacy-enhanced systems.
Dr. Sadeghi has served as general/program chair as well as program committee member of many established security conferences. He also served on the editorial board of the ACM Transactions on Information and System Security (TISSEC), and as guest editor of the IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN (Special Issue on Hardware Security and Trust). Currently, he is the Editor-in-Chief of IEEE SECURITY AND PRIVACY MAGAZINE and serves on the editorial board of ACM Books. He has been awarded the renowned German Karl Heinz Beckurts prize for his research on trusted and trustworthy computing technology and its transfer to industrial practice. The award honors excellent scientific achievements with high impact on industrial innovations in Germany.


Michail Maniatakos received the B.Sc. degree in computer science and the M.Sc. degree in embedded systems from the University of Piraeus, Piraeus, Greece, in 2006 and 2007, and the M.Sc., M.Phil., and Ph.D. degrees in electrical engineering from Yale University, New Haven, CT, USA, in 2009, 2010, and 2012, respectively.
He is an Assistant Professor of Electrical and Computer Engineering at New York University (NYU) Abu Dhabi, Abu Dhabi, UAE, and a Research Assistant Professor at the NYU Polytechnic School of Engineering, Brooklyn, NY, USA. He is the Director of the MoMA Laboratory (nyuad.nyu.edu/momalab), NYU Abu Dhabi. His research interests, funded by industrial partners and the U.S. Government, include robust microprocessor architectures, privacy-preserving computation, as well as industrial control systems security. He has authored several publications in IEEE transactions and conferences, and holds patents on privacy-preserving data processing.
Dr. Maniatakos is currently the Co-Chair of the Security track at the IEEE International Conference on Computer Design (ICCD) and the IEEE International Conference on Very Large Scale Integration (VLSI-SoC). He also serves on the technical program committees of various conferences, including the IEEE/ACM Design Automation Conference (DAC), the International Conference on Computer-Aided Design (ICCAD), ITC, and the International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES). He has organized several workshops on security, and he is currently the faculty lead for the Embedded Security Challenge held annually at Cyber Security Awareness Week (CSAW), Brooklyn, NY, USA.

Ramesh Karri received the Ph.D. degree in computer science and engineering from the University of California at San Diego, La Jolla, CA, USA, in 1993.
He is a Professor of Electrical and Computer Engineering at the Tandon School of Engineering, New York University, Brooklyn, NY, USA. His research and education activities span hardware cybersecurity: trustworthy ICs, processors, and cyberphysical systems; security-aware computer-aided design, test, verification, validation, and reliability; nano meets security; metrics; benchmarks; and hardware cybersecurity competitions. He has over 200 journal and conference publications, including tutorials on trustworthy hardware in IEEE COMPUTER (two) and the PROCEEDINGS OF THE IEEE (five).
Dr. Karri was the recipient of the Humboldt Fellowship and the National Science Foundation CAREER Award. He is the area director for cyber security of the NY State Center for Advanced Telecommunications Technologies at NYU-Poly. He cofounded (2015–present) the Center for Cyber Security (CCS) (http://crissp.poly.edu/), co-founded the Trust-Hub (http://trust-hub.org/), and founded and organizes the Embedded Security Challenge, the annual red team/blue team event at NYU (http://www.poly.edu/csaw2014/csaw-embedded). His group's work on hardware cybersecurity was nominated for best paper awards (ICCD 2015 and DFTS 2015) and received awards at conferences (ITC 2014, CCS 2013, DFTS 2013, and VLSI Design 2012) and at competitions (ACM Student Research Competition at DAC 2012, ICCAD 2013, and DAC 2014, ACM Grand Finals 2013, Kaspersky Challenge, and Embedded Security Challenge). He co-founded the IEEE/ACM Symposium on Nanoscale Architectures (NANOARCH). He has served as program/general chair of conferences including the IEEE International Conference on Computer Design (ICCD), the IEEE Symposium on Hardware Oriented Security and Trust (HOST), the IEEE Symposium on Defect and Fault Tolerant Nano VLSI Systems (DFTS), NANOARCH, RFIDSEC 2015, and WISEC 2015. He serves on several program committees (DAC, ICCAD, HOST, ITC, VTS, ETS, ICCD, DTIS, WIFS). He is an Associate Editor of the IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (2010–2014), IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN (2014–present), the ACM Journal of Emerging Computing Technologies (2007–present), ACM Transactions on Design Automation of Electronic Systems (2014–present), IEEE ACCESS (2015–present), IEEE TRANSACTIONS ON EMERGING TECHNOLOGIES IN COMPUTING (2015–present), IEEE DESIGN & TEST (2015–present), and IEEE EMBEDDED SYSTEMS LETTERS (2016–present). He is an IEEE Computer Society Distinguished Visitor (2013–2015). He is on the Executive Committee of the IEEE/ACM Design Automation Conference, leading its cybersecurity initiative (2014–present). He has delivered invited keynotes and tutorials on hardware security and trust (ESRF, DAC, DATE, VTS, ITC, ICCD, NATW, LATW).

