A New ShiftColumn Transformation: An Enhancement of Rijndael Key Scheduling
Yasin
Department of Computer Science, Faculty of Computer Science and Information Technology,
Universiti Putra Malaysia
43400 UPM Serdang, Selangor, Malaysia
[email protected]
ABSTRACT
In this paper, we propose a new key scheduling algorithm which is an enhancement of the Rijndael key schedule. The proposed algorithm was developed to address weaknesses in the Rijndael key schedule. The key schedule function in the Rijndael block cipher did not receive the same amount of attention during the design phase as the cipher components. Based on our research, several properties of the key schedule seem to violate the design criteria published by NIST, and this has led to many types of attack on the Rijndael block cipher. We therefore propose a transformation called ShiftColumn, which operates by shifting bits, after which the result is shifted with different offsets. This transformation is added as the last function, after the RCon function. Our new approach improves the security of the original Rijndael key schedule by enhancing the bit confusion and diffusion of the subkeys, the output produced by the key schedule transformation. The subkeys produced by the proposed approach are shown to achieve better results on both properties than the subkeys produced by the Rijndael key schedule transformation.
Keywords: Rijndael; Key Schedule; Proposed Approach of Key Schedule; Cryptography; Security
1 INTRODUCTION
Cryptography is the science and art of transforming messages to make them secure and immune to attacks [1]. There are three mechanisms in cryptography: symmetric key, asymmetric key, and hashing. The symmetric-key mechanism uses a single key for both encryption and decryption, while the asymmetric-key mechanism uses two different keys: a public key to encrypt and a private key to decrypt. Hashing produces a message digest of fixed length. These cryptographic mechanisms are used in many applications, such as bank cards, computer passwords, and electronic commerce, helping to secure the use of technology depending on the type of mechanism used. The symmetric-key mechanism is also known by another term, single-key encryption. The main drawback of this mechanism is that the two parties must share the single key. There are two different schemes in the symmetric-key mechanism: block ciphers and stream ciphers. In 1977, the first publicly available cryptographic algorithm, adopted by the National Bureau of Standards (now the National Institute of Standards and Technology (NIST)) as Federal Information Processing Standard 46 (FIPS PUB 46), was the Data Encryption Standard (DES) [2]. DES is a symmetric block cipher system that was widely used for more than two decades as the encryption scheme of US federal agencies and the private sector.
In 1997, NIST initiated a process to select a symmetric-key encryption algorithm, also implementing the block cipher scheme, to replace DES as the Advanced Encryption Standard (AES). NIST announced that fifteen of the twenty-one algorithms received had been selected as first-round candidates at the First AES Candidate Conference in August 1998. A year later, at the Second AES Candidate Conference, five of the fifteen were selected as finalist candidates: MARS, RC6, Rijndael, Serpent, and Twofish. In October 2000, at the Third AES Candidate Conference, Rijndael was announced by
NIST as the Advanced Encryption Standard. AES
was published as FIPS 197 in December 2001 [3].
Advances in information technology and computer security have made everything, including ciphers, vulnerable and open to attack. Many efforts have been made to redesign and reconstruct the AES block cipher with one objective: to improve the block cipher [4]. The rapid growth of computing technology and resources may make the time needed to break the algorithm shorter than NIST's estimate [5].
Cryptanalysis offers newer ways of breaking a cipher than exhaustive key search, the basic technique of trying keys until the correct one is identified. Computer speed improves day by day, and it is possible that in the near future the security of AES can be broken [6].
From the analysis that has been made, the best public cryptanalysis of the AES (Rijndael) block cipher is the related-key attack [1]. The related-key attack was first introduced by Eli Biham [7]; it examines the differences between keys.
Studies show that, among the AES candidates, the Rijndael key schedule falls into a category in which knowledge of a round subkey yields bits of other round subkeys or the master key after some simple arithmetic operations or function inversions [8]. The Rijndael key schedule appears to be a more ad hoc design than the cipher itself: it has a much slower diffusion structure than the cipher and contains relatively few non-linear elements [7]. Consequently, the Rijndael block cipher has been attacked by exploiting the weaknesses found in the key schedule structure. The latest attacks on the Rijndael key schedule were improved by [9], whose impossible differential attack reaches up to 7 rounds for the AES 128-bit key and 8 rounds for the 256-bit key, compared to the previous results of [10]. [9] also successfully improved the time complexity of the differential attack on 7-round AES with a 192-bit key by [11]. In 2009, two related-key attacks exploiting the AES key schedule were presented on full-round AES with 192-bit and 256-bit keys [12].
Nevertheless, to enhance the security of the Rijndael key schedule, there are two significant properties to focus on: confusion and diffusion. These are the two properties of a secure cipher identified by Claude Shannon [13]. The AES cipher algorithm manages to attain both of these properties; the key schedule, however, is somewhat less rigorous in obtaining them [14]. An important theoretical foundation for bit confusion and bit diffusion is the frequency test and the Strict Avalanche Criterion (SAC) test, respectively [15]. The SAC obviates the need for a widely used approximation, allowing a more accurate evaluation of the bit diffusion of the key schedule [15], while the frequency test evaluates the bit confusion property. This research aims to obtain both properties while designing a new approach for the Rijndael key schedule, in order to enhance the security of the cipher.
2 PROCESS OF KEY SCHEDULING
The Rijndael (new AES) block cipher has two parts of transformations: the cipher (round) and the key schedule. The key schedule is an iterative component in a block cipher. A goal of a strong key schedule is to make the cipher resistant to various kinds of attacks. The key schedule has been studied for many years, but many mathematical properties and weaknesses of this design remain insufficiently explored, so the block cipher cannot yet be considered fully secure [15][16].
The key schedule is a transformation which uses the master key (secret key) as the input value of an algorithm that produces the round keys (subkeys). The master key input can be a 128-bit, 192-bit, or 256-bit key; in this research a 128-bit key is used as input, 128 bits being stated as the minimum input requirement for the block cipher [17]. The Rijndael key schedule involves three different byte-oriented transformations in each round: RotWord, SubBytes, and RCon.
2.1 Rijndael Key Schedule Process
The subkeys are derived from the cipher key (master key) using the key schedule algorithm. RotWord performs a one-byte circular left shift on an input word (e.g., [a, b, c, d]) taken from the rightmost column of the master key (Fig. 1), producing an output word (e.g., [b, c, d, a]); this is illustrated in Fig. 2. SubByte returns a 4-byte word in which each byte is the result of applying the Rijndael S-box to the byte at the corresponding position in the input word, which is the result of the RotWord function. Continuing the example, Fig. 3 shows that the SubByte process takes the output of RotWord (e.g., [b, c, d, a]) and produces a new output (e.g., [k, m, p, t]). RCon is a 4-byte value in which the rightmost three bytes are always zero. The input word (from the leftmost column of the master key) is XORed with the result of SubByte and also with the RCon input. Fig. 4 shows the RCon process, where the leftmost column of the master key (e.g., [e, f, g, h]) is XORed with the output of SubByte (e.g., [k, m, p, t]) and the RCon input (e.g., [s, 0, 0, 0]), producing the output (e.g., [v, x, y, z]).
This output ([v, x, y, z]) becomes the first word of the round. The process is repeated until the output reaches the same size as the 128-bit master key. Fig. 5 summarizes the Rijndael key schedule transformation, including all the processes and example outputs.
Figure 1. Example of master key input
Figure 2. Illustration of RotWord process.
Figure 3. Illustration of SubByte process.
Figure 4. Illustration of RCon process.
Figure 5. Summary of Rijndael key schedule process.
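For concreteness, the standard AES-128 key expansion described above can be sketched in Python as follows; the S-box is generated programmatically rather than hard-coded to keep the listing short, and the code is a minimal illustration under those choices, not the paper's implementation.

```python
def _build_sbox():
    """Generate the AES S-box (avoids hard-coding the 256-entry table)."""
    def rotl8(b, n):
        return ((b << n) | (b >> (8 - n))) & 0xFF
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3 in GF(2^8)
        q ^= q << 1; q ^= q << 2; q ^= q << 4; q &= 0xFF       # q /= 3 in GF(2^8)
        if q & 0x80:
            q ^= 0x09
        # affine transformation on the multiplicative inverse
        sbox[p] = q ^ rotl8(q, 1) ^ rotl8(q, 2) ^ rotl8(q, 3) ^ rotl8(q, 4) ^ 0x63
        if p == 1:
            break
    sbox[0] = 0x63  # 0 has no multiplicative inverse; defined as 0x63
    return sbox

SBOX = _build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key_128(key):
    """AES-128 key schedule: 16-byte key -> 44 four-byte words (11 round keys)."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        temp = list(w[i - 1])
        if i % 4 == 0:
            temp = temp[1:] + temp[:1]        # RotWord: [a,b,c,d] -> [b,c,d,a]
            temp = [SBOX[b] for b in temp]    # SubWord via the S-box
            temp[0] ^= RCON[i // 4 - 1]       # RCon: only the leftmost byte
        w.append([a ^ b for a, b in zip(w[i - 4], temp)])
    return w
```

Against the FIPS-197 test vector, key 2b7e1516 28aed2a6 abf71588 09cf4f3c expands so that the fifth word is a0fafe17, which this sketch reproduces.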
2.2 The Proposed Approach Key Schedule
Process
In our proposed approach, one new transformation, ShiftColumn, is added to the key expansion algorithm; the new algorithm is the enhanced version of the Rijndael key schedule algorithm. ShiftColumn operates by shifting columns with different offsets. It is one extra transformation added to the algorithm, and the process is adapted from ShiftRow in the cipher transformation of the Rijndael block cipher; however, it is more complex than the ShiftRow process, as the proposed approach combines column shifting, bitwise operations, and cyclic shifts.
The proposed approach involves left-shifting the bit values in the column; each value is then XORed within the same column but with a different row. Next, the bit values in the column are shifted to the right. Lastly, the whole (selected) column is shifted with a different offset. For example, the result from RCon (e.g., [v, x, y, z]) is input into the ShiftColumn process, which produces an output (e.g., [f, j, r, u]); this also becomes the first word in the round for the proposed key schedule, and all the functions (RotWord, SubByte, RCon, ShiftColumn) are repeated until the key schedule transformation is finished. The full key schedule transformation (128-bit) produces 10 subkeys, each containing 4 words. The output subkeys are used in the evaluation of the bit confusion and bit diffusion properties. The ShiftColumn transformation is illustrated in Fig. 6. The process of the proposed approach, which includes all the functions (RotWord, SubByte, RCon, and ShiftColumn), is summarized in Fig. 7.
Figure 6. Illustration of the proposed approach
transformation (ShiftColumn).
Figure 7. Key schedule process of the proposed approach.
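The paper defines ShiftColumn only at the level of detail above, with the exact values carried by Fig. 6. Purely as an illustrative sketch, the following Python assumes a 4x4-byte round-key state, one-bit rotations for the two bitwise steps, an XOR of each byte with the byte one row below it in the same column, and a cyclic shift of column c by offset c; all of these parameters are hypothetical placeholders, not the authors' published values.

```python
def rotl8(b, n):
    """Rotate an 8-bit value left by n bits."""
    return ((b << n) | (b >> (8 - n))) & 0xFF

def shift_column(state):
    """Hypothetical ShiftColumn sketch on a 4x4 byte state (state[row][col]).
    Shift amounts, XOR pairing, and column offsets are illustrative guesses."""
    # 1) left-shift (rotate) the bits of every byte
    s = [[rotl8(b, 1) for b in row] for row in state]
    # 2) XOR each byte with another row of the same column (here: next row down)
    s = [[s[r][c] ^ s[(r + 1) % 4][c] for c in range(4)] for r in range(4)]
    # 3) right-shift (rotate) the bits back by one position
    s = [[rotl8(b, 7) for b in row] for row in s]
    # 4) cyclically shift column c downward by offset c
    return [[s[(r - c) % 4][c] for c in range(4)] for r in range(4)]
```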
3 TEST
Confusion and diffusion are two properties of the operation of a secure cipher [13]. The frequency test was performed to measure the bit mixing property, a basic measure which is fundamental in achieving the confusion property, and the SAC test was performed as a measure of the bit diffusion property, which checks whether one bit change in the input changes, on average, half the bits in the output [13]. Both tests are measured by the probability value (p-value).
The frequency test is performed using the NIST Statistical Test (from the NIST test package), which focuses on the proportion of zeroes and ones, with the purpose of determining whether the numbers of zeroes and ones in the sequence are approximately the same, as would be expected for a truly random sequence [15]. The frequency test result is determined by the p-value: if the computed p-value is below 0.01, it can be concluded that the sequence is non-random; otherwise the sequence is concluded to be random, satisfying the 0.01 significance level [18].
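For reference, the core of the NIST frequency (monobit) test [18] is small enough to sketch directly; this is a minimal illustration of the decision rule above, not a substitute for the full NIST test package.

```python
import math

def frequency_monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: returns the p-value for a
    sequence of 0/1 bits; the sequence is judged non-random if p < 0.01."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)           # normalized partial sum
    return math.erfc(s_obs / math.sqrt(2))  # complementary error function
```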
The SAC test is carried out using the SPSS software through the one-sample Kolmogorov-Smirnov test (1-sample K-S test). The decision rule for this research is that if the p-value is greater than 0.05, we accept the null hypothesis; otherwise we reject the null hypothesis and accept the alternate hypothesis. The null hypothesis indicates that bit diffusion is satisfied at the 0.05 significance level.
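The raw avalanche data fed into such a test can be gathered with a short routine like the following; it is a generic illustration of the SAC measurement (flip one input bit, count output bit changes), not the paper's exact SPSS procedure, and the function f stands for any 128-bit-to-128-bit transformation such as one key schedule round.

```python
def sac_fractions(f, key, n_bits=128):
    """For each input bit, flip it and record the fraction of output bits
    of f that change; ideal bit diffusion gives fractions near 0.5.
    f maps an n_bits-wide integer to an n_bits-wide integer."""
    base = f(key)
    fractions = []
    for i in range(n_bits):
        flipped_out = f(key ^ (1 << i))               # flip input bit i
        changed = bin(base ^ flipped_out).count("1")  # Hamming distance
        fractions.append(changed / n_bits)
    return fractions
```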
4 DISCUSSION
The experiment was conducted using 20 subkeys as input for the two compulsory tests for the confusion and diffusion properties: the SAC test (one-sample Kolmogorov-Smirnov, Poisson distribution) and the frequency test. Critical values were assigned for both tests. These subkeys were obtained from the output produced by the Rijndael key schedule transformation and from the output of the proposed approach.
SAC test: The results show that the proposed key schedule algorithm obtained a better result than the Rijndael key schedule, although 3 subkeys from each approach (Rijndael and the proposed approach) failed the test. The graph plotted in Fig. 8 shows that more than half of the subkeys from the proposed approach yield higher p-values, which means a stronger bit diffusion property, contributing to a more secure cipher.
Frequency test: Fig. 9 shows the p-values from the frequency test, where two of the subkeys failed the test for both algorithms. The results show that the proposed approach achieves a higher bit confusion property, which contributes to a more secure key schedule algorithm compared to Rijndael.
Figure 8. The result of SAC test.
Figure 9. The result of frequency test.
5 CONCLUSION
This research focused on achieving bit confusion and diffusion in the key schedule algorithm of the proposed approach using a 128-bit key size. The analysis produced in this research is used to combat weaknesses in the Rijndael key schedule algorithm. Fig. 8 and Fig. 9 show the comparison of the frequency test and SAC test results for the Rijndael key schedule and the proposed approach.
From these results, this research has achieved its objective: after analyzing both key schedule algorithms (Rijndael and the proposed approach), the proposed approach shows better results in both tests, achieving better confusion and diffusion properties.
As a future enhancement, cryptanalytic attacks can be performed on the proposed approach as part of its evaluation. The results of such cryptanalysis would help to evaluate the approach's resistance to subversion or evasion.
6 REFERENCES
[1] Settia, N.: Cryptanalysis of Modern Cryptographic
Algorithms. In International Journal of Computer Science
and Technology, 1(2), pp. 166-169 (2010).
[2] Wright M. A.: The evolution of the Advanced Encryption
Standard. Network Security, vol. 1999, pp. 11-14 (1999).
[3] Jamil, T.: The Rijndael Algorithm. In Potentials, IEEE, 23(2), pp. 36-38 (2004).
[4] Juremi, J., Mahmod, R., and Sulaiman, S.: A Proposal for
Improving AES S-Box with Rotation and Key-
Dependent. In Cyber Security, Cyber Warfare and Digital
Forensic (CyberSec), 2012 International Conference , pp.
38-42 (2012).
[5] Ali, S.A.:Improving the Randomness of Output Sequence
for the Advanced Encryption Standard Cryptographic
Algorithm. Universiti Putra Malaysia (2005).
[6] Jing, M-H., Chen, J-H., and Chen, Z-H.: Diversified
Mixcolumn Transformation of AES. In Information,
Communications & Signal Processing, 2007 on 6th
International Conference, pp. 1-3 (2007).
[7] Ferguson N., et al.: Improved cryptanalysis of Rijndael.
In Fast Software Encryption. LNCS, vol. 1978, pp. 213-
230. Springer Berlin/Heidelberg (2001).
[8] Carter, G., Dawson, E., and Nielsen, L.: Key Schedule Classification of the AES Candidates. In Proceedings of the Second AES Conference, Rome, Italy, pp. 1-14 (1999).
[9] Wentao, Z., Wu, W., and Dengguo, F.: New Results on
Impossible Differential Cryptanalysis of Reduced AES.
In Information Security and Cryptology - ICISC 2007.
LNCS, vol. 4817, pp. 239-250. Springer Berlin/
Heidelberg (2007).
[10] Hee, C.J., Kim, M., Lee, J.Y., and Kang, S.W.: Improved
Impossible Differential Cryptanalysis of Rijndael and
Crypton. In Information Security and Cryptology - ICISC 2001. LNCS, vol. 2288, pp. 39-49. Springer Berlin/Heidelberg (2002).
[11] Phan, R. C.-W.: Impossible Differential Cryptanalysis of
7-Round Advanced Encryption Standard (AES). In
Information Processing Letters, vol. 91, pp. 33-38 (2004).
[12] Biryukov, A., and Khovratovich, D.: Related-Key
Cryptanalysis of the Full AES-192 and AES-256. In
Advances in Cryptology - ASIACRYPT 2009. LNCS,
vol. 5912, pp. 1-18. Springer Berlin/Heidelberg (2009).
[13] Shannon, C.: Communication Theory of Secrecy Systems.
In Bell System Technical Journal, vol. 28, pp. 656-715,
(1949).
[14] May, L., Henricksen, M., Millan, W., Carter, G., and
Dawson, E.: Strengthening the Key Schedule of the AES.
In Information Security and Privacy. LNCS, vol. 2384,
pp. 226-240. Springer Berlin/Heidelberg (2002).
[15] Muda, Z., Mahmod, R., and Sulong, M.R.: Key
Transformation Approach for Rijndael Security. In
Information Technology Journal, 9, pp. 290-297 (2010).
[16] Daemen, J., and Rijmen, V.: The First 10 Years of
Advanced Encryption. In IEEE Security and Privacy,
vol. 8, pp. 72-74 (2010).
[17] Nechvatal, J., Barker, E., Bassham, L., Burr, W.,
Dworkin, M., Foti, J., and Roback, E.: Report on the
Development of the Advanced Encryption Standard
(AES). Technical Report, NIST (2000).
[18] Rukhin, A., et al.: A Statistical Test Suite for Random and
Pseudorandom Number Generators for Cryptographic
Applications. NIST Special Publication 800-22 (2001).
Sophistication Techniques of Fourth Generations in Neoteric Mobile LTE and LTE-Advanced

A. Z. Yonis 1 and M. F. L. Abdullah 2

1 Dept. of Communication Engineering, College of Electronic Engineering, University of Mosul, Iraq
2 Dept. of Communication Engineering, Faculty of Electrical and Electronic Engineering, University Tun Hussein Onn Malaysia
1 [email protected], 2 [email protected]

ABSTRACT
Long Term Evolution Advanced (LTE-Advanced) is a preliminary mobile communication standard formally submitted as a candidate for 4G systems to the ITU-R. LTE-A is being standardized by the 3rd Generation Partnership Project (3GPP) as a major enhancement of the 3GPP Long Term Evolution (LTE Release 8) standard, which proved insufficient to satisfy market demand. The 3GPP group has been working on different aspects to improve LTE performance within the framework provided by LTE-Advanced, which includes higher-order MIMO, carrier aggregation (multiple component carriers), peak data rate, and mobility. This paper presents a study on the LTE evolution toward LTE-Advanced in terms of the LTE enabling technologies (Orthogonal Frequency Division Multiplexing (OFDM) and Multiple-Input Multiple-Output (MIMO)), and also focuses on the LTE-Advanced technologies: MIMO enhancements for LTE-Advanced and Coordinated Multi-Point transmission (CoMP).

KEYWORDS
LTE; LTE-Advanced; MIMO; OFDMA; SC-FDMA; CoMP.
1 INTRODUCTION
The specifications for LTE are produced by the Third Generation Partnership Project (3GPP) [1], in the same way as the specifications for UMTS and GSM. They are organized into releases, each of which contains a stable and clearly defined set of features. The use of releases allows equipment manufacturers to build devices using some or all of the features of earlier releases, while 3GPP continues to add new features to the system in a later release. Within each release, the specifications progress through a number of different versions.
New functionality can be added to
successive versions until the date when
the release is frozen, after which the only
changes involve refinement of the
technical details, corrections and
clarifications [2].
Table 1 lists the releases that 3GPP has used since the introduction of UMTS, together with the most important features of each release. Note that the numbering scheme was changed after Release 99, so that later releases are numbered from 4 through to 11 (LTE-A).
Table 1
3GPP specification release for LTE and LTE-A
Section 2 of this paper discusses the system requirements for LTE and LTE-Advanced, while Section 3 describes the Long Term Evolution standard. The key enabling technologies and features of LTE are described in Section 4, with the uplink and downlink system models and MIMO. In Section 5, the Long Term Evolution-Advanced standard is reviewed, together with the characteristics of carrier aggregation, peak data rate, mobility, and OFDMA. In Section 6, the LTE-Advanced technologies are considered, including MIMO enhancements for LTE-Advanced, with both uplink and downlink MIMO transmission, and coordinated multi-point transmission. Section 7 contains the summary and discussion of the main points of this paper, useful for the reader to understand the current development of Release 8 and Release 10 performance in wireless communications. Finally, Section 8 concludes with some general observations and recommendations.
2 SYSTEM REQUIREMENTS FOR
LTE AND LTE-ADVANCED
Table 2 gives the system requirements for Rel. 8 LTE. Rel. 8 LTE supports scalable transmission bandwidths including 1.4, 3, 5, 10, 15, and 20 MHz. One of its most distinctive features is support for only the packet-switched (PS) mode; hence all traffic flows, including real-time services with rigid delay requirements such as voice, are provided in the PS domain in a unified manner. The target peak data rate is 100 Mbps in the downlink and 50 Mbps in the uplink. The target values for the average and cell-edge user throughput and spectrum efficiency are specified as relative improvements over those of High-Speed Downlink Packet Access (HSDPA) and High-Speed Uplink Packet Access (HSUPA) in the downlink and uplink, respectively. Here, the average cell spectral efficiency corresponds to capacity, and the cell-edge user throughput is defined as the 5% value in the cumulative distribution function (CDF) of the user throughput. Both are very important requirements from the viewpoint of practical system performance in cellular environments. In particular, improvement in the cell-edge user throughput is requested to mitigate the unfairness in achievable performance between the vicinity of the cell site and the cell edge. After extensive discussions in the 3GPP meetings, it was verified that the requirements and targets for Rel. 8 LTE were achieved by the specified radio interface using the relevant techniques.
Table 2
Major System Requirement for Rel.8 LTE [3]
The requirements for LTE-Advanced are
specified in [3]. The following general
requirements for LTE-Advanced were
agreed upon. First, LTE-Advanced will
be an evolution of Rel. 8 LTE. Hence,
distinctive performance gains from Rel. 8
LTE are requested. Moreover, LTE-
Advanced will satisfy all the relevant
requirements for Rel. 8 LTE. Second,
full backward compatibility with Rel. 8
LTE is requested in LTE-Advanced.
Thus, a set of user equipment (UE) for
LTE-Advanced must be able to access
Rel. 8 LTE networks, and LTE-
Advanced networks must be able to
support Rel. 8 LTE UEs. Third, LTE-
Advanced shall meet or exceed the IMT-
Advanced requirements within the ITU-
R time plan.
In Table 3, the requirements and target values for LTE-Advanced and IMT-Advanced are listed alongside those achieved in Rel. 8 LTE, highlighting the major issues which depend on MIMO channel transmission. With respect to the peak data rate, the ITU-R Circular Letter (CL) refers to Recommendation ITU-R M.1645, which specifies that the target peak data rate for IMT-Advanced should be higher than 1 Gbps in nomadic environments. Based on the description in the CL, the target peak data rate for the downlink was set to 1 Gbps for LTE-Advanced.
Meanwhile, the target peak data rate for the uplink was set to 500 Mbps. The target values for the peak frequency efficiency are 2- and 4-fold those achieved in Rel. 8 LTE, i.e., 30 and 15 bps/Hz in the downlink and uplink, respectively. It is noted, however, that this requirement is not mandatory and is to be achieved by a combination of base stations (BSs) and high-class UEs with a larger number of antennas. In LTE-Advanced, 1.4- to 1.6-fold improvements in capacity and cell-edge user throughput over Rel. 8 LTE are expected for each antenna configuration.
Table 3
System Performance Requirements for LTE-A
compared to those achieved in Rel.8 LTE [3]
3 LONG TERM EVOLUTION
STANDARD
Long Term Evolution (LTE) has become a widely known brand name for the 3GPP-defined successor technology of third-generation mobile systems. LTE stands for Long Term Evolution and originally denoted a work item in 3GPP aimed at developing a successor to the third-generation radio technology. Gradually it came to denote first the new radio technology itself, then also encompassed the radio access network (E-UTRAN), and is now also used for the entire system succeeding third-generation mobile systems (SAE, EPS), including the evolved core network (EPC), as a quick search for the term LTE on the 3GPP home page [3GPP] will reveal. The 3GPP LTE (also called Release 8) specification defines the basic functionality of a new, high-performance air interface providing high user data rates in combination with low latency, based on MIMO, OFDMA, and an optimized system architecture evolution (SAE) as the main enablers [4].
4 KEY ENABLING TECHNOLOGIES AND FEATURES OF LTE
This paper provides technical information about the main LTE enabling technologies. The areas covered range from basic concepts to research-grade material, including future directions. The main LTE enabling technologies are:
4.1 LTE Downlink System Model
One of the key differences between 3G systems and LTE is the use of orthogonal frequency division multiplexing (OFDM), as shown in Figure 1. OFDM offers several advantages. First of all, by using a multi-carrier transmission technique, the symbol time can be made substantially longer than the channel delay spread, which significantly reduces or even removes inter-symbol interference (ISI); in other words, OFDM provides high robustness against frequency-selective fading. Secondly, due to its specific structure, OFDM allows for low-complexity implementation by means of Fast Fourier Transform (FFT) processing. Thirdly, access to the frequency domain (OFDMA) gives a high degree of freedom to the scheduler. Finally, it offers spectrum flexibility, which facilitates a smooth evolution from already existing radio access technologies to LTE. In the frequency division duplexing (FDD) mode of LTE, each OFDM symbol is transmitted over subcarriers spaced 15 or 7.5 kHz apart.
Figure 1 OFDM baseband system [5]
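As a minimal sketch of the OFDM transmitter step just described, the core IFFT-plus-cyclic-prefix operation might look like the following in Python/NumPy; the FFT size and cyclic prefix length are illustrative values, not LTE-exact parameters.

```python
import numpy as np

def ofdm_modulate(symbols, n_fft=64, cp_len=16):
    """One OFDM symbol: map n_fft frequency-domain symbols onto the
    subcarriers, convert to the time domain with an IFFT, and prepend a
    cyclic prefix longer than the expected channel delay spread."""
    assert len(symbols) == n_fft
    time_domain = np.fft.ifft(symbols, n_fft)
    return np.concatenate([time_domain[-cp_len:], time_domain])
```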
4.2 LTE Uplink System Model
An SC-FDMA uplink transmitter is shown in Figure 2; SC-FDMA is used rather than OFDM. SC-FDMA is also known as DFT-spread OFDM modulation. Basically, SC-FDMA is identical to OFDM except that an initial FFT (a DFT spreading stage) is applied before the OFDM modulation. The objective of this modification is to reduce the peak-to-average power ratio, thus decreasing the power consumption in the user terminals [6].
Figure 2 SC-FDMA uplink transmitter for user 1, where user 1 is allocated subcarriers 1, 2, ..., M of L total subcarriers
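Continuing the sketch above (and reusing the illustrative ofdm_modulate from the previous listing), the DFT-spreading step can be shown as follows; the localized mapping of the M spread symbols onto the first M subcarriers is an assumption for illustration.

```python
def scfdma_modulate(user_symbols, n_fft=64, cp_len=16):
    """DFT-spread OFDM: spread M data symbols with an M-point DFT, map the
    result onto M of the n_fft subcarriers (localized mapping here), then
    apply the normal OFDM IFFT + cyclic prefix."""
    m = len(user_symbols)
    spread = np.fft.fft(user_symbols, m)   # initial DFT "spreading"
    grid = np.zeros(n_fft, dtype=complex)
    grid[:m] = spread                      # user occupies subcarriers 0..M-1
    return ofdm_modulate(grid, n_fft, cp_len)
```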
4.3 Multiple-Input Multiple-Output
(MIMO)
Transmission diversity allows us to improve the link performance when the channel quality cannot be tracked at the transmitter, which is the case for high-mobility UEs. Transmission diversity is also useful for delay-sensitive services that cannot afford the delays introduced by channel-sensitive scheduling. Transmission diversity, however, does not help in improving the peak data rates, as a single data stream is always transmitted. Multiple transmit antennas at the eNB, in combination with multiple receive antennas at the UE, can be used to achieve higher peak data rates by enabling multiple data stream transmission between the eNB and the UE using MIMO spatial multiplexing. Therefore, in addition to larger bandwidths and high-order modulations, MIMO spatial multiplexing is used in the LTE system to achieve the peak data rate targets. MIMO spatial multiplexing also improves cell capacity and throughput, as UEs with good channel conditions can benefit from multiple stream transmission. Similarly, weak UEs in the system benefit from the beamforming gains provided by precoding the signals transmitted from multiple transmit antennas [7]. Multiple-antenna (MIMO) transmission is one of the most important means of achieving the high data rate objectives of LTE. The LTE downlink supports one, two, or four transmit antennas at the eNB and one, two, or four receive antennas at the UE. Multiple antennas can be used in different ways: to obtain additional transmit/receive diversity, or for spatial multiplexing, increasing the data rate by creating several parallel channels if conditions allow. Nevertheless, in the LTE uplink, although one, two, or four receive antennas are allowed at the eNB, only one transmit antenna is allowed at the UE; therefore, multiple antennas can only be used to obtain receive diversity.
5 LONG TERM EVOLUTION-ADVANCED STANDARD
The next evolution of LTE, called LTE-Advanced, was triggered by the ITU-R Circular Letter requesting candidate submissions for IMT-Advanced radio interface technologies. The schedule defined by ITU-R called for a "Complete Technology" submission in June 2009 and a "Final" submission in October 2009. Development of radio interface specification recommendations was targeted for completion in February 2011 [8].
Release 10 enhances the capabilities of LTE to make the technology compliant with the International Telecommunication Union's requirements for IMT-Advanced. The resulting system is known as LTE-Advanced. This paper covers the new features of LTE-Advanced, focusing on carrier aggregation, peak data rate, mobility, and orthogonal frequency division multiple access (OFDMA).
For the most part, the Release 10
enhancements are designed to be
backwards compatible with Release 8.
Thus a Release 10 base station can
control a Release 8 mobile, normally
with no loss of performance, while a
Release 8 base station can control a
Release 10 mobile. In the few cases
where there is a loss of performance, the
degradation has been kept to a minimum
[1].
LTE-Advanced enhances these features, as described in the following subsections.
5.1 Carrier Aggregation
The possibility of carrier aggregation was introduced in LTE Release 10. With carrier aggregation, multiple LTE carriers, each with a bandwidth of up to 20 MHz, can be transmitted in parallel to/from the same terminal, thereby allowing for an overall wider bandwidth and correspondingly higher per-link data rates. In the context of carrier aggregation, each carrier is referred to as a component carrier (CC), since, from an RF point of view, the entire set of aggregated carriers can be seen as a single RF carrier. Up to five component carriers, possibly of different bandwidths of up to 20 MHz each, can be aggregated, allowing for overall transmission bandwidths of up to 100 MHz [9]. A terminal capable of carrier aggregation may receive or transmit simultaneously on multiple component carriers. Each CC can also be accessed by an LTE terminal from the earlier releases; that is, component carriers are backwards compatible [10].
Figure 3 Carrier aggregation scenarios
As noted, aggregated component carriers do not need to be contiguous in the frequency domain. With respect to the frequency location of the different component carriers, three different cases can be identified, as shown in Figure 3:
- Intra-band aggregation with frequency-contiguous component carriers.
- Intra-band aggregation with non-contiguous component carriers.
- Inter-band aggregation with non-contiguous component carriers.
5.2 Peak Data Rate
LTE-Advanced should support
significantly increased instantaneous
peak data rates. At a minimum, LTE-
Advanced should support enhanced peak
data rates to support advanced services
and applications (100 Mbps for high and
1 Gbps for low mobility were established
as targets for research).
5.3 Mobility
The system shall support mobility across the cellular network for various mobile speeds up to 350 km/h (or perhaps even up to 500 km/h depending on the frequency band). System performance shall be enhanced for 0-10 km/h, and preferably enhanced, but at least no worse than E-UTRA and E-UTRAN, for higher speeds [11].
5.4 Orthogonal Frequency Division
Multiple Access (OFDMA)
The block diagram for a downlink OFDMA is shown in Figures 4 and 5. The basic flow is very similar to an OFDM system, except that now K users share the L subcarriers, with user k being allocated $M_k$ subcarriers. Although in theory it is possible to have users share subcarriers, this never occurs in practice, so $\sum_k M_k = L$ and each subcarrier has only one user assigned to it [12].
Figure 4 OFDMA downlink transmitter
In OFDM, all subcarriers are assigned to a single user. Hence, for multiple users to communicate with the BS, the set of subcarriers is assigned to each user in a Time Division Multiple Access (TDMA) fashion. Alternatively, an OFDM-based multiple access mechanism, namely OFDMA, assigns sets of subcarriers to different users. In particular, the total available bandwidth is divided into M sets, each consisting of L subcarriers.
Despite the relative straightforwardness of OFDMA, it has very attractive advantages. Probably the most important of these is its inherent exploitation of frequency and multiuser diversity. Frequency diversity is exploited by randomly distributing the subcarriers of a single user over the entire band, reducing the probability that all the subcarriers of a single user experience deep fades; this is particularly the case when distributed subcarrier assignment is employed. On the other hand, multiuser diversity is exploited by assigning contiguous sets of subcarriers to users experiencing good channel conditions [13]. Another important advantage of OFDMA is its inherent adaptive bandwidth assignment: since the transmission bandwidth consists of a large number of orthogonal subcarriers that can be separately turned on and off, wider transmission bandwidths, as high as 100 MHz, can easily be realized.
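As a toy illustration of the constraint $\sum_k M_k = L$ discussed above, an OFDMA subcarrier allocation can be sketched as follows; real LTE schedulers work in resource blocks with far richer allocation rules, so the simple contiguous split here is an assumption for clarity.

```python
import numpy as np

def ofdma_allocate(user_symbols, L):
    """Place each user's M_k symbols on its own disjoint block of the L
    subcarriers; the blocks must exactly fill the band (sum of M_k == L)."""
    grid = np.zeros(L, dtype=complex)
    start = 0
    for syms in user_symbols:                 # one sequence of symbols per user
        grid[start:start + len(syms)] = syms  # disjoint, contiguous allocation
        start += len(syms)
    assert start == L, "allocations must satisfy sum_k M_k = L"
    return grid
```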
6 LTE-ADVANCED TECHNOLOGIES
LTE-Advanced maintains backward compatibility with LTE while achieving higher system performance than LTE and satisfying the minimum requirements of IMT-Advanced, which is in the process of being standardized by the ITU-R. In order to achieve these goals, radio interface technologies such as support of a wider transmission bandwidth, enhancement of the MIMO technology used, and relaying are being studied on the basis of LTE technology. Two examples of technologies considered for LTE-Advanced are outlined in the following:
1. Multiple-user MIMO.
2. Coordinated multi-point transmission (CoMP).
6.1 Multiple Users MIMO
Figure 6 shows a slightly different
technique. Here, two transmit and two
receive antennas are sharing the same
transmission times and frequencies, in
the same way as before [1].
Figure 6 Uplink multiple user MIMO
This time, however, the mobile antennas are on two different mobiles instead of one. This technique is known as multiple-user MIMO (MU-MIMO), in contrast with the earlier spatial multiplexing techniques, which are sometimes known as single-user MIMO (SU-MIMO).
Figure 6 specifically shows the
implementation of multiple users MIMO
on the uplink, which is the more
common situation. Here, the mobiles
transmit at the same time and on the
same carrier frequency, but without using
any precoding and without even knowing
that they are part of a spatial
multiplexing system. The base station
receives their transmissions and separates
them using (for example) the minimum
mean square error detector as noted
earlier.
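As a sketch of that last step, a generic linear MMSE detector for the 2x2 uplink MU-MIMO case can be written as follows; it assumes the base station knows the channel matrix H and the noise variance (and unit-power symbols), details the text does not spell out.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE detection for y = H x + n: apply
    W = (H^H H + sigma^2 I)^(-1) H^H to the received vector y to estimate
    the symbols x sent by the two mobiles (unit symbol power assumed)."""
    HH = H.conj().T
    W = np.linalg.inv(HH @ H + noise_var * np.eye(H.shape[1])) @ HH
    return W @ y
```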
This technique only works if the channel matrix is well behaved, but the base station can usually guarantee this for two reasons. Firstly, the mobiles are likely to be far apart, so their ray paths are likely to be very different. Secondly, the base station can freely choose the mobiles that take part, so it can choose mobiles that lead to a well-behaved channel matrix.
Uplink multiple user MIMO does not
increase the peak data rate of an
individual mobile, but it is still beneficial
because of the increase in cell
throughput. It can also be implemented
using inexpensive mobiles that just have
one power amplifier and one transmit
antenna, not two. For these reasons,
multiple user MIMO is the standard
technique in the uplink of LTE Release
8: single user MIMO is not introduced
into the uplink until Release 10.
Multiple-user MIMO can also be applied to the downlink, as shown in Figure 7. In this case, however, there is a problem: Mobile 1 can measure its received signal y1 and the channel elements H11 and H12, in the same way as before.
Figure 7 Downlink multiple user MIMO
However, it has no knowledge of the other received signal y2, or of the other channel elements H21 and H22. The opposite situation applies to Mobile 2. Neither mobile has complete knowledge of the channel elements or of the received signals, which invalidates the techniques used.
6.2 Coordinated Multi-Point
transmission (CoMP)
One of the issues being addressed
beyond Release 10 is coordinated
multipoint (CoMP) transmission and
reception [14, 15]. This is a wide-ranging
term, which refers to any type of coordination between the radio communications taking place in nearby cells. Its aim is to increase the data rate at the cell edge and the overall throughput of the cell.
There are two main varieties, which will
be described from the viewpoint of the
downlink. (Similar issues apply on the
uplink as well.) In coordinated
scheduling and beamforming (CS/CB), a
mobile receives data from one cell at a
time, its serving cell.
However, the serving cell can coordinate
its scheduling and beamforming
processes with those of cells nearby, so
as to minimize the inter-cell interference.
For example, a cell can configure its
beamforming pattern on the sub-carriers
that a mobile in a neighboring cell is
using, so as to place that mobile in a null.
In joint processing (JP), a mobile receives data from multiple cells. These cells can be controlled by one base station, which is not too hard to implement. Alternatively, the cells can be controlled by multiple base stations, which offers better performance but makes issues such as backhaul and synchronization far harder.
The cells used for joint processing can
transmit the same data stream as each
other, in which case they are operating as
diversity transmitters. (The same
technique is used for soft handover in
UMTS.) Alternatively, they can transmit
different data streams, in an
implementation of spatial multiplexing
that is known as cooperative MIMO as
shown in Figure 8.
This has some similarities with multiple-user MIMO, but instead of the mobile antennas being separated onto two different devices, the network's antennas are separated onto two different cells.
Figure 8 Cooperative MIMO on the LTE-
Advanced downlink
7 SUMMARY AND DISCUSSION
This paper has presented a detailed description of the technologies preceding LTE technology (Release 8). According to the comparison of LTE with different existing technologies, LTE will provide wireless subscribers with significant advantages in traditional and non-traditional wireless communication over those currently provided by existing 3G technologies. LTE offers scalable bandwidths, from 1.4 up to 20 MHz, together with support for both FDD paired and time division duplexing (TDD) unpaired spectrum. LTE will be available not only in next-generation mobile phones, but also in notebooks, ultra-portables, cameras, camcorders, MBRs, and other devices that benefit from mobile broadband, while LTE-Advanced helps in integrating existing networks, new networks, services, and terminals to suit escalating user demands.
The technical features of LTE-Advanced may be summarized with the word "integration". LTE-Advanced will be standardized in the 3GPP specification Release 10 and is designed to meet the 4G requirements as defined by the ITU. LTE-Advanced as a system needs to take many features into consideration, since optimizations at each level involve much complexity and challenging implementation. Numerous changes in the physical layer can be expected, to support larger bandwidths with more flexible allocations and to make use of further enhanced antenna technologies. Coordinated base stations, scheduling, MIMO, interference management, and suppression will also require changes in the network architecture.
8 CONCLUSIONS
In conclusion, both LTE and LTE-Advanced offer high-speed internet access, with high-speed internet connections on mobile devices, where users can enjoy voice calls, video calls, high-speed downloads or uploads of any data, and internet TV, live or on demand. The main targets for this evolution are increased data rates, improved spectrum efficiency, improved coverage, reduced latency, and a packet-optimized system that supports multiple Radio Access Technologies. The paper has presented a study on the evolution of LTE toward LTE-Advanced in terms of the LTE enabling technologies (Orthogonal Frequency Division Multiplexing (OFDM) and Multiple-Input Multiple-Output (MIMO)), and has also focused on the LTE-Advanced technologies (MIMO enhancements for LTE-Advanced, carrier aggregation, peak data rate, mobility, and coordinated multi-point transmission (CoMP)). LTE-Advanced is a very flexible and advanced system, with further enhancements to exploit spectrum availability and advanced multi-antenna techniques.
9 REFERENCES
1. Cox, C.: An Introduction to LTE: LTE, LTE-Advanced, SAE and 4G Mobile Communications: John Wiley, pp. 16--280, UK (2012).
2. 3rd Generation Partnership Project 3GPP
Releases. Available at:
http://www.3gpp.org/releases accessed 12
(2011).
3. Sawahashi, M., Kishiyama, Y., Taoka, H.,
Tanno M., and Nakamura T.: Broadband
Radio Access: LTE and LTE-Advanced:
IEEE Int. Symposium on Intelligent Signal
Processing and Communication Systems
(ISPACS), pp. 224--225, Dec., (2009).
4. Forsberg, D., Horn, G., Moeller, W., and
Niemi V.: LTE Security: John Wiley, pp.
25, UK (2010).
5. Khlifi, A. and Bouallegue R.: Comparison
between performances of channel estimation
techniques for CP-LTE and ZP-LTE
downlink systems: Int. Journal of Computer
Networks & Communications Vol.4, No.4,
pp. 223--228, July (2012).
6. Sacrist, D., Monserrat, F., Cabrejas-
Penuelas, J., Calabuig, D., Garrigas, S. and
Cardona, N.: On the way towards Fourth-
Generation Mobile: 3GPP LTE and LTE-
Advanced: Hindawi Publishing Corporation
EURASIP Journal on Wireless
Communications and Networking, pp. 3.
(2009).
7. Khan, F.: LTE for 4G Mobile Broadband
Air Interface Technologies and
Performance: Cambridge University Press,
pp. 3--148, New York (2009).
8. Preben, M., Koivisto, T., Pedersen, I.,
Kovács, I., Raaf, B., Pajukoski, K., and
Rinne M.: LTE-Advanced: the path towards
Gigabit/s in wireless mobile
communications: Int. Conference on
Wireless Communication, Vehicular
Technology, Information Theory and
Aerospace & Electronics Systems
Technology, pp. 147--149, Aalborg (2009).
9. Yonis, A., Abdullah, M., and Ghanim, M.:
Design and Implementation of Intra band
Contiguous Component Carriers on LTE-A:
Int. Journal of Computer Applications,
Vol.41, No.14, pp. 25 --28, USA (2012).
10. Dahlman, E., Parkvall, S., and Sköld, J.: 4G: LTE/LTE-Advanced for Mobile Broadband: Elsevier, pp. 132--134, UK (2011).
11. Yahiya, A.: Understanding LTE and its
Performance: Springer Dordrecht
Heidelberg London, pp. 9--14, New York
(2011).
12. Ghosh, A., Zhang, J., Andrews, J., and
Muhamed, R.: Fundamentals of LTE:
Pearson education, pp. 168--170, USA
(2011).
13. Yang, S.: OFDMA System Analysis and
Design: 1st ed. Boston, Artech house, USA
(2010).
14. 3GPP TR 36.814 Evolved Universal
Terrestrial Radio Access (E-UTRA); Further
Advancements for E-UTRA Physical Layer
Aspects, Release 9, section 8.1., (March
2011).
15. 3GPP TR 36.819 Coordinated Multi-Point
Operation for LTE Physical Layer Aspects,
Release 11, (Sept. 2011).
EFFICIENT FOREGROUND EXTRACTION BASED ON OPTICAL FLOW AND SMED FOR ROAD TRAFFIC ANALYSIS

K. SuganyaDevi 1, N. Malmurugan 2, R. Sivakumar 3
[email protected] [email protected] [email protected]

1 University College of Engineering Panruti (a constituent college of Anna University Chennai), Panruti, Tamilnadu, India
2 Mahindra Group of Institutions, Tiruchengode, Tamilnadu, India
3 Mailam Engg College, Mailam, Tamilnadu, India

Abstract-- Foreground detection is a key procedure in video analysis such as object detection and tracking. Several foreground detection techniques and edge detectors have been developed until now, but it is usually difficult to obtain an optimal foreground due to weather, light, shadow, and clutter interference. Background subtraction is a common method in foreground detection; its noise appears at fixed places, and when it is used to deal with a long image sequence much error may accumulate in the foreground. In OF (Optical Flow), noise appears randomly, and the method covers long distances over long periods of time; however, optical flow cannot get rid of lighting influences, which result in background noise. To overcome this, SMED (Separable Morphological Edge Detector) is used, which is robust to light changes and even slight movement in the video sequence. This paper proposes a new foreground detection approach called OF-SMED, which is more accurate in foreground detection and highly effective at eliminating noise. This approach is useful for efficient crowd and traffic monitoring as a user-friendly, highly automatic, intelligent, and computationally efficient system.
Index terms: Foreground Extraction, Optical flow, SMED, Background subtraction, Surveillance systems, Traffic Analysis.
I. INTRODUCTION
A video surveillance system [8] must be capable of continuous operation under various weather and illumination conditions. Moreover, background subtraction is a very important part of surveillance applications for successful segmentation of objects from video sequences, and the accuracy, computational complexity, and memory requirements of the initial background extraction are crucial in any background subtraction method [2]. A foreground detection algorithm should detect moving objects exactly; that is, the detection result should contain as little noise as possible. The existing foreground detection algorithms can be divided into three categories: frame difference, optical flow, and background subtraction.
Frame difference [18] calculates the pixel gray-scale difference between two adjacent frames in a continuous image sequence and determines the foreground by setting a threshold. Lipton utilized double-frame difference for moving object detection, followed by classification and tracking. The frame difference method can be used in dynamic environments, but it cannot completely extract the whole foreground area: the central part of the target is lost, which results in bad target recognition. In addition, this method has difficulty accurately detecting fast-moving or slow-moving objects, as well as multiple objects.
Optical flow [7] is the velocity field which warps one image into another (usually very similar) image. Optical flow research uses pixel intensity change and correlation to determine the movement of pixels in an image sequence. In fact, it is very difficult to calculate the true velocity field from an image sequence; since the optical flow represents the information of the moving objects, the optical flow field can be used in place of the velocity field. However, optical flow cannot get rid of lighting influences, which result in background noise.
Background subtraction [9] is a common method used in foreground detection. It calculates the difference between the current image and a background image and detects the foreground by setting a threshold. There are two methods of obtaining the background image: one is to appoint an image as the background manually; the other uses a model to train the background, such as the Gaussian background model (GBM). Compared to the former, the latter is more accurate and the result of foreground detection is much better. The background subtraction method is robust to light changes and slight movement, but when it is used to deal with a long image sequence, much error may accumulate in the foreground. Optical flow covers long distances and its noise due to brightness change is smaller, which results in a lower accumulated error percentage.
Fig 1: Background subtraction
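As a minimal sketch of the thresholded background subtraction just described (the threshold value is an illustrative assumption):

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Binary foreground mask: 1 where the absolute difference between the
    current frame and the background image exceeds the threshold.
    Both inputs are grayscale uint8 arrays of the same shape."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```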
In digital image processing [10], edge detection is an important technique: the process of finding meaningful transitions in an image. Various edge detection algorithms [3] have been proposed, based on gradient operators or statistical approaches. Gradient operators are easily affected by noise, so filtering operators are used to reduce the noise rate. In edge detection, morphological edge detectors [5] are also available, and they are more effective than gradient operators. Several kinds of morphological detectors exist, but they are not efficient compared to the separable morphological edge detector.
Mathematical morphology is a kind of morphological tool used to deal with various problems in image processing, but the edges at different angles are not covered, and thin edges are missed, by the basic mathematical morphological detector. Hence the separable morphological edge detector detects thin edges, and edges at different angles, with less noise [4].
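For contrast, a basic (non-separable) morphological edge detector is easy to sketch as dilation minus erosion; SMED itself applies such operators separably along rows and columns, and its exact structuring elements are given in [4], so the kernel size below is only a placeholder.

```python
import numpy as np
from scipy import ndimage

def morphological_edge(img, size=3):
    """Basic morphological edge detector: grey-level dilation minus
    grey-level erosion with a flat size x size structuring element."""
    dil = ndimage.grey_dilation(img, size=(size, size))
    ero = ndimage.grey_erosion(img, size=(size, size))
    return dil - ero  # bright ridges along intensity transitions
```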
This paper primarily aims at a new video image processing technique used to solve problems associated with real-time road traffic control systems: a new foreground detection approach called optical flow and SMED (OF-SMED), based on optical flow and edge detection methods. The rest of the paper is organized as follows. Section II covers the literature survey, and Section III introduces the proposed approach (OF-SMED). In Section IV, experimental results and discussion verify that the proposed approach is useful and feasible. Finally, the paper is concluded in Section V.
II. LITERATURE SURVEY
Various methods for video image processing have been proposed until now, but these existing methods have difficulties with congestion, shadows, noise, and various lighting conditions. This literature report describes the various techniques involved and their constraints, such as memory, computing time, and complexity. The following are some of the existing methods and their constraints.
A video surveillance method [12] has been proposed that aims at robustness with low false positive and false negative rates simultaneously. However, the requirement is to have a zero false negative rate, and the method should also cope with varying illumination conditions, occlusion situations, and low contrast. Real-time video surveillance [17] deals with real-time detection of moving objects; it faces problems such as the storage space and time consumed in recording the video. To avoid these problems it uses a motion detection algorithm, but this covers only the video that contains important information. In real-time visual surveillance, W4 [13] is a low-cost, PC-based real-time visual surveillance system that has been implemented to track people and their body parts. It has problems with sudden illumination changes, shadows, and occlusion. W4S, an integrated real-time stereo system, has addressed the limitations that W4 met. It deals with tracking people in outdoor environments, but tracking is much harder in intensity images. An End-to-End method has been proposed for removing moving targets from a stream of real-time video, sorting them according to image-based properties; but this involves forceful tracking of moving targets. Smart video surveillance systems support human operators in identifying significant events in video. They can perform object detection in outdoor and indoor environments under varying illumination conditions, but this is based on the shape of the detected objects. Automatic video surveillance using background subtraction has different problems; a pixel-based multi-colour background model is a successful solution, but this method suffers from slow learning at the beginning and cannot differentiate between moving objects and moving shadows. Multimedia surveillance [3] utilizes an assorted number of related media streams, each of which has a different assurance level, to attain numerous surveillance tasks. It is difficult to insert a new stream into the system with no knowledge of prior history.
Edge detection has been a challenging problem in image processing; due to a lack of edge information, the output image may not be visually pleasing. Edge detection techniques transform images into edge images, benefiting from the changes of grey tones in the images; edges are the sign of a lack of continuity, of an ending. As a result of this transformation, an edge image is obtained without any changes in the physical qualities of the main image. Various types of edge detectors are discussed here. The Robert edge detector [12] detects edges which run along the vertical axis at 45 and 135 degrees; its only drawback is that it takes a long time to compute. The Gaussian edge detector reduces noise by smoothing images and gives better results in noisy environments; the difficulty is that it is very time-consuming and computationally complex. Zero-crossing detectors use the second derivative, including the Laplacian operator; they have fixed characteristics in all directions but are sensitive to noise. The drawback of the Canny edge detector approach is that a low threshold produces false edges while a high threshold misses important edges, although it is not very susceptible to noise [3].
To overcome the problems involved in the existing techniques, a new approach is adopted. It overcomes the above-mentioned problems of congestion, shadows and lighting transitions, and is robust to light changes and even slight movement. The proposed approach aims to be an effective choice for both crowd and traffic monitoring.
III. PROPOSED SYSTEM
a) OPTICAL FLOW
A new foreground detection approach called OF-SMED, which makes use of Lucas-Kanade optical flow [1], is proposed. A perfect foreground cannot be obtained by using optical flow alone, due to brightness changes; an optimal foreground, however, can be obtained effectively by OF-SMED.
Fig 2: Cars on highway-Optical flow
It is known that there are five kinds of optical flow methods, and LK optical flow is a gradient-based algorithm [4]. If I(x, y, t) is the intensity of pixel m(x, y) at time t and v_m = [v_x, v_y] is the velocity vector of pixel m(x, y), then after a short time interval \Delta t the optical flow constraint equation holds:

\nabla I \cdot v_m + I_t = 0    (1)

where \nabla I = (\partial I / \partial x, \partial I / \partial y) is the spatial intensity gradient vector and I_t is the temporal derivative. Because v_m is a two-dimensional variable, more constraints are needed to settle this question. The LK optical flow method estimates v_m by the v expressed in (2), on the assumption that v_m is constant in a small spatial neighbourhood \Omega:

v = \arg\min_{v} \sum_{m \in \Omega} W^2(m) \left( \nabla I(m) \cdot v + I_t(m) \right)^2    (2)
In (2), W^2(m) is a window function giving the central part of the neighbourhood \Omega greater weight than the peripheral part. For the pixels m_i (i = 1, 2, ..., n) in \Omega, the solution v can be obtained by

v = (A^T W^2 A)^{-1} A^T W^2 b    (3)

where A = (\nabla I(m_1), ..., \nabla I(m_n))^T, W = diag(W(m_1), ..., W(m_n)), and b = -(I_t(m_1), ..., I_t(m_n))^T.
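As a concrete illustration of (3), the minimal numpy sketch below solves for v in a single neighbourhood, assuming the derivatives Ix, Iy, It have already been estimated at the n pixels of Omega (the function and variable names are illustrative):

import numpy as np

def lk_velocity(Ix, Iy, It, w):
    # A is the n x 2 matrix of spatial gradients (one row per pixel of
    # the neighbourhood), W2 the diagonal matrix of squared window
    # weights, and b = -(It(m1), ..., It(mn))^T.
    A = np.stack([Ix, Iy], axis=1)
    W2 = np.diag(np.asarray(w) ** 2)
    b = -np.asarray(It)
    AtW2 = A.T @ W2
    # v = (A^T W^2 A)^{-1} A^T W^2 b, solved without forming the inverse.
    return np.linalg.solve(AtW2 @ A, AtW2 @ b)

# Example: a 5x5 neighbourhood flattened to 25 samples, uniform weights.
rng = np.random.default_rng(0)
Ix, Iy, It = rng.normal(size=(3, 25))
vx, vy = lk_velocity(Ix, Iy, It, np.ones(25))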
Because the LK method calculates optical flow at every pixel, it can detect all the changes between adjacent frames, and it is therefore the best choice for detecting crowd movement [11]. However, optical flow methods are very sensitive to brightness change, and when using the LK method it is difficult to find a proper threshold to segment foreground and background: no matter what choice is made, the detection result may either lose some foreground area or contain some background noise. Obviously we cannot obtain an optimal foreground using the LK method alone, so we try other methods to improve the result; after many experiments we found that by combining LK optical flow and the SMED method we could get a near-perfect result.
GBM [5] is one kind of background subtraction method. In this method, K Gaussian models are used to approximate the pixel values in the image, and these models are updated on every frame of the video. If the residual between the pixel value and the approximated value is larger than a set threshold, the pixel is regarded as foreground; otherwise it is background. Using K Gaussian mixture models, the grey-level probability function of pixel X at time t is given as

P(X) = \sum_{n=1}^{K} w_n \, \eta(X; \mu_n, \sigma_n^2)    (4)

where w_n is the weight of the n-th Gaussian model, whose mean and variance are \mu_n and \sigma_n^2, and \eta is the Gaussian density. Usually the value of K is from 3 to 5; to represent a complex scene a larger K is needed, but it should be noted that the calculation time increases with larger K.
By combining LK and GBM we obtain the approach OFBM, shown in Fig. 1. OFBM applies LK optical flow and GBM in parallel. On the one hand, we first use the two adjacent images f(x, y, t-1) and f(x, y, t) to calculate the LK optical flow field; then a median filter and a Gaussian filter are used to eliminate high-frequency noise and salt-and-pepper noise respectively. After that we use a threshold T_lk to segment the optical flow field and obtain the LK foreground mask f_lk(x, y, t). Our tests show that the useful range of T_lk is [0.05, 0.20]: a smaller T_lk produces a larger foreground area including background noise, while a bigger threshold may lose some foreground area. In order to detect all movement areas we select the smallest value, 0.05, and then eliminate the noise in the foreground mask f_lk(x, y, t). On the other hand, the GBM method is used to get another foreground mask, where a scale filter is employed for segmenting foreground and background. In the scale filter we set another threshold T_g that denotes an area in pixels: for an obtained foreground image, if a pixel block is smaller than T_g it is classified as background; otherwise it is kept as foreground. Hence we get a new foreground mask f_g(x, y, t). In our tests the value of T_g should be near 1/400 of the image area; for example, when the image size is 320x240, the range of T_g is [160, 200]. As with the LK method, we select the smallest T_g to obtain the largest foreground mask f_g(x, y, t). Finally these two masks are multiplied, and morphological processing [6] is applied to join adjacent areas and exclude small blocks in the foreground; an optimal foreground fore(x, y, t) is then obtained as shown in Fig. 1. Note that although both f_lk(x, y, t) and f_g(x, y, t) contain noise, the noise in f_lk(x, y, t) is caused by brightness alteration and appears randomly on the profiles of objects, whereas the noise in f_g(x, y, t) occurs on the edges of objects and, as time goes by, appears at the same places. Because the two kinds of noise appear at different places, we can eliminate most background noise by multiplying f_lk(x, y, t) and f_g(x, y, t). The foreground image fore(x, y, t) obtained by OFBM is then used in density estimation.
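A rough sketch of this pipeline is shown below, assuming OpenCV: Farneback dense flow stands in for the per-pixel LK flow, and the built-in MOG2 subtractor stands in for the K-Gaussian GBM, so this approximates the method rather than reproducing the authors' implementation.

import cv2
import numpy as np

def ofbm_foreground(prev_gray, gray, bg_sub, t_lk=0.05, t_g=160):
    # Dense flow between the two adjacent frames, then median and
    # Gaussian filtering of the flow magnitude before thresholding.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2).astype(np.float32)
    mag = cv2.GaussianBlur(cv2.medianBlur(mag, 5), (5, 5), 0)
    f_lk = (mag > t_lk).astype(np.uint8)        # optical-flow mask

    # Gaussian-mixture background model plus the scale filter: pixel
    # blocks smaller than t_g (about 1/400 of the image area) are
    # reclassified as background.
    f_g = (bg_sub.apply(gray) > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(f_g)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < t_g:
            f_g[labels == i] = 0

    # Multiply the two masks, then join adjacent areas morphologically.
    return cv2.morphologyEx(f_lk * f_g, cv2.MORPH_CLOSE,
                            np.ones((5, 5), np.uint8))

# bg_sub = cv2.createBackgroundSubtractorMOG2() is created once per
# sequence and fed every frame.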
Optical flow [16] combined with GBM does not prove very efficient, since an optimal foreground is not obtained and the error percentage ranks high. Optical flow alone contains probably less noise, due to filtering, and is robust to brightness change.
b) SEPARABLE MORPHOLOGICAL EDGE DETECTOR
An edge is a basic feature of an image. Image edges carry rich information that is very significant for obtaining image characteristics for object recognition. Edge detection refers to the process of identifying and locating sharp discontinuities in an image. Various edge detection algorithms [3] have been proposed, based on gradient operators or statistical approaches. Gradient operators are easily affected by noise, so filtering operators are used to reduce the noise rate. Morphological edge detectors are also available and are more effective than gradient operators. Several kinds of morphological detectors [15] exist, but they are not efficient compared to the separable morphological edge detector. The effectiveness of many image processing and computer vision tasks depends on the perfection of detecting meaningful edges; due to a lack of object edge information the output image is not visually pleasing.
Various types of edges are:
Convex roof edge
Concave roof edge
Concave ramp edge
Step edge
Bar edge
Existing edge detectors are available, but their main disadvantage is that they are sensitive to noise and inaccurate. Examples are the Roberts edge detector and the Sobel edge detector.
Fig 3: Various types of edges
The Roberts Detection
In the Roberts cross algorithm [3], the horizontal and vertical edges are brought out individually and then put together for the resulting edge detection.
G_x = [[+1, 0], [0, -1]]    G_y = [[0, +1], [-1, 0]]

Fig 4: Roberts edge detector kernels
The two individual images G_x and G_y are combined using the approximation |G| = |G_x| + |G_y|, or using G = \sqrt{G_x^2 + G_y^2} to obtain the exact magnitude values. As the Roberts cross kernels are relatively small, they are highly susceptible to noise.
Prewitt Detection
The Prewitt edge detector is an appropriate way to estimate the magnitude and orientation of an edge. Although differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x- and y-directions, compass edge detection obtains the orientation directly from the kernel with the maximum response. The Prewitt operator is limited to 8 possible orientations; however, experience shows that most direct orientation estimates are not much more accurate. This gradient-based edge detector is estimated in a 3x3 neighbourhood for eight directions. All eight convolution masks are calculated, and the one with the largest module is then selected.
The Sobel edge detection [3] technique is similar to the Roberts cross algorithm. Although the designs of Sobel and Roberts are similar, the main difference lies in the kernels each uses to process the image: the Sobel kernels are better suited to detecting edges along the horizontal and vertical axes, whereas the Roberts kernels detect edges running along the diagonals at 45 and 135 degrees.

Fig 5: Sobel edge detector kernels, G_x = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] and G_y = [[+1, +2, +1], [0, 0, 0], [-1, -2, -1]]
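For reference, a short Python sketch of both operators follows; the kernels are the standard Roberts and Sobel masks, and the helper edge_magnitude is our own naming:

import numpy as np
from scipy.ndimage import convolve

# Standard Roberts cross (2x2) and Sobel (3x3) kernels.
ROBERTS_GX = np.array([[1, 0], [0, -1]], dtype=float)
ROBERTS_GY = np.array([[0, 1], [-1, 0]], dtype=float)
SOBEL_GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_GY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def edge_magnitude(img, gx, gy, exact=False):
    Gx = convolve(img.astype(float), gx)
    Gy = convolve(img.astype(float), gy)
    # |G| = |Gx| + |Gy| is the fast approximation; the exact magnitude
    # is sqrt(Gx^2 + Gy^2).
    return np.hypot(Gx, Gy) if exact else np.abs(Gx) + np.abs(Gy)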
As existing edge detectors have disadvantages with noise, a new morphological edge-detection operator, the separable morphological edge detector (SMED) [4], is adopted. It has a lower computational requirement while offering performance comparable to other morphological operators. The reasons for adopting the SMED operator in our application are as follows.
1) SMED can detect edges at different angles, while other morphological operators are unable to detect all kinds of edges.
2) The strength of the edges detected by SMED is twice that of other edge detectors.
3) SMED [15] uses separable median filtering to remove noise. Separable median filtering has been shown to have performance comparable to true median filtering but requires less computational power. SMED, which uses compatible and easily implementable operators, has a lower computational requirement than other morphological edge-detection operators. Open-close has better performance than the SMED operator, but it requires about eight times more computational power and is therefore not suitable for real-time applications.
In order to apply edge-based techniques to a window, several steps have been taken to achieve real-time and accurate measurement of traffic parameters. These steps are as follows.
1) The length of the windows used for counting vehicles should be wide enough to allow most edges of a car passing along a lane to be detected; in practice it should be nearly equal to the width of the lane.
2) The width of the window should be more than three lines of the image, to compensate for the effect of noise and to ensure that passing vehicles create edges.
3) A dynamic threshold selection algorithm is used to compensate for edges produced by the road surface or the background.
Optical Flow with SMED
The output frames of the optical flow and background modeling method are taken as the input to SMED. The two consecutive frames are taken and the SMED edge detector is applied to them, so the edges are sharper than before, and the median filter is applied again in order to reduce the noise.
Fig 6: (a) Original (b) Canny (c) OFBM (d) OFSMED (thicker edges)
When using the SMED method alone, the foreground contains much accumulated error due to noise, which should be eliminated. The optical flow output, which probably contains less noise, can be cleaned further by applying the separable morphological edge detector, which makes the approach more effective than existing approaches. With the proposed OF-SMED approach, almost all noise is removed and no foreground is lost, so the final object detection result is optimal.
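Since the exact SMED operator of [4] is not reproduced here, the sketch below only approximates it: two 1-D median passes provide the separable median filtering, and the morphological gradient (dilation minus erosion) stands in for the edge-strength operator.

import numpy as np
from scipy.ndimage import median_filter, grey_dilation, grey_erosion

def smed_edges(img, k=5):
    img = np.asarray(img, dtype=float)
    # Separable median filtering: a horizontal then a vertical 1-D
    # median pass, approximating the true 2-D median at a lower cost.
    smooth = median_filter(median_filter(img, size=(1, k)), size=(k, 1))
    # Edge strength via the morphological gradient over a 3x3
    # structuring element.
    se = np.ones((3, 3))
    return (grey_dilation(smooth, footprint=se)
            - grey_erosion(smooth, footprint=se))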
IV. RESULTS AND DISCUSSION
Foreground detection [1] is the basis of motion analysis tasks such as object tracking, image segmentation,
and motion estimation. The proposed approach was carried out on several different videos and a sample of 100 images, and the results are discussed below.
The average error rate was calculated for all methods, showing that OF-SMED is very effective and has a low error rate; the numerical results show OF-SMED is better, with an error rate of only 1.74%. It can be seen that when using the optical flow method alone there is some background noise, because every motion area is detected. The algorithm uses a recent technique applying simple but effective operations, and reduces computation time compared to other vehicle detection operations. The vehicle detection operation is a less noise-sensitive edge-based technique. The threshold selection is done dynamically to reduce the effects of lighting variations, and the measurement algorithm has been applied to traffic scenes with different lighting conditions [10]. When using the SMED method alone, the foreground contains much accumulated error, which should be eliminated [14].
As SMED incorporates median filtering, it eliminates the noise present in the optical flow. Crowd density estimation is very important in surveillance. Texture analysis and moment analysis are two common ways to estimate crowd density. In texture analysis, a set of density features can be extracted from the grey-level co-occurrence matrix (GLCM), which is calculated from the foreground image. If M is the GLCM of the foreground image, we can calculate a new feature F_M defined as follows:

F_M = -\sum_{i,j} M(i,j)^2 - \sum_{i,j} M(i,j) \ln M(i,j)    (1)
Fig 7: Flowchart of OF-SMED
In moment analysis, because the zeroth-order moment represents the total mass of the given image, we propose another feature F_{00} defined as follows:

F_{00} = \ln A_f - \ln m_{00}    (2)

where A_f is the area of the foreground and m_{00} is the zeroth-order moment of the foreground image. Both F_M and F_{00} can be used to estimate crowd density: larger values of F_M and smaller values of F_{00} mean higher density. In our tests we used F_M to estimate the crowds in different scenes and F_{00} to measure different crowds in a fixed scene. We carried out our approach on seven different videos containing 1200 frames, and randomly picked 100 images to evaluate OF-SMED. First we manually marked the foreground area on each image as the real foreground, and then used the following equations to test the error rate of OFBM:
r = (A_{real} - A) / A_{real} \times 100%    (3)

R = \sum_{i=1}^{100} r_i / 100    (4)

where A_{real} is the area of the real foreground, A is the area of the experimentally obtained foreground, r is the error rate, and R is the average error rate.
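These features and error rates reduce to a few lines of numpy; in the sketch below M is assumed to be an already computed, normalized GLCM and fore a foreground image (the function names are ours):

import numpy as np

def f_glcm(M):
    # F_M = -sum M(i,j)^2 - sum M(i,j) ln M(i,j), eq. (1); zero entries
    # are skipped so the log term stays finite.
    nz = M[M > 0]
    return -np.sum(M ** 2) - np.sum(nz * np.log(nz))

def f_moment(fore):
    # F_00 = ln A_f - ln m_00, eq. (2): A_f is the foreground area and
    # m_00 the zeroth-order moment (for a binary mask the two coincide).
    return np.log(np.count_nonzero(fore)) - np.log(fore.sum())

def error_rates(A_real, A):
    # r per image, eq. (3), and the average R over the sample, eq. (4).
    r = (np.asarray(A_real) - np.asarray(A)) / np.asarray(A_real) * 100.0
    return r, r.mean()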
The test results can be seen in Table 1 and Table 2.

Table 1: Comparison of average error rate

Approach      SMED     OFBM     OF-SMED
Error rate    4.85%    2.01%    1.74%

Table 2: Comparison of execution time (20 frames)

Approach                     SMED     OFBM     OF-SMED
Execution time (20 frames)   4.85%    2.01%    1.74%
Comparing the average error rates of SMED, OFBM, and OF-SMED, the numerical results show that OF-SMED is better than the other two, with an error rate of only 1.74%. Also, OF-SMED produces thicker edges than OFBM, and the execution time of OF-SMED is better than that of OFBM. Thus OF-SMED is an optimal approach for both traffic and crowd monitoring.
V. CONCLUSION
The optical flow method [7] used to detect the foreground contains some background noise due to brightness change. The proposed approach OF-SMED combines the foregrounds to eliminate noise: in optical flow the noise appears randomly, while in the SMED [4] method the noise appears at fixed places such as building edges, so by combining them almost all the noise can be eliminated. When using the proposed OF-SMED approach, almost all noise is removed and no foreground is lost, so the final object detection result is optimal. The processing of OF-SMED is very fast, with low computation time, and the approach is cost-effective. The low-cost vision-based OF-SMED system can play an important role in monitoring, controlling, and managing a whole traffic system, and has the potential to be used for applications such as electronic road pricing, car park management, and detecting stolen vehicles. Thus OF-SMED proves to be an optimal approach for traffic and crowd monitoring, with an error rate of 1.74%, which is a satisfactory result. Also, the execution time of OF-SMED is comparatively better than that of OFBM.
REFERENCES
[1] Wei Li, Xiaojuan Wu, Koichi Matsumoto and Hua-A Zhao, "Foreground Detection Based on Optical Flow and Background Subtract", IEEE Trans., Jul 2010.
[2] M. Fathy and M. Y. Siyal, "A Window-Based Image Processing Technique for Quantitative and Qualitative Analysis of Road Traffic Parameters", IEEE Trans., Jan 2010.
[3] N. Senthil Kumaran and R. Rajesh, "Edge Detection Techniques for Image Segmentation - A Survey", Proceedings of the International Conference on Managing Next Generation Software Applications (MNGSA-08), 2008, pp. 749-760.
[4] M. Y. Siyal and A. Solangi, "A Novel Morphological Edge Detector Based Approach for Monitoring Vehicles at Traffic Junctions", Innovations in Information Technology, pp. 1-5, Nov 2006.
[5] M. Fathy and M. Y. Siyal, "An Image Detection Technique Based on Morphological Edge Detection and Background Differencing for Real-time Traffic Analysis", Pattern Recognition Letters, vol. 16, pp. 1321-1330, 1995.
[6] Daniel L. Schmoldt, Pei Li and A. Lynn Abbott, "Machine Vision Using Artificial Neural Networks with Local 3D Neighborhoods", Computers and Electronics in Agriculture, vol. 16, 1997, pp. 255-271.
[7] B. K. P. Horn and B. G. Schunck, "Determining Optical Flow", Artificial Intelligence, vol. 17, 1981, pp. 185-203.
[8] B. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", Proceedings of the Imaging Understanding Workshop, 1981, pp. 121-130.
[9] C. Stauffer and W. Grimson, "Adaptive Background Mixture Models for Real-time Tracking", 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 1999, pp. 246-252.
[10] R. Gonzalez, R. Woods, and S. Eddins, "Digital Image Processing", 2nd ed., Prentice Hall, Upper Saddle River, NJ, 2002.
[11] H. Rahmalan, M. S. Nixon and J. N. Carter, "On Crowd Density Estimation for Surveillance", International Conference on Crime Detection and Prevention, 2006.
[12] Axel Baumann, Marco Boltz, Julia Ebling, Matthias Koenig, Hartmut S. Loos, Marcel Merkel, Wolfgang Niem, Jan Karl Warzelhan and Jie Yu, "A Review and Comparison of Measures for Automatic Video Surveillance Systems", EURASIP Journal on Image and Video Processing, vol. 2008, Article ID 824726, 30 pages, doi:10.1155/2008/824726.
[13] A. Rourke and M. G. H. Bell, "Traffic Analysis Using Low Cost Image Processing", Proc. Seminar on Transportation Planning Methods, PTRC, Bath, U.K., 1988.
[14] N. Hashimoto et al., "Development of an Image Processing Traffic Flow Measurement System", Sumitomo Electronic Tech. Rev., no. 25, pp. 133-138, Jan. 1988.
[15] J. Lee et al., "Morphologic Edge Detection", IEEE J. Robot. Automat., vol. RA-3, no. 2, Apr. 1987.
[16] Alan Browne, T. M. McGinnity, Girijesh Prasad and Joan Condell, "FPGA Based High Accuracy Optical Flow Algorithm", ISSC 2010, UCC, Cork, June 23-24.
[17] Nan Lu et al., "An Improved Motion Detection Method for Real-Time Surveillance", IAENG International Journal of Computer Science, Issue 1, 2008, pp. 1-16.
[18] Karan Gupta and Anjali V. Kulkarni, "Implementation of an Automated Single Camera Object Tracking System Using Frame Differencing and Dynamic Template Matching".
Enhancing Advanced Encryption Standard S-Box Generation Based on Round Key

Julia Juremi, Ramlan Mahmod, Salasiah Sulaiman, Jazrin Ramli

Faculty of Computer Science and Information Technology,
Universiti Putra Malaysia
43400 UPM Serdang, Selangor, Malaysia
[email protected]

ABSTRACT

This paper presents a new AES-like design for a key-dependent AES using S-box rotation. The algorithm combines the key expansion algorithm with S-box rotation, and this property can be used to make the S-box key-dependent, hence providing better security for the block cipher. A fixed S-box allows attackers to study the S-box and find weak points, while the key-dependent S-box approach makes it harder for an attacker to perform any offline analysis of an attack against one particular set of S-boxes. The cipher structure resembles the original AES; only the S-box is made key-dependent, without changing the values. This new design is tested using the NIST Statistical Test Suite and will be further cryptanalyzed with an algebraic attack in an attempt at its subversion or evasion.

Keywords: AES, Key-dependent S-box, Inverse S-box, Round key, Cryptanalysis

1 INTRODUCTION
In 1997, the National Institute of Standards and Technology sent out a call for candidates to replace the aging and obsolete Data Encryption Standard (DES). NIST subsequently announced the selection of Rijndael as the proposed Advanced Encryption Standard (AES) [1]. Rijndael, submitted by Joan Daemen and Vincent Rijmen, is designed for use with keys of lengths 128, 192, and 256 bits.
Although AES uses the same three key size
alternatives, it limits the block length to 128 bits
[2].
The input to the AES encryption and decryption algorithm is a 128-bit block. The key provided as input is expanded into an array of key schedule words, with each word being four bytes; the total key schedule for a 128-bit key is 44 words. A full encryption goes through Nr rounds (Nr = 10, 12, 14) [3][4][5]. A Rijndael round consists of four different stages:
1. SubBytes transformation: (S-box substitution) provides non-linearity and confusion, constructed by a multiplicative inverse and an affine transformation.
2. ShiftRows: (rotations) provides inter-column diffusion, where the bytes in the last three rows of the state are cyclically shifted.
3. MixColumns: (linear combination) provides inter-byte diffusion, where each column vector is multiplied by a fixed matrix; the bytes are treated as polynomials rather than numbers.
4. AddRoundKey: (each byte of the state is XORed with a round key byte) provides confusion.
The encryption process begins with an AddRoundKey stage, followed by nine rounds of SubBytes, ShiftRows, MixColumns and AddRoundKey transformations, performed respectively and iteratively (Nr times) depending on the key length. The final round includes only three stages: SubBytes, ShiftRows and AddRoundKey. All of the operations are byte-oriented. The encryption and decryption structure consists of several transformation stages, as shown in Fig. 1 [2].
The decryption structure is essentially the same as encryption, but SubBytes, ShiftRows and MixColumns are replaced by their inverses, InvSubBytes, InvShiftRows and InvMixColumns, together with AddRoundKey, applied in the reverse order of the encryption structure.
Figure 1. AES encryption and decryption
This paper introduces a new approach for designing a key-dependent Advanced Encryption Standard algorithm. The paper is organized as follows: Section 2 illustrates the proposed methodology for designing a key-dependent AES algorithm, Section 3 explains the evaluation criteria, Section 4 suggests future enhancements to the proposed design, and Section 5 summarizes and concludes the paper.
Many efforts have been made to redesign and reconstruct the AES algorithm to improve its performance. [6] proposed a new key-dependent AES in which the S-box is completely replaced by a new S-box. However, the original AES S-box was designed and tested thoroughly against linear and differential attacks, and any attempt to replace it without thorough analysis would violate the original AES design and objectives [6].
2 A PROPOSED DESIGN FOR A NEW KEY-DEPENDENT AES
In recent years there have been many cryptanalysis attempts on block ciphers, and most of the attacks prove effective on either the simplified or the full version of AES. For this reason, we propose this cipher to help satisfy current and foreseeable future requirements [7]. Fig. 2 shows the input for a single original AES round.
A single 128-bit block is the input to the encryption and decryption algorithms; this block is depicted as a square matrix of bytes and is copied into the state array, which is modified at each stage of encryption or decryption.
Figure 2. Input for single AES round
The encryption and decryption process of this new design resembles the original AES: it has confusion and diffusion layers and a key addition layer as well,
but where the original AES consists of four stages, this new design consists of five. The extra stage, known as S-Box Rotation, is introduced at the beginning of the round function.
After the final stage of transformation, the state is copied to an output matrix. Similarly, the 128-bit key is expanded into an array of key schedule words: each word is four bytes, and the total key schedule is 44 words for the 128-bit key, arranged similarly to a state.
Fig. 3 and Fig. 4 show the new proposed key-dependent encryption and decryption algorithms. The remaining four stages are unchanged from AES.
Figure 3. New proposed key-dependent encryption algorithm
Figure 4. New proposed key-dependent decryption algorithm
2.1 S-Box Rotation and SubBytes/Inverse
SubBytes Transformations
This new proposed design uses rotation in the S-box operation. The AES S-box and its inverse are shown in Fig. 5 and Fig. 6 respectively.
Figure 5. AES S-box
Figure 6. Inverse S-box
In the SubBytes process, each byte in the state is replaced with its entry in the S-box. Consider a byte of the state, say C2; it will be replaced by 25 (in hex), as shown in Figure 5:

25(Hex) = S-box(C2)

During the decryption process, the InverseSubBytes operation performs the inverse operation using the inverse S-box, as shown in Figure 6, so the value 25(Hex) is replaced by the original value C2:

C2 = Inv-S-box(25)
2.2 Key-Dependent S-Box Generation
Static S-box means the same S-box will be used in
each round while the key-dependent S-box means
the S-box changes in each round depending on the
key and number of rounds. Fixed S-box allows
attackers to study S-box and find weak points while
by using key-dependent S-Box approach, it makes it
harder for attacker to do any offline analysis of an
attack of one particular set of S-boxes. However,
overall performance in terms of security and speed
has not been sufficiently addressed and widely
investigated [8].
By making the S-box key-dependent, we assume the AES algorithm becomes stronger. We use the property of the above S-box and apply it to an example to make it key-dependent: the round key generated is used to derive a value that rotates the S-box. The subkeys (round keys) are derived from the cipher key using the key schedule algorithm [9]. Suppose that for a particular round n the round key value is 7D558EAC0E403CD82D95275E37199242 (in hex). Applying the XOR operation to all the bytes gives

9F(Hex) = 7D ⊕ 55 ⊕ 8E ⊕ AC ⊕ 0E ⊕ 40 ⊕ 3C ⊕ D8 ⊕ 2D ⊕ 95 ⊕ 27 ⊕ 5E ⊕ 37 ⊕ 19 ⊕ 92 ⊕ 42

This routine takes the cipher key as input and generates the key-dependent S-box from it. The resulting 9F(Hex) is then used to rotate the S-box.
We next rotate the S-box to the right by this value, 159 (9F in hex); the new S-box is shown in Figure 7. During decryption, the InverseSubBytes operation performs the inverse operation using the inverse S-box to get back the original value.
Figure 7. S-box rotated 9F(Hex) positions to the right
The rotation value is therefore dependent on the entire round key. This property holds for all 256 possible rotations, and it can be used to make the S-box key-dependent [6][10][11][12].
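A minimal Python sketch of this derivation follows, assuming the S-box is held as a flat 256-entry list (the function names are illustrative, not from the original design):

from functools import reduce

def rotation_value(round_key):
    # XOR-fold the 16 round-key bytes into one byte; for the round key
    # 7D558EAC0E403CD82D95275E37199242 this yields 0x9F.
    return reduce(lambda a, b: a ^ b, round_key)

def rotate_sbox(sbox, r):
    # Rotate the flat 256-entry S-box right by r positions and rebuild
    # the inverse table so that InvSubBytes recovers the original byte.
    r %= 256
    rotated = sbox[-r:] + sbox[:-r] if r else list(sbox)
    inverse = [0] * 256
    for i, s in enumerate(rotated):
        inverse[s] = i
    return rotated, inverse

key = bytes.fromhex("7D558EAC0E403CD82D95275E37199242")
assert rotation_value(key) == 0x9F   # 159 rotations to the right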
3 EVALUATION CRITERIA
Having a good block cipher does not by itself guarantee better security; in particular, a block cipher must withstand cryptanalysis, whose aim is to find weaknesses in and compromise cryptosystems
(such attempts are also referred to as attacks). Since Rijndael was announced as the AES and became a standard in 2001, there have been various attempts at cryptanalyzing the cipher. To make sure that our proposed design improves the security of AES, we perform two main evaluation tests: the NIST Statistical Test Suite (consisting of 16 tests) and one cryptanalysis attack [13].
In this experiment, the data we use should pass all 16 NIST statistical tests. The main test is the frequency test: all data should pass it before proceeding to the other 15 tests, since data that fails the frequency test will fail the entire suite.
3.1 Experimental Result
20000 samples were prepared for both the original AES algorithm and the new proposed algorithm. Experiments were performed with two different keys, and the observed results are shown in Table 1 and Table 2.
Table 1. Plaintext and ciphertext (key 1) samples

Plaintext:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Ciphertext: 50 94 43 9C F4 BE 6E F4 9C 67 1E E4 54 33 4B 95

Plaintext:  01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Ciphertext: 10 18 0C 6E A3 4A EE 28 BA 7B 25 1C 2F A6 68 0C

Table 2. Plaintext and ciphertext (key 2) samples

Plaintext:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Ciphertext: B2 43 B5 85 CA DD F4 4E F5 E6 6E D1 D7 08 B3 0B

Plaintext:  10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Ciphertext: 2F 1D C4 1D DB 52 D6 9D A5 74 99 69 9B 16 31 9E
3.2 Avalanche Effect Measurements
A small change in the plaintext or key that gives a large change in the ciphertext is known as the avalanche effect. A block cipher is said to have poor randomization if it does not exhibit the avalanche effect to a significant degree [14]. For a good-quality block cipher, such a small change in either key or plaintext should cause a drastic change in the ciphertext.
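As a hedged sketch of such a measurement, the snippet below uses PyCryptodome's standard AES as the reference cipher (the proposed key-dependent variant would be substituted in practice); it flips one plaintext bit and counts how many of the 128 ciphertext bits change, which should average about 64. The SAC test of Section 3.3 refines this by checking, for each output bit separately, that the flip probability is one half.

from Crypto.Cipher import AES

def avalanche_bits(key, plaintext, bit):
    # Encrypt the block, flip one plaintext bit, encrypt again, and
    # count the differing ciphertext bits.
    cipher = AES.new(key, AES.MODE_ECB)
    flipped = bytearray(plaintext)
    flipped[bit // 8] ^= 1 << (bit % 8)
    c1 = cipher.encrypt(bytes(plaintext))
    c2 = cipher.encrypt(bytes(flipped))
    return sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))

print(avalanche_bits(bytes(16), bytes(16), 0), "of 128 bits changed")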
Table 3. Avalanche effect for a 1-bit change in the plaintext

Number of    Times original AES      Times proposed algorithm    Times both give the
samples      gives better avalanche  gives better avalanche      same avalanche
20000        8754                    8802                        2444
3.3 Strict Avalanche Criterion (SAC)
A function is said to satisfy the strict avalanche criterion if, whenever a single input bit is complemented, each of the output bits changes with a probability of one half, as in [14]. To measure confusion and diffusion, the Strict Avalanche Criterion (SAC) test is presented together with its results. Table 4 shows the SAC results for a 1-bit change of plaintext in the samples:
Table 4. Strict Avalanche Criterion for a 1-bit change in the plaintext

Number of    Times original AES      Times proposed algorithm    Times both give the
samples      gives better avalanche  gives better avalanche      same avalanche
20000        8025                    8055                        3920
The results show that the enhancement of the original AES does not violate the security of the cipher: the enhanced version introduces confusion without violating the diffusion property.
4 FUTURE ENHANCEMENT
AES has been designed to have very strong resistance against the classical attacks, such as linear and differential cryptanalysis. However, since Rijndael has a very algebraic structure, new algebraic attacks have appeared [15]. After the new proposed algorithm successfully passes the 16 statistical tests, we will perform one cryptanalysis attack, the algebraic attack, in an attempt to break the cipher and test the security of this new design.
5 CONCLUSION
In this paper, a new design for enhancing the security of the AES algorithm is proposed. This design does not contradict the security of the original AES algorithm, since all the mathematical criteria of AES remain unchanged. We try to improve the security of AES by making its S-box key-dependent. We also plan to perform a cryptanalysis attack (an algebraic attack) on the new algorithm as part of the evaluation; the result will show how well the cipher withstands attempts at its subversion or evasion.
6 REFERENCES
[1] Trappe, W. and Washington, L. C.: Introduction to
Cryptography with Coding Theory. United States:
Prentice Hall, (2002).
[2] Daemen, J., and Rijmen, V.: The block cipher Rijndael. Proceedings of the Third International Conference on Smart Card Research and Applications, CARDIS '98, 1820, pp. 277-284, Berlin: Springer, (2000).
[3] Stallings, W.: Cryptography and Network Security,
Prentice Hall, (2010).
[4] Daemen, J. and Rijmen, V.: The First 10 Years of
Advanced Encryption. In IEEE Security and Privacy, vol.
8, pp. 72-74, November (2010).
[5] Federal Information Processing Standards Publications
FIPS 197, Advanced Encryption Standard (AES), 26 Nov
(2001).
[6] Fahmy, A., Shaarawy, M., El-Hadad, K., Salama, G., and
Hassanain, K.: A Proposal For A Key-Dependent AES.
3rd International Conference: Sciences of Electronic,
Technologies of Information and Telecommunications.
Tunisia: SETIT (2005).
[7] Sagheer, A. M., Al-Rawi, S. S., and Dawood, A. O.: Proposing of Developed Advance Encryption Standard. Developments in E-systems Engineering (DeSE), pp. 197-202 (2011).
[8] Zhang, R., and Zhen, L.: A Block Cipher using key-
dependent S-box and P-Box. ISIE 2008 IEEE
International Symposium on Industrial Electronics, pp.
1463-1468 (2008).
[9] Sulaiman, S., Muda, Z., and Juremi, J.: A Proposed
Approach of Key Scheduling Transformation. In Cyber
Security Cyber Warfare and Digital Forensic(CyberSec),
2012 International Conference, pp. (2012).
[10] Krishnamurthy, G. N., and Ramaswamy, V.: Making AES Stronger: AES with Key Dependent S-Box. IJCSNS International Journal of Computer Science and Network Security, vol. 8, pp. 388-398 (2008).
[11] Schneier, B.: Description of a New Variable Length Key,
64-Bit Block Cipher (Blowfish), Fast Software
Encryption, Cambridge Security Workshop Proceedings,
pp. 191-204, Springer-Verlag (1994).
[12] Schneier, B., Kelsey, J., Whiting, D., Wagner, D., Hall,
C., and Ferguson, N.: The Twofish Encryption Algorithm.
Proc. 1st Advanced Encryption Standard (AES)
Conference (1998).
[13] Rukhin, A., et al.: A Statistical Test Suite for Random and
Pseudorandom Number Generators for Cryptographic
Applications. NIST Special Publication 800-22 (2001).
[14] Forre, R.: The strict avalanche criterion: spectral properties of boolean functions and an extended definition. Advances in Cryptology, in: S. Goldwasser (Ed.), Crypto '88, Lecture Notes in Computer Science, vol. 403, Springer-Verlag, pp. 450-468 (1990).
[15] Ferguson, N., Schroeppel, R., and Whiting, D.: A simple
algebraic representation of Rijndael. Selected Areas in
Cryptography, pp. 103-111 (2001).
Runtime Monitoring Technique to handle Tautology based SQL Injection Attacks

Ramya Dharam and Sajjan G. Shiva

Department of Computer Science
University of Memphis
Memphis, TN, USA
{rdharam, sshiva} @memphis.edu

ABSTRACT

Software systems, like web applications, are often used to provide reliable online services such as banking, shopping, and social networking to users. The increasing use of such systems has led to a high need for assuring the confidentiality, integrity, and availability of user data. SQL Injection Attacks (SQLIAs) are one of the major security threats to web applications; they allow attackers to gain unauthorized access to the back-end database containing confidential user information. In this paper we present and evaluate a Runtime Monitoring Technique to detect and prevent tautology based SQLIAs in web applications. Our technique monitors the behavior of the application post-deployment to identify all tautology based SQLIAs. A framework called the Runtime Monitoring Framework, which implements our technique, is used in the development of runtime monitors. The framework uses two pre-deployment testing techniques, basis-path and data-flow testing, to identify a minimal set of all legal/valid execution paths of the application. Runtime monitors are then developed and integrated to monitor the application post-deployment along the identified valid/legal execution paths. For evaluation we targeted a subject application with a large number of both legitimate inputs and illegitimate tautology based inputs, and measured the performance of the proposed technique. The results of our study show that the runtime monitor developed for the application successfully detected all the tautology based attacks without generating any false positives.

KEYWORDS

Runtime Monitors, Post-deployment Monitoring, Tautology, SQL Injection Attacks (SQLIAs).

1 INTRODUCTION
In recent years our dependence on web applications in everyday routine activities has increased drastically. We therefore expect these web applications to be secure and reliable when we are paying bills, shopping online, making transactions, etc. These web applications are backed by databases containing confidential user data, such as financial, medical, and personal information records, which are highly sensitive and valuable. This in turn makes web applications an ideal target for attacks. Attacks targeted at web applications include SQL Injection Attacks (SQLIAs), Cross-Site Scripting (CSS), Cross-Site Request Forgery (CSRF), Path Traversal Attacks, etc.

SQLIAs are identified as the major security threat to web applications [1]. They give attackers access to the databases
underlying web applications to retrieve, modify and delete confidential user information stored in the database, resulting in security violations, identity theft, etc. SQLIAs occur when data provided by the user is included directly in a SQL query and is not properly validated. Attackers take advantage of this improper input validation and submit input strings that contain specially encoded database commands [2]. The different kinds of SQLIAs known to date are discussed in [3, 4] and include SQL tautologies, illegal queries, union queries, piggy-backed queries, etc.
Even though the vulnerabilities leading to SQLIAs are well understood, the attack continues to be a problem due to the lack of effective techniques for detecting and preventing it. In spite of improved coding practices that should theoretically prevent SQLIAs, techniques such as defensive programming have been less effective in addressing the problem, and attackers continue to find new exploits that circumvent the input checks used by programmers [3]. Software testing techniques specifically designed to target SQLIAs provide only partial solutions, for the following reasons: firstly, web applications have a very short time-to-market, so developers often tend to neglect the testing process; secondly, complete testing of software is considered too time-consuming, expensive, and difficult [5]; and thirdly, testing does not guarantee that all possible behaviors of the implementation are explored, analyzed, and tested [6]. This lack of assurance from testing of web applications has led to the exploitation of security vulnerabilities by attackers to perform attacks such as SQLIAs.
Pre-deployment testing techniques, such as static analysis and source code review, perform security tests on the software before it is deployed in its actual target environment. These techniques are either too imprecise or focus only on a specific aspect of the problem [7]. Post-deployment testing techniques, such as vulnerability scanning and penetration testing, perform security tests on the software deployed in its actual target environment [8]. These techniques are either signature-based or often suffer from completeness issues that sometimes result in false negatives [8].
In this paper, we introduce a framework called the Runtime Monitoring Framework that is used by our technique to handle tautology based SQLIAs. The framework uses knowledge gained from pre-deployment testing of the web application to develop runtime monitors which perform post-deployment monitoring of the web application. Basis-path and data-flow testing are the two pre-deployment testing techniques used by the framework to initially find a minimal set of legal/valid execution paths of the application. Runtime monitors are then developed for the identified paths and integrated into the application. The integrated monitors observe the behavior of the application along the valid/legal paths during its post-deployment, and any deviation is immediately identified as a possible occurrence of a tautology based SQLIA. The monitor then halts the execution of the application and notifies the administrator about the attack.
In this paper, we also present a preliminary evaluation of our proposed technique. We implemented the technique on a target web application and provided it with a large set of legitimate and illegitimate inputs. The results obtained were promising, as the runtime monitor developed for the subject was able to handle all of the tautology based SQLIAs.
The rest of our paper is organized as
follows. In Section 2, we discuss our
research strategy and methodology. In
Section 3, we discuss the
implementation of our proposed
framework. Evaluation and results
obtained are discussed in Section 4. We
discuss related work in Section 5 and
conclude in Section 6 with a discussion
of future work.
2 RESEARCH STRATEGY AND
METHODOLOGY
In this section, our research strategy and methodology for designing the Runtime Monitoring Framework are discussed. The main idea of our work is to check whether the current behavior of the application satisfies the specified behavior; any deviation in behavior is immediately detected as a possible exploitation of a SQL injection vulnerability. Our framework uses information gathered from pre-deployment testing of the web application to help develop the runtime monitor.
2.1 Modeling Tautology based SQL
Injection Attacks
The structure of a web application is shown below in Figure 1: a three-tiered architecture consisting of a web browser, an application server, and a back-end database server. A web application with such an architecture constructs database queries dynamically from the received user input and dispatches the queries over an application programming interface (API) to the underlying database for execution.

Figure 1: Web Application Structure

The application then retrieves and presents data to the user based on the user's input. Serious security problems arise if the user's inputs are not handled properly. In particular, a SQLIA occurs when a malicious user passes crafted input as part of the query, causing the web application to generate and send a query that in turn results in unintended behavior of the application and the loss of confidential user information.
For example, if a database contains
usernames and passwords, the
application may contain code such as the
following:
String query = "SELECT * FROM employeeinfo WHERE name = '"
             + request.getParameter("name")
             + "' AND password = '"
             + request.getParameter("password") + "'";
This code generates a query intended to be used to authenticate a user who tries to log in to a web site. If a malicious user enters ' OR 1 = 1 -- instead of a legitimate username and leaves the password field empty, the query string becomes:

SELECT * FROM employeeinfo WHERE name = '' OR 1 = 1 --' AND password = '';
Any website that uses this code would be vulnerable to tautology based SQLIAs. The character sequence -- indicates the beginning of a comment, and everything following it is ignored. The database interprets everything after the WHERE token as a conditional statement, and the inclusion of the OR 1=1 clause turns this conditional into a tautology whose condition always evaluates to true. Thus, when the above query is executed, the user bypasses the authentication logic and more than one record is returned by the database. As a result, information about all the users is displayed by the application and the attack succeeds.
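The effect can be reproduced in a few self-contained lines; the sketch below uses Python's built-in SQLite in place of the application's back end and mirrors the string concatenation shown above:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employeeinfo (name TEXT, password TEXT)")
conn.executemany("INSERT INTO employeeinfo VALUES (?, ?)",
                 [("athomas", "andrew999"), ("mfranklin", "secret")])

name = "' OR 1 = 1 --"       # attacker-controlled username field
password = ""                # password field left empty
query = ("SELECT * FROM employeeinfo WHERE name = '" + name +
         "' AND password = '" + password + "'")
print(conn.execute(query).fetchall())   # every row is returned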
2.2 Proposed Framework
The basic idea of our proposed framework is to use information gathered from pre-deployment testing of the web application to help develop a runtime monitor that detects and prevents tautology based SQLIAs. Our proposed framework first uses a software repository, consisting of a collection of documents related to requirements, security specifications, source code, etc., to find the critical variables. A combination of basis-path and data-flow testing techniques is then used to find all the legal/valid execution paths that the critical variables can take during their lifetime in the application.
Data-flow analysis testing [10] is an effective approach to detect improper use of data and can be performed either statically or dynamically. In static data-flow analysis, the source code is inspected to track the sequences of uses of data items without executing it; in dynamic data-flow analysis, the sequences of actions are tracked during execution of the program. In our proposed framework we use static data-flow analysis. Basis-path testing is a white-box testing technique that identifies the minimal set of all legal execution paths [11] from both the control flow graph of the program and the calculation of cyclomatic complexity, the measure of the number of independent paths in the program being considered. We thus make use of the aforementioned pre-deployment testing techniques, i.e. basis-path and data-flow testing, to identify the minimum number of critical paths to be monitored during the post-deployment phase of the application.
A runtime monitor is then developed to observe the path taken by the critical variables and check it for compliance with the obtained legal paths. During runtime, if the path taken by the identified critical variables violates the legal paths, this implies that the critical variables contain malicious input from the external user and that the query formed is trying to access confidential information from the back-end database. This abnormal behavior of the application, due to the critical variables, is identified by the runtime monitor and immediately reported to the administrator. The framework is shown in Figure 2 and consists of three main steps, which are discussed below in detail.
Critical Variables Identification:
The software repository is scanned to identify all the critical variables present in the source code. Critical variables are those which interact with the external world by accepting user input and which are part of critical operations involving query executions.
Path Identification Function:
By combining data-flow and basis-path testing, the legal execution paths of the application are obtained. Data-flow testing of the critical variables identifies all the legal sub-paths that the critical variables can take during execution, while basis-path testing identifies the minimum number of legal execution paths of the application. Since basis-path testing leads to a reduced number of monitorable paths, the complexity of our proposed technique, in terms of integrating monitors across multiple paths, is also reduced. The path identification function builds the set of critical paths to be monitored in the application.
Let C = {C_1, C_2, ..., C_m} be the set of m critical variables identified during the critical variable identification phase.

Let P_C = {P_{C_1}} ∪ {P_{C_2}} ∪ ... ∪ {P_{C_m}} be the set of critical-variable sub-paths, where P_{C_i} is the set of all valid sub-paths a critical variable C_i can take during its lifetime in the application, identified by performing data-flow testing on C_i, with i ∈ [0, m].

Let P = {P_1, P_2, ..., P_k} be the set of k legal paths identified using basis-path testing, and let CP be the set of paths we intend to monitor. CP is identified using the pseudo code shown below:

CP = { }
for every P_j ∈ P and
for every P_{C_i} ∈ P_C
    if (P_j ∩ P_{C_i} == P_{C_i})
        CP = CP ∪ { P_j }

where i ∈ [0, m] and j ∈ [0, k].
We thus identify all the critical paths of
the application to be monitored.
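As an illustration, the pseudo code can be realized directly with set operations; in this hypothetical sketch, paths and sub-paths are modelled as frozensets of edge labels:

def critical_paths(P, P_C):
    # CP = { Pj in P : Pj intersect P_Ci == P_Ci for some P_Ci in P_C }
    CP = set()
    for Pj in P:
        for PCi in P_C:
            if Pj & PCi == PCi:
                CP.add(Pj)
    return CP

# Hypothetical example: only the first path contains the sub-path set.
P = [frozenset({"e1", "e2", "e3"}), frozenset({"e1", "e4"})]
P_C = [frozenset({"e1", "e2"})]
print(critical_paths(P, P_C))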
Figure 2. Runtime monitoring framework for
tautology based SQLIAs.
Monitor Development and
Integration:
In this phase, we develop a monitor for
the identified critical paths using
AspectJ [12]. The developed monitor is
then integrated with the respective
module of the application for monitoring
the critical paths. Henceforth, on every
query execution, the runtime monitor
tracks the identified critical variables by
monitoring their execution path. When a
critical variable follows an invalid path,
the runtime monitor immediately detects
the abnormal behavior of the application
due to the critical variable and notifies
the administrator.
Thus, using the phases discussed above, our proposed framework yields a runtime monitoring technique to handle tautology based SQLIAs that uses the knowledge gained from pre-deployment testing of the web application to develop runtime monitors.
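The monitors themselves are built with AspectJ [12]; purely as a language-neutral illustration, the hypothetical Python sketch below performs the equivalent check before each query execution:

# Legal paths obtained from basis-path and data-flow testing
# (hypothetical labels, not from the subject application).
CP = {("read_input", "validate", "build_query", "execute_query")}

def check_path(observed):
    # Called just before a query executes, with the path the critical
    # variable actually took; any deviation is treated as a possible
    # tautology based SQLIA.
    if tuple(observed) not in CP:
        raise RuntimeError("Abnormal path: possible SQLIA, halting "
                           "and notifying the administrator")

check_path(["read_input", "validate", "build_query", "execute_query"])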
3 IMPLEMENTATION
To evaluate our approach, we developed a framework called the Runtime Monitoring Framework to handle tautology based SQLIAs in Java-based web applications. We chose to target Java because it is a commonly used language for developing web applications. Figure 3 shows the high-level view of the Runtime Monitoring Framework. As the figure shows, the framework consists of the following modules: i) Critical Variables Identification Module, ii) Critical Paths Identification Module, and iii) Runtime Monitor Development and Instrumentation Module.
Critical Variables Identification
Module:
The Critical Variables Identification Module identifies all the critical variables, i.e. variables that are initialized with the input provided by an external user and that become part of a SQL query. The input to this module is a Java web application, and it outputs the critical variables. In our present implementation this is done manually; we intend to automate the process in a future implementation.
Figure 3: High Level View of Runtime
Monitoring Framework.
Critical Paths Identification Module:
The Critical Paths Identification Module identifies the critical paths generated by the data-flow and basis-path testing techniques. The module takes the identified critical variables as input and returns the paths that need to be monitored. Data-flow testing of the critical variables helps identify all the legal sub-paths that the critical variables can take during execution, and basis-path testing identifies the minimum number of legal execution paths of the application. Since basis-path testing leads to a reduced number of monitorable paths, the complexity of our proposed technique, in terms of integrating monitors across multiple paths, is also reduced. The path identification function builds the set of critical paths to be monitored in the application to detect and prevent tautology based SQLIAs.
Runtime Monitors Development and
Instrumentation Module:
This module develops the runtime monitor for the identified critical paths and instruments it into the appropriate part of the source code. AspectJ [12] is used to generate the monitor and integrate it into the application.
4 EVALUATION
In this section, we discuss the evaluation conducted to assess the effectiveness and efficiency of our proposed technique. The following are the research questions (RQ) for which we intend to find solutions.
RQ1: What percentage of tautology
based SQLIAs can our proposed
technique detect and prevent that would
otherwise go undetected? (False
Negative Rate)
RQ2: What percentage of legitimate
accesses does our proposed technique
identify as tautology based SQLIAs and
prevents them from executing on the
database? (False Positive Rate)
4.1 Experimental Setup
To investigate the research questions, we developed an interactive web application called the Employee Information Retrieval Application that accepts HTTP requests from a client, generates SQL queries, and issues them to the underlying database.
4.1.1 Subject
The subject application we developed for our experimentation is an Employee Information Retrieval Application. It accepts input from an external user through a web form, uses the input to build queries against an underlying database, and retrieves the relevant information for the particular user. The front end of the application is developed in HTML, a Java Servlet processes the input received from the user and connects to the back-end database for retrieving and displaying the information, and a MySQL database is used at the back end to store the employee-related information. The table empinfo consists of six fields: UserName, Password, SSN, Name, Age and Department.
When legitimate input, i.e. a username and password, is provided by the user, the submitted credentials are used to dynamically build the query as shown below:

String query = "SELECT * FROM empinfo WHERE username = 'athomas' AND password = 'andrew999'";

The query executes successfully and the application returns the relevant records to the user.
4.1.2 Application of Runtime Monitors
to the subject.
In this section, we describe the results
obtained when the runtime monitor
developed using the proposed Runtime
Monitoring Framework is instrumented
into the web application discussed
above.
When an illegitimate input such as OR
1 = 1 -- and is provided by an
external user for username and password
variables respectively, this causes a
tautology based SQLIA on the
application. The submitted credentials are then used to dynamically build the query shown below:

String query = "SELECT * FROM empinfo WHERE username = '' OR 1 = 1 --' AND password = ''";
The illegitimate input provided by the external user causes the application to behave abnormally by displaying all the records present in the database. The runtime monitor instrumented in the web application detects this abnormal behavior and halts the execution of the application. The monitor also notifies the administrator about the attack.
4.1.3 Discussion of Results
Table 1 summarizes the results obtained when a set of illegitimate tautology based inputs is provided to the instrumented web application described above. The Attack Detected column has the value YES if the attack is detected successfully by our proposed technique, and NO otherwise.
Table 1: Results obtained by the application of our framework to detect illegitimate queries

Illegitimate Inputs (username / password)                  Attack Detected
OR 1 = 1 --            / (empty)                           YES
athomas                / OR 1 = 1                          YES
111 OR true#           / (empty)                           YES
mfranklin              / aaa OR 1=1                        YES
111 OR 1=1 --          / (empty)                           YES
athomas                / OR username between A and Z       YES
mfranklin              / admin OR 1>4                      YES
mphelps                / admin OR 4>1                      YES
username OR 1=1 --     / (empty)                           YES
athomas                / password OR 1=1                   YES
admin OR 1<2 --        / (empty)                           YES
admin OR 7>6 --        / (empty)                           YES
Table 2 summarizes the results obtained when a set of legitimate inputs is applied to the instrumented web application. The Query Successful column has the value YES in case of successful query execution, and NO if the legitimate query is falsely detected as an attack.
Table 2: Results obtained by the application of our framework to detect legitimate queries

Legitimate Inputs (username / password)                    Query Successful
mdavid                 / ************                      YES
rrandy                 / ************                      YES
mfranklin              / *************                     YES
JSmith765              / *************                     YES
Anderson9John          / **********                        YES
LAdams                 / *********                         YES
sparker                / **********                        YES
LauraAdams             / ************                      YES
SGreenSFO              / *************                     YES
parker765              / ***********                       YES
We used the 12 illegitimate tautology based inputs shown in Table 1 and the 10 legitimate inputs shown in Table 2 for our evaluation. The results of our study
clearly demonstrate the success of our runtime monitoring technique in handling tautology based SQLIAs. Our proposed technique successfully allowed all the legitimate queries to execute on the application and detected all the tautology based SQLIAs, i.e. both false positives and false negatives were handled effectively. Though we performed our experimentation on a simple target web application and with a small number of inputs, the preliminary results are encouraging, because we used realistic tautology based attacks as the illegitimate inputs for the instrumented subject application. However, more extensive experimentation is needed before drawing definitive conclusions.
5 RELATED WORK
Over the past decade, much work has been done by the research community on new techniques to detect and prevent SQLIAs. In this section, we discuss the state of the art in SQLIA detection and prevention techniques and classify them into two categories: (i) pre-deployment testing techniques and (ii) post-deployment testing techniques.
5.1 Pre-deployment Testing
Techniques
Pre-deployment techniques consist of methodologies used earlier in the software development life cycle, i.e. before the software has been deployed in the real world, to detect SQLIAs in web applications. The techniques discussed in this section also fall under the category of static analysis, in which applications are tested for possible SQLIAs without being executed.
Huang et al. [13] proposed WebSSARI, a tool that uses information flow analysis to detect input-validation-related errors. It uses static analysis to check taint flows against preconditions for sensitive functions. The analysis detects the points at which preconditions have not been met and can suggest filters and sanitization functions to be automatically added to the application to satisfy these preconditions. It treats as sanitized any input that has passed through a predefined set of filters.
Wassermann et al. [14] proposed a static analysis framework that operates directly on the source code of the application to prevent tautology attacks. Static analysis is used to obtain the set of SQL queries that a program may generate, represented as a finite state automaton. The framework then applies an algorithm to the generated automaton to check whether a tautology is present; the existence of a tautology indicates a potential vulnerability. The important limitation of this tautology checker is that it can detect only tautology based SQLIAs and no other types.
Gould et al. [15] describe JDBC Checker, a sound static analysis tool to verify the correctness of dynamically generated query strings. JDBC Checker can detect SQL injection vulnerabilities caused by improper type checking of user inputs, which is a root cause of such vulnerabilities in code. The technique cannot catch more general forms of SQLIAs, because most of these attacks consist of syntactically and type-correct queries, but it can be used to prevent attacks that take advantage of type mismatches in a dynamically generated query string.
Livshits et al. [16] propose a static analysis technique to detect SQL injection vulnerabilities in web applications. User-provided specifications of vulnerability patterns, written in the PQL language, are applied to Java bytecode, and all vulnerabilities matching a specification are found automatically in the statically analyzed code. With static analysis, all potential security violations can be found without executing the application.
Fu et al. [17] proposed SAFELI, a static analysis tool that can automatically generate test cases exploiting SQL injection vulnerabilities in ASP.NET web applications. SAFELI instruments the bytecode of the web application and uses symbolic execution to statically inspect security vulnerabilities. Whenever a hotspot that submits a SQL query is encountered, a hybrid string equation is constructed to find the initial values of web controls that might be used to launch SQLIAs. Once the equation is solved by a hybrid string solver, the solution is used to construct a test case, which is replayed by an automated GUI testing tool. SAFELI analyzes the source code and can identify delicate vulnerabilities that cannot be discovered by black-box vulnerability scanners. The main drawback of this technique is that the approach can discover SQLIAs only in Microsoft-based products.
Mui et al. [18] propose ASSIST to protect Java based web applications against SQLIAs. ASSIST uses a combination of static analysis and program transformation to automatically identify the locations of SQL injection vulnerabilities in code and instrument them with calls to sanitization functions. This automated technique helps developers eliminate the tedious process of manually inspecting and sanitizing code.
All the techniques mentioned above detect SQLIAs in web applications before they are deployed in the real world; they use static analysis, i.e. they do not execute the application to detect vulnerabilities but instead check the code for any possibility of attack. In reality, however, many SQLIAs occur after the software is deployed. In this perspective, our proposed framework focuses on developing a software runtime monitor that uses runtime monitoring to detect SQLIAs based on the behavior of the web application post-deployment.
5.2 Post-deployment Testing
Techniques
Post-deployment techniques consist of dynamic analysis techniques that can be used to detect SQLIAs in web applications after they have been deployed in the real world. In this section, we discuss the existing techniques that fall under this category and compare them with our proposed approach.
Buehrer et al. [19] present a novel runtime technique, SQLGuard, to eliminate SQL injection. The technique is based on comparing, at runtime, the parse tree of the SQL statement before the inclusion of user input with the one resulting after the inclusion of the input. SQLGuard requires the application developer to rewrite code to use a special intermediate library or to manually insert special markers into the code where user input is added to a dynamically generated query. SQLGuard uses a secret key to delimit user input during parsing by the runtime checker, so the security of the approach depends on the attacker not being able to discover the key.
Halfond et al. [7, 20, 21] propose a model-based technique called AMNESIA for the detection and prevention of SQLIAs that combines static and dynamic analysis. During the static phase, models of the different types of queries that an application can legally generate at each point of access to the database are built. During the dynamic phase, queries are intercepted before they are sent to the database and checked against the statically built models. If a query violates the model, a SQLIA is detected and further queries are prevented from accessing the database. The accuracy of AMNESIA depends on the static analysis used to build the query models.
Su et al. [22] proposed SQLCheck, a runtime checking system. SQLCheck first tracks the user-input substrings in the program and checks those substrings against a syntactic policy that specifies all the permitted syntactic forms, producing an annotated query, also called an augmented query. A parser is then used by SQLCheck to parse the augmented query and to determine whether it is legitimate. If the query parses successfully, it has met the syntactic constraints and is considered legitimate; if the query does not pass the parser, it is considered a command injection attack. This approach uses a secret key to discover user inputs in the SQL queries; thus, the security of the approach relies on attackers not being able to discover the key. The approach also requires the application developer to either rewrite code to use a special intermediate library or manually insert special markers into the code where user input is added to a dynamically generated query.
Bisht et al. [23] present CANDID, a novel and powerful mechanism for automatically transforming web applications to render them safe against all SQLIAs. The proposed technique dynamically mines the programmer-intended query structure on any input and detects attacks by comparing it against the structure of the actual query issued. CANDID retrofits web applications written in Java through a program transformation, and its natural and simple approach turns out to be very powerful for the detection of SQLIAs.
The combined static and dynamic analysis approaches discussed above [7, 19, 20, 21, 22, 23] use static analysis to identify the intended structure of SQL queries in the absence of user inputs, by analyzing the source code and constructing syntactic models such as parse trees. The approaches then use dynamic analysis to detect a SQLIA at runtime when the
syntactic structure of the dynamically generated query, which includes the user inputs, deviates from the statically generated syntactic models. In our proposed approach, pre-deployment testing techniques such as data-flow and basis-path testing are used first to find the valid/legal behaviors of the application in the presence of user input. Runtime monitoring of the application is then performed with the developed monitors to see whether the execution of the application deviates from the specified valid/legal paths. Any deviation observed by the monitor is identified as a possible exploitation of a SQLIA vulnerability and is immediately notified to the administrator.
Halfond et al. [2] proposed a highly automated approach for the dynamic detection and prevention of SQLIAs. The approach is based on dynamic tainting, which has been widely used to address security problems related to input validation. Traditional dynamic tainting approaches mark untrusted data from user input as tainted, track the flow of tainted data at runtime, and prevent this data from being used in potentially harmful ways. Unlike existing dynamic tainting techniques, the proposed approach is based on the novel concept of positive tainting, i.e. the identification and marking of trusted instead of untrusted data. The approach performs accurate taint propagation by precisely tracking trust markings at the character level; it performs syntax-aware evaluation of query strings before they are sent to the database and blocks all queries whose non-literal parts (i.e. SQL keywords and operators) contain one or more characters without trust markings.
Boyd et al. [24] proposed SQLrand, an approach based on instruction-set randomization. The standard SQL keywords in queries are modified by appending a random integer value at design time. At runtime, a proxy that sits between the client and the database server intercepts the SQL queries and de-randomizes them by removing the inserted random integer before submitting them to the database. Any malicious user attempting an SQLIA will therefore not succeed, because the user input inserted into the randomized query is classified as a set of non-keywords, resulting in an invalid expression. SQLrand requires the developers to randomize the SQL queries in the application by appending a random integer value, so its security relies on attackers not being able to discover that value. In our proposed method, SQL queries are written using standard keywords and the monitors are developed and instrumented into the source code automatically. Also, the need to deploy a proxy is eliminated.
Pietraszek et al. [25] introduced CSSE, a method to detect and prevent injection attacks. CSSE works by automatically marking all user-originated data with metadata about its origin and ensuring that this metadata is preserved and updated when operations are performed on the data. The metadata enables a CSSE-enabled platform to automatically carry out the necessary checks at a very late stage and to independently determine and execute the appropriate checks on the data it previously marked as unsafe. CSSE is transparent to the application developer, as the necessary checks are enforced at
the platform level and neither
modification nor analysis of the
application is required.
Huang et al. [26] proposed WAVES, a black-box technique for testing web applications for SQL injection attacks. The technique uses a web crawler to identify all points in a web application that can be used to inject SQLIAs. It then builds attacks that target those spots based on a list of patterns and monitors the application's response to the attacks, using machine learning to improve its attack methodology. WAVES improves on traditional penetration testing by using machine learning approaches to guide its testing.
Valeur et al. [27] proposed an Intrusion Detection System (IDS) based on a machine learning technique to detect SQLIAs. The proposed system uses an anomaly-based detection approach and learns profiles, using a number of different models, of the normal database access performed by web applications. During the training phase, profiles are learned automatically by analyzing a number of sample database accesses. During the detection phase, anomalous queries that lead to SQLIAs are identified. The IDS detects attacks successfully, but the overall quality of an IDS depends on the quality of its training set, and such systems generate a large number of false alarms.
Cova et al. [28] present Swaddler, an approach for the detection of attacks against web applications based on the analysis of the internal application state. Swaddler analyzes the internal state of a web application and learns the relationships between the application's critical execution points and its internal state. The approach is based on a detailed characterization of the internal state of the application by means of a number of anomaly models. The internal state is monitored during a learning phase, in which the approach derives profiles describing the normal values of the application's state variables at critical points in its components. Then, during the detection phase, the application's execution is monitored to identify anomalous states.
Most of the post-deployment techniques discussed above generate a meta-model of possible attack queries during a learning phase of software execution. The query formed on each input provided by an external user is then compared with the generated meta-model, and appropriate decisions are made. Since these techniques depend mainly on the accuracy of the learning phase, a few SQLIAs may go unnoticed, posing a threat to the database. To overcome this, our approach monitors the legitimate behavior of the application during its execution to handle SQLIAs.
6 CONCLUSION
In this paper, we introduced a new technique to handle tautology based SQLIAs. We also proposed a framework, called the Runtime Monitoring Framework, used by our technique to develop runtime monitors, which monitor a web application post-deployment to detect and prevent tautology based SQLIAs. Thus, using our framework, we ensure that the
quality and security of the application are achieved not only during its pre-deployment phase but also during its post-deployment phase, and that any possible exploitation of a vulnerability by an external attacker is detected and prevented. We also presented the evaluation of our proposed technique. The results obtained clearly indicate that our technique successfully handled all of the tautology based SQLIAs while allowing legitimate inputs to access the database.
We further intend to automate the entire process of using the proposed framework to develop the runtime monitors, and to extend the framework to detect and prevent all other types of SQLIAs.
7 REFERENCES
[1] OWASP Open Web Application Security Project, Top Ten Most Web Application Vulnerabilities, http://www.owasp.org/index.php/OWASP_TOP_Ten_Project, April 2010.
[2] W. G. J. Halfond, A. Orso and P. Manolios, Using Positive Tainting and Syntax-aware Evaluation to Counter SQL Injection Attacks, Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2006.
[3] W. G. J. Halfond, J. Viegas and A. Orso, A Classification of SQL-Injection Attacks and Countermeasures, Proceedings of the IEEE International Symposium on Secure Software Engineering, 2006.
[4] A. Tajpour and M. Massrum, Comparison of SQL Injection Detection and Prevention Techniques, 2nd International Conference on Education Technology and Computer, 2010.
[5] G. Erdogan, Security Testing of Web Based Applications, Norwegian University of Science and Technology (NTNU), 2009.
[6] M. Kim, S. Kannan, I. Lee, O. Sokolsky and M. Vishwanathan, Computational Analysis of Runtime Monitoring - Fundamentals of Java-MaC, RV'02 Runtime Verification 2002, Volume 70, Issue 4, Dec 2002.
[7] W. G. J. Halfond and A. Orso, Combining Static Analysis and Runtime Monitoring to Counter SQL Injection Attacks, Proceedings of the 3rd International Workshop on Dynamic Analysis, 2005.
[8] Software Security Testing, Software Assurance Pocket Guide Series: Development, Volume III, Version 1.0, May 21, 2012.
[9] R. Dharam and S. G. Shiva, A Framework for Development of Runtime Monitors, International Conference on Computer and Information Sciences (ICCIS), Kuala Lumpur, Malaysia, June 2012.
[10] K. Saleh, A. S. Boujarwah and J. Al-Dallal, Anomaly Detection in Concurrent Java Programs Using Dynamic Data Flow Analysis, Information and Software Technology, Volume 43, Issue 15, December 2001.
[11] M. E. Khan, Different Approaches to White Box Testing for Finding Errors, International Journal of Software Engineering and Its Applications, Vol. 5, No. 3, July 2011.
[12] R. Miles, AspectJ Cookbook, December 27, 2004.
[13] Y. W. Huang, F. Yu, C. Hang, C. H. Tsai, D. T. Lee and S. Y. Kuo, Securing Web Application Code by Static Analysis and Runtime Protection, Proceedings of the 13th International Conference on World Wide Web, 2004.
[14] G. Wassermann and Z. Su, An Analysis Framework for Security in Web Applications, Proceedings of the FSE Workshop on Specification and Verification of Component-Based Systems, 2004.
[15] C. Gould, Z. Su and P. Devanbu, JDBC Checker: A Static Analysis Tool for SQL/JDBC Applications, Proceedings of the 26th International Conference on Software Engineering, 2004.
[16] V. B. Livshits and M. S. Lam, Finding Security Errors in Java Programs with Static Analysis, Proceedings of the 14th Usenix Security Symposium, 2005.
[17] X. Fu and K. Qian, SAFELI - SQL Injection Scanner Using Symbolic Execution, Proceedings of the 2008 Workshop on Testing, Analysis, and Verification of Web Services and Applications, 2008.
[18] R. Mui and P. Frankl, Preventing SQL Injection through Automatic Query Sanitization with ASSIST, Fourth International Workshop on Testing, Analysis and Verification of Web Software, 2010.
[19] G. T. Buehrer, B. W. Weide and P. A. G. Sivilotti, Using Parse Tree Validation to Prevent SQL Injection Attacks, International Workshop on Software Engineering and Middleware, 2005.
[20] W. G. Halfond and A. Orso, AMNESIA: Analysis and Monitoring for Neutralizing SQL-Injection Attacks, Proceedings of the IEEE and ACM International Conference on Automated Software Engineering, Nov 2005.
[21] W. G. Halfond and A. Orso, Preventing SQL Injection Attacks Using AMNESIA, Proceedings of the 28th International Conference on Software Engineering, 2006.
[22] Z. Su and G. Wassermann, The Essence of Command Injection Attacks in Web Applications, The 33rd Annual Symposium on Principles of Programming Languages, 2006.
[23] P. Bisht and P. Madhusudan, CANDID: Dynamic Candidate Evaluations for Automatic Prevention of SQL Injection Attacks, Proceedings of the 14th ACM Conference on Computer and Communications Security, 2007.
[24] S. W. Boyd and A. D. Keromytis, SQLrand: Preventing SQL Injection Attacks, Proceedings of the 2nd Applied Cryptography and Network Security Conference, June 2004.
[25] T. Pietraszek and C. V. Berghe, Defending Against Injection Attacks Through Context-Sensitive String Evaluation, Proceedings of Recent Advances in Intrusion Detection, 2005.
[26] Y. W. Huang, S. K. Huang, T. P. Lin and C. H. Tsai, Web Application Security Assessment by Fault Injection and Behavior Monitoring, Proceedings of the 12th International Conference on World Wide Web, 2003.
[27] F. Valeur, D. Mutz and G. Vigna, A Learning Based Approach to the Detection of SQL Attacks, Proceedings of the Conference on Detection of Intrusions and Malware and Vulnerability Assessment, 2005.
[28] M. Cova and D. Balzarotti, Swaddler: An Approach for the Anomaly-based Detection of State Violations in Web Applications, Proceedings of the 10th International Symposium on Recent Advances in Intrusion Detection, 2007.
Mitigating Man-In-The-Browser Attacks with Hardware-based Authentication Scheme

Fazli Bin Mat Nor 1, Kamarularifin Abd Jalil 2, Jamalul-lail Ab Manan

Faculty of Computer & Mathematical Sciences, Universiti Teknologi Mara, 40450 Shah Alam, Selangor, Malaysia
[email protected] 1, [email protected] 2

MIMOS Berhad, Technology Park Malaysia, 57000 Bukit Jalil, Kuala Lumpur, Malaysia
[email protected]

Abstract - Lack of security awareness amongst end users when dealing with online banking and electronic commerce leaves many client-side application vulnerabilities open. This enables attackers to exploit the vulnerabilities and launch client-side attacks such as the man-in-the-browser attack. The attack is designed to manipulate sensitive information via the client's application, such as the internet browser, by taking advantage of the browser's extension vulnerabilities. The attack exists due to the lack of preventive measures to detect malicious changes on the client-side platform. Therefore, in this paper we propose an enhanced remote authentication protocol with hardware-based attestation and a pseudonym identity enhancement to mitigate man-in-the-browser attacks, as well as to improve user identity privacy.

Keywords- Trusted platform module; man-in-the-middle; man-in-the-browser; remote user authentication; privacy; pseudonym
I. INTRODUCTION
Lately, client-side attacks on online banking and electronic commerce have been on the rise due to inadequate security awareness amongst end users. As a result, end users may not be aware of vulnerabilities on their machine or platform that might lead to client-side attacks such as the man-in-the-browser (MitB) attack. Furthermore, traditional security mechanisms such as antivirus software are not effective enough in preventing these attacks, because the attacks evolve to become more complex with time. For instance, man-in-the-middle (MitM) attack techniques, which mainly target the information flow between a client and a server, have now evolved into the man-in-the-browser (MitB) attack. A MitB attack is designed to infiltrate client software such as the internet browser and to manipulate or steal sensitive information.
Typically, an internet browser allows external browser extensions to be installed in order to add features beyond the common browser functions. However, allowing external extensions to be installed through the browser, with or without the user's consent, makes the end user's machine more vulnerable to MitB attacks. In fact, attackers normally use the same distribution channels as legitimate extensions to distribute malicious plugins into the user's browser. Furthermore, for lack of security awareness, end users are not able to differentiate genuine from malicious extensions. Consequently, malicious plugins are able to infiltrate the browser and manipulate any sensitive information exchanged between the client and the browser.
Therefore, in order to mitigate this issue, the integrity of the client software as well as of the user's platform used in the communication must be validated, to detect any illegitimate changes to the platform or application. Furthermore, to protect sensitive information such as user credentials from being tampered with or manipulated, it must be securely transmitted between client and server. For this reason, hardware-based remote attestation is chosen to provide the platform integrity checking requirement in our proposed protocol. Specifically, Trusted Platform Module (TPM) [1] based attestation is integrated into our protocol in order to provide a trust relationship between the interacting platforms. In addition, Secure Remote Password (SRP) [21] is chosen as our password authentication and key exchange protocol, to authenticate the user and securely transmit sensitive information based on zero-knowledge proof and a verifier-based mechanism. Finally, we apply a pseudonym technique to protect and improve the privacy of the user identity.
A. Secure Remote Password
Due to the lack of security awareness amongst end users, their credentials, such as passwords, are exposed to attacks such as brute-force and dictionary-based attacks. Typically this happens because of weak passwords, the weakest link in internet security. Once an adversary obtains the password, they are able to gain access to other sensitive information. The Secure Remote Password (SRP) protocol is specifically designed to mitigate this issue. SRP does not require strong passwords in order to achieve strong security, as it is built on zero-knowledge proof and a verifier-based mechanism [21]. In authentication, a zero-knowledge proof is a method for one party to provide authentication evidence, or proof, to the other party without revealing sensitive authentication information such as the password. The verifier-based mechanism, in turn, requires only a verifier value derived from the password to be stored at the server side. Together, both techniques make sure that the password is neither sent across the network nor stored at the server side.
The SRP protocol establishes its authentication process as follows: the client calculates a private key value (x) from the client's password (P) and a random salt (s). The client's verifier (v) is then
derived from the private key. The server then stores the client's username (i), verifier (v) and random salt (s) for authentication purposes. The steps of SRP authentication are as follows [21]:
1. The client sends its username (i) to the server.
2. The server looks up the client's verifier (v) and salt (s) based on the username (i) and sends the salt (s) to the client. Upon receiving (s), the client computes its private key (x) from (s) and the client's password (P).
3. The client computes its public key (A) from a generated random number (a) and sends it to the server.
4. At the same time, the server computes its public key (B) from the client's verifier (v) and a generated random number (b). The server then sends (B) and a randomly generated number (u) to the client.
5. The client and the server then compute the session key (S) based on the values available to each of them.
6. Both sides then generate a cryptographically strong session key (k) by hashing the session key (S).
7. The client then sends M1 as evidence that it has the correct session key, computed by hashing the values of (A), (B) and (k). The server verifies the M1 received from the client by comparing it with its own calculated M1 value.
8. The server then sends M2 to the client as evidence that it also has the correct session key. Once the client verifies that the M2 value matches its own calculated M2 value, both parties are able to use (k) as the session key for their secure communication.
Figure 1: Secure Remote Password authentication protocol
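As a rough illustration of the client-side arithmetic behind these steps, the following Java sketch computes (x), (v) and (A) in the SRP-6a style. The toy parameter sizes, formulas and helper names are assumptions for illustration; they omit the padding and validity checks of the full protocol and are not the paper's implementation.

import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SrpClientSketch {

    // H(...) hashes the concatenated byte representations of its arguments.
    static BigInteger H(MessageDigest md, BigInteger... parts) {
        md.reset();
        for (BigInteger p : parts) md.update(p.toByteArray());
        return new BigInteger(1, md.digest());
    }

    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        SecureRandom rnd = new SecureRandom();

        // Public parameters: a toy-sized prime N and generator g (a real
        // deployment uses a standardized safe prime).
        BigInteger N = BigInteger.probablePrime(512, rnd);
        BigInteger g = BigInteger.valueOf(2);

        BigInteger s = new BigInteger(64, rnd);                  // random salt
        BigInteger P = new BigInteger(1, "password".getBytes()); // password

        BigInteger x = H(md, s, P);      // private key x = H(s, P)
        BigInteger v = g.modPow(x, N);   // verifier stored by the server
        BigInteger a = new BigInteger(256, rnd);
        BigInteger A = g.modPow(a, N);   // client public key, sent to the server
        // ... (B) and (u) arrive from the server; the client then computes the
        // session key S, k = H(S), and the evidence M1 = H(A, B, k) ...
    }
}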
B. Pseudonym
The expansion of internet usage is leading to more privacy threats, such as the disclosure of personally identifiable information, because many users do not realize that they are submitting or exposing sensitive information that could be linked back to their real-life identity. In a nutshell, privacy is the ability of users to protect their personally identifiable information, and anonymity is often used as a protection of privacy [22]. Most authentication schemes require a user identity, and this information is kept at the server side for future authentication purposes. However, if the server is compromised, the user identity information might be compromised as well. Thus, it is important to conceal this information, and the pseudonym is one of the anonymity methods that mitigates this situation.
A pseudonym in an authentication scheme ensures that the concealed user identity cannot be linked back to the original identity. Therefore, in order to strengthen the pseudonym identifier and provide unlinkability, it is suggested that the identifier be derived from the combination of the user identity with another identity, such as the platform identity. This ensures that the actual user identity is not stored on the server side; the server stores only the pseudonym identifier for authentication purposes.
C. Man-in-the-Browser (MitB) Attack
As many security measures, such as the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, have been implemented to prevent MitM attacks, adversaries have come up with a new variant of the MitM attack known as the man-in-the-browser (MitB) attack. Similar to MitM, MitB trojans such as Zeus/SpyEye, URLzone, Silent Banker, Sinowal and Gozi [2] are used to manipulate the information between a user and a browser, and they are much harder to detect due to the nature of the attack [2, 3]. This is apparent when an adversary secretly attaches malicious code to a browser extension or plugin that appears to perform legitimate activities.

According to Utakrit [4], a browser extension is a small application running on top of the browser that provides extra features to it. In fact, the richness and flexibility of browser extensions and plugins nowadays have somewhat lured malicious software into the user's browser [5]. Thus, adversaries take advantage of browser extension features and their simple deployment in order to implement MitB attacks.
The chronology of a MitB attack is shown in Figure 2 and explained below:

- Once the malicious browser extension infects the user's browser, it sits inside the browser and waits for the user to visit certain websites related to online banking or electronic commerce.

- When the malicious extension finds relevant patterns, such as transaction details, while scanning through the websites visited by the user, it logs any information entered by the user, such as credentials.
Furthermore, the malicious extension can even manipulate the transaction information before the user sends it to the server.

- For example, when a user tries to make a transaction of $100 to bank account A, the malicious extension changes the amount to $1000 and the recipient to bank account B. The server is not able to differentiate between the original and the manipulated transaction, because the information comes from a legitimate user.
Figure 2: MitB attack
D. Trusted Platform Module Based Remote Attestation
Trust relationships between interacting platforms (machines) have become a major element in increasing user confidence when dealing with internet transactions, especially in online banking and electronic commerce. For instance, users might want to make sure that they are dealing with a legitimate merchant, and vice versa. But how do you make sure that your browser and machine are trusted and behave in the expected manner while performing internet transactions? For this reason, we have chosen the Trusted Platform Module (TPM), due to its capability to provide attestation based on information about the platform and to ensure that the integrity of the platform has not been tampered with [1]. In order to ensure the validity of the integrity measurement from a genuine TPM, the Attestation Identity Key (AIK) is used to sign the integrity measurement. The AIK is an asymmetric key derived from the unique Endorsement Key (EK), certified by its manufacturer, which identifies the TPM.

As stated by the Trusted Computing Group (TCG) [6], an entity is considered trusted when it always behaves in the expected manner for the intended purpose. For this reason, the attestation information provided by the TPM must be verified by the communicating parties before any transaction or sensitive information is transferred. Remote attestation is therefore the best option, because it allows a remote host such as a server to verify the integrity of another host's (the client's) platform.
In remote attestation, as shown in Figure 3, the client platform starts the attestation process by sending a request for the service, and the host platform responds with a challenge. The client platform then measures its integrity information, such as its operating system, BIOS, hardware and software, and stores it in the TPM's Platform Configuration Registers (PCR) [7, 8]. The client then sends the integrity report to the host for verification, and the host allows the client to access its services once the client's platform integrity has been verified.
Figure 3: Remote Attestation
E. Our Contribution
The platform integrity and the secrecy of a user's sensitive information are essential to combat MitB attacks. This is all the more evident since existing security measures such as antivirus software are not capable of preventing new and sophisticated attacks from malicious browser extensions [9]. Thus, it is crucial to develop a trust relationship between a client and a server, in order to ensure that the communication is protected from illegitimate entities and to preserve the secrecy of the user's confidential information.

Therefore, in this paper we propose TPM based remote attestation to provide a trust relationship between the communicating parties. In addition, we incorporate a pseudonym technique with the secure key exchange in order to prevent the user's confidential information from being tapped. Our proposed solution should thus give a better level of confidence to users who use browsers for internet transactions such as online banking and online services.
F. Outline
This paper is organized as follows: Section II discusses related work on MitB attacks and their solutions; Section III presents our proposed solution; Sections IV and V describe our experiments and their results; Section VI discusses the security analysis of the proposed protocol; and finally Section VII concludes the paper.
II. RELATED WORKS
In the past, several protection mechanisms have been proposed to prevent man-in-the-browser attacks. Bottoni et al. [10] proposed using personal trusted devices, such as mobile phones, to secure the transactions between a client and a merchant. This mechanism deters MitB attacks by routing important information, such as payment details, through an external trusted device. The IBM research team [11] also introduced an external-device approach to prevent MitM variant attacks, called the Zurich Trusted Information Channel (ZTIC). The ZTIC is a USB device containing components such as a simple verification display and other authentication components, which provide secure communication between a client and a server. In this approach, the ZTIC acts as the main medium between the client and the server without relying on any client application or browser. The ZTIC scans and intercepts any sensitive information and only permits the information exchange once the client has verified the information on its display component. However, the external devices required by the above solutions may become a barrier for some users.
On the other hand, Itoi et al. [12] introduced an internet-based smartcard solution that uses Simple Password Exponential Key Exchange (SPEKE) to address the security issues of off-line dictionary attacks and man-in-the-middle attacks. However, this mechanism is unable to provide protection or notification when the platform has been exposed to malicious software. Based on that challenge, Starnberger et al. [13] introduced smartcard-based TPM attestation, integrating platform integrity verification into their proposed solution. It is thus able to mitigate malicious software attacks on the client's machine as well as provide extra security through external devices.
Abbasi et al. [14] proposed secure web contents to mitigate MitM and MitB attacks. In their solution, the web content from a server is encrypted by a secured web server, and a secured proxy on the client side decrypts it. This solution provides better protection of the web content stored on the server. However, it concentrates only on the protection of web content on the server side and lacks protection against malicious software attacks on the client machine.
Sidheeq et al. [15] integrated a biometrics USB device with the TPM in order to mitigate the risk of malware attacks on the client's machine. In detail, this solution requires the user to provide biometric evidence for authentication and then compares the provided evidence with the user's biometrics stored in the USB device, which is protected by the TPM. In addition, the solution uses the encryption feature provided by the TPM to secure the data exchanged between a client and a server.
III. PROPOSED SOLUTION
In this section, we present our proposed solution to mitigate MitB attacks (refer to Figure 4). The objective of our protocol is to provide trusted communication between a client and a server as well as to preserve the secrecy of the user's sensitive information. To achieve this objective, we have incorporated TPM based remote attestation to provide platform integrity verification. In addition, we have adopted the Secure Remote Password (SRP) protocol [16] as the secure key exchange, providing the zero-knowledge proof that allows one party to prove itself to another without revealing any authentication information such as the password.

The proposed protocol comprises the registration, key exchange and platform attestation phases. Before these phases can take place, the client measures its platform integrity information, such as its BIOS, bootloader, operating system and software, and stores it in the PCR values. The PCR values are used in the registration phase, as part of the pseudonym data, to preserve the secrecy of the user's confidential information, and in the attestation process as part of its integrity reporting.
In the registration phase, the client generates a pseudonym identity (u) by hashing the combination of the user identity and the platform PCR values. The client then calculates the verifier (v) by hashing the random salt value (s) with the client's password. Next, the client sends (u), (v), (s) and the public certificate of its AIK (a) to the server via a secured channel. The server stores this information in its database for authentication purposes.
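A minimal Java sketch of these two registration values, assuming SHA-256 as the hash and simple byte encodings (details the paper does not fix):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RegistrationSketch {

    // Pseudonym identity u = H(user identity || platform PCR values).
    static String pseudonym(String userId, String pcrHex) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(userId.getBytes(StandardCharsets.UTF_8));
        md.update(pcrHex.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, md.digest()).toString(16);
    }

    // Verifier v = H(salt || password); only (v) and (s), never the
    // password, are sent to and stored at the server.
    static String verifier(byte[] salt, String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        md.update(password.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, md.digest()).toString(16);
    }
}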
In the authentication phase, in order to fulfill the zero-knowledge proof requirement, each party is required to show evidence that it is using the same secure key exchange, without revealing its key. The client starts the authentication process by calculating its asymmetric key and sending the public key (A) and the pseudonym identity (u) to the server. The server then looks up the client's (v), (s) and (a) in its database based on the (u) given by the client. Subsequently, the server calculates its asymmetric key and sends the public key (B) and (s) to the client. Prior to sending (B) and (s), the server calculates its key exchange (k). Upon receiving (s) and the server's public key (B), the client computes its key exchange (k). The client then sends M1 as evidence to the server; M1 is calculated using the mathematical formula stated in the SRP protocol and is used by the server to verify that the client has the same key exchange. Once M1 has been verified, the server sends its own evidence, M2, to the client, and the client verifies M2 in order to make sure that the server has the same key. By this method, both parties prove to each other that they have the same secure key without revealing it.
Figure 4: Key Exchange and Attestation Phase
In the platform integrity attestation phase, the client starts the attestation process by signing its PCR values, which contain its platform measurements, with the AIK private key. The client then encrypts the signature with (k) as the secret key. The encrypted value (Ek) is sent to the server for integrity verification. Upon receiving (Ek), the server decrypts it using its secure key (k) and verifies the client's PCR signature with the client's AIK public key (a). Once verified, the client and the server are able to communicate over a trusted and secure channel.
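A rough Java sketch of the client side of this phase, assuming an RSA-based AIK and an AES key derived from (k). The algorithm choices are illustrative; a real implementation would use the TPM's quote operation and an authenticated cipher mode.

import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.Signature;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class AttestationSketch {

    // Sign the PCR values with the AIK private key, then encrypt the
    // signature under the shared secret key (k); the result is Ek.
    static byte[] signAndEncrypt(PrivateKey aikPrivate, byte[] pcrValues, byte[] k)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(aikPrivate);
        sig.update(pcrValues);                   // the platform measurements
        byte[] signature = sig.sign();

        // Illustrative encryption under the first 16 bytes of (k); assumes
        // k holds at least 16 bytes of shared key material.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(k, 0, 16, "AES"));
        return aes.doFinal(signature);           // Ek, sent to the server
    }
}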
IV. EXPERIMENTS
In order to test the proposed protocol, we conducted several experiments. The setup for the experiments consists of three machines: a client machine, a server machine with database storage, and an adversary machine. The client machine is integrated with a TPM and runs on an Intel Core2 Duo @ 2.53 GHz with 4 GB of RAM and Ubuntu v9.10 with Linux kernel v2.6.31.22.6 as its operating system. The server machine has an Intel Core2 Quad @ 2.4 GHz with 8 GB of RAM and runs the same Ubuntu v9.10 with Linux kernel v2.6.31.22.6; its database storage is MySQL v5.5.15 with the default configuration. On the client side, we prepared a custom internet browser in order to fulfill our proposed protocol requirements. The adversary machine runs Ubuntu v9.10 with Linux kernel v2.6.31.22.6 on an Intel Core i7 @ 2.8 GHz with 4 GB of RAM.
In our first experiment, we simulated a transaction between an untrusted client machine and the server, whereby the adversary is furnished with legitimate user credentials and an untrusted internet browser. In the second experiment, we set up a man-in-the-middle attack between the trusted client and the server, as shown in Figure 5, with Ettercap v0.7.3 [17] installed on the adversary machine.
Figure 5: Man-in-the-middle attack
In our last experiment, we planted a MitB trojan, Zeus [18], into the trusted client machine and tried to manipulate the transaction content between the client and the server. We specifically measured the integrity of the files related to the client operating system and of applications such as the internet browser, and stored the measurement in PCR-13, as shown in Figure 6; this PCR value is used to ensure that the integrity of the client platform and application is not tampered with.
Figure 6: PCR values with trusted client machine
V. RESULTS
Our first experimental result indicates that the adversary's machine is not able to perform any transactions with the server, as shown in Figure 7, even though the simulated transaction was equipped with legitimate user credentials, as shown in Figures 8 and 9.
Figure 7: Untrusted client access
This is due to the lack of valid PCR values on the adversary's machine, which is therefore not able to provide the requested valid identity (u), i.e. the combination of the user identity and the platform PCR values.
Figure 8: Tampered client application
Figure 9: Authentication fail on tampered client application
In the second experiment, the adversary tried to impersonate the server to the client and the client to the server. However, the result is similar to that of our first experiment: the adversary is not able to impersonate either the client or the server, because it cannot produce the valid evidence (M1) and (M2) required to fulfill the zero-knowledge proof, and the platform integrity measurement provided by the adversary's machine is invalid. Moreover, the adversary is not able to steal any information from the transaction, as the secret key (k) is never exposed in the transaction.

Our last experimental result shows that the PCR-13 value in the trusted client machine has changed, as shown in Figure 10. This indicates that the integrity of the client machine has been tampered with; in this scenario, the MitB trojan Zeus tampered with the integrity of the client platform and applications. Consequently, our protocol rejects any authentication with invalid PCR values, and Zeus is not able to manipulate any transaction information.
Figure 10: PCR values with untrusted client machine
VI. SECURITY ANALYSIS
In this section, we analyze the proposed protocol against a few vulnerabilities that could compromise its security. According to Oppliger et al. [5], there are three potential vulnerabilities related to client-side attacks: the credential-stealing attack, the channel-breaking attack and the content-manipulation attack.
In credential-stealing attacks, the adversaries normally use offline media such as malicious email attachments, fake websites or other phishing techniques to persuade users to expose their credentials unintentionally [5]. In order to mitigate the risk of this attack, the platform integrity must be verified to be free from any malicious software, and the credential must not be transferred or exposed through the communication channel. The TPM based attestation implemented in our protocol provides the verification of platform integrity, while credential-free authentication is achieved through SRP. As a result, when malicious software is planted and embedded into a client's browser or machine, the server automatically rejects the client's authentication, because the compromised client machine is detected in the attestation phase (i.e. the measured integrity value differs from the one in the PCR). This capability of detecting and preventing phishing attacks will certainly increase user confidence in using the internet for transactional activities.
In channel-breaking attacks, the adversaries generally act as the man in the middle between a client and a server: they act as a server to the client and as a client to the server by maintaining a legitimate secure
connection with both sides. Thus, any information exchanged between the client and the server is routed through the adversaries. However, with the adoption of the zero-knowledge proof of the SRP protocol, the adversaries would need to produce the secure key (k) in order to imitate a client or a server. Regrettably for the adversaries, in our protocol the secure key (k) is never exposed or sent across the network. Therefore, the adversaries are not able to provide the correct evidence (M1 or M2) that would enable them to gain any kind of benefit from the internet transactions.
In content-manipulation attacks, the adversaries try to manipulate the content or the transaction information on the client side, i.e. they tamper with the content or the transaction information that is sent to the server. The use of integrity measurement in the remote attestation of the client platform and the browser application has the main objective of mitigating this attack: any content manipulation is strictly detected and prevented by our solution. The TPM based integrity measurement implemented in our protocol is therefore able to fulfill this objective.
VII. CONCLUSION AND FUTURE WORKS
In this paper, we have proposed an enhanced remote authentication scheme that mitigates man-in-the-browser attacks. Briefly, we presented our solution to mitigate such attacks: the TPM based remote attestation and the SRP key exchange that form the basis of our protocol, providing the much-needed trust, integrity and zero-knowledge proof authentication for both the client and the server. In order to test the proposed protocol, we conducted several experiments; the results show that the proposed protocol is able to deter the attacks launched by the adversaries. We have also presented a security analysis of the proposed protocol based on a few vulnerabilities related to client-side attacks, and we hope that the proposed protocol can effectively deter such attacks. For future work, we hope that the proposed protocol can be implemented in common internet browsers such as Firefox, in order to provide a better approach to mitigating MitB attacks in real environments. Last but not least, we will look into alternative solutions for strengthening this proposal, such as optical tokens, hPIN/hTAN [19] or behaviour-driven security [20].
ACKNOWLEDGMENT
The authors would like to thank Universiti Teknologi
MARA for funding this research under the Excellence Fund.
REFERENCES
[1] A. Sadeghi, Trusted Computing - Special Aspects and Challenges, SOFSEM 2008, LNCS, vol. 4910, pp. 98-117, 2008.
[2] RSA Lab, Making Sense of Man-in-the-Browser Attacks, http://viewer.media.bitpipe.com/1039183786_34/1295277188_16/MITB_WP_0510-RSA.pdf
[3] O. Eisen, Catching the Fraudulent Man-in-the-Middle and Man-in-the-Browser, Network Security, Volume 2010, Issue 4, pp. 11-12, 2010.
[4] N. Utakrit, Review of Browser Extensions, a Man-in-the-Browser, Proceedings of the 7th Australian Information Security Management Conference, Perth, Western Australia, 2010.
[5] R. Oppliger, R. Rytz and T. Holderegger, Internet Banking: Client-side Attacks and Protection Mechanisms, Computer 42(6), pp. 27-33, 2009.
[6] Trusted Computing Group, TCG Specification Architecture Overview, Specification Revision 1.4, 2007.
[7] S. Kinney, Trusted Platform Module Basics: Using TPM in Embedded Systems, Newnes, 2006.
[8] D. Challener, K. Yoder, R. Catherman, D. Safford and L. V. Doorn, A Practical Guide to Trusted Computing, IBM Press, 2008.
[9] A. Buescher, F. Leder and T. Siebert, Banksafe - Information Stealer Detection Inside the Web Browser, RAID 2011, pp. 262-280, 2011.
[10] A. Bottoni and G. Dini, Improving Authentication of Remote Card Transactions with Mobile Personal Trusted Devices, Computer Communications 30(8), pp. 1697-1712, 2007.
[11] T. Weigold, T. Kramp, R. Hermann, F. Horing, P. Buhler and M. Baentsch, The Zurich Trusted Information Channel: An Efficient Defence against Man-in-the-Middle and Malicious Software Attacks, Proc. TRUST 2008, LNCS, vol. 4968, pp. 75-91, Springer, 2008.
[12] N. Itoi, T. Fukuzawa and P. Honeyman, Secure Internet Smartcards, First International Workshop on Java on Smart Cards: Programming and Security, pp. 73-89, 2000.
[13] G. Starnberger, L. Froihofer and K. M. Goeschka, A Generic Proxy for Secure Smart Card-enabled Web Applications, Web Engineering, 10th International Conference, ICWE 2010, Vienna, Austria, July 5-9, 2010, Proceedings, LNCS, vol. 6189, pp. 370-384, Springer, 2010.
[14] A. G. Abbasi, S. Muftic and I. Hotamov, Web Contents Protection, Secure Execution and Authorized Distribution, 2010 Fifth International Multi-Conference on Computing in the Global Information Technology, pp. 157-162, 2010.
[15] M. Sidheeq, A. Dehghantanha and G. Kananparan, Utilizing Trusted Platform Module to Mitigate Botnet Attacks, 2010 International Conference on Computer Applications and Industrial Electronics (ICCAIE), pp. 245-249, 2010.
[16] T. Wu, The Secure Remote Password Protocol, Internet Society Network and Distributed Systems Security Symposium (NDSS), San Diego, pp. 97-111, 1998.
[17] Ettercap, http://ettercap.sourceforge.net/
[18] Symantec, Zeus: King of the Bots, http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/zeus_king_of_bots.pdf
[19] S. Li, A.-R. Sadeghi and R. Schmitz, hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers, Financial Cryptography, 2010.
[20] N. Daniel, Security in Behaviour Driven Authentication for Web Applications, Master thesis, Department of Computer Science, Electrical and Space Engineering, Jan 2012.
[21] T. Wu, The Secure Remote Password Protocol, Internet Society Network and Distributed Systems Security Symposium (NDSS), San Diego, pp. 97-111, 1998.
[22] I. Goldberg, Privacy-enhancing Technologies for the Internet, Compcon '97 Proceedings, pp. 103-109, 1997.
Security Measurement Based On GQM To Improve Application Security During Requirements Stage

Ala A. Abdulrazeg¹, Norita Md Norwawi², Nurlida Basir³
Faculty of Science and Technology
Universiti Sains Islam Malaysia
Nilai, Malaysia
[email protected]¹, [email protected]², [email protected]³

ABSTRACT
Developing secure web applications that can withstand malicious attacks requires a careful injection of security considerations into the early stages of the development life cycle. Assessing security at the requirements analysis stage of the application development life cycle may help in mitigating security defects before they spread their wings into the latter stages of the development life cycle and into the final version of the product. In this paper, we present a security metrics model based on the Goal Question Metric (GQM) approach, focusing on the design of the misuse case model. The misuse case is a technique for identifying threats and integrating security requirements during the requirements analysis stage. The security metrics model helps in discovering and evaluating misuse case models by ensuring a defect-free model. Here, the security metrics are based on the OWASP Top 10-2010, in addition to misuse case modeling antipatterns.

KEYWORDS
Measurement; Security Metrics; Misuse Cases; Security Requirements; Software Security.
1 INTRODUCTION
Web applications are employed in a
wide variety of contexts to support
many daily social activities.
Unfortunately, the tremendous rise in
online applications has been
accompanied by a proportional rise in
the number and type of attacks against
them. Web applications are
continuously reported to be vulnerable
to attacks and compromises. According
to a recent analysis conducted by
Symantec Inc [1], vulnerabilities and
security breaches on enterprises are
increasing, with web application
attacks continuing to be a favoured
attack vector. Furthermore, a report by
WhiteHat security has found that 8 out
of 10 web applications are vulnerable
[2]. These reports indicate that even
present-day web applications are not
free from vulnerabilities. In security
engineering, vulnerabilities result from
defects or weaknesses that are
inadvertently introduced at the design
and implementation stages of the
development life cycle that can be
exploited by attackers to harm the
application and its asset [3]. Therefore,
security needs to be considered and
measured from the early stage of the
development life cycle.
Mellado et al. [4] believe in the
particular importance of security
requirements engineering, which
provide techniques and methods to
handle security at the early stage of the
software development lifecycle. A
survey to identify and describe
concrete techniques for eliciting
security requirements showed that a
misuse case is often considered an
important part of the requirements stage [5]. Misuse cases represent security threats that the attacker might carry out to breach security and cause harm to the system. They are created by extending the use case model to provide a systematic way of identifying system functions, possible threats, and required countermeasures in one consistent view. The misuse case model must be modelled accurately, because if security defects and vulnerabilities are discovered late in development, the cost of fixing them escalates significantly, as shown in Table 1 [30].

Table 1. Cost of fixing defects [30]
A study on security improvement programs suggested that measurement and metrics must be included early in the development process [8]. Measuring security at the requirements stage, focusing on the misuse case model, could mitigate security defects before they reach the finalised product. This paper proposes a new security metrics model that quantifies security at an early stage of the web application development life cycle, namely the requirements stage. The security metrics are defined using the Goal Question Metric (GQM) approach. The proposed metrics model is misuse case-centric, ensuring that the developed misuse case models are defect-free and mitigate the most well-known web application security risks. The model adopts the antipatterns proposed by [9] to ensure the modelled misuse cases are defect-free, and it is based on the prominent OWASP Top 10-2010 web application security risks [10] to ensure the security use cases thoroughly address these risks.
The rest of the paper is organized as follows: Section 2 presents the background of the work, discussing the importance of security metrics and the concept of the misuse case model. Section 3 presents the proposed security metrics model. Section 4 discusses related work. Finally, Section 5 draws conclusions and suggests future work.
2 BACKGROUND
2.1 Why Security Metrics
Metrics are defined as standards of
measurement. Measurement is a
process of quantifying the attributes of
software to describe them according to
clearly defined rules [11]. Chew et al.
[12] defined measurements as the
process of data collection, analysis, and
reporting. The results of data collection
are called measures. Lord Kelvin is known to have said, "If you cannot measure it, you cannot improve it. When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind" [13]. The analysis and interpretation of appropriate measures helps diagnose problems and identify solutions during software development, which assists in reducing defects, rework, and cycle time [7].
According to Wang et al. [6], we cannot improve security if we cannot measure it. Security metrics are considered effective tools that allow information security experts to characterize and evaluate the effectiveness and security levels of systems, products, and processes in order to address security issues and facilitate improvements [14]. Security metrics are used for decision support; these decisions are in fact risk management decisions aimed at mitigating and eliminating security risks. Defining metrics based on goals has proven successful in guaranteeing relevant measurements, as it gives purpose to the metrics [15].
The Goal Question Metric approach is
a goal-oriented approach which
provides a framework for metrics
development [15]. The GQM approach
was originally developed by Basili and
Weiss [16], and expanded by Rombach
[17]. Basili [18] stated that the Goal
Question Metric approach represents a
systematic approach for integrating
goals with models of the software
processes, products and quality
perspectives of interest, based upon the
specific needs of the project and the
organization. An example of GQM is
illustrated in figure 1 [29].
As illustrated in figure 1, the Goal Question Metric approach focuses on defining measurable goals (conceptual level) for products, processes, and resources with respect to the quality perspectives of interest. These goals are then refined into questions (operational level) that characterize the way the assessment or achievement of the goals is to be performed. Once the goals are refined into a list of questions, metrics are identified (quantitative level) to provide a quantitative answer to each question in a satisfactory way [17].
2.2 Misuse Case Modelling
Ensuring the set of security
requirements obtained is complete and
consistent is a very important task
because the right set of security
requirements will lead to the
development of secure software,
whereas the wrong requirements can
lead to a never-ending cycle of security
failures [19]. The misuse case is a useful technique for eliciting and modelling functional security requirements and threats at the requirements stage.
Use case diagrams have proven
effective during requirements
engineering for selecting functional
requirements, but offer limited support
for selecting security requirements
[20]. McDermott and Madison [21] used the term "abuse cases" to express threats and countermeasures using the standard use case notation; in their approach, the abuse cases were kept in separate models. Later, Sindre and Opdahl [22] extended the positive use case diagrams by adding negative use cases (misuse cases) to model undesirable behaviour in the system, and a misuser to model the attacker. Extending the use case model with misuse cases makes it possible to regard system functions and possible attacks in one consistent view, which assists in describing security threat scenarios that would threaten the system assets, mitigating threats and thus improving security. The ordinary use case relationships, such as association, generalization, include and extend, may also be used in modelling misuse cases. Sindre and Opdahl [20] refined the relationships in misuse case modelling by adopting the "threaten" and "mitigate" relationships suggested by [23]. These two relationship types indicate that a misuse case may threaten a use case, while a security use case might mitigate a misuse case.
Figure 1. The Goal Question Metrics approach
A security use case represents software
security requirements needed to protect
system assets from security threats.
The idea of security use cases as a way
of representing countermeasures is
presented by [24] and was adopted by
[20].
Figure 2 illustrates an example of a misuse case diagram. In this figure, "Account locked after N number of unsuccessful authentication attempts" is a security use case added to protect against the threat "Guess user authentication", which is identified as a potential misuse case that threatens the login function.

Figure 2. Misuse case diagram example
3 SECURITY METRICS TO
IMPROVE MISUSE CASE
MODEL
In this work, we develop a security metrics model to be applied at the requirements stage. The proposed security metrics model is misuse case-centric and aims to discover security vulnerabilities and modelling defects. It is important to eliminate modelling defects from the misuse case model and improve the security use cases before those defects and weaknesses find their way into the latter stages of the development life cycle.
The GQM approach is used for a structured derivation of the security metrics. The proposed security metrics model is composed of two main goals. The first goal is to improve the quality of the developed misuse case models by ensuring the models are defect-free and do not contain any incorrect or misleading information. To achieve this goal, security metrics are developed based on the antipatterns specified by [9]. Antipatterns are poor modelling decisions that result in low-quality misuse case models and can lead to defects and harmful consequences in the latter stages of the development life cycle [9]. The metrics have been scaled to fit within the range 0 to 1, with lower values considered a satisfactory rating for the measurement.
Goal 1 To improve the modeling
quality of misuse case models by
identifying modeling defects.
Question 1.1 Do the misuse cases
correctly represent the application
vulnerabilities and are they consistent
with application security use cases?
Metric 1.1.1 The ratio of the number of misuse cases that do not threaten the application to the total number of misuse cases.

Consider the set of misuse cases in a model as MC = {mc_1, ..., mc_n} and the non-threatening misuse cases as NMC = {nmc_1, ..., nmc_n} such that NMC ⊆ MC. The metric is expressed as follows, where RN_MC stands for the ratio of misuse cases that do not threaten the application:

$$RN_{MC} = \frac{|NMC|}{|MC|} \qquad (1)$$
Metric 1.1.2 The ratio of the number of unmitigated misuse cases that threaten the application to the total number of misuse cases.

Consider the set of misuse cases in a model as MC = {mc_1, ..., mc_n} and the unmitigated misuse cases as UMC = {umc_1, ..., umc_n} such that UMC ⊆ MC. The metric is expressed as follows, where RU_MC stands for the ratio of the number of unmitigated misuse cases:

$$RU_{MC} = \frac{|UMC|}{|MC|} \qquad (2)$$
Question 1.2 Are the functional
decompositions between misuse cases
correctly handled?
Metric 1.2.1 The ratio of inclusion misuse cases included once to the total number of inclusion misuse cases.

Consider the set of inclusion misuse cases as IMC = {imc_1, ..., imc_n} and the inclusion misuse cases included once as OIM = {oim_1, ..., oim_n} such that OIM ⊆ IMC. The metric is expressed as follows, where RO_IMC stands for the ratio of inclusion misuse cases included once:

$$RO_{IMC} = \frac{|OIM|}{|IMC|} \qquad (3)$$
Metric 1.2.2 The ratio of extension misuse cases extended once to the total number of extension misuse cases.

Consider the set of extension misuse cases as EMC = {emc_1, ..., emc_n} and the extension misuse cases extended once as OEM = {oem_1, ..., oem_n} such that OEM ⊆ EMC. The metric is defined as follows, where RM_EMC stands for the ratio of extension misuse cases extended once:

$$RM_{EMC} = 1 - \frac{|OEM|}{|EMC|} \qquad (4)$$
Metric 1.2.3 The ratio of misuse cases used as pre/post conditions of other misuse cases to the total number of misuse cases.

Consider the set of misuse cases as MC = {mc_1, ..., mc_n} and the misuse cases used as pre/post conditions as PMC = {pmc_1, ..., pmc_n} such that PMC ⊆ MC. The metric is expressed as follows, where RP_MC stands for the ratio of misuse cases used as pre/post conditions:

$$RP_{MC} = \frac{|PMC|}{|MC|} \qquad (5)$$
Question 1.3 Are the misusers
presented and handled correctly in the
misuse case model?
Metric 1.3.1 The ratio of the number of base misuse cases associated to one misuser to the total number of base misuse cases.

Consider the set of base misuse cases in a model as MC = {mc_1, ..., mc_n} and the base misuse cases associated to one misuser as OMM = {omm_1, ..., omm_n} such that OMM ⊆ MC. The metric is expressed as follows, where RM_MC stands for the ratio of the number of base misuse cases associated to one misuser:

$$RM_{MC} = 1 - \frac{|OMM|}{|MC|} \qquad (6)$$
The second goal of the security metrics is to discover omitted security use cases that mitigate known security vulnerabilities, to ensure that the developed misuse cases cover these vulnerabilities. To achieve this goal, security metrics based on the OWASP Top 10-2010 web application security risks [10] were developed. In this work, three security risks were analyzed: SQL Injection, Cross-Site Scripting, and Broken Authentication and Session Management.
Goal 2: To ensure that the elicited
security use cases cover the well-
known security vulnerabilities.
Question 2.1 What is the number of misuse cases found?

Metric 2.1.1 The total number of identified misuse cases [MUC_Total].

Question 2.2 What is the number of elicited security use cases?

Metric 2.2.1 The total number of elicited security use cases [SUC_Total].
Question 2.3 Are the security
requirements which have been defined
sufficient to mitigate well-known
security vulnerabilities?
Metric 2.3.1 The number of excluded security requirements that ensure input/output handling [Xr_1].

Is a specific encoding scheme defined for all inputs?
Is a process of canonicalization applied to all inputs?
Is appropriate validation defined and applied to all inputs, in terms of type, length, format/syntax and range?
Is a whitelist filtering approach applied to all inputs?
Are all validations performed on both the client and the server side?
Is all unsuccessful input handling rejected with an error message?
Is all unsuccessful input handling logged?
Is output data to the client filtered and encoded?
Is output encoding performed on the server side?
Metric 2.3.2 The total number of excluded security requirements that ensure authentication and authorization handling [Xr_2].

Are complex password policies applied in order to choose proper passwords?
Are the minimum and maximum password lengths defined?
Is the account automatically locked for a specified period when a specified number of consecutive unsuccessful authentication attempts is exceeded?
Are authentication error messages non-verbose and free of sensitive information?
Is the option that remembers the authentication credentials, such as "Keep me signed in", avoided?
Is the user allowed to change his/her password?
Is the user allowed to create his/her own secret questions and answers for the password recovery option?
Is CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) applied?
Are all authentication decisions performed on the server side?
Are all authentication actions (e.g., login, logout, password change) logged?
Is re-authentication required when performing critical operations?
Is the user forced to change the password after a specific period of time (expiration period)?
Are user credentials rejected without validation when the account is locked?
Is a secure data transmission protocol applied to secure the transfer of credentials between client and server?
Metric 2.3.3 The total number of excluded security requirements that ensure session handling [Xr_3].

Is the session identifier created on the server side?
Is a new session identifier assigned to the user on authentication?
Is the session identifier changed on re-authentication?
Is a logout option provided for all operations that require authentication?
Is the session identifier cancelled when the authenticated user logs out?
Is the session identifier killed after a period of time without any actions?
Is the user's authenticated session identifier protected via a secure data transmission protocol?
Metric 2.3.4 The total number of excluded security requirements that ensure error and logging handling [Xr_4].

Does the application have a log file?
Is log control handled on the server?
Does the application avoid outputting error messages that contain sensitive data?
Are all server failures and errors handled on the server and not delivered to the user?
These metrics are implemented by
comparing the elicited security
requirements of the application during
the requirement stage to the stated
security requirements. These metrics
assess the threat of possible attacks on
the system. If a security requirement
has been excluded then a value of 1
will be given, and a value of 0 if it has
been considered.
Metric 2.3.5 The total number of excluded security requirements that put the system at risk of possible attacks.

$$ExR_{SUC} = \sum_{i=1}^{n} Xr_i \qquad (7)$$

ExR_SUC stands for the summation of the excluded security requirements, and Xr_i represents the excluded security requirements that put the system at risk, where i ∈ {1, 2, ..., n}.
Question 2.4 How vulnerable is the
application based on the stated
security requirements?
Metric 2.4.1 The ratio of the number of included security requirements to the total number of stated security requirements.

$$RV_{SUC} = 1 - \frac{SsR - ExR_{SUC}}{SsR} \qquad (8)$$

SsR stands for the total number of stated security requirements. The difference between SsR and ExR_SUC indicates the included security requirements, and RV_SUC stands for the ratio of the number of included security requirements. The value of the metric ranges from 0 to 1: if RV_SUC converges to 0, many of the stated security requirements have been considered in the misuse case model, and the lower ratio is the satisfactory rating for the measurement. The security metrics model is illustrated graphically in figure 3.

Figure 3. Graphical representation of the security metrics model based on GQM
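As a worked illustration of equations (7) and (8) above, the sketch below (a minimal example with a hypothetical, abbreviated checklist) scores each stated requirement 1 if it was excluded and 0 if it was considered, sums the scores into ExR_SUC, and derives RV_SUC:

```python
# Hypothetical checklist answers: True means the stated requirement was
# considered in the misuse case model, False means it was excluded.
checklist = {
    "encoding scheme defined for all inputs": True,
    "whitelist filtering applied to all inputs": False,
    "session identifier created on server side": True,
    "application has a log file": False,
}

ssr = len(checklist)                                     # SsR
exr_suc = sum(1 for ok in checklist.values() if not ok)  # eq. (7): ExR_SUC = 2
rv_suc = 1 - (ssr - exr_suc) / ssr                       # eq. (8): RV_SUC = 0.5
```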
4 RELATED WORK
A number of related works have introduced security metrics or discussed how and where to situate these metrics in the development life cycle of a system. Nichols and Peterson [25] introduced a metrics model based on the OWASP Top 10 vulnerabilities, organized according to the application's life cycle. The authors suggested that an organization seeking to improve overall application security must focus on the security of the web application itself, and that web application developers need to be concerned about the vulnerabilities that may exist in the application. They also stated that design-time metrics are essential to application development because of their ability to identify and characterize weaknesses early in the application's life cycle.
Mell et al. [26] reported that the Common Vulnerability Scoring System (CVSS) provides an open, standardized tool to measure the severity and risk of a vulnerability discovered in a given system; CVSS assists in prioritizing these vulnerabilities so that those posing the greatest risk are remediated first. Chowdhury et al. [27] defined a number of security metrics that assess how securely a system's source code is structured. The proposed metrics can be applied to evaluate the robustness, secure
information flow and secure control flow in code structures. Wang et al. [6] described an approach to defining security metrics based on the vulnerabilities present in software systems and their impact on software quality. Their security metrics measure the severity level and the risk of representative software weaknesses that cause most of the vulnerabilities exploited by attackers, taking into consideration the time of occurrence of vulnerabilities at the software product level.
Alshammari et al. [28] proposed a set of security metrics to measure the information flow of object-oriented designs based on an analysis of the quality design properties presented in the Quality Model for Object-Oriented Design. These properties include composition, coupling, extensibility, inheritance, and design size. The authors studied each property and its relevance to designing secure software in order to define the security metrics.
5 CONCLUSIONS
In todays world, security is an
important aspect of web application. A
prudent approach for developing
security web applications is to
integrate security from the early stages
of development, specifically at the
requirements stage. This paper
provides a security metrics model to
examine the misuse case diagram to
ensure it is defect-free, and covers and
mitigates known-security risks and
vulnerabilities, so as to develop a
secure system. The proposed security
metrics give an indication of where the
security defects might occur. Future
works may consider conducting
experiments to evaluate and
demonstrate the usefulness and
effectiveness of the proposed security
metrics for the system development.
The effectiveness of the approach
could be validated by evaluating the
resulting misuse case diagram to fix
defects in the original model and
threats that are added to the model that
could jeopardize the application.
6 ACKNOWLEDGMENT
The first author gratefully
acknowledges the Ministry of Higher
Education in Libya for sponsoring his
PhD studies. The authors would like to
acknowledge the support of the Faculty
of Science and Technology at
Universiti Sains Islam Malaysia for
funding this work through the project
No.PPP/FST-1-15711.
7 REFERENCES
[1] Symantec Inc., "Symantec Global Internet Security Threat Report: Trends for 2009", Symantec Global Internet Security Threat Report, vol. XV, p. 7 (2010).
[2] J. Grossman, "10 important facts about website security and how they impact your enterprise", WhiteHat Security, p. 3 (2011).
[3] G. Elahi, E. Yu and N. Zannone, "A vulnerability-centric requirements engineering framework: analyzing security attacks, countermeasures, and requirements based on vulnerabilities", Requirements Engineering, pp. 41-62 (2010).
[4] D. Mellado, C. Blanco, L. Sanchez and E. Fernandez-Medina, "A systematic review of security requirements engineering", Computer Standards & Interfaces, pp. 153-165 (2010).
[5] I. Tondel, M. Jaatun and H. Meland, "Security requirements for the rest of us: A survey", IEEE Software, vol. 25, pp. 20-27 (2008).
[6] J. Wang, H. Wang, M. Guo and M. Xia, "Security metrics for software systems", in Proc. of the 47th Annual Southeast Regional Conference, South Carolina (2009).
[7] J. McCurley, D. Zubrow and C. Dekkers, "Measures and Measurement for Secure Software Development", Software Engineering Institute (2008).
[8] D. Taylor and G. McGraw, "Adopting a software security improvement program", IEEE Security & Privacy, pp. 88-91 (2005).
[9] M. Elattar, "A framework for improving quality in misuse case models", Business Process Management Journal, pp. 168-196 (2012).
[10] J. Williams and D. Wichers, "OWASP Top 10 - 2010", Technical report, The Open Web Application Security Project (OWASP) (2010).
[11] C. Kaner and W. Bond, "Software Engineering Metrics: What Do They Measure and How Do We Know?", in Proc. of the 10th International Software Metrics Symposium, Chicago, USA, pp. 1-12 (2004).
[12] E. Chew, S. Marianne, S. Kevin, B. Nadya, B. Anthony and R. Will, "Performance measurement guide for information security", NIST Special Publication 800-55, National Institute of Standards and Technology, July (2008).
[13] S. Bellovin, "On the Brittleness of Software and the Infeasibility of Security Metrics", IEEE Security & Privacy, p. 96 (2006).
[14] A. Wang, "Information security models and metrics", in Proc. of the 43rd Annual Southeast Regional Conference, vol. 2, NY, USA, pp. 178-184 (2005).
[15] B. Patrik and J. Per, "A Goal Question Metric Based Approach for Efficient Measurement Framework Definition", in Proc. of the 2006 ACM/IEEE International Symposium on Empirical Software Engineering, Rio de Janeiro, pp. 316-325 (2006).
[16] V. Basili and D. Weiss, "A Methodology for Collecting Valid Software Engineering Data", IEEE Trans. Software Engineering, vol. 10, no. 6, pp. 728-738 (1984).
[17] V. Basili, G. Caldiera and D. Rombach, "Goal Question Metric Paradigm", Encyclopedia of Software Engineering, pp. 528-532 (1994).
[18] V. Basili, "Software Modeling and Measurement: The Goal Question Metric Paradigm", Computer Science Technical Report Series, University of Maryland, College Park, MD (1992).
[19] K. Beznosov and B. Chess, "Security for the rest of us: An industry perspective on the secure-software challenge", IEEE Software, vol. 25, pp. 10-12 (2008).
[20] G. Sindre and A. Opdahl, "Eliciting Security Requirements with Misuse Cases", Requirements Engineering Journal, pp. 34-44 (2005).
[21] J. McDermott and J. Madison, "Using Abuse Case Models for Security Requirements Analysis" (1999).
[22] G. Sindre and L. Opdahl, "Eliciting security requirements by misuse cases", in Proc. of TOOLS Pacific 2000, Sydney, Australia (2000).
[23] I. Alexander, "Modelling the interplay of conflicting goals with use and misuse cases", in Proc. of the 8th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'02), Essen, Germany (2002).
[24] D. Firesmith, "Engineering security requirements", Journal of Object Technology, pp. 53-68 (2003).
[25] E. Nichols and G. Peterson, "A metrics framework to drive application security improvement", IEEE Computer Society, pp. 88-91 (2007).
[26] P. Mell, K. Scarfone and S. Romanosky, "A complete guide to the Common Vulnerability Scoring System version 2.0", Forum of Incident Response and Security Teams (FIRST), pp. 1-23 (2007).
[27] I. Chowdhury, B. Chan and M. Zulkernine, "Security Metrics for Source Code Structures", in Proc. of the Fourth International Workshop on Software Engineering for Secure Systems, Leipzig, Germany, pp. 57-64 (2008).
[28] B. Alshammari, C. Fidge and D. Corney, "Security metrics for object-oriented designs", in Proc. of the 21st Australian Software Engineering Conference, Brisbane, Australia, pp. 55-64 (2010).
[29] T. Xu, "Composite Measurement Pattern", in Proc. of WiCOM '08, 4th International Conference, Dalian, China, pp. 1-6 (2008).
[30] S. McConnell, Code Complete, Microsoft Press (2004).
Amalgamation of Cyclic Bit Operation in SD-EI Image
Encryption Method: An Advanced Version of SD-EI
Method: SD-EI Ver-2
Somdip Dey
St. Xavier's College
[Autonomous]
Kolkata, India
E-mail:
[email protected]
[email protected]
ABSTRACT
In this paper, the author presents an advanced version of
image encryption technique, which is itself an upgraded
version of SD-EI image encryption method. In this new
method, SD-EI Ver-2, there are more bit wise manipulations
compared to original SD-EI method. The proposed method
consist of three stages: 1) First, a number is generated from
the password and each pixel of the image is converted to its
equivalent eight binary number, and in that eight bit number,
the number of bits, which are equal to the length of the
number generated from the password, are rotated and
reversed; 2) In second stage, extended hill cipher technique is
applied by using involutory matrix, which is generated by
same password used in second stage of encryption to make it
more secure; 3) In last stage, we perform modified Cyclic Bit
manipulation. First, the pixel values are again converted to
their 8 bit binary format. Then 8 consecutive pixels are chosen
and a 8X8 matrix is formed out of these 8 bit 8 pixels. After
that, matrix cyclic operation is performed randomized number
of times, which is again dependent on the password provided
for encryption. After the generation of new 8 bit value of
pixels, they are again converted to their decimal format and
the new value is written in place of the old pixel value. SD-EI
Ver-2 has been tested on different image files and the results
were very satisfactory.
KEYWORDS
SD-EI, SD-AEI, image encryption, bit reversal, bit
manipulation, bit rotation, hill cipher, randomization.
1. INTRODUCTION
In modern world, security is a big issue and securing
important data is very essential, so that the data can not be
intercepted or misused for illegal purposes. For example we
can assume the situation where a bank manager is instructing
his subordinates to credit an account, but in the mean while a
hacker interpret the message and he uses the information to
debit the account instead of crediting it. Or we can assume the
situation where a military commander is instructing his fellow
comrades about an attack and the strategies used for the
attack, but while the instructions are sent to the destination,
the instructions get intercepted by enemy soldiers and they
use the information for a counter-attack. This can be highly
fatal and can cause too much destruction. So, different
cryptographic methods are used by different organizations and
government institutions to protect their data online. But,
cryptography hackers are always trying to break the
cryptographic methods or retrieve keys by different means.
For this reason cryptographers are always trying to invent
different new cryptographic method to keep the data safe as
far as possible.
Cryptography can be broadly classified into two types:
1) Symmetric Key Cryptography
2) Public Key Cryptography
In Symmetric Key Cryptography [17][20], only one key is used for encryption, and the same key is used for decryption as well. In Public Key Cryptography [17][19], one key is used for encryption and another, publicly generated key is used for decryption. Symmetric key cryptography makes the whole process easier because only one key is needed for both encryption and decryption. Although public key cryptography such as RSA [15] or Elliptic Curve Cryptography [16] is more popular today because of its high security, these methods are still susceptible to attacks such as brute force key search [17][21]. The proposed method, SD-EI VER-2, is a type of symmetric key cryptographic method, itself a combination of three different encryption modules.
The SD-EI VER-2 method is devised by Somdip Dey [5][6][9][10][11][12][13], and is a successor and upgraded version of the SD-EI [5] image encryption technique. The three encryption modules that make up the SD-EI VER-2 cryptographic method are as follows:
1) Modified Bits Rotation and Reversal Technique for Image Encryption
2) Extended Hill Cipher Technique for Image Encryption
3) Modified Cyclic Bit Manipulation
The aforementioned methods are discussed in the next section, i.e. in "The Methods in SD-EI VER-2". All the cryptographic modules used in the SD-EI VER-2 method use the same password (key) for both encryption and decryption (as in symmetric key cryptography).
The differences between SD-EI [5] and SD-EI VER-2 are that the latter contains one extra encryption module, the modified Cyclic Bit manipulation, and that the Bits Rotation and Reversal technique is modified to provide better security.
2. THE METHODS IN SD-EI VER-2
Before we discuss the four methods, which make the SD-EI
VER-2 Encryption Technique, we need to generate a number
from the password, which will be used to randomize the file
structure using the modified MSA Randomization module.
2.1 Generation of a Number from the Key
In this step, we generate a number from the password
(symmetric key) and use it later for the randomization method
in modified Cyclic Bit manipulation, which is used to encrypt
the image file. The number generated from the password is
case sensitive and depends on each byte (character) of the
password and is subject to change if there is a slightest change
in the password.
Let [P_1 P_2 P_3 P_4 ... P_len] be the password, where the positions range over 1, 2, 3, 4, ..., len and len can be anything.
We first multiply 2^i, where i is the position of each byte (character) of the password, by the ASCII value of the byte at position i, and we keep doing this for all the characters present in the password. We then add all the values generated in the aforementioned step and denote the sum as N.
Now, if N = [n_1 n_2 ... n_j], we add all the digits of that number, i.e. we compute n_1 + n_2 + n_3 + n_4 + ... + n_j, to get the unique number that is essential for the randomization step of the encryption method. We denote this unique number as the Code.
For example, if the password is "AbCd", then:
P_1 = A; P_2 = b; P_3 = C; P_4 = d
N = 65*2^(1) + 98*2^(2) + 67*2^(3) + 100*2^(4) = 2658
Code = 2 + 6 + 5 + 8 = 21
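The generation of the Code can be expressed compactly; the following is a minimal Python sketch of the procedure described above (the function name is ours, not from the paper):

```python
def generate_code(password: str) -> int:
    # Weight each character's ASCII value by 2^i, i being its 1-based position,
    # and sum the weighted values to obtain N.
    n = sum(ord(ch) * (2 ** i) for i, ch in enumerate(password, start=1))
    # The Code is the sum of the decimal digits of N.
    return sum(int(d) for d in str(n))

assert generate_code("AbCd") == 21  # matches the worked example above
```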
2.2 Modified Bits Rotation and Reversal
Technique
In this method, a password is given along with the input image. The value of each pixel of the input image is converted into an equivalent eight-bit binary number. We then add the ASCII values of the bytes of the password to generate a number; the number of bits to be rotated to the left and reversed is decided by this number. The generated number is then taken modulo 7 to produce the effective number (N_R), according to which the bits are rotated and reversed. Let N be the number generated from the password and N_R (the effective number) be the number of bits to be rotated to the left and reversed. The relation between N and N_R is represented by equation (1):

N_R = N mod 7 ------ eq. (1)

where 7 is the number of iterations required to reverse the entire input byte and N = n_1 + n_2 + n_3 + n_4 + ... + n_j, where n_1, n_2, ..., n_j are the ASCII values of the bytes of the password.
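For instance, assuming the same password "AbCd" as in Section 2.1, N = 65 + 98 + 67 + 100 = 330 and N_R = 330 mod 7 = 1, so one bit would be rotated and reversed.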
For example, let P_in(i,j) be the value of a pixel of an input image, and let [B_1 B_2 B_3 B_4 B_5 B_6 B_7 B_8] be the equivalent eight-bit binary representation of P_in(i,j), i.e.:

P_in(i,j) -> [B_1 B_2 B_3 B_4 B_5 B_6 B_7 B_8]
If N_R = 5, five bits of the input byte are rotated to the left to generate the resultant byte [B_6 B_7 B_8 B_1 B_2 B_3 B_4 B_5]. After rotation, the rotated five bits, i.e. B_1 B_2 B_3 B_4 B_5, get reversed to B_5 B_4 B_3 B_2 B_1, and hence we get the resultant byte [B_6 B_7 B_8 B_5 B_4 B_3 B_2 B_1]. This resultant byte is converted to the equivalent decimal number P_out(i,j), i.e.:

[B_6 B_7 B_8 B_5 B_4 B_3 B_2 B_1] -> P_out(i,j)
where P_out(i,j) is the value of the output pixel of the resultant image. Since the weight of each pixel is responsible for its colour, the change in the weight of each pixel of the input image due to the modified Bits Rotation & Reversal generates the encrypted image.
Note: if N = 7 or a multiple of 7, then N_R = 0. In this condition, the whole byte of the pixel gets reversed.
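A minimal Python sketch of the modified Bits Rotation and Reversal step as described above (our own reading of the description, not the author's code): the 8-bit pixel value is rotated left by N_R bits, and the N_R rotated bits, which end up at the low end of the byte, are then reversed.

```python
def rotate_reverse_pixel(pixel: int, n_r: int) -> int:
    bits = f"{pixel:08b}"              # eight-bit binary representation
    if n_r == 0:                       # N a multiple of 7: reverse whole byte
        return int(bits[::-1], 2)
    rotated = bits[n_r:] + bits[:n_r]  # rotate left by n_r bits
    head, tail = rotated[:8 - n_r], rotated[8 - n_r:]
    return int(head + tail[::-1], 2)   # reverse the n_r rotated bits
```

For N_R = 5 this reproduces the transformation [B_1 ... B_8] -> [B_6 B_7 B_8 B_5 B_4 B_3 B_2 B_1] shown above.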
2.3 Extended Hill Cipher Technique
This is another method for encryption of images proposed in
this paper. The basic idea of this method is derived from the
work presented by Saroj Kumar Panigrahy et al [2] and
Bibhudendra Acharya et al [3]. In this work, involutory matrix
is generated by using the algorithm presented in [3].
Algorithm of the Extended Hill Cipher technique:
Step 1: An involutory matrix of dimensions m x m is constructed using the input password.
Step 2: The index value of each row of the input image is converted into an x-bit binary number, where x is the number of bits in the binary equivalent of the index value of the last row of the input image. The resultant x-bit binary number is rearranged in reverse order, and this reversed x-bit binary number is converted into its equivalent decimal number. The weight of the index value of each row therefore changes, and hence the position of every row of the input image changes, i.e. the positions of all rows of the input image are rearranged in bit-reversed order. Similarly, the positions of all columns of the input image are also rearranged in bit-reversed order.
Step 3: The Hill Cipher technique is applied to the positionally manipulated image generated in Step 2 to obtain the final encrypted image.
2.4 Modified Cyclic Bit Manipulation
This is a new encryption method used in this paper, proposed by Somdip Dey [5][6][8][9][10][11]. The basic algorithm for this method is as follows:
Step 1: Choose 8 consecutive pixels.
Step 2: Convert each pixel value to its corresponding 8-bit binary value.
Step 3: Form an 8x8 matrix from the 8-bit values of the 8 pixels.
Step 4: Perform the multi-directional matrix cyclic operation on that matrix Code number of times.
Step 5: Convert the modified 8-bit value of each pixel to its corresponding decimal value.
Step 6: Put the newly generated value in place of the old value of that pixel.
Step 7: Go to Step 1, and continue until all the pixel values of the image have been modified.
2.4.1 Diagrammatic Representation of Modified
Cyclic Bit Manipulation
Let the following be the matrix comprising the 8-bit values of 8 pixels.
Note: A, B, C, D, ..., H represent the pixels and 1, 2, 3, ..., 8 index the 8 binary bits of each pixel.

A1 A2 A3 A4 A5 A6 A7 A8
B1 B2 B3 B4 B5 B6 B7 B8
C1 C2 C3 C4 C5 C6 C7 C8
D1 D2 D3 D4 D5 D6 D7 D8
E1 E2 E3 E4 E5 E6 E7 E8
F1 F2 F3 F4 F5 F6 F7 F8
G1 G2 G3 G4 G5 G6 G7 G8
H1 H2 H3 H4 H5 H6 H7 H8

After the Cyclic Operation, the matrix becomes:

B1 A1 A2 A3 A4 A5 A6 A7
C1 B3 B4 B5 B6 B7 C7 A8
D1 B2 D3 C3 C4 C5 D7 B8
E1 C2 E3 D5 E5 C6 E7 C8
F1 D2 F3 D4 E4 D6 F7 D8
G1 E2 F4 F5 F6 E6 G7 E8
H1 F2 G2 G3 G4 G5 G6 F8
H2 H3 H4 H5 H6 H7 H8 G8

Thus, the new pixel values are:
Pixel 1: B1 A1 A2 A3 A4 A5 A6 A7,
Pixel 2: C1 B3 B4 B5 B6 B7 C7 A8,
Pixel 3: D1 B2 D3 C3 C4 C5 D7 B8,
and so on.
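One plausible implementation of the multi-directional cyclic operation, inferred from the diagram above (this is our reading, not the author's code): each concentric ring of the 8x8 matrix is rotated by one position, with the rotation direction alternating from ring to ring (outermost clockwise, next counter-clockwise, and so on), and the whole operation is repeated Code number of times.

```python
def cyclic_operation(matrix: list[list[str]], times: int) -> list[list[str]]:
    n = len(matrix)
    for _ in range(times):
        for ring in range(n // 2):
            top, bottom = ring, n - 1 - ring
            # Ring coordinates listed in clockwise order.
            coords = ([(top, c) for c in range(top, bottom)]
                      + [(r, bottom) for r in range(top, bottom)]
                      + [(bottom, c) for c in range(bottom, top, -1)]
                      + [(r, top) for r in range(bottom, top, -1)])
            vals = [matrix[r][c] for r, c in coords]
            shift = 1 if ring % 2 == 0 else -1  # alternate rotation direction
            vals = vals[-shift:] + vals[:-shift]
            for (r, c), v in zip(coords, vals):
                matrix[r][c] = v
    return matrix
```

Applied once to the 8x8 matrix of bits above, this reproduces the result matrix shown in the diagram.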
3. BLOCK DIAGRAM OF SD-EI VER-2
METHOD
In this section, we provide the block diagram of SD-EI VER-2
method.
Fig 1: Block Diagram of SD-EI VER-2 Method
4. RESULTS AND DISCUSSIONS
We provide a few results of the proposed SD-EI VER-2 method in the following table.
TABLE 1: Results of SD-EI VER-2 (original file vs. encrypted file)
From the results alone it is not possible to judge the effectiveness of the SD-EI VER-2 method, because the end results of SD-EI and SD-EI VER-2 look almost the same to the naked eye. But if we compare the two methods, we can see that the SD-EI VER-2 method is more secure than the SD-EI encryption method.
5. CONCLUSION AND FUTURE SCOPE
In this paper, the author proposed a standard method of image encryption, which tampers with the image in a very effective way. The SD-EI VER-2 method is very successful in encrypting an image so as to maintain its security and authentication. The inclusion of the modified Bits Rotation and Reversal technique and the modified Cyclic Bit Manipulation makes the system even stronger than before. In future, the security of the method can be further enhanced by adding more secure bit and byte manipulation techniques to the system, and the author has already started to work on that.
6. ACKNOWLEDGMENTS
Somdip Dey would like to thank his fellow students and his professors for their constant enthusiasm and support. He would also like to thank Dr. Asoke Nath, founder of the Department of Computer Science, St. Xavier's College [Autonomous], Kolkata, India, for providing his feedback on the method and helping with the preparation of the project. Somdip Dey would also like to thank his parents, Sudip Dey and Soma Dey, for their blessings and constant support, without which the completion of the project would not have been possible.
7. REFERENCES
[1]. Mitra et al., "A New Image Encryption Approach using Combinational Permutation Techniques", IJCS, 2006, vol. 1, no. 2, pp. 127-131.
[2]. Saroj Kumar Panigrahy, Bibhudendra Acharya, Debasish Jena, "Image Encryption Using Self-Invertible Key Matrix of Hill Cipher Algorithm", 1st International Conference on Advances in Computing, Chikhli, India, 21-22 February 2008.
[3]. Bibhudendra Acharya, Saroj Kumar Panigrahy, Sarat Kumar Patra, and Ganapati Panda, "Image Encryption Using Advanced Hill Cipher Algorithm", International Journal of Recent Trends in Engineering, vol. 1, no. 1, May 2009, pp. 663-667.
[4]. Asoke Nath, Saima Ghosh, Meheboob Alam Mallik, "Symmetric Key Cryptography using Random Key generator", Proceedings of the International Conference on Security and Management (SAM'10, Las Vegas, USA, July 12-15, 2010), P-Vol-2, pp. 239-244 (2010).
[5]. Somdip Dey, "SD-EI: A Cryptographic Technique To Encrypt Images", Proceedings of the International Conference on Cyber Security, CyberWarfare and Digital Forensic (CyberSec 2012), Kuala Lumpur, Malaysia, 2012, pp. 28-32.
[6]. Somdip Dey, "SD-AEI: An advanced encryption technique for images", 2012 IEEE Second International Conference on Digital Information Processing and Communications (ICDIPC), pp. 69-74.
[7]. Asoke Nath, Trisha Chatterjee, Tamodeep Das, Joyshree Nath, Shayan Dey, "Symmetric key cryptosystem using combined cryptographic algorithms - Generalized modified Vernam Cipher method, MSA method and NJJSAA method: TTJSA algorithm", Proceedings of WICT 2011, Mumbai, 11th-14th Dec 2011, pp. 1175-1180.
[8]. Somdip Dey, "SD-REE: A Cryptographic Method To Exclude Repetition From a Message", Proceedings of the International Conference on Informatics & Applications (ICIA 2012), Malaysia, pp. 182-189.
[9]. Somdip Dey, "SD-AREE: A New Modified Caesar Cipher Cryptographic Method Along with Bit-Manipulation to Exclude Repetition from a Message to be Encrypted", Computing Research Repository - CoRR, vol. abs/1205.4279, 2012.
[10]. Somdip Dey, Joyshree Nath and Asoke Nath, "An Advanced Combined Symmetric Key Cryptographic Method using Bit Manipulation, Bit Reversal, Modified Caesar Cipher (SD-REE), DJSA method, TTJSA method: SJA-I Algorithm", International Journal of Computer Applications 46(20): 46-53, May 2012. Published by Foundation of Computer Science, New York, USA.
[11]. Somdip Dey, Joyshree Nath, Asoke Nath, "An Integrated Symmetric Key Cryptographic Method - Amalgamation of TTJSA Algorithm, Advanced Caesar Cipher Algorithm, Bit Rotation and Reversal Method: SJA Algorithm", IJMECS, vol. 4, no. 5, pp. 1-9, 2012.
[12]. Somdip Dey, Kalyan Mondal, Joyshree Nath, Asoke Nath, "Advanced Steganography Algorithm Using Randomized Intermediate QR Host Embedded With Any Encrypted Secret Message: ASA_QR Algorithm", IJMECS, vol. 4, no. 6, pp. 59-67, 2012.
[13]. Somdip Dey, Joyshree Nath, Asoke Nath, "Modified Caesar Cipher method applied on Generalized Modified Vernam Cipher method with feedback, MSA method and NJJSA method: STJA Algorithm", Proceedings of FCS'12, Las Vegas, USA.
[14]. Somdip Dey, "An Image Encryption Method: SD-Advanced Image Encryption Standard: SD-AIES", International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2), pp. 82-88.
[15]. http://en.wikipedia.org/wiki/RSA_(algorithm) [ONLINE]
[16]. http://en.wikipedia.org/wiki/Elliptic_curve_cryptography [ONLINE]
[17]. Behrouz A. Forouzan, Cryptography & Network Security, Tata McGraw Hill Book Company.
[18]. Somdip Dey, "Amalgamation of Cyclic Bit Operation in SD-EI Image Encryption Method: An Advanced Version of SD-EI Method: SD-EI Ver-2", International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(3), pp. 238-242.
[19]. http://en.wikipedia.org/wiki/Public-key_cryptography [ONLINE]
[20]. http://en.wikipedia.org/wiki/Symmetric-key_algorithm [ONLINE]
[21]. http://en.wikipedia.org/wiki/Brute-force_search [ONLINE]
Data Leak, Critical Information Infrastructure and the Legal Options: What does Wikileaks teach us?

Ida Madieha Abdul Ghani Azmi
Ahmad Ibrahim Kulliyyah of Laws
International Islamic University
Selangor, Malaysia
[email protected]

Sonny Zulhuda
Ahmad Ibrahim Kulliyyah of Laws
International Islamic University
Selangor, Malaysia
[email protected]

Sigit Puspito Wigati Jarot
Kulliyyah of Engineering
International Islamic University
Selangor, Malaysia
[email protected]

Abstract: The massive data leaks by Wikileaks suggest how fragile national security is from the perspective of information system and network sustainability. What Wikileaks have done and achieved raises some causes for concern. How do we view such leaks? Are they an act of whistle-blowing or disclosure of government misconduct in the interest of the public? Are they the champion of a free press? Or are they a form of data breach or information security attack? What if they involve the critical information infrastructure (CII)? Could they be classified as cyber-terrorism? The objective of this paper is to outline the problems and challenges that Malaysia should anticipate and address in maintaining its national CII. The paper first looks at Wikileaks, as it is the icon of data leaks. It then examines the causes of data breach before proceeding to foray into the concept of critical information infrastructure in the US and Malaysia. Finally, the paper explores legal options that Malaysia can adopt in preparing herself for a possible onslaught of data breaches. It is the contention of the paper that the existing traditional legal framework should be reformed in line with the advances of information and communications technologies, especially in light of the onslaught of data leaks by the new media typically represented by Wikileaks.

Keywords: data breach; critical information infrastructure; law and regulation; Malaysia
I. ENTER THE NEW WORLD OF SPIDER WEB
Wikileaks, an international non-profit organization that
runs the online whistle-blower services at the now-defunct
website <www.wikileaks.org>, is hailed by the Time magazine
as the whistle-blower of the digital age. Its Australian
founder, Mr. Julian Assange, was made a candidate for the
Times Person of the Year 2010 award. This prominence was
credited to their activities, most notably in the second half of
2010, in disseminating on the Internet hundreds of thousands of
secret or confidential documents involving various
governments and giant corporations [1].
Among the critical data leaked was the disclosure of a long
list of commercial and other installations deemed critical to
Americas national security. Included in the list are the landing
points of undersea cables and the names of firms making vital
vaccines. There was also disclosure about NATOs new plans
for defending Poland and the Baltic states, which includes
disclosure of the code name related to the plans. As it is earlier
mentioned, the 250,000 data leaked by the Wikileaks had
implicated many countries including Malaysia.
1
Despite the leak of highly classified information such as military and diplomatic communication data, there is uncertainty as to whether or not Wikileaks will ever face legal action. The New York Times reported on 7th December 2010 that the US Justice Department was exploring possible charges against WikiLeaks and Assange over the release of diplomatic messages, under the Espionage Act 1917 or even for conspiracy or trafficking in stolen property. Meanwhile, Julian Assange contested in the UK courts his extradition to Sweden over alleged sexual offences, as reported by The Guardian on 13th July 2011. Needless to say, Assange and his Wikileaks have gained huge support from all over the world. The incident demonstrates several causes for concern: firstly, even a highly critical infrastructure such as the one housing military systems and diplomatic cables, despite its sensitivity, is not spared from security breach or intrusion. Secondly, such leaks can in turn cause far-reaching damage to public interests, national security and economic interests. Last but not least, the problems cannot be surmounted easily; the hands of the law seem incapable of resolving them. It does not help that ordinary data breaches are so common that there does not seem to be a foolproof method of totally eliminating them.
II. CAUSES AND TRENDS OF DATA BREACH
Statistics tell us that in the cyber environment, data breaches are everyday phenomena. A study conducted by Symantec and the Ponemon Institute in their 2011 Cost of a Data Breach reports found that around half of the causes of data breach fall into the categories of system glitch, negligence and malicious attack. Though negligence counts for slightly more than malicious attacks, the costs caused by the latter are the highest of all. The reports also revealed that such malicious attacks involve the use of malicious software (viruses,
malware, worms, Trojans) in up to 50% of the cases. Lesser incidents involve malicious insiders (33%); theft of data-bearing devices (28%); SQL injection (28%); phishing (22%); and web-based attacks (17%) [2].
¹ See, for example, "WikiLeaks: Malaysia didn't inform US of missing jet engines", The Malaysian Insider, 15th February 2011; "WikiLeaks: Malaysia loses game of 'chicken' with Singapore over bridge", The Malaysia Today, 6th July 2011; "Anifah summons Singapore envoy over Wikileaks content", The Star, 15th December 2011.
In conformity with the above reports, Bandai (2010) observes that there are many ways to bring about a data breach [3]. He categorized data threat agents into three groups, namely "hackers and malware", "well-meaning insiders" and "malicious insiders". A hacker breach is usually conducted in multiple phases: (1) an incursion phase, (2) a discovery phase, (3) a capture phase, and (4) an exfiltration phase. The action of well-meaning insiders, on the other hand, is a key causative factor in a large number of breach cases; according to the Verizon Data Breach Investigations Report 2009, 67 percent of data breach reports stem from insider negligence. Typical breach cases perpetrated by malicious insiders involve personnel with valid access credentials for the data they intend to steal.
Meanwhile in the US, the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) reported a dramatic increase in the number of reported cyber-security incidents affecting U.S. critical infrastructure companies between 2009 and 2011. In 2009, ICS-CERT fielded 9 incident reports; in 2010, that number increased to 41; in 2011, it was 198. The report says that of all critical sectors, the water sector was the one most implicated, accounting for more than half of the incidents, as shown in Figure 1.
Figure 1. Data Security Incidents on Critical Sectors in the US (2011)
It is increasingly obvious that, with new advances in information technology (IT), data breach trends are also growing, potentially beyond control. Hackers will keep improving and reinventing modes of data breach as the technology becomes more and more capable. Worse still, IT security in 2020 will be less about protection from traditional bad guys and more about protecting business models (at the corporate level) or national interests (at the country level). As reported by Schneier in Security 2020, the trends of IT in the year 2020 will be shaped by a few interconnected concepts [4]. First, there will be deperimeterization, which assumes everyone is untrustworthy until proven otherwise. Secondly, there will be deconsumerization, which requires networks to assume all user devices are untrustworthy until proven otherwise. Next, decentralization and deconcentration will not work if one is able to hack the devices to run unauthorized software or access unauthorized data. Deconsumerization will not be viable unless you are unable to bypass the ads, or whatever the vendor uses to monetize you. And depersonization requires autonomous devices to be truly autonomous. It is very obvious that all these trends lead to an increased risk of data breach [4].
The data breach issue becomes even more crucial in the cloud computing environment. Some IT security professionals have viewed the cloud as the perfect storm for data breach. The storm will be technologically facilitated by three factors: mobility, cloud and virtualization, driven by three parallel forces, namely deperimeterization, mobility and improving data centre efficiency [4]. What Wikileaks did in late 2010 was one of the recent examples of the misuse of cloud computing (as serviced by Amazon). A similar incident occurred in Europe, forcing Google to make an apology when its Gmail service collapsed. Salesforce.com was hit by a phishing attack in 2007 which duped a staff member into revealing passwords. In sum, the cloud is becoming particularly attractive to cyber crooks.
III. CRITICAL INFORMATION INFRASTRUCTURE AND ITS DIFFERENCE FROM ORDINARY SYSTEMS
The term CII comprises three main components: 'critical', 'infrastructure', and 'information infrastructure' [5]. The term has different connotations in different countries. To that extent, the German agency Bundesamt für Sicherheit in der Informationstechnik (BSI, 2004) reiterates that, whereas it is possible to identify some common structural elements between countries in terms of the measures taken so far, the functions performed by the responsible organisations and the degree of protection achieved to date remain widely different.
What makes the protection of CII an important national security interest is its criticality criterion. CII is about the reliance of a nation or the public on those information assets: they must be information assets so enormously important that their loss, lack or inefficiency would lead to a serious impact.
Countries vary in their perception of how serious is 'serious'. It may involve a major detrimental impact on the availability or integrity of essential services, leading to severe economic or social consequences or to loss of life, as defined by the Centre for the Protection of National Infrastructure (CPNI), UK. In the US, criticality is associated with the debilitating impact on security, national economic security, national public health or safety, or any combination of those matters [6]. On that basis, in the US the military and diplomatic sectors are essentially critical sectors that house critical infrastructure. The President's National Strategy for Homeland Security (NSHS), issued in July 2002, views specific infrastructure sectors as critical because of the particular and important functions or services they provide to the country and because their compromise can have a far-reaching effect and potentially reverberate long after the immediate damage. Listed under such critical infrastructures are the agriculture, food and water sectors; the public health and emergency services sectors; institutions of government and administration; the defense sector; the information and telecommunications sector; energy; transportation; banking and finance; the chemical industry; and the postal and shipping sectors.
The word 'infrastructure' literally means the basic structures and facilities necessary for a country or an organisation to function efficiently. In many countries, CII policies cover both tangible and intangible assets as well as production or communications networks. Australia, as indicated by the Attorney-General's Department, for example, covers physical facilities, supply chains, information technologies and communication networks; the UK's Centre for the Protection of National Infrastructure (CPNI) covers essential services and systems, both physical and electronic; while the US covers systems and assets, whether physical or virtual. As a whole, the term critical information infrastructure relates to information and information assets. In the civil aviation sector, for example, the infrastructure may include airplanes, personnel, navigation systems, information and communications systems, towers and airports, and administrative as well as regulatory infrastructure. CII is one part of these: it is all about the information and communications systems operated for and by the aviation system. In this respect, the protection of critical information infrastructure refers exclusively to the security and protection of the IT connections and IT solutions within and between the individual infrastructure sectors.
In Malaysia, the critical information infrastructure is defined as those assets (real and virtual), systems and functions that are so vital to the nation that their incapacity or destruction would have a devastating impact on Malaysia's national economic strength, national image, national defence and security, government capability to function, and public health and safety (National Cyber Security Policy 2006, or NCSP). The NCSP defines the Critical National Information Infrastructure (CNII) as constituting the networked information systems of ten critical sectors: national defence and security; banking and finance; information and communications; energy; transportation; water; health services; government; emergency services; and food and agriculture.
From the above discussion one can conclude that it is the critical information infrastructure that makes or breaks a national economy. As the network is massive, the points of attack can be hard to determine. As such, one may wonder how invincible Malaysia's critical information infrastructure really is.
IV. WHY IS THE CRITICAL INFORMATION INFRASTRUCTURE
SO VULNERABLE?
The increasing reliance of critical sectors on computer networks and information systems poses an enormous and unprecedented challenge. As Condron (2007) described, for the first time in history, an individual armed with nothing more than technical expertise, a computer system, and a network connection could theoretically bring a nation to its knees. The fact that an attack on critical infrastructure is not merely an ordinary criminal matter but rather an issue of national security makes it more urgent for governments worldwide to come up with the necessary policies, plans or laws addressing issues ranging from information sharing to public-private cooperation, from criminal law to national security, and from public awareness to law enforcement [6].
The protection of CII has been an international concern. The OECD reported in May 2008 that many countries' national plans or strategies start by first identifying what constitutes a critical infrastructure. The concept includes the physical or intangible assets whose destruction or disruption would seriously undermine public safety, social order and the fulfilment of key government responsibilities. Such damage would generally be catastrophic and far-reaching. Sources of critical infrastructure risk could be natural (e.g. earthquakes or floods) or man-made (e.g. terrorism, sabotage).
This concern is natural given that we are gradually moving into an electronic environment where most documents are digitised and transactions computerised, as is happening with revenue collection and many other government applications. Given the security challenges facing such an electronic environment, we are left with one nagging question: how secure are those systems? The answer to this question will undoubtedly have huge implications for the life of the community and the country as a whole.
V. IS MALAYSIA'S CRITICAL INFORMATION INFRASTRUCTURE AN IMPENETRABLE FORTRESS?
With the increasing reliance of Malaysia's critical infrastructure on ICT, the need for a secure and resilient information infrastructure is pressing and inevitable. The key objectives of the NCSP declare that Malaysia's national critical information infrastructure must be secure and resilient, that is, immune against threats and attacks on its systems. The primary question is whether Malaysia is ready to address those threats. At this juncture, it is instructive to understand how widespread and critical data breach is in Malaysia.
Recent incidents involving public facilities and critical sectors such as railway operations, the stock exchange, the postal system and government agencies have raised concerns over the security of our critical information infrastructure. In one incident, during busy hours in July 2006, the state-linked Light Railway Transit (LRT) system experienced a computer glitch that resulted in the loss of train tracking on the monitor screens in the control centre. What followed was a service disruption every five minutes, with the trains running at a much slower pace. Due to a failure of the backup system, the situation worsened and left thousands of passengers stranded for hours in the trains and at stations. The management cited an unexpected technical failure as the cause of the disruption.
In another embarrassing incident, reported by The Star on 4th July 2008, a computer system malfunction caused Bursa Malaysia, the national stock exchange, to suspend a whole day of trading. According to the President of the Malaysian Investors Association, this unprecedented interruption to stock trading was estimated to have cost the Government RM 1 million in stamp duty from contracts done, while brokers stood to lose RM 5 million. Monetary losses were not the only consequence: the stock exchange and the Malaysian economy may also have suffered a loss of credibility.
There was also a series of unauthorised access and web defacement incidents by anonymous hackers against several government websites, apparently done in concert as revenge for the Government's decision to crack down on websites allegedly conducting activities in violation of copyright law (The Star, 17th June 2011). Even though the damage was said to be minor, the fact that it was an intentional attack on a national basis indicates that the country's interests may be at stake.
The national agency CyberSecurity Malaysia reported that in the year 2011 alone there was a total of 15,218 incidents involving online harassment, online fraud, hacking, malicious programs, denial of service and intrusion. This was almost double the 8,090 incidents reported throughout 2010 (3,564 in 2009). Meanwhile, spam incidents alone in 2011 were recorded at 110,870. The 2011 incident figures are illustrated in Figure 2, as reported by the Malaysia Computer Emergency Response Team (MyCERT).
Figure 2. Data breach incident in Malaysia (2011)
Identity theft in Malaysia is also reported to be rampant, including personal data abuse allegedly linked to a government agency dealing with university student recruitment in 2005 (The Star, 6th August 2005). According to the 2011 MyCERT incident statistics cited above, at least 4,433 incidents of data intrusion and abuse took place. All these reports show that data breach is on the increase in Malaysia. Hackers are becoming more competent and sophisticated, while knowledge of information security lags behind. There is a concern that our critical information infrastructure will not be an impenetrable fortress. If Malaysia wishes to continue its impressive growth, securing its critical information infrastructure is now a necessity, because attacks on it can cripple the country.
VI. CAN THE LAW COPE?
That question has been asked for some time [7], reflecting doubt about the ability of law to deal with newly emerging technological threats and challenges. Likewise, in the new and complicated field of CII, the law seems far from ready to face the challenges in a comprehensive way. In fact, many countries, including Malaysia, do not have a specific law on the protection of CII. It is, however, noted that several laws may offer limited assistance in dealing with the problems and challenges, notably laws that deal with security issues as well as with the electronic environment.
Security law is represented in several pieces of legislation, including the Protected Areas and Protected Places Act 1959, the Penal Code and the newly passed Security Offences (Special Measures) Act 2012. While these laws do identify some important concepts, such as 'protected place', 'essential services' and 'essential infrastructure', they are nevertheless insufficient, as they are designed mainly to deal with tangible assets or physical infrastructure. By the same token, the set of cyberlaws, such as the Computer Crimes Act (CCA) 1997, the Communications and Multimedia Act 1998 and the Personal Data Protection Act 2010, suffers from serious limitations. One primary concern is that these laws do not materially differentiate a threat to CII from ordinary computer abuse on private computers. If such an attack took place and the case were dealt with under the current CCA 1997, the CII would be treated as nothing more than an individual computer system. This wrong message would underestimate the need to protect the security of CII in Malaysia.
VII. POSSIBLE LEGAL OPTIONS?
As there is no specific legislation that addresses attacks on the critical information infrastructure, it would be prudent to see what other nations are doing, especially the US.
A. Activating the law of espionage?
Not surprisingly, using the law of espionage has been considered by the US to prosecute those who leak critical data, as illustrated in the Wikileaks saga. This was also suggested by Doug Meier (2008): as it is possible to prosecute the traditional media for publishing government secrets, so it is possible to extend that to the new media. However, as it is too easy for people with classified information to leak it to the public, it is necessary for the government to tighten its current protocols for protecting its truly secret information. Meier admitted that there are technical problems in tracking down offenders, such as anonymity, territorial restrictions and the availability of mirror sites [8]. In fact, this was exactly the strategy taken by the US Government. To strengthen the US government's powers to take action against data leaks, a move was made to amend the Espionage Act to criminalise the wilful and knowing disclosure of properly classified information, by any person with current or former authorized access to classified information, to any person not authorized to access such classified information, knowing that such person is not authorized such access.
Hester (2011) examined the US government's move to amend the Espionage Act and introduce a new SHIELD Act in order to deal more effectively with the disclosure of the government's sensitive secrets. In his view, such a move would further obfuscate the line between the person who leaks intelligence that threatens national security and the person or institution that publishes the leaked intelligence. More worryingly, the introduction of this Act indicates that the US government is willing to expand the concept of espionage without waiting for decisions from the judiciary. That may come at the expense of freedom of the press [9].
It was therefore argued that the US Government should not use its powers indiscriminately to take action against those involved in data leaks. Papandrea (2008) reckoned that in any prosecution against a non-governmental actor for disseminating national security information, the government must demonstrate not only that the disclosure posed an immediate, serious, and direct threat to national security, but also that the offender either intended the disclosure to harm the United States or help a foreign nation, or was recklessly indifferent to the harm that the disclosure would cause [10].
B. A Clash with Fundamental Liberties?
On the other end of the spectrum, there may be difficulty in crafting the correct provisions, as technology is ever evolving. Laws crafted for the real world are ill-equipped to deal with the new media. Robinson (2010) conceded that leaks have long been relied upon by the traditional or 'old' media as a source of information, with the media shielding themselves under the laws protecting free speech. However, Wikileaks has called into serious question the legal regimes related to the disclosure of information, including protections related to freedom of speech and the press, protection for sources and whistleblowers, the alleged need for confidentiality in government, and the justification for concomitant limitations upon freedom of information and transparency [11].
There is also a freedom of expression interest. Michalec espouses the view that the failure of the government to prevent leaks is not necessarily a failure of the existing scheme, but rather a failure of the government to apply current controls. Michalec argued against the new provision as being unnecessary and overbroad, such that it would result in a chilling of freedom of expression [12]. Dmitrieva further questioned the wisdom of applying the criminal anti-theft statute to leaks of confidential government information. She raised a number of policy issues on why this should not be done. Top of her list is the concern that the government should not aggressively prosecute the media for leaking government secrets, as this would obstruct the media's ability to conduct independent investigations into government actions [13].
C. Press Freedom in the Public Interest?
Along the same line of argument, Silver (2008) advanced the view that there must be some form of protection for journalists who disseminate important information to the public. Sharing the same view as others, he saw a profound need to find a balance between national security and the press's responsibility to expose the truth [14]. This freedom-of-the-press argument has been supported by Lewis [15] but was rejected by Fenster [16] and Peters [17], mainly because Wikileaks' indiscriminate disclosure poses more harm than benefit: the disclosure has resulted in untold, incalculable damage to the nation's military personnel, national security and diplomatic efforts. Meanwhile, Peters opined that Wikileaks falls short of being a press, as it does not conduct any investigative journalism. What Wikileaks does is simply dump documents. It has not gone far enough to minimise harm by removing the identities of individuals named in the leaked documents [17].
D. Targeting the Intermediaries?
Some argue that whilst it is difficult to target Wikileaks itself, it is possible to prosecute the person responsible for the leaking. Davidson (2011) observes that in the US, for example, the government was eyeing a prosecution of Pfc. Bradley Manning, the US soldier suspected of disgorging unprecedented amounts of classified military and diplomatic reports to Wikileaks. Any restraint action against Wikileaks per se would be futile. If the US government chose to seek an injunction against Wikileaks, for example, the effectiveness of the injunction would be problematic given the site's worldwide reach. Removing the Wikileaks materials from the Internet is equally problematic, as the organisation appears to maintain its content on more than twenty servers around the world and on hundreds of domain names. The obvious target is of course the person responsible for the leaks, the leaker, who is within the reach of the government [18].
Even more difficult would be prosecuting the downstream publishers who obtain the materials from the Internet; enjoining them from further circulating or publishing the materials would pose several legal hurdles. Bella powerfully advances the view that any strategy for shaping the environment for leaks must focus on the technical as well as the legal environment [19]. There must be a system in which access to information is restricted to the information necessary for the user to perform his or her assigned functions. There must also be tools to detect anomalous data activity from sources inside as well as outside the affected network, and possibly insider threat profiling. If such a system is not effectively monitored, the environment for leaks has already been created. A minimal sketch of these two controls follows.
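By way of illustration only (this sketch is not drawn from Bella's article; the roles, document categories and threshold are invented), the two controls described above, need-to-know access restriction and detection of anomalous data activity, might be expressed in Python roughly as follows:

    # Hypothetical sketch: need-to-know access control plus a crude
    # anomaly flag; roles, categories and the threshold are assumptions.
    from collections import Counter

    # Each role is granted access only to the document categories
    # required for its assigned functions.
    NEED_TO_KNOW = {
        "analyst": {"threat-reports"},
        "diplomat": {"cables"},
        "auditor": {"threat-reports", "cables"},
    }

    access_log = Counter()  # retrievals per user

    def request_document(user, role, category):
        """Grant access only within the role's need-to-know set."""
        if category not in NEED_TO_KNOW.get(role, set()):
            return False  # outside assigned functions: deny
        access_log[user] += 1
        return True

    def anomalous(user, threshold=100):
        """Flag users whose retrieval volume suggests bulk exfiltration."""
        return access_log[user] > threshold

    print(request_document("alice", "diplomat", "cables"))          # True
    print(request_document("alice", "diplomat", "threat-reports"))  # False
    print(anomalous("alice"))                                       # False

The point of the sketch is that both controls are cheap to state but, as the paragraph above notes, useful only if someone actually monitors the log.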
E. Other Options
For countries with freedom of information legislation, the challenge will be building in restrictions on access in relation to national security documents. Lane et al. (2008) examined the practices in the US, UK, Canada and Australia in ensuring that, despite supporting the concept of transparency, certain classes of government documents are kept restricted in terms of access. In other words, there must be adequate protection against the disclosure of sensitive government documents. For example, exemptions could be built in for documents whose disclosure would, or could reasonably be expected to, damage the security or defence of a country, damage international relations, or divulge information received in confidence by or on behalf of foreign governments or international organisations. This is to guarantee that the functions of government are not impaired, if not crippled, and that the interests of individuals and businesses are not prejudiced. Lane viewed that such exemptions would not create any chilling effect on the operation of freedom of information law [20].
Lane further pointed out a major problem in protecting critical infrastructure: a major portion of it is privately owned or operated commercially. As a result, information sharing between government and the private sector has become a vitally important component of effective risk management. It is thus prudent for a country to establish a platform of cooperation between the owners and operators of the critical information infrastructure within that country. In Australia, this comes in the form of the Trusted Information Sharing Network, created in 2003. This platform was created to identify critical infrastructure, analyse vulnerabilities, risks and sector interdependencies, and prepare for hazards [20].
Meanwhile, a different solution is offered by Freedman (2012), who ventured further to propose treating state secrets as intellectual property as a strategy for prosecuting Wikileaks [21]. This is because pursuing a copyright case gives a higher chance of success than espionage. Firstly, Wikileaks' disclosure would not be caught by the fair use exceptions. Secondly, a copyright action is not hindered by issues of extraterritorial application. Most importantly, copyright enjoys strong constitutional backing and would not face the limitations and challenges of extradition.
From the US experience, it is clear that laws drafted for traditional media are ill-equipped to deal with the problems posed by the new media. The whole experience with Wikileaks suggests that a fresh look is needed at the existing legal framework governing the media, whistle-blowing, data leaks, freedom of information, freedom of the press and critical information infrastructure.
VIII. CONCLUSIONS
The prevalence of ordinary data breaches in Malaysia demonstrates how real the danger of leakage is, for the government's sensitive data specifically and for the critical information infrastructure generally. As hackers increase in sophistication and technical expertise, and as the critical information infrastructure becomes more massive and intricate, it becomes more vulnerable to attack.
What would be the legal options for Malaysia? We could tread the same path as the United States: we could treat the leakers as spies or, worse still, as terrorists, which would justify grave action under the new security laws. If we take this path, we must be prepared for the consequences. More compelling is the need to strengthen the security of the CII itself. As illustrated in this article, a multi-pronged approach is required, one that involves a mixture of technology, manpower training and an effective legal framework.
Finally, it is noteworthy that this initial study raises several issues as grounds for a future research agenda. Firstly, there is a continuous need to assess emerging methods of data leak and security breach that potentially threaten the security of critical information infrastructure. Emerging technological trends such as cloud computing, the Internet of Things and intelligent cities will certainly incite new methods of data breach. Secondly, this study reminds governments to put in place the necessary laws or policies to ensure that each sector identified as critical infrastructure is sufficiently protected. On top of that, this study solicits further research assessing and analysing the existing legal landscape that aims to protect the critical information infrastructure in Malaysia, involving all enabling laws from all sectors. Through such study, gaps can be identified and problems further addressed.
REFERENCES
[1] D. Leigh and L. Harding, Wikileaks: Inside Julian Assange's War on Secrecy. US: Guardian Books, 2011.
[2] Ponemon Institute, "2011 Cost of Data Breach Study: Global", retrieved from <http://www.symantec.com/content/en/us/about/media/pdfs/b-ponemon-2011-cost-of-data-breach-global.en-us.pdf> (accessed 15th September 2012).
[3] S. Bandal, "Data breach: Cause, circumstance, remedies", Secure Asia 2010.
[4] D. Howard and K. Prince, Security 2020: Reduce Security Risks This Decade. US: Wiley, 2010.
[5] Bundesamt für Sicherheit in der Informationstechnik (BSI), "Critical infrastructure protection: Survey of world-wide activities", BSI Kritis, 4/2004.
[6] S. M. Condron, "Getting it right: Protecting American critical infrastructure in cyberspace", Harvard Journal of Law and Technology, 20 Harv. J. Law & Tech. 404 (2007).
[7] C. Edwards, N. Savage and I. Walden, Information Technology and the Law. UK: Macmillan Publishers Ltd., 1990.
[8] D. Meier, "Changing with the times: How the government must adapt to prevent the publication of its secrets", The University of Texas School of Law, 28 Rev. Litig. 203.
[9] J. L. Hester, "The Espionage Act and today's high-tech terrorist", 12 N.C.J.L. & Tech. On. 177 (2011).
[10] M. Papandrea, "Lapdogs, watchdogs, and scapegoats: The press and national security information", 83 Ind. L.J. 233.
[11] J. Robinson, "Wikileaks, disclosure, free speech and democracy: New media and the Fourth Estate", More or Less Democracy & New Media, 144 (2012).
[12] M. J. Michalec, "The Classified Information Protection Act: Killing the messenger or killing the message?", 50 Clev. St. L. Rev. 455.
[13] I. Dmitrieva, "Stealing information: Application of a criminal anti-theft statute to leaks of confidential government information", 55 Fla. L. Rev. 1043.
[14] D. A. Silver, "National security and the press: The government's ability to prosecute journalists for the possession or publication of national security information", 13 Comm. L. & Pol'y 447.
[15] K. Lewis, "Wikifreak-out: The legality of prior restraints on Wikileaks' publication of government documents", Journal of Law & Policy, Vol. 38: 417 (2012).
[16] M. Fenster, "Disclosure's effects: Wikileaks and transparency", Iowa Law Review, Vol. 97: 753 (2012).
[17] J. Peters, "Wikileaks would not qualify to claim Federal reporter's privilege in any form", 63 Fed. Comm. L.J. 667 (2010-2011).
[18] S. Davidson, "Leaks, leakers, and journalists: Adding historical context to the Age of Wikileaks", 34 Hastings Comm. & Ent. L.J. 27 (2011-2012).
[19] P. L. Bella, "Wikileaks and the institutional framework for national security disclosures", The Yale Law Journal, 121: 1448 (2011).
[20] B. Lane, S. Corones, S. Hedge and D. Clapperton, "Freedom of information implications of information sharing networks for critical infrastructure protection", 15 AJ Admin L 193 (2008).
[21] J. Freedman, "Protecting state secrets as intellectual property: A strategy for prosecuting Wikileaks", 48 Stan. J. Int'l L. 185 (2012).
The Right to Consent and Control Personal Information Processing in Cyberspace
Thilla Rajaretnam
Associate Lecturer, School of Law,
University of Western Sydney, NSW Australia
E-mail: [email protected]
ABSTRACT
Consumer concerns over the safety of their personal information and the violation of their privacy rights are described as being the single overwhelming barrier to the rapid growth of e-commerce. This paper explores the problems for e-commerce users when the collection, use, and disclosure of personal information are based on implied consent in e-commerce transactions. It questions the assumption that consent is sufficient to waive privacy interests in relation to e-commerce transactions. It argues that consent should not necessarily be sufficient to waive privacy interests, and that the collection, use and/or disclosure of personal information should be subject to regulation.
KEYWORDS
Consent, information privacy, privacy violations, e-commerce, privacy protection mechanisms.
1 INTRODUCTION
Whilst the internet is undoubtedly beneficial to e-consumers and other users such as social network users, information technology has affected privacy dramatically [1], [2]. It has made it possible for any person to easily collect personal information about Internet users without their consent. Consumer concerns over the safety of personal information and the violation of an individual's privacy rights are described as being the single overwhelming barrier to the rapid growth of e-commerce. Recent research findings also show that the level of public concern for privacy and personal information has increased since 2006 [1], [3]. In 2007, it was found that 50 percent of Australians were more concerned about providing information about themselves online than they were two years earlier [4]. A recent survey in Europe also indicates that about a quarter of social network users (26 percent) and of online shoppers (18 percent) feel that they are not in complete control of their personal data [5]. Internet users worry that they give away too much personal information, and want to be forgotten when there are no legitimate grounds for retaining it [6].
This paper explores the constraints on the exercise of individual autonomy. Viewed from the perspective of autonomy, it considers what autonomy means for these purposes and whether current practices (such as the use of standard-form privacy policy statements and bundled consent) protect individual autonomy. It argues that, to resolve the problem with allowing the use and/or disclosure of personal information based on consent, the e-commerce user must first have sufficient knowledge of the purpose of information collection and of the use and disclosure of the information collected; secondly, consent mechanisms should allow informed and rational decision-making; and thirdly, there should be the opportunity for individual choice, allowing withdrawal of consent or the opting out of information collection. This paper also examines the effects of privacy violations on individuals where there is covert collection, automatic processing, and data security risks arising from such activities. It also questions the assumption, made in most legislation affecting e-commerce users, that consent is sufficient to waive an individual's privacy interests.
This paper proceeds to examine and discuss: firstly, the issue of privacy in the e-commerce context of information privacy; secondly, the meaning and role of consent in relation to the collection, use and disclosure of personal information in cyberspace; thirdly, whether individuals have freedom of choice; and fourthly, the threats to privacy interests that arise for individuals when there is covert collection, use and disclosure of personal information. The paper then briefly examines what information privacy protection exists under the current international, regional and national frameworks in Australia. In the process it explores some possible solutions, in the form of privacy protection mechanisms, to the problem of online privacy for individuals.
2 PRIVACY
The important elements of the right to privacy identified by theorists [7], [8] and [9] include the right to be left alone [7] and the right to be anonymous. A threat to privacy is a threat to the integrity of a person [7], and it is the right of each individual to protect his or her integrity and reputation by exercising control over information about them which reflects and affects their personality [9], [10]. The right of an individual to control such information enables that individual to selectively restrict others' access to his or her physical and mental state, communication and information, and to control how the person wishes to be presented, to whom, and in which context [9], [10].
Control over information is connected to how individuals want to be seen, by whom they want to be seen, and in what context [8], [9]. The disclosure of facts that are considered personal and intimate exposes and reveals an individual's vulnerability and the psychological processes that are necessarily part of what it is to be human [9], [10]. This capacity to control disclosure is seen as an element of personal integrity, reputation, human dignity, expectations, autonomy and self-determination, happiness and freedom [9], [10], [11]. The individual's ability to control the disclosure of facts about themselves is valued as a means of protecting personality rather than property interests [9]. Control also includes the ability to consent, and to make decisions and choices about whether to allow or disallow others into the individual's private space and information about them.
2.1 Consent
Consent is an expression of individual autonomy and of the right of individuals to make decisions about how they will live their lives. According to normal legal principles, consent cannot be effective if the person does not have sufficient knowledge or understanding to consent. In the context of information privacy, consent is the mechanism by which the individual e-commerce user exercises control over the collection, use or disclosure of personal information. Consent to the disclosure of private information provides the basis for an e-commerce user's agreement to the collection, use, access and transfer of personal information.
Most often, e-commerce users may have expressly agreed to the collection, disclosure and use of information beyond what is required for the immediate transaction [12], [13]. Express consent may be given in a variety of ways, such as when filling in a form online or by ticking a box provided on a website. Consent might also be implied from the previous conduct of the parties, or through an existing business or other relationship where it can be assumed that an individual has a reasonable expectation of receiving information, or where the individual has a reasonable expectation that their personal information may or will be collected [14], [15].
Before e-commerce users can make a considered decision whether to consent, they must have some understanding of the implications of what is being consented to, presented in sufficient detail and in language suitable for e-commerce users to give genuine consent [15]. An e-commerce user's ability to exercise autonomy is further compromised by the use of bundled or blanket consent by data collectors and e-business operators [13]. Bundled consent refers to consent to a wide range of uses and disclosures without the individual being given the opportunity to choose which uses or disclosures they agree to and which they do not. Bundled consent frequently includes terms and conditions allowing changes to privacy policies without notice. Data collectors also use bundled privacy clauses to collect personal information for secondary use, such as data mining [13]. The written statements of bundled consent may be changed without notice, or elements outside the privacy policy or bundled consent may be added to customer agreements to allow data mining in the future [13], [15], [16]. The use of bundled consent therefore cannot be meaningful, because the person who consents to such terms and conditions does not know what he or she is consenting to. One reason is that privacy clauses containing bundled consent are usually lengthy, often in very small font, and may not be easily accessible [14], [18].
This paper suggests that the use of bundled consent should be prohibited, or closely monitored by regulators, so as not to infringe privacy rights or restrict an individual's right to withdraw consent. The issue of consent on the internet raises significant privacy concerns with the emergence of new technological challenges. There is the added problem of young persons and others who may lack the legal capacity to consent. Tied to consent is the exercise of choice by the individual.
2.2 Choice
A secondary sense in which autonomy is used is that it requires freedom of choice [12], [13]. Control over personal information enables an autonomous individual to make choices, and to select those persons who will have access to their body, home, decisions, communication and information, and those who will not. Choice requires the individual to be a rational consumer, making informed and considered decisions and having options in relation to their personal information. Fair information practices require that, when there is any change to an organisation's privacy policy, the website user should be alerted to the change with information that includes the date of issue and a list of changes made to the prior version, and that reasonable notice must be given whenever personal information is to be shared with others [19], [20].
In e-commerce, individuals make choices about the use and disclosure, or surrender, of their personal information for secondary purposes. The options available to individuals in cyberspace regarding the collection, use and sharing of their personal information are exercised through the opt-in and opt-out regimes. There are different views on the efficacy of the opt-in versus the opt-out regime: on one view the opt-out regime amounts to consent by trickery, while on the other view there is no true choice [13].
Available evidence suggests that only very few e-commerce users exercise autonomy in this sense; users seldom read privacy clauses on websites or change their behaviour as a consequence [17], [18]. The e-commerce user's ability to exercise autonomy as deliberative choice is constrained in a number of ways. Firstly, an e-commerce user's choice whether to access a website may be constrained by a requirement to agree to terms and conditions up front, and the user may find that the alternatives are equally constrained. If other providers have similar policies which do not allow the user to refuse the terms and conditions, the e-commerce user will lack autonomy in this secondary sense. Often internet users have no alternative but are obliged to give their consent in order to access services and goods advertised on the Internet. If an individual does not actively select to opt out, then he or she is taken to agree by default. Alternatively, the box may be ticked as the default state to indicate agreement, with the consumer required to untick the box if they do not agree. It is doubtful whether e-commerce users express genuine consent to the use of their personal information when they tick a box stating that they have read these standard-form privacy policies and accept the terms therein. The e-commerce user is unlikely to fully appreciate the effect and importance for their privacy of ticking a box agreeing to the terms and conditions of access to the website or the transaction (see the sketch at the end of this section).
Secondly, there are significant barriers to the effective exercise of autonomy when e-commerce users have difficulty locating the provider's privacy policy. Information may not be easily accessible, or may be difficult to find, written in legal language which is not easily comprehended, or lengthy and vague as to exactly what is being agreed to or what rights are actually being surrendered [18].
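As a purely hypothetical illustration (the field and function names are invented, not taken from any source cited here), the difference between the opt-out default described above and a genuine opt-in comes down to which way user inaction is resolved by the code that records consent:

    # Hypothetical sketch: consent under opt-out versus opt-in defaults.
    # 'action' is what the user actually did: "tick", "untick", or None
    # (no action at all, which is the common case in practice).

    def consent(action, default_ticked):
        if action == "tick":
            return True
        if action == "untick":
            return False
        return default_ticked  # no action: the rendered default decides

    # Opt-out regime (box pre-ticked): inaction is read as agreement.
    print(consent(None, default_ticked=True))   # True
    # Opt-in regime (box unticked): agreement requires a positive act.
    print(consent(None, default_ticked=False))  # False

On this view, the opt-out regime manufactures consent from silence, which is one reason the paper argues below that an opt-in regime is the better option.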
3. PRIVACY VIOLATIONS
It appears that the e-commerce user's capacity to exercise autonomy and to protect their privacy is further compromised by the automatic processing of personal information, the use of privacy-invasive technologies, and data security risks.
3.1 Automatic Processing
Automatic processing of personal information allows the aggregation of personal information, the identification of individuals, and the secondary use of personal information with or without consent. The automatic processing and the secondary use and disclosure of personal information collected through data surveillance without the consent of individuals affect individual privacy interests [21], [22], [23]. The privacy issue is that profiles expose Internet and e-commerce users to the risk of the information being linked to other information, such as names, addresses and e-mail addresses, making them personally identifiable. The harvesting of personal information through monitoring and sensing using privacy-invasive technologies is pervasive and poses special risks to the privacy of individuals [23].
Database companies are able to correlate and manipulate the data collected through the processes of data matching, sentiment analysis, customer profiling, and the creation of digital dossiers [24], [25]. Cookies are the most common profiling mechanism used on the Internet [24], [25]. Besides enabling the profiling of e-commerce users, increasingly interconnected, affordable and fast online systems also enable the building of electronic dossiers. Critical decisions about an individual's status, reputation and credibility, whether to determine eligibility and suitability for jobs, creditworthiness, or criminal record, can readily be made by tapping into digital dossiers [22], [25]. The processed data, in the form of profiles and digital dossiers, can be disseminated or made accessible easily; it can be transferred quickly from one information system or database to another, and across borders, with the click of a mouse, without the knowledge or consent of the data subject [22], [25]. Personal information in digital dossiers is at risk of being manipulated or used for unintended purposes when it is shared with third parties [26], [28], [29]. A simple sketch of such aggregation follows.
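A minimal, hypothetical Python sketch of this aggregation (all names and records are invented) shows how joining two datasets on a shared e-mail address turns a pseudonymous browsing profile into an identifiable dossier:

    # Hypothetical sketch of data matching: joining a browsing profile
    # with a customer list on a shared key (the e-mail address) turns a
    # pseudonymous profile into an identifiable dossier.
    browsing_profiles = [
        {"email": "j.doe@example.com", "interests": ["loans", "casinos"]},
    ]
    customer_records = [
        {"email": "j.doe@example.com", "name": "J. Doe", "address": "1 Main St"},
    ]

    dossiers = {}
    for rec in customer_records:
        dossiers[rec["email"]] = dict(rec)        # start from identity data
    for prof in browsing_profiles:
        if prof["email"] in dossiers:
            dossiers[prof["email"]].update(prof)  # attach behavioural data

    print(dossiers["j.doe@example.com"])
    # {'email': ..., 'name': 'J. Doe', 'address': '1 Main St',
    #  'interests': ['loans', 'casinos']}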
3.2 Privacy Invasive Technologies
The online activities of Internet and e-commerce users are constantly monitored using electronic surveillance devices for commercial purposes [25], [26], [27]. Data surveillance is the most common form used to collect information about e-commerce users without their consent. Information technologies such as cookies, web bugs and HTTP are key features that allow data collection and enable web pages to be transported between users and a web server [1]. Most privacy-invasive applications depend upon these technologies [1], [20], [25], [26]. New surveillance technologies, such as the RFID (radio-frequency identification) chip and behaviour-tracking ad systems, are also being used to bring Internet users more relevant advertising and to benefit e-commerce businesses. Cookies remain invisible and outside the control of the user [30]. The control tools available to Internet users do not allow for the complete erasure of profiles and data collected, even if the user erases such information from their computer [23], [31]. The sketch below illustrates the basic cookie mechanism.
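For readers unfamiliar with the mechanism, the following minimal Python sketch (illustrative only; the cookie name and identifier scheme are assumptions, not a description of any particular tracker) shows how a server-set cookie lets a site recognise the same browser across visits, which is the basis of the profiling described above:

    # Hypothetical sketch of cookie-based tracking: the server assigns a
    # random identifier on the first visit and reads it back on later ones.
    import uuid
    from http.cookies import SimpleCookie

    def handle_request(cookie_header):
        """Return (visitor_id, Set-Cookie header) for an incoming request."""
        jar = SimpleCookie(cookie_header)
        if "visitor_id" in jar:
            visitor = jar["visitor_id"].value  # returning browser: recognised
        else:
            visitor = uuid.uuid4().hex         # first visit: assign an ID
        out = SimpleCookie()
        out["visitor_id"] = visitor
        out["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
        return visitor, out.output(header="Set-Cookie:")

    vid, header = handle_request("")                   # first visit
    same_vid, _ = handle_request("visitor_id=" + vid)  # later visit
    assert vid == same_vid  # the browser is re-identified

Nothing in this exchange requires the user's participation beyond the browser's default behaviour, which is why cookies can remain invisible to the user.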
There has recently been a severe backlash from users of social networking websites, after it was discovered that two prominent companies, Google and Facebook, had been monitoring and collecting personal information for secondary use without users' knowledge or explicit consent. Besides Google and Facebook, other data exchange companies, such as BlueKai (a California-based company) and Phorm (a British company), are involved in tracking online users without notification of data collection. Internet and e-commerce users generally do not know the fate of the personal information they generate online [32]. Online privacy for consumers is also seriously compromised by data security breaches, which create privacy risks for e-commerce users [33], [34].
3.3 Data Security Breach
Data security involves both managerial and technical measures to protect data against loss and against unauthorized access, destruction, use, or disclosure. Besides the infringement of privacy as a human right, personal data is at risk of unauthorised access, of falling into the wrong hands, of being misused, or of becoming a commodity for illegal sale [31]. Insecure systems can give rise to identity fraud if a party acquires a user's identifiers, and in particular identity authenticators [31]. Cyber criminals are ripping data out of the Internet and databases [33], [35]. In Australia, the Australian Payments Clearing Association reported that the value of online credit card fraud in Australia exceeded $102 million during the period 30 June 2009 to 31 July 2010 [33]. Data security breaches expose individuals to identity theft, loss of reputation and confidentiality, and the potential loss of valuable intellectual property rights [33]. Identity theft is becoming increasingly common and is, for example, said to be the fastest-growing crime [35].
4. PRIVACY PROTECTION IN CYBERSPACE
There is a range of methods that can be adopted to enhance privacy: a combination of approaches and mechanisms that includes legislation, technology-based privacy-enhancing mechanisms, transparency in information collection, education, and business best-practice rules. These mechanisms for privacy protection are examined next.
4.1 Regulation
Almost all fair information practices, such as those under the OECD's Collection Limitation Principle [36] and the European Union's Directive 95/46/EC, provide for privacy principles [19], [38], [39], [40]. Privacy principles provide for compliance with displaying privacy policy statements; notice of personal information collection, use and/or disclosure; breach notification; and access and correction, all of which are viewed as prerequisites for fair information collection practices [36], [19]. Similarly, in the Asia-Pacific region, the Asia-Pacific Economic Co-operation (APEC) Privacy Framework provides privacy principles for personal information protection [41]. APEC's Data Privacy Pathfinder contains general commitments leading to the development of a Cross-Border Privacy Rules (CBPR) system [41]. The EU Directives in particular have been influential, but compliance is not mandatory for non-EU Member States. Although non-EU countries have adopted similar fair information practices into their national legal frameworks [36], [19], there are various approaches and varying degrees of protection for personal information under national frameworks.
In contrast to EU laws, the Australian privacy framework is considered inadequate. The primary federal statute for privacy protection, the Privacy Act 1988 (Cth) ('Privacy Act'), and its National Privacy Principles (NPPs) [37] have consumer choice or consent as an essential element of their foundation. But there is also no right to privacy under the common law, although a statutory tort of privacy is being mooted [20]. Privacy protection in Australia is a patchwork of federal and state statutory regulation, industry codes of practice, and incidental protection at common law arising out of torts, property, contract and criminal law.
Although it is not possible to ensure that a consumer will act rationally, with informed consideration, before deciding to waive their privacy rights, the legislature can at least legislate to remove constraints preventing informed and rational decision-making. Neither the Privacy Act nor the NPPs prohibit bundled consent. It also appears that the Privacy Act gives priority to commercial interests in relation to direct marketing and secondary usage, as the existing legislative structure provides that consent may be express or implied [37].
At the international level, law reform initiatives are currently focused on enhancing privacy protection. For example, the e-Privacy Directive now requires EU Member States to ensure that the storing of information, or the gaining of access to information already stored, is only allowed on condition that the data subject concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing [39]. These initiatives have also influenced the Australian Law Reform Commission (ALRC). The ALRC has, amongst other things, recommended developing a single set of privacy principles; redrafting and updating the structure of the Privacy Act; addressing the impact of new technologies on privacy; and introducing data security breach notification [20]. It is proposed that a single set of privacy rules, with compliance and enforcement, will strengthen privacy protection for Internet users.
4.2 Other Mechanisms for Privacy Protection
In relation to the problem of exercising consent and choice, it is suggested that any choice regime should provide a simple and easily accessible way for consumers to exercise that choice. This paper suggests that an opt-in regime is a better option than an opt-out regime, since an opt-in regime requires positive action by the consumer to allow the organisation to collect and use their personal information. It also suggests that simple and effective mechanisms must be in place for e-commerce users and other Internet users to give and withdraw consent.
Transparency in data collection is a crucial part of data protection. But the average data subject is not always aware of how to use browser settings to reject cookies, and is often unaware that their online activities are being tracked. Notification encourages transparency about data collection and the subsequent handling of personal information. Appropriate notification prior to data collection, together with information provided to e-commerce users, such as whether the information collected will be used by or shared with a third party or parties, will restore control over personal information and give individuals an opportunity to consent, or to withhold consent, to the use of their personal information for primary and/or secondary purposes. Such an approach puts a premium on individual choice and privacy, but probably at some cost in efficiency for the e-commerce provider. Prior notice of data collection allows an autonomous individual the option to decide and make choices about whether to share their personal information with others. Notification with standard privacy clauses attached allows individuals to access their personal information and to correct incorrect information held about them; it also allows individuals to withhold consent to the collection of personal information for unlawful purposes [19], [20].
In addition, notification of data security breaches gains consumer trust and reduces the risk to personal information. Mandatory notification of data security breaches alerts customers and ensures that customers and users are able to take timely action to limit risks to their personal information, for example by changing their PIN numbers and passwords [20], [39], [40], [42]. Technological tools for establishing privacy preferences, alongside continuous privacy awareness and education, can also be effective in protecting personal information.
5 CONCLUSION
This paper has examined the significance of privacy for individuals as a fundamental human right. Violations of human rights arise from the unlawful collection and storage of personal data, the problems associated with inaccurate personal data, and the abuse or unauthorised disclosure of such data. The difficulty of finding and understanding information relating to privacy policies, blanket or bundled consents, the lack of choice over whether to accept conditions, and the preference given to commercial interests all reduce the individual's autonomy to make informed decisions and to control and consent to the use of their personal information. Autonomy is only truly observed if the e-consumer is able to provide explicit consent and has both choice and the opportunity to make rational and informed decisions. Consent to the collection, use, and disclosure of personal information should be regarded as instrumental to individual autonomy.
The proposed reforms to enhance information protection in cyberspace, both in Europe and in the Asia-Pacific region, are aimed at strengthening and giving Internet users more control over their personal information; making it easier for individuals to access, and improving the quality of, the information they receive from data collectors about what happens to their personal information and with whom it is shared; and ensuring that personal information is protected no matter where it is sent or stored. This paper proposes that a more appropriate regulatory response, removing constraints which impede considered decisions about privacy by e-commerce users, needs to be in place to protect personal information in cyberspace. In relation to e-commerce users, for example, the legislative framework can be satisfied if the user has liberty of action, that is, if the user agrees without duress or coercion. Viewed from the standpoint of individual privacy, legislation should also ensure that constraints on the ability to make rational decisions are removed. But only time will tell whether current reform initiatives and regulation have been effective in protecting the personal information of Internet users in cyberspace.
6 REFERENCES
[1] Office of the Privacy Commissioner:
Submission to the Australian Law Reform
Commission Review of Privacy Discussion Paper
72 (2007).
[2] Schwartz, P.M.,: Privacy and Democracy in
Cyberspace, Vanderbilt Law Review, vol. 52, pp.
1609-1702 (1999).
[3] Privacy Commissioner: Privacy concerns on
the up: Annual Report 2009, Office of the Privacy
Commissioner, New Zealand,( 2009).
[4] Office of the Privacy Commissioner: Privacy
Matters, vol. 1, Issue 4, Australian Government
(2007).
[5] European Commission: Why do we need an EU
data protection reform? (2012)
http://ec.europa.eu/justice/data-
protection/document/review2012/factsheets/1_en.p
df
[6] Special Eurobarometer 359: Attitudes on Data
Protection and Electronic Identity in the European
Union (2012)
http://ec.europa.eu/public_opinion/archives/ebs/ebs
_359_en.pdf
[7] Warren, S., Brandeis, L.: The right to privacy,"
Harvard Law Review vol. 4, pp. 193 220 (1890).
[8] Westin, A.: Privacy and Freedom, pp. 487. New
York, Atheneum Publishers (1967).
[9] Rossler, B.,: The Value of Privacy, pp. 1-17.
Cambridge, Polity Press, (2005).
[10] Schoeman, F., (ed.): Philosophical Dimensions
of Privacy: An Anthology, pp. 346-402 Cambridge,
Cambridge University Press (1984).
[11] Penny, J. W.,: Privacy and the New
Virtualism, Yale Journal of Law & Technology,
vol. 10, pp. 194-250 (2008).
[12] Regan, P.,: The role of consent in information
privacy protection, Center for Democratic and
Technology (2009).
[13] Cavoukian, C.,: Data Mining: Staking a Claim
on Your Privacy, Office of the Information and
Privacy Commissioner, Ontario (1998).
[14] Clarke, R.,: e-Contract: A Critical Element of
Trust in e-Business. In: Proc. 15th Bled Electronic
Commerce Conference, Bled, Slovenia (2002).
[15] Clarke, R.,: The Effectiveness of Privacy
Policy Statements, Xamax Consultancy Pty Ltd.
(2008).
[16] Marotta-Wurgler, F.,: Does Disclosure
Matter?, New York University Law and Economics
Research Paper, No. 10, pp. 54 (2010).
[17] Senate Select Committee on Information
Technologies: Cookie Monsters?: Privacy in the
information society, Commonwealth Parliament of
Australia (2000).
[18] Out-Law.com: Average privacy policy takes
10 minutes to read, research finds, Out-Law.com
(2008) http://www.out-law.com/page-9490.
[19] European Commission: Directive 95/46/EC of
the European Parliament and of the Council of 24
October 1995 on the protection of individuals with
regard to the processing of personal data and on the
free movement of such data (Directive 95/46/EC)
(1995).
[20] Australian Law Reform Commission (ALRC):
For Your Information: Australian Privacy Law and
Practice (ALRC Report 108) (2008).
[21] Australian Communications and Media
Authority (ACMA): Growth in sensing and
monitoring information driving change in service,
ACMA Media Release 89/2011 (2011).
[22] Solove, D. J.,: A Taxonomy of Privacy,
University of Pennsylvania Law Review vol. 154,
No. 3, pp. 477-560 (2006).
[23] Electronic Privacy Information Centre:
Cookies (2011)
http://www.epic.org/privacy/internet/cookies/
[24] Cavoukian, C.,: Privacy and the Open
Networked Enterprise, Information and Privacy
Commissioner, Ontario, Canada (2006).
[25] Clarke, R.,: Information Technology and
Dataveillance, Communications of the ACM, vol. 31,
Issue 5, pp. 498-512 (1988).
[26] Privacy International: PHR2006 Privacy
topics: Electronic commerce (2007)
http://www.privacyinternational.org/article.shtml
[27] Solove, D. J.,: The Digital Person: Technology
and Privacy in the Information Age, New York:
New York University Press (2004).
[28] Solove, D. J.,: Digital Dossiers and the
Dissipation of Fourth Amendment Privacy,
Southern California Law Review, vol. 75, pp.
1083-1167 (2002).
[29] Electronic Privacy Information Centre
(EPIC): Federal Trade Commission Announces
Settlement in EPIC Facebook Privacy Complaint -
Social Networking Privacy (2011)
http://epic.org/privacy/socialnet/
[30] Clarke, R., Maurushat, A.,: The Feasibility of
Consumer Device Security, University of New South
Wales Law Research Series, No. 5 (2009).
[31] Solove, D.J.,: The New Vulnerability: Data
Security and Personal Information. In: Securing
Privacy in the Internet Age, Chander, A., Gelman, L.,
Radin, M. J. (eds.), Stanford University Press (2005).
[32] Australian Broadcasting Corporation: Fear in
the Fast Lane. Four Corners Program - ABC.net.au
(2009)
http://www.abc.net.au/4corners/content/2009/s2658
405.htm.
[33] Australian Payments Clearing Association:
Payments Fraud in Australia - Media Release
(2010) http://www.apca.com.au.
[34] Australian Institute of Criminology: Consumer
Scams-2010 and 2011 (2011)
http://www.aic.gov.au/en/publications/current%20s
erices/rip21-40/rip25.aspx.
[35] Australian Crime Commission: Crime Profile
Series Identity Crime - Fact Sheet (2011)
http://www.crimecommission.gov.au/sites/default/f
iles/files/identity-crime.pdf
[36] Organisation for Economic Co-operation and
Development (OECD): OECD Guidelines on the
Protection of Privacy and Transborder Flows of
Personal Data (OECD Guidelines) (1980)
http://www.oecd.org/documentprint/0,3455,en_264
9_34255_1815186_1_1_1,00.html
[37] Privacy Act 1988 (Cth.) s 6, Sch 3 National
Privacy Principles (NPPs).
[38] European Commission: ePrivacy Directive
close to enactment: improvements on security
breach, cookies and enforcement, and more to
come, Ref.: EDPS/09/13.European Union (2009).
[39] European Commission: EU Directive on
Privacy and electronic Communications, Article 29
WP Issues Opinion on Cookies in the New
ePrivacy Directive (2010).
[40] European Commission: ePrivacy Directive
Regulations. European Union (2011)
http://ec.europa.eu/information_society/policy/eco
mm/doc/library/public_consult/data_breach/ePrivac
y_databreach_consultation.pdf
[41] Asia-Pacific Economic Cooperation (APEC):
APEC Data Privacy Pathfinder Initiative (2012)
http://www.ag.gov.au/Privacy/Pages/APEC-Data-
Privacy-Pathfinder-Initiative.aspx
[42] Greenleaf, G.,: Five years of the APEC
Privacy Framework: Failure or promise? (2008)
http://austlii.edu.au/~graham/publications/2008/Gre
enleaf_ASLI0408.pdf
The Problem to Consent to the Collection, Use, and
Disclosure of Personal Information in Cyberspace

Thilla Rajaretnam
Associate Lecturer, School of Law,
University of Western Sydney, NSW Australia
E-mail: [email protected]

Abstract - Consumer concerns over the safety of
their personal information and the violation of their
privacy rights are described as being the single
overwhelming barrier to the rapid growth of
e-commerce. This paper explores the problems for
e-commerce users when the collection, use, and
disclosure of personal information are based on
implied consent in e-commerce transactions. It
questions the assumption that consent is sufficient
to waive privacy interests in relation to e-commerce
transactions. It will argue that consent should not
necessarily be sufficient to waive privacy interests,
and that the collection, use and/or disclosure of
personal information should be subject to regulation.

Keywords: Privacy, consent, e-commerce,
risks to personal information, regulation
I. INTRODUCTION
Information technology has affected privacy
dramatically [1] [2]. The internet has made it
possible for any person to easily collect personal
information about Internet users and others
including e-commerce users with or without their
consent. Consumer concerns over the safety of their
personal information and the violation of their
privacy rights are described as being the single
overwhelming barrier to rapid growth of e-
commerce. Recent research findings also show that
the level of public concern for privacy and personal
information has increased since 2006 [1] [3]. In
2007, it was found that 50 percent of Australians
are more concerned about providing information
about themselves online than they were two years ago
[4].
This paper explores the constraints on the
exercise of individual autonomy. Viewed from the
perspective of autonomy, it considers what
autonomy means for these purposes and whether
current practices (such as the use of standard-form
privacy policy statements, bundled consent) protect
individual autonomy. It argues that to resolve the
problem with allowing the use and/or disclosure of
personal information based on consent, the e-
commerce user must first have sufficient
knowledge of the purpose for information
collection, its use and disclosure of information
collected; secondly, consent mechanisms should
allow informed and rational decision making;
thirdly, there should be the opportunity for
individual choice allowing withdrawal of consent
or the opting out of information collection. It
questions the assumption in most legislation which
affects e-commerce users that consent is sufficient
to waive an individual's privacy interests.
This paper will firstly discuss the issue of
privacy in the e-commerce context of information
privacy; secondly, examine the meaning and role of
consent in relation to the collection, use and
disclosure of personal information in cyberspace;
thirdly, consider whether individuals have freedom
of choice; fourthly, examine the threats to privacy
interests and the problems that arise for individuals
when there is collection, use and disclosure without
consent; and finally, briefly examine what
information privacy protection there is under the
current international, regional and national
framework in Australia. In the process it
will explore some possible solutions to the problem
of online privacy for individuals.
II. PRIVACY
A threat to privacy will be a threat to the
integrity of a person [5] and it is the right of each
individual to protect his or her integrity and
reputation by exercising control over information
about them which reflects and affects their
personality [7] [8]. The important elements of the
right to privacy are identified by theorists [5] [6]
[7] as being the right to be left alone [5] and the
right to be anonymous. The right of an individual
to control such information enables that individual
to selectively restrict others' access to his or her
physical and mental
state, communication and information, and control
how the person wishes to be presented, to whom
and in which context [7] [8].
Control over information is connected to how
individuals want to be seen, to whom they want to
be seen, and in what context [6] [7]. The disclosure
of facts that are considered personal and intimate
will expose and reveal an individual's vulnerability
and the psychological processes that are necessarily
part of what it is to be human [7] [8]. This capacity
to control disclosure is seen as an element of personal
integrity, reputation, human dignity, expectations,
autonomy and self-determination, happiness and
freedom [7] [8] [9]. The individual's ability to
control disclosure of facts about themselves is
valued as a means of protecting personality rather
than property interests [7]. Control includes the
ability to consent, and to make decisions and choices
whether to allow or disallow others into the
individual's private space and information about
them.
III. CONSENT
Consent is an expression of individual
autonomy, and the right for individuals to make
decisions about how they will live their lives. In the
context of information privacy, consent is the
mechanism by which the individual e-commerce
user exercises control over the collection, use or
disclosure of personal information. Consent to the
disclosure of private information provides the basis
for an e-commerce user's agreement to the
collection, use, access and transfer of personal
information.
Most often, e-commerce users may expressly
agree to the collection, disclosure and use of
information beyond what is required for the
immediate transaction [10] [11]. Express consent
may be given in a variety of ways by e-commerce
users, such as when filling in a form online or by
ticking a tick box provided on a website.
Consent might also be implied from the previous
conduct of the parties or through an existing
business or other relationship where it can be
assumed that an individual has a reasonable
expectation of receiving information, or where the
individual has a reasonable expectation that their
personal information may or will be collected [12]
[13].
According to normal legal principles, consent
cannot be effective if the person does not have
sufficient knowledge or understanding to consent.
Before e-commerce users can make a considered
decision whether to consent, they must have some
understanding of the implications of what is being
consented to, and sufficient detail in language
suitable for e-commerce users to give genuine
consent [13]. There is the added problem relating to
young persons and others who may lack legal
capacity to consent. Tied to consent is the exercise
of choice by the individual.
IV. CHOICE
A secondary sense in which autonomy is used
is that it requires freedom of choice [10] [11].
Control over personal information enables an
autonomous individual to make choices, and to
select those persons who will have access to their
body, home, decisions, communication, and
information and those who will not. In e-
commerce, individuals make choices about the use
and disclosure or surrender of their personal
information for secondary purposes. The e-
commerce users ability to exercise autonomy as
deliberative choice is constrained in a number of
ways. Firstly, choice requires the individual to be a
rational consumer making informed and considered
decisions and having options in relation to their
personal information. The options that are available
to individuals in cyberspace to collection, use and
the sharing their personal information is the opt-in
and opt-out regime. There are also different views
on the efficacy of opt-in versus the opt-out regime.
On one view this could be considered consent by
trickery while the other view is that there is no true
choice [11]. For example, if individuals do not
actively select to opt out then they are taken to
agree by default. The box may also be ticked as the
default state to indicate agreement with the
consumer required to untick the box if they do not
agree. The e-commerce user is unlikely to fully
appreciate the effect and importance for their
privacy of ticking a box agreeing to the terms and
conditions of access to the website or the
transaction. Secondly, there are significant barriers
to the effective exercise of autonomy when e-
commerce users have difficulty in locating the
provider's privacy policy. Available evidence
suggests that only a very few e-commerce users
exercise autonomy in this sense; users seldom read
privacy clauses on websites or change their
behaviour as a consequence [15] [16]. Information
may not be easily accessible, or difficult to find, or
in legal language which is not easily
comprehended, or may be lengthy and vague as to
exactly what is being agreed or what rights they are
actually surrendering [16] [11].
Thirdly, an e-commerce user's choices whether
to access a website may be constrained if the user
is required to agree to terms and conditions up
front, or the user may
find that alternatives are equally constrained.
Similarly if other providers have similar policies
which do not allow the user to refuse the terms and
conditions, the e-commerce user will lack
autonomy in this secondary sense.
Fourthly, an e-commerce user's ability to
exercise autonomy is further compromised by the
use of bundled or blanket consent used by data
collectors and e-business operators [11]. Bundled
consent refers to the consent to a wide range of
uses and disclosures without giving an individual
the opportunity to make a choice about which use
or disclosure they agree to and which they do not.
Bundled consent frequently includes terms and
conditions allowing changes to privacy policies
without notice. The use of bundled consent cannot
be meaningful because the person who consents to
such terms and conditions does not know what he
or she is consenting to. One reason is that
privacy clauses containing bundled consent are
usually lengthy, often in very small font, and not
easily accessible [12] [16]. Data collectors are also
using bundled privacy clauses to collect personal
information for secondary use in data mining [11].
The written statements of bundled consent may be
changed without notice, some elements may sit
outside the privacy policy, or bundled consent
could be added to customer agreements to allow
data mining in the future [11] [13] [14].
Finally, fair information practices require that
when there are any changes to an organisation's
privacy policy, the website user should be alerted to
this change with information [17] which includes
the date of issue and a list of changes made by the
organisation to the prior version [18]; and that
reasonable notice must be given whenever
personal information is to be shared with others
[17] [18]. So it is doubtful whether e-commerce users
express genuine consent to the use of their personal
information when they tick the box stating that they
have read these standard form privacy policies and
accept the terms therein. The issue of consent on
the internet raises significant privacy concerns with
the emergence of new technological challenges.
V. ONLINE PRIVACY VIOLATIONS
The e-commerce user's capacity to exercise
autonomy and protect their privacy is further
compromised by the use of privacy invasive
technologies, the automatic processing of their
personal information, and data security risks that
threaten privacy. The online activities of Internet
and e-commerce users are constantly monitored
using electronic surveillance devices for
commercial interests [19] [20] [21].
A. Privacy Invasive Technologies
The harvesting of personal information
through monitoring and sensing using privacy
invasive technologies is pervasive and poses special
risks to the privacy of individuals [17] [18]. Data
surveillance is the most common form used to collect
information about e-commerce users without their
consent [23] [24]. Information technologies such as
cookies, web bugs [18] [23] [26] and HTTP are
key features that allow data collection [1] [23] and
enable web pages to be transported between users
and a web server. Most of the privacy invasive
applications depend upon these technologies. New
surveillance technologies such as the RFID
(Radio-Frequency Identification) chip and
behaviour-tracking ad systems are also being used
to bring Internet users more relevant advertising
and to benefit e-commerce businesses. There has
recently been a severe backlash from users of social
networking websites when it was discovered that
two prominent websites, Google and Facebook,
had been monitoring and collecting personal
information for secondary use without
users' knowledge or explicit consent [31]. Other
data exchange companies such as BlueKai, a
California based company, and Phorm (a British
company) are involved in tracking online users
without notification of data collection. Internet and
e-commerce users generally do not know the fate of
their personal information that is generated online
[27] [28].
B. Automatic processing
Generally, e-commerce users may have consented
to the collection of their personal information for
primary purposes, but they do not know if such
information will be used for secondary purposes
or shared with third parties [18] [30]. The
automatic processing and secondary
use and disclosure of personal information
collected without the consent of individuals
through data surveillance also affect individual
privacy interests [23] [27]. Automatic processing of
personal information allows the aggregation of
personal information, identification of individuals,
and secondary use of personal information without
consent. Cookies are the most common profiling
mechanism used on the Internet [18] [27] [29].
Database companies are able to correlate and
manipulate the data collected through the process
of data matching, sentiment analysis, customer
profiling, and the creation of digital dossiers
[23][26]. Consumer profiles are a major currency in
e-commerce [27]. Many database companies are
known to sell information about users or provide
lists of their customers' e-mail addresses to other
direct marketing or telemarketing companies [18]
[27]. The processed data in the form of profiles and
digital dossiers can be disseminated or can be made
accessible easily; it can be transferred quickly from
one information system or database to another and
across borders with the click of the mouse without
the knowledge or consent of the data subject [27].
Some companies readily disseminate the personal
information and digital dossiers that have been
collected to a host of other entities and sometimes
to anyone willing to pay a small fee [27] [30]. The
increasing interconnectedness, affordable, fast, on-
line systems also enable the building of electronic
dossiers [30] [31]. Critical decisions about an
individual's status, reputation and credibility, for
example to determine eligibility and suitability for
jobs, creditworthiness, or criminal record, can readily
be made by tapping into digital dossiers [30]. The
privacy issue is that profiles expose Internet and e-
commerce users to risks of the information being
linked to other information such as names,
addresses and e-mail addresses making them
personally identifiable. Personal information in the
digital dossiers is also at risk of being manipulated
or used for unintended purposes when it is shared
with third parties [27] [32].
C. Data security breach
Online privacy for consumers is also seriously
compromised by data security breaches, which
create privacy risks for e-commerce users. Insecure
systems can give rise to identity fraud if a party
acquires a user's identifiers and, in particular,
identity authenticators [18] [29]. Cyber criminals
are ripping data and information out of the Internet
and databases [34] [35]. Personal data is at risk of
unauthorised access, falling into the wrong hands,
being misused or becoming a commodity for illegal
sale [18], exposing individuals to identity theft, loss
of reputation and confidentiality, and potential loss
of valuable intellectual property rights [31]. In
Australia, the Australian Payments Clearing
Association reports that the value of online credit
card fraud in Australia exceeded $102 million
during the period 30 June 2009 to 31 July 2010 [33].
Identity theft is becoming increasingly common
and is, for example, the fastest growing crime [35].
Data security involves both managerial and
technical measures to protect against loss and the
unauthorized access, destruction, use, or disclosure
of the data [18].
Violations of human rights arise from the
unlawful collection and storage of personal data,
the problems associated with inaccurate personal
data, or the abuse, or unauthorised disclosure of
such data [20]. The factors discussed above are a
major determinant of users disclosing their personal
information to e-businesses [15] [16]. It appears
that in cyberspace, data collectors such as Internet
service providers (ISPs) and the suppliers of
content on the web are in the main unregulated in
any way under the current privacy provisions.
VI. REGULATION
Currently, almost all fair information practice
regimes, such as the OECD's Collection
Limitation Principle [36], Directive 95/46/EC [17]
and the Asia-Pacific Economic Co-operation
(APEC) Privacy Framework, provide for privacy
principles [36]. These privacy principles provide
for compliance with displaying privacy policy
statements; notice of personal information
collection, use and/or disclosure; breach
notification; and access and correction, which are
viewed as prerequisites for fair information
collection practices. In the Asia Pacific region,
APEC's Data
Privacy Pathfinder [38] contains general
commitments leading to the development of a
Cross-Border Privacy Rules (CBPR) system.
In Australia, there is no right to privacy under
the common law although a statutory tort of
privacy is being mooted [18]. Privacy protection in
Australia is a patchwork of federal and state
statutory regulation and industry codes of practice
and incidental protection at common law arising
out of torts, property, contract and criminal law.
The primary federal statute for privacy protection
is the Privacy Act 1988 (Cth) (Privacy Act), whose
National Privacy Principles (NPPs) [36] have
consumer choice or consent as an essential element
of their foundation. However, the existing legislative
structure under the Privacy Act appears to give
priority to commercial interests in relation to direct
marketing and secondary usage. Neither the
Privacy Act nor the NPPs prohibit bundled consent.
There are currently law reform initiatives at the
international, regional and national levels to
enhance privacy protection for individuals. For
example under the e-Privacy Directive, EU Member
States are required to ensure that the storing of
information, or the gaining of access to information
already stored, is only allowed on condition that the
data subject concerned has given his or her consent,
having been provided with clear and comprehensive
information, in accordance with Directive
95/46/EC, inter alia, about the purposes of the
processing [37]. In Australia, the Australian Law
Reform Commission (ALRC) has, in its recent
report, proposed recommendations to enhance
privacy protection [18]. Amongst others, it has
recommended developing a single set of Privacy
Principles; redrafting and updating the structure of
the Privacy Act; addressing the impact of new
technologies on privacy; and data security breach
notification.
VII. SOLUTIONS
This paper suggests that a more appropriate
regulatory response, removing constraints which
impede considered decisions about privacy by
e-commerce users, needs to be in place to protect
personal information in cyberspace. Viewed from
the standpoint of
individual privacy, legislation should ensure that
constraints on the ability to make rational decisions
are removed. In relation to e-commerce users, the
legislative framework can be satisfied if the user
has liberty of action, that is, if the user agrees
without duress or coercion. The difficulty of
finding and understanding information relating to
privacy policies, blanket or bundled consents, the
lack of choice whether to accept conditions, and the
preference given to commercial interests reduce the
individual's autonomy to make informed decisions,
and to control and consent to the use of their
personal information. Although it is not possible to
ensure that a consumer will act rationally with
informed consideration before deciding to waive
their privacy rights, the legislature can, at least,
legislate to remove constraints preventing informed
and rational decision making.
This paper suggests that one of the ways to
resolve the problem of consent and choice would
be to make notification prior to collection, use and
disclosure mandatory. The reason is that
notification of data collection, use and disclosure,
and of how such information will be used and
disclosed, included in a standard form privacy
policy, encourages transparency about data
collection and the subsequent handling of personal
information. Notification allows individuals to
access their personal information and to correct
incorrect information held about them; and it also
allows individuals to withhold consent to the
collection of personal information for unlawful
purposes [18]. Notice allows an autonomous
individual the option to decide and make choices
whether to share their personal information with
others. In addition to notification of collection, use
and disclosure, mandatory notification of data
security breaches alerts customers and ensures that
customers and users are able to take timely action
to limit risks to their personal information, for
example by changing their PINs and passwords
[18]. Notification of data security breaches builds
consumer trust and reduces risk to personal
information.
Besides regulation, there is a range of methods
that can be adopted to enhance privacy, involving
a combination of approaches and mechanisms that
include legislation, technology-based
privacy-enhancing mechanisms, education and
business best practice rules.
VIII. CONCLUSION
This paper has examined the significance of
privacy for individuals. It argued that autonomy is
only truly observed if the e-consumer is able to
provide explicit consent and has both choice and
the opportunity to make rational and informed
decisions. It also argued that consent to the
collection, use, and disclosure of personal
information should be regarded as instrumental to
individual autonomy. This paper examined and
identified some of the online privacy problems that
arise from the use of privacy invasive technologies
by data collectors and its effect on the privacy
interests and risks to individuals. This paper has
also suggested some solutions to the problem of
exercising consent and choice. It has suggested
that any choice regime should provide a simple
and easily accessible way for consumers to
exercise this choice. It is suggested that opt-in
regimes must require positive action by the
consumer to allow the organisation to collect
and use their personal information [10] [11].
It also suggests appropriate notification prior to
data collection, and information being provided to
e-commerce users if the information collected will
be used or shared with a third party or parties. This
measure will restore control over personal
information and give individuals an opportunity to
consent or to withhold consent to the use of their
personal information for primary and/or secondary
purposes. Such an approach puts a premium on
individual choice and privacy, but probably at some
cost of efficiency for the e-commerce provider.
IX. REFERENCES
[1] Office of the Privacy Commissioner. (2007).
Submission to the Australian Law Reform
Commission Review of Privacy Discussion Paper
72'. Australian Government.
[2] P. M. Schwartz, "Privacy and Democracy in
Cyberspace," Vanderbilt Law Review, vol. 52, pp.
1609-1702, 1999.
[3] Privacy Commissioner, "Privacy concerns on
the up: Annual Report 2009," Office of the Privacy
Commissioner, New Zealand, 2009.
[4] Office of the Privacy Commissioner, Privacy
Matters, Australian Government, vol. 1, Issue 4,
2007.
[5] Samuel Warren and Louis Brandeis, "The right
to privacy," Harvard Law Review vol. 4, pp. 193
220, 1890.
[6] A. Westin, Privacy and Freedom. New York:
Atheneum Publishers, pp. 487, 1967.
[7] B. Rossler, The Value of Privacy. Cambridge:
Polity Press, 2005.
[8] F. Schoeman (ed.), Philosophical Dimensions
of Privacy: An Anthology. Cambridge: Cambridge
University Press, pp. 346-402, 1984.
[9] J. W. Penny, Privacy and the New Virtualism,
Yale Journal of Law & Technology, vol. 10, pp.
194-250, 2008.
[10] P. Regan, The role of consent in information
privacy protection, Center for Democratic and
Technology, 2009.
[11] A. Cavoukian, Data Mining: Staking a Claim
on Your Privacy, Office of the Information and
Privacy Commissioner, Ontario, (1998).
[12] R. Clarke, e-Contract: A Critical Element of
Trust in e-Business, presented at the Proc. 15th
Bled Electronic Commerce Conference, Bled,
Slovenia, Jun. 2002.
[13] R. Clarke, The Effectiveness of Privacy
Policy Statements, Xamax Consultancy Pty Ltd.,
2008.
[14] F. Marotta-Wurgler, Does Disclosure
Matter? New York University Law and Economics
Research Paper, No. 10, pp. 54, 2010.
[15] Senate Select Committee on Information
Technologies. (2000). Cookie Monsters? Privacy in
the information society. Commonwealth Parliament
of Australia.
[16] Out-Law.com, Average privacy policy takes
10 minutes to read, research finds, Out-Law.com,
6 October 2008 [Online]. Available:
<http://www.out-law.com/page-9490>.
[17] European Commission. (1995). Directive
95/46/EC of the European Parliament and of the
Council of 24 October 1995 on the protection of
individuals with regard to the processing of
personal data and on the free movement of such
data (hereafter referred to as Directive
95/46/EC). Directive 95/46/EC, Article 18
[18] Australian Law Reform Commission (ALRC).
(2008, May.). For Your Information: Australian
Privacy Law and Practice (ALRC Report 108).
[Online]. Available: http//:www.alrc.gov.au
[19] Australian Communications and Media
Authority (ACMA). (2011, Sept.). Growth in
sensing and monitoring information driving change
in service. ACMA Media Release 89/2011.
[Online]. Available:
http://www.acma.gov.au/WEB/STANDARD/pc_pc
_410135
[20] D. J. Solove, A Taxonomy of Privacy
University of Pennsylvania Law Review vol. 154,
No. 3, pp. 477-560, 2006.
[21] Electronic Privacy Information Centre. (2011).
Cookies. [Online]. Available:
http://www.epic.org/privacy/internet/cookies/
[22] A. Cavoukian, Privacy and the Open
Networked Enterprise, Information and Privacy
Commissioner, Ontario Canada, 2006.
[23] R. Clarke, Information Technology and
Dataveillance, Communications of the ACM, vol. 31,
Issue 5, pp. 498-512, 1988.
[24] European Commission. (2009, Nov.). ePrivacy
Directive close to enactment: improvements on
security breach, cookies and enforcement, and
more to come. Reference: EDPS/09/13.
[25] Privacy International, PHR2006 Privacy
topics: Electronic commerce, Privacy
International, 2007 [Online]. Available:
http://www.privacyinternational.org/article.shtml
[26] D. J. Solove, The Digital Person: Technology
and Privacy in the Information Age. New York:
New York University Press, 2004.
[27] D. J. Solove, Digital Dossiers and the
Dissipation of Fourth Amendment Privacy,
Southern California Law Review, vol. 75, pp. 1083-
1167, 2002.
[28] Electronic Privacy Information Centre
(EPIC). (2011, Nov.) Federal Trade Commission
Announces Settlement in EPIC Facebook Privacy
Complaint - Social Networking Privacy. [Online].
Available: http://epic.org/privacy/socialnet/
[29] R. Clarke and A. Maurushat, The Feasibility
of Consumer Device Security, University of New
South Wales Law Research Series, No. 5, 2009.
[30] D. J. Solove, The New Vulnerability: Data
Security and Personal information in SECURING
PRIVACY IN THE INTERNET AGE, Eds. A.
Chander, L. Gelman, and M. J. Radin, Stanford
University Press, 2005.
[31] Australian Broadcasting Corporation. (2009,
Aug.). Fear in the Fast Lane, Four Corners -
ABC.net.au. Available:
http://www.abc.net.au/4corners/content/2009/s2658
405.htm.
[32] Australian Payments Clearing Association,
(2010, Dec.) Payments Fraud in Australia - Media
Release. [Online]. Available:
<http://www.apca.com.au>.
[33] Australian Institute of Criminology, (2011).
Consumer Scams-2010 and 2011. [Online].
Available:
http://www.aic.gov.au/en/publications/current%20s
erices/rip21-40/rip25.aspx.
[34] Australian Crime Commission, (2011). Crime
Profile Series - Identity Crime - Fact Sheet.
[Online]. Available:
http://www.crimecommission.gov.au/sites/default/f
iles/files/identity-crime.pdf
[35] Organisation for Economic Co-operation and
Development (OECD). OECD Guidelines on the
Protection of Privacy and Transborder Flows of
Personal Data (OECD Guidelines). Available:
http://www.oecd.org/documentprint/0,3455,en_264
9_34255_1815186_1_1_1,00.html
[36] Privacy Act 1988 (Cth) s 6, and Sch 3 National
Privacy Principles (NPPs).
[37] European Union. (2011, May.). ePrivacy
Directive Regulations. European Commission.
Available:
http://ec.europa.eu/information_society/policy/eco
mm/doc/library/public_consult/data_breach/ePrivac
y_databreach_consultation.pdf
[38] Asia-Pacific Economic Co-operation (APEC).
(2012, Mar.). APEC Data Privacy Pathfinder
Initiative. Available:
http://www.ag.gov.au/Privacy/Pages/APEC-Data-
Privacy-Pathfinder-Initiative.aspx
Towards quantitative measures of Information Security: A Cloud
Computing case study

Mouna Jouini 1, Anis Ben Aissa 2, Latifa Ben Arfa Rabai 3, Ali Mili 4

1, 3 Department of computer science, ISG, Tunis, Tunisia
1 [email protected], 3 [email protected]
2 Department of computer science, ENIT, Tunis, Tunisia
2 [email protected]
4 College of Computing Sciences, New Jersey Institute of Technology,
Newark NJ 07102-1982 USA
4 [email protected]

ABSTRACT
Cloud computing is a prospering technology
that most organizations consider as a cost
effective strategy to manage Information
Technology (IT). It delivers computing
services as a public utility rather than a
personal one. However, despite the
significant benefits, these technologies
present many challenges, including less
control and a lack of security. In this paper,
we illustrate the use of cyber security
metrics to define an economic security
model for a cloud computing system. We
also suggest two cyber security measures in
order to better understand system threats
and, thus, propose appropriate
countermeasures to mitigate them.

KEYWORDS
Cloud computing, cyber security metrics,
mean failure cost, security requirements,
security threats, threats classification.

1 INTRODUCTION
With the rapid development of
processing and storage technologies and
the emergence of the Internet, computing
resources have become cheaper, more
powerful and more ubiquitously
available than ever before. As a
consequence, IT service providers are
faced with the challenge of expanding
their structures and infrastructures with
small expenditure and in a short time, in
order to meet rising demands from their
customers. To address these business
challenges, cloud computing architecture
was developed. In this technology, end
users avail themselves of computing
resources and services as a public utility,
rather than a privately run small scale
computing facility. In the same way that
we use electricity as a public utility
(rather than build our own generators),
and that we use water as a public utility
(rather than dig our own well), and that
we use phone service as a public utility
(rather than build and operate our own
cell tower), we want to use computing
services as a public utility. Such a
service would be available to individuals
and organizations, large and small, and
would operate on the same pattern as
other public utilities, namely:
Subscribers sign up for service
from a service provider, on a
contractual basis.
The service provider delivers
services of data processing, data
access and data storage to
subscribers.
The service provider offers
warranties on the quality of
services delivered.
Subscribers are charged
according to the services they
use.
It offers the usual advantages of public
utilities, in terms of efficiency (higher
usage rates of servers), economies of
scale (time sharing of computing
resources), capacity (virtually unlimited
computing power, bounded only by
provider assets rather than by individual
user assets), convenience (no need for
users to be computer-savvy, no need for
tech support), dependability (provided by
highly trained provider staff), service
quality (virtually unlimited data storage
capacity, protected against damage and
loss) [1, 11, 12, 15, 16].
Like traditional computing
environments, cloud computing brings
risks like loss of security and loss of
control [5, 7, 8, 13, 14, 18, 19]. Indeed,
by trusting its critical data to a service
provider, a user (whether it is an
individual or an organization) takes risks
with the availability, confidentiality and
integrity of this data. In addition to that,
the aim of Cloud computing is to deliver
its applications and services to users
through the internet and therefore it is
prone to various kinds of external and
internal security risks such as denial-of-
service (DoS) and distributed denial-of-
service (DDoS) attacks that affect
especially the subscriber data.
In this paper, we propose two security
metrics based on threats classification
that enable service providers and service
subscribers not only to quantify the risks
that they incur as a result of prevailing
security threats and system
vulnerabilities but also to know the
origin of threats. The reason why
security is a much bigger concern in
cloud computing than it is in other
shared utility paradigms is that cloud
computing involves a two-way
relationship between the provider and
the subscriber: whereas the water grid
and the electric grid involve a one-way
transfer from the provider to the
subscriber, cloud computing involves
two-way communication, including
transferring information from
subscribers to providers, which raises
security concerns.
The security metrics we discuss in this
paper quantify in economic terms the
loss resulting from security breaches,
thereby enabling providers and
subscribers to weigh these risks against
rewards to assess the cost effectiveness
of security countermeasures, and then to
identify the source of threats and propose
appropriate security solutions. This
paper is organized as follows: In section
2, we discuss how to quantify security
threats using some quantitative models.
In section 3, we will use the Mean
Failure Cost (MFC) as a cyber security
measure. In section 4, we apply the MFC
in a cloud computing system. In section
5, we proceed to threat classification to
propose appropriate security
countermeasures, and we conclude by
summarizing our results, focusing on
the strength of the cybersecurity measure
and sketching directions of further
research.
2 QUANTIFYING
DEPENDABILITY AND
SECURITY ATTRIBUTES
Most computer failures are due to
malicious actions, and such failures have
increased during the last decade. Lord
Kelvin stated "If you cannot measure it,
you cannot improve it." In other words,
security cannot be managed if it cannot
be measured. This clearly states the
importance of metrics to evaluate the
ability of systems to withstand attacks,
quantify the loss caused by security
breach and assess the effectiveness of
security solutions. Hence, there are
quantitative models that estimate the
dependability of a system which can be
measured according to the reliability,
availability, usability and security
metrics such as the mean time to failure
(MTTF), the mean time to discovery
(MTTD) and the mean failure cost
(MFC) [2, 14].
The mean time to failure (MTTF):
The mean time to failure (MTTF)
describes the expected time that a system
will operate before the first failure
occurs. It is the number of total hours of
service of all devices divided by the
number of devices [21].
The mean time between failures (MTBF):
The Mean time between failures (MTBF)
describes the expected time between two
consecutive failures for a repairable
system. It is the number of total hours of
service of all devices divided by the
number of failures [21].
The mean time to discovery (MTTD):
The mean time to discovery (MTTD)
refers to the mean time between
successive discoveries of unknown
vulnerabilities [20].
The mean time to exploit (MTTE):
The mean time to exploit (MTTE) refers
to the mean time between successive
exploitations of a known vulnerability
[21].
Average Uptime Availability (or Mean
Availability):
The mean availability is the proportion
of time during a mission or time period
that the system is available for use [20].
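As a quick illustration of the first two definitions, with hypothetical figures: if 50 devices deliver a combined 10,000 hours of service and 4 failures are observed over that period, then MTTF = 10,000 / 50 = 200 hours, while MTBF = 10,000 / 4 = 2,500 hours.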
These models reflect the failure rate of
the whole system; they ignore the
variance in stakes amongst different
stakeholders and the variance in failure
impact from one stakeholder to another.
They also make no distinction between
requirements. Besides, they consider that
any failure to meet any requirement is a
failure to meet the whole specification.
To estimate the MTTF of a system, we
only need to model its probability of
failure with respect to its specification.
By contrast, the mean failure cost
takes into account:
The variance in failure cost from
one requirement to another.
The variance in failure
probability from one component
to another
The variance in failure impact
from one stakeholder to another.
The mean failure cost (MFC) presents
many advantages:
It provides a failure cost per unit
of time (mean failure cost): it
quantifies the cost in terms of
financial loss per unit of
operation time (e.g. $/h)
It quantifies the impact of
failures: it provides cost as a
result of security attacks.
It distinguishes between
stakeholders: it provides cost for
each systems stakeholder as a
result of a security failure.
3 MFC: A MEASURE OF CYBER SECURITY
Computing systems are characterized by
five fundamental properties:
functionality, usability, performance,
cost, and dependability. Dependability of
a computing system is the ability to
deliver service that can justifiably be
trusted.
A systematic exposition of the concepts
of dependability consists of three parts:
the threats to, the attributes of, and the
means by which dependability is
attained.
Despite the existence of quantitative
metrics that estimate the attributes of
dependability like the Mean Time To
Failure MTTF for reliability and the
Mean Time To Exploitation MTTE (a
measure of the security vulnerability),
there is no way to measure directly the
dependability of the system or to
quantify security risks.
3.1 The Mean Failure Cost (MFC)
In [3], Ben Aissa et al. introduce the
concept of Mean Failure Cost as a
measure of dependability in general, and
a measure of cyber security in particular.
3.1.1 The Stakes Matrix
We consider a system S and we let H_1, H_2, H_3, ..., H_k be stakeholders of the system, i.e. parties that have a stake in its operation. We let R_1, R_2, R_3, ..., R_n be security requirements that we wish to impose on the system, and we let ST_{i,j}, for 1 ≤ i ≤ k and 1 ≤ j ≤ n, be the stake that stakeholder H_i has in meeting security requirement R_j. We let PR_j, for 1 ≤ j ≤ n, be the probability that the system fails to meet security requirement R_j, and we let MFC_i (Mean Failure Cost), for 1 ≤ i ≤ k, be the random variable that represents the cost to stakeholder H_i that may result from a security failure.
We quantify this random variable in terms of financial loss per unit of operation time (e.g. $/hour); it represents the loss of service that the stakeholder may experience as a result of a security failure. Under some assumptions of statistical independence, we find that the Mean Failure Cost for stakeholder H_i can be written as:

MFC_i = Σ_{1 ≤ j ≤ n} ST_{i,j} × PR_j    (4)

If we let MFC be the column-vector of size k that represents mean failure costs, let ST be the k×n matrix that represents stakes, and let PR be the column-vector of size n that represents probabilities of failing security requirements, then this can be written using the matrix product (×):

MFC = ST × PR    (5)
The Stakes matrix is filled, row by row,
by the corresponding stakeholders. As
for PR, we discuss below how to
generate it.
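As a minimal illustration of equations (4) and (5), the following Python sketch computes the MFC vector as a matrix product; the stakes and failure probabilities below are hypothetical placeholders, not the values of the case study in section 4.

import numpy as np

# Hypothetical stakes matrix ST (k = 2 stakeholders, n = 2 requirements), in $K/h.
ST = np.array([[500.0, 800.0],
               [150.0, 220.0]])
# Assumed probabilities PR_j of failing each requirement per hour of operation.
PR = np.array([0.001, 0.0005])

# Equation (4)/(5): MFC_i = sum over j of ST[i, j] * PR[j], i.e. MFC = ST x PR.
MFC = ST @ PR
print(MFC)  # mean failure cost per stakeholder, in $K/h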
3.1.2 The Dependency Matrix
We consider the architecture of system S, and let C_1, C_2, C_3, ..., C_h be the components of system S. Whether a particular security requirement is met or not may conceivably depend on which component of the system architecture is operational. We assume that no more than one component of the architecture may fail at any time, and define the following events:

E_i, for 1 ≤ i ≤ h, is the event: the operation of component C_i is affected due to a security breakdown.
E_{h+1}: no component is affected.
Given the set of complementary events E_1, E_2, E_3, ..., E_h, E_{h+1}, we know that the probability of an event F can be written in terms of conditional probabilities as:

P(F) = Σ_{k=1..h+1} P(F | E_k) × P(E_k)    (6)

We instantiate this formula with F being the event: the system fails with respect to some security requirement. To this effect, we let F_j denote the event that the system fails with respect to requirement R_j, and we write (given that the probability of failure with respect to R_j is denoted by PR_j):

PR_j = Σ_{k=1..h+1} P(F_j | E_k) × P(E_k)    (7)
If we introduce the DP (Dependency) matrix, which has n rows and h+1 columns, and where the entry at row j and column k is the probability that the system fails with respect to security requirement j given that component k has failed (or, for k = h+1, that no component has failed), and we introduce the vector PE of size h+1, such that PE_k is the probability of event E_k, then we can write:

PR = DP × PE    (8)

Matrix DP can be derived by the system's architect, in light of the role that each component of the architecture plays to achieve each security goal. As for deriving vector PE, we discuss this matter in the next section.
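Continuing the same hedged Python sketch (shapes and numbers assumed purely for illustration), PR is obtained from DP and PE by one matrix-vector product:

import numpy as np

n, h = 7, 9                         # e.g. 7 requirements and 9 components, as in section 4
DP = np.full((n, h + 1), 0.1)       # DP[j, k] = P(system fails R_j | component k is down); assumed
DP[:, h] = 0.0                      # last column: no component has failed
PE = np.array([0.01] * h + [0.91])  # assumed probabilities of events E_1 .. E_{h+1}

PR = DP @ PE                        # equation (8)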
3.1.3 The Impact Matrix
Components of the architecture may fail to operate properly as a result of security breakdowns brought about by malicious activity. In order to continue the analysis, we must specify the catalog of threats that we are dealing with, in the same way that analysts of a system's reliability define a fault model. To this effect, we catalog the set of security threats that we are facing, and we let T_1, T_2, T_3, ..., T_p represent the events that a cataloged threat has materialized, and we let T_{p+1} be the event that no threat has materialized. Also, we let PT be the vector of size p+1 such that:

PT_q, for 1 ≤ q ≤ p, is the probability that threat T_q has materialized during a unitary period of operation (say, 1 hour).
PT_{p+1} is the probability that no threat has materialized during a unitary period of operation time.
Then, by virtue of the probabilistic identity cited above, we can write:

PE_k = Σ_{q=1..p+1} P(E_k | T_q) × PT_q    (9)

If we introduce the IM (Impact) matrix, which has h+1 rows and p+1 columns, and where the entry at row k and column q is the probability that component C_k fails given that threat T_q has materialized (or, for q = p+1, that no threat has materialized), and we introduce the vector PT of size p+1, such that PT_q is the probability of event T_q, then we can write:

PE = IM × PT    (10)
Matrix IM can be derived by analyzing
which threats affect which components,
and assessing the likelihood of success
of each threat, in light of perpetrator
behavior and possible countermeasures.
Vector PT can be derived from known
perpetrator behavior, perpetrator models,
known system vulnerabilities, etc. We
refer to this vector as the Threat
Configuration Vector or simply as the
Threat Vector.
3.1.4 Summary
Given the stakes matrix ST, the
Dependency matrix DP, the impact
matrix IM and the threat vector PT, we
can derive the vector of mean failure
costs (one entry per stakeholder) by the
following formula:
MFC = ST × DP × IM × PT    (11)
where matrix ST is derived collectively
by the stakeholders, matrix DP is derived
by the system's architect, matrix IM is
derived by the security analyst from
architectural information, and vector PT
is derived by the security analyst from
perpetrator models. Figure 1 below
illustrates these matrices and their
attributes (size, content, indexing, etc).
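To make the chain of products concrete, here is a small end-to-end Python sketch of equation (11); the matrices are filled with placeholder data purely to show the dimensions and the order of the products, not to reproduce the case study values.

import numpy as np

k, n, h, p = 4, 7, 9, 14                # 4 stakeholders, 7 requirements,
                                        # 9 components, 14 threats (as in section 4)
rng = np.random.default_rng(0)          # placeholder data, for shape-checking only
ST = rng.uniform(0, 1000, (k, n))       # stakes, in $K/h
DP = rng.uniform(0, 1, (n, h + 1))      # P(fail R_j | component k down)
IM = rng.uniform(0, 1, (h + 1, p + 1))  # P(component k fails | threat q)
PT = rng.dirichlet(np.ones(p + 1))      # threat vector: probabilities summing to 1

MFC = ST @ DP @ IM @ PT                 # equation (11): one mean failure cost per stakeholder
print(MFC.shape)                        # (4,)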
4 ILLUSTRATION: CLOUD
COMPUTING SYSTEM
We illustrate the use of our cyber
security metrics on a practical
application, namely a Cloud Computing
system. We derive in turn the three
matrices of interest and the threat vector.
To this effect, we identify the security
requirements, the stakeholders and their
stakes in meeting these requirements and
the architectural components of this
system.
4.1 The stakes matrix
As for security requirements, we
consider the security concerns that are
most often cited in connection with
cloud computing [7, 14, 16], namely:
availability, integrity, and
confidentiality. We further refine this
classification by considering different
levels of criticality of the data to which
these requirements apply:
Availability: it refers to the
subscriber's ability to retrieve
his/her information when he/she
needs it. Unavailability may be
more or less costly depending on
how critical the data is to the
timely operation of the
subscriber. Thus, we distinguish
two types:
o Critical Data
o Archival Data
Integrity: it refers to the
assurances offered to subscribers
that their data is not lost or
damaged as a result of malicious
or inadvertent activity.
Violations of integrity may be
more or less costly depending on
how critical the data is to the
secure operation of the
subscriber. Accordingly, we
distinguish two types:
o Critical Data
o Archival Data
Confidentiality: it refers to the
assurances offered to subscribers
that their data is protected from
unauthorized access. Violations
of confidentiality may be more or
less costly depending on how
confidential the divulged data is.
The data can be classified into:
o Highly Classified Data
o Proprietary Data
o Public Data
For the purposes of our model, we then
assume that we are dealing with seven
generic security requirements, namely:
AVC: Availability of Critical
Data.
AVA: Availability of Archival
Data.
INC: Integrity of Critical Data.
INA: Integrity of Archival Data.
CC: Confidentiality of Classified
Data.
CP: Confidentiality of
Proprietary Data.
CB: Confidentiality of Public
Data.
We assume that the provider makes
different provisions for these
requirements, putting more emphasis on
critical requirements than on less critical
requirements. We further assume, for the
sake of argument, that for each
requirement, the provider makes the
same provisions for all its subscribers;
hence if the provider fails to meet a
particular requirement, that failure
applies to all the subscribers that are
dependent on it.
We consider three classes of
stakeholders in a cloud computing
situation, namely: the service provider,
the corporate or organizational
subscribers, and the individual
subscribers. For the sake of illustration,
we consider a fictitious running example,
where we have a cloud computing
provider (PR), and a sample of three
subscribers:
A corporate subscriber (CS),
A governmental subscriber (GS),
An individual subscriber (IS).
Based on a quantification of these stakes in terms of thousands of dollars ($K) per hour of operation, we produce the stakes matrix shown in Table 1.

Table 1: Stakes Matrix: cost of failing a security requirement, stakes in $K/h

Stakeholders   AVC    AVA    INC    INA    CC     CP     CB
PR             500    90     800    150    1500   1200   120
CS             150    40     220    80     250    180    60
GS             60     20     120    50     2500   30     12
IS             0.05   0.015  0.30   0.20   0.30   0.10   0.01
4.2 The Dependency matrix
In the cloud computing system, we focus
on two parts: the front end and the back
end connecting to each other through the
Internet. The front end is the side of the
computer user or client including the
client's computer and the application
required to access to the cloud
computing system. The back end is the
"cloud" section of the system which are
the various physical/virtual computers,
servers, software and data storage
systems that create the "cloud" of
computing services. The most common
approach [6, 9] defines cloud computing
services as three layers of services:
Software as a Service (SaaS) offers
finished applications that end users
can access through a thin client
like Gmail, Google Docs and
Salesforce.com
Platform as a Service (PaaS)
offers an operating system as well
as suites of programming
languages and software
development tools that customers
can use to develop their own
applications like Microsoft
Windows Azure and Google App
Engine.
Infrastructure as a Service (IaaS)
offers end users direct access to
processing, storage and other
computing resources and allows
them to configure those resources
and run operating systems and
software on them as they see fit
like Amazon Elastic Compute
Cloud (EC2) and IBM Blue cloud.
Table 1: Stakes matrix: cost of failing a security requirement, in $K/h

Stakeholders   AVC     AVA     INC     INA     CC      CP      CB
PR             500     90      800     150     1500    1200    120
CS             150     40      220     80      250     180     60
GS             60      20      120     50      2500    30      12
IS             0.05    0.015   0.30    0.20    0.30    0.10    0.01
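To make the structure of the stakes matrix concrete, the following minimal Python sketch encodes Table 1 as a labeled numpy array. It is purely illustrative: the numpy representation and the variable names are our own choices, not part of the model itself.

import numpy as np

# Stakes matrix ST from Table 1, in $K per hour of operation.
# Rows: stakeholders; columns: generic security requirements.
stakeholders = ["PR", "CS", "GS", "IS"]
requirements = ["AVC", "AVA", "INC", "INA", "CC", "CP", "CB"]
ST = np.array([
    [500.0,  90.0,   800.0, 150.0, 1500.0, 1200.0, 120.0 ],  # provider
    [150.0,  40.0,   220.0,  80.0,  250.0,  180.0,  60.0 ],  # corporate subscriber
    [ 60.0,  20.0,   120.0,  50.0, 2500.0,   30.0,  12.0 ],  # governmental subscriber
    [  0.05,  0.015,   0.30,  0.20,   0.30,   0.10,   0.01], # individual subscriber
])
# e.g. the governmental subscriber's stake in confidentiality of classified data:
print(ST[stakeholders.index("GS"), requirements.index("CC")])  # 2500.0 $K/h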
The cloud computing paradigm optimizes the cost of physical resources (servers, CPUs, memories) through virtualization techniques. This lets users run numerous applications and functions on a single PC or server, instead of having to run them on separate machines as in the past. The cloud computing architecture contains three layers [9, 10]:

- Core foundational capabilities: a browser, a proxy server, a router/firewall, and a load balancer.
- Cloud services: a web server, an application server, a database server, a backup server, and a storage server.
- User tools.
Assuming that no more than one component fails at a time, and considering the additional event that no component has failed, the dependency matrix has 9 + 1 = 10 columns and 7 rows (one for each security requirement). We use the analysis of the total system, described in [4], to fill the dependency matrix, as we do in Table 2.
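As a quick check on these dimensions, here is a minimal sketch, assuming numpy (the variable and label names are our own), of the dependency matrix skeleton:

import numpy as np

# 7 security requirements x (9 architectural components + the "no failure" event).
requirements = ["AVC", "AVA", "INC", "INA", "CC", "CP", "CB"]
components = ["Browser", "Proxy server", "Router/Firewall", "Load balancer",
              "Web server", "Application server", "Database server",
              "Backup server", "Storage server", "No failure"]
DP = np.zeros((len(requirements), len(components)))  # entries filled from Table 2
DP[0, :] = [1, 1, 1, 1, 0.44, 0.28, 1, 0.01, 1, 0]   # e.g. the AVC row of Table 2
assert DP.shape == (7, 10)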
4.3 The impact matrix
The next step in our model is to derive the impact matrix, i.e., to determine how the set of threats that we wish to consider affects the components of our system. As mentioned above, cloud computing is based on virtualization technology, but the latter introduces major security risks; the system is therefore threatened by many types of attacks. The dependency and impact values are shown in Tables 2 and 3.
Table 2: Dependency matrix (rows: security requirements; columns: components, plus the "no failure" event)

              Browser  Proxy   Router/   Load      Web     Appl.   Database  Backup  Storage  No
                       server  Firewall  balancer  server  server  server    server  server   failure
AVC           1        1       1         1         0.44    0.28    1         0.01    1        0
AVA           1        1       1         1         0.44    0.28    0.28      0.01    1        0
INC           0.14     0.14    1         1         0.44    0.14    1         0.01    1        0
INA           0.14     0.14    1         1         0.44    0.14    0.14      0.01    1        0
CC            0.44     0.14    1         1         0.44    0.44    0.44      0.01    0.44     0
CP            0.44     0.14    1         1         0.44    0.44    0.44      0.01    0.44     0
CB            0.44     0.14    1         1         0.44    0.44    0.44      0.01    0.44     0
Table 3: Impact matrix (rows: components, plus "no failure" (NoF); columns: threats, plus "no threat" (NoT))

        MVH    CVH    VMm    VMS    MVV    VMC    VMM    DoS    FA     DL     MI     ASTH   ANU    IAI    NoT
Brws    0      0      0      0      0      0      0      0.02   0.01   0      0.03   0.02   0      0.03   0
Prox    0.01   0.05   0      0.01   0.01   0.05   0.05   0.02   0.01   0      0.005  0.02   0.01   0      0
R/FW    0.03   0.05   0.033  0.03   0.03   0.05   0.05   0.06   0.04   0      0.005  0.02   0.01   0.01   0
LB      0.02   0.003  0      0.01   0.02   0.003  0.003  0.06   0.04   0      0.005  0.02   0.01   0.01   0
WS      0.03   0.003  0.033  0      0.03   0.003  0.003  0.02   0.04   0      0.01   0.02   0.01   0.01   0
AS      0.02   0.003  0.033  0.06   0.02   0.003  0.003  0.036  0.04   0      0.05   0.02   0.01   0.07   0
DBS     0.001  0      0.033  0.04   0.001  0      0      0.036  0.04   0.05   0.03   0.02   0.01   0.06   0
BS      0.001  0      0      0.04   0.001  0      0      0.036  0.04   0.05   0.03   0.02   0.01   0.06   0
SS      0.04   0.05   0      0.04   0.04   0.05   0.05   0.036  0.04   0.05   0.03   0.02   0.01   0.06   0
NoF     0.06   0.04   0.03   0.03   0.06   0.04   0.04   0.01   0.02   0.01   0.02   0.05   0.06   0.005  1
These attacks can be classified into three categories [6, 8, 14, 17, 18]:
- Security threats originating from the host (hypervisor): this class includes monitoring of virtual machines from the host, virtual machine modification, and threats on communications between virtual machines and the host.
- Placement of malicious VM images on physical systems: this class includes security threats originating between the customer and the datacenter, flooding attacks, denial of service (DoS) attacks, data loss or leakage, malicious insiders, account, service and traffic hijacking, and abuse and nefarious use of cloud computing.
- Insecure application programming interfaces: this class includes security threats originating from the virtual machines, monitoring of VMs from other VMs, virtual machine mobility, and threats on communications between virtual machines.
In this section we have catalogued fourteen distinct types of threats. To compute the MFC, we need to know the probability that each threat materializes during one hour of operation; these probabilities, like the 150 entries of the impact matrix, come from our empirical study [4], which draws on an extensive body of references. The resulting threat vector is shown in Table 4.
Using the three matrices (stakes, dependency, and impact) and the threat vector, we can compute the vector of mean failure costs for the stakeholders of the cloud computing system using the formula:

MFC = ST ∘ DP ∘ IM ∘ PT    (11)
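As an illustration of formula (11), the following minimal Python sketch (numpy assumed; the variable names are ours) transcribes the published values of Tables 1-4 and evaluates the product. Because the published entries are rounded, the output only approximates the figures reported in Table 5.

import numpy as np

# Stakes matrix ST (Table 1), $K/h: rows PR, CS, GS, IS; columns AVC..CB.
ST = np.array([
    [500,   90,    800,  150,  1500, 1200, 120 ],
    [150,   40,    220,   80,   250,  180,  60 ],
    [ 60,   20,    120,   50,  2500,   30,  12 ],
    [  0.05, 0.015,  0.30, 0.20,  0.30, 0.10, 0.01],
])
# Dependency matrix DP (Table 2): rows AVC..CB; columns browser .. "no failure".
DP = np.array([
    [1,    1,    1, 1, 0.44, 0.28, 1,    0.01, 1,    0],
    [1,    1,    1, 1, 0.44, 0.28, 0.28, 0.01, 1,    0],
    [0.14, 0.14, 1, 1, 0.44, 0.14, 1,    0.01, 1,    0],
    [0.14, 0.14, 1, 1, 0.44, 0.14, 0.14, 0.01, 1,    0],
    [0.44, 0.14, 1, 1, 0.44, 0.44, 0.44, 0.01, 0.44, 0],
    [0.44, 0.14, 1, 1, 0.44, 0.44, 0.44, 0.01, 0.44, 0],
    [0.44, 0.14, 1, 1, 0.44, 0.44, 0.44, 0.01, 0.44, 0],
])
# Impact matrix IM (Table 3): rows browser .. "no failure"; columns = 14 threats + "no threat".
IM = np.array([
    [0,     0,     0,     0,    0,     0,     0,     0.02,  0.01, 0,    0.03,  0.02, 0,    0.03,  0],
    [0.01,  0.05,  0,     0.01, 0.01,  0.05,  0.05,  0.02,  0.01, 0,    0.005, 0.02, 0.01, 0,     0],
    [0.03,  0.05,  0.033, 0.03, 0.03,  0.05,  0.05,  0.06,  0.04, 0,    0.005, 0.02, 0.01, 0.01,  0],
    [0.02,  0.003, 0,     0.01, 0.02,  0.003, 0.003, 0.06,  0.04, 0,    0.005, 0.02, 0.01, 0.01,  0],
    [0.03,  0.003, 0.033, 0,    0.03,  0.003, 0.003, 0.02,  0.04, 0,    0.01,  0.02, 0.01, 0.01,  0],
    [0.02,  0.003, 0.033, 0.06, 0.02,  0.003, 0.003, 0.036, 0.04, 0,    0.05,  0.02, 0.01, 0.07,  0],
    [0.001, 0,     0.033, 0.04, 0.001, 0,     0,     0.036, 0.04, 0.05, 0.03,  0.02, 0.01, 0.06,  0],
    [0.001, 0,     0,     0.04, 0.001, 0,     0,     0.036, 0.04, 0.05, 0.03,  0.02, 0.01, 0.06,  0],
    [0.04,  0.05,  0,     0.04, 0.04,  0.05,  0.05,  0.036, 0.04, 0.05, 0.03,  0.02, 0.01, 0.06,  0],
    [0.06,  0.04,  0.03,  0.03, 0.06,  0.04,  0.04,  0.01,  0.02, 0.01, 0.02,  0.05, 0.06, 0.005, 1],
])
# Threat vector PT (Table 4): probability of each threat materializing in one hour.
PT = np.array([8.063e-4, 8.063e-4, 8.063e-4, 8.063e-4, 40.31e-4, 40.31e-4, 40.31e-4,
               14.39e-4, 56.44e-4, 5.75e-4, 6.623e-4, 17.277e-4, 17.277e-4,
               29.026e-4, 0.9682])

# Formula (11): one mean failure cost per stakeholder, in $K/h.
MFC = ST @ DP @ IM @ PT
for name, cost in zip(["PR", "CS", "GS", "IS"], MFC):
    print(f"MFC({name}) ~ {cost:.5f} $K/h")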
Table 5: Stakeholder mean failure cost

Stakeholders   MFC ($K/h)
PR             15.20443
CS             3.53839
GS             8.98502
IS             0.00341
Table 4: Threat vector

Threat                                                        Probability
Monitoring virtual machines from host (MVM)                   8.063 x 10^-4
Communications between virtual machines and host (CBVH)       8.063 x 10^-4
Virtual machine modification (VMm)                            8.063 x 10^-4
Placement of malicious VM images on physical systems (VMS)    8.063 x 10^-4
Monitoring VMs from other VMs (VMM)                           40.31 x 10^-4
Communication between VMs (VMC)                               40.31 x 10^-4
Virtual machine mobility (VMM)                                40.31 x 10^-4
Denial of service (DoS)                                       14.39 x 10^-4
Flooding attacks (FA)                                         56.44 x 10^-4
Data loss or leakage (DL)                                     5.75 x 10^-4
Malicious insiders (MI)                                       6.623 x 10^-4
Account, service and traffic hijacking (ASTH)                 17.277 x 10^-4
Abuse and nefarious use of cloud computing (ANU)              17.277 x 10^-4
Insecure application programming interfaces (IAI)             29.026 x 10^-4
No threats (NoT)                                              0.9682
From Table 5 above we can see that the cost of failure is considerable for every stakeholder. To avoid these high failure costs and reduce risk, we start by identifying vulnerabilities, which helps us understand how an attacker might exploit vulnerable points. This analysis allows us to deploy efficient countermeasures that mitigate vulnerabilities at their earliest stages, before they become more harmful, and to assess their effects on activities and on stakeholders' goals. Hence, critical vulnerabilities in the cloud computing system have been identified. These vulnerabilities are, however, dispersed over two intrusion spaces, internal and/or external, which we now seek to distinguish so that appropriate countermeasures can be applied.
5 MFC MODEL EXTENSION
In this section we illustrate an extension of our MFC model: we suggest a classification of the identified threats and propose two derived measures, the internal MFC and the external MFC, in order to locate the source of the threats facing the cloud computing system and to develop appropriate strategies to prevent or mitigate their effects.
5.1 Classification methods
Threat assessment is an essential component of an information security risk evaluation. In order to identify vulnerabilities and to select mitigation techniques, it is important to understand potential threat sources and classes well.

In threat classification, threats are presented together with the appropriate security services and a recommended solution [24]. The main aim of threat classification is to contribute to the understanding of the nature of threats by grouping them into classes depending on criteria such as source, modifying factors, resources, and consequences [24]. In fact, identifying and classifying threats helps in assessing their impacts and in developing strategies to prevent or mitigate their effects on the system.

Threat classification is thus a planned activity for identifying and assessing system threats and vulnerabilities and then defining countermeasures to prevent or mitigate their effects on the system.
A threat is the adversary's goal, or what an adversary might try to do to a system [25]. It is also described as the capability of an adversary to attack a system [25]. Thus, a threat may be defined in two ways: by the techniques that attackers use to exploit vulnerabilities in applications, or by the impact or effect of threats on one's assets. Accordingly, some threat classification methods are based on the first definition and others on the second.

For the threat classification methods based on the effect of threats, we cite:
- In [25, 26], Microsoft developed a method, called STRIDE, for classifying computer security threats. It is a classification scheme for characterizing known threats according to the goals and purposes of the attacks (or the motivation of the attacker). The STRIDE acronym is formed from the first letter of each of the following classes: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
For the threat classification methods based on the techniques of threats, we cite:
- In [27], Visveswarn Chidambaram reviewed information system threats and organized them into three classes: network threats, server or host threats, and application threats.
- In [28], Lukas Ruf et al. proposed a model that classifies the threat space into subspaces according to three orthogonal dimensions, labeled Motivation (accidental, deliberate), Localization (external, internal), and Agent (human, technological, force majeure), in order to ease risk assessment.
- In [23], Fariborz Farahmand et al. considered threats to a network system from two points of view: the threat agent and the penetration technique. In fact, a threat is caused by a threat agent using a specific penetration technique to produce an undesired effect on the network. An agent may be an unauthorized user, an authorized user, or an environmental factor, and penetration techniques are classified into physical, personnel, hardware, software, and procedural.
- In [24], Karen Loch et al. proposed a four-dimensional model for information system security that classifies threats by source (internal, external), by perpetrator (human, non-human), by the intent of the perpetrator's actions irrespective of the source (accidental, intentional), and by the consequences of the threat on resources (disclosure, destruction, modification, denial of use).
- In [22], Antoon Rufi proposed a model to classify network security threats. The model contains four main classes: unstructured threats, structured threats, external threats, and internal threats.
- In [31], Kishor Trivedi et al. proposed a model that classifies threats into four classes: faults or attacks (physical faults, software bugs), errors (overload, misconfiguration), failures (physical attacks, software-based attacks, man in the middle, jamming), and accidents. It is an extension of Laprie's taxonomy [32], which classifies threats into three types: faults, errors, and failures.
For the purposes of our system, we propose to classify the threat space into subspaces according to a model of three dimensions labeled Internal, External, and Internal/External. This classification allows us to localize the origin (or source) of a threat: a threat is caused either from within an organization, system, or architecture, or from an external point of origin.
5.1.1 Internal threats
Internal threats occur when someone has authorized access to the network, with either an account on a server or physical access to the network. A threat can be internal to the organization as the result of employee action or of the failure of an organizational process.

Regarding internal attacks, McNamara lists, in [29], the following insider threats: theft of proprietary information, accidental or non-malicious breaches, sabotage, fraud, and eavesdropping/snooping.
5.1.2 External threats
External threats can arise from individuals or organizations working outside a company. They do not have authorized access to the computer systems or network, and they work their way into a network mainly from the Internet or from dialup access servers. The most obvious external threats to computer systems and the resident data are natural disasters: hurricanes, fires, floods, and earthquakes. External attacks occur through connected networks (wired and wireless), physical intrusion, or a partner network.

Lacey et al. provide, in [30], an updated profile of sophisticated outside attacks that can compromise the security of Mobile Ad-hoc Networks (MANETs). These include eavesdropping, routing table overflow, routing cache poisoning, routing maintenance, data forwarding, wormhole, sinkhole, byzantine, selfish nodes, external denial of service, internal denial of service, spoofing, Sybil, badmouthing, viruses, and flattering.
5.1.3 Internal/external threats
Internal/external threats take place when someone who has authorized access to the network (for example, an employee of the organization) causes external threats to the system.
Table 6: Probability of space intrusion

Threat   Probability committed   Probability committed
         by an outsider          by an insider
MVM      1                       0
CBVH     1                       0
VMm      0.6                     0.4
VMS      1                       0
VMM      0.5                     0.5
VMC      0.5                     0.5
VMM      0.6                     0.4
DoS      0.136                   0.864
FA       1                       0
DL       0.8                     0.2
MI       0                       1
ASTH     1                       0
ANU      0                       1
IAI      0.8                     0.2
5.2 MFC computing
Using empirical data from [3], we can decompose the probability that each threat materializes into two complementary probabilities (committed by an outsider vs. committed by an insider of the system), as shown in Table 6.
5.3 Results and discussion
The MFC formula can be extended into two significant results:

Mean failure cost of external threats:

MFC_ext = ST ∘ DP ∘ IM ∘ PT_ext    (12)

Stakeholders   MFC_ext ($K/h)
PR             10.61051
CS             2.46562
GS             6.278502
IS             0.002382

Mean failure cost of internal threats:

MFC_int = ST ∘ DP ∘ IM ∘ PT_int    (13)

Stakeholders   MFC_int ($K/h)
PR             4.5932
CS             1.07261
GS             2.7060
IS             0.001035

Here PT_ext and PT_int denote the threat vector weighted by the outsider and insider probabilities of Table 6, respectively.
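Continuing the sketch given after formula (11), the decomposition can be computed by weighting the threat vector with the outsider/insider shares of Table 6. The handling of the "no threat" entry, which Table 6 does not split, is our own assumption; it is harmless here because that entry contributes nothing to the MFC (its impact flows only through the "no failure" column of the dependency matrix, which is zero).

# Outsider/insider shares from Table 6, in the same threat order as PT.
# The last entry is "no threat", which Table 6 does not split; we set both
# shares to 0 (an assumption), which does not affect the result.
outsider = np.array([1, 1, 0.6, 1, 0.5, 0.5, 0.6, 0.136, 1, 0.8, 0, 1, 0, 0.8, 0])
insider  = np.array([0, 0, 0.4, 0, 0.5, 0.5, 0.4, 0.864, 0, 0.2, 1, 0, 1, 0.2, 0])

MFC_ext = ST @ DP @ IM @ (PT * outsider)   # formula (12)
MFC_int = ST @ DP @ IM @ (PT * insider)    # formula (13)
for name, ext, itn in zip(["PR", "CS", "GS", "IS"], MFC_ext, MFC_int):
    print(f"{name}: external ~ {ext:.5f} $K/h, internal ~ {itn:.5f} $K/h")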
Computing these MFC extensions identifies the critical intrusion space. The extensions of the MFC also guide the choice of countermeasures: in our case, we can adopt solutions such as adding more firewalls, proxy servers, and antivirus servers.
6 CONCLUSION
Cloud computing is an emerging computing paradigm that provides an efficient, scalable, and cost-effective way for today's organizations to deliver consumer IT services over the Internet. A variety of cloud computing models are available, providing both solid support for core business functions and the flexibility to deliver new services. However, this flexibility has created a number of security concerns: cloud computing does not offer absolute security of subscriber data with respect to data integrity, confidentiality, and availability.
In this paper we have illustrated the use of the MFC model on a practical application, namely a cloud computing system. This quantitative model enables cloud service providers and cloud subscribers to quantify the risks they take with the security of their assets and to make security-related decisions on the basis of quantitative analysis.

In future work, we envision refining the generic architecture of cloud computing systems and using cloud-specific empirical data to refine the estimation of the dependency matrix and the impact matrix.
7 REFERENCES
1. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R.: Above the Clouds: A Berkeley View of Cloud Computing. Technical report EECS-2009-28, UC Berkeley, (2009).
2. Johnson, B.W.: Design and Analysis of Fault-Tolerant Digital Systems. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, (1989).
3. Ben Aissa, A., Abercrombie, R.K., Sheldon, F.T., Mili, A.: Quantifying security threats and their potential impacts: a case study. Innovation in Systems and Software Engineering: A NASA Journal, 6, 269--281, (2010).
4. Ben Aissa, A.: Vers une mesure économétrique de la sécurité des systèmes informatiques (Towards an econometric measure of computer systems security). Doctoral dissertation, Faculty of Sciences of Tunis, submitted, Spring (2012).
5. Chow, R., Golle, P., Jakobsson, M., Shi, E.,
Staddon, J., Masuoka, R., Molina, J.:
Controlling data in the cloud: Outsourcing
computation without outsourcing control.
The 2009 ACM Workshop on Cloud
Computing Security, Chicago, Illinois, USA,
(2009).
6. Cloud Security Alliance.: Top Threats to
Cloud Computing V 1.0. (2010),
https://cloudsecurityalliance.org/topthreats
7. Hanna, S.: Cloud Computing: Finding the
silver lining. (2009).
8. Ibrahim, A. S., Hamlyn-Harris, J., Grundy,
J.: Emerging Security challenges of cloud
virtual infrastructure. the Asia Pacific
Software Engineering Conference 2010
Cloud Workshop, (2010).
9. Jinesh, Varia.: Cloud Architectures.
Technology Evangelist Amazon Web
Services, (2008).
10. Jaio, Orea. et al.: VisioTCI Reference Architecture (v2.12). Cloud Security Alliance, (2011).
11. Mell, P., Grance, T.: Effectively and
Securely Using the Cloud Computing
Paradigm. In ACM Cloud Computing
Security Workshop, (2009).
12. Mell, P., Grance, T.: The NIST definition of cloud computing. Communications of the ACM, 53(6), 50--50, (2010).
13. Sean, C. Kevin, C.: Cloud Computing
Security. International Journal of Ambient
Computing and Intelligence, 3(1), 14--19,
(2011).
14. Subashini, S., Kavitha, V.: A survey on
security issues in service delivery models of
cloud computing. Journal of Network and
Computer Applications, (2010).
15. Vaquero, L M., Rodero-Merino, L., Caceres,
J., Lindner, M.: A Break in the Clouds:
Towards a Cloud Definition. ACM
SIGCOMM Computer Communication
Review, 39(1), 50--55, (2009).
16. Wang, L., von Laszewski, G., Kunze, M.,
Tao, J.: Cloud computing: A Perspective
study. Grid Computing Environments (GCE)
workshop, (2008).
17. Wayne, J., Timothy, G.: Guidelines on
Security and Privacy in Public Cloud
Computing. Information Technology
Laboratory, (2011).
18. Wooley, P: Identifying Cloud Computing
Security Risks. University of Oregon,
Master's Degree Program, (2011).
19. Xuan, Z., Nattapong, W., Hao, L., Xuejie Z.:
Information Security Risk Management
Framework for the Cloud Computing
Environments. 10th IEEE International
Conference on Computer and Information
Technology (CIT 2010), (2010).
20. The Center for Internet Security (CIS).: The
CIS Security Metrics v1.0.0. (2009)
21. Speaks, S.: Reliability and MTBF Overview.
Vicor Reliability Engineering, (2010).
22. Rufi, A.: Vulnerabilities, Threats, and
Attacks, Rufi, A.: Network Security 1 and 2
Companion Guide (Cisco Networking
Academy). Cisco Press, (2006).
23. Farahmand, F., Navathe, S., Sharp, G.,
Enslow, P.: A Management Perspective on
Risk of Security Threats to Information
Systems. Information Technology and
Management archive, 6: 203--225 (2005).
24. Loch, K., Carr, H., Warkentin, M.: Threats to Information Systems: Today's Reality, Yesterday's Understanding. Management Information Systems Quarterly, 16(2), 173--186, (1992).
25. Swiderski, F., Snyder, W.: Threat Modeling.
Microsoft Press, (2004).
26. Meier, J., Mackman, A., Vasireddy, S.,
Dunner, M., Escamilla, R., Murukan, A.:
Improving web application security: threats
and counter measures. Satyam Computer
Services, Microsoft Corporation, (2003)
27. Chidambaram, V.: Threat modeling in
enterprise architecture integration. 2004,
http://www.infosys.com/services/systeminte
gration/ThreatModelingin.pdf
28. Ruf, L., Thorn, A., Christen, T., Gruber, B.: Threat Modeling in Security Architecture - The Nature of Threats. ISSS Working Group on Security Architectures.
29. McNamara, R.: Networks: where does the real threat lie? Information Security Technical Report, 3(4), 65--74, (1998).
30. Lacey, T.H., Mills, R.F., Mullins, B.E., Raines, R.A., Oxley, M.E., Rogers, S.K.: RIPsec: using reputation-based multilayer security to protect MANETs. Computers and Security, 31(1), 122--136, (2011).
31. Trivedi, K., Kim, D., Roy, A., Medhi, D.: Dependability and security models. International Workshop on Design of Reliable Communication Networks (DRCN), IEEE, 11--20, (2009).
32. Avizienis, A., Laprie, J., Randell, B.,
Landwehr, C.: Basic concepts and taxonomy
of dependable and secure computing. IEEE
Trans. Dependable and Secure Computing,
1(1), (2004).