Social and Human

Elements of
Information Security:
Emerging Trends
and Countermeasures

Manish Gupta
State University of New York, Buffalo, USA

Raj Sharman
State University of New York, Buffalo, USA

Information Science Reference


Hershey • New York
Director of Editorial Content: Kristin Klinger
Managing Development Editor: Kristin M. Roth
Assistant Development Editor: Deborah Yahnke
Editorial Assistant: Rebecca Beistline
Director of Production: Jennifer Neidig
Managing Editor: Jamie Snavely
Assistant Managing Editor: Carole Coulson
Typesetter: Cindy Consonery
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.

Published in the United States of America by


Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

and in the United Kingdom by


Information Science Reference (an imprint of IGI Global)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanbookstore.com

Copyright © 2009 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by
any means, electronic or mechanical, including photocopying, without written permission from the publisher.
Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does
not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Social and human elements of information security : emerging trends and countermeasures / Manish Gupta and Raj Sharman, editor [sic].

p. cm.

Includes bibliographical references and index.

Summary: "The book represents a compilation of articles on technology, processes, management, governance, research and practices on
human and social aspects of information security"--Provided by publisher.

ISBN 978-1-60566-036-3 (hbk.) -- ISBN 978-1-60566-037-0 (ebook)

1. Computer crimes. 2. Computer security. 3. Security systems. I. Gupta, Manish, 1978- II. Sharman, Raj.

HV6773.S63 2009

658.4'78--dc22

2008013115

British Cataloguing in Publication Data


A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily of
the publisher.

If a library purchased a print copy of this publication, please go to http://www.igi-global.com/agreement for information on activating the
library's complimentary electronic access to this publication.
List of Reviewers

Deapesh Misra, George Mason University, USA
Kris Gaj, George Mason University, USA
Mahil Carr, IDRBT, Hyderabad, India
Mahi Dontamsetti, M3 Security Inc., USA
Anup Narayanan, First Legion Consulting, India
Shambhu Upadhyaya, CSE, SUNY Buffalo, USA
Madhusudhanan Chandrasekaran, CSE, SUNY Buffalo, USA
Chandan Mazumdar, Jadavpur University, West Bengal, India
Mridul S. Barik, Jadavpur University, West Bengal, India
Anirban Sengupta, Jadavpur University, India
Nick Pullman, Citigroup, USA
Arunabha Mukhopadhyay, IIM Kolkata, India
Samir Chatterjee, Claremont Graduate University, USA
JinKyu Lee, Oklahoma State University, USA
Jingguo Wang, University of Texas, USA
David Porter, Detica Corporation, UK
Tejaswini Herath, SOM, SUNY Buffalo, USA
Donald Murphy, M&T Bank Corporation, USA
Robert Franz, M&T Bank Corporation, USA
Jessica Pu Li, SOM, SUNY Buffalo, USA
Lawrence Harold, IIT Madras, India
Siddhartha Gupta, IBM Global Services, Gurgaon, India
4 Anonymous Reviewers
Table of Contents

Foreword.............................................................................................................................................. xv

Preface ..............................................................................................................................................xvii

Section I
Human and Psychological Aspects

Chapter I
Human and Social Aspects of Password Authentication........................................................................ 1
Deborah S. Carstens, Florida Institute of Technology, USA

Chapter II
Why Humans are the Weakest Link...................................................................................................... 15
Marcus Nohlberg, School of Humanities and Informatics, University of Skövde, Sweden

Chapter III
Impact of the Human Element on Information Security....................................................................... 27
Mahi Dontamsetti, President, M3 Security, USA
Anup Narayanan, Founder Director, First Legion Consulting, India

Chapter IV
The Weakest Link: A Psychological Perspective on Why Users Make Poor Security
Decisions............................................................................................................................................... 43
Ryan West, Dell, Inc., USA
Christopher Mayhorn, North Carolina State University, USA
Jefferson Hardee, North Carolina State University, USA
Jeremy Mendel, North Carolina State University, USA

Chapter V
Trusting Computers Through Trusting Humans: Software Verification in a Safety-Critical
Information Society.............................................................................................................................. 61
Alison Adam, University of Salford, UK
Paul Spedding, University of Salford, UK
Section II
Social and Cultural Aspects

Chapter VI
Information Security Culture as a Social System: Some Notes of Information
Availability and Sharing....................................................................................................................... 77
Rauno Kuusisto, Finland Futures Research Center, Turku School of Economics, Finland
Tuija Kuusisto, Finnish National Defense University, Finland

Chapter VII
Social Aspects of Information Security: An International Perspective................................................. 98
Paul Drake, Centre for Systems Studies Business School, University of Hull, UK
Steve Clarke, Centre for Systems Studies Business School, University of Hull, UK

Chapter VIII
Social and Human Elements of Information Security: A Case Study................................................. 116
Mahil Carr, Institute for Development and Research in Banking Technology, India

Chapter IX
Effects of Digital Convergence on Social Engineering Attack Channels........................................... 133
Bogdan Hoanca, University of Alaska Anchorage, USA
Kenrick Mock, University of Alaska Anchorage, USA

Chapter X
A Social Ontology for Integrating Security and Software Engineering............................................. 148
E. Yu, University of Toronto, Canada
L. Liu, Tsinghua University, China
J. Mylopoulos, University of Toronto, Canada

Section III
Usability Issues

Chapter XI
Security Configuration for Non-Experts: A Case Study in Wireless Network Configuration............ 179
Cynthia Kuo, Carnegie Mellon University, USA
Adrian Perrig, Carnegie Mellon University, USA
Jesse Walker, Intel Corporation, USA

Chapter XII
Security Usability Challenges for End-Users..................................................................................... 196
Steven Furnell, Centre for Information Security & Network Research, University of
Plymouth, UK
Chapter XIII
CAPTCHAs: Differentiating between Human and Bots.................................................................... 220
Deapesh Misra, VeriSign iDefense Security Intelligence Services, USA

Chapter XIV
Privacy Concerns when Modeling Users in Collaborative Filtering Recommender Systems............ 247
Sylvain Castagnos, LORIA—Université Nancy 2, Campus Scientifique, France
Anne Boyer, LORIA—Université Nancy 2, Campus Scientifique, France

Section IV
Organizational Aspects

Chapter XV
An Adaptive Threat-Vulnerability Model and the Economics of Protection...................................... 262
C. Warren Axelrod, US Trust, USA

Chapter XVI
Bridging the Gap between Employee Surveillance and Privacy Protection....................................... 283
Lilian Mitrou, University of the Aegean, Greece
Maria Karyda, University of the Aegean, Greece

Chapter XVII
Aligning IT Teams’ Risk Management to Business Requirements.................................................... 301
Corey Hirsch, LeCroy Corporation, USA
Jean-Noel Ezingeard, Kingston University, UK

Chapter XVIII
Security Requirements Elicitation: An Agenda for Acquisition of Human Factors........................... 316
Manish Gupta, State University of New York, Buffalo, USA
Raj Sharman, State University of New York, Buffalo, USA
Lawrence Sanders, State University of New York, Buffalo, USA

Chapter XIX
Do Information Security Policies Reduce the Incidence of Security Breaches:
An Exploratory Analysis..................................................................................................................... 326
Neil F. Doherty, Loughborough University, UK
Heather Fulford, Loughborough University, UK

Compilation of References............................................................................................................... 343

About the Contributors.................................................................................................................... 374

Index................................................................................................................................................... 381
Detailed Table of Contents

Foreword.............................................................................................................................................. xv

Preface ..............................................................................................................................................xvii

Section I
Human and Psychological Aspects

Chapter I
Human and Social Aspects of Password Authentication........................................................................ 1
Deborah S. Carstens, Florida Institute of Technology, USA

With the increasing daily reliance on electronic transactions, it is essential to have reliable security
practices for individuals, businesses, and organizations to protect their information (Vu, Bhargav, &
Proctor, 2003; Vu, Tai, Bhargav, Schultz, & Proctor, 2004). A paradigm shift is occurring as research-
ers are targeting social and human dimensions of information security as this aspect is seen as an area
where control can be exercised. Since computer security is largely dependent on the use of passwords
to authenticate users of technology, the objectives of this chapter are to (a) provide a background on
password authentication and information security, (b) provide a discussion on security techniques, hu-
man error in information security, human memory limitations, and password authentication in practice
and (c) provide a discussion on future and emerging trends in password authentication to include future
research areas.

Chapter II
Why Humans are the Weakest Link...................................................................................................... 15
Marcus Nohlberg, School of Humanities and Informatics, University of Skövde, Sweden

This chapter introduces the concept of social psychology and what forms of deception humans are
prone to fall for. It presents a background of the area and a thorough description of the most common
and important influence techniques. It also gives more practical examples of potential attacks and what
kind of influence techniques they use, as well as a set of recommendations on how to defend against
deception and a discussion on future trends. The author hopes that the understanding of why and how
the deceptive techniques work will give the reader new insights into information security in general,
and deception in particular. This insight can be used to improve training, to discover influence earlier,
or even to gain new powers of influence.
Chapter III
Impact of the Human Element on Information Security....................................................................... 27
Mahi Dontamsetti, President, M3 Security, USA
Anup Narayanan, Founder Director, First Legion Consulting, India

This chapter discusses the impact of the human element in information security. We are in the third
generation of information security evolution, having moved from a focus on technical controls, to
process-based controls, to the current focus on the human element. Using case studies, the authors detail
how existing technical and process-based controls are circumvented by focusing on weaknesses in human
behavior. Factors that affect why individuals behave in a certain way while making security decisions are discussed. A
psychology framework called the conscious competence model is introduced. Using this model, typical
individual security behavior is broken down into four quadrants using the individuals’ consciousness
and competence. The authors explain how the model can be used by individuals to recognize their
security competency level and detail steps for learning more effective behavior. Shortfalls of existing
training methods are highlighted and new strategies for increasing information security competence are
presented.

Chapter IV
The Weakest Link: A Psychological Perspective on Why Users Make Poor Security
Decisions............................................................................................................................................... 43
Ryan West, Dell, Inc., USA
Christopher Mayhorn, North Carolina State University, USA
Jefferson Hardee, North Carolina State University, USA
Jeremy Mendel, North Carolina State University, USA

The goal of this chapter is to raise awareness of cognitive and human factors issues that influence user
behavior when interacting with systems and making decisions with security consequences. This chapter
is organized around case studies of computer security incidents and known threats. For each case study,
we provide an analysis of the human factors involved based on a system model approach composed of
three parts: the user, the technology, and the environment. Each analysis discusses how the user inter-
acted with the technology within the context of the environment to actively contribute to the incident.
Using this approach, we introduce key concepts from human factors research and discuss them within
the context of computer security. With a fundamental understanding of the causes that lead users to
make poor security decisions and take risky actions, we hope designers of security systems are better
equipped to mitigate those risks.

Chapter V
Trusting Computers Through Trusting Humans: Software Verification in a Safety-Critical
Information Society.............................................................................................................................. 61
Alison Adam, University of Salford, UK
Paul Spedding, University of Salford, UK

This chapter considers the question of how we may trust automatically generated program code. The
code walkthroughs and inspections of software engineering mimic the ways that mathematicians go
about assuring themselves that a mathematical proof is true. Mathematicians have difficulty accepting
a computer generated proof because they cannot go through the social processes of trusting its construc-
tion. Similarly, those involved in accepting a proof of a computer system or computer generated code
cannot go through their traditional processes of trust. The process of software verification is bound
up in software quality assurance procedures, which are themselves subject to commercial pressures.
Quality standards, including military standards, have procedures for human trust designed into them.
An action research case study of an avionics system within a military aircraft company illustrates these
points, where the software quality assurance (SQA) procedures were incommensurable with the use of
automatically generated code.

Section II
Social and Cultural Aspects

Chapter VI
Information Security Culture as a Social System: Some Notes of Information
Availability and Sharing....................................................................................................................... 77
Rauno Kuusisto, Finland Futures Research Center, Turku School of Economics, Finland
Tuija Kuusisto, Finnish National Defense University, Finland

The purpose of this chapter is to increase understanding of the complex nature of information security
culture in a networked working environment. The viewpoint is comprehensive information exchange in a
social system. The aim of this chapter is to raise discussion about the challenges of developing an
information security culture when acting in a multicultural environment. This chapter does not introduce a
method for handling complex cultural situations, but gives some notes toward understanding what might lie
behind this complexity. Understanding the nature of this complex cultural environment is essential to form
evolving and proactive security practices. Direct answers to formulate practices are not offered in this
chapter, but certain general phenomena of the activity of a social system are pointed out. This will help
readers to apply these ideas to their own solutions.

Chapter VII
Social Aspects of Information Security: An International Perspective................................................. 98
Paul Drake, Centre for Systems Studies Business School, University of Hull, UK
Steve Clarke, Centre for Systems Studies Business School, University of Hull, UK

This chapter looks at information security as a primarily technological domain and asks what could be
added to our understanding if both technology and human activity were seen to be of equal importance.
The aim is therefore to ground the domain both theoretically and practically from a technological and
social standpoint. The solution to this dilemma is seen to be located in social theory, various aspects of
which deal with both human and technical issues, but do so from the perspective of those involved in
the system of concern. The chapter concludes by offering a model for evaluating information security
from a social theoretical perspective, and guidelines for implementing the findings.
Chapter VIII
Social and Human Elements of Information Security: A Case Study................................................. 116
Mahil Carr, Institute for Development and Research in Banking Technology, India

This chapter attempts to understand the human and social factors in information security by bringing
together three different universes of discourse—philosophy, human behavior, and cognitive science.
When these elements are combined, they unravel a new approach to the design, implementation, and
operation of secure information systems. A case study of the design of a technological solution to the
problem of extension of banking services to remote rural regions is presented and elaborated to highlight
human and social issues in information security. It identifies and examines the concept of the ‘other’ in
information security literature. The final objective is to prevent the ‘other’ from emerging and damaging
secure systems rather than introducing complex lock and key controls.

Chapter IX
Effects of Digital Convergence on Social Engineering Attack Channels........................................... 133
Bogdan Hoanca, University of Alaska Anchorage, USA
Kenrick Mock, University of Alaska Anchorage, USA

Social engineering refers to the practice of manipulating people to divulge confidential information that
can then be used to compromise an information system. In many cases, people, not technology, form
the weakest link in the security of an information system. This chapter discusses the problem of social
engineering and then examines new social engineering threats that arise as voice, data, and video net-
works converge. In particular, converged networks give the social engineer multiple channels of attack
to influence a user and compromise a system. On the other hand, these networks also support new tools
that can help combat social engineering. However, no tool can substitute for educational efforts that
make users aware of the problem of social engineering and policies that must be followed to prevent
social engineering from occurring.

Chapter X
A Social Ontology for Integrating Security and Software Engineering............................................. 148
E. Yu, University of Toronto, Canada
L. Liu, Tsinghua University, China
J. Mylopoulos, University of Toronto, Canada

As software becomes more and more entrenched in everyday life in today’s society, security looms large
as an unsolved problem. Despite advances in security mechanisms and technologies, most software sys-
tems in the world remain precarious and vulnerable. There is now widespread recognition that security
cannot be achieved by technology alone. All software systems are ultimately embedded in some human
social environment. The effectiveness of the system depends very much on the forces in that environ-
ment. Yet there are few systematic techniques for treating the social context of security together with
technical system design in an integral way. In this chapter, we argue that a social ontology at the core of
a requirements engineering process can be the basis for integrating security into a requirements driven
software engineering process. We describe the i* agent-oriented modeling framework and show how it
can be used to model and reason about security concerns and responses. A smart card example is used to
illustrate. Future directions for a social paradigm for security and software engineering are discussed.
Section III
Usability Issues

Chapter XI
Security Configuration for Non-Experts: A Case Study in Wireless Network Configuration............ 179
Cynthia Kuo, Carnegie Mellon University, USA
Adrian Perrig, Carnegie Mellon University, USA
Jesse Walker, Intel Corporation, USA

End users often find that security configuration interfaces are difficult to use. In this chapter, we explore
how application designers can improve the design and evaluation of security configuration interfaces. We
use IEEE 802.11 network configuration as a case study. First, we design and implement a configuration
interface that guides users through secure network configuration. The key insight is that users have a
difficult time translating their security goals into specific feature configurations. Our interface automates
the translation from users’ high-level goals to low-level feature configurations. Second, we develop and
conduct a user study to compare our interface design with commercially available products. We adapt
existing user research methods to sidestep common difficulties in evaluating security applications. Using
our configuration interface, non-expert users are able to secure their networks as well as expert users.
In general, our research addresses prevalent issues in the design and evaluation of consumer-configured
security applications.

Chapter XII
Security Usability Challenges for End-Users..................................................................................... 196
Steven Furnell, Centre for Information Security & Network Research, University of
Plymouth, UK

This chapter highlights the need for security solutions to be usable by their target audience and exam-
ines the problems that can be faced when attempting to understand and use security features in typical
applications. Challenges may arise from system-initiated events, as well as in relation to security tasks
that users wish to perform for themselves, and can occur for a variety of reasons. This is illustrated by
examining problems that arise as a result of reliance upon technical terminology, unclear or confusing
functionality, a lack of visible status and informative feedback to users, forcing users to make uninformed
decisions, and a lack of integration amongst the different elements of security software themselves. The
discussion draws upon a number of practical examples from popular applications as well as results from
survey and user trial activities that were conducted in order to assess the potential problems at first hand.
The findings are used as the basis for recommending a series of top-level guidelines that may be used
to improve the situation, and these are then used as the basis for assessing further examples of existing
software to determine the degree of compliance.

Chapter XIII
CAPTCHAs: Differentiating between Human and Bots.................................................................... 220
Deapesh Misra, VeriSign iDefense Security Intelligence Services, USA

The Internet has established deep roots in our day-to-day life. It has brought many revolutionary
changes in the way we do things. One important consequence has been the way it has replaced human-to-
human contact. This has also presented us with a new issue: the need to differentiate between real
humans and automated programs on the Internet. Such automated programs are usually
written with a malicious intent. CAPTCHAs play an important role in solving this problem by present-
ing users with tests which only humans can solve. This chapter looks into the need, the history, and the
different kinds of CAPTCHAs that researchers have come up with to deal with the security implications
of automated bots pretending to be humans. Various schemes are compared and contrasted with each
other, the impact of CAPTCHAs on Internet users is discussed, and, to conclude, the various possible
attacks are examined. The author hopes that the chapter will not only introduce this interesting field to
the reader in its entirety, but also stimulate thought on new schemes.

Chapter XIV
Privacy Concerns when Modeling Users in Collaborative Filtering Recommender Systems............ 247
Sylvain Castagnos, LORIA—Université Nancy 2, Campus Scientifique, France
Anne Boyer, LORIA—Université Nancy 2, Campus Scientifique, France

This chapter investigates ways to deal with privacy rules when modeling preferences of users in recom-
mender systems based on collaborative filtering. It argues that it is possible to find a good compromise
between quality of predictions and protection of personal data. Thus, it proposes a methodology that
complies with the strictest privacy laws for both centralized and distributed architectures. The authors hope
that their attempts to provide a unified vision of privacy rules through the related works and a generic
privacy-enhancing procedure will help researchers and practitioners to better take into account the ethical
and juridical constraints as regards privacy protection when designing information systems.

Section IV
Organizational Aspects

Chapter XV
An Adaptive Threat-Vulnerability Model and the Economics of Protection...................................... 262
C. Warren Axelrod, US Trust, USA

Traditionally, the views of security professionals regarding responses to threats and the management of
vulnerabilities have been biased towards technology and operational risks. The purpose of this chap-
ter is to extend the legacy threat-vulnerability model to incorporate human and social factors. This is
achieved by presenting the dynamics of threats and vulnerabilities in the human and social context. We
examine costs and benefits as they relate to threats, exploits, vulnerabilities, defense measures, incidents,
and recovery and restoration. We also compare the technical and human/social aspects of each of these
areas. We then look at future work and how trends are pushing against prior formulations and forcing
new thinking on the technical, operational risk, and human/social aspects. The reader will gain a broader
view of threats, vulnerabilities, and responses to them through incorporating human and social elements
into their security models.
Chapter XVI
Bridging the Gap between Employee Surveillance and Privacy Protection....................................... 283
Lilian Mitrou, University of the Aegean, Greece
Maria Karyda, University of the Aegean, Greece

This chapter addresses the issue of electronic workplace monitoring and its implications for employees’
privacy. Organisations increasingly use a variety of electronic surveillance methods to mitigate threats to
their information systems. Monitoring technology spans different aspects of organisational life, includ-
ing communications, desktop and physical monitoring, collecting employees’ personal data, and locat-
ing employees through active badges. The application of these technologies raises privacy protection
concerns. Throughout this chapter, we describe different approaches to privacy protection followed by
different jurisdictions. We also highlight privacy issues with regard to new trends and practices, such
as teleworking and use of RFID technology for identifying the location of employees. Emphasis is also
placed on the reorganisation of work facilitated by information technology, since frontiers between the
private and the public sphere are becoming blurred. The aim of this chapter is twofold: we discuss privacy
concerns and the implications of implementing employee surveillance technologies, and we suggest a
framework of fair practices which can be used for bridging the gap between the need to provide adequate
protection for information systems and the preservation of employees' rights to privacy.

Chapter XVII
Aligning IT Teams’ Risk Management to Business Requirements.................................................... 301
Corey Hirsch, LeCroy Corporation, USA
Jean-Noel Ezingeard, Kingston University, UK

Achieving alignment of risk perception, assessment, and tolerance among and between management
teams within an organisation is an important foundation upon which an effective enterprise information
security management strategy can be built. We argue the importance of such alignment based on infor-
mation security and risk assessment literature. Too often, lack of alignment dampens clean execution of
strategy, eroding support during development and implementation of information security programs. We
argue that alignment can be achieved by developing an understanding of enterprise risk management plans
and actions, risk perceptions, and risk culture. This is done by examining context, content, and process.
We illustrate this through the case of LeCroy Corp., showing how LeCroy managers perceive risk in
practice, and how LeCroy fosters alignment in risk perception and execution of risk management strat-
egy as part of an overall information security program. We show that in some circumstances diversity
of risk tolerance profiles aids a management team's function. In other circumstances, variances lead to
dysfunction. We have uncovered and quantified nonlinearities and special cases in LeCroy executive
management’s risk tolerance profiles.
Chapter XVIII
Security Requirements Elicitation: An Agenda for Acquisition of Human Factors........................... 316
Manish Gupta, State University of New York, Buffalo, USA
Raj Sharman, State University of New York, Buffalo, USA
Lawrence Sanders, State University of New York, Buffalo, USA

Information security is becoming increasingly important and more complex as organizations are increas-
ingly adopting electronic channels for managing and conducting business. However, state-of-the-art
systems design methods have ignored several aspects of security that arise from human involvement
or due to human factors. The chapter aims to highlight issues arising from the coalescence of the fields of
systems requirements elicitation, information security, and human factors. The objective of the chapter is
to investigate the state of human factors in information assurance requirements elicitation and to suggest
an agenda, from the perspectives of both organizations and researchers. Much research has been done in
the area of requirements elicitation, both systems and security, but, invariably, human factors have not
been taken into account during information assurance requirements elicitation. The chapter aims to find
clues and insights into the acquisition behavior of human factors in information assurance requirements
elicitation, to illustrate the current state of affairs in information assurance and requirements elicitation,
and to show why the inclusion of human factors is required.

Chapter XIX
Do Information Security Policies Reduce the Incidence of Security Breaches:
An Exploratory Analysis..................................................................................................................... 326
Neil F. Doherty, Loughborough University, UK
Heather Fulford, Loughborough University, UK

Information is a critical corporate asset that has become increasingly vulnerable to attacks from viruses,
hackers, criminals, and human error. Consequently, organizations have to prioritize the security of their
computer systems in order to ensure that their information assets retain their accuracy, confidential-
ity, and availability. While the importance of the information security policy (InSPy) in ensuring the
security of information is acknowledged widely, to date there has been little empirical analysis of its
impact or effectiveness in this role. To help fill this gap, an exploratory study was initiated that sought
to investigate the relationship between the uptake and application of information security policies and
the accompanying levels of security breaches. To this end, a questionnaire was designed, validated, and
then targeted at IT managers within large organizations in the UK. The findings presented in this chapter
are somewhat surprising, as they show no statistically significant relationships between the adoption of
information security policies and the incidence or severity of security breaches. The chapter concludes
by exploring the possible interpretations of this unexpected finding and its implications for the practice
of information security management.

Compilation of References............................................................................................................... 343

About the Contributors.................................................................................................................... 374

Index................................................................................................................................................... 381

Foreword

During the past two decades, the Internet has evolved from a technologically sophisticated tool to a
commonly used and accepted medium for communication. The promise of interconnectedness has been
realized, providing unprecedented access to people and information. Despite its promising and meteoric
rise to prominence, Internet technology also presents great challenges. The open nature of the Internet is
highly insecure, leading to new questions about the security and quality of information. Moreover, the
promise of interconnected access raises the potential for new threats, causing us to rethink the meaning
and value of privacy in this environment. Individuals and organizations are potentially endangered in
new ways by threats that may emanate from both internal and external environments.
Information security encompasses a broad range of topics and forms. It is, nonetheless, defined by
the common aim of preserving the integrity, availability, and confidentiality of information system
resources. With the introduction of information technology and the resulting security challenges that
organizations face daily, it has become essential to ensure the security of the organization’s information
and other valuable assets.
Enterprises facing security threats demand powerful and flexible approaches that can match the
dynamic environment in which these threats emerge. Challenges related to telecommunications and
data security within the computer and transmission network include threats from online fraud, identity
theft, illegal content, hacking, and piracy, to name a few. Security breaches such as distributed denial-
of-service attacks and virus infections have highlighted the vulnerability of information systems as
end-users become more dependent on these systems. Complicating the issue is the fact that computer
attacks can access personal information, raising both privacy and economic costs for individuals and
organizations. Such costs can be expected to escalate as data generation and reliance by individuals and
organizations increase on a daily basis.
Many organisations beginning to advocate security policies rely purely upon technical solutions.
However, information security is only partly about technology. A lack of understanding persists
about the strategic importance of managing information security and the need to address security from
a people, processes, and technology standpoint in order to implement a successful security strategy. In-
formation security is not a problem which needs to be solved, but a process to be managed proactively.
This process does not only require safeguarding information from the development and design stage of
the new systems to their implementation, but it also requires a more holistic emphasis that goes beyond
technology to address the social and human dimensions of communications, psychology, marketing,
and behavioural change.
This holistic approach encompasses technical and procedural controls involving technology and
human factors—the people who purchase, implement, use, and manage that technology. The human
element can become the leaky faucet that spills sensitive information, as employees are often the weakest
link when it comes to information security. People who manage and implement information security
strategies must take this reality into account as they respond to vulnerabilities that ultimately impact the
effectiveness of the organisation.
This book attempts to close this gap between technology and human factors by exploring the de-
velopment and implementation of information security strategies. Information security is not merely a
technological issue—it is also a human issue. As such, it invokes all of the complexity, unpredictability,
and wonder that human beings bring to their creative enterprises.

Sylvia Kierkegaard
President
International Association of IT Lawyers (IAITL)

Sylvia Mercado Kierkegaard (BA, MA, MSc International Business and Law Summa cum laude, PG Dipl. Law, LLM, PG Dipl.
EU Law, PhD) is the author of over 2000 publications. She is the editor-in-chief of the Journal of International Commercial
Law (DOAJ-access), International Journal of Private Law (Inderscience) and associate editor and editorial board member of
over 20 international journals, including the Computer Law and Security Report (Oxford-Elsevier), where her articles regu-
larly appear. She is the president of the International Association of IT Lawyers (IAITL) and chairs numerous international
conferences dealing with information technology and law. She is also the EU senior legal expert on information security for
the EU-China Info Society project. Her international experience includes working in Europe, the Middle East, North America,
and Asia, advising clients on a broad range of legal issues. She is a frequently invited speaker at conferences and a member of
study committees which recommend and draft policies and legislation.

Preface

It is becoming increasingly evident that the weakest links in an information-security
chain are the people, because human nature and social interactions are much easier to manipulate than
the complex technological protections of information systems. Concerns and threats regarding
human and social factors in organizational security are increasing at an exponential rate and shifting the
information security paradigm. This book brings together publications on very important, timely, and
critical issues of managing social and human aspects of information security. The book aims to provide
scholarly value to, and a contribution to, the information technology discipline. Although human and social
factors are an emerging threat to information security, there is a dearth of quality literature in the area. The
key objective is to fill a gap in the existing literature on human and social dimensions of information security by
providing readers one comprehensive source of the latest trends, issues, and research in the field. The
book provides high-quality research papers and industrial and practice articles on social and human as-
pects of information security. The book covers topics both on theoretical (research) aspects of securing
information systems and infrastructure from social engineering attacks and real-world implications and
implementations (practice) of the research.

Beyond Technology and Policy, Towards Comprehensive Information Security

With the abundance of confidential information that organizations must protect, and with consumer fraud
and identity theft at an all time high, security has never been as important as it is today for businesses
and individuals alike. An attacker can bypass millions of dollars invested in technical and non-techni-
cal protection mechanisms by exploiting the human and social aspects of information security. Since
information systems deal with human interactions and communications through the use of technology, it
is practically infeasible to separate the human elements from the technological ones. Because of this,
organizations and individuals alike must be equipped with the knowledge of what information can be
used to initiate attacks, how divulged information could precipitate further attacks and compromise the
state of their systems, and how to discern and mitigate such attacks. Businesses spend billions of
dollars annually on expensive technology for information systems security, while overlooking one of the
most glaring vulnerabilities—their employees and customers (Orgill, 2004; Schneier, 2000). Research
has indicated that human error makes up as much as 65% of incidents that cause economic loss for a
company and that security incidents caused by external threats such as computer hackers happen only
3% or less of the time (Lewis, 2003; McCauley-Bell & Crumpton, 1998). Information security cannot
be achieved from a technology standpoint alone; it also requires understanding human behavior and the
social context in which humans are embedded (Dhillon, 2007).

The 2007 CSI Computer Crime and Security Survey reports that insider abuse of network access or
e-mail (such as trafficking in pornography or pirated software) edged out virus incidents as the most
prevalent security problem, with 59% and 52% of respondents reporting each, respectively. The survey
also finds that there have been too many data breaches driven by simple human error and carelessness.
A new question in this year's survey asked what percentage of the security budget
was allocated to awareness training: almost half (48%) of respondents spend less than 1% of their security dollars
on awareness programs. For the first time this year, the survey also asked about measures organizations
had adopted to gauge the effectiveness of their security awareness training programs (CSI/FBI Survey,
2007). The survey shows that 18% of respondents do not use awareness training, implying that 4 out
of 5 respondent organizations do in fact engage in training their employees about security risks and ap-
propriate handling of sensitive data (CSI/FBI Survey, 2007). Although a strong majority performs this
kind of training, many of the respondent organizations (35%) make no effort to measure the effect of
this training on the organization. A quarter of them learn anecdotally from reported staff experiences;
roughly one third (32%) administer tests to see whether their lessons have taken hold (CSI/FBI Survey,
2007). Only about one in ten (13%) of the respondents say they test the effectiveness of the training by
checking whether employees can detect internally generated social engineering attacks (CSI/FBI Sur-
vey, 2007). These numbers quite clearly indicate that human and social elements are not given enough
consideration in the design and implementation of security programs. While a strong majority (roughly 80%)
conduct security training and awareness programs, far fewer are actually measuring the
effectiveness of those programs. All the same, we see that damages and threats from non-technical and
non-procedural elements of information security are higher than ever. No system is immune to human
ingenuity. Effective information security must be culturally ingrained and backed by strategies and pro-
cesses that are continually tested, taught, measured, and refined (Lineberry, 2007). Businesses spend a
significant portion of their annual information technology budgets on high-tech computer security. But
the firewalls, vaults, bunkers, locks, and biometrics those dollars buy can be pierced by attackers target-
ing untrained, uninformed, or unmonitored users. Some of the best tools for fighting social engineering
attacks are security awareness training and social engineering testing (Lineberry, 2007), but as we just
saw, organizations have a long way to go in implementing effective information security awareness programs
and measuring their performance. Research by Belsis, Kokolakis, and Kiountouzis (2005) also suggests that
although successful security management depends on the involvement of users and stakeholders, their
knowledge of information systems security issues may be lacking, resulting in reduced participation.
Reformed computer criminal and security consultant Kevin Mitnick popularized the term social
engineering, pointing out that it is much easier to trick someone into giving you his or her password
for a system than to spend the effort to hack in (Mitnick & Kasperavičius, 2004). He claims it to be the
single most effective method in his arsenal. In another recent survey of black hat hackers, social engi-
neering ranked as the third most widely used technique (Wilson, 2007). The survey results indicate that
63% of hackers use social engineering, while 67% use sniffers, 64% use SQL injection, and 53% use
cross-site scripting. Social engineering is an attempt to break into corporate networks and applications
by manipulating human and social elements. Along with issues surrounding social engineering, there
are several other facets to human and social elements such as usability issues, organizational aspects,
social and psychological aspects, and privacy issues that the book covers in detail. The book brings to
readers an excellent compilation of high quality and relevant articles on technology, processes, man-
agement, governance, research, and practices on human and social aspects of information security. The
book brings together articles from researchers and practitioners in the financial, legal, technology, and

information security fields through original papers on all aspects of roles and effects of human and social
dimensions of information security.

Organization of the Book

The nineteen chapters of the book are organized into four sections based on the following broad themes:

I. Human and Psychological Aspects


II. Social and Cultural Aspects
III. Usability Issues
IV. Organizational Aspects

The section on Human and Psychological Aspects focuses on some of the most important issues in
information security that relate to human, behavioral, and psychological aspects. In this section, we ex-
plore some of the interesting phenomena associated with password authentication and how human and
social factors interplay with passwords in determining the security of a system or environment, particularly
human errors and human memory characteristics. We also look into the concept of social psychology and
what forms of deception humans are prone to fall for, while providing a background of the area and a
thorough description of the most common and important influence techniques. This section also presents
a case study detailing how exploiting weaknesses in human behavior can circumvent existing technical
and procedural controls. Another case study is presented to raise awareness of cognitive and human
factors issues that influence user behaviour when interacting with systems and making decisions with
security consequences. Lastly, an action research case study is presented in this section that illustrates
that quality standards, including military standards, have procedures for human trust designed into them
in light of trust issues with automatically generated program code. The second section on Social and
Cultural Aspects contains chapters that explore and present interesting findings on information secu-
rity culture as a social system, an international perspective on social aspects of information security, a
case study to elaborate and highlight human and social issues in information security, effects of digital
convergence on social engineering attack channels, and a social ontology for integrating security and
software engineering. The third section on Usability Issues comprises chapters on prevalent
issues in the design and evaluation of consumer-configured security applications, security usability
challenges for end-users, the impact of CAPTCHAs on Internet users, and the various possible attacks
and issues with privacy rules when modeling preferences of users in recommender systems based on
collaborative filtering. The final section of the book, Organizational Aspects, investigates topics on
threats, vulnerabilities, and responses to them through incorporating human and social elements into
their security models through an adaptive threat-vulnerability model and the economics of protection,
issues surrounding employee surveillance and privacy protection, issues related to aligning IT teams’
risk management to business requirements, under-acquisition of human factors in information assurance
requirements elicitation, and an exploratory review of effectiveness of information security policies.

Overview of Chapters in the Book

With the increasing daily reliance on electronic transactions, it is essential to have reliable security
practices for individuals, businesses, and organizations to protect their information (Vu, Bhargav, &
Proctor, 2003; Vu, Tai, Bhargav, Schultz, & Proctor, 2004). A paradigm shift is occurring as research-
ers are targeting social and human dimensions of information security, as this aspect is seen as an area
where control can be exercised. Computer security is largely dependent on the use of passwords to
authenticate users of technology. In light of the significance of authentication issues, Dr. Deborah Sater
Carstens of Florida Institute of Technology, USA, in her chapter (Chapter I), “Human and Social As-
pects of Password Authentication,” provides a background on password authentication and information
security, discusses security techniques, human error in information security, human memory limitations,
and password authentication in practice, and provides a discussion on future and emerging trends in
password authentication to include future research areas.
Chapter II, "Why Humans are the Weakest Link," introduces the concept of social psychology and
what forms of deception humans are prone to fall for. It presents a background of the area and a thor-
ough description of the most common and important influence techniques. It also gives more practical
examples of potential attacks and what kind of influence techniques they use, as well as a set of recom-
mendations on how to defend against deception and a discussion on future trends. The author, Marcus
Nohlberg (University of Skövde, Sweden), hopes that the understanding of why and how the deceptive
techniques work will give the reader new insights into information security in general, and deception
in particular. This insight can be used to improve training, to discover influence earlier, or even to gain
new powers of influence.
Chapter III, "Impact of the Human Element on Information Security," discusses the impact of the
human element in information security. We are in the third generation of information security evolution,
having moved from a focus on technical controls, to process-based controls, to the current focus on the human
element. Using case studies, the authors, Mahi Dontamsetti of M3 Security, USA, and Anup Narayanan of First
Legion Consulting, India, detail how existing technical and process-based controls are circumvented by
focusing on weaknesses in human behavior. Factors that affect why individuals behave in a certain way
while making security decisions are discussed. A psychology framework called the conscious compe-
tence model is introduced. Using this model, typical individual security behavior is broken down into
four quadrants using the individuals’ consciousness and competence. The authors explain how the model
can be used by individuals to recognize their security competency level and detail steps for learning
more effective behavior. Shortfalls of existing training methods are highlighted and new strategies for
increasing information security competence are presented.
The goal of Chapter IV, "The Weakest Link: A Psychological Perspective on Why Users Make Poor
Security Decisions," is to raise awareness of cognitive and human factors issues that influence user be-
haviour when interacting with systems and making decisions with security consequences. This chapter
is organized around case studies of computer security incidents and known threats. For each case study,
the authors, Ryan West, Dell Inc., USA, Dr. Christopher B. Mayhorn, North Carolina State University,
USA, Dr. Jefferson B. Hardee, North Carolina State University, USA, and Dr. Jeremy Mendel, Clemson
University, USA, provide an analysis of the human factors involved based on a system model approach
composed of three parts: the user, the technology, and the environment. Each analysis discusses how
the user interacted with the technology within the context of the environment to actively contribute to
the incident. Using this approach, the authors introduce key concepts from human factors research and

discuss them within the context of computer security. With a fundamental understanding of the causes
that lead users to make poor security decisions and take risky actions, the authors hope designers of
security systems are better equipped to mitigate those risks.
Chapter V, "Trusting Computers Through Trusting Humans: Software Verification in a Safety-Critical
Information Society," considers the question of how we may trust automatically generated program code.
The code walkthroughs and inspections of software engineering mimic the ways that mathematicians go
about assuring themselves that a mathematical proof is true. Mathematicians have difficulty accepting a
computer generated proof because they cannot go through the social processes of trusting its construc-
tion. Similarly, those involved in accepting a proof of a computer system or computer generated code
cannot go through their traditional processes of trust. The process of software verification is bound up
in software quality assurance procedures, which are themselves subject to commercial pressures. Qual-
ity standards, including military standards, have procedures for human trust designed into them. Dr.
Alison Adam of University of Salford, UK and Dr. Paul Spedding, of University of Salford, UK present
an action research case study of an avionics system within a military aircraft company that illustrates
these points, where the software quality assurance (SQA) procedures were incommensurable with the
use of automatically generated code.
The purpose of Chapter VI, “Information Security Culture as a Social System: Some Notes of Informa-
tion Availability and Sharing," is to increase understanding of the complex nature of information security
culture in a networked working environment. The viewpoint is comprehensive information exchange in a
social system. The aim of the chapter is to raise discussion about the challenges of developing an information
security culture when acting in a multicultural environment. The authors, Dr. Rauno Kuusisto, Turku
School of Economics, Finland, and Dr. Tuija Kuusisto, Finnish National Defense University, Finland, give
some notes toward understanding what might lie behind this complexity. Understanding the nature of
this complex cultural environment is essential to form evolving and proactive security practices. Direct
answers to formulate practices are not offered in this chapter, but certain general phenomena of the activity
of a social system are pointed out. This will help readers to apply these ideas to their own solutions.
In Chapter VII, “Social Aspects of Information Security: An International Perspective,” authors, Dr.
Paul Drake and Dr. Steve Clarke of University of Hull, UK, look at information security as a primarily
technological domain, and ask what could be added to our understanding if both technology and hu-
man activity were seen to be of equal importance. The aim of the chapter is to ground the domain both
theoretically and practically from a technological and social standpoint. The solution to this dilemma is
seen to be located in social theory, various aspects of which deal with both human and technical issues,
but do so from the perspective of those involved in the system of concern. The chapter concludes by of-
fering a model for evaluating information security from a social theoretical perspective, and guidelines
for implementing the findings.
Chapter VIII, “Social and Human Elements of Information Security: A Case Study,” attempts to under-
stand the human and social factors in information security by bringing together three different universes
of discourse—philosophy, human behavior, and cognitive science. When these elements are combined,
they unravel a new approach to the design, implementation, and operation of secure information systems.
A case study of the design of a technological solution to the problem of extension of banking services
to remote rural regions is presented and elaborated to highlight human and social issues in information
security. The author, Dr. Mahil Carr, Institute for Development and Research in Banking Technology,
India, in the chapter, has also identified and examined the concept of the ‘other’ in information security
literature. The final objective is to prevent the ‘other’ from emerging and damaging secure systems rather
than introducing complex lock and key controls.
Social engineering refers to the practice of manipulating people to divulge confidential information
that can then be used to compromise an information system. In many cases, people, not technology,
form the weakest link in the security of an information system.
In Chapter IX, “Effects of Digital Convergence on Social Engineering Attack Channels,” the authors,
Dr. Bogdan Hoanca and Dr. Kenrick Mock of University of Alaska Anchorage, USA, discuss the prob-
lem of social engineering and then examine new social engineering threats that arise as voice, data, and
video networks converge. In particular, converged networks give the social engineer multiple channels
of attack to influence a user and compromise a system. On the other hand, these networks also support
new tools that can help combat social engineering. However, no tool can substitute for educational ef-
forts that make users aware of the problem of social engineering and policies that must be followed to
prevent social engineering from occurring.
As software becomes more and more entrenched in everyday life in today’s society, security looms
large as an unsolved problem. Despite advances in security mechanisms and technologies, most software
systems in the world remain precarious and vulnerable. There is now widespread recognition that security
cannot be achieved by technology alone. All software systems are ultimately embedded in some human
social environment. The effectiveness of the system depends very much on the forces in that environ-
ment. Yet there are few systematic techniques for treating the social context of security together with
technical system design in an integral way. In Chapter X, “A Social Ontology for Integrating Security and
Software Engineering,” the authors, Dr. E. Yu and Dr. J. Mylopoulos of University of Toronto, Canada
and Dr. L. Liu of Tsinghua University, China, argue that a social ontology at the core of a requirements
engineering process can be the basis for integrating security into a requirements driven software engineer-
ing process. The authors describe the i* agent-oriented modeling framework and show how it can be used to model and reason about security concerns and responses. A smart card example is used to illustrate the approach.
Future directions for a social paradigm for security and software engineering are discussed.
End users often find that security configuration interfaces are difficult to use. In Chapter XI, “Security
Configuration for Non-experts: A Case Study in Wireless Network Configuration,” Cynthia Kuo and Dr.
Adrian Perrig of Carnegie Mellon University and Jesse Walker of Intel Corporation, USA explore how
application designers can improve the design and evaluation of security configuration interfaces. The
authors use IEEE 802.11 network configuration as a case study. First, the authors design and implement
a configuration interface that guides users through secure network configuration. The key insight is that
users have a difficult time translating their security goals into specific feature configurations. Their interface automates the translation from users’ high-level goals to low-level feature configurations. Second, the authors develop and conduct a user study to compare their interface design with commercially available products. The authors adapt existing user research methods to sidestep common difficulties in evaluating security applications. Using the authors’ configuration interface, non-expert users are able to secure their networks as well as expert users can. In general, the research addresses prevalent issues in the design and
evaluation of consumer-configured security applications.
Chapter XII, “Security Usability Challenges for End-users,” highlights the need for security solutions
to be usable by their target audience and examines the problems that can be faced when attempting to
understand and use security features in typical applications. Challenges may arise from system-initi-
ated events, as well as in relation to security tasks that users wish to perform for themselves, and can
occur for a variety of reasons. This is illustrated by examining problems that arise as a result of reliance
upon technical terminology, unclear or confusing functionality, lack of visible status and informative feedback to users, forcing users to make uninformed decisions, and a lack of integration amongst the
different elements of security software themselves. Dr. Steven M. Furnell of University of Plymouth,
UK discusses a number of practical examples from popular applications, as well as results from survey
and user trial activities that were conducted in order to assess the potential problems at first hand. The
findings are used as the basis for recommending a series of top-level guidelines that may be used to improve the situation, and these guidelines are then used as the basis for assessing further examples of existing software to determine their degree of compliance.
The Internet has established firm, deep roots in our day-to-day life. It has brought many revolutionary changes in the way we do things. One important consequence has been the way it has replaced human-to-human contact. This has also presented a new issue: the requirement to differentiate between real humans and automated programs on the Internet. Such automated programs are usually written with malicious intent. CAPTCHAs play an important role in solving this problem by presenting users with
tests that only humans can solve. Chapter XIII, “CAPTCHAs—Differentiating between Human and Bots”
looks into the need, the history, and the different kinds of CAPTCHAs that researchers have come up
with to deal with the security implications of automated bots pretending to be humans. Various schemes are compared and contrasted with each other, the impact of CAPTCHAs on Internet users is discussed, and, to conclude, the various possible attacks are examined. The author, Dr. Deapesh Misra of Verisign, USA, hopes that the chapter will not only introduce this interesting field to the reader in its entirety, but also stimulate thought on new schemes.
Chapter XIV, “Privacy Concerns when Modeling Users in Collaborative Filtering Recommender
Systems” investigates ways to deal with privacy rules when modeling preferences of users in recom-
mender systems based on collaborative filtering. It argues that it is possible to find a good compromise between quality of predictions and protection of personal data. Thus, it proposes a methodology that complies with the strictest privacy laws for both centralized and distributed architectures. The authors, Dr. Sylvain Castagnos and Dr. Anne Boyer of LORIA—Université Nancy 2, Campus Scientifique, France, hope that their attempt to provide a unified vision of privacy rules through the related works and a
generic privacy-enhancing procedure will help researchers and practitioners to better take into account the
ethical and juridical constraints as regards privacy protection when designing information systems.
Traditionally, the views of security professionals regarding responses to threats and the manage-
ment of vulnerabilities have been biased towards technology and operational risks. The purpose of this
chapter is to extend the legacy threat-vulnerability model to incorporate human and social factors. This
is achieved by presenting the dynamics of threats and vulnerabilities in the human and social context.
Dr. Warren Axelrod of US Trust, USA, in his chapter (Chapter XV), “An Adaptive Threat-Vulnerability Model and the Economics of Protection,” examines costs and benefits as they relate to threats, exploits,
vulnerabilities, defense measures, incidents, and recovery and restoration. The author also compares the
technical and human/social aspects of each of these areas. The author then looks at future work and how
trends are pushing against prior formulations and forcing new thinking on the technical, operational risk,
and human/social aspects. The reader will gain a broader view of threats, vulnerabilities, and responses
to them through incorporating human and social elements into their security models.
Chapter XVI, “Bridging the Gap between Employee Surveillance and Privacy Protection” addresses
the issue of electronic workplace monitoring and its implications for employees’ privacy. Organizations
increasingly use a variety of electronic surveillance methods to mitigate threats to their information sys-
tems. Monitoring technology spans different aspects of organizational life, including communications,
desktop, and physical monitoring, collecting employees’ personal data, and locating employees through
active badges. The application of these technologies raises privacy protection concerns. Throughout this
chapter, Dr. Lilian Mitrou and Dr. Maria Karyda of University of the Aegean, Greece, describe different
approaches to privacy protection followed by different jurisdictions. The authors also highlight privacy
issues with regard to new trends and practices, such as tele-working and use of RFID technology for
identifying the location of employees. Emphasis is also placed on the reorganization of work facilitated by
information technology, since frontiers between the private and the public sphere are becoming blurred.
The aim of this chapter is twofold: it discusses privacy concerns and the implications of implementing employee surveillance technologies, and it suggests a framework of fair practices which can be used to bridge the gap between the need to provide adequate protection for information systems and the preservation of employees’ rights to privacy.
Achieving alignment of risk perception, assessment, and tolerance among and between management
teams within an organisation is an important foundation upon which an effective enterprise information
security management strategy can be built. Authors of Chapter XVII, “Aligning IT Teams’ Risk Man-
agement to Business Requirements,” Dr. Corey Hirsch of LeCroy Corporation, USA and Dr. Jean-Noel
Ezingeard of Kingston University, UK, argue the importance of such alignment based on information
security and risk assessment literature. Too often, lack of alignment dampens clean execution of strategy,
eroding support during development and implementation of information security programs. Authors
argue that alignment can be achieved by developing an understanding of enterprise risk management plans and actions, risk perceptions, and risk culture. This is done by examining content, context, and process. The authors illustrate this through the case of LeCroy Corp., showing how LeCroy managers perceive risk in practice and how LeCroy fosters alignment in risk perception and execution of risk management strategy as part of an overall information security program. They show that in some circumstances diversity of risk tolerance profiles aids a management team’s function. In other circumstances, variances lead to dysfunction. The authors have uncovered and quantified nonlinearities and special cases in LeCroy
executive management’s risk tolerance profiles.
Information security is becoming increasingly important and more complex as organizations are in-
creasingly adopting electronic channels for managing and conducting business. However, state-of-the-art
systems design methods have ignored several aspects of security that arise from human involvement or
due to human factors. Manish Gupta, Dr. Raj Sharman, and Dr. Lawrence Sanders aim to highlight issues arising from the coalescence of the fields of systems requirements elicitation, information security, and human factors in Chapter XVIII, “Systems Security Requirements Elicitation: An Agenda for Acquisition of Human Factors.” The objective of the chapter is to investigate and suggest an agenda for the state of human factors in information assurance requirements elicitation from the perspectives of both organizations and researchers. Much research has been done in the area of requirements elicitation, both systems and security, but, invariably, human factors have not been taken into account during information assurance requirements elicitation. The chapter aims to find clues and insights into the acquisition of human factors in information assurance requirements elicitation, to illustrate the current state of affairs in information assurance and requirements elicitation, and to show why the inclusion of human factors is required.
Information is a critical corporate asset that has become increasingly vulnerable to attacks from viruses,
hackers, criminals, and human error. Consequently, organizations have to prioritize the security of their
computer systems in order to ensure that their information assets retain their accuracy, confidentiality,
and availability. While the importance of the information security policy (InSPy) in ensuring the security
of information is acknowledged widely, to date there has been little empirical analysis of its impact or
effectiveness in this role. To help fill this gap, Chapter XIX, “Do Information Security Policies Reduce the Incidence of Security Breaches: An Exploratory Analysis,” presents an exploratory study that sought to investigate the relationship between the uptake and application of information security
policies and the accompanying levels of security breaches. To this end, authors, Dr. N.F. Doherty and Dr.
H. Fulford of Loughborough University, UK, designed, validated, and then targeted a questionnaire at
IT managers within large organizations in the UK. The findings presented in this chapter are somewhat
surprising, as they show no statistically significant relationships between the adoption of information
security policies and the incidence or severity of security breaches. The chapter concludes by exploring
the possible interpretations of this unexpected finding and its implications for the practice of informa-
tion security management.
The book is aimed at a primary audience of professionals, scholars, researchers, and academicians working in the fast-evolving and growing field of information security. Practitioners and managers working in the information technology or information security area across all industries would vastly improve their knowledge and understanding of the critical human and social aspects of information security.

References

Belsis, P., Kokolakis, S., & Kiountouzis, E. (2005). Information systems security from a knowledge
management perspective. Information Management & Computer Security, 13(3), 189-202.
CSI/FBI Survey. (2007). Twelfth annual CSI/FBI computer crime and security survey. Retrieved Sep-
tember 22, 2007, from http://i.cmpnet.com/v2.gocsi.com/pdf/CSISurvey2007.pdf
Dhillon, G. (2007). Principles of information systems security: Text and cases. Danvers: John Wiley &
Sons.
Lewis, J. (2003). Cyber terror: Missing in action. Knowledge, Technology & Policy, 16(2), 34-41.
Lineberry, S. (2007). The human element: The weakest link in information security. Journal of Accoun-
tancy, 204(5). Retrieved from http://www.aicpa.org/pubs/jofa/nov2007/human_element.htm
McCauley-Bell, P. R., & Crumpton, L. L. (1998). The human factors issues in information security: What are they and do they matter? In Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, USA (pp. 439-442).
Mitnick, K., & Kasperavičius, A. (2004). CSEPS course workbook. Mitnick Security Publishing.
Orgill, G. L., Romney, G. W., Bailey, M. G., & Orgill, P. M. (2004, October 28-30). The urgency for
effective user privacy-education to counter social engineering attacks on secure computer systems. In
Proceedings of the 5th Conference on Information Technology Education CITC5 ‘04, Salt Lake City,
UT, USA, (pp. 177-181). New York: ACM Press.
Schneier, B. (2000). Secrets and lies. John Wiley and Sons.
Vu, K. P. L., Bhargav, A., & Proctor, R. W. (2003). Imposing password restrictions for multiple accounts:
Impact on generation and recall of passwords. In Proceedings of the 47th Annual Meeting of the Human
Factors and Ergonomics Society, USA (pp. 1331-1335).
Vu, K. P. L., Tai, B. L., Bhargav, A., Schultz, E. E., & Proctor, R. W. (2004). Promoting memorability and security of passwords through sentence generation. In Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting, USA (pp. 1478-1482).
Wilson, T. (2007). Five myths about black hats, February 26, 2007. Retrieved April 15, 2007, from
http://www.darkreading.com/document.asp?doc_id=118169
Section I
Human and Psychological Aspects
Chapter I
Human and Social Aspects
of Password Authentication
Deborah S. Carstens
Florida Institute of Technology, USA

Abstract

With the increasing daily reliance on electronic transactions, it is essential to have reliable security
practices for individuals, businesses, and organizations to protect their information (Vu, Bhargav, &
Proctor, 2003; Vu, Tai, Bhargav, Schultz, & Proctor, 2004). A paradigm shift is occurring as research-
ers are targeting social and human dimensions of information security, as this aspect is seen as an area
where control can be exercised. Since computer security is largely dependent on the use of passwords
to authenticate users of technology, the objectives of this chapter are to (a) provide a background on
password authentication and information security, (b) provide a discussion on security techniques, hu-
man error in information security, human memory limitations, and password authentication in practice,
and (c) provide a discussion on future and emerging trends in password authentication to include future
research areas.

INTRODUCTION

With the increasing daily reliance on electronic transactions, it is essential to have reliable security practices for individuals, businesses, and organizations to protect their information (Vu et al., 2003; Vu et al., 2004). A paradigm shift is occurring as researchers are targeting the social and human dimensions of information security, as this aspect is seen as an area where control can be exercised. Since computer security is largely dependent on the use of passwords to authenticate users of technology, the mission of this chapter is to address the human and social aspects of password authentication (Wiedenbeck, Waters, Birget, Brodskiy, & Memon, 2005). Users are challenged to remember long and random passwords and therefore too often choose passwords that may have low security strength or be difficult to remember (Wiedenbeck et al., 2005; Yan, Blackwell, Anderson, & Grant, 2004). As the number of individuals using computers and networks has increased, so has the level of threat of security breaches against these computers and networks. Carnegie Mellon's computer emergency response team (CERT) (2007) has collected statistics showing that six security incidents were reported in 1988 compared to 137,529 in 2003. Furthermore, CERT (2007) reported that 171 vulnerabilities were reported in 1995 in comparison to 8,064 in 2006. In addition, the Federal Bureau of Investigation (FBI) conducted a survey in which 40% of organizations claimed that system penetrations from outside their organization had increased from the prior year by 25% (Ives, Walsh, & Schneider, 2004).

The rapid expansion in computing and networking has thus amplified the need to perpetually manage information security within an organization. Events such as 9/11 and the war on terrorism have also underscored an increased need for vigilance regarding information security. Organizations, government, and private industry are currently trying to adjust to the burden of this heightened need for information security, and, as an example of this, the U.S. Department of Homeland Security (2002) has focused particular efforts on ensuring information security. In light of the current context of universal computing and the realistic threats that exist to organizations' information systems, there is a strong need for more research in the field of information security. The main objectives of this chapter are to (a) provide a background on password authentication and information security, (b) provide a discussion of the main thrust of the chapter, the human and social aspects of password authentication, including the topics of security techniques, human error in information security, human memory limitations, and password authentication in practice, and (c) provide a discussion of future and emerging trends in password authentication, including future research areas and concluding remarks in the area of the human and social aspects of password authentication.

Password Authentication Background

In this world of ever-increasing technological advances, users of technology are at risk of developing information overload as the number and complexity of passwords and other electronic identifiers increase. Previous investigations by the National Institute of Standards and Technology (NIST, 1992) have suggested that more than 50% of incidents that occur within government and private organizations have been connected to human errors. The role that people play in maintaining information security is an important one that the literature has only begun to address. As researchers improve their understanding of how social and human factors limitations affect information security, they can provide organizations with insight into improving information security policies. Passwords adopted by users are too easily cracked (Proctor, Lien, Vu, Schultz, & Salvendy, 2002). In particular, organizations can benefit from research revealing how best to minimize the demands that passwords place on the human memory system while maintaining the strength of a password (Carstens, McCauley-Bell, Malone, & DeMara, 2004).

The application of human factors and, specifically, cognitive theory principles can be used to positively influence system security when organizations follow password guidelines that do not exceed human memory limitations. Ultimately, user memory overload can be minimized when all aspects of a password authentication system have been designed in a way that capitalizes on the way the human mind works and also recognizes its limitations. As Hensley (1999) wrote, “Password(s) do little good if no one remembers them.” Nevertheless, the exponential growth in vulnerabilities and security incidents reported by CERT (2007) underscores that the design of password guidelines should be part of a comprehensive approach that still maintains the strength of passwords as necessitated by the information technology (IT) community. Human and social factors in organizational security, such as the impact of human error on information security, are important issues that, left unresolved, can have adverse effects on industry.
Information Security Background

Ensuring effective information security involves making information accessible to those who need the information while maintaining the confidentiality and integrity of that material. There are three categories used to classify information security risks: (a) confidentiality, (b) integrity, and (c) accessibility or availability of information (U.S. Department of Homeland Security, 2002). A security breach in confidentiality can be defined as occurring when sources not intended to have knowledge of the information have been provided with this knowledge. Sending sensitive data to the wrong person is an example of this category. A security breach in integrity is an incident where there is an unauthorized or incorrect change made to an information source, such as a financial accounting error that causes the information in the database to be inaccurate. A security breach in accessibility occurs when either access for those entitled to a system is denied or access is given to those who are not authorized to access the system. An example of this category would be an authorized user of a system who is unable to access a system due to forgetting his or her password. Given these definitions, a human error security incident is defined as any human-error-related event that compromises information's confidentiality, integrity, or accessibility (Carstens et al., 2004).

The development of security is similar to any other design or development process in that the involvement of users and other stakeholders is crucial to the success of an organization's security policies. Research by Belsis, Kokolakis, and Kiountouzis (2005) suggests that although successful security management depends on the involvement of users and stakeholders, knowledge of information systems security issues may be lacking, resulting in reduced participation. Organizations seek to retain knowledge in their operations, but the knowledge captured on security-related topics is not necessarily handled with the same consistency and rigor. Often, the process of gathering security knowledge is an ad hoc one, relying either on the hiring of external consultants or on random internal security experts. Security knowledge can exist in both tacit and explicit form, across four modes. The first mode is socialization, which results strictly in tacit knowledge. This type of knowledge occurs when individuals interact with each other while sharing their knowledge. The second mode is externalization, in which tacit knowledge is transformed into explicit knowledge. This type of knowledge results in the creation of metaphors, models, and analogies that an organization will use. The third mode is combination, resulting in knowledge sharing through documents, meetings, and better structuring of existing knowledge. As the term combination implies, both tacit and explicit knowledge exist within this mode. The fourth mode is internalization, whereby explicit knowledge becomes tacit knowledge. This occurs when an individual works frequently on a certain project where documented knowledge is utilized, with the individual's experience expanding as he or she gains more familiarity with the work. The four modes together relate to how knowledge transformation within an organization occurs and contributes to the organizational memory. Lahaie (2005) discusses the threat of corporate memory loss when individuals leave an organization. Therefore, an information system knowledge management system could be deployed within an organization to ensure that knowledge, specifically in the area of security, is documented and accessible to aid in the success of an organization's security policy (Belsis et al., 2005).

An organization's security policy is becoming an increasingly important topic as a means to protect organizations' information assets. Risk analysis can be used to determine threats and identify processes to secure computer assets (Gerber, von Solms, & Overbeek, 2001). However, securing computer assets is no longer adequate in today's information society, and we must therefore also identify alternative approaches to securing information assets. Proactive security measures should be undertaken due to the degree of sensitive information within organizations and their constantly changing technological environment (Sanderson & Forcht, 1996). Information security management personnel are facing unprecedented challenges to ensure the safety of information assets within an organization (Hong, Chi, Chao, & Tang, 2003). By looking at theories on security policy, risk management, control and auditing, management system, and contingency theory, practitioners and researchers can work together to build a comprehensive theory of information security management and, specifically, password management. A security policy focuses on planning information security requirements, forming consensus in an organization, drafting and implementing a policy, and revising the policy as deemed necessary. Risk management is concerned with the evaluation of organizational security risk factors for the purpose of ensuring that the level of risk at any given time for an organization is acceptable. Control and auditing theory investigates and implements security standards for an organization as a mechanism to control systems. The auditing segment of the theory provides a means to assess control performance to ensure the standards are being maintained across an organization. Management system theory places value on the need for information security documentation in an organization as a tool and guide to ensure control and protection of information assets. Contingency theory, as it relates to information security management, serves to establish guidelines on how an organization will prevent and respond to system threats and vulnerabilities. The birth of an integrated theory occurs through ensuring that all of these theories are considered by information system practitioners and researchers in meeting organizational security objectives. An organization's culture in the area of security policy could therefore shift to an integrated theory involving multiple personnel coming together in the development and maintenance of the security policy.

The next section of the chapter discusses the main issues, controversies, and problems in the area of information security and, specifically, password authentication. By comparing and contrasting what has been accomplished in the past with what is currently being undertaken, solutions and recommendations for organizations in regard to managing their information assets will be uncovered.
HUMAN AND SOCIAL ASPECTS IN PASSWORD AUTHENTICATION

Security Techniques

Research on passwords remains necessary in spite of a movement towards alternative security techniques, such as bioidentifiers, individual certificates, tokens, and smart cards. Smart cards communicate directly with the target system and run the authentication procedure themselves. A survey of 4,254 companies in 29 countries was conducted by Dinnie (1999) to identify a global perspective on information security. The survey indicated that password authentication in the USA is the preferred security method, utilized 62% of the time, as opposed to smart card authentication, used only 8% of the time, and certificates, used 9% of the time. In Australia, password authentication is used 67% of the time, as opposed to smart card authentication at only 9% and certificates at 5%. The remaining countries surveyed showed password authentication at 58%, with smart card authentication at 4%. However, the problem with password authentication, smart cards, and tokens is that they verify the information or item presented but not the identity of the person presenting it (Harris & Yen, 2002). Therefore, bioidentifiers will likely become increasingly popular, as they are the only way to identify who the person is rather than what the person has or knows. The main problems with bioidentifiers are their cost; the inconvenience of users needing to prepare to be scanned and needing to be enrolled at multiple computer systems; the potential to fool systems, leading to unauthorized access; and the fear individuals have of their biometric data being stolen. Therefore, password usage for both professional and personal purposes is still a common means of authentication, necessitating the need to further understand the human and social aspects of information security.
Human Error in Information Security

Research has indicated that human error makes up as much as 65% of the incidents that cause economic loss for a company, and that security incidents caused by external threats such as computer hackers happen only 3% or less of the time (Lewis, 2003; McCauley-Bell & Crumpton, 1998; NIST, 1992). However, there is only a minimal effort to address the human error risks in information security, which are among the highest causes of information security incidents (McCauley-Bell & Crumpton, 1998; Wood & Banks, 1993). A common challenge faced by individuals today is the need to simultaneously maintain passwords for many different systems in their work, school, and personal lives. Research conducted by Wiedenbeck et al. (2005) suggests that stringent rules for passwords lead to poor password practices that compromise overall security. Human limitations can compromise password security because users are unable to remember passwords and therefore keep insecure records of their passwords, such as writing a password down on paper (Yan et al., 2004). Organizational security policies need to be adhered to by employees to ensure protection of an organization's information assets. Therefore, changes in security policies using integrated theory should be considered, resulting in policies being better followed.

A survey in the area of the human impact on information security indicated that 37% of survey participants never change their work and/or school passwords and that 69% of survey participants never change their personal passwords (Carstens et al., 2004). The same research suggests that when prompted to replace a current password, 43% of survey participants changed their work and/or school passwords back to a password they had used in the past; 33% of survey participants indicated changing their personal passwords back to an old password as well. The survey research suggests that even with the IT community stressing the importance of using secure passwords, not writing passwords on paper, changing passwords often, and using different passwords for all systems, a person may compromise the strength of their password due to human information processing limitations. Proctor et al. (2002) performed experiments testing passwords between five and eight characters in length. The research suggests that increasing password character length to a minimum of six to eight characters reduces crackability and therefore increases password strength, in terms of security. Another study suggests that crack-resistant passwords were achieved through the use of a sentence generation password method requiring the user to embed a digit and special character into the password (Vu et al., 2004). However, memorability issues occurred for users from adding the digit and special character to the password.
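To see why even a few additional characters matter for crackability, consider the size of the brute-force search space. The short sketch below is our own illustration, not part of the Proctor et al. (2002) study; the 95-character printable ASCII alphabet is an assumption.

    # Illustrative sketch only: brute-force search space for passwords
    # drawn from the 95 printable ASCII characters, at the lengths tested
    # by Proctor et al. (2002). The alphabet size is our assumption.
    ALPHABET = 95

    for length in (5, 6, 7, 8):
        keyspace = ALPHABET ** length
        print(f"length {length}: {keyspace:.2e} candidate passwords")

    # length 5: ~7.74e+09 versus length 8: ~6.63e+15 -- roughly a
    # million-fold increase, which is why longer minimums reduce crackability.

The arithmetic suggests why a six-to-eight-character minimum is so effective against brute-force guessing, even though, as the studies above show, it raises the memory burden on the user.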
Human Memory Limitations

Miller's (1956) chunking theory and Cowan's (2001) research are useful to consider regarding human and social factors in organizational security, specifically when developing a model for password guidelines. Chunking theory classifies data in terms of chunks and indicates that the capacity of working memory is 7±2 chunks of information. More recent research suggests that the mean memory capacity in adults is only three to five chunks, with a range of two to six chunks as the real capacity limit (Cowan, 2001). A chunk of data is defined as a letter, digit, word, or other unit, such as a date (Miller, 1956). A chunk is further described as a set of adjacent stimulus units that are closely tied together by associations in the user's long-term memory. Miller suggests that merely turning information into a meaningful chunk of data can increase a person's short-term memory capacity. This occurs because chunking data places the input into subsets that are remembered as single units. A person's short-term memory capacity is reduced if the person tries to remember isolated digits or letters rather than grouping or recoding the information into chunks of data. Chunking then becomes useful in the development of passwords by creating a meaningful sequence of stimuli within the total string of data; that is, chunks serve as an integral representation of data that are already stored in a person's long-term memory.

Similar to Miller's chunking theory, Newell, Shaw, and Simon (1961) suggest that highly meaningful words are easier for a person to learn and remember than less meaningful words, with meaningfulness being defined by the person's number of associations with the word. Vu et al. (2003) suggest that passwords could be more memorable if users composed their passwords of familiar characters such as phone numbers. Memorizing a string of words that represent complete concepts is easier than memorizing an unrelated list of words, suggests Straub (2004). Building on Miller's work, Golbeck (2002) suggests that schemas can serve as the basis for chunks because they provide a meaningful method for grouping information. A schema is defined as a mental model that makes recall of an item easier for users. Mental models are sets of beliefs that a person has about how a system works; the person therefore interacts with the system based on these beliefs (Norman, 1988).

Research suggests that turning information into a meaningful chunk of data can increase a person's short-term memory capacity. For example, a study conducted by Loftus, Dark, and Williams (1979), which tested short-term memory retention among ground control and student pilots through an examination of communication errors, found that recall was better when material was chunked. In addition, Preczewski and Fisher (1990) studied the format of call signs made up of any series of letters and digits used by the military in secure radio communications. The findings indicate that the size of the chunks influenced the accuracy of short-term retention. Furthermore, mixing letters and digits within one chunk was more difficult to recall than having only letters or only digits make up the chunk, because the mixed chunk of letters and digits lacked meaning. This research therefore suggests that memory is enhanced when the person can make meaning of the data string.
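The effect is easy to see with a worked example of our own (not one drawn from the studies cited above): a twelve-digit string overwhelms working memory when held as isolated digits, yet the same string held as three familiar years sits comfortably within Cowan's two-to-six-chunk range.

    # A worked illustration of chunking; the example string is our own.
    digits = "149217761941"

    as_digits = list(digits)               # 12 isolated items: exceeds 7±2
    as_chunks = ["1492", "1776", "1941"]   # 3 meaningful "year" chunks: within 2-6

    print(len(as_digits), "unchunked items vs", len(as_chunks), "chunks")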
Wickens (1992) suggests that chunking should be used whenever possible because of people's working memory limitations. Further, he describes chunking as a strategy or mnemonic device that may be taught. This mnemonic aspect is what makes chunking a helpful way for organizations and individuals to develop passwords that do not exceed human memory limitations. Therefore, system designers, or in this case system password guideline designers, should not exceed the low end of Miller's 7±2 chunk scale. Proctor et al. (2002) performed research in which Miller's (1956) chunking theory was a consideration in testing passwords of different lengths, between five and eight characters, due to Miller's 7±2 chunk scale. In a study conducted by Vu et al. (2004), a sentence generation method was utilized to produce a crack-resistant password through the user being required to embed a special character and digit into the sentence. User memorability of these generated passwords declined: it took users twice as long to recall the passwords, users made twice as many errors in recalling the password, and users forgot the password twice as often. The researchers suggest that the errors occurred due to users forgetting the sentence generated and the special character and/or digit embedded in the sentence. Furthermore, participants experienced difficulty remembering the digits and/or symbols, which the researchers attributed to the symbols and/or digits not being meaningfully related to the sentence. Vu et al. (2003) conducted a different study analyzing the effects of password generation and recall across multiple accounts, suggesting that increasing demands on human memory decrease the level of remembrance of passwords.

Carstens, Malone, and McCauley-Bell (2006) performed a study at a large federal agency to determine whether a difference exists in people's ability to remember complex passwords of different difficulty levels. Four different types of passwords were tested. The first password did not use chunking theory, and participants were required to follow these guidelines: the password must be at least seven characters in length; must have a combination of symbols and letters; cannot use the same term more than twice; must not spell out a dictionary word or proper noun; and cannot be personally relevant data such as an individual's social security number, street address, birth date, and so forth. The second password was a two-chunk password and required participants to use the following guidelines: the password must contain the participant's first and last initials using a combination of both uppercase and lowercase letters (first chunk), and must contain the participant's federal agency start date using different types of symbols as day, month, and year separators (second chunk). The third password was a three-chunk password requiring participants to use the following guidelines: the password must contain the participant's first and last initials formatted using a combination of both uppercase and lowercase letters (first chunk); must contain the participant's federal agency start date using different types of symbols as day, month, and year separators (second chunk); and must contain the participant's mother's first name initial in uppercase and maiden name initial in lowercase (third chunk). The fourth password tested was a four-chunk password comprised of the following criteria: participants selected two meaningful dates that were not easily accessible to the public, using a symbol of choice as day/month/year separators (two chunks), and selected two sets of initials that contained at least one uppercase and one lowercase letter (two chunks). For each of the passwords tested, different guidelines were given. All of the guidelines mandated that study participants choose passwords with a combination of numbers, letters, and special characters. The passwords tested that included chunking theory, which were the second, third, and fourth passwords, gave participants written guidelines informing them how their password should be developed, such as with the use of their mother's maiden name initials. However, only for the four-chunk password was verbal training given to participants in how to create meaningful passwords. The results indicate that a password comprised of meaningful chunks is easier to recall than a password with random data, such as the seven-character password tested, even if the chunked password contains additional characters. Individuals were better able to recall the passwords in the two-chunk, three-chunk, and four-chunk conditions, which indicates that data in the actual passwords could be meaningfully chunked together by the individual. Furthermore, the results indicate that an individual is able to recall a two-chunk password as easily as a three-chunk or even a four-chunk password. Results further suggest that as long as the information in a password is composed of meaningful information unique to an individual, human memory capabilities enable an individual to recall up to four chunks of data consisting of up to 22 characters.
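To make the four-chunk format concrete, the sketch below assembles a password of that shape from hypothetical personal data. All values are invented for illustration; the study prescribed only the structure, not any code.

    # Hypothetical four-chunk password in the format of the Carstens et al.
    # (2006) study: two meaningful dates with a chosen separator symbol
    # (two chunks) plus two sets of mixed-case initials (two chunks).
    dates = ["14*07*92", "03*11*99"]   # invented dates, "*" as separator
    initials = ["Ds", "Cb"]            # invented initials, mixed case

    password = "".join(dates + initials)
    print(password, len(password))     # 14*07*9203*11*99DsCb 20 (within 22)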
Password Authentication in Practice

This chapter offers guidance to those who design security policies by providing a practical guide to password management, as displayed in Figure 1. The first component of password management is to use integrated theory in the development of security policies. Integrated theory insists on pulling in individuals from different areas within an organization, along with expertise from many important disciplines such as risk, security, control and auditing, management, and contingency theory. This enables a systematic approach to the management of passwords and overall security within an organization.

The second component of password management is to provide guidelines for individuals within organizations to follow that result in secure passwords. Previous research by Carstens et al. (2006) developed the secure password guidelines listed below:

• Passwords must be a combination of symbols, numbers, and letters.
• Passwords cannot use the same character more than twice.
• Passwords must not spell out words that are found in a dictionary or use a proper noun such as a name of a person, pet, place, or thing.
• Passwords cannot contain information easily accessible to the public, including but not limited to a social security number, street address, family members' birthdays, and wedding anniversary dates.
• Passwords contain two to four chunks of data and are comprised of 10 to 22 characters in length, depending on the character length capabilities of any given system.
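The mechanical parts of these guidelines are straightforward to check in software; what no checker can verify is whether the chunks are meaningful to the user. The sketch below is our own minimal illustration of such a check, not a tool from the research; the word list, names, and any thresholds beyond the published rules are assumptions.

    import string

    # Minimal sketch of the mechanical rules above. "Meaningful chunks"
    # cannot be verified automatically, so the chunk rule is approximated
    # by the published 10-22 character window; the tiny word list is a
    # stand-in for a real dictionary.
    COMMON_WORDS = {"password", "welcome", "dragon"}

    def check_password(pw: str) -> list[str]:
        problems = []
        if not (any(c.isalpha() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            problems.append("must mix letters, numbers, and symbols")
        if any(pw.count(c) > 2 for c in set(pw)):
            problems.append("uses a character more than twice")
        if any(word in pw.lower() for word in COMMON_WORDS):
            problems.append("contains a dictionary word")
        if not 10 <= len(pw) <= 22:
            problems.append("should be 10 to 22 characters long")
        return problems

    print(check_password("Mb#=43iemf"))  # the chapter's example passes: []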
The purpose of the password guideline research was to evaluate the impact of password demands as a means of authentication and to mitigate the risks that result when these demands exceed human capabilities. The intent was to develop password guidelines that do not exceed human memory limitations, yet maintain the strength of passwords. The password guidelines developed in the research had individuals compose their passwords of relevant and meaningful data that are not accessible to the public. Some industries do suggest that system users compose passwords of meaningful data. However, specific guidelines or password training have not been established to aid users in how to compose a password that is both secure and meaningful. The research provided value to the information security literature by testing the applicability of chunking theory to the development of passwords. When followed, these guidelines result in (a) reduced vulnerabilities in information systems within organizations and (b) increased trust in the users of information technology. These guidelines provide users with a password that is both secure and easy to recall, in terms of it being stored effectively in an individual's long-term memory. The password guidelines were created so as not to exceed human memory limitations, yet maintain the strength of passwords as necessitated by the information technology community. The two criteria for ideal password development are (a) passwords contain meaningful and personally relevant data for the user and (b) passwords are strong in terms of the IT community's standards. It is important that these guidelines be utilized in conjunction with training that assists the user in creating a password composed of meaningful data chunks and in managing multiple passwords.

Figure 1. Password management guidelines: 1.0 Develop security policies using integrated theory; 2.0 Provide guidelines on developing secure passwords; 3.0 Provide training on how to compose meaningful passwords using chunking theory; 4.0 Monitor and control organizational security. All four components are subject to a continuous cycle of evaluating and adjusting the security policy.
The third component of password management is to provide some degree of training to individuals in creating a meaningful password. Providing training to individuals within organizations will assist in the creation of passwords that are meaningful and therefore more easily remembered. The training guides employees on how to compose passwords that are comprised of meaningful chunks of data unique to that employee. Research by Yan et al. (2004) suggests that password security can be significantly improved through educating users on how to better compose a password. First, the instructor should define what it means to compose a password of meaningful chunks and may discuss how many chunks of data can easily be recalled by users (i.e., two, three, or four). It would also be helpful if the instructor encouraged different employees to compose their passwords at different lengths so that potential hackers would be unable to discover a consistent password length among employees. For example, an instructor might inform employees in one class to have a system-specific password between 7 and 9 characters in length and in another class might recommend a password between 10 and 12 characters in length. Instructors should also stress the importance of not using the same password for more than one system. Employees should also be encouraged to select one chunk of a password to be considered the core of every password. The core, or one chunk, would then be part of all passwords. For example, if a person wanted to create a two-chunk password, the person could select "Mb#=43," which translates to "my basketball number equals 43," as one chunk within the password. The person could then select a second chunk of data such as "iemf," which translates to "industrial engineering major in Florida." The two chunks could be combined, "Mb#=43iemf," to form one password that the individual uses to access a university portal. The person could then select "GMCBHS," which translates to "go minuteman at Cocoa Beach High School." Once again, the two chunks could be combined, "Mb#=43GMCBHS," and used as the individual's password for a social network account such as myspace.com or facebook.com, since these systems often link former high school friends together. Therefore, one chunk of every password for an individual could remain constant. It is the second chunk of the password that could vary and be composed of information that is directly linked to the system or device where it is used. An individual could then have multiple passwords in both their professional and personal lives that have one familiar chunk which never varies. However, from a security perspective, any additional chunks used in the password should vary. From a human memory limitations perspective, linking the second chunk to the system being used in a unique and non-obvious manner would enable an individual to obtain a password that is strong yet easy to recall.
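The core-chunk strategy can be summarized as a simple composition rule. The sketch below restates the chapter's own example in code form; the function and dictionary names are ours, and, as the chapter cautions, it is the system-linked chunk that should vary across accounts.

    # The "core chunk" strategy, restated using the chapter's own examples.
    CORE = "Mb#=43"  # "my basketball number equals 43" -- never varies

    SYSTEM_CHUNKS = {
        "university portal": "iemf",    # "industrial engineering major in Florida"
        "social network":    "GMCBHS",  # "go minuteman at Cocoa Beach High School"
    }

    def compose_password(system: str) -> str:
        # One familiar core chunk plus a chunk meaningfully linked to the system.
        return CORE + SYSTEM_CHUNKS[system]

    print(compose_password("university portal"))  # Mb#=43iemf
    print(compose_password("social network"))     # Mb#=43GMCBHS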
The fourth component of password management is to monitor and control organizational security. This component typically resides with the security administrator (Higgins, 1999). Some organizations may have different administrators, such as one each for a gateway, a network, and so forth. However, it is every individual's job to ensure that the security policy is being adhered to throughout the organization. Routine audits should be scheduled by the administrator. The audits would result in the identification of any vulnerability that may be present within the organization's information assets. Additionally, publicly available hacking techniques should be tested to ensure that the information assets can withstand the trials. Users should be prompted by their systems to change passwords periodically. Requiring password requests at several gates (remote access, network, applications, and files) enhances security as well, according to Higgins (1999). Scanner or cracker programs could also be used to ensure strong passwords are chosen by individuals. Lastly, it is important for all organizational security personnel to ensure that time is spent continuously evaluating and adjusting the security policy as new technologies and individuals become part of an organization. These guidelines are applicable to a variety of uses such as information systems, document passwords, corporate portals, and mobile devices. However, the guidelines are not applicable to legacy systems due to the recommended character length, although the other aspects of the guidelines could aid legacy system users in better recalling their passwords.
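As a concrete picture of what such an audit might look like, the toy sketch below runs a small wordlist against stored password hashes to flag weak choices. It is our own illustration under stated assumptions (the names, hashing scheme, and wordlist are all invented for the example); real audits rely on dedicated cracking tools and far larger wordlists.

    import hashlib

    # Toy cracker-style audit: flag accounts whose stored hash matches the
    # hash of a common password. Everything here is illustrative only.
    WORDLIST = ["password", "letmein", "qwerty123"]

    def audit(stored_hashes: dict) -> list:
        candidates = {hashlib.sha256(w.encode()).hexdigest(): w for w in WORDLIST}
        return [user for user, pw_hash in stored_hashes.items()
                if pw_hash in candidates]

    demo = {"alice": hashlib.sha256(b"letmein").hexdigest()}
    print(audit(demo))  # ['alice'] -- alice chose a guessable password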
Human and Social Aspects of Password Authentication

Currently, the model identifies workload concerns as well, such as overload of information (see Figure 2). Together, the workload and password issues produce system vulnerabilities that could leave information assets insecure. Identification of the causes of the vulnerabilities is important; once the causes are known, security personnel can take the appropriate actions to decrease the vulnerabilities by reducing or eliminating workload and password concerns.

Figure 2. Understanding the vulnerabilities present with information assets in an organization [Figure: password complexity, user overload of information, multiple passwords, and multiple systems lead to weak passwords, visible passwords, common passwords, and neglected security policies, which together produce vulnerable information assets.]

The integrated theory approach is another emerging trend; it holds that security policy, risk management, control and auditing, management system, and contingency theory can be used together to build a comprehensive security policy that is more likely to be followed by individuals in an organization and to ensure that security within an organization is maintained (Hong, Chi, Chao, & Tang, 2003). As technology continues to pulse through the daily lives of individuals, the behavioral aspects of humans at work and play continue to be explored, resulting in more technology, in terms of products, to help organize individuals. Along with new technology, new and changing security policies will be implemented in the organizations that are responsible for growth in the field of information technology.

A future trend will be the development of a tool or better method for password management. Password management issues cause organizational help desk personnel to be extremely busy on Mondays and after holidays due to the number of employees who forget their passwords. This will continue to be a problem, if not an increasing one, as the number of passwords individuals are required to remember increases with the degree of technology in society. It is anticipated that alternative security options, such as those discussed in the security techniques section, will continue to be developed to enable more organizations to utilize alternatives to passwords for work-related systems. A future trend will likely be to develop a wireless device that could serve a person for all of their password authentication needs. The problem with such a device, of course, is that it could be lost, which is why it would likely be some type of biometric device able to accurately and efficiently distinguish one individual from another without users fearing that their identity could be stolen. This will continue to be an important topic for practitioners and researchers until a solution is found. In the interim, there is research that can be performed to make life easier for those individuals required to maintain multiple passwords. Future password authentication research will be discussed next.
Future Password Authentication Research

The world has been revolutionized by the amount of information that makes its way into the daily lives of individuals and, ultimately, organizations. It is therefore necessary that research continues to identify specific ways to assist individuals in handling the abundance of information. Future research in password authentication practices should continue to explore the human and social factors in organizational security. Organizational and individual password usage, as new technology emerges, will likely increase the memory demands placed on individuals daily. Research will need to be conducted continuously to keep a pulse on the password demands being placed on individuals and to identify techniques that assist humans with the management of their passwords. Determining the links between password issues and other newly identified issues affecting human memory limitations, and the strategies to reduce the vulnerabilities they produce, will be crucial for organizational success. Future research in the social and human side of information security will further support the need for organizations to have password guidelines that do not exceed human memory limitations. Having password guidelines that do not exceed human memory limitations will enable organizational security policies to be better followed and eliminate the need for individuals to write their passwords on a piece of paper or use the same password for multiple systems. Identification of the links between password issues and possible workload issues on human memory limitations will further educate organizations on the vulnerabilities present. As vulnerabilities are identified, organizations will be able to better guard against the vulnerabilities present in systems and therefore positively contribute to the security of information within systems.

Conclusion

Although there are many alternatives to password authentication, individuals have multiple passwords that are used daily, as password authentication is still considered to be the most common form of authentication. Therefore, an issue for information security personnel is to reduce the information load faced by individuals trying to maintain numerous passwords for recall. A password-creation system that does not impose additional demands on a person's attention capacities and short-term memory, because passwords are composed of information that already exists in an individual's long-term memory, is perhaps a mechanism that individuals can use until a better and more affordable alternative to authentication exists.
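As a hedged illustration of what such a mechanism could look like, the sketch below derives a password from a sentence the user already knows, in the spirit of the sentence-generation approach listed in this chapter's references (Vu, Tai, Bhargav, Schultz, & Proctor, 2004); the substitution table is invented for illustration and is not a prescription from that work.

```python
# Illustrative sketch of a sentence-based (mnemonic) password scheme; the
# substitution table below is an invented example, not the chapter's method.
SUBSTITUTIONS = {"a": "@", "i": "1", "o": "0", "s": "$"}

def sentence_to_password(sentence):
    """Build a password from the first letter of each word of a sentence the
    user already knows, so recall draws on long-term memory."""
    initials = [word[0] for word in sentence.split()]
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in initials)

print(sentence_to_password("My first car was a 1987 Volvo station wagon"))
# -> Mfcw@1V$w  (nine characters to type, but only one sentence to remember)
```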
When individuals have passwords that do not exceed human memory limitations, employees can be trusted to follow organizational security policies. The more instruction and education employees are given, the better able they will be to form strong passwords that are easy to recall. The use of password guidelines reduces the likelihood of an organization being subject to a security breach, since individuals would be less likely to engage in practices that render an organization vulnerable, such as using the same password for multiple systems or writing their passwords on paper. The recommendations and suggestions for improving security policies discussed in this chapter support the use of Miller's (1956) chunking theory and Cowan's (2001) work when developing password guidelines and training on password development. Through simple password guideline changes and employee password security training, organizations can better guard against human error while maintaining secure practices for user authentication that guard against external threats. Until passwords are no longer in use, it is important that practitioners and researchers continue their efforts as a building block for future research that focuses on the human side of information security and, specifically, the human and social aspects of password authentication.

References

Belsis, P., Kokolakis, S., & Kiountouzis, E. (2005). Information systems security from a knowledge management perspective. Information Management & Computer Security, 13(3), 189-202.

Carnegie Mellon Computer Emergency Response Team (CERT). (2007). Computer emergency response team statistics. Retrieved April 25, 2007, from http://www.cert.org/stats/cert_stats.html#incidents

Carstens, D. S., Malone, L., & Bell, P. (2006). Applying chunking theory in organizational human factors password guidelines. Journal of Information, Information Technology, and Organizations, 1, 97-113.

Carstens, D. S., McCauley-Bell, P., Malone, L., & DeMara, R. (2004). Evaluation of the human impact of password authentication practices on information security. Informing Science Journal, 7, 67-85.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87-185.

Dinnie, G. (1999). The second annual global information security survey. Information Management & Computer Security, 7(3), 112-120.

Gerber, M., Solms, R. V., & Overbeek, P. (2001). Formalizing information security requirements. Information Management & Computer Security, 9(1), 32-37.

Golbeck, J. (2002). Cognitive load and memory theories. Retrieved April 2, 2007, from http://www.cs.umd.edu/class/fall2002/cmsc838s/tichi/printer/memory.html

Harris, A. J., & Yen, D. C. (2002). Biometric authentication: Assuring access to information. Information Management & Computer Security, 10(1), 12-19.

Hensley, G. A. (1999). Calculated risk: Passwords and their limitations. Retrieved April 2, 2007, from http://www.infowar.com/articles/99article_120699a_j.shtml

Higgins, H. N. (1999). Corporate system security: Towards an integrated management approach. Information Management & Computer Security, 7(5), 217-222.

Hong, K. S., Chi, Y. P., Chao, L. R., & Tang, J. H. (2003). An integrated system theory of information security management. Information Management & Computer Security, 11(5), 243-248.

Ives, B., Walsh, K., & Schneider, H. (2004). The domino effect of password reuse. Communications of the ACM, 47(4), 75-78.

Lahaie, D. (2005). The impact of corporate memory loss. Leadership in Health Services, 18, 35-48.

Lewis, J. (2003). Cyber terror: Missing in action. Knowledge, Technology & Policy, 16(2), 34-41.

Loftus, E. F., Dark, V. J., & Williams, D. (1979). Short-term memory factors in ground controller/pilot communication. Human Factors, 21, 169-181.

McCauley-Bell, P. R., & Crumpton, L. L. (1998). The human factors issues in information security: What are they and do they matter? In Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, USA (pp. 439-442).

Miller, G. A. (1956). The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

National Institute of Standards and Technology (NIST). (1992). Computer system security and privacy advisory board (Annual Report, 18).

Newell, A., Shaw, J. C., & Simon, H. (1961). Information processing language V manual. Englewood Cliffs, NJ: Prentice-Hall.

Norman, D. A. (1988). The psychology of everyday things. New York: Harper & Row.

Preczewski, S. C., & Fisher, D. L. (1990). The selection of alphanumeric code sequences. In Proceedings of the Human Factors Society 34th Annual Meeting (pp. 224-228).

Proctor, R. W., Lien, M. C., Vu, K. P. L., Schultz, E. E., & Salvendy, G. (2002). Improving computer security for authentication of users: Influence of proactive password restrictions. Behavior Research Methods, Instruments, & Computers, 34, 163-169.

Sanderson, E., & Forcht, K. A. (1996). Information security in business environments. Information Management & Computer Security, 4(1), 32-37.

Straub, K. (2004). Cracking password usability: Exploiting human memory to create secure and memorable passwords. UI Design Newsletter. Retrieved April 2, 2007, from http://www.humanfactors.com/downloads/jun04.asp

U.S. Department of Homeland Security. (2002). Federal information security management act. Retrieved April 2, 2007, from http://www.fedcirc.gov/library/legislation/FISMA.html

Vu, K. P. L., Bhargav, A., & Proctor, R. W. (2003). Imposing password restrictions for multiple accounts: Impact on generation and recall of passwords. In Proceedings of the 47th Annual Meeting of the Human Factors and Ergonomics Society, USA (pp. 1331-1335).

Vu, K. P. L., Tai, B. L., Bhargav, A., Schultz, E. E., & Proctor, R. W. (2004). Promoting memorability and security of passwords through sentence generation. In Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting, USA (pp. 1478-1482).

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins Publishers.

Wiedenbeck, S., Waters, J., Birget, J. C., Brodskiy, A., & Memon, N. (2005). PassPoints: Design and longitudinal evaluation of a graphical password system. International Journal of Human Computer Studies, 63, 102-127.

Wood, C. W., & Banks, W. W. (1993). Human error: An overlooked but significant information security problem. Computers & Security, 12, 51-60.

Yan, J., Blackwell, A., Anderson, R., & Grant, A. (2004). Password memorability and security: Empirical results. IEEE Security and Privacy, 2(5), 25-31.


Chapter II
Why Humans are
the Weakest Link
Marcus Nohlberg
School of Humanities and Informatics, University of Skövde, Sweden

Abstract

This chapter introduces the concept of social psychology, and what forms of deception humans are
prone to fall for. It presents a background of the area and a thorough description of the most common
and important influence techniques. It also gives more practical examples of potential attacks, and what
kind of influence techniques they use, as well as a set of recommendations on how to defend against
deception, and a discussion on future trends. The author hopes that the understanding of why and how
the deceptive techniques work will give the reader new insights into information security in general,
and deception in particular. This insight can be used to improve training, to discover influence earlier,
or even to gain new powers of influence.

Introduction

A computer crime starts, and ends, with a human, no matter which method is chosen for the attack. Many successful computer crimes could have been prevented if the people involved had been more vigilant, more security conscious, or aware of their own weaknesses. This chapter deals with human weakness. It can be perceived as a "how-to-manual" for the aspiring attacker, but just as well as a "know-yourself" guide that can be used by both individuals and professionals in order to improve their personal and organizational defenses. It might also give a little more understanding for the victims. When researching successful attacks from the comfortable position of the outside observer, most of us are prone to throw the first stone against what can be seen as gullible humans. The fact is that almost everyone is susceptible to the techniques and weaknesses described in this chapter, simply because the attacks play on human emotion rather than logic.
Background

We humans are complicated beings, with some interesting shortcuts in our behavior. In recent years there have been multiple studies on deception in general and influence in particular. These studies have been done in, amongst others, the field of economics and, most notably, in social psychology. In order to stay as close to the human element as possible, this chapter will focus on the social psychological aspects that can be practically used by the attacker. There are ample theories and work being done in a more theoretical setting, but this chapter will focus on the techniques that the perpetrators might use. Cialdini (2001) has written one of the most influential books in this area, and this chapter will follow his use of the six basic rules of influence, together with some other added aspects of influence. In order to facilitate a better understanding of the concepts, examples will be given, both from the literature and from real life. When applicable, the terms will be tied together with information security as far as possible. Not all the information here is from research; some is drawn from online sources, guides on what to explore and attack written for the aspiring social engineer. While this information has not been judged against academic standards, it is still relevant, because it is the information attackers will try to use for their attacks and therefore important to know.

Deception is a powerful tool for any attacker, but also for any parent, teacher, salesman, or most of us in our everyday lives. We buy and sell goods, we court romantic interests, and we try to raise our kids in a good way without them loathing us too much when we try to get them to do their chores. In all of these examples, and many more, deception is the key element. Deception can be defined as:

Everything done to manipulate the behavior of the other side, without their knowledge of the friendly intent, for the purpose of achieving and exploiting an advantage is deception. The "what" of deception is the manipulation of behavior. The "why" is to exploit the advantage achieved (Feer, 2004).

There are two different kinds of deception. There is dissimulation, which concerns the hiding of the truth (Bowyer, 2003). The truth can be hidden in three ways. It can be hidden by masking the information, for instance, by hiding nefarious features in a piece of software. It can also be hidden by repackaging the information, for instance, by hiding a Trojan horse in legitimate software. Finally, information can be dissimulated by dazzle, to shock or surprise, for instance, by sending nude pictures in an e-mail. The other kind of deception, simulation, deals with exhibiting false information. Simulation can be done by mimicking, which is spoofing or imitating reality, for instance, as done in a phishing attack. It can also be done by inventing, which is the creation of a new reality, for example, false messages from Microsoft claiming that a certain bug must be patched as soon as possible. The final method of simulation is decoying, where a decoy is used to create a diversion from the real object, such as a false warning of a different attack than the one you are exposed to at the moment.

Humans and Deception

Most of us, and indeed probably you, the reader, consider ourselves exceptionally resistant to manipulation. We are better than average at detecting lies, and can spot a con a mile away. When asked about our friends' susceptibility to deception, however, we find them to be far more gullible (Levine, 2003). Obviously, we are misjudging our own capacities, as influence in general is highly effective, which is proven by the huge profits it generates for the advertisers, corporations, and religious groups, among others, that use these techniques.

The reason people misjudge their own ability to spot deception is "lie detector bias," where individuals almost always overestimate their ability to detect lies (Marett, Biros, & Knode, 2004). This is further complicated by the truth bias, which is the widespread assumption that most people are telling the truth (Martin, 2004). Humans also tend to think that bad things, such as death, accidents, crime, natural disasters, and so forth, generally only happen to others (Levine, 2003). To further highlight our vulnerabilities, another interesting weakness in the human psyche is "fixed-action patterns." These are most easily studied in animals, where certain specific conditions, "trigger features," trigger a predetermined response. For instance, a certain breed of bird will instantly start to care for any egg-like object, even if it is obviously not an egg but instead perhaps a painted volleyball (Levine, 2003). While "fixed-action patterns" might seem impractical, for animals in particular they save time, energy, and mental capacity. Even for humans, certain "fixed-action patterns" are beneficial, for instance, giving thanks when receiving a gift, doing what police officers say you should, and so forth. In normal circumstances, the "fixed-action patterns" are usually correct and beneficial to us. They start to become a major problem when someone starts to use them as a weapon against us.

So basically, we humans believe that we are good at spotting lies, that people seldom lie to us, and that bad things mostly happen to others. We are also mostly blissfully unaware that we have certain "fixed-action patterns" that will make us react almost without thinking to certain requests. This is the stage for the great game of influence.

The Basics of Influence

The easiest, and many times the most efficient, means of influencing others is simply to be kind. A bit of kindness goes a long way, since most average users really want to be helpful (Granger, 2002). A slightly more advanced method is to add the illusion of a reason behind the request. The illusion of a reason can be just as effective as a good reason. When asking people to do something, it was found that simply using the word "because" in the question is just as effective as using it together with an actual motivation (Cialdini, 2003). In order to develop deeper skills of influence, any single one of, or combination of, the following techniques can be used.

Authority

People are likely to respond obediently to authority. We are generally brought up to respect authority, and ever since we were kids it has been beneficial to do as the authorities want us to do: in school, at home, in church, in the army, and in the workplace. Listening to authority is seldom detrimental to anyone. In extreme circumstances this can push people to do dreadful things, as was shown by the famous Stanley Milgram experiment (the Obedience to Authority study). In the study, subjects thought that they were administering electric shocks to another (fake) subject in order to punish them for errors. The real study was to test their willingness to administer painful, or even potentially lethal, doses of electricity while being told to do so by an authoritative test supervisor. The study showed that a disturbingly high percentage (65 %) were willing to continue the experiment even though they, to the best of their knowledge, were administering extremely painful and potentially lethal doses of electricity to another subject who was screaming in pain and complaining about intense chest pains (Blass, 2002).

But authority is not only someone telling us what to do. Aspects other than verbal orders also influence who we think is a person of authority. One example of this is uniforms.

Uniforms are a cheap and simple way to be perceived as a person of great authority (Mitnick, 2002). Uniforms can be of the obvious kind (a police uniform, a doctor's coat, a soldier's uniform), but perhaps the most effective kinds of uniforms are those that we do not normally perceive as a uniform. Examples of this kind of uniform are technicians' and maintenance personnel's clothing and the clothing worn by cleaners. Cleaners and maintenance staff are groups of people that often tend to have full access to most areas, often at times when there are few or no other employees around. They are also often employed by someone other than the organization in which they work, a subcontractor. This gives them full access, and they are rarely questioned, making them a risky element. Reasonably normal clothing is also a kind of uniform, especially the style and message of the outfit. For instance, a nice, tailored suit sends a different message than a "nerdy" Linux t-shirt, but they are both efficient as uniforms in a particular context. Another kind of uniform is the title of a person, where an impressive title, such as professor, doctor, lord, sir, and so forth, can influence the amount of authority we perceive that someone has (Cialdini, 2003). Real titles often take years of hard work to achieve, but to acquire a fake title only takes seconds. Even fake diplomas can be bought cheaply, making it even more difficult to judge the value of a mentioned title.

Other examples of things that make us perceive someone as having authority are purely material artifacts, such as wealth, fancy clothing, jewelry, and expensive cars, and certain other human traits such as height and tone of voice. Humans are easily influenced by these things, and having the right clothes can make a big difference, something which is well known by con men (Cialdini, 2003).

The practical consequences of this human weakness for uniforms and fancy attributes, apart from the sad fact that the imagery in hip-hop videos actually works well to influence our perceptions of the artists as important, are that an attacker would benefit from using either a specific uniform to make desktop hacking easier or, for instance, a specific title to make a social engineering attack over the telephone more efficient. This was made chillingly obvious in a study where nurses were called over the telephone by a person introducing himself as a doctor responsible for one of the patients, then proceeding to tell the nurse to administer a dangerously high dosage of medicine to the patient. Without requesting further identification, most nurses, 95 %, complied, and were stopped by the researchers on their way to the medicine cabinet (Levine, 2003).

Scarcity

When told that something they want is in short supply, people tend to want it even more. The information that others might be competing for the same thing triggers the sense of competition. This can be observed in ads every day, where terms such as "limited supply" are frequently used. Time is always a stressing factor; it is efficient to make the market see that time is in limited supply, thus leaving less time for reflection (Cialdini, 2003).

Our reactions to scarcity also mean that things that are hard to possess are valued higher and perceived as better than those that are easy to possess. This has interesting consequences for how people value information that is banned or made secret. When information is banned, humans have a greater desire to acquire it, and they also have a more favorable attitude towards it than before it was banned. Humans also have a greater interest in what has become scarce than in what has always been scarce (Cialdini, 2003). That people value banned information more is a noteworthy piece of information for organizations that begin to employ stricter secrecy policies, or that have a rigorous security classification. It also explains some of the basics of the so-called "hacker culture." Information wants to be free, because if it is secret, it must be interesting. It also means that information might actually be more secure if it is not classified as secret at all.

The very classification of secret makes people want it, because then it is limited, and if it is limited, it is good. This principle can also explain why people lust to get into exclusive night clubs, even if queuing might take all night, why people work so hard to get accepted into more or less exclusive social clubs, why the value of art goes up when the artist is dead, and why almost everything nowadays is sold in "limited editions," products ranging from sodas to cars. This is because if supplies really are limited, we do not want to miss the chance to buy the product. Scarcity works because we have learned historically that the good things really are in short supply. And if they are in short supply, we lose the freedom of choice, something we as humans resent (Cialdini, 2003).

Scarcity could be used by attackers by providing a "limited service offer" or by pressing on time: "Sure, I could help, but I'm leaving soon, so we'll have to fix it quickly." Another consequence is that making information harder to get could actually make more users interested in it, actually making it less secret.

Liking and Similarity

People favor others that are like themselves. If we share similarities, then we are prone to react favorably to a person similar to ourselves only because of the similarity. Another particularly influencing factor here is the physical attractiveness of a person. A person who is very attractive can be perceived as a purely attractive person, where attractiveness is the dominating characteristic of the person. This is called the "halo effect," and it makes attractiveness a very influential factor (Cialdini, 2003). In fact, an attractive physical appearance can make us believe that the person is smarter, kinder, stronger, and of a higher moral character, yet we are also oblivious of our mostly automated preference towards attractive people (Levine, 2003). If you are blessed with an attractive physical appearance, you will find that influencing people is easier.

Similarity can be of several different kinds; for instance, how a person is dressed, and a person's background and interests. So choosing how to dress when trying to deceive basically comes down to whether to use authority, or to dress like the victims and use similarity (Cialdini, 2003). The importance of liking is also emphasized in neuro-linguistic programming, NLP, where a great focus is on developing rapport between people. In NLP, rapport means being "in sync" with the person you are talking to. The common techniques are matching of body language, breathing (frequency), and maintaining eye contact (O'Connor & McDermott, 1996). Creating rapport increases liking, and is a powerful weapon of influence.

Another way to increase liking is to have frequent contact with the target, as familiarity increases liking, a tactic which is also used in examples by Mitnick (2002). What is interesting here is that familiarity works without victims noticing that it occurs, so we tend to like people frequently featured in the media, or those that we see often at work, for no other reason than that we see them often. An effective method among strangers to quickly achieve liking is to share a common "enemy," something most army recruits have experienced, as sharing a dislike for certain officers is a sure way to get conversation started. If the attacker manages to maneuver himself and the victim into a situation where they cooperate in order to gain mutual benefits, such as helping each other, liking will also increase. As our senses are tied together with our overall experience of the situation, it is, interestingly enough, also effective to meet while eating. The positive experience of the food will strengthen liking. Most importantly, the attacker should avoid meeting under bad conditions, as the negativity of the conditions will affect the liking of the persons involved, as does being the bearer of bad news. We are also easily affected by compliments, even if we realize that the compliments are given with an ulterior motive (Cialdini, 2003).

This knowledge would be used by an attacker to befriend the targets, to build liking and rapport with the target, for instance, by sharing an enemy (perhaps the boss), or by sharing a remarkable amount of interests and background. How come most car salesmen are so similar to their customers, with children roughly the same age?

Reciprocation

The rule of reciprocation is hard-wired in us, and might indeed be the very reason we are humans; our ancestors learned to share, which led to civilization. The rule is quite simple: If someone provides a favor for us, we feel that we must repay that favor, even if we did not ask for it. It is more or less an automated reaction, and it is frequently used, and abused, by, for instance, car salesmen. They tell the customer that they are doing them a favor by lowering the price, or by including rust proofing, or even by selling them the car without any commission. This makes the customer feel an urge to repay them, and what better way to do so than to buy a car?

Reciprocation is a very powerful technique that in many cases can be directly responsible for successful influence (Cialdini, 2003). One of the classic examples is the flowers that are given out by Hare Krishnas. The flower is free, they say, but it is customary to give a small donation in return. Even if the receiver of the flower does not want it, or does not even like the Hare Krishnas, he will feel obliged to return the favor and to give a donation. In fact, this technique is so powerful that it is one of the major reasons for the success of the Hare Krishnas (Cialdini, 2003). The same thought is behind the free samples often given out at supermarkets. Not only do they let the customers taste the product, they also have the aura of a gift around them, making it hard for people to resist buying the product after receiving a sample as a gift from the nice lady.

What should be noted especially here is that people's sense of reciprocation will stand even if the gift is very small and the request in return is far greater than what would be reasonable (Cialdini, 2003). A variation of the reciprocation rule is called the "rejection-then-retreat" technique. It consists of making an initial, extreme offer that is sure to be rejected, and then retreating to a lower, more sensible request that was the initial goal. An example would be to ask someone to buy a $50 painting to support the arts and, upon rejection, offer them a $5 set of postcards instead. Not only does the "rejection-then-retreat" technique increase the possibility that the request will be accepted, it also makes the target more likely to carry out the request, and to fall for such requests in the future (Cialdini, 2003).

An attacker could use this by stating that he has helped the victim in a small matter without prior request, or by giving the victim privileged information that he did not ask for.

Commitment and Consistency

No one wants to be known as a failure. If a person has promised to do something, he will try his best to do it, so as to not be regarded by his peers as untrustworthy. Therefore, people try hard to act in ways that are consistent with the way they have acted before and with the choices they have made. In the same way, people find that they are more willing to stand by their decisions when they have been made public in some way, when a stand has been taken. This is why a gambler is far more certain of the odds after placing a bet than before, and also why so many charities collect signatures on lists (Cialdini, 2003).

In order for a commitment to be most effective, it should be active, public, and demand a certain degree of effort, and if a person is to accept responsibility for it afterwards, it should also be made without strong outside pressures (Cialdini, 2003). This has the interesting spin-off effect that it is actually harder to convince someone to cooperate for a longer period of time using a large bribe, or a really violent threat, than it is to

give a smaller bribe and make a more feasible threat. This is something that was well known during the Cold War, when most recruited traitors actually did not get paid a great deal of money. It was more efficient for the foreign power to get them to supply classified information for relatively little money, as they would then justify their treason not just by monetary gains but also by ideological support. This would get them to feel more personally responsible and to have a greater commitment to the relationship, making them easier to exploit as resources for a long time.

Someone wanting to use this knowledge to influence someone could do it by trying to get the target to express public support for the concept, as well as by not making the support too easy to express. Any bribe offered would be relatively small, and any threat made should be of the reasonable kind, not too spectacular, but threatening enough to "tip the edge." If it is too threatening, the mark will not feel obliged to follow through as soon as the immediate threat is removed.

Social Proof

When people have to make a decision on the proper behavior in a situation where they are uncertain, they do so by watching how the people in their vicinity, especially those that are similar to themselves, act. Usually it is correct to do the same thing as the people around you. This is the phenomenon known as "social proof." Social proof can cause people to do things not in their own self-interest, such as purchasing products because of their popularity, or sharing passwords with coworkers because "everyone else in the department is doing it." What is even worse, it can lead to a phenomenon called pluralistic ignorance (Cialdini, 2003). Pluralistic ignorance is when everyone is trying to see how everyone else is acting, leading to a situation where no one acts at all. This is most horrifying in cases where crimes are committed in an area with a lot of witnesses around and no one acts to help the victim, or when someone gets sick in the middle of the street and no one stops to check whether they are OK. On the other hand, when someone does stop to check whether the person in the street is OK, several others might help out almost instantly, as I myself discovered while helping an elderly lady who had fallen off her bike. After the first couple of volunteers had arrived, the crowd started to snowball, and soon people had to be told to leave in order not to create a traffic hazard.

This could have a major impact on the security of any organization, because people will adapt to the general attitude towards security in the organization rather than to what is written in a policy. Even if management wants to have a high degree of security, the employees can nullify any attempts, unwittingly, by social proof. Examples of this are organizations where the sharing of passwords, while expressly forbidden in the policy, is still a sign of trust among employees. Not sharing would stigmatize a person as untrusting, paranoid, and not a part of the group, as sharing is seen as a matter of trust (Brostoff, Sasse, & Weirich, 2002).

An attacker could use this to reinforce the techniques of persuasion by telling the target that everyone else is doing whatever she asks the target to do, such as giving out login information. If there is proof, or if the target believes this to be true, it would be very hard to resist the demand.

Other Weaknesses

When the person asked to perform something has very little interest in it, they generally have low involvement. As they are detached from the task they are being asked to perform, they may be especially easily influenced by logical reasons for the task, urgency, or authority. Examples of people with low involvement can be security guards, cleaners, or receptionists (Harl, 1997). This group of people does not care as much about the quality of the arguments, but more about the quantity; the more the better (Harl, 1997).

In contrast, people with high involvement, for example, systems administrators, are persuaded more by the quality of the arguments than the quantity (Harl, 1997).

Another powerful factor to elicit the desired compliance is to use strong affect (Gragg, 2002). If the victim is feeling a heightened sense of anger, surprise, or anticipation, he will be less likely to think through the arguments presented to him. This can be done either by aggravating the mark or simply by surprising him with a demand that was completely unanticipated. Similar to surprise is overloading (Gragg, 2002). When someone has to deal with a great deal of information and does not have enough time to think about it, this lowers the ability to think critically about the situation. An example of this would be to present, and require, a lot of technical information from a person with very little technical knowledge. The basis for all more advanced deception tricks is to use deceptive relationships (Gragg, 2002). It is a very powerful psychological trigger to establish a relationship with someone solely to exploit that person. This can be done effectively by sharing information and a common enemy, as discussed under liking. The attacker does this by using techniques for creating rapport over a long time, actually building up a (false) relationship with the target, befriending her, and then slowly starting to use the relationship for nefarious gain. This technique was especially popular with foreign intelligence services, as it also leads victims to rationalize their actions internally, thus becoming more committed to the cause.

How to Act When Influencing Others: A Practical Example

The mentioned techniques and examples might sound convincing, but in order to illustrate how they can be used, an example is given. This example is based on the premise that a perpetrator wants either to get information, in the form of login information, from the mark (victim), or to get the mark to perform some action at the perpetrator's request. This is a classical social engineering attack. The techniques should work best when trying to influence someone from a Western culture. Many of the same techniques can be used against persons from other cultures too, but they might, due to cultural differences, be ineffective or even insulting (Levine, 2003).

The perpetrator begins either by creating a person of authority or by exploiting existing relationships. If the perpetrator knows the mark or someone who knows the mark, he can use this or make it up, but a real reference is far more useful. If that is not possible, the perpetrator may create a person of authority, such as a doctor, researcher, or other successful person, as suggested. In this case, the attacker chooses to be a systems administrator:

The attacker describes himself as a senior systems administrator (authority) from a high-profile consultancy firm hired to investigate critical network problems of the organization (scarcity). He phones the mark, introduces himself, notes the accent of the target, and asks where the target is from. Whatever city the mark answers, the attacker's wife is from the same town (similarity). He then asks if the mark could consider spending a couple of minutes helping him fix the network (commitment), then he starts to describe the problem with the network, using technical jargon and ample statistics (authority). He explains that the mark's computer must be taken off-line for a couple of days, maybe a week, while they fix the problem. That is, unless the mark can help them with some technical services, the way many of his colleagues have today (social proof), notably by typing in an increasingly complicated series of commands at the DOS prompt (overloading). The perpetrator then offers to do the mark a favor by fixing the problem, as he is to leave for a week of vacation in a couple of minutes (scarcity). The mark must do a small favor for the perpetrator (reciprocation) by not telling anyone of this, as

the attacker could lose his job over it due to the mark's really strict boss (liking, by finding a common enemy). The best way to fix the situation is if the mark could bring his computer to the fictitious office of the attacker, just an hour away by car, and also bring his personal ID papers, a signed letter of recommendation from a co-worker, and a written history of what the mark has done with his computer over the last year, as is the policy in consultancy firms. Or, perhaps, if it can be kept just between them, the mark could just give the attacker his login information (contrast).

This is a simple and classic example of a social engineering attack. As demonstrated, it is deceptively simple, yet it uses most of the manipulative techniques available, even though it does not delve too deeply into any one of them. Against the right kind of mark, in the right kind of setting, this attack is highly efficient.

How the Attacker Can Be Persuasive

Levine (2003) believes that there are three key elements to being persuasive as a person: authority, honesty, and likeability. The other techniques described can be used to strengthen influence, but on an interpersonal level, only these three are crucial. There are some easy ways for the attacker to strengthen the way the mark perceives his offerings in these elements.

If actually meeting the mark face to face, it is always important to maintain eye contact. This will make the attacker seem far more honest and authoritative. While maintaining eye contact, it is also useful for the attacker to act as if he is engrossed in what the mark is talking about. It is, however, not good if the attacker actually is engrossed, as this will limit his perception. When preparing for an attack, the attacker will consider closely the clothes he will wear. They are a kind of uniform, signaling authority, and will be carefully selected to reflect the particular kind of authority the attacker aims toward. Classic examples of this are doctor's coats and police uniforms, but do not forget that normal clothing is also a kind of uniform. For instance, a nice, tailored suit sends a different message than a "nerdy" Linux t-shirt, but they are both efficient as uniforms in a particular context.

Speaking with confidence and using ample technical jargon will make the attacker seem more knowledgeable, and therefore more authoritative, especially if the mark does not know much about the area the perpetrator is talking about. The same is valid for statistics, so the attacker can use them to further his argument, as people tend to believe more in arguments supported by statistics, even if the statistics are false or irrelevant. The attacker should also always show both sides of the argument, as this will make him seem less pushy and more honest. The attacker will try to find similarities with the mark, such as having the same hobbies, kids the same age, or relatives in the mark's hometown, and so forth. He will also somewhat mimic the behavior and speech patterns of the mark. This builds rapport, which leads to liking.

Be wary of new acquaintances displaying several of the mentioned characteristics.

Defending Against Deception

In this section you, the reader, are given a concrete set of tips on how to avoid being influenced. While simply reading about the techniques and vulnerabilities presented in this chapter will make you more resistant to manipulation, simply theorizing about the concepts is of limited use to organizations and those responsible for security. Levine (2003) suggests two basic approaches to enhance resistance. The first is "the sting," where people are put in situations where they are influenced to act against their own preferences, and when they comply, they are informed of the influence tactic and what has just happened. This has the benefit of pushing the subjects out of their comfort zone, making their vulnerability more obvious to them.

What is critical here is that the subjects should be made to acknowledge their own personal susceptibility (Levine, 2003).

The second method is a little less intrusive than "the sting," and more manageable in a business context. The goal here is to expose the subjects to weaker forms of persuasion, which then act much like an inoculation does for an immune system: it prepares it for the real threat. The most important issue to consider here is to get support from management, and in the information security policy, for such efficient countermeasures as "the sting" and inoculations. When support is acquired, a small roll-out, especially of inoculations, is preferred, and in high-risk scenarios, stings can also be enacted. While it can sound cruel and unethical, it is also one of the easiest ways to practice some kind of resistance to these attacks. Deception against one's own employees has been used, with some success, at both West Point Military Academy (Dodge & Ferguson, 2006) and in New York State (Bank, 2005). In the West Point case, students were sent an e-mail from a person claiming to be a colonel, ordering them to click on an attached link to verify their grades. This approach got 80 % compliance among the students, who were later informed of the risks of their acts. In the New York State case, 15 % of the employees tried to enter their passwords into a special online "password checker" after receiving an e-mail from the "Office of Cyber Security and Critical Infrastructure Coordination," urging them to do so. A follow-up a couple of months later, with a similar approach, got a lower compliance rate (8 %).
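Measuring such an exercise reduces to tracking compliance per campaign. The sketch below is hypothetical; the counts are invented, though chosen to reproduce the New York compliance rates reported above.

```python
# Hypothetical scoring of a phishing-inoculation exercise; campaign names and
# counts are invented (chosen to mirror the 15 % and 8 % rates cited above).
campaigns = [
    {"name": "initial e-mail", "sent": 400, "complied": 60},
    {"name": "follow-up e-mail", "sent": 400, "complied": 32},
]

for c in campaigns:
    rate = 100.0 * c["complied"] / c["sent"]
    print("%s: %.1f %% compliance" % (c["name"], rate))
# A falling rate across rounds suggests the inoculation is taking hold.
```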
In order for you to be better prepared against attacks using the specific vulnerabilities discussed, a short guide of defenses is given.

To defend against authority, it is best to remove the element of surprise from authority. Be suspicious of authority power, and remember the influence power of authority. There are two questions that might help with this: Is the person really a person of authority? If he is, then how truthful do you think that person is (Cialdini, 2003)?

While the scarcity principle is easy to learn about, it is hard to counter. One method is to try to learn to recognize the feeling when the competitive cogs in our brain start to whirl, but it might not be enough. Learning to think about the scarce object from a more utilitarian standpoint can also help. Do we want the object because it is rare, or do we believe that the object will be better because it is rare? Then we should remember that rare things are rarely better (Cialdini, 2003).

Due to the vast spectrum of possibilities to influence liking, it is hard to develop a broad spectrum of defenses. Instead, Cialdini (2003) recommends a simple approach: Allow yourself to be swept away by the liking of others, but when it comes to decisions, consider how long the person who is asking you to make a decision has been in contact with you, and whether or not you like him, to a reasonable extent, based on this time. If you adore someone who is trying to get you to give him some information after only knowing him for a couple of minutes, there is probably foul play in the works.

Reciprocity is a very effective influence technique, and very hard to defend against. A too-strict rule against accepting any kind of gift will make you seem socially awkward. A more efficient method is to redefine gifts given to their real meaning. A salesperson giving you a gift is really exposing you to marketing, and thus you do not need to return the favor. A stranger offering you help over the telephone with something you did not request, or did not know you needed, is most likely up to no good.

To protect against consistency, you also have to reach inside yourself. Cialdini (2003) mentions two kinds of methods to spot when someone is exploiting your consistency. The first is to identify when we get the feeling that we are being pushed into performing actions we know that we do not want to perform. The second method is to consider whether or not we would make the same commitment if we could travel back in time.

Social proof is something that is most often useful to you. In fact, in most new situations you would be well advised to follow the behaviors of others. There are, however, certain situations where social proof can lead to your being tricked into performing harmful acts. In order to avoid this, there are two tricks. Be aware of obviously faked situations, such as are found in ads with groups of people praising a product, or when someone claims that the people around you are doing something you doubt that they are doing. You should also remember that the actions of others are not to be taken as the sole reason for your actions (Cialdini, 2003).

Future Trends

This is an area that has been studied extensively in areas of science other than information security. There is ample material in fields ranging from literature, to the social sciences, to marketing and economics. There is a broad range of researchers working in the field, but within the field of information security, this area remains rather unexplored. While problems with software, networks, and other technical artifacts will no doubt be of high importance in the foreseeable future, there is a growing trend towards the more human aspects of information security. It is notable that an industry icon such as Bruce Schneier has begun to take an interest in the field, and there are emerging academic conferences such as the HAISA (Human Aspects of Information Security & Assurance) conference.

One of the challenges for those of us working in the field of the human element of security is how we can argue that our work is important and whether or not it actually improves security. This has always been easier for technical products, as they can often argue efficiency based on statistics. When the buyer has to choose between a product promising 99.99 % protection against "millions" of computer viruses and an education program that might prevent one case of social engineering, the choice is often simple for the purchaser. It is thus, sadly, our task as researchers, professionals, and students to help point out that the single attack might very well be the most damaging attack imaginable, far more so than a random virus attack.

The increased attention gained in this field will probably bring greater awareness among both professionals and ordinary users. When users become more resilient towards the easy tricks, such as those described in this chapter, the attackers will have to either get more advanced themselves, which is quite hard due to the increased complexity of the skills needed, or find other ways to attack. If attackers have to develop their skills of influence to a level high enough to influence even humans well aware of and trained against influence techniques, they might as well leave the field of crime and seek more lucrative employment as influence professionals, such as salesmen or politicians.

Conclusion

This chapter has shown how easy it is to use influence to get people to do things that they may not want to do. The goal has been to give concrete examples of well-established techniques and methods, together with practical uses. Hopefully the reader now has a greater insight into both the manipulation techniques used by computer criminals and the techniques used by everyday deception professionals.

One of the major points of this chapter is just how easy the techniques are to learn and to implement. In fact, just by reading through this chapter, you, the reader, now probably have most of the tools you need to influence people around you to a far greater extent than before. This is of course knowledge that should be used with some caution. While in most cases it is rather easy to influence

people, the counter-reaction from people who have just understood that they have been manipulated is generally rather severe. Good relationships are not built on deception.

Still, there is a lot of merit in using deception, albeit on a small scale, against one's users and subordinates in an organization. As long as the use of deception against one's own employees and co-workers is practiced with forethought and a clear goal, and ample feedback and information is given, it might serve well as an educational and training tool. Do remember that these are the techniques the bad guys are using. If we do not prepare against the methods used, we will easily fall victim to them.

References

Bank, D. (2005). "Spear phishing" tests educate people about online scams. The Wall Street Journal. Retrieved March 2, 2006, from http://online.wsj.com/public/article/SB112424042313615131-z_8jLB2WkfcVtgdAWf6LRh733sg_20060817.html?mod=blogs

Blass, T. (2002). The man who shocked the world. Psychology Today. Retrieved March 9, 2006, from http://www.psychologytoday.com/articles/pto-20020301-000037.html

Bowyer, B. (2003). Toward a theory of deception. International Journal of Intelligence and Counterintelligence, 16, 244-279.

Brostoff, S., Sasse, A., & Weirich, D. (2002). Transforming the "weakest link": A human-computer interaction approach to usable and effective security. BT Technology Journal, 19(3), 122-131.

Cialdini, R. (2001). Influence: Science and practice. Needham Heights, MA: Allyn & Bacon.

Dodge, R., & Ferguson, A. (2006). Using phishing for user e-mail security awareness. In S. Fischer-Hübner, K. Rannenberg, L. Yngström, & S. Lindskog (Eds.), Proceedings of the IFIP TC-11 21st International Information Security Conference (SEC 2006) (pp. 454-458). New York: Springer Science + Business Media Inc.

Feer, F. (2004). Thinking about deception. Retrieved March 11, 2006, from http://www.d-n-i.net/fcs/feer_thinking_about_deception.htm

Gragg, D. (2002). A multi-level defense against social engineering. SANS Institute. Retrieved September 17, 2003, from http://www.sans.org/rr/papers/index.php?id=920

Granger, S. (2001). Social engineering fundamentals. Security Focus. Retrieved September 18, 2003, from http://www.securityfocus.com/printable/infocus/1527

Harl. (1997). The psychology of social engineering. Retrieved March 12, 2006, from http://searchlores.org/aaatalk.htm

Levine, R. (2003). The power of persuasion. Hoboken, NJ: John Wiley & Sons Inc.

Marett, K., Biros, D., & Knode, M. (2004). Self-efficacy, training effectiveness, and deception detection: A longitudinal study of lie detection training. Lecture Notes in Computer Science, 3073, 187-200.

Martin, B. (2004). Telling lies for a better world? Social Anarchism, 35, 27-39.

Mitnick, K. (2002). The art of deception. Indianapolis, IN: Wiley Publishing, Inc.

O'Connor, J., & McDermott, I. (1996). Principles of NLP. London: Thorsons.

Chapter III
Impact of the Human Element
on Information Security
Mahi Dontamsetti
President, M3 Security, USA

Anup Narayanan
Founder Director, First Legion Consulting, India

Abstract

This chapter discusses the impact of the human element in information security. We are in the third gen-
eration of information security evolution, having evolved from a focus on technical, to process based,
to the current focus on the human element. Using case studies, the authors detail how existing technical
and process based controls are circumvented by focusing on weaknesses in human behavior. Factors
that affect why individuals behave in a certain way when making security decisions are discussed. A
psychology framework called the conscious competence model is introduced. Using this model, typical
individual security behavior is broken down into four quadrants using the individuals’ consciousness
and competence. The authors explain how the model can be used by individuals to recognize their
security competency level and detail steps for learning more effective behavior. Shortfalls of existing
training methods are highlighted and new strategies for increasing information security competence
are presented.

Knowledge & Information Security

We live in an information age. Companies that are successful are those that are able to harness and utilize information to their competitive advantage. Along the same lines, the economies and countries that are successful in this age are the ones that are networked, information based, and that empower their population. The electron (information based economy) has replaced the atom (nuclear power) as the true indicator of

strength of a country. Given the widespread and critical nature of information, protecting information, that is, information security, is essential for maintaining competitive advantage and business sustenance.

The threats to information are varied. They are technical, physical, and human in nature. To counter these threats, information security has evolved over the past few decades. We are today in the third generation (3G) of information security (Figure 1). It has evolved from its initial focus on technology, to a focus on processes (standards, best practices), and to the current focus on the human element that manages or uses the technology and processes.

Figure 1. Evolution of information security. [Figure: First Generation (1G), technology focus (firewalls, anti-virus, IPS); Second Generation (2G), process focus (ISO 27001, COBIT); Third Generation (3G), human element (focus on the people that use the technology and processes).]

The shift in focus from technology to processes, and subsequently to the human element, has come with the realization that technology and processes are only as good as the human beings that use them.

The evolution of the information security model has occurred due to the evolution of the type of threats that businesses are faced with on a day-to-day basis. The threats have evolved and become more sophisticated. Typical threats that occurred during the technology implementation phase were viruses, worms, distributed denial of service (DDoS), and so forth. Use of firewalls, anti-virus, and IPS systems grew as a means of countering those threats. Human element related threats during this phase were device misconfigurations, excessive trust in security technology, and security flaws within the technology itself. For example, attitudes like “I have this anti-virus, so I am secure, now let me look at other non-security issues” were commonplace. The other major problem was security flaws within the technology itself, for example, security flaws within the software installed in firewalls, anti-virus products, and so forth.

Typical threats during the process implementation phase were too much reliance on documentation and an absence of actual practice. This phase does justice to the saying “documented but not practiced.” For example, organizations invested time and money in documenting policies and processes for information security, especially during periods of legal regulation and compliance. The result was that there were numerous documents that helped the organizations comply with legal regulations but did not substantially reduce information security risk.

The main reason why technology and processes have not managed to effectively bring down the instances of information security incidents is that the people entrusted with managing the technology and processes were not motivated, aware, responsible, and qualified for information security management. The rest of this chapter explores this in detail.

The Human Element: The Reason and Catalyst

Security threats (attacks, exploits, tactics, etc.) have changed over the years, with the most significant changes occurring over the past 5 years. A look at media coverage will reveal that instead of the massive worm and virus attacks that had prominence 3-5 years back, the focus today is on:

• Online frauds: Examples include capturing online banking user IDs and passwords using key loggers.
• Phishing: Crafting e-mails that pretend to be from banks, auction Web sites, and so forth, requesting the recipient to correct some fault with their online login credentials by enticing the user with a URL and a Web page that resembles a reputable Web site.
• Spam: Unsolicited junk e-mails that are an irritant and consume bandwidth.
• Social engineering: Psychological tactics used by attackers to make human beings reveal information. Phishing (mentioned above) is one such tactic.
• Spyware & malware: Malicious software that gets installed when a user clicks on an illegitimate URL or installs software. The user is often enticed towards installing the software using social engineering techniques.

Examining the reasons behind the success and propagation of these attacks reveals that almost all of these attacks target individuals. The human element is the reason and catalyst for the success of these attack techniques. The reasons can be further distilled to two factors:

• Human error: Inadvertent actions performed by people that have an impact on security. Examples include accidental deletion of data, revealing of passwords because the source of the request appears genuine, and so forth.
• Human fraud: Willful actions aimed at destroying information. Examples include sabotage, information theft, and so forth.

While fraud is beyond the scope of this chapter, the reasons for human error can be explored and corrected using well-designed and refined techniques.

The Human Factor in Information Security

How do people think and feel about information security? Finding answers to this question determines success or failure of information security management at both a personal and an organizational level. Organizations are constantly challenged by the irrational behavior of employees when they fail to secure intellectual information, customer information, computers, and other sensitive information or systems.

Case study one

An information security consultant was hired by an organization to determine the information security awareness of employees. The consultant, after a brief initial study of the organization, designed a simple test that focused on two psychological aspects:

• Obedience to authority: This is a behavioral factor exhibited by people, especially when they live in a society that accepts a hierarchical structure.
• Self preservation: This behavioral factor is common to all human beings. The desire to survive and sustain is inherent in all human beings.

The consultant executed the test as follows. With the CFO’s permission, the consultant used the intercom (telephone) of the CFO and dialed a few employees at random, employing the following dialogue—“Hi, I am your new ERP consultant and I am calling from the CFO’s room. We have just finished implementing a new salary module for processing your salaries next month onwards. If you don’t mind, we need your domain login ID and password to integrate your salary processing for the next month.”

The test was done on five subjects and all five subjects revealed their passwords (some of them even spelt the passwords out for the consultant’s benefit!).

Case study two

A company has a card access system for all employees to gain access to the building. They also have a regularly scheduled fire evacuation drill conducted in conjunction with the local fire department. The fire evacuation drill involves triggering of a fire alarm, a general evacuation announcement on the building PA system, evacuation of all employees from the building, routine building checks by the fire department, subsequent cancellation of the fire drill, and re-admittance of all the employees back into the building.

Given the large number of employees the company had in their building (several hundred), the security guards usually deactivated the swipe card access system during employee re-admittance after a fire drill. This was done to reduce the inconvenience factor, since it would take a substantial amount of time for all employees to be re-admitted using the two-lane swipe card access system. The security guards tried initially to enforce the policy, but due to a large number of complaints from employees, they chose to deactivate the system.

An intruder, knowing the fire drill schedule, would wait in the parking lot, mingle with the employees when they came out, and then subsequently gain access during re-admittance.

The mentioned scenarios raise the question, “Is information security a technical solution?” If it were purely a technical issue, then which technical controls were circumvented in the mentioned scenarios? Obviously, information security goes beyond purely technical or process issues. For example, there is a very high chance that the organization in the first case study would have had technical security controls such as passwords, document access controls, and so forth. Still, the very purpose of these controls was negated by a psychological tactic that exploited human weaknesses. Similarly, in the second case study, a security control existed, namely the card access system. But the long delays, frustrations, and complaints by employees led the guards to deactivate the system. While the first case study exploited the obedience-to-authority and self-preservation aspects of the individual, the second case study exploited the “reduce inconvenience” and “need to be liked” aspects of an individual.

The market today is cluttered with vendors who sell excellent solutions such as firewalls, which can process millions of packets per second; intrusion detection systems (IDSs), which can perform in-depth attack detection; worm and virus control mechanisms; and biometric and multiple-factor authentication systems. We also have multiple security standards and processes such as ISO27001, CoBIT, HIPAA, and so forth. Organizations have their own security policies and procedures defined to meet their security goals. But from the case studies presented, attempt to answer the following questions:

• Do technical solutions effectively mitigate information security incidents?
• After reading the above scenarios, do you think that the human perception of a potential threat scenario can be improved?
• What are the potential factors that could have avoided the above scenarios?

The case studies show how excellent technical controls (first case study) and good security processes and procedures (second case study) can be defeated due to the human element.

The “Reality” and “Feeling” of Security

Why do human beings commit errors that influence information security? The question can be answered by analyzing the difference between the “reality” and “feeling” of security.1 For a security practitioner, security is a reality that could be calculated in mathematical terms, for example—risk. For a non security practitioner, security is a feeling, a few examples of which are shared. This probably explains the following:

1. An individual feels more secure driving a car on his own rather than being driven by another driver, though there is no statistic to prove that driving on your own reduces the risk of an accident.
2. An individual feels more secure inside a car rather than in an airplane, though statistics prove that there is a higher chance of getting killed in a road accident than in an air crash.

The two behavioral examples mentioned are influenced by perception, which is in turn influenced by external factors such as media, peer group, culture, work practices, and so forth. This is explained diagrammatically in Figure 2.

Figure 2. The feeling of security and the influences. [Figure: media, society, culture, and peer group influence perception, which in turn influences the feeling of security.]

An individual, when faced with a situation whereby he/she has to make a decision that influences personal security or organizational security, is influenced by his/her “feeling” of security. This could be either a “conscious” or an “unconscious” decision. It may be a decision that has a positive impact on security (security is not compromised) or it could be a security tradeoff. Security “tradeoff” is examined in more detail in the next section.

Security Tradeoffs: Influenced by the Feeling of Security

Individuals make security tradeoffs every day. In fact, this is by far one of the most critical issues for information security managers and individuals responsible for information security implementation. A few examples of security tradeoffs that often worry an information security officer responsible for security practices are listed:

• Individuals share passwords to get work done quickly.

• Individuals write down passwords or stick them in easily accessible places so that they do not forget them.
• Individuals connect to unauthorized wireless access points for free Internet access.
• Individuals click on unknown URLs in phishing e-mails, introducing spyware or providing information that compromises privacy.

The reason for security tradeoffs is often personal convenience/inconvenience and other factors such as self preservation, fear, and so forth. For the purpose of this chapter, let us explain the “convenience/inconvenience” factor. For example—when asked whether they would like their blood samples tested for malaria every week, most persons would say no. Now when the same question is asked in a different context—“The company wants to send you on a United Nations project to the Amazon River basin”—the answer could be different. Hence, when confronted with a “cost,” people reduce or avoid “security tradeoffs.” This is illustrated in Figure 3.

Figure 3. Security tradeoff, personal inconvenience, and cost. [Figure: the risk of a security tradeoff rises with personal inconvenience and falls with the cost at stake (life, career, money, time, etc.).]

Given a security decision point where one of the choices causes great personal inconvenience, an individual is more likely to choose the option that causes the smaller inconvenience. In fact, the greater the inconvenience caused, the higher the security risk of the option chosen by the individual. Similarly, when faced with a security decision point where one of the choices has a higher personal cost associated with it (the decision has a big impact, positive or negative, on life, money, career, etc.), an individual is more likely to avoid the option that carries that cost. The higher the personal cost of a decision, the lower the security risk of the option chosen by the individual.

Security tradeoffs are often made unconsciously, based on the perception that we explored earlier. This “perception” factor can be explained in depth by using the “conscious competence” model that is explained in the next section.
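Before moving to that model, the relationship sketched in Figure 3 can be made concrete with a toy scoring sketch in Python. The baseline, the weights, and the linear form below are invented assumptions for illustration only, not a formula from this chapter or the literature:

def tradeoff_risk(inconvenience: float, personal_cost: float) -> float:
    """Toy likelihood (0..1) of taking the risky shortcut, given the
    inconvenience of the secure option and the personal cost at stake,
    both normalized to 0..1. Weights are assumed for illustration."""
    w_inconvenience, w_cost = 0.6, 0.8  # assumed weights
    score = 0.5 + w_inconvenience * inconvenience - w_cost * personal_cost
    return max(0.0, min(1.0, score))

# Weekly malaria test with nothing at stake: inconvenient, low cost.
print(tradeoff_risk(inconvenience=0.9, personal_cost=0.1))  # ~0.96: likely skipped
# The same test before the Amazon assignment: life and career at stake.
print(tradeoff_risk(inconvenience=0.9, personal_cost=0.9))  # ~0.32: likely accepted

Run on the malaria example above, the same inconvenience yields a high tradeoff risk when nothing is at stake and a much lower one once life and career enter the picture, matching the direction of the arrows in Figure 3.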

The Conscious Competence Model

The “conscious competence” model2 is a matrix that links states of human awareness with knowledge (“competence” and “incompetence”) and links them to decision-making abilities. Table 1 identifies four “levels” a person can be in when presented with a scenario that requires an intelligent decision to be made. For the purpose of our discussion, the aim of studying the “conscious competence model” is to understand how individuals behave (react) in a scenario that contains potential information security threats and associated risks. The decision making (reaction) or the security tradeoff the individual makes would be influenced by the “conscious competence quadrant” the individual is operating in, either consciously or unconsciously, at that particular moment of time. We will analyze each of the four quadrants.

Table 1. Conscious competence model

• Quadrant 1 (Unconscious + Incompetence): You don’t know that you do not know.
• Quadrant 2 (Conscious + Incompetence): You know that you do not know.
• Quadrant 3 (Unconscious + Competence): You can do it without thinking about it.
• Quadrant 4 (Conscious + Competence): You can do it if you think about it.

Quadrant 1: Unconscious + Incompetence

In this “level,” the person is totally ignorant of the potential risks associated with a scenario:

• The person is unaware of the existence or relevance of a skill.
• The person is unaware of the particular skill deficiency.
• The person denies the relevance or utility of the skill.

Example from real life: A toddler playing with a sharp instrument. Incompetence in this instance is lack of knowledge that the instrument is sharp and that sharp instruments can hurt; unconsciousness is playing with the instrument without forethought.

Quadrant 2: Conscious + Incompetence

In this “level,” the person is aware that he or she is “ignorant”:

• The person is aware of the existence and relevance of the skill.
• The person is aware of his/her deficiency in the skill.

Example from real life: Deciding to ask for assistance in a foreign country (foreign language not understood). Incompetence in this instance is lack of knowledge of the foreign language; consciousness is recognition of the fact that the individual does not know the foreign language.

Quadrant 3: Unconscious + Competence

In this “level,” the person is aware without being conscious about it:

• The skill becomes automatic, second nature.
• It becomes possible for certain skills to be performed while doing something else, for example, knitting while reading a book.
• The skill is instinctual; the person might have difficulty in training or explaining to another person how it is done.
• This behavior needs to be checked regularly against new processes.
• It is difficult to unlearn, since it is instinctual.

Example from real life: Automatically looking at both sides of the road, based on sensory cues, while crossing the road while the mind may be actively engaged in other thoughts (e.g., talking on a cell phone while crossing the street, driving, sports activities, typing, etc.). Competence in this instance is the knowledge that one needs to look at traffic before crossing a road; unconsciousness is applying the knowledge without forethought and effort.

Quadrant 4: Conscious + Competence

In this “level,” the individual is aware, but must force himself/herself to be so:

• The person will need to concentrate and think in order to perform the skill.
• The person can perform the skill unassisted.
• The skill is demonstrable, but not yet teachable to another person.

Example from real life: In a foreign land, while crossing the road, consciously forcing yourself to remember that driving directions may be different. Competence in this instance is the knowledge that driving directions may be different; consciousness is applying this knowledge with forethought.
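Read as a data structure, the model is simply a two-by-two lookup. The sketch below is hypothetical Python code: the quadrant wording comes from Table 1, while the suggested emphases loosely paraphrase Table 3 later in the chapter and are illustrative rather than prescriptive.

QUADRANTS = {
    ("unconscious", "incompetence"):
        ("Quadrant 1: you don't know that you do not know",
         "train and educate afresh"),
    ("conscious", "incompetence"):
        ("Quadrant 2: you know that you do not know",
         "build the skill; encourage acknowledging the gap"),
    ("unconscious", "competence"):
        ("Quadrant 3: you can do it without thinking about it",
         "re-check habits regularly against new processes"),
    ("conscious", "competence"):
        ("Quadrant 4: you can do it if you think about it",
         "reinforce, and watch for deliberate risk acceptance"),
}

def classify(awareness: str, knowledge: str):
    """Return the (quadrant description, suggested emphasis) pair."""
    return QUADRANTS[(awareness, knowledge)]

# The untrained employee who clicks a phishing URL sits in Quadrant 1.
print(classify("unconscious", "incompetence"))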

Now, let us analyze the link between the various quadrants of the “conscious competence model” and “security tradeoffs.” Is it possible to reduce the risk of “security tradeoffs” using the “conscious competence model” as a baseline? Let us explore this in more detail in the next section.

Reducing Risk of “Security Tradeoffs” Using the “Conscious Competence” Model as a Guideline

To use the “conscious competence model” as a guideline for reducing the risk of “security tradeoffs,” we must view the model from a different perspective. In the previous section, we linked the model with “security tradeoffs.” In this section, we are viewing the “conscious competence model” from a positive perspective by adding three more factors:

• Can behavior that reduces security risk be learned?
• Pitfalls and/or challenges
• Positives

In Table 3, we have identified whether behavior can be modified.

Table 2. The link between the “conscious competence model” and security tradeoffs

Unconscious + Incompetence
• Reason for security tradeoff: The individual is ignorant and hence influenced by perception.
• Contributing factors: Perception is influenced by culture, society, peer group, and so forth.
• Example in the context of information security: An untrained employee clicking on a URL provided within a “phishing” e-mail.

Conscious + Incompetence
• Reason for security tradeoff: The individual possibly knows the risks but still makes a decision that carries a security risk which impacts information security.
• Contributing factors: Personal convenience/inconvenience.
• Example: An employee is unable to identify whether an e-mail is genuine or not. Rather than take the time to verify the authenticity of the e-mail, the employee trusts their “gut” and could make a decision with a high security risk.

Unconscious + Competence
• Reason for security tradeoff: The individual is knowledgeable and makes a decision without conscious awareness.
• Contributing factors: Self preservation and/or there is a cost involved.
• Example: Pressing “Ctrl+Alt+Delete” while leaving the desk because it has become a habit (the reason could be a company security policy that imposes a penalty).

Conscious + Competence
• Reason for security tradeoff: The individual knows the risks but makes a conscious decision to accept the risk.
• Contributing factors: “I don’t care” mentality, lack of respect for security policies.
• Example: Not bothering to “swipe” the access card and tail-gating behind a colleague.

Table 3. Correcting security tradeoffs using the conscious competence model (the quadrants, reasons, contributing factors, and examples are as in Table 2)

Unconscious + Incompetence
• Can reduced-security-risk behavior be learned? Yes.
• Pitfalls and/or challenges: The individual has to be trained and educated afresh, and this involves effort and time.
• Positives: You have a fresh and impressionable mind to work with.

Conscious + Incompetence
• Can the behavior be learned? Yes.
• Pitfalls and/or challenges: The individual may or may not be willing to acknowledge the lack of competence.
• Positives: The subject may have a positive attitude and willingness to correct weaknesses.

Unconscious + Competence
• Can the behavior be learned? Yes.
• Pitfalls and/or challenges: Set perceptions have to be changed.
• Positives: The individual has a competence (an advantage) that can be enhanced further.

Conscious + Competence
• Can the behavior be learned? Yes.
• Pitfalls and/or challenges: The individual may have a negative attitude with respect to information security, which requires strong corrective measures.
• Positives: A good opportunity to define strong information security disciplinary policies.

The next obvious question is: what are the techniques or strategies that could be used to modify behavior? It is best to answer this question by analyzing the effectiveness of current approaches.

Current Approaches and Their Effectiveness

The various initiatives undertaken by organizations to reduce the risk of security tradeoffs are:

• Training (classroom & electronic)
• Visual content—posters, notices, videos, and so forth
• Security policies

Though the mentioned activities have been in use for some time, they are not effective beyond a certain point. The possible reasons for the ineffectiveness of current information security awareness and training approaches are:

Quality of content: Organizational information security training programs focus primarily on security policies and procedures. Though this is important and has to be conveyed to the employees, the content tends to be dry and monotonous.

Absence of focus: Organizations need to ask themselves the question: what are we trying to achieve—increase security awareness or convey security policies? Without this focus, trainings tend to be drab affairs.

Fear: Organizations play on their employees’ fear by conveying the following message in a subtle manner—“If you don’t do this (follow security policies), then you will suffer the following consequences (repercussions or disciplinary procedures).” This creates hypocrisy within the organization, because on one side the organization conveys the message to employees that they are the organization’s biggest assets, and on the other side, the organization conveys the message that the employees are a “security threat.”

Coverage: For large-sized organizations it is often difficult to cover all employees in their information security training program. Reasons for this are that employees could be at a customer location, absent from work, attending to urgent project deadlines, and so forth. Moreover, it is important to ensure that perimeter personnel (security guards), support personnel (janitors, catering), and external contractors attend the training, and this is often missed.

Quality of trainer: Often organizations confuse the role of an IT expert or information security expert with that of a trainer. This often happens when a system administrator or information security engineer is asked to train employees. Though they may be experts in their domain, they may not possess the communication skills that are so essential to convey the importance of information security. Moreover, the trainers themselves may be stressed by handling their existing tasks as well as handling training.

Measurement of retention: Information security awareness as practiced by organizations today is often limited to conducting a training session. The training sessions are not followed up with a strategy to measure how much the employees have “captured” and “retained.”

So, is it possible to have a more effective strategy to reduce security tradeoffs? The answer is yes, and the strategy is to focus on “security competence.”

Information Security Competence

The approach to reducing security tradeoffs must focus on the development of individual and organizational information security competence. Information security competence can be defined as a blend of intelligence, feeling, skills, and organizational relationship. This is elaborated in Figure 4 and in the following list:

• Security perception—understanding the reason and logic for information security.

• Security acceptance—adopting information security and allocating mind-share.
• Applying security—integrating security practices into day-to-day work.
• Commitment to the organization—respecting the security policies and practices of the company.

Figure 4. Core approach to enable information security competence. [Figure: four building blocks: security perception (individual awareness of information security); security acceptance (accepting information security as an important work habit in spite of personal inconvenience); applying knowledge/skills (using security skills and the ability to evaluate risks); commitment to the organization (following security practices and procedures).]

Strategies for Increasing Information Security Competence

Step 1: Set Goals

The goals for improving information security competence can be defined as:

• Make every individual as “responsible & aware” of information security as possible.
• Subsequently reduce security incidents (security tradeoffs) due to the human factor.
• Promote a positive feeling about information security and consequently gain more acceptance of information security practices in day-to-day work.
• Create a management system for monitoring, measuring, and improving human impact on information security.

Step 2: Increase Security Perception by Conveying Concepts

For effective management of the human impact on information security, it is important to understand the impact of three factors on information security, namely:

• Culture
• Organizational ecosystem
• Individual work practices

The concepts can be summarized as follows:

• Individuals imbibe the culture of the society they live in, and of the environment surrounding them.
• They carry this culture to the organization they work in, where it is integrated with the work culture.
• At a micro level, there are behavioral characteristics unique to each individual.

Step 3: Increase Security Acceptance

Security acceptance can be increased through strategic intervention techniques that are “active”

rather than “passive” in nature. For example, a regular information security awareness training program is “passive” in nature, as audience participation is limited. Security acceptance can be increased using “active” techniques that make the end-user “THINK” about information security. Examples of such techniques are:

• Vision building exercises
• Mind-mapping sessions
• Coaching/mentoring sessions
• Security perception surveys
• Quizzes and games that require active participation
• Tools that simulate realistic attacks and test participants (a small sketch of such a tool appears after Figure 5 below)

An example of a mind-mapping session is provided here. A mind map, as per the definition in Wikipedia, is a diagram used to represent words, ideas, tasks, or other items linked to and arranged radially around a central key word or idea. It is used to generate, visualize, structure, and classify ideas, and as an aid in study, organization, problem solving, and decision making. The mind-mapping technique can be used to make a person analyze the impact of his or her own “at-risk” information security behavior. For the purpose of this example, let us use a mind-map to analyze the impact of “password sharing.”

The mind map shown in Figure 6 was generated from a real-life information security awareness session with a group of employees in an organization. The security trainer gave the group a simple phrase—“sharing of passwords”—to think about. Next, the trainer asked the members of the group to talk about the thoughts that came to their minds when they heard the phrase “sharing of passwords.” The group members provided answers such as “unauthorized entry,” “account misuse,” and so forth (highlighted in red). Further, the trainer asked them to think about these phrases in an organizational context. This stimulated the audience to think and share their ideas on how, for example, “unauthorized entry” could impact the organization. The group members enumerated that “unauthorized entry” could be into systems, e-mail accounts, file servers, the corporate intranet, and so forth.

The session concluded with the audience being able to appreciate the impact of a security breach, such as sharing of passwords, that looks trivial when performed but, when taken in an organizational context, has impacts such as “impact on customer confidence,” “fines & penalties,” and so forth (highlighted in green).

Figure 5. Concepts behind information security practices
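As a sketch of the “tools that simulate realistic attacks” item in the list above, the following hypothetical Python tracker records the outcome of a simulated phishing exercise. The campaign data, field names, and the choice of click rate versus report rate as measures are invented for illustration; any real exercise of this kind would also need management, HR, and legal sign-off.

from dataclasses import dataclass, field

@dataclass
class SimulatedPhish:
    recipient: str
    clicked: bool = False    # fell for the lure
    reported: bool = False   # reported the e-mail to security

@dataclass
class Campaign:
    name: str
    results: list = field(default_factory=list)

    def click_rate(self) -> float:
        """Fraction of recipients who clicked the simulated lure."""
        return sum(r.clicked for r in self.results) / len(self.results)

    def report_rate(self) -> float:
        """Fraction who reported it; arguably the better awareness signal."""
        return sum(r.reported for r in self.results) / len(self.results)

campaign = Campaign("Q1 exercise", [
    SimulatedPhish("alice", clicked=True),
    SimulatedPhish("bob", reported=True),
    SimulatedPhish("carol", clicked=True, reported=True),
])
print(f"clicked: {campaign.click_rate():.0%}, reported: {campaign.report_rate():.0%}")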

Step 4: Applying Security Skills

The desired consequence of “Step 3—increasing security acceptance” is to promote the application of security skills. This can be conveyed through awareness sessions on using security tools or on adopting security practices into day-to-day work. This can again be achieved through strategic awareness sessions that accomplish a particular goal. For example:

• Choosing strong passwords
• Identifying phishing or spam e-mails
• Best practices for keeping systems “malware” free

Step 5: Linking Security with the Organization’s Goals

The ultimate purpose of information security is to ensure that security errors and accidents do not compromise the business goals of the organization. This can be accomplished by conveying the following message, in an example format, to the workforce (Figure 7).

Figure 7. The link between business goals and security. [Figure: business goals (… ethical, etc.) flow down to security goals (ensure security errors and incidents do not prevent achievement of business goals) and on to security targets (security incidents due to work force incompetence must be an absolute minimum).]

The message is important to convey the link between business goals and security targets, so that the end-user is able to understand and appreciate the importance of information security in achieving the business goals of the organization.

Step 6: Continuous Measurement

The strategies that are used to increase security competence must be continuously measured and improved. The word measurement brings the term “metrics” to almost every manager’s mind. But what types of metrics exist to measure information security competence? The issue is quite challenging, since what has to be measured is knowledge, competence, and awareness, which are not concrete and are difficult to measure. The best answer is that an organization must determine what best suits it. For example, an organization may choose the metric “number of security incidents before strategic awareness sessions and after awareness sessions.” Another organization may engage an external penetration tester and perform a series of social engineering tests before and after awareness sessions and use the results as a metric.

Figure 6. Mind-map: Analysis of own “at-risk” behavior (password sharing)

The best bet is to choose an approach and measurement technique that best fits the business requirement. A discussion of measuring improvement in information security awareness follows.

Discussion

This discussion focuses on the usage of metrics for measuring information security awareness. Organizations have to decide how they want to address the following questions:

• What types of metrics are used—qualitative or quantitative?
• What is the measurement approach—direct measurement or indirect measurement?

Let us evaluate a few real-world examples. ABC Inc. conducts a series of information security audits. The audit findings are listed:

• The number of information security incidents being reported is few.
• Awareness of basic information security practices such as not sharing passwords, locking systems while leaving the desks, and so forth, is very poor.
• Reluctance to follow security regulations (for example—not bringing personal media players and storage devices) is high.

Subsequently, ABC Inc. conducted a series of innovative information security awareness sessions that included mind-maps, animations, coaching dialogues, and so forth. Another audit was then conducted, and the audit results are listed in the first table below.

A glance at the results would make a security manager happy. But metrics are a double-edged sword and can often be misleading, providing a false sense of security. Let us look at the same table and ask some probing questions, as in the second table below.

The tables highlight two important aspects of using metrics to measure an increase in information security awareness:

• Do not trust an individual metric. Always corroborate a metric with another metric or with data to reach a more accurate conclusion. For example, in the table, the first metric on information security incidents would have pointed a reviewer in the wrong direction, but with a probing question being asked, it led the reviewer in a much more meaningful direction.
• Perform direct and indirect measurements. For example, the metric “attitude towards security regulations and rules” can be measured by a simple perception survey, asking the respondents a sample question as follows—“Do you feel information security is important now that you have attended the training? Would you stop using personal electronic storage devices at the work place?” Respondents may say “yes.” But without corroborating evidence, trusting this “yes” alone invites a false sense of security. Hence, in the table, this metric is further probed with a question that focuses on an indirect metric—verification of security practices by auditing the number of personal devices being carried by people. (A small sketch of this corroboration idea follows the tables.)

Audit results (with metrics):

• Number of information security incident reports by employees. Previous audit: 12 in 6 months; current audit: 39 in 6 months.
• Awareness of basic information security practices by employees (example—not sharing passwords, locking systems). Previous audit: Poor; current audit: Good.
• Attitude towards security regulations and rules. Previous audit: Negative; current audit: Positive.

The same audit results, with probing questions:

• Number of information security incident reports by employees. Previous audit: 12 in 6 months; current audit: 39 in 6 months.
Comment: While users are reporting more incidents (which means more awareness), the number of information security incidents has also increased (which is bad). Is this a case of new security incidents occurring, or of security incidents that were happening previously but were not reported in the audit due to lack of awareness?
• Awareness of basic information security practices by employees (example—not sharing passwords, locking systems). Previous audit: Poor; current audit: Good.
Comment: The mentioned audit results must be supported by a review of incident reporting logs. For example—how many reports of password sharing are available in the incident logs? How many reports of unlocked systems are available in the incident logs? If sufficient information is not available, we must wait a few more months and perform another audit before clearly identifying an improvement in information security awareness among the workforce.
• Attitude towards security regulations and rules. Previous audit: Negative; current audit: Positive.
Comment: The mentioned audit results must be supported by a review. For example—what is the number of personal devices available with the employees within the work area at any given time? Is it less than what it was prior to the awareness sessions?
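A minimal sketch of the corroboration idea discussed above, applied to the incident metric from the tables. The report counts are taken from the ABC Inc. example; the incident-log count and the simple decision rules are invented for illustration.

def interpret_reports(reports_before: int, reports_after: int,
                      logged_incidents: int) -> str:
    """A rise in reports can mean better awareness, more incidents, or
    old incidents finally surfacing; cross-check an independent source
    (e.g., incident logs) before concluding anything."""
    if reports_after <= reports_before:
        return "reporting flat or down: probe for under-reporting"
    if logged_incidents > reports_after:
        return "more logged incidents than reports: incident growth likely"
    return "reports up: corroborate with logs before calling it awareness"

# 12 reports in six months before the sessions, 39 after (see the tables).
print(interpret_reports(12, 39, logged_incidents=41))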

A good model for performing information security management using metrics is ISM3 (information security management maturity model),3 available at www.ism3.com.

CONCLUSION

Information security has evolved to its current focus on the human element. The important factors to be understood are the reasons why an individual behaves in a particular manner when confronted with a situation that poses a potential information security risk. In this context, the “reality” and “feeling” of security must be understood. Further, it is important to understand that almost all of us make security tradeoffs on a day-to-day basis based on our perception. This perception is fed by various factors including society, media, workplace behavior, culture, and so forth. The conscious competence model is a good guide that can be used to understand the impact of security tradeoffs and also to define corrective measures. Further, the corrective measures themselves must focus on increasing security competence through strategic techniques that convey the reasons for corrective actions, and these strategies must be subject to continuous revision and improvement.

It is safe to surmise that the human element must be addressed to ensure that businesses stay sustainable and successful over the course of time.

References

Gordon Training Institute. Conscious competence learning model. Retrieved from www.gordontraining.com

ISM3. Information security management maturity model. Retrieved from www.ism3.com

Schneier, B. (2007). The psychology of security. Retrieved from www.schneier.com/essay-155.html


Chapter IV
The Weakest Link:
A Psychological Perspective on Why
Users Make Poor Security Decisions
Ryan West
Dell, Inc., USA

Christopher Mayhorn
North Carolina State University, USA

Jefferson Hardee
North Carolina State University, USA

Jeremy Mendel
North Carolina State University, USA

Abstract

The goal of this chapter is to raise awareness of cognitive and human factors issues that influence
user behavior when interacting with systems and making decisions with security consequences. This
chapter is organized around case studies of computer security incidents and known threats. For each
case study, we provide an analysis of the human factors involved based on a system model approach
composed of three parts: the user, the technology, and the environment. Each analysis discusses how
the user interacted with the technology within the context of the environment to actively contribute to
the incident. Using this approach, we introduce key concepts from human factors research and discuss
them within the context of computer security. With a fundamental understanding of the causes that lead
users to make poor security decisions and take risky actions, we hope designers of security systems are
better equipped to mitigate those risks.


INTRODUCTION

Humans are fallible. That means exploitable. Recorded in every religious text and mythology is the evidence of human imperfection. We lose our wallets, forget our passwords, and drive over the speed limit when we are in a hurry. Yet somehow, we managed to develop manifestations of pure logic in the form of computing systems. At the helm of all this technical sophistication and complexity, unfortunately, is a user.

True Story:

“Company X” is a large nationwide hotel chain across the United States. Each hotel has two wireless networks, one accessible to hotel guests and one accessible to hotel employees. The hotel employees use this for reservations, reporting, and so forth. Once a month, a number of reports are rolled together by the IT manager, who puts them into a presentation for his upper management. The executives present this report as part of a monthly presentation to the parent company who owns the hotel chain. The parent company and the hotel chain have different policies and firewall settings, and the IT manager for the hotel chain has not, in 3 years, been able to figure out how to make them mesh without causing breakages down the line. As a result, once a month, when the executives give their presentation, the IT manager drops the firewall for the hotel chain for the duration of the presentation.

User error and poor human factors design contribute to many of the top computer security risks faced today. According to a recent CSI/FBI computer crime and security study, losses due to computer security incidents were estimated to total more than $52 million across the 313 companies surveyed (Gordon, Loeb, Lucyshyn, & Richardson, 2006). Of the most common security incidents reported in the study, losses related to viruses or malware totaled an estimated $15.7 million, losses associated with the unauthorized access of information totaled $10.6 million, and losses caused by laptop or other hardware theft totaled $6.6 million.

When it comes to data loss within organizations, it appears that users are more of a problem than hackers or malware. According to a 2007 report from the IT Policy Compliance Group, mistakes made by internal employees accounted for approximately 75% of all data losses (Gaudin, 2007). In contrast, malicious activity such as Internet-based threats, attacks, and hacks accounted for about 20% of data losses.

On the home front, a 2004 survey from AOL and the National Cyber Security Alliance reported that 72% of home users surveyed did not have a properly configured firewall (America Online and the National Cyber Security Alliance, 2004). In addition, approximately 40% of users with home wireless networks had no encryption configured.

In all of these cases, there are human factors issues associated with the acceptance and usability of security mechanisms, user perceptions of risk and how it motivates their behavior, and decision-making strategies which pit convenience against security.

The focus of this chapter is not on the technologies of computer security but on the psychology of those who use them. Human decision-making has been a topic of study in the social sciences for well over a century (Goldstein & Hogarth, 1997). The research shows that individuals are often less than optimal decision-makers when it comes to reasoning about risks (Simon, 1956). Not only do internal factors such as prior experience and knowledge specific to the decision maker influence the quality of decisions, but many naturalistic or environmental factors such as time pressure (Hammond, 2000) and situational context (Klein, 1998) also affect decisions. Thus, there are a variety of data sources available to describe the nature of predictable and exploitable characteristics in the human decision-making process. Understanding these principles and how users come

to make security decisions may suggest how we can develop interventions to improve the outcome of the decisions. Recent evidence suggests that decision-making within the domain of computer security does not qualitatively differ from decision-making processes used in other contexts (Hardee, West, & Mayhorn, 2006). Thus, much of the knowledge gleaned from the classical decision-making literature can be used to understand the pre-existing biases that might place users at risk in user-security scenarios.

We hope the reader develops insight into the user-security problem through a better understanding of the cognitive mechanisms that underlie the human side of the equation.

The goal of this chapter is to raise awareness of cognitive and human factors issues that influence an end user’s behavior when he or she is interacting with a system and making decisions or taking actions that have security consequences. With a fundamental understanding of the causes that lead users to make poor security decisions and take risky actions, designers of security systems may be better equipped to mitigate those risks.

To this end, this chapter is organized around case studies of documented computer security incidents and known threats. For each incident, we will provide a post-mortem analysis of causal factors involved. In this fashion, we will introduce key concepts from human factors research and discuss them within the real world context of information security. In addition, we will discuss human factors issues related to potential solutions that might be considered to mitigate the incidents described in the case studies.

SYSTEM MODEL APPROACH TO UNDERSTANDING SECURITY INTERACTIONS

Human factors as a scientific discipline often uses a systems approach to conceptualize all aspects of a problem (Helander, 1997). Within the systems approach, the user-machine-environment attributes are all considered to interact during the production of task-specific behavior. The “user” refers to operator characteristics such as expertise, competence, age, or motivation. Here, a thorough understanding of the abilities and the limitations of the user is important in determining how information is perceived and processed. Equally important is the “machine” component, which consists of the characteristics of any extraneous tool that is being used to aid the human operator in performing the designated task. Human-machine interactions range from simplistic tasks where someone is using a hammer, to much more complex relationships involving automated systems (Sanders & McCormick, 1993). Lastly, all user-machine interactions occur within the “environment,” which describes the task as well as the context in which it is performed. Thus, the social/organizational climate of a company as well as the ambient (e.g., stress, heat, etc.) characteristics of the environment might also influence task performance.

The systems approach has proven to be an effective tool in improving safety within a variety of contexts. Human error in diverse situations ranging from the 1984 Bhopal chemical spill in India to the 1986 Chernobyl nuclear reactor disaster has been analyzed using this approach (Reason, 2000). More recently, attempts to reduce errors within the healthcare industry have greatly benefited from the realization that medical errors are the product of multiple factors. Bogner (2004) successfully illustrated that the traditional rule within the healthcare industry of “blaming the provider” for adverse events such as infusion pump mishaps, medication administration errors, and surgical blunders is not an effective means for preventing such incidents in the future. Rather, an understanding of the interactions between the provider, the equipment, and the organizational/physical environments is necessary for formulating effective error reduction through informed intervention. For instance,

common transfusion errors have been reduced through more effective labelling of samples at the blood bank (Linden, 2004). Moreover, efforts to reduce medication adherence errors in older adults have been informed by knowledge of how cognition and perception change with age (Park & Skurnik, 2004) and how technology such as personal digital assistants might be designed to facilitate adherence (Mayhorn, Lanzolla, Wogalter, & Watson, 2005).

Given this consistent level of previous success, it seems appropriate to bring this technique to bear in understanding how computer security errors occur. We envision the user-security scenario as a system which can be modelled with three parts: the user, the technology they are interacting with, and the environment/context in which the interaction takes place. These three things together determine how a user will respond in a given situation. This also provides a framework for us to categorize and discuss a myriad of factors in user-security scenarios.

For each security incident case study described in this chapter, we will discuss the causal factors involved based on this systems approach. Each explanation will discuss how the characteristics of the user interacted with the technology within the context of the environment to actively contribute to the occurrence of the security incident.

PHISHING FOR RECRUITS: WEST POINT MILITARY ACADEMY E-MAIL STUDIES

In 2004, researchers at the United States Military Academy at West Point, New York, conducted a study in which a sample of cadets received e-mails from a fictitious senior officer asking them to click on an embedded link for information about their grades (Ferguson, 2005). The goal behind the study was to actively test the effectiveness of the school’s security awareness training program, which all cadets were required to take. Over 400 of the 512 cadets sampled for the study (more than 80%) clicked on the embedded link.

In a follow-up study 1 year later, students were sent one of three types of phishing e-mails which encouraged them to either click on an embedded link in the e-mail, open an attached file supposedly pertaining to their grades, or, worst of all, submit their social security numbers through a Web site (Jackson, Ferguson, & Cobb, 2005). Of the 1,010 phishing e-mails sent encouraging cadets to click on an embedded link, roughly 30% of cadets did so. This was a large decrease from the previous year, but a significant number of cadets proved to be vulnerable to phishing. Of the 1,014 phishing e-mails sent encouraging cadets to open an attached file, approximately 48% of cadets did so. Finally, of the 456 phishing e-mails sent trolling for social security numbers, roughly 47% of cadets provided them.

What factors account for these high numbers, despite the cadets having participated in mandatory security awareness training and having been fooled once before?

User Factors

A common concern of IT organizations is that users fall prey to viruses and other malware distributed through e-mail. Why is this attack vector so successful? In the case of the West Point e-mail, what factors led to cadets believing the message was real and taking the action requested of them? A little background first.

Satisficing and Problem Solving

Humans have limited information processing capacity and routinely multi-task. As a result, few tasks or decisions receive our full attention at any given time. Humans allocate mental processing resources judiciously and try to accomplish the most while using the least.

To conserve mental resources, we tend to favour quick decisions based on learned rules and

heuristics. These are aided by pattern recognition, which is cheap and easy in terms of mental effort. In fact, pattern recognition is what the human brain does best. In contrast, careful and deliberate concentration requires focused attention, a great deal of scarce short-term memory resources, and is very expensive in terms of mental effort.

When considering a problem, humans usually develop a solution that meets the needs of the problem to a satisfactory degree. This is called satisficing (Simon, 1957). We settle on choices we can arrive at quickly and that seem good enough, not decisions that are the best. While decision-making is not optimal, it is highly efficient. It is efficient in the sense that it is quick, it minimizes effort, and the outcome is good enough most of the time. Of course, this willingness to settle for a less than optimal solution may leave us open to error.

Returning to the West Point e-mails, we can expect that cadets do not scrutinize the authenticity of every e-mail they receive, or consider the consequences of opening attachments, embedded links, and so forth. It would be extremely counterproductive in terms of time and effort, especially when considering the low frequency of genuine security incidents.

Representativeness as a Decision Making Heuristic

Representativeness refers to a decision-making heuristic people employ where decisions are made by classifying the problem as a known type based on experience (Tversky & Kahneman, 1974). The decision is made based on memory of past decisions with a category of problems rather than evaluating the options each time.

In the case of the West Point e-mails, if the e-mail looked similar enough in appearance and content to previous e-mails, users accepted its legitimacy with little doubt. While the bogus e-mails created for the West Point study had suspicious cues intentionally added in the content, such as a fictitious colonel and office address (Figure 1), they were too subtle to be noticed by a cadet skimming through all of their e-mails.

Figure 1. Example of a fictitious phishing e-mail used in the 2004 West Point phishing study. The only clues to the fraudulent nature of the e-mail are the fictitious colonel and building.

From: [email protected] [mailto:[email protected]]
Sent: Tuesday, June 22, 2004 4:57 PM
To: [email protected]
Subject: Grade Report Problem

There was a problem with your last grade report. You need to:

Select this link Grade Report and follow the instructions to make sure that your information is correct; and report any problem to me.

Robert Melville
COL, USCC
[email protected]
Washington Hall, 7th Floor, Room 7206
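A small hypothetical Python sketch underlines how subtle these cues are. The whitelist, sender address, and link below are invented stand-ins (the study’s actual addresses are redacted in this copy of the text); note that a naive sender-domain check passes a spoofed message whose domain looks right, which is exactly why the fictitious name was so hard to catch.

KNOWN_SENDER_DOMAINS = {"usma.edu"}  # assumed whitelist, for illustration

def suspicious_cues(sender: str, links: list) -> list:
    """Return human-readable reasons to scrutinize a message."""
    cues = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in KNOWN_SENDER_DOMAINS:
        cues.append("sender domain '%s' is not on the known list" % domain)
    for url in links:
        if not url.lower().startswith("https://"):
            cues.append("embedded link '%s' is not HTTPS" % url)
    return cues

# A made-up message in the style of Figure 1: the spoofed sender passes
# the domain check; only the (invented) embedded link draws a flag here.
print(suspicious_cues("robert.melville@usma.edu",
                      ["http://grades.example.net/report"]))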

In a typical learning situation, behavior is shaped by positive reinforcement when we do something "right." When we do something successfully, we are usually rewarded immediately, so that the connection between our action and the consequence is clear (Grice, 1948). This temporal contiguity is missing in the case of security: when the user does something correctly (e.g., makes a secure choice), the reinforcement is that bad things are less likely to happen. There is no immediate reward or instant gratification, which can be a powerful reinforcement in shaping behavior.

In another common learning situation, behavior is shaped by negative reinforcement when we do something "wrong." We do something bad, and we immediately suffer the consequences. In the case of security, when the user does something bad, the negative reinforcement may not be immediately evident. It may be delayed by days, weeks, or months, if it comes at all. From previous research, it is known that the introduction of a delay between the behavior and the bad consequences will result in less suppression of the behavior (Baron, 1965). Thus, cause and effect is learned best when the effect is immediate, and anti-security choices often have no immediate consequences. This makes learning consequences difficult except in the case of spectacular disasters and near disasters.

Technology Factors

Credibility

Exactly what was it about the e-mails that made them seem so credible? Both the sender of the e-mail and the e-mail address were bogus, yet the cadets made an error in judgment by responding inappropriately with their personal information. This scenario is an illustration of one area where the attributes of the system or technology might influence the thought processes of the user. When a user encounters a Web site and has to judge its authenticity from limited information (e.g., its appearance), studies show that users report the "design look" as the most important indicator of credibility (Stanford, Tauber, Fogg, & Marable, 2002). Given the simple heuristic approaches that users employ to establish the veracity of online information, it should be no surprise that users are generally bad at judging the credibility of information sources like Web pages based on domain names or official seals/logos as well (Wogalter & Mayhorn, 2006).

Personal Relevance

It is likely the content of the e-mail, dealing with cadet grades, made the e-mail personally relevant to the recipients. Therefore, they were motivated to perform any action requested. It stands to reason that this kind of specific and targeted content would be much more relevant and motivating than generic spam content dealing with money from royal families in Nigeria or pictures of celebrities.

Environmental Factors

In addition to the user factors and technology factors, the context around the incident played an equally important role. In the first study, the fake e-mail was sent to students just before the end of a semester, when grades were on all cadets' minds. In addition, by chance, cadets had received an e-mail regarding grades a few days before the fake e-mail was sent. Thus, there was a very credible context surrounding the receipt of the fake e-mail, which added to its perception of legitimacy.

Time Pressure

Because cadets were undergoing both academic and military training, they might be considered a special population in terms of their level of community engagement and activity. Arguably, these cadets were engaged in a very busy lifestyle that required them to multi-task on a regular basis.


It seems likely that time pressure may have played a role in complying with the bogus e-mails and made cadets more likely to rely on rote decision-making heuristics when evaluating the e-mails.

Also, because the cadets were actively engaged in military training that conditioned them to obey the commands of a superior officer, they were predisposed to comply with the seemingly credible request. Within the military environment, subordinates may be very used to receiving frequent orders from superiors with little advance notice. Such environmental factors might combine with the technology and user variables to elicit security behavior which, in retrospect, might be considered inappropriate.

Evaluating Potential Solutions

Evaluating potential solutions to these problems from a user-technology-environment perspective allows a more holistic approach to assessment and calls attention to user and environmental factors that might otherwise go overlooked.

In systems engineering, designers often try to design error out of the system by eliminating dependencies on components with high failure rates and working well within component tolerances. The approach that reduces the greatest source of error in the system may be to eliminate the user from the interaction where possible. For example, automatically scanning attached files before opening, or before downloading to the user's machine, would be a more robust way to mitigate malware than relying on users to pay attention to what they are doing.
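As a minimal sketch of what taking the user out of this loop could look like (the function name, the hash-based lookup, and the blocklist are our illustrative assumptions, not a description of any particular mail gateway; real scanners use full anti-virus engines rather than static hash sets), an attachment could be checked automatically before the user is ever given the chance to open it:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known malware (illustrative only).
KNOWN_BAD_SHA256 = {
    "2e7d2c03a9507ae265ecf5b5356885a53393a2029d241394997265a1a25aefc6",
}

def attachment_is_safe(path: str) -> bool:
    """Hash the attachment and refuse to release it if it is known-bad.

    The 'open or not?' decision is made by the system, not the end user.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() not in KNOWN_BAD_SHA256
```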
Attempts to build systems that detect and warn users about phishing Web sites have fallen short of being useful, partially due to problems with users noticing the alerts from such systems (Dhamija & Tygar, 2005). Notification of risk is a critical area in user-security interaction, and we will address it in more detail later. In the meantime, it is surprising what people can overlook when engaged in an activity.

Inattention Blindness

Inattention blindness is a well-studied psychological phenomenon where observers may not perceive details in a scene that are not part of the task at hand. Seemingly un-ignorable features of a scene are often missed, features much more obvious than the subtleties between "http" and "https" or icons. For example, Simons and Chabris (1999) provide a striking demonstration of this in a study where participants asked to watch a video of a basketball game and count the number of passes did not notice a gorilla who walked through the middle of the players, stopped in the center of the scene, turned toward the camera, and beat on its chest.

Turning back to phishing tricks, not only are users prone to fall for lures that are highly similar to sites or e-mail they are familiar with and trust, but attempts to warn users will go unheeded unless those warnings capture and hold users' attention.

Unfortunately, there may be easier ways for users to infect their computers or organization's networks than opening attachments or downloading files. They might install malware physically. Several cases document the dangers of removable storage devices to networks and the inherent curiosity of end users.

CURIOSITY KILLED THE NETWORK: DELIVERING MALWARE THROUGH STORAGE DEVICES

As part of a security audit for a bank in 2006, security consultants seeded the parking lot, smoking areas, and break rooms with USB key drives containing a safe Trojan that called home when the USB drive was inserted into an employee's machine. Of the 20 USB Trojans planted, 15 (75%) were found and used in the bank's personal computers over a three-day period (Stasiukonis, 2007).


In a similar story, a UK-based computer training company handed out 100 CDs in London's financial district and told recipients that if they ran the CDs on their computers, they would be entered into a contest for a free vacation (Kirk, 2006). Contained on the CD was a program which launched a Web browser and reported home. Although all CDs had printed warnings on the front cover instructing users to check with their corporate security policies before running the CD, 75 of the 100 CDs (75%) handed out called home. Included in the transmitted information were IP addresses of several high-profile London companies.

Perhaps the most stunning example of the risk posed by removable storage devices occurred in 2005 with the near theft of over £200 million ($423m) from the world's second largest bank, the Sumitomo Mitsui Bank (Ilet, 2005; Sturgeon, 2005; Betteridge, 2005). According to the British National Hi-Tech Crime Unit, who foiled the plan, hackers accessed the computer systems at the London offices of Sumitomo Mitsui using information gathered from keystroke logging programs, which permitted them to acquire passwords and other sensitive account information.

While neither bank officials nor representatives for the National Hi-Tech Crime Unit fully explained how the keystroke logging software was installed on Sumitomo Mitsui computers, there was much speculation over the use of small USB storage devices. One theory explored by the investigating agency was that the hackers gained physical access to the computers via a company insider or housekeeping staff and were able to insert USB drives into the back of computers, which injected the key logging software.

Given the apparent reliability with which end users will insert found objects into their corporate computers, coercing bank employees or facilities people into the plot would seem an unnecessary step. The thieves could have handed the drives out to unsuspecting employees who were ignorant of any felonious intentions.

Do end users not care about the security of the systems they use? After all, the CDs handed out on the streets of London did have warnings on the cover instructing people to make sure they were in compliance with their corporate security policies before running them. Why is this attack vector so successful? We might as well ask why the Trojan army hauled the Greeks' wooden horse into the city of Troy.

User Factors

Users do not Think They are at Risk

A possible explanation is that these users were unaware of the dangers at hand. More importantly, however, even if the dangers were known, they were probably perceived as very unlikely events. People often believe that they are less vulnerable to risks than others. Most people believe that they are better than average drivers and that they will live beyond the average life expectancy (Slovic, Fischhoff, & Lichtenstein, 1986). This belief that one is more capable than one's peers is known as optimism bias (Dalziel & Job, 1997; Dejoy, 1987). Another mistaken belief is known as the third person effect (Perloff, 1993), where some people believe that they are less susceptible to hazards than other people (Adams, Bochner, & Bilik, 1998). Given these biases, it stands to reason that any computer user has the pre-set belief that they are at less risk of computer vulnerability than others. While people generally do know that there are viruses, hackers, and other computer risks out there, they inherently believe these are less likely to happen to them than to others.

Safety is an Abstract Concept

When evaluating information to make a decision, results that are abstract are less persuasive than results that are concrete (Borgida & Nisbett, 1977).


This is essential to conceptualizing how users perceive security and make decisions. Often the pro-security choice has no visible outcome and there is no visible threat. The reward for being more secure is that nothing bad happens; safety in this situation is an abstract concept. This, by its nature, is difficult for people to evaluate as a gain when mentally comparing costs, benefits, and risks.

Compare the abstract reward (safety) garnered from being more secure against a concrete reward like satisfying curiosity, entering a contest, and so forth, and the outcome does not favor security. This is especially true when a user does not know what his or her level of risk is, or believes he or she is initially at less risk than others. Returning to the principle of satisficing, the user is also more likely to make a quick decision without considering all of the risks, consequences, and options.

Technology Factors

In these cases, there are obvious technology factors which could be addressed. First, end users were allowed to attach portable storage devices to their desktop computers or use portable storage media. It would have been possible to mitigate this risk by preventing users from doing so, either by hardware configuration or by user account access controls.

While possible, it may not always be practical to have desktop computers without USB ports or CD drives, or to deny users access rights to them. Another possibility would have been to automatically scan all removable storage devices or storage media for potential malware and alert the user, or automatically block their use.
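As a concrete illustration of the access-control option, the sketch below disables the Windows USB mass-storage driver by setting its service start value to 4 ("disabled"), which is a documented registry setting; in practice an administrator would push this centrally via Group Policy rather than run a script per machine, and the script assumes administrative rights:

```python
import winreg

# USBSTOR is the Windows USB mass-storage class driver; a Start value
# of 4 ("disabled") prevents newly attached USB drives from mounting.
USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
START_DISABLED = 4

def disable_usb_storage() -> None:
    """Disable USB mass storage (requires administrative rights)."""
    with winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, START_DISABLED)

if __name__ == "__main__":
    disable_usb_storage()
```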
Environmental Factors

Outside of the human-computer loop, there are environmental factors involved as well. One has to wonder what the general security ethos of these organizations was. While employees in these organizations were doubtlessly exposed to security awareness information, and perhaps even training, did they feel they had a personal responsibility? This is something with which all IT organizations struggle.

Diffusion of Responsibility

Diffusion of responsibility is a well-known social behavior that occurs in groups, where individuals tend to neglect responsibility or fail to take action, believing that someone else in the group will (Darley & Latané, 1968). In restaurants, for example, large groups of diners tend to leave less money in tips, as a proportion of the bill, than smaller groups (Freeman, Walker, & Latané, 1975). The reasoning is that in larger groups, more people shortchange the tip believing the difference will be made up by someone else who will over-tip; in smaller groups, the members know their contribution is significant. Related to computer security, it is easy for end users in an organizational setting to believe that security is not their responsibility and to defer it to someone else who must know more about it than they do, must be watching for security problems, and must know what to do when one occurs.

Evaluating Potential Solutions

One solution to the problems described in the stories may be to impose a corporate security policy that users cannot use portable USB storage devices. From a human factors perspective, this is unlikely to be successful for several reasons:

• Users may not respect the policy because they do not understand the risk involved.
• They may not be aware of the policy, or may forget the policy.
• Environmental situations may arise where the users have to use a removable storage device.


• If they try it once and there are no negative consequences, they will likely continue to do it in the future.

A more reliable way to prevent the risk is to take the user out of the equation. It would be possible to prevent users from using removable storage devices, or to scan any such device or media for malware when discovered. This would reduce the risk that users accidentally inject malware into the network.

Those steps might reduce these specific risks, but what about the overarching issues around risk perception and diffusion of responsibility, from which many other risks descend? In these cases, raising user awareness of security issues and making users actively engaged in the process are needed.

Many security administrators and IT professionals responsible for security know the best time to ask for funding for a security project is right after a major virus outbreak or security breach. Incidents heighten everyone's security awareness and create an environment with less user-induced risk for some time, until the panic settles down and the original base level returns. The challenge is to raise users' base level of awareness in an effective and meaningful way without creating an environment of paranoia. This should be a goal of simple and recurring training campaigns.

Systems Approach to Training

Designers of security systems might consider adopting the systems approach to training, which is often considered a standard practice in the field of human factors and ergonomics (Helander, 1997; Mayhorn, Stronge, McLaughlin, & Rogers, 2004).

In the systems approach to training, the characteristics of the person, the environment, and the technology itself are considered through a series of sequential stages: needs assessment, task/person analysis, selection and design of training programs, and evaluation. The initial step of needs assessment determines the content of training materials by exploring whether training is necessary, what skills need to be taught, and the characteristics of those who will benefit most from training. Task and person analyses follow needs assessment and are conducted to determine the functional characteristics of the technology and users.

Specifically, a task analysis defines the step-by-step procedure for operating a device such as a computerized security application and yields a list of requirements and abilities that are essential to effectively operate that device. Of equal importance is the person analysis, which defines the capabilities and limitations of the target of the training, in this case the individuals who will learn to use the security application. From the results of the task and person analyses, the most appropriate design and selection of training options can be used to facilitate learning. Training techniques such as the provision of well-organized written instructions may assist in reconciling the differences between task requirements and personal limitations.

Once a training program is in place, evaluation of that program is necessary to ensure that training is effective. To evaluate a program, measures of successful learning such as retention of information and usability should be examined. If a training program is deemed ineffective, a new needs assessment should be conducted and new training techniques should be considered during an iterative process (design, test, redesign, test, etc.).

COMMUNICATION OF SECURITY RISK

While the previous sections have mainly focused on the attributes of the user and the environment, this section will describe some preliminary efforts to manipulate the properties of the system to effectively communicate the nature of the security risk through warnings.


In general, one of the goals of the human factors discipline is to increase safety (Sanders & McCormick, 1993). One approach to pursuing this goal includes the use of warning systems (see Wogalter, 2006 for a comprehensive review), such as pop-up dialog boxes, to deliver risk communication messages during computing. Recently, researchers in the area have concluded that advances in new and emerging technologies promise to revolutionize these efforts, possibly leading to safer, more secure user behavior (Wogalter & Mayhorn, 2005).

To explore these issues, a pilot study is currently being conducted by the authors of this chapter to explore the decision-making process that users undergo when they encounter security messages. To summarize the procedure of the study, participants were asked to perform a cover task while the experimenter left the room to run an errand. On the participant's computer ran a program to display a simulated security warning 10 minutes into the cover task. The security warning contained a message requesting the user's permission to close an unsecured port. Due to the absence of the experimenter, participants were forced to respond to the warning without any guidance. After a period of time, the experimenter returned and questioned the participant about his or her experience performing the cover task and then about the security dialog. Preliminary results indicate that the majority (approximately 76%) of participants dismiss the warning quickly without reading the security-related message or seeking further information. Even more striking may be that, in several cases, participants did not recall seeing any security message or dialog at all. This may have been due to inattention blindness.

User Factors

Within the literature, a variety of models have been presented to describe how users generally interact with warnings that they encounter (e.g., Lehto & Miller, 1986; Edworthy & Adams, 1996; Rogers, Lamson, & Rousseau, 2000; Wogalter, Dejoy, & Laughery, 1999). Most models agree that secure behavior is heavily reliant on the capacity of the human perceptual and cognitive processes to first notice a warning, then comprehend the message content, and finally make the decision of whether or not to comply with the security system instructions.

Base Rate and Response Bias

Security message dialogs are displayed less often than all the other message dialogs issued by the operating system and applications taken together. This means that non-security message dialogs have a higher base rate of occurrence than security message dialogs. Because non-security dialogs have a higher base rate, users develop a response bias to the general category of message dialogs. That is, users learn to accept the default decision, or always click "OK" when they receive any kind of message dialog, without paying much attention to what it says. In this way, users come to automatically ignore security message dialogs when they look similar to other message dialogs. Such dismissive behavior might be due to habituation (Wogalter & Mayhorn, 2005). When an individual is repeatedly exposed to a given stimulus (e.g., a warning), habituation occurs such that less attention is given to that stimulus during subsequent exposures.
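The arithmetic behind this response bias is easy to make concrete. The figures below are assumptions chosen for illustration, not measurements: if only one dialog in 200 is security-relevant, blindly clicking "OK" is harmless 99.5% of the time, so the habit is reinforced on almost every exposure:

```python
# Assumed, illustrative figures: security dialogs are rare events
# among all the dialogs a user sees.
security_rate = 1 / 200    # assumed share of dialogs that actually matter
dialogs_per_week = 100     # assumed volume of dialogs a user faces

p_harmless = 1 - security_rate
print(f"'Always click OK' is harmless {p_harmless:.1%} of the time")
print(f"per week: ~{dialogs_per_week * p_harmless:.0f} reinforcements, "
      f"~{dialogs_per_week * security_rate:.1f} chances of a real mistake")
```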
Technology Factors

As mentioned before, effective warnings capture and maintain the attention of the user through the process of noticing. Thus, the most basic way a warning such as a security message dialog can fail a user is to be non-distinctive, in the sense that it blends into the background noise and does not communicate a sense of importance or urgency. Such design failures often lead users to ignore and dismiss the warning before reading the message (Figure 2).


Figure 2. Examples of security dialogs

The high rate of occurrence of all system message dialogs is, itself, a technology factor contributing to the problem. In our pilot study with security dialogs, one participant cited his experience with an anti-virus program that continually raised a message dialog offering to renew his update service as the reason he quickly dismissed the experimental warning. He had been conditioned to disregard the warning and, in his mind, the warnings all blurred together. The constant alerting of messages which carry no real perceived value creates a noisy environment to which users adjust, sometimes with drastic measures. This behavior has been observed in many human factors domains. Fighter pilots, for example, have been known to actually disable cockpit alarms with high false alarm rates by pulling out the electrical fuses, and train engineers have been known to tape over auditory alarms to muffle their sound (Sorkin, 1988).

Environmental Factors

Environmental factors at work within this pilot study include the dual nature of the task itself. Users were initially assigned a primary task to complete within a specific time frame. When presented with the secondary task of responding to the warning, users conserved time by mostly ignoring the warning. As previous research within the naturalistic decision-making literature suggests, time pressure frequently decreases the quality of decisions (Hammond, 2000).

Another environmental factor at work during this pilot study was the use of a computer that was not the personal property of the user. Because the experiment was conducted in the university setting, many users may have believed that they did not have the authority to make security decisions regarding a university-owned machine. In this case, as with the examples of malware installed via USB devices or free CDs, users may have felt a diffusion of responsibility for any negative consequences to the computer.

Evaluating Potential Solutions

As previously described, security functions should be designed such that end users have few alternatives except to make the secure decision. To address potential solutions that might increase behavioural compliance with the security information relayed to users via message dialogs, a number of technological changes might be put into place to attract attention and optimize user comprehension of the security situation via message content.


For instance, the ability to dismiss a warning is a system design flaw that can be easily remedied. Previous research indicates that interactive warnings that cannot be dismissed but require a forced choice often result in warning compliance (Duffy, Kalsher, & Wogalter, 1995). Thus, a message dialog that can be minimized or dragged to the periphery of the display is much more likely to be ignored than a dialog box that requires user feedback in the form of an explicit response.
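As a sketch of what such a forced-choice dialog might look like (illustrative only, using Python's standard tkinter toolkit; the wording, function name, and button labels are our own assumptions, not taken from the studies cited), the window below cannot be closed from the title bar, so the user must commit to one of the explicit options:

```python
import tkinter as tk

def forced_choice_warning(message: str) -> bool:
    """Show a warning the user cannot dismiss; returns True to block."""
    result = {"block": True}  # fail-safe default
    root = tk.Tk()
    root.title("Security Warning")
    root.protocol("WM_DELETE_WINDOW", lambda: None)  # disable the close box
    root.attributes("-topmost", True)                # keep the dialog in view
    tk.Label(root, text=message, padx=20, pady=15).pack()

    def choose(block: bool) -> None:
        result["block"] = block
        root.destroy()

    tk.Button(root, text="Block the connection",
              command=lambda: choose(True)).pack(side=tk.LEFT, padx=10, pady=10)
    tk.Button(root, text="Allow this once",
              command=lambda: choose(False)).pack(side=tk.RIGHT, padx=10, pady=10)
    root.mainloop()
    return result["block"]

if __name__ == "__main__":
    print("blocked:", forced_choice_warning("An unsecured port is open."))
```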
Other attempts to attract the user's attention might focus on modifications to the message dialog itself. Previous evidence from the warnings literature suggests that personalized warning signs incorporating the person's name led to higher rates of compliance than non-personalized warning signs (Wogalter, Racicot, Kalsher, & Simpson, 1994).

Lessons from other areas of human factors might be instrumental in finding techniques to capture the attention of users as well. For instance, the use of multimodal alarms has been successful during medical monitoring procedures, where anesthesiologists are immediately alerted to changes in critical patient vital signs by auditory alarms that supplement integrated visual displays (Sanderson, 2006). Such non-visual modalities of communication within the security domain might manifest themselves in the form of vibratory computer mice or auditory "earcons."

To optimize the impact of the message content of a warning, the ability of the user to comprehend the message must be tested. Thus, the designers of security message dialogs should test the readability of their warnings to ensure that risk communications are comprehensible to the end user. For example, the use of technological jargon must be avoided so that users are not predisposed to ignore the messages (Mayhorn, Rogers, & Fisk, 2004). Because the human factors research literature indicates that warnings may not be understood by members of the at-risk population at levels expected by the designers (Mayhorn, Wogalter, & Bell, 2004; Wolff & Wogalter, 1998), it is important that comprehension testing be conducted to determine the effectiveness of proposed warnings before they are implemented for use.

Published standards have provided guidance for warning designers by quantifying what level of comprehension constitutes acceptable message content. The American National Standards Institute (ANSI Z535.3, 2002) requires that at least 85% of the answers from a sample of 50 or more people should correctly identify the message content being communicated. Furthermore, the sample should generate no more than 5% critical confusions, which are defined as answers that are opposite to the intended concept, or wrong answers that lead to behavior resulting in adverse consequences (ANSI, 2002).
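A small calculation shows how these two thresholds combine in practice (the respondent counts below are invented for illustration):

```python
# ANSI Z535.3-style acceptance check for a candidate warning message.
def acceptable(n_respondents: int, n_correct: int, n_critical: int) -> bool:
    """At least 85% correct, and no more than 5% critical confusions."""
    return (n_respondents >= 50
            and n_correct / n_respondents >= 0.85
            and n_critical / n_respondents <= 0.05)

# With 50 respondents, 43 correct answers (86%) and 2 critical
# confusions (4%) would pass; 42 correct (84%) would fail.
print(acceptable(50, 43, 2))  # True
print(acceptable(50, 42, 2))  # False
```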
DISASTERS WAITING TO HAPPEN

To this point in the discussion, the end user's interactions with the technology of the system have been examined exclusively. Whether the user will make secure decisions when faced with security message dialogs, or whether they will respond to phishing e-mails by providing confidential personal information, is important; yet the actions of others within the broader system must also be considered to fully describe the sources of error. According to Reason (2002), the errors of the end user can be classified as active errors, because the resulting consequences are directly tied to the behavior of the end user. For instance, the UK incident reported by Kirk (2006) illustrates that the active error occurred when users in the London financial district ignored the printed warning regarding corporate security policies and inserted the CDs.

To fully analyze such a situation, Reason (2002) made a distinction between these active errors and latent errors, which result from the activities of other people, such as interface designers, managers, and high-level corporate decision makers, who are removed in time and space from the interface where the user decision is made.


The consequences of such latent errors may lie dormant in the system for a long time before they combine with other factors, such as end user action, to initiate a system security breach. For instance, high-level corporate decision makers may have contributed to the vulnerability of the system, and the increased likelihood of a security incident, by adopting a rather lax security policy that allowed employees to use external storage devices from unknown sources.

Oops, Lost the Hard Drive

True story:

"Company Y" wants to be a good corporate citizen when it de-commissions hardware during its regular hardware refresh cycle. According to federal regulations in its industry, the company must completely wipe all hard drives from de-commissioned systems (laptops, desktops, servers, portable drives, devices, etc.) in a very specific way. These hard drives must be wiped using a very specific type of technology and must be wiped a very specific number of times before they are considered to be safe. There is no automated way to do this process, so company Y's solution is to pull all the hard drives from anything about to go into the community, and store them in a room until someone has the time to complete the wiping process. To date, the process has not been completed and the room is filled with thousands of hard drives waiting to be processed.

This is a security breach waiting to happen. Consider the following:

In the summer of 2006, a stolen hard drive belonging to a Veterans Administration employee made headlines as the second-largest data breach in U.S. history and the largest breach of social security numbers ever (Mark, 2006). The laptop was the property of a data analyst who took the external hard drive to his home, where it was stolen. The external hard drive contained personal information on roughly 26 million veterans, including names, social security numbers, birth dates, and addresses. Although the hard drive was recovered and no sensitive data was compromised, the Veterans Administration faced multiple class action lawsuits as a result of negligence.

In 2007, Empire Blue Cross and Blue Shield of New York blamed human error when they accidentally lost a CD of customer data en route to a third party research company (Washkuch, 2007). The CD contained names, social security numbers, and the medical histories of roughly 75,000 insurance customers.

We can view these latter two cases with respect to our human-technology-environment model and see that the users directly involved probably considered the potential security problems as very unlikely events. At the technology level, there were simple solutions that could have been imposed to prevent the mishaps, and there would certainly have been environmental factors at play, such as a need to work from home or a rush to deliver the CD of data. However, these cases also suggest deeper-level and latent problems in the environmental factors around the organizations' corporate security policies or federal regulations.

Can I have your Social Security Number Please?

At the end of 2005, the financial services company H&R Block inadvertently embedded social security numbers within tracking codes printed on packaging that was used to mail customers free copies of its tax preparation software (Vijayan, 2006). In the beginning of 2006, Blue Cross and Blue Shield of North Carolina made the same mistake and exposed social security numbers belonging to more than 600 of the insurance company's customers by printing them along with the street address on a mailer sent to the customers (Vijayan, 2006).


A month after the Blue Cross Blue Shield incident, the Boston Globe accidentally exposed the names, credit card numbers, and bank account information of more than 200,000 subscribers when sensitive documents were reused to print routing labels attached to bundles of newspapers (Vijayan, 2006).

The companies involved in these incidents all claimed human error as the responsible culprit. However, the reality is more complex and harder to fix. Guaranteed, the users involved will not make the same mistakes again, and there may be technology-level changes, but there will likely be other incidents in the future. Again, these are cases where many people were involved who had opportunities to prevent the errors before they happened. These are not cases of computer network or database compromises, but system failures where latent errors combined in a way that resulted in the breach.

Evaluating Potential Solutions

There are numerous techniques that can be used to address latent errors within a system (see Reason, 2002 for a comprehensive review). One such method that may prove useful to designers of security systems is the technique for human error rate prediction (THERP). In THERP, vulnerabilities within human-machine systems could be caused by human error alone or in conjunction with equipment functioning or operational procedures and practices (Swain & Guttmann, 1983).

THERP employs four steps to mitigate the likelihood of human error (a worked sketch follows below):

1. Identify system functions that can be influenced by human error.
2. Perform a detailed task analysis to obtain a list of human operations.
3. Estimate error probabilities for each item on the list via expert judgment or available data.
4. Estimate the effects of human failure on the system, then alter the characteristics of the system to minimize human error. Iteratively recalculate the probability of human error to determine the utility of each system modification.

Human reliability assessment methods such as THERP have been used in many realms to help design and assess human-machine systems by identifying opportunities for user error and steps where error detection and recovery are critical.
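As an illustrative sketch of step 4 (with invented human error probabilities, not values from the THERP handbook tables), the calculation below compares overall task reliability before and after a design change that automates away one error-prone manual step:

```python
# Invented human error probabilities (HEPs) for a three-step task,
# e.g., labelling, packing, and mailing customer correspondence.
heps_before = [0.003, 0.01, 0.005]   # assumed values for illustration
heps_after  = [0.003, 0.005]         # the error-prone step is automated away

def task_failure_probability(heps):
    """P(at least one human error), assuming independent steps."""
    p_all_ok = 1.0
    for p in heps:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

print(f"before: {task_failure_probability(heps_before):.4f}")  # ~0.0179
print(f"after:  {task_failure_probability(heps_after):.4f}")   # ~0.0080
```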
CONCLUSION

Users are generally considered to be the weakest link when it comes to computer security. There are many stories of user actions leading to security incidents beyond the small number discussed here. While the user problem in computer security is well known, less has been offered to explain why.

In this chapter, we have attempted to call attention to the cognitive and human factors issues that influence an end user's behavior when interacting with a system or making decisions with security consequences. It is not possible to eliminate users from the control loop in computer security and, as a result, they will always provide a source of errors in the system. The most elegant and intuitively designed interface does not improve security if users ignore warnings, choose poor settings, or unintentionally subvert corporate policies. The challenge in developing robust security systems is to take this into account and minimize the potential for error on the user end through intelligent design on the technology end.

Human factors research methodologies have provided many contributions towards the reduction of user error and the promotion of system safety in high-risk domains ranging from medicine to nuclear power. We feel there is much potential for these approaches to be incorporated into the design and evaluation of user-security systems.


By conceptualizing the system as an inter-related mechanism that relies on the interactions between human, technology, and environmental factors, security professionals might be able to develop interventions that work to strengthen the weak links.

Acknowledgment

We would like to extend a special thank you to Leslie Johnson for the "true stories" based on her experience as a user experience researcher working in the human-security interaction realm.

References

Adams, A., Bochner, S., & Bilik, L. (1998). The effectiveness of warning signs in hazardous work places: Cognitive and social determinants. Applied Ergonomics, 29, 247-254.

American National Standards Institute (ANSI). (2002). Criteria for safety symbols (Z535.3-Revised). Washington, DC: National Electrical Manufacturers Association.

America Online and the National Cyber Security Alliance (2004). AOL/NCSA online safety study. http://www.staysafeonline.info/news/safety_study_v04.pdf

Baron, A. (1965). Delayed punishment of a runway response. Journal of Comparative and Physiological Psychology, 60, 131-134.

Betteridge, I. (2005). Police foil $420 million keylogger scam. eWeek.com. http://www.eweek.com/article2/0,1895,1777706,00.asp

Bogner, M. S. (2004). Misadventures in health care: Inside stories. Mahwah, NJ: Lawrence Erlbaum Associates.

Borgida, E., & Nisbett, R. E. (1977). The differential impact of abstract vs. concrete information on decisions. Journal of Applied Social Psychology, 7, 258-271.

Dalziel, J. R., & Job, R. F. S. (1997). Motor vehicle accidents, fatigue and optimism bias in taxi drivers. Accident Analysis & Prevention, 29, 489-494.

Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8, 377-383.

Dejoy, D. M. (1987). The optimism bias and traffic safety. In Proceedings of the Human Factors and Ergonomics Society (Vol. 31, pp. 756-759).

Dhamija, R., & Tygar, J. D. (2005). The battle against phishing: Dynamic security skins. In Proceedings of SOUPS (pp. 77-88).

Duffy, R. R., Kalsher, M. J., & Wogalter, M. S. (1995). Increased effectiveness of an interactive warning in a realistic incidental product-use situation. International Journal of Industrial Ergonomics, 15, 169-166.

Edworthy, J., & Adams, A. (1996). Warning design: A research prospective. London: Taylor and Francis.

Ferguson, A. J. (2005). Fostering e-mail security awareness: The West Point Carronade. Educause Quarterly, 28, 54-57.

Freeman, S., Walker, M. R., & Latané, B. (1975). Diffusion of responsibility and restaurant tipping: Cheaper by the bunch. Personality and Social Psychology Bulletin, 1(4), 584-587.

Gaudin, S. (2007). Human error more dangerous than hackers. TechWeb. http://www.techweb.com/showArticle.jhtml?articleID=197801676

Goldstein, W. M., & Hogarth, R. M. (1997). Research on judgment and decision-making: Currents, connections, and controversies. Cambridge, UK: Cambridge University Press.

Gordon, L. A., Loeb, M. P., Lucyshyn, W., & Richardson, R. (2006). 2006 CSI/FBI computer crime and security survey. Baltimore: Computer Security Institute.


Grice, G. R. (1948). The relation of secondary reinforcement to delayed reward in visual discrimination learning. Journal of Experimental Psychology, 38, 1-16.

Hammond, K. R. (2000). Judgments under stress. New York: Oxford University Press.

Hardee, J. B., West, R., & Mayhorn, C. B. (2006). To download or not to download: An examination of computer security decision-making. Association of Computing Machinery: Interactions, 13(3), 32-37.

Helander, M. (1997). The human factors profession. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (2nd ed., pp. 3-16). New York: Wiley.

Ilet, D. (2005). Inside the biggest bank raid that never was. Zdnet. http://news.zdnet.co.uk/security/0,1000000189,39191956,00.htm

Jackson, J. W., Ferguson, A. J., & Cobb, M. J. (2005, October 12-22). Building a university-wide automated information assurance awareness exercise. In Proceedings of the 35th ASEE/IEEE Frontiers in Education Conference, Indianapolis, IN (pp. 7-11).

Kirk, J. (2006). Free CDs highlight security weaknesses. PC World. http://www.pcworld.idg.com.au/index.php/id;2055135135;fp;2;fpid;1

Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: The MIT Press.

Lehto, M. R., & Miller, J. M. (1986). Warnings, volume 1: Fundamentals, design, and evaluation methodologies. Ann Arbor, MI: Fuller Technical.

Linden, J. V. (2004). The trouble with blood is it all looks the same: Transfusion errors. In M. S. Bogner (Ed.), Misadventures in health care: Inside stories (pp. 13-25). Mahwah, NJ: Lawrence Erlbaum Associates.

Mark, R. (2006). Teens charged in VA laptop theft. Internetnews. http://www.internetnews.com/bus-news/article.php/3624986

Mayhorn, C. B., Lanzolla, V. R., Wogalter, M. S., & Watson, A. M. (2005). Personal digital assistants (PDAs) as medication reminding tools: Exploring age differences in usability. Gerontechnology, 4(3), 128-140.

Mayhorn, C. B., Rogers, W. A., & Fisk, A. D. (2004). Designing technology based on cognitive aging principles. In S. Kwon & D. C. Burdick (Eds.), Gerotechnology: Research and practice in technology and aging (pp. 42-53). New York: Springer Publishing.

Mayhorn, C. B., Stronge, A. J., McLaughlin, A. C., & Rogers, W. R. (2004). Older adults, computer training, and the systems approach: A formula for success. Educational Gerontology, 30(3), 185-203.

Mayhorn, C. B., Wogalter, M. S., & Bell, J. L. (2004). Are we ready? Misunderstanding homeland security safety symbols. Ergonomics in Design, 12(4), 6-14.

Park, D. C., & Skurnik, I. (2004). Aging, cognition, and patient errors in following medical instructions. In M. S. Bogner (Ed.), Misadventures in health care: Inside stories (pp. 165-181). Mahwah, NJ: Lawrence Erlbaum Associates.

Perloff, R. (1993). Third person effect research 1983-1992: A review and synthesis. International Journal of Public Opinion Research, 5, 167-184.

Reason, J. (2002). Human error. Cambridge, UK: Cambridge University Press.

Rogers, W. A., Lamson, N., & Rousseau, G. K. (2000). Warning research: An integrative perspective. Human Factors, 42(1), 102-139.


Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and design (7th ed.). New York: McGraw-Hill Inc.

Sanderson, P. (2006). The multimodal world of medical monitoring displays. Applied Ergonomics, 37, 501-512.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9), 1059-1074.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1986). Facts versus fears: Understanding perceived risks. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 463-489). New York: Cambridge University Press.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Stanford, J., Tauber, E. R., Fogg, B. J., & Marable, L. (2002). Experts vs. online consumers: A comparative credibility study of health and finance websites. Consumer WebWatch. www.consumerwebwatch.org

Stasiukonis, S. (2007). Social engineering, the USB way. Dark Reading. http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1

Sturgeon, W. (2005). Foiled £220m heist highlights spyware threat. Zdnet. http://news.zdnet.co.uk/security/0,1000000189,39191677,00.htm

Swain, A. D., & Guttmann, H. E. (1983). Handbook of human reliability analysis with emphasis on nuclear power plant applications (NUREG/CR-1278). Albuquerque, NM: Sandia National Laboratories.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

Vijayan, J. (2006). "Human error" exposes patients' social security numbers. Computerworld. http://www.health-itworld.com/newsletters/2006/02/14/18209?page:int=-1

Washkuch, F. (2007). Newspaper: Medical information of 75,000 Empire Blue Cross members lost. SC Magazine. http://scmagazine.com/us/news/article/643807/newspaper-medical-information-75000-empire-blue-cross-members-lost/

Wogalter, M. S. (2006). Handbook of warnings. Mahwah, NJ: Lawrence Erlbaum Associates.

Wogalter, M. S., Dejoy, D. M., & Laughery, K. R. (1999). Warnings and risk communication. London: Taylor and Francis.

Wogalter, M. S., & Mayhorn, C. B. (2005). Providing cognitive support with technology-based warning systems. Ergonomics, 48(5), 522-533.

Wogalter, M. S., & Mayhorn, C. B. (2006). Is that information from a credible source? On discriminating Internet domain names. In Proceedings of the 16th World Congress of the International Ergonomics Association. Maastricht, The Netherlands.

Wogalter, M. S., Racicot, B. M., Kalsher, M. J., & Simpson, S. N. (1994). The role of perceived relevance in behavioral compliance in personalized warning signs. International Journal of Industrial Ergonomics, 14, 233-242.

Wolff, J. S., & Wogalter, M. S. (1998). Comprehension of pictorial symbols: Effects of context and test method. Human Factors, 40, 173-186.


Chapter V
Trusting Computers Through Trusting Humans:
Software Verification in a Safety-Critical Information Society

Alison Adam
University of Salford, UK

Paul Spedding
University of Salford, UK

Abstract

This chapter considers the question of how we may trust automatically generated program code. The
code walkthroughs and inspections of software engineering mimic the ways that mathematicians go
about assuring themselves that a mathematical proof is true. Mathematicians have difficulty accepting a
computer generated proof because they cannot go through the social processes of trusting its construc-
tion. Similarly, those involved in accepting a proof of a computer system or computer generated code
cannot go through their traditional processes of trust. The process of software verification is bound
up in software quality assurance procedures, which are themselves subject to commercial pressures.
Quality standards, including military standards, have procedures for human trust designed into them.
An action research case study of an avionics system within a military aircraft company illustrates these
points, where the software quality assurance (SQA) procedures were incommensurable with the use of
automatically generated code.


Introduction

They have computers, and they may have other weapons of mass destruction. (Janet Reno, former US Attorney General)

In this chapter our aim is to develop a theoretical framework with which to analyse a case study where one of the authors was involved, acting as an action researcher in the quality assurance procedures of a safety-critical system. This involved the production of software for aeroplane flight systems. An interesting tension arose between the automatically generated code of the software system (i.e., 'auto-code'—produced automatically by a computer, using CASE [Computer Aided Software Engineering] tools, from a high-level design) and the requirement of the quality assurance process, which had built into it the requirement for human understanding and trust of the code produced.

The developers of the system in the case study designed it around auto-code: computer generated software, free from 'human' error, although not proved correct in the mathematical sense, and cheaper and quicker to produce than traditional program code. They looked to means of verifying the correctness of their system through standard software quality assurance (SQA) procedures. However, ultimately, they were unable to reconcile their verification procedures with automatically generated code. Some of the reason for this was that trust in human verification was built into (or inscribed into [Akrich, 1992]) the standards and quality assurance procedures which they were obliged to follow in building the system. Despite their formally couched descriptions, the standards and verification procedures were completely reliant on human verification at every step. However, these 'human trust' procedures were incompatible with the automated production of software in ways we show below. The end result was not failure in the traditional sense but a failure to resolve incommensurable procedures: one set relying on human trust, one set on computer trust.

Our research question is therefore: How may we understand what happens when software designers are asked to trust the design of a system, based on automatically generated program code, when the SQA procedures and military standards to which they must adhere demand walkthroughs and code inspections which are impossible to achieve with auto-code?

The theoretical framework we use to form our analysis of the case study is drawn from the links we make between the social nature of mathematical proof, the need to achieve trust in system verification, the ways in which we achieve trust in the online world, and the methods of software engineering, including the software quality movement and the related, highly influential domain of military standards.

In the following section we briefly outline the social nature of mathematical proof. The next section discusses the debate over system verification, which encapsulates many of the ideas of mathematical proof and how such proofs can be trusted by other mathematicians. The chapter proceeds to consider 'computer mediated' trust, briefly detailing how trust has been reified and represented in computer systems to date, mainly in relation to the commercial interests of e-commerce and information security. Trust is particularly pertinent in the world of safety-critical systems, where failure is not just inconvenient and financially damaging (although commercial pressures are still evident here) but where lives can be lost. The model of trust criticised by e-commerce critics is more similar to the type of trust we describe in relation to safety-critical systems than one might, at first, expect. Understandably, we would like to put faith in a system which has been mathematically proved to be correct. However, computer generated proofs, proofs about the correctness of computer software, and automatically generated code are not necessarily understandable or amenable to inspection by people, even by experts. The question then arises of whether we can bring ourselves to trust computer generated proofs or code, when even a competent mathematician, logician, or expert programmer cannot readily understand them.


Following this, we describe the evolution of software development standards and the SQA movement. We argue that the development of quality assurance discourse involves processes of designing human ways of trusting mathematical evidence into standardisation and SQA. Military standards are an important part of the SQA story, having consequences far beyond the military arena. Standards are political devices with particular views of work processes inscribed (Akrich, 1992) in their design. We note the way that military standards, historically, moved towards formal verification procedures, only to move back to rely more on 'human' forms of verification, such as code walkthroughs and inspections, in the later 1990s. The story is shot through with a tension between finding ways to trust the production of information systems and finding ways to control them. Formal methods, based on mathematical proof, offer the promise of control, but only if we can bring ourselves to trust a proof generated by a machine rather than a proof constructed by another person. We present the background to the case study in terms of a description of the complex 'post cold war' military and commercial environment. This is followed by a description of the action research methodology employed in the project, an outline of the case study, and an analysis of the case study findings in terms of our theoretical framework. In the conclusion we briefly note that mathematicians and others are gradually finding ways of trusting computers.

The Social Nature of Mathematical Proof

At first sight, the concept of mathematical proof appears to be relatively simple. The idea of a logical and rigorous series of steps, leading from one or more starting positions (previous theorems or axioms) to the final conclusion of the theorem, seems to be the basis of mathematics. The concept of mathematical proof leading inexorably to true and incontrovertible truths about the world is very compelling. It is not surprising that we would like to apply the apparent certainty and exactness of mathematical approaches to computer programming. However, if we consider briefly how agreement on mathematical proof and scientific truth is achieved by communities of mathematicians, then the social and cultural dimension of proof, as an agreement amongst trusted expert witnesses, reveals itself.

With the epistemological and professional success of mathematical proof, many of the cultural processes which go into making a proof true sink from consciousness and are only rendered visible in times of dispute; for example, as in claims to the proof of Kepler's conjecture or Fermat's last theorem (Davies, 2006; Kuhn, 1962; Singh, 1997). Only on the margins, then, do we call into question our ability to trust these people, when a mathematical proof cannot be agreed to be true by an expert community of mathematicians, as sometimes happens.

The apparently pure and abstract nature of mathematical proof fairly quickly breaks down when we inspect it more closely. In particular, when there is disagreement about a proof, the nature of proof is revealed as a social and cultural phenomenon: the matter of persuading and convincing colleagues. DeMillo, Lipton, and Perlis (1977, p. 208) wrote:

Mathematicians talk to each other. They give symposium and colloquium talks which attempt to convince doubting (sometimes hostile) audiences of their arguments, they burst into each others' offices with news of insights for current research, and they scribble on napkins in university cafeterias and expensive restaurants. All for the sake of convincing other mathematicians. The key is that other mathematicians are inclined to listen!


This traditional approach towards mathematical proof, which could be described as one of persuasive rigorous argument between mathematicians leading to trust, is not the only way to address the idea of proof. A quite different approach appeared in the 1950s; it was based on the work on logic developed by Bertrand Russell and others in the 1930s and used the newly invented electronic computer. This new logic-based approach was not dependent on the computer, but the computer's speed and accuracy had a major impact on its application to the proof of theorems, replacing the persuasive rational argument of competent mathematicians with a formal approach which sees any mathematical proof as a number of steps from initial axioms (using predicate logic) to the final proof statement (based purely on logical inference), without the requirement of a human being.
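The flavour of such a proof can be seen in a tiny example written for a modern proof assistant (Lean, used here purely as an illustration; it is a present-day descendant of the logic-based systems described above). Every step is checked syntactically, from axioms and inference rules alone, with no appeal to a reader's intuition:

```lean
-- A fully formal proof: the checker accepts it by verifying each
-- inference step mechanically, not by being persuaded.
theorem and_comm' (p q : Prop) : p ∧ q → q ∧ p := by
  intro h                  -- assume p ∧ q
  exact ⟨h.right, h.left⟩  -- conclude q ∧ p by recombining the parts
```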
Many proofs can be completed by either method. For instance, many persuasive rigorous argument proofs can be converted to formal proofs (MacKenzie, 2004). It should be emphasised, however, that there is a real difference between the two types of proof. We are not simply talking about a machine taking on the role of a competent mathematician. Some proofs which are readily accepted by mathematicians rely on arguments of symmetry and equivalence, analogies, and leaps of imagination, which humans are very good at understanding but which a formal logic approach cannot replicate. Symmetry and analogy arguments of this type cannot be established by formal methods based on logical progression because symmetry relies on understanding semantics and cannot be gleaned from the syntax of a proof.

Whereas the persuasive rigorous argument, the ‘human’ approach, has been used for thousands of years, the formal or ‘computer generated’ approach has been in use for only about half a century. Clearly, the two methods are not treated in the same way by the expert community of mathematicians. With a rigorous argument type of proof, although one may expend much energy convincing one’s colleagues of the validity of the proof, the potential for coming to agreement or trust of the proof is there. Essentially, in trusting that a mathematical proof is correct, mathematicians are demonstrating their trust in other competent mathematicians. However, expert mathematicians clearly have trouble bringing themselves to trust computer proofs, for good reason, as a computer cannot explain the steps in its reasoning (Chang, 2004).

Computer System Verification: Trust and the Social

The preceding section contrasted the use of computer technology in a claimed proof: the formal method and the human ‘rigorous argument’ approach to proof. Although this is not the same thing as the proof or verification of a computer system itself, in other words the formal, computer generated proof that the computer system matches the specification, the question of whether we can trust the computer is exactly the same.

The idea of proof or verification of a program is quite different from simply testing the program. Typically, a large suite of programs might have thousands or millions of possible inputs, and so could be in many millions or even billions of states. Exhaustive testing cannot be possible. If a computer system is to be used in the well-funded and high-profile military field to control a space craft, aeroplane, or a nuclear power station, it is highly desirable if the system can be actually proved to be correct, secure, and reliable. Since testing, although vital, can never prove the system’s correctness, more mathematical methods involving the notion of proof became of great interest in the late 1960s and have remained so ever since.
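A rough, back-of-envelope calculation makes the point; the figures below are ours, chosen only for illustration, not taken from the chapter:

```python
# Why exhaustive testing is infeasible: a routine taking just two
# 32-bit integers already has 2**64 distinct input pairs.
inputs = 2 ** 64
tests_per_second = 10 ** 9            # an optimistic, hypothetical test rig
seconds_per_year = 60 * 60 * 24 * 365

print(f"{inputs / (tests_per_second * seconds_per_year):.0f} years")  # ~585
```

Even this tiny interface would take centuries to test exhaustively, and real avionics suites are vastly larger.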
In fact the history of the verification of computer systems echoes that of mathematical proof, with basically the same two approaches: those who support the rigour of formal methods and those


who believe that the purely formal, mechanised proof lacks the crucial element of human understanding (Tierney, 1993). In a paper to an ACM Symposium, DeMillo et al. (1977) argued that the two types of proof were completely different in nature, and that only the persuasive rigorous argument proof with its strong social aspect will ultimately be believable and capable of earning trust.

Computer-Mediated Trust

In ethical terms, trust is a complex phenomenon and is essentially a human relationship (Nissenbaum, 1999; Stahl, 2006). We think of trust in terms of a trustor who does the trusting and a trustee who is trusted. The trustee does not of course have to be human, but Nissenbaum (1999) suggests that the trustee should be a being to whom we ascribe human qualities such as intentions and reasons, what might be termed an ‘agent.’ Trust allows meaningful relationships and a vast range of intuitions to work. Nissenbaum (1999) argues that when we are guaranteed safety trust is not needed: ‘What we have is certainty, security, safety – not trust. The evidence, the signs, the cues and clues that ground the formation of trust must always fall short of certainty; trust is an attitude without guarantees, without a complete warrant.’ Intrusive regulation and surveillance are attempts at control and bad for building trust.

This generalised definition of trust clearly maps onto our description of mathematicians trusting proofs. They may not have complete certainty over the correctness of a mathematical proof, but they have good reason to trust a competent member of the community of expert mathematicians. Therefore they can trust the proof supplied by such a person.

Understandably, there has been much interest in trust in the online world, both in terms of online security and trust in e-commerce transactions. Nissenbaum (1999) suggests that excessive safety controls, say in e-commerce, may encourage participation but they limit experience: ‘Through security we may create a safer world, inhospitable to trust not because there is distrust, but because trust cannot be nourished in environments where risk and vulnerability are, for practical purposes, eradicated.’

Stahl’s (2006) take on trust in e-commerce shows another example of the intangible human nature of trust, which has become reified and commodified, so that it can be measured and exchanged in machine transactions. Like Nissenbaum (1999), Stahl points to the way that a trustor does not have complete control over a trustee; vulnerability and uncertainty must be accepted in a trusting relationship. This of course includes business transactions, and is especially important in e-commerce as many of the traditional ways of developing trust are absent from online transactions. Trust becomes a way of generating profit; small wonder that trust, including technological ways of creating trust and maintaining it, has been of so much interest in e-commerce. In the world of e-commerce research, trust loses its relational aspects and becomes a form of social control. ‘If trust is limited to calculations of utility maximisation in commercial exchange, then most of the moral underpinnings of the mechanisms of trust become redundant. Trust changes its nature and loses the binding moral quality that it has in face-to-face interaction.’ (Stahl, 2006, p. 31)

Although, on the face of it, Nissenbaum’s and Stahl’s arguments on the problems of online trust in e-commerce are not the same as the issue of trust described in the body of this chapter, there are important congruencies which are very directly applicable to our characterisation of trust. Whether it is a human trusting another human or an expert mathematician trusting another expert mathematician to supply an accurate proof, the same relationship between trustor and trustee obtains.

For Nissenbaum and Stahl, the issue is what happens to trust when it is commodified within an


online relationship. In other words, what happens when the human-trusting-human relationship is mediated by technology? In this chapter we also consider what happens when the human-trusting-human relationship—in terms of a human trusting another human’s mathematical proof, or computer program—is replaced by a human having to trust a machine. Of course, in this trustor-trustee relationship, the trustee, that is, the machine, cannot be understood in the way that another person can be.

The pressure to create computer-mediated trust is completely bound up with commercial pressures. The maximisation of profit drives the reification of trust in e-commerce. Similarly in the world of military avionics we describe, it is the commercial pressure of building systems more cheaply and faster which provides the impetus to turn over proofs, testing of programs, and automatic generation of code to a machine. A third aspect of similarity between Stahl’s and Nissenbaum’s view of computer-mediated trust and ours relates to the tension between trust and control. This is clearly present in the debate over trust in e-commerce. But it is also present in software quality discourse as we discuss below.

In the following section we briefly discuss some of the ways in which human trust has traditionally been built into procedures designed to verify program correctness, and how this can be seen to mirror an ideal group of mathematicians agreeing upon a mathematical proof.

Building Trust Into a Computer System

We argue that, historically, much of the development of the software engineering discipline can be understood in terms of the development of procedures, through which we can convince ourselves to trust, and control, the development of information systems and the production of software. For instance, Myers’ (1979) classic book on software testing explores the topic of human testing in detail, justifying methods such as formal code inspections and code walkthroughs. The differences between the two methods depend on different usages of the terms ‘inspection’ and ‘walkthrough,’ but the important point is that both involve a small group of professionals carefully reading through code together. We argue that this can be viewed as an imitation of the social (persuasive rigorous argument) form of proof described earlier where ‘mathematicians talk to each other’ in symposia and colloquia and so on (DeMillo et al., 1977). The original programmer should be in the group, analogous to the mathematician demonstrating a proof or principle to expert colleagues. The aim (as originally suggested by Weinberg [1971]—an ‘egoless’ approach) is to discover as many errors as possible rather than to try to demonstrate that there are none. So the team is to act as an idealised group of ‘Popperian’ scientists looking for ‘refutations’ (Popper, 1963). Under such an approach, one can never be entirely sure that the code is correct. But, as the walkthrough proceeds, the original programmer and the code inspection team can gradually come to trust the code as bugs are weeded out and fixed.
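The Popperian point, that a passing test suite says nothing about the inputs never tried, can be made with a deliberately contrived toy example of our own (it is not from Myers):

```python
import random

def midpoint(a: int, b: int) -> int:
    if (a, b) == (13, 42):   # a planted defect on one pathological input
        return 0
    return (a + b) // 2      # correct everywhere else

# A million random tests are overwhelmingly unlikely to hit the defect,
# so they appear to 'demonstrate' correctness of code that is wrong.
random.seed(1)
for _ in range(1_000_000):
    a, b = random.randrange(10**6), random.randrange(10**6)
    assert midpoint(a, b) == (a + b) // 2
print("1,000,000 tests passed; the bug is still there.")
```

A human reader walking through this code spots the special case at once; that is precisely the kind of error-hunting for which the inspection team is assembled.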
Myers claims positive advantages of code inspections and walkthroughs, including the value of the original programmer talking through the design (and thus spotting the errors). He also notes the ability of human testers to see the causes and likely importance of errors (where a machine might simply identify symptoms) and also the likelihood that a batch of errors will be identified simultaneously. Also the team is able to empathise with and understand the thought processes of the original programmer in a way which a machine arguably cannot. Importantly, the team can be creative in its approach. In working together they also, inevitably, form something of a sharing and trusting community (even if it is disbanded after a day or two).

The lesson gleaned from human verification techniques, such as walkthroughs and code


inspections, is that these have been regarded, for some time, as reliable, if not exhaustive, ways of ensuring reliability of software.

Software Quality Assurance and Military Standards for Software

The software verification techniques of code walkthroughs and inspections are important parts of the armoury of SQA. Effectively, we argue that SQA is a branch of software engineering which formalises and standardises the very human methods of trust, and ultimately control outlined above, which we need to build into software engineering procedures. The SQA movement is an important part of the story of the growth of software engineering because of its quest for rigour and control of potentially unruly programs and programmers.

First of all, SQA offers a promise of rational control over software, the software development process, and those who produce software. Software quality criteria include features for directing, controlling, and importantly, measuring the quality of software (Gillies, 1997). ‘Qualification’ is achieved when a piece of software can be demonstrated to meet the criteria specified in these quality procedures. An important aspect of SQA involves demonstrating that software meets certain defined independent standards.

The development and adherence to software standards is a very important part of the story of SQA. Generic industry standards are available, but also of much interest—particularly for the case study set out later in the chapter—are military standards. Indeed, the defence industry is so influential that Tierney (1993) argues that military standards influence software engineering far beyond applications in defence. Hence military standards are a very important part of SQA, and ultimately are important in formalising ways in which designers of computer systems can come to trust the systems and the production of correct software.

A number of military standards have been developed to regulate and control the use of software in defence applications. For instance, US standards DOD-STD-2167A (1988), MIL-STD-498 (1994), and ISO/IEC 12207 (1995) respectively established the requirements for software development and documentation in all equipment to be used by the US military (and effectively that of all Western armed forces), introduced object oriented development (OOD) and rapid application development (RAD), then broadened the scope of international standards to include acquisition and maintenance (DSDM Consortium, 2006).

The relevant UK standard 00-55 (MoD, 1997), Requirements for Safety Related Software in Defence Equipment, was published in 1997 and echoes much of MIL-STD-498, but moves the discussion on provably correct software in a particular direction. At first sight, this seems highly significant to the current argument, because it clearly expressed a preference for formal methods, in other words mathematical procedures whereby the software is proved to be correct by a machine (MacKenzie, 2001).

Tierney (1993) argues that the release of UK Defence Standard 00-55 in draft in 1989 had the effect of intensifying the debate over formal methods in the UK software engineering community. It devoted as much space to regulating and managing software development labour processes as the techniques and practices to be used for formal designs. This reinforces our argument that SQA is concerned with control of work processes and those who perform them, the software developers. On the one hand, many argued that mathematical techniques for software development and verification could only ever be used sparingly, as there simply was not enough suitable mathematical expertise in most organisations and it increased software quality at the expense of programmer productivity. On the


other side, those from a more mathematical camp argued that there was commercial advantage in proving software correctness as errors could be trapped earlier in the software development cycle (Tierney, 1993, p. 116).
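The commercial logic of trapping errors early can be illustrated without full formal proof. Making assumptions explicit, for instance as executable preconditions, catches a defect at the interface where it arises rather than during later integration or flight test. The example below is a hypothetical sketch of ours, not one from Tierney:

```python
# An error trapped at the earliest point in the development cycle:
# the precondition fails at the call site, not deep inside later processing.
def set_flap_angle(angle_deg: float) -> float:
    assert 0.0 <= angle_deg <= 40.0, "flap angle out of range"
    return angle_deg

set_flap_angle(25.0)          # accepted
try:
    set_flap_angle(95.0)      # rejected immediately, at the point of error
except AssertionError as err:
    print(f"trapped early: {err}")
```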
Designed into the MoD (UK Ministry of Defence) standard was a view of safety-critical software as an important area of regulation and control. Some of the reason for this was a change in its own organisation from the 1980s. The UK government sought to open up work traditionally done in-house by the MoD in its own research establishments to private contractors (Tierney, 1993, p. 118). Given that it had to offer its software development to the private sector, it built in ways of controlling it within its defence standards (Tierney, 1993, p. 118). Further political impetus was offered by the introduction of consumer protection legislation in the UK in the late 1980s, which required software developers to demonstrate, in the event of an accident enquiry, that their software had not contributed to the accident and that they had demonstrably attended to safety. Thus we can see that in Def Stan 00-55, politics, in the shape of the MoD’s need to open up software development to the private sector and also to avoid being held responsible for inadequate software in the event of an accident, played an important role.

However, more significantly, this document has itself been superseded in 2004 by (draft) standard 00-56 (MoD, 2004). Def Stan 00-55 has now become obsolete. The changes involved in Def Stan 00-56 are of great interest, in that the preference for formal methods is lessened. In the new standard, it is accepted that provably correct software is not possible in most cases and that we are inevitably involved in a human operation when we attempt to show that code is reliable in a safety-critical environment. Without a more detailed consideration of the history of formal methods in the UK over the last decade, which is beyond the scope of the present chapter, a strong claim about the move back to more human methods of verification might be difficult to sustain. Nevertheless it is interesting to note the way that Def Stan 00-55, with its emphasis on formal approaches and attendant onerous work practices, has been consigned to the history books with a clear move back to human verification.

Case Study Context

The case study relates to a large European military aircraft company (MAC) with which one of the authors was engaged as a researcher in a joint research project, lasting around three years, during the mid to late 1990s. A high proportion of the senior management were men and its culture was masculine in style, particularly emphasising an interest in engineering and technical mastery (Faulkner, 2000). Indeed there was much interest, pleasure, and admiration for elegant products of engineering (Hacker, 1991). When one of their fighter planes flew over (an event difficult to ignore on account of the engine noise), offices would clear as employees went outside to admire the display of a beautiful machine. A certain amount of military terminology was used, sometimes ironically, in day-to-day work. A number of employees had links with the armed forces. MAC was exclusively involved in the defence industry, with the UK’s MoD being its largest customer and other approved governments buying its products.

As a manufacturing company in an economy where manufacturing was in steep decline and with its ties to the defence industry, if a major defence contract went elsewhere, jobs would be on the line. Despite the ‘hi-tech’ nature of its work, MAC had a traditional feel to it. The company had existed, under one name or another, right from the beginning of the avionics industry. The defence industry, and within that the defence aerospace industry, faced uncertain times as the UK government was redefining its expectations of the defence industry in post-Cold War times. It quickly came to expect much clearer demonstrations of value for money (Trim, 2001).


Therefore, the ‘peace dividend’ brought about by the end of the Cold War meant uncertain times for the defence aerospace industry as military spending was reduced significantly (Sillers & Kleiner, 1997). Yet, as an industry contributing huge amounts to the UK economy (around £5 billion per annum in export earnings; Trim, 2001, p. 227), the defence industry is hugely important in terms of revenue and employment. Defence industries have civil wings (which was the case with MAC) and it was seen as important that the defence side of the business did not interfere with civil businesses. For instance, BAE Systems is a partner in a European consortium and was pledged £530 million as a government loan to develop the A3XX aircraft to rival the USA’s Boeing 747 (Trim, 2001, p. 228).

Although not strictly a public sector organisation itself, its location in the defence industry put MAC’s business in the public sector. However, in the UK, views of public sector management were undergoing rapid change in the mid 1990s and it was seen as no longer acceptable that the taxpayer should underwrite investment (Trim, 2001). Such firms were required to be more competitive and to be held more accountable financially. Hence, quality management and value for money were becoming key concepts in the management repertoire of the UK defence industry from the mid 1990s onwards. As we discuss in the preceding section, this was at the height of the UK MoD’s interest in formal approaches to the production of software. In a climate where post-Cold War defence projects were likely to demand a shorter lead time, there was considerable interest in speeding up the software development process.

Computer technology and related activity clearly played a central role in MAC. One division of MAC, the Technical Directorate (TD), developed most of the airborne software (much of it real-time). This software clearly has a central role in ensuring aircraft performance and safety. Around 100 people were involved in developing systems computing software. It was in this division that Software Development System (SDS), a safety-critical airborne software system for flying military aircraft, was developed.

Research Methodology

The methodological approach of the research was based on action research (Myers & Avison, 2002). As several successful participant observation studies in technology based organisations have been reported in the literature (Forsythe, 2001; Low & Woolgar, 1993; Latour & Woolgar, 1979), an ethnographic approach holds much appeal. However, a strict ethnographic approach was neither feasible nor desirable in this study. As someone with technical expertise, the researcher could not claim to be the sociologist or anthropologist more typical of reported ethnographic studies of technological systems (Low & Woolgar, 1993; Forsythe, 2001). This also meant that he was not ‘fobbed off’ by being directed into areas that the participants thought he wanted to look at or where they thought he should be interested in, as happened in the Low and Woolgar (1993) case study. Based in the Quality Assurance Division (QAD) in the SQA team, early in his research the researcher proved his technical credentials by helping run a workshop on software metrics and this helped to gain him full inclusion in the technical work. As a technical researcher, rather than a social researcher, it was arguably difficult for him to maintain the ‘anthropological strangeness’ which ethnographers look for in explaining the common sense and everyday logistics of working life. In any case, he had been invited, through this research, to make a contribution to the improvement of SQA procedures. Therefore the research can be characterised as a form of action research (Baskerville & Wood-Harper, 1996), where potential improvements to SQA were to be seen as the learning part of the action research cycle.

Although action research receives a mixed press from the IS research community (Baskerville & Wood-Harper, 1996; Lau, 1999), it is


nevertheless seen as a way of coming to grips with complex social settings where interactions with information technologies must be understood within the context of the whole organisation. Baskerville (1999) notes the growing interest in action research methods in information systems research. Two key assumptions are that complex social settings cannot be reduced for meaningful study and that action brings understanding (Baskerville, 1999). The culture of MAC was extremely complex, as we characterise above and discuss again in what follows. Arguably, key elements would be lost were the researcher to have adopted a more distant role, relying on interviews and questionnaires rather than becoming fully immersed and contributing to the detail of the project. The researcher adopted an interpretivist approach, looking to the interpretations of the other participants of the research. But by allowing for social intervention he became part of the study, producing shared subjective meanings between researcher and subjects as coparticipants in the research (Baskerville, 1999).

For a period of over one year out of the three that the whole project lasted, the researcher spent, on average, one day per week working with MAC staff, with access to a variety of staff across the organisation, and was therefore able to participate in a range of meetings and workshops and to gain a familiarity with the individuals concerned. This could not easily have been gained from interviews or surveys. These events ranged from meetings where software quality staff considered quality policy, such as the implications of international standards, to broader meetings where technical staff were considering development methods in detail. Free access was allowed to relevant policy and development documents. This permitted an overview of the detailed practices and culture of this large and complex organisation.

Analysis of Case Study Findings

The initial remit of the researcher was to work with staff to optimise the use of software quality assurance within the organisation. The use of cost benefit analysis was originally suggested by senior management. Given our characterisation of the UK defence industry’s particular focus on management of quality and value for money, as described above, it is entirely in keeping with the industry’s changing needs that the researcher was initially directed into these areas. The researcher viewed it as problematic to assign monetary cost to SQA activities, and even harder to assign monetary benefits. However, these concerns were never addressed directly in the project as it soon emerged that there was greater interest in a new approach to software development being pioneered by MAC.

Ince (1994, p. 2-3) tells the story of a junior programmer’s first day in a new job. A senior programmer shows him around, advising him where to buy the best sandwiches at lunchtime, where to find the best beer after work, and other similarly important matters. Then the senior colleague points to a door. ‘Whatever you do don’t go through that door, the people there have been given the job of stifling our creativity.’ The door, of course, led to the quality assurance department.

The staff of MAC’s Quality Assurance Division expressed some similar feelings, albeit less dramatically. They wanted to act as consultants, offering a measure of creativity to the technical development process, although safely wrapped in appropriate quality assurance processes, but all too often they felt like the police. The strong awareness of the safety-critical nature of software development, and the related fairly advanced organisation of quality assurance in MAC, thanks in no small measure to the necessity to adhere to MoD standards, meant that SQA was never going to get quite the negative press that it attracted in Ince’s (1994) anecdote. Nevertheless, there was


still some feeling that the Quality Assurance Division could be brought on board in a project some time after the Technical Division had time to do the creative part.

Hence, TD had been prototyping the new SDS system for about a year when they decided to bring in the Quality Assurance Division. As we explain below, the newness of the style of development in SDS made it unclear how it was to be quality assured. Unsure of how to proceed, the SQA manager turned to the researcher for suggestions. The researcher now became involved in investigating the use of the new software development approach, which would involve the inclusion of computer generated program code (‘auto-code’) in safety-critical airborne software systems, leading to the approval of the new approach and its incorporation into MAC’s software quality assurance systems.

Although there has been a long tradition of using computers to aid the process of software engineering itself, such CASE tools (Pressman, 2005) have not generally been used to generate safety-critical code (this was always written by human programmers). The new MAC SDS was an ambitious system whose targets were principally to reduce avionics systems development time by 40% and the cost by 30%, whilst maintaining the very high quality standards necessary for computer-based systems which fly—and therefore can crash—military aircraft.

A key aspect of SDS was process integration using an integrated modeling environment. There was consequentially a heavy reliance on automated methods. A specification was developed in a formal modeling language and this generated programming code automatically. In particular, automatic code generation was eventually to lead to aircraft flying ‘auto-code’ in safety-critical systems. Two aspects of SDS stand out in the climate of defence spending of the mid 1990s. First, there was pressure to reduce costs and show value for money. Second, the use of formal methods in computer programming received a huge boost in the mid-1990s through the defence standard Def Stan 00-55, which mandated the use of formal methods-based approaches in safety-critical software. It is not surprising that there was considerable interest in a system which offered the promise of considerably reduced software production times.
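The chapter does not describe MAC’s modeling language or tools, so the following is a deliberately toy sketch of the general idea of auto-code: a declarative specification (here, a tiny state machine) from which the executable transition function is generated mechanically, with no human writing the resulting code:

```python
# Toy auto-code generator: the 'specification' is data; the code is emitted.
SPEC = {
    ("IDLE", "arm"): "ARMED",
    ("ARMED", "fire"): "FIRING",
    ("ARMED", "disarm"): "IDLE",
}

def generate_step(spec: dict) -> str:
    lines = ["def step(state, event):"]
    for (state, event), target in spec.items():
        lines.append(f"    if (state, event) == ({state!r}, {event!r}):")
        lines.append(f"        return {target!r}")
    lines.append("    return state  # events with no transition are ignored")
    return "\n".join(lines)

print(generate_step(SPEC))   # this output is the code a reviewer must trust
```

The quality assurance difficulty described below follows directly: the generated function has no human author who could walk it through an inspection.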
MAC invested a great deal of money and time in SDS in the hope that the improved time-scales which SDS promised, together with reduced costs, could keep major current aircraft developments on course. This was particularly important in an environment of political intervention and considerable public interest and concern over escalating costs and delivery times in the public sector, including the defence industry. These benefits could only accrue to MAC if the quality, that is, correctness of the software, could be assured.

SDS was heavily dependent on CASE (computer-aided software engineering) tools. MAC had used these for many years, and had procedures in place for their qualification (i.e., acceptance) in certain circumstances. However, these applied to mission-critical rather than safety-critical systems. Furthermore, the movement towards auto-generated code led to a different environment than one where tools improved and speeded up the design process, but where failure would show up and be merely time-wasting. There was seen to be a need for a major improvement/update of these procedures, a quantum change, before they would be acceptable for safety-critical applications.

Some tools being used had major world-wide user communities, associated academic conferences, and came from supposedly secure and reliable suppliers. Others might not be so well supported, both intellectually and commercially. (For instance, it might be no use having an ideal tool if the supplier was small and unlikely to survive for many years.) Methods already existed for supplier qualification. These methods were undertaken by software quality staff. However, the qualification of these suppliers could be a crucial issue in the qualification of the tool and ultimately


the integrity of the avionics system. The issue was not merely one of qualification, it was also one of demonstration of qualification to customers. Ultimately, the need in some sense to prove the new methods became paramount. Hence we can see that quality procedures did not just involve procedures, such as code walkthroughs, through which software teams could persuade themselves to trust program code; they also applied to the question of choosing and trusting suppliers.

A number of meetings took place with members of the SDS team. This discussion was very useful for an understanding of SDS and gave the researcher a richer understanding of the SQA needs. It soon became apparent that the fundamental problems with SQA in SDS were going to be difficult to answer.

The difficulties were centred around two conflicting ideas. The first of these was that for the persuasive rational argument approach to be successful there would be a need for a group of professionals to participate in code walkthroughs, with consequent discussion and persuasion. On the face of it, this was simply not possible, since the computer which wrote the auto-code could not take part in such a discussion. Alternative approaches were considered. Clearly there would be a stage before the auto-code (at the requirements specification level) where human agents were involved, but this was found to be too high level to meet the relevant military standards (the US MIL-STD-498 [1994] and the UK standard 00-55 [MoD, 1997]). Both standards are very specific about the exact conduct of the necessary walkthrough. It had to be a code walkthrough.

On the other hand, for the formal proof approach to work, there would first need to be such a formal proof. This did not seem within the capability of the QAD itself, despite the division being quite well resourced. MAC referred back to the auto-code tool suppliers, but once again there was no such proof and no realistic possibility of achieving such a proof. Although MAC was an important customer for the auto-code tool suppliers, they were not prepared to expend the necessary resources. Furthermore, a ‘weakest link’ argument demonstrates a fundamental flaw with the formal approach in computer systems. If the auto-code tool itself could be formally verified, it would then become necessary also to consider the operating system on which the tool would run and the hardware systems involved. Potentially this could involve a seemingly infinite regression of hardware and software systems having to be proved correct, where the system is only as good as its weakest link. Frustration grew as no solution was forthcoming and ultimately SDS was shelved indefinitely.
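One crude way to picture the regress is multiplicative: whatever assurance one has in each layer of the stack, assurance in the whole chain can only shrink as layers are added. The numbers below are arbitrary and purely illustrative:

```python
# 'Weakest link' arithmetic: confidence in a chain of independently
# assured layers decays multiplicatively as the chain grows.
layers = ["auto-code tool", "compiler", "operating system", "hardware"]
confidence_per_layer = 0.99
print(confidence_per_layer ** len(layers))   # ~0.96, and falling per layer
```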
We have argued that mathematical proof is essentially a human achievement between members of the expert mathematical community who are persuaded of the correctness of mathematical proofs because they trust each other. These processes of trust are replicated in the procedures that have been developed in software engineering, and within that, software quality assurance. As part of the defence industry, developing safety-critical systems, MAC had highly developed SQA procedures which were obliged to follow international military standards. Their code walkthroughs, which are analogous to the ways mathematicians achieve trust in a proof, were an important part of such quality procedures. Formal methods offer the promise of an attractive certainty and control over software production and hence control over the work processes of human programmers. They also offer the promise of automatic verification of software systems which, potentially, could be much cheaper than traditional human based approaches to the verification of software through traditional SQA procedures.

SDS achieved very little despite the huge efforts put into it by the many people working for MAC. Although it was not, at the time, formulated in such stark terms, success was elusive because an attempt was being made to achieve the


impossible: namely using auto-code whilst being held to quality assurance procedures which demanded code walkthroughs which could not possibly be achieved in an auto-code system. Attempts were made to consider formally proving the correctness of the auto-code. In addition to supplier reluctance, this raised the spectre of the infinite regress. If one looks to proving the auto-code correct, then the operating system must be proved correct, the hardware platform and so on.

This was at the height of interest in formal methods for safety-critical systems for defence, a view embodied in Def Stan 00-55. The rise of formal methods is crucially linked to the defence industry. The interest in formal methods and automated approaches arrived as pressure mounted on Western governments to prove cost effectiveness due to the changing nature of defence developments after the end of the Cold War and the need to avoid litigation for software that might be implicated in an accident. Yet the difficulties of applying formal methods in systems of any level of complexity and the need to trust the program code acted as a spur to maintain complex human centred software quality assurance procedures.

Conclusion: Trusting Computers

There is much evidence that we already do trust computers in many walks of life without formal proof or other formal demonstration, even to the extent of trusting safety-critical systems such as the ‘fly by wire’ software in the Boeing 777 airliner, two million lines of code which have not been fully proved (Lytz, 1995). Expert mathematicians have begun to accept computer generated proofs, albeit in qualified ways (Chang, 2004). As MacKenzie (2001, p. 301) argues, ‘moral entrepreneurs’ of computerised risk ensure that warnings about computerised risk are heeded so that safety-critical software is avoided and, where it is unavoidable, much care is taken over its development. Military standards, so detailed about the use of formal methods in software design and attendant work processes in the 1990s, have moved a decade later to be much less prescriptive about the work methods of ensuring software quality, thereby allowing for the crucial element of human inspection in order that the software may be trusted. As Collins (1990) notes, we are remarkably accommodating to computers, making sense of them and involving them in our social networks, and will continue to find imaginative ways of doing so. This echoes Nissenbaum’s (1999) view that we may trust computers if we can treat them as ‘agents.’ We may meaningfully ascribe intentions and reasons to them.

In this chapter we have sought to tell a story of trust, in particular how software may be trusted when it is not produced by a human programmer. This involves consideration of a complex set of discourses including the question of mathematical proof and how proof is achieved within mathematical communities. We see a similar need to replicate such human processes of trust in trusting computer systems. We have argued that the making of standards to be applied within software quality assurance procedures shows ways in which mechanisms of trust are inscribed in software standards. Our case study, an action research project in a military aircraft company, demonstrates the difficulties which occur when quality assurance procedures involving code walkthroughs—procedures with built-in human trust mechanisms—are incommensurable with a system which relies on auto-code. The climate of defence research and spending was a major influence, both on our case study and the wider development of standards. There is a continued tension between needing to trust and trying to control: trusting the software and controlling its production. The story which we tell here is one of continuing human ingenuity in finding ways of trusting computer software.


References

Akrich, M. (1992). The de-scription of technical objects. In W. E. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 205-224). Cambridge, MA/London: MIT Press.

Baskerville, R. (1999). Investigating information systems with action research. Communications of the Association for Information Systems, 2(19). Retrieved October 5, 2006, from http://www.cis.gsu.edu/~rbaskerv/CAIS_2_19/CAIS_2_19.htm

Baskerville, R., & Wood-Harper, A.T. (1996). A critical perspective on action research as a method for information systems research. Journal of Information Technology, 11, 235-246.

Chang, K. (2004, April 6). In math, computers don’t lie. Or do they? New York Times. Retrieved October 5, 2006, from http://www.math.binghamton.edu/zaslav/Nytimes/+Science/+Math/sphere-packing.20040406.html

Collins, H.M. (1990). Artificial experts: Social knowledge and intelligent machines. Cambridge, MA: MIT Press.

Davies, B. (2006, October 3). Full proof? Let’s trust it to the black box. Times Higher Education Supplement.

De Millo, R.A., Lipton, R.J., & Perlis, A.J. (1977). Social processes and proofs of theorems and programs. In Proceedings of the 4th ACM Symposium on Principles of Programming Languages (pp. 206-214).

DSDM Consortium. (2006). White papers. Retrieved October 5, 2006, from http://www.dsdm.org/products/white_papers.asp

Faulkner, W. (2000). The power and the pleasure? A research agenda for ‘making gender stick.’ Science, Technology & Human Values, 25(1), 87-119.

Forsythe, D.E. (2001). Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford University Press.

Gillies, A.C. (1997). Software quality: Theory and management (2nd ed.). London/Boston: International Thomson Computer Press.

Hacker, S. (1989). Pleasure, power and technology: Some tales of gender, engineering, and the cooperative workplace. Boston: Unwin Hyman.

Ince, D. (1994). An introduction to software quality assurance and its implementation. London: McGraw-Hill.

Kuhn, T.S. (1962). The structure of scientific revolutions. University of Chicago Press.

Latour, B., & Woolgar, S. (1979). Laboratory life: The social construction of scientific facts. Princeton University Press.

Lau, F. (1999). Toward a framework for action research in information systems studies. Information Technology & People, 12(2), 148-175.

Low, J., & Woolgar, S. (1993). Managing the socio-technical divide: Some aspects of the discursive structure of information systems development. In P. Quintas (Ed.), Social dimensions of systems engineering: People, processes and software development (pp. 34-59). New York/London: Ellis Horwood.

Lytz, R. (1995). Software metrics for the Boeing 777: A case study. Software Quality Journal, 4(1), 1-13.

MacKenzie, D.A. (2001). Mechanizing proof: Computing, risk, and trust. Cambridge, MA/London: MIT Press.

MacKenzie, D.A. (2004). Computers and the cultures of proving. Paper presented at the Royal Society Discussion Meeting, London.

Ministry of Defence (MoD). (1997). Requirements for safety related software in defence equipment.


Retrieved October 5, 2006, from http://www.dstan.mod.uk/data/00/055/01000200.pdf

Ministry of Defence (MoD). (2004). Interim defence standard 00-56. Retrieved October 5, 2006, from http://www.dstan.mod.uk/data/00/056/01000300.pdf

Myers, G.J. (1979). The art of software testing. New York: Wiley.

Myers, M.D., & Avison, D.E. (Eds.). (2002). Qualitative research in information systems: A reader. London: Sage Publications.

Nissenbaum, H. (1999). Can trust be secured online? A theoretical perspective. Etica e Politica, 2. Retrieved October 5, 2006, from http://www.units.it/~etica/1999_2/nissenbaum.html

Popper, K.R. (1963). Conjectures and refutations. New York: Harper.

Pressman, R. (2005). Software engineering: A practitioner’s approach (6th ed.). London/New York: McGraw Hill.

Sillers, T.S., & Kleiner, B.H. (1997). Defence conversion: Surviving (and prospering) in the 1990s. Work Study, 46(2), 45-48.

Singh, S. (1997). Fermat’s last theorem. London: Fourth Estate.

Stahl, B.C. (2006). Trust as fetish: A critical theory perspective on research on trust in e-commerce. Paper presented at the Information Communications and Society Symposium, University of York, UK.

Tierney, M. (1993). The evolution of Def Stan 00-55: A socio-history of a design standard for safety-critical software. In P. Quintas (Ed.), Social dimensions of systems engineering: People, processes and software development (pp. 111-143). New York/London: Ellis Horwood.

Trim, P. (2001). Public-private partnerships and the defence industry. European Business Review, 13(4), 227-234.

Weinberg, G. (1971). The psychology of computer programming. New York: Van Nostrand Reinhold.

This work was previously published in International Journal of Technology and Human Interaction, Vol. 3, Issue 4, edited by
Bernd Carsten Stahl, pp. 1-14, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of
IGI Global).

Section II
Social and Cultural Aspects

Chapter VI
Information Security Culture as a Social System: Some Notes of Information Availability and Sharing
Rauno Kuusisto
Finland Futures Research Center, Turku School of Economics, Finland

Tuija Kuusisto
Finnish National Defense University, Finland

Abstract

The purpose of this chapter is to increase understanding of the complex nature of information security culture in a networked working environment. The viewpoint is comprehensive information exchange in a social system. The aim of this chapter is to raise discussion about information security culture development challenges when acting in a multicultural environment. This chapter does not introduce a method to handle complex cultural situations, but gives some notes towards understanding what might lie behind this complexity. Understanding the nature of this complex cultural environment is essential to form evolving and proactive security practices. Direct answers to formulate practices are not offered in this chapter, but certain general phenomena of the activity of a social system are pointed out. This will help readers to apply these ideas to their own solutions.

INTRODUCTION

Information security issues can be considered as balancing between information availability and confidentiality. Organizations should be able to understand what kind of information shall be and will be available to ongoing and future activities and which parts of that shall be secured. This


information depends on the situation and those phenomena that emerge from the complex networked working environment. Information security culture operates behind security management and technology. Understanding the nature of this complex cultural environment is essential to form evolving and proactive security practices. Direct answers to formulate practices are not offered in this chapter, but certain general phenomena of the activity of a social system are pointed out. This will help readers to apply these ideas to their own solutions.

A system can be considered as a comprehensive wholeness that is constructed of nodes and connections between them (Castells, 1996). Nodes can be human beings, organizations, communities, technological systems, natural systems, or sub-systems of various entities (e.g., Checkland & Holwell, 1998; Checkland & Scholes, 2000). Information is something that is required to launch activity while moving between nodes. Security can be considered as a comprehensive concept that enables activities to be conducted in an environment that is stable and predictable enough to gain desired objectives. Culture is a social structure that tends to maintain certain patterns. This pattern maintenance is driven by information called values and valuations. Each actor has their own kind of cultural structures and values and their interpretation of other values (Schein, 1992). It is obvious that a system contains several cultural phenomena that are exchanging value and other information. Culture itself is thus a complex system that evolves during time while various interacting actors are exchanging information.

The theoretical background is based on the theory of communicative action by Jurgen Habermas (1984, 1989). In this theory, Habermas is constructing a communicative system consisting of structures, activities, and information interacting in a social context on the basis of the sociological ideas of Talcott Parsons. We are using this systemic construction as a basis, against which we are applying the concept of information security culture. Some examples of information sharing practices of various actors are presented to learn certain phenomena concerning the development of information security culture.

Interest in the security of information and knowledge has increased together with the development of coalitions between states and networks between public and private organizations. It is obvious that security activities are needed for protecting information vital to the functions of the states and organizations (e.g., Finnish Government, 2003; OECD, 2002). The emphasis of security activities has been on the means to protect the confidentiality and integrity of information flows on those networks. However, keeping information confidential is not as challenging as the identification of critical information and core knowledge from all of the information available. That is the reason why we focus here on information availability. Modern societies and organizations depend on information and knowledge. They need to identify critical information and core knowledge and put them available either for internal use or for external use visible to customers, partners, and competitors to survive or to gain competitive advantage. So, states and organizations have to find a balance between the confidentiality and availability of information. They need this balance to identify and communicate information that suits their goals.

Information security culture can be seen as a concept that provides means to reach the balance between confidentiality and availability of information. Edward Waltz (1998) defines three major information security attributes as follows (a short illustrative sketch follows the list):

1. Availability provides assurance that information, services, and resources will be accessible and usable when needed by the user.
2. Integrity assures that information and processes are secure from unauthorised tampering (e.g., insertion, deletion, destruction, or replay of data) via methods such as encryption, digital signatures, and intrusion detection.


3. Confidentiality protects the existence of a connection, traffic flow, and information content from disclosure to an unauthorised user (Waltz, 1998).
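As a small, self-contained sketch of the integrity attribute in the list above (our example, not Waltz’s), a keyed digest lets a receiver detect unauthorised tampering with a message:

```python
import hashlib
import hmac

key = b"shared-secret"                    # hypothetical pre-shared key
message = b"release valve at 14:00"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker who alters the message cannot recompute a valid tag:
tampered = b"release valve at 13:00"
check = hmac.new(key, tampered, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, check))    # False: tampering is detected
```

Confidentiality and availability rest on further mechanisms; the chapter’s interest, developed next, is in the cultural rather than the purely technical side of all three attributes.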
Whitman and Mattord (2003) define availability as follows: “Availability enables users who need to access information to do so without interference or obstruction, and to receive it in the required format.” Users in their definition are not only humans but computer systems, as well. According to their thinking, availability does not mean that information is automatically accessible to any user, but it needs verification of the user to become reachable to that nominated user. “The information is said to be available to an authorised user when and where needed and in the correct format” (Whitman & Mattord, 2003).

According to the concept of information security culture, the confidentiality, integrity, and availability of information shall not be based only on norms such as information security policies, but on a shared organizational culture. The aim of this chapter is to increase understanding about the development of information security culture for multicultural states and organizations acting in global environments. The theoretical background of the chapter is based on Habermas’ (1984, 1989) theory of communicative action. Some notes will be made by referring to Luhmann’s (1990) ideas of the difference of the various functions of the society in interpreting reality, seasoned with Schein’s (1992) nearly classical ideas about organizational culture. Cultural aspects will be approached through both a literature review and empirical results on information sharing in different decision-making situations.

Security is considered as a whole, but the focus is set on the socio-cultural viewpoint. The meaning of security culture forming, and approaches to create a holistic security cultural atmosphere, are discussed. The basic assumption is that a multicultural organization is able to achieve a unified information security culture, but this culture reveals itself in different ways to the various actors of the network. Cultural aspects will show themselves in a dynamic way, as well. Culture seems to change during time and from one situation to another.

The research approach is hermeneutics, pursuing to gain understanding about the process of forming a culture. Information systems security is studied mainly in a social context and the research approach is empirical theory creating. The research is completed by first analysing the concept of information security culture and explaining the main content of selected theories of social systems. Aspects of information security culture are combined to these theories. Secondly, discussion based on case studies is presented. Finally, we will ponder what information shall be communicated to gain unity in a security culture and what kind of challenges will arise during the process of forming the culture.

INFORMATION SECURITY CULTURE

The networked working environment sets new kinds of challenges to the information system security area. Interaction between various actors is dynamic and emergent. Flexibility and certain kinds of meta-policy processes have come into view (Baskerville & Siponen, 2002). That is obvious, because to understand what is required at organization level or at its sub-levels, the completeness of the overall system behavior shall be understood at least to some degree.

Dhillon (1997) has a broad view of the term “security culture.” He defines that security culture is the behavior in an organization that contributes to the protection of data, information, and knowledge (Dhillon, 1997). Data are typically defined to be known facts that can be recorded. Data are suitable for communication, interpretation, or processing by humans or artificial entities. Information is usually defined as structured data useful for analysis (Thierauf, 2001). When structured,


data are turned into information (e.g., Niiniluoto, 1997). Awad and Ghaziri (2003) state that information has a meaning, purpose, and relevance. They emphasized that information is about understanding relations.

Knowledge is defined as “the ability to turn information and data into effective action” (Applehans, Globe, & Laugero, 1999). Previously, the security of data and information has been emphasized when security practices in organizations have been developed. However, the protection of core knowledge is as critical as the protection of key data and information for an organization. Knowledge is distinctly different from data and information. Knowledge is the ability to turn information and data into effective action (Applehans et al., 1999). It is a capacity to act (Sveiby, 2001). According to Maier (2002), “Knowledge comprises all cognitive expectancies that an individual or organisational actor uses to interpret situations and to generate activities, behaviour and solutions no matter whether these expectancies are rational or used intentionally.” In cognitive expectancies “observations have been meaningfully organised, accumulated and embedded in a context through experience, communication, or inference” (Maier, 2002). Knowledge grows through the whole life of an actor, and all new perceptions are interpreted against the organised, understood, and accepted field of information. This very same idea about knowledge is found in the production of Merleau-Ponty (1968) and Bergson (1911). Incoming information is interpreted through a mental filter that consists of the internalised perception history of the entity.

Von Solms (2000) included information security culture development in the third wave of information security, that is, in the institutionalization wave. The aim of the institutionalization wave is to build information security culture in such a way that information security becomes a natural aspect of the daily activities of all employees of the organization. It covers standardization, certification, measurement, and concern of the human aspect to information security (Von Solms, 2000) (Figure 1).

Figure 1. Relation of effectiveness and required time to develop information security

Most of the recent papers approach information security culture from theories and models of organizational culture (Nosworthy, 2000; Chia, Ruighaver, & Maynard, 2002; Martins & Eloff, 2002; Schlienger & Teufel, 2002; Zakaria & Gani, 2003). Chia et al. (2002) based their work on a general framework of an organizational culture developed by Detert, Schroeder, and Mauriel (2000). Martins and Eloff (2002) define that information security culture is the assumption about acceptable information security behavior and it can be regarded as a set of information security characteristics such as integrity and availability of information. They outlined an information security culture model consisting of organizational, group, and individual levels. The aim of their work is to support an organizational information security culture evaluation. Martins and Eloff (2002), Schlienger and Teufel (2002), as well as Zakaria and Gani (2003) adopted Schein’s (1992) organizational cultural model. Schlienger and Teufel (2002) and Zakaria and Gani (2003) give examples of information security issues related to each of the elements of the model. Zakaria, Jarupunphol, and Gani (2003) have the management perspective to the studying and applying of the organizational culture into information security management.


They regard an information security culture as a subculture in an organization. As a summary, the authors stress organizational information security culture model development and security culture evaluation.

Nosworthy (2000) emphasizes that the organizational culture plays a major role in information security, as it may resist change or direct what types of changes will take place. Schlienger and Teufel (2002) argue that a corporate culture including an information security culture is a collective phenomenon that is changing over time and it can be designed by the management of an organization. So, there is a need to understand the fundamentals of culture when aiming to develop information security culture in an organization.

Culture is most commonly defined as a set of shared values, shared understanding, or even shared methods of problem solving (Bell, 1998; Habermas, 1984, 1989; Hofstede, 1984), but some still use a definition of culture that is all-encompassing and abstract in manner and which provides very little help in the identification of cultural properties. Values are the commonsense beliefs about right and wrong that guide us in our daily lives (Fisher & Lowell, 2003). Straub, Loch, Evaristo, Karahanna, and Strite (2002) argue that information systems (IS) research nearly always assumes that an individual belongs to a single culture. They proposed social identity theory to be used as a grounding for cultural research in IS. Social identity theory suggests that each individual is influenced by a plethora of cultures (Straub et al., 2002). When applied to information security culture research, this means the interpretation of information security culture is influenced by several cultures (Figure 2).

Figure 2. The interpretation of information security culture is influenced by a plethora of cultures (the figure relates the information security culture concept and its interpretation to societal, organizational, and individual cultures against the time duration needed to make changes)

An individual belongs to several ethical, national, organizational, and information security cultures. They have an effect on the way the individual interprets the meaning and importance of information security culture. These individual cultural aspects are rather solid and they describe the world of values of each individual actor.

Anyhow, it could be stated that interaction with several other actors gives rise to somewhat unpredictable cultural phenomena at the network level. The network, while interchanging all kinds of information, produces an ever changing combination of activity patterns that gets its force from each actor’s cultural phenomena and interaction. It is obvious that this kind of situation is more or less difficult to control. That is the reason why different kinds of methods of managing this ever changing combination of different valuations are now under development.

DYNAMIC ORGANISATION MODEL

Habermas (1984, 1989) bases his thinking on information classification in the theories of social sciences. He combines critically theories about society, a human being as a part of the society, and system theories. This approach will fit rather well into organisational and inter-organisational features such as information security culture.

Habermas (1989, referring to Talcott Parsons’ production) states that there are four basic classes of information, which are directing an actor’s

81
These same basic items can be found in the background of any purposeful act at any level—from individuals via working groups to organisations, from individuals via families to societies. Those items contain information which—when used—will orient an actor to adapt its behaviour to fit better into its surroundings. So, actors in a system will interact with each other by exchanging these four types of information. That information fulfills the demands of the pattern maintenance, integration, goal attainment, and adaptation functions. Figure 3 depicts these dependencies.

Figure 3. Information and energy flows and functions using the information in an actor approached as a social system (Habermas, 1989)

The arrow named “information flow” describes the direction of information coming in to the information-refining process of an actor. It shows that values have effects on norms, both of which have effects on goals and their attainment, and, further on, all of those have effects on exploiting external information. Vice versa, the arrow called “energy flow” describes those activities which take place from using external information to changing values. An actor has a certain variety of resources, means, and facts to put into practice to achieve its goals (Habermas, 1989).
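To make this dependency ordering concrete, the four information classes and the functions they serve can be written down as a small data structure. The following Python sketch is ours, not Habermas’s: the class and attribute names are illustrative only, and the pairing of information classes with the pattern maintenance, integration, goal attainment, and adaptation functions simply restates Figure 3 in code.

from dataclasses import dataclass, field

# The four basic classes of information directing an actor's activity,
# paired with the function each class serves (restating Figure 3).
FUNCTIONS = {
    "values": "pattern maintenance",
    "norms": "integration",
    "goals": "goal attainment",
    "external_facts": "adaptation",
}

@dataclass
class Actor:
    """A state, organisation, team, or individual viewed as a social system."""
    values: list = field(default_factory=list)
    norms: list = field(default_factory=list)
    goals: list = field(default_factory=list)
    external_facts: list = field(default_factory=list)

    def information_flow(self):
        # Values condition norms; both condition goals and their
        # attainment; all three condition the use of external facts.
        return ["values", "norms", "goals", "external_facts"]

    def energy_flow(self):
        # The reverse direction: activity that exploits external
        # information eventually feeds back into goals, norms, and values.
        return list(reversed(self.information_flow()))

Exchanging items of these four types between such actors is, in this reading, the elementary act of interaction in the model.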
Information concerning values will determine a general system of culture. The function of culture is to maintain certain patterns of activity. These patterns consist of cognitive interpretation schemes, symbolic expressions, and value standards, like standards for solving moral-practical and cognitive-instrumental problems, as well as appreciations. Cultural orientations are both normative and motivational, the first containing cognitive, appreciative, and moral elements and the latter cognitive, mental-emotional, and evaluative ones (Habermas, 1989). Information about values forms the long-lasting basis of information creation. Information about values changes rather slowly, and it is more or less dependent on the culture of concern (Bell, 1998; Hofstede, 1984; Schneider & Barsoux, 1997).

Norms determine the mutually expected rules within which the subjects of a community perform their interactions. Norms entitle the members of a community to expect certain actions from each other in certain situations. That obligates the members of this community to meet the legitimate expectations of others. Norms build up a system of controls and orient actors’ activities to fulfill normative validity claims. The acceptance of norms will lead to full adaptation and further development of patterns (Habermas, 1989). The understanding of norms without acceptance will lead to various ways of action, from seemingly total adaptation in the context of the norm-setting community to total ignorance of norms and drifting outside of that community. The latter will also happen if norms are not understood. There, the dilemma of the subjective and objective world will be seen. Adaptation to the community will depend on the value-based judgement of the acceptance of those norms which are set by the community.

Goals will determine the desired end-state of actions. Goals direct resources and means to gain success as effectively as possible. Goals will provide information of polity, about those choices which are made by the top management of an actor. This actor can be, for example, a state, an organisation, a team, or even an individual. Finally, means and resources are used to put into practice such activity as will lead the actor to fulfill its goals as optimally as possible. The user of those resources is here called an “institution” (Figure 4). Originally, in Habermas’ theory, this structure is the economy. Anyhow, it could be thought that, depending on the viewpoint, this resource-using structure may just as well be something else. For example, from the viewpoint of an enterprise, the institution will be, for example, the marketing, production, and/or research and development department.

Figure 4. Information and energy flows and structure of an actor approached as a social system (Habermas, 1989)

The structural phenomena of this systemic approach comprise culture, community, polity, and institutions. The information flows and actions described above take place in these structures, which are subsystems of the whole system. Cultural systems are more solid than communities, which are again more solid than polity structures. This ontology may be applied to the organisational environment, as well. Organisational culture will remain at least partly intact in spite of organisational changes, both ontological and normative. Policy, which determines goals, will change with the demands of the surrounding environment and the information offered by norms. Finally, exploiting external information and using resources and means will be mostly dependent on goal setting.

To form a systemic model of the information exchange of a general actor, some further assumptions shall be made. Systems will produce activity when the right kind of information is fed into their structures. This activity again acts as input information for the system to produce new activity (Figure 5). This feature was discovered very early (Aristotle) and it can be found both in the literature (e.g., Maier, 2002) and in practical life.

Figure 5. Information-driven activity cycle in structure
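The feedback character of this cycle is easy to demonstrate with a toy loop. The sketch below is a deliberately minimal illustration under our own assumptions; the refine parameter stands in for whatever information-refining process an actor’s structure applies.

def activity_cycle(initial_information, refine, steps=3):
    """Toy feedback loop: information fed into a structure produces
    activity, and that activity re-enters as new input information."""
    information = initial_information
    history = []
    for _ in range(steps):
        activity = refine(information)  # the structure turns information into activity
        history.append(activity)
        information = activity          # the activity becomes the next round's input
    return history

# Each round interprets the previous round's activity as new information.
print(activity_cycle("observed event", lambda info: f"response to ({info})"))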
In addition, Habermas (1989) describes that a social system contains time and space dimensions. A system has an initial state and a goal state. Its communication is oriented both internally and externally, as well. A model that contains information, activity, and structure, as well as the temporal and spatial orientation of information exchange, can be formulated. This very rough model is depicted in Figure 6 (earlier published, e.g., in Kuusisto, 2004; Kuusisto, Nyberg, & Virtanen, 2004; Kuusisto, 2006).

Figure 6. Systemic model of an organization in time and space

This model consists of a four-field situated on time and space axes. The time axis contains the initial state and the futures’ state, and the space axis contains internal interaction and external interaction. Each field contains a certain kind of information, structure, and activity. Interactivity relationships of those four fields exist between their neighboring fields. Now these ideas can be formed into a holistic systemic model of organizational dynamics. This model is presented in Figure 7.

Figure 7. Model of organization dynamics

With the help of this systemic model, we can reach towards an understanding of those phenomena that may be important when examining the complexity of security issues in an organization. As can be seen in the model, the information of the different functional parts of the system is a combination of the influence of the neighboring parts of the system and the external input of each subsystem of the comprehensive system. It can easily be recognized that this kind of system is complex, and thus emergent. Depending on the viewpoint of an actor, it deals with somewhat different information. For example, an institution adapts to the overall system by interpreting information about cultural activities (pattern maintenance), the organization’s goal attainment, and those external facts that it collects from the other systems (e.g., stakeholders). Polity structures (e.g., the strategic management of an organization) form their worldview by using information about the organization’s features and competence, their stakeholders’ activities, and those goals that have been set for the organization. Those two viewpoints are rather different. It is obvious that forming an understanding of security culture is not so straightforward. This model gives a good starting point for evaluating the challenges that are faced when attempting to formulate such actions as will guarantee security activities that function as well as possible.

Habermas (1989) claims that the judgment basis of information exploitation and activity practices varies from one subsystem to another. From the institution’s viewpoint, universalism is an important judgment basis. People in the organization want to be “saved”; they are willing to see themselves continuing their existence in a safe environment. They are willing to obey their cultural heritage and to act in an environment where such decisions are made as ensure this continuous existence in mental space, as well. They make their perceptions of the outer world against this mental basis. On the other hand, polity structures make their judgments on the basis of performance. A community and its members should be governed so that the overall performance of the whole is as optimal as possible.
So, the orientation of these two subsystems to the complete system is different. Niklas Luhmann (1990) has the same kind of idea. He claims that different functions of a society have different bases for judging the plausibility of activity. He claims that, for example, for the economy this basis is to own—not to own; for law it is right—wrong; for science, true—untrue; for politics it is the right political program—other programs; for religion it is good—bad (behavior), or joy—fear; and for education this basis is the right attitude and competence versus the wrong attitude and competence. On the basis of these ideas, it can be claimed that different actors have different ways of interpreting the world that they face every day. It can be assumed that in an organization the management, marketing, production, and research and development personnel have different judgment bases for the issues that they face in their everyday activities. This happens in the area of security and security culture, as well.
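Luhmann’s binary codes lend themselves to a compact lookup table, as in the illustrative Python sketch below. The code pairs are transcribed from the list above; applying them to a concrete claim is our own assumed extension, not something drawn from Luhmann.

# Binary judgment codes per societal function (after Luhmann, 1990).
JUDGMENT_CODES = {
    "economy": ("to own", "not to own"),
    "law": ("right", "wrong"),
    "science": ("true", "untrue"),
    "politics": ("right political program", "other programs"),
    "religion": ("good", "bad"),
    "education": ("right attitude and competence", "wrong attitude and competence"),
}

def judge(function, claim_holds):
    """Return the side of the binary code a given function would apply."""
    positive, negative = JUDGMENT_CODES[function]
    return positive if claim_holds else negative

# The same security measure is judged on a different basis by each function:
for function in ("economy", "law", "science"):
    print(function, "->", judge(function, True))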
Over time, an actor such as a state or an organization, approached as a social system, will attempt to reach a goal state which contains a normatively unified community that sets mutually accepted goals in a policy process. This state will be constructed on cultural structures manifested by communicating values, and on the use of available resources. The system shall be able to maintain itself both internally and externally. Information concerning values and norms will determine the interaction with the system itself. The system, whether it is, for example, an organisation or a society, contains information about values and norms. This information will guide goal forming and the use of resources. Information about goals and resources will guide the social system to perform suitable interaction with the outer world.

Culture can be seen as a structural phenomenon whose aim is to maintain suitable patterns of a social system, to form a solid enough basis for orienting towards the future. Culture is communicated by values. A continuous process of the evolution of values and the reconstruction of norms will be present in the system itself. Having an effect on the objective world is done through policy-making and institutional structures. In an organisational environment, this means the will of the top management and the optimal use of organisational resources, like information, time, material, personnel, and money. Interaction takes place in a situation via a communicative process, where information about various items is shared between subjective actors using mutually understood codes. The whole interacting process is a series of situations in which mutual adaptation of the interacting actors takes place.
Information security culture in an organisation is most obviously a part of the organisational culture. The development of an information security culture can be seen as equal to any culture-forming process. Referring to Habermas’ theory, forming a structure called culture will require a lot of energy. If it is thought that energy is transferred via information, a subsequently great amount of information shall be delivered. Therefore, it will take a certain period of time to perform changes in cultural structures. Seemingly, it is very important to understand what kind of information is available to form this cultural basis.

Organization culture determines how the nature of reality is seen in the organisation. According to Habermas’ theory, culture is the structural phenomenon which acts as a platform from which the information about the basic nature of the organisation will rise. On the other hand, culture will be the ultimate structural frame of the memory of the organisation, where all that information which is considered the most valuable and preferable is stored during the entire life of the organisation. So, culture is a structure where the most long-effecting information, that is, the values of the organisation, will be stored. The energy to form the cultural structure will come via norms. Norms determine those rules which will be followed inside the organisation so that its members are able to work together as smoothly as possible. Norms and values are the inside information of an organisation, but they will be shown outside by performing activity via those goals that the organisation has. This means that the values of the organisation will be communicated to the surroundings through its activities.

On a practical level, it is a question of how to perform the social process between organization polity structures, organization members, and the stakeholder community. Finally, it is a question of how to reach an understanding of security policy and practices and of personal interpretations of threat. How possible is it to reach an understanding of the divergently oriented subjects of a comprehensive system? For a member of an organization, his or her feeling of security is real, and for management, its action to make the community a safer place in which to perform the organization’s functions is real. The community contains a good selection of those realities.

MODELING THE INFORMATION SHARING OF AN ORGANIZATION

Next, a general information content model of shared situation understanding is presented. The ultimate origin of the model is classic Greek philosophy (Aristotle), as depicted in Figure 8.

The information model that is used to analyze the dynamism of an organization’s behavior in different situations is described in Table 1. The rows describe the temporality and abstraction degree of information. Information in the upper row is relatively most abstract and future-oriented, and its effects are long-lasting. The lowest level contains information that updates fast, is concrete, and is observable as immediate events. The column at the left contains cultural information as described by Schein (1980, 1992). The next column contains actors’ internal information. The next contains information on expressed conclusions made by the actor. The rightmost column describes information that comes from outside of an actor or is remarkably affected by the world outside the actor itself. Rough contents of the information categories are described in the table, as well. The idea behind forming this framework is described, for example, in Kuusisto (2004) and Kuusisto (2006). The main idea of using this kind of model is to show how very divergent the information space is when organizations or other actors move from one kind of situation to another.
Figure 8. Classic approach to making choices (idea adopted from Aaltonen & Wilenius, 2002, referring to Malaska & Holstius, 1999) compared to the information categories suggested by Habermas

Every layer of the model has a specialized task in the overall process of forming an understanding of the working environment and using information in the decision-making process. The layer that deals with event information produces, all the time, an updated picture of events compared to the physical features of the organization. On the next layer upwards, the constraints are sorted out. This means the restrictions and possibilities that the environment produces and that the behavior and properties of the actor entail when it is interacting with its environment. Conclusions at this level are abstracted analyses of the restrictions on, and possibilities for, activity.
Table 1. Information categorization model (rows run from the most abstract, future-oriented, long-lasting information to the most concrete, fast-updating event information; columns are Values, Internal facts, Conclusions, and External facts)

Row 1:
• Values: Basic assumptions. Hidden assumptions that will guide the behavior of an actor.
• Internal facts: Mission, vision. An end-state of the actor.
• Conclusions: Decision. A solution based on thinking and assessment.
• External facts: Task. Given activities or work to be performed; for example, activities originated by upper-level management or by the development of a situation.

Row 2:
• Values: Socially true values. Those assumptions that are mutually accepted in a certain group as a basis for thinking and executing activities.
• Internal facts: Means. Activities or methods to reach an aim or fulfill a purpose.
• Conclusions: Alternatives to act. Descriptions of possibilities or proposals to act.
• External facts: Foreseen end states. Future situations most certainly reached when activities are finished.

Row 3:
• Values: Physically true values. Those assumptions that can be accepted as valid in a certain physical environment.
• Internal facts: Resources. Available material and human resources such as people, financial resources, material, and office space and time.
• Conclusions: Possibilities to act. A thing, event, or development that can be thought of or is expected; possibilities to act are derived from strategies and resources.
• External facts: Anticipated futures. Possible paths to the goal that the actor can choose and that provide something new to the actor; for example, strategy alternatives.

Row 4:
• Values: Social artifacts. The structure of a social system, principles of interaction, descriptions of nodes and their mutual positions, and observable behavior.
• Internal facts: Action patterns. How an actor can behave; stored in databases or held as tacit knowledge, for example, process descriptions, manuals, instructions, and action plans.
• Conclusions: Restrictions. Things that have to be considered before planning the use of resources and means; for example, restrictions placed on activities and conditions of information acquisition.
• External facts: Environment. An area or a space that affects an actor; for example, activities of media, market trends, national trends, global trends, and higher-level decisions.

Row 5:
• Values: Physical artifacts. Results of activity, like the technical results of a group, written and spoken language, symbols, and art.
• Internal facts: Features. Properties of objects, such as the properties of an organization or equipment; stored in databases or held as tacit knowledge, for example, infrastructure descriptions, properties of equipment, and competencies of people.
• Conclusions: Event model. A description that enables the outlining of the pattern of a situation; for example, reports, documents, and analyzed conclusions such as quality reports, statistics, pictures, and maps.
• External facts: Events. Time-limited events caused by actors; for example, meetings, accidents, and hostile activity.
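Because Table 1 is essentially a five-by-four grid of named information categories, it can also be rendered as a machine-readable structure, which is convenient when, as in the studies below, survey answers are tagged against the model. The Python rendering that follows is only our sketch; the category names come from the table, everything else is assumed.

# Rows run from the most abstract, future-oriented information (row 0)
# to fast-updating, concrete event information (row 4), as in Table 1.
CATEGORIZATION_MODEL = [
    {"values": "basic assumptions", "internal_facts": "mission, vision",
     "conclusions": "decision", "external_facts": "task"},
    {"values": "socially true values", "internal_facts": "means",
     "conclusions": "alternatives to act", "external_facts": "foreseen end states"},
    {"values": "physically true values", "internal_facts": "resources",
     "conclusions": "possibilities to act", "external_facts": "anticipated futures"},
    {"values": "social artifacts", "internal_facts": "action patterns",
     "conclusions": "restrictions", "external_facts": "environment"},
    {"values": "physical artifacts", "internal_facts": "features",
     "conclusions": "event model", "external_facts": "events"},
]

def cell(row, column):
    """Look up one category, e.g. cell(4, "external_facts") -> "events"."""
    return CATEGORIZATION_MODEL[row][column]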
The next two layers contain information on the resources and means of an actor, combined with futures expectations of the overall system, including stakeholders. These input facts, as well as information about events and the environment, and knowledge about the composition and development of the situation and its possible end-states, are used as the basis. The possibilities to act and information about alternative ways to operate are refined. The chain of deduction can be continued until the ultimate decision-making layer is reached. There, all output information from the lower layers shall be available. The conclusions of a neighboring layer are relatively more meaningful than information on the other layers. The whole spectrum of cultural information shall be available to the decision-maker. The decision-maker must be able to know the action patterns, anticipate the change of the situation, foresee the end-state of the action, and deeply understand the meaning of the mission as a part of the bigger continuum of action.

INFORMATION AVAILABILITY REQUIREMENTS AND SHARING PRINCIPLES IN PRACTICE

We present four different information sharing cases to demonstrate the divergence of the requirements of acting in an emergent networked environment. Those cases lead the reader into the world of not only the complexity of networked structures, but also the complexity of using and producing information, and into those challenges that are faced in a dynamic acting environment. Understanding this manifold complexity is important for two reasons. First, it gives some ideas for understanding how to perform security activities that ensure information availability, integrity, and confidentiality. Second, it gives hints for understanding those challenges that can be faced when a security culture is being created in an emergent environment. The four cases are: starting a new activity; building up a network; moving from a normal “steady-state” situation to a situation where fast decision-making is required; and guidance by values. The research targets have been government and agency level organizations, including the military, as well as state and provincial search and rescue organizations. The results have revealed challenges concerning information exchange practices and organizational aspects. This shows the complexity of the challenges that are faced when implementing an information security culture in a multinational, multi-actor, dynamic networked working environment.

The cases are presented rather briefly, because for the purposes of this study it is not relevant to describe them in detail. The most interesting conclusions are described here to show how different the information sharing and exploiting world is when dealing with different kinds of situations. Those conclusions are then analyzed using the models of organization dynamics and information categorization presented earlier.

Thirty crisis management specialists were asked what issues they had found challenging when planning and beginning practical cooperation in the field (SHIFT WS#1, 2006). The specialists were from different countries and represented both governmental and non-governmental organizations. The people who answered the questions represented practical experience of planning and executing operations in a multi-actor environment. The idea of this very brief survey was to find out the most important structure-, activity-, and information-related challenges that those actors had faced during their operations.

The answers were categorized into three major classes: “structural items,” “activity items,” and “information items.” Information items were, further on, categorized on the basis of the model described earlier. Structural challenges focused on the nature of the working environment, the (self-)organizing of the actors in the field, information access and sharing structures, as well as user-friendly technological support. Activity challenges focused on finding the way to discuss—to share the needed information to build up the overall structure to act in a proper way.
Challenges of activity seem to concentrate on information management in a complex and emerging structure, in a rather divergently acting network of various actors.

Challenges concerning information sharing concentrated mainly in five classes:

• Socially true values
• Social artifacts
• Action patterns
• Features
• Environment

To some degree, challenges were also faced in the areas of:

• Physically true values
• Mission and vision
• Means
• Resources
• Possibilities to act
• Event model
• Foreseen end-states
• Events

It can be seen that most of the information content expressions are situated in five categories that represent socially oriented values, the feature phenomena of all actors, and environmental facts. This tells us that actors are interested in information that is not necessarily dependent on them, but that is essential to know in order to be able to work successfully in the field. They have experienced that the phenomena of the working environment and the phenomena of other actors are essential to know. This means that cultural information is necessary. Further on, to “elicit cultural competence” requires a lot of discussion at the personal level with all those actors that are involved in the common activity. At the departing phase of an operation, information about the working environment and the working partners or other actors in the field is rather essential for finding the optimal way to deploy one’s own activity.

To find out how organizations start their information exchange when they are reorganizing their cooperative relationships in suddenly changing situations, we studied the information exchange practices in a search and rescue exercise (SAR, 2007). Several organizations and about 100 personnel from rescue, medical, law, and other authorities, as well as volunteers, were involved in the exercise. We surveyed 30 people from that network of actors. People were asked what information they want to have from their cooperative counterparts, what information they are willing to share, and what information they want to have more of. We analyzed the content of the answers by using the information categorization model described earlier. The main conclusions were as follows:

1. Information sharing situations are complex by nature.
2. Information about the past (what has taken place and how those events have affected activity), the present day (events), and the future (intentions) is relevant. This relevance differs depending on the information user’s viewpoint.
3. Information content relevance depends on the activity that an actor is performing. Content interests are very divergent.
4. Depending on the role of an actor, the interest in information varies quite a lot. Role is here understood as, for example, situation awareness, analyzing the meaning of information content, planning of the operation, and decision-making. Referring to the information exchange categorization model, the following features exist:

a. The situation awareness role concentrates on events, event models, environments, restrictions, and partly tasks and decisions.
b. Those who analyze the basic information for planning purposes focus their information gathering interests on events, the environment, resources, as well as the features of actors, action patterns, and anticipated futures.
c. Planners concentrate on resources, anticipated futures, foreseen end-states and possibilities, and alternatives to act.
d. Decision-makers focus their interest on means, tasks, foreseen end-states, mission, vision, possibilities to act, and decisions.
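Points (a) through (d) amount to a mapping from roles to categories of the model, which can be restated directly in code. The sketch below is merely our shorthand for the list above; the shared_interests helper is an assumed extension showing one way such a mapping could be put to work.

# Information categories of interest per role (restating points a-d).
ROLE_INTERESTS = {
    "situation awareness": ["events", "event models", "environment",
                            "restrictions", "tasks", "decisions"],
    "analysis": ["events", "environment", "resources", "features",
                 "action patterns", "anticipated futures"],
    "planning": ["resources", "anticipated futures", "foreseen end-states",
                 "possibilities to act", "alternatives to act"],
    "decision-making": ["means", "tasks", "foreseen end-states", "mission",
                        "vision", "possibilities to act", "decisions"],
}

def shared_interests(role_a, role_b):
    """Categories both roles care about: where information sharing pays off."""
    return sorted(set(ROLE_INTERESTS[role_a]) & set(ROLE_INTERESTS[role_b]))

print(shared_interests("planning", "decision-making"))
# -> ['foreseen end-states', 'possibilities to act']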
Those actors who are on the management level are much more interested in futures information (anticipated futures and foreseen end-states) and action possibilities than those who perform the field activities at the operative level. Planners and decision-makers want to see into the future. Those actors who perform tasks at the operative level are much more interested in the decisions that concern them than are those who have made those decisions. Operative actors want to know what they are expected to do.

Information interest varies depending on whether the information shall be accurate and certain, or whether it shall be updated quickly enough according to the development speed of the situation. Accuracy of information is emphasized in the conclusions category and at the events-features end of the model. Especially, events information and decisions are required to be as accurate as possible. Updating speed is considered important at the level of combining the information on resources and means with the futures information about the development of the situation to create alternatives to act. Most important is to achieve updated information about tasks and a continuous ability to evaluate the requirements of the mission (see Kuusisto, Kuusisto, & Nissen, 2007).

Challenges in information sharing are focused on three items:

a. Willingness to share information in networks. It seems that about 20% of information exchange is directed to networking partners. The rest of the information exchange takes place inside one’s own organization structure.
b. There are differences between the information that actors are willing to share and that which they are willing to receive. In general, more information is wanted about features, action patterns, events, the environment, and anticipated futures. Also, information on resources, possibilities to act, and foreseen end-states is considered relevant to receive. Willingness to share information focuses on event models and decisions. This leads to the dilemma of wishes and wills: different information is wanted than is shared. Information sharing challenges focus on the categories of events, resources, means, event models, features, action patterns, and anticipated futures.
c. The organizing of actors does not basically support networked information sharing. Actors tend to keep to the organization structures in which they are used to acting over the long term. Willingness to organize ad hoc, to form task-based organizations, or to self-organize is limited and takes time. Information sharing structures are trust-based, and they are developed over the long term.

It seems that building up a network is rather challenging. In spite of the fact that all the actors involved in this exercise knew each other somewhat well, a certain viscosity in forming new kinds of networks was observable. This means that in situations where actors are joining and departing the network, a certain amount of time will elapse before the networked actors understand each other in the new kind of network structure. Further on, if a certain structure produces certain activity, the evolving network produces different kinds of actions depending on what kinds of actors have joined the network. Where security culture is concerned, it reveals itself in a new form when the actors of the network change. Security culture evolves over time.

An empirical study about situation awareness in crisis management was conducted in governmental organizations in Finland in 2005.
A research report of that study is published in Kuusisto et al. (2007). This brief conclusive text is based on that report. The aim of the study was to collect information for improving interagency collaboration services and processes of crisis management. The study focused on the changes in situation awareness when moving from normal situations to disruptive situations and exceptional conditions. The method of the study was the semi-structured interview. Eleven people representing governmental authorities were interviewed. The interviewees were active actors in the area of domestic and international security, or tightly related to these actors.

The interviewees assessed the changes in the priority of information contents when moving from normal situations to disruptive situations and exceptional conditions. The interviewees were asked to select those information contents whose priority increases and those whose priority decreases in crisis situations.

Analysis of the material was completed and the following recommendations were sorted out. Both the forming of the basis for decision-making and decision-making itself require a wide understanding of large systems having a structure, activities, and information potential. When forming the basis for decision-making, this need is visible especially in task analysis. The forming of the basis for decision-making activities will be supported by futures study methods suitable for situations where immediate activities are needed. Information about resources is a prerequisite for future orientation. In addition, information and experience on features, action patterns, anticipated futures, foreseen end states, and mission and vision support the producing of information about futures. The capability needed for forming the basis for decision-making is the creation of new information and knowledge. This is different from the capabilities needed for decision-making, which are a combination of existing information and the willingness to make decisions.

As a conclusion, both the forming of the basis for decisions and decision-making itself have to be supported by the following information-based activities:

• Analyses of the development of the real-time situation
• Presentation of continually updated resource information
• Practices for informing about tasks immediately
• Processing of future scenarios—finding plausible development paths and foreseen end states
• Realistic analysis of tasks and forming of missions
• Forming, presenting, and analysis of alternatives to act
• Sharing of decisions

In conclusion, it can be stated that free information sharing, understanding the ongoing situation, and pro-activity are important when decision-makers are acting in rapidly changing situations.

OECD (2002) stresses a somewhat solid ethical and value-based basis for the implementation and development of security measures in organizations and states. The paper gives a good selection of values that are suggested to give guidance for organizations to promote long-lasting security development. The following values and value-based statements were found in the paper:

• Taking account of all network members’ interests
• Confidence among all networked actors
• Ethical values (develop and adopt best practices and promote conduct that recognizes security needs and respects the legitimate interests of others)
• Co-operation and information sharing (especially sharing information about threats and vulnerabilities)
• Personal privacy
• Freedom to exchange thoughts and ideas
• Free flow of information
• Confidentiality of information and communication
• Protection of personal information
• Openness and transparency
• Security is a fundamental element of all products, services, systems, and networks
• Security is an integral part of system design and architecture
• Forward-looking responses to emerging threats
• Seeing the evolution of risks

Those values are meant to give guidance to organizations to promote good practices in developing and implementing security policies, practices, measures, and procedures. The values that are described can be abstracted into four main categories:

• Understanding that it is a question of a comprehensive system where security is an integral part of that system
• Free information sharing concerning security issues
• Understanding that personal and organizational privacy and confidentiality requirements exist
• Pro-activity

It is interesting to see that these issues are rather like those that have been discovered in the three studies, described above, of using information in decision-making in different situations.

CONCLUDING REMARKS

Culture is a structure which exists to maintain patterns by means of the information called values. Values have effects on norms. Norms are information which determines the mutually understood policy for performing collaboration successfully. To change values, the norms must be accepted and internalised first.

Time shall be taken into account. Unified structures in complex environments will not arise suddenly. They need a certain amount of time to manifest themselves. The development of a culture always causes more or less change to personally understood values. The aim of forming a culture is to gain such a structure as can serve as a solid base for all activities. To be unified, the information gluing this structure together, that is, the values of individuals and organisations, shall be as close to each other as possible. The more divergent they are, the longer it will take to unify them.

Habermas (1984, 1989) argued that those who take part in interaction, for example, communication, should have at least one shared item of knowledge. This guarantees that they have the potential to construct their shared situation coherently. Shared knowledge is information by which models for creating mutual understanding can be formed. Without these models, the creation of understanding is not possible. A prerequisite is that people commit to believing in the models. This requires that information concerning the models is communicated.

Successful communication requires that the values, experiences, knowledge, and emotions of the people involved are shared. It is rather challenging to share knowledge about commonly agreed values and appreciations in multi-cultural networks. It seems obvious that during a short period of time it is impossible to create commonly understood values. Organizations must be able to create and communicate believable, attractive, and acceptable pictures of themselves over the long haul. By this image, communicated in advance, an organization can attract people to fulfill, or at least understand, the objectives which it appreciates, like the confidentiality, integrity, and availability of information and knowledge. This kind of communication needs lots of information about future expectations. So, an organization must be able to communicate its valuations in advance. This forms the basis for information security culture development.
The more future-oriented the communication is, the longer the communication process takes. The longer communication lasts, the more information it needs and the more that information is abstracted. Time-divergent communication contains communicating the organization’s future, current, and past activities (Figure 9). Communicating about the future is needed to create shared mental models about information security. It includes communicating about the organization’s image, valuations, values, and expectations in the long term. The aim of long-term communication is to have an effect on the way the other organizations in the business network approach information security. Communicating about current activities includes communicating about technical and managerial level information security activities, such as reflections on the implementation of information security policy. Communicating about past achievements includes making the information security policy, information security process descriptions, or information security audit results available to partner organizations.

Figure 9. Time-divergent communication for information security culture development (the idea in Helokunnas & Kuusisto, 2003a; see also applications in Ahvenainen, Helokunnas, & Kuusisto, 2003 and Helokunnas & Kuusisto, 2003b). Measurable results of activities concern information security policy, process descriptions, and audit results. Managing activities and objectives concerns the implementation of information security policy, and the vision, of course, points towards the information security culture.

Research and experience have proven that the availability of information about the situation, competence, actors’ features, futures development, and decisions is relevant. Anyhow, the imbalance between released and required information is considerable. People are willing to release different kinds of information than they wish to receive from others.

The traditional organizing of actors does not basically support networked information sharing. In practical situations, actors tend to keep to the organization structures in which they are used to acting over the long term. Willingness to organize ad hoc, to form task-based organizations, or to self-organize is limited and takes time. Information sharing structures are trust-based, and they are developed over the long term. Lessons learned emphasize that information exchange between organizations is limited and that creating trusted information sharing processes is time consuming.
Functioning information sharing procedures cannot be developed during operative activity. They shall exist beforehand, at least to some degree.

Culture evolves. It shows itself in different ways to different actors in a network. Actors come into and depart from the common network. Each structural change of the network will change the information content of the network, as well. Cultural changes cannot be made within a short period. Forming an understandably united security culture is possible, but it presupposes at least either a long period of time to communicate the desired values, or the possibility to exploit an existing unity of values.

Networking is an obvious future trend. Networking is here understood as forming various ad hoc organizations to deal with some special case. These cases can be, for example, businesses, international politics, and hobbies. Networks include several perspectives and viewpoints, because every network member has its own way to act and interact. Different information is required in different phases of inter-working in networks. A new member offers and requires different information than one who has acted in the network for a longer period.

It seems that a unified security culture, or even the same kind of orientation to security culture, is somewhat impossible to achieve in evolving networks. Security practices can be improved in two ways. First, the basic principles of how an organization deals with information security issues shall be communicated long-term. This tells the other network members the orientation of an organization to security issues and makes its behavior more understandable. Second, some basic values shall guide the behavior of all organizations or actors that are working together in the same network. Rather good candidates for those values might be the four that were found in the OECD recommendations. These are supported by practical observations of various decision-making situations:

• Understanding that it is a question of a comprehensive system where security is an integral part of that system
• Free information sharing concerning security issues
• Understanding that personal and organizational privacy and confidentiality requirements exist
• Pro-activity

Culture is an evolving informational system. To be able to work successfully in emerging networks, the structure and nature of those networks shall be understood. So, to develop good security practices, the systemic nature of the world shall be studied.

REFERENCES

Aaltonen, M., & Wilenius, M. (2002). Osaamisen ennakointi. Helsinki, Finland: Edita Prima Oy.

Ahvenainen, S., Helokunnas, T., & Kuusisto, R. (2003). Acquiring information superiority by time-divergent communication. In B. Hutchinson (Ed.), Proceedings of the 2nd European Conference on Information Warfare and Security (pp. 1-9). Reading, UK: MCIL.

Applehans, W., Globe, A., & Laugero, G. (1999). Managing knowledge. Boston: Addison-Wesley.

Awad, E., & Ghaziri, H. (2004). Knowledge management. Upper Saddle River, NJ: Prentice Hall.

Baskerville, R., & Siponen, M. (2002). An information security meta-policy for emergent organizations. Journal of Logistics Information Management, 15(5/6), 337-346.

Bell, W. (1998). Foundations of futures studies, vol. II: Values, objectivity, and the good society. New Brunswick, London: Transaction Publishers.

Bergson, H. (1911). Creative evolution. Lanham, MD: Henry Holt and Company, University Press of America.

Castells, M. (1996). The information age: Economy, society and culture: Volume I, The rise of the network society. Padstow, Cornwall: T.J. International Limited.

Checkland, P., & Holwell, S. (1998). Information, systems and information systems—making sense of the field. Chichester: John Wiley & Sons Ltd.

Checkland, P., & Scholes, J. (2000). Soft systems methodology in action. Chichester: John Wiley & Sons Ltd.

Chia, P.A., Ruighaver, A.B., & Maynard, S.B. (2002). Understanding organizational security culture. In Proceedings of PACIS2002, Japan. Retrieved February 20, 2007, from http://www.dis.unimelb.edu.au/staff/sean/research/ChiaCultureChapter.pdf

Detert, J. R., Schroeder, R. G., & Mauriel, J. (2000). A framework for linking culture and improvement initiatives in organisations. The Academy of Management Review, 25(4), 850-863.

Dhillon, G. (1997). Managing information system security. Chippenham, Wiltshire, GB: Anthony Rowe Ltd.

Finnish Government Resolution. (2004). Strategy for securing the functions vital to society. Helsinki, Finland: Edita Prima Oy.

Fisher, C., & Lovell, A. (2003). Business ethics and values. Harlow: Prentice Hall.

Habermas, J. (1984). The theory of communicative action, volume 1: Reason and the rationalization of society. Boston: Beacon Press.

Habermas, J. (1989). The theory of communicative action, volume 2: Lifeworld and system: A critique of functionalist reason. Boston: Beacon Press.

Helokunnas, T., & Kuusisto, R. (2003a). Strengthening leading situations via time-divergent communication conducted in Ba. The E-Business Review, 3(1), 78-81.

Helokunnas, T., & Kuusisto, R. (2003b). Information security culture in a value net. In Proceedings of the 2003 IEEE International Engineering Management Conference (pp. 190-194). Albany, NY, USA.

Hofstede, G. (1984). Culture’s consequences: International differences in work-related values. Beverly Hills, London, New Delhi: Sage Publications.

Kuusisto, R. (2004). Aspects on availability. Helsinki, Finland: Edita Prima Oy.

Kuusisto, R. (2006). Flowing of information in decision systems. In Proceedings of the 39th Hawaii International Conference on System Sciences (abstract on p. 148, paper published in electronic form). Kauai, HI: University of Hawai’i at Manoa.

Kuusisto, R., Nyberg, K., & Virtanen, T. (2004). Unite security culture—may a unified security culture be plausible. In A. Jones (Ed.), Proceedings of the 3rd European Conference on Information Warfare and Security (pp. 221-230). London: Academic Conferences Limited.

Kuusisto, T., Kuusisto, R., & Nissen, M. (2007). Implications of information flow priorities for interorganizational crisis management. In L. Armistead (Ed.), Proceedings of the 2nd International Conference on I-Warfare and Security (pp. 133-140). Monterey, CA: Naval Postgraduate School.
Luhmann, N. (1990). Ökologische Kommunikation (3. Auflage). Opladen/Wiesbaden: Westdeutscher Verlag.

Maier, R. (2002). Knowledge management systems: Information and communication technologies for knowledge management. Berlin, Heidelberg, New York: Springer-Verlag.

Malaska, P., & Holstius, K. (1999). Visionary management. Foresight, 1(4), 353-361.

Martins, A., & Eloff, J. (2002). Information security culture. In Proceedings of IFIP TC11 17th International Conference on Information Security (pp. 203-214). Cairo, Egypt: IFIP Conference Proceedings 214.

Merleau-Ponty, M. (1968). The visible and the invisible. Evanston, IL: Northwestern University Press.

Niiniluoto, I. (1997). Informaatio, tieto ja yhteiskunta: Filosofinen käsiteanalyysi. Helsinki, Finland: Edita.

Nosworthy, J. (2000). Implementing information security in the 21st century—do you have the balancing factors? Computers and Security, 19(4), 337-347.

OECD. (2002). OECD guidelines for the security of information systems and networks: Towards a culture of security. Adopted as a recommendation of the OECD Council at its 1037th session on July 25, 2002. Retrieved April 11, 2007, from http://www.oecd.org/dataoecd/16/22/15582260.pdf

SAR. (2007). A survey of sharing information in a search and rescue exercise. A co-operative exercise in which rescue, law, and medical organizations and non-governmental organizations rehearsed together on a case of an airliner accident at Helsinki airport on January 25, 2007 (research report not published).

Schein, E. H. (1980). Organizational psychology (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Schein, E. H. (1992). Organizational culture and leadership (2nd ed.). San Francisco: Jossey-Bass.

Schlienger, T., & Teufel, S. (2002). Information security culture: The socio-cultural dimension in information security management. In Proceedings of IFIP TC11 17th International Conference on Information Security (pp. 191-202). Cairo, Egypt: IFIP Conference Proceedings 214.

Schneider, S., & Barsoux, J-L. (1997). Managing across cultures. London: Prentice Hall.

SHIFT WS#1. (2006). Group survey. Conducted in a workshop of a project that deals with information sharing in a networked crisis management environment (SHIFT = Shared Information Framework and Technology) on November 13-16, 2006 (research report not published).

Straub, D., Loch, K., Evaristo, R., Karahanna, E., & Strite, M. (2002). Toward a theory-based measurement of culture. Journal of Global Information Management, 10(1), 13-23.

Sveiby, K-E. (2001). A knowledge-based theory of the firm to guide strategy formulation. Retrieved February 15, 2003, from http://www.sveiby.com/articles/Knowledgetheoryoffirm.htm

Thierauf, R. (2001). Effective business intelligence systems. London: Quorum Books.

Von Solms, B. (2000). Information security—the third wave? Computers and Security, 19(7), 615-620.

Waltz, E. (1998). Information warfare: Principles and operations. Boston & London: Artech House.

Whitman, M. E., & Mattord, H. J. (2003). Principles of information security. Boston: Thomson Course Technology.

Zakaria, O., & Gani, A. (2003). A conceptual checklist of information security culture. In B. Hutchinson (Ed.), Proceedings of the 2nd European Conference on Information Warfare and Security (pp. 365-372). Reading, UK: MCIL.

Zakaria, O., Jarupunphol, P., & Gani, A. (2003). Paradigm mapping for information security culture approach. In J. Slay (Ed.), Proceedings of the 4th Australian Conference on Information Warfare and IT Security (pp. 417-426). Adelaide, Australia: University of South Australia.


Chapter VII
Social Aspects of
Information Security:
An International Perspective

Paul Drake
Centre for Systems Studies Business School, University of Hull, UK

Steve Clarke
Centre for Systems Studies Business School, University of Hull, UK

Abstract

This chapter looks at information security as a primarily technological domain, and asks what could be
added to our understanding if both technology and human activity were seen to be of equal importance.
The aim is, therefore, to ground the domain both theoretically and practically from a technological and
social standpoint. The solution to this dilemma is seen to be located in social theory, various aspects of
which deal with both human and technical issues, but do so from the perspective of those involved in
the system of concern. The chapter concludes by offering a model for evaluating information security
from a social theoretical perspective, and guidelines for implementing the findings.

Introduction

Within this chapter, we first look at the dominant approach to information security (ISec), establishing it as a domain in which technological factors predominate and insufficient consideration is given to human issues. Building on this foundation, a picture is presented of the complexity of ISec, from which it is argued that the practice ought to pay more attention to the ways in which differing perceptions might give rise to a different ISec practice.

The tensions in ISec are presented as occurring between theory and practice on the one hand, and the social and the technological on the other. From this position, the question posed becomes: “How can we build an ISec practice which is grounded theoretically, and which addresses both technological and social issues?”
The source of a solution to this dilemma may be found in social theory. Various aspects of social theory deal with both human and technical issues, but do so from the perspective of those involved in the system of concern. Our approach, therefore, has been to build models to evaluate and implement ISec, both based explicitly on theories of social action.

Background to Information Security

From a Technological to a Human-Centred Perspective

Currently, the practice of information security (ISec) aims primarily to protect information and to ensure it is available to those authorised to access it. This approach is emphasized by the well-established definition of information security to be found in the U.S. Department of Defense “Orange Book” (DOD, 1985):

In general, secure systems will control, through use of specific security features, access to information such that only properly authorised individuals, or processes operating on their behalf, will have access to read, write, create, or delete information.

Within the United Kingdom, a similar perspective on ISec can be seen in UK government publications, for example those of the Communications-Electronics Security Group1 (CESG, 1994) and The British Standard for Information Security Management (ISO, 2000; BSI, 2003), and in the documentation and practice within a large number of organisations who have adopted information security practices. In all of these cases, the primary concern is to protect the confidentiality and integrity of information and to restrict its availability: the so-called “CIA” of ISec.

So, this is ISec practice—but where has this practice come from? A brief look at the development history of the British Standard, outlined below, gives an indication of this in the UK.

The sources of the Standard (BS7799) are traceable to the 1990s, when a group of security professionals formed a committee under the auspices of the British Standards Institute, and with the support of the UK government’s Department of Trade and Industry, to document current “best information security practice” based on the current experience, knowledge, and practice of those contributing. The product of this effort was the Code of Practice for Information Security Management (BSI, 1993). The committee continued to work towards maintaining and improving the code of practice, and today it has developed into the British Standards for Information Security (ISO, 2000; BSI, 2002). The same committee continues to maintain and revise this Standard. During the various iterations, Part 1 of the Standard has been accepted by the International Organization for Standardization, commonly known as ISO, as an international standard, ISO-17799.

Part 1 of the Standard (ISO, 2000) is a code of practice which contains around 130 controls to be considered and implemented. Part 2 (BSI, 2003) contains the same number of controls but specifies their use and is therefore auditable. Both parts of the Standard provide guidance for the development and implementation of a risk-based management system that allows the continued assessment and management of risks. This is delivered through an information security management system (ISMS) that incorporates a cycle which, in essence, compiles a list of the 130 controls and determines whether the absence or inadequate implementation of these controls is likely to harm the organisation and, if so, by how much.
99
Social Aspects of Information Security
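To make the mechanics of that cycle concrete, the following minimal sketch walks a list of controls, scores the harm associated with absent or inadequate implementation, and flags where treatment is needed. It is our illustration only: the control names, the 1-5 likelihood/impact scales, and the treatment threshold are invented for the example and are not drawn from the Standard.

```python
# Illustrative sketch of a BS7799-style assess-and-treat cycle.
# Control names, scoring scales and the threshold are invented;
# the Standard itself defines the actual (~130) controls.

from dataclasses import dataclass

@dataclass
class Control:
    name: str
    implemented: bool   # is the control in place at all?
    adequacy: float     # 0.0 (ineffective) .. 1.0 (fully effective)
    likelihood: int     # chance the related threat occurs, 1..5
    impact: int         # harm to the organisation if it does, 1..5

def residual_risk(c: Control) -> float:
    """Exposure left over after allowing for how well the control works."""
    exposure = c.likelihood * c.impact            # raw exposure, 1..25
    mitigation = c.adequacy if c.implemented else 0.0
    return exposure * (1.0 - mitigation)

def review(controls, threshold=8.0):
    """One pass of the cycle: report controls whose absence or
    inadequate implementation is likely to harm the organisation."""
    for c in sorted(controls, key=residual_risk, reverse=True):
        risk = residual_risk(c)
        action = "treat" if risk >= threshold else "accept"
        print(f"{c.name:<30} residual risk {risk:5.1f} -> {action}")

review([
    Control("information security policy", False, 0.0, 3, 5),
    Control("anti-virus software", True, 0.9, 4, 4),
    Control("access control review", True, 0.4, 3, 4),
])
```

Certification audits of the kind described above in effect ask whether such a review has been performed and whether the resulting treatment decisions are defensible.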

Table 1. Analysis of information security literature in the current domain

| Reference | Title | Approach | Category/Bias |
| Baskerville (1988) | Designing information system security | Checklist/standard-based approach to securing systems | Operational/Technical |
| Cooper (1989) | Computer and communications security: Strategies for the 1990s | Analytical and strategic tools to understand security issues and implement an effective security programme | Operational/Technical |
| Peltier (2001) | Information security risk analysis | Basics of risk management including breakdown of threats and mitigation techniques | Risk management |
| Russell & Gangemi (1991) | Computer security basics | Fundamental principles and concepts of information security | Operational/Technical |
| Langford (1995) | Practical computer ethics | Maps out ethical problems of computer use and strategies for dealing with them | Pseudo-humanistic |
| Gollman (1999) | Computer security | Comprehensive review of security technologies together with some interesting explorations of the meaning of a secure system—for example, whether controls should focus on data, operations, or users | Operational/Technical |
| Warman (1993) | Computer security within organisations | Discussion of computer security from an organisational and management perspective, including recognition that managing the security of people is just as important as managing the security of technology | Business/Organisational |
| Forrester & Morrison (1994) | Computer ethics | Exploration of ethical issues surrounding hacking, writing viruses, artificial intelligence, and data protection | Pseudo-humanistic |
| Neumann (1995) | Computer related risks | Comprehensive review of computer failures, why they occur, and what can be done to avoid recurrences | Risk management |
| Wylder (2003) | Strategic information security | Guidance on integrating information security requirements with the business goals of the organisation to ensure the success of the security practice | Business/Organisational |
| Killmeyer-Tudor (2000) | Information security architecture: An integrated approach to security in the organisation | Guidance on setting up an information security practice including technical controls and strategic business alignment | Business/Organisational |
| Birch (1997) | The certificate business: Public key infrastructure will be big business | Suggests forward-looking technological solutions for current information security problems, for example, ID cards to secure transactions | Operational/Technical |
| Baum (1997) | The ABA digital signature guidelines | American Bar Association guidelines on securing transactions | Operational/Technical |
| McCauley (1997) | Legal ethics and the Internet: A U.S. perspective | Exploration of ethical issues surrounding publication of information on the Internet, especially concerning publication of personal information and the lack of controls around publication | Pseudo-humanistic |
| Leng (1997) | Internet regulation in Singapore | Controlling access to "objectionable" content and the ethical questions of, for example, censorship, that arise | Pseudo-humanistic |
| Longbough (1996) | Internet security and insecurity | Technical "how to" guide for computer hackers | Operational/Technical |
| Northcutt et al. (2002) | Inside network perimeter security: The definitive guide to firewalls, VPNs, routers, and intrusion detection systems | Technical controls required to secure an organisation's internal computer network against external threats | Operational/Technical |
| Kuong (1996) | Client server controls, security and audit (enterprise protection, control, audit, security, risk management and business continuity) | Technical controls required to secure an organisation's internal computer network, and attached systems, against external or internal attack | Operational/Technical |
| Wright (1993) | Computer security in large corporations: Attitudes and practices of CEOs | Highlights the dangers of insufficient focus on information security within organisations | Business/Organisational |

So, BS7799 has grown out of ISec practice to become, arguably, the measure by which all UK information security is judged. But are there any other reasons for believing that we can categorise a particular approach to ISec? During the course of this study, we researched the relevant literature extensively, and the result of this is summarised in Table 1, which classifies nineteen key ISec texts, and Figure 1, which is an analysis of a wide representative sample of such texts.

Figure 1 represents further analysis of the literature within the information security domain. This is the result of a search of a sample of currently available information security literature which is considered representative on the basis of a quantitative literature review, which is broadly

in line with the expectations of both the author and other key information security practitioners with whom this has been discussed.

The pattern which has emerged is of a domain that includes some broad risk-based theory, but is technically biased, with only cursory reference to business and human-centred domains. How successful these human-centred excursions have been is unclear as they typically focus (see Table 1) on what may be regarded as superficial ethical issues such as data privacy, and how to get people to accept the required information security practice. Business-aligned literature tends towards the attainment of resources to maintain the security practice rather than a true attempt to align security with the business objectives, or, in a wider sense, the needs of the people within the organisation. In terms of quantity of literature, there is a massive bias towards technical and operational controls within the domain. In fairness, as technology continues to evolve, there is an almost continual need to update the technical security literature base to keep pace. However, this does not adequately explain the paucity of social literature within what is essentially a domain that radically affects people and how they are expected to behave.

Looking back again from this perspective to the British Standard, its very presence seems to be one of the problems with the information security domain as it stands today. British Standards generally can be shown to be useful tools in differentiating between one product and another. Many assume that the presence of the BSI kite mark on a child's safety harness, for example, will mean that the target product is more reliable than one that does not have a kite mark. It is assumed that certain criteria are observed in the manufacture of certified products and that their quality and reliability is tested using established industry practices. Few consumers probably know the detailed tests, tolerances, and manufacturing practices that are used to gain and maintain certification. Most probably trust that the product will be "better" and perhaps even as good as it is possible to get as a result of certification. However, it is unclear how these certification principles that seem to work so well with products can be satisfactorily applied to services, processes, and controls such as those embodied in BS7799 to achieve a better result than one which does not meet the certification requirements. Other British standards appear to exhibit the same problem. For example, the standard for quality management (ISO, 2000), the

Figure 1. Spread of information security categories within current domain (categories shown: Risk Management, Pseudo-Humanistic, Business/Organisational, Operational/Technical)

standard for environmental management (ISO, 2000), and the standard for service management (BSI, 2002; BSI, 2003) all suffer, it is argued, by trying to bring standardisation to a domain where standardisation does not readily fit. The key problem here is that, where there is human free will present in a domain, the opportunities to truly remove variability through standardisation are constrained by the unpredictable nature of human behaviour.

In the case of the information security standard, as well as the examples cited, the response from the standard and from ISec practice is to further constrain human behaviour within a rule-based framework of technical controls. This seems perilous in a domain where social issues would appear to be significant, if not dominant. To ignore social issues within the information security domain is to imply that the security of an information system is not changed if all of the people involved in it, or affected by it, are changed. This implication seems insupportable given that two of the three guiding principles of current information security practice are that information should be accessed only by people who are authorised to access it, and that steps should be taken to ensure information is available when authorised people need it. This theme of social issues within the information security domain is THE central issue of this study.

Issues and Problems

Towards a Determination of the Success of Current Practice

Before moving on to addressing the lack of consideration of human issues in ISec, there was one further task we wanted to complete in the study. If ISec's technical focus is insufficient to address the problems of the IS domain, this ought to be reflected in the success or failure of that practice.

There is good quality quantitative data available to demonstrate trends in information security and the use of BS7799. Every 2 years the DTI sponsors an Information Security Breaches Survey (DTI, 2000; DTI, 2002; DTI, 2004). Table 2 summarises the results pertinent to information security generally and BS7799 in particular. The table provides an indication of the seriousness with which information security in general, and BS7799 in particular, appears to be taken. An information security policy is considered a fundamental and first step towards current information security practice. This can be contrasted with the importance that senior managers say they place on information security and the effects of not paying sufficient attention in this domain. The industry sectors shown are the largest represented in the sample. Further detailed analysis of the results appears below the table.

It can be seen that, in spite of significant increases in the number of UK businesses that have suffered security breaches (24% in 2000, 44% in 2002, almost doubling to 78% in 2004), the number of organisations with a security policy remains low, at a third in 2004. Remarkably, amongst those responsible for information security, only 12% were aware of the contents of BS7799 in the 2004 survey. This shows a slight drop from the 2002 survey! The number of businesses who have implemented BS7799 also remains a disappointing 5%. However, the number of organisations who have installed anti-virus software has significantly increased to 93%. Other areas showing significant improvements include the perceived importance of information security and the number of organisations that have carried out a risk assessment.

Clearly, the disappointing awareness and use of the standard is not matched by the perceived importance of information security, the need for risk assessment, and the value in investing in key protective measures such as anti-virus software. So what is causing this lack of interest in the standard? The 2002 and 2004 surveys both cited the top reason given by respondents—the cost of purchasing the standard.

Table 2. Information security breaches surveys

| Survey finding | 2000 Survey | 2002 Survey | 2004 Survey |
| Number of UK businesses with a documented security policy | 14% | 27% | 33% |
| Number of those responsible for IT security that are aware of contents of BS7799 | No data | 15% | 12% |
| Number of UK businesses that have BS7799 implemented | No data | 5% | 5% |
| Amount of IT budget spent on (information) security | No data | 2% | 3% |
| Number of UK businesses that believe information security is a high priority for senior management | 53% | 73% | 75% |
| Number of UK businesses that have suffered at least one malicious security breach in the past year | 24% | 44% | 78% |
| Number of UK businesses that have implemented anti-virus software | No data | 83% | 93% |
| Number of UK businesses that have carried out a detailed risk assessment of their IT systems and the threats to them | 37% | 66% | No data |
| Number of staff employed by respondents: a) 1-49, b) 50-249, c) 250+ | No data | a) 51%, b) 29%, c) 20% | a) 51.9%, b) 31%, c) 17.1% |
| Sample size | 1000 | 1000 | 1000 |
| % of respondents in manufacturing sector | No data | 24% | 24% |
| % in retail & distribution | No data | 15% | 15% |
| % in technology | No data | 14% | 14% |
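The movement in these figures is easy to re-tabulate. The sketch below is simply our transcription of a few of the Table 2 series (None marks years for which the survey reported no data), printing the 2002-to-2004 change in percentage points.

```python
# Selected series transcribed from Table 2 (DTI breaches surveys);
# values are percentages of UK businesses, None = no data reported.

series = {
    "suffered a malicious breach":     {2000: 24, 2002: 44, 2004: 78},
    "documented security policy":      {2000: 14, 2002: 27, 2004: 33},
    "aware of contents of BS7799":     {2000: None, 2002: 15, 2004: 12},
    "implemented BS7799":              {2000: None, 2002: 5, 2004: 5},
    "implemented anti-virus software": {2000: None, 2002: 83, 2004: 93},
}

for name, by_year in series.items():
    a, b = by_year[2002], by_year[2004]
    change = f"{b - a:+d} points" if a is not None and b is not None else "n/a"
    print(f"{name:<34} 2002: {a:>3}%  2004: {b:>3}%  change: {change}")
```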

When you consider that the cost of ISO17799 is currently £94 and the cost of BS7799 is currently £56, that is a little surprising. You can also buy them both for a discounted £110. The second most common reason cited is that the standard is seen to be relevant only to large organisations. If that is the case, then the uptake in large organisations might be expected to be significant. The proportion of people responsible for information security in large organisations was 42% in the 2002 survey and dropped back to just over a third in the 2004 survey.

A further problem is that, for information security, there is no measurement within current practice in terms of whether an implementation of information security practice has been successful. Whilst the practice will often include quantitative measurement (for example, the number of controls implemented, how long it took to implement them, and how much it cost), there is very little qualitative measurement, such as how successful the implementation was in reducing risk. There are also means whereby the presence or absence of controls can be audited and certified. It can be determined whether security failures have become greater or reduced following implementation of controls.

However, this does not seem a very acceptable means of measuring success, as any reduction could be due to some unknown factor outside of the scope of the practice. Consequently, there seems to be no effective means of determining

whether a system is fully secure or even more secure as a result of deploying information security controls. Nor does there seem to be any way of establishing how much security is enough to afford the protection that an organisation may desire to have. Information security and the supporting British Standard represent best available practice in ensuring the confidentiality of sensitive information, and the integrity and availability of important business information. With such high claims, one would expect BS7799 to have received great attention and to be an essential tool in the armoury of any successful organisation.

Information Assurance: A Survey

The purpose of this section is to further validate the outcomes of the analysis of the information security breaches surveys presented in the previous section.

The Information Assurance Advisory Council (IAAC) surveys corporate leaders, public policy makers, law enforcement, and the research community to address the challenges of information infrastructure protection. They are engaged in the development of policy recommendations to government and corporate leaders. IAAC recommendations tend to be influential because their sponsors and members comprise leading commercial end-users, government policy makers, and the research community. IAAC's stated aim is to work for the creation of a safe and secure information society.

In October 2002 the IAAC published a survey (Modhvadia, Daman, et al., 2002) which explored the concept of information assurance, contrasted it with information security, and surveyed organisations' awareness of the British Standard for information security (ISO, 2000; BSI, 2002). The survey was undertaken only amongst IAAC members. A total of 58 surveys were distributed; all were followed up by telephone. Full responses in writing and in telephone interviews were received from 16 members (31% response rate). Approximately half the organisations surveyed were large, a quarter medium-sized, and a quarter small. The survey contains no further information on the meaning of large, medium, and small organisations; the organisations were simply asked to categorise themselves. The following sectors were represented: Finance, 8%; Government, 15%; Retail, 4%; Risk Management, 15%; IT, 23%; Telecoms, 12%; Utilities, 12%; Manufacturing, 7%; and Legal, 4%.

The study provides a comparison between the terms "information security" and "information assurance." The study suggests that the term information security often leads to an over-emphasis on confidentiality whilst missing other aspects of the problem, for instance: integrity, accessibility, and reliability. Moreover, use of the term security and an emphasis on IT often mean that this type of risk is too easily seen as a low-level and niche activity, which falls outside the interests of senior management and the board of directors.

It is clear that those surveyed do not see BS7799 as being sufficient to cover all business requirements; the lack of other standards is leading many organisations to develop in-house, bespoke standards and processes. Nearly two-thirds of respondents found that BS7799 does not go far enough in protecting information systems, and there is a clear demand for further standards and clearer guidance.

From detailed responses to the survey, it was found that the companies who thought there was no better alternative did so not because of the merits of BS7799 but because of the lack of alternatives. Modhvadia, Daman, et al. (2002) propose a recasting of information security as "information assurance," with less emphasis on confidentiality and more on other aspects such as integrity, availability, and reliability. However, they still strongly propose a controls-biased approach driven by the assessment of risk.

This exploration of current practice has surfaced a surprising willingness in the literature and in the practice of organisations to accept information security as a concept even though the tools available to implement it are often considered lacking.

Through a detailed examination of the British Standard (together with a review of ISO and the DOD Orange Book) and an extensive review of available information security literature, a model has emerged which clearly shows a domain which is dominated by a set of practical controls which are seen as rigid, unclear, and largely irrelevant to the business needs of most organisations. This view is largely supported by the findings of the two surveys (outlined previously).

What has become clear, even within some recent developments that have sought to provide a more accessible model for managing information, such as information assurance, is that all current practice is centric around the needs of the technology and of information rather than the needs of people in general and users in particular. Where human issues are explored in this domain, it is to confer responsibilities and education on people to conform to the needs of the system and to regulate their behaviour.

What emerges, then, is a domain which is to all intents and purposes technological, as proven by the absence of sufficient consideration for human issues, and a domain dominated by pragmatism, as demonstrated by the way in which the principal models in the domain were constructed and are maintained. That is, constructed through the collation of the practical experiences of practitioners, and maintained through practical experiences of practitioners, by reference to surveys, and in response to user groups and new regulatory frameworks.

Moving from the Standard to current information security literature, a similar pattern emerges of a domain that includes some broad risk-based theory and becomes progressively more specific and technically biased over time, although with some excursions into business and human-centred domains. How successful these human-centred excursions have been is unclear, as they typically focus on superficial ethical issues such as data privacy, and how to get people to accept the required information security practice. Business-aligned literature tends towards the attainment of resources to maintain the security practice rather than a true attempt to align security with business objectives. In terms of quantity of literature, there is a massive bias towards technical and operational controls within the domain. In fairness, as technology continues to evolve there is an almost continual need to update the technical security literature base to keep pace. However, this does not adequately explain the relative absence of social literature within what is essentially a domain that radically affects people and how they are expected to behave.

Figure 2 summarises the position reached so far, and gives some idea of the complexity of the issues.

Figure 2. The complexity of information security

Solutions and Recommendations

Towards an Improved Information Security Practice

Figure 3 summarises how a different mix of social/technical and theoretical/practical approaches to ISec might be characterised. In terms of Figure 3, information security can best be represented currently as a technical practice, with scant regard to social needs. What we have been pursuing in our research programme is a way of moving this view of ISec to an action-oriented approach to the domain. This has involved two key stages, both of which are explicitly based on declared social theory:

1. Development, testing, and refinement of a model for evaluating current information security practice
2. Development of an implementation model to action the findings of the evaluation

In the next section, this is carried forward by presenting an evaluative model for Information Security which is true to these tenets.

An Evaluative Model for Information Security

The model for evaluating information security, presented in Figure 4, is the result of an ongoing research and development programme which is currently of some 8 years duration. The grounding for the model is drawn from a foundation in critical theory, and whilst it is not necessary to detail this within this short chapter, there are certain issues which are important to the analysis. In particular, we will be referring later in the chapter to issues of "decolonisation": these relate specifically to the work of Weber and Marx

Figure 3. Theoretical vs. practical and social vs. technical comparison grid. The grid crosses a theoretical-practical axis with a social-technical axis, giving four quadrants: Social Theory (a domain based on one or more established theories that may inform a debate concerned with what people do, how they behave, what they need, etc.); Technical Theory (a domain based on one or more established theories that may inform a debate concerned with how technology will perform, how its performance might be measured, etc.); Social Practice (a domain based on the experience and knowledge of key stakeholders who attempt to inform a debate concerned with what people do, how they behave, what they need, etc.); and Technical Practice (a domain based on the experience and knowledge of key stakeholders who attempt to inform a debate concerned with how technology will perform, how its performance might be measured, etc.).

alism, Marx & Engels, 1968), and are grounded the environment for social systems’
in a critical theory which is traceable to Kant. In survival. In terms of information se-
terms of the model it is important, for example, curity, this is related to issues such as
to recognise where the public sphere is allowed gaining organisational support though
to be colonised—an example of the impact of funding and developing channels of
which is given. For the purposes of this text, a improvement.
brief description of the model is given; for those • Goal-attainment (G), concerned with
who wish to look more deeply into the background defining and prioritising social system
to its production and wider use, please see Drake goals: in information security related
(2005). to such issues as determining short
The model is derived by combining three term and long term needs; differen-
concepts from social theory: tiating between local/on-site security
requirements from the needs for remote
1. Habermas’ systems/lifeworld and public/pri- working.
vate spheres of influence. In outline, this the- • Integration (I): the co-ordinating of re-
ory helps us to understand how human action lationships within the social system: in
becomes systematised within organisations. information security—demonstrating
The outcome is that the system functions (e.g., to stakeholders that risks are managed
the technical aspects of information security) and resources are being used appro-
come to dominate, whilst lifeworld functions priately; appreciating the concerns
(e.g., aspects of wider social interaction) are of the practitioners implementing the
overridden. system
2. Parsons’ AGIL model. • Latency (L): motivating the desired
• Adaptation (A) is concerned with se- behaviours and managing tensions
curing and distributing the means from within social systems. Frequently

108
Social Aspects of Information Security

reviewing the information security system (policies, procedures, etc.) to ensure the administrative processes support and align with organisational objectives; establishing a culture of security within the organisation, and so on.
3. Merton's concept of latent and manifest action and outcomes.

How the Model Works

The power of the model rests in no small way on its dynamic nature, enabling it to adapt to changing circumstances. The evaluative model provides a more culturally enriched means of shifting practice towards lifeworld by navigating around the AGIL media. It can be argued that the lines horizontally and vertically through the AGIL part of the model form actual barriers to navigation. It is not possible to move from a system-dominated domain into a lifeworld-dominated one just by deciding that it is desirable to do so. That is the purpose of the cycle that runs around the outside of the AGIL functions in Figure 4. If it is desired to move away from the system-dominated domain then the individuals within the system have to be influenced. The private spheres which represent those individuals and their families, work groups, and so forth, have to be modified. Once all the actors have been "privately" influenced the organisation can "go public" through engagement in the public sphere, which is when the organisation starts to win back some of the richness of the lifeworld. This is analogous to "winning hearts and minds" in organisational/leadership terms.

Once lifeworld-bias has been achieved, care must be taken to guard against accidentally (or deliberately) restricting the physical manifestations of where this lifeworld exists. For example, if an environment where the lifeworld exists is changed, a communal area for example, then the

Figure 4. Evaluative model for information security practice (a representation of Habermas' and Merton's contribution to Parsons' model, showing public and private spheres, system and lifeworld boundaries, and manifest and latent functions). The quadrants carry guiding concerns: Adaptation: how much resource is required from the external to sustain the ISec system?; Latency: sustaining knowledge transfer; Integration: sustaining freedom of communication within the ISec system; Goal-attainment: which functions are essential in any ISec system?

public sphere is being destroyed because people will stop congregating there. The public sphere is a big part of lifeworld. If it is allowed to be colonised then the dependent lifeworld(s) will be too. The manifest vs. latent function idea basically operates when a lifeworld is deliberately colonised (manifest) vs. when it happens by accident because someone has not thought through the consequences or is not paying enough attention (latent).

The point is that decolonisation is not just a simple decision which can be taken by managers. To achieve this in action requires that research is informed from other domains, including organisational culture, management and organisational theory, change management, boundary theory, and so on.

To demonstrate how this model works, it is applied below to BS7799. This task was undertaken as part of the research project in order to provide a benchmark for the domain. Each of the 130 controls in the Standard has been assessed individually against the model, making possible a detailed analysis of the elements of the model explicitly addressed by the Standard. One of the most startling outcomes of this was that only 64 of the 130 controls emerged as relevant to information security, and of this 64, over half (33) are related to goal attainment.

INTERNATIONAL PERSPECTIVE: Evaluating BS7799. The evaluation of BS7799 against the model, showing how strongly the Standard focuses on goal-directed issues. The relevance of this will become clearer as we look at some case examples later in the chapter. [Figure: BS7799 Assessment. The evaluative model grid (private/public spheres, lifeworld/system functions, manifest and latent boundaries), shaded to show the bias of AGIL controls and the presence of essential controls.]

Case Analysis of the Evaluative Model

Clearly, the application of the model to BS7799 raises some interesting questions about information security practice. To consolidate this, further analysis was carried out within a UK local government organisation.

One of the more significant findings of this research has been the clear opportunity to shift the practice towards a more socially-aware, lifeworld-biased approach. The metaphorical approach taken here was specifically selected as a contrast to the "controls counting" approach used in the review of the British Standard and in the other case studies employed in this research. The enhancement required to the evaluative model surfaced through this empirical research is the addition of representation of what ought to be rather than just what currently is. The addition of this enhancement is also in line with a critically informed study which underpins this research.
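The "controls counting" assessment behind these figures can be illustrated in miniature. In the sketch below the control names and their AGIL (or neutral/counter-productive/other-responsibility) tags are invented examples, not the study's actual judgements; the point is only the mechanics of turning per-control assessments into a security profile map.

```python
# Sketch of deriving an AGIL security profile from per-control judgements.
# Tags: A/G/I/L = the AGIL function a control chiefly serves;
# N = neutral, C = counter-productive, O = other-responsibility
# (i.e., not relevant to information security). Examples are invented.

from collections import Counter

assessment = {
    "security policy document":      "G",  # goal-attainment
    "management review of policy":   "G",
    "security awareness programme":  "L",  # latency: culture, behaviours
    "security user forum":           "I",  # integration: relationships
    "budget for security function":  "A",  # adaptation: resourcing
    "clear-desk rule":               "N",  # no clear security effect
    "fire safety procedure":         "O",  # belongs to another function
}

profile = Counter(assessment.values())
relevant = sum(profile[f] for f in "AGIL")

print(f"{relevant} of {len(assessment)} controls relevant to ISec")
for code, label in [("A", "Adaptation"), ("G", "Goal-attainment"),
                    ("I", "Integration"), ("L", "Latency")]:
    share = profile[code] / relevant if relevant else 0.0
    print(f"  {label:<16} {profile[code]:2d} ({share:.0%})")
```

On the study's own figures (64 relevant controls, 33 of them goal-attainment), such a profile map makes the Standard's goal-attainment bias immediately visible.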

CASE EXAMPLE: Information Security in U.K. Local Government

With some 14,000 staff this organisation is the largest employer in the county in which it is located, providing services such as schools, roads and transport schemes, libraries, and care for the most vulnerable in society. Around half of its £500 million annual budget is spent on education and a further quarter on social care.

[Figure: Local Authority assessment. The evaluative model grid (private/public spheres, lifeworld/system functions, manifest and latent boundaries), with dotted shading showing the bias of AGIL controls as participants saw the organisation at the time of the analysis, and lined shading showing what ought to be.]

One of the directorates within the organisation provides services to the authority itself, such as information technology (IT), human resources, and finance. The IT department incorporates the local authority's information security practice, which forms the subject of this research.

Unlike the BS7799 analysis, the above result was derived by engaging with participants in the system, using primarily a process of metaphorical exploration. The dotted shading shows participant views of where the organisation was at the time of the analysis, whilst the lined shading is their view of where they ought to be. The 'ought' analysis indicated a significant opportunity for a shift towards lifeworld functions. The Integration function is about co-ordinating relationships within the social system, and Latency is about motivating the right behaviours and managing tensions within the social system. How the metaphors worked can be seen in outline from the metaphors which participants felt were best for describing the current situation. Ideas such as the unseen driver, the engine that pushes from the back which is forgotten about by the driver, and the inability to stop the train once it is moving are all highly indicative of system-biased functions in general and the Goal-attainment function in particular. The train taking passengers well out of their way is also suggestive of the Adaptation function indicating insufficient resources. This is particularly helpful in terms of moving towards the lifeworld-biased functions, where an ability to make changes in the route to avoid trouble spots is again suggestive of co-ordinating relationships (amongst passengers), motivating the right behaviours towards a common good, and managing tensions within the social system.

The strong bias towards adaptation and goal-attainment controls indicates that the organisation did not consider social issues when developing its security practice and did not create or maintain a broad culture of information security. The organisation is significantly biased towards the adaptation and goal-attainment functions, which is in line with the organisation's observed security practice and intent to pursue BS7799 certification.

The Future: Implementing Information Security based on Social Considerations

Table 3. Adapted approach to information security

Step 1. Assess presence of essential controls. If there are gaps, they should be implemented unless there is good reason not to.
Comment: Absence of essential controls puts the organisation at risk of loss of information and access in a way that cannot be addressed through application of sociologically-biased controls.

Step 2. Assess the security practice of the organisation against the evaluative model and create a security profile map. Compare this profile with other organisations of similar size, industry sector, complexity, and so forth.
Comment: As more organisations are assessed, the baseline of organisations' security profiles will grow and provide this additional dimension of analysis.

Step 3. Determine whether any implemented system-biased controls can be easily converted to lifeworld-biased by changing context, environment, people involved, means of capturing feedback, and so forth.
Comment: This represents the easiest step in moving the very common goal-attainment focused security practice to a lifeworld focused one.

Step 4. Determine whether any of the lifeworld-biased controls identified have not been implemented but could be deployed reasonably easily.
Comment: If an organisation has built its practice around audit requirements and/or functional concerns, it is quite likely that not all lifeworld-biased controls have been implemented.

Step 5. Reassess the security profile and compare with the previous practice of the target organisation along with other organisations of similar size, industry sector, complexity, and so forth.
Comment: This gives a new baseline from which to measure improvements towards a lifeworld-biased approach. There are few measurements available as the practice changes, but deployment of lifeworld controls is a useful indicator of progress. Other specific measures, such as user satisfaction, number of security incidents, and so forth, should be formulated on a case-by-case basis. Have regard for the outcomes that are sought by the organisation.

Step 6. Identify neutral (N), counter-productive (C), and other-responsibility (O) controls and eliminate or reassign.
Comment: It is critically important that N and C controls are not just dropped and ignored. Careful thought should be given to determining whether they are correctly classified and whether they are important to some other organisational function (in which case, presumably, they would be reassigned as O). If O controls remain a dependency after they have been reassigned, the dependency must be surfaced and appropriate service levels agreed and documented.

Step 7. Use the action loop through public and private spheres to drive the security practice towards a lifeworld focus.
Comment: This action step is key to maintaining a focus on both technical and human-centred issues throughout the life of an information security system.

Step 8. Reassess the security profile and compare with the last baseline. Redo the action loop at action step 6.
Comment: This becomes a long-term (perhaps continual) process to achieve desired outcomes and sustain the required focus.
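The disposition rules of step 6 can also be sketched in code. This is a hypothetical illustration of the bookkeeping the table describes; the category codes follow the table, while the control names and reassignment targets are invented.

```python
# Sketch of the step-6 disposition of neutral (N), counter-productive (C)
# and other-responsibility (O) controls. Names and owners are invented.

controls = [
    ("password complexity rule",    "C", None),          # counter-productive
    ("clear-desk rule",             "N", None),          # neutral
    ("fire safety procedure",       "O", "facilities"),  # another function's job
    ("security awareness training", "L", None),          # lifeworld control
]

retained, reassigned, removed = [], [], []
for name, category, new_owner in controls:
    if category in ("N", "C"):
        # Not silently dropped: record the removal so the classification
        # can be revisited, per the table's caution.
        removed.append(f"{name} ({category}) - review classification first")
    elif category == "O":
        # Reassign, but surface the dependency and agree service levels.
        reassigned.append(f"{name} -> {new_owner} (agree service levels)")
    else:
        retained.append(name)

print("retained:  ", retained)
print("reassigned:", reassigned)
print("removed:   ", removed)
```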

So much for the task of assessing the position of information security in an organisation, but how is this to be made use of? Implementation requires a more longitudinal study into the impact of the approach, but initial indications suggest that the procedure outlined in Table 3 is a helpful approach to implementing a security practice based on the findings and use of the evaluative model.

Figure 5 provides some structure to these steps, indicating that the process through these steps should be continuous. It also shows that in terms of priority, getting the essential controls in place is highest priority, because failure to do this would most likely undermine the whole security practice irrespective of its social/technical biases. The next priority is to make sure the practice is continually reviewed to ensure it is meeting the needs of its users and the business. Thirdly, use the evaluative model to identify opportunities to move the practice towards a more socially

aware, life-world biased approach. Underpinning all of this is the need to identify and deal with neutral, counter-productive, and other-responsibility biased controls. The general ways that these would be dealt with are to remove neutral and counter-productive controls altogether and to reassign other-responsibility controls to the appropriate department within the organisation, but clearly maintain such controls as dependencies for the information security practice.

Figure 5. Revised framework for applying security practice. The framework is a continuous loop: review essential controls and implement programmes to address gaps; provide means to continually review the security practice; assess the security practice against the evaluative model, determine the desired approach, and implement appropriate controls and initiatives; and review neutral, counter-productive, and other-responsibility controls.

Conclusion

Information security is a domain which has hitherto been dominated by technologically-biased, operationally-focused, pragmatic controls. Deeper research of the domain is revealing a set of largely ignored human considerations, in respect of which methods informed by social theory are proving of value.

The approach adopted in this chapter gains its credibility from an explicit basis in social theory, from which an evaluative model and method of implementation have been crafted. Shortcomings, derived from the application of the evaluative model, in one of the key standard approaches to information security, and in the application of information security within a large organisation, have further improved our understanding of how strategies can best be derived and managed in this domain.

• Information security is a domain dominated by pragmatic, technology-based methods.
• By acceding to these methods, both the British Standard and industrial practice have favoured a short-term, operationalist approach.
• Human factors are seen as largely external to the information security "system."
• This chapter reports a research study from which has been derived an evaluative model and implementation approach which takes account of human factors by drawing specifically on social theory.
• The outcome is a more human-focused information security, with methods which enable the current status to be determined and improved upon.

This study has focused on what might be termed an evolutionary shift from system-biased controls to lifeworld-biased controls. This shift can best be characterised as a removal of system-biased controls, deployment of lifeworld-biased controls, and a recasting of existing system-biased controls as lifeworld-biased ones.

The evaluative model provides a more culturally enriched means of shifting practice towards lifeworld by navigating around the AGIL media. It can be argued that the lines form actual barriers to navigation. It is not possible to move from a system-dominated domain into a lifeworld-dominated one just by deciding that it is desirable to do so. That is the purpose of the cycle that runs around the outside of the AGIL functions. If it is desired to move away from the system-dominated domain then the individuals within the system have to be influenced. The private spheres which
Social Aspects of Information Security

represent those individuals and their families, work groups, and so forth, have to be modified. Once all the actors have been "privately" influenced, the organisation can "go public" through engagement in the public sphere, which is when the organisation starts to win back some of the richness of the lifeworld. This is analogous to "winning hearts and minds" in organisational/leadership terms.

Once lifeworld-bias has been achieved, care must be taken to guard against accidentally (or deliberately) restricting the physical manifestations of where this lifeworld exists. For example, if an environment where this lifeworld exists is changed, a communal area for example, then the public sphere is being destroyed because people will stop congregating there. The public sphere is a big part of lifeworld. If it is allowed to be colonised then the dependent lifeworld(s) will be too. The manifest vs. latent function idea basically operates when a lifeworld is deliberately colonised (manifest) vs. when it happens by accident because someone has not thought through the consequences or is not paying enough attention (latent).

The point is that decolonisation is not just a simple decision which can be taken by managers. To achieve this in action requires that research is informed from other domains, including organisational culture, management and organisational theory, change management, boundary theory, and so on. Further research informed from these perspectives, within a critical approach, would seem to be of value.

References

Avison, D. E. (1989). An overview of information systems development methodologies. In R. L. Flood, M. C. Jackson, & P. Keys (Eds.), Systems prospects: The next ten years of systems research (pp. 189-193). New York: Plenum.

Baskerville, R. (1988). Designing information system security. Wiley.

Baum, M. (1997). The ABA digital signature guidelines. Computer Law & Security Report, 13(6), 457-458.

Birch, D. (1997). The certificate business: Public key infrastructure will be big business. Computer Law & Security Report, 13(6), 454-456.

BSI. (1993). DISC PD0003: A code of practice for information security management. London: British Standards Institute.

BSI. (2002). BS7799-2:2002. Information security management. Specification with guidance for use. British Standards Institute.

BSI. (2003). BS15000-2:2003 IT service management. Code of practice for service management. British Standards Institute.

CESG. (1994). CESG electronic information systems security: System security policies (Memorandum No. 5).

Cooper, J. (1989). Computer and communications security. New York: McGraw-Hill.

DOD. (1985). DoD trusted computer system evaluation criteria (The Orange Book) (DOD 5200.28-STD). United States Department of Defense.

Drake, P. (2005). Communicative action in information security systems: An application of social theory in a technical domain. Hull: University of Hull.

DTI. (2000). Information security breaches survey 2000: Technical report. London: Department of Trade & Industry.

DTI. (2002). Information security breaches survey 2002: Technical report. London: Department of Trade & Industry.

DTI. (2004). Information security breaches survey 2004: Technical report. London: Department of Trade & Industry.

Forrester, T., & Morrison, P. (1994). Computer ethics. MIT Press.

Gollman, D. (1999). Computer security. Wiley.

ISO. (2000). BS ISO/IEC 17799:2000, BS7799-1:2000. Information technology. Code of practice for information security management. International Standards Organisation.

Killmeyer-Tudor, J. (2000). Information security architecture: An integrated approach to security in the organisation. CRC Press.

Kuong, J. (1996). Client server controls, security and audit (enterprise protection, control, audit, security, risk management and business continuity). Masp Consulting Group.

Langford, D. (1995). Practical computer ethics. McGraw-Hill.

Leng, T. (1997). Internet regulation in Singapore. Computer Law & Security Report, 13(2), 115-119.

Longbough. (1996). Internet security and insecurity. Management Advisory Publications.

Marx, K., & Engels, F. (1968). Selected works in one volume. London: Lawrence & Wishart.

McCauley, J. (1997). Legal ethics and the Internet: A U.S. perspective. Computer Law & Security Report, 13(2), 110-114.

Modhvadia, S., Daman, S., et al. (2002). Engaging the board: Benchmarking information assurance. Cambridge: Information Assurance Advisory Council.

Neumann, P. (1995). Computer related risks. Addison-Wesley.

Northcutt, S., et al. (2002). Inside network perimeter security: The definitive guide to firewalls, virtual private networks (VPNs), routers, and intrusion detection systems. Que.

Peltier, T. (2001). Information security risk analysis. Auerbach.

Russell, D., & Gangemi, G., Sr. (1991). Computer security basics. O'Reilly.

Warman, A. (1993). Computer security within organisations. Macmillan.

Wright, P. (1993). Computer security in large corporations: Attitudes and practices of CEOs. Management Decision, 31(7), 56-60.

Wylder, J. (2003). Strategic information security. Auerbach.

Endnote

1. CESG is a UK government sponsored body that provides advice to both government and industry on best practice approaches to the delivery of information security.


Chapter VIII
Social and Human Elements
of Information Security:
A Case Study

Mahil Carr
Institute for Development and Research in Banking Technology, India

AI can have two purposes. One is to use the power of computers to augment human thinking, just as we
use motors to augment human or horse power. Robotics and expert systems are major branches of that.
The other is to use a computer’s artificial intelligence to understand how humans think. In a humanoid
way. If you test your programs not merely by what they can accomplish, but how they accomplish it, then
you're really doing cognitive science; you're using AI to understand the human mind.
Herbert Simon

Abstract

This chapter attempts to understand the human and social factors in information security by bringing together three different universes of discourse – philosophy, human behavior, and cognitive science. When these elements are combined, they unravel a new approach to the design, implementation, and operation of secure information systems. A case study of the design of a technological solution to the problem of extension of banking services to remote rural regions is presented and elaborated to highlight human and social issues in information security. It identifies and examines the concept of the 'Other' in information security literature. The final objective is to prevent the 'Other' from emerging and damaging secure systems rather than introducing complex lock and key controls.


Introduction

Information security falls within the broad category of security. All the while when designing systems, designers employ an underlying model of the "human being" who is either an "attacker," "adversary," "eavesdropper," "enemy," or "opponent," apart from the normal user of a system who is a "beneficiary," "customer," or "user." For the sake of simplicity, let us call the human being who interacts with the information system in the normal, authenticated, and authorized user mode a legitimate "user." Let us call a human being who interacts with the system performing some illicit operations not within the legitimate framework the "other." It is important to understand that the same person may switch between different modes, from user to the other, depending on the context. Most security systems employ a model of the "other" in relation to which the security features of systems are designed.

This chapter focuses on fundamental underlying premises that are implicitly or explicitly employed while constructing secure information systems. This chapter attempts to open the door for a new approach to the study of information security. It examines the human and social factors in information security from the perspective of a model of human behavior and cognitive science. A real world case study is the basis from which insights are drawn from the process of its design (but not actual implementation). We attempt to outline three distinct universes of discourse and frames of reference and try to relate them together. First, we look at the underlying broad philosophical assumptions of security frameworks in general. Second, we choose a model of human behavior from a systems perspective and situate a cognitive science approach within it. Third, we analyze the technical fabrication of information security protocols in the context of human and social factors, drawing insights from a case study. We discuss and highlight issues in providing secure messaging.

The philosophy of security section discusses the reason why at all we need secure systems. Secure systems are products of a particular time, space, and the level of technology currently available in a society. From the nature of humanity we draw the conclusion that all human beings have the potential to create security hazards. However, whether a person is a legitimate user of the system or the "other" (at the individual level) is determined by his or her cognitive (rational) capacities, emotions (affective states), intent (will), spirituality (belief systems adhered to), and the overt behavior of the individual that is expected of him or her. This provides an explanatory framework to understand why individuals who are intelligent opt to undertake malicious activities (e.g., "hackers" and "terrorists"). The social setting in which the individual is embedded to a great extent determines his or her predisposition to choose to act the role of "the user" or the "other." The expression of the "collective conscience" of the community to which he or she belongs gives sustenance to the emotional basis, the formation of will, and the spiritual basis, and specifies public action that is encouraged. Though these particular human and social factors are not treated in depth in this chapter, it points out that these factors have to be studied seriously and an approach should be taken to prevent the emergence and continued presence of the "other" in the social space. This probably is a more secure way of ensuring implementation of security features.

We look at a case study where information security is of key concern in a modern financial system. The case study outlines a design process for remote banking that offers several technical and managerial challenges. The challenge is to be able to extend banking to communities that hitherto have had no experience in banking and to those who are illiterate. This chapter outlines the technical issues that need to be addressed to make remote banking a reality. From this case study, we draw conclusions about how the "other" is present in the design of the project. We have

only emphasized and dealt with the cognitive model of the "other" among the several human and social factors involved in providing a secure financial system. We conclude the chapter by paving the way for deeper social science research that needs to address the problem of the causes of formation of malicious or subversive intent, and the process of its sustenance, expression, presence, and persistence. This will enable the creation of secure systems.

Philosophy of Security

To understand human factors in information security we must have a framework to comprehend both the "human" and "security." Let us first address the question "Why security?" A simple description of the human being and human nature gives us the answer. Human beings are products of nature and interact with nature. Humans have the ability to create or fashion things out of natural material. They have the power to destroy forms and recreate newer forms; for example, they can melt iron, make steel, build edifices, and construct cars and aircraft. Humans have the power to break and make things. This human potentiality makes them destroyers while at the same time being creators (e.g., cutting trees to make furniture). The potential threat while safeguarding an artifact or a possession comes from other humans (and possibly from one's own self too). A layer of security is therefore necessary to protect an entity from being destroyed accidentally or deliberately. Borders are an essential security feature that preserves the form of an entity and provides an inner secure space ("privacy"). Borders delineate and define distinct spaces: what is within and what is outside. Humans have the capability to break open "security" features (the husk, shell, case, or skin that covers and protects a seed or fruit).

In their quest to attain mastery over the universe, humans have developed tools that are efficient in interacting with nature intimately. Whenever humans develop a new tool that is technologically more advanced than the current level of technology, then the new technology can also be deployed as a weapon. The invention of knives gave rise to swords, dynamite for mining gave rise to grenades, the capability to generate nuclear power gave rise to nuclear weapons, and so on. When a certain technology becomes out of date, the weapons also become outdated; for example, we no longer use bows and arrows, swords, and even firearms—we no longer witness duels or fencing. Security frameworks of yesteryears are no longer meaningful today—castles and fortresses are no longer strongholds; they have been replaced by different types of defense establishments (e.g., the Windsor castle and many fortresses dotted all over India). Previously, photographing a dam was thought to be a security risk. But with today's satellite capabilities and intercontinental ballistic missiles, the information of the location of a dam cannot be kept secret (e.g., Google Earth). The security frameworks of a particular time are contingent upon the level of technology that the society has achieved.

Since humans have the innate potentiality to destroy and consume, if something is to be preserved from destruction then it is necessary to safeguard it with a layer of security. Particularly with respect to information security, one needs to be clear whom you need to protect information from. What are the threats to information security—where do they arise from? The employee, the customer, the competitor, or the enemy? Malicious attacks from "hackers," or even from your own self? While constructing systems, it has to be taken into account that:

A human being will exploit a vulnerability in a system, if there is a vulnerability existing in the system, to his or her advantage at the cost of the system.

All security frameworks are built with the other and the other's capability in mind.

Human Factors: A Behavioral Model

When we talk about "human" factors that influence information security, we first need to identify and define what we mean by "human" factors. Human factors are taken into account in a wide variety of fields, for example, aeronautics, ergonomics, human-computer interaction, medical science, politics, economics, and so forth. Each of these fields considers human factors from several different aspects. In aviation, human factors mean cognitive fidelity, whole body motion, and physiological stress (Garland, 1999). Ergonomics deals with user interface design and usability—making products in the workplace more efficient and usable. Anthropometric and physiological characteristics of people and their relationship to workspace and environmental parameters are a few of the human factors taken into consideration in ergonomics. Other factors may include the optimal arrangement of displays and controls, human cognitive and sensory limits, furniture, and lighting design.

We need a model of the "human" in the context of information security. Models provide us with important relationships between variables. Philosophical positions give us a foundation to construct scientific models over empirical data. While scientific models are often reductive in nature (i.e., entities are studied in isolation), systems models study interactions between components: the whole is greater than the sum of its parts. The systems model of human behavior gives us a possible basis to identify the sources of threat to information security. Information security cannot be achieved purely from the standpoint of cryptographic algorithms (lock and key mechanisms) alone, but from understanding human behavior and the social context in which humans are embedded (Dhillon, 2007).

The systems model of human behavior identifies three major components of the mind as well as their biological and spiritual underpinnings (Huitt, 2003). Eysenck (1947), Miller (1991), and Norman (1980) provide empirical support for the three dimensions of mind (or human personality), for example:

1. Cognition (knowing, understanding, thinking—processing, storing, and retrieving information);
2. Affect (attitudes, predispositions, emotions, feelings); and
3. Conation (intentions to act, reasons for doing, and will).

These three components of the mind can be used to address several issues that can arise in the context of information security. An individual's thinking (cognition), feeling (affect), and willingness (volition, conation), as well as overt behavior and spirituality, are constituents that interact to give appropriate human responses to stimuli from the environment. A second characteristic of the systems model of human behavior is that human beings do not operate in isolation; they are products of a variety of contexts—the environments that surround the individual human being, with which he or she is in constant interaction, play a major role in the individual's responses and interactions with the world (see the next section on social factors for a detailed discussion).

There are therefore five major components of the human being in the systems model of human behavior (Huitt, 2003):

1. Cognitive component: Perceives, stores, processes, and retrieves information
2. Affective component: Strongly influences perceptions and thoughts before and after they are processed cognitively
3. Conative component: The intent of the human actor
4. Spiritual component: How humans approach the mysteries of life, how they define and relate to the sacred and the profane
5. Behavioral system: Explicit action of the human being and the feedback received from other members of the community

Of these, the major component that we are concerned with is the cognitive component. We would like to explore this cognitive component from the framework of cognitive science. Briefly, here we will outline how the other components are influential in information security. Human emotions, the basis of the affective component, are a subject that has been explored in psychology (Huitt, 2003). A variety of emotions impact how humans relate with information systems. Anger, fear, and anxiety are known to influence the adoption and usage of information systems (Athabasca University, 2000); for example, the introduction of computerized systems in the banking industry in India faced organized, stiff resistance during the initial phases, as bank employees had apprehensions of threats of job loss and retrenchment (Goodman, 1991). The conative component (human will) determines at what level an individual or a group of people will adopt information technology. The human being can be influenced by cultural factors ("we" and "they"), the religious position he or she has abided by (spirituality), and also the collective memory (social factors) in which he or she has been contextualized.

While this chapter essentially focuses on a cognitive science perspective, it also admits the limitations of cognitive science in general. In this chapter we have made a limited attempt to study only the cognitive component of the mind, as opposed to treating other components such as the affective, the conative, the spiritual, and the overt action of the human being. There are philosophical criticisms raised by Dreyfus (1992) and Searle (1992) against cognitive science. They claim that this approach is fundamentally mistaken in the sense that the cognitive perspective does not take into account (Thagard, 2004):

• The emotion challenge: Emotions can be the basis for action in human thinking.
• The consciousness challenge: The ability to do what is good and what is evil influences the cognitive model of the human being.
• The world challenge: The physical environment in which a human being is located influences his or her thought.
• The body challenge: Health conditions can determine one's thought patterns.
• The social challenge: Human thought is always embedded in symbol, ritual, and myth and is part of a collective conscience.
• The dynamical systems challenge: The human mind is a continuous dynamic system, and does not always compute in the traditional sense.
• The mathematics challenge: Human thinking is not mathematical—the brain does not compute using numeric quantities while making calculations; for example, the speed at which a human being drives a car is not computed using equations.

The systems model of human behavior does accommodate all of these criticisms of a pure cognitive science approach.

Social Factors (Systems and Ecosystems)

Systems cannot be completely understood without understanding the ecosystem within which they are embedded. Human behavior is not merely a function of an individual's cognitive components. There are three levels of ecology that are identified by the systems model of human behavior (Huitt, 2003). Huitt's framework is discussed below.

The first level of the ecology, or the context of human behavior, is the micro-system. The family, the local neighborhood, or community institutions such as the school, religious institutions, and peer groups form part of the micro-system
where individual formation occurs. The second level is the meso-system. This influence arises from social institutions or organizations where the human being does work (employment) or obtains pleasure (entertainment). The micro-system institutions filter and mediate the influence of these meso-systems and institutions with which the individual interacts. The third level is the macro-system. This level of influence relates to the international region, global changes, or aspects of culture. Ecological parameters can influence human behavior significantly. The German defeat in the First World War, which led to an economic catastrophe leading to the Second World War, is a case in point. All human actions of individuals in the German world or the Allied world had to be influenced by the war during the world wars.

The sources of security threats can emerge from the global environment, the meso-system, or the micro-system. An individual's motivation to destroy can emerge from any of these sources. In a context of war between two communities, each may perceive the other as a threat (e.g., the world wars). Two organisations may compete against each other for their share of the market (e.g., Microsoft vs. Apple). Families may have animosities with other families (e.g., the Capulets and the Montagues). Therefore, each of these ecological levels may strongly impact whom the individual treats as the "other."

Cognitive Science and Security

Cognitive science emerged when researchers from several fields studied complex representations and computational procedures of the mind. Cognitive science is the interdisciplinary study of mind and intelligence, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology (Thagard, 2004). The computational-representational approach to cognitive science has been successful in explaining many aspects of human problem solving, learning, and language use.

Cognitive scientists build computer models based on a study of the nature of intelligence, essentially from a psychological point of view. This helps comprehend what happens in our mind during problem solving, remembering, perceiving, and other psychological processes. AI and cognitive science have been able to formulate the information-processing model of human thinking (Association for the Advancement of Artificial Intelligence, 2007). Rapaport (2000) puts it this way:

The notion that mental states and processes intervene between stimuli and responses sometimes takes the form of a 'computational' metaphor or analogy, which is often used as the identifying mark of contemporary cognitive science: The mind is to the brain as software is to hardware; mental states and processes are (like) computer programs implemented (in the case of humans) in brain states and processes.

When we talk about human factors in information security, we are primarily interested in the human information processing model. An understanding of human information-processing characteristics is necessary to model the other's capability and action. Characteristics of the human as a processor of information include (ACM SIGCHI, 1996):

• Models of cognitive architecture: symbol-system models, connectionist models, engineering models
• Phenomena and theories of memory
• Phenomena and theories of perception
• Phenomena and theories of motor skills
• Phenomena and theories of attention and vigilance
• Phenomena and theories of problem solving
• Phenomena and theories of learning and skill acquisition
• Phenomena and theories of motivation
• Users' conceptual models
• Models of human action

While cognitive science attempts to build a model of the human as an information processor (the HIP model), it deals with only one individual unit as its basis. However, the design of security features in information systems has to take into account two or more processing units as the basis of the model. The technical fabrication of secure systems incorporates a model of the "Other." A careful analysis of the Global Platform (see case study) or EMV standards reveals the process of how the designer attempts to build secure financial information systems where the "attacker" is always present in the scenario. The cognitive model of the "attacker" is the human factor that the system attempts to protect itself against (Fig. 1). The other's technical competence is assumed to be equivalent to that of the designer. The destructive capability—the computational-representational model of the "other"—is the source of threat for the designer, against which he or she designs the security features.

Figure 1. Cognitive model of the designer of secure information systems (diagram: the designer, modeled as a human information processor, protects the secure information system, which the "other," also modeled as a human information processor, attempts to subvert)

The Case Study: Financial Inclusion Using Information Technology

Financial inclusion means extending banking services at an affordable cost to the vast sections of disadvantaged and low-income groups. The Financial Inclusion Task Force in the UK has cited three priority areas requiring serious attention: access to banking, access to affordable credit, and access to free face-to-face money advice (Kumar, 2005). The Reserve Bank of India (RBI) has noticed that more than eighty percent of adult rural Indians (245 million, roughly the size of the U.S. population) do not hold a bank account (Nair, Sofield, & Mulbagal, 2006). The Reserve Bank of India has mandated that banks extend their outreach, taking banking services to the common man (Reserve Bank of India, 2005).

Extending banking to rural areas where there are no bank branches, consistent power supply, or communication links such as telephones or the Internet is a daunting task. This calls for newer approaches to taking banking to remote regions. One solution that the RBI has come up with is to enable intermediate banking facilities for customers through business correspondents who act as agents on behalf of banks (Reserve Bank of India, 2006). As law mandates, any transaction on an account involving cash has to be made within the physical premises of the bank. The business correspondents are appointed by the banks and have the authority to accept deposits or make cash payments when customers would like to withdraw or deposit money from or to their accounts at locations other than bank premises.

The experience of microfinance institutions in India, when taken into account, suggested that cash management is a problem in rural India. The
transport of cash is expensive and dangerous. A solution was sought whereby the cash available in the villages could be circulated and kept within the region, which would lead to fewer security hazards in cash management. Instead of opening full-blown brick-and-mortar bank branches in remote districts (an expensive proposition), it was proposed that, with the help of modern information technology and the managerial capabilities of business correspondents, banking functionalities could be extended to remote regions. It is known that information technology solutions to deliver banking services have been able to reduce transaction costs (e.g., ATMs). The business requirements for the proposed solution are outlined below. We also discuss in the next section how these requirements could possibly be implemented using information technology as a vehicle.

Business Requirements for the Financial Inclusion Initiative

The basic idea of the financial inclusion initiative is to extend banking services to the un-banked and under-banked rural population. The rural communities that reside in remote regions were the target beneficiaries of the scheme. Information technology should enable banks to provide services that meet the following business requirements:

• Banking services such as deposits, withdrawals, and funds transfer are to be provided.
• Each customer must be identified uniquely by some means, especially fingerprints. Biometric authentication using fingerprints proved to be more secure than personal identification number (PIN) based authentication. As most customers are illiterate, they would not be able to use PINs to authenticate themselves (in some pilots it was noticed that rural customers who could not keep their PIN secret had written it down on the card itself!).
• Both online and off-line transactions must be possible.
• Balance enquiry and a mini-statement showing the last ten transactions must be possible at all terminal locations.
• No transaction should be lost in the entire system.

Technical Implementation Issues (Problems and Solutions)

Figure 2. Model solution for the financial inclusion initiative (diagram: the customer presents a smart card at a terminal operated by the business correspondent; the terminal connects through a switch to one of several bank hosts at the banking back-end)

The model solution proposed for the financial inclusion initiative is outlined in Figure 2. Each customer is given a smart card with his primary account number and other personal details, such as address, nominee details, and contact information, stored within it. The smart cards are to be used at bank terminals owned by banks and operated by business correspondents. The
customer is authenticated using the biometric fingerprint of the customer stored in the smart card. These terminals have connectivity through GSM, CDMA, PSTN, or Ethernet, depending upon the type of connectivity available at the local place of operation. However connected, the communication finally happens through IP connectivity to the back-end switch. The network switch connects a particular terminal with the appropriate bank host. All customer details and account information, including the current balance, are held at the bank host. The smart card is used for customer authentication whenever transactions are made at bank terminals. Figure 3 provides an overview of the technological solution to the financial inclusion initiative.

Figure 3. Technological solution for financial inclusion—an overview (diagram: village records, the bank, business facilitators and business correspondents, a service processor, a settlement system, merchant POS terminals, the host, and a grievances and redressal channel)

The technical issues addressed in the design of the system:

• The choice of the appropriate smart card (ISO 7816, Global Platform)
• The internal layout and file structure of the smart card (personalization)
• The choice of the terminal (Level 1—EMV certified)
• The communication protocol between the terminal and the smart card (EMV)
• The communication protocol between the terminal and the switch (ISO 8583)
• The customization of the switch software
• A card management system
• A terminal management system

The Smart Card

Smart cards have been widely used in various sectors such as transport, retail outlets, government, health (insurance), mobile telecom, and the financial sector. Smart cards come in different flavors with differing operating systems, memory capacities, and processing power (Rankl & Effing, 2003). Smart cards come with different operating systems like MultOS, Payflex, or Java. Certain operating systems are vendor specific. Smart cards differ in terms of the memory capacity that is available within them. There is a tradeoff between the cost and the level of security required. A crypto card that uses public key infrastructure (PKI) offers more security and is relatively more expensive than a smart card that permits only static data authentication.

A wide range of smart cards with different memory capacities exists: 8K, 16K, 32K, and 64K. It was a business requirement specification that the customer's fingerprint templates be stored in the smart card. Also, since the last 10 transaction details had to be stored within the smart card for balance enquiry and mini-statement, a smart card with as much EEPROM memory as possible was needed. A fingerprint template ranges from half a kilobyte to one kilobyte in size. Considering the storage of templates for four fingers, this would take about four kilobytes of space. A normal, rudimentary software application on the smart card takes about four kilobytes. Therefore, the final choice was a 32K card. Smart cards were specified to adhere to ISO 7816 standards (physical characteristics and cryptographic requirements).
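The sizing argument above can be captured in a few lines of arithmetic. A minimal sketch follows; the per-record size for mini-statement entries is a hypothetical figure, since only the template and application sizes are quoted above.

```python
# Back-of-the-envelope EEPROM budget for the card choice (illustrative).
KB = 1024

card_capacity = 32 * KB    # the 32K card finally chosen
template_size = 1 * KB     # one fingerprint template: 0.5-1 KB (worst case)
fingers = 4                # templates stored for four fingers
application = 4 * KB       # rudimentary on-card application
txn_record = 64            # hypothetical size of one mini-statement record
txn_history = 10           # last ten transactions kept on the card

used = fingers * template_size + application + txn_history * txn_record
print(f"used ~{used / KB:.1f} KB of {card_capacity // KB} KB")  # ~8.6 KB
```

Even allowing generously for file structures and keys, an 8K or 16K card would leave little headroom, which is consistent with the choice of a 32K part.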


The information contained in the smart card is to be held securely, only to be read and updated using secure symmetric keys. This solution using symmetric keys, though weaker than a PKI solution, was adopted mainly due to cost considerations. A PKI-enabled, Europay MasterCard Visa (EMV) compliant cryptographic smart card costs four times as much as a normal card. The affordability of the customer determined the level of security that could be offered. Since the account balances were anticipated to be low, a lock costlier than the value it protects was discarded. A secure access module (SAM) at the terminal provided the necessary computational security to read and access the information in the smart card. Derived keys and diversified keys are used for this purpose (Rankl & Effing, 2003); a sketch of key diversification appears below.

Smart cards can contain multiple applications. Global Platform is an industry-wide standard that provides a layer of management while handling multiple smart card applications. Global Platform specifies the security requirements that a card and an application should have while making secure smart card applications. Table 1 samples some of the security requirements that the Global Platform card security specification addresses (GlobalPlatform, 2005).

Corporations such as Visa and MasterCard have built their own EMV standard applications, such as Visa Smart Debit/Credit (VSDC) and MChip, respectively. The design issue is whether to adopt one of these applications (with suitable customization) or to build a custom application from scratch. Adopting any of these standard applications has the advantage of worldwide interoperability. But the price is heavy in terms of licensing and royalty fees that the poor rural customer has to bear every time a transaction is made.
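The derived and diversified keys mentioned above follow a standard pattern: the SAM holds a master key and recomputes each card's unique key from the card's identifier, so no single card's key compromises any other card or the master key. The sketch below illustrates the idea only; HMAC-SHA-256 stands in for whatever derivation algorithm the actual SAM implements, and the identifiers are made up.

```python
import hashlib
import hmac

def diversify_key(master_key: bytes, card_serial: bytes) -> bytes:
    # Per-card key = KDF(master key, card serial number). The card stores
    # only its own diversified key; the SAM recreates it on demand.
    digest = hmac.new(master_key, b"card-key:" + card_serial, hashlib.sha256)
    return digest.digest()[:16]  # truncate to a 128-bit card key

issuer_master = b"\x00" * 16                     # placeholder master key
key_a = diversify_key(issuer_master, b"CARD-000001")
key_b = diversify_key(issuer_master, b"CARD-000002")
assert key_a != key_b                            # every card gets its own key
```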

Table 1. A sample of the Global Platform perception of security threats

1. High-level threats are classified as security concerns:
   - The manipulation of information on the card, including modification of data and malfunction of security mechanisms
   - The disclosure of information on the card
   - The disclosure of information about the card, such as design and construction data
2. The cloning of the functional behavior of the smart card on its ISO command interface is the highest-level security concern in the application context.
3. The attacker executes an application without authorization to disclose the Java Card system code.
4. An applet impersonates another application in order to gain illegal access to some resources of the card or with respect to the end user or the terminal.
5. The attacker modifies the identity of the privileged roles.
6. An attacker prevents correct operation of the Java Card system through consumption of some resources of the card: RAM or NVRAM.
7. The attacker modifies (part of) the initialization data contained in an application package when the package is transmitted to the card for installation.
8. An attacker may penetrate on-card security through reuse of a completed (or partially completed) operation by an authorized user.
9. An attacker may cause a malfunction of the TSF by applying environmental stress in order to (1) deactivate or modify security features or functions of the TOE or (2) deactivate or modify security functions of the smart card. This may be achieved by operating the smart card outside its normal operating conditions.
10. An attacker may exploit information that is leaked from the TOE during usage of the smart card in order to disclose the software behavior and application data handling (TSF data or user data). No direct contact with the smart card internals is required here. Leakage may occur through emanations, variations in power consumption, I/O characteristics, clock frequency, or changes in processing time requirements. One example is differential power analysis (DPA).
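The last entry in Table 1 (leakage through power consumption, I/O characteristics, or processing time) is countered mostly in hardware, but it also constrains how on-card software is written. One classic software-level precaution is comparing secrets in constant time, so the duration of the comparison does not reveal how many leading bytes matched. A generic illustration of the idiom, not code from the case study:

```python
def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Fold the XOR of every byte pair into one accumulator instead of
    # returning at the first mismatch: the loop always runs to the end,
    # so timing does not depend on where the secrets differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```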


Issues in Storing Value in the Smart Card

Permitting off-line transactions (as connectivity is poor) requires that account balance information be carried within the smart card. The value stored in the cards has to be treated as electronic money or currency. However, apart from the technicalities of implementing an electronic purse, there are legal constraints. Electronic currency has not yet been afforded the status of legal tender. Electronic wallets have so far been unsuccessful in the market in Europe (Sahut, 2006).

Certain types of smart cards have an e-purse facility implemented in the card itself. The common electronic purse specifications (CEPS) standard outlines the business requirements, the functional requirements, and the technical specifications for implementing e-purses. However, CEPS is considered a dead standard in the industry. Whenever there is a situation where the value of currency can be written by the terminal onto the card, the security risk dramatically increases. Therefore, the possibility of offering offline transactions as a business requirement had to be compromised.

Issues in Customer Authentication Using Biometric Identification

In today's payment systems scenario in India, a normal magnetic stripe card is used for debit or credit applications. The magnetic stripe card holds the customer's primary account number (PAN), name, and some authentication information (such as an enciphered PIN). No account balance information is held in the card. These cards facilitate only online transactions.

The business requirements for this initiative demanded that off-line transactions be accommodated, since connectivity is poor in rural India. This meant that the card needed to carry the account balance to allow offline transaction facilities. Therefore, the option of a smart card was taken. The smart card is to carry biometric (fingerprint) identification details, customer information, and bank account information. However, since the card should also possibly be used with other existing financial networks, a magnetic stripe was also needed in the card (e.g., using the same cards in ATMs apart from the terminals). So a combination chip-and-magnetic-stripe solution was proposed.

The fingerprint template was to be stored in the card so that every time the customer wanted to do a transaction, a fingerprint scanner could extract the image and the terminal could compute the template for feature matching. The biometric validation process could take place at any of three places (a sketch contrasting these options appears below):

• The card: The biometric fingerprint template is held within the card. The terminal reads the fingerprint and creates the template. The templates are compared inside the card using the intelligence within the card.
• The terminal: The terminal reads the biometric fingerprint and converts it to a template, and it is compared with the template read from the smart card into the terminal. Terminals also have memory requirements and limitations.
• The back-end host: The back-end stores both the image and the template. The comparison is made with the biometric template stored within the smart card. This solution is costly in terms of telecommunication costs, since the fingerprint template needs to be communicated to the back-end host over wired or wireless telecommunication networks for every transaction the customer makes.
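The three placements differ mainly in where the reference template travels and where the matching computation runs. The sketch below caricatures that decision; the similarity function is a toy stand-in for a real minutiae matcher, and the threshold is an arbitrary illustrative value.

```python
from enum import Enum

class MatchLocation(Enum):
    CARD = "card"          # reference template never leaves the card
    TERMINAL = "terminal"  # reference template is read into terminal memory
    HOST = "host"          # live template crosses the network each time

def similarity(live: bytes, reference: bytes) -> float:
    # Toy stand-in for minutiae matching: fraction of matching bytes.
    if not reference or len(live) != len(reference):
        return 0.0
    return sum(x == y for x, y in zip(live, reference)) / len(reference)

def verify_fingerprint(live: bytes, reference: bytes,
                       where: MatchLocation, threshold: float = 0.8) -> bool:
    # The comparison is the same everywhere; what changes is exposure and
    # cost: CARD needs on-card compute, TERMINAL exposes the template to
    # terminal memory, HOST adds a telecom round trip per transaction.
    return similarity(live, reference) >= threshold
```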


The Terminal

The interaction between the card and the terminal is specified in the Europay, MasterCard, and Visa (EMV) standard. The terminal was chosen to be one with Level 1 EMV-compliant hardware. The software on the terminal had to be Level 2 EMV-compliant software. An EMV Level 2 certification guarantees software on the terminal that is reasonably secure. The software handles the communication between the smart card and the terminal. The EMV process is discussed in the section below.

EMV Standards for Smart Card: Terminal Communication

The EMV standard is a specification of a protocol that governs the communication between the terminal and the smart card (EMVCo, 2004). Essentially, the terminal verifies whether the card belongs to the acceptable family of cards, the card authenticates whether the terminal is a genuine one, and finally, the cardholder has to be authenticated, that is, it is verified whether the cardholder is the one whom he/she claims to be. There are several steps in the EMV process:

1. Initiate application
2. Read application data
3. Data authentication
4. Apply processing restrictions
5. Cardholder verification
6. Terminal action analysis
7. Card action analysis
8. Online/off-line processing decision
9. If online, then issuer authentication and script processing—go to step 11
10. If off-line, process the transaction
11. Completion

Though there are several steps (we will not delve into the details here), we discuss only the process of static data authentication (SDA) as an example of an EMV process. The terminal verifies the legitimacy of the data personalized in the card. This is to detect any unauthorized modification or tampering of the data after personalization of the card (Radu, 2003).

The public key of the certificate authority (CA) is stored in the terminal. The CA signs the public key of the card issuer (e.g., a bank) using the CA's private key, and this certificate is stored in the smart card. The issuer uses its private key to digitally sign the static data (those data items that are to be protected from tampering). The signed static data is stored on the card. When the terminal wants to verify the authenticity of the data, the following steps are done (a code sketch of this flow appears below):

1. The terminal retrieves the certification authority's public key.
2. The public key of the issuer is retrieved from the certificate.
3. Retrieve the signed static data.
4. Separate the static data and its signature (hash result).
5. Apply the indicated hash algorithm (derived from the hash algorithm indicator) to the static data of the previous step to produce the hash result.
6. Compare the calculated hash result from the previous step with the recovered hash result. If they are not the same, SDA has failed; else, SDA is successful.

This procedure verifies whether the card has been tampered with or not. The cardholder verification method (CVM) verifies whether the card belongs to the cardholder or not. Traditionally, the cardholder uses a personal identification number (PIN). In the financial inclusion project it is envisaged to use biometric fingerprint authentication for the CVM.
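A minimal sketch of the SDA chain of trust follows, using textbook RSA with toy parameters. Everything here is illustrative: real EMV signatures recover a formatted certificate with padding and a hash algorithm indicator from a modulus of 1024 bits or more, whereas this sketch reduces a SHA-1 digest into a tiny modulus just to show the two-level verification.

```python
import hashlib

# Toy RSA key pairs (n, e, d); real EMV keys are orders of magnitude larger.
CA_N, CA_E, CA_D = 3233, 17, 2753
ISSUER_N, ISSUER_E, ISSUER_D = 3127, 3, 2011

def digest(data: bytes, n: int) -> int:
    # SHA-1 of the data, reduced into the toy modulus. Real SDA compares
    # the full hash recovered from inside the signature instead.
    return int.from_bytes(hashlib.sha1(data).digest(), "big") % n

def sign(data: bytes, d: int, n: int) -> int:
    return pow(digest(data, n), d, n)

def verify(data: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == digest(data, n)

# Personalization: the CA signs the issuer's public key (the certificate
# stored on the card); the issuer signs the card's static data.
issuer_pub = b"issuer-public-key"                  # stand-in encoding
issuer_cert = sign(issuer_pub, CA_D, CA_N)
static_data = b"PAN=0000111122223333;expiry=0912"  # hypothetical static data
signed_static = sign(static_data, ISSUER_D, ISSUER_N)

# At the terminal (steps 1-6): check the certificate, then the static data.
assert verify(issuer_pub, issuer_cert, CA_E, CA_N)
assert verify(static_data, signed_static, ISSUER_E, ISSUER_N)
```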


Terminal: Host Communication

Once the user is authenticated, the terminal communicates with the host to make transactions. The transactions include deposits, withdrawals, and funds transfers or utility payments. ISO 8583 is the protocol that governs this communication between the terminal and the host. ISO 8583 defines the message format and communication flows, providing a common standard through which different networks or systems can interact. However, systems or networks rarely use ISO 8583 directly as such. In each implementation context, the standard is adapted for its own use with custom fields and custom usages. In the financial inclusion project, the message arrives at the switch in the EMV format. The message is stripped of EMV formatting and encryption; a customized ISO 8583 message is generated and passed on to the host. The communication between the terminal and the switch happens over the air or over the wire using a telecommunication network. Table 2 provides a small sample of messages.

Table 2. Sample ISO 8583 messages (Source: http://en.wikipedia.org/wiki/ISO_8583)

Message Type Indicator   Type of Request              Usage
0100                     Authorization request        Request from a terminal for authorization for a cardholder transaction
0200                     Acquirer financial request   Request for funds
0400                     Acquirer reversal request    Reverses a transaction
0420                     Acquirer reversal advice     Advises that a reversal has taken place
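The shape of these messages is simple to sketch: a four-digit message type indicator (MTI), a 64-bit bitmap flagging which numbered data elements follow, and then the elements themselves. The fragment below assembles a bare-bones 0200 request; the field numbers match the standard, but the flat ASCII encoding and the sample values are simplifications (real implementations add length headers, packed BCD, and the custom fields mentioned above).

```python
def primary_bitmap(field_numbers) -> bytes:
    # 64-bit bitmap: bit i (counting from 1 at the most significant end)
    # is set when data element i is present in the message.
    bits = 0
    for f in field_numbers:
        bits |= 1 << (64 - f)
    return bits.to_bytes(8, "big")

def build_0200(pan: str, amount_minor: int, stan: str) -> bytes:
    # 0200 acquirer financial request: PAN (DE 2, LLVAR), processing code
    # (DE 3), transaction amount (DE 4), system trace audit number (DE 11).
    fields = {
        2: f"{len(pan):02d}{pan}",   # two-digit length prefix, then PAN
        3: "010000",                 # processing code (cash withdrawal)
        4: f"{amount_minor:012d}",   # amount in minor units, 12 digits
        11: stan,                    # six-digit trace number
    }
    body = "".join(fields[k] for k in sorted(fields))
    return b"0200" + primary_bitmap(fields) + body.encode("ascii")

msg = build_0200(pan="0000111122223333", amount_minor=50000, stan="000123")
```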


Human and Social Factors in Information Security

Before outlining the possible abuses of the system by its various participants, it is important to note that a complex information system such as the one discussed in the case could be compromised both by insiders (personnel who construct and maintain the system) and by people who are end users of the system (outsiders). We briefly dwell upon the problem of insider threats before addressing the possible problems created by outsiders to the system.

The Insider

Shaw, Ruby, and Post (1998) identify that the sources of threats can emerge from employees (full time and part time), contractors, partners, and consultants in the system. People who are emotionally distressed, disappointed, or disgruntled can be manipulated and recruited to commit damaging acts. Introverts (people who "shy away from the world while extroverts embrace it enthusiastically," per H. J. Eysenck), who prefer intellectual pursuits to interpersonal relationships, are vulnerable to acting destructively. Profiles of possible perpetrators of security threats are people who are socially isolated, computer dependent, and gain emotional stimulation and challenge through breaking security codes, consequently beating security professionals. High rates of turnover reduce the loyalty of employees to particular organizations. Moreover, some computer professionals have weak ethics and see any unprotected data as fair game for attack. Certain predisposed traits in individuals, when exposed to acute, stressful situations, produce emotional fallout that leads them to act in ways that subvert the system.

A study of illicit cyber activity in the banking and financial sector (United States Secret Service, 2004) reveals that most of the security incidents required very little technical training and skill. In 87% of the cases studied, simple and legitimate user commands were used to create the security incidents, and 13% of cases involved slightly more sophistication, such as writing scripts or creating "logic bombs." About 80% of the users were authorized users. Only a few (23%) were employed in technical positions. Most of the system subversion activities were planned in advance by the perpetrators, and the intent of their actions was known to other people such as potential beneficiaries, colleagues, friends, and family. Most of the attacks were carried out during normal business hours and at the workplace. Revenge, dissatisfaction with company policies, desire for respect, and financial gain were some of the motivating factors identified by the study.

Threats, whether they emerge from inside or outside, and whatever the source of their motivations (the different ecological levels and individual dispositions that influence human behavior), at the point of attack the perpetrator applies logic or a heuristic strategy to make his or her move. Therefore, the designer needs to anticipate the possible scenarios where the system can be compromised. The designer therefore has an anticipated, intricate model of the other's cognitive thought processes and possible "moves." Against this backdrop, the designer secures the system in the best possible manner (like building a fortress), for example, "bus scrambling"—individual bus lines are not laid out in sequence in a smart card microcontroller; rather, they are scrambled so that the potential attacker does not know which bus line is associated with which address bit or function (Rankl & Effing, 2003). In this respect, cognitive science can play a significant role in uncovering, mapping, and addressing the heuristics, the strategies, and the logic employed by potential attackers.

The modern banking system, on the whole, is rather vulnerable, as it is heavily dependent on and exposed to various consultants and vendors. Vendors take care of security risks within their organizations; for example, smart card manufacturers have to secure the entire process of production of smart cards, the key management process, the transport and distribution process, and the card personalization process. The banking electronic system relies heavily on outsourcing certain key functionalities, such as database management, where "ethical" and regulatory aspects do not govern the interface and interaction. Every time a vendor "opens" a banking database, either for maintenance or for troubleshooting, the risks of exposure are quite phenomenal.

We have discussed the technicalities of the secure financial transactions that can happen in the financial inclusion project in the earlier section. If a security breach or incident occurs, either in the authentication process using the smart card or in the interaction between the cardholder and the business correspondent, the entire system will collapse. Any of the human participants in the financial system can attempt to subvert the system—the outsider to the system, the customer, or the business correspondent. We discuss three possible scenarios that could lead to compromising the financial system.

First, we look at a masquerader who either steals or obtains a smart card that is in use in the project. Second, we look at problems a customer can create when offered facilities for off-line transactions. Third, we consider the case of a dishonest business correspondent who can exploit the illiteracy (vulnerability) of the customer.

The Outsider

The outsider to the system who obtains a card may be able to use the card to his or her advantage. Since the EMV standard does not provide for biometric fingerprint authentication, this has to be incorporated within the cardholder verification method (CVM). In the extreme case, it may be possible for the outsider to alter the biometric of a card and use it to masquerade as the user.

The Other Customer

Secondly, in the case where off-line transactions were to be permitted, the card has to carry financial information such as the account balance. The possibility of abuse or misuse by the customer had to be accounted for. The customer could make several offline transactions and claim to have lost the card. If the card were to be reissued to the customer with the balances available at the host (not as yet synchronized with the offline terminal information), then the bank stands to lose financially. One could not trust the customer not to resort to this avenue.
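At bottom this is a reconciliation problem: the balance lives both on the card and at the host, and a reissue must not honor a card balance that includes unsynchronized offline activity. A toy sketch of the host-side check, with hypothetical record shapes (the project's actual reconciliation rules are not described here):

```python
def reissue_is_risky(host_balance: int, synced_offline_txns: list,
                     claimed_card_balance: int) -> bool:
    # Replay the offline activity the terminals have already uploaded and
    # compare it against what the (reportedly lost) card claimed to hold.
    # Any gap means unsynced offline transactions: reissuing at the host
    # balance could pay the customer twice.
    expected = host_balance
    for txn in synced_offline_txns:
        expected += txn["amount"] if txn["type"] == "deposit" else -txn["amount"]
    return expected != claimed_card_balance

# Host knows 1000; one synced withdrawal of 300; the card last showed 400,
# so 300 of offline spending has not been uploaded yet -> hold the reissue.
print(reissue_is_risky(1000, [{"type": "withdrawal", "amount": 300}], 400))  # True
```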


The Other Banking Correspondent

Thirdly, the problem of dishonest business correspondents (those operating terminals doing transactions on behalf of the bank) came to the fore. Suppose a rural customer who is illiterate would like to withdraw cash from his/her account; the business correspondent can then debit a larger value than what was disbursed to the customer. Though a printed receipt would be made available to the customer, since the customer is illiterate this would not be of any practical use. This gives rise to a flaw in the system. This loophole could only be overcome with some form of social surveillance, as well as by taking care to appoint trustworthy business correspondents. A possible administrative solution was to provide a hotline for customer complaints about this kind of system abuse by business correspondents. Once a business correspondent is identified as committing fraud, the bank authorities could take appropriate action.

This financial inclusion project highlights the limitations of approaching information security from the viewpoint of technicalities alone. It requires a solution whereby all the parties involved are bound by ethics as well as appropriate social controls (administrative procedures to handle disputes and violations). Human and social factors have to be taken into account to be able to provide a good solution to the problem of remote banking.

Conclusion

This chapter attempts to place the human and social elements of information security within a wider context. It discusses some philosophical underpinnings underlying security ventures. The chapter takes the systems model of human behavior as a basis and situates a cognitive science approach within it. It discusses the design of a real-world case of a secure financial system and how that design characterizes and accounts for the "other." The design of the technical system is entirely based on the current level of information technology and how it is employed in a remote banking application scenario.

Much of our technical apparatus and capability in information security (e.g., cryptography) emerged in the historical context of the world wars—battling with a real, flesh-and-blood "enemy." But today systems designers battle with the "other" in their imagination to construct secure systems. This chapter attempts to outline the need for a broader approach to information security, incorporating philosophy and the social and cognitive sciences. Security should be approached from first principles. This approach may provide different ways to handle security. The implications for information security may be derived from a general philosophy of security. Since security is essentially a psychological, political, social, and historical phenomenon, there is a need to "model" the human (and the "other") in the frames of reference of these sciences. This will better enable societies and organizations to understand the source and nature of threats and deal with them at that level, rather than at the level of technical fabrication alone; for example, deep-seated emotional memory wounds of lost battles long ago may stir and motivate a "hacker" or a "terrorist." The philosophical question to address is "How do you prevent the 'other' from emerging and operating adversely in the world?" The social and moral fabric of society needs as much attention as the design of security protocols.

References

ACM SIGCHI. (1996). Curricula for human-computer interaction. Retrieved March 20, 2007, from http://sigchi.org/cdg/cdg2.html#2_3_3
Association for the Advancement of Artificial Intelligence. (2007). Cognitive science. Retrieved March 20, 2007, from http://www.aaai.org/AITopics/html/cogsci.html

Athabasca University. (2000). Ergonomics resources. Retrieved April 17, 2007, from http://scis.athabascau.ca/undergraduate/ergo.htm

Dhillon, G. (2007). Principles of information systems security: Text and cases. Danvers: John Wiley & Sons.

EMVCo. (2004). EMV integrated circuit card specifications for payment systems, Book 2. Retrieved March 20, 2007, from http://www.emvco.com/

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: The MIT Press.

Eysenck, H. (1947). Dimensions of personality. London: Routledge & Kegan Paul.

Garland, D. J., Hopkin, D., & Wise, J. A. (1999). Handbook of aviation human factors. Mahwah, NJ: Lawrence Erlbaum Associates.

Global Platform. (2005). GlobalPlatform smart card security target guidelines. Retrieved March 20, 2007, from http://www.globalplatform.org/

Goodman, S. (1991). New technology and banking: Problems and possibilities for developing countries, actor perspective. University of Lund, Sweden: Research Policy Institute.

Huitt, W. (2003). A systems model of human behavior. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved March 20, 2007, from http://chiron.valdosta.edu/whuitt/materials/sysmdlo.html

Kumar, K. G. (2005). Towards financial inclusion. Retrieved April 16, 2007, from http://www.thehindubusinessline.com/2005/12/13/stories/2005121301240200.htm

Miller, A. (1991). Personality types, learning styles and educational goals. Educational Psychology, 11(3-4), 217-238.

Nair, V., Sofield, A., & Mulbagal, V. (2006). Building a more inclusive financial system in India. Retrieved April 16, 2007, from http://www.diamondconsultants.com/PublicSite/ideas/perspectives/downloads/India%20Rural%20Banking_Diamond.pdf

Norman, D. (1980). Twelve issues for cognitive science. Cognitive Science, 4, 1-32. [Reprinted in Norman, D. (1981). Twelve issues for cognitive science. In D. Norman (Ed.), Perspectives on cognitive science (pp. 265-295). Norwood, NJ: Ablex Publishing Corp.]

Radu, C. (2003). Implementing electronic card payment systems. Norwood: Artech House.

Rankl, W., & Effing, W. (2003). Smart card handbook. Chichester, England: John Wiley and Sons.

Rapaport, W. J. (2000). Cognitive science. In A. Ralston, E. D. Reilly, & D. Hemmindinger (Eds.), Encyclopedia of computer science (4th ed.) (pp. 227-233). New York: Grove's Dictionaries.

Reserve Bank of India. (2005). RBI announces measures towards promoting financial inclusion. Retrieved April 16, 2007, from http://www.rbi.org.in/scripts/BS_PressReleaseDisplay.aspx?prid=14069

Reserve Bank of India. (2006). Financial inclusion by extension of banking services—use of business facilitators and correspondents. Retrieved April 20, 2007, from http://www.rbi.org.in/scripts/NotificationUser.aspx?Mode=0&Id=2718

Sahut, J. M. (2006). Electronic wallets in danger. Journal of Internet Banking and Commerce, 11(2). Retrieved April 16, 2007, from http://www.arraydev.com/commerce/JIBC/2006-08/Jean-Michel_SAHUT.asp

Shaw, E. D., Ruby, K. G., & Post, J. M. (1998). The insider threat to information systems. Security Awareness Bulletin, 2-98. Retrieved September 5, 2007, from http://rf-web.tamu.edu/security/secguide/Treason/Infosys.htm

United States Secret Service. (2004). Insider threat study: Illicit cyber activity in the banking and finance sector. Retrieved September 5, 2007, from http://www.secretservice.gov/ntac_its.shtml
Chapter IX
Effects of Digital Convergence on Social Engineering Attack Channels
Bogdan Hoanca
University of Alaska Anchorage, USA

Kenrick Mock
University of Alaska Anchorage, USA

Abstract

Social engineering refers to the practice of manipulating people to divulge confidential information that can then be used to compromise an information system. In many cases, people, not technology, form the weakest link in the security of an information system. This chapter discusses the problem of social engineering and then examines new social engineering threats that arise as voice, data, and video networks converge. In particular, converged networks give the social engineer multiple channels of attack to influence a user and compromise a system. On the other hand, these networks also support new tools that can help combat social engineering. However, no tool can substitute for educational efforts that make users aware of the problem of social engineering and policies that must be followed to prevent social engineering from occurring.

INTRODUCTION

Businesses spend billions of dollars annually on expensive technology for information systems security, while overlooking one of the most glaring vulnerabilities—their employees and customers (Orgill, 2004; Schneier, 2000). Advances in technology have led to a proliferation of devices and techniques that allow information filtering and encryption to protect valuable information from attackers. At the same time, the proliferation of information systems usage is extending access to
more and more of the employees and customers of every organization. The old techniques of social engineering have evolved to embrace the newest technologies, and are increasingly used against this growing pool of users. Because of the widespread use of information systems by users of all technical levels, it is more difficult to ensure that all users are educated about the dangers of social engineering. Moreover, as digital convergence integrates previously separated communications channels, social engineers are taking advantage of these blended channels to reach new victims in new ways.

Social engineering is a term used to describe attacks on information systems using vulnerabilities that involve people. Information systems include hardware, software, data, policies, and people (Kroenke, 2007). Most information security solutions emphasize technology as the key element, in the hope that technological barriers will be able to override weaknesses in the human element. Instead, in most cases, social engineering attacks succeed despite layers of technological protection around information systems.

As technology has evolved, the channels of social engineering have remained relatively unchanged. Attackers continue to strike in person, via postal mail, and via telephone, in addition to attacking via e-mail and online. Even though they arrive over the same attack channels, new threats have emerged from the convergence of voice, data, and video. On one hand, attacks can more easily combine several media in a converged environment, as access to the converged network allows access to all media types. On the other hand, attackers can also convert one information channel into another to make it difficult to locate the source of an attack.

As we review these new threats, we will also describe the latest countermeasures and assess their effectiveness. Convergence of voice, data, and video can also help in combating social engineering attacks. One of the most effective countermeasures to social engineering is the continued education of all information systems users, supplemented by policies that enforce good security practices. Another powerful countermeasure is penetration testing, which can be used to evaluate the organization's readiness, but also to motivate users to guard against social engineering attacks (see, for example, Argonne, 2006).

Throughout this chapter we will mainly use masculine gender pronouns and references to maleness when referring to attackers, because statistically most social engineering attackers tend to be men. As more women have become proficient and interested in using computers, some of the hackers are now female, but the numbers are still small. Nonetheless, there are some striking implications of gender differences in social engineering attacks, and we discuss those differences as appropriate.

SOCIAL ENGINEERING

Social engineering includes any type of attack that exploits the vulnerabilities of human nature. A recent example is the threat of social engineers taking advantage of doors propped open by smokers, in areas where smoking is banned indoors (Jaques, 2007). Social engineers understand human psychology (sometimes only instinctively) sufficiently well to determine what reactions they need to provoke in a potential victim to elicit the information they need. In a recent survey of black hat hackers (hackers inclined to commit computer crimes), social engineering ranked as the third most widely used technique (Wilson, 2007). The survey results indicate that 63% of hackers use social engineering, while 67% use sniffers, 64% use SQL injection, and 53% use cross-site scripting.

Social engineering is used so widely because it works well despite the technological barriers deployed by organizations. Social engineers operate in person, over the phone, online, or through a combination of these channels. A report on the
A report on the Australian banking industry in ComputerWorld claims that social engineering leads to larger losses to the banking industry than armed robbery. Experts estimate these losses at 2-5% of revenue, although industry officials decline to comment (Crawford, 2006). Social engineering is also used in corporate and military espionage, and no organization is safe from such attacks. A good overview of social engineering attacks and possible countermeasures can be found on the Microsoft TechNET Web site (TechNET, 2006).

According to Gragg (2003), there are some basic techniques common to most social engineering attacks. Attackers tend to spend time building trust in the target person. They do that by asking for or pretending to deliver small favors, sometimes over an extended period of time. Sometimes the trust building is in fact only familiarity, where no favors are exchanged, but the victim and attacker establish a relationship. Social engineering attacks especially target people or departments whose job descriptions include building trust and relationships (help desks, customer service, etc.). In addition to asking for favors, social engineers sometimes pretend to extend favors by first creating a problem, or the appearance of a problem. Next, the social engineers can appear to solve the problem, thus creating in a potential victim both trust and a sense of obligation to reciprocate. They then use this bond to extract confidential information from the victim. Finally, social engineers are experts at data aggregation, often picking disparate bits of data from different sources and integrating them into a comprehensive, coherent picture that matches their information gathering needs (Stasiukonis, 2006b; Mitnick & Simon, 2002).

Although the description might seem complex, social engineering can be as simple as just asking for information, with a smile. A 2007 survey (Kelly, 2007) showed that 64% of respondents were willing to disclose their password in exchange for chocolate (and a smile). Using “good looking” survey takers at an IT conference led 40% of non-technical attendees and 22% of the technical attendees to reveal their password. Follow-up questions, drilling down to whether the password included a pet name or the name of a loved one, elicited passwords from another 42% of the technical attendees and 22% of the non-technical ones. While the survey respondents might have felt secure in only giving out passwords, user names were easier to obtain, because the full name and company affiliation of each survey respondent was clearly indicated on their conference badge. An earlier survey cited in the article reported similar statistics in 2004.

Another paper urging organizations to defend against social engineering illustrates the high levels of success of even simple social engineering attacks. Orgill (2004) describes a survey of 33 employees in an organization, where a “researcher” asked questions about user names and passwords. Only one employee of the 33 surveyed escorted the intruder to security. Of the 32 others that took the survey, 81% gave their user name and 60% gave their password. In some departments, all the employees surveyed were willing to give their passwords. In one instance, an employee was reluctant to complete the survey. A manager jokingly told the employee that she would not get a raise the next year unless she completed the survey. At that point, the employee sat down and completed the survey. This is a clear indication that management can play a critical role in the success or failure of social engineering attacks.

Statistically, an attacker needs only one gullible victim to be successful, but the high success rates mentioned above indicate that finding that one victim is very easy. If such “surveys” were to be conducted remotely, without a face-to-face dialog or even a human voice over the phone, success rates would likely be much lower, but the risks would also be lower for the attacker. Convergence of data, voice, and video allows attackers to take this alternative route, lowering the risk of detection at the expense of a lower success rate. Given the ability to automate some of the attack avenues using converged media, the lower success rate is not much of a drawback. We will show how most social engineering attacks resort to casting a broad net, making a profit even at extremely low success rates.


The basic tools of the social engineer include strong human emotions. Social engineers aim to create fear, anticipation, surprise, or anger in the victim, as a way of attenuating the victim’s ability to think critically. Additionally, information overload is used to mix true and planted information to lead the victim to believe what the social engineer intends. Reciprocation is another strong lever social engineers use, as we described earlier. Finally, social engineers combine guilt (that something bad will happen unless the victim cooperates), transfer of responsibility (the social engineer offers to take the blame), and authority (where the social engineer poses as a supervisor or threatens to call in a supervisor). These are basic human emotions used in all social engineering attacks, whether using converged networks or not. This chapter will focus on how attackers use these emotions on a converged network, combining data, voice, and video.

SOCIAL ENGINEERING ON CONVERGED NETWORKS

Social engineering has seen a resurgence in recent years, partly due to the convergence of voice, data, and video, which makes it much easier to attack an organization remotely, using multiple media channels. The proliferation of computer peripherals and of mobile devices, also driven by network convergence, has further opened channels for attacks against organizations. In this section we discuss new attack vectors, combining some of the classical social engineering channels (in person, by phone, by e-mail), and show how they have changed on a converged network.

Social Engineering Attacks Involving Physical Presence

The classical social engineering attack involves a social engineer pretending to be a technical service person or a person in need of help. The attacker physically enters an organization’s premises and finds a way to wander through the premises unattended. Once on the premises, the attacker searches for staff ID cards, passwords, or confidential files.

Most in-person social engineering attacks rely on other information channels to support the in-person attack. Convergence of voice and data networks allows blended attacks once the attacker is within the victim’s offices. Before showing up at the company premises, the attacker can forge an e-mail message to legitimize the purpose of the visit; for example, the e-mail might appear to have been sent by a supervisor to announce a pest control visit (applekid, 2007). Alternatively, the attacker might use the phone to call ahead for an appearance of legitimacy. When calling to announce the visit, the attacker can fake the telephone number displayed in the caller ID window (especially when using Voice over IP; Antonopoulos & Knape, 2002).

After entering the premises, an attacker will often try to connect to the organization’s local area network to collect user names, passwords, or additional information that could facilitate subsequent stages of the attack. Convergence allows access to all media once the attacker is connected to the network; even copiers now have network connections that a “service technician” could exploit to reach into the organization’s network (Stasiukonis, 2006c). Connecting to the company network using the port behind the copier is much less obvious than using a network port in the open. Finally, another powerful attack may involve a social engineer entering the premises just briefly, connecting a wireless access point to the organization’s network, and then exploring the network from a safe distance (Stasiukonis, 2007). This way, the attacker can remain connected to the network while minimizing the risk of exposure inside the organization’s building.
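As noted above, the e-mail that legitimizes such a visit is trivial to forge, because SMTP itself does not verify the From header. The sketch below, written for an authorized penetration test, illustrates the point; all host names and addresses are hypothetical.

    import smtplib
    from email.message import EmailMessage

    # Hypothetical pretext: a "pest control" announcement that appears
    # to come from a supervisor. Nothing in SMTP checks the From field.
    msg = EmailMessage()
    msg["From"] = "Facilities Manager <manager@victim-corp.example>"  # forged
    msg["To"] = "reception@victim-corp.example"
    msg["Subject"] = "Pest control visit tomorrow at 9am"
    msg.set_content("A technician from AcmePest will service our floor. "
                    "Please let him through.")

    # Any relay that accepts the message passes the forged header along.
    with smtplib.SMTP("mail-relay.example", 25) as smtp:
        smtp.send_message(msg)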


Social Engineering via Email, News, and Instant Messenger

One example of how convergence is changing information security threats is the increased incidence of attacks using e-mail, HTML, and chat software. This is attractive to attackers because it bypasses firewalls and allows the attacker to transfer files to and from the victim’s computer (Cobb, 2006). The only requirement for such attacks is a good understanding of human weaknesses and the tools of social engineering. The attackers spend their time devising ways to entice the user to open an e-mail, click on a link, or download a file, instead of spending time breaking through a firewall. One such attack vector propagates via IRC (Internet relay chat) and “chats” with the user, pretending to be a live person, assuring the downloader that it is not a virus, then downloading a shortcut to the client computer that allows the remote attacker to execute it locally (Parizo, 2005).

Because of the wide use of hyperlinked news stories, attacks are beginning to use these links as triggers. In a recent news story (Naraine, 2006b), a brief “teaser” concludes with a link to “read more,” which in fact downloads a keylogger by taking advantage of a vulnerability in the browser. This type of attack is in addition to the fully automated attacks that involve only a “drive-by” URL, where the malicious content is downloaded and executed without any intervention from the user (Naraine, 2006c). Analysis of the code of such automated attacks indicates a common source or a small number of sources, because the code is very similar across multiple different attack sites.

Convergence allows e-mail “bait” to use “hooks” in other applications. For example, an e-mail message with a Microsoft Word attachment may take advantage of a vulnerability in Word and rely on e-mail as the attack channel (McMillan, 2006). Other vulnerabilities stem from more complex interactions between incompatible operating systems and applications. The recently released Microsoft Vista operating system has vulnerabilities related to the use of non-Microsoft e-mail clients, which require user “cooperation” to exploit (Espiner, 2006). As such, Microsoft views this as a social engineering attack, rather than a bug in the operating system.

Other attacks are purely social engineering, as in the case of e-mail messages with sensational-sounding subject lines, for example, claiming that the USA or Israel has started World War III, or offering views of scantily clad celebrities. While the body of the message is empty, an attachment with a tempting name incites the user to open it. The name might be video.exe, clickhere.exe, readmore.exe, or something similar, and opening the attachment can run any number of dangerous applications on the user’s computer (Gaudin, 2007). Other e-mail messages claim that the computer has been infected with a virus and instruct the user to download a “patch” to remove the virus (CERT, 2002). Instead, the “patch” is a Trojan that installs itself on the user’s computer. The source of the message can be forged to make it appear that the sender is the IT department or another trusted source.

Finally, another way to exploit news using social engineering techniques is to send targeted messages following real news announcements. An article on silicon.com cites a phishing attack following news of an information leak at Nationwide Building Society, a UK financial institution. Soon after the organization announced the theft of a laptop containing account information for a large number of its customers, an e-mail began circulating, claiming to originate from the organization and directing recipients to verify their information for security reasons (Phishers raise their game, 2006). This is a much more pointed attack than the traditional phishing attacks (described next), where a threatening or cajoling e-mail is sent to a large number of potential victims in the hope that some of them will react. Such targeted attacks are known as “spear phishing.”
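Because the tempting attachment names cited above share an obvious trait, even a crude mail filter can catch many of them. The following is a minimal sketch of such screening, not a substitute for a real gateway product; the extension list is illustrative only.

    # Flag attachment names that suggest an executable payload, including
    # double extensions such as "photos.jpg.exe".
    RISKY_EXTENSIONS = {"exe", "scr", "pif", "bat", "cmd", "vbs"}

    def is_suspicious(filename: str) -> bool:
        parts = filename.lower().strip().split(".")
        return len(parts) > 1 and parts[-1] in RISKY_EXTENSIONS

    for name in ["video.exe", "clickhere.exe", "report.pdf", "photos.jpg.exe"]:
        print(name, "->", "blocked" if is_suspicious(name) else "allowed")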


Phishing

Phishing is a special case of e-mail-based social engineering, which warrants its own section because of its widespread use (APWG, 2007). The first phishing attacks occurred in the mid-1990s, and the attacks continue to morph as new technologies open new vulnerabilities. The classical phishing attack involves sending users an e-mail instructing them to go to a Web site and provide identifying information “for verification purposes.”

Two key weaknesses of the user population make phishing a highly lucrative activity. As a larger percentage of the population uses Web browsers to reach confidential information in their daily personal and professional activities, the pool of potential victims is greatly increased. At the same time, users have an increased sense of confidence in the information systems they use, unmatched by their actual level of awareness and sophistication in recognizing threats.

A survey of computer users found that most users overestimate their ability to detect and combat online threats (Online Safety Study, 2004). A similar situation is probably the case for awareness of and ability to recognize phishing attacks. As phishers’ sophistication increases, their ability to duplicate and disguise phishing sites increases, making it increasingly difficult to recognize fakes, even for expert users.

More recently, pharming involves DNS attacks to lure users to a fake Web site, even when they enter a URL from a trusted source (from the keyboard or from a favorites list). To mount such an attack, a hacker modifies the local DNS database (a hosts file on the client computer) or one of the DNS servers the user is accessing. The original DNS entry for the IP address corresponding to a site like www.mybank.com is replaced by the IP address of a phishing site. When the user types www.mybank.com, her computer is directed to the phishing site, even though the browser URL indicates that she is accessing www.mybank.com. Such attacks are much more insidious, because the average user has no way of distinguishing between the fake and the real sites. Such an attack involves a minimal amount of social engineering, although, in many cases, the way the attacker gains access to the DNS database might be based on social engineering methods.

In particular, pharming attacks can rely on converged media, for example, using an “evil twin” access point on a public wireless network. By setting up a rogue access point at a public wireless hotspot and by using the same name as the public access point, an attacker is able to hijack some or all of the wireless traffic through the access point he controls. This way, the attacker can filter all user traffic through his own DNS servers or, more generally, is able to mount any type of man-in-the-middle attack. In general, man-in-the-middle attacks involve the attacker intercepting user credentials as the user is authenticating to a third-party Web site and passing those credentials on from the user to the Web site. Having done this, the attacker can now disconnect the user and remain connected to the protected Web site. True to social engineering principles, these types of attacks are targeted at the rich. Evil twin access points have been found in first-class airport lounges, in repair shops specializing in expensive cars, and in other similar areas (Thomson, 2006).
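The hosts-file variant of pharming is easy to audit for, precisely because the attack amounts to one altered line of text. Below is a minimal sketch of such a check; the watch list is illustrative, and www.mybank.com is this chapter’s example domain rather than a real bank.

    import platform
    from pathlib import Path

    # Domains that should never be pinned to an address in the hosts file.
    WATCHED = {"www.mybank.com", "www.paypal.com"}

    def hosts_path() -> Path:
        if platform.system() == "Windows":
            return Path(r"C:\Windows\System32\drivers\etc\hosts")
        return Path("/etc/hosts")

    for line in hosts_path().read_text().splitlines():
        entry = line.split("#", 1)[0].strip()      # drop comments
        if not entry:
            continue
        ip, *names = entry.split()
        for name in names:
            if name.lower() in WATCHED:
                print(f"WARNING: {name} is locally pinned to {ip}")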


Social Engineering Using Removable Media

In another type of social engineering, storage devices (in particular, USB flash drives) might be “planted” with users to trick them into installing malicious software that is able to capture user names and passwords (Stasiukonis, 2006a). This type of attack is based on the fact that users are still gullible enough to use “found” storage devices or to connect to “promotional” storage devices purporting to contain games, news, or other entertainment media. Part of the vulnerability introduced by such removable storage devices is due to the option of modern operating systems to automatically open certain types of files. By simply inserting a storage device with auto-run properties, the user can unleash attack vectors that might further compromise the system. In addition to USB flash drives, other memory cards, CDs, and DVDs can support the same type of attack.
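At the time of writing, the Windows auto-run mechanism was driven by a small autorun.inf file at the root of the device, which means the payload a “found” drive would launch can be inspected before the drive is trusted. A hedged sketch follows; the drive letter is hypothetical, and real .inf files can be messier than this parser tolerates.

    import configparser
    from pathlib import Path

    def autorun_payload(drive_root: str):
        """Report what an autorun.inf on this device would execute."""
        inf = Path(drive_root) / "autorun.inf"
        if not inf.exists():
            return None
        parser = configparser.ConfigParser()
        parser.read_string(inf.read_text(errors="ignore"))
        for section in parser.sections():
            if section.lower() == "autorun":
                # 'open' names the program launched on insertion.
                return parser.get(section, "open", fallback=None)
        return None

    payload = autorun_payload("E:\\")   # hypothetical drive letter
    if payload:
        print("This device would auto-execute:", payload)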
Social Engineering via Telephone and Voice Over IP Networks

The use of telephone networks has also changed. The basic attack is often still the same, involving a phone call asking for information. Convergence, in particular the widespread use of digitized voice channels, also allows an attacker to change his (usually male) voice into a feminine voice (bernz, n.d.), which is more likely to convince a potential victim. Digitally altering one’s voice also allows one attacker to appear as different callers on subsequent telephone calls (Antonopoulos & Knape, 2002). This way, the attacker can gather information on multiple occasions without raising as much concern as a repeat caller.

The wide availability of voice over IP and the low cost of generating and sending possibly large volumes of voice mail messages also enable new types of attacks. Vishing, or VoIP phishing (Vishing, 2006), is one such type of attack that combines the use of the telephone networks described above with automatic data harvesting information systems. This type of attack relies on the fact that credit card companies now routinely require users to enter credit card numbers and other identifying information over the phone. Taking advantage of users’ acceptance of such practices, vishing attacks set up rogue answering systems that prompt the user for the identifying information. The answering system might be physically located somewhere quite different from the location its phone number suggests.

The use of phone lines is also a way for attackers to bypass some of the remaining inhibitions users have about giving out confidential information on the World Wide Web. While many users are aware of the dangers of providing confidential information on Web sites (even those that appear genuine), telephone networks are more widely trusted than online channels. Taking advantage of this perception, in conjunction with the widespread availability of automated voice menus, has enabled some attackers to collect credit card information. Naraine (2006a) describes an attack where the victim is instructed via e-mail to verify a credit card number over the phone. The verification request claims to represent a Santa Barbara bank and directs users to call a phone number for verification. The automated answering system uses voice prompts similar to those of legitimate credit card validation, which are familiar to users. Interestingly, the phone system does not identify the bank name, making it possible to reuse the same answering system for simultaneous attacks against multiple financial institutions.

A vishing attack even more sophisticated than the Santa Barbara bank attack targeted users of PayPal (Ryst, 2006). PayPal users were sent an e-mail asking them to verify their account information over the phone. The automated phone system instructed users to enter their credit card number on file with PayPal. The fraudulent system then attempted to verify the number; if an invalid credit card number was entered, the user was directed to enter the information again, bolstering the illusion of a legitimate operation. Although this type of multi-channel attack is not limited to VoIP networks, such networks make the automated phone systems much easier to set up.
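The “verification” step in this scam is cheap to fake convincingly because card numbers carry a built-in checksum: the standard Luhn algorithm, computable offline, distinguishes mistyped numbers from well-formed ones. A minimal sketch (the sample value is a widely published test number, not a real account):

    def luhn_valid(card_number: str) -> bool:
        """Standard Luhn checksum used to catch mistyped card numbers."""
        digits = [int(d) for d in card_number if d.isdigit()]
        total = 0
        # Double every second digit from the right; subtract 9 if it exceeds 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d = d * 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111 1111 1111 1111"))  # True: a well-known test number
    print(luhn_valid("4111 1111 1111 1112"))  # False: fails the checksum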


As VoIP becomes more prevalent, we may begin to see Internet-based attacks previously limited to computers impact our telephone systems (Plewes, 2007). Denial of service attacks can flood the network with spurious traffic, bringing legitimate data and voice traffic to a halt. Spit (spam over Internet telephony) is the VoIP version of e-mail spam, which instead clogs up a voice mailbox with unwanted advertisements (perhaps generated by text-to-speech systems) or vishing attacks. Vulnerabilities in the SIP protocol for VoIP may allow social engineers to intercept calls, reroute calls, and tamper with calls. Finally, VoIP telephones are Internet devices that may run a variety of services, such as HTTP, TFTP, or telnet servers, which may be vulnerable to hacking. Since all of the VoIP phones in an organization are likely identical, a single vulnerability can compromise every phone in the organization.

SOLUTIONS AND COUNTERMEASURES TO SOCIAL ENGINEERING ATTACKS

Following the description of attacks, the chapter now turns to solutions. The first and most important level of defense against social engineering is organizational policy (Gragg, 2003). Setting up and enforcing information security policies gives clear indications to employees on what information can be communicated, under what conditions, and to whom. In a converged network, such policies need to specify appropriate information channels, appropriate means to identify the requester, and appropriate means to document the information transfer. As the attacker is ratcheting up the strong emotions that cajole or threaten the victim into cooperating, strong policies can make an employee more likely to resist threats, feelings of guilt, or a dangerous desire to help.

In addition to deploying strong policies, organizations can use the converged network to search for threats across multiple information channels in real time. In a converged environment, the strong emotions associated with social engineering could be detected over the phone or in an e-mail, and adverse actions could be tracked and stopped before an attack can succeed. In other words, convergence has the potential to help not just the social engineer, but also the staff in charge of countering such attacks.

Anti-Phishing Techniques

A number of anti-phishing techniques have been proposed to address the growing threat of phishing attacks. Most anti-phishing techniques involve hashing the password, either in the user’s head (Sobrado & Birget, 2005; Wiedenbeck, Waters, Sobrado, & Birget, 2006), using special browser plugins (Ross, Jackson, Miyake, Boneh, & Mitchell, 2005), using trusted hardware (for example, on tokens), or using a combination of special hardware and software (e.g., a cell phone; Parno, 2006).

All the technological solutions mentioned involve a way to hash passwords so that they are not reusable if captured on a phishing site or with a network sniffer. The downfall of all of these schemes is that the user can always be tricked into giving out a password through a different, unhashed channel, allowing the attacker to use the password later on. A good social engineer would be able to just call the victim and ask for the password over the phone. Additionally, even though these solutions are becoming increasingly user friendly and powerful, they all involve additional costs.
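The browser-extension approach cited above can be conveyed in a few lines: derive the password actually sent to each site from a master secret keyed by the site’s domain, so a password harvested on a look-alike domain is useless at the real one. This sketch is our simplified illustration, not the exact algorithm of the cited paper.

    import base64
    import hashlib
    import hmac

    def site_password(master_secret: str, domain: str) -> str:
        """Derive a per-domain password from one master secret."""
        digest = hmac.new(master_secret.encode(),
                          domain.lower().encode(),
                          hashlib.sha256).digest()
        return base64.urlsafe_b64encode(digest)[:12].decode()

    # The phishing domain yields a different password than the real site,
    # so a harvested value cannot be replayed at www.mybank.com.
    print(site_password("correct horse battery", "www.mybank.com"))
    print(site_password("correct horse battery", "www.mybank-secure.example"))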


Voice Analytics

We discussed earlier the negative implications of VoIP and its associated attacks (vishing). A positive outcome of data and voice convergence in the fight against social engineering is the ability to analyze voice on the fly, in real time, as well as on stored digitized voice mail.

Voice analytics (Mohney, 2006) allows caller identification based on voice print, and can also search for keywords, recognize emotions, and aggregate these information sources statistically with call date and time, duration, and origin. In particular, a voice print can provide additional safeguards when caller ID is spoofed. At the same time, given that the social engineer has similar resources for digitally altering her voice, voice analytics could employ more advanced techniques to thwart such attacks. For example, the caller could be asked to say a sentence in an angry voice or a calm voice (to preempt attacks using recorded voice data). Attacks by people who know and avoid “hot” words can be preempted by using a thesaurus to include synonyms.
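The keyword side of such a system reduces to scanning transcripts for “hot” words and their synonyms. The sketch below hard-codes a tiny synonym map for illustration; a deployed system would draw on a full thesaurus and on speech-to-text output rather than a ready-made transcript.

    # Illustrative hot words with synonyms an attacker might substitute.
    HOT_WORDS = {
        "password": {"password", "passphrase", "login code", "pin"},
        "wire":     {"wire", "transfer", "remit"},
        "urgent":   {"urgent", "immediately", "right away"},
    }

    def flag_transcript(transcript: str):
        text = transcript.lower()
        return [topic for topic, variants in HOT_WORDS.items()
                if any(v in text for v in variants)]

    call = "Hi, this is IT. I need your passphrase right away to fix your mailbox."
    print(flag_transcript(call))   # ['password', 'urgent']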
about phishing and social engineering.
Blacklisting A more complex and more memorable (for
the victims) example of penetration testing was
Another common technological solution against reported on the DarkReading site (Stasiukonis,
social engineering is a blacklist of suspicious or 2006d). The attacker team used a shopping card
unverified sites and persons. This might sound to open the secure access door, found and used lab
simple, especially given the ease of filtering coats to blend in, and connected to the company
Web sites, the ease of using voice recognition network at a jack in a conference room. Several
on digital phone lines, and the ease of using face employees actually helped the attackers out by
recognition (for example) in video. However, answering questions and pointing out directions.
maintenance of such a list can be problematic. As part of the final report, the team made a pre-
Additionally, social engineers take precautions sentation to the employees, which had a profound
to disguise Web presence, as well as voice and educational impact. Six months later, on a follow
physical appearance. Even though a converged up penetration testing mission, the team was un-
network may allow an organization to aggregate able to enter the premises. An employee, who first
several information sources to build a profile of an allowed the attackers to pass through a door she
attack or an attacker, the same converged network had opened, realized her mistake as soon as she
will also help the social engineer to disperse the got to her car. She returned, alerted the security
clues, to make detection more difficult. staff and confronted the attackers.
Palmer (2001) describes how an organization
Penetration Testing would locate “ethical hackers” to perform a pen-
etration testing exercise. The penetration testing
Penetration testing is another very effective tool plan involves common sense questions, about what
in identifying vulnerabilities, as well as a tool for needs to be protected, against what threats, and
motivating and educating users. As mentioned ear- using what level of resources (time, money, and
lier, users tend to be overconfident in their ability effort). A “get out of jail free card” is the contract
to handle not just malware, but social engineering between the organization initiating the testing
attacks as well. By mounting a penetration testing and the “ethical hackers” performing the testing.
attack, the IT staff can test against an entire range The contract specifies limits to what the testers
of levels of sophistication in attack. can do and requirements for confidentiality of
An exercise performed at Argonne National information gathered. An important point, often
Labs (Argonne, 2006) involved sending 400 forgotten, is that even if an organization performs


An important point, often forgotten, is that even if an organization performs a penetration testing exercise and then fixes all the vulnerabilities identified, follow-up exercises will be required to assess newly introduced vulnerabilities, improperly fixed vulnerabilities, or additional ones not identified during a previous test. In particular, despite the powerful message penetration testing can convey to potential victims of social engineering, there is always an additional vulnerability a social engineer may exploit, and there is always an employee who has not fully learned the lesson after the previous exercise.

Additionally, social engineering software is available to plan and mount a self-test similar to the Argonne one reported earlier (Jackson Higgins, 2006a). Intended mainly to test phishing vulnerabilities, the Core Impact penetration testing tool from Core Security (www.coresecurity.com) allows the IT staff to customize e-mails and to use social engineering considerations with a few mouse clicks (Core Impact, n.d.).

Data Filtering

One application that may address social engineering concerns at the boundary of the corporate network is that proposed by Provilla, Inc. A 2007 Cisco survey identified data leaks as the main concern of IT professionals (Leyden, 2007). Of the 100 professionals polled, 38% were most concerned with theft of information, 33% were most concerned about regulatory compliance, and only 27% were most concerned about virus attacks (down from 55% in 2006). Provilla (www.provilla-inc.com) claims that their DataDNA™ technology allows organizations to prevent information leaks, including identity theft, and to maintain compliance. The product scans the network looking for document fingerprints, on “every device...at every port, for all data types,” according to the company. The channels listed include USB, IM, Bluetooth, HTTP, FTP, and outside e-mail accounts (Hotmail, Gmail, etc.). Conceivably, the technology could be extended to include voice over IP protocols, although these are not mentioned on the company Web site at this time.
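Provilla does not publish how DataDNA™ works, but document fingerprinting of this general kind is often built on hashing overlapping word windows (“shingles”), so that even a copied fragment of a sensitive file is recognizable in outbound traffic. A generic sketch of that idea:

    import hashlib
    import re

    def shingles(text: str, k: int = 5):
        """Hash every k-word window; the set of hashes fingerprints the text."""
        words = re.findall(r"[a-z0-9]+", text.lower())
        for i in range(max(len(words) - k + 1, 1)):
            yield hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()

    def leak_score(document: str, outbound: str) -> float:
        doc, out = set(shingles(document)), set(shingles(outbound))
        return len(doc & out) / len(out) if out else 0.0

    secret = "quarterly results show a net loss of four million dollars pending audit"
    email = "fyi: results show a net loss of four million dollars, keep this quiet"
    print(leak_score(secret, email))   # well above zero: a fragment leaked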


Reverse Social Engineering

Another defensive weapon is to turn the tables and use social engineering against the attackers (Holz & Raynal, 2006). This technique can be used against less sophisticated attackers, for example, by embedding “call back” code in “toolz” posted on hacker sites. This can alert organizations about the use of such code and about the location of the prospective attacker. Alternatively, the embedded code could erase the hard drive of the person using it—with the understanding that only malevolent hackers would know where to find the code and would attempt to use it.

User Education

Among the tools available against social engineering, we saved arguably the most effective tool for last: educating users. Some of the technologies mentioned in this section have the potential to stop some social engineering attacks. Clearly, social engineers also learn about these technologies, and they either find ways to defeat the technologies or ways to circumvent them. Some experts go as far as to say that any “no holds barred” social engineering attack is bound to succeed, given the wide array of tools and the range of vulnerabilities waiting to be exploited. Still, educating users can patch many of these vulnerabilities and is likely to be one of the most cost-effective means to prevent attacks.

We cannot stress enough that user education is only effective when users understand that they can be victims of attacks, no matter how technologically aware they might be. Incidentally, penetration testing may be one of the most powerful learning mechanisms for employees, both during and after the attack. Stasiukonis (2006c) confesses that in 90% of the cases where he and his penetration testing team get caught, it is because a user decides to make a call to verify the identity of the attackers. The positive feelings of the person “catching the bad guys” and the impact of the news of the attack on the organization are guaranteed to make it a memorable lesson.

Educating users at all levels is critical. The receptionist of a company is often the first target of social engineering attacks (to get an internal phone directory, to forward a fax, or just to chat about who might be on vacation; Mitnick & Simon, 2002). On the other hand, information security officers are also targeted because of their critical access privileges. An attacker posing as a client of a bank crafted a spoofed e-mail message supposedly to report a phishing attack. When the security officer opened the e-mail, he launched an application that took control of the officer’s computer (Jackson Higgins, 2006b). A social engineering attack may succeed by taking the path of least resistance, using the least trained user; at the same time, an attack might fail because one of the best trained users happened to notice something suspicious and alerted the IT staff.

Educational efforts often achieve only limited success, and education must be an ongoing process. A series of studies by the Treasury Inspector General for Tax Administration (2007) used penetration testing to identify and assess risks, and then to evaluate the effectiveness of education. The studies found that IRS employees were vulnerable to social engineering even after training in social engineering had been conducted. In 2001, the penetration testers posed as computer support helpdesk representatives in telephone calls to IRS employees and asked the employees to temporarily change their password to one given over the phone. Seventy-one percent of employees complied. Due to this alarming rate, efforts were made to educate employees about the dangers of social engineering. To assess the effectiveness of the training, a similar test was conducted in 2004 and resulted in a response rate of 35%. However, another test in 2007 successfully convinced 60% of employees to change their password. One bright spot is that of the 40% of employees who were not duped by the social engineers, 50% cited awareness training and e-mail advisories as the reason for protecting their passwords, indicating that user education has the potential for success. In response to the latest study, the IRS is elevating the awareness training and is even emphasizing the need to discipline employees for security violations resulting from negligence or carelessness.

Clearly, user education is not limited to social engineering attacks that take advantage of converged networks. Any social engineering attack is less likely to succeed in an organization where employees are well informed and empowered by well-designed security policies. Education becomes more important on converged networks, to account for the heightened threat level and to allow users to take advantage of the available converged tools that may help prevent attacks.


CONCLUSION

Despite the negative press and despite the negative trends we discussed in this chapter, the good news is that the outlook is positive (Top myths, 2006a). The media often portrays the situation as “dire,” reporting on a seemingly alarming increase in the number of attacks. For one, the number of users and the number and usage of information systems are increasing steadily. That in itself accounts for a staggering increase in the number of incidents reported. Additionally, the awareness of the general population with respect to information security issues in general, and with respect to social engineering in particular, is also increasing. The media is responding to this increased interest by focusing more attention on such topics. Surveys indicate that the rate of occurrence of computer crime is in fact steady or even decreasing, and that only the public perception and increased usage make computer crime seem to increase. A typical analogy is the seemingly daunting vulnerabilities in Microsoft operating systems, which are in fact only a perceived outcome of the increased usage base and increased attractiveness for attackers (Top myths, 2006b).

Whether the rate of computer crime is increasing or not, social engineering remains a real problem that needs to be continually addressed. Convergence in telecommunications makes it easier for users to access several information channels through a unified interface on one or a small number of productivity devices. This same trend makes it easier for attackers to deploy blended attacks using several information channels against a potential victim, and makes it easier to reach the user through the same converged interface or productivity device. By its nature, convergence means putting all one’s eggs in one basket. The only rational security response is to guard the basket really well.

If there is one point we have tried hard to make painfully clear in this chapter, it is that education, rather than technological solutions, appears to be the best answer to the social engineering problem. Users who are aware of attack techniques, who are trained in following safe usage policies, and who are supported by adequate technological safeguards are much more likely to recognize and deflect social engineering attacks than users who rely only on technology for protection.

REFERENCES

Antonopoulos, A. M., & Knape, J. D. (2002). Security in converged networks (featured article). Internet Telephony, August. Retrieved April 15, 2007, from http://www.tmcnet.com/it/0802/0802gr.htm

Applekid (author’s name). (2007). The life of a social engineer. Retrieved April 15, 2007, from http://www.protokulture.net/?p=79

APWG. (2007). Phishing activity trends report for the month of February, 2007. Retrieved April 15, 2007, from http://www.antiphishing.org/reports/apwg_report_february_2007.pdf

Argonne. (2006). Simulated ‘social engineering’ attack shows need for awareness. Retrieved April 15, 2007, from http://www.anl.gov/Media_Center/Argonne_News/2006/an061113.htm#story4

bernz (author’s name). (n.d.). The complete social engineering FAQ. Retrieved April 15, 2007, from http://www.morehouse.org/hin/blckcrwl/hack/soceng.txt

CERT. (2002). Social engineering attacks via IRC and instant messaging (CERT® Incident Note IN-2002-03). Retrieved April 15, 2007, from http://www.cert.org/incident_notes/IN-2002-03.html

Cobb, M. (2006). Latest IM attacks still rely on social engineering. Retrieved April 15, 2007, from http://searchsecurity.techtarget.com/tip/0,289483,sid14_gci1220612,00.html

Core Impact. (n.d.). Core Impact overview. Retrieved April 15, 2007, from http://www.coresecurity.com/?module=ContentMod&action=item&id=32

Crawford, M. (2006). Social engineering replaces guns in today’s biggest bank heists. ComputerWorld (Australia), May. Retrieved April 15, 2007, from http://www.computerworld.com.au/index.php/id;736453614

Damle, P. (2002). Social engineering: A tip of the iceberg. Information Systems Control Journal, 2. Retrieved April 15, 2007, from http://www.isaca.org/Template.cfm?Section=Home&CONTENTID=17032&TEMPLATE=/ContentManagement/ContentDisplay.cfm

Espiner, T. (2006). Microsoft denies flaw in Vista. ZDNet UK, December 5. Retrieved April 15, 2007, from http://www.builderau.com.au/news/soa/Microsoft_denies_flaw_in_Vista/0,339028227,339272533,00.htm?feed=rss
Gaudin, S. (2007). Hackers use Middle East fears to push Trojan attack. Information Week, April 9. Retrieved April 15, 2007, from http://www.informationweek.com/windows/showArticle.jhtml?articleID=198900155

Gragg, D. (2003). A multi-level defense against social engineering. SANS Institute Information Security Reading Room. Retrieved April 15, 2007, from http://www.sans.org/reading_room/papers/51/920.pdf

Hollows, P. (2005). Hackers are real-time. Are you? Sarbanes-Oxley Compliance Journal, February 28. Retrieved April 15, 2007, from http://www.s-ox.com/Feature/detail.cfm?ArticleID=623

Holz, T., & Raynal, F. (2006). Malicious malware: Attacking the attackers (part 1). Security Focus, January 31. Retrieved April 15, 2007, from http://www.securityfocus.com/print/infocus/1856

Jackson Higgins, K. (2006a). Phishing your own users. Retrieved April 26, 2007, from http://www.darkreading.com/document.asp?doc_id=113055

Jackson Higgins, K. (2006b). Social engineering gets smarter. Retrieved April 26, 2007, from http://www.darkreading.com/document.asp?doc_id=97382

Jaques, R. (2007). UK smoking ban opens doors for hackers. Retrieved April 26, 2007, from http://www.vnunet.com/vnunet/news/2183215/uk-smoking-ban-opens-doors

Kelly, M. (2007). Chocolate the key to uncovering PC passwords. The Register, April 17. Retrieved April 26, 2007, from http://www.theregister.co.uk/2007/04/17/chocolate_password_survey/

Leyden, J. (2007). Data theft replaces malware as top security concern. The Register, April 19. Retrieved April 19, 2007, from http://www.theregister.co.uk/2007/04/19/security_fears_poll/

McMillan, R. (2006). Third Word exploit released. IDG News Service. Retrieved April 15, 2007, from http://www.techworld.com/applications/news/index.cfm?newsID=7577&pagtype=samechan

Mitnick, K. D., & Simon, W. L. (2002). The art of deception: Controlling the human element of security. Indiana: Wiley Publishing, Inc.

Mohney, D. (2006). Defeating social engineering with voice analytics. Black Hat Briefings, Las Vegas, August 2-3, 2006. Retrieved April 25, 2007, from http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Mohney.pdf

Naraine, R. (2006a). Voice phishers dialing for PayPal dollars. eWeek, July 7. Retrieved April 15, 2007, from http://www.eweek.com/article2/0,1895,1985966,00.asp

Naraine, R. (2006b). Hackers use BBC news as IE attack lure. eWeek, March 30. Retrieved April 15, 2007, from http://www.eweek.com/article2/0,1895,1944579,00.asp

Naraine, R. (2006c). Drive-by IE attacks subside; threat remains. eWeek, March 27. Retrieved April 15, 2007, from http://www.eweek.com/article2/0,1895,1943450,00.asp

Online Safety Study. (2004, October). AOL/NCSA online safety study, conducted by America Online and the National Cyber Security Alliance. Retrieved April 15, 2007, from http://www.staysafeonline.info/pdf/safety_study_v04.pdf

Orgill, G. L., Romney, G. W., Bailey, M. G., & Orgill, P. M. (2004, October 28-30). The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems. In Proceedings of the 5th Conference on Information Technology Education CITC5 ‘04, Salt Lake City, UT, USA (pp. 177-181). New York: ACM Press.

Palmer, C. C. (2001). Ethical hacking. IBM Systems Journal, 40(3). Retrieved April 15, 2007, from http://www.research.ibm.com/journal/sj/403/palmer.html

Parizo, E. B. (2005). New bots, worm threaten AIM network. Retrieved April 25, 2007, from http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1150477,00.html

Parno, B., Kuo, C., & Perrig, A. (2006, February 27-March 2). Phoolproof phishing prevention. In Proceedings of the 10th International Conference on Financial Cryptography and Data Security, Anguilla, British West Indies.

Phishers raise their game. (2006). Retrieved April 25, 2007, from http://software.silicon.com/security/0,39024655,39164058,00.htm

Plewes, A. (2007, March). VoIP threats to watch out for—a primer for all IP telephony users. Retrieved April 18, 2007, from http://www.silicon.com/silicon/networks/telecoms/0,39024659,39166244,00.htm

Ross, B., Jackson, C., Miyake, N., Boneh, D., & Mitchell, J. C. (2005). Stronger password authentication using browser extensions. In Proceedings of the 14th Usenix Security Symposium, 2005.

Ryst, S. (2006, July 11). The phone is the latest phishing rod. BusinessWeek.

Schneier, B. (2000). Secrets and lies. John Wiley and Sons.

Sobrado, L., & Birget, J.-C. (2005). Shoulder surfing resistant graphical passwords. Retrieved April 15, 2007, from http://clam.rutgers.edu/~birget/grPssw/srgp.pdf

Stasiukonis, S. (2006a). Social engineering, the USB way. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1

Stasiukonis, S. (2006b). How identity theft works. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=102595

Stasiukonis, S. (2006c). Banking on security. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=111503

Stasiukonis, S. (2006d). Social engineering, the shoppers’ way. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=99347

Stasiukonis, S. (2007). By hook or by crook. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=119938

TechNET. (2006). How to protect insiders from social engineering threats. Retrieved April 15, 2007, from http://www.microsoft.com/technet/security/midsizebusiness/topics/complianceandpolicies/socialengineeringthreats.mspx

Thomson, I. (2006). ‘Evil twin’ Wi-Fi hacks target the rich. iTnews.com.au, November. Retrieved April 15, 2007, from http://www.itnews.com.au/newsstory.aspx?CIaNID=42673&r=rss

Top myths. (2006a). The 10 biggest myths of IT security: Myth #1: ‘Epidemic’ data losses. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=99291&page_number=2

Top myths. (2006b). The 10 biggest myths of IT security: Myth #2: Anything but Microsoft. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=99291&page_number=3

Treasury Inspector General for Tax Administration. (2007). Employees continue to be susceptible to social engineering attempts that could be used by hackers (TR 2007-20-107). Retrieved August 18, 2007, from http://www.ustreas.gov/tigta/auditreports/2007reports/200720107fr.pdf

Vishing. (2006). Secure Computing warns of vishing. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=98732

Wiedenbeck, S., Waters, J., Sobrado, L., & Birget, J. (2006, May 23-26). Design and evaluation of a shoulder-surfing resistant graphical password scheme. In Proceedings of the Working Conference on Advanced Visual Interfaces AVI ‘06, Venezia, Italy (pp. 177-184). New York: ACM Press. http://doi.acm.org/10.1145/1133265.1133303

Wilson, T. (2007). Five myths about black hats. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=118169


Chapter X
A Social Ontology for
Integrating Security and
Software Engineering
E. Yu
University of Toronto, Canada

L. Liu
Tsinghua University, China

J. Mylopoulos
University of Toronto, Canada

Abstract

As software becomes more and more entrenched in everyday life in today’s society, security looms large
as an unsolved problem. Despite advances in security mechanisms and technologies, most software
systems in the world remain precarious and vulnerable. There is now widespread recognition that se-
curity cannot be achieved by technology alone. All software systems are ultimately embedded in some
human social environment. The effectiveness of the system depends very much on the forces in that
environment. Yet there are few systematic techniques for treating the social context of security together
with technical system design in an integral way. In this chapter, we argue that a social ontology at the
core of a requirements engineering process can be the basis for integrating security into a requirements
driven software engineering process. We describe the i* agent-oriented modelling framework and show
how it can be used to model and reason about security concerns and responses. A smart card example
is used to illustrate. Future directions for a social paradigm for security and software engineering are
discussed.


INTRODUCTION

It is now widely acknowledged that security cannot be achieved by technological means alone. As more and more of our everyday activities rely on software, we are increasingly vulnerable to lapses in security and deliberate attacks. Despite ongoing advances in security mechanisms and technologies, new attack schemes and exploits continue to emerge and proliferate.

Security is ultimately about relationships among social actors — stakeholders, system users, potential attackers — and the software systems that are instruments of their actions. Nevertheless, there are few systematic methods and techniques for analyzing and designing social relationships as technical systems alternatives are explored.

Currently, most of the research on secure software engineering methods focuses on the technology level. Yet, to be effective, software security must be treated as originating from high-level business goals that are taken seriously by stakeholders and decision makers making strategic choices about the direction of an organisation. Security interacts with other high-level business goals such as quality of service, costs, time-to-market, evolvability and responsiveness, reputation and competitiveness, and the viability of business models. What is needed is a systematic linkage between the analysis of technical systems design alternatives and an understanding of their implications at the organisational, social level. From an analysis of the goals and relationships among stakeholders, one seeks technical systems solutions that meet stakeholder goals.

In this chapter, we describe the i* agent-oriented modelling framework and how it can be used to treat security as an integral part of software system requirements engineering. The world is viewed as a network of social actors depending on each other for goals to be achieved, tasks to be performed, and resources to be furnished. Each actor reasons strategically about alternate means for achieving goals, often through relationships with other actors. Security is treated as a high-level goal held by (some) stakeholders that needs to be addressed from the earliest stages of system conception. Actors make tradeoffs among competing goals such as functionality, cost, time-to-market, and quality of service, as well as security.

The framework offers a set of security requirements analysis facilities to help users, administrators, and designers better understand the various threats and vulnerabilities they face, the countermeasures they can take, and how these can be combined to achieve the desired security results within the broader picture of system design and the business environment. The security analysis process is integrated into the main requirements process, so that security is taken into account from the earliest moment. The technology of smart cards and the environment surrounding their usage provides a good example to illustrate the social ontology of i*.

In the next section, we review the current challenges in achieving security in software systems, motivating the need for a social ontology. Given that a social modelling and analysis approach is needed, what characteristics should it have? We consider this in the following section. The two subsequent sections describe the ontology of the i* strategic actors modelling framework and outline a process for analyzing the security issues surrounding a smart card application. The last section reviews several areas of related work and discusses how a social ontology framework can be complementary to these approaches.

BACKGROUND

Despite ongoing advances in security technologies and software quality, new vulnerabilities continue to emerge. It is clear that there can be no perfect security. Security inevitably involves tradeoffs (Schneier, 2003). In practice, therefore, all one can hope for is “good enough” security (Sandhu, 2003).


But how does one determine what is good enough? Who decides what is good enough? These questions suggest that software and information security cannot be addressed by technical specialists alone. Decisions about security are ultimately made by stakeholders — people who are affected by the outcomes — users, investors, the general public, etc. — because the tradeoffs are about how their lives would be affected. In electronic commerce, consumers decide whether to purchase from a vendor based on the trustworthiness of the vendor’s business and security practices. Businesses decide how much and where to invest in security to reduce exposure to a tolerable level. In healthcare, computerized information management can streamline many processes, but e-health will become a reality only if patients and the general public are satisfied that their medical records are protected and secure. Healthcare providers will participate only if liability concerns can be adequately addressed.

Tradeoffs are being made by participants regarding competing interests and priorities. Customers and businesses make judgments about what is adequate security for each type of business, in relation to the benefits derived from online transactions. Patients want their personal and medical information to be kept private, but do not want privacy mechanisms to interfere with the quality of care. In national defense, secrecy is paramount, but can also lead to communication breakdown. In each case, security needs to be interpreted within the context of the social setting, by each stakeholder from his/her viewpoint.

Current approaches to security do not allow these kinds of tradeoffs to be conveyed to system developers to guide design. For example, UML extensions for addressing security (see Chapter I for a review) do not lend themselves well to the modelling of social actors, their concerns about alternate security arrangements, and how they reason about tradeoffs. Access control models can specify policies, but cannot support reasoning about which policies are good for whom and what alternate policies might be more workable. They cannot explain why certain policies meet with resistance and non-compliance.

Each of the common approaches in security modelling and analysis focuses on selective aspects of security, which are important in their own right, but cannot provide the guidance needed to achieve “good enough” overall security. Most approaches focus on technical aspects, neglecting the social context, which is crucial for achieving effective security in practice. The technical focus is well served by a mechanistic ontology (i.e., concepts that are suitable for describing and reasoning about automated machinery — objects, operations, state transitions, etc.). The importance of social context in security suggests that a different set of concepts is needed. From the previous discussion, we propose that the following questions are important for guiding system development in the face of security challenges:

• Who are the players who have an interest in the intended system and its surrounding context? Who would be affected by a change?
• What are their strategic interests? What are their business and personal objectives? What do they want from the system and the other players?
• What are the different ways in which they can achieve what they want?
• How do their interests complement or interfere with each other? How can players achieve what they want despite competing or conflicting interests?
• What opportunities exist for one player to advance its interests at the expense of others? What vulnerabilities exist in the way that each actor envisions achieving its objectives?
• How can one player avoid or prevent its interests from being compromised by others?


These are the kind of questions that can directly engage stakeholders, helping them uncover issues and concerns. Stakeholders need the help of technical specialists to think through these questions, because most strategic objectives are accomplished through technological systems. Stakeholders typically do not know enough about technology possibilities or their implications. Technologists do not know enough about stakeholder interests to make choices for them. In order that stakeholder interests can be clarified, deliberated upon, and conveyed effectively to system developers, a suitable modelling method is needed to enable stakeholders and technologists to jointly explore these questions. The answers to these questions will have direct impact on system development, as they set requirements and guide technical design decisions.

We argue therefore that a social ontology is needed to enable security concerns to become a driving force in software system development. In the next section, we explore the requirements for such a social ontology.

APPROACH

If a treatment of security requires attention to the social context of software systems, can the social analysis be given full weight in a software engineering methodology that is typically dominated by a mechanistic worldview? How can the social modelling be reconciled and integrated with mainstream software modelling?

It turns out that a social paradigm for software system analysis is motivated not only by security concerns, but is consistent with a general shift in the context of software and information systems. The analysis of computers and information systems used to be machine-centric when hardware was the precious resource. The machine was at the centre, defining the human procedures and structures needed to support its proper functioning. Today, hardware and software are commoditized and distributed everywhere. Human practices and imagination determine how hardware and software are put to use, not the other way round. Pervasive networking, wired and wireless, has also contributed to blurring the notion of “system.” Computational resources can be dynamically harnessed in ad hoc configurations (e.g., through Web services protocols in service-oriented architectures) to provide end-to-end services for a few moments, then dissolved and reconfigured for another ad hoc engagement. Even computational entities, in today’s networked environment, are better viewed as participants in social networks than as fixed components in a system with predefined structure and boundary. Increasingly, the computational services that we desire will not be offered as a single pre-constructed system, but by a conglomeration of interacting services operated by different organisations, possibly drawing on content owned by yet other providers.

The questions raised in the previous section arise naturally from today’s open networked environments, even if one were not focusing on security concerns. The relevance of a social ontology is therefore not unique to security. Competing interests and negative forces that interfere with one’s objectives are ever present in every organisation and social setting. They are accentuated in an open network environment. In security scenarios, the negative forces are further accentuated as they materialize into full-fledged social structures, involving malicious actors collaborating with other actors, engaging in deliberate attacks, possibly violating conventions, rules, and laws. Security can therefore be seen as covering the more severe forms of a general phenomenon. Competing and conflicting interests are inherent in social worlds. Negative forces do not come only from well identified malicious external agents, but can be present legitimately within one’s organisation, among one’s associates, and even among the multiple roles that one person may play. It may not be possible to clearly separate security analysis from the analysis of “normal” business.


We conclude, therefore, that a social ontology would serve well for “normal” business analysis, recognizing the increasingly “social” nature of software systems and their environments. A social ontology offers a smooth integration of the treatment of normal and security scenarios, as the latter merely refer to one end of a continuum covering positive and negative forces from various actors.

Given this understanding, the social ontology should not be preoccupied with those concepts conventionally associated with security. For example, the concepts of asset, threat, attack, and counter-measure are key concepts for security management. In the social ontology we aim to construct, we do not necessarily adopt these as primitive concepts. Instead, the social ontology should aim to be as general as possible, so that the concepts may be equally applicable to positive as well as negative scenarios. The general ontology is then applied to security. Special constructs unique to security would be introduced only if the expressiveness of the general constructs is found to be inadequate. The principle of Occam’s razor should be applied to minimize the complexity of the ontology. If desired, shorthand notations for common recurring patterns can be defined in terms of the primitives. The premises behind a social ontology are further discussed in Yu (2001a, 2001b).

BASIC CONCEPTS OF THE i* STRATEGIC MODELLING FRAMEWORK

The i* framework (Yu, 1993, 1997) proposes an agent-oriented approach to requirements engineering centering on the intentional characteristics of the agent. Agents attribute intentional properties, such as goals, beliefs, abilities, and commitments, to each other and reason about strategic relationships. Dependencies give rise to opportunities as well as vulnerabilities. Networks of dependencies are analyzed using a qualitative reasoning approach. Agents consider alternative configurations of dependencies to assess their strategic positioning in a social context. The name i* (pronounced eye-star) refers to the concept of multiple, distributed “intentionality.”

The framework is used in contexts in which there are multiple parties (or autonomous units) with strategic interests, which may be reinforcing or conflicting in relation to each other. The i* framework has been applied to business process modelling (Yu, 1993), business redesign (van der Raadt, Gordijn, & Yu, 2005; Yu et al., 2001), requirements engineering (Yu, 1997), architecture modelling (Gross & Yu, 2001), and COTS selection (Franch & Maiden, 2003), as well as to information systems security.

There are three main categories of concepts: actors, intentional elements, and intentional links. The framework includes a strategic dependency (SD) model — for describing the network of relationships among actors — and a strategic rationale (SR) model — for describing and supporting the reasoning that each actor has about its relationships with other actors.

Actor

In i*, an actor ( ) is used to refer generically to any unit to which intentional dependencies can be ascribed. An actor is an active entity that carries out actions to achieve its goals by exercising means-ends knowledge. It is an encapsulation of intentionality, rationality, and autonomy. Graphically, an actor is represented as a circle, and may optionally have a dotted boundary, with intentional elements inside.

152
A Social Ontology for Integrating Security and Software Engineering

can be achieved in different ways, prompting A resource ( ) is a physical or informational


alternatives to be considered. A goal can be a entity, which may serve some purpose. From
business goal or a system goal. Business goals the viewpoint of intentional analysis, the main
are about the business or state of the affairs the concern with a resource is whether it is available.
individual or organisation wishes to achieve Resources are shown graphically as rectangles.
in the world. System goals are about what the The belief ( ) construct is used to represent
target system should achieve, which, generally, domain characteristics, design assumptions and
describe the functional requirements of the target relevant environmental conditions. It allows do-
system. In the i* graphical representation, goals main characteristics to be considered and properly
are represented as a rounded rectangle with the reflected in the decision making process, hence
goal name inside. facilitating later review, justification, and change
A softgoal ( ) is typically a quality (or non- of the system, as well as enhancing traceability.
functional) attribute on one of the other intentional Beliefs are shown as ellipses in i* graphical
elements. A softgoal is similar to a (hard) goal notation.
except that the criteria for whether a softgoal is
achieved are not clear-cut and a priori. It is up Strategic Dependency Model
to the developer to judge whether a particular
state of affairs in fact sufficiently achieves the A strategic dependency (SD) model consists of a
stated softgoal. Non-functional requirements, set of nodes and links. Each node represents an
such as performance, security, accuracy, reus- actor, and each link between two actors indicates
ability, interoperability, time to market and cost that one actor depends on the other for something
are often crucial for the success of a system. In in order that the former may attain some goal. We
i*, non-functional requirements are represented call the depending actor the depender, and the
as softgoals and addressed as early as possible in actor who is depended upon the dependee. The
the software lifecycle. They should be properly object around which the dependency relationship
modelled and addressed in design reasoning before centers is called the dependum. By depending
a commitment is made to a specific design choice. on another actor for a dependum, an actor (the
In the i* graphical representation, a softgoal is depender) is able to achieve goals that it was not
shown as an irregular curvilinear shape. able to without the dependency, or not as easily or
Tasks ( ) are used to represent the specific as well. At the same time, the depender becomes
procedures to be performed by agents, which vulnerable. If the dependee fails to deliver the
specifies a particular way of doing something. It dependum, the depender would be adversely
may be decomposed into a combination of sub- affected in its ability to achieve its goals. A de-
goals, subtasks, resources, and softgoals. These pendency link ( ) is used to describe such
sub-components specify a particular course of an inter-actor relationship. Dependency types are
action while still allowing some freedom. Tasks used to differentiate the kinds of freedom allowed
are used to incrementally specify and refine in a relationship.
solutions in the target system. They are used to In a goal dependency, an actor depends on
achieve goals or to “operationalize” softgoals. another to make a condition in the world come
These solutions provide operations, processes, true. Because only an end state or outcome is
data representations, structuring, constraints, and specified, the dependee is given the freedom to
agents in the target system to meet the needs stated choose how to achieve it.
in the goals and softgoals. Tasks are represented In a task dependency, an actor depends on an-
graphically as a hexagon. other to perform an activity. The depender’s goal

153
A Social Ontology for Integrating Security and Software Engineering

for having the activity performed is not given. The The four types of dependencies reflect different
activity description specifies a particular course levels of freedom that is allowed in the relation-
of action. A task dependency specifies standard ship between depender and dependee.
procedures, indicates the steps to be taken by Figure 1 shows a SD model for a generic
the dependee. smart card-based payment system involving six
In a resource dependency, an actor depends actors. This example is adapted from Yu and Liu
on another for the availability of an entity. The (2001). A Card Holder depends on a Card Issuer
depender takes the availability of the resource to to be allocated a smart card. The Terminal Owner
be unproblematic. depends on Card Holder to present the card for
The fourth type of dependency, softgoal de- each transaction. The Card Issuer in turn depends
pendency, is a variant of the goal dependency. It on the Card Manufacturer and Software Manu-
is different in that there are no a priori, cut-and- facturer to provide cards, devices, and software.
dry criteria for what constitutes meeting the goal. The Data Owner is the one who has control of the
The meaning of a softgoal is elaborated in terms data within the card. He depends on the Terminal
of the methods that are chosen in the course of Owner to submit transaction information to the
pursuing the goal. The dependee contributes to central database. In each case, the dependency
the identification of alternatives, but the deci- means that the depender actor depends on the
sion is taken by the depender. The notion of the dependee actor for something in order to achieve
softgoal allows the model to deal with many of some (internal) goal.
the usually informal concepts. For example, a The goal dependency New Account Be Created
service provider’s dependency on his customer from the Card Issuer to the Data Owner means
for continued business can be achieved in dif- that it is up to the Data Owner to decide how to
ferent ways. The desired style of continued busi- create a new account. The Card Issuer does not
ness is ultimately decided by the depender. The care how a new account is created; what mat-
customer’s softgoal dependency on the service ters is that, for each card, an account should be
provider for “keep personal information con- created. The Card Issuer depends on the Card
fidential” indicates that there is not a clear-cut Holder to apply for a card via a task dependency
criterion for the achievement of confidentiality. by specifying standard application procedures.

Figure 1. Strategic dependency model of a generic smart card system

154
A Social Ontology for Integrating Security and Software Engineering

If the Card Issuer were to indicate the steps for intentional relationships in terms of means-ends,
the Data Owner to create a new account, then the decomposition, contribution, and correlation
Data Owner would be related to the Card Issuer links.
by a task dependency instead.
The Card Issuer’s dependencies on the Card • Means-ends links ( ) are used to describe
Manufacturer for cards and devices, the manufac- how goals can be achieved. Each task con-
turer’s dependencies on Card Issuer for payment nected to a goal by a means-ends link is one
are modelled as resource dependencies. Here the possible way of achieving the goal.
depender takes the availability of the resource to • Decomposition links ( ) define the sub-
be unproblematic. elements of a task, which can include sub-
The Card Holder’s softgoal dependency on tasks, sub-goals, resources, and softgoals.
the Card Issuer for Keep Personal Information The softgoals indicate the desired qualities
Confidential indicates that there is not a clear-cut that are considered to be part of the task.
criterion for the achievement of confidentiality. In The sub-tasks may in turn have decomposi-
the Manufacturer’s softgoal dependency on Card tion links that lead to further sub-elements.
Issuer, Continued Business could be achieved Sub-goals indicate the possibility of alternate
in different ways. The desired style of continued means of achievement, with means-ends
business is ultimately decided by the depender. links leading to tasks.
The strategic dependency model of Figure • A contribution link (→) describes the
1 is not meant to be a complete and accurate qualitative impact that one element has on
description of any particular smart card system. another. A contribution can be negative or
It is intended only for illustrating the modelling positive. The extent of contribution is judged
features of i*. to be partial or sufficient based on Simon's
In conventional software systems modelling, concept of satisficing (Simon, 1996), as in
the focus is on information flows and exchanges the NFR framework (Chung, Nixon, Yu, &
— what messages actors or system components Mylopoulos, 2000). Accordingly, contribu-
send to each other. With the social ontology of i*, tion link types include: help (positive and
the focus is on intentional relationships — what partial), make (positive and sufficient), hurt
are the actors’ expectations and constraints on (negative and partial), break (negative and
each other. Since actors are intentional, strategic, sufficient), some+ (positive of unknown
and have autonomy, they reflect on their relation- extent), some- (negative of unknown extent).
ships with other actors. If these relationships are Correlation links (dashed arrows) are used
unsatisfactory, they will seek alternative ways of to express contributions from one element to
associating with others. other elements that are not explicitly sought,
Security concerns arise naturally from this but are side effects.
perspective. A social ontology therefore provides
a way to integrate security into software system Strategic Rationale Model
engineering from the earliest stages of conception,
and at a high level of abstraction. The strategic rationale (SR) model provides a
detailed level of modelling by looking “inside”
Intentional Links actors to model internal intentional relationships.
Intentional elements (goals, tasks, resources,
Dependencies are intentional relationships and softgoals) appear in SR models not only as
between actors. Within each actor, we model external dependencies, but also as internal ele-

155
A Social Ontology for Integrating Security and Software Engineering

Figure 2. Strategic rationale model of card manufacturer

ments arranged into a predominantly hierarchical these tasks is satisfied. Provide Total Card Solution
structure of means-ends, task-decompositions and will help the Security of the system (represented
contribution relationships. with a Help contribution link to Security), while
The SR model in Figure 2 elaborates on the Provide Simple Card Solution is considered to have
rationale of a Card Manufacturer. The Card no significant impact on security if it is applied to
Manufacturer’s business objective Manufacture cards with small monetary value. The Simple Card
Card Hardware is modeled as a “hard” functional Solution is good for the goal of Low Cost whereas
goal (top right corner). Quality requirements the Total Card Solution is bad. This is supported
such as Security and Low Cost are represented as by the belief that “Total Card Solution, such as
softgoals. The different means for accomplishing Mondex, is expensive.” Beliefs are usually used
the goal are modeled as tasks. The task Provide to represent such domain properties, or design
Total Card Solution can be further decomposed assumption or environmental condition, so that
into three sub-components (connected with traceability of evidence of design decision could
task-decomposition links): sub-goal of Get Paid, be explicitly maintained with the model.
sub-task Develop Card Solution, and sub-task During system analysis and design, softgoals
Manufacture Card & Devices. To perform the task such as Low Cost and Security [card] are system-
Manufacture Card & Devices, the availability of atically refined until they can be operational-
Materials need to be taken into consideration, ized and implemented. Unlike functional goals,
which is modeled as a resource. nonfunctional qualities represented as softgoals
In the model, task node Provide Simple Card frequently interact or interfere with each other, so
Solution (such as the Millicent solution), and the graph of contributions is usually not a strict
Provide Total Card Solution (such as the Mondex tree structure (Chung et al., 2000).
solution) are connected to the goal with means-
ends links. This goal will be satisfied if any of

156
A Social Ontology for Integrating Security and Software Engineering

Agents, Roles, and Positions turer, and the Software Manufacturer. These actors
are modeled as roles since they represent abstrac-
To model complex relationships among social tions of responsibilities and functional units of the
actors, we further define the concepts of agents, business model. Then concrete agents in smart
roles, and positions, each of which is an actor, but card systems are identified. For instance, actors
in a more specialized sense. in a Digital Stored Value Card system include
A role ( ) is an abstract actor embodying Customer, Merchant, Subcontractor Company,
expectations and responsibilities. It is an abstract and their instances. These agents can play one or
characterization of the behavior of a social actor more roles in different smart card systems. Here,
within some specialized context or domain of Financial Institution is modeled as a position that
endeavor. An agent ( ) is a concrete actor with bridges the multiple abstract roles it covers, and
physical manifestations, human or machine, with the real world agents occupying it. Initially, hu-
specific capabilities and functionalities. A set of man/organisational actors are identified together
roles packaged together to be assigned to an agent with existing machine actors. As the requirements
is called a position. A position ( ) is intermediate analysis proceeds, more actors could be added
in abstraction between a role and an agent, which in, including new system agents such as security
often has an organisational flavor. Positions can monitoring system, counter-forgery system, etc.,
COVER roles. Agents can OCCUPY positions. An when certain design choices have been made, and
agent can PLAY one or more roles directly. The new functional entities are added.
INS construct is used to represent the instance- An agent is an actor with concrete, physical
and-class relation. The ISA construct is used to manifestations, such as a human individual. An
express conceptual generalization/specialization. agent has dependencies that apply regardless of
Initially, human actors representing stakeholders what role he/she/it happens to be playing. For ex-
in the domain are identified together with existing ample, in Figure 3, if Jerry, a Card Holder desires
machine actors. As the analysis proceeds, more a good credit record, he wants the credit record
actors are identified, including new system agents, to go towards his personal self, not to the posi-
when certain design choices have been made, and tions and abstract roles that Jerry might occupy
new functional entities are added. or play. We use the term agent instead of person
Figure 3 shows some actors in the domain. At for generality, so that it can be used to refer to
the top, six generic abstract roles are identified, human as well as artificial (hardware, software,
including the Card Holder, the Terminal Owner, the or organisational) agents. Customer and Merchant
Data Owner, the Card Issuer, the Card Manufac- are represented as agent classes and groups. De-

Figure 3. Actor hierarchy (roles, positions, and agents) in a smart card system

157
A Social Ontology for Integrating Security and Software Engineering

pendencies are associated with a role when these sponsibilities are split among multiple parties. The
dependencies apply regardless of who plays the processor, I/O, data, programs, and network may
role. For example, we consider Card Holder an be controlled by different, and potentially hostile,
abstract role that agents can play. The objective parties. By discussing the security ramifications of
of obtaining possession of the card, and deciding different ways of splitting responsibilities, we aim
when and whether to use it, are associated with to show how the proposed modelling framework
the role, no matter who plays the role. can help produce a proper understanding of the
The INS construct represents the instance-and- security systems that employ smart cards. Figure
class relation. For example, Mr. Lee’s Convenience 4 shows the basic steps to take during the process
Store is an instance of Merchant, and Jerry is an of domain requirements analysis with i*, before we
instance of Customer. The ISA construct expresses consider security. The process can be organised
conceptual generalization/ specialization. For into the following iterative steps.
example, a Subcontractor Company is a kind of
Technical Company. These constructs are used Actor Identification
to simplify the presentation of strategic models
with roles, positions, and agents. There can be In step (1), the question “who is involved in the
dependencies from an agent to the role it plays. system?” will be answered. According to the defi-
For example, a Merchant who plays the role of nition given above, we know that all intentional
Terminal owner may depend on that role to attract units may be represented as actors. For example,
more customers. Otherwise, he may choose not in any smart card based systems, there are many
to play that role. parties involved. An actor hierarchy composed
Roles, positions, and agents can each have of roles, positions, and agents such as the ones
subparts. In general, aggregate actors are not in Figure 3 is created.
compositional with respect to intentional prop-
erties. Each actor, regardless of whether it has Goal/Task Identification
parts, or is part of a larger whole, is taken to be
intentional. Each actor has inherent freedom and In the step (2) of the requirements analysis process,
is therefore ultimately unpredictable. There can the question “what does the actor want to achieve?”
be intentional dependencies between the whole will be answered. As shown in the strategic ra-
and its parts (e.g., a dependency by the whole on tionale (SR) model of Figure 2, answers to this
its parts to maintain unity). question can be represented as goals capturing
the high-level objectives of agents. During system
analysis and design, softgoals such as low cost and
DOMAIN REQUIREMENTS security are systematically refined until they can
ANALYSIS WITH i* be operationalized and implemented. Using the
SR model, we can reason about each alternative’s
We now illustrate how the social ontology of i* al- contributions to high-level non-functional qual-
lows security issues to be identified and addressed ity requirements including security, and possible
early in the requirements process. We continue tradeoffs.
with the example of smart card systems design. The refinements of goals, tasks and softgoals
Security in smart card systems is a challenging (step (3) in Figure 4) are considered to have reached
task due to the fact that different aspects of the an adequate level once all the necessary design
system are not under a single trust boundary. Re- decisions can be made based on the existing in-

158
A Social Ontology for Integrating Security and Software Engineering

Figure 4. Requirements elicitation process with i*

(1) Actor Identification

(3 (2) Goal/Task Identification


(5

(4) Dependency Identification

formation in the model. The SR model in Figure important questions has yet to be answered (i.e.,
3 was created by running through steps (1), (2), what if things go wrong)? What if some party
(3) in Figure 4 iteratively. involved in the smart card system does not be-
have as expected? How bad can things get? What
Strategic Dependency Identification prevention tactics can be considered?” These are
exactly the questions we want to answer in the
In the step (4) of the requirements analysis pro- security requirements analysis.
cess, the question “how do the actors relate to
each other?” will be answered. Figure 1 shows
the SD model for a generic smart card-based SECURITY REQUIREMENTS
payment system. By analyzing the dependency ANALYSIS WITH i*
network in a Strategic Dependency model, we can
reason about opportunities and vulnerabilities. We now extend the process to include attacker
A Strategic Dependency model can be obtained analysis, vulnerability analysis, and countermea-
by hiding the internal rationales of actors in a sure analysis. The dashed lines and boxes on the
Strategic Rationale model. Thus, the goal, task, right hand side of Figure 5 indicate a series of
resource, softgoal dependencies in a Strategic analysis steps to deal with security. These steps
Dependency model can be seen as originating are integrated into the basic domain requirements
from SR models. engineering process, such that threats from poten-
The kinds of analysis shown above answers tial attackers are anticipated and countermeasures
questions such as “who is involved in the system? for system protection are sought and equipped
What do they want? How can their expectations wherever necessary. Each of the security related
be fulfilled? And what are the inter-dependencies analysis steps (step [1] to [7]) will be discussed
between them?” These answers initially provide in detail in the following subsections.
a sketch of the social setting of the future system,
and eventually result in a fairly elaborate behav-
ioral model where certain design choices have
already been made. However, another set of very

159
A Social Ontology for Integrating Security and Software Engineering

Figure 5. Security requirements elicitation process with i*

(1)Actor Identification Attacker Identification [1]

(3) (2)Goal/Task Identification

Malicious Intent Identification [2]


(5)
(4)Dependency Identification
Vulnerability Analysis[3]

[7]
Attacking Measure Identification[4]
[6]

Countermeasure Identification[5]

Figure 5

Figure 6. Modelling attackers in strategic actors model

Attacker Analysis In this analysis, each actor is considered in turn


as an attacker. This attacker inherits the intentions,
The attacker analysis steps aim to identify poten- capabilities, and social relationships of the cor-
tial system abusers and their malicious intents. responding legitimate actor (i.e., the internal goal
The basic premise here is that all the actors are hierarchy and external dependency relationships
assumed “guilty until proven innocent.” In other in the model). This may serve as a starting point
words, given the result of the basic i* requirements of a forward direction security analysis (step [1]
modelling process, we now consider any one of in Figure 5). A backward analysis starting from
the actors (roles, positions, or agents) identified identifying possible malicious intents and valu-
so far can be a potential attacker to the system able business assets can also be done.
or to other actors. For example, we want to ask, Proceeding to step [2] of the process, for each
“In what ways can a terminal owner attack the attacker identified, we combine the capabilities
system? How will he benefit from inappropriate and interests of the attacker with those of the
manipulation of the card reader, or transaction legitimate actor. For simplicity, we assume that
data?” an attacker may be modeled as a role or an agent.

160
A Social Ontology for Integrating Security and Software Engineering

To perform the attacker analysis, we consider that in an i* model. An attacker may be motivated by
each role may be played by an attacker agent, financial incentives (softgoal Be Profitable), or
each position may be occupied by an attacker by non-financial ones (e.g., Desire for Notoriety).
agent, and that each agent may play an attacker These malicious intents may lead to various attack
role (Figure 6). The analysis would then reveal strategies, such as Financial Theft, Impersonation
the commandeering of legitimate resources and Attack, Gain Unauthorized Access, Attack on
capabilities for illicit use. The intents and strate- Privacy, and Publicity Attack.
gies of the attackers are explicitly represented
and reasoned about in the models. Dependency Vulnerability Analysis
This approach treats all attackers as insider
attackers, as attacks are via associations with Dependency vulnerability analysis aims at iden-
normal actors. We set a system boundary, then tifying the vulnerable points in the dependency
exhaustively search for possible attackers. Ran- network (step [3] in Figure 5). A dependency
dom attackers such as Internet hackers/crackers, relationship makes the depender inherently vul-
or attackers breaking into a building can also be nerable. Potential attackers may exploit these
dealt with by being represented as sharing the vulnerabilities to actually attack the system,
same territory with their victim. By conducting so that their malicious intents can be served. i*
analysis on the infrastructure of the Internet, dependency modelling allows a more specific
we may identify attackers by treating Internet vulnerability analysis because the potential failure
resources as resources in the i* model. By conduct- of each dependency can be traced to a depender
ing building security analysis, break-in attackers, and to its dependers. The questions we want to
or attackers sharing the same workspace can be answer here are “which dependency relationships
identified. Alternatively, we could adopt an op- are vulnerable to attack?”, “What are the chain
posite assumption, i.e., assume there is a trusted effects if one dependency link is compromised?”
perimeter for each agent, all the potential threat The analysis of dependency vulnerabilities does
sources within this trusted perimeter are ignored, not end with the identification of potential vul-
measures will only be taken to deal with threats nerable points. We need to trace upstream in the
from outside of the perimeter. dependency network, and see whether the attacked
As shown in the Strategic Rationale model in dependency relationship impacts other actors in
Figure 7, the motives of Attacker in the smart card the network.
system may be modeled as intentional elements

Figure 7. Motives of attacker in a smart card system

161
A Social Ontology for Integrating Security and Software Engineering

Figure 8. Dependencies (in other words, vulnerable points) in a smart card system

Figure 8 is a simplified version of the SD Proceeding to step [4], we now focus on how an
model of Figure 4, showing only the softgoal attacker may attack the vulnerable points identi-
dependencies. We assume that each of the actors fied above by exploring the attacker’s capacities.
in the SD model can be a potential attacker. And We model potential attacks (including fraud) as
as an attacker, an actor will fail to deliver the negative contributions from the attackers (from
expected dependencies directed to it, of whom their specific methods of attack) toward the de-
it is the dependee. pendee-side dependency link. A Break contribu-
For instance, the Card Holder depends on the tion indicates that the attack is sufficient to make
Terminal Owner to Read/Write Card Correctly. To the softgoal unviable. For clarity of analysis, we
analyze the vulnerability arising from this depen- place the attack-related intentional elements into
dency, we consider the case where the terminal agents called “Attacker Playing Role X.” Details of
owner is not trustworthy. And we try to identify the attack methods (e.g., Steal Card Information,
the potential attacks by answering question of “In Send Falsified Records) can be elaborated by
what possible ways could the attacker break this further means-ends and decomposition analysis.
dependency relationship?” To do this, we elaborate Thus, the steps and methods of the attack can be
on the agent Attacker Playing Terminal Owner. modeled and analyzed. Other internal details of
Starting from attacker’s potential motivations, the Terminal Owner are not relevant and are thus
we refine the high-level goals of the attackers not included in the model. Negative contribution
(and possible attack routes) based on analysis of links are used to show attacks on more specific
the SD and SR models of the normal operations vulnerabilities of the depender (e.g., refinements
of the smart card (e.g., what resources an actor of Transact with Card).
accesses, what types of interactions exist, etc.). In The dependencies that could be broken are
this way, we may identify a number of potential highlighted with a small square in Figure 9. When
attacks that are sufficient to make this dependency a dependency is compromised, the effect could
not viable (Break). propagate through the dependency network up-
stream along the dependency links. For example,

162
A Social Ontology for Integrating Security and Software Engineering

Figure 9. Attacks directed to vulnerable dependencies in a smart card system

if the Terminal Owner is not Quickly Be Paid, he Figure 5, countermeasure analysis is an iterative
may stop accepting card as a payment option. process. Adding protective measures may bring
new vulnerabilities to the system, so a new round
Countermeasure Analysis of vulnerability analysis and countermeasure
analysis will be triggered (step [6]).
During countermeasure analysis, system design- With the knowledge of some potential at-
ers make decisions on how to mitigate vulnerabili- tacks and frauds, the depender may first look for
ties and set up defenses against potential attackers. trustworthy partners, or change their methods of
This type of analysis covers general types of operation, or add control mechanisms (counter-
attacks, and formulates solutions by selectively measures) to protect their interests. A counter-
applying, combining, or instantiating prototypical measure may prevent the attack from happening
solutions to address the specific needs of various by either making it technically impossible, or by
stakeholders. The general types of attacks and eliminating the attacker’s intent of attack.
the prototypical solutions can be retrieved from Figure 10 shows a SR model with defensive
a taxonomy or knowledge repository. actions as well as attacks. Protection mechanisms
Necessary factors for the success of an attack are adopted to counteract specific attacks. In some
are attacker’s motivations, vulnerabilities of the cases, the protections are sufficient to defeat a
system, and attacker’s capabilities to carry out the strong attack (defense Break link (dotted arrow)
attack. Thus, to counteract a hypothetical attack, pointing to an attack Break link). In other cases,
we seek measures that will sufficiently negate countermeasures are only partially effective in
these factors. Based on the above analysis, we al- defending against their respective attacks (through
ready understand the attackers’ possible malicious the Hurt or Some- contribution types).
intents and system vulnerabilities. As shown in

163
A Social Ontology for Integrating Security and Software Engineering

Figure 10. Resistance models defeating hypothetical attacks

Figure 11. Countermeasure effectiveness evaluation model

Qualitative Goal-Reasoning cess is an interactive one, requiring the analyst


Mechanism to make judgments whenever the outcome is
inconclusive given the combination of potentially
A qualitative goal-reasoning process is used to conflicting contributions.
propagates a series of labels through the models. To begin, the analyst labels all the attack leaf
A label (or satisficing status) on a node is used to nodes as Satisficed since they are all judged to
indicate whether that intentional element (goal, be possible (Figure 11). Similarly, all the defense
task, resource, or softgoal) is viable or not (e.g., leaf nodes are judged to be viable, thus labelled
whether a softgoal is sufficiently met). Labels can Satisfied. The values are then propagated along
have values such as Satisfied “ ,” Denied “ contribution links. Before adding defense nodes,
,” Weakly Satisfied “ ” and Weakly Denied the Card Holder’s dependency on the Terminal
“ ,” Undecided “ ,” etc. (Liu et al., 2003). Leaf Owner for Read Write Card Correctly softgoal
nodes (those with no incoming contributions) are was labelled as Denied, because of the potentially
given labels by the analyst based on judgment of strong attacks from Terminal Owner. However, as
their independent viability. These values are then countermeasures are added, the influences of the
propagated “upwards” through the contribution attacks will be correspondingly weakened.
network (following the direction of the contribu- Regarding Read Write Card Correctly, three
tion links, and from dependee to depender). The possible attacks are identified. One of them Steal
viability of the overall system appears in the high Card Info is counteracted by three defense mea-
level nodes of the various stakeholders. The pro- sures, though each one is partial (Hurt). Another

164
A Social Ontology for Integrating Security and Software Engineering

attack Remember Account Number & Password real world smart card systems, various concrete
has a defense of unknown strength (Some-). The physical or organisational parties play or occupy
third attack has no defensive measure. The softgoal these roles. These are shown in Table 1. Thus,
dependency Read Write Card Correctly is thus to actually understand their trust and security
judged to be weakly unviable ( ). On the other situations, we have to apply the generic model
side, as the Data Owner’s protection mechanism to the real world configurations. We consider
could sufficiently defeat the four possible attacks, two representative kinds of smart card based
the Transmit Complete and Correct Data softgoal systems. One is the Digital Stored Value Card,
dependency is thus judged to be viable ( ). Po- the other is the Prepaid Phone Card (Schneier &
tential attacks lead to the erosion of viability of Shostack, 1998).
the smart card system. Incorporating sufficient
countermeasures restores viability. Digital Stored Value Card System
A prototype knowledge-based tool is being
constructed to support this framework for analyz- These are payment cards intended to be substi-
ing information systems security. tutes for cash. Both Mondex and VisaCash are
examples of this type of system. The Customer
Trust Analysis Based on System is the Card Holder. The Merchant is the Terminal
Configuration Owner. The Financial Institution that supports
the system is both the Data Owner and the Card
In the models previously given, the various par- Issuer. The Smart Card Technology Company,
ticipants in a smart card system were modelled as such as Mondex, is both the Card Manufacturer
abstract roles and analyzed generally. However, in and the Software Manufacturer.

Table 1. Actors (roles, positions, and agents) in various smart card system configurations

Generic Smart Card Terminal Card Data Card Software


Card Model Holder Owner Issuer Owner Manufacturer Manufacturer

Digital Stored
Customer Merchant Financial Institution Technology Company
Value card
Digital Check Financial
Customer Merchant Customer Technology Company
Card Institution
Prepaid Phone
Customer Phone Company
Card
Account-based
Customer Phone Company Customer Technology Company
Phone Card

Key store card User Technology Company

Employee
Employee Employer
Access Token
Web browsing
Customer Financial Institution Technology Company
card

165
A Social Ontology for Integrating Security and Software Engineering

Figure 12. A threat model of digital stored value card system

Figure 13. A threat model of prepaid phone card system

In such a configuration, the previously sepa- the attack from Data Owner to Card Issuer can
rated roles of Data Owner and Card Issuer are be ignored since they both played by the Finan-
Played by the same physical agent, namely, Fi- cial Institution. These two attacking-defending
nancial Institution. Similarly, Card Manufacturer relationships are highlighted in Figure 11 with
and Software Manufacturer are combined into little squares.
one physical agent — the Smart Card Technology
Company. Figure 12 describes the threat model Prepaid Phone Card System
of a digital stored value card. Here the Software
Manufacturer’s attack on Card Manufacturer can These are special-use stored value cards. The
be ignored since they belong to the same agent Customer is the Card Holder. The Phone Company
— the Smart Card Technology Company. Also plays all the four roles of Terminal Owner, Data

166
A Social Ontology for Integrating Security and Software Engineering

Owner, Manufacturer, and Card Issuer. Figure 13 concepts in security models include: subject,
shows the threat model of a prepaid card system. object, action, clearance level, user, group, role,
Under such a system configuration, more attack- task, principal, owner, etc.
defense pairs disappear. Only four possible attacks Since security models are idealized abstrac-
need to be considered now. Three of them are tions, their application in real life requires a se-
from the phone company, which includes violat- ries of translations, involving interpretation and
ing privacy, to issue unusable card, to read write decision making at each stage. Organisational
card incorrectly. The other attack is from the Card structures must be analyzed so as to select the
Holder, who might use an illegitimate card. appropriate models, or a combination of mod-
Note that each time new roles are created, the els. Policies need to be interpreted and codified
possibility of new attacks arises. These models properly to achieve the desired results. Real world
reflect Schneier’s observation that the fewer splits entities and relationships are mapped to the model
we make, the more trustworthy the target system abstractions. Finally, the security model is mapped
is likely to be (Schneier & Shostack, 1998). to security implementation mechanisms. The lev-
els of abstractions used in security requirements,
design, and implementation therefore mirror those
RELATED WORK in software system development and provide a
basis for integration.
This section is complementary to the review The social ontology outlined in this chapter
presented in Chapter I. Each approach to secu- can facilitate and augment an integrated security
rity and software engineering has an ontology, development process by enriching the reasoning
whether explicitly defined or implied. We expect support needed to arrive at decisions at each stage
that a social ontology can be complementary and in the process. The ontology in existing security
beneficial to various approaches to integrating models are intended for the automated enforce-
security and software engineering. We begin with ment of specified security rules (e.g., to decide
work from the security community, followed by whether to give access). They do not support
software engineering approaches that have paid reasoning about why particular models or poli-
special attention to security. cies are appropriate for the target environment,
especially when there are conflicting objectives
Security Models and interpretations. Furthermore, many of the
simplifying assumptions that formal models
Formal models have been an important part of rely on do not hold in real life (Denning, 1999).
computer security since mainframe computing The social ontology of strategic actors provides
(Samarati & Vimercati, 2001). Security policies a framework for reasoning about the use of such
originate from laws, regulations, or organisational models from a pragmatic, broader perspective.
practices, and are typically written in natural In the development of new security models,
language. Security models using mathematical there is a trend towards ontologies that are more
formalisms can provide a precise formulation closely aligned with the ontology of organisational
of the policies for implementation. More impor- work. For example, role based access control
tantly, formally specified policy models can be (RBAC) (Ferraiolo, Sandhu, Gavrila, Kuhn, &
mathematically verified to guarantee security Chandramouli, 2001; Sandhu, Coyne, Feinstein,
properties. As mathematical abstractions, they & Youman, 1996) allows privileges to be organ-
provide unambiguous specifications that are in- ised according to organisational roles such as
dependent of implementation mechanisms. Some loan officer or branch manager. These trends are

167
A Social Ontology for Integrating Security and Software Engineering

consistent with the proposed social ontology ap- reasoning about how their goals may be achieved
proach, though RBAC models, like other access or hindered.
control models, are meant for enforcement, not Another drawback of checklists and guidelines
strategic organisational reasoning. is that they tend to be too generic. Experience and
expert judgment are needed to properly apply them
Security Management Frameworks to specific systems and organisational settings.
Such judgments are hard to trace or maintain over
While formal computer security models focus on time as the systems evolve.
policies built into the automated system, the over- The explicit modelling of strategic relation-
all security of information and software systems ships can provide a more specific analysis of
depends very much on organisational practices. sources of vulnerabilities and failures, thus also
Security practices have existed long before the allowing countermeasures to be targeted appro-
computer age. Many of the principles continue to priately. Using the strategic dependencies and
apply and have been adapted to software systems. rationales, one can trace the impact of threats
Standards have been defined to promote best along the paths to determine which business
practices (e.g., ISO 17799, 1999). goals are affected. The impact on goals other
OCTAVE (Alberts & Dorofee, 2002), than security can also be determined through
CRAMM, and FRAP (Peltier, 2001), are oriented the model since they appear in the same model.
toward decision making from a business perspec- One can see how security goals might compete
tive, leading to management, operational, and with or are synergistic with non-security goals,
technical requirements and procedures. Although thus leading to decisions that take the overall set
few frameworks have explicit information models, of goals into account. Using an agent-oriented
they do have implicit ontologies revolving around ontology, one can determine which actors are
key concepts such as asset, attack, threat, vulner- most affected by which security threats, and
ability, countermeasure, and risk. are therefore likely to be most motivated to take
The main focus of these frameworks is on measures. Tradeoffs are done from the viewpoint
prescriptive guidelines. Tables and charts are of each stakeholder. This approach provides a
used to enumerate and cross-list vulnerabilities good basis for an ontology of security, which
and threats. Potential countermeasures are sug- can mediate between business reasoning from
gested. Risks are computed from potential losses an organisational perspective and system design
arising from estimated likelihood of threats. reasoning from a technical perspective.
Since quantitative estimates are hard to come Some preliminary work have been done to
by, most assessments rely on ratings such as low, integrate the i* modelling ontology with risk-based
medium, high. security management approaches (Gaunard &
While formal computer security models at- Dubois, 2003; Mayer, Rifaut, & Dubois, 2005).
tempt to guarantee security (requiring simplify- Further extensions could incorporate economic
ing assumptions that may depart from reality), theories and reasoning (e.g., Anderson, 2001;
security management frameworks acknowledge Camp & Lewis, 2004). The ontology of i* can
that security breaches will occur, and suggest provide the structure representation of social re-
countermeasures to reduce risk. This pragmatic lationships on which to do economic reasoning.
stance is very much in the spirit of the social
ontology proposed in this chapter. Security
management frameworks can be augmented by
the modelling of strategic actor relationships and

168
A Social Ontology for Integrating Security and Software Engineering

Software Systems Design describe which activities are done where and by
Frameworks whom. Other information systems security ap-
proaches include the automated secure system
Having considered work originating from the development method (Booysen & Eloff, 1995)
security side, we now turn to contributions from and the logical controls specification approach
the software engineering and system development (Baskerville, 1993; Siponen & Baskerville,
perspective. 2001).
These approaches illustrate the extension of
Extensions to UML (see Chapter I for in- conventional information systems ontologies to
formation of such approaches).The ontology incorporate security-specific ontologies. Different
of UML, consisting of objects and classes, ac- concepts are added to each level of modelling (e.g.,
tivities, states, interactions, and so forth, with database schemas, process or function schemas,
its security-oriented extensions, are useful for workflow schemas, and organisation diagrams).
specifying the technical design of security fea- As with UML extensions, these approaches tend
tures and functionalities, but does not support to emphasize the notation needed to express se-
the reasoning that lead up to those requirements curity features in the requirements specification
and designs. As indicated in the second section of or design descriptions and how those features
this chapter, technical design notations are use- can be analyzed. However, the notations (and the
ful for recording the results of decisions, but do implied ontology) do not provide support for the
not offer support for arriving at those decisions. deliberations that lead up to the security require-
The social ontology proposed in this chapter can ments and design. A social ontology that supports
therefore complement UML-based approaches, explicit reasoning about relationships among
such as the one presented in Chapter IX, by sup- strategic actors, as outlined in this chapter, can
porting the early-stage requirements modelling be a helpful extension to these approaches.
and reasoning that can then be propagated to Responsibility modelling. A number of
the technical design stage, resulting in design approaches center around the notion of respon-
choices expressed in UML-like design notations. sibility. In Strens and Dobson (1994), when an
Stakeholder deliberations and tradeoffs therefore agent delegates an obligation, the agent becomes
are effectively conveyed to technical designers. a responsibility principal, and the receiver of the
Conversely, the effect of technical choices can delegation process is a responsibility holder. An
be propagated upstream to enable stakeholders obligation is a high-level mission that the agent
to appreciate the consequences as they appear in can fulfill by carrying out activities. Agents
the stakeholders’ world. cannot transfer their responsibilities, only their
Extensions to information systems model- obligations. Three kinds of requirements are
ling and design. In the information systems derived from responsibilities: need-to-do, need-
area, Pernul (1992) proposes secure data schemas to-know and need-for-audit. The need-to-know
(extension of entity-relationship diagrams) and requirements relate to security policies — which
secure function schemas (extension of data flow subjects (e.g., users) should be allowed to access
diagrams). In Herrmann and Pernul (1999) and which objects (e.g., files, etc.) so that they are able
Röhm and Pernul (1999), these models are ex- to fulfill their responsibilities.
tended to include a business process schema, with Backhouse and Dhillon (1996) also adopt a
tasks, data/material, humans, legal bindings and responsibilities analysis approach, incorporat-
information flow, and an organisational schema ing speech acts theory. The model for automated
with role models and organisation diagrams to profile specification (MAPS) approach (Pottas

169
A Social Ontology for Integrating Security and Software Engineering

& Solms, 1995) uses responsibilities and role of understanding the environment and context of
models to generate information security profiles the intended system so that the requirements will
(such as access control) from job descriptions and truly reflect what stakeholders want.
organisational policies.
This group of work has a more explicit ontology Goal-oriented requirements engineering.
of social organisation. The emphasis is on the map- Traditional requirements languages for software
pings between organisational actors and the tasks specification focus on structure and behavior, with
or activities they have to perform. While actors or ontologies that center around entities, activities,
agents have responsibilities, they are not viewed states, constraints, and their variants. A goal-ori-
as having strategic interests, and do not seek ented ontology allows systems to be placed within
alternate configurations of social relationships the intentional setting of the usage environment.
that favor those interests. The focus of attention Typically, goal-oriented requirements engineering
is on functional behaviors and responsibilities. frameworks employ AND/OR tree structures (or
Security is treated as additional functions to be variants) to analyze and explore alternate system
incorporated, and there are no attempts to deal with definitions that will contribute to stakeholder
interactions and tradeoffs between security and goals in different ways. Security can be readily
other non-functional objectives such as usability integrated into such a framework since attacks and
or maintainability. The social ontology of i* can threats interfere with the normal achievement of
therefore be quite complementary to these ap- stakeholder goals. Security controls and counter-
proaches. Other socio-organisational approaches measures can be derived from defensive goals to
are reviewed in Dhillon and Backhouse (2001). counteract malicious actions and intents.
The NFR framework: Security as softgoal.
Requirements Engineering The NFR framework (Chung, 1993; Chung et al.,
Approaches to Security 2000) is distinctive from most of the above cited
approaches to security in that it does not start
While security needs to be integrated into all with vulnerabilities and risks, nor from security
stages of software engineering, there is gen- features and functions. It starts by treating security
eral agreement that integration starting from the as one among many non-functional requirements.
earliest stages is essential. It is well known that As with many other non-functional requirements
mistakes early in the software process can have such as usability, performance, or information
far reaching consequences in subsequent stages accuracy, security is viewed as a goal whose
that are difficult and costly to remedy. Fred Brooks operational meaning needs to be interpreted ac-
(1995) had noted that the requirements stage is cording to the needs of the specific application
the most difficult, and suggested that software setting. This interpretation is done by a series
engineering should focus more on “building of refinements in a goal graph until the point
the right system,” and not just on “building the (called operationalization) where subgoals are
system right.” sufficiently concrete as to be accomplishable by
In requirements engineering research, a large implementable actions and mechanisms, such as
part of the effort has been devoted to verifying access control mechanisms or protocols. At each
that the requirements statements are precise, un- stage in the refinement, subgoals are judged to be
ambiguous, consistent, and complete. Recently, contributing qualitatively to the parent goals in
more attention has been given to the challenge different ways. Because the nature and extent of
the contribution requires judgement from expe-

170
A Social Ontology for Integrating Security and Software Engineering

The NFR framework thus offers a systematic approach for achieving "good enough" security — a practical objective in real life (Sandhu, 2003; Schneier, 2003) that has been hard to achieve in conventional mathematical formalisms. A formal treatment of the satisficing semantics of softgoals is offered in Chung et al. (2000).

The NFR framework is also distinctive in that it allows security goals to be analyzed and understood at the same time as other potentially competing requirements, for example, usability, performance, maintainability, and evolvability. In the past, it has been difficult to deal with these non-functional requirements early in the development life cycle. Typically, functional requirements dominate the design process. Experienced and expert designers take non-functional requirements into account intuitively and implicitly, but without support from systematic frameworks, languages, or tools. The softgoal graph approach acknowledges that security needs to compete with other goals during requirements analysis and during design. Different aspects of security may also compete with each other. The NFR goal-oriented approach supports reasoning about tradeoffs among these competing goals and how they can be achieved.

Beyond clarifying requirements, the NFR softgoals are used to drive subsequent stages in system design and implementation, thus offering a deep integration of security into the software engineering process.

A related body of work is in quality attributes of software architecture, for example, the ATAM approach (Kazman, Klein, & Clements, 2000) for architectural evaluation. Many of the basic elements are similar to the NFR framework. The classification of quality attributes and mechanisms (for security and other attributes), however, is viewed from an evaluation viewpoint. The taxonomy structure of quality attributes is not seen as goals to be elaborated based on tradeoffs encountered in the particular system. Quality attributes are concretized in terms of metrics, which are different for each quality, so trade-offs are difficult across different metrics.

The KAOS framework: Goals, obstacles, and anti-goals. KAOS (Dardenne, van Lamsweerde, & Fickas, 1993; van Lamsweerde, 2001, 2004; van Lamsweerde, Brohez, Landtsheer, & Janssens, 2003) is a goal-oriented requirements engineering framework that focuses on the systematic derivation of requirements from goals. It includes an outer layer of informally specified goals, and an inner layer of formalized goal representation and operations using temporal logic. It is therefore especially suitable for real-time and safety-critical systems. Refinement patterns are developed making use of temporal logic relationships.

The KAOS ontology includes obstacles, which impede goal achievement. The methodology provides techniques for identifying and resolving obstacles. To incorporate security analysis, attackers present obstacles to security goals. New security requirements are derived from attack generation and resolution.
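As a hedged illustration of this inner formal layer (our own example written in the temporal-logic style KAOS uses, not taken from the cited papers), a confidentiality goal and the intruder anti-goal obtained by negating it might be specified as:

\begin{align*}
&\textbf{Goal } \mathit{Maintain}[\mathit{CardDataConfidentiality}]:\\
&\qquad \Box\,\big(\forall u\colon \mathit{Agent}.\ \mathit{Knows}(u,\mathit{cardData}) \rightarrow \mathit{Authorized}(u)\big)\\
&\textbf{Anti\text{-}goal } (\text{attacker's objective, the goal's negation}):\\
&\qquad \Diamond\,\big(\exists a\colon \mathit{Agent}.\ \mathit{Knows}(a,\mathit{cardData}) \wedge \neg\,\mathit{Authorized}(a)\big)
\end{align*}

Refining the anti-goal yields concrete attack subgoals; the resolutions of those subgoals become new security requirements on the system.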
Tree structures have been used in the security community for analyzing the structure of threats (Schneier, 1999), and in the safety community for the analysis of faults and hazards (Helmer et al., 2002). Experiences from these approaches can be incorporated into goal-oriented frameworks.
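To illustrate the kind of analysis such tree structures support, the sketch below (our invention, loosely in the spirit of Schneier's attack trees; the node names and costs are made up) evaluates an AND/OR tree to find the attacker's cheapest path to the root goal:

# Illustrative AND/OR attack tree. Each leaf carries an estimated attacker
# cost; an OR node costs as much as its cheapest child, an AND node as much
# as all of its children together.

def leaf(name, cost):
    return {"name": name, "cost": cost}

def node(name, kind, children):
    return {"name": name, "kind": kind, "children": children}

def cheapest_attack(n):
    if "children" not in n:
        return n["cost"]
    costs = [cheapest_attack(c) for c in n["children"]]
    return min(costs) if n["kind"] == "OR" else sum(costs)

tree = node("Obtain victim's card data", "OR", [
    leaf("Sniff unencrypted traffic", 10),
    node("Defeat weak encryption", "AND", [
        leaf("Capture enough ciphertext", 30),
        leaf("Run key-recovery attack", 20),
    ]),
])

print(cheapest_attack(tree))   # -> 10: the unprotected channel wins

Defensive goals can then be prioritized by how much they raise the cost of the cheapest path.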
Agent-Oriented Requirements Engineering

The agent-oriented approach adopts goal-oriented concepts and techniques, but treats goals as originating from different actors. The i* modelling framework views actors as having strategic interests. Each actor aims to further its own interests in exploring alternative conceptions of the future system and how the system will affect its relationships to other actors. This may be contrasted with other frameworks which may include notions of actor that are non-intentional (e.g., in use case diagrams in UML) or non-strategic (e.g., in KAOS, where agents are passive recipients of responsibility assignments at the end of a goal refinement process).
i* adopts the notion of softgoal from the NFR framework, but makes further distinctions with goal, task, and resource. Softgoals are operationalized into tasks, which may in turn contain decompositions that include softgoals.

Security issues are traced to antagonistic goals and dependencies among attackers and defenders. As in the NFR framework, security is treated as much as possible within the same notational and reasoning framework as for other non-functional requirements (as softgoals), but extended to include functional elements (as goals, tasks, and resources). Security is therefore not treated in isolation, but interacts with other concerns at all steps throughout the process. The illustration of i* in this chapter is based on the example in Yu and Liu (2000, 2001). Further illustrations are in Liu et al. (2002), Yu and Cysneiros (2001), Liu et al. (2003), Liu and Yu (2003, 2004).
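As a rough sketch of this ontology (our simplification — i* is a graphical notation, and the actor and dependum names below merely echo the chapter's smart card example), strategic dependencies and an antagonistic actor might be captured as:

# Minimal i*-style strategic dependencies (illustrative names only).
# depender --(dependum: goal | task | resource | softgoal)--> dependee

from dataclasses import dataclass

@dataclass
class Actor:
    name: str

@dataclass
class Dependency:
    depender: Actor
    dependum: str
    kind: str            # "goal", "task", "resource", or "softgoal"
    dependee: Actor

card_holder = Actor("Card Holder")
card_issuer = Actor("Card Issuer")
attacker = Actor("Attacker")

dependencies = [
    Dependency(card_holder, "Transaction Completed", "goal", card_issuer),
    Dependency(card_holder, "Security [card data]", "softgoal", card_issuer),
]

# A simple security question to ask of the model: which dependencies would
# an antagonistic actor target, and who is vulnerable if they fail?
for d in dependencies:
    if d.kind == "softgoal":
        print(f"{attacker.name} threatens '{d.dependum}': "
              f"{d.depender.name} is vulnerable via {d.dependee.name}")

Because each dependency records who depends on whom and for what, the model makes explicit which actors become vulnerable when an attacker interferes with a dependum.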
The i* approach has been adopted and extended in a number of directions. The Tropos framework (Bresciani, Perini, Giorgini, Giunchiglia, & Mylopoulos, 2004; Castro, Kolp, & Mylopoulos, 2002) further develops the i* approach into a full-fledged software engineering methodology, using the agent-oriented social ontology originating from requirements modelling to drive architectural design, detailed design, and eventual implementation on agent-based software platforms. Formal Tropos incorporates formalization techniques similar to KAOS, so that automated tools such as model checking can be applied to verify security properties (Liu et al., 2003).

A number of extensions to i* have been developed to address specific needs of security modelling and analysis. Mouratidis et al. (2003a, 2003b, 2004, 2005; also Chapter VIII) introduced the concepts of security reference diagram and security constraints. Common security concepts such as secure entities, secure dependencies, and secure capabilities are reinterpreted within the i* ontology. The security constraint concept attaches a security-related strategic dependency to the dependency that it applies to. An intuitive benefit of this concept is that the association between the two is indicated without having to refer to the internal rationale structures of actors. An attack scenarios representation structure is developed that aims to support the analysis of specific attacking and protecting situations at a more detailed design stage. New language structures developed include secure capability and attacking link.

Giorgini et al. (2003, 2005; also Chapter VIII) introduced four new primitive relationships related to security requirements: trust, delegation, offer, and owner relation. These new primitives offer an explicit treatment of security concepts such as permission, ownership, and authority, which allows a more detailed analysis.

In Crook, Ince, and Nuseibeh (2005), the problem of modelling access policies is addressed by extending the Tropos approach (Liu et al., 2003) to ensure that security goals can be achieved and that operational requirements are consistent with access policies.

Misuse/Abuse Cases

Misuse and abuse cases techniques (Alexander, 2001; Sindre & Opdahl, 2000, 2001; see also the review in Chapter I) are complementary to goal-oriented techniques, as they offer different ways of structuring requirements knowledge (Rolland, Grosz, & Kla, 1999). Use cases are action-oriented and include sequences and conditionals. Goal refinements are (mostly) hierarchical, covering multiple levels of abstraction. In addressing security requirements, the development of misuse/abuse cases can be assisted by using goal analysis. Conversely, goal analysis can be made concrete by considering positive and negative use cases and scenarios. Note that use cases are better suited to later stages in requirements analysis, since they assume that the system boundary is already defined. Unlike the strategic actors in i*, actors in use cases are non-intentional and serve to delineate the boundary of the automated system.
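To show the complementary, action-oriented structure concretely, here is a sketch of a misuse case record whose fields loosely follow the template work of Sindre and Opdahl (2001); the content is an invented example, not one from the chapter:

# Illustrative misuse case, cross-linked to the goals it threatens.
# Field names loosely follow Sindre & Opdahl's template; values are made up.
misuse_case = {
    "name": "Eavesdrop on card transaction",
    "misuser": "Attacker within radio range",
    "basic_path": [
        "Misuser captures frames broadcast during a transaction",
        "Misuser extracts card data from the unencrypted payload",
    ],
    "threatens_goals": ["Security [card data]", "Privacy [card holder]"],
    "mitigating_use_case": "Encrypt the transaction channel",
}

The threatens_goals field is where the two techniques meet: each misuse case points back into the goal model, and each security goal can be made concrete by the misuse cases written against it.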
CONCLUSION

In this chapter, we have argued that a social ontology can provide the basis for integrating security and software engineering. We presented the social ontology of i* and illustrated how it can be used to include security goals when designing a smart card system. We have outlined how a social ontology is complementary to a number of techniques in security engineering and in software engineering, thus building common ground between the two areas.

ACKNOWLEDGMENT

The authors (1 & 3) gratefully acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada, Bell University Laboratories, and author (2) the National Key Research and Development Plan (973, no. 2002CB312004) and NSF China (no. 60503030).

REFERENCES

Alberts, C., & Dorofee, A. (2002, July). Managing information security risks: The OCTAVE (SM) approach. Boston: Addison-Wesley.

Alexander, I. (2002, September). Modelling the interplay of conflicting goals with use and misuse cases. Proceedings of the 8th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ-02), Essen, Germany (pp. 9-10).

Alexander, I. (2003, January). Misuse cases: Use cases with hostile intent. IEEE Software, 20(1), 58-66.

Anderson, R. (2001). Security engineering: A guide to building dependable distributed systems. New York: Wiley.

Backhouse, J., & Dhillon, G. (1996). Structures of responsibilities and security of information systems. European Journal of Information Systems, 5(1), 2-10.

Baskerville, R. (1993). Information systems security design methods: Implications for information systems development. Computing Surveys, 25(4), 375-414.

Boehm, B. W. (1988). A spiral model of software development and enhancement. IEEE Computer, 21(5), 61-72.

Booysen, H. A. S., & Eloff, J. H. P. (1995). A methodology for the development of secure application systems. Proceedings of the 11th IFIP TC11 International Conference on Information Security.

Bresciani, P., Perini, A., Giorgini, P., Giunchiglia, F., & Mylopoulos, J. (2004). Tropos: An agent-oriented software development methodology. Autonomous Agents and Multi-Agent Systems, 8(3), 203-236.

Brooks, F. (1995, August). The mythical man-month: Essays on software engineering (20th anniversary ed.). Boston: Addison-Wesley.

Castro, J., Kolp, M., & Mylopoulos, J. (2002). Towards requirements-driven information systems engineering: The Tropos project. Information Systems, 27(6), 365-389.

Chung, L. (1993). Dealing with security requirements during the development of information systems. In C. Rolland, F. Bodart, & C. Cauvet (Eds.), Proceedings of the 5th International Conference on Advanced Information Systems Engineering, CAiSE '93 (pp. 234-251). Springer.
Chung, L., Nixon, B. A., Yu, E., & Mylopoulos, J. (2000). Non-functional requirements in software engineering. Kluwer Academic Publishers.

CRAMM – CCTA (Central Computer and Telecommunications Agency, UK). Risk analysis and management method. Retrieved from http://www.cramm.com/cramm.htm

Crook, R., Ince, D., & Nuseibeh, B. (2005, August 29-September 2). On modelling access policies: Relating roles to their organisational context. Proceedings of the 13th IEEE International Requirements Engineering Conference (RE'05), Paris (pp. 157-166).

Dardenne, A., van Lamsweerde, A., & Fickas, S. (1993). Goal-directed requirements acquisition. Science of Computer Programming, 20(1-2), 3-50.

Denning, D. E. (1998). The limits of formal security models. National Computer Systems Security Award acceptance speech. Retrieved October 18, 1999, from www.cs.georgetown.edu/~denning/infosec/award.html

Dhillon, G., & Backhouse, J. (2001). Current directions in IS security research: Toward socio-organizational perspectives. Information Systems Journal, 11(2), 127-154.

Ferraiolo, D., Sandhu, R., Gavrila, S., Kuhn, R., & Chandramouli, R. (2001, August). Proposed NIST standard for role-based access control. ACM Transactions on Information and Systems Security, 4(3), 224-274.

Franch, X., & Maiden, N. A. M. (2003, February 10-13). Modelling component dependencies to inform their selection. COTS-Based Software Systems, 2nd International Conference (ICCBSS 2003) (pp. 81-91). Lecture Notes in Computer Science 2580. Ottawa, Canada: Springer.

Gaunard, P., & Dubois, E. (2003, May 26-28). Bridging the gap between risk analysis and security policies: Security and privacy in the age of uncertainty. IFIP TC11 18th International Conference on Information Security (SEC2003) (pp. 409-412). Athens, Greece: Kluwer.

Giorgini, P., Massacci, F., & Mylopoulos, J. (2003, October 13-16). Requirement engineering meets security: A case study on modelling secure electronic transactions by VISA and Mastercard. The 22nd International Conference on Conceptual Modelling (ER'03) (LNCS 2813, pp. 263-276). Chicago: Springer.

Giorgini, P., Massacci, F., Mylopoulos, J., & Zannone, N. (2005). Modelling social and individual trust in requirements engineering methodologies. Proceedings of the 3rd International Conference on Trust Management (iTrust 2005). LNCS 3477. Heidelberg: Springer-Verlag.

Gross, D., & Yu, E. (2001, August 27-31). Evolving system architecture to meet changing business goals: An agent and goal-oriented approach. The 5th IEEE International Symposium on Requirements Engineering (RE 2001) (pp. 316-317). Toronto, Canada.

Helmer, G., Wong, J., Slagell, M., Honavar, V., Miller, L., & Lutz, R. (2002). A software fault tree approach to requirements analysis of an intrusion detection system. In P. Loucopoulos & J. Mylopoulos (Eds.), Special issue on requirements engineering for information security. Requirements Engineering, 7(4), 177-220.

Herrmann, G., & Pernul, G. (1999). Viewing business-process security from different perspectives. International Journal of Electronic Commerce, 3(3), 89-103.

ISO 17799. (1999). Information security management — Part 1: Code of practice for information security. London: British Standards Institution.
Kazman, R., Klein, M., & Clements, P. (2000). ATAM: Method for architectural evaluation (CMU/SEI-2000-TR-004). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University.

Liu, L., & Yu, E. (2003). Designing information systems in social context: A goal and scenario modelling approach. Information Systems, 29(2), 187-203.

Liu, L., & Yu, E. (2004). Intentional modelling to support identity management. In P. Atzeni et al. (Eds.), Proceedings of the 23rd International Conference on Conceptual Modelling (ER 2004) (pp. 555-566). LNCS 3288. Berlin, Heidelberg: Springer-Verlag.

Liu, L., Yu, E., & Mylopoulos, J. (2002, October 16). Analyzing security requirements as relationships among strategic actors. The 2nd Symposium on Requirements Engineering for Information Security (SREIS'02). Raleigh, NC.

Liu, L., Yu, E., & Mylopoulos, J. (2003, September). Security and privacy requirements analysis within a social setting. Proceedings of the International Conference on Requirements Engineering (RE'03) (pp. 151-161). Monterey, CA.

Lodderstedt, T., Basin, D. A., & Doser, J. (2002). SecureUML: A UML-based modelling language for model-driven security. UML '02: Proceedings of the 5th International Conference on the Unified Modelling Language, Dresden, Germany (pp. 426-441).

Mayer, N., Rifaut, A., & Dubois, E. (2005). Towards a risk-based security requirements engineering framework. Workshop on Requirements Engineering for Software Quality (REFSQ'05), at the Conference for Advanced Information Systems Engineering (CAiSE), Porto, Portugal.

McDermott, J., & Fox, C. (1999). Using abuse case models for security requirements analysis. Proceedings of the 15th IEEE Annual Computer Security Applications Conference, Scottsdale, USA (pp. 55-67).

Mouratidis, H., Giorgini, P., & Manson, G. A. (2003a). Integrating security and systems engineering: Towards the modelling of secure information systems. Proceedings of the 15th Conference on Advanced Information Systems Engineering (CAiSE 03) (LNCS 2681, pp. 63-78). Klagenfurt, Austria: Springer.

Mouratidis, H., Giorgini, P., & Manson, G. (2004, April 13-17). Using security attack scenarios to analyse security during information systems design. Proceedings of the 6th International Conference on Enterprise Information Systems, Porto, Portugal.

Mouratidis, H., Giorgini, P., & Schumacher, M. (2003b). Security patterns for agent systems. Proceedings of the 8th European Conference on Pattern Languages of Programs, Irsee, Germany.

Mouratidis, H., Kolp, M., Faulkner, S., & Giorgini, P. (2005, July). A secure architectural description language for agent systems. Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS05). Utrecht, The Netherlands: ACM Press.

Peltier, T. R. (2001, January). Information security risk analysis. Boca Raton, FL: Auerbach Publications.

Pernul, G. (1992, November 23-25). Security constraint processing in multilevel secure AMAC schemata. The 2nd European Symposium on Research in Computer Security (ESORICS 1992) (pp. 349-370). Toulouse, France. Lecture Notes in Computer Science 648. Springer.

Pottas, D., & Solms, S. H. (1995). Aligning information security profiles with organizational policies. Proceedings of the IFIP TC11 11th International Conference on Information Security.

Röhm, A. W., & Pernul, G. (1999). COPS: A model and infrastructure for secure and fair electronic markets. Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences.
Rolland, C., Grosz, G., & Kla, R. (1999, June). Experience with goal-scenario coupling in requirements engineering. Proceedings of the IEEE International Symposium on Requirements Engineering, Limerick, Ireland.

Samarati, P., & Vimercati, S. (2001). Access control: Policies, models, and mechanisms. In R. Focardi & R. Gorrieri (Eds.), Foundations of security analysis and design: Tutorial lectures (pp. 137-196). LNCS 2171.

Sandhu, R. (2003, January/February). Good enough security: Towards a business-driven discipline. IEEE Internet Computing, 7(1), 66-68.

Sandhu, R. S., Coyne, E. J., Feinstein, H. L., & Youman, C. E. (1996, February). Role-based access control models. IEEE Computer, 29(2), 38-47.

Schneier, B. (1999, December). Attack trees: Modelling security threats. Dr. Dobb's Journal. Retrieved from http://www.counterpane.com/attacktrees-ddj-ft.html

Schneier, B. (2003). Beyond fear: Thinking sensibly about security in an uncertain world. New York: Copernicus Books, an imprint of Springer-Verlag.

Schneier, B., & Shostack, A. (1998). Breaking up is hard to do: Modelling security threats for smart-cards. First USENIX Symposium on Smart-Cards, USENIX Press. Retrieved from http://www.counterpane.com/smart-card-threats.html

Simon, H. (1996). The sciences of the artificial (3rd ed.). MIT Press.

Sindre, G., & Opdahl, A. L. (2000). Eliciting security requirements by misuse cases. Proceedings of the 37th Conference on Techniques of Object-Oriented Languages and Systems (pp. 120-131). TOOLS Pacific 2000.

Sindre, G., & Opdahl, A. L. (2001, June 4-5). Templates for misuse case description. Proceedings of the 7th International Workshop on Requirements Engineering, Foundation for Software Quality (REFSQ2001), Switzerland.

Siponen, M. T., & Baskerville, R. (2001). A new paradigm for adding security into IS development methods. In J. Eloff, L. Labuschagne, R. von Solms, & G. Dhillon (Eds.), Advances in information security management & small systems security (pp. 99-111). Boston: Kluwer Academic Publishers.

Strens, M. R., & Dobson, J. E. (1994). Responsibility modelling as a technique for requirements definition. IEEE, 3(1), 20-26.

van der Raadt, B., Gordijn, J., & Yu, E. (2005). Exploring Web services from a business value perspective. Proceedings of the 13th International Requirements Engineering Conference (RE'05), Paris (pp. 53-62).

van Lamsweerde, A. (2001, August 27-31). Goal-oriented requirements engineering: A guided tour. The 5th IEEE International Symposium on Requirements Engineering (RE 2001) (p. 249). Toronto, Canada.

van Lamsweerde, A. (2004, May). Elaborating security requirements by construction of intentional anti-models. Proceedings of ICSE'04, 26th International Conference on Software Engineering (pp. 148-157). Edinburgh: ACM-IEEE.

van Lamsweerde, A., Brohez, S., Landtsheer, R., & Janssens, D. (2003, September). From system goals to intruder anti-goals: Attack generation and resolution for security requirements engineering. Proceedings of the RE'03 Workshop on Requirements for High Assurance Systems (RHAS'03) (pp. 49-56). Monterey, CA.

Yu, E. (1993, January). Modelling organizations for information systems requirements engineering. Proceedings of the 1st IEEE International Symposium on Requirements Engineering (pp. 34-41). San Diego, CA.
Yu, E. (1997, January 6-8). Towards modelling and reasoning support for early-phase requirements engineering. Proceedings of the 3rd IEEE International Symposium on Requirements Engineering (RE'97) (pp. 226-235). Washington, DC.

Yu, E. (2001a, April). Agent orientation as a modelling paradigm. Wirtschaftsinformatik, 43(2), 123-132.

Yu, E. (2001b). Agent-oriented modelling: Software versus the world. Agent-Oriented Software Engineering AOSE-2001 Workshop Proceedings (LNCS 2222, pp. 206-225). Springer-Verlag.

Yu, E., & Cysneiros, L. (2002, October 16). Designing for privacy and other competing requirements. The 2nd Symposium on Requirements Engineering for Information Security (SREIS'02). Raleigh, NC.

Yu, E., & Liu, L. (2000, June 3-4). Modelling trust in the i* strategic actors framework. Proceedings of the 3rd Workshop on Deception, Fraud and Trust in Agent Societies, Barcelona, Catalonia, Spain (at Agents2000).

Yu, E., & Liu, L. (2001). Modelling trust for system design using the i* strategic actors framework. In R. Falcone, M. Singh, & Y. H. Tan (Eds.), Trust in cyber-societies: Integrating the human and artificial perspectives (pp. 175-194). LNAI 2246. Springer.

Yu, E., Liu, L., & Li, Y. (2001, November 27-30). Modelling strategic actor relationships to support intellectual property management. The 20th International Conference on Conceptual Modelling (ER-2001) (LNCS 2224, pp. 164-178). Yokohama, Japan: Springer-Verlag.

This work was previously published in Integrating Security and Software Engineering: Advances and Future Visions, edited by H. Mouratidis and P. Giorgini, pp. 70-106, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Section III
Usability Issues
Chapter XI
Security Configuration for Non-Experts:
A Case Study in Wireless Network Configuration

Cynthia Kuo
Carnegie Mellon University, USA

Adrian Perrig
Carnegie Mellon University, USA

Jesse Walker
Intel Corporation, USA

Abstract
End users often find that security configuration interfaces are difficult to use. In this chapter, we explore how application designers can improve the design and evaluation of security configuration interfaces. We use IEEE 802.11 network configuration as a case study. First, we design and implement a configuration interface that guides users through secure network configuration. The key insight is that users have a difficult time translating their security goals into specific feature configurations. Our interface automates the translation from users' high-level goals to low-level feature configurations. Second, we develop and conduct a user study to compare our interface design with commercially available products. We adapt existing user research methods to sidestep common difficulties in evaluating security applications. Using our configuration interface, non-expert users are able to secure their networks as well as expert users. In general, our research addresses prevalent issues in the design and evaluation of consumer-configured security applications.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

For home consumers, the setup and configuration of new technologies is a daunting experience. The most intimidating configuration interfaces are often feature-based. They list the different technical features that end users can configure. Users select the appropriate radio button or drop-down box option and the product changes its behavior accordingly. This approach is effective — if users know what they are doing. For users unfamiliar with the technology, the obstacles are formidable. First, users must articulate their goals for the configuration. Second, they must map these goals to the product's features. Last, users must configure the product features correctly.

Feature-based configuration interfaces fail to consider how people interact with technology. Reeves and Nass (1996) show that we apply the same social norms that we use for human beings to our "conversations" with computers. Now consider the typical interaction between a person and a security product. It is a dysfunctional conversation. The user states, "I would like to achieve goals 1, 2, and 3." The product declares, "I have features A through Z!" Unfortunately, user goals and product features may not map easily to one another. As a result, many users struggle or give up entirely. For security professionals, we argue these interfaces are psychologically unacceptable (Saltzer & Schroeder, 1975).¹

In the early days of computing, security configuration was a lesser problem. Systems were configured by early adopters, who tend to be expert users. Experts have the ability and the willingness to master psychologically unacceptable configuration schemes. However, the recent explosion of personal computers and mobile devices changes the nature of the problem; home systems are now regularly managed by non-expert users. Today, security configuration is required for each system, in each home. We are beginning to see the consequences of difficult configuration schemes: very few users enable available security features. This problem will only grow as devices proliferate.

Among IEEE 802.11 wireless networks in the home, only 20% to 30% enable some type of security feature (Cohen, 2004). Some security experts interpret this statistic as evidence that home users are too ignorant or too unconcerned about security to enable security measures. However, the problem is more fundamental: the user experience of consumer 802.11 (also known as "Wi-Fi") products is flawed. For approximately every 10 products sold, one consumer calls technical support. Most calls address basic setup issues, such as establishing Internet connectivity. Moreover, representatives of the Wi-Fi Alliance report that up to 30% of all 802.11 equipment purchased for the home is returned (Gefrides, 2004). This is an order of magnitude higher than for other electronics products, such as VCRs. Furthermore, the vast majority of returned products—an estimated 90%—is not defective. For many home consumers, basic network setup is too difficult—even without considering secure network setup.

In this chapter, we present our design, implementation, and evaluation of a configuration interface for 802.11 access points. The interface enables home consumers to configure their wireless networks securely. Our system acts as an "expert friend," asking simple, high-level questions to elicit users' needs and goals. This information is automatically translated into a security policy for users. By avoiding feature-based questions, our system empowers end users—even novices—to make configuration decisions appropriate to their situation. With existing interfaces, more knowledgeable users are better able to configure secure networks than novice users. Our system levels the playing field, enabling non-experts to perform as well as experts. The lessons that we learned in this domain will apply to other security configuration interfaces.
Outline

First, we explore the challenges in designing and evaluating good security applications. Next, we define our problem space and our design principles. The design principles were used to implement our configuration interface, which is described in the design and implementation section. We tested our implementation against two commercially available access points. The evaluation method and experimental results are briefly summarized in their respective sections. Finally, we discuss how this work may be applied to other domains.

BACKGROUND

In recent years, application designers have discovered that the design guidelines that work for most consumer applications fail for security applications. Intuitively, the explanation is simple: users' mental models of the world do not match the assumptions underlying the technical implementations.

Whitten and Tygar (1999) delineated five properties that make designing user interfaces for security applications challenging:

• The unmotivated user property, which signifies that security is usually a secondary goal for users;
• The abstraction property, which highlights how users have difficulty conceptualizing security concepts;
• The lack of feedback property, which speaks to application designers' difficulty in providing adequate feedback for users;
• The barn door property, which states that an error cannot be undone once information has been (potentially) compromised; and
• The weakest link property, which reminds us that the security of a system is only as strong as its weakest link.

For these reasons, the design rules that work for most consumer applications often fall short for security applications.

Furthermore, the effectiveness of security applications is difficult to evaluate. Textbook user study methods make assumptions that may not hold when researchers evaluate security applications. We identify five assumptions in Kuo, Perrig, and Walker (2006):

• There are clear-cut criteria for success. Computer security is a risk management process. Each user may be exposed to different risks, and as a result, may require a different configuration. One security configuration may not fit all.
• Applications should tolerate variation in user behavior and user error. In many applications, users can take multiple paths through the user interface to reach the same end state. Mistakes can be made and corrected. In security applications, some mistakes are critical errors: once these mistakes have been made, there is no way to recover.
• Users are familiar with the underlying concepts. Users may be unfamiliar with the security concepts tested in a study. The very act of providing evaluation tasks during a user study may introduce a bias—by giving users information they did not previously have.
• Users' tasks are their primary goals. Study designs need to account for the secondary nature of security-related goals.
• Users respond in socially acceptable ways. Study participants sometimes try to please the experimenter by saying what they think the experimenter wants to hear. Many users think they should be more security-conscious than they are; this may cause them to exaggerate their responses.

These assumptions must be considered when evaluating security applications. Often, this means adapting traditional user study methods to avoid the introduction of biases.
The five properties of security applications are problematic for designers of security user interfaces. The five assumptions underlying traditional user study methods challenge evaluators of security user interfaces. Together, these factors frustrate many attempts to improve security applications. This chapter documents the design and evaluation of one successful project.

Related Work

Recently, several industry groups have been developing specifications for user-friendly setup. Specifications include Wi-Fi Protected Setup, Bluetooth Simple Pairing, and setup in HomePlug AV (Lortz et al., 2006; Linsky et al., 2006; Newman, Gavette, Yonge, & Anderson, 2006). These specifications deal mainly with the exchange of authentication credentials. In comparison, this chapter focuses on the selection and implementation of a security policy (which may or may not include the exchange of authentication credentials).

In the academic world, the most closely related work is network-in-a-box (NiaB) by Balfanz, Durfee, Grinter, Smetters, and Stewart (2004). NiaB is a user-friendly method for setting up a secure wireless network. The scheme uses a custom-built access point. The access point supports a location-limited channel, such as infrared. The location-limited channel ensures that communication occurs between the wireless client and the correct access point. NiaB assumes that the access point can automatically configure itself and that the same security policy can be applied for all users. Automatic configuration is ideal in environments which use a common security policy.

Other technologies for intuitive and secure key establishment include work by Balfanz, Smetters, Stewart, and Wong (2002); Gutmann (2003); McCune, Perrig, and Reiter (2005); Perrig and Song (1999); and Stajano and Anderson (1999).

Rather than developing new technology, our research tackles a different challenge: empowering users to enable a security policy of their choice. Applications that require user input in a security policy will need to leverage the approaches we present here.

The design of our system draws on several concepts which are used in the field and documented in the literature. For example, Alan Cooper's The Inmates Are Running the Asylum (1999) drives home the benefit of goal-directed design. Cooper dissects the differences between users' goals and tasks. He argues that products should be designed to accommodate users' goals (not tasks). Security may be a secondary goal for most users, but we believe that this makes goal-based design even more effective.

In addition, Friedman et al. have explored how to design systems that take human values into account (Friedman, Kahn, & Borning, 2006; Friedman, Lin, & Miller, 2005; Friedman & Nissenbaum, 1997). This project could be considered an implementation of value-sensitive design.

On the evaluation side, Uzun, Karvonen, and Asokan (2007) show that user interface design can dramatically affect user error rates. Also, Friedman, Hurley, Howe, Felten, and Nissenbaum (2002) use a semi-structured interview to elicit users' understanding of Web security.

PROBLEM DEFINITION

In the previous section, we examined the application designer's problem; we introduced the factors that make designing and evaluating a configuration interface difficult. In this section, we delve into the user's predicament.

We begin with a networking or security expert's mental model. Figure 1 illustrates how an expert might evaluate her own wireless network. A secure configuration depends on the successful completion of each step in Figure 1. In other words, each step represents a potential point of failure.
Figure 1. Expert's mental model for configuring a secure wireless network

Existing configuration interfaces are often organized around the features of a wireless network—not the problems that the user wants to solve. Currently, consumers will reach the configuration step (Step 6 in Figure 1) only if they want to enable a certain feature (Step 5 in Figure 1). Thus, unless consumers know that they want encryption (Step 4 in Figure 1), the likelihood of enabling it is small.

Now suppose that average consumers do not have tech-savvy friends or relatives. In this case, consumers only know that they want encryption if they can articulate their goals or values regarding wireless network security (Step 3 in Figure 1). Articulation relies on the consumer's knowledge of security vulnerabilities and their possible consequences (Step 2 in Figure 1). Evaluating the consequences requires a working knowledge of wireless networks and radio signals (Step 1 in Figure 1).

Without a fairly sophisticated level of technical understanding, it is unlikely that today's consumers will be able to effectively reason about their security needs. Users may be unaware that the broadcasting of their data leads to security vulnerabilities; that these vulnerabilities may warrant concern; and that if security is important, steps must be taken to protect their data.

Note how the configuration process illustrated in Figure 1 is extremely delicate. If the user fails to negotiate any of the six steps, the outcome will tend towards an insecure network.

Existing Configuration Interfaces

We conducted a series of preliminary studies to gain first-hand experience observing users' difficulties with network setup. We used two kinds of user study techniques: contextual inquiry and usability study. Contextual inquiry is a technique in which researchers select a few representative individuals, visit the individuals in their workplace or home, and observe their behavior. We conducted several contextual inquiries in people's homes, watching users set up and configure secure wireless networks. Each study lasted anywhere from one to four hours. The usability study is probably the best-known user study technique. In a usability study, experimenters observe participants while they try to complete a list of pre-determined tasks. We conducted a handful of usability studies, using the same tasks that we will describe.
Typically, when a home consumer opens an access point package, she will find a paper "quick start" guide that illustrates how to connect the access point correctly. Next, the guide will direct the user to pop in an installation CD or go to the URL of the configuration interface (e.g., http://192.168.1.1). We observed many users who struggled with network configuration. During network setup, several users had difficulty establishing an Internet connection and configuring the Windows networking dialogs. In addition, many users failed to secure their networks for a variety of reasons. Some users were unaware of the vulnerabilities in unsecured wireless networks. Others did not know what features needed to be configured.

These preliminary studies led us to develop the model in Figure 1. We found that users stumbled at each step. In general, users had more difficulty completing Steps 4 through 6, compared to Steps 1 through 3.

It is important to note that earlier configuration interfaces were organized by technical functionality. Commercial access point interfaces exposed on the order of 50 distinct, configurable features. The different features were grouped by similarity in the underlying engineering implementation, which was often unrelated to users' high-level goals. Users often visited several different pages in order to accomplish one goal. However, as more and more users have adopted wireless technology—and called vendors' technical support lines—the configuration interfaces have improved. Recently, vendors have shifted towards user-friendly configuration wizards. This is good news for both consumers, who appear to be struggling less with network setup, and vendors, who have reduced the volume of technical support calls.

Issues Addressed

The work we describe addresses three main issues:

Empowering Users to Make Their Own Choices

A one-size-fits-all approach to system configuration cannot work in all circumstances. For example, there may be different categories of users who run 802.11 networks in their home. Some households may use wireless networks to transmit confidential information and desire a high level of security. Other households, such as those full of college students, may have many transient users, so that only the most basic access control measures are practical. Still others may choose to run an open wireless network on principle, allowing anyone within range to use their network. On a practical level, a single default cannot work for everyone. On a philosophical level, we believe technology users should have the right to configure and change their technology's behavior as desired.

Leveling the Playing Field: Making Security More Accessible to End Users

Currently, experts are able to configure products more successfully and more quickly than non-experts. However, expertise need not function as a barrier. Novices should be able to set up and configure secure technologies as well as the experts.

Maintaining Flexibility for Application Designers and Vendors

People often use products in unexpected ways. Keeping changes in software allows vendors to make quick modifications. This is particularly useful for initial product generations, as application designers figure out who is buying their products and what they will be used for. Once usage models have been more clearly delineated, the software can be easily customized for different audiences or uses.
DESIGN PRINCIPLES

Based on our preliminary user study observations, we define the following set of design principles for developing user-friendly security applications.

1. Assume no prior technical knowledge or expertise on the part of users. Making security accessible means that we must allow people of all expertise levels to perform equally well.
2. Minimize human effort; maximize application work. Lighten users' cognitive loads by automating as much of the configuration work as possible. Also, present only as much information as users need, and make that information available when users need it.
3. Maintain a positive user experience. Small details make a big difference. For example, we noticed in our preliminary studies that users strongly preferred setup directions on paper. As a result, we made a point to provide information via users' preferred medium. Also, we observed that people have little patience for configuration. At 30–45 minutes, users expressed their displeasure. At 60–70 minutes, users were visibly frustrated. We set a goal of a maximum of 45 minutes for our configuration process.
4. Anticipate error states. Users will get lost and make mistakes. A good design needs to anticipate what issues require troubleshooting. It should handle errors gracefully. It should provide useful feedback: Were the configuration settings successfully applied? Do they make sense? Do they do what the user thinks they should do?
5. Separate distinct concepts. Conflating different concepts leads to confusion. First, separate users' values and goals from security policies. Novice users are comfortable stating their values, but they are not experts in designing security policies. A better design elicits users' values and derives consistent security policies from the values. Second, separate security policies from their underlying mechanisms. This concept is well known in many disciplines, such as operating system design (Grimm & Bershad, 2001). Existing configuration applications require users to become experts in security mechanisms before they can realize their preferred policies. Automating the policy–mechanism translation removes a substantial barrier to configuration.

Although these principles may appear obvious, the access point interfaces that we studied violate several of these principles. In the following sections, we show that applying these principles can improve the configuration experience a great deal—particularly for novice users.

DESIGN AND IMPLEMENTATION

We developed a configuration interface that helps users articulate and implement a security policy using existing tools and technology. This was accomplished using a Linksys WRT54G access point and source code. The source code was downloaded from Linksys' Web site (firmware version 3.01.3). It was compiled on Red Hat Linux 2.4.20-8 using gcc 3.2.2.

We modified the source code and compiled a new version of the firmware. The new firmware includes our configuration interface, which co-exists with the original vendor user interface. Users access the configuration interface just as they would access the vendor user interface. Once they connect the access point to a DSL/cable modem and a computer, they open a Web browser and direct their browser to http://192.168.1.1. This opens the home page of our configuration interface.

The dual-interface design shown in Figure 2 was created so that both our design and the original vendor interface could be used. This was achieved by creating an HTML frame that contained two tabs. The Easy tab switches to our prototype, and the Advanced tab switches to the original vendor interface.
Figure 2. Example prototype screen (usually the most advanced question users will encounter)

Our configuration interface mirrors an online checkout process: the changes are not applied until the entire configuration has been reviewed. The wizard attempts to elicit a user's goals and values by asking general questions. (See the flowchart in Figure 3.) The questions were crafted so that they would include information about the consequences of making a particular choice. This was done to address the abstraction property of security.

The system automatically maps the user's preferences to the system's technical features. Any decisions that can be made for the user—and still reflect the user's preferences—are automated. This addresses the unmotivated user property, as well as our design principle to minimize human work.

The mapping produces a recommended configuration for the user, which can be changed if desired. The recommendation clearly states the implications of adopting a particular configuration. For example, the recommendation articulates what actions the user must take to add or remove devices from the network. If the user's preferences produce a set of feature settings that conflict with one another, the wizard asks the user to resolve the conflict. This addresses the lack of feedback and barn door properties, as well as the principles of anticipating error states and separating distinct concepts.

Each time users access the configuration application, they are taken to the home page. The wizard is always available on the home page. On the home page, we grouped possible actions by goals. The list of actions includes the common actions that we expected consumers to take, and the items in the list change by context. For example, if no security settings have been enabled, the menu offers the option to turn on access control or encryption. Otherwise, it shows options for giving and revoking network access. The context-sensitive menu fulfills the design principle of maximizing application work.

The goal is for designers to craft a system where the target audience understands the questions, and the system provides the desired configuration. We believe the best way to accomplish this is by automating the knowledge required in Steps 4 to 6 in Figure 1. In other words, configuration interfaces should automate the translation from human goals to technical features—a process that taxes users' abilities.
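The sketch below is our own illustration of this goal-to-feature translation, not the prototype's actual firmware code; the question wording, feature names, and conflict rule are all invented for the example:

# Illustrative goal-to-feature translation for an access point wizard.
# High-level answers are mapped to low-level settings, and a recommended
# configuration is produced for review before anything is applied.

QUESTIONS = {
    "confidential_data": "Will you send information over the network that "
                         "you would not want strangers to read?",
    "frequent_guests":   "Do visitors often need to use your network?",
}

def recommend(answers):
    config = {"change_admin_password": True, "change_ssid": True}
    config["encryption"] = answers["confidential_data"]
    config["mac_filtering"] = not answers["frequent_guests"]
    return config

def conflicts(answers, config):
    # Example conflict: frequent guests plus encryption means keys must be
    # handed out often -- surface the tradeoff instead of hiding it.
    if answers["frequent_guests"] and config["encryption"]:
        return ["Guests will need the encryption key on every visit. "
                "Keep encryption enabled anyway?"]
    return []

answers = {"confidential_data": True, "frequent_guests": True}
config = recommend(answers)
for issue in conflicts(answers, config):
    print("Please resolve:", issue)
print("Recommended configuration:", config)

Keeping the question-to-setting mapping in a single table like this also reflects the flexibility principle above: vendors can retarget the wizard to a new audience by editing the questions and rules rather than the mechanism code.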
Figure 3. Flowchart of application logic

The set of configuration questions shown in Figure 3 balances the needs of our users with the simplicity necessary for a positive user experience. However, this design is not a definitive design for 802.11 configuration. The questions and the application flow should be tailored to specific groups of users. As the target population changes—as users' needs change and their level of technical understanding changes—the questions should also change.

EVALUATION

To test the effectiveness of our design, we developed a methodology for assessing security interfaces. We then tested our configuration interface against the two best-selling commercial access points.

Target Population

We define the target population for 802.11 products as someone who:

1. Uses wireless Internet access at home, school, or workplace on a daily basis (5+ days per week);
2. Has broadband access at home; and
3. Uses a laptop as his or her primary computer.

We included individuals who already had wireless networks at home, as well as individuals who did not.

Eighteen participants were recruited from a broad university population, drawing from both humanities and technology backgrounds. We recruited participants by posting paper flyers on bulletin boards throughout campus and by posting messages on electronic bulletin boards. Interested individuals were directed to a Web-based survey form. We selected participants based on their level of computer networking expertise. This was computed using: a self-assessment of their network troubleshooting abilities; whether they had ever managed a wired network; and whether they had ever managed a wireless network. The age of the participants ranged between 18 and 32. Seven participants were female.

Participants were randomly assigned an access point: the Linksys WRT54G, the Netgear WGT624, or our prototype (see Table 1).

Table 1. Participant assignment

Access Point     Low Expertise   High Expertise
Linksys WRT54G   3               3
Netgear WGT624   3               3
Prototype        3               3
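The chapter reports the three screening inputs but not a formula, so the scoring below is a hypothetical sketch of how such a screen might be computed:

# Hypothetical expertise screen; the weights and cutoff are our assumptions.
def expertise_score(self_rating, managed_wired, managed_wireless):
    # self_rating: self-assessed troubleshooting ability on a 1-7 scale
    return self_rating + (2 if managed_wired else 0) + (2 if managed_wireless else 0)

candidates = [
    {"id": "C01", "self_rating": 2, "managed_wired": False, "managed_wireless": False},
    {"id": "C02", "self_rating": 6, "managed_wired": True,  "managed_wireless": True},
]
for c in candidates:
    score = expertise_score(c["self_rating"], c["managed_wired"], c["managed_wireless"])
    c["group"] = "high expertise" if score >= 6 else "low expertise"
    print(c["id"], "->", c["group"])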
Tasks Tested

We define the ideal secure wireless network as one where the consumer has:

1. Changed the default password;
2. Changed the SSID;
3. Generated or entered an encryption key on the access point;
4. Entered the encryption key on a client; and
5. Enabled MAC filtering.

We felt these five measures could provide a basic level of security for the average home user. (Note that MAC filtering becomes unnecessary when WPA or WPA2 is enabled. With WPA/WPA2, each received frame is authenticated by a session key instead of a hardware address. Many access points are now equipped with WPA, but the basic principles that motivate our study remain equally effective.) They address the security requirements (i.e., secrecy and authenticity) that commercial technology is equipped to handle. These measures by themselves may be insufficient; for example, attackers may guess a key based on a password. However, such issues are outside the scope of our study.
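These five measures translate directly into a checklist. The sketch below (our illustration; the field names and factory defaults are invented) shows how a resulting configuration could be scored against them:

# Score an access point configuration against the five study tasks.
FACTORY = {"admin_password": "admin", "ssid": "linksys"}

def tasks_completed(ap, client):
    key = ap.get("encryption_key")
    return {
        "changed_default_password": ap["admin_password"] != FACTORY["admin_password"],
        "changed_ssid": ap["ssid"] != FACTORY["ssid"],
        "key_on_access_point": key is not None,
        "key_on_client": key is not None and client.get("encryption_key") == key,
        "mac_filtering_enabled": bool(ap.get("mac_filter")),
    }

ap = {"admin_password": "s3cret!", "ssid": "home-net",
      "encryption_key": "d34db33f42", "mac_filter": ["00:11:22:33:44:55"]}
client = {"encryption_key": "d34db33f42"}

done = tasks_completed(ap, client)
print(sum(done.values()), "of 5 tasks completed")   # -> 5 of 5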
Evaluation Method

To compare the effectiveness of different 802.11 configuration interfaces, we developed a technique that combines elements from several different methodologies: mental models interviews, contextual inquiries, usability studies, and surveys.

Mental models interviews are used to understand how interviewees conceptualize certain ideas (Morgan, Fischhoff, Bostrom, & Atman, 2002). Generally, the interviewer will start with a neutral statement, such as, "Tell me about X." The interviewee is allowed to respond with whatever thoughts come to her mind. The interviewer may ask her to talk more about an idea, and if there are other topics that the interviewer wants to cover, he may ask more specific follow-up questions.

Inspired by the mental models technique, we designed our evaluation method around the concept of gradual revelation. Participants were given no indication that the study was focused on wireless security; they were told we were studying wireless network setup. The questions we asked and the activities we planned were ordered such that no information about our study focus was revealed before we first evaluated participants' knowledge of it. For example, we did not mention "encryption" (1) unless participants brought up the concept themselves or (2) until participants had an opportunity to configure the network and failed to bring up the concept.

When participants arrived for the study, we interviewed them briefly to understand how they conceptualize wireless technology. We then asked participants to fill out a questionnaire. The questionnaire gathered participants' attitudes towards various aspects of wireless networks, including availability, reliability, ease of use, use of open wireless networks, security, privacy, and health. Many of these topics are unrelated to security so that participants would not suspect the focus of our study.

Next, participants were handed an access point. The access point was packaged in the box, as if it had been recently purchased. Experimenters presented participants with an open-ended scenario:

Okay, let's pretend you just received an 802.11 access point as a gift. You would like to set up and use the wireless connection today. Your laptop is already configured to use wireless—you just need to worry about the access point. Just set up the access point as you would if you were at home.

We provided participants with resources that they would have on their own, such as product manuals and access to the Internet. However, we refrained from giving participants a list of tasks to complete to avoid giving indications of our study focus.
We observed participants while they set up and configured the access point as they deemed appropriate. During this phase, the experimenter treated the study like a contextual inquiry. Contextual inquiries are generally non-directed observations that allow researchers to observe what users actually do. We incorporated this element of qualitative analysis to evaluate what tasks we would expect participants to attempt on their own.

Since participants were not directed to complete any set of tasks, they may not have completed the tasks we had in mind. The experimenter first waited until the participant declared that the configuration was complete. Then the experimenter asked a series of follow-up questions to help guide the participant to the security tasks. For example, if the participant neglected to change the default administrative password, the experimenter would ask:

With your current configuration, did you know that anyone who knows the default password can log in to your access point? That means they could change any of your configuration settings without your permission. They could even lock you out from your own network if they wanted to. Did you know that could happen?

We then asked participants to complete the task. At this point, the study was more similar to a usability study. A usability study allows researchers to gather quantitative data about people's actions in a limited amount of time. We evaluated participants on their ability to complete the set of five tasks listed above.

Once the tasks were completed or participants ran out of time, we asked participants to complete the questionnaire again. Surveys allow researchers to gather quantitative data about people's attitudes quickly. However, because attitude ratings are highly subjective, we only used this data to measure within-subject changes in attitude.

In combining the different evaluation methods together, we believe our technique was able to capitalize on the strengths of each method and minimize its respective shortcomings.

EXPERIMENTAL RESULTS

We used the data that we collected to assess how well we expect users will be able to navigate each step in Figure 1. In this section, we highlight the points that are most relevant to the design of security configuration interfaces. First, we discuss users' understanding of wireless technology. This corresponds to Step 1 in Figure 1. Second, we show that on commercial access points, low expertise users have more problems configuring the security of wireless networks than high expertise users. In contrast, users perform comparably using our system, which automates Steps 4 through 6 in Figure 1.

Understanding of Wireless Technology

As mentioned, we first interviewed participants to understand how they conceptualize wireless technologies. For example, participants were asked to draw a picture illustrating how data travels from a wireless device to the Internet, and vice versa. As a follow-up question, the experimenter asked participants to choose the diagram in Figure 4 that most closely matches their ideas.

No participant selected Figure 4a, a scenario illustrating the access point and client communicating directly with one another across an "invisible wire." Two participants (11%) selected Figure 4b, which shows both sides using directional broadcast. We expected more people to select this diagram; it is commonly seen on access point packaging as a stylistic simplification. Interestingly, six participants (33%) selected Figure 4c. Figure 4c shows the access point broadcasting in all directions, while the client sends a directed "beam" of data back to the access point. Last, 10 participants (56%) selected Figure 4d, which shows both the laptop and client broadcasting data in all directions. Happily, all users selected a diagram that visualizes some element of broadcasting, and over half of the participants recognized that both the access point and the client broadcast in all directions.
Figure 4c shows the access point broadcasting in all directions, while the client sends a directed "beam" of data back to the access point. Last, 10 participants (56%) selected Figure 4d, which shows both the laptop and the access point broadcasting data in all directions. Happily, all users selected a diagram that visualizes some element of broadcasting, and over half of the participants recognized that both the access point and the client broadcast in all directions.

Unfortunately, the half who selected the wrong figure hold beliefs that may lead them to underestimate the risks of wireless technologies. What if these users are not concerned about eavesdropping because they mistakenly believe the attacker must be physically located between their wireless device and the access point? We did not establish a link between conceptualization and risk perception in this study, but we believe it may warrant future work.

Figure 4. Follow-up exercise to assess users' notions of wireless broadcasting

Configuration Interface Design

Our studies reveal that the design of a configuration interface substantially impacts users' behavior. In this section, we present three fundamental observations. First, in contrast to commercial systems, low expertise users will attempt to configure the same security settings as high expertise users using our goal-oriented design. Second, our design enables users to configure the same level of security, regardless of expertise level. Finally, low expertise users react more positively to our prototype, in contrast to the commercial systems.

In our user study, the experimenter first asked study participants to configure the access point without providing any directions or tasks. There are two interesting points illustrated in Figure 5. First, on the commercial access points (Linksys and Netgear), high expertise users attempted to complete more of the five tasks than low expertise users. While disappointing, this is hardly surprising. However, the extent may be surprising: using the Netgear access point, low expertise users did not attempt any of the security-related tasks—not even changing the default password! With the Linksys access point, low expertise users attempted one task each. Two tried to change the default password; the other tried to change the SSID.

Figure 5. Average number of security tasks attempted without experimenter prompting (in the bar graphs, vertical lines represent the standard error of the mean; the absence of a bar indicates the standard error is zero)
The second lesson in Figure 5 is that low expertise users would try to configure the same level of security as high expertise users, if given the opportunity. In contrast to the commercial access points, all users on our prototype, both low and high expertise, attempted to change the default password, enable MAC filtering, and enable encryption. By eliciting users' goals, our prototype interface indicates that users have similar needs to one another, regardless of technical expertise. In feature-based interfaces, however, technical experience and knowledge may serve as a barrier for less savvy users.

Once we began prompting users to complete the tasks, we found that the barrier of technical expertise remained for the commercial access points. This is illustrated in Figure 6. We consider the results in Figure 6 to be more representative of what would happen in the real world. However, a significant difference between the lab and home environments is that participants did not have access to a technically-savvy friend. At home, users would not be told to complete tasks as they were in our study. It is more likely that users would struggle with the configuration on their own and/or ask a technically-savvy friend to configure the network for them.

Finally, we evaluated the general user experience of the prototype, compared to the commercial access points. We captured this in the questionnaire with a series of questions assessing how positively the user feels about wireless network setup.

Recall that the questionnaire was administered once before the participants handled the access point and once afterwards. We used participants' change in attitude (measured on a 7-point Likert scale) as a rough indicator of their experience, relative to their prior expectations. A positive change reflects a positive user experience, and vice versa.

Figure 7 suggests that low expertise users were pleasantly surprised by the prototype. In contrast, low expertise users showed negative shifts in attitude for the commercial access points. We expect this reflects the frustration participants often expressed during the user study. It is also interesting to note that high expertise users may have been less happy with the prototype than with the commercial access points. We speculate that this is a result of prior expectations: many of the high expertise users managed wireless networks at home, and our prototype did not match their expectations of how a configuration interface should behave.

Figure 6. Average number of security tasks completed (ability to configure security features) (in the bar graphs, vertical lines represent the standard error of the mean; the absence of a bar indicates the standard error is zero)

Figure 7. Average change in ease of use rating per question (user experience) (in the bar graphs, vertical lines represent the standard error of the mean; the absence of a bar indicates the standard error is zero)
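To make the attitude-change analysis concrete, the following minimal sketch shows one way to compute the within-subject change and the standard error of the mean that the error bars in Figures 5 through 7 represent. The ratings used here are hypothetical, and the sketch is an illustration rather than the study's actual analysis code.

    # Minimal sketch (hypothetical ratings): within-subject attitude change
    # on a 7-point Likert scale, plus the standard error of the mean (SEM)
    # of the kind plotted as error bars in Figures 5-7.
    import statistics

    def attitude_changes(before, after):
        """Per-participant change; positive means a better experience than expected."""
        return [a - b for b, a in zip(before, after)]

    def mean_and_sem(values):
        mean = statistics.mean(values)
        # SEM = sample standard deviation / sqrt(n); it is zero when every
        # participant reports the same change (hence no visible error bar).
        sem = statistics.stdev(values) / len(values) ** 0.5 if len(values) > 1 else 0.0
        return mean, sem

    before = [3, 4, 2, 3]   # questionnaire answers before handling the access point
    after = [6, 5, 4, 6]    # the same questions afterwards
    mean, sem = mean_and_sem(attitude_changes(before, after))
    print(f"mean change = {mean:+.2f}, SEM = {sem:.2f}")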
Due to the high costs of technical support calls and product returns, access point vendors have large economic incentives to improve their configuration interfaces. Vendors have made numerous attempts to remedy the situation in recent years. Thus, it is even more surprising that our goal-oriented design so clearly enhanced users' inclination and ability to configure security features. These results demonstrate that vendors could improve their products dramatically without incurring major costs. This would reduce user frustration and increase technology adoption.

DISCUSSION AND FUTURE TRENDS

Many system designers may wonder why we even give users a choice in their security configuration. It benefits the engineers and designers to make the product more "flexible" and "general," but does it benefit the users? Would it not be simpler to enforce a secure default setting? Many of the choices that users can make in today's software are choices for which the users cannot make informed decisions. A pre-configured, easy-to-use, easy-to-secure access point (such as NiaB) would certainly be desirable to many consumers. However, there are several reasons why it is important for users to have a choice. On a practical level, there may be different types of users. Some households have a small number of users and devices, so a high level of security may be easily implemented. Others may have large numbers of transient users, so only the most basic access control measures are practical. Still others may choose to run an open access point, allowing anyone within range to use their network. A single default can never work for everyone.

On a more fundamental level, choice is also viewed as a desirable feature. In the language of value-sensitive design, users should be autonomous. Users should "construct their own goals and values, and [be] able to decide, plan, and act in ways they believe will help them achieve their goals and promote their values" (Friedman & Nissenbaum, 1997). If users are autonomous, they take responsibility for the decisions they make and the actions they take. According to Friedman and Nissenbaum (1997), autonomy is "fundamental to human flourishing and self-development." Without autonomy, individuals are not morally responsible for their actions. Without user interfaces to support the choices they make, users cannot be autonomous.

As a community, the challenge is to design a system that enables users to successfully configure options with which they may be unfamiliar. Our configuration interface is purely software-based, which means that system designers can iterate through software designs quickly, since no hardware changes are required. It does, however, mean that software development teams need to research their target users in order to formulate the right questions. Determining the right questions to ask target users is time-consuming, and the questions may change as the audience shifts.

Goal-based questions can be used for anything from configuring location-based applications to Bluetooth security. For example, take a location-based application where users can choose to reveal their location to family members, friends, or other acquaintances. Since the technology is new to most people, users may not fully understand the privacy implications of revealing their location over time. Goal-oriented questions may be useful for helping users determine what kind of privacy settings would be most suitable for their needs: to whom information would be given; what information would be exposed; the granularity of the information that would be available; and so on. Users may not initially realize what options are available to them.
A well-crafted configuration interface will make them aware of the implications of the technology, as well as match the configuration with their comfort level. A simplified sketch of this kind of goal-to-setting translation is given below.
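The sketch reduces the translation to a few lines of Python. The goal names and the settings they trigger are hypothetical stand-ins invented for this example; they are not the prototype's actual question set or configuration logic.

    # Illustrative sketch: translating elicited user goals into low-level
    # access point settings. All goal names and settings are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class ApSettings:
        admin_password_changed: bool = True   # a sensible step for everyone
        encryption: str = "none"              # e.g., "none" or "wpa-psk"
        mac_filtering: bool = False
        notes: list = field(default_factory=list)

    def settings_from_goals(keep_traffic_private: bool,
                            only_known_devices: bool,
                            open_to_guests: bool) -> ApSettings:
        s = ApSettings()
        if keep_traffic_private:
            s.encryption = "wpa-psk"      # the goal, not the jargon, drives this
        if only_known_devices:
            s.mac_filtering = True
        if open_to_guests and not only_known_devices:
            s.notes.append("Network intentionally left open to visitors.")
        return s

    # A household that wants privacy and only its own devices online:
    print(settings_from_goals(True, True, False))

The user answers questions about goals; the interface, not the user, decides which features implement them.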
The lessons we have learned in our 802.11 case study can help designers improve the user experience of new technologies. For new technologies to succeed, they must be both easy-to-use and trustworthy. Ease of use and trustworthiness imply that users need to understand what the technology is doing—at least to the level where they can form correct expectations of how the technology should behave. Users who understand the implications and limitations of a technology will ultimately be satisfied because the technology meets—or exceeds—their expectations.

Unpredictability breeds intimidation in users' relationships with technology. Without a basic level of understanding, users will be unhappy and bewildered when something does not behave as they anticipate. Inevitably, this will happen if they form the wrong mental models of the technology.

CONCLUSION

Home consumers are now responsible for configuring the security settings of their devices. While configuration interfaces have improved since the days of inscrutable VCR recording menus, they still terrorize many end users. Configuration interfaces are often feature-based, listing options available for different technical features. People, on the other hand, are goal-based. Users may not have a deep understanding of the technology—and they probably never want to. This lack of understanding makes it hard for users to properly assess their security and privacy risks. It also makes it hard for users to configure product features. Very few consumers truly understand wireless or cryptographic technology, and as a result, very few consumers are willing to configure security in their wireless devices.

Assisting users with the translation from high-level security goal to low-level product feature is a simple but powerful method for building easy-to-use security configuration applications. We developed a prototype using this strategy. We also adapted traditional user study methods to evaluate security applications. We conducted a user study to compare the effectiveness of our prototype to two commercially available access points. Our study demonstrated that the prototype allowed non-expert users to securely configure their networks as well as expert users.

Our work generalizes to other security configuration problems, and we hope that the community will explore this aspect of application design. Making systems easy-to-use and secure is critical to the adoption of new technologies. After all, new technologies only succeed if they satisfy the people who use them.

ACKNOWLEDGMENT

This research was supported by a gift from Intel. The views and conclusions contained here are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of Carnegie Mellon University or Intel Corporation.

We would like to thank Vincent Goh and Adrian Tang for their exceptional work with the implementation and the user studies.

REFERENCES

Balfanz, D., Durfee, G., Grinter, R., Smetters, D., & Stewart, P. (2004). Network-in-a-box: How to set up a secure wireless network in under a minute. In Proceedings of the 13th USENIX Security Symposium (pp. 207–222). Berkeley: USENIX Association.
Balfanz, D., Smetters, D., Stewart, P., & Wong, H. C. (2002). Talking to strangers: Authentication in ad-hoc wireless networks. In Proceedings of the Symposium on Network and Distributed Systems Security (NDSS).

Cohen, D. (2004). Consumer front-end to WPA. Wi-Fi Alliance.

Cooper, A. (1999). The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity. Sams Publishing.

Friedman, B., Hurley, D., Howe, D. C., Felten, E., & Nissenbaum, H. (2002). Users' conceptions of Web security: A comparative study. In Extended Abstracts of the CHI 2002 Conference on Human Factors in Computing Systems (pp. 746-747). New York: Association for Computing Machinery.

Friedman, B., Kahn, P., & Borning, A. (2006). Value sensitive design and information systems. In P. Zhang & D. Galletta (Eds.), Human-computer interaction in management information systems: Foundations (Vol. 4).

Friedman, B., Lin, P., & Miller, J. K. (2005). Informed consent by design. In L. F. Cranor & S. Garfinkel (Eds.), Security and usability (Chap. 24, pp. 495-521). O'Reilly Media, Inc.

Friedman, B., & Nissenbaum, H. (1997). Software agents and user autonomy. In Proceedings of the First International Conference on Autonomous Agents (pp. 466–469).

Grimm, R., & Bershad, B. (2001). Separating access control policy, enforcement, and functionality in extensible systems. ACM Transactions on Computer Systems, 19(1), 36-70.

Gutmann, P. (2003). Plug-and-play PKI: A PKI your mother can use. In Proceedings of the 12th USENIX Security Symposium (pp. 45–58). Berkeley: USENIX Association.

Kuo, C., Perrig, A., & Walker, J. (2006). Designing an evaluation method for security user interfaces: Lessons from studying secure wireless network configuration. ACM Interactions, 13(3), 28-31.

Linsky, J., Bourk, T., Findikli, A., Hulvey, R., Ding, S., Heydon, R., et al. (2006, August). Simple pairing (Whitepaper, revision v10r00).

Lortz, V., Roberts, D., Erdmann, B., Dawidowsky, F., Hayes, K., Yee, J. C., et al. (2006). Wi-Fi simple config specification (version 1.0a).

McCune, J. M., Perrig, A., & Reiter, M. K. (2005). Seeing-is-believing: Using camera phones for human-verifiable authentication. In Proceedings of the IEEE Symposium on Security and Privacy.

Morgan, G., Fischhoff, B., Bostrom, A., & Atman, C. (2002). Risk communication: A mental models approach. New York: Cambridge University Press.

Newman, R., Gavette, S., Yonge, L., & Anderson, R. (2006). Protecting domestic power-line communications. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS).

Perrig, A., & Song, D. (1999). Hash visualization: A new technique to improve real-world security. In Proceedings of the International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC).

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, televisions and new media like real people and places. Cambridge University Press.

Saltzer, J. H., & Schroeder, M. D. (1975). The protection of information in computer systems. Proceedings of the IEEE, 63(9), 1278-1308.

Stajano, F., & Anderson, R. (1999). The resurrecting duckling: Security issues for ad-hoc wireless networks. In Security Protocols: 7th International Workshop.

Uzun, E., Karvonen, K., & Asokan, N. (2007). Usability analysis of secure pairing methods. In Usable Security (USEC).
Whitten, A., & Tygar, J. D. (1999). Why Johnny can't encrypt: A usability evaluation of PGP 5.0. In Proceedings of the 8th USENIX Security Symposium. Washington, D.C.: USENIX.

Endnote

1. Saltzer and Schroeder (1975) outlined eight design principles for minimizing application security flaws. The eighth principle is psychological acceptability: "Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. Also, to the extent that the user's mental image of his protection goals matches the mechanisms he must use, mistakes will be minimized. If he must translate his image of his protection needs into a radically different specification language, he will make errors."
Chapter XII
Security Usability
Challenges for End-Users
Steven Furnell
Centre for Information Security & Network Research, University of Plymouth, UK

Abstract

This chapter highlights the need for security solutions to be usable by their target audience, and exam-
ines the problems that can be faced when attempting to understand and use security features in typical
applications. Challenges may arise from system-initiated events, as well as in relation to security tasks
that users wish to perform for themselves, and can occur for a variety of reasons. This is illustrated by
examining problems that arise as a result of reliance upon technical terminology, unclear or confusing
functionality, lack of visible status and informative feedback to users, forcing users to make uninformed
decisions, and a lack of integration amongst the different elements of security software themselves. The
discussion draws upon a number of practical examples from popular applications, as well as results
from survey and user trial activities that were conducted in order to assess the potential problems at
first hand. The findings are used as the basis for recommending a series of top-level guidelines that may
be used to improve the situation, and these are then used as the basis for assessing further examples of existing
software to determine the degree of compliance.

Introduction

End-users are faced with an increasing requirement to use security, with recent years witnessing a significant surge in the range and volume of security threats that can affect their IT systems. Highly publicized incidents involving malware, spyware, phishing, and denial of service have all served to heighten general awareness of Internet threats, with the consequence that users at all levels (be they at work or at home) are likely to have at least some appreciation of the need to keep their systems secure. However, adequate protection will rarely be achieved by default, and here we often find that even the security technologies that are used are often used badly (classic examples being bad practice with passwords, and poorly maintained anti-virus protection).
In some cases, the blame for this clearly resides with careless or irresponsible end-users. However, it is important to realize that another significant factor is often the underlying unfriendly nature of the technology.

Security-related functionality can be found in both specific tools and embedded within general applications, and users will frequently encounter the requirement to make security-related decisions during routine use of their system. However, provision of security functionality is only of value if the target audience can understand and use it. Unfortunately, the manner of presentation, and the implicit assumptions about users' abilities, can often hamper usage in practice. This can represent a particular problem in contexts where users are required to fend for themselves, and may result in necessary protection being under-utilized or misapplied.

Although much security-related functionality is now presented via the ostensibly friendly context of a graphical user interface, if we look beyond the surface, the user-friendliness can quickly disappear. For example, a series of apparently simple check boxes or low-medium-high settings can soon become more complex if you have to understand the actual functionality that they control (Furnell, 2004). As a result, many users will ultimately remain as baffled as they would have been by a command line interface. Those most likely to suffer are non-technical users, who lack the knowledge to help themselves, or any formal support to call upon. Should they be implicitly denied the level of protection that they desire simply because they are not technology experts? Clearly, the answer is no. As such, the usability of security is a crucial factor in ensuring that it is able to serve its intended purpose. Although this requirement is now beginning to achieve much more widespread recognition (CRA, 2003; Cranor & Garfinkel, 2005), usable security remains an area in which current software is often notably lacking.

This chapter examines the nature of the usability problem, presenting examples from standard end-user applications, as well as supporting evidence from current research. Having established the existence and nature of the problem, the discussion proceeds to consider specific issues that can present obstacles from the usability perspective. Particular consideration is given to problems at the user interface level, and how we may consequently find our attempts to use security being impeded (or entirely prevented) as a result of inadequate attention to human-computer interaction (HCI) aspects. The discussion then proceeds to present a brief examination of means by which the situation can be improved, and the chapter concludes with a summation of the main issues.

Background

If we consider the factors that may prevent users from securing their systems then, perhaps unsurprisingly, lack of knowledge and inability to use the software concerned are amongst the prominent reasons, particularly for the novice community. Evidence here can be cited from a study of security perceptions amongst 415 personal Internet users who were asked to identify the factors that prevented them from carrying out security practices (Furnell, Bryant, & Phippen, 2007). The overall findings are illustrated in Figure 1, and although 41% considered that they devoted sufficient attention to security, a variety of reasons were seen to be impeding the remainder. Although there is no single issue that stands out as an obstacle to all users, there are clearly some reasons that can be related to the users' knowledge and the usability of the software (e.g., "I don't understand how to use security packages" clearly shows that some users find the protection challenging to use, whereas "Security impedes the use of my computer" illustrates a usability constraint from a different perspective).
When specifically considering the main reasons cited by respondents that classed themselves as novices (as opposed to intermediate or advanced users), it is revealed that factors relating to lack of knowledge and understanding are the most prominent constraints (e.g., 43% claimed not to understand the threats, 38% claimed they did not know how to use security packages, 35% indicated that they did not know how to secure their computer, and 32% indicated that they did not know about the threats).

When considering the inclusion of security functionality within end-user software, a number of desirable criteria can be identified that will influence the overall usability of the resulting protection. Some key points include the following (Furnell, Jusoh, & Katsabas, 2006), and a rough sketch of how they might be applied as a review checklist appears below the list:

• Understandable—options and descriptions should be presented in a manner that is meaningful to the intended user population. Security offers a great deal of potential for the use of technical terminology and other jargon, but this could easily come at the cost of excluding a proportion of the users. Sufficient help and support should be available to assist novices to achieve the level of security that they need.

• Locatable—users need to be able to find the features they need. If casual users have to spend too long looking for security, it increases the chances that they will give up and remain unprotected.

• Visible—the system should give a clear indication of whether security is being applied. Appropriate use of status indicators and warnings will help to remind users in cases where they may have forgotten to enable appropriate safeguards.

• Convenient—although visibility is important, the provision of security should not become so prominent that it is considered inconvenient or intrusive. Users are likely to disable features that become too much of an impediment to legitimate use.

Figure 1. Factors preventing security practices being carried out
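As promised above, here is a rough sketch of how the four criteria could be applied as a review checklist. The scoring scheme is a hypothetical device for discussion, not part of the cited work.

    # Hypothetical checklist for reviewing a security interface against the
    # four criteria listed above (after Furnell, Jusoh, & Katsabas, 2006).
    CRITERIA = ("understandable", "locatable", "visible", "convenient")

    def weakest_criterion(ratings: dict) -> str:
        """ratings maps each criterion to a reviewer's 1-5 score."""
        missing = [c for c in CRITERIA if c not in ratings]
        if missing:
            raise ValueError(f"unrated criteria: {missing}")
        worst = min(CRITERIA, key=ratings.get)
        return f"weakest area: {worst} ({ratings[worst]}/5)"

    print(weakest_criterion(
        {"understandable": 2, "locatable": 4, "visible": 3, "convenient": 4}))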
Examining current implementations of security can reveal deficiencies in these regards. Indeed, it often appears that security features have been included without a great deal of thought about how (and by whom) they will actually be used. There often seems to be an implicit assumption that users who have an interest in security and wish to protect themselves will be determined enough to work out how to do it. However, the situation will very often be the exact opposite, with users needing little excuse to avoid security unless they have no other choice. As a result, while the features enable developers to tick the box to say that security has been addressed (and avoid consequent accusations of negligence), it does not yield a very positive result for the users.

Some clear awareness issues still need to be overcome, and there is unfortunately ample evidence to show that users do not actually understand security very well in the first place. In other cases, users think that they understand it, but often find it very difficult to use correctly. For example, in the United States, the 2005 Online Safety Study conducted by AOL and the National Cyber Security Alliance interviewed a sample of 354 home users while also performing a technical scan and analysis of their machines. The survey concluded that 81% of home computers analyzed lacked core protection (i.e., recently-updated anti-virus software, a properly-configured firewall, and/or spyware protection) (AOL/NCSA, 2005). As a result, 12% currently had a virus on their computer, 61% of machines had known spyware or adware installed, and 44% did not have their firewall set up correctly. Such evidence clearly suggests that if users cannot understand the technologies they may not protect themselves properly.

The difficulty of using security options has a tendency to mirror the complexity of the security concepts involved. For example, using a tool such as PGP (Pretty Good Privacy) to send and receive secure e-mail requires the user to have some appreciation of concepts such as encryption, keys, and digital signatures. Indeed, a widely cited paper by Whitten and Tygar (1999) specifically considered the usability and friendliness of the PGP utility, and conducted a laboratory test with 12 participants to investigate the ease with which they could use the tool to sign and encrypt a message. The study determined that only a third were able to do so within the 90 minutes allocated for the task, with problems arising from the user interface design and the complexity of the underlying concepts that participants needed to understand. However, such problems are by no means restricted to features such as cryptography. A more recent study from Johnston, Eloff, and Labuschagne (2003) considered the HCI aspects of the Internet Connection Firewall (since re-christened as the Windows Firewall) within Windows XP. This again found the presentation of the security functionality to be less than ideal, with the consequence that users would have likely difficulties in getting the most out of it.

A common factor in both of the aforementioned studies was that the target was a security-oriented tool. However, security features also exist within more general end-user applications. For example, word processors, Web browsers, e-mail clients, and databases can all be expected to have some security functionality. However, here too there can be significant barriers to effective use, and the following quote from Schultz (2002) sums things up fairly well:

"The overwhelming majority of software that supports security is also defective as far as usability goes. If software vendors would make software functions used in providing security more user-friendly, people would be more receptive to security."

This is very often the fundamental nature of the problem—the protection we need is often available, but provides no benefit because we cannot work out how to use it. Security functionality has to be conveyed in a meaningful manner, and the interface through which the user is expected to control and configure it must be appropriate to the audience that it intends to address.
Usability Problems in Practice

Broadly speaking, end-users are likely to face two categories of security event during their use of a system—those that they initiate for themselves and those initiated by the system.

• System-initiated events: These events occur with the intention of informing the end-user about security issues and/or requiring related decisions. Thus, this type of event is initiated by the system and targets the end user. For example, many users will be familiar with seeing pop-up dialogs in their Web browser asking them whether or not they wish to allow an event, such as that depicted later in Figure 11.

• User-initiated events: These events differ from the system-initiated events because this time the user intends to deal with security. More specifically, this applies when an end-user actively seeks to invoke an element of security (e.g., encrypting a message) or perform a security-related task (e.g., controlling or configuring security-related features within applications and tools).

Unfortunately, both cases can pose problems from the usability perspective. Evidence for this comes from a study conducted by the author's research group. This work involved 26 users who were asked to record details of system- and user-initiated events that they encountered over a 2-week period, as well as any usability problems that resulted. Amongst the findings was the fact that users are frequently confused when they are asked to make security-related decisions. For example, two thirds of the system-initiated events required users to do this, and (as Figure 2 illustrates) although the majority were clearly comfortable, this still left more than a third of instances in which participants were confused. Prior work from DeWitt and Kuljis (2005) has observed that, when faced with a requirement to make security-related decisions, users will often take whatever path seems quickest in order to get their work done—even if this means compromising their security. As such, encountering events that are unclear in the first place will add further incentive for security to be sidelined.

Figure 2. Users' understanding of how to respond to decisions required by system-initiated events (pie chart: totally clear 29%; mostly clear 32%; mostly unclear 23%; not clear at all 16%)

Figure 3. Users' understanding of how to perform user-initiated events (pie chart: totally clear 34%; mostly clear 14%; mostly unclear 21%; not clear at all 31%)

Although some of the confusion surrounding system-initiated events could be explained by the fact that they occurred unexpectedly, the problems also extend to the user-initiated context. In the aforementioned study, the majority of these events (59%) again required participants to make some decision, and again the extent to which they felt able to do so was variable (see Figure 3).
Although a greater proportion felt "totally clear" in this context (possibly reflecting the fact that the users themselves were in control of the situation when initiating events), there was also a far greater proportion that were "not clear at all".

The remainder of this section highlights a number of examples of common failings that are apparent in programs that target end-users. For the purposes of discussion, five key themes are used to group the problem issues. However, it should be noted that several of the points are inter-related, and the practical examples used to illustrate them can certainly be seen to be suffering from more than one of the problems. It should also be noted that the occurrence of these issues is not restricted to the implementation of security functionality, and indeed the themes identified here are closely related to usability heuristics proposed by Nielsen (1994) for systems in general.

In addition to practical examples, the discussion draws upon the results from related studies that have been conducted in order to assess users' understanding of application-level security features, and their ability to use them in practice.

End-User Survey

The aim of the survey was to assess users' understanding, and hence the potential usability, of security-related interfaces within a number of well-known software packages. An online questionnaire presented respondents with screenshots relating to the security functionality within a number of popular end-user applications and attempted to determine whether they were meaningful (e.g., in relation to the terminology used) and correctly interpreted (i.e., whether the intention of the functionality was properly understood). A total of 342 responses were received, and the main characteristics of the respondent group were as follows:

• Almost equally split between male and female
• Over 80% in the 17-29 age group
• Over 80% have university-level education
• Over 96% regularly use a computer at home and/or at work
• Almost 90% rated themselves as intermediate or advanced users

These factors suggest that the respondents as a whole were likely to have a high level of IT literacy, making them well-placed to provide relevant comments about the usability of security features within the targeted applications. Some of the significant findings from the survey are therefore used to support the discussion presented here. For readers interested in obtaining further information, the full details of the survey and the associated results can be found in Furnell et al. (2006).

End-User Trials

While the survey allowed a large-scale assessment of the extent to which users understood the information before them, it was not able to reveal deeper insights into the extent to which the programs could actually be used. As such, the findings were supplemented by a series of hands-on trial activities, in which users were required to make practical use of the security features within a range of applications. The trials involved 15 participants, eight of whom were general users and seven of whom were advanced. The general users were familiar with using IT (and some of the applications concerned) on a regular basis, but had no specific knowledge about the detail of the technology. By contrast, the advanced users all held academic qualifications relating to IT and had some prior knowledge in relation to security. The required tasks were presented in writing and explained to the participants. Note that they were told what they needed to achieve, but not how to do it, and the aim of the trial was to determine whether they could understand and use the security features within the application sufficiently well to achieve the objectives. Each trial session lasted between 1 and 2 hours, depending upon the ability of the participants and the ease with which they completed the tasks (Furnell, Katsabas, Dowland, & Reid, 2007).
Both the survey and the trials drew heavily upon Microsoft products as the basis for examples, and this is further reflected by the examples discussed in this chapter. However, it should be noted that this is not intended to imply that the usability of security features within Microsoft's software is specifically poor. The examples were actually chosen to reflect the widespread usage and popularity of the programs concerned—thus maximizing the chances of survey and trial participants also being end-users of the software (although this was not a prerequisite for either activity, it was considered that participants would feel more comfortable with programs they were familiar with).

Reliance upon Technical Terminology

One of the traditional barriers to newcomers into IT is the significant degree of technical terminology that accompanies the domain. Over time, efforts have been made to ease this burden, with increased use of pictures and plain language as a means of expressing concepts to novices. However, security is one area in which the message is still very likely to be unclear, with technical terms often being an intrinsic part of how features are conveyed. An example of this problem is illustrated in Figure 4, which shows the means by which users are able to control the security settings within Internet Explorer (IE) version 7. Provided as the standard browser within the most popular operating system, IE is consequently the means by which most users come into contact with the Web, and browsing is a context in which appropriate security is most definitely required. However, although the interface initially looks quite straightforward, with the use of a slider control to set the desired security level (which, for the "Internet" zone, has a three-point scale of medium, medium-high, and high), it becomes somewhat less intuitive if users try to understand the descriptions of the settings. For example, one of the characteristics of the "medium-high" setting described in the Figure is that "unsigned ActiveX controls will not be downloaded." Although this would be unlikely to cause problems for users with a technology background, it has clear potential to confuse the average user (who might nonetheless have an interest in setting up their system securely, and so could certainly find themselves looking at the related options). As a result, while they will appreciate the idea of the medium-to-high scale, the descriptions may impede their ability to relate this to their browsing needs. As an aside, it should be noted that the slider used to control the level in the other three content zones shows a five-point scale (adding settings of low and medium-low to the list of options). This reflects the fact that the safer environment that is hopefully provided by the "local intranet" and "trusted sites" may allow the level of protection to be relaxed. Meanwhile, although a slider is displayed in the "restricted sites" zone, the level is permanently set to "high" and the user cannot alter it.

Respondents to the authors' survey were presented with the analogous interface from IE6 (which was the current version at the time of the study), and asked to indicate whether they understood various elements of it. One question specifically focused upon the content zone concept and showed the related part of the interface with the "trusted sites" and "restricted sites" highlighted. Respondents were then asked to indicate whether they knew the difference—revealing that 14% did not and a further 22% were not sure. Similarly, as part of the practical user trial, participants were asked to explain their understanding of the different zones, revealing that only two-thirds could do so adequately. However, a slightly greater proportion (12 out of 15 participants) was still able to use the functionality, and add sites to the trusted and restricted zones.
Figure 4. The security settings interface from Internet Explorer 7

Proceeding to consider users' understanding of the actual security level, survey respondents were presented with a description of the "medium" setting from IE6 (which is closely similar to that of the IE7 "medium-high" setting shown in Figure 4) and asked to indicate if they understood it—the results revealed that 34% did not. Although this is already a sizeable proportion of users to lose, the authors anticipated that some respondents would claim to understand the interface even though they did not actually understand all of the terminology. As such, the questionnaire proceeded to ask whether respondents had heard of ActiveX before, and if so, whether they actually knew what it meant. Although the initial finding here was mostly positive, with 65% claiming to have heard of the term, only 54% of these people (i.e., 0.65 × 0.54 ≈ 0.35, or only 35% of the overall respondent group) knew the meaning. This puts a rather different interpretation upon the proportion of people who would fully understand the setting in Figure 4, with almost two thirds of the overall respondent group unable to comprehend the complete description.

An even more significant terminology problem is likely to be encountered if users attempt to go beyond the three presets and select the "custom" setting. Doing so yields a new window offering 46 distinct settings (note that the number varies depending upon the version of IE in use), making things considerably more complicated than a 3-position slider. Some of these (relating to the security of ActiveX controls) are illustrated in Figure 5, and examples of others include the following:

• Loose XAML
• Run components signed with Authenticode
• Allow META REFRESH
• Launching programs and files in an IFRAME
• Software channel permissions
• Active scripting

In most cases, the options available allow a user to completely enable or disable a particular setting, or have the system prompt them for a decision each time a relevant activity occurs. However, it is very unlikely that the majority of users would actually understand what they are being asked to enable or disable anyway (and so selecting the option for the system to prompt them each time would not improve things—it would simply oblige them to take a decision that they did not understand on multiple occasions). Moreover, the system offers no context-sensitive help to explain any of the settings, and even looking at the main help system reveals that only a subset of the terminology is actually explained (for example, while explanations can be found for "ActiveX" and "Authenticode," there is nothing to explain the meaning of "IFRAME," "META REFRESH," and "Software channel permissions"—although determined users can find definitions on Microsoft's Web site if they look there).
With these observations in mind, it is perhaps not surprising to find that the majority of respondents to the usability survey were confused when presented with the IE6 version of the interface in Figure 5, with only 40% claiming to understand the options.
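To give a sense of what the custom options amount to underneath, the sketch below reads two of them directly from the Windows registry. It relies on the per-zone "URL action" codes that Microsoft documents for advanced users (zone 3 is the Internet zone), and it is illustrative rather than exhaustive.

    # Sketch (Windows only): each custom IE setting is a per-zone registry
    # value; by Microsoft's documented convention, 0 = enable, 1 = prompt,
    # and 3 = disable. Zone 3 is the Internet zone.
    import winreg

    ZONE3 = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"
    ACTIONS = {
        "1200": "Run ActiveX controls and plug-ins",
        "1400": "Active scripting",
    }
    MEANING = {0: "enable", 1: "prompt", 3: "disable"}

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE3) as key:
        for code, label in ACTIONS.items():
            value, _ = winreg.QueryValueEx(key, code)
            print(f"{label}: {MEANING.get(value, value)}")

Seen this way, each of the 46 options is simply a three-state flag; the usability problem lies in the terminology wrapped around the mechanism, not in the mechanism itself.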
One of the further IE tasks required in the user trial was to customize the security settings in order to be prompted before running ActiveX content. Although 5 out of 7 of the advanced users were able to achieve this, only one of the general users was able to complete the task. The overall success rate for the trial group as a whole was 40%, and as such the trial activities confirmed the earlier findings from the survey in this regard.

Unclear and Confusing Functionality

If users are confronted with security features that they do not understand, then the first danger is that they will simply give up and not use them at all. However, if they are not put off (or, alternatively, have no choice but to use them), the next danger is that it will increase their chances of making mistakes. In some cases, these mistakes will put their system or data at increased risk, whereas in others they may serve to impede the user's own use of the system. Confusion can often arise from the way in which features are presented, with the result that even the most straightforward and familiar security safeguards can become challenging to use.

Figure 5. IE7 custom security settings

As an example, we can consider the way in which password protection is used in several Microsoft Office applications. Excel, PowerPoint, and Word all allow two levels of password to be applied to user files—to control access to the file (i.e., in order to maintain confidentiality) or to restrict the ability to modify it (i.e., controlling integrity). Using the Word 2007 interface as an example, these are set via the dialog shown on the left side of Figure 6. However, an immediate observation here is that the route to actually finding and using this interface is rather curious. Whereas earlier versions of Word made security settings available from the "tools–options" menu, they are now only accessible via the "tools" button in the bottom left corner of the "save as" dialog box. Although this was also one of the routes available in earlier versions, it was arguably the more obscure one. Additionally, rather than being labeled "security" (as had previously been the case), the option to select is now called "general"—even though the only options it contains are security-related. Another aspect that seems rather unintuitive is that, in addition to initially setting passwords via this route, users must also go to the "save as" dialog box in order to change or remove them. Finally, anyone expecting to get guidance on how to use the options will be rather disappointed—although the main help system does include an entry about passwords, using the context-sensitive help simply takes you to Word's top-level help page and requires the user to enter a search term manually. This is in contrast to what happens when you use the context-based help feature in most other dialogs, with the system directing you to a specific themed page, or at least providing a series of suggested topics that might be followed.
Users who subsequently attempt to open a password-protected file are then presented with the dialogs on the right-hand side of Figure 6. However, whereas the prompt for opening an access-controlled file (the upper dialog) is relatively easy to understand (i.e., you need a password and cannot open the document without it), the dialog for files that are merely protected against modification is often misunderstood. Users who wish to simply view or print the file can select the "read only" option, in order to bypass the password request. However, the presentation of the interface causes confusion in practice, and many users are so distracted by the apparent requirement for a password that they believe they cannot do anything without it. Indeed, in the usability survey, respondents were presented with an example of the lower password dialog and asked to indicate which of three options they understood it to mean. Although the majority correctly indicated that it meant the document could not be modified without a password, 23% incorrectly believed that the file could not be opened without a password, and a further 13% were not sure how to interpret it. As such, more than a third of users would not have been in a position to make the correct decision.

Whereas the survey focused upon the interpretation of the password prompts, the practical trial activities included the task of setting the passwords in the first place. Specifically, trialists were given a sample Word document and then instructed to make sure that a password was required to read it (which would require them to use the upper password box in Figure 6), and then later to use a password to prevent unauthorized changes (requiring the other password to be set). The overall success was low in both cases, with only five users (two general and three advanced) able to complete the first task, and six participants able to do the second one (with one more advanced user working it out this time). This clearly shows that even familiar security features (and the password is surely the most familiar security measure that we use) can be rendered unusable if they are not presented in an effective manner.

Figure 6. Password options and the resulting prompts within Microsoft Word
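A brief technical aside on the difference between the two protections: in the Office 2007 formats, a document saved with a password to open is encrypted (the file is no longer an ordinary ZIP package), whereas a password to modify leaves the content readable and merely records the restriction. Assuming that behavior, a crude check is possible; the file name below is a placeholder.

    # Heuristic sketch: a .docx with a "password to open" is stored as an
    # encrypted container rather than the normal ZIP package, while a
    # "password to modify" leaves the ZIP readable.
    import zipfile

    def looks_open_protected(path: str) -> bool:
        """True if the file no longer parses as ZIP, which for a .docx
        usually indicates open-password encryption."""
        return not zipfile.is_zipfile(path)

    print(looks_open_protected("sample.docx"))

The asymmetry mirrors the confidentiality/integrity distinction above: only the open password actually withholds the content.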
Looking at other aspects within the dialog box on the left of Figure 6, another element that has the potential to cause confusion is the "protect document" button. From the outset, the presence of such a button seems odd, given that selecting a password to open or modify a document could also be considered to be aspects of document protection. Although this observation could be perceived as overly picky, it is the type of issue that could easily confuse or impede beginners. Indeed, in this particular case, even Microsoft's own tutorial for the security features acknowledges the confusing nature of the situation (Microsoft, 2007):

"Some of the settings that appear on the Security tab, including some that sound like security features, do not actually secure documents . . . The Document Protection task pane and Protect Document features (available in Word) do not secure your documents against malicious interference either. They protect the format and content of your document when you collaborate with co-workers."

Given such comments, it is surprising to find the "protect document" option remaining under the security tab, and not being renamed to something more meaningful (such as "document editing restrictions").

Another relevant example of unclear and potentially confusing functionality within Office 2007 is provided by the "trust center." This interface is accessible from within most Office 2007 applications, and is used to configure aspects such as macro security settings, trusted publishers and locations, and privacy options. However, one potentially confusing aspect arises from the fact that the trust center's interface looks very similar when invoked from within different applications (see Figure 7). As such, users may be inclined to assume that it is a generic utility and that any changes will apply across all their Office applications and files. Although this is indeed true in some cases (e.g., changes to the "ActiveX Settings," "message bar," and "privacy options" affect these settings across all Office applications), other options (such as "Trusted Locations," "add-ins," and "macro settings") only initiate changes within the current application. Unfortunately, with the exception of the ActiveX settings (where the window heading says "ActiveX settings for all Office applications"), the scope of the settings is not remotely obvious from the interface presented at the time. Moreover, even the help system does not provide clarity in some cases (e.g., while it indicates that macro settings are only applicable to the current application, it says nothing similar in the descriptions of trusted locations and add-ins).

Lack of Visible Status and Informative Feedback

Users ought to know when security is being applied and what level of protection they are being given. This not only provides a basis for increasing their confidence when using particular services, but can also remind them to configure the system correctly. Without such a reminder, users may proceed to perform sensitive tasks without adequate protection, or may inadvertently leave settings at a level that impedes their legitimate usage. As such, the lack of visible status information is another example of undesirable HCI. As an illustration of how this can cause problems, Figure 8 shows an attempt to reach Microsoft's Hotmail service via Internet Explorer, with the browser security level set to "high". The user receives no message at all, and there is no indication of what the problem might be. As such, they may conclude that the site is simply not operational. What the user should receive is a clear message to remind them that their browser security is set to "high", and to indicate that this may cause the site to operate incorrectly.

Figure 8. Attempts to access Hotmail with Internet Explorer security set to "high"

Another good example from within Internet Explorer relates back to the use of the custom settings mentioned earlier in the discussion. If a user changes one of the many settings at this level, it has a notable effect upon what they subsequently see in the main security settings window.
Figure 7. The trust center when accessed from (a) Word, (b) Excel, (c) PowerPoint, and (d) Access
Rather than getting the 3- or 5-point slider, the user now simply sees that they have a "custom" security level, as shown in Figure 9a. From an information perspective, all this tells you is that something has been changed—it does not give any indication of whether it has been changed for the better or worse, and the user no longer has any indication of where their protection resides in relation to the previous slider. This is fair enough if one assumes that any change would have been made by the user directly (i.e., assuming they had not forgotten, they would know what had been changed and why), but in some cases the user's system might have been initially configured by someone else (e.g., their supplier or system administrator). Having said this, in severe cases, where the user has moved some of the key settings to an insecure status, they are warned that they have placed the system at risk. This is illustrated in Figure 9b, which shows a change to the message in the security level area and warning symbols on the icons for the affected zone. In addition, going to the custom options list will show the offending settings highlighted on a red background, while returning to the main browser window yields a warning banner as a reminder and the option to automatically fix the settings. This is clearly a very useful feature (which is a notable improvement since the previous version of the browser), but there is no analogous warning to tell the user that they may have made changes that are unnecessarily restrictive.

In the user trials involving Internet Explorer, participants were asked to perform the rather more simple task of determining the current security setting of the browser before any custom changes had been made (i.e., they simply needed to be able to find the location of the security settings in the application and determine the current setting of the slider). However, even this proved problematic for some, with three general users and one advanced user (i.e., a quarter of the participants) unable to complete the task.

Forcing Uninformed Decisions

Even if users do not go looking for security-related options and attempt to change the settings, they may still find themselves confronted with the need to take security-related decisions during the course of their normal activities as a result of system-initiated events.
Figure 9. The result of altering custom security settings

In these contexts, it should be all the more important for the information to be conveyed to them in a meaningful fashion, with minimal assumptions of prior knowledge and maximum help available to ease the process. Unfortunately, however, users may again find themselves at a disadvantage in practice, with dialogs often being conveyed in a manner that only advanced participants would be comfortable with. To illustrate the point, Figure 10 and Figure 11 present two examples of dialogs that may be encountered by users during standard Web browsing activities. The first example illustrates the type of warning that a user would receive in Internet Explorer when a Web site's security certificate has been issued by a provider that is not specified as trusted in their security configuration. This does not, of course, mean that the certifying authority cannot be trusted, but the user is being asked to check in order to make a decision. The likely problem here is that most users will not know what a security certificate is, let alone be able to make a meaningful decision about one. Although the style of this warning has changed in the newer version of IE (see Figure 10b), appearing as part of the browser pane rather than a separate dialog box, the underlying information (and hence the potential to confuse users) remains the same. As an aside, and relating back to the comments in the previous section, the "more information" link shown in Figure 10b does not work if the browser security has been set to "high."

Figure 10. Web site security certificate warning: (a) IE6 version; (b) IE7 version


Figure 10. Web site security certificate warning (a) IE6 version (b) IE7 version


cases are obliging the user to make a decision without the option to seek further help from the system. As such, they would be forced to make a decision in the absence of sufficient information. As an indication of the scale of this problem, the example from Figure 11 was presented to the survey respondents, and 56% indicated that they would not know how to make a decision.

Returning to Word 2007, an interesting new addition to the application when compared to earlier versions is the "document inspector," which allows the user to audit their document to ensure that it is not inadvertently holding personal/private information. Although this is a potentially useful feature, the reports that it generates are less informative than one might hope. Considering, for example, the report in Figure 12, we can see that although three areas have


Figure 11. Active content warning

Figure 12. An example report from the document inspector

been flagged for concern, there is no indication of what was specifically found under each of the categories. Moreover, the interface offers no option to manually inspect the information—only to remove it. In some cases (such as with the headers highlighted in the Figure), this could result in the over-eager user removing something that they actually wanted to keep, while in other cases (such as the custom XML data), the average user may not know what is being referred to in the first place and may not appreciate the implications of retaining or removing the data.

Lack of Integration

Another way in which the presentation of security features may serve to confuse users is if different aspects do not integrate together in an appropriate manner. Although individual mechanisms are often provided in different software from different vendors, it would not be unreasonable for users to expect security features to work together in concert. Unfortunately, it is possible to identify examples in which this does not happen, and where integration and compatibility issues can instead cause users to receive incorrect and inappropriate advice. A good example here is provided by a widely reported case in which an upgrade to Windows AntiSpyware (Beta 1) caused it to falsely


Figure 13. Examples of misinformation due to lack of integration


identify Symantec's AntiVirus Corporate Edition and Client Security packages as being instances of a password stealing Trojan. Any users who followed the consequent advice to remove certain registry keys found that they ended up disabling their antivirus solution (Leyden, 2006).

As a more visual example of an integration problem, Figure 13a shows the security settings for macro functions within Microsoft Word 2003. The significant part of the image is near the very bottom, with the indication that no virus scanner is installed. In actual fact, this screenshot was taken from a machine running McAfee VirusScan Enterprise 7 (the icon for which is visible as the third item in the system tray), and so receiving a message claiming that no virus protection is installed is hardly useful to the user. Meanwhile, Figure 13b presents an example of a pop-up message that appeared on a Dell PC during normal daily usage. Although this could be considered useful as a friendly reminder to general users that they need to be concerned about security, the problem in this case was that the message popped up on a machine running Norton Internet Security—which meant it already had the firewall and anti-virus protection being referred to. Some users will interpret the wording of the message ("you should have a firewall…") to mean that the system is telling them that they are not adequately protected—which could cause obvious confusion and concern for users who considered they already had suitable safeguards. It would be preferable to offer more specific advice, tailored to the user's actual circumstances (e.g., "You have a firewall and virus protection, but should also have anti-spyware protection installed"). Failing this (e.g., if it was not possible to determine existing protection status), the wording could still be adjusted to something that would pose a question rather than make an apparent statement (e.g.,


"Do you have a firewall?"), and therefore allow users who knew they were protected to pass by without concern.

Although the messages in both of these examples aimed to be helpful, by warning and reminding users that security needs to be considered, it could be argued that if the system is unable to give an accurate message then it would be preferable to say nothing at all.

In some cases, the lack of integration can cause security to conflict with things that other software is legitimately trying to do. As a result, users may actually be encouraged to ignore security warnings—as illustrated by the text at the bottom of Figure 14, which is from a prompt displayed when attempting to download Macromedia's Flash Player. The text clearly suggests that if a warning appears, users should simply tell the system to proceed or otherwise risk missing out on functionality. This is hardly helpful from a security awareness perspective, as it clearly sends out the wrong message. For example, novice users may use this experience as a basis for making future judgments in any similar scenarios. In fact, in a wider security context, this is exactly the sort of advice that users ought not to accept. If users are receptive to instructions that tell them to ignore security, then it offers the ideal means for malicious code to find its way into their system (e.g., telling the user to ignore any warning that says the code may be harmful and allow it to be installed anyway).

Addressing the Problems

Having established that a range of problems may exist, this section considers some of the steps that may be taken in order to limit or remove them.

Recognizing that many users will not be inclined to look at security in the first place unless they are forced to do so, one important step is to ensure that the default settings are as appropriate as possible. It is certainly valuable to enable the necessary protection by default so that (in the first instance at least) the user does not have to worry about it, and in an extreme case the simplification can extend to hiding the existence of the security altogether. However, this relies upon the suitability of the default setting, and there is plenty of past evidence to show that this is not always effective. For example, until the arrival of XP Service Pack 2, the personal firewall and the

Figure 14. Encouragement to ignore security warnings


automatic software updates feature were switched off by default within Windows. Similarly, it was several years before data encryption was enabled by default on wireless access points, which led to a large-scale proliferation of unsecured networks by both personal and business users.

Unfortunately, however, simply relying upon defaults is not an adequate solution in scenarios where a single level of security cannot reasonably be expected to suffice for all users. In addition, there are many scenarios in which explicit choices and decisions need to be made. As such, it is still important to consider how things can be conveyed in a clear and meaningful manner.

In order to improve the situation, there are several guidelines that could usefully be followed in order to deliver a more appropriate HCI experience. A set of 10 such guidelines is presented, with content based upon an earlier set proposed by Johnston et al. (2003), along with additional considerations based upon the usability heuristics proposed by Nielsen (1994).

1. Visible system state and security functions: Applications should not expect that users will search in order to find the security features. Furthermore, the use of status mechanisms can keep users aware and informed about the state of the system. Status information should be periodically updated automatically and should be easily accessible.
2. Security should be easily used: The interface should be carefully designed and require minimal effort in order to make use of security features. Additionally, the security settings should not be placed in several different locations inside the application, because it will be hard for the user to locate each one of them.
3. Suitable for advanced as well as first time users: Show enough information for a first time user, while not too much information for an experienced user. Provide shortcuts or other ways to enable advanced users to control the software more easily and quickly.
4. Avoid heavy use of technical vocabulary or advanced terms: Beginners will find it hard to use the security features in their application if technical vocabulary and advanced terms are used.
5. Handle errors appropriately: Plan the application carefully so that errors caused by the use of security features could be prevented and minimized as much as possible. However, when errors occur, the messages have to be meaningful and responsive to the problem.
6. Allow customization without risk of being trapped: Exit paths should be provided in case some functions are chosen by mistake, and the default values should be easily restored.
7. Easy to set up security settings: This way the user will feel more confident with changing and configuring the application according to their needs.
8. Suitable help and documentation for the available security: Suitable help and documentation should be provided that would assist the users in the difficulties they may face.
9. Make the user feel protected: Assure that the user's work is protected by the application. Recovery from unexpected errors must be taken into account, and the application should ensure that users will not lose their data.
10. Security should not reduce performance: By designing the application carefully and using efficient algorithms, it should be possible to use the security features with minimum impact on the efficiency of the application.

These guidelines were used by Katsabas, Furnell, and Dowland (2006) as the basis for evaluating the security provision within a number of established end-user tools and applications. Security-specific tools selected for assessment included Norton Antivirus, McAfee VirusScan, Outpost firewall, and ZoneAlarm firewall, while more general applications included Microsoft Word and the Opera and Mozilla Firefox browsers. Each application was tested according to the level of compliance with each of the 10 guidelines. A mark from zero to five could be achieved for each guideline (giving a maximum score of 50), based upon the scale in Table 1.


Table 1. Grading for guideline compliance


Grade Description
0 Application diverges completely from the guideline
1 Application significantly diverges from the guideline
2 Application has paid some attention to the guideline but still has major problems
3 Application has paid some attention to the guideline but still has minor problems
4 Application follows the guideline in some sections
5 Application completely follows the guideline in all possible sections

Table 2. Score summary for assessed applications against guidelines

Guideline                                            ZoneAlarm  MS Word  McAfee  Outpost  Firefox  Norton  Opera
Visible system state and security functions              2         4        3       2        3       3      5
Security should be easily used                            4         3        3       4        3       3      4
Suitable for advanced as well as first time users         5         2        4       4        2       2      3
Avoid technical vocabulary or advanced terms              2         4        1       2        3       0      2
Handle errors appropriately                               3         3        4       2        4       2      4
Allow customization without risk of being trapped         2         0        1       2        2       2      1
Easy to set up security settings                          2         5        3       2        5       5      2
Suitable security help and documentation                  0         1        4       5        5       1      2
Make the user feel protected                              3         4        4       3        3       4      3
Security should not reduce performance                    3         1        4       3        4       4      1
TOTAL ( /50 )                                            26        27       31      29       34      26     27

Table 2 shows a summary of the score that each application achieved for each of the 10 guidelines. It can be noted that there are no guidelines that seem to score uniformly well or uniformly badly across all applications. As such, no consistent pattern can be observed in terms of where applications are failing to present security appropriately.
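To make the grading concrete, the following minimal Python sketch (an illustration added here, not part of the original study) shows how per-guideline marks on the 0-5 scale from Table 1 combine into the totals out of 50 reported in Table 2; the two example applications and their marks are taken from the table above.

```python
GUIDELINES = [
    "Visible system state and security functions",
    "Security should be easily used",
    "Suitable for advanced as well as first time users",
    "Avoid technical vocabulary or advanced terms",
    "Handle errors appropriately",
    "Allow customization without risk of being trapped",
    "Easy to set up security settings",
    "Suitable security help and documentation",
    "Make the user feel protected",
    "Security should not reduce performance",
]

# Marks per application, in the same order as GUIDELINES (from Table 2).
scores = {
    "ZoneAlarm": [2, 4, 5, 2, 3, 2, 2, 0, 3, 3],
    "Firefox":   [3, 3, 2, 3, 4, 2, 5, 5, 3, 4],
}

for app, marks in scores.items():
    assert len(marks) == len(GUIDELINES) and all(0 <= m <= 5 for m in marks)
    print(f"{app}: {sum(marks)}/{len(GUIDELINES) * 5}")  # ZoneAlarm: 26/50, Firefox: 34/50
```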


Figure 15. An example of applying interface guidelines

In order to demonstrate the improvements that can be achieved by following the guidelines, the user interfaces of a subset of the applications were modified in order to enhance their compliance with the visual aspects (Katsabas, 2004). Presentation of the full set of the modifications made is beyond the scope of this chapter, and so Figure 15 presents a specific example based upon a security-related interface from Firefox. In the original interface (on the left of the Figure), the security options were among options presented in an "advanced" tab. Studies in HCI have shown that options classified as "advanced" can scare many users, especially beginners. Therefore, locating the security settings here may result in a number of users never accessing them. In order to improve the accessibility of the security settings, a dedicated tab is added in the revised interface and moved higher up the list to increase the visibility and perceived priority.

Having identified a series of less desirable examples, it is also relevant to observe that many existing examples of stronger interface design can be found. As an illustration, Figure 16 presents two screenshots taken from the Norton Internet Security package, which provides an integrated security solution (including firewall, anti-virus, anti-spam, and intrusion detection) for end-user systems. The tool, of course, differs from the earlier examples in the chapter, because it represents an example of software that has been specifically designed to fulfill a security role, rather than a wider application within which security is just one of the supporting functions. As such, it can be assumed that the designers and developers would have been in a position to devote more specific attention to the presentation and usability of the protection features. As a result, some of the positive observations arising from this particular interface are that:

• All of the top-level security options are visible and configurable from a single window;

Figure 16. Examples of more effective interface design


• The status of each option is clearly conveyed, along with the consequent security status of the overall tool; and
• Brief and clearly-worded explanations are provided to accompany each main option, and further help is easily accessible in each case.

In Figure 16a, the user receives a clear visual indication that things are amiss, with the "urgent attention" banner and the warning icon beside the aspect that is causing the concern (in addition, the background of the information area is shaded red). Meanwhile, the next dialog (Figure 16b) shows the system to be in the more desirable state of


having all the critical elements working correctly (note that although the "parental control" option is disabled, it does not affect the overall security status, and the window is now shaded with a reassuring green background). Comparing this to the examples presented earlier in the chapter, it is apparent that none of the previous problems are immediately on show.

Conclusion

Doubtless, the most usable and friendly scenario from the end-user perspective would often be the one in which security is not used at all, in the sense that it inevitably incurs an additional level of complexity and effort. Unfortunately, however, the range of threats to which we are exposed if we operate without appropriate protection means that this is an increasingly unrealistic proposition.

This chapter has shown that the presentation and usability of security features is clearly less than optimal in some cases. Of course, many similar criticisms can also be leveled at other aspects of application functionality. However, the significant difference is that other features could be considered somewhat more optional than security in this day and age, and thus it is less important if users' lack of understanding causes them to neglect to use them.

The discussion has highlighted examples of the problems that end-users may face when attempting to understand and use security-related functionality within common software applications. Although some users will actively seek to overcome their lack of knowledge if the situation demands it, the more likely scenario for the majority is that security options will be unused or mis-configured.

The survey and trial findings have revealed clear problems in the understanding and use of security features. In considering these, it is worth remembering that both sets of findings were based upon overall groups of users with above-average IT literacy. As such, it is likely that the usability difficulties they highlighted would be even more pronounced amongst a more general sample of users.

Finally, an important point to appreciate is that the challenge of end-user security will not be solved by HCI and usability improvements in isolation. The issue needs to be seen as part of a wider range of user-facing initiatives, including awareness-raising and education, so that users properly appreciate their need for security and the threats that they may face. Without this, we could simply have usable solutions that no-one recognizes the need to use.

References

AOL/NCSA. (2005). AOL/NCSA online safety study. America Online and the National Cyber Security Alliance, December 2005. Retrieved March 21, 2007, from http://www.staysafeonline.info/pdf/safety_study_2005.pdf

Chatziapostolou, D., & Furnell, S. M. (2007, April 11-13). Assessing the usability of system-initiated and user-initiated security events. In Proceedings of ISOneWorld 2007, Las Vegas. Washington DC: The Information Institute.

CRA. (2003). Grand research challenges in information systems. Washington DC: Computing Research Association. Retrieved March 21, 2007, from http://www.cra.org/reports/gc.systems.pdf

Cranor, L. F., & Garfinkel, S. (Eds.). (2005). Security and usability: Designing secure systems that people can use. Sebastopol, CA: O'Reilly Media.

DeWitt, A. J., & Kuljis, J. (2006, July 12-14). Aligning usability and security: A usability study of Polaris. In Proceedings of the Second Symposium on Usable Privacy and Security (SOUPS '06) (pp. 1-7). Pittsburgh, Pennsylvania.


Furnell, S. M. (2004). Using security: Easier said than done? Computer Fraud & Security, April, 6-10.

Furnell, S. M., Bryant, P., & Phippen, A. D. (2007). Assessing the security perceptions of personal Internet users. Computers & Security, 26(5), 410-417.

Furnell, S. M., Jusoh, A., & Katsabas, D. (2006). The challenges of understanding and using security: A survey of end-users. Computers & Security, 25(1), 27-35.

Furnell, S. M., Katsabas, D., Dowland, P. S., & Reid, F. (2007, May 14-16). A practical usability evaluation of security features in end-user applications. In Proceedings of 22nd IFIP International Information Security Conference (IFIP SEC 2007), Sandton, South Africa. New York: Springer.

Johnston, J., Eloff, J. H. P., & Labuschagne, L. (2003). Security and human computer interfaces. Computers & Security, 22(8), 675-684.

Katsabas, D. (2004). IT security: A human computer interaction perspective. Master's thesis, University of Plymouth, UK.

Katsabas, D., Furnell, S. M., & Dowland, P. S. (2006, April). Evaluation of end-user application security from a usability perspective. In K. K. Dhanda & M. G. Hunter (Eds.), Proceedings of 5th Annual ISOneWorld Conference and Convention, Las Vegas, USA, (pp. 19-21). Washington DC: The Information Institute.

Leyden, J. (2006). MS anti-spyware labels Symantec as Trojan. The Register, 14 February. Retrieved March 21, 2007, from http://www.theregister.co.uk/2006/02/14/ms_anti-spyware_false_positive

Microsoft. (2007). What's not secure. Help protect yourself: Security in Office tutorial. Microsoft Office Online, Microsoft Corporation. Retrieved March 21, 2007, from http://office.microsoft.com/training/training.aspx?AssetID=RP010425901033&CTT=6&Origin=RP010425891033

Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 25-64). New York: John Wiley & Sons.

Schultz, E. (2002). Network Associates drops PGP. Computers & Security, 21(3), 206-207.

Whitten, A., & Tygar, J. D. (1999, August 23-26). Why Johnny can't encrypt: A usability evaluation of PGP 5.0. In Proceedings of the 8th USENIX Security Symposium, Washington, DC, USA.


Chapter XIII
CAPTCHAs: Differentiating between Human and Bots

Deapesh Misra
VeriSign iDefense Security Intelligence Services, USA

Abstract

The Internet has established firm, deep roots in our day-to-day life. It has brought many revolutionary changes in the way we do things. One important consequence has been the way it has replaced human-to-human contact. This has also presented us with a new issue, which is the requirement for differentiating between real humans and automated programs on the Internet. Such automated programs are usually written with a malicious intent. CAPTCHAs play an important role in solving this problem by presenting users with tests which only humans can solve. This chapter looks into the need, the history, and the different kinds of CAPTCHAs that researchers have come up with to deal with the security implications of automated bots pretending to be humans. Various schemes are compared and contrasted with each other, the impact of CAPTCHAs on Internet users is discussed, and to conclude, the various possible attacks are discussed. The author hopes that the chapter will not only introduce this interesting field to the reader in its entirety, but also stimulate thought on new schemes.

Introduction

Human interactive proofs (HIPs) are schemes which require some kind of interaction from a human user that is tough for a program to simulate. "Completely automated public Turing tests to tell computers and humans apart" (CAPTCHAs) are a class of HIPs which are tests that are so designed that humans can easily pass them while automated programs have a very tough time in passing them. Thus, such tests try to prevent malicious automated programs from accessing Web services which are meant to be used by human users only.

Differences in the capabilities between humans and computer programs, which can be tested and evaluated over the Internet, are made use of to create a CAPTCHA. Generally, hard "artificial intelligence" (AI) problems are turned into


CAPTCHAs. Usually such tests utilize schemes which exploit the differences in the cognitive capabilities between humans and computers, for instance, exploiting the difference between humans and computer programs in understanding distorted text.

Necessity

As the Internet grows into our daily lives and removes human to human interaction by considerable leaps and bounds, the necessity to identify whether the entity on the other side of the Internet is really a human being or an intelligent program has gained immense importance. Many e-commerce businesses which cater to such a growing population of human users on the Web have business models in which the primary assumption is that humans are the users of the service. Automated programs are increasingly able to perform many tasks on the Web just like a human user. In many cases, these automated bots are to be denied access to the service. In all such scenarios CAPTCHAs play the role of the guard which keeps the bots from accessing the services.

Some of the immediate scenarios wherein there is a necessity of segregating the human and the non-human user are as follows:

• Online polls
• Preventing spammers from getting free mail IDs
• Preventing chat bots from irritating people in chat rooms with advertisements
• Preventing automated dictionary attacks in password systems (Pinkas & Sander, 2002)
• Preventing unruly search engine bots from indexing sites
• Preventing unethical pricing practices in e-commerce
• Preventing inflating/deflating rankings in online recommender systems
• Preventing spam in blog comments
• Preventing game bots from playing online games
• Preventing DDoS attacks (Gligor, 2005)
• Preventing automated worm propagation (e.g., Santy Worm; Provos, McClain, & Wang, 2006)

While these were some of the current reasons for the deployment of CAPTCHAs, as e-commerce grows and as the Internet replaces human to human interaction, new scenarios requiring CAPTCHAs will emerge.

History

The earliest attempt, and perhaps the longest continuing one, is a classic example of trying to fool the automated programs which try to harvest mail IDs on the Web. This is the custom of putting out mail IDs on the Web with the "@" symbol replaced by "at" and by other such variations. Some variants are:

• Mail_id(AT)mail_provider(DOT)com
• Mail_id@mail_providZr.nZt (Replace Z with E)

instead of mail_id@mail_provider.com. This practice, called "address/mail munging," is still prevalent and has been able to withstand attacks from basic automated scripts which try to harvest mail IDs.
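As a toy illustration (not from the original chapter) of why munging defeats naive harvesters, consider a simple regular-expression scraper: the munged forms above no longer match the literal address pattern it relies on, or yield a deliberately wrong address, while a human reader can still reverse the stated rules.

```python
import re

# A naive harvester looks for literal user@host.tld patterns.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

page = """Contact: mail_id(AT)mail_provider(DOT)com
Backup: mail_id@mail_providZr.nZt (Replace Z with E)"""

print(EMAIL_RE.findall(page))  # only the second, still-munged (and thus wrong) address

# A human can trivially undo the stated munging rules:
def demunge(text: str) -> str:
    text = text.replace("(AT)", "@").replace("(DOT)", ".")
    return text.replace("Z", "E")  # per the "Replace Z with E" hint

print(demunge("mail_id(AT)mail_provider(DOT)com"))  # mail_id@mail_provider.com
```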


Moni Naor (Naor, 1996) and the researchers at Georgia Tech (Xu, Lipton, & Essa, 2000; Xu, Lipton, Essa, & Sung, 2001) were among the earliest contributors to the field of CAPTCHAs. The earliest attempt at using a CAPTCHA on the Internet was by Altavista in 1997, and was to prevent Web-bots from abusing the free URL submission utility. This was a word based CAPTCHA in which the user had to recognize the distorted word. In 2000, Yahoo was in need of some mechanism to prevent bots from joining the chat rooms and directing the chat room users to advertisements. The team at Carnegie Mellon University (CMU) came up with many new ideas (Ahn, 2005; Ahn, Blum, & Langford, 2004, 2002).

Turing Tests

The task of differentiating between a computer program and a human being is related to the concept of the "Turing test." In a classic paper (Turing, 1950), Turing suggested a simple game called "the imitation game." This went on to be called the "Turing test." The test aims to determine if a machine is intelligent or not. Turing suggested that a parameter which could be conveniently used as a yardstick to determine if a computer program is intelligent or not is the ability of a program to carry on a meaningful conversation with a human. If the program can do so for some stipulated amount of time, then it can be safely asserted that the program is intelligent. This is the famous Turing test.

The Turing test comprises three parties: a human judge, a human participant, and a machine. The machine and the human are able to interact with the judge in a manner which does not give away their true identities. The interactions with the judge consist of answers to the questions asked by the judge. The judge tries to identify the machine by asking intelligent questions. The machine pretends to be a human while the human tries to prove that he/she is the real human. If the machine has been able to successfully pretend to be a human by means of conversation, then it can be stated that the machine is intelligent. The philosophical considerations of this test have been discussed widely (Anderson, 1964; Penrose, 1994, 1989; Oppy & Dowe, 2005; Crockett, 1994). These tests are related to CAPTCHAs, since the CAPTCHA test also tries to differentiate between humans and machines.

The rest of this chapter is organized as follows. We start off with the various existing definitions for a CAPTCHA, along with our own inputs for the definition, in the second section. We then look at the existing CAPTCHA schemes in the third section. We also compare and analyze these existing schemes. Then in the fourth section we take a look at some new schemes that we have created. In the fifth section we look at the real world issues surrounding the use of CAPTCHAs. Here we look into acceptance issues, the use of CAPTCHA like schemes for sending out spam, and the problems faced by the online gaming industry from CAPTCHAs. To round up, we look at some of the attack mechanisms that have been proposed in the sixth section. We end with our conclusions in the seventh section and peek at what the future might have in store for CAPTCHAs.

CAPTCHA Definition

Existing Definitions

A "completely automated public Turing test to tell computers and humans apart" (CAPTCHA) has been defined (CMU, 2000) as a program which generates a test which:

• Most humans can pass
• Current computer programs can not pass

Additional requirements for a test to be called a CAPTCHA are as follows:

• Test generation code and data should be public
• The test should automatically be generated and graded by a machine

CAPTCHA tests should be such that an average computer user has no difficulty in passing them, and feels at ease while going through the test.

A more technical definition of CAPTCHA is provided in (Ahn, Blum, Hopper, & Langford, 2003) as: "A CAPTCHA is a cryptographic protocol whose underlying hardness assumption is based on an AI problem."


This definition suggests that what could be classified as a CAPTCHA currently would lose that distinction if a computer program could pass that test sometime in the future with the growth of artificial intelligence (AI). The authors of (Ahn et al., 2003) state that CAPTCHAs have a two way effect. On one hand, they keep the malicious programs away, and on the other hand they provide motivation for the growth of the field of AI.

Also to be noted is that the definition of the term "hardness" is not precise and is defined in terms of the consensus of a community: an AI problem is said to be hard if the people working on it agree that it is hard.

Revised Definition

We have revised the definition of a CAPTCHA and also provided some features that are desirable for the CAPTCHA to have. These new guidelines are an amalgamation of the original definitions and desired properties with the guidelines from the Microsoft CAPTCHA team (Rui & Liu, 2003a) and our own inputs.

A CAPTCHA is a test which:

• Most humans can easily pass
• Computers can not pass (unless they randomly guess)
• Is generated and graded by a machine
• Does not base its strength on secrecy

CAPTCHAs have the following desirable properties:

• They can be quickly taken by a user
• They can be quickly generated and evaluated
• The probability of guessing the right answer is small
• They are intuitive to understand and to solve for humans
• They are independent of the language and culture—universal in nature
• The strength of the scheme is well understood

A desirable property is that the problem is well understood. We do not intend to suggest that a CAPTCHA necessarily make use of well researched hard problems, though we point out that to remain robust for a long time, the problem that the CAPTCHA is exploiting had better be a well known hard problem. If CAPTCHAs are to contribute to the development of AI, then it might be better that they also try to exploit relatively obscure AI problems and in that process increase the understanding of areas that are not well researched yet.

CAPTCHA Names

CAPTCHAs have been called by different names. The different names used for them are "human interaction proofs" (HIP), "reverse Turing tests" (RTT), "mandatory human participation schemes," "human-in-the-loop protocols," and "automated Turing tests" (ATTs).

The researchers who were responsible for coining the names "CAPTCHA" and "HIP" maintain that CAPTCHAs are a class of HIPs. HIPs are much broader in the sense that they could be protocols to distinguish a particular human or a class of humans (like identifying humans based on gender or age, etc.).

Reverse Turing Tests

It has been suggested that the CAPTCHA is a "reverse Turing test" (RTT) since the judge is a machine which tries to identify the human. In the original Turing test, the judge was a human trying to identify the machine. Another reason for calling it so is the fact that these tests have a goal which is the reverse of the original Turing test. The Turing test assumes that a computer program can be intelligent and goes on to determine if such a claim of a computer


program is true or not. The CAPTCHA test starts off with the assumption that the computer program is not as intelligent as the human and exploits this difference.

It is important to note that Turing tests do not aim to differentiate between humans and computers, while CAPTCHAs do. CAPTCHAs and Turing tests are related to each other only because most CAPTCHAs use the test for intelligence as a way to differentiate between these two classes of entities.

Dr. Luis von Ahn (Carnegie Mellon University) suggests that the term RTT has already been reserved for a different scenario (personal communication). It was first used in the context of denoting that the human player reversed his/her objective and, instead of trying to prove he/she to be human, would try to prove to be a computer.

Existing CAPTCHA Schemes

Published Schemes

Many research teams have created new CAPTCHA schemes. Current CAPTCHA schemes can be subdivided largely into:

• Character based CAPTCHA schemes
• Image based CAPTCHA schemes
• Audio based CAPTCHA schemes
• Miscellaneous CAPTCHA schemes

Character Based CAPTCHA Schemes

Gimpy

The Carnegie Mellon University team came up with many CAPTCHA schemes. Their character based CAPTCHA scheme was called "Gimpy" (CMU, 2000).

Figure 1. Gimpy

Gimpy bases its strength on the assumption that humans can read extremely distorted and corrupted text while current computer programs are not very efficient in doing the same. The test chooses a few words randomly from a dictionary and then displays the corrupted and distorted version of these words to the user as an image. The user is expected to recognize the word/s and type them in order to pass the test. Another version called ez-Gimpy is a simpler test in which, instead of multiple words, a single word is displayed to the user. The test is not universal in nature since it assumes that the user is comfortable with a particular language.

Georgia Tech's Contributions

The team at Georgia Tech also came up independently with their CAPTCHA scheme (Xu et al., 2000; Xu et al., 2001). They suggested the use of a new type of trapdoor one-way hash function to convert a character string into an image. They came up with the idea of such a CAPTCHA while trying to solve the issue of "screenscrapers" in the context of Web commerce pricing wars. Apart from this problem, they proposed that their CAPTCHA scheme would also prevent online dictionary attacks and denial of service attacks. The protocol that delivered the CAPTCHA was called a "humanizer." Instead of storing the answer to the puzzle, they recommended the use of message authentication codes (MAC) which would hash the right answer for later comparison purposes. They did not propose any specific schemes.
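A minimal sketch of that stateless MAC idea (our illustration, not the published protocol): the server tags each challenge's answer with an HMAC instead of storing it, and later verifies the user's response against the tag.

```python
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)  # kept secret on the server

def issue_challenge(answer: str) -> tuple[str, str]:
    """Return (nonce, tag) to embed in the form; the answer itself is not stored."""
    nonce = os.urandom(8).hex()
    msg = f"{nonce}:{answer.lower()}".encode()
    return nonce, hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def verify(nonce: str, tag: str, response: str) -> bool:
    msg = f"{nonce}:{response.lower()}".encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_challenge("wobbly")  # "wobbly" is the word rendered in the image
print(verify(nonce, tag, "wobbly"))     # True
print(verify(nonce, tag, "wrong"))      # False
```

A real deployment would additionally bind an expiry time into the tag and remember used nonces, so that a solved challenge cannot simply be replayed.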


Pessimal Print

In this scheme (Coates, 2001), low quality images of text are used as a way to differentiate between computer programs and human users. The test taker has to recognize the word and type it in. The assumptions of the test are that the readers are well versed with the English language's alphabet and have some years of reading experience and familiarity with the English language.

Figure 2. Pessimal print

The test uses English dictionary words. Thus, the test assumes that the user is comfortable with the English language, and is therefore not universal in nature. The database has to be kept a secret, since an attacker who knows all the words and the distortions can either create a database of all possible distorted words or, when presented with a test, use the word database to increase the chances of guessing what the distorted word is.

Baffle Text

Utilizing ideas from the psychophysics of human reading, this CAPTCHA scheme (Chew & Baird, 2003) distorts non-dictionary, but pronounceable, words and asks the user to recognize the letters. The words chosen are not close to dictionary words. Some amount of familiarity with the English language is assumed.

Figure 3. Baffle text

ScatterType

The aim of this CAPTCHA scheme (Baird & Riopka, 2004) is to form images of English-like words which can resist character segmentation attacks. Each character is fragmented using either horizontal or vertical cuts, and these fragments are scattered by vertical or horizontal displacements. Thus, the letters are not prone to segmentation attacks. The segmented letters are then combined to form the word image. This test also assumes that the test taker is familiar with the English language and is thus not universal in nature.

Microsoft CAPTCHAs

The Microsoft team came up with a few ideas for word based CAPTCHAs (Patrice, 2003; Chellapilla, Larson, Simard, & Czerwinski, 2005b). Their HIPs are claimed to be robust against segmentation attacks. The test uses local warps at the character level and word warps at the word level. Words are intersected with arcs (thick and thin) to further prevent segmentation attacks. The letter based CAPTCHA requires that the test taker is familiar with the set of alphabets. User complaints against this test have been that it is tough to differentiate between some letters and numbers, such as the digit "1" and the lowercase alphabet character "l." The scheme is dependent on the language.


Figure 4. Scatter type

Figure 5. Microsoft HIP
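All of these word based schemes share the same generation pipeline: render a word, then warp and occlude it until OCR struggles. A minimal Pillow-based sketch of that pipeline follows (our illustration, not any of the published generators; it uses the default bitmap font, which a realistic deployment would replace with a TrueType font).

```python
import random
from PIL import Image, ImageDraw, ImageFilter, ImageFont, ImageOps

def make_word_captcha(word: str, path: str = "captcha.png") -> None:
    img = Image.new("L", (60 + 30 * len(word), 100), color=255)
    font = ImageFont.load_default()
    # Local warps: draw each character separately with a random rotation/offset.
    x = 20
    for ch in word:
        glyph = Image.new("L", (40, 60), color=255)
        ImageDraw.Draw(glyph).text((10, 10), ch, font=font, fill=0)
        glyph = glyph.rotate(random.uniform(-25, 25), fillcolor=255)
        img.paste(glyph, (x, random.randint(15, 35)), ImageOps.invert(glyph))
        x += random.randint(22, 30)
    # Arcs (thick and thin) across the word to hinder character segmentation.
    draw = ImageDraw.Draw(img)
    for _ in range(2):
        y = random.randint(30, 70)
        draw.arc([0, y - 20, img.width, y + 20], 0, 180, fill=0,
                 width=random.choice([1, 3]))
    img.filter(ImageFilter.GaussianBlur(0.5)).save(path)

make_word_captcha(random.choice(["garden", "wobble", "planet"]))
```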

Human Handwriting Based CAPTCHAs

This scheme (Rusu & Govindaraju, 2005) suggests that human handwriting can be used as a CAPTCHA. Gestalt laws are applied to handwritten samples so as to make it tough for computers to recognize the text while keeping it possible for humans to recognize the text. The scheme assumes familiarity with the handwritten words and the language. Automatic generation and grading would be a problem.

Image Based CAPTCHA Schemes

Bongo and Pix

The CMU team came up with two image based CAPTCHA schemes—Bongo and Pix (CMU, 2000).

• Bongo: This CAPTCHA tests the visual recognition ability of the user. Two series of blocks, left and right, are displayed to the user. The blocks in the left series differ from those in the right in a certain fixed way. The user is provided with four options which consist of four single blocks, and the user is asked to determine if each of these options belongs to the right or to the left series. Bongard problems have been studied for some time now, and this database is available on the Internet (Index of Bongard Problems, n.d.). Generating new puzzles requires human intervention, and since this is a test of intelligence, most users will find taking them stressful.
• Pix: This is a test in which the user is presented with four distorted images of a particular object and asked to recognize the name of the object. The test maintains a large database of labeled images. It randomly picks an object, then randomly finds images of this object, distorts them, and presents these to the user, who has to recognize the theme/object to pass the test.

In order to label images, a parallel ongoing effort is a game which is played on the Internet. Playing this game results in the labeling of images (The ESP Game, n.d.; Ahn & Dabbish, 2004). The game is called "The ESP Game." The idea of utilizing human cycles to do some useful work is being researched (Ahn, 2005). The Pix scheme requires a database of labeled images, which requires human intervention. Labels and images can be very specific and thus not universal. Also, the database may need to be kept a secret.
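A minimal sketch of how a Pix-style test could be generated and graded from a labeled image database (our illustration; the directory layout, one folder of image files per label, is an assumption):

```python
import random
from pathlib import Path

IMAGE_DB = Path("labeled_images")  # e.g., labeled_images/horse/*.jpg

def make_pix_test(num_images: int = 4) -> tuple[str, list[Path]]:
    """Pick a random labeled object and a few of its images (assumes each
    label folder holds at least num_images files) for distortion/display."""
    label_dir = random.choice([d for d in IMAGE_DB.iterdir() if d.is_dir()])
    images = random.sample(list(label_dir.glob("*.jpg")), num_images)
    return label_dir.name, images

def grade(expected_label: str, user_answer: str) -> bool:
    # A forgiving comparison; a real deployment might also accept synonyms.
    return user_answer.strip().lower() == expected_label.lower()

label, imgs = make_pix_test()
# ...distort and serve imgs to the user, collect their typed answer, then:
print(grade(label, label))  # a correct answer passes
```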


Figure 6. Pix

Implicit CAPTCHA

These CAPTCHAs (Baird & Bentley, 2005) are completely different from the character based ones, and the goal here is to reduce the irritation to the test taker. The tests are clever enough that the user does not feel threatened by them and completes them with the least amount of stress. In one of the suggested schemes, the user is supposed to interact with the given picture by clicking on some part of it and thus pass the test. The image in this scheme provides the background for the test, upon which an interaction based task is built. Some ideas to perform this are as follows:

• Challenges are disguised as necessary browsing links
• Challenges can be answered with a single click while still providing several bits of confidence
• Challenges can be answered only through experience of the context of the particular Web site
• Challenges are so easy that failure indicates a failed bot attack

Automatic creation of such tests is not possible; a human has to design these tests. It can be expected that the instruction set would result in this test not being universal in nature.

Figure 7. Implicit CAPTCHA


ARTiFACIAL

This scheme (Rui & Liu, 2003b; Rui & Liu, 2003a) reasons that human faces are the most universally familiar object to humans, thus making them a very good candidate for HIPs, and particularly so since programs are not yet as good as humans at detecting human faces. The researchers also came up with a more concrete set of desirable features for a CAPTCHA.

The test is to detect human faces in a cluttered background. The program, called ARTiFACIAL, generates a distorted face in a cluttered background for each test, and the test taker has to detect the only complete human face. For every user request, an image of a complete face and many incomplete faces on a cluttered background is presented, and the user is asked to click on the six points (eye corners, mouth corners) of the only complete face; if this is done correctly, then the user is deemed to be a human.

Figure 8. ARTiFACIAL

This CAPTCHA meets all the requirements and also possesses all the desirable features required in a CAPTCHA. But during user trials, some users complained that the distorted faces were disturbing and found the test to be repulsive (Rui, Liu, Kallin, Janke, & Paya, 2005).

Image Recognition CAPTCHA

In the scheme "image recognition CAPTCHAs" (Chew & Tygar, 2004), the hardness of the problem is provided by the one way transformation between words and pictures. For a machine, it is easy to get pictures corresponding to a particular chosen word, but tough the other way around. Thus, given a set of pictures associated with a word, the human test taker can easily find the word while the machine will fail. This scheme plays around with a few possibilities of this mapping between words and their associated pictures. The three schemes discussed are:

• The naming images CAPTCHA
• The distinguishing images CAPTCHA
• The identifying anomalies CAPTCHA

In the naming CAPTCHA scheme, the user is presented with six images of a common term. The user has to type the common term associated with these images to pass the CAPTCHA. In the distinguishing CAPTCHA, the user has to determine if two subsets of images are associated with the same word or not. In the identifying anomalies CAPTCHA, the test subject is shown a set of images where all but one image is associated with a word, and the test taker has to identify the anomalous image.

The problem with these image schemes is that they need images which are labeled. The use of a search engine such as Google is suggested, but this has the problem of wrongly labeled images. The solution suggested to this problem is that multiple rounds of the test need to be given to the user. This is not practical in the real world, wherein a single round of a CAPTCHA is in itself considered to be annoying. Some of these schemes are susceptible to simple guessing attacks.


IMAGINATION CAPTCHA

The IMAGINATION (image generation for Internet authentication) CAPTCHA (Datta, Li, & Wang, 2005) is an image based CAPTCHA scheme. It is similar to the previously discussed "image recognition CAPTCHA" scheme. In this scheme the user is first provided with a composite image of many images and has to choose an image to annotate. Once the user chooses an image, a distorted version of that image is presented with a group of labels. The user now has to select the appropriate label for the image from this group.

The scheme requires an annotated database. It aims to reduce the number of rounds required to minimize the success rate of random guessing attacks. This is achieved by the added task of asking the user to click on the geometric center of the image.

Animation Based CAPTCHA

In this scheme (Athanasopoulos & Antonatos, 2006), the authors proposed the idea of using animated tests as CAPTCHAs. One of the major goals of this work was to prevent "replay attacks." To do so, the authors used animation in their CAPTCHA tests, so that the test is not a static image which can be easily forwarded.

Audio Based CAPTCHA Schemes

Existing CAPTCHA schemes are unfair towards visually disabled people. The basic assumption in almost all schemes has been that the test taker can see. An attempt to make CAPTCHAs suitable for the visually impaired was made by the CMU team with their "sounds CAPTCHA" (CMU, 2000), while other attempts at audio CAPTCHAs also exist (Kochanski, Lopresti, & Shih, 2002; Google's Audio CAPTCHA, 2006).

Sounds CAPTCHA

This audio based CAPTCHA was developed by the CMU research team (CMU, 2000). It is an attempt to use audio as a way to distinguish between the human and the computer program, and is similar to the picture based Gimpy. A word or a sequence of numbers is picked at random, combined into a sound clip, and distorted. This distorted clip is presented to the user, who is asked to recognize the word/numbers. The test has to be specific to each test taker, since it depends on the languages that the test taker knows.

Miscellaneous CAPTCHA Schemes

Here we describe general CAPTCHA schemes.

Collaborative CAPTCHAs

Collaborative filtering CAPTCHAs (Chew & Tygar, 2005) try to extract complex patterns that reflect human choices. Thus, for a particular question which has a set of choices as an answer, it is suggested that computer programs would not know the most popular choice of humans. The right choice among the answer choices depends upon the answers of a group of human users and the response of the test taker to a set of control questions. Thus, the questions asked in this CAPTCHA test do not have an absolutely right answer. The correct answer is measured from different human opinions and is a reflection of human choices. For example, a set of jokes can be given and the user could be asked to rate them.

The CAPTCHA consists of multiple rounds, and the test instructions are long and complicated. New CAPTCHA tests need to be rated, thus quick production of a large number of tests is not possible. Creation of new sources of data that can be rated is itself a problem. Also, answers to such kinds of CAPTCHAs are very specific to culture and thus not international in nature. There could also be an attack by motivated large groups of malicious people who would answer incorrectly so as to skew the results of the collaborative filter.
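A minimal sketch of the consensus grading such a scheme implies (our illustration, with made-up ratings): the test taker passes if their ratings on control questions sit close enough to the human population's averages.

```python
# Hypothetical population data: average human rating (1-5) per control joke.
population_avg = {"joke_1": 4.2, "joke_2": 1.8, "joke_3": 3.1}

def passes(test_taker_ratings: dict[str, float], tolerance: float = 1.0) -> bool:
    """Pass if the mean absolute deviation from the crowd's ratings is small."""
    deviations = [abs(test_taker_ratings[q] - avg) for q, avg in population_avg.items()]
    return sum(deviations) / len(deviations) <= tolerance

print(passes({"joke_1": 5, "joke_2": 2, "joke_3": 3}))  # True: close to the crowd
print(passes({"joke_1": 1, "joke_2": 5, "joke_3": 5}))  # False: far from the crowd
```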
is itself a problem. Also answers to such kind of


Other Ideas

Moni Naor had the first nascent CAPTCHA ideas, which are wide and general in nature (Naor, 1996). Some of the ideas are:

• Gender recognition—given a face, decide if it is male or female
• Facial expression—given a face, decide its emotion
• Body part recognition—"click on the left eye"
• Deciding undressed-ness—given a few photos, decide which has the most/least clothes
• Naive drawings—look at and recognize the object
• Handwriting recognition
• Speech recognition
• Fill in the blanks or reorder to make meaningful sentences

A way to detect Web robots is detailed in Tan, Steinbach, and Kumar (2005). This idea tries to extract useful patterns from Web access logs. The basis of differentiating between a human Web surfer and a bot is the characteristics of Web surfing. For instance, access by Web robots would be broad but shallow, while human user access to a particular site would tend to be more focused. Using such assumptions and Web browser logs, a decision tree classifier could be made which could detect the presence of Web robot surfing. Since the Web access logs are used to make the decision, the identification can be made only after the Web bot or the human user has finished surfing the site.
tests will have to get tougher with passing time.


Unpublished Schemes

Many schemes have been created by people interested in CAPTCHAs and released on the Internet. The use of Google as a search engine and a labeler of images is one such idea. A few tools created for fun could be used for creating CAPTCHAs (Guess the Google, n.d.; Montage a Google, n.d.). Related to it is the idea of using photos in online photo databases such as Flickr for CAPTCHA tests. Though, with any public image database, there are always problems of wrongly labeled images, offensive images, and so forth.

One CAPTCHA scheme uses photos of kittens and challenges the user to identify all the kitten photographs (KittenAuth, n.d.). Another scheme presents photographs of humans and asks the test taker to rank the photos in terms of "hotness" (HotCaptcha). Microsoft recently came up with an image based CAPTCHA scheme called "Asirra" (animal species image recognition for restricting access). The test taker is presented with photographs of dogs and cats and has to identify the cats in these photographs.

Our New CAPTCHA Schemes

Our attempt was to try to come up with new CAPTCHA schemes which are more human friendly than current schemes.

The Problem with Existing Schemes

Most of the CAPTCHA schemes in use as of now are text based CAPTCHAs. Distorted letters are given to the test taker, and the test taker has to recognize these letters. Text based CAPTCHAs assume that the user is familiar with the English language. International users might not be very familiar with the English language character set. To keep up with the development of OCR systems and character recognition schemes, these tests will have to get tougher with passing time. As of now, they are already quite tough to solve sometimes. Thus, new ideas for CAPTCHAs will soon be required.

Instead, CAPTCHA schemes which can be easily modified with every new attack or progress in AI have to be thought of. Such schemes can be deployed for a longer time without the fear of having to redeploy a completely new scheme of CAPTCHA after every new successful attack.

Since it is tough to measure distortion per se, various Web service providers have used their own measure of what they think is considerable distortion to create distorted letter CAPTCHAs. Thus, these CAPTCHAs range from being very easy for the machine to break to very tough for the human user to pass. Both extremes are of no use at all.

Moreover, the Internet is full of complaints against text based CAPTCHAs, as users have generally found them to be irritating and, at times, tough. Since the current user ease with these CAPTCHAs seems to be low, human friendly CAPTCHAs need to be thought of.

Face Recognition CAPTCHA

Your brain is very weak compared to a computer. I will give you a series of numbers, one, three, seven... Or rather, ichi, san, shichi, san, ni, go, ni, go, ichi, hachi, ichi, ni, ku, san, go. Now I want you to repeat them back to me. A computer can take tens of thousands of numbers and give them back in reverse, or sum them or do lots of things that we cannot do. On the other hand, if I look at a face, in a glance I can tell you who it is if I know that person, or that I don't know that person. We do not yet know how to make a computer system so that if we give it a pattern of a face it can tell us such information, even if it has seen many faces and you have tried to teach it.
- Richard P. Feynman (Feynman, 2001)

Our proposed scheme (Misra & Gaj, 2006a) utilizes the fact that humans are better than computers at recognizing human faces. For a machine, this task is still very tough (Zhao, Chellappa, & Phillips, 2003), and there is a good understanding of how hard the problem is. These properties are well exploited to create a CAPTCHA.

Our scheme is an image based CAPTCHA. The scheme is similar to "ARTiFACIAL" (Rui & Liu, 2003a), the difference being that ARTiFACIAL poses a face detection problem while ours poses a face recognition problem. In our scheme, we move away from making any assumption about the language familiarity of the Web service user. We use image based CAPTCHAs to make our tests universal and to increase the comfort level of the user.

The property that we exploit to create our CAPTCHA is that, given two distorted images of a human face, a human user can quickly match these two images as being of the same person, while for a computer program it is very tough to match these two distorted images. The test taker is presented with two sets of distorted human face images. Each set has the distorted images of the same group of people. Each set could have four to five images, though the exact number of faces is something that is yet to be determined. The user is expected to match the same person's faces in these two sets to pass the test.

The images are chosen from any one of the publicly available face databases. Image processing tools such as the Gimp (Gimp 2.2, n.d.) can be easily automated to create the distortions and apply them to the photographs. The distortions applied to the faces are cleverly chosen so as to be able to defeat face recognition algorithms.

Test Generation Scheme

The generation of the CAPTCHA requires a database of human face images. Distortion of the images, creation of the CAPTCHA test, and evaluation are automated tasks.


Image Databases

Our scheme makes use of human face photograph databases that are public; there is no need for the database to be secret. We chose the UMIST face database (Graham & Allinson, 1998). The frontal face shots of the people in the database were distorted to create the test.

Image Processing Tools

The use of commonly available image processing tools was looked into. Successful results were obtained with the open source tool "Gimp 2.2." This tool is particularly suitable for this task since it has a scripting language called "script-fu," which allows automatic creation of the CAPTCHAs.
The tool comes with built-in image manipulation effects called "filters." These basic built-in filters were used to create the distortion effects. The image distortion effects can easily be extended to create new effects as and when attackers are able to successfully break a distortion scheme that is currently in use.
For human faces, we cannot use just any random distortion, since the output should be aesthetically acceptable: extreme distortions to the human face would make the CAPTCHA disgusting. On the other hand, when choosing the parameters for the distortions, we have to ensure that the distorted output is not too simple for an image recognition scheme applied by a machine. Acceptable parameter bounds for the distortions have to be decided by a human being. Once these bounds are set for the various distortions, random values for the parameters are chosen at run time.
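To make the automated pipeline concrete, here is a minimal sketch of the idea in Python with the Pillow library. It is our illustration, not the authors' script-fu code: the two filters and the numeric bounds are hypothetical stand-ins for the Gimp filters and the human-chosen parameter ranges described above.

import random
from PIL import Image, ImageFilter, ImageOps

# Human-chosen parameter bounds: strong enough to hinder face
# recognition, mild enough to keep the face aesthetically acceptable.
# The concrete numbers here are illustrative assumptions.
BOUNDS = {
    "blur_radius": (1.0, 2.5),
    "posterize_bits": (3, 5),
    "shear": (0.05, 0.20),
}

def distort(face):
    """Apply randomly parameterized distortions within the fixed bounds."""
    radius = random.uniform(*BOUNDS["blur_radius"])
    out = face.filter(ImageFilter.GaussianBlur(radius))

    bits = random.randint(*BOUNDS["posterize_bits"])
    out = ImageOps.posterize(out.convert("RGB"), bits)

    s = random.uniform(*BOUNDS["shear"])
    # Affine shear: x' = x + s*y (PIL's 6-tuple is (a, b, c, d, e, f)).
    return out.transform(out.size, Image.AFFINE, (1, s, 0, 0, 1, 0))

def captcha_pair(path):
    """Two independently distorted renderings of one face image; the
    basis of scheme one, described next."""
    face = Image.open(path)
    return distort(face), distort(face)

A scheme-two variant, described below, would simply load two different photographs of the same subject before applying the distortions.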

Using Human Faces in Image Recognition: Scheme One

This CAPTCHA scheme requires the user to recognize the same image of a subject with two different, random distortions applied to it. Thus, in effect, the human user is performing an image recognition task, the image being a human face.
The two distortions can be chosen such that one makes the test tough for holistic matching face recognition schemes while the other makes it tough for feature matching face recognition schemes.

Recognizing Human Faces: Scheme Two

An extension of our basic idea is to use different photos of the same individual (in two different poses, for instance), to which different distortions are applied respectively.

User Trials

Methodology

The CAPTCHA test was taken by a few volunteers. The UI for the tests consisted of a simple Web page, as shown.

Figure 9. Example—CAPTCHA test for scheme 1
Figure 10. Example—CAPTCHA test for scheme 2
Figure 11. User interface—test 1
Figure 12. User interface—test 2
Figure 13. Example 1—image based CAPTCHA
Extension

There has been renewed interest in face recognition in recent years. Thus, though we understand its current limitations and exploit them to create CAPTCHAs, there has been and will be progress in this area. To make the CAPTCHA tougher against face recognition programs, this scheme could be extended to distortions of general images rather than only human face images. The advantage is that it is tougher to recognize general random images than to recognize human faces, since all human faces share common features.

Improvements

The obvious disadvantage of such a "multiple choice test" is that it is susceptible to guessing
attacks. Existing word based CAPTCHAs have a much larger probable answer space, but at the same time are much more inconvenient for international users. A conflict between "security" and "usability" exists.
The current UI scheme suffers from being susceptible to "no effort" guessing attacks. Thus, we claim that the scheme is "somewhat" susceptible to a random guessing attack, since this susceptibility depends on the UI scheme used to take in the input.
A method to increase the answer space makes use of a different UI scheme to take the inputs. In this scheme there are no radio buttons; instead, the test taker has to click on the correct matching photo, which is part of a larger image consisting of all possible answers. The probability of a random attack being successful is reduced in this way compared to the previous scheme. Automatic generation of such a UI is possible through the use of ImageMagick (ImageMagick 6.2.8, n.d.).

Figure 14. Example 2—image based CAPTCHA
Figure 15. Modified UI to accept the input
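As a rough illustration of this click-based UI, here is a sketch in Python with Pillow. It is our own code, not the ImageMagick-based generator used in the trials; the grid shape and thumbnail size are arbitrary assumptions.

import random
from PIL import Image

THUMB = (96, 96)   # size of each grid cell; an illustrative assumption

def build_click_captcha(candidates, correct, cols=4):
    """candidates: list of PIL images, one of which (index 'correct')
    matches the distorted target face. Returns the montage plus the
    pixel box of the correct cell, to be checked server side."""
    order = list(range(len(candidates)))
    random.shuffle(order)                      # randomize cell positions
    rows = -(-len(order) // cols)              # ceiling division
    sheet = Image.new("RGB", (cols * THUMB[0], rows * THUMB[1]), "white")
    correct_box = None
    for cell, idx in enumerate(order):
        x = (cell % cols) * THUMB[0]
        y = (cell // cols) * THUMB[1]
        sheet.paste(candidates[idx].resize(THUMB), (x, y))
        if idx == correct:
            correct_box = (x, y, x + THUMB[0], y + THUMB[1])
    return sheet, correct_box

def is_correct_click(click_xy, box):
    """A 'no effort' random click lands in the right cell with
    probability 1/(rows*cols), versus 1/n for n radio buttons."""
    x, y = click_xy
    return box[0] <= x < box[2] and box[1] <= y < box[3]

With radio buttons over n candidate answers, a blind guess succeeds with probability 1/n; a blind click on the montage succeeds only with probability 1/(rows x cols), which shrinks as more decoy images are added to the sheet.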

Analysis and Conclusions

Our new human face recognition scheme makes use of an area that is well researched and understood. Human face detection and recognition are still hard problems for machines to solve, and they are made even harder by the application of distortions to the images. The distortions also serve to break the existing face recognition schemes. Easy extensibility of these distortions, thanks to the use of the tool "Gimp," ensures that as face recognition schemes get better, new distortions can easily be created, thus keeping this idea viable for a long time.
Existing human face photo databases generally consist of photographs which are taken in constrained environments. In particular, the lighting, expression, and pose are very constrained (Howell, 1999). The creation of an image database with CAPTCHA-like tests in mind (with large variations in pose, facial expressions, and lighting) will result in images which are tougher for computer systems to break. This is particularly true for our second scheme.
The development of image distortion effects specifically designed to defeat face recognition schemes (for instance, Fisherfaces and Eigenfaces) would be the way ahead. As new schemes are developed to recognize human faces, new image distortion effects will have to be developed. A way to prevent guessing attacks also needs to be looked into.

Simple Games CAPTCHA

We propose a new idea for a CAPTCHA which is more universal than existing character based
tests and also is easier and more fun for humans to take (Misra & Gaj, 2006b). The proposed CAPTCHA relies on the user being able to successfully play a simple game and is thus visual in nature. The test taker plays a simple game using the mouse/touchpad (or keyboard) as the input device.
A very simple game, chosen randomly from a large pool of games, is presented to the user. If the user is able to complete the game successfully, then the user is judged to be a human; else the user is judged to be a machine. Playing this simple game provides the hardness of the CAPTCHA. A machine would be unable to play this game, while a human user can play such a simple game very well.
We propose the use of "Macromedia Flash" (Macromedia Web site, n.d.) based games for such simple CAPTCHAs, as such games are already widespread over the Internet. Screen shots of a few of these simple games are shown in Figures 16 and 17. For instance, with reference to Figure 17, a CAPTCHA could ask the user to make the frog eat a certain number of flies, for instance, four flies.

Figure 16. Simple "hitting the post" game
Figure 17. Simple "catching a fly" game and "shooting" game

The best games for our scheme would be those which can be played with either the mouse or the keyboard.
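Server-side, the scheme reduces to drawing a game at random from the pool and grading the outcome. The following sketch is our own illustration of that logic, not the authors' implementation; the pool entries and goal parameters are invented examples.

import random
import secrets

# A large pool of very simple games; each entry names a game and a
# function that draws a random, human-readable goal for it.
GAME_POOL = {
    "catch_fly": lambda: {"flies_to_eat": random.randint(2, 6)},
    "hit_post":  lambda: {"hits_needed": random.randint(1, 3)},
}

PENDING = {}  # challenge id -> (game, goal); server-side state

def issue_challenge():
    """Pick a game at random and remember its goal under a fresh id."""
    game = random.choice(list(GAME_POOL))
    goal = GAME_POOL[game]()
    cid = secrets.token_hex(16)
    PENDING[cid] = (game, goal)
    return cid, game, goal        # sent to the client with the game

def grade(cid, reported_outcome):
    """The game client reports what the player achieved; the server
    compares it to the stored goal. A real deployment would verify a
    signed event trace rather than trust the client's summary."""
    game, goal = PENDING.pop(cid, (None, None))
    return goal is not None and reported_outcome == goal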

Online Games and Bots

Online games are plagued with the problem of game bots playing the games and winning prize monies. Although bots can play a few online games, it would need a large jump in AI capabilities to have a bot which can play any game from the large random pool of games.
This scheme is new and differs from what was proposed and rejected in Golle and Ducheneaut (2005). There, it was proposed that well established popular games be used as a CAPTCHA. But the authors did not mean simple games which change every time; rather, they thought of using well established games, for instance, some MMORPGs (massively multiplayer online role playing games). One of their conclusions was that game bots already exist to play such games, and those game bots would be able to play the CAPTCHA games very well.

Analysis

A general discussion of our scheme follows.

Real World Existence

The kind of simple games that we propose to use in our scheme already exist in the real world, though they are being used with a different purpose in mind. A recent concept which has become popular on the Internet scene is that of "advergaming" (Advergaming on the Blockdot Web site, n.d.; Advergaming on the Innoken Web site, n.d.), and this has led to an exponential increase in the creation of such games. These are games that are used by advertisers to attract consumers, to help in brand awareness, and to increase brand recall.
So we can reuse a simple advergame as a CAPTCHA. This would perhaps keep both the Web service provider and the human test taker happy, and it offers an extended capability to a CAPTCHA.

Usability

Playing games has always been considered a "fun activity," more so if the game is simple enough. Thus, the user is not stressed at all in completing this CAPTCHA. Also, such simple games have neither complicated rules nor complicated instructions. They are intuitive enough that the user can start playing after having read just one line of instruction. These games are universal in nature and are not restricted to any age group.

Comparison and Analysis of CAPTCHA Schemes

The schemes are compared based on our definition and desirable properties of a CAPTCHA. A CAPTCHA must have these properties:

• Most humans can easily pass—"humans pass easily"
• Computers can not pass, unless they randomly guess—"computers fail"
• Generated and graded by a machine—"automation"
• Does not base its strength on secrecy—"public scheme"

Humans pass easily: Most of the character distortion schemes are tough to pass easily. Since they are language dependent, native speakers of English experience fewer problems in passing them than non-native speakers. As OCR and character recognition systems get better, the distortion has to increase, thus making these schemes tougher for humans to pass. The general complaint against these schemes is the confusion between certain numbers and certain lowercase characters. Image based CAPTCHA schemes such as ARTiFACIAL, implicit CAPTCHAs, animation based CAPTCHAs, and face recognition CAPTCHAs are relatively easy.
Computers fail: Most of the existing schemes are robust against computer attacks as of now. The only exception is Gimpy, which has been broken by computer programs.

Automation: Generating implicit CAPTCHAs automatically seems to be a tough task. Schemes like Pix, image recognition CAPTCHAs, and IMAGINATION CAPTCHAs are burdened with the requirement of a pre-labeled image database. This is a tough requirement to meet. There are ideas as to how one could obtain a large database of pictures with labels, but those schemes are not very robust. Obtaining non-offensive images automatically, with the correct label free from all contextual connotations, is a tough task.
Public scheme: Schemes which rely on a database are susceptible to attacks if the database becomes public. The easy way to solve this problem is to apply some kind of distortion to the images before using them in CAPTCHA schemes.

Table 1. Comparison of existing schemes based on the definition of a CAPTCHA

Definition Guidelines | Humans Pass Easily | Computers Fail | Automation | Public Scheme
Gimpy                 | Somewhat           | No             | Yes        | Yes
Pix                   | Somewhat           | Yes            | No         | No
Pessimal Print        | Somewhat           | Yes            | Yes        | Yes
Baffle Text           | Somewhat           | Yes            | Yes        | Yes
ARTiFACIAL            | Yes                | Yes            | Yes        | Yes
Scatter Type          | Somewhat           | Yes            | Yes        | Yes
Implicit CAPTCHA      | Yes                | Yes            | No         | Yes
Microsoft HIP         | Somewhat           | Yes            | Yes        | Yes
Image Recognition     | Somewhat           | Yes            | Somewhat   | Yes
Animation CAPTCHA     | Yes                | Yes            | No         | Yes
Face CAPTCHA          | Yes                | Yes            | Yes        | Yes

CAPTCHAs also have the following desirable properties:

• They are intuitive to understand and to solve for humans—"human friendly"
• They can be quickly taken by a user—"human easy"
• The probability of guessing the right answer is small—"no effort attack resistant"
• They are independent of the language and culture—"universal"
• The strength of the scheme is well understood—"well understood problem"
• They can be quickly generated and evaluated—"quick generation & evaluation"

Human friendly: The schemes need to be easy to understand and intuitive to perform. This also means that the instructions to perform the test should be short and easy. Any CAPTCHA scheme with long and complicated instructions will lead to greater irritation for the test taker. The fact that a CAPTCHA should be universal in nature implies that CAPTCHAs should be intuitive to a large population across the world. Tasks which involve recognizing a particular language are not as easy and intuitive as image recognition tasks. Matching images is intuitive and does not assume any level of skill; it is an activity which almost all humans can perform. Though implicit CAPTCHAs are easy to perform, the instruction set itself might be tough and unintuitive.
Human easy: A CAPTCHA test should be easy to perform. The user should be able to complete it quickly. Character distortion schemes might be fast to complete or not, depending on the amount of distortion that is applied. Image matching is much quicker than performing error correction
on distorted words. With regards to the input mechanism, though, the character based schemes are the fastest, since the keyboard is the fastest way to input an answer in comparison to the mouse or any other pointing device.
No effort attack resistant: The CAPTCHA tests must be resistant to random guessing attacks. Schemes such as the identifying-anomalies image recognition CAPTCHA can be trivially broken by random guessing.
Universal: All the character distortion based schemes make an assumption about the language and thus are not universal. The image based schemes are universal (assuming that the instruction set is small and the test by nature is intuitive).
Well understood problem: To be successful as a valid CAPTCHA for a long time, it is imperative that the scheme is well understood. All the present CAPTCHA schemes are created out of known open problems in AI.
Quick generation and evaluation: Since CAPTCHAs are to be used on the Internet, and in all probability on high volume sites, it is necessary that the scheme be able to rapidly generate new CAPTCHA tests and also be able to evaluate them. Implicit CAPTCHAs, for instance, need time for generation. The CAPTCHAs which rely on labeled image databases can also not be quickly generated, as time and effort are required to label the images. Animation CAPTCHAs and simple game based CAPTCHAs need to be created beforehand.
Table 2. Comparison of existing schemes based on the desirable features in a CAPTCHA

Desirable Properties | Human Friendly | Human Easy | No Effort Attack Resistant | Universal | Quick Generation & Evaluation | Well Understood Problem
Gimpy                | Somewhat       | Yes        | Yes                        | No        | Yes                           | Yes
Pix                  | Yes            | Yes        | Yes                        | No        | No                            | Yes
Pessimal Print       | Somewhat       | Yes        | Yes                        | No        | Yes                           | Yes
Baffle Text          | Somewhat       | Yes        | Yes                        | No        | No                            | Yes
ARTiFACIAL           | Yes            | Yes        | Yes                        | Yes       | Yes                           | Yes
Scatter Type         | Somewhat       | Yes        | Yes                        | No        | Yes                           | Yes
Implicit CAPTCHA     | Somewhat       | Yes        | Somewhat                   | Yes       | No                            | No
Microsoft HIP        | Somewhat       | Yes        | Yes                        | Yes       | Yes                           | Yes
Image Recognition    | Yes            | Yes        | No                         | No        | No                            | Yes
Animation CAPTCHA    | Yes            | Yes        | Yes                        | Yes       | No                            | Yes
Face CAPTCHA         | Yes            | Yes        | Somewhat                   | Yes       | Yes                           | Yes

CAPTCHA schemes are another example of the conflict between usability and security: the most secure schemes might not be popular, while the most popular schemes might not be secure.

Figure 18. Security vs. Usability Matrix

CAPTCHAs in the Real World

CAPTCHAs can be seen in action at many sites on the Web, a few being:

• http://www.yahoomail.com
• http://www.gmail.com
• http://www.hotmail.com
• http://www.blogger.com
• https://www.kiwibank.co.nz/banking/login.asp
• http://www.register.com
• http://www.ticketmaster.com
• http://www.usps.com

Anti-spam products in the market based on CAPTCHAs include:

• Mailblocks
• www.spamarrest.com

It has been suggested (Lopresti, 2005) that CAPTCHA problems, instead of being artificial problems that need to be solved, should rather be problems from the real world. This would help research work, since answers obtained from the test takers could be used as a baseline for programs trying to advance AI. But a potential problem could be that groups of malicious users can skew the results of such an attempt. Also, since the correct answer to any such problem is not exactly known, multiple tests would be required.

Acceptance of CAPTCHAs by the Users

There are many users who get irritated when they have to pass a CAPTCHA, and they have voiced their resentment on various Internet forums. This is not surprising: any new extra steps, especially those which need some work from the human, are going to be viewed as irritating. Thus, it is necessary that these tests should not be so tough as to be considered a challenge by the users. An element of fun should be built into the CAPTCHA. Implicit CAPTCHAs (Baird & Bentley, 2005) were a first good attempt at making this process fun.
Real world schemes have had problems with negative user feedback. Gimpy was being used at Yahoo's site, but upon complaints that it was too tough, it was replaced by the easier version called ez-Gimpy. ARTiFACIAL uses an image consisting of distorted human faces. In user trials, these faces were deemed to be repulsive, disturbing, and unpleasant by human test takers (Rui et al., 2005).

Abuse of CAPTCHAs—Spam

There has been a recent wave of increase in spam mails (as of June 2006). This is because spammers are making use of a CAPTCHA concept: "Humans are good at understanding the information in images while machines can not decipher
images." This CAPTCHA property is being used to send spam which contains images instead of text. Thus, the spam filters, which are not able to understand graphic information, are unable to flag such mails as spam.

Online Games and Bots

Online games are a multi-billion dollar industry and are expected to grow. Online games are plagued with the problem of game bots playing the games and winning prize monies. This not only deters newcomers from joining the online gaming sites, but also cheats the skilled human users of their prizes. Thus, the online gaming industry is extremely interested in seeing to it that the game bots are kept away.
With regards to poker, for instance, powerful AI software exists which can play such online games very well (Brunker, 2004; Roarke, 2005). "Vexbot" and "Sparbot" are prime examples of bots which can be used to win a lot of online tournaments. Other online game business models are also severely affected by this problem. Some game sites resort to periodically updating the game and keeping a constant vigil for bots (The Korea Times, 2006). There has also been a report of the conviction of a game bot user who made money by auctioning the items that his Web bot got for him in games (New Scientist News Service, 2005). The number of MMORPG (massively multiplayer online role playing game) players, and the money to be won, is increasing. Games such as "World of Warcraft" had around 8,000,000 gamers involved as of January 2007, according to a press release by the company (Blizzard Entertainment Ltd., Press Release January 2007, n.d.).
Thus, the online game industry is a big multi-billion dollar industry that is affected tremendously by Web bots. Better solutions to keep the bots away from playing will always be needed here.

Problems with CAPTCHAs

The biggest problem with the majority of existing CAPTCHAs is that they are unfair to visually impaired people. Most of the existing schemes, and the newly proposed ones, assume that the test taker can see. This is a limitation, and to ensure fairness, ways to allow visually impaired people to authenticate that they are human have to be thought of.
Various countries have introduced measures and laws to ensure that Web sites remain accessible to all without any discrimination, specifically without any discrimination based on physical ability. Web sites which wish to remain compliant with various disability laws need to ensure that they do not discriminate against visually or physically challenged users. The World Wide Web Consortium (W3C) has set a series of standards for the accessibility of Web sites, known as the Web accessibility initiative (WAI) (WAI, n.d.).
Some existing schemes try to solve this problem by allowing the test taker to request an audio CAPTCHA instead of a visual one, while some schemes allow the test taker to speak with a customer representative to authenticate themselves as human. An example of the first method is Google's introduction, in April 2006, of an audio CAPTCHA scheme in which the test taker can choose to hear an audio clip (Google's Audio CAPTCHA, 2006). Yahoo provides the second method as an alternate way to authenticate oneself as a human.

Attacking CAPTCHAs

There has been a lot of interest in attacking CAPTCHAs. Generally, the attacks on a CAPTCHA scheme can be broadly subdivided into (Pinkas & Sander, 2002):

• Guessing attacks
• Technical AI attacks
• Relay attacks

Guessing attacks are trivial to implement and lead to no development of AI. The attacker randomly guesses the answer. These attacks are viable for schemes which have only a few answer choices.

Technical AI Attacks

A team of researchers was able to break ez-Gimpy with a 92% success rate and the Gimpy scheme with a 33% success rate (Mori & Malik, 2003). Another successful attempt was to break the clock face HIP (Zhang, Rui, Huang, & Paya, 2004). Thayananthan, Stenger, Torr, and Cipolla of the Cambridge vision group have written a program that can achieve a 93% correct recognition rate against ez-Gimpy. Gabriel Moy, Nathan Jones, Curt Harkless, and Randy Potter of Arete Associates have written a program that can achieve 78% accuracy against gimpy-r (CMU, n.d.).
Schemes to break a few visual HIPs are detailed in Chellapilla and Simard (2005). The conclusion of this research work indicated that for character based CAPTCHAs it is important that, apart from the recognition problem, the segmentation problem should also exist (that is, it should not be possible to trivially segment the individual letters) to make the scheme very robust against attacks. One paper, which studied the text recognition problem alone (assuming that segmentation had been performed successfully), concluded that computer programs were better than, or in the worst case as good as, humans at recognizing distorted characters (Chellapilla, Larson, Simard, & Czerwinski, 2005a).
Apart from the research labs, interested individuals are also trying to break CAPTCHA schemes, and various such attempts can be found on the Internet.

Relay Attacks

A possible attack against CAPTCHAs is the "relay attack" (Pinkas & Sander, 2002; Stubblebine & van Oorschot, 2004). In this attack, when a malicious entity (usually an automated bot) is presented with a CAPTCHA, the test is relayed to a human being willing to solve it in exchange for something. The human user to whom the test is relayed participates in this activity either because the user is rewarded for doing so or because the user is unaware of the relay. Relay attacks are tough to stop. An initial study of how to negate this kind of attack is in Stubblebine and van Oorschot (2004). The IMAGINATION CAPTCHA (Datta et al., 2005) is the only CAPTCHA which attempted to address this issue in its design.
The scheme of the relay attack can be described as:

1. Entity 1 requests the test
2. Entity 1 relays the test to Entity 2
3. Entity 2 solves the test and relays the answer to Entity 1
4. Entity 1 inputs the answer and passes the CAPTCHA

Figure 19. Relay attacks

Here "Entity 1" is generally the malicious automated program, while "Entity 2" is a human user. Generally, it can be assumed that these two entities are physically apart.
Since the CAPTCHA test itself should be as simple as possible so as not to irritate the genuine human user, the malicious human also finds the test simple enough to devote resources to solving it. The malicious human users perform the test in return
for an incentive (free service, financial incentives, etc.). The value they associate with the incentive decides which tests they will consider irritating to the point of refusing to take them, and which tests they will take. Thus, if the genuine human user places a considerably higher value on the service being accessed than the value the relay attacker places on the incentive received in return for performing the attack, the genuine human user of the service will continue to solve the CAPTCHA while the attacker will refuse to participate in the attack. From a Web service usability perspective, ways to ensure this need to be looked into further.

Online Relay Attacks

In online relay attacks, the human attacker solves the CAPTCHA in real time and this answer is used by the bot to overcome the CAPTCHA barrier in real time. While this attack is highly plausible and would be highly successful, there are no reports of an instance in which it was used.
The case of a bot pretending to be the server to an innocent, non-suspecting human user, and thus getting the test solved, can be avoided by solutions which have been described in Stubblebine and van Oorschot (2004).
As per "implicit CAPTCHAs" (Baird & Bentley, 2005), if the CAPTCHA is a task that is intertwined with the actions needed to access a Web service, then relay attacks can be prevented. In character recognition schemes, a simpler idea for the creation of new login IDs would be to embed the login ID and the password into the image CAPTCHA. Thus, the attacker would not want to send this private information to a third party. This kind of mechanism can be used in many scenarios wherein some confidential or private information is involved.
Detection of different IP addresses can lead to the discovery of a relay attack (Athanasopoulos & Antonatos, 2006). The disadvantage of this counter-measure is that the server has to maintain state information.
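A minimal sketch of this IP-based heuristic (our illustration, not the cited authors' implementation):

# Relay-attack heuristic: the CAPTCHA was fetched by one IP but the
# answer arrives from another. Minimal sketch; a real server would
# also expire entries, and NAT/proxies make the heuristic fallible.
ISSUED = {}  # challenge id -> IP address that requested the test

def on_challenge_issued(cid, client_ip):
    ISSUED[cid] = client_ip          # state the server must maintain

def looks_like_relay(cid, client_ip):
    """True if the submitting IP differs from the requesting IP."""
    return ISSUED.get(cid) is not None and ISSUED[cid] != client_ip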

Off-Line Relay Attacks

In off-line relay attacks, the human solves the CAPTCHA tests off-line. There are real world examples of this kind of attack.
At George Mason University, an undergraduate student volunteered the information that he had been part of a relay attack. A friend of his supplied him with distorted text CAPTCHAs, and he was paid for solving them: $10 for every 1,000 CAPTCHAs solved. On average, it took him one and a half hours to answer 1,000 CAPTCHAs. He also informed us that his friend had obtained a computer program which did the same task, and so he was no longer employed to break CAPTCHAs. The exact reason for needing the CAPTCHAs solved is unknown.
On a freelancer recruitment Web site, "www.getafreelancer.com" (getafreelancer.com Web site, 2006), a query was posted requesting a quote for solving CAPTCHAs for a 50 hour week period. The average asking price to solve CAPTCHAs for 50 hours was $57, which is almost a dollar an hour. The lowest quote was $30 ($0.60 an hour). The description for this job type was really vague: the number of CAPTCHAs to be solved was not given, and it seemed that most of the bidders had no clue about what the task really entailed.
Audio CAPTCHAs have not yet become as widespread as their text based cousins, but once they are implemented they can also be attacked. Audio CAPTCHAs are not only language dependent but also depend on the accent used. To make them as general as possible, it can be assumed that only digits will be used in audio CAPTCHAs, as is the case with Google's present audio CAPTCHA scheme (Google's Audio CAPTCHA, 2006). Dr. Luis von Ahn (Carnegie Mellon University) tried to attack one of the digit based audio CAPTCHAs by feeding the audio stream, through the phone, to the automated speech recognition system of a U.S. airline company. The automated
speech recognition system was able to recognize the digits properly.
The viewpoint that CAPTCHAs will also lead to solutions of AI problems has not always held true. Relay attacks are very successful against CAPTCHAs and do not lead to any improvement in AI. It has also been argued that the development of programs able to break distorted characters is not helpful for solving real world problems (Lopresti, 2005). This is a consequence of the fact that distorted letter CAPTCHAs are synthetic scenarios which are not the letter recognition problems encountered in real world applications.

Summary and Conclusion

CAPTCHAs are finding more and more use on the Internet. Such a need can be attributed to the growing power of bots. For instance, DDoS attacks and spam mails largely originate from massive botnets. New online business models and the growing power of bots have resulted in the proliferation of CAPTCHAs in areas which were not thought of previously. Thus, CAPTCHAs have come a long way from 1996, when they were used to prevent automated URL submission to AltaVista, to the recent use of CAPTCHAs against the Santy worm by Google. In the future, this growth in the use of CAPTCHAs to safeguard against the power of bots will continue.
With this increase in number, CAPTCHAs which do not rely on vision alone will have to be further developed so that Web sites can be accessed by all without any discrimination. Current schemes have generally been character distortion based. With the advancement in deciphering distorted text, such schemes will have to get more and more garbled, to the extent that users will not accept them. New CAPTCHA schemes will then have to be deployed.
Online users have so far grudgingly accepted CAPTCHAs. A shift to other user friendly schemes, such as image based CAPTCHAs, will decrease online users' irritation. As with most new schemes, the initial resentment usually changes into gradual acceptance. Similarly, with CAPTCHAs, Internet users will accept user friendly CAPTCHA schemes as part and parcel of their online transactions.
Current CAPTCHA schemes have not faced a lot of attacks. Increasing attention from the cyber crime industry will result in standardized CAPTCHAs.
CAPTCHAs have played an important role in safeguarding the interests of various online businesses. They have also been used in preventing automated attacks. With the increasing proliferation of the Internet, and as we move towards an increasingly networked world, the role and significance of CAPTCHAs will increase.

References

Advergaming on the Blockdot Web site. (n.d.). Retrieved April 27, 2007, from http://games.blockdot.com/basics/index.cfm/

Advergaming on the Innoken Web site. (n.d.). Retrieved April 27, 2007, from http://www.innoken.com/ar_brandgames.html/

Anderson, A. R. (Ed.). (1964). Minds and machines. Prentice Hall.

Athanasopoulos, E., & Antonatos, S. (2006). Enhanced CAPTCHAs: Using animation to tell humans and computers apart. In H. Leitold & E. Markatos (Eds.), Communications and multimedia security (Vol. 4237, pp. 97-108). Springer.

Baird, H. S., & Bentley, J. L. (2005). Implicit CAPTCHAs. In Proceedings of the SPIE/IS&T Conference on Document Recognition and Retrieval XII.

Baird, H., & Riopka, T. (2004, December). ScatterType: A reading CAPTCHA resistant to segmentation attack. In L. J. Latecki, D. M. Mount, & A. Y. Wu (Eds.), Vision Geometry XIII, Proceedings of the SPIE (Vol. 5676, pp. 197-207).
Blizzard Entertainment Ltd., press release January 2007. (n.d.). Retrieved April 27, 2007, from http://www.blizzard.com/press/070111.shtml

Brunker, M. (2004, September). Are poker bots raking online pots? Retrieved April 27, 2007, from http://www.msnbc.msn.com/id/6002298/

Chellapilla, K., Larson, K., Simard, P. Y., & Czerwinski, M. (2005a, July 21-22). Computers beat humans at single character recognition in reading based human interaction proofs (HIPs). In Proceedings of CEAS 2005—Second Conference on Email and Anti-Spam. Stanford University, California, USA.

Chellapilla, K., Larson, K., Simard, P., & Czerwinski, M. (2005b). Designing human friendly human interaction proofs (HIPs). In CHI '05: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 711-720). New York: ACM Press.

Chellapilla, K., & Simard, P. Y. (2005). Using machine learning to break visual human interaction proofs (HIPs). In L. K. Saul, Y. Weiss, & L. Bottou (Eds.), Advances in neural information processing systems 17 (pp. 265-272). Cambridge, MA: MIT Press.

Chew, M., & Baird, H. S. (2003). BaffleText: A human interactive proof. In Proceedings of the 10th IS&T/SPIE Document Recognition & Retrieval Conference.

Chew, M., & Tygar, J. D. (2004). Image recognition CAPTCHAs. In ISC (pp. 268-279).

Chew, M., & Tygar, J. (2005, January). Collaborative filtering CAPTCHAs. In H. S. Baird & D. P. Lopresti (Eds.), Lecture notes in computer science (pp. 66-81). Springer Verlag.

CMU. (2000). The CAPTCHA project. Retrieved April 26, 2007, from http://www.captcha.net

Coates, A. L. (2001). Pessimal print—A reverse Turing test. In Proceedings of the Sixth International Conference on Document Analysis and Recognition (ICDAR '01) (p. 1154). Washington, DC: IEEE Computer Society.

Crockett, L. J. (1994). The Turing test and the frame problem. Ablex Publishing Corporation.

Datta, R., Li, J., & Wang, J. Z. (2005). IMAGINATION: A robust image-based CAPTCHA generation system. In Proceedings of the 13th Annual ACM International Conference on Multimedia (Multimedia '05) (pp. 331-334). New York: ACM Press.

The ESP game. (n.d.). Retrieved April 26, 2007, from http://www.espgame.org/

Feynman, R. P. (2001). The pleasure of finding things out. Penguin Books.

getafreelancer.com Web site. (2006). Retrieved April 27, 2007, from http://www.getafreelancer.com/projects/Data-Processing-Data-Entry/Data-Entry-Solve-CAPTCHA.html

Gimp 2.2. (n.d.). Retrieved April 26, 2007, from http://www.gimp.org/

Gligor, V. D. (2005, September). Guaranteeing access in spite of distributed service-flooding attacks. In Proceedings of the 12th ACM Conference on Computer and Communications Security (pp. 80-96). Lecture Notes in Computer Science, 3364.

Golle, P., & Ducheneaut, N. (2005). Preventing bots from playing online games. Computers in Entertainment, 3(3), 3.

Google's audio CAPTCHA. (2006, November). Retrieved April 29, 2007, from http://googleblog.blogspot.com/2006/11/audio-captchas-when-visual-images-are.html

Graham, D. B., & Allinson, N. M. (1998). Characterizing virtual eigensignatures for general purpose face recognition.

Guess the Google. (n.d.). Retrieved April 27, 2007, from http://grant.robinson.name/projects/guess-the-google/

Howell, J. (1999). Introduction to face recognition. Boca Raton, FL: CRC Press, Inc.

ImageMagick 6.2.8. (n.d.). Retrieved April 26, 2007, from http://www.imagemagick.org/

Index of Bongard problems. (n.d.). Retrieved April 26, 2007, from http://www.cs.indiana.edu/~hfoundal/res/bps/bpidx.htm

Kochanski, G., Lopresti, D., & Shih, C. (2002, September). A reverse Turing test using speech. In Proceedings of the International Conferences on Spoken Language Processing (ICSLP). Denver, Colorado.

Lopresti, D. (2005, May). Leveraging the CAPTCHA problem. In H. S. Baird & D. P. Lopresti (Eds.), Human interactive proofs: Second international workshop (HIP 2005) (Vol. 3517, p. 97). Springer Verlag.

Macromedia Web site. (n.d.). Retrieved April 27, 2007, from http://www.macromedia.com/

Misra, D., & Gaj, K. (2006a, February). Face recognition CAPTCHAs. In Proceedings of the Advanced Int'l Conference on Telecommunications and Int'l Conference on Internet and Web Applications and Services (AICT-ICIW '06) (p. 122). Washington, DC: IEEE Computer Society.

Misra, D., & Gaj, K. (2006b, July). Human friendly CAPTCHAs—Simple games (Poster). Symposium on Usable Privacy and Security (SOUPS). Pittsburgh, Pennsylvania, USA.

Montage-a-Google. (n.d.). Retrieved April 27, 2007, from http://grant.robinson.name/projects/montage-a-google/

Mori, G., & Malik, J. (2003). Recognizing objects in adversarial clutter: Breaking a visual CAPTCHA. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. I-134-I-141).

Naor, M. (1996). Verification of a human in the loop or identification via the Turing test. Retrieved April 27, 2007, from http://www.wisdom.weizmann.ac.il/%7Enaor/PAPERS/human_abs.html

Oppy, G., & Dowe, D. (2005). The Turing test. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved April 27, 2007, from http://plato.stanford.edu/archives/sum2005/entries/turing-test/

Simard, P. Y., Szeliski, R., Benaloh, J., Couvreur, J., & Calinov, I. (2003). Using character recognition and segmentation to tell computer from humans. In Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR '03) (p. 418). Washington, DC: IEEE Computer Society.

Penrose, R. (1989). The emperor's new mind. Oxford University Press.

Penrose, R. (1994). Shadows of the mind. Oxford University Press.

Pinkas, B., & Sander, T. (2002). Securing passwords against dictionary attacks. In CCS '02: Proceedings of the 9th ACM Conference on Computer and Communications Security (pp. 161-170). New York: ACM Press.

Provos, N., McClain, J., & Wang, K. (2006). Search worms. In Proceedings of the 4th ACM Workshop on Recurring Malcode (WORM '06) (pp. 1-8). Alexandria, VA: ACM Press.

Roarke, S. P. (2005, July). Bots now battle humans for poker supremacy. Retrieved April 27, 2007, from http://www.Foxsports.com

Rui, Y., & Liu, Z. (2003a). ARTiFACIAL: Automated reverse Turing test using facial features. In Proceedings of the Eleventh ACM International Conference on Multimedia (Multimedia '03) (pp. 295-298). New York: ACM Press.

Rui, Y., & Liu, Z. (2003b). Excuse me, but are you human? In Proceedings of the Eleventh ACM International Conference on Multimedia (Multimedia '03) (pp. 462-463). New York: ACM Press.

Rui, Y., Liu, Z., Kallin, S., Janke, G., & Paya, C. (2005). Characters or faces: A user study on ease of use for HIPs. In HIP (pp. 53-65).

Rusu, A., & Govindaraju, V. (2005, January). Visual CAPTCHA with handwritten image analysis. In H. S. Baird & D. P. Lopresti (Eds.), Human interactive proofs: Second international workshop (Vol. 3517). Bethlehem, PA: Springer Verlag.

New Scientist News Service. (2005, August). Computer characters mugged in virtual crime spree. Retrieved April 27, 2007, from http://www.newscientist.com/article.ns?id=dn7865

Stubblebine, S., & van Oorschot, P. (2004, February). Addressing online dictionary attacks with login histories and humans-in-the-loop. In Proceedings of Financial Cryptography. Springer-Verlag.

Tan, P., Steinbach, M., & Kumar, V. (2005). Introduction to data mining. Addison-Wesley.

The Korea Times. (2006, May). Computer game bot: Arch-nemesis of online games. Retrieved August 1, 2006, from http://times.hankooki.com/lpage/culture/200605/kt2006052116201765520.htm

Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433-460.

von Ahn, L. (2005). Utilizing the power of human cycles. Unpublished doctoral dissertation, Carnegie Mellon University.

von Ahn, L., Blum, M., Hopper, N., & Langford, J. (2003). CAPTCHA: Using hard AI problems for security. In Proceedings of Eurocrypt (pp. 294-311).

von Ahn, L., Blum, M., & Langford, J. (2002). Telling humans and computers apart automatically or how lazy cryptographers do AI (Tech. Rep. No. CMU-CS-02-117). Carnegie Mellon University.

von Ahn, L., Blum, M., & Langford, J. (2004). Telling humans and computers apart automatically. Communications of the ACM, 47(2), 56-60.

von Ahn, L., & Dabbish, L. (2004, April). Labeling images with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '04) (pp. 319-326). Vienna, Austria: ACM Press.

Web Accessibility Initiative. (n.d.). Retrieved April 29, 2007, from http://www.w3.org/WAI/

Xu, J., Lipton, R., & Essa, I. (2000, November). Are you human? (Tech. Rep. No. GIT-CC-00-28). Georgia Institute of Technology.

Xu, J., Lipton, R., Essa, I., & Sung, M. (2001). Mandatory human participation: A new scheme for building secure systems (Tech. Rep. No. GIT-CC-01-09). Georgia Institute of Technology.

Zhang, Z., Rui, Y., Huang, T., & Paya, C. (2004). Breaking the clock face HIP. In ICME (pp. 2167-2170).

Zhao, W., Chellappa, R., Phillips, P. J., & Rosenfeld, A. (2003). Face recognition—A literature survey. ACM Computing Surveys, 35(4), 399-458.

Chapter XIV
Privacy Concerns when
Modeling Users in Collaborative
Filtering Recommender Systems
Sylvain Castagnos
LORIA—Université Nancy 2, Campus Scientifique, France

Anne Boyer
LORIA—Université Nancy 2, Campus Scientifique, France

Abstract

This chapter investigates ways to deal with privacy rules when modeling the preferences of users in recommender systems based on collaborative filtering. It argues that it is possible to find a good compromise between quality of predictions and protection of personal data, and it proposes a methodology that complies with the strictest privacy laws for both centralized and distributed architectures. The authors hope that their attempt to provide a unified vision of privacy rules through the related works, together with a generic privacy-enhancing procedure, will help researchers and practitioners better take into account the ethical and legal constraints regarding privacy protection when designing information systems.

INTRODUCTION

Do you remember the satirical paper by Zaslow (2002) in the Wall Street Journal? The problem was the following: a man suspected that his digital video recorder, named TiVo, thought he was gay. Indeed, it inexplicably recorded programs with gay themes. The man decided to correct TiVo's fixation by recording war movies. Then the machine started giving him documentaries on Joseph Goebbels and Adolf Eichmann. He had overcompensated, and the machine stopped thinking he was gay and decided he was a fan of the Third Reich. The general principle of TiVo is to record for its owner some programs it assumes he will like, based on the shows he has chosen to record. The recommendation process uses what he did to predict what he likes. A
major aspect related to recommender systems is to collect pertinent data about what you do in order to determine what you are. Such systems are very popular in many contexts, for example e-commerce, online newspapers, or Internet access. Recommenders individualize their predictions for each user. Therefore, they need to collect and utilize personal data. Fundamental issues arise, such as how to ensure that user privacy is guaranteed, particularly when individuals can be identifiable.
We have to keep in mind that many consumers appreciate having computers able to anticipate what they like, what they want to do, or what they want to read. Web personalization has been shown to be advantageous for both online customers and vendors. But for consumers shopping on the Internet, privacy is a major issue. Almost three-quarters of Internet users are concerned about having control over the release of their private information when shopping online (source: U.S. Census data on http://www.bbbonline.org/privacy/). This is also true in the Internet context of information retrieval. As the amount of data available on the Internet is so huge, it becomes mandatory to assist the active user when searching or accessing Internet resources. Furthermore, the number of available resources is still growing exponentially: for example, the number of pages referenced by Google increased from 1 to 8 billion between June 2000 and August 2005.
Traditional search engines tend to provide the active user with too many results to ensure that he/she will identify the most relevant items in a reasonable time. For instance, Google returns 5.4 billion links when the user asks for "news." There are still 768 million sites about news related to New York City. Moreover, searches may never end, since new resources constantly appear. Confronted with this overload of data, the rationality of the active user is bounded by the set of choices that can be considered by human understanding. He/she tends to stop the search at the first choice which seems satisfying (Simon, 1982). This is the reason why the relevancy of results is no longer guaranteed in most existing information providers. Furthermore, searching for information by using keywords and logical operators does not seem easy enough for the general audience. As a result, the scientific community is rethinking the existing services of search and access to information, under the designation "Web 2.0" (White, 2006).
There are several possible approaches to assist the active user: adaptive interfaces to facilitate exploration and searches on the Web, systems relying on social navigation, sites providing personalized content, statistical tools suggesting keywords for improving searches, and so forth. Another solution consists in providing each user with items likely to interest him/her. Contrary to personalized content, this solution does not require adapting resources to the potential readers. Each item has to be proposed to the concerned persons by using push-and-pull techniques.
To supply the active user according to his/her concerns, we first have to build his/her model of preferences by collecting data about his/her activities. This approach is based on an analysis of usage. Nevertheless, it is not always possible to collect data about the active user quickly enough. Collaborative filtering techniques (Goldberg, 1992) are a good way to cope with this difficulty. They amount to identifying the active user with a set of persons having the same tastes, based on his/her preferences and past actions. This kind of algorithm considers that users who liked the same items have the same topics of interest. Thus, it becomes possible to predict the relevancy of data for the active user by taking advantage of the experiences of a similar population.
There are several fundamental problems when implementing a collaborative filtering algorithm. Beyond technical questions such as quality of service or cold start, there are ethical aspects such as intimacy preservation and privacy, as well as regulatory aspects. These questions are crucial since they are related to human rights and freedom, and consequently will impact the development and generalization of recommenders in our everyday life.
This chapter focuses mainly on privacy aspects in recommenders based on collaborative filtering algorithms. First, we will provide an overview of privacy issues. Indeed, when modeling user actions and preferences in order to compute recommendations, intelligent systems access personal information about users. For ethical and legal reasons, we have to be careful to be as unintrusive as possible, and at least to guarantee the anonymity of users.
As privacy laws differ across countries all over the world, we first assume that a recommender system should be as strict and restrictive as the strictest law. Starting from this statement, the goal of this chapter is to discuss how and what kind of data may be collected, for how long they can be stored, for which purposes it is allowed to use them, and what data it is reasonable and acceptable to share. Indeed, laws usually lay out both organizational and technical requirements (in terms of data acquisition, purpose of use, transfer, processing, etc.) for information systems that store and/or process personal data in order to ensure their protection (see Wang, 2006 for an overview).
Because of the personal and confidential nature of some data, we secondly state that users must be aware of the prediction computation process. It is important that they can explicitly choose the part of their profile they agree to communicate and/or share. Users have to know which data are stored about their activities, especially when the system relies on implicit observations that are collected in a transparent way.
In the first section, we will introduce what privacy means in the context of collaborative filtering applications, based on a state-of-the-art review of the most popular approaches. Then, we will present our perspectives on the issues related to this theme, and we will provide some recommendations for dealing with them. At last, we will discuss future trends and conclude this chapter.

BACKGROUND

Statements

Due to the overload of data on the Internet, personalizing tools seem to be promising ways to provide users with the most relevant items. In this chapter, we particularly focus on collaborative filtering techniques, which amount to collecting information about users' preferences. The latter are often represented in the form of rating vectors, called user profiles. These profiles are then aggregated in a global rating matrix or in several partial matrices, depending on whether computations are centralized or distributed. This aggregation allows the system to know the preferences of a population. Afterwards, communities of interest and/or clusters of similar items are inferred by computing distances between profiles and by selecting a neighborhood in the user/item representation space. This process makes the similarities between users explicit. Similar user profiles are the most suitable for suggesting new interesting items to the active user.
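As a toy illustration of these structures (ours, not the chapter's; the names and ratings are invented), a user profile can be a sparse rating vector and the neighborhood can be selected with Pearson correlation as the similarity measure:

import math

# Toy rating matrix: each user profile is a sparse vector of item
# ratings on a 1-5 scale.
RATINGS = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 2, "item3": 5},
    "carol": {"item1": 1, "item2": 5},
}

def pearson(u, v):
    """Similarity between two profiles over their co-rated items."""
    common = set(RATINGS[u]) & set(RATINGS[v])
    if len(common) < 2:
        return 0.0
    mu = sum(RATINGS[u][i] for i in common) / len(common)
    mv = sum(RATINGS[v][i] for i in common) / len(common)
    num = sum((RATINGS[u][i] - mu) * (RATINGS[v][i] - mv) for i in common)
    den = math.sqrt(sum((RATINGS[u][i] - mu) ** 2 for i in common)
                    * sum((RATINGS[v][i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

def neighborhood(active, k=2):
    """The k most similar profiles form the active user's community."""
    others = [u for u in RATINGS if u != active]
    return sorted(others, key=lambda v: pearson(active, v), reverse=True)[:k]

Every privacy question discussed below stems from the fact that this matrix, and hence each individual profile, must somehow be available to the prediction process.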
This technology is sometimes perceived by the general audience as a kind of "big brother," since it requires collecting information about the preferences of users. In order to promote this new approach to consultation and search on the Internet, it is consequently necessary to understand users' concerns and to provide privacy guarantees so that they are reassured.
A survey was conducted in 2005 by a provider of personalization technology called ChoiceStream1, among more than 900 American citizens. This study showed that 80% of them are in favour of personalizing processes. More than half of the interviewed persons are willing to spend 2 minutes filling in a questionnaire in order to get a personalizing function. Nevertheless, less than 10% would accept spending 10 minutes or more. This survey also highlighted the fact that the main obstacle remains the fear of diffusion of personal data and, generally speaking, the respect
of privacy. However, they are ready to make some compromises as far as tolerance to surveillance is concerned, in exchange for a lower required effort to fill in forms. In any case, Internet users want to keep control of the settings of their personalization level and retrievable data.

Definition of Privacy

Westin (1967) provided a definition which includes all these aspects by describing privacy as the ability to determine for ourselves when, how, and to what extent information about us is communicated to others. Cranor (2005) completed this definition by considering that the privacy-enhancing process often relies on the way pieces of data are collected, stored, and used. In order to estimate the degree of privacy of a recommender system, she introduces four axes of personalization: the data collection method, the duration of data storage, the user involvement in the personalizing process, and the reliance on predictions.
The data collection method can be either explicit or implicit. In the first case, personalization relies on information explicitly provided by users, such as ratings or demographics. For example, users may rate a sample of items in order to receive suggestions of new items that may interest them. In the opposite case, personalization based on implicit data collection infers the unknown preferences of a user from his/her browsing history, purchase history, search queries, and so forth.
The duration of data storage can be task-focused or profile-based. On one hand, task-focused personalization keeps information about users for a limited duration. For example, it can be based on actions a user has taken while performing a task or during a session. On the other hand, Cranor defines the profile-based approach as keeping all preferences, whatever their form (implicit or explicit, numeric or symbolic), for an unlimited duration. Cookies belong to this category by recognizing visitors automatically, even if they have not returned for a very long time.
The third axis of personalization distinguishes processes that are user-initiated from those which are system-initiated. In the first category, users can define their own settings. They can, for example, choose their level of personalization, the number of items to display, the preferences that are sharable, and so forth. In the second case, the system provides personalization for every user, even when they do not request customized features.
At last, Cranor compares the reliance on predictions of both prediction-based and content-based systems. Prediction-based personalization compares the active user's preferences with other similar profiles in order to predict his/her future interests, and supplies recommendations based on the stated preferences of the others. This kind of personalization may give very good results, provided that there is enough data about the compared users. But sometimes the system can provide results that have been appreciated by similar users but are not related to the active user's preferences. On the contrary, content-based personalization favours similarities between items, rather than between users. For example, if a user buys a book on movie makers, this kind of system may suggest other books on movie makers. However, this approach starts from the strong assumption that items are persistent on the recommender platform.
Cranor assumes that an ideal privacy-enhanced system should be based on an explicit data collection method, transient profiles, user-initiated involvement, and non-invasive predictions. Nevertheless, it is often necessary to find a compromise between quality of predictions and privacy. The next subsection provides a state-of-the-art of privacy-enhancing techniques in the field of collaborative filtering.
Related Work in Collaborative Filtering

Much research has been conducted to make collaborative filtering algorithms as unintrusive as possible. We propose here an overview of the main works dealing with privacy issues. Several approaches have been considered; the most popular are encryption of data, public aggregates, and alteration of profiles.
Canny (2002a, 2002b) concentrates on means to provide powerful privacy protection by computing a "public" aggregate for each community without disclosing individual users' data. He constructs probabilistic models of user behavior from observations and extrapolates user ratings from these observations for collaborative filtering. He has proved that collaborative filtering can be considered as a linear factor analysis problem, which generalizes SVD and linear regression. He chooses to use the EM (expectation maximization) algorithm for factor analysis, since it can handle missing data without requiring default values for them. Furthermore, the EM algorithm has a simple recursive definition which fits with his privacy method. His approach is based on homomorphic encryption to protect personal data. This property is used in several common encryption schemes, such as the RSA protocol. Privacy protection is provided by a P2P protocol. Each user starts with his/her own preference data and knowledge of who his/her peers are in the community. Users exchange various encrypted messages when running the protocol, which consists in multiplying several encodings of several messages to get the encoding of their sum; this corresponds to the encryption of the community's preferences. The decryption process relies on key-sharing: the key needed to decrypt the total is not owned by anyone, but shared between several users. At the end, every user has an unencrypted copy of the community's preferences. Individuals can then compute their own personal recommendations without having revealed their individual data. However, he supposes that a reasonable number of clients are online at the same time, which is a strong requirement, since the system needs enough users putting their shares together to see the whole decryption key.
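The homomorphic step can be illustrated with a textbook Paillier cryptosystem, in which multiplying ciphertexts yields an encryption of the sum of the plaintexts. This toy sketch is ours, with deliberately tiny and insecure parameters, and shows only that property, not Canny's full key-sharing protocol:

from math import gcd

# Textbook Paillier with tiny, insecure primes, only to demonstrate
# the additive homomorphism that aggregation protocols rely on.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                      # standard simple choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse mod n

def encrypt(m, r):
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Three users' private ratings for one item:
ratings = [4, 5, 2]
ciphertexts = [encrypt(m, r) for m, r in zip(ratings, [17, 101, 2022])]

total = 1
for c in ciphertexts:
    total = (total * c) % n2                   # multiply the encodings...

assert decrypt(total) == sum(ratings)          # ...and recover the sum

In Canny's scheme the decryption key is additionally split between peers, so no single party can perform the final step alone.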
In Polat and Du (2004), the authors also assume that privacy concerns are closely related to security aspects. They define specific communication protocols to deal with these issues in SVD-based collaborative filtering. Their goal is to ensure users' privacy and to provide accurate predictions. However, they consider privacy and accuracy as conflicting goals and propose a technique to achieve a balance between them. Their model differs from Canny (2002b), since Canny was focusing on the P2P framework in which users participate in the collaborative filtering process. In Polat and Du (2004), users send their data to a server and do not participate in the computing process. Randomized perturbation techniques are used during communication to achieve privacy. Randomization perturbs the data in such a way that the server can only know the range of the data. Thus, the server does not know the true ratings of each user. Random numbers are generated using a uniform or Gaussian distribution and partially replace the true ratings.
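A minimal sketch of the perturbation idea (ours; Polat and Du's actual protocols are more elaborate): each client adds zero-mean noise to its ratings before disclosure, so the server sees only disguised values whose aggregate statistics remain usable.

import random

def perturb_profile(ratings, sigma=1.0):
    """Disguise a rating vector with zero-mean Gaussian noise before
    sending it to the server. The value of sigma is an illustrative
    choice; it trades privacy against prediction accuracy."""
    return [r + random.gauss(0.0, sigma) for r in ratings]

# The server never sees the true ratings, but across many users the
# noise roughly cancels in aggregate computations such as means.
true_profile = [4, 2, 5, 3]
sent_profile = perturb_profile(true_profile)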
Berkovsky, Eytani, Kuflik, and Ricci (2006) also deal with privacy concerns by using policies for modifying the contents of user profiles. They describe three techniques. The uniform random obfuscation substitutes real ratings with random values chosen uniformly in the range of possible ratings in the dataset. The bell curved random obfuscation replaces real votes with random values drawn from a bell-curve distribution, that is to say, one that pays attention to the average and standard deviation of the ratings. Finally, the default obfuscation(x) uses a predefined constant value x for the replacement of real data.
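The three policies can be sketched as follows (our paraphrase in code; the rating scale, the constant x, and the obfuscation rate knob are illustrative assumptions rather than the authors' exact parameters):

import random

RATING_MIN, RATING_MAX = 1, 5  # assumed rating scale

def uniform_random_obfuscation(r):
    return random.uniform(RATING_MIN, RATING_MAX)

def bell_curve_obfuscation(r, mean, stdev):
    """mean/stdev are those of the user's real ratings."""
    return random.gauss(mean, stdev)

def default_obfuscation(r, x=3):
    return x   # predefined constant value x

def obfuscate_profile(ratings, policy, rate=0.5, **kw):
    """Replace a fraction 'rate' of the real ratings with obfuscated
    values, making the privacy/accuracy trade-off tunable."""
    return [policy(r, **kw) if random.random() < rate else r
            for r in ratings]

For example, obfuscate_profile(ratings, bell_curve_obfuscation, mean=3.5, stdev=1.1) keeps the published profile statistically plausible while hiding which individual ratings are genuine.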
In addition to these policies, the authors of Berkovsky et al. (2006) propose an aggregating method to enhance privacy in P2P recommender
systems. They address the problem by electing super-peers whose role is to compute an average profile of a sub-population. In this model, only these super-peers can access the profiles of other peers. We can, for example, consider that the provider of the service supplies some computers to play the role of super-peers. The network is configured to be hierarchical, which means that each peer is associated with a super-peer and that super-peers can only collect data about the peers under their responsibility. Consequently, each super-peer manages a rating matrix containing a subset of user profiles. There is no common profile between these super-peer matrices; the union of these matrices would constitute the whole population rating matrix. Each super-peer computes the isobarycenter of its matrix, which is called the average profile. In order to get predictions, standard peers have to contact all these super-peers and exploit these average profiles to compute predictions. In this way, they never access the public profile of a particular user. Nevertheless, using these average profiles can reduce the accuracy of the system: the sub-populations have been constituted randomly and the preferences are smoothed, thus reducing the influence of neighbors.
Among other works related to privacy in the preferences of virtual communities of interests
collaborative filtering, we can cite the paper of rather than randomly chosen sub-populations. In
Miller, Konstan, and Riedl (2004), which provides the following, we propose to introduce a generic
a method in accordance with the fourth axe of privacy-enhancing process which gathers the
personalization of Cranor. They propose a P2P same advantages than the previously mentioned
version of the item-item algorithm. Consequently, algorithms and works on various architectures.
correlations are computed between items rather
than between users. The recommender system User Modeling Process
gives content-based suggestions with a high
accuracy, since it can determine the similarity The first step when modeling preferences of us-
with the items that have been appreciated by the ers consists in choosing a good manner to collect
active user without basing the predictions on a data. Proposing a series of questions to users is an
single neighbor. However, this approach works for efficient way to do accurate preference elicitation
recommender systems whose items are persistent: (Viappiani, Faltings, & Pu, 2006). However, it
they are never removed from the platform and the would be necessary to ask for hundreds of ques-
system badly reacts to the introduction of new tions. People generally do not take time to carry
items. Their model can adapt to different P2P through such a lengthy process. According to the
configurations. We can also mention the work of constraints of the system, it could be preferable

Each user can always check the list of items that he/she shares or has consulted. He/she may explicitly rate each of these items on an arbitrary scale of values. We can also consider the case where the active user initializes his/her personal preferences with a set of items2 proposed to everyone in the system interface, in order to partially face the cold-start problem. This offers the advantage of completing the profile with more consistency and of finding similarities with other users more quickly, since everyone fills in the same criteria rating form.

However, an explicit data collection may be insufficient. Psychological studies (Payne, Bettman, & Johnson, 1993) have shown that people construct their preferences while learning about the available items, which means that a priori ratings are not necessarily relevant. Unfortunately, few users provide feedback about their consultations. We assume that, despite the explicit voluntary completion of profiles, there are a lot of missing data. A way to face this problem consists in adding a user modeling function based on implicit criteria (Castagnos & Boyer, 2006b). This function relies on an analysis of usage. It collects information about the active user's actions (such as the frequency and duration of consultations for each item, etc.). This data collection method provides better user models, and consequently better predictions in collaborative filtering computations. However, this process is quite intrusive into the privacy of users. The following subsection is dedicated to the ways that we propose in order to guarantee privacy, despite this intrusive data collection method.

Generic Solution to Guarantee Privacy

In the previous subsection, we have seen that a useful way to collect preferences consists in analyzing the log files of the active user to retrieve useful data. These are pieces of information easily and legally retrievable in the Web browser of the client. As the collected usages constitute very sensitive data, all pieces of information retrieved from these log files must remain on the client side. However, collaborative filtering processes require sharing knowledge about the active user with neighbors. User models consequently have to be transformed into a less intrusive form in order to be usable. According to the first axis of personalization defined by Cranor, this implicit data collection has to be transformed into explicit ratings. The user modeling function can easily be modified to reach this objective. It amounts to estimating, from implicit criteria, the ratings that the user is likely to give to different items. Only the numerical votes deduced from this process are sharable with other users. We propose to use the following formula, adapted from Chan (1999), to achieve this transformation into numerical profiles (see Box 1). This formula adapts itself to the implicit pieces of information that are retrievable, according to the type of consulted items, the architecture of the system, and the local settings defined by the active user.

Box 1.

Interest(item) = 1 + 2 × IsFavorite(item) + Recent(item) + 2 × Frequency(item) × Duration(item) + PercentVisitedLinks(item)

with: Recent(item) = (date(last visit) − date(log beginning)) / (date(present) − date(log beginning))

and: Duration(item) = max over all consultations of (time spent on the pages of the item / size of the item)
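To make this transformation concrete, the following sketch shows how such an estimation could be computed on the client side. It is only an illustration under stated assumptions: the record layout, the field names, and the clamp standing in for the exact normalization are ours, not part of the system described in this chapter. The individual criteria are defined in the list that follows.

import math

def interest(item_log, now, log_beginning):
    """Estimate an explicit rating from locally stored implicit actions,
    following the Box 1 formula adapted from Chan (1999)."""
    # Recent(item): how close the last visit is to the present,
    # relative to the start of the log.
    recent = (item_log["last_visit"] - log_beginning) / (now - log_beginning)
    # Duration(item): maximum, over all consultations, of the time spent
    # on the item's pages normalized by the size of the item.
    duration = max(t / item_log["item_size"] for t in item_log["times_spent"])
    # Frequency(item) x Duration(item) must be normalized so that its
    # maximum is 1; a simple clamp stands in for that normalization here.
    freq_dur = min(1.0, item_log["frequency"] * duration)
    score = (1
             + 2 * (1 if item_log["is_favorite"] else 0)
             + recent
             + 2 * freq_dur
             + item_log["percent_visited_links"])
    # Interest(item) is rounded up to the nearest integer.
    return math.ceil(score)

Only the integer votes returned by such a function would ever leave the client; the raw actions from which they are computed stay local.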

Interest(item) must be rounded up to the nearest integer and expresses the estimated rating for the selected item. Here are some criteria that can be taken into account:

• Log beginning corresponds to the date of the first execution of the recommender module;
• IsFavorite(item) equals 1 if the item has been explicitly and positively voted for by the user, and 0 otherwise. Depending on the interface of the system, this vote can take the form of a rating; in this case, a positive vote corresponds to a high rating. If the active user has no possibility to rate items in the interface, we consider that adding the item to the bookmarks of the Web browser constitutes an explicit positive vote;
• Frequency(item) × Duration(item) must be normalized so that the maximum is 1;
• PercentVisitedLinks(item) corresponds to the number of visited pages divided by the number of pages of the item; and
• It is possible to include new implicit criteria in the formula, such as the fact that the active user has printed the item, has sent it by e-mail, and so forth.

This estimation of ratings requires storing the implicit actions locally for a while. To be in accordance with the second axis of personalization of Cranor, we recommend periodically deleting the oldest actions in the implicit data collection. The ideal retention period for these actions depends on the consultation habits of the active user.

By reference to the third axis of personalization previously mentioned, we argue that the exchange of the numerical profiles between neighbors must be initiated by the users rather than by the system. Moreover, to increase the trust of users in the system, it is important to grant them the right to access and modify the content of their own profiles at any time. They can change or delete ratings stored in their personal profiles. They can also define their public profiles, which are the part of the personal profiles they accept to share with others.

Rather than deteriorating profiles as in Berkovsky et al. (2006), and thus potentially reducing the accuracy of the system, we recommend sending profiles anonymously. This approach also presents the advantage of complying with the strictest international privacy laws. However, single IDs associated with these profiles are required to avoid duplication of data. We have conceived the privacy enhancing procedure in such a way that the anonymity of users is guaranteed, even if each of them has a unique ID. Users have to open a session with a login and a password before using the recommender system. In this way, several persons can use the same computer (for example, the different members of a family) without disrupting their respective profiles. To summarize, each user on a given computer has his/her own profile and a single ID. For each user, we use a hash function requiring the IP address and the login in order to generate his/her ID on his/her computer. The use of a hash function H is suitable, since it has the following features:

• Non-reversible: knowing "y," it is hard to find "x" such that H(x) = y;
• Collision-resistant: it is hard to find "x" and "y" such that H(x) = H(y);
• Knowing "x" and "H," it is easy to compute H(x); and
• H(x) has a fixed size.

In this way, an ID does not allow identification of the name or IP address of the corresponding user. Thanks to this ID generator, the communication module can guarantee the anonymity of users by using an IP multicast address—shared by all the users of the personalizing service—to broadcast the packets containing the profiles, the sender's ID, and optionally the addressees' IDs.
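As a rough sketch, such an ID generator could be built on any standard cryptographic hash function; the choice of SHA-256 and of the input encoding below are our assumptions, since the text does not prescribe a particular H.

import hashlib

def generate_user_id(ip_address, login):
    """Derive a fixed-size, non-reversible ID from the IP address and the
    login, in the spirit of the hash-based ID generator described above."""
    return hashlib.sha256(f"{ip_address}:{login}".encode("utf-8")).hexdigest()

# Two members of a family sharing one computer (hence one IP address)
# obtain distinct IDs, and neither ID reveals the login or the address.
assert generate_user_id("192.0.2.17", "alice") != generate_user_id("192.0.2.17", "bob")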

Finally, we pay attention to the fourth axis of personalization when making recommendations, by favoring, when possible, the items explicitly and positively rated by the active user, rather than those which could be interesting according to the neighbors' experience.

In the following subsection, we show on real industrial applications that it is possible to put this privacy enhancing procedure into practice, both for client/server and peer-to-peer architectures.

Examples of Applications with Different Architectures

Client/Server Architecture

We have implemented our work in the context of satellite Web site broadcasting (Castagnos & Boyer, 2006a). Our model has been integrated within the architecture of Casablanca, a product of the ASTRA company.3 ASTRA, located in Luxembourg, conceived a satellite Web site broadcasting service called Sat@once. This service is sponsored by advertisement so that it is free for users, provided that they use a DVB receiver. The satellite bouquet holds hundreds of Web sites which are sent to several hundred thousand persons through a high-bandwidth, one-way transmission.

Web sites are sent from the server to the clients using satellites. Moreover, the users can send non-numerical votes. These votes appear as the list of favorite Web sites (cf. supra, IsFavorite in the Chan formula, 3.2 Generic Solution to Guarantee Privacy). However, we cannot treat these non-numerical votes as boolean: we cannot differentiate items in which the active user is not interested (negative votes) from those he/she does not know or has omitted. This kind of vote is not sufficient to make relevant predictions with collaborative filtering methods. The users can also suggest new contents to the server. The votes and the suggestions are used to make up the bouquet: only the most popular Web sites are sent per satellite.

Figure 1. Architecture of the information filtering module in Casablanca

In order to distribute the system, the server side is separated from the client side. The user modeling function, based on the Chan formula, determines estimated ratings for items according to user actions. These user actions are stored temporarily and remain on the client side. Then, users have the possibility to send the estimated ratings anonymously to the server, like the non-numerical votes. This is a user-initiated process, and nothing is shared without the agreement of the owners of these profiles.

Each user profile containing estimated ratings is associated with a single ID to avoid duplication, as explained.

The server uses, as input parameters, the matrix of user votes and the database including sites and descriptors. Thanks to this privacy enhancing procedure, the server has no information about the population except anonymous votes. User preferences are stored in the profiles on the clients. Thus, the privacy criterion is duly fulfilled.

Once the profiles of users have been sent to the server, the system has to build virtual communities of interests. In our model, this step is carried out by an improved hierarchical clustering algorithm, called FRAC. Within the scope of our architecture, it makes it possible to limit the number of persons considered in the prediction computations. Thus, the results will be potentially more relevant, since observations will be based on a group closer to the active user. This process amounts to considering that the active user asks for the opinion of a group of persons having similar tastes to his/hers. It is obviously transparent for users.

In order to compute these groups of interests, the server extracts data from the profiles of users and aggregates the ratings in a global matrix. This matrix constitutes the root of the tree which is recursively built by FRAC (cf. Figure 2). The set of users is then divided into two subgroups using the k-means method. In our case, k equals 2, since our overall strategy is to recursively divide the population into binary sub-sets. Once this first subdivision has been completed, it is repeatedly applied to the new subgroups, until the selected depth of the tree has been reached. This means that the further one goes down in the structure of the tree, the more the clusters become specific to a certain group of similar users. Consequently, people belonging to a leaf of the tree share the same opinion concerning the assignment of a rating for a given item.

Figure 2. Communities of interests built with FRAC

Once groups of persons have been formed as previously mentioned, the barycenter is calculated for each cluster. Each barycenter is a kind of typical user profile aggregating the preferences of a sub-population.

The profiles of typical users are then sent to the client side, using the satellite connection. Subsequently, the system uses the Pearson correlation coefficient to compute distances between the active user and the typical users. We consider that the active user belongs to the community whose center is the closest to him/her. Finally, we can predict the interests of the active user from the knowledge of his/her community.

This way of proceeding allows the system to provide the active user with interesting items, even when he/she does not want to share his/her profile. Indeed, the typical profiles are sent to the client side in any case. Consequently, it is always possible to compute the distance between the active user's profile, which is stored locally, and these typical profiles. Once the system has selected a community, it can suggest the items that have been liked most by this group of persons, that is to say, the items with the highest ratings in the corresponding typical profile. This way of proceeding does not reduce the accuracy of the system, since typical profiles summarize the preferences of similar users, rather than of randomly chosen users. The level of similarity within a community is a feature of the system and can be parameterized.
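The following sketch illustrates the two halves of this scheme, namely recursive binary clustering on the server and community selection on the client. It is a simplified stand-in rather than the actual FRAC implementation; the use of SciPy's k-means, the fixed tree depth, and the dense rating matrix are all our assumptions.

import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import pearsonr

def build_typical_profiles(ratings, depth):
    """Recursively split a users-by-items rating matrix into two subgroups
    (k-means with k = 2) down to the selected depth, and return the
    barycenter of each leaf as a "typical profile"."""
    if depth == 0 or len(ratings) < 2:
        return [ratings.mean(axis=0)]  # barycenter of this sub-population
    _, labels = kmeans2(ratings, 2, minit="++")
    profiles = []
    for cluster in (0, 1):
        members = ratings[labels == cluster]
        if len(members) > 0:
            profiles.extend(build_typical_profiles(members, depth - 1))
    return profiles

def closest_community(local_profile, typical_profiles):
    """On the client, select the community whose typical profile correlates
    best (Pearson) with the locally stored personal profile."""
    scores = [pearsonr(local_profile, tp)[0] for tp in typical_profiles]
    return int(np.argmax(scores))

In this division of labor, the server only ever sees anonymous votes when building the tree, and the client only ever receives leaf barycenters, never another user's individual profile.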

The privacy of users is guaranteed both on the client and on the server side. On the client side, the active user never has access to the profiles of other users, since he/she only knows the preferences of his/her community. Moreover, he/she can choose whether and what to share with other people at any time. On the server side, profiles are stored under single IDs which do not allow the system to identify the corresponding users.

Grid Computing

SofoS is our new document sharing platform (Castagnos & Boyer, 2007), using a recommender system to provide users with content. Once it is installed, users can share and/or search for documents, as they do on P2P applications like Napster. We conceived it to be as open as possible to different existing kinds of data: hypertext files, documents, music, videos, and so forth. The goal of SofoS is also to assist users in finding the most relevant sources of information in the most efficient way. In order to reach this objective, the platform exploits the AURA recommender module. Users can provide the platform with feedback about their preferences by explicitly rating items. Each item has a profile on the platform. The performance of this module crucially depends on the accuracy of the individual user preference models.

These personal preference-based profiles are used by our distributed collaborative filtering algorithm in order to provide each user with the content that most likely interests him/her. AURA relies on a peer-to-peer architecture. We presume that each peer in SofoS corresponds to a single user on a given device, having a single ID generated by our privacy enhancing module (cf. 3.2 Generic Solution to Guarantee Privacy).

In addition to the available documents, each peer owns seven pieces of information: a personal profile, a public profile, a group profile, and four lists of IDs (list "A" for the IDs of peers belonging to its group, list "B" for those which exceed the minimum-correlation threshold as explained later, list "C" for the black-listed IDs, and list "O" for the IDs of peers which have added the active user to their group profile).

The personal profile is the combination of the explicit ratings and of the ratings estimated from the actions of the active user on the platform for a limited duration; the estimation is based on the Chan formula. The public profile is the part of the personal profile that the active user accepts to share with others. The algorithm also has to build a group profile. It represents the preferences of a virtual community of interests, and has been especially designed to be as close as possible to the active user's expectations. In order to do that, the peer of the active user asks for the public profiles of all the peers it can reach through the platform. Then, for each of these profiles, it computes a similarity measure with the personal profile of the active user. The active user can define a minimum-correlation threshold which corresponds to the radius of his/her trust circle (cf. Figure 3).

Figure 3. Minimum-correlation threshold defining the bounds of the trust circle

If the similarity is lower than this fixed threshold, which is specific to each user, the ID of the peer is added to the list "A" and the corresponding profile is included in the group profile of the active user.

We used the Pearson correlation coefficient to establish the similarity measure, since the literature shows that it works well (Shardanand & Maes, 1995). Of course, if this similarity measure is higher than the threshold, we add the ID of the peer to the list "B." The list "C" is used to systematically ignore some peers. It makes it possible to improve trust—that is, the confidence that users have in the recommendations—by identifying malicious users.

When his/her personal profile changes, the active user has the possibility to update his/her public profile pa. In this case, the active peer has to contact every peer4 whose ID is in the list "O." Each of these peers re-computes the similarity measure. If it exceeds the threshold, the profile pa has to be removed from the group profile. Otherwise, pa has to be updated in the group profile, that is to say, the peer must remove the old profile and add the new one.

In order to provide recommendations, the system determines the most popular items in the group profile. This model provides more accurate results than methods based on public aggregates, since the group profile is built to be user-centric. This means that the active user is the gravity center of his/her community of interests. In this model, privacy is guaranteed by the fact that the public profiles of other users are not kept on the peer: these public profiles are automatically and iteratively integrated into the group profile of the active user. Finally, exchanges of profiles are made anonymously, which prevents hackers from identifying the owner of a profile by intercepting packets over the network.

FUTURE TRENDS

Collaborative filtering techniques model the social process of asking friends for recommendations on unseen resources. They are not limited to recommending items similar to those already liked, but can offer surprising suggestions to the user. Nevertheless, despite their increasing success, collaborative filtering algorithms still suffer from some significant limitations. It becomes crucial to improve the quality and the robustness of predictions in order to provide users with reliable and pertinent results.

For example, recommender systems can be targets for attacks from malicious users, since there are political, economical, or many other motivations for influencing the promotion or the demotion of recommendable items. Recent works (O'Donovan & Smith, 2006; Weng, Miao, & Goh, 2006; Golbeck, 2006) have shown that incorporating a trust model into the recommendation process has a positive impact both on the accuracy of the predictions and on the confidence the user has in the system. Trust is a social notion which provides information about with whom we should share information, from whom we should accept information, and what consideration to give to information when filtering or aggregating data. So trust also seems a promising direction to investigate in terms of privacy issues: users could accept to share information with people they trust, and could allow a system to collect information about their preferences if they know it is able to differentiate trustworthy and untrustworthy users.

CONCLUSION

In this chapter, we have provided definitions and compared different research works about privacy protection in recommender systems. We have highlighted the drawbacks and benefits of privacy-enhancing methods according to the context. This overview has led to the definition of a generic procedure that is suitable for collaborative filtering techniques. We referred to the four axes of personalization defined by Cranor (2005) to define the degree of privacy of our method. Real industrial frameworks allowed us to illustrate the benefits of this new approach. We have explored both distributed and centralized approaches for collaborative filtering techniques, as the privacy problem is linked to the choice of architectures.

In Castagnos and Boyer (2006a), we introduce a client/server algorithm which has been integrated in a satellite Web site broadcasting service used by 120,000 persons. Similar privacy-enhanced models have been designed for centralized architectures and grid computing, respectively, in Boyer, Castagnos, Anneheim, Bertrand-Pierron, and Blanchard (2006) and in Castagnos and Boyer (2007).

Through these examples, this chapter has aimed at providing a unified definition of the term "privacy." Indeed, countries all over the world have different laws about privacy, and we have shown that it is possible, under these conditions, to guarantee that pieces of software fulfill the national laws. We have also shown that users have different expectations in the matter of privacy protection, and we have conceived the generic privacy enhancing procedure with the goal of increasing the trust of users. Finally, we have discussed the possibility of getting good prediction quality—or, at least, of not altering this quality too much—while preserving the privacy of users.

REFERENCES

Berkovsky, S., Eytani, Y., Kuflik, T., & Ricci, F. (2006, April). Hierarchical neighborhood topology for privacy enhanced collaborative filtering. Paper presented at the CHI 2006 Workshop on Privacy-Enhanced Personalization (PEP 2006), Montreal, Canada.

Boyer, A., Castagnos, S., Anneheim, J., Bertrand-Pierron, Y., & Blanchard, J.-P. (2006, December). Le filtrage collaboratif : Pistes d'applications dans le domaine bancaire et présentation de la technologie [Collaborative filtering: Possible applications in the banking domain and presentation of the technology]. Dossiers de la veille technologique du Crédit Agricole S.A., No. 27.

Canny, J. (2002a, May). Collaborative filtering with privacy. In Proceedings of the IEEE Symposium on Security and Privacy (pp. 45-57). Oakland, CA.

Canny, J. (2002b, August). Collaborative filtering with privacy via factor analysis. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2002). Tampere, Finland.

Castagnos, S., & Boyer, A. (2006a, April). From implicit to explicit data: A way to enhance privacy. Paper presented at the CHI 2006 Workshop on Privacy-Enhanced Personalization (PEP 2006), Montreal, Canada.

Castagnos, S., & Boyer, A. (2006b, August). A client/server user-based collaborative filtering algorithm: Model and implementation. In Proceedings of the 17th European Conference on Artificial Intelligence (ECAI 2006). Riva del Garda, Italy.

Castagnos, S., & Boyer, A. (2007, April). Personalized communities in a distributed recommender system. In Proceedings of the 29th European Conference on Information Retrieval (ECIR 2007). Rome, Italy.

Chan, P. (1999, August). A non-invasive learning approach to building Web user profiles. In Proceedings of the Workshop on Web Usage Analysis and User Profiling, Fifth International Conference on Knowledge Discovery and Data Mining, San Diego, California.

Cranor, L. F. (2005). Hey, that's personal! Invited talk at the International User Modeling Conference (UM05).

Golbeck, J. (2006). Generating predictive movie recommendations from trust in social networks. In Proceedings of the Fourth International Conference on Trust Management. USA.

Goldberg, D., Nichols, D., Oki, B., & Terry, D. (1992). Using collaborative filtering to weave an information tapestry [Special Issue]. Communications of the ACM, 35, 61-70.

Han, P., Xie, B., Yang, F., Wang, J., & Shen, R. (2004, May). A novel distributed collaborative filtering algorithm and its implementation on P2P overlay network. In Proceedings of the Eighth Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2004). Sydney, Australia.

Miller, B. N., Konstan, J. A., & Riedl, J. (2004, July). PocketLens: Toward a personal recommender system. ACM Transactions on Information Systems, 22.

O'Donovan, J., & Smith, B. (2006, January). Is trust robust? An analysis of trust-based recommendation. In Proceedings of IUI 2006. Sydney, Australia.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge University Press.

Polat, H., & Du, W. (2004). SVD-based collaborative filtering with privacy. In Proceedings of the ACM Symposium on Applied Computing. Cyprus.

Shardanand, U., & Maes, P. (1995). Social information filtering: Algorithms for automating "word of mouth." In Proceedings of the ACM CHI'95 Conference on Human Factors in Computing Systems (Vol. 1, pp. 210-217).

Simon, H. A. (1982). Economic analysis and public policy. In Models of bounded rationality (Vols. 1-2). MIT Press.

Viappiani, P., Faltings, B., & Pu, P. (2006, June). The lookahead principle for preference elicitation: Experimental results. In Proceedings of the 7th International Conference on Flexible Query Answering Systems (FQAS 2006), Lecture Notes in Computer Science, 4027, 378-389. Milan, Italy: Springer.

Wang, Y., & Kobsa, A. (2006, April). Impacts of privacy laws and regulations on personalized systems. Paper presented at the CHI 2006 Workshop on Privacy-Enhanced Personalization (PEP 2006), Montreal, Canada.

Weng, J., Miao, C., & Goh, A. (2006, April). Improving collaborative filtering with trust-based metrics. In Proceedings of SAC'06. Dijon, France.

Westin, A. (1967). Privacy and freedom. New York: Atheneum.

White, B. (2006). The implications of Web 2.0 on Web information systems. Invited talk, WEBIST 2006.

Zaslow, J. (2002). If TiVo thinks you are gay, here's how to set it straight: What you buy affects recommendations on Amazon.com, too; why the cartoons? The Wall Street Journal. Retrieved November 26, 2002, from http://online.wsj.com/article_email/0,,SB1038261936872356908,00.html

ENDNOTES

1. http://www.choicestream.com/
2. Ideally, this set of items should cover all the implicit categories that users can find on the platform.
3. http://www.ses-astra.com/
4. A packet is broadcast with a heading containing the peers' IDs, the old profile, and the new public profile.

Section IV
Organizational Aspects

Chapter XV
An Adaptive Threat-Vulnerability Model and the Economics of Protection

C. Warren Axelrod
US Trust, USA

Abstract

Traditionally, the views of security professionals regarding responses to threats and the management of vulnerabilities have been biased towards technology and operational risks. The purpose of this chapter is to extend the legacy threat-vulnerability model to incorporate human and social factors. This is achieved by presenting the dynamics of threats and vulnerabilities in the human and social context. We examine costs and benefits as they relate to threats, exploits, vulnerabilities, defense measures, incidents, and recovery and restoration. We also compare the technical and human/social aspects of each of these areas. We then look at future work and how trends are pushing against prior formulations and forcing new thinking on the technical, operational risk, and human/social aspects. The reader will gain a broader view of threats, vulnerabilities, and responses to them through incorporating human and social elements into their security models.

Introduction

Have you noticed that when you drive a particular make of car, it appears that virtually every second or third vehicle on the road is from the same manufacturer as yours, and often the same model, too? While it could be true, depending on the brand that you choose, it is mostly perception. It is just that you are more aware of, and notice, this particular brand. So it is with the subject of this book.


When the editors first suggested the topic in 2006, articles examining the impact of human and social aspects of information security, though they did exist, were few and far between. Suddenly, in the early months of 2007, the gurus of information security—such as Bruce Schneier—had "found religion" and seemed to be talking and writing about little else. In a report on the 2007 RSA Conference, Ellen Messmer wrote that Bruce Schneier "casts light on psychology of security," and that Schneier emphasizes the importance of human factors in the security equation (Messmer, 2007). Schneier has also drafted a paper on the topic (Schneier, 2007a).

With such a boost as this, I am sure that you will see a flood of quotations, articles, and books on the topic over the next several years. As a result, the face of information security practice will change forever.

Historically, the average security professional has been highly technical and operational, but many have recently become more risk-aware, and they will be called upon to be psychologists and behavioral scientists as well as security experts. To quote John Kirkwood, chief security officer at Royal Ahold, the Dutch parent of the Stop & Shop supermarket chain, in regard to security assessments: "Do it from the way the hacker would think" (Scalet, 2007). Several Stop & Shop stores in New England were victims of a scam involving the substitution of card-data-skimming devices for regular point-of-sale readers. Kirkwood touts the importance of convergence between information security and physical security in order to provide more complete protection. As we delve into the subject, we will see how these differences in backgrounds, knowledge, and experience between "logical" and physical security personnel, many of the latter with law enforcement origins, can enrich the threat-vulnerability model.

It is gratifying to see that the human and social aspects of information security are finally getting the attention that they deserve. As with many innovative approaches, the area might attract excessive interest, certainly more than warranted, and divert attention and resources from other critical technology and risk areas before it settles into its rightful place in security professionals' toolkits.

The Gartner Group has developed a life-cycle model, primarily for IT-related products, called the "hype cycle."1 In a graphical depiction, they show how the visibility of a product varies with the maturity of the product over time. The first phase is called the "technology trigger." I believe that, if we adopt Gartner's model for the social and human aspects of information security, we are currently in this first phase. Taking the Gartner model further, we can expect to go through subsequent phases named the "peak of inflated expectations," the "trough of disillusionment," the "slope of enlightenment," and the "plateau of productivity," respectively. This process is likely to take 2 to 5 years before we might reach the productivity plateau. But this can only be done with dispatch if we begin the journey. Perhaps by embarking early on this road, we can ultimately accelerate the process and achieve the desired goal of the full consideration of social and human factors more expeditiously.

In favor of giving the social and human side more consideration, one might argue that you cannot know where to put your security funds most effectively without including the economic ramifications of these factors and how they affect security. This is somewhat similar to the need to include "social costs" in the economic justification of, say, a power plant that pollutes the air and raises the temperature of the adjacent river water, which it uses for cooling. The impact on society is polluted air, which causes respiratory and other diseases, and heated and polluted water, which results in dead fish and other creatures. If these health and environmental factors are not considered, then there is little incentive for the builders of the power plant to install equipment to scrub the air before it is emitted, use less polluting fuels, and devise less damaging cooling methods, for example.


Similarly, with information security, security professionals and general business management must be persuaded to apply methods and use tools that are effective in minimizing the technical, human, and social vulnerabilities of the overall system.

Another related area, which should not be ignored but which is not addressed in detail here, is the impact of social and human factors on security systems themselves, particularly as they relate to security professionals. Mendell (2007) notes that many psychological factors, among them "information overload," serve to impair or degrade security professionals' performance and judgment.

In this chapter, we develop a practical adaptive model of the threat-vulnerability interaction by including human and social factors along with the traditional technology- and risk-related influences. We carry the process through to develop an economic model with which to determine the most appropriate allocation of security funds subject to technological and human constraints.

Some Definitions

Let us take a few minutes to develop and discuss some definitions, using Figure 1 as a basis. Most of the definitions are gleaned from the glossary posted online by Symantec (2007).

Security begins with a threat, which may be defined as follows:

A threat is a circumstance, event, or person with the potential to cause harm to a system in the form of destruction, disclosure, data modification, or denial of service (DoS).

It should be noted that a threat is "potential" rather than "actual," and may only result in an incident if it meets up with a vulnerability.

As shown in Figure 1, threats might be known or unknown, and sometimes they might be partially known. For example, it was well known prior to 9-11 that terrorists hijack airplanes, but it was not generally anticipated that they would cause them to be flown into specific buildings. Post 9-11, such threats were perceived as being not only possible but also likely.

Threats lose much of their uncertainty when an actual exploit is developed. In the IT (information technology) security context, an exploit is a program or technique that takes advantage of a vulnerability in software and that can be used for breaking security, or otherwise attacking a host over a network. A further refinement of exploits is whether they are "in the zoo" or "in the wild." The former generally means that the exploit has been demonstrated in the laboratory of a security vendor or protection service provider, and the latter means that it has been released and may be spreading.

An exploit is effectively the demonstrable realization of a threat. However, it is not a danger unless it is used by someone for an attack and is able to take advantage of a vulnerability. A vulnerability is a state in a computing system, or set of systems and networks, that allows an attacker to:

• Execute commands as another user;
• Access data contrary to the specified access restrictions for that data;
• Pose as another entity; and/or
• Conduct a denial-of-service attack.

When an exploit is successful against a vulnerability, an incident occurs. Here Symantec (2007) defines an incident as "the actualization of a risk." This definition has semantic issues: it implies that a risk is the successful exploitation of a vulnerability, and in fact Symantec defines a risk in this way. However, when listing "types of risk," the Symantec glossary includes threats, such as adware, spyware, hack tools, viruses, worms, and Trojan horses, thereby confusing threat and risk. In regard to "risk impact," they suggest that the impact be on performance, privacy, and so forth, in this case confusing risk and incident.


Others provide formulae for calculating risk that combine the value of an asset, the level of threat against that asset, and how vulnerable the asset is to given threats. While this is a reasonable construct, the analyst generally assigns points and/or levels (e.g., high, moderate, or low) to each of the three factors and somehow aggregates them via addition or multiplication. Such a methodology is highly subjective, and the aggregation method has no real basis.

Risk is more appropriately defined in terms of the probability of a loss and the estimated magnitude of the loss incurred. The probability of loss, which is based on threats and the chances of their being translated into exploits that are successful against vulnerabilities, is admittedly highly subjective, but it is more realistic than assigning arbitrary values to each component. Also, there are potentially many types of threat and exploit which might be wielded against an asset. Furthermore, the risk is often a combination of the expected losses from the expected incidents against the asset.

Another aspect of these definitions and approaches is that they do not explicitly take human and social factors into account, a situation that we are about to rectify.

Background

Human characteristics and social behavior are central and critical to the creation and evolution of threats and the exploitation of vulnerabilities. Information security professionals have generally given the human and social aspects of information security short shrift, since they have most likely been trained in technical and operational disciplines. While the same security professionals regularly purport to be sensitive to certain reactions of the "user community," security practitioners have been notorious for implementing misguided technologies and procedures. This is evidenced by the many articles on the resistance to, and annoyance with, newly introduced security measures, such as complex passwords (Axelrod, 2005) and other supposedly strong authentication (Stone, 2007).

On the other hand, physical security specialists, frequently with law enforcement backgrounds, are usually much more knowledgeable about evil human motives and more sensitive to nefarious activities than are their computer-oriented counterparts. However, they often do not understand the nuances of information technology well enough.

A major difference between the attitudes of information security and physical security professionals is that the former are generally more focused on protecting information assets by preventing bad things from happening, whereas the latter tend to lean towards investigations, forensic analysis, and apprehending the perpetrators. While these goals are not mutually exclusive, they can lead to significantly different impacts on organizations and their employees.

The key is to find the "sweet spot" among the various disciplines. In Figure 2, we show roughly where the various job functions and background disciplines fall on a continuum from technical through to behavioral orientations. As we can see, traditional computer security practitioners tend to have engineering and computer science backgrounds and to concentrate on the implementation and technical support of security tools. At the other extreme, we have psychology and behavioral experts who are focused on how individuals interact with various technical interfaces and security measures and, in particular, on how they might try to get around them. Between these two extremes, we have technically-oriented business staff (if such actually exist) and business-oriented technical staff, which is likely to be the more common situation. This view conforms with a particular set of graduate programs at the Stevens Institute of Technology in Hoboken, New Jersey. Stevens offers a graduate degree (the MSIS, or master of science in information systems) to those with technical backgrounds who want to learn "the business" in selected sectors, such as financial and health services.


They also offer technical certifications to those with a more general business background who want to augment their experience with a more in-depth understanding of technologies. The former program has historically outpaced the latter by an order of magnitude. An increasing number of institutions of higher education are offering undergraduate and graduate degrees and certifications in these areas.

In this chapter, we bridge the gap between those who view themselves as defenders of the information "crown jewels" and those who are more interested in capturing and punishing the perpetrators. Both are important aspects of security, and each has its place. It is, however, important to balance the two approaches. Sometimes it is well-nigh impossible to bring criminals, who may be operating halfway around the globe, to justice; on such occasions it is much more feasible to try to prevent damage. Conversely, it might be more effective to apprehend the criminal, particularly if the person is a "trusted" insider, since it is often difficult to apply defenses against insider abuse and still provide the access needed to do a particular job.

We develop an adaptive human-technical threat-vulnerability model, based on the model shown in Figure 3, in which we show how the impacts of attack and defense mechanisms change dynamically as individuals and groups interact with opportunities that they create or are presented to them. We examine how attackers and defenders might adapt to dynamic new technologies. We consider whether adverse actions are intentional or not. If intentional, we try to understand what the objectives and motivation of the attacker might be. If accidental, we consider how controls might be put in place to prevent such occurrences from taking place.

We then take this model and apply cost-benefit and risk analysis to it in order to derive a new economic basis for selecting remediation approaches. We show how decisions will differ depending upon whether or not human and social aspects have been considered, and demonstrate why certain responses, which do not include the human element, exacerbate rather than alleviate the risks from threats.

Broadening the Security Concept

Because the academic disciplines of human behavior and sociology are generally held to be so different from technology, they are seldom linked to one another. However, it is worth defining relevant threats and vulnerabilities in terms of characteristics that are technical, behavioral, and cultural. We cannot treat human and technical aspects separately if we are truly to understand the interaction of threats and vulnerabilities. Nor can we exclude social and human factors when determining the most cost-effective approaches to protecting information assets. After all, the best-devised technical approaches can be readily undermined or sabotaged by individuals who consider a particular approach to be against their personal interests. Such interests may protect their position within the organization or society, or help decide which methods represent cultural compromises that they are not willing to accept.

Furthermore, we must not separate the logical and physical aspects of a secure environment, as they frequently work together to create an overall secure environment in which one aspect mitigates risks that remain untreated by the other.

To simplify matters, we offer the concept that technical security tools are one of the means of administering defenses, defending against exploits, discovering vulnerabilities, and mitigating them. However, technical tools are only part of the story, and the different ways in which humans respond to vulnerabilities, as well as to threats and exploits, must be included. Individuals might support or undermine security measures, either intentionally or obliviously, depending upon circumstances.


If security requirements are complex and difficult to operate, subjects will seek and find means of circumventing them. If evildoers are capable and devious enough, they will come up with ways of persuading their victims of their own authenticity, so that the objects of the onslaught will yield to the deceptions, often unaware that they are doing so. Thus, a defense measure is only as effective as the extent to which those responsible for its implementation are committed to ensuring the protection of the assets under attack and knowledgeable enough to recognize that an attack is underway.

Wrapped Exploits and Open Responses

One way of looking at this is to view exploits and vulnerabilities as existing at the physical, technical, human, and societal levels.

As an example, consider a computer worm or virus. In the health field, many viruses develop, begin to spread, and then fizzle, whereas others proliferate wildly. So it is with computer viruses: only a few of those released from the "zoo" to the "wild" are actually successful in their missions. Success is often contingent upon a particular human action or inaction. The inaction may be intended or not. For example, the inaction of not patching a particular system, or of not closing off an unneeded service and/or port, may be for one of many reasons: perhaps those responsible are not aware, do not try to find out, know about it but think that the risk is low, or are unable to fix it in time. In regard to actions, the spreading of the virus might require that the potential victims perform some action, such as innocently clicking on a link or an attachment, visiting dubious Web sites, or otherwise responding to some implicit or explicit request for action.

Likewise, the evildoer does not usually present a raw exploit but frequently cloaks it in some sort of social-engineering disguise, much as the wolf donned Little Red Riding Hood's grandmother's clothing. He couches it in terms that might engender a response from the recipient. This might include an enticing subject line in an e-mail; threatening or cajoling the recipient into linking to a Web site and divulging personal information; or persuading someone to open an attachment or perform some other innocuous task while the hacker implants some malevolent code, or "malware," on the unsuspecting recipient's computer system.

The relationships between technology-related threats and vulnerabilities and the human players on each side of the fence are shown diagrammatically in Figure 4. In many cases, there is an initial human interaction (1), such as replying to an e-mail, opening an attachment, or clicking on a link. By responding to this initial probe, the responder effectively opens the door to his or her environment (2), allowing for the insertion of malware, the extraction of identifying information, and so forth (3).

Sometimes malware self-activates without the victim actually having to do anything. However, in some of these cases, it is a matter of the victim not having done something that he or she should have done, such as installing the most recent antivirus software.

The attacks can be thwarted somewhat by such approaches as automating the patching process, installing antivirus and anti-malware software, and training personnel to be on the lookout for phishing, pharming, and other types of attack that involve social-engineering methods.

Another factor that greatly affects the success rates of the various criminal elements engaged in destructive and fraudulent activities is their ability to modify exploits in response to the defenses that have been created to protect against known threats and exploits. This adaptive mechanism, which evildoers are quick to adopt, is one of the keys to the increasing success of malware.


In some ways, the technical aspects of threat-vulnerability interactions are predictable, in that technical exploits are built to perform a specific task, although they sometimes act in unpredictable ways. A famous example of this is the Morris worm, which, beginning on November 2, 1988, proliferated well beyond the expectations of its creator.2

The social and cultural aspects of potential victims' reactions may also be predictable, based on an understanding of the motivations and responses of a particular culture, although some exploits (such as the "ILOVEYOU" worm) cut across many cultures.3

The purely human aspects, in regard to both attacks and defenses, are generally much less predictable. Sometimes experience and training will make one person more suspicious of an unexpected and unusual e-mail, whereas others will be taken in by a more sophisticated version of the same exploit. Whereas a technical exploit can be expected to be effective whenever it gains access to a particular vulnerable system, the effectiveness of a social-engineering exploit is much less predictable, although if the attacker engages a very large number of subjects, there is a strong likelihood that a small percentage, yet a substantial number, of recipients will be fooled.

The Dynamics of Threats and Vulnerabilities

No sooner have defenses against particular types of attack been developed than these former exploits change or morph into other forms to get around the previously available defenses. In other cases, entirely new attack vectors are created to circumvent the defenses. As a consequence, security professionals are always playing catch-up in an ever-escalating battle with evildoers. Both attackers and victims display particularly human motivations and social responses, such as perversity and gullibility respectively, which only adds to the difficulty in, and the expense of, protection. There are numerous cases of knowledgeable individuals falling for new variants of social-engineering ploys, even if they should have known better. The attacker can keep changing his method until he finds one that works.

The speed at which technology is changing makes for the never-ending creation of new vulnerabilities. At the same time, these same technologies also facilitate better-crafted and more effective attacks. Typically, when new software is introduced, it contains significant numbers of errors (or "bugs") and vulnerabilities, which are often fixed soon after they are discovered. Consequently, we are likely to see a diminution of vulnerabilities over time. We then most often see a step-up in the number of errors and vulnerabilities with each new release, followed by a fall-off. This is illustrated in Figure 5.

As can be seen in Figure 5, the best policy is to ride the vulnerability curve down (as illustrated by the thick jagged line) by acquiring new releases some time after their introduction, so as to avoid the initial period of higher errors and vulnerabilities.

At the same time as the software vendors are reducing the number of known vulnerabilities, hackers are improving their ability to exploit the remaining vulnerabilities while simultaneously discovering new ones. This effort will tend to lag the software development and revision release cycles for the most part, since vulnerabilities are generally discovered only after the targeted software is released. However, the exploiting technologies are also improving over time and are becoming easier to use and more generally available. This is illustrated in Figure 6.

Now let us consider that the software vendors are able to provide fixes, patches, or workarounds for many of the vulnerabilities, but that there remains a residue of vulnerabilities which cannot be mitigated, or which would require unacceptable measures or a reduction in features. The remaining hard-core vulnerabilities, shown as the lower line in Figure 6, track the total number of vulnerabilities to some degree.


If everyone were very diligent in their vulnerability mitigation efforts, they would track along this lower line. However, it often is not cost-effective to fix everything for which remedies are available, so most organizations will lie somewhere between these lines, depending upon their degree of diligence.

On the exploits side, there is an equivalent relationship between the number of exploits developed ("in the zoo"), the number released ("in the wild"), and the number that are effective. This is also shown in Figure 6.

In reality, the threat of an incident affecting any particular asset is governed by the number of exploits in the wild and the number of vulnerabilities that exist on the organization's systems and networks.

Just how these major influences interact with each other will vary with the characteristics of the software, with vendor, ubiquity, and market share being major factors taken into account by hackers. Thus, for example, vulnerabilities in software products manufactured by Microsoft have been more publicized, and generally under greater attack, than products from, say, Apple Computer. While some contend that this is because Apple products are generally more secure, others believe that it is only because Apple has a much smaller market share, so that its products are ignored by hackers. Hackers looking for notoriety will generally seek out the more popular systems to attack, but so will those looking for monetary gain or to disrupt operations. However, there are select groups, having very specific targets in their sights, which will go after particular systems that promise to yield significantly greater financial, strategic, or military gains.

The Changing Threat Environment

It is increasingly being reported that the population of "bad guys" or "black hats" is changing. Most notably, the misguided teenager, who sought kudos and admiration from his peers, has been largely replaced or augmented by those more interested in financial gain than publicity. Thus, vulnerabilities and exploits are being marketed for cash to security software companies, hackers, fraudsters, and governments, with prices commensurate with the risk of the vulnerability and the effectiveness of the exploits.

This is a particularly important change since, if publicity and peer recognition are removed as factors, the crime can be effected in secret, which can be far more dangerous. This is particularly true for targeted attacks, which may never be detected, and which certainly will not result in general defenses, such as common antivirus signatures.

One of the most disturbing aspects of this change in perpetrator is the appearance of well-organized groups that engage in carefully planned exploits extending over long periods of time. This results in much more targeted and lucrative incidents. The greatest concern here is that the evildoers will eventually become well-funded nation states intent on much more insidious and damaging objectives.

The Changing Vulnerability Environment

The great debate these days is whether or not those discovering vulnerabilities in commercial software should publicize them to the world at large, or quietly disclose them to the manufacturer of the software. Bruce Schneier presented his ideas on this in a CSO Magazine column (Schneier, 2007b). Of course, the risk of exploitation is likely to be greater when the vulnerability is made public without the vendor or, in the case of open source software, the community having a mitigation plan in place. On the other hand, software makers may not be motivated to correct the problem if they believe it will not be generally known.

There are strong arguments on both sides as to whether a security vendor should buy information about a vulnerability in order to be the first to have a protective measure available.

This is a prime example of how the human factor can be the predominant force in determining whether the average computer user is protected or not. Table 1 lists various participants, whether they are intentionally participating or not, and where they stand in regard to the various strategies, assuming that someone external to the vendor discovers the vulnerability.

A similar set of scenarios is initiated if an exploit is in fact created. Sometimes purveyors of security solutions or the vendors of the particular software under attack are the ones who develop exploits in their own laboratories. Security firms might just make public the fact that it is possible to develop such an exploit without describing it. Fearful customers have accused such firms of publicizing the existence of viable exploits in order to encourage the purchase of their products and services. Conversely, evildoers may be encouraged to move quickly to develop specific exploits, as they already have a measure of assurance that it is achievable.
Table 1. Different views of various vulnerability disclosure/nondisclosure strategies

Strategy: Do nothing
- Discoverer of vulnerability: Will not gain any notoriety, money, or satisfaction.
- Vendors and security firms: Not aware.
- Targeted victims: Not aware.

Strategy: Make public
- Discoverer of vulnerability: Shame vendor into coming up with a quick fix before an exploit can be created.
- Vendors and security firms: Aware—under pressure to come up with a solution.
- Targeted victims: Aware—nervous about creation of an exploit; pressuring vendor to fix it.

Strategy: Make available to vendor only, at no charge, for a given period for the vendor to fix the problem
- Discoverer of vulnerability: Good Samaritan.
- Vendors and security firms: Good deed if deadline to fix is reasonable; otherwise, not good.
- Targeted victims: May not be aware if done clandestinely. Good if fixed before going public. Bad if it goes public before being fixed.

Strategy: Sell to bad guys
- Discoverer of vulnerability: Can be very profitable.
- Vendors and security firms: Bad news.
- Targeted victims: Bad news.

Strategy: Sell to good guys
- Discoverer of vulnerability: Profitable, but may be less so than if sold to bad guys.
- Vendors and security firms: May be beneficial if sold to a security vendor on an exclusive basis, as it creates competitive advantage. Good for the software vendor if it can be fixed without anyone knowing.
- Targeted victims: Generally not aware.

Strategy: Sell to highest bidder
- Discoverer of vulnerability: Most profitable if it initiates a bidding war, but may be dangerous for the discoverer.
- Vendors and security firms: Generally bad, but depends on who wins the bidding war.
- Targeted victims: Neutral (not aware) or bad, depending upon the winner of the bidding war.

Strategy: Use as the basis for a targeted attack, which is not generally known, so no defenses would have been created
- Discoverer of vulnerability: Opportunity for fraud, destruction, blackmail.
- Vendors and security firms: Not aware.
- Targeted victims: Bad news—no protection available from vendors or security firms.
Table 2. Different views of various exploit disclosure/nondisclosure strategies

Strategy: Do nothing
- Creator of exploit: Will not gain any notoriety, money, or satisfaction.
- Vendors and security firms: Not aware.
- Targeted victims: Not aware.

Strategy: Make public
- Creator of exploit: Shame vendor(s) into coming up with a quick fix before the exploit is activated.
- Vendors and security firms: Aware—under greater pressure to come up with a protective solution.
- Targeted victims: Aware—nervous about activation of the exploit. Pressure vendor to protect against it. Pressure law enforcement to capture creator and deactivate exploit.

Strategy: Make available to vendor only, at no charge, for a given period for the vendor to fix the problem
- Creator of exploit: Good Samaritan.
- Vendors and security firms: Good deed if deadline to fix is reasonable; otherwise, not good.
- Targeted victims: May not be aware if done clandestinely. Good if vulnerability fixed before going public with exploit. Bad if it goes public before being fixed.

Strategy: Sell to bad guys
- Creator of exploit: Can be very profitable.
- Vendors and security firms: Bad news.
- Targeted victims: Bad news.

Strategy: Sell to good guys
- Creator of exploit: Profitable, but may be less so than if sold to bad guys.
- Vendors and security firms: May be beneficial if sold to a security firm on an exclusive basis, as it creates competitive advantage. Good for the software vendor if it can be fixed without anyone knowing.
- Targeted victims: Generally not aware.

Strategy: Sell to highest bidder
- Creator of exploit: Most profitable if it initiates a bidding war, but may be dangerous for the creator.
- Vendors and security firms: Generally bad, but depends on who wins the bidding war.
- Targeted victims: Neutral (not aware) or bad, depending upon the winner of the bidding war.

Strategy: Use as a targeted attack, which is not generally known and for which no defenses have been created
- Creator of exploit: Opportunity for fraud, destruction, blackmail.
- Vendors and security firms: Not aware.
- Targeted victims: Bad news—no protection available from vendors or security firms.
Threats and Vulnerabilities in the Human Context

We now examine how threats, exploits, vulnerabilities, and incidents interact within a typical human context. On one side we have the hacker. He (and the person is usually male) is looking to take advantage of weaknesses in the computer-network-people complex for personal aggrandizement, financial gain, or other insidious objectives. On the other side are the defense mechanisms, also made up of systems, networks, and people, all of which can be compromised by the nefarious actions of the predators. To the extent that the sides are uneven, with highly motivated attackers, who are often very skilled, and predominantly naïve and gullible victims, many of the attacks meet with success. Perhaps the all-time most successful exploit was the previously mentioned "ILOVEYOU" worm, which immediately appealed to millions of computer users as it struck an irresistible chord. The worm appeared on May 3, 2000 and rapidly spread to cause an estimated $5.5 billion in damage globally.

The human and social components of today's attack-defense mechanisms are often so significant as to dwarf other factors. Yet security professionals have often ignored them, preferring to address all threats with technical solutions rather than taking a holistic view. Could it be because those addressing defense measures are technicians who may have little interest in, or understanding of, the human interactions? Or are the attackers just so much smarter and more incisive in regard to human behavior that they are able to overwhelm or fool individuals' regular abilities to defend and resist?
Or should we consider that it might be the attackers' ability to adapt that leads to their eventual successes?

Mendell (2007) advises information security professionals to add psychological insights to their toolkits, and he specifically recommends the following actions in order to minimize information overload and still ingest the knowledge required to maintain currency and remain effective:

• Develop an intelligence gathering plan, particularly to learn about "the mindset of potential threat agents"
• Develop a focus plan so as to absorb as much relevant knowledge as possible
• Maintain a library of articles on specific topics for later reference
• Acquire automated tools to assist in the analysis of security logs and alerting
• With regard to security operations centers, involve end users in their design and have a human-factors expert ensure that the user interface promotes good decisions
• Have data owners work on the security classification process
• Develop a plan for protecting against electronic impersonation, such as "pretexting"

While these tips are very valuable, they still beg the fundamental question as to whether investing in protecting against attacks is cost-effective, as each security measure appears to beget a new attack, often requiring even more costly defenses, not to mention the direct costs of successful attacks and breaches themselves. After all, the attacker only needs to be successful one time, whereas the defender must protect against a myriad of attacks.
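That last asymmetry can be quantified with a back-of-the-envelope calculation. The sketch below is not from the chapter; the per-attack success probability and the attack counts are hypothetical, chosen only to show how quickly the odds shift toward a persistent attacker.

```python
# If each attack succeeds with probability p, a defender survives n
# independent attacks with probability (1 - p)**n. Even a tiny p compounds.
# The value of p here is a hypothetical assumption.

p = 0.01  # assumed chance that any single attack succeeds
for n in (10, 100, 1000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>5} attacks -> P(at least one success) = {at_least_one:.3f}")

# Output:
#    10 attacks -> P(at least one success) = 0.096
#   100 attacks -> P(at least one success) = 0.634
#  1000 attacks -> P(at least one success) = 1.000
```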
Here we will show how the relationship between attacker and victim is complex, as each tries to outwit the other, and how the defender is always at a disadvantage.

The Technical Viewpoint

On the technical side, the trend of vulnerabilities will likely follow a familiar downward sloping curve, as is common for depicting the number of "bugs" remaining in a software product. Such a curve is shown in Figure 5. As with computer program bugs, the number of outstanding errors or vulnerabilities will jump when a new version of the product is introduced and then begin to fall again. However, while there are arguably a fixed number of bugs to be corrected, and many of them might be security related, the concern is that new exploits will uncover vulnerabilities not previously known to exist. While this is also true of bugs in general, in that new ways of using the product may uncover new bugs, for security vulnerabilities the consequences can be much more serious. This becomes particularly significant when a vendor decides to discontinue support of an obsolete product, such as Microsoft did with Windows NT 4.0. While a licensee of a software product might feel more at risk when support is pulled, hackers might lose interest in trying to penetrate and cause damage through an obsolete product, since the absolute number of operational systems falls as it is being phased out. This calls into question the need to contract for expensive end-of-life product support. For example, the cost of custom support following discontinuance of Microsoft Windows NT 4.0 was very high for companies, and I am not aware of a single critical vulnerability and consequent security patch required from when support was officially pulled a couple of years ago.

The technical approach is inherently reactive and more generally used to protect against future occurrences of events that have already happened. This is because it is difficult to sell technical measures that are meant to address anticipated threats and exploits, particularly if a specific vulnerability has not yet been identified.
This view extends to a national and global level, where little support is given to those who rant and rave about something that the populace believes will either never happen or, if it were to occur, someone will surely be able to resolve it. This was somewhat true with the Y2K "millennium bug," for which the remediation was high and the resulting impact small, whereas the change in the date of Daylight Savings Time to March 11 from April 1 was mostly ignored or treated cavalierly until right before the event, with a resulting significant negative impact, particularly with electronic calendaring systems.

A significant reason why security is directed by those looking into the rear-view mirror, rather than through the windshield, is that known problems are easier to solve than unknown ones, and technical problems are easier to deal with than non-technical ones. But to really address the issues at hand, the human element must be incorporated along with technical considerations.

The Interaction of Metrics and Behavior

We have all heard the expression that "you can only manage what you can measure." This can be deceptive and is not always true (Hinson, 2006). In her weighty book on security and privacy metrics, Debra Herrmann (2007) begins with the following quotation from S.H. Kan (1995): "It is an undisputed statement that measurement is crucial to the progress of all societies."

I prefer the statement by Jerry Gregoire (2007) that "The ability to measure things begets new behaviors." That allows for inappropriate metrics leading to adverse behavior or responses, which may be the single most important reason why information security professionals might suffer from a lack of credibility.

That is why, in my opinion, it is crucial to include not only technical aspects of security but also human responses and social interactions.

The Human Component

If one were to introduce human and social aspects, security takes on a whole new form. It goes from backward looking and reactive to forward looking and proactive. Rather than trying to see how to defend against a current exploit (which must be done anyway as a baseline practice), the analyst tries to predict how both the attacker and the victim will behave in response to changing circumstances. Ideally, we are able to identify, develop, and analyze a set of scenarios relating to the likelihood of a particular event occurring and how the other party might respond to it.

Thus, for example, even before the strong authentication measures in U.S. banking, which were mandated by the FFIEC, went into effect, the OCC (2006) was warning that evildoers would likely take advantage of the transition by pretending to represent those banks introducing the measures and pilfering identity information. This illustrates a key aspect of the human-system interaction, namely, that the bad guys will even exploit bona fide attempts at strengthening the commercial environment by taking advantage of customer confusion.

Unfortunately, the implication is that one cannot fully trust any system at any particular time, even if the interaction is one purportedly intended to improve or safeguard your identity and your assets. A very common ploy of phishing scams is to imply that your security or assets are at risk. This results in even the most innocuous of interactions becoming suspect. As customers increasingly sense that no unsolicited and unexpected electronic communications can be trusted, efficient and lower-cost methods are diminished as effective means of doing business. This has a major cost impact, as will be discussed later.

At the same time, the hackers and fraudsters are evolving their attacks so that the victims are not suspicious and respond as desired. This is particularly the case for new technologies with which the victims are unfamiliar.
Innovative technologies, such as RSS (really simple syndication), whereby feeds are used to update content in real time, have become the latest vector for injecting malware into users' computers.

However, it is important, from the evildoers' perspective, not to close off mechanisms which they themselves use for practicing their criminal activities. To some extent, those organizations hosting online services, particularly financial firms, are playing into the hands of the evildoers by developing stronger security measures. Despite this, defending organizations have little choice if they are to retain certain lucrative channels of communication. There is a degree of symbiosis among evildoers, the defending entity, and individual victims, which significantly contributes to the continuation of an environment in which the cost of the crime is balanced against the financial benefits of maintaining the channel. When costs and liabilities are limited, there is little incentive to change much. However, legal and regulatory environments are changing the liability picture, and victims, or potential victims, eschewing business methods that they consider to be a threat to their privacy, themselves change the equation radically.

The argument that evildoers do not wish to destroy the mechanisms that support their activities is also applicable to terrorists, where it is considered to be in their interest not to bring down the Internet because it is an important communication vehicle for them.

An analogy is the road to success for a virus in the physical world, where, in order to survive and proliferate, a virus must not kill its victim until it has had the opportunity to reproduce and spread to others.

Of course, misjudgment by the evildoer, victim, and hosting organization can lead to unintended consequences. For the perpetrators, avoiding negative consequences means restricting their criminal activities in order to maintain the means of continuing those activities. For the victim, it is to avoid being victimized while enjoying the convenience and lower cost of performing their desired business or pleasure activities. For the hosting organization, it is to keep the customers happy by safeguarding their personal information and continuing to operate lower cost channels.

Cost-Benefit Aspects

In the end it really comes down to determining how best to apply security and privacy funding. Security budgeting and spending tends to be skewed towards technical solutions, which are in many cases yielding diminishing returns. Greater emphasis needs to be placed on examining and analyzing the human and social aspects, and putting money into research of the human-machine interaction. It is commonly held that the biggest bang for the buck is often achieved from training and awareness programs, which are generally inexpensive to implement, yet can have a considerable impact, especially when accompanied by deterrence in the form of threats of serious consequences for not following policy. However, despite extensive education and training programs, we still see unsuspecting or unthinking individuals responding inappropriately to the latest social-engineering gimmicks. Does this mean that training is ineffective, or is it a matter of not using the right training methods and materials, or not applying the training often enough? Or is it that the attackers and fraudsters are just that much smarter and are able to fool even those alert individuals who know better?

And then there is a whole raft of incidents that slip through the cracks and occur "inadvertently." These need to be addressed using preventative measures that are effective and easy to use, with procedures that can be readily followed, preferably without human intervention. The value of such systems, in terms of avoiding unintentional slips or errors, requires a different type of analysis; one that accounts for anticipated human and social behavior.
Some behaviors are universal, whereas others are limited to particular countries, cultures, ethnic groups, and so on. These similarities and differences greatly affect the outcome, particularly in countries where deterrence might be less effective.

Costs of Threats

Most threats aimed at information systems have some level of technology built into them. Even those threats that predominantly use social engineering usually have some technical basis, if only in regard to the delivery system. While the technology for use by "amateurs" or "script kiddies" may have been costly in time and effort for someone to develop, many such malware developers make their software available at little or no charge to anyone accessing their nefarious Web sites. "Professionals," who are usually more interested in financial gain than kudos, tend to keep the techniques and technology, which they and others develop, to themselves for their own use. It was recently reported (Vijayan, 2007) that several Web sites have sprung up to cater to the professionals, such as organized gangs.

Costs are also incurred by potential individual and organizational victims, since a significant number of protective and defensive measures, such as blocking certain traffic or closing off specific services, are taken on the basis of possible exploits that have not yet been developed and/or released. These measures can be expensive.

Benefits of Threats

Strange as it may seem, there are real benefits to threats. Threats perform a service to potential victims, even if the threats never become exploits. They serve to encourage individuals and organizations to maintain a high level of vigilance and apply patches and other measures sufficiently in advance.

The value of the threat is measured in terms of the down-the-line costs of not having put in protective measures and being subject to an exploit derived from the threat.

The main issue here is in determining whether a particular threat can and will become an exploit.
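One way to make that determination concrete is a classic expected-loss calculation. The sketch below is an illustration under assumed numbers, not a formula given in the chapter: it treats the "value" of a threat as the expected down-the-line cost of leaving it unaddressed, dominated by the probability that the threat matures into a working exploit.

```python
# Expected down-the-line cost of ignoring a threat.
# All probabilities and the loss figure are hypothetical assumptions.

def threat_cost(p_becomes_exploit: float,
                p_exploit_reaches_us: float,
                loss_if_hit: float) -> float:
    """Expected cost of not putting protective measures in place."""
    return p_becomes_exploit * p_exploit_reaches_us * loss_if_hit

# A threat with a 10% chance of becoming an exploit that, once released,
# has a 25% chance of reaching our systems and causing a $2M loss:
print(f"${threat_cost(0.10, 0.25, 2_000_000):,.0f}")  # $50,000
```

On this crude view, protection costing less than the expected figure is worth buying; as the text notes, the hard part is estimating the first probability.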
Comparisons of Technical and Human/Social Threats

One usually addresses threats to the technical infrastructure and applications by means of specific tools and measures. These are designed to avoid or prevent a potential exploit from doing damage. Threats to people and their general and financial well-being are more along the lines of the possibility of identity theft, which may or may not be engaged in by the perpetrator when an attack is launched. However, the mere threat of identity theft raises significant reaction from the public, from lawmakers, regulators, and auditors, and from management. The resulting requirements can be very costly to implement and operate.

Costs of Exploits

The costs of exploits are similar to those of threats, only more so. When a threat transitions to an exploit, particularly an exploit that has been shown to have been used effectively "in the wild," then clearly the expectation that one's systems or networks might be attacked increases significantly. Also, for the developers of the malware, success will likely lead to renewed enthusiasm and a possible increase in related activities.

From the victims' perspective, costs go up when a threat becomes an exploit. Typically, a relatively very small number of threats are actually realized as exploits. Analysts, who thought that a particular exploit would not be developed and successful, must revise the assessment of the probability of attack. They must also estimate the cost of damage incurred.
With a new, higher estimate of expected loss, additional funds will normally have to be released for emergency patching of the systems or other means of quickly reducing exposure.

On balance, it may not be a bad strategy to at least wait until notified that a real exploit has been developed before doing the remediation work. Since few threats materialize, one must use judgment to determine the risk to the organization. There are a couple of downside aspects, however. If one waits for a proven exploit to appear, it may already be too late to avoid damage. Secondly, one might not know whether or not one's organization is indeed vulnerable to a particular exploit until an incident happens. This was true of the SQL Slammer worm, when many organizations did not realize that they even had the vulnerable code within certain applications.
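The patch-now-versus-wait judgment can likewise be framed as a comparison of expected costs. The sketch below is only illustrative; every probability and cost is a hypothetical assumption, and the two downsides just described are folded into a single "caught out" term.

```python
# Patch every threat immediately vs. wait for a proven exploit.
# All figures are hypothetical assumptions chosen for illustration.

PATCH_NOW = 10_000        # routine remediation cost per threat
PATCH_EMERGENCY = 30_000  # rushed remediation once an exploit appears
P_EXPLOIT = 0.05          # fraction of threats that become real exploits
P_CAUGHT_OUT = 0.25       # chance the exploit causes damage before we react
DAMAGE = 500_000          # loss if we are caught out

cost_patch_now = PATCH_NOW
cost_wait = P_EXPLOIT * (PATCH_EMERGENCY + P_CAUGHT_OUT * DAMAGE)

print(f"patch everything now: ${cost_patch_now:,.0f} per threat")
print(f"wait for an exploit:  ${cost_wait:,.0f} per threat (expected)")
# With these numbers, waiting averages $7,750 against $10,000 for patching
# everything, echoing the "may not be a bad strategy" point; the conclusion
# flips as soon as the assumed probabilities or the damage figure rise.
```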
Benefits of Exploits

While similar to the limited benefits of threats, the benefits of exploits will generally be much less, since there is less time to prepare defenses. It is generally more difficult and costly to try to install defense mechanisms when subject to the relative immediacy of an active exploit. Also, by the time the fixes are completed, some damage may have been incurred already.

Some see a real benefit from hackers attempting to invade one's environment. In a published interview (Cone, 2007), Edward Amoroso, the chief information security officer at AT&T, states that he believes that hackers perform a positive service to organizations by pointing out weaknesses, which can then be fixed.

Comparisons of Technical and Human/Social Exploits

Technical exploits often attack without the victims even being aware of them. Conversely, those exploits that depend upon deceiving victims into taking particular actions to activate the exploit, or to disclose personal information to a suspect Web site, for example, may be ameliorated by technical means, such as spam filters. However, with the nature of the exploit likely to mutate or morph into another form, the human aspect becomes important. In many cases, the last line of defense is the person being subject to the attack, so that training them on appropriate behavior and response is paramount.

Costs of Vulnerabilities

Many vulnerabilities exist because software manufacturers have not taken enough care in ensuring program code follows secure practices and have not thoroughly tested the software for security vulnerabilities. In one sense, the manufacturers have saved money with this practice and presumably pass some of the savings to the purchaser. However, it is unlikely that net savings on the licensing costs of the software products offset the burden and cost of patching the software after it has been distributed, installed, and is in use.

Costs occur mostly in two areas: one is the cost of the patching efforts themselves, and the other is the cost of having vulnerable systems, which have not been patched, attacked successfully.

Not only is there an illicit market for threats and exploits, as mentioned previously, but there is also a similar market for vulnerabilities, including those for which exploits do not yet exist.

Benefits of Vulnerabilities

There is really very little tangible benefit of having vulnerabilities as far as those who acquire the vulnerable products are concerned.

It is ironic that the same vendors who license software products with inherent vulnerabilities may also profit from the patching of those vulnerabilities, since the presence of vulnerabilities helps to justify spending on maintenance and support services.
From the perspective of those who exploit vulnerabilities for fame and/or fortune, they may well provide a livelihood in terms of the ill-gotten gains of either blackmailing potential victims or exploiting the vulnerability directly.

Comparisons of Technical and Human/Social Vulnerabilities

Technical vulnerabilities are generally fairly well defined and understood once they have been discovered. There are also a finite number of vulnerabilities, although that number can only be guessed at. The real challenge is in finding them, and doing so before the attackers do.

On the other hand, human and social vulnerabilities are both difficult to identify, and even if they are defined, they are likely to change rapidly over time. There is effectively no limit on the number of ways a person can be fooled and exploited. There are also many more potential chinks in the armor when it comes to compromising individuals. One can fool, cajole, or threaten an individual into exposing vulnerabilities.

Costs of Defense Measures

A huge business has evolved to protect systems, networks, and the human beings who interface with them against potential and real attacks. The more protection that is implemented, the costlier it will be. Some measures are much more effective than others, so the key is to select the right combination of measures as it relates to the technology and the human and social environments. The goal here is to produce the most security for the least cost. A portfolio approach, as described by Axelrod (2007), can assist with such a decision.

Many information security professionals look to technical solutions for their security requirements, and too few place much faith in the human and social aspects. This will tend to produce relatively more expensive approaches than would a more holistic approach that takes into account the human and social environment in which the systems operate, and at some point will reach diminishing returns, without having benefited from more cost-effective lower tech approaches.

Benefits of Defense Measures

It goes without question that major benefits can be derived from the right set of defense mechanisms. Costs and benefits do not always align, however. And sometimes the same amount of money spent on different types of measure can yield very different returns.

Comparisons of Technical and Human/Social Defense Measures

For illustrative purposes, let us consider two cases—one in which greater reliance is placed on technical defenses and protective measures, and the other where the main line of defense is the human being rather than a machine.

Let us consider an illustrative example where the initial exploits are predominantly technical. We will assume an arbitrary measure of 50 "units" for the human-social exploits, such as phishing, aimed at a particular system and 150 units for the technical exploits, such as self-activating targeted viruses, worms, and Trojan horse malware. We now apply protective measures to each category of exploit. Let us assume that we install e-mail filtering, for example, at a cost of $20,000, and reduce the number of suspect e-mails by 50% to 25 units. In regard to technical exploits, let us assume that we install an IPS (intrusion prevention system) at a cost of $80,000, with a resulting reduction of 20% in the number of exploits getting through the defenses, to 120 units.

The net result is that we have reduced the total exploits from 200 units to 145 units, or by 55 units, at a cost of $100,000, which produces 0.55 units of exploit reduction per $1,000 of cost.
Now, we will look at another example with the same level of initial exploits, namely, 50 units of human exploits and 150 units of technical exploits, and the same total expenditure of $100,000. We now spend $10,000 on human protection measures, and see a resulting reduction in exploits of 20%, or 10 exploit units. We also spend $90,000 on measures to reduce the technical exploits and cut them by 35 units. Now, for the same total cost of $100,000, we have reduced the exploit level by 45 units. This is a reduction of 0.45 exploit units per $1,000 of cost, indicating that spending more on the technical side versus the human side, with the same total budget, actually reduces the effectiveness of the protection in this example.

Now let us assume that, in the first example, the attacker responds to the filtering program by coming up with more sophisticated attacks that manage to get through the filters, and that the available technology is not able to defend against the new version of the attack. The victim organization then responds by instituting a training program for users so that they will recognize the new attack and avoid it. If the cost of training is $5,000, and the result is to get back to the former level of protection, then the exploit reduction of 55 units now costs $105,000, leading to an exploit reduction of 0.52 units per $1,000.

We can see, therefore, that reducing the effectiveness of exploits is an ongoing iterative process and requires the fine tuning over time of protective measures, between those affecting human and technical victims, in response to the attackers' modification of their exploits.
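Because this arithmetic recurs every time the attacker adapts and the budget is reallocated, it is worth scripting. The snippet below simply restates the chapter's own figures; only the function and variable names are mine.

```python
# Exploit reduction per $1,000 spent, using the figures from the examples.

def reduction_per_1000(units_blocked: float, total_cost: float) -> float:
    return units_blocked / (total_cost / 1_000)

# First example: $20k of e-mail filtering blocks 25 units (50 -> 25) and
# an $80k IPS blocks 30 units (150 -> 120).
print(reduction_per_1000(25 + 30, 20_000 + 80_000))   # 0.55

# Second example: $10k of human measures blocks 10 units, $90k of
# technical measures blocks 35 units.
print(reduction_per_1000(10 + 35, 10_000 + 90_000))   # 0.45

# First example after the attacker adapts and $5k of training restores
# the former level of protection:
print(round(reduction_per_1000(55, 105_000), 2))      # 0.52
```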
Costs of Incidents

The whole purpose of security measures, whether they be human, technical, or blended, is to avoid incidents or, if they are not avoided or avoidable, to minimize their impact. The impact can be measured in terms of the total costs of incidents to the organization and its stakeholders (shareholders, employees, customers, competitors), as well as indirect costs borne by other organizations and individuals.

We see many estimates of the cost of incidents in the press, based on surveys (CSI/FBI, 2006) and reported incidents. They run the whole gamut from, at one extreme, the direct costs of stemming the breach, shoring up the systems, and getting back to normal business, to, at the other, determining the intangibles, such as loss of reputation. In between, there are such costs as the opportunity costs of lost customers, if the breach is made public.

Benefits of Incidents

Incidents can be beneficial to the organizations that sustain them, but produce even greater benefits to those that are not directly impacted. This is because, perhaps more than anything else, incidents promote action in terms of implementing security measures. For the organization sustaining the incident, there are all the costs associated with it, which must be subtracted from the benefits of taking appropriate risk mitigating actions. For those less affected observers of the incident, the lesson is learned without incurring the direct costs, so that the net benefits are that much greater.

While many fixes may be technical, the human and social aspects come to the fore when an incident is reported. Often the message is conveyed to senior management by employees, their peers, or through the press, and management responds by mandating action to protect against a like incident occurring in their organization. The degree to which senior management and, where appropriate, the Board of Directors take personal responsibility for fixing the vulnerability depends on the laws, regulations, and culture of the organization and its country of residence.

Comparisons of Technical and Human/Social Incidents

The resolution of an incident will depend heavily upon whether it was predominantly technical or human.
For example, a large proportion of reported data breaches are low-tech human errors or events, such as those involving lost or stolen laptop computers, computer tapes, and other storage devices. Here the remedy might be procedural and physical, such as putting tapes in locked cases, but may also be technical, such as encrypting the data on the hard disks of laptop computers.

In other cases, the attack might be technical, such as gaining access to data files containing customers' personal information. For the most part, fixing these vulnerabilities requires technical means, but will likely also have a human component, such as in verifying the identity of someone requesting access.

Costs of Recovery and Restoration

Often forgotten in the overall equation are the costs of recovering from an incident and, depending on the nature of the incident, restoring the operation back to its original standing. In a real sense, these costs depend upon how much preparation has been done beforehand and whether sufficient redundancy and resiliency have been built into the system. Usually, the more planning and testing, the lower the costs when an actual incident occurs. The tradeoff here is between security, which serves to protect the system and network environment and prevent incidents, and survivability, the aim of which is to ensure that the environment can be reestablished and reconstituted after an incident has taken place. Axelrod (2007) provides a more detailed analysis of the balance between security and survivability.

Interestingly, human intervention and control may be more critical in the recovery process than technical prowess, as judgment is often required beyond that of which an automated solution might be capable. This is particularly the case when a particular incident does not follow a previously established script. In my experience, the complexity of most systems and networks requires, in addition to technical tools, the personal knowledge and experience of the operational staff, as resolutions to all possible permutations and combinations of events are not built into the written procedures.

Benefits of Recovery and Restoration

Clearly, the benefits of an effective recovery and restoration come from the ability to survive an incident or series of incidents and restore viable operations. However, whether an incident response exercise is a test or in response to an actual event, there are always lessons to be learned. It is hoped that the observed deficiencies in the process will result in improvements to the procedures so that, should a similar event occur in the future, the recovery will flow better.

Comparisons of Technical and Human/Social Recovery and Restoration

As has been mentioned, human participation often dominates the recovery and restoration phases, since the automated tools for these procedures have not been sufficiently developed to allow for a completely automated process. More likely, the process will be computer assisted. There have been attempts at instilling learning and adaptation into tools for failures that are more limited in scope, in that the tools recognize a failure, or potential failure, from the system and network behavior and respond according to predetermined scripts. In a sense, this is what IPS (intrusion prevention systems) do. Perhaps we will develop FPS (failure prevention systems), much as existed with Tandem and Stratus Technologies high-availability computers, which contained many processors and failed over from one that was broken to other hot standby units. Tandem favored hardware fail-over, whereas Stratus used a software approach. The same fail-over concept can be employed in RAID (redundant array of independent—or inexpensive—disks) systems.
Future Work on Successive Iterations

While we have mostly focused on a linear, one-time process, mention was made of processes that involve action and reaction behaviors. These latter processes are more realistic but more difficult to model and follow. They call for computer simulation models wherein feedback from attacker to victim, and vice versa, drives consequent modifications of attacks and defenses, much as a game involves strong and rapid interactions between and among players.

Most academic and commercial models to date have focused on the technical aspects, and behavior monitoring methods have addressed behavior as depicted through the monitoring of, say, traffic flowing over a network. What is needed is a more macro-level view of how attackers might develop an exploit upon learning of a vulnerability, how the victims might attempt to shore up the vulnerability, how the attacker then modifies his exploit to get around the new protective measures, and so on.

One might then consider whether the iterative model will eventually converge to equilibrium or whether instability will increase over time. Axelrod (1979) developed such a model with respect to the interaction of users with different pricing models for computer resources.

This is clearly an area that could benefit from applied research.
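As a hint of what such a simulation might look like, here is a deliberately toy sketch. It is entirely my own construction with invented adaptation rates, not a model from the chapter: each round the attacker claws back some effectiveness against the current defenses, and the defender then spends to suppress it again, so one can watch whether the exchange settles toward an equilibrium or escalates.

```python
# Toy attacker-defender feedback loop. All rates are invented for
# illustration; this is not a validated model.

effectiveness = 0.50   # fraction of attacks currently getting through
ATTACK_ADAPT = 0.30    # attacker recovers 30% of blocked attacks per round
DEFENSE_GAIN = 0.40    # defender blocks 40% of succeeding attacks per round
SPEND_PER_ROUND = 25_000

total_spend = 0
for round_no in range(1, 11):
    effectiveness += ATTACK_ADAPT * (1 - effectiveness)  # attacker adapts
    effectiveness *= (1 - DEFENSE_GAIN)                  # defender responds
    total_spend += SPEND_PER_ROUND
    print(f"round {round_no:2d}: {effectiveness:.3f} of attacks succeed, "
          f"${total_spend:,} spent so far")

# With these rates the loop converges to the fixed point of
# e = (1 - DEFENSE_GAIN) * (e + ATTACK_ADAPT * (1 - e)), about 0.31,
# rather than oscillating; other rate choices can make it unstable.
```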

Future Trends

The IT industry is abuzz with the rapid evolution of the Web and ways in which applications and services can be delivered as Web services or service orientations. Common terms are SOA (service-oriented architecture), SaaS (software as a service), grid computing, Web 2.0, and Web 3.0.

Many of these leading-edge technologies involve a different set of user-vendor models, where applications, processing, data handling, and storage are provided on a pay-per-use basis and security depends on the provider and other participants. As applications are pushed out to the end user, the latter must be able to trust that the applications thus distributed are secure and protect the data entrusted to the applications and the infrastructure on which the applications run. As collaborative work groups and social structures evolve, there need to be ways to ensure the confidentiality and integrity of information and its sources. As the collaborative, sharing model of Web 2.0 evolves into the adaptive, intelligent "semantic" Web 3.0, there is an even greater need to understand the technical and human threats, exploits, and vulnerabilities that will abound. In fact, as dependency on systems increases, the systems themselves will take on many human elements in regard to judging authenticity and recognizing the appropriateness of the results of inquiries. In addition, there will be a need to validate the security of all the many integrated environments and verify the accuracy of the data and processes that will have been aggregated into the user interfaces.

Future environments and models will make those being worked on today appear to be very primitive. It is important to develop the right effective tools today for today's and tomorrow's worlds in order to stand any chance at all of maintaining control over the human, social, and technical factors that are evolving rapidly.

Summary and Conclusion

Perhaps the most neglected aspects of securing computer systems and networks against malevolent attacks or unintended breaches are those related to human behavior and social practices. By including them in a model of threats and vulnerabilities, the economics of protection change.
The intention is to channel information security funding into more appropriate security measures than would have been applied had the human and social factors been ignored.

It is recognized that models that account for all aspects of behavior along with technical realities are complex and difficult today, but they will become even more of a challenge as new technologies burst forth.
References

Axelrod, C. W. (1979). Computer effectiveness: Bridging the management-technology gap. Washington, D.C.: Information Resources Press.

Axelrod, C. W. (2005). The demise of passwords: Have rumors been exaggerated? The ISSA Journal, May, 6-13.

Axelrod, C. W. (2007a). The dynamics of privacy risk. Information Systems Control Journal, 1, 51-56.

Axelrod, C. W. (2007b). Analyzing risks to determine a new return on security investment: Optimizing security in an escalating threat environment. In H. R. Rao, M. Gupta, & S. Upadhyaya (Eds.), Managing information assurance in financial services (pp. 1-25). Hershey, PA: IGI Global.

Cone, E. (2007). Hacking's gift to IT. CIO Insight, March 7. Retrieved August 15, 2007, from www.cioinsight.com/article2/0,1397,21087886,00.asp

CSI/FBI. (2006). Eleventh annual CSI/FBI computer crime and security survey. Retrieved August 15, 2007, from http://i.cmpnet.com/gocsi/db_area/pdfs/fbi/FBI2006.pdf

Gartner. (2005). Retrieved April 1, 2007, from http://www.gartner.com/130100/130115/gartners_hyp_f2.gif

Gregoire, J., Jr. (2007). CIO confidential: The Manhattan effect. CIO Magazine, 20(9), 30-34.

Herrmann, D. (2007). Complete guide to security and privacy metrics. Boca Raton, FL: Auerbach Publications.

Hinson, G. (2006). Seven myths about information security metrics. The ISSA Journal, July, 43-48.

Kan, S. H. (1995). Metrics and models in software quality engineering. Boston: Addison-Wesley.

Mendell, R. L. (2007). The psychology of information security. The ISSA Journal, March, 8-11.

Messmer, E. (2007). RSA '07: Bruce Schneier casts light on psychology of security. CSO Online, February 7. Retrieved August 15, 2007, from http://www.networkworld.com/news/2007/020707-rsa-schneier.html

OCC. (2006). Customer authentication and internet banking alert (OCC Alert 2006-50). Retrieved April 1, 2007, from http://www.occ.treas.gov/ftp/alert/2006-50.html

Scalet, S. D. (2007). Alarmed: Bolting on security at Stop & Shop. CIO Online, March 9. Retrieved August 15, 2007, from www.csoonline.com/alarmed/03092007.html

Schneier, B. (2007a). The psychology of security. Draft, February 28. Retrieved August 15, 2007, from www.schneier.com/essay_155.html

Schneier, B. (2007b). All or nothing: Why full disclosure—or the threat of it—forces vendors to patch flaws. CSO Magazine, 6(2), 20.

Stone, B. (2007). Study finds Web antifraud measure ineffective. The New York Times, February 5.

Symantec. (2007). Symantec security response—glossary. Retrieved August 15, 2007, from www.symantec.com/avcenter/refa.html

Vijayan, J. (2007). Hackers now offer subscription services, support for their malware. Computerworld, April 4. Retrieved August 15, 2007, from www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=security&articleId=9015588
Endnotes

1. A document, "Understanding Gartner's Hype Cycles, 2007," which describes the proprietary hype cycle in greater detail, may be ordered from the Gartner Web site at www.gartner.com/Display/Document?id=509085&ref=g_SiteLink

2. For more details about the Morris worm, see http://en.wikipedia.org/wiki/Morris_worm, retrieved August 15, 2007.

3. For more details about the ILOVEYOU worm, see http://en.wikipedia.org/wiki/ILOVEYOU, retrieved August 15, 2007.
Chapter XVI
Bridging the Gap between
Employee Surveillance
and Privacy Protection
Lilian Mitrou
University of the Aegean, Greece

Maria Karyda
University of the Aegean, Greece

Abstract

This chapter addresses the issue of electronic workplace monitoring and its implications for employees' privacy. Organisations increasingly use a variety of electronic surveillance methods to mitigate threats to their information systems. Monitoring technology spans different aspects of organisational life, including communications, desktop and physical monitoring, collecting employees' personal data, and locating employees through active badges. The application of these technologies raises privacy protection concerns. Throughout this chapter, we describe different approaches to privacy protection followed by different jurisdictions. We also highlight privacy issues with regard to new trends and practices, such as teleworking and the use of RFID technology for identifying the location of employees. Emphasis is also placed on the reorganisation of work facilitated by information technology, since the frontiers between the private and the public sphere are becoming blurred. The aim of this chapter is twofold: we discuss privacy concerns and the implications of implementing employee surveillance technologies, and we suggest a framework of fair practices which can be used for bridging the gap between the need to provide adequate protection for information systems and the preservation of employees' rights to privacy.

Introduction

Employee monitoring is not a new phenomenon. Employers have always monitored their employees for reasons of efficiency, security, or legal obligation. Nowadays, however, information technology (IT) has significantly reduced the cost and time needed for information processing, storage, and retrieval, thus making monitoring easier. Moreover, new technologies allow for the creation of increasingly more sophisticated information sources on employees. At the same time, companies and their information systems face increased threats originating from their interior. To address this so-called insider threat, companies adopt a wide range of monitoring tools provided by the IT industry. The use of these tools, however, has been reported as threatening employees' privacy. As monitoring and surveillance devices are steadily becoming easier to use as well as cheaper, it is to be expected that monitoring and surveillance technologies will be used even more intensively in the near future.

Is the workplace to be considered as a public domain where the notion of privacy is out of place? Do employers' property rights prevail over employees' right to privacy? This chapter aims to provide answers to these questions and to analyze the privacy implications of the use of monitoring technologies, with regard to lawful monitoring principles.

Background

Employee monitoring or employee surveillance denotes employer-controlled observation of employees in order to ascertain the performance, behavior, and other characteristics of employees. Traditionally, frontline supervisors had the duty to perform employee surveillance as a means of managing their workforce and protecting the workplace. Surveillance nowadays is, in most cases, automatically performed through the use of technologies such as video and monitoring software. Electronic monitoring entails the following actions:

• An employer's use of electronic devices to review and evaluate the performance of employees;
• An employer's use of electronic devices to observe actions of employees while employees are not directly performing work tasks, or for a reason other than measuring work performance;
• An employer's use of computer forensics, the recovery and reconstruction of electronic data after their deletion, concealment, or attempted destruction (Lasprogata, King, & Pillay, 2004).

Why Do Companies Conduct Surveillance?

Typically, employment terms entail collecting a considerable amount of information about employees, as these data are necessary for basic management activities (Mitrou & Karyda, 2006). Electronic monitoring in the past was mainly used to measure and evaluate employee performance (for instance, through keystroke analysis). Employers tend to regard control of the workplace as their prerogative, including the right to protect and control their property, and the right to manage employee performance in terms of productivity, quality, training, and the recording of customer interactions (Findlay & McKinlay, 2003).

Lately, however, the stakes of security and liability have altered the rationale of employee monitoring. One of the reasons most commonly cited by enterprises employing monitoring technologies is the endeavor to protect the interests of the company and its stakeholders. The following paragraphs illustrate the main reasons used for justifying employee surveillance.
Productivity, Cost Control, and Allocation

Employers have legitimate rights and interests to run their business efficiently, evaluating and assessing the workforce, and also have the right to protect themselves from the liability or the harm that employees' actions may cause (DPWP, 2001). Monitoring methods are implemented for reasons such as controlling and allocating the costs of different performances and communications and measuring and improving productivity. As computer systems have become an integral component of work, process monitoring aims at maximizing the productive use of these systems. Reportedly, U.S. corporations lose more than $54 billion a year because of non-work related employee use of the web (Conry-Murray, 2001). Other cost related reasons used for justifying employee monitoring include the cost and downgrade of the company's network bandwidth when employees use the Web and e-mail for non-work related activities.

Security

Employers also use surveillance methods to discover theft and pilferage, to investigate suspected theft, and to identify possible culprits. They also have to deal with additional security problems caused or intensified through the use of information and communication technologies.

A major purpose of monitoring employees, under this perspective, is to discover and deter activities adverse to company interests, such as theft of tangible and intangible property (e.g., trade secrets). Employers monitor the use of computer and communication systems in order to prevent or respond to unauthorized access to computer systems, including access by computer hackers, and to protect computer networks from becoming overloaded by large downloadable files.

Moreover, employers often need to verify breaches of confidentiality or monitor compliance with security rules, and to prevent security breaches which are caused, for example, when an employee, intentionally or unintentionally, downloads a virus or opens an e-mail that contains a Trojan horse program as an attachment. Other objectives pursued through employee surveillance include the prevention or detection of industrial espionage and copyright, patent, or trademark infringement by employees and third parties.

Insider threats have been identified to pose a significantly high level of risk and to have a heavier cost for organisations (Schultz, 2002). Security controls used for protecting information systems from externally initiated attacks (e.g., firewalls and intrusion detection systems) are considered to be ineffective in containing insider threats, since these require a different approach (Porter, 2003; Lee & Lee, 2002; Schultz, 2002). The main risks connected to the insider threat include the intentional or unintentional leak of confidential or proprietary company information, contamination from viruses, Trojans, and other types of malicious code, unauthorized access to information, degradation of Internet connection/network service as a result of abusive use, financial fraud, and adverse actions or sabotage from disgruntled employees. It is also important to note that the cost associated with such threats is not negligible. Forty-two percent of the companies that took part in the 2006 Computer Crime and Security Survey reported that their employees had abused Internet privileges, by downloading, for instance, pornography or pirated software. Losses from this type of abuse alone were estimated at over $1,800,000 (CSI/FBI, 2006).

Protection of Own or Third Persons' Interests

Employers are confronted with the obligation to prevent or detect unauthorized utilization of the employers' computer systems for criminal activities and terrorism (Bloom, Schachter, & Steelman, 2003).
of e-mail can entail a number of severe problems for organisations. The dissemination of illegal or offensive material via e-mail by employees, or the distribution of confidential information, can damage a company's reputation or even result in legal prosecution. Controlling employees' compliance with workplace policies on the use of computer systems, e-mail accounts, and Internet access is a means to prevent and to investigate complaints of employee misconduct, including harassment and discrimination complaints. Lately, with the rise of the "blogosphere," employers are also interested in protecting themselves from defamation: employees' Internet activities are checked for offensive or libelous content. Blogging about the employer, even with comments posted on private servers outside company time, has already led to dismissals (Ball, 2006).

As organisations are becoming, through the use of IT, decentralized, accountability becomes inevitably localized (Findlay & McKinlay, 2003). Monitoring helps employers to prepare their defense to lawsuits or administrative complaints, such as those brought by employees who are victims of discrimination or harassment, or in case of discipline measures and/or termination of employment. In some cases, monitoring proved to be useful in responding to discovery requests in litigation related to electronic evidence (Lasprogata et al., 2004). Employee monitoring technologies are also used for collecting evidence for auditing and judicial purposes after an incident has occurred.

Finally, other factors driving the growing numbers of employers monitoring their employees' activities are the low cost of monitoring technologies and the increase in employees using the company's IT resources for personal reasons.

Workplace surveillance: tools and techniques

As computer software enabling workplace surveillance drops in price and increases in sophistication, more employers are using electronic means of monitoring. The market for surveillance and monitoring software has grown rapidly in recent years, and increasingly more companies acquire such tools. This growth has mainly been attributed to the falling cost of the technology, as well as to the increasing need companies have to protect their infrastructure, especially after 9/11. Widely employed surveillance technologies can be categorised as follows: communications monitoring; video surveillance; desktop monitoring; location monitoring; and biometrics.

Communications monitoring entails surveillance of e-mails, Web sites that have been visited, phone calls, and intranet and Internet traffic. The technology used to support this type of monitoring includes monitoring software, firewalls, intrusion detection systems that monitor all network traffic, sniffers (passive listeners that intercept Internet communications), and antivirus programs. The percentage of employers in the U.S. who use video monitoring to detect theft, violence, sabotage, and other employee misconduct rose from 33% in 2001 to 51% in 2005 (AMA, 2005). Remote control programs installed on employees' computers allow control of a remote host for surveillance purposes, redirecting the video display of the remote host to another host. In this way, an employer can view in real time a copy of what the employee is viewing. Desktop monitoring also includes file content monitoring. In many companies, employees are equipped with smart ID cards which can track their location while they move through the workplace. New employee ID cards can even determine the direction the worker is facing at any given time. Global positioning technology (GPS systems) is also used to monitor employee cell phones, to keep track of company vehicles, and to monitor employee ID cards. The latter are also widely used for controlling physical security and access to buildings and data centers. However, RFID cards are also used for gathering and retaining personally identifiable data regarding employee movement. Finally,
fingerprint scans, facial recognition technology, and iris scans are also used by a few companies for employee monitoring.

Generally, monitoring software provides a variety of capabilities, including the following:

• Preventing access: This type of software entirely blocks access to any Web site which has been previously characterized as inappropriate by employers.
• Alerting: Monitoring software can be set to alert employers (or the person appointed by them) when employees visit Web sites they ought not to.
• Direct surveillance: Direct observation of the employee's computer.
• Flagging: Employees' e-mails are screened for predefined keywords (see the sketch after this list).
• Keystroke logging: Keystrokes, as well as idle time, are recorded, thus allowing for the recreation of employees' actions. In this way information can be recorded even after it has been deleted.
• Instant messaging monitoring: Software that allows monitoring of the messages exchanged by employees.
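To make the flagging capability concrete, the following minimal sketch screens a message body against a predefined watch list and reports only which terms matched, not the message itself. The watch list and the function name are hypothetical, not taken from any particular product.

FLAGGED_TERMS = {"confidential", "trade secret", "tender offer"}  # illustrative watch list

def flag_message(body: str) -> set:
    """Return the watched terms found in an e-mail body (case-insensitive)."""
    text = body.lower()
    return {term for term in FLAGGED_TERMS if term in text}

hits = flag_message("Please treat the attached tender offer as Confidential.")
if hits:
    print("Message flagged for review:", sorted(hits))  # only the matches are logged

A filter of this kind can support the "least-intrusive" practices discussed later in the chapter, since reviewers learn that a message tripped the filter without the software retaining its full content.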
Current Use of Monitoring Technologies

According to the Information Security Breaches Survey (DTI, 2006), 80% of UK large businesses with Internet access log and monitor their employees' Web access, while 90% of them filter incoming mail for spam. The same survey reports the case of a UK publishing company that logs all Internet access by its staff and has line managers monitoring it (DTI, 2006). Moreover, 52% of large businesses attribute the worst security incident they suffered to internal causes, while 65% reported staff misuse of the company's information systems. Misuse of Web access (including access to inappropriate Web sites and excessive Web surfing) was reported as the most common form of misuse; other types of staff misuse cited were misuse of e-mail access, unauthorized access to data, breaches of data protection laws or regulations, and misuse of confidential information. It is also worth noting that 22% of the companies needed two to ten man-days to recover from the worst security incident caused by Internet misuse.

According to the Forrester Research reports (Forrester, 2006), 38% of respondents in the U.S. and UK said they employed staff to read or otherwise analyze outbound e-mail. Responding to the same research, 44% of U.S. companies with more than 20,000 employees said they hire workers to snoop on workers' e-mail. Nearly one in three U.S. companies also said they had fired an employee for violating e-mail policies in the past 12 months, and estimated that about 20% of outgoing e-mails contain content that poses a legal, financial, or regulatory risk.

Finally, 58% of the companies that responded to the CSI/FBI Computer Crime and Security Survey (CSI/FBI, 2006) use special software to monitor Web activity and 62% of them use e-mail monitoring software. Moreover, many companies use packet-sniffing software that can intercept, analyze, and archive all communications on their intranet.

Discussion

Apart from using software for monitoring their employees' e-mails and the content of the Web sites they visit, companies also apply a variety of means for controlling the use of e-mail accounts and Internet access, including firewalls (in this case to monitor outbound and not only inbound traffic), restriction of Internet access, limitation of the space of employee e-mail accounts, enforcement of codes of conduct and performance management systems, and provision of the right to read e-mail content to security personnel. Companies also apply random checks to their employees' e-mails (White & Pearson, 2001). It
is interesting to note, at this point, that though e-mail monitoring is widely adopted, employers seldom open postal mail received by their employees.

Privacy implications of workplace surveillance technologies

A Changing Work Environment

Traditionally, workplace monitoring involved some type of human intervention (for instance, in the form of a foreman's surveillance, access or physical/body controls) and either the consent, or at least the knowledge, of employees (EPIC, 2002). In this way, monitoring was, at least in most cases, visible to the persons who were monitored. Technological progress, however, has not only facilitated surveillance through the use of automated means. It has also radically altered the nature and structure of the workplace and has increased the risks an employer has to face. Furthermore, it has extended and intensified employees' monitoring and has changed its nature. These profound changes are strictly interrelated and interdependent with each other, and their impact needs to be thoroughly explored.

The evolution of information technologies has changed both day-to-day working conditions and also the individual and group relationships forged within the work environment (Lasprogata et al., 2004). Apart from the socio-economic developments, a critical change concerns the "genuine migration of the technologies from the periphery to the very center of the work process" (CNIL, 2002). Means used for working, like the PC or an intranet, are now becoming means and space for communication. When e-mail replaced the telephone as a communication means, it became easier for employees to feel a sense of privacy (Selmi, 2006). Intranets offered new types of social spaces inimical to managerial control. Communication means are, at the same time, object and instrument of surveillance.

The changing structure and nature of the workplace has led to more invasive and often covert monitoring practices. Recently, much of the focus has been on electronic monitoring, as technology has enabled employers to engage in constant supervision of employees at work and access to employees' electronic communications (Hodges, 2006). Advances in science have also pushed the boundaries of what information and personal details an employer can acquire from an employee (Privacy International, 2006). With the rise of team working, peer surveillance (watching colleagues' performance, behaviors, or characteristics) is growing, reinforced through social norms and culture (Ball, 2006). Developments in the nature of work and the structure of organisations have made it difficult to distinguish clear and unambiguous boundaries between work and private life, as people work longer hours, work from home on computers owned by their employer, and work on call (Findlay & McKinlay, 2003; DPWP, 2004). As the once clear lines between an employee's personal and professional life are blurring, monitoring may nowadays extend to private spaces, activities, and time.

The legitimization of employers' monitoring activities is closely related to the actual perception of the individual that is monitored. It seems justifiable that if a probable cause exists that an employee is involved in an illegal or harmful activity, that person's rights may be restricted to a greater extent than would normally be allowed (Nouwt, de Vries, & Loermans, 2005). However, surveillance affects the rights and interests of every person in the workplace. As monitoring technologies are increasingly modular and self-perpetuating, surveillance becomes a "mundane, ubiquitous and inescapable fact of everyday (working) life" (Findlay & McKinlay, 2003).
Employees' Privacy

In recent years, privacy has emerged as one of the central issues. What is included in the right to privacy is, however, highly debatable. The quest for the concept of privacy focuses on the search for a means to establish an identifiable and sustainable interface between the public and the private sphere of human life. The concept of privacy can only be defined in terms of the cultural norms of a particular society and the position of the individual within this society.

Privacy represents primarily a sphere where it is possible to remain separate from others, anonymous and unobserved. The public sphere offers no such guarantee. Similarly, the concept of private represents an aspect of freedom and, more specifically, freedom from interference. However, the need for privacy emerges from within the society, from the various social relationships that people form with each other, with private sector institutions, and with the government. Thus, privacy is not merely a right possessed by an individual but it is a form of freedom built into the social structure (Solove, 2004). In this respect, privacy aims to protect life choices from public control so that everyone can preserve an underlying capacity for autonomous decision-making. Privacy represents a social ability or capacity of the individual and is a characteristic of relations with others. It is a claim against being simplified, objectified, and judged out of context (Simitis, 1987; Schwartz & Reidenberg, 1996).

The right to privacy is often treated as akin to property. Under this perspective, privacy is bargainable. Consequently, it can be exchanged for other rights and privileges: in the employment context privacy, if any, may be exchanged for something of commensurate value, like taking or keeping a job (Lasprogata et al., 2004). However, this approach underlines the freedom to alienate privacy rights and ignores the dignity element, which is inherent in the notion of privacy: as related to privacy, dignity summarizes, among other principles, the recognition of an individual's personality, respect for other people, non-interference with another's life choices, and the possibility to act freely in society (Rodota, 2004). The protection of privacy is built into society's structure in order to shape the quality of life in the public sphere (Solove, 2006). Human dignity, as a source and expression of privacy, is not generated by the individual; it "is instead created by one's community and bestowed upon the individual. It cannot therefore be bartered away or exchanged" (Lasprogata et al., 2004). Furthermore, privacy is a fundamental component of equality, in order to prevent monitoring from turning into a tool that is used to discriminate against certain individuals.

The issues surrounding employees' privacy are representative of the broader transformation that has occurred in the workplace over the last decades: in stable workplaces and lifetime employment relationships there was a stronger element of trust between employers and employees, which rendered privacy less significant. "Once the workplace was dismantled…privacy became of greater importance for employees and on the flipside, a greater threat to employers" (Selmi, 2006). The discussion about employees' privacy rights also mirrors a recent and fundamental realignment of the guiding principles of labor law, at least in Europe: the emphasis is redirected upon the rights and the empowerment of the individual employee rather than the paradigm of "collective laissez faire" and the representative function of employees' representatives (Simitis, 1999).

The lack of clarity in relation to the notion of privacy creates difficulties when elaborating a policy or resolving a case. While the interests on the employer's side (e.g., property, efficiency, security) are often readily articulated, it is sometimes difficult to define the privacy harm. In the employment context, privacy violations involve a variety of types of harmful or problematic activities.
The communication with others as well as the use of communication services falls within the zone of (communicational) privacy (Mitrou & Karyda, 2006). The collection and storage of personal information relating to telephone use, as well as to e-mail and Internet use, regardless of the knowledge of the monitored employee, amounts to an interference with the right to respect for private life and freedom of communication. The French Cour de Cassation (Onof v. Nikon) ruled that an employer cannot read personal messages sent or received by employees without violating the right to privacy and infringing the fundamental liberty of confidentiality of correspondence, even if the employer has prohibited non-work related use of the computer (Lasprogata et al., 2004; Delbar, Mormont, & Schots, 2003). The increasing number of computer users, applications, and system interconnections, along with the increased complexity of overall technological capabilities, entails a greater chance that e-mail privacy is compromised. Additional concerns emerge when internal e-mail monitoring is used to track employee performance. In this case it is not just "suspected employees" whose e-mail is read (Sipior & Ward, 1995) and the secrecy of communications that is affected, but also employees' dignity (Austrian Supreme Court (Oberster Gerichtshof), 2002).

Increased employee monitoring raises the risk that false inferences can be drawn about employees' conduct: what if an employee is sent an "offensive" e-mail accidentally or, even, maliciously? What if an employee accidentally visits a pornographic site upon opening a spam e-mail that links to such a site, or when such a site is displayed as a "hit" in response to a perfectly innocent search query? Even if Internet use surveillance has common elements with traditional searches for hard-copy pornography, there are significant additional dangers for the individuals who are monitored. As underlined in the Report of Privacy International, "surveillance technology cannot distinguish between an innocent mistake and an intentional visit" (Privacy International, 2006). In any case, electronic surveillance extends beyond searching, for it records behavior and social interaction (Solove, 2004).

Implications of Video Surveillance and Location Monitoring Techniques

Several privacy invasions arise from video and location surveillance techniques. Pervasive video surveillance and image digitalization allow tracking of movements. Surveillance rigidifies one's past: it is a means of creating a trail of information about a person and in this perspective makes the past "visible" (Rodota, 2004) and "present." Surveillance inhibits freedom of choice, impinging upon self-determination.

Video surveillance interferes with the principle of "free development of personality," a principle that is embedded in most European constitutions: video surveillance seems to be accepted only for the protection of goods and persons. More specifically, case law states that permanent video surveillance is an infringement of the "right pertaining to one's own picture". The German Federal Labour Court recently accepted that privacy and informational self-determination are seriously affected through permanent surveillance and the "surveillance pressure" created thereby. The court emphasized that "innocent" employees would face a serious and disproportionate interference with their right of personality (Bundesarbeitsgericht, Beschluß vom 14. December 2004, RDV 5-2005). Solove (2004) points out Justice Cohen's remark that "pervasive monitoring of every first move or false start will, at the margin, incline choices toward the bland and the mainstream."

Essential privacy concerns are also raised through the use of location techniques. The aggregate information collected over several days or months through active-badge systems can reveal movement profiles and behavior patterns. "Traffic analysis" can be aggregated or combined with other sources of information to reveal data,
potentially discriminating against or damaging the monitored person. Although it can be a highly effective security system, a common perception of active badges is that employers can use them to spy on employees or monitor their activities, such as time spent in the restroom or the length of coffee breaks (Starner, 2001). As trade unions argue, employees' privacy may also be inhibited by RFID tracking technology (OECD, 2006). The introduction of global positioning devices in vehicles, and occasionally on individuals, providing locational information about employees has often proved controversial, with many claiming that they infringe on employee privacy interests while demonstrating a lack of respect for employees (Selmi, 2006). A violation of human dignity is also assumed if video surveillance or other tracking methods are used in order to monitor working speed, or if restrooms are monitored to prevent people reading newspapers in secret (Hoeren & Eustergerling, 2005). Finally, video surveillance and location techniques jeopardize another historically fundamental freedom right, that is, the freedom of movement (DPWP, 2004).

Besides challenging employees' privacy rights, electronic surveillance practices also challenge rights concerning the freedom of expression and the freedom of association. Unrestricted access to and use of personal data imperils virtually every constitutionally guaranteed right: neither freedom of speech, nor freedom of association, nor freedom of assembly can be fully exercised as long as it remains uncertain whether, under what circumstances, and for what purposes personal information is collected and processed (German Federal Constitutional Court, Census case, 1983). Slobogin argues that being placed under surveillance impedes one's anonymity, inhibits one's freedom to associate with others, makes one's behaviour less spontaneous, and alters one's freedom of movement (Solove, 2006).

As the use of surveillance technologies may lead to the so-called function creep, the information gathered may be linked with other personally identifiable data (for example, personnel records), it may be used for other purposes, and it may become an instrument for monitoring performance. In this case, strictly legal justifications for surveillance are replaced by organisational justifications. Finally, electronic surveillance practices may have an impact on the relative distribution of reward, undermining existing processes of consultation and altering the concepts of distributive justice (Ball, 2006).

Relevant regulatory framework

Undoubtedly, privacy and other fundamental rights are affected by monitoring in the context of employment relationships. Although several aspects of privacy can be defined, there is no absolute or uniform concept of privacy or personal data protection. The issue of workplace privacy can be summarised in the dilemma between a property-based and a rights-based approach (Ball, 2006; Lasprogata et al., 2004).

Privacy in Private Contexts and Relationships

A first critical difference between the presented (and—to the extent possible—compared) approaches and the respective legal systems (U.S. and European Union) pertains to the scope of constitutional protection of privacy rights: the U.S. Constitution does not contain an express right to privacy; furthermore, there is no comprehensive legal framework providing for the protection of privacy in the U.S. However, in certain situations, the Supreme Court has interpreted the constitution to protect the privacy of individuals: the Fourth Amendment protects against unlawful searches and seizures (U.S. Supreme Court, Katz v. U.S.) and applies to federal, state, and local government employees, where employers conducted the searches (U.S. Supreme
Court, O'Connor v. Ortega). Employees' privacy expectations and, consequently, privacy protection are hardly founded on constitutional texts, since these have been found to restrict only government intrusions into privacy, and are therefore inapplicable to workplace privacy intrusions by private employers (Lasprogata et al., 2004; Phillips, 2005; Bloom et al., 2003).

The U.S. approach to privacy seems diametrically opposite to that in the European Union (EU): Article 8 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR) states: "Everyone has the right to respect for his private and family life, his home and his correspondence," and the more recent Charter of Fundamental Rights of the European Union affirms that "everyone has the right to respect for his or her private and family life, home and communications" (Art. 7) as well as "to the protection of personal data" (Art. 8). Although the jurisprudence of the European Court of Human Rights (Niemitz, Halford, Copland, and other similar cases) concerns government action, it appears that in the EU the opinion that Article 8 of the Convention is relevant also in private contexts has gained strong support. In the European approach, constitutional rights cease to be mere means of defence against state activities and become structural components of the employment relationship. As the employer's opportunities to monitor the employees, "the citizens of the enterprise" (France—Rapport Auroux, 1981), augment the chances to influence their behaviour and thus increase their dependence on the employer (Federal German Labour Court, 1984), rights and freedoms penetrate the employer-employee relationship and call into question a system of "indisputable prerogatives" of the employer (Simitis, 1999). The horizontal effect of the provisions of Article 8 of the European Convention is generally accepted by the jurisprudence in European states (for example, French Cour de Cassation, the case Onof; Belgian Supreme Court, 2001).

The U.S. Doctrine of "Reasonable Expectation of Privacy"

The so-called "Katz test" has to be applied to determine the "reasonableness" of the employees' "privacy expectations" in light of the totality of circumstances as well as the "realities of the workplace" (Supreme Court, O'Connor v. Ortega). Case studies show that courts have held that the reasonableness of privacy expectations varies considerably with the norms and circumstances surrounding the specific activity, and that the workplace reasonably entails very low privacy expectations, as other public or private interests may override privacy expectations, thus making intrusions through monitoring reasonable.

Given the, by definition, public nature of the workplace and its purposes, many argue that employees, who are hired to attend to company business, cannot have a "reasonable expectation of privacy" (Fazekas, 2004). In some jurisdictions, law and courts have recognized only a "minimal right to privacy," which is limited to those instances where the matter or area intruded upon is "intensely private." There must be solitude or seclusion to be intruded upon; that is, monitoring in public places does not constitute an invasion (Phillips, 2005).

Under the "content approach," courts "decide the legitimacy of the employer's interest…by analysing the purposes behind the monitoring and whether the content of the communication is reasonably related to the proffered purposes." Under the "context approach," courts "determine the reasonableness of the employee's expectations by analysing the employer's notification procedures" (Kesan, 2002). In this approach, a policy posted in a company bulletin or site, or a "surveillance clause" included in a contract, is likely to diminish or extinguish privacy expectations in the workplace. However, in Smyth v. Pillsbury the judge noted that the plaintiff had no reasonable expectation of privacy notwithstanding his employer's assurance about the confidentiality
of the messages (Desprocher & Roussos, 2001; Phillips, 2005). Consent may destroy such an expectation: it is the alienability of privacy that allows an employer to receive (in most cases even "implied") consent of the employee, or to virtually eliminate any reasonable expectation of privacy by notifying its employees of a monitoring policy. Furthermore, many justify the lack of privacy by referring to the fact that monitored communications are voluntarily transmitted on an employer's network, using equipment designated to serve business objectives (Fazekas, 2004). The "property argument" has proved a decisive one (Lasprogata et al., 2004): courts have insisted on property rights, affirming the principle that employers may monitor communications taking place inside their premises with the use of their equipment. In McLaren v. Microsoft, the Texas Court of Appeals expressed the opinion that McLaren did not have a reasonable expectation of privacy since the e-mails were transmitted over the company's network and were "at some point accessible to a third-party." The judges noted that notwithstanding the personal password he used to access his messages and the fact that he stored them in his "personal folder," the messages "were not McLaren's personal property, but were merely an inherent part of the office environment" (Desprocher & Roussos, 2001).

The European Approach to Privacy Rights in the Workplace

Electronic employee monitoring is currently at the forefront of legal and public debate in Europe. Since the 1992 Niemitz decision, the European Court of Human Rights has recognised that the right to privacy extends to the workplace (Findlay & McKinlay, 2003). The court rejected the distinction between private life and professional life exactly because the workplace is especially suited for social intercourse and "it is after all in the course of their working lives that the majority of people have a significant opportunity of developing relationships with the outside world" (ECHR, Niemitz v. Germany). A decisive criterion is the difficulty to "distinguish clearly which of an individual's activities form part of his professional life and which not" (Niemitz v. Germany).

The case Halford v. the United Kingdom was insightful for the extension of the protection of privacy in correspondence to electronic communications: the court decided that interception of workers' phone calls in the workplace constituted a violation of Art. 8 of the European Convention and rejected the argument of the United Kingdom that the plaintiff had no reasonable expectation of privacy in those calls, as they were made using telephones provided by the employer. The court has recently (April 2007) confirmed this approach: in Copland v. the United Kingdom, it stated that the reasonable expectation as to the privacy of calls "should apply in relation to the applicant's e-mail and Internet usage." The court recalled explicitly that the use of information relating to the date and length of telephone conversations, and in particular the numbers dialed, as an "integral element of the communications made by telephone" (Malone v. the United Kingdom), can give rise to an issue under Article 8 of the Convention. Accordingly, the court considered that the collection and storage of personal information relating to the applicant's telephone, as well as to her e-mail and Internet usage, without her knowledge, amounted to an interference with her right to respect for her private life and correspondence.

The concept of "reasonable expectation of privacy" is also present in the European approach. In Halford v. United Kingdom, the court considered that the failure to inform Mrs. Halford that her calls might be monitored created a "reasonable expectation of privacy." This consideration is given the interpretation that the court's ruling suggested that the extent of employees' privacy can be determined largely by the employer (Findlay & McKinlay, 2003). However, in the case P.G. and J.H. v. the United Kingdom (2004), the court concluded that a reasonable expectation of
privacy is only one criterion to determine whether an interference with the right to privacy exists. The concept of Article 8 of the European Convention recognizes the mere existence of privacy expectations in a free and democratic society. The protection afforded by the Convention is also stronger from the employees' perspective, as they would not have to prove the reasonableness of their expectations.

Privacy and Data Protection Principles

The Data Protection Working Party extracts three principles from the Article 8 jurisprudence that apply to public and private workplaces:

a) Employees have a legitimate expectation of privacy in the workplace, which is not overridden by the location and ownership of the electronic communications means used;
b) Respect for private life includes, to a certain degree, the right to establish and develop relationships with other human beings. The fact that such relationships, to a great extent, take place at the workplace puts limits on the employer's legitimate need for surveillance measures;
c) The general principle of secrecy of correspondence covers communications at the workplace (DPWP, 2002). A number of cases judged by courts in European states confirm this approach (Delbar et al., 2003).

The fundamental rights of privacy and secrecy of communications of employees are, however, subject to derogations and limitations, in particular when they are confronted with rights and freedoms of others similarly protected by the law, for example, the legitimate interests of the employers. More specifically, employees' rights are balanced against the interests of employers when validating the processing of employees' communications and their personal data (Mitrou & Karyda, 2006). Can employers shape expectations of privacy or define the protection level through contractual provisions or simply organisational policies?

In the U.S., the reasonable expectation of workplace privacy is often reduced by the use of consent from employees: employers demand such consent as a "standard business procedure" (Phillips, 2005). As a result, consent to monitoring is becoming implicitly acknowledged in the employment relationship. It is noteworthy that the concept of consent as a way to legitimize monitoring practices under European Union law is not quite as straightforward as under U.S. law, particularly in the employment context, where withholding consent can have immediate negative job consequences.

According to the EU Data Protection Directive (Directive 95/46/EC), which has a direct and immediate effect on the human resource operations of employers, consent must be explicit, freely given, and fully informed. The European Commission has expressed the opinion that "employers should avoid relying on the worker's consent as a means that legitimises by itself processing of personal data" (European Commission, 2002), while the International Labour Organisation accepts, under conditions, the employee's consent as a legitimate basis for the collection of data (ILO, 1997). The Data Protection Working Party has taken the view that when an employer has to process personal data as a necessary and unavoidable consequence of the employment relationship, it is misleading to seek to legitimize this processing through consent (DPWP, 2001). Due to the nature of the employment relationship, in which there is an inherent asymmetry of power and the employee is subordinate and dependent, reliance on consent should be confined to only the very few cases where the employee has a genuine free choice and is subsequently able to withdraw the consent without detriment (Mitrou & Karyda, 2006).

International and national regulations, as well as other non-legally binding texts, such as the International Labour Organisation's Code of Conduct
(1997) or the OECD Privacy Guidelines (1980), allow us to outline the core principles pertaining to lawful and legitimized monitoring of employees. These principles are: legitimacy, finality, necessity, proportionality, and transparency.

• Legitimacy: Legitimate employee monitoring and processing of the derived data includes data that are necessary: (a) for compliance with a legal obligation of the employer; (b) for the performance of the work contract; (c) for the purposes of a legitimate interest pursued by the employer; or (d) for the performance of a task carried out in the public interest (DPWP, 2001). The legitimate purpose, which is pursued through monitoring, should be set in advance of a measure's application and be readily demonstrable to employees (Charlesworth, 2003).
• Finality: Employers must distinguish between the various aspects of the employment relationship and specify the aims for which monitoring is required. Information processing must be strictly confined to the data necessary in relation to the particular employment relationship. Both the amount and the type of data vary according to the individual employee's tasks or the context of the employer's decisions (Simitis, 1999). Monitoring should be carried out for a specific, explicit, and legitimate purpose, and the derived information should not be further processed in any way that is incompatible with that purpose. For instance, personal data collected in order to ensure the security or the proper operation of processing systems should not be processed to control the behavior of individual employees, except where the latter is linked to the operation of these systems (European Commission, 2002; ILO, 1997). Furthermore, personal data collected by electronic monitoring should not be "the only factors in evaluating worker performance" (ILO, 1997).
• Necessity and proportionality: The level of tolerated privacy intrusion depends on the nature of the employment as well as on the specific circumstances surrounding and interacting with the employment relationship (DPWP, 2001). The employer's monitoring policy should be tailored to the type and degree of risk the employer faces. Monitoring must, in all cases, be necessary, appropriate, relevant, and proportionate with regard to the aims that it is pursuing. The employer may carry out monitoring of electronic online communications data as long as it is pursuing the following: the prevention of illegal or defamatory acts; acts that are contrary to good ethics or which can damage the dignity of another person; the protection of the economic, commercial, and financial interests of the organisation; the security and good operation of its information and communication systems; and the observance of the principles and rules applicable in the company for the use of online technologies (Mitrou & Karyda, 2006). Employers must check whether any form of monitoring is absolutely necessary for a legitimate and specified purpose before proceeding to such activities. The UK Information Commissioner proposes to carry out a formal or informal "impact assessment" to decide if and how to carry out monitoring (a checklist sketch follows this list). This assessment involves the identification of purposes and benefits, the identification of "adverse impacts" on workers, and possibly on third parties such as customers, and the consideration of possible alternatives (for example, limitation of monitoring to high-risk workers or areas) (UK Information Commissioner, 2003). The proportionality principle rules out routine monitoring of all staff, notwithstanding particular cases such as automated monitoring for purposes of security and proper operation of the system (e.g., viruses) (DPWP, 2002; European Commission, 2002). The most important of the effects of the
proportionality principle is that employers should always monitor employees "in the least-intrusive way" (ILO, 1997).
• Transparency: The transparency requirement seems to be the commonly accepted minimum component of a workplace privacy policy. The transparency principle requires that employers' monitoring practices be fully and clearly disclosed to all employees subject to the policy, along with the reasons for the monitoring and, ideally, upon hiring. Notably, courts in Denmark, the Netherlands, the UK, and Germany have established the necessity for employers to have issued a clear use policy or instructions on Internet and e-mail use before it is legitimate for them to dismiss or discipline employees on grounds of misuse (Delbar et al., 2003). Employers should also inform their employees about the principal and secondary uses to which personal data generated by such systems are being put (IWGDPT, 1996). So-called secret or covert monitoring can only be justified in exceptional circumstances. This requires that there is suspicion on reasonable grounds that a grievous criminal activity has been or will be committed (IWGDPT, 1996; ILO, 1997). Finally, according to the ILO's code of practice, secret monitoring should be permitted only if "it is in conformity with national legislation."
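One hypothetical way to operationalize these five principles, in the spirit of the impact assessment mentioned above, is a pre-deployment checklist that records the relevant facts about a proposed monitoring measure and reports which principles it appears to breach. The field names and the pass/fail logic below are illustrative assumptions, not drawn from the cited legal texts.

from dataclasses import dataclass

@dataclass
class MonitoringMeasure:
    purpose: str                  # the specific, explicit aim (finality)
    legal_basis: str              # contract, legal duty, legitimate interest (legitimacy)
    stated_in_advance: bool       # purpose set before the measure is applied
    least_intrusive_option: bool  # less intrusive alternatives were considered
    routine_for_all_staff: bool   # blanket monitoring of the whole workforce
    employees_notified: bool      # policy disclosed to employees (transparency)

def failed_principles(m: MonitoringMeasure) -> list:
    """Return the lawful-monitoring principles the measure appears to breach."""
    failures = []
    if not m.legal_basis:
        failures.append("legitimacy")
    if not (m.purpose and m.stated_in_advance):
        failures.append("finality")
    if not m.least_intrusive_option or m.routine_for_all_staff:
        failures.append("necessity/proportionality")
    if not m.employees_notified:
        failures.append("transparency")
    return failures

An empty result would mean the measure clears the checklist; anything else points back to the assessment steps recommended by the UK Information Commissioner (2003).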
Fair practices

This section describes a set of fair practices which, when adopted, could help bridge the gap between privacy and the countervailing interests of security. These suggestions have been based on the previous analysis and follow the lawful monitoring principles identified in the legal framework.

First, it is important that all monitoring activities are compliant with the legal and regulatory framework and that they are in line with business ethics. The main concerns with regard to this include the lack of specific policy provision by employers, the lack of audit or review as to how employee information is used, and a subsequent lack of awareness of monitoring practice and policy on the part of employees (Ball, 2006). For these reasons, compliance with the legal requirements and business ethics should be demonstrated and illustrated in the monitoring policy adopted and properly communicated to the employees of companies that apply surveillance techniques.

Second, companies should provide clear, well-defined, written policies concerning the use of their IT resources (e.g., use of e-mail, Internet access, etc.) (Mitrou & Karyda, 2006). Monitoring policies should describe in detail the types of employee monitoring techniques used, the reasons for the monitoring, who will have access to the information collected, and those to whom the information may be disclosed, and should also make explicit which e-mail or Internet usage is allowed and which is not. Information should also be provided with regard to the nature and duration of the surveillance, the features of the technology used and the type of information compiled, the details of any enforcement procedures outlining the notification of breaches of internal policies, and, finally, employees' rights to access the data processed about them and to correct errors (DPWP, 2002; IWGDPT, 1996). In this way, employees can have a clear understanding as to what is considered permitted, responsible, and ethical use of the technological infrastructure.

Third, a critical principle laid down in international and national legal texts requires that employers minimize the intrusion on the privacy of their employees and workers. For instance, the UK Employment Practices Data Protection Code suggests "impact assessment" as the best way to approach workplace monitoring (Nouwt et al., 2005). Generally, most of the risks directing the application of monitoring tools can be confronted using a combination of security controls which are
less intrusive with regard to employees' privacy. For example, the risks of unintentional leakage of confidential information or the spread of a virus after opening an e-mail attachment could be avoided if employees were properly trained and received information on security issues, such as the risks of opening attachments e-mailed from unknown senders or the practices of social engineering.

Fourth, filtering tools against non-authorized sites, in association with firewalls for monitoring Internet connections, are prevention measures that do not necessitate informing employees. A posteriori control of Internet connection data, for instance by department or by user, or a statistical control, should in most cases be sufficient without it being necessary to carry out an individualized, personal control of the accessed sites (CNIL, 2004). Furthermore, e-mail monitoring should be performed through automated means, searching for keywords instead of viewing the content of the e-mail. Specific procedures for managing the content of e-mail (e.g., storing, deleting, etc.) should also be followed. As far as location monitoring is concerned, an alternative to active badges is to design systems in which the user solely controls the resultant information. In other words, the user's wearable computer would gather, process, and filter any data collected or distributed about the user (Starner, 2001).
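As an illustration of such a statistical, a posteriori control, the sketch below aggregates proxy-log records into department-level connection counts and withholds statistics for very small groups, so that no individual user is singled out. The record layout, the directory mapping, and the suppression threshold are assumptions made for the example.

from collections import Counter

MIN_GROUP_SIZE = 5  # illustrative threshold below which statistics are withheld

def department_report(log_records, directory):
    """Summarise Web-connection counts per department, never per user.

    log_records: iterable of (username, url) tuples from a proxy log.
    directory:   mapping of username to department.
    """
    connection_counts = Counter()
    users_seen = {}
    for user, _url in log_records:
        dept = directory.get(user, "unknown")
        connection_counts[dept] += 1
        users_seen.setdefault(dept, set()).add(user)
    # Report only departments large enough that counts cannot be tied
    # to an identifiable individual.
    return {dept: count
            for dept, count in connection_counts.items()
            if len(users_seen[dept]) >= MIN_GROUP_SIZE}

A report produced this way supports capacity and policy decisions while leaving individualized inspection as an exceptional, separately justified step.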
Fifth, another issue to keep in mind when designing a monitoring policy is granting employees explicit control over their personal information. In this way, an employer shows respect for and confidence in employees' use of the technology, and the employee-employer relationship is positioned on a trust basis (Starner, 2001). Involving employees in the design and implementation of monitoring systems will ensure that these systems have a better chance of being accepted, and that employees are informed about where monitoring information goes and how long it is kept (ILO, 1997; Hodges, 2006).

It should also be noted that privacy, in general, has not yet entered the domain of collective bargaining at the level of formal agreements. Surveillance and disclosure remain emergent issues in which the scope and depth of joint regulation of surveillance practices is settled at the enterprise level. In non-union firms, surveillance is regarded exclusively as a managerial prerogative, tempered by corporate human resources philosophies and practices (Findlay & McKinlay, 2003). However, in many European states, like Germany, France, Sweden, the Netherlands, and Belgium, there are stringent requirements for consultation with employees' representatives, and collective agreements or model codes of conduct are in place as instruments to regulate the use of IT infrastructure as well as the use of surveillance means (Nouwt et al., 2005). Regulating workplace monitoring and privacy should not focus on the "physical artifacts or techniques" but on the "social relationship between employer and employees" (Phillips, 2005).

Conclusion and open issues

Monitoring has negative side effects that affect both the observed and the observers: the panoptic effect of being constantly monitored, achieved through electronic surveillance, has negative impacts on the relationship of mutual trust and confidence that should exist between employees and employer (UK Information Commissioner, 2003; Desprocher & Roussos, 2001). Fazekas (2004) reports that workers whose communications were monitored suffered from higher rates of depression, anxiety, and fatigue than those not subject to monitoring at the same business. More specifically, monitoring technologies affect employees' feelings towards their work and the workplace, their attitudes, emotions, beliefs, norms, and so forth. Stanton (2000) proposed a framework in which perceptions of fairness, satisfaction with monitoring, and monitoring invasiveness were interrelated attitudinal outcomes.
Fairness is defined as the degree to which workers evaluate monitoring practices affecting them as reasonable and appropriate. Finally, the notion of invasiveness is used to describe the extent to which employees perceive monitoring practices as an invasion of privacy.

It is also important to note that monitoring tools aim to address an inherent controversy: companies provide their employees with Internet access and e-mail accounts mainly for increasing productivity and lowering transaction costs. Restricting these privileges may, up to a certain point, provide a level of protection against risks resulting from the so-called insider threat, but at the same time the much-anticipated benefits of IT are undermined. Moreover, new types of employment, such as teleworking, can be rendered unfeasible by employing such restrictions. At the same time, teleworking entails a set of new privacy challenges. For example, how can an employee's home be monitored without impinging upon non-work-related activities? How can employee surveillance during off-hours be prevented?

On the other hand, however, technology benefits can be negated by inappropriate use of information technology. It seems, therefore, that a balance between the employees' right to privacy and the need to ensure employers' benefits is not very easily attainable. The guidelines for fair practices that have been previously described can help limit this discrepancy.

Conflicting laws and jurisdictional approaches reflect not only the contrasting philosophical assumptions underlying different legal systems, but also the inherently ambiguous relationship between property and privacy rights in the contemporary workplace (Findlay & McKinlay, 2003). Privacy protection requires careful balancing, as neither privacy nor its countervailing interests are absolute values. Due to conceptual confusions, courts and legislatures often fail to recognize privacy problems and thus no balancing is possible. This does not mean that privacy should always win in the balance, but it should not be dismissed just because it is ignored or misconstrued. Maintaining employees' privacy can contribute to individual self-esteem and the development of workplace relations on a trust basis, which can be for the mutual benefit of both employees and employers.

References

American Management Association. (2005). Workplace e-mail and instant messaging survey. Retrieved March 2006, from http://www.amanet.org/research/

Ball, K. (2006). Expert report: Workplace. In D. M. Wood (Ed.), Surveillance studies network, a report on the surveillance society for the information commissioner (UK), appendices (pp. 94-105). London.

Bloom, E., Schachter, M., & Steelman, E. H. (2003). Competing interests in the post 9-11 workplace: The new line between privacy and safety. William Mitchell Law Review, 29, 897-920.

Charlesworth, A. (2003). Privacy, personal information and employment. Surveillance and Society, 1(2), 217ff.

Commission Nationale de l'Informatique et des Libertés (CNIL). (2002). La cybersurveillance sur les lieux de travail. Paris.

Conry-Murray, A. (2001). The pros and cons of employee surveillance. Network Magazine, 12(2), 62-66.

CSI/FBI. (2006). Computer crime and security survey. Retrieved February 2007, from http://www.gocsi.com
Data Protection Working Party—DPWP. (2001). Opinion 8/2001 on the processing of personal data in the employment context (5062/01/Final).

Data Protection Working Party—DPWP. (2002). Working document on the surveillance of electronic communications in the workplace (5401/01/Final).

Data Protection Working Party—DPWP. (2004). Opinion 4/2004 on the processing of personal data and video surveillance (11750/02/Final).

Delbar, C., Mormont, M., & Schots, M. (2003). New technology and respect for privacy at the workplace. European Industrial Relations Observatory. Retrieved January 2006, from http://www.eiro.eurofound.eu.int/print/2003/07/study/TN0307101S.html

Desprocher, S., & Roussos, A. (2001). The jurisprudence of surveillance: A critical look at the laws of intimacy (Working Paper). Lex Electronica, 6(2). Retrieved March 2006, from http://www.lex-electronica.org/articles/v6-2/

DTI. (2006). Information security breaches survey 2006. Retrieved March 2006, from www.dti.gov.uk

Electronic Privacy Information Center (EPIC). (2002). Possible content of a European framework on protection of workers' personal data. Workplace Privacy, European Commission. Retrieved October 2005, from //www.epic.org/privacy/workplace

Fazekas, C. P. (2004). 1984 is still fiction: Electronic monitoring in the workplace and U.S. privacy. Duke Law & Technology Review, 15. Retrieved January 2006, from http://www.law.duke.edu/journals/dltr/articles/PDF/2004DLTR0015.pdf

Findlay, P., & McKinlay, A. (2003). Surveillance, electronic communications technologies and regulation. Industrial Relations Journal, 34(4), 305-314.

Forrester. (2006). Forrester research reports. Retrieved March 2006, from http://www.forrester.com

Hodges, A. C. (2006). Bargaining for privacy in the unionized workplace. The International Journal of Comparative Labour Law and Industrial Relations, 22(2), 147-182.

Hoeren, T., & Eustergerling, S. (2005). Privacy and data protection at the workplace in Germany. In S. Nouwt, B. R. de Vries, & C. Prins (Eds.), Reasonable expectations of privacy (pp. 211-244). The Hague: TMC Asser Press.

International Labour Office—ILO. (1997). Protection of workers' personal data. Geneva.

International Working Group on Data Protection in Telecommunications—IWGDPT. (1996). Report and recommendations on telecommunications and privacy in labour relationships. Retrieved January 2006, from http://www.datenschutz-brlin.de/doc/int/iwgdpt/dsarb_en.htm

Kesan, J. P. (2002). Cyber-working or cyber-shirking?: A first principles examination of electronic privacy in the workplace. Florida Law Review, 289ff.

Lasprogata, G., King, N., & Pillay, S. (2004). Regulation of electronic employee monitoring: Identifying fundamental principles of employee privacy through a comparative study of data privacy legislation in the European Union, United States and Canada. Stanford Technology Law Review, 4. Retrieved March 2006, from http://stlr.stanford.edu/STLR/Article?04_STLR_4

Lee, J., & Lee, Y. (2002). A holistic model of computer abuse within organisations. Information Management & Computer Security, 10(2), 57-63.

Mitrou, E., & Karyda, M. (2006). Employees' privacy vs. employers' security: Can they be balanced. Telematics and Informatics Journal, 23(3), 164-178.

Nouwt, S., de Vries, B. R., & Loermans, R. (2005). Analysis of the country reports. In S. Nouwt, B. R. de Vries, & C. Prins (Eds.), Reasonable
expectations of privacy (pp. 323-357). The Hague: TMC Asser Press.

Organisation for Economic Co-Operation and Development—OECD. (2006). RFID: Drivers, challenges and public policy considerations (DSTI/ICCP (2005)19/FINAL).

Phillips, J. D. (2005). Privacy and data protection in the workplace: The US case. In S. Nouwt, B. R. de Vries, & C. Prins (Eds.), Reasonable expectations of privacy (pp. 39-59). The Hague: TMC Asser Press.

Porter, D. (2003). Insider fraud: Spotting the wolf in sheep's clothing. Computer Fraud & Security, 4, 12-15.

Privacy International. (2006). PHR2005—threats to privacy (28/10/2006). Retrieved March 2006, from http://www.privacyinternational.org/

Rodota, S. (2004, September 16). Privacy, freedom and dignity. Closing remarks at the 26th International Conference on Privacy and Personal Data Protection, Wroclaw.

Schultz, E. E. (2002). A framework for understanding and predicting insider attacks. Computers and Security, 21(6), 526-531.

Schwartz, P., & Reidenberg, J. (1996). Data privacy law. Charlottesville, VA: Mitchie Law Publishers.

Selmi, M. (2006). Privacy for the working class: Public work and private lives (Public law and legal theory working paper No. 222). The George Washington University Law School.

Simitis, S. (1987). Reviewing privacy in an information society. University of Pennsylvania Law Review, 135, 707-728.

Simitis, S. (1999). Reconsidering the premises of labour law: Prolegomena to an EU regulation on the protection of employees' personal data. European Law Journal, 5, 45-62.

Sipior, C. J., & Ward, T. B. (1995). The ethical and legal quandary of email privacy. Communications of the ACM, 38(12), 48-54.

Solove, D. J. (2004). Reconstructing electronic surveillance law. The George Washington Law Review, 72, 1701-1747.

Solove, D. J. (2006). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477-564.

Stanton, J. M. (2000). Reactions to employee performance monitoring: Framework, review, and research directions. Human Performance, 13, 85-113.

Starner, T. (2001). The challenges of wearable computing: Part 2. IEEE Micro, 54-67.

UK Information Commissioner. (2003). The employment practices data protection code.

White, G. W., & Pearson, S. J. (2001). Controlling corporate e-mail, PC use and computer security. Information Management and Computer Security, 9(2), 88-92.
Chapter XVII
Aligning IT Teams' Risk Management to Business Requirements

Corey Hirsch
LeCroy Corporation, USA

Jean-Noel Ezingeard
Kingston University, UK

Abstract

Achieving alignment of risk perception, assessment, and tolerance among and between management teams within an organisation is an important foundation upon which an effective enterprise information security management strategy can be built. We argue the importance of such alignment based on information security and risk assessment literature. Too often a lack of alignment dampens clean execution of strategy, eroding support during the development and implementation of information security programs. We argue that alignment can be achieved by developing an understanding of enterprise risk management plans and actions, risk perceptions, and risk culture. This is done by examining content, context, and process. We illustrate this through the case of LeCroy Corp., showing how LeCroy managers perceive risk in practice, and how LeCroy fosters alignment in risk perception and execution of risk management strategy as part of an overall information security program. We show that in some circumstances diversity of risk tolerance profiles aids a management team's function. In other circumstances, variances lead to dysfunction. We have uncovered and quantified nonlinearities and special cases in LeCroy executive management's risk tolerance profiles.

Introduction

A sociological understanding of risk perception as an input to information security development is becoming a necessity. We know this from two strands of literature: the first is the literature in risk assessment in fields other than information security. The second is the information security literature. In particular, understanding how management and functional teams perceive risk, and decide and act in managing risk, is one cornerstone of an effective enterprise information security management strategy. If managers do not understand the reasons behind an information security policy, or do not fully support the rationale behind it, they are unlikely to engage in its development or adhere to it later. Furthermore, divergent information security decisions and actions may have the effect of canceling out each other, and render the enterprise risk management strategy less effective. In addition, events such as mergers, security breaches, or regulatory changes may cause managers' perceptions of risk to evolve.

How, then, do managers perceive risk in practice? And how might an enterprise foster an aligned approach to risk management? This chapter presents such a methodology. We will use a medium sized manufacturer of test and measurement equipment, LeCroy Corp., to illustrate. We will show that whilst there are areas where perceptions toward and tolerance of risk are shared within a department or work team, there can be substantial variations between different groups of managers. Groups which routinely work together on information security and risk management related tasks, however, have lower standard deviations in their risk judgments than teams which do not share this working experience. Yet this second group may have responsibilities that are critical to enterprise risk management.

Individuals in a population display variation in their tolerance for risk. A retired widower, for example, might choose an investment known to offer lower returns than other investments available, because it also presented a lower likelihood of variations in return. A young entrepreneur, on the other hand, might be willing to accept a high probability of surprises, as long as she felt the upside was commensurate with the downside. Willingness to accept a reduction in return, in order to reduce expected variation in return, is intolerance to risk. Willingness to accept high expected variation in return, in order to maximize expected return, is tolerance for risk. This chapter will illustrate how top executives are mathematical in their risk appetite at low and medium stakes, yet highly risk-averse when the stakes are higher, such as when complete business success or failure are potential outcomes. The chapter will also demonstrate how to quantify an organization's level of risk tolerance, which will in turn enable a reader to align IT risk management strategy to an organization's risk culture.

Background

A good understanding of both intolerance and tolerance to risk is at the core of any successful information security policy, usually developed in three stages. The first stage typically entails risk identification and assessment. This is usually followed by stages looking at how risks can be monitored and controlled, with a third and final stage concerned with risk avoidance and mitigation. For instance, COBIT 4.0 (ITGI, 2005) proposes that the "assess and manage IT risks" high level control objective should be met through a series of 10 activities culminating in the maintenance and monitoring of a risk action plan. Similarly, in ISO 17799:2005 (ISO, 2005a), the first section describing best practice is one on "risk assessment and treatment."

Sources of information security risk are usually documented in taxonomies of risks. They tend to list broad categories of risk sources (Backhouse & Dhillon, 1996) that can be used to ensure that


all sources of potential risks have been surveyed. For instance, Loch, Carr, and Warkentin (1992) classify sources of information security risks as internal versus external, human versus non-human, and accidental versus intentional. Similar classifications exist in the ISO 27001 control objectives (ISO, 2005b) and in most texts relating to information security (see for instance, Whitman & Mattord, 2003).

Such taxonomies and classifications have been criticized by Dhillon and Backhouse (2001). They remark that checklists and taxonomies of threat tend to leave out the social nature of information security problems. This makes it difficult to get a clear picture of management's appetite for risk as an input to the information security strategy, and subsequently ensure that the expectations and actions of various stakeholders are aligned. Yet, recent research suggests that understanding an organization's appetite for risk (and subsequently ensuring a good alignment between the stakeholders' attitudes to risk and its management) is perhaps as important to the success of an information security policy as is understanding risks clearly (Ashenden & Ezingeard, 2005; Ezingeard, Mcfadzean, Howlin, Ashenden, & Birchall, 2004). This is now understood in professional standards. COBIT 4.0, for instance, firmly reinforces the need to understand an enterprise's appetite for risk as part of the IT risk management process.

A key question, therefore, is how to measure (or estimate) the appetite for risk of an organization and use this estimate as an input to an information security strategy. Further, how can we ensure a good degree of alignment of attitudes to risk across an organization? The answers rely first of all on understanding the basis of risk management and alignment.

Risk Management and Alignment

Alignment

The notion of alignment (strategic fit) is crucial in many other areas of business. It has its origins in the concept of strategic fit, popularised by Tom Peters in the 1980s, who argued that congruence among seven elements—strategy, structure, systems, style, staff, shared values, and skills—is necessary for success (Peters & Waterman, 1982). Strategic fit is important because it leads to superior performance (Gietzmann & Selby, 1994). Defining "fit" is, however, difficult, as fit goes beyond knowing what needs to be aligned to include how alignment should be achieved. This led Venkatraman and Camillus (1984) to define fit as process (how to achieve fit) and content (what fit looks like). The importance of process is also highlighted by Reich and Benbasat (1996), who argue that two aspects need to be considered. They highlight the importance of understanding how the planning process itself can help achieve alignment (in the case of enterprise risk management, this would involve an examination of the enterprise risk strategy). They also take this further by suggesting the importance of looking at social relationships in the organisation.

The idea behind the argument that social relationships need to be looked at is that alignment is not only a strategic, logical process, but also a social process. Therefore, good communication between business and the function to be aligned (for instance, IT executives) is often quoted as necessary for strategic fit (Reich & Benbasat, 1996; Reich & Benbasat, 2000). Alignment is also thought to be easier to achieve if business executives have a good knowledge of the functional areas where alignment is sought (Hussin, King, & Cragg, 2002).


The First Link between Strategic Processes and Social Processes: Risk Perceptions

Information security has been implemented as a process in many organizations for almost two decades now. It follows a sequence of risk identification, risk classification (for instance in terms of impact and probability), and risk mitigation or avoidance. The approach has been at the basis of some of the most common information security best practice approaches such as the ISO 27000 series (ISO, 2005b) at a management system level, as well as the common criteria evaluation and validation scheme (CCEVS, 2005) at a lower technical level, since their inceptions. Whilst treating information security as a process is now seen as good practice, there have been many calls to ensure that the process should not be treated solely as a mechanistic one and should be capable of continuously adapting to its context. This approach is very "functionalist" (McFadzean, Ezingeard, & Birchall, 2004) and can easily be seen as lacking completeness because its comprehension of the context of risk is limited. For instance, both Beck (1992) and Baskerville (1991) argue that much work on risk analysis for information security is too functionalist. They suggest that practitioners have become over-reliant on predictive models for developing a secure information system, thus ignoring important issues such as employee understanding, motivation, and behaviour.

Adams (2005) outlines three types of risk: those that are perceived directly, those that are perceived through science, and virtual risk. He suggests that risks that are perceived directly are dealt with using judgment (this refers to risks such as crossing the road, for example). Virtual risks are culturally constructed because science is inconclusive, which means that "whom we believe depends on whom we trust." Those risks that are perceived through science are relatively objective in nature. Information security risk assessment has come from a scientific background and has worked on the assumption that information security risks can be perceived through hard science. It now seems the case that many of the facets of information security fall into the category of virtual risk, and if we are to address them from this perspective then we need a better understanding of how they are culturally constructed. There is therefore a need to "understand the relationships between human factors and risk and trust if a relatively secure cyberspace is to develop in the future" (OST, 2004).

In addition to the need to understand how risk is perceived because it can help employee motivation and behavior, and the need to understand how risk is culturally constructed, another reason why understanding how risk is perceived is important is the social complexity of risk itself. Willcocks and Margetts (1994) point out that recent research "supports generally the finding that the major risks and reasons for failure tend to be through organizational, social and political, rather than technical factors." Although this is referring to risk in the broad information system environment rather than information security specifically, the same assertion still applies. They go on to recommend that risk should be assessed as "a result of distinctive human and organizational practices and patterns of belief and action."

The Second Link between Strategic Processes and Social Processes: Risk Culture

Information security risk is only one category of risk organizations are exposed to, and many organizations find it difficult to align their IT risk management efforts with those of the rest of the organization in other areas such as financial or business continuity risks (Birchall, Ezingeard, Mcfadzean, Howlin, & Yoxall, 2004). Often this is because risk management strategies, and more specifically information security strategies, are not grounded in organizational values (Dhillon & Torkzadeh, 2006). Yet, legislative and regulatory


requirements—for instance, in the corporate governance arena, requiring organizations to think of information security within their overall risk management frameworks (ITGI, 2003)—make this a necessity. This means that not only do risk management processes need to be aligned across functional areas in the organization, but also that attitudes towards risk need to be aligned.

In order to address this need for alignment, Jahner and Krcmar (2005) propose a model of risk culture. The model has three dimensions (identify, communicate, and act). Whilst the "identify" and "act" dimensions are often clearly embedded in many information security processes, Jahner and Krcmar argue that an organization's information security efforts can only be successful if a shared understanding of possible threats is achieved and if a shared understanding of how to act consistently is reached. How people act in risk management is, according to Ciborra (2004), "intertwined in social processes and networks of relationships."

Whilst Jahner and Krcmar's model of risk culture is useful as a basis for understanding the social processes around risk in an organization, it does not discuss the importance of a shared understanding of the risk/reward equation in any of its three phases. Yet, this is likely to be crucial to the success of any risk management process. Whilst the IT risk management literature is often coy about making this explicit, the purpose of risk management is not solely the avoidance of risk to minimise losses, but in fact the need to take risks to reap rewards. The financial risk management community is by and large more explicit about this, since the risk/reward equation is one of the fundamental rules of business. As pointed out in the Turnbull report, "Since profits are, in part, the reward for successful risk-taking in business, the purpose of internal control is to help manage and control risk appropriately rather than to eliminate it" (Turnbull, 1999).

There is a growing body of literature that suggests that this risk/reward equation is an integral part of an organization's risk culture. For instance, according to Adams and Thompson (2002), the assessment of reward is a key aspect of the "risk thermostat" that is at play both at an institutional and individual level during risk assessment. In Adams' model, the "risk thermostat" includes perceptual filters (Adams, 1999) whose influence depends on the attitude of people to risk. Similarly, attitudes to risk have been found to have a significant impact on the way boards of directors address information security in their organisation (Ezingeard, Mcfadzean, & Birchall, 2003). We therefore need to augment Jahner and Krcmar's model of risk culture by adding assessment of reward and assessment of the risk/reward equation in the "identify" and "communicate" dimensions of risk culture.

A Methodology to Understand Risk Culture and Alignment

Three Dimensions

In order to understand the interactions and dependencies between risk management, perception, and culture, three dimensions need to be looked at: context, content, and process.

Context is about understanding the influence of four key communities on the enterprise risk strategy: customers, employees, owners, and competitors. In the case of for-profit firms, understanding owners' views is critical. The influence of the environment, including regulatory, political, and economic factors, also needs to be understood.

Content and process help us understand the two conceptual links we discussed earlier, namely the need to understand how risk perceptions and risk culture influence the alignment between enterprise risk management and business strategy.

The suggested framework is shown in Table 1. Populating the framework can be done from on-the-job experience, interviews (ideally including the operational management and governance


Table 1. Analysis framework


        | Enterprise Risk Management (ERM) Plans and Actions | Risk Perception | Risk Culture
Context | How the business context influences ERM | How the business context influences risk perceptions in the organisation | How the business context influences risk culture in the organisation
Content | What are the enterprise risk management mechanisms in place | How risk perceptions influence the ERM mechanisms in place (and vice versa) | How the risk culture influences ERM mechanisms (and vice versa)
Process | What are the processes in place to achieve and maintain alignment between business strategy and ERM | How risk perceptions impact on the alignment process | How risk culture impacts on the alignment process

executives), surveys, and examination of documentary evidence, such as:

• Policies
• Risk management spread-sheets
• Audit reports and audit recommendations

An Example

Company Background

LeCroy Corp. (Nasdaq LCRY, FY2006 Sales $U.S.168M) was founded by Walter O. LeCroy in 1964 in Irvington, New York. It operates in the test and measurement business, with the tag line "Innovators in Instrumentation." This illustrates a dilemma in so far as the business area the company works in is one where products must be trustworthy, and innovation must therefore not get in the way of an equally important reputation for stability and robustness. Consequently, whilst innovations are required and can be a significant source of competitive advantage, they cannot be allowed to be synonymous with surprises for the customer. Thus, instrumentation makers tend to test innovations heavily before introducing them into production. They are generally willing to spend heavily to avoid surprises. We can therefore, from the outset, categorise the organisation's strategic environment as "risk averse."

LeCroy's products are software intensive. Most are designed to be used connected to local area networks. It is therefore important that they should be patchable and upgradeable. When LeCroy's products began to be designed with embedded x86 architecture processors running Windows™ operating systems, a rigorous information security regimen became a requirement (Hirsch, 2005), in order to prevent malware contagion incidents that could affect the company, and possibly thereafter, its customers (Oshri, Kotlarsky, & Hirsch, 2005). At that time, the CEO chartered a new change initiative to elevate the information security culture. Two years later, when the security team had taken solid hold and the information security culture had clearly moved in the desired direction, the CEO further chartered a new supplemental change initiative to institute enterprise risk management at LeCroy. This is viewed as a completing element of the information security project.

Our example operates in a niche business area, characterised by complex products and few competitors. The two main competitors are much larger public companies. Instrumentation design and production is a high fixed cost business, hence there is a substantial advantage conferred by size. LeCroy must compete with these larger companies for relationships with customers, employees, and investors. LeCroy therefore has a strategy of fostering longer than average relationships with its partners in each of the mentioned three


communities. "No surprises" is an element of the strategy.

Context

The first aspect of LeCroy's information security and risk management programme is how it is influenced by its environment and business area. In particular, its policies and procedures are designed to enable enterprise management of risk, such that customers, employees, and owners experience a coherent risk profile. The key influences are represented in Table 2.

Influence of Context on Perceptions of Risk (and Risk Tolerance)

The context LeCroy operates in recognizes "controllable risks" as those for which the probability of occurrence can be viably decreased or increased based on management's decisions to invest or withhold investment in mitigation strategies. Examples of such risks include data loss or corruption. Conversely, "uncontrollable risks" are those for which the probability of occurrence cannot be changed by management action (although the impact of occurrence may be influenced). Examples of such risks include the arrival of an Avian Flu pandemic.

Most managers at LeCroy are intolerant of controllable risks. On the other hand, most managers are comfortable operating in a business environment and context where they know many risks are uncontrollable and only their consequences can be mitigated. For example, instrumentation makers must be one step ahead of their customers in terms of technology. If an oscilloscope is going to help a designer working on a 10gbit design, the oscilloscope itself must be significantly faster internally. Oscilloscope design activities therefore carry significant risk. Which technologies to "bet on?" Which vendors can supply needed components within the tight specifications required? One chipset (processor, memory) may offer a longer

Table 2. Key stakeholder influences


Customer
  Context: Long warranties and product support; easy and cheap software upgrades; minimized risk of malware contagion; information security policy is significantly influenced by the high software content of products.
  Implications: Low tolerance of risks that could influence customer relationships. Decision to implement ISO9000, receiving the first certification issued under the ISO9000:2000 program.
  Key Performance Indicators: Higher than typical values for customer retention and repurchase.

Employees
  Context: Employee benefits offerings are designed to reduce risks for employees; relatively comprehensive insurance coverage and support packages; facilities investments and procedures designed to help employees manage risk; health and safety policy based on halving exposure every year.
  Implications: Low tolerance of risks that could influence employee relationships. Risks to health and safety on the job are managed in a different paradigm than information security risks.
  Key Performance Indicators: Average length of service at LeCroy is 8 years, double the peer group average (2006).

Owners
  Context: Expanding number of institutional shareholders (2006).
  Implications: High tolerance of market risks. Management's strategy is to aggressively mitigate controllable risks, while managing the consequences of unavoidable risks.
  Key Performance Indicators: 8.2% of total shares outstanding are held by institutional holders with at least four quarters of ownership (2006).


period of stability while another may introduce the latest feature—which chipsets should be selected? Which development project is likely to succeed, and which is likely to fail?

Influence of Context on Risk Culture

LeCroy's early years were spent in the high-energy physics instrumentation market. This market had two main participant segments: academia and military. From an information security and risk management perspective, these segments presented a dichotomy. The bias for information sharing, typical of the "un-caged information" culture of the university, stood in stark contrast to the "need-to-know" information culture of the military and national research labs. For this reason, the information security culture at LeCroy is nuanced and complex. Traditionally, the collegial atmosphere at LeCroy had been characteristic of a relaxed information security culture with a bias toward knowledge management benefits obtained through easy and widespread access to information.

Content of the Risk Management Framework

The company bases its enterprise risk management methodology on a cycle of measurement and education. A significant element of the risk management framework is data driven—with the overarching philosophy that employees and managers are responsible and empowered to align their risk management decisions to the company risk management strategy, and only require data and understanding in order to carry this out.

A risk management team comprising executives, managers, and employees has been formed and charged with developing and implementing an enterprise risk management program. The programme differentiates between those risks for which a return on investment figure can be calculated should the company decide to mitigate the risk, and those risks for which a purely ROI basis for investment decision-making would be inappropriate (for instance, relating to employee health and safety).

For those risks where mitigation ROI can be calculated, LeCroy uses a spreadsheet whose key columns are labelled as shown in Table 3. Each of these factors figures into an algebraic expression whose value indicates an estimated ROI on mitigation, and a confidence level in the estimate. The spreadsheet gives management a first indication of which mitigation decisions to consider, based on expectation of financial return. This is well aligned to the company's willingness to spend to reduce uncertainty, its model of risk management.

All areas of risk are viewed as objective and treated in the same fashion, except those relating to the comfort, health, and safety of employees and visitors/partners of LeCroy. These health and safety risks are considered not suitable for a purely ROI based analytical approach, and instead are managed using an annual risk exposure halving process.

Table 3. Headings of objective risks spreadsheet

• Estimated Probability of FY08 Event in %, as of May 1, 2007
• Estimated Severity of Consequences of Event in $
• Estimated Seriousness of Threat (B*C)
• Confidence Level in Estimate (0 low; 1 high)
• Comments
• External Cost to Mitigate ($)
• Extent of Mitigation in %
• Expected ROI
• Action Plan
• Estimated Seriousness of Threat Following Action Plan

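The chapter describes the spreadsheet's algebraic expression only in outline. As a rough sketch of the kind of calculation these columns support, the following assumes seriousness is probability times severity and that the resulting ROI is screened against the hurdle rate mentioned in the "Formal Alignment Process" section below; the formulas, figures, and hurdle-rate value are illustrative assumptions, not LeCroy's actual model.

```python
# Illustrative sketch only: the exact expressions in LeCroy's spreadsheet
# are not published in the chapter, so these formulas are assumptions
# inferred from the column headings above.

def expected_seriousness(probability_pct, severity_usd):
    """Estimated seriousness of threat: probability of the event
    times the dollar severity of its consequences."""
    return (probability_pct / 100.0) * severity_usd

def mitigation_roi(probability_pct, severity_usd, mitigation_pct, cost_usd):
    """Expected ROI of a mitigation: exposure avoided, net of the
    external cost to mitigate, per dollar spent."""
    exposure = expected_seriousness(probability_pct, severity_usd)
    avoided = exposure * (mitigation_pct / 100.0)
    return (avoided - cost_usd) / cost_usd

# Invented example: a 10% chance of a $500,000 loss, 80% mitigable
# for $20,000, screened against a hypothetical 15% hurdle rate.
roi = mitigation_roi(10, 500_000, 80, 20_000)  # -> 1.0, i.e., 100%
HURDLE_RATE = 0.15
print(f"Expected ROI: {roi:.0%}; fund this year: {roi > HURDLE_RATE}")
```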

The key columns of the associated spreadsheet are shown in Table 4. The management process is similar to that for objective risk; however, the scales used and the algebraic expressions are changed. The company could not and does not want to assign a specific dollar value, for example, to an employee's or customer's injury avoidance. The philosophy applied in this area is one of continuous improvement, hence the goal is to halve the summation of exposure (multiplication product of columns 1 and 2) each year.

Influence of Risk Perceptions on the Risk Management Framework

As explained earlier, the basis of the risk management framework is numerical. This means that perceptions of probability, severity, and seriousness of threat, as well as costs to mitigate, inevitably influence the robustness of the framework and its ability to deliver strategic objectives. For instance, we asked members of the company's executive team how much they would be prepared to spend to halve the probability of:

• A 2 day building closure
• The loss of 2 days of BaaN (ERP) data
• Bodily injury to 2 employees
• The 2 most important LeCroy patents becoming invalidated
• A large bin of confidential documents intended for shredding being accidentally released into the insecure dumpster
• The Web site being attacked and defaced for 2 days
• A malware infestation of the network, with 200 infected products shipped to customers

The responses we got varied significantly. Interestingly, no significant pattern seemed to emerge based on the function of the respondent. When pressed for an explanation, it became apparent that the perceived severity of such events was the cause of the variation.

Influence of the Risk Culture on the Risk Management Framework

We have so far characterised the company's risk culture as one that prefers to give priority to knowledge sharing and collegiality, and one that historically had a "relaxed" attitude towards information security. Yet we have also described how the "risk thermostats" are set low for controllable risks and higher for uncontrollable risks. The need to resolve this apparent tension influences the risk management framework at two levels:

• Risk management structures: A high profile is given to risk management, with two committees (the information security team and the risk management team) dealing with risk company-wide. These teams meet regularly. The chief information officer sits on both teams. The teams regularly seek (and get) input from members of the company's executive team and annually from the board.
• A strong sense that the company's efforts towards risk avoidance, where made necessary by the market, are appropriate. This is illustrated, for instance, by the views of the

Table 4. Headings of the non-quantifiable risks spreadsheet

• Estimated Probability of FY08 Event in %, as of May 1, 2007
• Estimated Severity of Consequences of Event (1 low, 10 high)
• Estimated Seriousness of Threat (B*C)
• Confidence Level in Estimate (0 low; 1 high)
• External Cost to Mitigate
• Extent of Mitigation in %
• Action Plan
• Estimated Seriousness of Threat Following Mitigation

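To make the halving goal concrete, here is a minimal sketch of the exposure summation described above, computed as the sum over risks of probability times severity on the scales of Table 4; the risk entries are invented for illustration.

```python
# Invented example data: (probability of event in %, severity on the
# 1-10 scale), i.e., columns 1 and 2 of the spreadsheet. Exposure is
# the sum over all risks of probability times severity.

risks_last_year = [(20, 6), (10, 8), (5, 3)]
risks_this_year = [(10, 6), (5, 8), (2, 2)]

def total_exposure(risks):
    return sum(p / 100.0 * s for p, s in risks)

prev, curr = total_exposure(risks_last_year), total_exposure(risks_this_year)
print(f"exposure {prev:.2f} -> {curr:.2f}; halving goal met: {curr <= prev / 2}")
```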

sales force about whether LeCroy should be more risk tolerant than it is. Out of seven senior sales employees we questioned, only one thought LeCroy should be more risk tolerant, yet three described the company's culture as risk intolerant. Similarly, only one member of the executive team thought that LeCroy should be more risk tolerant.

Formal Alignment Process

It is assumed that each manager or employee, who was hired for their job expertise, is the most capable person to estimate the probability and consequences of unexpected outcomes in their area of activity (first two columns of the spreadsheet tool). However, attention is paid to alignment between the risk-management actions of individual managers and the company's desired risk profile. For information risks (generally viewed as not employee health or safety related), assessments are made of the likelihood of an unexpected outcome during the coming fiscal year, and of the expected cost should such an event occur. Whenever possible, this is done based on LeCroy or peer company data. Then alternative mitigation actions/strategies are listed, as are the extent of estimated mitigation for each. Costs are listed as well, and from these factors an estimated ROI can be computed. In general, for information related risk mitigation, strategies are selected using this method and the current year's hurdle rate is applied. The first alignment mechanism is therefore project finance.

The second routine alignment process in place in the company is the participation of the chief information officer in three key forums with a significant stake in the company's enterprise risk management: the information security team, the risk management team, and the executive team. This is seen to be an effective alignment mechanism in so far as both the information security team and the risk management team are responsible for overseeing all planning related to ERM.

This is supplemented by two other mechanisms, which, whilst not designed with the sole purpose of alignment in mind, are widely seen in the organisation as important vehicles for validating the alignment of the ERM strategy. The first such mechanism consists of board agenda items where the ERM strategy and its information security components are discussed. The second such mechanism is company-wide (driven by IS and finance) participation in debates and preparations for risk related audits (ISO, Sarbanes-Oxley).

The company does not generally screen recruitment candidates using risk tolerance filters. The company therefore expects that its employees and managers, in the absence of an enterprise risk management program, would represent a spectrum of individual risk cultures similar to the general population at large from which these groups are drawn. Therefore, the company seeks to actively define and communicate vocabulary, concepts, and methods in its risk management program that will allow functions as diverse as sales, facilities, marketing, production, logistics, finance, and engineering to achieve alignment in their approach to their diverse risk management tasks. These functions also need to be able to adjust risk management calibration quickly when company circumstances require an adjustment. In order to ensure that this is done in a fashion that accounts for the varying spectrums of risk perceptions, these perceptions are discussed regularly, as explained next.

Managing the Inter-Dependence between Risk Perceptions and ERM Alignment

Each year managers and selected employees fill in a risk profiling survey. These reflect a range of working groups, including the executive team, the security team, the risk management team, sales teams, the Board of Directors, and others. An example of a question asked in the survey is:

I would accept a business proposition that has n% chance of doubling LeCroy's size, enterprise value, and EPS over a one year period; however, it also carries a y% chance of bankrupting the company, with sets of [n,y] as follows:

[5%;95%], [25%;75%], [45%;55%], [50%;50%], [55%;45%], [75%;25%], [95%;5%]

The results for the executive team are shown in Figure 1, on a 1-5 scale (strongly agree = 1; strongly disagree = 5).

Figure 1. Example results of risk tolerance survey, interviewees 1-7, risk neutral boundary, and 3 work teams

Results are presented to various stakeholders: executive team, board, information security team, and risk management team. The resulting discussions are seen as a valuable mechanism to achieve a common understanding, and convergence. Each participant is given insight into their risk perception and tolerance characteristics, as well as those of the other members of their work group and of work groups adjacent in the value chain. Providing this periodic reminder of company vocabulary and methodology drives enterprise risk management behaviour toward convergence.
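On a pure expected-value reading, a proposition that doubles the company with probability n and bankrupts it with probability y = 1 - n is worth 2n times current value, so a risk-neutral respondent would switch from reject to accept at n = 50%, which is presumably the risk neutral boundary plotted in Figure 1; the chapter does not define the boundary explicitly, so this reading is an assumption. A minimal sketch:

```python
# Expected value of each survey proposition, in multiples of current
# enterprise value. Assumes a binary outcome (double with probability
# n, bankrupt with probability y); the risk-neutral reading is ours,
# not the chapter's.

pairs = [(5, 95), (25, 75), (45, 55), (50, 50), (55, 45), (75, 25), (95, 5)]

for n, y in pairs:
    ev = 2 * (n / 100.0)  # the bankruptcy outcome contributes zero value
    stance = "accept" if ev > 1 else "indifferent" if ev == 1 else "reject"
    print(f"[{n}%;{y}%]  EV = {ev:.2f}x  risk-neutral stance: {stance}")
```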


Managing the Influence of Culture on Alignment

We have so far highlighted the potential tensions between the low tolerance of customer and employee related risks and the collegiate, knowledge sharing culture. We have also highlighted the high tolerance for market related risks. Further culture-related complexity arises out of the confluence of all these daily risk management activities. Two key questions remain, therefore:

• Does each actor know what the overall enterprise risk objectives are at the time?
• Does each actor know the risk management practices (and potential biases) of the other actors up and down the value chain with whom they work?

In order to help all employees in the company answer these questions each year, the company conducts a Security Fair for all employees, up to and including the Board of Directors. The fair is comprised of five to eight booths, including at least one staffed by outside experts in the field. Each employee must take a test and/or sign a declaration at the end, establishing metrics for the company as to the state of "education" of its "human firewall." The human firewall is a stated part of the overall defense in depth strategy, summarized in the Security Mission Statement (see Figure 2) that is posted prominently at the company.

Top management further expresses its commitment to security through an annual facilities survey that captures employee concerns regarding physical safety and security. Investments such as upgraded outdoor lighting, traffic calming schemes, and security cameras have arisen from this process.

Unusual Events Offer a Chance to Validate the Process

In February 2007 an event occurred that offered an opportunity to validate the measuring tools and processes employed in LeCroy's ERM process. A subpoena was issued by the U.S. Securities and Exchange Commission (SEC) to LeCroy, as a witness in an action being brought against an organization charged with manipulative stock trading. Twelve firms' shares were alleged to have been improperly traded, based on advance knowledge of the contents of company press releases. LeCroy's shares, among the 12 firms, represented only a minute fraction of the total (total alleged improper profit U.S.D $18,000); however, the event presented a potential "wake up call" at the company. A thorough investigation in support of the SEC inquiry led management to the conclusion that the e-mail system of a business partner downstream in the public relations/press release processing chain had been compromised, and a copy of the company's FY2007 Q2 public press release had been viewed on the day prior to its release by a stock manipulator. Would such an event have the effect of significantly re-calibrating the risk tolerance responses of senior managers or the Board of Directors?

Following this event a new measurement round was conducted, with findings that the profiles of non-involved business managers and the board members were unchanged. This stability, despite a high profile event, supports the underlying validity of the measurement process.

Similar analysis following acquisitions and integrations (CATC and Catalyst in October 2004 and 2006 respectively) informed the authors that integrations increase the standard deviation of risk tolerance profiles in the successor organization.
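The chapter quantifies alignment through the standard deviation of risk judgments within each work group without showing the computation. A minimal sketch of how such a dispersion measure could be derived from the 1-5 survey responses; the team names and scores below are invented for illustration.

```python
# Invented 1-5 survey responses per work group; a lower standard
# deviation indicates tighter alignment of risk tolerance, and the
# chapter reports a wider spread in post-integration organizations.
from statistics import pstdev

responses = {
    "security team":          [2, 2, 3, 2, 2],
    "risk management team":   [2, 3, 2, 3, 2],
    "post-acquisition group": [1, 4, 2, 5, 3],
}

for team, scores in responses.items():
    print(f"{team}: standard deviation = {pstdev(scores):.2f}")
```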

Figure 2. Security mission statement

LeCroy’s most important assets are its employees and their knowledge. Protecting our assets preserves a
competitive advantage and helps us achieve our goals. Security risks introduced by individuals’ decisions affect the
entire LeCroy community, including visitors, vendors and customers.

It is the responsibility of everyone at LeCroy to use good judgment to continuously manage security risks in a
manner consistent with our business mission and culture. Alongside our security hardware, software and systems,
the employees of LeCroy act as a human firewall to reduce the likelihood and extent of loss or harm.


Conclusion

Business is an endeavour which rewards effective management of risk. Wise and consistent acceptance of risk may be rewarded, proportionally, with profit. As all managers are financial managers, people managers, and information security managers, so too do all managers manage risk. But should managers operate in organizations where each department makes diverse financial decisions? For example, if the CFO had arranged a line of credit at 5% pa, should a facilities manager be leasing copy machines at a discount rate of 22%? Or should a procurement team be paying a 15% premium for a JIT inventory consignment scheme? Likewise, in the area of risk management, alignment and convergence pay large dividends and enable clean execution of strategy.

The case of LeCroy offers an illustration of the effective use of mixed formal and informal ERM culture alignment mechanisms, ranging from committee structures to security fairs, surveys, and spreadsheet tools. The methodology is partly data-driven and partly a qualitative cycle of education and training. An interesting aspect of the methodology is that it encourages discussion to bring about a shared understanding of the appetite for risk of the organization. Recent work on aligning information assurance with business strategy (Birchall et al., 2004) has shown that an essential element of alignment is communication between the stakeholders and managers accountable for information assurance in the organisation. The case presented here suggests that this communication around risk and risk perceptions can be an important component of ensuring that alignment is achieved. This need for communication is implemented through a variety of mechanisms that encourage alignment (rather than prescribe it).

Communication regarding ERM with business partners offers tangible rewards. Insurance premiums, for example, at LeCroy have declined in each of the 6 years since the CEO's information security and subsequent risk management charter was issued. During the negotiations with insurance brokers and underwriters, ERM events and progress of the prior year are brought to bear, with a resulting premium savings over the life of the initiative, so far, exceeding $1M. In the 2008 annual renewal cycle, there have been no (zero) insurance claims at LeCroy, a remarkable outcome for an organization of 500 employees in a dozen countries shipping thousands of units each quarter.

The massive power outage in the North Eastern USA during August 2003 did not bring LeCroy systems off the air, and LeCroy has enjoyed the most favourable Sarbanes-Oxley opinions each year. These and other favourable outcomes are attributed by the management team in part to the ERM and information security frameworks at the company.

The case raises interesting questions about the link between enterprise risk management and other forms of risk management in the company. At LeCroy, three committees have an important risk management function: the executive team, the risk management committee, and the information security committee. Because the risk management committee and the information security committee are at the same organizational level, this raises possibilities of duplication of activity between the two committees, and questions of accountability. Furthermore, the recommendations of the two committees may potentially overlap. There is therefore the need for coordination between them, as well as appropriate oversight by the executive team. At LeCroy, this is achieved by the role of the CIO (who is also a member of the executive team).

NOTE

An earlier version of this paper, entitled "Perceptual and Cultural Aspects of Risk Management Alignment: A Case Study," appears in the Journal of Information Security, 4(1), 2008.


References

Adams, J. (1999). Risk-benefit analysis: Who wants it? Who needs it? Paper presented at the Cost-Benefit Analysis Conference, Yale University.

Adams, J. (2005). Risk management, it's not rocket science: It's more complicated. Draft paper available from http://www.geog.ucl.ac.uk/~jadams/publish.htm

Adams, J., & Thompson, M. (2002). Taking account of societal concerns about risk: Framing the problem (Research Rep. 035). London: Health and Safety Executive.

Ashenden, D., & Ezingeard, J.-N. (2005). The need for a sociological approach to information security risk management. Paper presented at the 4th Annual Security Conference, Las Vegas, Nevada, USA.

Backhouse, J., & Dhillon, G. (1996). Structures of responsibility and security of information systems. European Journal of Information Systems, 5, 2-9.

Baskerville, R. (1991). Risk analysis: An interpretive feasibility tool in justifying information systems security. European Journal of Information Systems, 1(2), 121-130.

Beck, U. (1992). Risk society. London: Sage Publishers.

Birchall, D., Ezingeard, J.-N., Mcfadzean, E., Howlin, N., & Yoxall, D. (2004). Information assurance: Strategic alignment and competitive advantage. London: GRIST.

CCEVS. (2005). Common criteria—Part 1: Introduction and general model (Draft v3.0, Rev 2). Common Criteria Evaluation and Validation Scheme.

Ciborra, C. (2004). Digital technologies and the duality of risk (Discussion paper). Centre for Analysis of Risk and Regulation, London School of Economics.

Dhillon, G., & Backhouse, J. (2001). Current directions in IS security research: Toward socio-organizational perspectives. Information Systems Journal, 11, 127-153.

Dhillon, G., & Torkzadeh, G. (2006). Value-focused assessment of information system security in organizations. Information Systems Journal, 16, 293-314.

Ezingeard, J.-N., Mcfadzean, E., & Birchall, D. W. (2003). Board of directors and information security: A perception grid. In S. Parkinson & J. Stutt (Eds.), paper 222 presented at the British Academy of Management Conference, Harrogate.

Ezingeard, J.-N., Mcfadzean, E., Howlin, N., Ashenden, D., & Birchall, D. (2004). Mastering alignment: Bringing information assurance and corporate strategy together. Paper presented at the European and Mediterranean Conference on Information Systems, Carthage.

Gietzmann, M. B., & Selby, M. J. P. (1994). Assessment of innovative software technology: Developing an end-user-initiated interface design strategy. Technology Analysis & Strategic Management, 6, 473-483.

Hirsch, C. (2005). Do not ship trojan horses. In P. Dowland, S. Furnell, & B. Thuraisingham (Eds.), Security management, integrity, and internal control in information systems. Fairfax, VA: Springer.

Hussin, H., King, M., & Cragg, P. (2002). IT alignment in small firms. European Journal of Information Systems, 11, 108-127.

ISO. (2005a). Information technology—security techniques—code of practice for information security management (ISO/IEC 17799:2005). London: BSI.

ISO. (2005b). Information technology—security techniques—information security management systems—requirements (ISO/IEC 27001:2005(E)). London: BSI.

ITGI. (2003). IT control objectives for Sarbanes-Oxley. Rolling Meadows, IL: IT Governance Institute.

ITGI. (2005). COBIT 4.0: Control objectives and management guidelines. Rolling Meadows, IL: Information Technology Governance Institute.

Jahner, S., & Krcmar, H. (2005). Beyond technical aspects of information security: Risk culture as a success factor for IT risk management. Paper presented at the Americas Conference on Information Systems 2005.

Loch, K. D., Carr, H. H., & Warkentin, M. E. (1992). Threats to information systems: Today's reality, yesterday's understanding. MIS Quarterly, 16, 173.

Mcfadzean, E., Ezingeard, J.-N., & Birchall, D. (2004). Anchoring information security governance research. In G. Dhillon & S. Furnell (Eds.), Proceedings of the Third Security Conference, Las Vegas, Nevada, USA.

Oshri, I., Kotlarsky, J., & Hirsch, C. (2005). Security in networkable windows-based operating system devices. Paper presented at the Softwares Conference, Las Vegas, Nevada, USA.

OST. (2004). Cyber trust and crime prevention. London: Office of Science & Technology—UK Department of Trade and Industry, HMSO.

Peters, T. J., & Waterman, R. H. (1982). In search of excellence: Lessons from America's best run companies. New York: Harper and Row.

Reich, B. H., & Benbasat, I. (1996). Measuring the linkage between business and information technology objectives. MIS Quarterly, 20, 55-81.

Reich, B. H., & Benbasat, I. (2000). Factors that influence the social dimension of alignment between business and information technology objectives. MIS Quarterly, 24, 81-113.

Turnbull, N. (1999). Internal control: Guidance for directors on the combined code: The Turnbull report. London: The Institute of Chartered Accountants in England & Wales.

Venkatraman, N., & Camillus, J. C. (1984). Exploring the concept of 'fit' in strategic management. Academy of Management Review, 9, 513-525.

Whitman, M. E., & Mattord, H. J. (2003). Principles of information security. Boston, London: Thomson Course Technology.

Willcocks, L., & Margetts, H. (1994). Risk assessment and information systems. European Journal of Information Systems, 3, 127-138.


Chapter XVIII
Security Requirements
Elicitation:
An Agenda for Acquisition
of Human Factors
Manish Gupta
State University of New York, Buffalo, USA

Raj Sharman
State University of New York, Buffalo, USA

Lawrence Sanders
State University of New York, Buffalo, USA

Abstract

Information security is becoming increasingly important and more complex as organizations increas-
ingly adopt electronic channels for managing and conducting business. However, state-of-the-art
systems design methods have ignored several aspects of security that arise from human involvement or
due to human factors. The chapter aims to highlight issues arising from the coalescence of the fields of
systems requirements elicitation, information security, and human factors. The objective of the chapter
is to investigate and suggest an agenda for the state of human factors in information assurance require-
ments elicitation from the perspectives of both organizations and researchers. Much research has been
done in the area of requirements elicitation, both systems and security, but, invariably, human factors
have not been taken into account during information assurance requirements elicitation. The chapter
aims to find clues and insights into the acquisition behavior of human factors in information assurance
requirements elicitation, to illustrate the current state of affairs in information assurance and requirements
elicitation, and to show why the inclusion of human factors is required.


Introduction

In the last few years, information security has attained a very important position in organizations and personal lives. A decade ago, it was very uncommon for colleges to offer any course in information security, privacy, or information assurance. In 2005, the U.S. National Security Agency certified 67 academic institutions as Centers of Academic Excellence in Information Assurance Education, which evidently underscores the importance of security of information in everyone's lives as corporate citizens and as individuals. However, this has given rise to another interesting assumption: that computer security is primarily a technical subject. This ignores the fact that computer security's technical aspects are only as effective as the people designing, using, attacking, and protecting information systems. People are the cornerstone of information security and privacy. Security solutions that fail to take human factors into account are not going to be effective in protecting information systems or providing any assurance thereof. Schwartz (2005) quoted a survey finding that "89% of respondents believe major security breaches have been reduced as a result of IT security training and certification." According to the survey, the perceived benefits of training include "improved potential risk identification, increased awareness, improved security measures, and an ability to respond more rapidly to problems."

The chapter aims to investigate and suggest an agenda for the state of human factors in information assurance requirements elicitation from the perspectives of both organizations and researchers. For any project or information system implementation, requirements elicitation is one of the most important steps. Information security requirements have long been introduced as a vital component of overall requirements elicitation. Much research has been done in that area, as is also discussed in a following section. But, invariably, human factors have not been taken into account during information assurance requirements elicitation. The chapter aims to find clues and insights into the acquisition behavior of human factors in information assurance requirements elicitation, and to illustrate the current state of affairs and the importance of human factors in information assurance and requirements elicitation. This chapter, based on a survey and synthesis of existing literature, aims to bring out the current state of affairs in that area, and also suggests why this is vitally critical for the success of information systems usage, more so in light of the growing exploitation of human factors to manipulate and invalidate information systems.

Figure 1. Research coverage and areas

Systems and Security Requirements Elicitation: Human Factors

More often than not, it is becoming increasingly evident that the weakest links in an information-security chain are the people, because human nature and social interactions are much easier to manipulate than the complex technological protections of information systems. Concerns and threats regarding human and social factors in organizational security are increasing at an exponential rate and shifting the information security paradigm.

tion security paradigm. The growing number of understanding of security and increases their
instances of breaches in information security in motivation to comply with policies. Furthermore,
the last few years has created a compelling case the assessments of completeness, accuracy,
for efforts towards secure electronic systems. and reliability of the requirements hinge on the
Security has been the subject of intensive re- analyst’s ability to conduct effective inquiry
search in the areas of cryptography, hardware, (Waldron, 1986). Experimental results indicate
and networking. However, despite these efforts, that humans do not balance information costs and
designers often understand security as only the benefits well (Connolly & Gilani, 1982; Connolly
hardware or software implementation of specific & Thorn, 1987).
cryptographic algorithms and security proto- Analysts select a particular elicitation tech-
cols. However, human factors are as important. nique for any combination of four reasons (Hickey
Their non-functional nature imposes complex & Davis, 2004): (1) it is the only technique that
constraints on the emergent behavior of soft- the analyst knows; (2) it is the analyst’s favorite
ware-intensive systems, making them hard to technique for all situations; (3) the analyst is
understand, predict, and control. Figure 1 shows following some explicit methodology, and that
information systems requirements that have been methodology prescribes a particular technique at
amply researched, including human elements of the particular time; and (4) the analyst understands
it, though the concentric circle representing the intuitively that the technique is effective in the
information assurance requirements with the current circumstances (Hickey & Davis, 2004).
overlap with circle representing human factors Clearly, the fourth reason demonstrates the most
is understudied. sophistication by the analyst. We hypothesize that
Requirements engineering, a vital component such maturity leads to improved understanding
in successful project development, often neglects of stakeholders’ needs, and thus a higher likeli-
sufficient attention to security concerns. Further, hood that a resulting system will satisfy those
industry lacks a useful model for incorporating needs. Unfortunately, novice and expert systems
security requirements into project development. analysts differ significantly in ability and maturity
Studies show that upfront attention to security (Schenk, 1998). Most practicing analysts simply
saves the economy billions of dollars. Industry do not have the insight necessary to make such
is thus in need of a model to examine security an informed decision, and therefore rely on one
and quality requirements in the development of the first three reasons. Note that the elicitation
stages of the production lifecycle. Traditionally,
security requirements have been derived in an
ad hoc manner. Recently, commercial software Figure 2. The IS design motivations and com-
development organizations have been looking for ponents
ways to produce effective security requirements.
It is generally accepted that early determination of
the stakeholder requirements assists in the devel-
opment of systems that better meet the needs of
those stakeholders. General security requirements
defeat this goal because it is difficult to determine
how they affect the functional requirements of the
system. A benefit of considering human factors
is also that the involvement of stakeholders in
the high-level security analysis improves their
Figure 2. The IS design motivations and
components

Note that the elicitation technique selection process is driven by problem, solution, and project domain characteristics, as well as the state of the requirements. Figure 2 lays out the different processes and motivations behind system design. The research focus arrow, shown as a question mark, is the primary focus of this study.

With the abundance of confidential information that organizations must protect, and with consumer fraud and identity theft at an all time high, security has never been as important as it is today for businesses and individuals alike. An attacker can bypass millions of dollars invested in technical and non-technical protection mechanisms by exploiting the human and social aspects of information security. While information systems deal with human interactions and communications through use of technology, it is often difficult to separate the human elements from the technological ones. In this chapter, we investigate the phenomenon of under-acquisition of human factors during information assurance requirements elicitation. The importance of this study can be summarized in the following salient points:

• The weakest links in an information-security chain are the people and social networks
• Human nature and social interactions are much easier to manipulate than the complex technological protections of information systems
• Security has been the subject of intensive research in the areas of technology and regulations
• While information systems deal with human interactions and communications through use of technology, it is impossible to separate the human elements from the technological ones
• Requirements engineering, a vital component in successful project development, often neglects sufficient attention to security concerns
• Industry lacks a useful model for incorporating security requirements into project development
• Upfront attention to security saves the economy billions of dollars
• Traditionally, security requirements have been derived in an ad hoc manner
• A benefit of considering human factors is also that the involvement of stakeholders in the high-level security analysis improves their understanding of security, and increases their motivation to comply with policies
• Insider threats are the greatest threat to individual and corporate privacy
• Insecure work practices and low security motivation have been identified by research on information security as major problems that must be addressed

Background and Discussions

Significant research has been done in the area of understanding and evaluating security requirements of information systems and processes from technical and procedural standpoints. No research, to the best of our knowledge, based on an extensive literature survey, has focused on the implications and role of human elements during requirements elicitation. While the importance of information security is growing at an exponential rate in the context of information systems and the artifacts thereof, understanding the role of human factors in information security is equally critical.

Some of the existing research on security requirements based on technical and functional requirements of systems is discussed next. But none of it explicitly takes into account human factors during security requirements elicitation. The common criteria for information technology security evaluation (CCITSE), usually referred to as the common criteria (CC), establishes a level of trustworthiness and confidence that should be placed in the security functions of products or

319
Security Requirements Elicitation

CC achieves this by evaluating a product's or system's conformance with a common set of requirements that it sets forth. Establishing secure systems assurance based on certification and accreditation (C&A) activities requires effective ways to understand the enforced security requirements, gather relevant evidence, perceive related risks in the operational environment, and reveal their causal relationships with other domain concepts (Lee, Gandhi, Muthurajan, Yavagal, & Ahn, 2006). Haley et al. (2006) present a framework for security requirements elicitation and analysis based upon the construction of a context for the system and satisfaction arguments for the security of the system. Mead and Stehney (2005) examine a methodology for both eliciting and prioritizing security requirements on a development project within an organization, based on a model developed by the Software Engineering Institute's networked systems survivability (NSS) program. Haley, Laney, and Bashar (2004) illustrate how representing threats as crosscutting concerns aids in determining the effect of security requirements on the functional requirements. A lot of attention has been devoted to metrics focusing on the operational security of deployed systems, analyzing defect rates, known and un-patched vulnerabilities, and the configuration of systems (Dacier, 1994; Noel, Jajodia, O'Berry, & Jacobs, 2003). The TCSEC (TCSEC, 1985) ranks systems based on the number, scope, and granularity of their security mechanisms. The activities and documents of the common criteria are tightly intertwined with system development, which improves the quality of the developed system and reduces the additional cost and effort due to high security requirements (Vetterling, Wimmel, & Wisspeintner, 2002). Gürses et al. (2005) report on their experiences eliciting confidentiality requirements in a real-world project in the health care area. Flechais, Sasse, and Hailes (2003) argue that current development practice suffers from two key problems: (1) security requirements tend to be kept separate from other system requirements and are not integrated into any overall strategy, and (2) the impact of security measures on users, and the operational cost of these measures on a day-to-day basis, are usually not considered.

Since software project failures are so rampant (Standish Group, 1995), it is quite likely that improving how the industry performs elicitation could have a dramatic effect on its success record (Hofmann & Lehner, 2001). Improving requirements elicitation requires us to first understand it (Hickey & Davis, 2004). Although many papers have been written that define elicitation or prescribe a specific technique to perform during elicitation, none has yet defined a unified model of the elicitation process that emphasizes the knowledge needed to effectively elicit requirements from stakeholders. A better understanding of elicitation, and of the factors that should be considered during elicitation technique selection, will improve the quality of the requirements elicitation process and, ultimately, increase the success of software development projects (Hickey & Davis, 2002). Most requirements models include a requirements elicitation activity, either as a separate activity or as part of another requirements activity (Hickey & Davis, 2004).

Requirements elicitation is all about determining the needs of stakeholders. Most models of requirements elicitation focus on specific methodologies or techniques. Several researchers (Holbrook, 1990; Hsia et al., 1994) have developed specific models that define how to use scenarios for requirements elicitation. The state of the requirements drives technique selection because techniques vary significantly in their ability to elicit different types of requirements (Gottesdiener, 2002; Davis, 1993; Davis, 1998; Lauesen, 2002). The focus of this chapter is to highlight these techniques and investigate to what extent human factors are considered during information assurance requirements elicitation.
Generally, decision-makers fall victim to two types of acquisition errors: over-acquiring and under-acquiring (Pitts & Browne, 2004). In information requirements determination (IRD), over-acquisition involves the gathering of more information than is needed and results in wasted time and resources in the elicitation and analysis of requirements (Pitts & Browne, 2004). Under-acquisition, on the other hand, generates an incomplete view of the goals and functionality of the proposed system, leading to potential design problems, iterative redesign, implementation and maintenance difficulties, and possible system failure (Pitts & Browne, 2004).

There have been several studies on design methods for creating secure information systems. Some of them have utilized sociological and philosophical tenets of design paradigms (Baskerville, 1988, 1992; Dhillon & Backhouse, 2001; Siponen, 2005a; Siponen, 2005b). However, a direct link that specifically highlights the importance of human factors is lacking. There are several requirements elicitation methodologies, employed by organizations as their needs dictate, that propose secure information systems design. Some security design methods emphasize conceptual-level modeling, focusing on system features and the environment; these methods tend to follow a strict structure for every system and use constraints as a basic tenet (Ellmer, Pernul, & Kappel, 1995; Pernul, 1992). Other models view the key design entities, including humans, as organization-oriented. The business process paradigm consists of a secure information system design method suggested by Herrmann and Pernul (1998) and the fair and secure electronic markets method proposed by Röhm and Pernul (2000); common to these methods is an attempt to construct a modeling notation for describing security constraints in business process models (Siponen & Heikka, 2007). Siponen and Heikka's main finding was that secure information systems design methods provide modeling support only from particular perspectives, and therefore lack the comprehensiveness that may be needed in IS development. As the design methods do not offer comprehensive modeling support, practitioners need to combine several design methods (Siponen & Heikka, 2007). As a result, future SIS design methods should provide modeling support at the organizational, conceptual, and technical levels (Siponen & Heikka, 2007).

One paradigm analyzed by Siponen and Heikka (2007) was the responsibility modeling paradigm, which includes secure information system design methods representing the view that IS security requirements can be found by scrutinizing job responsibilities in organizations. Siponen and Heikka (2007) suggest that the methods within this paradigm include a method by Strens and Dobson (1993), the semantic responsibility analysis method suggested by Backhouse and Dhillon (1996), the abuse case method suggested by McDermott and Fox (1999) and McDermott (2001), the misuse case method of Sindre and Opdahl (2005), and the task-based authorization method suggested by Thomas and Sandhu (1994). Strens and Dobson (1993) state that the context in which IS security requirements arise is poorly understood, and that understanding users' responsibilities is the way to solve this problem. Several methods are in use today, selected on the basis of specific methodology instructions or motivations. There is a severe lack of empirical and analytical research on the effects and influences of the presence or absence of human factors considerations in secure information systems design.

Conclusion and Future Directions

Studies have consistently found that a greater percentage of security defects in e-business applications is due to design-related flaws, which could be detected and corrected during application development. Traditional methods of managing security vulnerabilities have often been ad hoc
and inadequate, and most of the time human factors are not explicitly considered. The designs are thus exposed to errors arising from two major facets: (1) the humans designing the system, and (2) the ones using it. A recent approach that includes the understanding and incorporation of human and social factors promises to induce more effective security requirements as part of the application development cycle.

Security requirements engineering for effective information systems is mainly concerned with methods for providing efficient systems that are economical and deliver operationally effective protection from undesirable events (Lane, 1985); as Anderson (2001) claims, security engineering is about building systems that remain dependable in the face of malice, error, or mischance. It has been widely accepted that, in order to effectively design secure systems, it is necessary to integrate security engineering principles into development techniques and to introduce a development methodology that considers security as an integral part of the whole development process (Devanbu & Stubblebine, 2000; Michailova, Doche, & Butler, 2002; van Lamsweerde, 2004; Viega & McGraw, 2004). However, there is limited practice of secure application development, and a lack of research investigating the interaction of human and social actors with the system and the resulting influences. Security requirements engineering should provide techniques, methods, and standards for incorporating all the agents and actors, while using repeatable and systematic procedures to ensure that the set of requirements obtained is complete, consistent, easy to understand, and analyzable by the different actors involved in the development of the system (Yu, 1995). Research is needed into employing the concepts of actors, goals, tasks, resources, and social dependencies for defining the interaction among actors and their intentions and motivations for interacting with the system, or with the organization where the system is deployed. It is therefore highly critical for organizations and researchers to emphasize the role and importance of human factors in security requirements gathering. Many companies, under competitive pressure to turn out applications with new features in a short time, relegate security to a lower priority, shortening the development time (Viega, Kohno, & Potter, 2001; Viega & McGraw, 2002; Mead & Stehney, 2005). Facilitating conditions (e.g., organizational support and availability of resources) can have a positive influence on behavioral intention (Venkatesh, 2000).

References

Anderson, R. (2001). Security engineering: A guide to building dependable distributed systems. Wiley Computer Publishing.

Backhouse, J., & Dhillon, G. (1996). Structures of responsibilities and security of information systems. European Journal of Information Systems, 5(1), 2-10.

Baskerville, R. (1988). Designing information systems security. John Wiley Information Systems.

Baskerville, R. (1992). The developmental duality of information systems security. Journal of Management Systems, 4(1), 1-12.

Connolly, T., & Gilani, N. (1982). Information search in judgment tasks: A regression model and some preliminary findings. Organizational Behavior and Human Decision Processes, 30(3), 330-350.

Connolly, T., & Thorn, B. K. (1987). Predecisional information acquisition: Effects of task variables on suboptimal search strategies. Organizational Behavior and Human Decision Processes, 39(3), 397-416.
Dacier, M. (1994). Vers une évaluation quantitative de la sécurité informatique. PhD thesis, Institut National Polytechnique de Toulouse.

Davis, A. (1993). Software requirements: Objects, functions and states. Upper Saddle River, NJ: Prentice Hall.

Davis, A. (1998). A comparison of techniques for the specification of external system behavior. Communications of the ACM, 31(9), 1098-1115.

Devanbu, P., & Stubblebine, S. (2000). Software engineering for security: A roadmap. In Proceedings of the Conference on the Future of Software Engineering.

Dhillon, G., & Backhouse, J. (2001). Current directions in IS security research: Toward socio-organizational perspectives. Information Systems Journal, 11(2).

Ellmer, E., Pernul, G., & Kappel, G. (1995). Object-oriented modeling of security semantics. In Proceedings of the 11th Annual Computer Security Applications Conference (ACSAC'95).

Flechais, I., Sasse, M. A., & Hailes, S. M. V. (2003, August). Bringing security home: A process for developing secure and usable systems. In Proceedings of the 2003 Workshop on New Security Paradigms.

Gottesdiener, E. (2002). Requirements by collaboration. Boston: Addison-Wesley.

Gürses, S., Jahnke, J. H., Obry, C., Onabajo, A., Santen, T., & Price, M. (2005, October). Eliciting confidentiality requirements in practice. In Proceedings of the 2005 Conference of the Centre for Advanced Studies on Collaborative Research.

Haley, C. B., Laney, R., & Nuseibeh, B. (2004, March). Deriving security requirements from crosscutting threat descriptions. In Proceedings of the 3rd International Conference on Aspect-Oriented Software Development.

Haley, C. B., Moffett, J. D., Laney, R., & Nuseibeh, B. (2006, May). A framework for security requirements engineering. In Proceedings of the 2006 International Workshop on Software Engineering for Secure Systems (SESS '06).

Herrmann, G., & Pernul, G. (1998). Towards security semantics in workflow management. In Proceedings of the 31st Hawaii International Conference on Systems Sciences.

Hickey, A., & Davis, A. (2002). The role of requirements elicitation techniques in achieving software quality. In C. Salinesi, B. Regnell, & K. Pohl (Eds.), Proceedings of the Requirements Engineering Workshop: Foundations for Software Quality (REFSQ '02) (pp. 165-171). Essen, Germany: Essener Informatik Beiträge.

Hickey, A. M., & Davis, A. M. (2004). A unified model of requirements elicitation. Journal of Management Information Systems, 20(4), 65-84.

Hofmann, H., & Lehner, F. (2001). Requirements engineering as a success factor in software projects. IEEE Software, 18(4), 58-66.

Holbrook, H. (1990). A scenario-based methodology for conducting requirements elicitation. ACM SIGSOFT Software Engineering Notes, 15(1), 95-104.

Hsia, P., Samuel, J., Gao, J., Kung, D., Toyoshima, Y., & Chen, C. (1994). Formal approach to scenario analysis. IEEE Software, 11(2), 33-41.

Lane, V. P. (1985). Security of computer based information systems. Macmillan Education Ltd.

Lauesen, S. (2002). Software requirements: Styles and techniques. London: Addison-Wesley.

Lee, S.-W., Gandhi, R., Muthurajan, D., Yavagal, D., & Ahn, G.-J. (2006, May). Building problem domain ontology from security requirements in regulatory documents. In Proceedings of the 2006 International Workshop on Software Engineering for Secure Systems (SESS '06).

McDermott, J. (2001). Abuse-case-based assurance arguments. In Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC).
McDermott, J., & Fox, C. (1999). Using abuse case models for security requirements. In Proceedings of the 15th Annual Computer Security Applications Conference (ACSAC).

Mead, N., & Stehney, T. (2005a). Security quality requirements engineering (SQUARE) methodology. ACM SIGSOFT Software Engineering Notes, 30(4), 1-7.

Mead, N. R., & Stehney, T. (2005b). Software engineering for secure systems (SESS), building trustworthy applications: Security quality requirements engineering (SQUARE) methodology. In Proceedings of the 2005 Workshop on Software Engineering for Secure Systems (SESS '05), ACM SIGSOFT Software Engineering Notes, 30(4).

Michailova, A., Doche, M., & Butler, M. (2002). Constraints for scenario-based testing of object-oriented programs (Technical Report). Electronics and Computer Science Department, University of Southampton.

Noel, S., Jajodia, S., O'Berry, B., & Jacobs, M. (2003). Efficient, minimum-cost network hardening via exploit dependency graphs. In Proceedings of the 19th Annual Computer Security Applications Conference (pp. 86-95). IEEE Computer Society.

Pernul, G. (1992). Security constraint processing during multilevel secure database design. In Proceedings of the 8th Annual Computer Security Applications Conference.

Pitts, M. G., & Browne, G. J. (2004). Stopping behavior of systems analysts during information requirements elicitation. Journal of Management Information Systems, 21(1), 203-226.

Röhm, A. W., & Pernul, G. (2000). COPS: A model and infrastructure for secure and fair electronic markets. Decision Support Systems, 29(4), 434-455.

Schenk, K., Vitalari, N., & Davis, K. (1998). Differences between novice and expert systems analysts: What do we know and what do we do? Journal of Management Information Systems, 15(Summer), 9-50.

Schwartz, M. (2005). Organizations neglect human factors in security. Retrieved from http://www.itcinstitute.com/display.aspx?id=363

Sindre, G., & Opdahl, A. (2005). Eliciting security requirements with misuse cases. Requirements Engineering, 10(1), 34-44.

Siponen, M. (2005a). Analysis of modern IS security development approaches: Towards the next generation of social and adaptable ISS methods. Information and Organization, 15(4), 339-375.

Siponen, M. (2005b). An analysis of the traditional IS security approaches: Implications for research and practice. European Journal of Information Systems, 14(3), 303-315.

Siponen, M., & Heikka, J. (2007). Do secure information system design methods provide comprehensive modeling support? Information and Software Technology. doi:10.1016/j.infsof.2007.10.011

Standish Group. (1995). The CHAOS report. West Yarmouth, MA. Available at www.standishgroup.com

Strens, R., & Dobson, J. (1993). How responsibility modeling leads to security requirements. In Proceedings of the 1992 & 1993 ACM SIGCAS New Security Paradigm Workshop.

TCSEC. (1985). Department of Defense trusted computer system evaluation criteria (TCSEC: DoD 5200.28-STD). Department of Defense.

Thomas, R. K., & Sandhu, R. S. (1994). Conceptual foundations for a model of task-based authorizations. In Proceedings of the 7th IEEE Computer Security Foundations Workshop.

van Lamsweerde, A. (2004). Elaborating security requirements by construction of intentional anti-models. In Proceedings of the International Conference on Software Engineering (pp. 148-157).
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4), 342-365.

Vetterling, M., Wimmel, G., & Wisspeintner, A. (2002, November). Secure systems development based on the common criteria: The PalME project. In Proceedings of the 10th ACM SIGSOFT Symposium on Foundations of Software Engineering.

Viega, J., Kohno, T., & Potter, B. (2001). Trust and mistrust in secure applications. Communications of the ACM, 44(2), 31-36.

Viega, J., & McGraw, G. (2002). Building secure software. Boston: Addison-Wesley.

Viega, J., & McGraw, G. (2004). Building secure software: How to avoid security problems the right way. Reading, MA: Addison-Wesley.

Waldron, V. R. (1986). Interviewing for knowledge. IEEE Transactions on Professional Communications, PC29(2), 31-34.

Yavagal, D. S., Lee, S.-W., Ahn, G.-J., & Gandhi, R. A. (2005, March). Common criteria requirements modeling and its uses for quality of information assurance (QoIA). In Proceedings of the 43rd Annual Southeast Regional Conference (ACM-SE 43) (Vol. 2).

Yu, E. (1995). Modelling strategic relationships for process reengineering. PhD thesis, Department of Computer Science, University of Toronto, Canada.
Chapter XIX
Do Information Security Policies
Reduce the Incidence of
Security Breaches:
An Exploratory Analysis

Neil F. Doherty
Loughborough University, UK

Heather Fulford
Loughborough University, UK

Abstract

Information is a critical corporate asset that has become increasingly vulnerable to attacks from viruses,
hackers, criminals, and human error. Consequently, organizations are having to prioritize the security
of their computer systems in order to ensure that their information assets retain their accuracy, confi-
dentiality, and availability. While the importance of the information security policy (InSPy) in ensuring
the security of information is acknowledged widely, to date there has been little empirical analysis of its
impact or effectiveness in this role. To help fill this gap, an exploratory study was initiated that sought
to investigate the relationship between the uptake and application of information security policies and
the accompanying levels of security breaches. To this end, a questionnaire was designed, validated, and
then targeted at IT managers within large organizations in the UK. The findings presented in this chapter
are somewhat surprising, as they show no statistically significant relationships between the adoption of
information security policies and the incidence or severity of security breaches. The chapter concludes
by exploring the possible interpretations of this unexpected finding and its implications for the practice
of information security management.

INTRODUCTION

It has been claimed that "information is the firm's primary strategic asset" (Glazer, 1993), as it is the critical element in strategic planning and decision making as well as day-to-day operational control. Consequently, organizations must make every effort to ensure that their information resources retain their accuracy, integrity, and availability. However, ensuring the security of corporate information assets has become an extremely complex and challenging activity, due to the growing value of information resources and the increased levels of interconnectivity among information systems both within and among organizations (Garg et al., 2003). Indeed, the high incidence of security breaches suggests that many organizations are failing to manage their information resources effectively (Straub & Welke, 1998). One increasingly important mechanism for protecting corporate information, and in so doing reducing the occurrence of security breaches, is the formulation and application of a formal information security policy (InSPy) (Hinde, 2002; von Solms & von Solms, 2004). Gaston (1996, p. 175) defines an InSPy as: "broad guiding statements of goals to be achieved; significantly, they define and assign the responsibilities that various departments and individuals have in achieving policy goals."

The role and importance of information security policies and the incidence and severity of security breaches are both topics that have attracted significant attention in the literature, but there is little evidence that these topics have been explicitly linked. Consequently, there has been little empirical exploration of the extent to which information security policies are effective in terms of reducing security breaches. The aim of this chapter is to help fill this gap by reporting upon the results of a study that sought to empirically explore the relationship between the uptake and application of information security policies and the incidence of security breaches. The remainder of this chapter is organized into the following five sections: a review of the literature and a description of the conceptual framework; a discussion of the research methods employed; a presentation of the findings; a discussion of their importance; and finally the conclusions and recommendations for future research.

LITERATURE REVIEW AND CONCEPTUAL FRAMEWORK

This section aims to present a discussion of the literature with regard to the role and importance of the InSPy and the common security threats which such policies are intended to prevent. The section concludes with a critique of this literature and the presentation of the conceptual framework for our study.

The Role of the Information Security Policy

The broad aim of the information security policy is to provide the "ideal operating environment" for the management of information security (Barnard & von Solms, 1998), by defining "the broad boundaries of information security" as well as the "responsibilities of information resource users" (Hone & Eloff, 2002b, p. 145). More specifically, a good security policy should "outline individual responsibilities, define authorized and unauthorized uses of the systems, provide venues for employee reporting of identified or suspected threats to the system, define penalties for violations, and provide a mechanism for updating the policy" (Whitman, 2004, p. 52).

The InSPy also has an important role to play in emphasizing management's commitment to, and support for, information security (Gaston, 1996; Hone & Eloff, 2002b; Kwok & Longley, 1999). While the InSPy provides the framework for facilitating the prevention, detection, and response to security breaches, the policy document
typically is supported by standards that tend to have a more technical or operational focus (Dhillon, 1997).

In recent years, a consensus has emerged within both the academic and practitioner communities that the security of corporate information resources is predicated upon the formulation and application of an appropriate information security policy (e.g., Rees et al., 2003). As Hinde (2002) puts it, the information security policy is now the sine qua non (indispensable condition) of effective security management. In a similar vein, von Solms and von Solms (2004) note that the information security policy is the heart and basis of successful security management. However, while the InSPy may play an important role in effective information security management, there is growing recognition that the policy is unlikely to be a successful security tool unless organizations adhere to a number of important prescriptions in their policy implementation (Hone & Eloff, 2002b). The following are probably the most commonly cited examples of best practice guidelines:

1. The policy must be widely and strongly disseminated throughout the organization (Hone & Eloff, 2002a; Hone & Eloff, 2002b; ISO, 2000; Sipponen, 2000).
2. The policy must be reviewed and revised frequently (Higgins, 1999; Hone & Eloff, 2002a; Hong et al., 2003).

It also has been suggested that policies must be tailored to the culture of the organization (Hone & Eloff, 2002b; ISO, 2000), well aligned with corporate objectives (ISO, 2000; Rees et al., 2003), and rigorously enforced (David, 2002). While the literature with respect to the facilitators of effective information security policy utilization undoubtedly is growing, no previous studies could be found that sought to empirically explore the importance of different success factors. Indeed, there is a very significant gap in the literature with respect to empirical studies of the role that the InSPy has to play in the prevention of common security threats (Loch et al., 1992; Mitchell et al., 1999; Rees et al., 2003; Whitman, 2004), such as those summarized in the following section.

Threats to the Security of Information Assets

Information resources can retain their integrity, confidentiality, and availability only if they can be protected from the growing range of threats arrayed against them (Dhillon & Backhouse, 1996; Garg et al., 2003). Security threats — which have been defined as "circumstances that have the potential to cause loss or harm" (Pfleeger, 1997, p. 3) — come both from within and from outside the organization (Hinde, 2002). For example, common internal threats include "mistakes by employees" (Mitchell et al., 1999) and some categories of computer-based fraud (Dhillon, 1999), while attacks by hackers (Austin & Darby, 2003) and viruses (de Champeaux, 2002) are the most commonly cited types of external threat. The increasing vulnerability of computer-based information systems is underlined by the growing cost of security breaches (Austin & Darby, 2003). For example, Garg et al. (2003) estimate the cost of significant security breaches, such as "denial of service" attacks, to be in the range of $17 to $28 million. Given the growing cost of security breaches, many surveys have been undertaken that have sought to quantify the range and significance of the threats facing computer-based information systems. A review of these surveys (Loch et al., 1992; Mitchell et al., 1999; Whitman, 2004) suggests that the breaches presented in Table 1 are probably the most common and most significant threats. While the threats to the security of information systems are both well documented and well understood, there is a continuing worry that such issues are not high on the organizational agenda. As Straub and Welke (1998) note, "Information security continues to be ignored by top managers, middle managers, and employees alike. The result of this unfortunate neglect is that organizational systems are far less secure than they might otherwise be and that security breaches are far more frequent and damaging than is necessary" (p. 441). Therefore, there is a pressing need for more research that can highlight any strategies or approaches that might reduce the incidence and severity of security breaches.
Table 1. Common types of security breaches

Type of breach         Description
Computer virus         Computer programs that have the capability to automatically replicate themselves across systems and networks.
Hacking incidents      The penetration of organizational computer systems by unauthorized outsiders, who are then free to manipulate data.
Unauthorized access    The deliberate abuse of systems and the data contained therein by users of those systems.
Theft of resources     Theft of increasingly valuable hardware, software, and information assets.
Computer-based fraud   Information systems, especially financial systems, are vulnerable to individuals who seek to defraud an organization.
Human error            The accidental destruction or incorrect entry of data by computer users.
Natural disasters      Damage to computing facilities or data resources caused by phenomena such as earthquakes, floods, or fires.
Damage by employees    Disgruntled employees may seek revenge by damaging their employers' computer systems.
Conceptual Framework and Research Hypotheses

A summary of the literature suggests that there are growing literatures both with regard to the role of information security policies and with regard to the nature and incidence of security breaches. Moreover, there is a general acceptance that the information security policy is an important, if not the most important, means of preventing such breaches (Loch et al., 1992; Mitchell et al., 1999; Whitman, 2004). Perhaps it is surprising that to date there has been little conceptual or empirical scrutiny to determine whether the incidence and severity of security breaches can be reduced through the adoption of an information security policy. The aim of the remainder of this section is to describe the study designed to fill this gap before articulating the specific research hypotheses and presenting the research framework. It should be noted that a full discussion of the design of the questionnaire and the operationalization of the constructs discussed in this section is deferred to the following section. Given the lack of empirical research in the area, it was felt that an exploratory piece of work that embraced a wide range of issues would be most appropriate. To this end, the aim of the study was to explore how a variety of issues relating to the uptake and application of information security policies impacted upon the incidence of security breaches within large organizations. Based upon our review of the literature, it is possible to hypothesize that a number of distinct aspects of the InSPy might influence the incidence of security breaches. Each of these areas is represented as a significant construct in the conceptual framework (see Figure 1), and each can be linked to a research hypothesis, as described in the following paragraphs.
Figure 1. Conceptual framework (constructs: information security policy in place, policy age, policy updating, policy scope, and adoption of best practice, linked by hypotheses H1-H5 to the incidence and severity of security breaches)

The Existence of an InSPy. The review of literature highlighted the strength of the consensus with regard to the importance of the information security policy in countering security breaches
(Loch et al., 1992; Mitchell et al., 1999; Whitman, 2004). Therefore, it is reasonable to propose the following hypothesis:

H1: Those organizations that have a documented InSPy are likely to have fewer security breaches, in terms of both frequency and severity, than those organizations that do not.

The Age of the InSPy. The literature has relatively little to say about the longevity of information security policies. However, the following hypothesis articulates the assumption that organizations with a long history of utilizing such policies might be more experienced and, therefore, more effective in security management:

H2: Those organizations that have had an InSPy in place for many years are likely to have fewer security breaches, in terms of both frequency and severity, than those organizations that have not.

The Updating of the InSPy. There is a growing yet empirically untested view within the literature (Higgins, 1999; Hone & Eloff, 2002a; Wood, 1995) that the InSPy should be updated regularly. Consequently, the following hypothesis can be proposed:

H3: Those organizations that update their InSPy frequently are likely to have fewer security breaches, in terms of both frequency and severity, than those organizations that do not.

The Scope of the InSPy. It has been suggested that the scope of the InSPy might vary greatly, depending upon which national or international information security standard has been adopted (Hone & Eloff, 2002b). What is less clear is how
the scope of a policy might affect its successful deployment. However, it seems reasonable to propose the following relationship between the scope of the InSPy and the effectiveness of an organization's security management:

H4: Those organizations that have a policy with a broad scope are likely to have fewer security breaches, in terms of both frequency and severity, than those organizations that do not.

The Adoption of Best Practice. The International Standard (ISO, 2000) has some very clear advice about the factors that are important in ensuring the successful application of the InSPy, most of which have not been covered explicitly by the previous hypotheses. The corollary of this, as presented in the following hypothesis, is that the adoption of these success factors should lead to a reduction in security breaches:

H5: Those organizations that have adopted a wide variety of best practice factors are likely to have fewer security breaches, in terms of both frequency and severity, than those organizations that have not.

It should be noted that the original intention was to explore whether the active dissemination of an information security policy affected the incidence or severity of security breaches (Hone & Eloff, 2002a; Hone & Eloff, 2002b; ISO, 2000). However, as 99% of our sample reported that they actively disseminated their policies, it was not possible to test this hypothesis. While the hypotheses have been formulated to represent the outcomes that the researchers believed to be the most likely, it was recognized that, in some cases, alternative yet equally plausible results might be produced. For example, it might be that the existence of the InSPy is associated with a high incidence of security breaches in circumstances in which the policy has been implemented in direct response to a poor security record. The possibility of alternative hypotheses is considered further in a later section.

The urgent need for more research and new insights in the information security domain was recently highlighted by Dhillon (2004), who noted that "information security problems have been growing at an exponential rate" (p. 4). In a similar vein, Kotulic and Clark (2004) argue that "the organizational level information security domain is relatively new and under researched. In spite of this, it may prove to be one of the most critical areas of research, necessary for supporting the viability of the firm" (p. 605). Therefore, it was envisaged that our study might provide some important new insights at the organizational level as to how the incidence and severity of security breaches might be controlled.

RESEARCH DESIGN

In order to explore successfully the five research hypotheses described in the previous section, it was necessary to employ survey methods, so that the resultant data could be subjected to a rigorous statistical analysis. The aim of this section is to review how the questionnaire was designed, validated, and ultimately executed, and then to describe the characteristics of the sample.

Questionnaire Development, Validation, and Targeting

A detailed questionnaire was used to collect the data necessary to explore the research hypotheses proposed in the previous section. The questionnaire was organized into the following four sections:

Security Breaches. Respondents were asked to report on the incidence and severity of each of the eight most common types of security breach (see Table 1) that their organizations had
experienced over the previous two years. The incidence variable was operationalized as a four-point ordinal scale (0; 1-5; 6-10; >10), while the severity of breaches was measured using a five-point Likert scale.
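To make this measurement scheme concrete, the sketch below shows one plausible way to code such responses for analysis. It is only an illustration: the column names and the sample records are hypothetical, not taken from the authors' instrument.

```python
import pandas as pd

# The four ordinal incidence bands described above, in rank order.
INCIDENCE_BANDS = ["0", "1-5", "6-10", ">10"]

# Hypothetical responses: one row per responding organization.
responses = pd.DataFrame({
    "virus_incidence": ["1-5", ">10", "0", "1-5"],  # reported incidence band
    "virus_severity":  [3, 4, 1, 2],  # 1 = fairly insignificant ... 5 = highly significant
})

# An ordered categorical preserves the ranking of the bands, which matters
# when tabulating or cross-tabulating the ordinal incidence variable.
responses["virus_incidence"] = pd.Categorical(
    responses["virus_incidence"], categories=INCIDENCE_BANDS, ordered=True
)
print(responses["virus_incidence"].value_counts(sort=False))
```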
The Existence and Updating of the Information Security Policy. This section sought to determine whether a responding organization had a documented InSPy and, if it did, how old it was and how often it was updated.

The Scope of the Information Security Policy. This section of the questionnaire was designed to evaluate the coverage of the information security policy. The respondent was presented with a list of 11 distinct issues, such as disclosure of information, Internet access, and viruses, worms, and trojans, that an information security policy might reasonably be expected to cover. These items all have been derived explicitly from the relevant International Standard (ISO, 2000) or from a white paper published by the SANS Institute (Canavan, 2003). For each of these issues, the respondent was invited to indicate whether the issue was covered in the policy document only, through the policy document and a supplementary procedure, or not explicitly covered in the InSPy.

Best Practice in Information Security Policy Adoption. The International Standard on information security management (ISO, 2000) suggests that there are 10 distinct factors that might influence the success of an information security policy, such as visible commitment from management and a good understanding of security requirements. For each of these factors, the respondent was asked to indicate the extent to which his or her organization was successful in adopting that factor, using a five-point Likert scale.

As there are few previous survey-based, empirical studies that explicitly address the application of information security policies, it was not possible to adapt specific questions and item measures from the existing literature. Consequently, once a draft questionnaire had been created, it was necessary to subject it to a rigorous validation process. More specifically, the draft questionnaire initially was validated through a series of pre-tests, first with four experienced IS researchers and then, after some modifications, with five senior IT professionals, all of whom had some responsibility for information security. The pre-testers were asked to critically appraise the questionnaire, focusing primarily on issues of instrument content, clarity, question wording, and validity, before providing detailed feedback via interviews. The pre-tests were very useful, as they resulted in a number of enhancements being made to the structure of the survey and the wording of specific questions. Having refined the questionnaire, a pilot study exercise also was undertaken, which provided valuable insights into the likely response rate and analytical implications for the full survey.

As an InSPy is essentially a managerial, direction-giving document (Hone & Eloff, 2002b) rather than a technical document, it was recognized that the most appropriate individuals to target were executives with a high degree of managerial responsibility for information systems and technology. Senior IT executives, therefore, were targeted explicitly, as it was envisaged that they could provide the required organizational and managerial perspective. A list of the addresses of IT directors from large UK-based organizations was purchased from a commercial research organization. The decision to target only larger firms (firms employing more than 250 people) was based on the premise that small firms have few, if any, dedicated IT staff (Prembukar & King, 1992). A total of 219 valid responses were received from the 2,838 questionnaires mailed out, representing a response rate of 7.7%. While this response rate was rather disappointing, perhaps it was not surprising, given the increasingly sensitive nature of information security (Menzies, 1993).
More recently, in an article entitled "Why Aren't There More Information Security Studies?", Kotulic and Clark (2004) concluded that "it is nearly impossible to extract information of this nature [relating to information security] by mail from business organizations without having a major supporter" (p. 604). Consequently, while the sample was smaller than had been originally hoped, it was probably as good as could be expected, given the circumstances.

Sample Characteristics and Response Bias

The sample could be characterized in terms of both the size of the responding organizations and the sectors in which they primarily are operating. Of the valid respondents, 44% were employed in organizations having less than 1,000 employees, 33% in organizations with between 1,000 and 5,000 employees, and the remaining 23% in larger organizations with more than 5,000 employees. While the responses also were found to have come from a wide variety of industrial sectors, four were particularly well represented: manufacturing (24%), public services (20%), health (7%), and wholesale/retail (6%). Respondents also were asked to indicate the geographical spread of their organization, as it was envisaged that this might have an impact on the need for a formal information security policy. The majority of responding organizations (50%) operated from multiple locations within the UK, a further 33% of organizations operated from multiple sites both within the UK and abroad, and the final 17% of the sample were located at a single site within the UK.

When undertaking survey-based research, there is always the danger that the results will be undermined or even invalidated through the introduction of bias. Therefore, it is important that active measures be taken to reduce the likelihood of bias having any such negative effects. In this research, the content validity of the constructs has been established through the process of initially linking the variables to the research literature and then refining them through an extensive and comprehensive process of pre-testing and pilot testing. Any sample bias introduced through the loss of data from non-respondents is often harder to establish, as the data are not easily obtainable. However, it is possible to approximate this bias by comparing the answer patterns of early and late respondents (Lindner et al., 2001). Consequently, in this study, early and late responses were compared along key dimensions, such as the existence of a policy, the age of the policy, the frequency of updating, and the severity of breaches, in order to test for non-response bias. An independent samples t-test indicated that there were no significant differences in the profile of responses at the 5% level. These results imply that no detectable response bias exists in the sample and that the results are generalizable within the boundary of the sample frame.
RESEARCH FINDINGS

This section explores the five research hypotheses, as presented in a previous section, through a quantitative analysis of the survey data. Before reviewing the evidence relating to each of these hypotheses, it is important to summarize and discuss the data relating to both the incidence and severity of security breaches, as these two data items are used as the dependent variable throughout the analyses. It is beyond the scope of this chapter to present a detailed, descriptive analysis of the data relating to the uptake and application of information security policies; however, this information is available in a previous paper by the authors (Fulford & Doherty, 2003). Table 2 presents a simple, descriptive analysis of the data relating to the incidence and severity of security breaches.
Table 2. The incidence and severity of security breaches (incidence: approximate number of breaches in the last two years; severity of worst breach: 1 = fairly insignificant to 5 = highly significant)

Type of breach         Incidence (0 / 1-5 / 6-10 / >10)   Severity (1 / 2 / 3 / 4 / 5)   Mean
Computer virus         6 / 111 / 23 / 77                  45 / 65 / 47 / 35 / 19         2.59
Hacking incident       142 / 66 / 1 / 5                   42 / 21 / 10 / 5 / 4           1.92
Unauthorized access    106 / 83 / 13 / 10                 32 / 42 / 21 / 5 / 7           2.23
Theft of resources     50 / 123 / 24 / 19                 43 / 52 / 48 / 20 / 8          2.38
Computer-based fraud   187 / 23 / 0 / 2                   15 / 10 / 3 / 6 / 2            2.15
Human error            41 / 85 / 19 / 65                  32 / 61 / 43 / 23 / 10         2.48
Natural disaster       160 / 54 / 2 / 1                   16 / 24 / 9 / 11 / 5           2.52
Damage by employees    185 / 28 / 0 / 0                   20 / 8 / 7 / 2 / 2             1.82
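As an illustration of how the mean severity values in Table 2 are formed, the following snippet computes a frequency-weighted mean from the five Likert counts for computer viruses; any small discrepancy from the printed 2.59 would simply reflect rounding or excluded responses in the original data set.

```python
# Frequency-weighted mean of a 1-5 Likert item, using the computer-virus
# severity counts from Table 2 (45, 65, 47, 35, 19 for ratings 1..5).
counts = {1: 45, 2: 65, 3: 47, 4: 35, 5: 19}

total = sum(counts.values())                      # 211 responses
weighted_sum = sum(rating * n for rating, n in counts.items())
mean_severity = weighted_sum / total

print(f"mean severity = {mean_severity:.2f}")     # ~2.61 vs. the reported 2.59
```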

It is interesting to note that all eight potential types of security breaches have been experienced within our sample and that there appears to be a relationship between the incidence of breaches and their perceived impact. For example, computer virus and human error are both very common types of breaches, and both have a significant impact when they do strike. At the other end of the scale, damage by disgruntled employees, hacking incidents, and computer-based fraud all occur infrequently and have a relatively insignificant impact when they do occur. The only type of breach to break this pattern is obviously natural disasters, which, despite their rare occurrences, do have a significant impact.

The Impact of the Adoption of an InSPy on Security Breaches

The vast majority of respondents (77%) in our sample reported that their organization had a formal, documented InSPy, with the remaining 23% of organizations confirming that they did not. Therefore, it was both possible and desirable to explore the degree of association between the adoption of an InSPy and the resultant level of security breaches. The results of a series of chi-squared tests suggest that there is no statistical association between the adoption of an InSPy and the incidence of security breaches (see Table 3, columns 2 to 4). An analysis of variance (ANOVA) also was used to determine whether there was any association between the adoption of InSPys and the severity of each of the distinct types of security breach (see Table 3, columns 5 to 8).
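The pair of tests reported in Table 3 could be reproduced along the following lines; the DataFrame and its columns (has_policy, virus_incidence, virus_severity) are hypothetical stand-ins for the survey variables, not the authors' actual code.

```python
import pandas as pd
from scipy import stats

# df is assumed to hold one row per responding organization, with a yes/no
# policy flag, the ordinal incidence band, and the numeric severity score.
def test_association(df: pd.DataFrame) -> None:
    # Chi-squared: policy existence (yes/no) versus the four incidence bands.
    contingency = pd.crosstab(df["has_policy"], df["virus_incidence"])
    chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
    print(f"chi-squared = {chi2:.3f}, df = {dof}, p = {p_chi:.3f}")

    # One-way ANOVA: mean severity for adopters versus non-adopters.
    groups = [g["virus_severity"].dropna()
              for _, g in df.groupby("has_policy", observed=True)]
    f_ratio, p_f = stats.f_oneway(*groups)
    print(f"F = {f_ratio:.3f}, p = {p_f:.3f}")
```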
An inspection of the data in Table 3 indicates that there are no statistically significant associations between the existence of an information security policy and either the incidence or the severity of any of the eight types of security breach. This is a particularly surprising result, given the prevailing orthodoxy that the InSPy is the primary mechanism for preventing security breaches (Rees et al., 2003). However, based upon this analysis, hypothesis H1 must be rejected.

The Impact of the Age of the InSPy on Security Breaches

It was envisaged that the greater experience of those organizations that had utilized an information security policy for many years might be manifested in more effective security management practices and, thus, fewer security breaches. As the respondents had been asked to estimate, as a simple integer, the number of years that their organizations had actively used an InSPy, the degree of association between the age of a policy and the incidence/severity of security breaches
Table 3. The relationship between the adoption of InSPy and the incidence and severity of security breaches

Type of breach         Incidence (chi-squared): value / df / p   Severity (ANOVA): yes / no / F / p
Computer virus         0.730 / 3 / 0.878                         2.59 / 2.69 / 0.215 / 0.644
Hacking incident       5.733 / 3 / 0.111                         1.92 / 1.72 / 0.422 / 0.518
Unauthorized access    3.090 / 3 / 0.378                         2.23 / 2.00 / 0.730 / 0.395
Theft of resources     1.905 / 3 / 0.607                         2.38 / 2.51 / 0.429 / 0.513
Computer-based fraud   1.892 / 2 / 0.300                         2.15 / 2.25 / 0.036 / 0.851
Human error            5.388 / 3 / 0.144                         2.48 / 2.67 / 0.743 / 0.390
Natural disaster       6.469 / 3 / 0.089                         2.52 / 2.32 / 0.361 / 0.550
Damage by employees    0.003 / 1 / 1.000                         1.82 / 2.30 / 1.210 / 0.279

Note: A chi-squared test was used to test the association between the four categories of incidence (0, 1-5, 6-10, >10) and the two classes of InSPy existence (yes, no), while one-way ANOVA was used to compare the mean severity of breaches across the two classes of InSPy existence. All probabilities are two-sided.

Table 4. Relationship between the age of the InSPy and the incidence/severity of security breaches

Type of breach         Mean policy age by incidence band (0 / 1-5 / 6-10 / >10)   F / p         Severity: Pearson r / p
Computer virus         2.0 / 3.7 / 3.0 / 5.1                                      2.3 / .08     -0.05 / 0.501
Hacking incident       3.7 / 4.7 / 5.0 / 5.0                                      .77 / .51     -0.05 / 0.718
Unauthorized access    3.5 / 3.9 / 4.5 / 10.1                                     6.4 / .00**   -0.08 / 0.443
Theft of resources     4.1 / 3.7 / 3.4 / 7.27                                     3.7 / .01*    -0.20 / 0.025*
Computer-based fraud   3.9 / 6.14 / - / 3.00                                      2.8 / .07     -0.13 / 0.513
Human error            3.9 / 3.5 / 3.7 / 4.9                                      1.2 / .31     -0.00 / 0.963
Natural disaster       4.1 / 3.8 / 2.8 / -                                        .23 / .80     -0.15 / 0.335
Damage by employees    7.8 / 8.9 / - / -                                          2.9 / .09     -0.19 / 0.332

Note: * Result significant at the 5% level; ** result significant at the 1% level. Incidence was tested with a one-way ANOVA of policy age across the four incidence bands; severity with a two-sided Pearson correlation.
was explored using ANOVA (Table 4, columns 2 to 7) and correlation (Table 4, columns 8 to 9). The findings (see Table 4) indicate that there are two significant associations between the age of the policy and the incidence of security breaches. However, an inspection of this data suggests that in both cases where there is a significant result, the decreased incidence of security breaches is associated with recently deployed policies rather than with those that have been in existence for a long time. Consequently, these findings are important, as they suggest that there may be some complacency creeping into the security practices of those organizations with a longer history of policy utilization. When it comes to associations between the age of the policy and the severity of breaches, there is only one case (theft of resources) where there is a significant association. In this case, there is some support for hypothesis H2, as the Pearson correlation value is negative, indicating that older policies are associated with less severe
Table 5. Relationship between the frequency of updating InSPy and the incidence/severity of security
breaches

Type of Breach Incidence of Breaches Severity of Worst Breach


(Chi-Squared Analysis) (One-Way ANOVA)
Pearson Degree Two- < Once ?Once F F
Value of Sided a Year a Year Ratio Prob.
Freedom Prob.
Computer virus 3 .157 3 0.368 2.42 2.75 2.71 0.101
Hacking incident 1.679 3 0.642 2.00 1.92 0.065 0.799
Unauthorized access 3.108 3 0.375 2.21 2.25 0.030 0.864
Theft of resources 2.219 3 0.528 2.35 2.42 0.117 0.733
Computer-based fraud 1.098 2 0.577 2.08 2.20 0.052 0.821
Human error 5.253 3 0.154 2.67 2.42 1.467 0.228
Natural disaster 3.237 2 0.198 2.29 2.72 1.450 0.235
Damage by employees 1.198 1 0.274 1.73 1.87 0.087 0.770

Table 6. Relationship between the range of issues covered by the InSPy and the incidence/severity of security breaches

                          Incidence of Breaches (One-Way ANOVA)           Severity of Worst Breach (Correlation)
Type of Breach            0      1-5    6-10   >10    F Ratio  F Prob.    Pearson Value   Two-Sided Significance
Computer virus            8.0    7.8    7.6    8.4    .79      .49        0.05            0.530
Hacking incident          8.0    7.9    10.0   6.5    .41      .75        -0.04           0.779
Unauthorized access       7.9    8.0    7.9    9.4    .86      .46        0.15            0.169
Theft of resources        7.4    8.0    8.2    9.3    2.4      .10        -0.05           0.536
Computer-based fraud      7.8    9.3    -      5.00   3.4      .04*       0.31            0.122
Human error               8.1    7.9    7.8    8.2    .29      .88        0.02            0.838
Natural disaster          7.9    8.5    3.5    -      3.8      .02*       0.24            0.105
Damage by employees       7.8    8.9    -      -      2.9      .09        0.08            0.678

Note: * Result significant at the 5% level

Table 7. One-way ANOVA between the successful adoption of success factors and the incidence/severity of security breaches

                          Incidence of Breaches (One-Way ANOVA)           Severity of Worst Breach (Correlation)
Type of Breach            0      1-5    6-10   >10    F Ratio  F Prob.    Pearson Value   Two-Sided Significance
Computer virus            3.17   2.95   2.85   2.85   0.42     0.74       0.031           0.699
Hacking incident          2.94   2.93   2.50   1.55   3.05     0.03*      0.120           0.365
Unauthorized access       2.99   2.82   2.76   2.75   1.01     0.39       -0.070          0.529
Theft of resources        2.87   2.89   3.01   2.91   0.40     0.75       -0.149          0.097
Computer-based fraud      2.89   2.87   -      2.40   0.27     0.76       0.305           0.138
Human error               2.98   2.87   3.12   2.81   0.99     0.39       -0.189          0.035*
Natural disaster          2.92   2.82   3.20   -      0.50     0.60       0.171           0.255
Damage by employees       2.91   2.86   -      -      0.09     0.76       -0.088          0.655

Note: * Result significant at the 5% level


However, given that there is no strong or consistent evidence in support of the hypothesis, H2 also must be rejected.

Impact of InSPy Update Frequency on Security Breaches

The relationship between the frequency of updating an information security policy and the incidence and severity of security breaches was explored using a chi-squared analysis (Table 5, columns 2 to 4) and ANOVA (Table 5, columns 5 to 8). The frequency with which InSPys were updated was measured using a five-item categorical scale (less than every two years; every two years; every year; every six months; more than every six months). To use this variable in a chi-squared analysis, with the incidence of breaches variable, it was necessary to compress the five original categories into just two (less than once a year and at least once a year), to ensure that the expected frequencies in every cell of the contingency table were greater than five, a prerequisite of the chi-squared approach. Having used a two-category measure of frequency of updating for the chi-squared analysis, it made sense also to use it for the ANOVA in order to make the results more comparable.
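To make the compression step concrete, the following short Python sketch (illustrative only; it is not the authors' analysis code, and the cell counts shown are invented) runs a contingency-table test of this kind and checks the expected-frequency prerequisite:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: compressed update frequency (< once a year, >= once a year);
    # columns: incidence of breaches (0, 1-5, 6-10, >10). Hypothetical counts.
    observed = np.array([[12, 25, 8, 6],
                         [30, 58, 16, 12]])

    chi2, p, dof, expected = chi2_contingency(observed)

    # The chi-squared approximation is only trusted when every expected cell
    # frequency exceeds five -- the prerequisite that motivated the compression.
    if (expected < 5).any():
        print("Expected frequency below 5: compress categories further.")
    print(f"chi-square = {chi2:.3f}, df = {dof}, two-sided p = {p:.3f}")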
The results of the two analyses (see Table 5) indicate that there are no statistically significant associations between the frequency with which the InSPy is updated and the incidence and severity of any of the eight types of security breach; hypothesis H3, therefore, also must be rejected. This result is also surprising in the face of the prevailing orthodoxy that the InSPy will be more effective if updated regularly (Hone & Eloff, 2002b).

The Impact of the Scope of an InSPy on Security Breaches

The scope of information security policies can vary greatly in terms of the numbers of issues covered, so it was important to explore whether the scope of the policy was associated with the incidence and severity of security breaches. As discussed in a previous section, the scope of the policy was investigated by asking respondents to indicate which issues, from a list of 11 separate issues, were covered in their policies. Consequently, it was possible to create a new variable — total issues covered — that was the sum of the individual issues covered. This new variable, which was in the range 0-11, had a mean of 8.01 and a standard deviation of 2.61. The relationship between the total issues covered and the incidence and severity of security breaches was explored using an ANOVA (Table 6, columns 2 to 7) and a bivariate correlation (Table 6, columns 8 to 9).
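As an illustration, the derivation of such a summated scope variable and its bivariate correlation with breach severity could be sketched as follows (hypothetical data, not the survey responses):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    issues = rng.integers(0, 2, size=(200, 11))  # yes/no coverage of 11 policy issues
    total_issues = issues.sum(axis=1)            # "total issues covered", range 0-11
    severity = rng.integers(1, 6, size=200)      # severity of worst breach (hypothetical)

    print(f"mean = {total_issues.mean():.2f}, sd = {total_issues.std(ddof=1):.2f}")
    r, p = pearsonr(total_issues, severity)
    print(f"Pearson r = {r:.2f}, two-sided significance = {p:.3f}")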
The results relating to hypothesis H4 are quite interesting, as there are some statistically significant results. For example, the range of issues covered is associated significantly with the incidence of both computer-based fraud and natural disasters. However, an inspection of the data (Table 6, columns 2 to 5) is inconclusive; while the incidence of breaches is highest in both of these cases in which the issues covered are lowest, the lowest incidence of breaches is not associated with the highest numbers of issues covered. With regard to the severity of threats, there are no statistically significant associations between the number of issues covered by the policy and the severity of security breaches. In summary, given that only two out of the 16 individual tests conducted resulted in statistically significant outcomes, there is little in the way of strong evidence in support of hypothesis H4, and, therefore, it must be rejected.

The Impact of the Adoption of Best Practice on Security Breaches

In order to explore effectively the relationship between the adoption of success factors and the incidence and severity of security breaches, it was necessary to derive a summated scale for the 10 success factors.


An underlying assumption and fundamental requirement for constructing a summated measure of a metric construct is that the item scales all measure the same underlying construct. This was confirmed by undertaking internal reliability tests using the Cronbach alpha measure, which yielded a statistically significant score of 0.87. Having derived the overall measure for the adoption of best practice, ANOVA and correlation analyses were conducted to explore its association with the incidence and severity of security breaches (see Table 7).
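For reference, Cronbach's alpha for a summated scale of k items is alpha = (k/(k-1)) x (1 - sum of the item variances / variance of the summated scale). The following hypothetical Python sketch computes it for a respondents-by-items matrix; the data are randomly generated, so the value printed will be far from the 0.87 obtained from the real success-factor items:

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of scores
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        scale_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / scale_variance)

    rng = np.random.default_rng(1)
    scores = rng.integers(1, 6, size=(200, 10))  # hypothetical scores on the 10 success factors
    print(f"alpha = {cronbach_alpha(scores):.2f}")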
The results of these analyses indicate that there is a statistical association between the summated success factors and security breaches for two out of the 16 tests conducted. Moreover, an inspection of the data provides some evidence in support of hypothesis H5. For example, success in adopting best practice is associated with a low occurrence of hacking incidents, whereas low success in adopting best practice is associated with a high incidence of hacking incidents. In a similar vein, success in adopting best practice is associated with low severity breaches due to human error, whereas low success in adopting best practice is associated with high severity incidents of human error. However, given that only two of the 16 tests were significant, there is insufficient evidence to support hypothesis H5, and, therefore, it must be rejected.

DISCUSSION

It was established in the literature review that the information security policy is now viewed as the basis for the dissemination and enforcement of sound security practices and, as such, should help to reduce the occurrence of security breaches (Loch et al., 1992; Mitchell et al., 1999; Whitman, 2004). Indeed, as Wadlow (2000) notes, "[I]f you ask any security professional what the single most important thing is that you can do to protect your network, they will unhesitatingly say that it is to write a good security policy." Therefore, it came as something of a surprise in the present study to find almost no statistically significant relationships between the adoption of information security policies and the incidence or severity of security breaches. Consequently, it is important to explore the possible interpretations of this unexpected finding. The implications of this study for the practice of information security management are reviewed in this section, and then its limitations are explored.

Although there is little evidence of any formal, empirical studies that focus on the effectiveness of information security policies, the published literature does provide some clues as to why InSPys might be failing to stem the level of security breaches. Among these, the following are the most plausible reasons for deficient policies and ineffective policy implementation.

Difficulties of Raising Awareness. Siponen (2000) highlights the problems of policy dissemination in the workplace. If employees are not made aware of a policy, then there is a danger that it will become a dead document rather than an active and effective security management tool. Given that nearly all the respondents in our study claimed to be actively disseminating their policies, questions must be raised about the effectiveness of their dissemination strategies, in the light of the consistently high levels of security breach witnessed. As Hone and Eloff (2002b) note, "a common failure of information security policies is that they fail to impact users on the ground" (p. 15).

Difficulties of Enforcement. As David (2002) notes, "having a policy and being able to enforce it are totally different things" (p. 506). Hinde (2002) provides evidence that the problem of policy enforcement might stem primarily from the difficulties of getting employees to read and take heed of policies.


As Wood (2000) notes, the expectation that "users are going to look at a centralized information security policy is just unrealistic and bound to lead to disappointing results" (p. 14).

Policy Standards Are Too Complex. Many organizations lack the skills and experience to formulate an information security policy. Therefore, they typically will refer to one of the many international information security standards, such as ISO17799, COBIT, or GMITS (Hone & Eloff, 2002a). While such standards are recognized as a "good starting point for determining what an InSPy should consist of" (Hone & Eloff, 2002a, p. 402), in practice, they can be complex and time consuming to apply (Arnott, 2002).

Inadequate Resourcing. In too many cases, there are "insufficient resources available to devote to the monitoring and enforcement of policies" (Moule & Giavara, 1995, p. 8). Effective security management requires a great deal of time, effort, and money, which many organizations are not prepared to commit.

Failure to Tailor Policies. It has been argued that the security requirements of an organization will be dependent on the types of information being processed (Pernul, 1995) and on the culture of the organization (Hone & Eloff, 2002a). Consequently, an InSPy must be tailored to its organizational context. However, because many organizations rely on international standards as the point of departure for developing a policy, they often apply a generic solution rather than tailor it to their own circumstances.

It is very likely that the factors reviewed previously provide at least a partial explanation of why InSPys are failing to have a significant impact on the incidence and severity of security breaches. However, the drivers for adopting or enhancing an information security policy also might help to explain why all five of our hypotheses were ultimately rejected. Our basic thesis was that organizations that had formulated a policy that was updated regularly, broad in scope, and adhered to best practice would have fewer security breaches than those organizations that had not. An alternative thesis might be that, rather than deploying policies to prevent breaches, many organizations might be adopting or enhancing a policy in response to a spate of security breaches. However, if there was any significant evidence in support of this alternative thesis in which the direction of causality is simply reversed, then a large number of statistically significant associations still might have been expected. Consequently, one plausible explanation of our findings is that there is a mixture of drivers; in some instances, policies are doing their job and preventing breaches, and in other cases, policies are being implemented or enhanced in response to a high incidence of breaches.

While the previous discussion might help to explain the apparent ineffectiveness of information security policies, any manager with responsibility for the formulation of his or her organization's information security policy needs to heed the messages that are inherent in these findings. First, the findings suggest that there is no room for complacency; it is not enough simply to produce a policy, even if that policy has a broad scope, adheres to best practice, and is updated regularly. Steps must be taken to ensure that the policy is tailored to its organizational context and then enforced, which, in turn, means that the policy must be disseminated appropriately and well resourced. Moreover, the results suggest that organizations need to be more proactive in evaluating the effectiveness of their policies; when security breaches occur, the policy should be reviewed to determine how such incidents can be avoided in the future. It is particularly important for those organizations who already deploy an appropriate policy and who appear to be following best practice in its application yet still suffer a high incidence of security breaches to evaluate critically their security policy and security practices.


This research also should be of interest to the information management research community. As it is one of the first empirical studies to explicitly tackle the relationship between the information security policy and the level of security breaches, many new variables and item measures have been identified and validated; these might be incorporated usefully in future research. Moreover, the study has highlighted the need for far more research in this area in order to explore further the relationship between the information security policy and security breaches and to determine what steps are needed to improve its effectiveness.

Research into the adoption of sophisticated policies within the organizational context is an ambitious undertaking and, therefore, contains a number of inherent limitations. In particular, the adoption of the survey format restricts the range of issues and constructs that can be explored; the selection of a very narrow sampling frame reduces the generalizability of the results; and, finally, there is potential response bias associated with the use of a single informant. Moreover, the survey approach cannot help provide meaningful explanations of why no statistically significant findings were derived from our analyses. Consequently, while the study provides many interesting insights, these limitations do highlight the need for follow-up studies to be conducted employing different methods and targeting different populations. When considering future studies, it will be important for researchers to be creative in finding ways to secure organizational buy-in to their studies in order to avoid the difficulties of response witnessed in this and other information security projects (Kotulic & Clark, 2004).

CONCLUDING REMARKS

The work presented in this chapter makes an important contribution to the information security literature, as it presents the first empirical study of the relationship between the application of information security policies and the incidence and severity of security breaches. The key result of this research is the finding that there is no statistically significant relationship between the existence and application of information security policies and the incidence or severity of security breaches. While a number of plausible explanations have been proffered to help understand this somewhat surprising finding, there is an urgent need for follow-up studies to explore what can be done to improve the effectiveness of information security policies. To this end, a series of follow-up interviews and focus groups to help interpret and explain the results of the quantitative analysis is currently being planned. As the project unfolds, it is anticipated that the findings will help organizations to better understand the value of security policies and to pinpoint the policy areas for prioritization.

Acknowledgment

The provisional version of the research framework upon which this chapter is based was presented at the IRMA Conference (Doherty & Fulford, 2003). The authors would like to thank the chapter reviewers and conference participants for their helpful comments, as these greatly shaped our thinking with regard to this chapter.


References

Arnott, S. (2002, February). Strategy paper. Computing.

Austin, R.D. & Darby, C.A. (2003, June). The myth of secure computing. Harvard Business Review.

Barnard, L. & von Solms, R. (1998). The evaluation and certification of information security against BS 7799. Information Management and Computer Security, 6(2), 72-77.

Canavan, S. (2003). An information security policy development guide for large companies. SANS Institute. Retrieved from http://www.SANS.org

David, J. (2002). Policy enforcement in the workplace. Computers and Security, 21(6), 506-513.

De Campeaux, D. (2002). Taking responsibility for worms and viruses. Communications of the ACM, 45(4), 15-16.

Dhillon, G. (1997). Managing information systems security. London: Macmillan Press.

Dhillon, G. (1999). Managing and controlling computer misuse. Information Management and Computer Security, 7(4), 171-175.

Dhillon, G. (2004). The challenge of managing information security. International Journal of Information Management, 24, 3-4.

Dhillon, G. & Backhouse, J. (1996). Risks in the use of information technology within organizations. International Journal of Information Management, 16(1), 65-74.

Doherty, N.F. & Fulford, H. (2003). Information security policies in large organisations: Developing a conceptual framework to explore their impact. In M. Khosrow-Pour (Ed.), Information Technology & Organizations: Trends, Issues, Challenges & Solutions, 2003 Information Resources Management Association International Conference, Philadelphia, Pennsylvania, May 18-21 (pp. 1052-1053). Hershey, PA: Idea Group Publishing.

Fulford, H. & Doherty, N.F. (2003). The application of information security policies in large UK-based organizations. Information Management and Computer Security, 11(3), 106-114.

Garg, A., Curtis, J., & Halper, H. (2003). Quantifying the financial impact of information security breaches. Information Management and Computer Security, 11(2), 74-83.

Gaston, S.J. (1996). Information security: Strategies for successful management. Toronto: CICA.

Glazer, R. (1993). Measuring the value of information: The information intensive organization. IBM Systems Journal, 32(1), 99-110.

Higgins, H.N. (1999). Corporate system security: Towards an integrated management approach. Information Management and Computer Security, 7(5), 217-222.

Hinde, S. (2002). Security surveys spring crop. Computers and Security, 21(4), 310-321.

Hinde, S. (2003). Cyber-terrorism in context. Computers and Security, 22(3), 188-192.

Hone, K. & Eloff, J.H.P. (2002a). Information security policy: What do international security standards say. Computers & Security, 21(5), 402-409.

Hone, K. & Eloff, J.H.P. (2002b). What makes an effective information security policy. Network Security, 20(6), 14-16.

Hong, K., Chi, Y., Chao, L., & Tang, J. (2003). An integrated system theory of information security management. Information Management and Computer Security, 11(5), 243-248.


I.S.O. (2000). Information technology. Code of practice for information security management, ISO 17799. International Standards Organization.

Kotulic, A.G. & Clark, J.G. (2004). Why there aren't more information security research studies. Information & Management, 41(5), 597-607.

Kwok, L. & Longley, D. (1999). Information security management & modelling. Information Management and Computer Security, 7(1), 30-39.

Lindner, J.R., Murphy, T.H., & Briers, G.E. (2001). Handling non-response in social science research. Journal of Agricultural Education, 42(4), 43-53.

Loch, K.D., Carr, H.H., & Warkentin, M.E. (1992). Threats to information systems: Today's reality, yesterday's understanding. MIS Quarterly, 16(2), 173-186.

Menzies, R. (1993). Information systems security. In J. Peppard (Ed.), IT strategy for business. London: Pitman Publishing.

Mitchell, R.C., Marcella, R., & Baxter, G. (1999). Corporate information security. New Library World, 100(1150), 213-277.

Pernul, G. (1995). Information systems security: Scope, state of the art and evaluation of techniques. International Journal of Information Management, 15(3), 165-180.

Pfleeger, C.P. (1997). Security in computing. Englewood Cliffs, NJ: Prentice Hall.

Premkumar, G. & King, W.R. (1992). An empirical assessment of information systems planning and the role of information systems in organizations. Journal of Management Information Systems, 9(2), 99-125.

Rees, J., Bandyopadhyay, S., & Spafford, E.H. (2003). PFIRES: A policy framework for information security. Communications of the ACM, 46(7), 101-106.

Siponen, M. (2000). Policies for construction of information systems' security guidelines. In Proceedings of the 15th International Information Security Conference (IFIP TC11/SEC2000), Beijing, China, August (pp. 111-120).

Straub, D.W. & Welke, R.J. (1998). Coping with systems risk: Security planning models for management decision making. MIS Quarterly, 22(4), 441-470.

von Solms, B. & von Solms, R. (2004). The ten deadly sins of information security management. Computers & Security, 23, 371-376.

Wadlow, T.A. (2000). The process of network security. Reading, MA: Addison-Wesley.

Whitman, M.E. (2004). In defense of the realm: Understanding threats to information security. International Journal of Information Management, 24(1), 43-57.

Wood, C.C. (1996). Writing infosec policies. Computers & Security, 14(8), 667-674.

Wood, C.C. (2000). An unappreciated reason why information security policies fail. Computer Fraud & Security, 10, 13-14.

This work was previously published in Information Resources Management Journal, Vol. 18, No. 4, edited by M. Khosrow-Pour, pp. 21-39, copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).

342
343

Compilation of References

Aaltonen, M., & Wilenius, M. (2002). Osaamisen en- Akrich, M. (1992). The de-scription of technical objects.
nakointi. Helsinki, Finland: Edita Prima Oy. In W. E. Bijker & J. Law (Eds.), Shaping technology/
building society: Studies in sociotechnical change (pp.
ACM SIGCHI. (1996). Curricula for human-computer
205-224). Cambridge, MA/London: MIT Press.
interaction. Retrieved March 20, 2007, from http://sigchi.
org/cdg/cdg2.html#2_3_3 Alberts, C., & Dorofee, A. (2002, July). Managing in-
formation security risks: The OCTAVE (SM) approach.
Adams, A., Bochner, S., & Bilik, L. (1998). The ef-
Boston: Addison Wesley.
fectiveness of warning signs in hazardous work places:
Cognitive and social determinants. Applied Ergonomics, Alexander, I. (2002, September). Modelling the interplay
29, 247-254. of conflicting goals with use and misuse cases. Proceed-
ings of the 8th International Workshop on Requirements
Adams, J. (1999). Risk-benefit analysis: Who wants
Engineering: Foundation for Software Quality (REFSQ-
it? Who needs it? Paper presented at the Cost-Benefit
02), Essen, Germany (pp. 9-10).
Analysis Conference, Yale University.
Alexander, I. (2003, January). Misuse cases: Use cases
Adams, J. (2005). Risk management, it’s not rocket sci-
with hostile intent. IEEE Software, 20(1), 58-66.
ence: it’s more complicated draft paper available from
http://www.geog.ucl.ac.uk/~jadams/publish.htm America Online and the National Cyber Security Alli-
ance (2004). AOL/NCSA online safety study. http://www.
Adams, J., & Thompson, M. (2002). Taking account
staysafeonline.info/news/safety_study_v04.pdf
of societal concerns about risk. Framing the problem
(Research Rep. 035). London: Health and Safety Ex- American Management Association. (2005). Workplace
ecutive, e-mail and instant messaging survey. Retrieved March
2006, from http://www.amanet.org/research/
Advergaming on the blockdot Web site. (n.d.). Retrieved
April 27, 2007, from http://games.blockdot.com/basics/ American National Standards Institute (ANSI). (2002).
index.cfm/.  Criteria for safety symbols (Z535.3-Revised). Wash-
ington, DC: National Electrical Manufacturers As-
Advergaming on the innoken Web site. (n.d.). Retrieved
sociation.
April 27, 2007, from http://www.innoken.com/ar_brand-
games.html/.  Anderson, A. R. (Ed.). (1964). Minds and machines. Pren-
tice Hall.
Ahvenainen, S., Helokunnas, T., & Kuusisto, R. (2003).
Acquiring information superiority by time-divergent Anderson, R. (2001). Security engineering: A guide to
communication. In B. Hutchinson (Ed.), Proceedings of building dependable distributed systems. New York:
the 2nd European Conference on Information Warfare and Wiley.
Security (pp. 1-9). Reading, UK: MCIL, Reading.

Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Compilation of References

Antonopoulos, A. M., & Knape, J. D. (2002). Security in nications and multimedia security (Vol. 4237, pp. 97-
converged networks (featured article). Internet Telephony, 108). Springer.
August. Retrieved April 15, 2007, from http://www.
Austin, R.D. & Darby, C.A. (2003, June). The myth of
tmcnet.com/it/0802/0802gr.htm
secure computing. Harvard Business Review.
AOL/NCSA. (2005). AOL/NCSA online safety study.
Awad, E., & Ghaziri, H. (2004). Knowledge management.
America Online and the National Cyber Security Alli-
Upper Saddle River, NJ: Prentice Hall.
ance, December 2005. Retrieved March 21, 2007, from
http://www.staysafeonline.info/pdf/safety_study_2005. Axelrod, C. W. (2005). The demise of passwords: Have
pdf rumors been exaggerated? The ISSA Journal, May,
6-13.
Applehans, W., Globe, A., & Laugero, G. (1999). Manag-
ing knowledge. Boston: Addison-Wesley. Axelrod, C. W. (2007). The dynamics of privacy risk.
Information Systems Control Journal, 1, 51-56.
Applekid. (author’s name). (2007). The life of a social
engineer. Retrieved April 15, 2007, from http://www. Axelrod, C. W. (2007). Analyzing risks to determine a
protokulture.net/?p=79 new return on security investment: Optimizing security
in an escalating threat environment. In H.R. Rao, M.
APWG. (2007). Phishing activity trends report for the
Gupta, & S. Upadhyaya (Eds.), Managing information
month of February, 2007. Retrieved April 15, 2007, from
assurance in financial services (pp. 1-25). Hershey, PA:
http://www.antiphishing.org/reports/apwg_report_feb-
IGI Global.
ruary_2007.pdf
Axelrod, C.W. (1979). Computer effectiveness: Bridging
Argonne (2006). Simulated ‘social engineering’ attack
the management-technology gap. Washington, D.C.:
shows need for awareness. Retrieved April 15, 2007,
Information Resources Press.
from http://www.anl.gov/Media_Center/Argonne_
News/2006/an061113.htm#story4 Backhouse, J., & Dhillon, G. (1996). Structures of respon-
sibilities and security of information systems. European
Arnott, S. (2002, February). Strategy paper. Comput-
Journal of Information Systems, 5(1), 2-10.
ing.
Baird, H. S., & Bentley, J. L. (2005). Implicit captchas.
Ashenden, D., & Ezingeard, J.-N. (2005). The need for
In Proceedings of Spie/IS&T Conference on Document
a sociological approach to information security risk
Recognition and Retrieval Xii. 
management. Paper presented at the 4th Annual Security
Conference. Las Vegas, Nevada, USA. Baird, H., & Riopka, T. (2004, Dec). ScatterType: a
reading CAPTCHA resistant to segmentation attack. In
Association for the Advancement of Artificial Intel-
L. L. J. M. D. M. W. A. Y (Ed.), Vision geometry Xiii.
ligence. (2007). Cognitive science. Retrieved on March
Proceedings of the Spie (Vol. 5676, pp. 197-207). 
20, 2007, from http://www.aaai.org/AITopics/html/cog-
sci.html Balfanz, D., Durfee, G., Grinter, R., Smetters, D., &
Stewart, P. (2004). Network-in-a-box: How to set up a
Athabasca University. (2000). Ergonomics resources.
secure wireless network in under a minute. In Proceedings
Retrieved April 17, 2007, from http://scis.athabascau.
of the 13th Conference on USENIX Security Symposium
ca/undergraduate/ergo.htm
(pp. 207–222). Berkeley: USENIX Association.
Athanasopoulos, E., & Antonatos, S. (2006). Enhanced
Balfanz, D., Smetters, D., Stewart, P., & Wong, H. C.
captchas: Using animation to tell humans and comput-
(2002). Talking to strangers: Authentication in ad-hoc
ers apart.  In H. Leitold & E. Leitold (Eds.), Commu-

344
Compilation of References

wireless networks. In Proceedings of Symposium on Baskerville, R., & Wood-Harper, A.T. (1999). A critical
Network and Distributed Systems Security (NDSS). perspective on action research as a method for information
systems research. Journal of Information Technology,
Ball, K. (2006). Expert report: Workplace. In D. M.
11, 235-246.
Wood (Ed.), Surveillance studies network, a report on
the surveillance society for the information commissioner Baum, M. (1997). The ABA digital signature guidelines.
(UK), appendices (pp. 94-105). London. Computer Law & Security Report, 13(6), 457-458.

Bank, D. (2005). “Spear phishing” tests educate people Beck, U. (1992). Risk society. London: Sage Publish-
about online scams. The Wall Street Journal. Retrieved ers.
March 2, 2006, from http://online.wsj.com/public/ar-
Bell, W. (1998). Foundations of futures studies, vol II,
ticle/SB112424042313615131-z_8jLB2WkfcVtgdAW-
values, objectivity, and the good society. New Brunswick,
f6LRh733sg_20060817.html?mod=blogs
London: Transaction Publishers.
Barnard, L. & von Solms, R. (1998). The evaluation
Belsis, P., Kokolakis, S., & Kiountouzis, E. (2005). Infor-
and certification of information security against BS
mation systems security from a knowledge management
7799. Information Management and Computer Security,
perspective. Information Management & Computer
6(2), 72-77.
Security, 13(3), 189-202.
Baron, A. (1965). Delayed punishment of a runway
Bergson, H. (1911). Creative evolution. Lanham, MD:
response. Journal of Comparative and Physiological
Henry Holt and Company, University Press of America,
Psychology, 60, 131-134.
TM Inc.
Baskerville, R. (1988). Designing information systems
Berkovsky, S., Eytani, Y., Kuflik, T., & Ricci, F. (2006,
security. John Wiley Information Systems.
April). Hierarchical neighborhood topology for privacy
Baskerville, R. (1991). Risk analysis: An interpretive enhanced collaborative filtering. Paper presented at CHI
feasibility tool in justifying information systems secu- 2006 Workshop on Privacy-Enhanced Personalization
rity. European Journal of Information Systems, 1(2), (PEP2006), Montreal, Canada.
121-130
bernz (author’s name). (n.d.). The complete social engi-
Baskerville, R. (1992). The developmental duality of neering FAQ. Retrieved April 15, 2007, from http://www.
information systems security, Journal of Management morehouse.org/hin/blckcrwl/hack/soceng.txt
Systems, 4(1), 1-12.
Betteridge, I. (2005). Police foil $420 million keylog-
Baskerville, R. (1993). Information systems security ger scam. eWeek.com. http://www.eweek.com/ar-
design methods: Implications for information systems ticle2/0,1895,1777706,00.asp
development. Computing Surveys, 25(4), 375-414.
Birch, D. (1997). The certificate business: Public key
Baskerville, R. Investigating information systems with infrastructure will be big business. Computer Law &
action research. Communications of the Association for Security Report, 13(6), 454-456.
Information Systems, 19(2). Retrieved October 5, 2006,
Birchall, D., Ezingeard, J.-N., Mcfadzean, E., Howlin,
from http://www.cis.gsu.edu/~rbaskerv/CAIS_2_19/
N., & Yoxall, D. (2004). Information assurance: Stra-
CAIS_2_19.htm
tegic alignment and competitive advantage. London:
Baskerville, R., & Siponen, M. (2002). An informa- GRIST.
tion security meta-policy for emergent organizations.
Blass, T. (2002). The man who shocked the world. Psycol-
Journal of Logistics Information Management, 15(5/6),
ogy Today. Retrieved March 9, 2006, from http://www.
337-346.
psychologytoday.com/articles/pto-20020301-000037.
html

345
Compilation of References

Blizzard entertainment ltd., press release January Brunker, M. (2004, September). Are poker bots raking
2007. (n.d.). Retrieved April 27, 2007, from http://www. online pots? Retrieved April 27, 2007, from http://www.
blizzard.com/press/070111.shtml  msnbc.msn.com/id/6002298/ 

Bloom, E., Schachter, M., & Steelman E. H. (2003). BSI. (1993). DISC PD0003: A code of practice for
Competing interests in the post 9-11 workplace: the new information security management. London: British
line between privacy and safety. William Mitchell Law Standards Institute.
Review, 29, 897-920.
BSI. (2002). BS7799-2:2002. Information security man-
Boehm, B. W. (1988). A spiral model of software develop- agement. Specification with guidance for use. British
ment and enhancement. IEEE Computer, 21(5), 61-72. Standards Institute.

Bogner, M. S. (2004). Misadventures in health care: BSI. (2003). BS15000-2:2003 IT service management.
Inside stories. Mahwah, NJ: Lawrence Erlbaum As- Code of practice for service management. British Stan-
sociates. dards Institute.

Booysen, H. A. S., & Eloff, J. H. P. (1995). A methodol- Canavan, S. (2003). An information security policy
ogy for the development of secure application systems. development guide for large companies. SANS Institute.
Proceeding of the 11th IFIP TC11 International Confer- Retrieved from http://www.SANS.org
ence on Information Security.
Canny, J. (2002, May). Collaborative filtering with pri-
Borgida, E., and Nisbett, R. E. (1977). The differential vacy. In Proceedings of the IEEE Symposium on Security
impact of abstract vs. concrete information on decisions. and Privacy (pp. 45-57). Oakland, CA.
Journal of Applied Social Psychology, 7, 258-271.
Canny, J. (2002, August). Collaborative filtering with
Bowyer, B. (2003). Toward a theory of deception. Inter- privacy via factor analysis. In Proceedings of the 25th An-
national Journal of Intelligence and Counterintelligence, nual International ACM SIGIR Conference on Research
16, 244-279. and Development in Information Retrieval (SIGIR 2002).
Tampere, Finland.
Boyer, A., Castagnos, S., Anneheim, J., Bertrand-Pierron,
Y., & Blanchard, J-P. (2006, December). Le filtrage col- Carnegie Mellon Computer Emergency Response Team
laboratif : Pistes d’applications dans le domaine bancaire (CERT). (2007). Computer emergency response team
et présentation de la technologie. Dossiers de la veille statistics. Retrieved April 25, 2007, from http://www.
technologique du Crédit Agricole S.A. : Number 27. cert.org/stats/cert_stats.html#incidents

Bresciani, P., Perini, A., Giorgini, P., Giunchiglia, F., & Carstens, D. S., Malone, L., & Bell, P. (2006). Applying
Mylopoulos, J. (2004) Tropos: An agent-oriented soft- chunking theory in organizational human factors pass-
ware development methodology. Autonomous Agents word guidelines. Journal of Information, Information
and Multi-Agent Systems, 8(3), 203-236. Technology, and Organizations, 1, 97-113.

Brooks, F. (1995, August). The mythical man-month: Es- Carstens, D. S., McCauley-Bell, P., Malone, L., &
says on software engineering, 20th Anniversary Edition DeMara, R. (2004). Evaluation of the human impact
(1st ed.). Boston: Addison-Wesley. of password authentication practices on information
security. Informing Science Journal, 7, 67-85.
Brostoff, S., Sasse, A., & Weirich, D. (2002). Transform-
ing the “weakest link”: A human-computer interaction Castagnos, S., & Boyer, A. (2006, April). From implicit
approach to usable and effective security. BT Technology to explicit data: A way to enhance privacy. Paper pre-
Journal, 19(3), 122-131. sented at CHI 2006 Workshop on Privacy-Enhanced
Personalization (PEP 2006), Montreal, Canada.

346
Compilation of References

Castagnos, S., & Boyer, A. (2006, August). A client/server Charlesworth, A. (2003). Privacy, personal informa-
user-based collaborative filtering algorithm: Model and tion and employment. Surveillance and Society, 1(2),
implementation. In Proceedings of the 17th European 217-ff.
Conference on Artificial Intelligence (ECAI2006). Riva
Chatziapostolou, D., & Furnell, S. M. (2007, April 11-13).
del Garda, Italy.
Assessing the usability of system-initiated and user-
Castagnos, S., & Boyer, A. (2007, April). Personalized initiated security events. In Proceedings of ISOneWorld
communities in a distributed recommender system. In 2007, Las Vegas. Washington DC: The Information
Proceedings of the 29th European Conference on Infor- Institute.
mation Retrieval (ECIR 2007). Rome, Italy.
Checkland, P., & Holwell, S. (1998). Information, systems
Castells, M. (1996). The information age: Economy, and information systems—making sense of the field.
society and culture: Volume I, The rise of the network Chichester, New York, Weinheim, Brisbane, Singapore,
society. Padstow, Cornwall: T.J. International Limited. Toronto: John Wiley & Sons Ltd.

Castro, J., Kolp, M., & Mylopoulos, J. (2002). Towards re- Checkland, P., & Scholes, J. (2000). Soft systems method-
quirements driven information systems engineering: The ology in action. Chichester, New York, Weinheim, Bris-
Tropos project. Information Systems, 27(6), 365-389. bane, Singapore, Toronto: John Wiley & Sons, Ltd.

CCEVS. (2005). Common criteria—Part 1: Introduction Chellapilla, K., Larson, K., Simard, P. Y., & Czerwinski,
and general model (Draft v3.0, Rev 2). Common Criteria M. (2005, July 21-22). Computers beat humans at single
Evaluation and Validation Scheme. character recognition in reading based human interaction
proofs (hips). In Proceedings of Ceas 2005—Second
CERT. (2002). Social engineering attacks via IRC and
Conference on E-mail and Anti-spam. Stanford Univer-
instant messaging (CERT® Incident Note IN-2002-03).
sity, California, USA.
Retrieved April 15, 2007, from http://www.cert.org/in-
cident_notes/IN-2002-03.html Chellapilla, K., Larson, K., Simard, P., & Czerwinski,
M. (2005). Designing human friendly human inter-
CESG. (1994). CESG electronic information systems se-
action proofs (hips).  In Chi ’05: Proceedings of the
curity: System security policies (Memorandum No.5).
Sigchi Conference on Human Factors in Computing
Chan, P. (1999, August). A non-invasive learning ap- Systems (pp. 711-720). New York: ACM Press.
proach to building Web user profiles. In Proceedings of
Chellapilla, K., & Simard, P. Y. (2005). Using machine
the Workshop on Web Usage Analysis and User Profiling,
learning to break visual human interaction proofs (hips). 
Fifth International Conference on Knowledge Discovery
In L. K. Saul, Y. Weiss, & L. Bottou (Eds.), Advances
and Data Mining, San Diego, California.
in neural information processing systems 17 (pp. 265-
Chang, K. (2004, April 6). In math, computers don’t lie. 272). Cambridge, MA: MIT Press.
Or do they? New York Times. Retrieved October 5, 2006,
Chew, M., & Baird, H. S. (2003). Baffletext: a human
from http://www.math.binghamton.edu/zaslav/Nytimes/
interactive proof.  In Proceedings of the 10th IS&T/Spie
+Science/+Math/sphere-packing.20040406.html
Document Recognition & Retrieval Conference. 
Charles, B., Moffett, J. D., Laney, R., & Bashar, N.
Chew, M., & Tygar, J. (2005, Jan). Collaborative filter-
(2006, May). A framework for security requirements
ing captchas. In H. S. Baird & D. P. Lopresti (Eds.),
engineering. In Proceedings of the 2006 International
Lecture notes in computer science (pp. 66-81). Springer
Workshop on Software Engineering for Secure Systems
Verlag.
SESS ‘06.

347
Compilation of References

Chew, M., & Tygar, J. D. (2004). Image recognition Cone, E. (2007). Hacking’s gift to IT. CIO Insight, March
captchas.  Isc (p. 268-279).  7. Retrieved August 15, 2007, from www.cioinsight.
com/article2/0,1397,21087886,00.asp
Chia, P.A., Ruighaver, A.B., & Maynard, S.B. (2002).
Understanding organizational security culture. In Pro- Connolly, T., & Gilani, N. (1982). Information search in
ceedings of PACIS2002, Japan. Retrieved February 20, judgment tasks: A regression model and some preliminary
2007, from http://www.dis.unimelb.edu.au/staff/sean/ findings. Organizational Behavior and Human Decision
research/ChiaCultureChapter.pdf Processes, 30(3), 330-350.

Chung L., Nixon, B. A., Yu, E., & Mylopoulos, J. (2000). Connolly, T., & Thorn, B. K. (1987). Predecisional
Non-functional requirements in software engineering. information acquisition: Effects of task variables on
Kluwer Academic Publishers. suboptimal search strategies. Organizational Behavior
and Human Decision Processes, 39(3), 397-416.
Chung, L. (1993). Dealing with security requirements
during the development of information systems. In C. Rol- Conry-Murray, A. (2001). The pros and cons of employee
land, F. Bodart, & C. Cauvet (Eds.), Proceedings of the 5th surveillance. Network Magazine, 12(2), 62-66.
International Conference Advanced Information Systems
Cooper, A. (1999). The inmates are running the asylum:
Engineering, CAiSE ’93 (pp. 234-251). Springer.
Why high-tech products drive us crazy and how to restore
Cialdini, R. (2001). Influence: Science and practice. the sanity. Sams Publishing.
Needham Heights, MA: Allyn & Bacon.
Cooper, J. (1989). Computer and communications secu-
Ciborra, C. (2004). Digital technologies and the duality rity. New York: McGraw-Hill.
of risk (Discussion Paper). Centre for Analysis of Risk
Core Impact. (n.d.). Core impact overview. Retrieved
and Regulation, London School of Economics.
April 15, 2007, from http://www.coresecurity.com/
CMU. (2000). The captcha project. Retrieved April 26, ?module=ContentMod&action=item&id=32
2007, from http://www.captcha.net 
Cowan, N. (2001). The magical number 4 in short-term
Coates, A. L. (2001). Pessimal print—a reverse Turing memory: A reconsideration of mental storage capacity.
test. In Proceedings of the Sixth International Confer- Behavioral and Brain Sciences, 24(1), 87-185.
ence on Document Analysis and Recognition (Icdar ’01)
CRA. (2003). Grand research challenges in information
(p. 1154).  Washington, DC: IEEE Computer Society.
systems. Washington DC: Computing Research Associa-
Cobb, M. (2006). Latest IM attacks still rely on social tion. Retrieved March 21, 2007, from http://www.cra.
engineering. Retrieved April 15, 2007, from http:// org/reports/gc.systems.pdf
searchsecurity.techtarget.com/tip/0,289483,sid14_
CRAMM – CCTA (Central Computer and Telecommu-
gci1220612,00.html
nications Agency, UK). Risk analysis and management
Cohen, D. (2004). Consumer front-end to WPA. Wi-Fi method. Retrieved from http://www.cramm.com/cramm.
Alliance. htm

Collins, H.M. (1990). Artificial experts: Social knowledge Cranor, L. F. (2005). Hey, that’s personal! Invited talk at
and intelligent machines. Cambridge, MA: MIT Press. the International User Modeling Conference (UM05).

Commission Nationale de l’Informatique et des Libertés Cranor, L. F., & Garfinkel, S. (Eds.). (2005). Security
(CNIL). (2002). La cybersurveillance sur les lieux de and usability: Designing secure systems that people can
travail. Paris. use. Sebastopol, CA: O’Reilly Media.

348
Compilation of References

Crawford, M. (2006). Social engineering replaces guns Data Protection Working Party—DPWP. (2002). Working
in today’s biggest bank heists. ComputerWorld (Austra- document on the surveillance of electronic communica-
lia), May. Retrieved April 15, 2007, from http://www. tions in the workplace (5401/01/Final).
computerworld.com.au/index.php/id;736453614
Data Protection Working Party—DPWP. (2004). Opinion
Crockett, L. J. (1994). The Turing test and the frame 4/2004 on the processing of personal data and video
problem. Ablex Publishing Corporation. surveillance (11750/02/Final).

Crook, R., Ince, D., & Nuseibeh, B. (2005, August 29- Datta, R., Li, J., & Wang, J. Z. (2005). Imagination:
September 2). On Modelling access policies: Relating a robust image-based captcha generation system.  In
roles to their organisational context. Proceedings of Proceedings of the 13th annual ACM International
the 13th IEEE International Requirements Engineering Conference on Multimedia (Multimedia ’05) (pp. 331-
Conference (RE’05), Paris (pp. 157-166). 334). New York: ACM Press.

CSI/FBI. (2006). Computer crime and security survey. David, J. (2002). Policy enforcement in the workplace.
Retrieved February 2007, from http://www.gocsi.com Computers and Security, 21(6), 506-513.

CSI/FBI. (2006). Eleventh annual CSI/FBI computer Davies, B. (2006, October 3). Full proof? Let’s trust it to
crime and security survey. Retrieved August 15, 2007, the black box. Times higher education supplement.
from http://i.cmpnet.com/gocsi/db_area/pdfs/f bi/
Davis, A. (1998). A comparison of techniques for the
FBI2006.pdf
specification of external system behavior. Communica-
Dacier, M. (1994). Vers une ´evaluation quantitative de tions of the ACM, 31(9), 1098-1115.
la s´ecurit´e, informatique. Phd thesis, Institut National
Davis, A., (1993). Software requirements: Objects,
Polytechnique de Toulouse.
functions and states. Upper Saddle River, NJ: Prentice
Dalziel, J. R., & Job, R. F. S. (1997). Motor vehicle acci- Hall.
dents, fatigue and optimism bias in taxi drivers. Accident
De Campeaux, D. (2002). Taking responsibility for
Analysis & Prevention, 29, 489-494.
worms and viruses. Communications of the ACM, 45(4),
Damle, P. (2002). Social engineering: A tip of the iceberg. 15-16.
Information Systems Control Journal, 2. Retrieved April
De Millo, R.A., Lipton, R.J., & Perlis, A.J. (1977). So-
15, 2007, from http://www.isaca.org/Template.cfm?Sect
cial processes and proofs of theorems and programs. In
ion=Home&CONTENTID=17032&TEMPLATE=/Con-
Proceedings of the 4th ACM Symposium on Principles
tentManagement/ContentDisplay.cfm
of Programming Language (pp. 206-214).
Dardenne, A., van Lamsweerde, A., & Fickas, S. (1993).
Dejoy, D.M. (1987). The optimism bias and traffic safety.
Goal-directed requirements acquisition. Science of
In Proceedings of the Human Factors and Ergonomics
Computer Programming, 20(1-2), 3-50.
Society (Vol. 31, pp. 756-759).
Darley, J. M. & Latané, B. (1968). Bystander intervention
Delbar, C., Mormont, M., & Schots, M. (2003). New
in emergencies: Diffusion of responsibility. Journal of
technology and respect for privacy at the workplace.
Personality and Social Psychology, 8, 377-383.
European Industrial Relations Observatory. Retrieved
Data Protection Working Party—DPWP. (2001). Opinion January 2006, from http://www.eiro.eurofound.eu.int/
8/2001 on the processing of personal data in the employ- print/2003/07/study/TN0307101S.html
ment context (5062/01/Final).
Denning, D. E. (1998). The limits of formal security
models. National Computer Systems Security Award Ac-

349
Compilation of References

ceptance Speech. Retrieved October 18, 1999, from www. perspectives. Information Systems Journal, 11(2), 127-
cs.georgetown.edu/~denning/infosec/award.html 154.

Desprocher, S., & Roussos, A. (2001). The jurisprudence Dhillon, G., & Torkzadeh, G. (2006). Value-focused as-
of surveillance: a critical look at the laws of intimacy sessment of information system security in organizations.
(Working Paper), Lex Electronica, 6(2). Retrieved March Information Systems Journal, 16, 293-314.
2006, from http://www.lex-electronica.org/articles/v6-
Dinnie, G. (1999). The second annual global information
2/
security survey. Information Management & Computer
Detert, J. R., Schroeder, R. G., & Mauriel, J. (2000). Security, 7(3), 112-120.
Framework for linking culture and improvement initia-
DOD. (1985). DoD trusted computer system evaluation
tives in organisations. The Academy of Management
criteria (The Orange Book). (DOD 5200.28-STD). United
Review, 25(4), 850-863.
States Department of Defense.
Devanbu, P. & Stubblebine, S. (2000). Software engi-
Dodge, R., & Ferguson, A. (2006). Using phishing for
neering for security: a roadmap. In Proceedings of the
user e-mail security awareness. In S. Fischer-Hübner, K.
Conference of the Future of Software Engineering.
Rannenberg, L. Yngström, & S. Lindskog (Eds.), Pro-
DeWitt, A. J., & Kuljis, J. (2006, July 12-14). Aligning ceedings of the IFIP TC-11 21st International Information
usability and security: a usability study of Polaris. In Security Conference (SEC 2006) (pp. 454-458). New
Proceedings of the Second Symposium on Usable Pri- York: Springer Science + Business Media Inc.
vacy and Security (SOUPS ‘06) (pp. 1-7). Pittsburgh,
Doherty, N.F. & Fulford, H. (2003). Information security
Pennsylvania.
policies in large organisations: Developing a conceptual
Dhamija, R., & Tygar, J. D. (2005). The battle against framework to explore their impact. In M. Khosrow-
phishing: Dynamic security skins. In Proceedings of Pour (Ed.), Information Technology & Organizations:
SOUPS (pp. 77-88). Trends, Issues, Challenges & Solutions, 2003 Informa-
tion Resources Management Association International
Dhillon, G. & Backhouse, J. (1996). Risks in the use of in-
Conference, Philadelphia, Pennsylvania, May 18-21 (pp.
formation technology within organizations. International
1052-1053). Hershey, PA: Idea Group Publishing.
Journal of Information Management, 16(1), 65-74.
Drake, P. (2005). Communicative action in information
Dhillon, G. (1997). Managing information systems
security systems: An application of social theory in a
security. London: Macmillan Press.
technical domain. Hull: University of Hull.
Dhillon, G. (1999). Managing and controlling computer
DSDM Consortium. (2006). White papers. Retrieved
misuse. Information Management and Computer Secu-
October 5, 2006, from http://www.dsdm.org/products/
rity, 7(4), 171-175.
white_ papers.asp
Dhillon, G. (2004). The challenge of managing infor-
DTI. (2000). Information security breaches survey
mation security. International Journal of Information
2000: Technical report. London: Department of Trade
Management, 24, 3-4.
& Industry.
Dhillon, G. (2007). Principles of information systems
DTI. (2002). Information security breaches survey
security: Text and cases. Danvers: John Wiley & Sons.
2002: Technical report. London: Department of Trade
Dhillon, G., & Backhouse, J. (2001) Current directions & Industry.
in IS security research: Toward socio-organizational

350
Compilation of References

DTI. (2004). Information security breaches survey Ezingeard, J.-N., Mcfadzean, E., Howlin, N., Ashenden,
2004: Technical report. London: Department of Trade D., & Birchall, D. (2004). Mastering alignment: bringing
& Industry. information assurance and corporate strategy together.
Paper presented at the European and Mediterranean
DTI. (2006). Information security breaches survey 2006.
Conference on Information Systems, Carthage.
Retrieved March 2006, from www.dti.gov.uk
Faulkner, W. (2000). The power and the pleasure? A
Duffy, R. R., Kalsher, M. J., & Wogalter, M. S. (1995).
research agenda for ‘making gender stick.’ Science,
Increased effectiveness of an interactive warning in a
Technology & Human Values, 25(1), 87-119.
realistic incidental product-use situation. International
Journal of Industrial Ergonomics, 15, 169-166. Fazekas, C. P. (2004). 1984 is still fiction: Electronic
monitoring in the workplace and U.S. privacy. Duke
Edworthy, J., & Adams, A. (1996). Warning design: A
Law & Technology Review, 15. Retrieved January 2006,
research prospective. London: Taylor and Francis.
from http://www.law.duke.edu/journals/dltr/articles/
Electronic Privacy Information Center (EPIC). (2002). PDF/2004DLTR0015.pdf
Possible content of a European framework on protection
Feer, F. (2004). Thinking about deception. Retrieved
of workers’ personal data. Workplace Privacy, European
March 11, 2006, from http://www.d-n-i.net/fcs/feer_
Commission. Retrieved October 2005, from //www.epic.
thinking_about_deception.htm
org/privacy/workplace
Ferguson, A. J. (2005). Fostering e-mail security aware-
Ellmer, E., Pernul, G., & Kappel, G. (1995). Object-ori-
ness: The West Point Carronade. Educause Quarterly,
ented modeling of security semantics. In Proceedings of
28, 54-57.
the 11th Annual Computer Society Applications Confer-
ence (ACSAC’95). Ferraiolo, D., Sandhu, R., Gavrila, S., Kuhn, R., & Chan-
dramouli, R. (2001, August). Proposed NIST standard
EMVCo. (2004). EMV integrated circuit card specifica-
for role-based access control. ACM Transactions on
tions for payment systems, Book 2. Retrieved March 20,
Information and Systems Security, 4(3), 224-74.
2007, from http://www.emvco.com/
Feynman, R. P. (2001). The pleasure of finding things
Ericsson, K. A., & Simon, H. A. (1993). Protocol analy-
out. Penguin Books.
sis: Verbal reports as data (Rev. ed.). Cambridge, MA:
The MIT Press. Findlay, P., & McKinlay, A. (2003). Surveillance, elec-
tronic communications technologies and regulation.
Espiner, T. (2006). Microsoft denies flaw in Vista.
Industrial Relations Journal, 34(4), 305-314.
ZDNet UK, December 5. Retrieved April 15, 2007,
from http://www.builderau.com.au/news/soa/Micro- Finnish Government Resolution. (2004). Strategy for
soft_denies_flaw_in_Vista/0,339028227,339272533,00. securing the functions vital to society. Helsinki, Finland:
htm?feed=rss Edita Prima Oy.

Eysenck, H. (1947). Dimensions of personality. London: Fisher, C., & Lovell, A. (2003). Business ethics and val-
Routledge & Kegan Paul. ues. Harlow, London, New York, Boston, San Francisco,
Toronto, Sydney, Singapore, Hong Kong, Tokyo, Seoul,
Ezingeard, J.-N., Mcfadzean, E., & Birchall, D. W.
Taipei, New Delhi, Cape Town, Madrid, Mexico City,
(2003). Board of directors and information security: A
Amsterdam. Munich, Paris, Milan: Prentice Hall.
perception grid. In S. Parkinson & J. Stutt (Eds.), Pa-
per 222 presented at British Academy of Management Flechais, I., Sasse, M. A., & Hailes, S. M. V. (2003,
Conference, Harrogate. August). Security engineering: Bringing security home:
a process for developing secure and usable systems. In

351
Compilation of References

Forrester, T., & Morrison, P. (1994). Computer ethics. MIT Press.

Forrester. (2006). Forrester research reports. Retrieved March 2006, from http://www.forrester.com

Forsythe, D. E. (2001). Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford University Press.

Franch, X., & Maiden, N. A. M. (2003, February 10-13). Modelling component dependencies to inform their selection. COTS-Based Software Systems, 2nd International Conference (ICCBSS 2003) (pp. 81-91). Lecture Notes in Computer Science 2580. Ottawa, Canada: Springer.

Freeman, S., Walker, M. R., & Latané, B. (1975). Diffusion of responsibility and restaurant tipping: Cheaper by the bunch. Personality and Social Psychology Bulletin, 1(4), 584-587.

Friedman, B., & Nissenbaum, H. (1997). Software agents and user autonomy. In Proceedings of the First International Conference on Autonomous Agents (pp. 466-469).

Friedman, B., Hurley, D., Howe, D. C., Felten, E., & Nissenbaum, H. (2002). Users' conceptions of Web security: A comparative study. Extended Abstracts of the CHI 2002 Conference on Human Factors in Computing Systems (pp. 746-747). New York: Association for Computing Machinery.

Friedman, B., Kahn, P., & Borning, A. (2006). Value sensitive design and information systems. In P. Zhang & D. Galletta (Eds.), Human-computer interaction in management information systems: Foundations (Vol. 4).

Friedman, B., Lin, P., & Miller, J. K. (2005). Informed consent by design. In L. F. Cranor & S. Garfinkel (Eds.), Security and usability (Chap. 24, pp. 495-521). O'Reilly Media, Inc.

Fulford, H., & Doherty, N. F. (2003). The application of information security policies in large UK-based organizations. Information Management and Computer Security, 11(3), 106-114.

Furnell, S. M. (2004). Using security: Easier said than done? Computer Fraud & Security, April, 6-10.

Furnell, S. M., Bryant, P., & Phippen, A. D. (2007). Assessing the security perceptions of personal Internet users. Computers & Security, 26(5), 410-417.

Furnell, S. M., Jusoh, A., & Katsabas, D. (2006). The challenges of understanding and using security: A survey of end-users. Computers & Security, 25(1), 27-35.

Furnell, S. M., Katsabas, D., Dowland, P. S., & Reid, F. (2007, May 14-16). A practical usability evaluation of security features in end-user applications. In Proceedings of the 22nd IFIP International Information Security Conference (IFIP SEC 2007), Sandton, South Africa. New York: Springer.

Garg, A., Curtis, J., & Halper, H. (2003). Quantifying the financial impact of information security breaches. Information Management and Computer Security, 11(2), 74-83.

Garland, D. J., Hopkin, D., & Wise, J. A. (1999). Handbook of aviation human factors. Mahwah, NJ: Lawrence Erlbaum Associates.

Gartner. (2005). Retrieved April 1, 2007, from http://www.gartner.com/130100/130115/gartners_hyp_f2.gif

Gaston, S. J. (1996). Information security: Strategies for successful management. Toronto: CICA.

Gaudin, S. (2007). Hackers use Middle East fears to push Trojan attack. Information Week, April 9. Retrieved April 15, 2007, from http://www.informationweek.com/windows/showArticle.jhtml?articleID=198900155

Gaudin, S. (2007). Human error more dangerous than hackers. TechWeb. http://www.techweb.com/showArticle.jhtml?articleID=197801676

Gaunard, P., & Dubois, E. (2003, May 26-28). Bridging the gap between risk analysis and security policies: Security and privacy in the age of uncertainty. IFIP TC11 18th International Conference on Information Security (SEC2003) (pp. 409-412). Athens, Greece: Kluwer.


Gerber, M., Solms, R. V., & Overbeek, P. (2001). Formalizing information security requirements. Information Management & Computer Security, 9(1), 32-37.

getafreelancer.com Web site. (2006). Retrieved April 27, 2007, from http://www.getafreelancer.com/projects/Data-Processing-Data-Entry/Data-Entry-Solve-CAPTCHA.html

Gietzmann, M. B., & Selby, M. J. P. (1994). Assessment of innovative software technology: Developing an end-user-initiated interface design strategy. Technology Analysis & Strategic Management, 6, 473-483.

Gillies, A. C. (1997). Software quality: Theory and management (2nd ed.). London/Boston: International Thomson Computer Press.

Gimp 2.2. (n.d.). Retrieved April 26, 2007, from http://www.gimp.org/

Giorgini, P., Massacci, F., & Mylopoulos, J. (2003, October 13-16). Requirement engineering meets security: A case study on modelling secure electronic transactions by VISA and Mastercard. The 22nd International Conference on Conceptual Modelling (ER'03) (LNCS 2813, pp. 263-276). Chicago: Springer.

Giorgini, P., Massacci, F., Mylopoulos, J., & Zannone, N. (2005). Modelling social and individual trust in requirements engineering methodologies. Proceedings of the 3rd International Conference on Trust Management (iTrust 2005). LNCS 3477. Heidelberg: Springer-Verlag.

Glazer, R. (1993). Measuring the value of information: The information intensive organization. IBM Systems Journal, 32(1), 99-110.

Gligor, V. D. (2005, September). Guaranteeing access in spite of distributed service-flooding attacks. In Proceedings of the 12th ACM Conference on Computer and Communications Security (pp. 80-96). Lecture Notes in Computer Science, 3364.

Global Platform. (2005). GlobalPlatform smart card security target guidelines. Retrieved March 20, 2007, from http://www.globalplatform.org/

Golbeck, J. (2002). Cognitive load and memory theories. Retrieved April 2, 2007, from http://www.cs.umd.edu/class/fall2002/cmsc838s/tichi/printer/memory.html

Golbeck, J. (2006). Generating predictive movie recommendations from trust in social networks. In Proceedings of the Fourth International Conference on Trust Management. USA.

Goldberg, D., Nichols, D., Oki, B., & Terry, D. (1992). Using collaborative filtering to weave an information tapestry [Special Issue]. Communications of the ACM, 35, 61-70.

Goldstein, W. M., & Hogarth, R. M. (1997). Research on judgment and decision-making: Currents, connections, and controversies. Cambridge, UK: Cambridge University Press.

Golle, P., & Ducheneaut, N. (2005). Preventing bots from playing online games. Computer Entertainment, 3(3), 3.

Gollman, D. (1999). Computer security. Wiley.

Goodman, S. (1991). New technology and banking: Problems and possibilities for developing countries, actor perspective. University of Lund, Sweden: Research Policy Institute.

Google's audio captcha. (2006, November). Retrieved April 29, 2007, from http://googleblog.blogspot.com/2006/11/audio-captchas-when-visual-images-are.html

Gordon Training Institute. Conscious competence learning model. www.gordontraining.com

Gordon, L. A., Loeb, M. P., Lucyshyn, W., & Richardson, R. (2006). 2006 CSI/FBI computer crime and security survey. Baltimore: Computer Security Institute.

Gottesdiener, E. (2002). Requirements by collaboration. Boston: Addison-Wesley.


Gragg, D. (2002). A multi-level defense against social engineering. SANS Institute. Retrieved September 17, 2003, from http://www.sans.org/rr/papers/index.php?id=920

Gragg, D. (2003). A multi-level defense against social engineering. SANS Institute Information Security Reading Room. Retrieved April 15, 2007, from http://www.sans.org/reading_room/papers/51/920.pdf

Graham, D. B., & Allinson, N. M. (1998). Characterizing virtual eigensignatures for general purpose face recognition.

Granger, S. (2001). Social engineering fundamentals. Security Focus. Retrieved September 18, 2003, from http://www.securityfocus.com/printable/infocus/1527

Gregoire, J., Jr. (2007). CIO confidential: The Manhattan effect. CIO Magazine, 20(9), 30-34.

Grice, G. R. (1948). The relation of secondary reinforcement to delayed reward in visual discrimination learning. Journal of Experimental Psychology, 38, 1-16.

Grimm, R., & Bershad, B. (2001). Separating access control policy, enforcement, and functionality in extensible systems. ACM Transactions on Computer Systems, 19(1), 36-70.

Gross, D., & Yu, E. (2001, August 27-31). Evolving system architecture to meet changing business goals: An agent and goal-oriented approach. The 5th IEEE International Symposium on Requirements Engineering (RE 2001) (pp. 316-317). Toronto, Canada.

Guess the google. (n.d.). Retrieved April 27, 2007, from http://grant.robinson.name/projects/guess-the-google/

Gürses, S., Jahnke, J. H., Obry, C., Onabajo, A., Santen, T., & Price, M. (2005, October). Eliciting confidentiality requirements in practice. In Proceedings of the 2005 Conference of the Centre for Advanced Studies on Collaborative Research.

Gutmann, P. (2003). Plug-and-play PKI: A PKI your mother can use. In Proceedings of the 12th USENIX Security Symposium (pp. 45-58). Berkeley: USENIX Association.

Habermas, J. (1984). The theory of communicative action, volume 1: Reason and the rationalization of society. Boston: Beacon Press.

Habermas, J. (1989). The theory of communicative action, volume 2: Lifeworld and system: A critique of functionalist reason. Boston: Beacon Press.

Hacker, S. (1989). Pleasure, power and technology: Some tales of gender, engineering, and the cooperative workplace. Boston: Unwin Hyman.

Haley, C. B., Laney, R., & Bashar, N. (2004, March). Deriving security requirements from crosscutting threat descriptions. In Proceedings of the 3rd International Conference on Aspect-Oriented Software Development.

Hammond, K. R. (2000). Judgments under stress. New York: Oxford University Press.

Han, P., Xie, B., Yang, F., Wang, J., & Shen, R. (2004, May). A novel distributed collaborative filtering algorithm and its implementation on P2P overlay network. In Proceedings of the Eighth Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD04). Sydney, Australia.

Hardee, J. B., West, R., & Mayhorn, C. B. (2006). To download or not to download: An examination of computer security decision-making. Association of Computing Machinery: Interactions, 13(3), 32-37.

Harl. (1997). The psychology of social engineering. Retrieved March 12, 2006, from http://searchlores.org/aaatalk.htm

Harris, A. J., & Yen, D. C. (2002). Biometric authentication: Assuring access to information. Information Management & Computer Security, 10(1), 12-19.

Helander, M. (1997). The human factors profession. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (2nd ed., pp. 3-16). New York: Wiley.

Helmer, G., Wong, J., Slagell, M., Honavar, V., Miller, L., & Lutz, R. (2002). A software fault tree approach to requirements analysis of an intrusion detection system. In P. Loucopoulos & J. Mylopoulos (Eds.), Special Issue on Requirements Engineering for Information Security. Requirements Engineering (Vol. 7, No. 4, pp. 177-220).


Helokunnas, T., & Kuusisto, R. (2003). Strengthening leading situations via time-divergent communication conducted in Ba. The E-Business Review, 3(1), 78-81.

Helokunnas, T., & Kuusisto, R. (2003). Information security culture in a value net. In Proceedings of the 2003 IEEE International Engineering Management Conference (pp. 190-194). Albany, NY, USA.

Hensley, G. A. (1999). Calculated risk: Passwords and their limitations. Retrieved April 2, 2007, from http://www.infowar.com/articles/99article_120699a_j.shtml

Herrmann, D. (2007). Complete guide to security and privacy metrics. Boca Raton, FL: Auerbach Publications.

Herrmann, G., & Pernul, G. (1998). Towards security semantics in workflow management. In Proceedings of the 31st Hawaii International Conference on Systems Sciences.

Herrmann, G., & Pernul, G. (1999). Viewing business-process security from different perspectives. International Journal of Electronic Commerce, 3(3), 89-103.

Hickey, A. M., & Davis, A. M. (2004). A unified model of requirements elicitation. Journal of Management Information Systems, 20(4), 65-84.

Hickey, A., & Davis, A. (2002). The role of requirements elicitation techniques in achieving software quality. In C. Salinesi, B. Regnell, & K. Pohl (Eds.), Proceedings of the Requirements Engineering Workshop: Foundations for Software Quality (REFSQ '02) (pp. 165-171). Essen, Germany: Essener Informatik Beiträge.

Higgins, H. N. (1999). Corporate system security: Towards an integrated management approach. Information Management & Computer Security, 7(5), 217-222.

Hinde, S. (2002). Security surveys spring crop. Computers and Security, 21(4), 310-321.

Hinde, S. (2003). Cyber-terrorism in context. Computers and Security, 22(3), 188-192.

Hinson, G. (2006). Seven myths about information security metrics. The ISSA Journal, July, 43-48.

Hirsch, C. (2005). Do not ship trojan horses. In P. Dowland, S. Furnell, & B. Thuraisingham (Eds.), Security management, integrity, and internal control in information systems. Fairfax, VA: Springer.

Hodges, A. C. (2006). Bargaining for privacy in the unionized workplace. The International Journal of Comparative Labour Law and Industrial Relations, 22(2), 147-182.

Hoeren, T., & Eustergerling, S. (2005). Privacy and data protection at the workplace in Germany. In S. Nouwt, B. R. de Vries, & C. Prins (Eds.), Reasonable expectations of privacy (pp. 211-244). The Hague, PA: TMC Asser Press.

Hofmann, H., & Lehner, F. (2001). Requirements engineering as a success factor in software projects. IEEE Software, 18(4), 58-66.

Hofstede, G. (1984). Culture's consequences: International differences in work-related values. Beverly Hills, London, New Delhi: Sage Publications.

Holbrook, H. (1990). A scenario-based methodology for conducting requirements elicitation. ACM SIGSOFT Software Engineering Notes, 15(1), 95-104.

Hollows, P. (2005). Hackers are real-time. Are you? Sarbanes-Oxley Compliance Journal, February 28. Retrieved April 15, 2007, from http://www.s-ox.com/Feature/detail.cfm?ArticleID=623

Holz, T., & Raynal, F. (2006). Malicious malware: Attacking the attackers (part 1). Security Focus, January 31. Retrieved April 15, 2007, from http://www.securityfocus.com/print/infocus/1856

Hone, K., & Eloff, J. H. P. (2002). Information security policy: What do international security standards say. Computers & Security, 21(5), 402-409.

Hone, K., & Eloff, J. H. P. (2002). What makes an effective information security policy. Network Security, 20(6), 14-16.


Hong, K., Chi, Y., Chao, L., & Tang, J. (2003). An integrated system theory of information security management. Information Management and Computer Security, 11(5), 243-248.

Howell, J. (1999). Introduction to face recognition. Boca Raton, FL: CRC Press, Inc.

Hsia, P., Samuel, J., Gao, J., Kung, D., Toyoshima, Y., & Chen, C. (1994). Formal approach to scenario analysis. IEEE Software, 11(2), 33-41.

Huitt, W. (2003). A systems model of human behavior. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved March 20, 2007, from http://chiron.valdosta.edu/whuitt/materials/sysmdlo.html

Hussin, H., King, M., & Cragg, P. (2002). IT alignment in small firms. European Journal of Information Systems, 11, 108-127.

I.S.O. (2000). Information technology. Code of practice for information security management, ISO 17799. International Standards Organization.

Ilet, D. (2005). Inside the biggest bank raid that never was. Zdnet. http://news.zdnet.co.uk/security/0,1000000189,39191956,00.htm

Imagemagick 6.2.8. (n.d.). Retrieved April 26, 2007, from http://www.imagemagick.org/

Ince, D. (1994). An introduction to software quality assurance and its implementation. London: McGraw-Hill.

Index of bongard problems. (n.d.). Retrieved April 26, 2007, from http://www.cs.indiana.edu/~hfoundal/res/bps/bpidx.htm

International Labour Office—ILO. (1997). Protection of workers' personal data. Geneva.

International Working Group on Data Protection in Telecommunications—IWGDPT. (1996). Report and recommendations on telecommunications and privacy in labour relationships. Retrieved January 2006, from http://www.datenschutz-berlin.de/doc/int/iwgdpt/dsarb_en.htm

ISM3. Information security management maturity model. www.ism3.com

ISO 17799. (1999). Information security management — Part 1: Code of practice for information security. London: British Standards Institution.

ISO. (2000). BS ISO/IEC 17799:2000, BS7799-1:2000. Information technology. Code of practice for information security management. International Standards Organisation.

ISO. (2005). Information technology—security techniques—code of practice for information security management (ISO/IEC 17799:2005). London: BSI.

ISO. (2005). Information technology—security techniques—information security management systems—requirements (ISO/IEC 27001:2005(E)). London: BSI.

ITGI. (2003). IT control objectives for Sarbanes-Oxley. Rolling Meadows, IL: IT Governance Institute.

ITGI. (2005). COBIT 4.0: Control objectives and management guidelines. Rolling Meadows, IL: Information Technology Governance Institute.

Ives, B., Walsh, K., & Schneider, H. (2004). The domino effect of password reuse. Communications of the ACM, 47(4), 75-78.

Jackson Higgins, K. (2006). Phishing your own users. Retrieved April 26, 2007, from http://www.darkreading.com/document.asp?doc_id=113055

Jackson Higgins, K. (2006). Social engineering gets smarter. Retrieved April 26, 2007, from http://www.darkreading.com/document.asp?doc_id=97382

Jackson, J. W., Ferguson, A. J., & Cobb, M. J. (2005, October 12-22). Building a university-wide automated information assurance awareness exercise. In Proceedings of the 35th ASEE/IEEE Frontiers in Education Conference, Indianapolis, IN (pp. 7-11).

Jahner, S., & Krcmar, H. (2005). Beyond technical aspects of information security: Risk culture as a success factor for IT risk management. Paper presented at Americas Conference on Information Systems 2005.


Jaques, R. (2007). UK smoking ban opens doors for hackers. Retrieved April 26, 2007, from http://www.vnunet.com/vnunet/news/2183215/uk-smoking-ban-opens-doors

Johnston, J., Eloff, J. H. P., & Labuschagne, L. (2003). Security and human computer interfaces. Computers & Security, 22(8), 675-684.

Kan, S. H. (1995). Metrics and models in software quality engineering. Boston: Addison-Wesley.

Katsabas, D. (2004). IT security: A human computer interaction perspective. Master's thesis, University of Plymouth, UK.

Katsabas, D., Furnell, S. M., & Dowland, P. S. (2006, April). Evaluation of end-user application security from a usability perspective. In K. K. Dhanda & M. G. Hunter (Eds.), Proceedings of the 5th Annual ISOneWorld Conference and Convention, Las Vegas, USA (pp. 19-21). Washington, DC: The Information Institute.

Kazman, R., Klein, M., & Clements, P. (2000). ATAM: Method for architectural evaluation (CMU/SEI-2000-TR-004). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University.

Kelly, M. (2007). Chocolate the key to uncovering PC passwords. The Register, April 17. Retrieved April 26, 2007, from http://www.theregister.co.uk/2007/04/17/chocolate_password_survey/

Kesan, J. P. (2002). Cyber-working or cyber-shirking?: A first principles examination of electronic privacy in the workplace. Florida Law Review, 289ff.

Killmeyer-Tudor, J. (2000). Information security architecture: An integrated approach to security in the organisation. CRC Press.

Kirk, J. (2006). Free CDs highlight security weaknesses. PC World. http://www.pcworld.idg.com.au/index.php/id;2055135135;fp;2;fpid;1

Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: The MIT Press.

Kochanski, G., Lopresti, D., & Shih, C. (2002, September). A reverse Turing test using speech. In Proceedings of the International Conferences on Spoken Language Processing (ICSLP). Denver, Colorado.

Kotulic, A. G., & Clark, J. G. (2004). Why there aren't more information security research studies. Information & Management, 41, 597-607.

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Kumar, K. G. (2005). Towards financial inclusion. Retrieved April 16, 2007, from http://www.thehindubusinessline.com/2005/12/13/stories/2005121301240200.htm

Kuo, C., Perrig, A., & Walker, J. (2006). Designing an evaluation method for security user interfaces: Lessons from studying secure wireless network configuration. ACM Interactions, 13(3), 28-31.

Kuong, J. (1996). Client server controls, security and audit (enterprise protection, control, audit, security, risk management and business continuity). Masp Consulting Group.

Kuusisto, R. (2004). Aspects on availability. Helsinki, Finland: Edita Prima Oy.

Kuusisto, R. (2006). Flowing of information in decision systems. In Proceedings of the 39th Hawaii International Conference on System Sciences (abstract on p. 148, paper published in electronic form). Kauai, HI: University of Hawai'i at Manoa.

Kuusisto, R., Nyberg, K., & Virtanen, T. (2004). Unite security culture—may a unified security culture be plausible. In A. Jones (Ed.), Proceedings of the 3rd European Conference on Information Warfare and Security (pp. 221-230). London: Academic Conferences Limited.

Kuusisto, T., Kuusisto, R., & Nissen, M. (2007). Implications of information flow priorities for interorganizational crisis management. In L. Armistead (Ed.), Proceedings of the 2nd International Conference on I-Warfare and Security (pp. 133-140). Monterey, CA: Naval Postgraduate School.


Kwok, L., & Longley, D. (1999). Information security management & modelling. Information Management and Computer Security, 7(1), 30-39.

Lahaie, D. (2005). The impact of corporate memory loss. Leadership in Health Services, 18, 35-48.

Lane, V. P. (1985). Security of computer based information systems. Macmillan Education Ltd.

Langford, D. (1995). Practical computer ethics. McGraw-Hill.

Lasprogata, G., King, N., & Pillay, S. (2004). Regulation of electronic employee monitoring: Identifying fundamental principles of employee privacy through a comparative study of data privacy legislation in the European Union, United States and Canada. Stanford Technology Law Review, 4. Retrieved March 2006, from http://stlr.stanford.edu/STLR/Article?04_STLR_4

Latour, B., & Woolgar, S. (1979). Laboratory life: The social construction of scientific facts. Princeton University Press.

Lau, F. (1999). Toward a framework for action research in information systems studies. Information Technology & People, 12(2), 148-175.

Lauesen, S. (2002). Software requirements: Styles and techniques. London: Addison-Wesley.

Lee, J., & Lee, Y. (2002). A holistic model of computer abuse within organisations. Information Management & Computer Security, 10(2), 57-63.

Lee, S.-W., Gandhi, R., Muthurajan, D., Yavagal, D., & Ahn, G.-J. (2006, May). Building problem domain ontology from security requirements in regulatory documents. In Proceedings of the 2006 International Workshop on Software Engineering for Secure Systems SESS '06.

Lehto, M. R., & Miller, J. M. (1986). Warnings, volume 1: Fundamentals, design, and evaluation methodologies. Ann Arbor, MI: Fuller Technical.

Leng, T. (1997). Internet regulation in Singapore. Computer Law & Security Report, 13(2), 115-119.

Levine, R. (2003). The power of persuasion. Hoboken, NJ: John Wiley & Sons Inc.

Lewis, J. (2003). Cyber terror: Missing in action. Knowledge, Technology & Policy, 16(2), 34-41.

Leyden, J. (2006). MS anti-spyware labels Symantec as Trojan. The Register, February 14. Retrieved March 21, 2007, from http://www.theregister.co.uk/2006/02/14/ms_antispyware_false_positive

Leyden, J. (2007). Data theft replaces malware as top security concern. The Register, April 19. Retrieved April 19, 2007, from http://www.theregister.co.uk/2007/04/19/security_fears_poll/

Linden, J. V. (2004). The trouble with blood is it all looks the same: Transfusion errors. In M. S. Bogner (Ed.), Misadventures in health care: Inside stories (pp. 13-25). Mahwah, NJ: Lawrence Erlbaum Associates.

Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling non-response in social science research. Journal of Agricultural Education, 42(4), 43-53.

Linsky, J., Bourk, T., Findikli, A., Hulvey, R., Ding, S., Heydon, R., et al. (2006, August). Simple pairing (Whitepaper, revision v10r00).

Liu, L., & Yu, E. (2003). Designing information systems in social context: A goal and scenario modelling approach. Information Systems, 29(2), 187-203.

Liu, L., & Yu, E. (2004). Intentional modelling to support identity management. In P. Atzeni et al. (Eds.), Proceedings of the 23rd International Conference on Conceptual Modelling (ER 2004) (pp. 555-566). LNCS 3288. Berlin, Heidelberg: Springer-Verlag.

Liu, L., Yu, E., & Mylopoulos, J. (2002, October 16). Analyzing security requirements as relationships among strategic actors. The 2nd Symposium on Requirements Engineering for Information Security (SREIS'02). Raleigh, NC.

Liu, L., Yu, E., & Mylopoulos, J. (2003, September). Security and privacy requirements analysis within a social setting. Proceedings of International Conference on Requirements Engineering (RE'03) (pp. 151-161). Monterey, CA.


Loch, K. D., Carr, H. H., & Warkentin, M. E. (1992). Threats to information systems: Today's reality, yesterday's understanding. MIS Quarterly, 16(2), 173-186.

Lodderstedt, T., Basin, D., & Doser, J. (2002). SecureUML: A UML-based modelling language for model-driven security. In Proceedings of UML '02: The 5th International Conference on the Unified Modelling Language, Dresden, Germany (pp. 426-441).

Loftus, E. F., Dark, V. J., & Williams, D. (1979). Short-term memory factors in ground controller/pilot communication. Human Factors, 21, 169-181.

Longbough. (1996). Internet security and insecurity. Management Advisory Publications.

Lopresti, D. (2005, May). Leveraging the captcha problem. In H. S. Baird & D. P. Lopresti (Eds.), Human interactive proofs: Second international workshop (HIP 2005) (Vol. 3517, p. 97). Springer Verlag.

Lortz, V., Roberts, D., Erdmann, B., Dawidowsky, F., Hayes, K., Yee, J. C., et al. (2006). Wi-Fi simple config specification (version 1.0a).

Low, J., & Woolgar, S. (1993). Managing the socio-technical divide: Some aspects of the discursive structure of information systems development. In P. Quintas (Ed.), Social dimensions of systems engineering: People, processes and software development (pp. 34-59). New York/London: Ellis Horwood.

Luhmann, N. (1999). Ökologische Kommunikation (3. Auflage). Opladen/Wiesbaden: Westdeutscher Verlag.

Lytz, R. (1995). Software metrics for the Boeing 777: A case study. Software Quality Journal, 4(1), 1-13.

MacKenzie, D. A. (2001). Mechanizing proof: Computing, risk, and trust. Cambridge, MA/London: MIT Press.

MacKenzie, D. A. (2004). Computers and the cultures of proving. Paper presented at the Royal Society Discussion Meeting, London.

Macromedia Web site. (n.d.). Retrieved April 27, 2007, from http://www.macromedia.com/

Maier, R. (2002). Knowledge management systems: Information and communication technologies for knowledge management. Berlin, Heidelberg, New York: Springer-Verlag.

Malaska, P., & Holstius, K. (1999). Visionary management. Foresight, 1(4), 353-361.

Marett, K., Biros, D., & Knode, M. (2004). Self-efficacy, training effectiveness, and deception detection: A longitudinal study of lie detection training. Lecture Notes in Computer Science, 3073, 187-200.

Mark, R. (2006). Teens charged in VA laptop theft. Internetnews. http://www.internetnews.com/bus-news/article.php/3624986

Martin, B. (2004). Telling lies for a better world? Social Anarchism, 35, 27-39.

Martins, A., & Eloff, J. (2002). Information security culture. In Proceedings of IFIP TC11 17th International Conference on Information Security (pp. 203-214). Cairo, Egypt: IFIP Conference Proceedings 214.

Marx, K., & Engels, F. (1968). Selected works in one volume. London: Lawrence & Wishart.

Mayer, N., Rifaut, A., & Dubois, E. (2005). Towards a risk-based security requirements engineering framework. Workshop on Requirements Engineering for Software Quality (REFSQ'05), at the Conference for Advanced Information Systems Engineering (CAiSE), Porto, Portugal.

Mayhorn, C. B., Lanzolla, V. R., Wogalter, M. S., & Watson, A. M. (2005). Personal digital assistants (PDAs) as medication reminding tools: Exploring age differences in usability. Gerontechnology, 4(3), 128-140.

Mayhorn, C. B., Rogers, W. A., & Fisk, A. D. (2004). Designing technology based on cognitive aging principles. In S. Kwon & D. C. Burdick (Eds.), Gerotechnology: Research and practice in technology and aging (pp. 42-53). New York: Springer Publishing.


Mayhorn, C. B., Stronge, A. J., McLaughlin, A. C., & Rogers, W. R. (2004). Older adults, computer training, and the systems approach: A formula for success. Educational Gerontology, 30(3), 185-203.

Mayhorn, C. B., Wogalter, M. S., & Bell, J. L. (2004). Are we ready? Misunderstanding homeland security safety symbols. Ergonomics in Design, 12(4), 6-14.

McCauley, J. (1997). Legal ethics and the Internet: A US perspective. Computer Law & Security Report, 13(2), 110-114.

McCauley-Bell, P. R., & Crumpton, L. L. (1998). The human factors issues in information security: What are they and do they matter? In Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, USA (pp. 439-442).

McCune, J. M., Perrig, A., & Reiter, M. K. (2005). Seeing-is-believing: Using camera phones for human-verifiable authentication. In Proceedings of the IEEE Symposium on Security and Privacy.

McDermott, J. (2001). Abuse-case-based assurance arguments. In Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC).

McDermott, J., & Fox, C. (1999). Using abuse case models for security requirements analysis. Proceedings of the 15th IEEE Annual Computer Security Applications Conference, Scottsdale, USA (pp. 55-67).

Mcfadzean, E., Ezingeard, J.-N., & Birchall, D. (2004). Anchoring information security governance research. In G. Dhillon & S. Furnell (Eds.), Proceedings of the Third Security Conference. Las Vegas, Nevada, USA.

McMillan, R. (2006). Third Word exploit released. IDG News Service. Retrieved April 15, 2007, from http://www.techworld.com/applications/news/index.cfm?newsID=7577&pagtype=samechan

Mead, N. R., & Stehney, T. (2005). Software engineering for secure systems (SESS)—building trustworthy applications: Security quality requirements engineering (SQUARE) methodology. ACM SIGSOFT Software Engineering Notes, Proceedings of the 2005 Workshop on Software Engineering for Secure Systems—Building Trustworthy Applications SESS '05, 30(4).

Mead, N., & Stehney, T. (2005). Security quality requirements engineering (SQUARE) methodology. ACM SIGSOFT Software Engineering Notes, 30(4), 1-7.

Mendell, R. L. (2007). The psychology of information security. The ISSA Journal, March, 8-11.

Menzies, R. (1993). Information systems security. In J. Peppard (Ed.), IT strategy for business. London: Pitman Publishing.

Merleau-Ponty, M. (1968). The visible and invisible. Evanston, IL: Northwestern University Press.

Messmer, E. (2007). RSA '07: Bruce Schneier casts light on psychology of security. CSO Online, February 7. Retrieved August 15, 2007, from http://www.networkworld.com/news/2007/020707-rsa-schneier.html

Michailova, A., Doche, M., & Butler, M. (2002). Constraints for scenario-based testing of object-oriented programs (Technical Report). Electronics and Computer Science Department, University of Southampton.

Microsoft. (2007). What's not secure. Help protect yourself: Security in Office tutorial. Microsoft Office Online, Microsoft Corporation. Retrieved March 21, 2007, from http://office.microsoft.com/training/training.aspx?AssetID=RP010425901033&CTT=6&Origin=RP010425891033

Miller, A. (1991). Personality types, learning styles and educational goals. Educational Psychology, 11(3-4), 217-238.

Miller, B. N., Konstan, J. A., & Riedl, J. (2004, July). PocketLens: Toward a personal recommender system. ACM Transactions on Information Systems, 22.

Miller, G. A. (1956). The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.


Ministry of Defence (MoD). (1997). Requirements for safety related software in defence equipment. Retrieved October 5, 2006, from http://www.dstan.mod.uk/data/00/055/01000200.pdf

Ministry of Defence (MoD). (2004). Interim defence standard 00-56. Retrieved October 5, 2006, from http://www.dstan.mod.uk/data/00/056/01000300.pdf

Misra, D., & Gaj, K. (2006, February). Face recognition CAPTCHAs. In Proceedings of the Advanced Int'l Conference on Telecommunications and Int'l Conference on Internet and Web Applications and Services (AICT-ICIW '06) (p. 122). Washington, DC: IEEE Computer Society.

Misra, D., & Gaj, K. (2006, July). Human friendly CAPTCHAs—simple games (Poster). Symposium on Usable Privacy and Security (SOUPS). Pittsburgh, Pennsylvania, USA.

Mitchell, R. C., Marcella, R., & Baxter, G. (1999). Corporate information security. New Library World, 100(1150), 213-277.

Mitnick, K. (2002). The art of deception. Indianapolis, Indiana: Wiley Publishing, Inc.

Mitnick, K. D., & Simon, W. L. (2002). The art of deception: Controlling the human element of security. Indiana: Wiley Publishing, Inc.

Mitrou, E., & Karyda, M. (2006). Employees' privacy vs. employers' security: Can they be balanced. Telematics and Informatics Journal, 23(3), 164-178.

Modhvadia, S., Daman, S., et al. (2002). Engaging the board: Benchmarking information assurance. Cambridge: Information Assurance Advisory Council.

Mohney, D. (2006). Defeating social engineering with voice analytics. Black Hat Briefings, Las Vegas, August 2-3, 2006. Retrieved April 25, 2007, from http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Mohney.pdf

Montage a google. (n.d.). Retrieved April 27, 2007, from http://grant.robinson.name/projects/montage-a-google/

Morgan, G., Fischhoff, B., Bostrom, A., & Atman, C. (2002). Risk communication: A mental models approach. New York: Cambridge University Press.

Mori, G., & Malik, J. (2003). Recognizing objects in adversarial clutter: Breaking a visual captcha. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. I-134–I-141).

Mouratidis, H., Giorgini, P., & Manson, G. (2004, April 13-17). Using security attack scenarios to analyse security during information systems design. Proceedings of the 6th International Conference on Enterprise Information Systems, Porto, Portugal.

Mouratidis, H., Giorgini, P., & Manson, G. A. (2003). Integrating security and systems engineering: Towards the modelling of secure information systems. Proceedings of the 15th Conference on Advanced Information Systems Engineering (CAiSE 03) (Vol. LNCS 2681, pp. 63-78). Klagenfurt, Austria: Springer.

Mouratidis, H., Giorgini, P., & Schumacher, M. (2003). Security patterns for agent systems. Proceedings of the 8th European Conference on Pattern Languages of Programs, Irsee, Germany.

Mouratidis, H., Kolp, M., Faulkner, S., & Giorgini, P. (2005, July). A secure architectural description language for agent systems. Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS05). Utrecht, The Netherlands: ACM Press.

Myers, G. J. (1979). The art of software testing. New York: Wiley.

Myers, M. D., & Avison, D. E. (Eds.). (2002). Qualitative research in information systems: A reader. London: Sage Publications.

Nair, V., Sofield, A., & Mulbagal, V. (2006). Building a more inclusive financial system in India. Retrieved April 16, 2007, from http://www.diamondconsultants.com/PublicSite/ideas/perspectives/downloads/India%20Rural%20Banking_Diamond.pdf


Naor, M. (1996). Verification of a human in the loop or identification via the Turing test. Retrieved April 27, 2007, from http://www.wisdom.weizmann.ac.il/%7Enaor/PAPERS/human_abs.html

Naraine, R. (2006). Voice phishers dialing for PayPal dollars. eWeek, July 7. Retrieved April 15, 2007, from http://www.eweek.com/article2/0,1895,1985966,00.asp

Naraine, R. (2006). Hackers use BBC news as IE attack lure. eWeek, March 30. Retrieved April 15, 2007, from http://www.eweek.com/article2/0,1895,1944579,00.asp

Naraine, R. (2006). Drive-by IE attacks subside; threat remains. eWeek, March 27. Retrieved April 15, 2007, from http://www.eweek.com/article2/0,1895,1943450,00.asp

National Institute of Standards and Technology (NIST). (1992). Computer system security and privacy advisory board (Annual Report, 18).

Neumann, P. (1995). Computer related risks. Addison-Wesley.

Newell, A., Shaw, J. C., & Simon, H. (1961). Information processing language V manual. Englewood Cliffs, NJ: Prentice-Hall.

Newman, R., Gavette, S., Yonge, L., & Anderson, R. (2006). Protecting domestic power-line communications. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS).

Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 25-64). New York: John Wiley & Sons.

Niiniluoto, I. (1997). Informaatio, tieto ja yhteiskunta: Filosofinen käsiteanalyysi. Helsinki, Finland: Edita.

Nissenbaum, H. (1999). Can trust be secured online? A theoretical perspective. Etica e Politica, 2. Retrieved October 5, 2006, from http://www.units.it/~etica/1999_2/nissenbaum.html

Noel, S., Jajodia, S., O'Berry, B., & Jacobs, M. (2003). Efficient, minimum-cost network hardening via exploit dependency graphs. In Proceedings of the 19th Annual Computer Security Applications Conference (pp. 86-95). IEEE Computer Society.

Norman, D. (1980). Twelve issues for cognitive science. Cognitive Science, 4, 1-32. [Reprinted in Norman, D. (1981). Twelve issues for cognitive science. In D. Norman (Ed.), Perspectives on cognitive science (pp. 265-295). Norwood, NJ: Ablex Publishing Corp.]

Norman, D. A. (1988). The psychology of everyday things. New York: Harper & Row.

Northcutt, S., et al. (2002). Inside network perimeter security: The definitive guide to firewalls, virtual private networks (VPNs), routers, and intrusion detection systems. Que.

Nosworthy, J. (2000). Implementing information security in the 21st century—do you have the balancing factors? Computers and Security, 19(4), 337-347.

Nouwt, S., de Vries, B. R., & Loermans, R. (2005). Analysis of the country reports. In S. Nouwt, B. R. de Vries, & C. Prins (Eds.), Reasonable expectations of privacy (pp. 323-357). The Hague, PA: TMC Asser Press.

O'Connor, J., & McDermott, I. (1996). Principles of NLP. London: Thorsons.

O'Donovan, J., & Smith, B. (2006, January). Is trust robust? An analysis of trust-based recommendation. In IUI 2006. Sydney, Australia.

OCC. (2006). Customer authentication and internet banking alert (OCC Alert 2006-50). Retrieved April 1, 2007, from http://www.occ.treas.gov/ftp/alert/2006-50.html

OECD. (2002). OECD guidelines for the security of information systems and networks: Towards a culture of security. Adopted as a recommendation of the OECD Council at its 1037th session on July 25, 2002. Retrieved April 11, 2007, from http://www.oecd.org/dataoecd/16/22/15582260.pdf

Online Safety Study. (2004, October). AOL/NCSA online safety study, conducted by America Online and the National Cyber Security Alliance. Retrieved April 15, 2007, from http://www.staysafeonline.info/pdf/safety_study_v04.pdf


Oppy, G., & Dowe, D. (2005). The Turing test. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved April 27, 2007, from http://plato.stanford.edu/archives/sum2005/entries/turing-test/

Organisation for Economic Co-Operation and Development—OECD. (2006). RFID: Drivers, challenges and public policy considerations (DSTI/ICCP (2005)19/FINAL).

Orgill, G. L., Romney, G. W., Bailey, M. G., & Orgill, P. M. (2004, October 28-30). The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems. In Proceedings of the 5th Conference on Information Technology Education CITC5 '04, Salt Lake City, UT, USA (pp. 177-181). New York: ACM Press.

Oshri, I., Kotlarsky, J., & Hirsch, C. (2005). Security in networkable windows-based operating system devices. Paper presented at Softwares Conference, Las Vegas, Nevada, USA.

OST. (2004). Cyber trust and crime prevention. London: Office of Science & Technology—UK Department of Trade and Industry. HMSO.

Palmer, C. C. (2001). Ethical hacking. IBM Systems Journal, 40(3). Retrieved April 15, 2007, from http://www.research.ibm.com/journal/sj/403/palmer.html

Parizo, E. B. (2005). New bots, worm threaten AIM network. Retrieved April 25, 2007, from http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1150477,00.html

Park, D. C., & Skurnik, I. (2004). Aging, cognition, and patient errors in following medical instructions. In M. S. Bogner (Ed.), Misadventures in health care: Inside stories (pp. 165-181). Mahwah, NJ: Lawrence Erlbaum Associates.

Parno, B., Kuo, C., & Perrig, A. (2006, February 27-March 2). Phoolproof phishing prevention. In Proceedings of the 10th International Conference on Financial Cryptography and Data Security. Anguilla, British West Indies.

Simard, P. Y., Szeliski, R., Benaloh, J., Couvreur, J., & Calinov, I. (2003). Using character recognition and segmentation to tell computer from humans. In Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR '03) (p. 418). Washington, DC: IEEE Computer Society.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge University Press.

Peltier, T. R. (2001, January). Information security risk analysis. Boca Raton, FL: Auerbach Publications.

Penrose, R. (1989). The emperor's new mind. Oxford University Press.

Penrose, R. (1994). Shadows of the mind. Oxford University Press.

Perloff, R. (1993). Third person effect research 1983-1992: A review and synthesis. International Journal of Public Opinion Research, 5, 167-184.

Pernul, G. (1992). Security constraint processing during multilevel secure database design. In Proceedings of the 8th Annual Computer Security Applications Conference.

Pernul, G. (1992, November 23-25). Security constraint processing in multilevel secure AMAC schemata. The 2nd European Symposium on Research in Computer Security (ESORICS 1992) (pp. 349-370). Toulouse, France. Lecture Notes in Computer Science 648. Springer.

Pernul, G. (1995). Information systems security: Scope, state of the art and evaluation of techniques. International Journal of Information Management, 15(3), 165-180.

Perrig, A., & Song, D. (1999). Hash visualization: A new technique to improve real-world security. In Proceedings of the International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC).

Peters, T. J., & Waterman, R. H. (1982). In search of excellence: Lessons from America's best run companies. New York: Harper and Row.


Pfleeger, C. P. (1997). Security in computing. Englewood Cliffs, NJ: Prentice Hall.

Phillips, J. D. (2005). Privacy and data protection in the workplace: The US case. In S. Nouwt, B. R. de Vries, & C. Prins (Eds.), Reasonable expectations of privacy (pp. 39-59). The Hague, PA: TMC Asser Press.

Phishers raise their game. (2006). Retrieved April 25, 2007, from http://software.silicon.com/security/0,39024655,39164058,00.htm

Pinkas, B., & Sander, T. (2002). Securing passwords against dictionary attacks. In CCS '02: Proceedings of the 9th ACM Conference on Computer and Communications Security (pp. 161-170). New York, NY, USA: ACM Press.

Pitts, M. G., & Browne, G. J. (2004). Stopping behavior of systems analysts during information requirements elicitation. Journal of Management Information Systems, 21(1), 203-226.

Plewes, A. (2007, March). VoIP threats to watch out for—a primer for all IP telephony users. Retrieved April 18, 2007, from http://www.silicon.com/silicon/networks/telecoms/0,39024659,39166244,00.htm

Polat, H., & Du, W. (2004). SVD-based collaborative filtering with privacy. In Proceedings of the ACM Symposium on Applied Computing. Cyprus.

Popper, K. R. (1963). Conjectures and refutations. New York: Harper.

Porter, D. (2003). Insider fraud: Spotting the wolf in sheep's clothing. Computer Fraud & Security, 4, 12-15.

Pottas, D., & Solms, S. H. (1995). Aligning information security profiles with organizational policies. Proceedings of the IFIP TC11 11th International Conference on Information Security.

Preczewski, S. C., & Fisher, D. L. (1990). The selection of alphanumeric code sequences. In Proceedings of the Human Factors Society 34th Annual Meeting (pp. 224-228).

Premkumar, G., & King, W. R. (1992). An empirical assessment of information systems planning and the role of information systems in organizations. Journal of Management Information Systems, 19(2), 99-125.

Pressman, R. (2005). Software engineering: A practitioner's approach (6th ed.). London/New York: McGraw Hill.

Privacy International. (2006). PHR2005—threats to privacy (28/10/2006). Retrieved March 2006, from http://www.privacyinternational.org/

Proctor, R. W., Lien, M. C., Vu, K. P. L., Schultz, E. E., & Salvendy, G. (2002). Improving computer security for authentication of users: Influence of proactive password restrictions. Behavior Research Methods, Instruments, & Computers, 34, 163-169.

Provos, N., McClain, J., & Wang, K. (2006). Search worms. In Proceedings of the 4th ACM Workshop on Recurring Malcode (WORM '06) (pp. 1-8). Alexandria, VA: ACM Press.

Radu, C. (2003). Implementing electronic card payment systems. Norwood: Artech House.

Rankl, W., & Effing, W. (2003). Smart card handbook. Chichester, England: John Wiley and Sons.

Rapaport, W. J. (2000). Cognitive science. In A. Ralston, E. D. Reilly, & D. Hemmindinger (Eds.), Encyclopedia of computer science (4th ed., pp. 227-233). New York: Grove's Dictionaries.

Reason, J. (2002). Human reason. Cambridge, UK: Cambridge University Press.

Rees, J., Bandyopadhyay, S., & Spafford, E. H. (2003). PFIRES: A policy framework for information security. Communications of the ACM, 46(7), 101-106.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, televisions and new media like real people and places. Cambridge University Press.

Reich, B. H., & Benbasat, I. (1996). Measuring the linkage between business and information technology objectives. MIS Quarterly, 20, 55-81.


Reich, B. H., & Benbasat, I. (2000). Factors that influence the social dimension of alignment between business and information technology objectives. MIS Quarterly, 24, 81-113.

Reserve Bank of India. (2005). RBI announces measures towards promoting financial inclusion. Retrieved April 16, 2007, from http://www.rbi.org.in/scripts/BS_PressReleaseDisplay.aspx?prid=14069

Reserve Bank of India. (2006). Financial inclusion by extension of banking services—use of business facilitators and correspondents. Retrieved April 20, 2007, from http://www.rbi.org.in/scripts/NotificationUser.aspx?Mode=0&Id=2718

Roarke, S. P. (2005, July). Bots now battle humans for poker supremacy. Retrieved April 27, 2007, from http://www.Foxsports.com

Rodota, S. (2004, September 16). Privacy, freedom and dignity. Closing remarks at the 26th International Conference on Privacy and Personal Data Protection, Wroclaw.

Rogers, W. A., Lamson, N., & Rousseau, G. K. (2000). Warning research: An integrative perspective. Human Factors, 42(1), 102-139.

Röhm, A. W., & Pernul, G. (1999). COPS: A model and infrastructure for secure and fair electronic markets. Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences.

Rolland, C., Grosz, G., & Kla, R. (1999, June). Experience with goal-scenario coupling in requirements engineering. Proceedings of the IEEE International Symposium on Requirements Engineering, Limerick, Ireland.

Ross, B., Jackson, C., Miyake, N., Boneh, D., & Mitchell, J. C. (2005). Stronger password authentication using browser extensions. In Proceedings of the 14th USENIX Security Symposium.

Rui, Y., Liu, Z., Kallin, S., Janke, G., & Paya, C. (2005). Characters or faces: A user study on ease of use for HIPs. In Human Interactive Proofs (HIP 2005) (pp. 53-65).

Rui, Y., & Liu, Z. (2003). ARTiFACIAL: Automated reverse Turing test using facial features. In Proceedings of the Eleventh ACM International Conference on Multimedia (Multimedia '03) (pp. 295-298). New York: ACM Press.

Rui, Y., & Liu, Z. (2003). Excuse me, but are you human? In Proceedings of the Eleventh ACM International Conference on Multimedia (Multimedia '03) (pp. 462-463). New York: ACM Press.

Russell, D., & Gangemi, G., Sr. (1991). Computer security basics. O'Reilly.

Rusu, A., & Govindaraju, V. (2005, January). Visual captcha with handwritten image analysis. In H. S. Baird & D. P. Lopresti (Eds.), Human interactive proofs: Second international workshop (Vol. 3517). Bethlehem, PA: Springer Verlag.

Ryst, S. (2006, July 11). The phone is the latest phishing rod. BusinessWeek.

Sahut, J. M. (2006). Electronic wallets in danger. Journal of Internet Banking and Commerce, 11(2). Retrieved April 16, 2007, from http://www.arraydev.com/commerce/JIBC/2006-08/Jean-Michel_SAHUT.asp

Saltzer, J. H., & Schroeder, M. D. (1975). The protection of information in computer systems. Proceedings of the IEEE, 63(9), 1278-1308.

Samarati, P., & Vimercati, S. (2001). Access control: Policies, models, and mechanisms. In R. Focardi & R. Gorrieri (Eds.), Foundations of security analysis and design: Tutorial lectures (pp. 137-196). LNCS 2171.

Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and design (7th ed.). New York: McGraw-Hill Inc.

Sanderson, E., & Forcht, K. A. (1996). Information security in business environments. Information Management & Computer Security, 4(1), 32-37.

Sanderson, P. (2006). The multimodal world of medical monitoring displays. Applied Ergonomics, 37, 501-512.


Sandhu, R. (2003, January/February). Good enough security: Towards a business driven discipline. IEEE Internet Computing, 7(1), 66-68.

Sandhu, R. S., Coyne, E. J., Feinstein, H. L., & Youman, C. E. (1996, February). Role-based access control models. IEEE Computer, 29(2), 38-47.

SAR. (2007). A survey of sharing information in a search and rescue exercise. A co-operative exercise, where rescue, law and medical organizations and non-governmental organizations rehearsed together in a case of airliner accident at Helsinki airport on January 25, 2007 (Research report not published).

Scalet, S. D. (2007). Alarmed: Bolting on security at Stop & Shop. CIO Online, March 9. Retrieved August 15, 2007, from www.csoonline.com/alarmed/03092007.html

Schein, E. H. (1980). Organizational psychology (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Schein, E. H. (1992). Organizational culture and leadership (2nd ed.). San Francisco: Jossey-Bass.

Schenk, K., Vitalari, N., & Davis, K. (1998). Differences between novice and expert systems analysts: What do we know and what do we do? Journal of Management Information Systems, 15(Summer), 9-50.

Schlienger, T., & Teufel, S. (2002). Information security culture: The socio-cultural dimension in information security management. In Proceedings of IFIP TC11 17th International Conference on Information Security (pp. 191-202). Cairo, Egypt: IFIP Conference Proceedings 214.

Schneider, S., & Barsoux, J.-L. (1997). Managing across cultures. London, New York, Toronto, Sydney, Tokyo, Singapore, Madrid, Mexico City, Munich, Paris: Prentice Hall.

Schneier, B. (2007). The psychology of security. DRAFT, February 28. Retrieved August 15, 2007, from www.schneier.com/essay_155.html

Schneier, B. (1999). Attack trees: Modelling security threats. Dr. Dobb's Journal, December. Retrieved from http://www.counterpane.com/attacktrees-ddj-ft.html

Schneier, B. (2000). Secrets and lies. John Wiley and Sons.

Schneier, B. (2003). Beyond fear: Thinking sensibly about security in an uncertain world. New York: Copernicus Books, an imprint of Springer-Verlag.

Schneier, B. (2007). All or nothing: Why full disclosure—or the threat of it—forces vendors to patch flaws. CSO Magazine, 6(2), 20.

Schneier, B., & Shostack, A. (1998). Breaking up is hard to do: Modelling security threats for smart-cards. First USENIX Symposium on Smart-Cards, USENIX Press. Retrieved from http://www.counterpane.com/smart-card-threats.html

Schultz, E. (2002). Network Associates drops PGP. Computers & Security, 21(3), 206-207.

Schultz, E. E. (2002). A framework for understanding and predicting insider attacks. Computers and Security, 21(6), 526-531.

Schwartz, M. (2005). Organizations neglect human factors in security. Retrieved from http://www.itcinstitute.com/display.aspx?id=363

Schwartz, P., & Reidenberg, J. (1996). Data privacy law. Charlottesville, VA: Mitchie Law Publishers.

Selmi, M. (2006). Privacy for the working class: Public work and private lives (Public law and legal theory working paper No. 222). The George Washington University Law School.

New Scientist news service. (2005, August). Computer characters mugged in virtual crime spree. Retrieved April 27, 2007, from http://www.newscientist.com/article.ns?id=dn7865

Shardanand, U., & Maes, P. (1995). Social information filtering: Algorithms for automating "word of mouth." In Proceedings of ACM CHI'95 Conference on Human Factors in Computing Systems (Vol. 1, pp. 210-217).


Shaw, E. D., Ruby, K. G., & Post, J. M. (1998). The insider threat to information systems. Security Awareness Bulletin, 2-98. Retrieved September 5, 2007, from http://rf-web.tamu.edu/security/secguide/Treason/Infosys.htm

SHIFT WS#1. (2006). Group survey. Conducted in a workshop of a project that deals with information sharing in a networked crisis management environment (SHIFT = Shared Information Framework and Technology) on November 13-16, 2006 (Research report not published).

Sillers, T. S., & Kleiner, B. H. (1997). Defence conversion: Surviving (and prospering) in the 1990s. Work Study, 46(2), 45-48.

Simitis, S. (1987). Reviewing privacy in an information society. University of Pennsylvania Law Review, 135, 707-728.

Simitis, S. (1999). Reconsidering the premises of labour law: Prolegomena to an EU regulation on the protection of employees' personal data. European Law Journal, 5, 45-62.

Simon, H. (1996). The sciences of the artificial (3rd ed.). MIT Press.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.

Simon, H. A. (1982). Economic analysis and public policy. In Models of bounded rationality (Vols. 1-2). MIT Press.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9), 1059-1074.

Sindre, G., & Opdahl, A. L. (2000). Eliciting security requirements by misuse cases. Proceedings of the 37th Conference on Techniques of Object-Oriented Languages and Systems (pp. 120-131). TOOLS Pacific 2000.

Sindre, G., & Opdahl, A. L. (2001, June 4-5). Templates for misuse case description. Proceedings of the 7th International Workshop on Requirements Engineering, Foundation for Software Quality (REFSQ2001), Switzerland.

Sindre, G., & Opdahl, A. L. (2005). Eliciting security requirements with misuse cases. Requirements Engineering, 10(1), 34-44.

Singh, S. (1997). Fermat's last theorem. London: Fourth Estate.

Sipior, J. C., & Ward, B. T. (1995). The ethical and legal quandary of email privacy. Communications of the ACM, 38(12), 48-54.

Siponen, M. (2000). Policies for construction of information systems' security guidelines. In Proceedings of the 15th International Information Security Conference (IFIP TC11/SEC2000), Beijing, China, August (pp. 111-120).

Siponen, M. (2005). Analysis of modern IS security development approaches: Towards the next generation of social and adaptable ISS methods. Information and Organization, 15(4), 339-375.

Siponen, M. (2005). An analysis of the traditional IS security approaches: Implications for research and practice. European Journal of Information Systems, 14(3), 303-315.

Siponen, M. T., & Baskerville, R. (2001). A new paradigm for adding security into IS development methods. In J. Eloff, L. Labuschagne, R. von Solms, & G. Dhillon (Eds.), Advances in information security management & small systems security (pp. 99-111). Boston: Kluwer Academic Publishers.

Siponen, M., & Heikka, J. (2007). Do secure information system design methods provide ... Information and Software Technology. doi:10.1016/j.infsof.2007.10.011

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1986). Facts versus fears: Understanding perceived risks. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 463-489). New York: Cambridge University Press.

Sobrado, L., & Birget, J.-C. (2005). Shoulder surfing resistant graphical passwords. Retrieved April 15, 2007, from http://clam.rutgers.edu/~birget/grPssw/srgp.pdf


Solove, D. J. (2006). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477-564.

Solove, D. J. (2004). Reconstructing electronic surveillance law. The George Washington Law Review, 72, 1701-1747.

Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.

Stahl, B. C. (2006). Trust as fetish: A critical theory perspective on research on trust in e-commerce. Paper presented at the Information Communications and Society Symposium, University of York, UK.

Stajano, F., & Anderson, R. (1999). The resurrecting duckling: Security issues for ad-hoc wireless networks. Security Protocols, 7th International Workshop.

Standish Group. (1995). The CHAOS report. West Yarmouth, MA. Available at www.standishgroup.com.

Stanford, J., Tauber, E. R., Fogg, B. J., & Marable, L. (2002). Experts vs. online consumers: A comparative credibility study of health and finance websites. Consumer WebWatch. www.consumerwebwatch.org

Stanton, J. M. (2000). Reactions to employee performance monitoring: Framework, review, and research directions. Human Performance, 13, 85-113.

Starner, T. (2001). The challenges of wearable computing: Part 2. IEEE Micro, 54-67.

Stasiukonis, S. (2006). Social engineering, the USB way. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1

Stasiukonis, S. (2006). How identity theft works. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=102595

Stasiukonis, S. (2006). Banking on security. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=111503

Stasiukonis, S. (2006). Social engineering, the shoppers' way. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=99347

Stasiukonis, S. (2007). By hook or by crook. Retrieved April 15, 2007, from http://www.darkreading.com/document.asp?doc_id=119938

Stasiukonis, S. (2007). Social engineering, the USB way. Dark Reading. http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1

Stone, B. (2007). Study finds web antifraud measure ineffective. The New York Times, February 5.

Straub, D., Loch, K., Evaristo, R., Karahanna, E., & Strite, M. (2002). Toward a theory-based measurement of culture. Journal of Global Information Management, 10(1), 13-23.

Straub, D. W., & Welke, R. J. (1998). Coping with systems risk: Security planning models for management decision making. MIS Quarterly, 22(4), 441-470.

Straub, K. (2004). Cracking password usability: Exploiting human memory to create secure and memorable passwords. UI Design Newsletter. Retrieved April 2, 2007, from http://www.humanfactors.com/downloads/jun04.asp

Strens, M. R., & Dobson, J. E. (1994). Responsibility modelling as a technique for requirements definition. IEEE, 3(1), 20-26.

Strens, R., & Dobson, J. (1993). How responsibility modeling leads to security requirements. In Proceedings of the 1992 & 1993 ACM SIGCAS New Security Paradigms Workshop.

Stubblebine, S., & van Oorschot, P. (2004, February). Addressing online dictionary attacks with login histories and humans-in-the-loop. In Proceedings of Financial Cryptography. Springer-Verlag.

Sturgeon, W. (2005). Foiled £220m heist highlights spyware threat. Zdnet. http://news.zdnet.co.uk/security/0,1000000189,39191677,00.htm

368
Compilation of References

Sveiby, K-E. (2001). A knowledge-based theory of the Times, T. (2006, May). Computer game bot: Arch-
firm to guide strategy formulation. Retrieved February nemesis of online games. Retrieved August 1, 2006,
15, 2003, from http://www.sveiby.com/articles/Knowl- from http://times.hankooki.com/lpage/culture/200605/
edgetheoryoffirm.htm kt2006052116201765520.htm 

Swain, A.D, & Guttmann, H.E. (1983). Handbook of hu- Top myths. (2006). The 10 biggest myths of IT security:
man reliability analysis with emphasis on nuclear power Myth #1: ‘Epidemic’ data losses. Retrieved April 15,
plant applications. NUREG/CR 1278. Albuquerque, NM: 2007, from http://www.darkreading.com/document.
Sandia National Laboratories. asp?doc_id=99291&page_number=2

Symantec. (2007) Top myths. (2006). The 10 biggest myths of IT security:


Myth #2: Anything but Microsoft. Retrieved April 15,
Tan, P., Steinbach, M., & Kumar, V. (2005). Introduction
2007, from http://www.darkreading.com/document.
to data mining. Addison-Wesley.
asp?doc_id=99291&page_number=3
TCSEC. (1985). Department of defense trusted computer
Treasury Inspector General for Tax Administration.
system evaluation criteria (TCSEC: DoD 5200.28-STD).
(2007). Employees continue to be susceptible to social
Department of Defense.
engineering attempts that could be used by hackers
TechNET. (2006). How to protect insiders from social (TR 2007-20-107). Retrieved August 18, 2007, from
engineering threats. Retrieved April 15, 2007, from http://www.ustreas.gov/tigta/auditreports/2007reports/
http://www.microsoft.com/technet/security/midsize- 200720107fr.pdf
business/topics/complianceandpolicies/socialengineer-
Trim, P. (2001). Public-private partnerships and the
ingthreats.mspx
defence industry. European Business Review, 13(4),
The esp game. (n.d.). Retrieved April 26, 2007, from 227-234.
http://www.espgame.org/ 
Turing, A. M. (1950). Computing machinery and
Thierauf, R. (2001). Effective business intelligence sys- intelligence. LIX (236).
tems. London: Quorum Books.
Turnbull, N. (1999). Internal control: Guidance for
Thomas, R. K., & Sandhu, R. S. (1994). Conceptual directors on the combined code: The Turnbull report.
foundations for a model of task-based authorizations. London: The Institute of Chartered Accountants in
In Proceedings of the 7th IEEE Computer Security England & Wales.
Foundations Workshop.
Tversky, A, & Kahneman, D. (1974). Judgment under
Thomson, I. (2006). ‘Evil twin’ Wi-Fi hacks target uncertainty: Heuristics and biases. Science, 185(4157),
the rich. iTnews.com.au, November. Retrieved April 1124-1131.
15, 2007, from http://www.itnews.com.au/newsstory.
U.S. Department of Homeland Security. (2002). Federal
aspx?CIaNID=42673&r=rss
information security management act. Retrieved April
Tierney, M. (1993). The evolution of Def Stan 00-55: 2, 2007, from http://www.fedcirc.gov/library/legisla-
A socio-history of a design standard for safety-criti- tion/FISMA.html
cal software. In P. Quintas (Ed.), Social dimensions of
UK Information Commissioner. (2003). The employment
systems engineering: People, processes and software
practices data protection code.
development (pp. 111-143). New York/London: Ellis
Horwood. United States Secret Service. (2004). Insider threat
study: Illicit cyber activity in the banking and finance

369
Compilation of References

sector. Retrieved September 5, 2007, from http://www. project. In Proceedings of the 10th ACM SIGSOFT Sym-
secretservice.gov/ntac_its.shtml posium on Foundations of Software Engineering.

Uzun, E., Karvonen, K., & Asokan, N. (2007). Usability Viappiani, P., Faltings, B., & Pu, P. (2006, June). The
analysis of secure pairing methods. Usable Security lookahead principle for preference elicitation: Experi-
(USEC). mental results. In Proceedings of the 7th International
Conference on Flexible Query Answering Systems (FQAS
van der Raadt, B., Gordijn, J., & Yu, E. (2005). Exploring
2006) Lecture Notes in Computer Science, 4027, 378-389.
Web services from a business value perspective. To appear
Milan, Italy: Springer.
in Proceedings of the 13th International Requirements
Engineering Conference (RE’05), Paris (pp. 53-62). Viega, J., & McGraw, G. (2002). Building secure software.
Boston: Addison-Wesley.
van Lamsweerde, A. (2001, August 27-31). Goal-oriented
requirements engineering: A guided tour. The 5th IEEE Viega, J., & McGraw, G. (2004). Building secure soft-
International Symposium on Requirements Engineering ware—how to avoid security problems the right way.
(RE 2001) (p. 249). Toronto, Canada. Reading, MA: Addison-Wesley.

van Lamsweerde, A. (2004). Elaborating security re- Viega, J., Kohno, T., & Potter, B. (2001). Trust and
quirements by construction of intentional anti-models. In mistrust in secure applications. Communications of the
Proceedings of the International Conference on Software ACM, 44(2), 31-36.
Engineering (pp. 148-157).
Vijayan, J. (2006). “Human error” exposes pa-
van Lamsweerde, A. (2004, May). Elaborating security tients’ social security numbers. Computerworld.
requirements by construction of intentional anti-models. http://www.health-itworld.com/newsletters/2006/02/14/
Proceedings of ICSE’04, 26th International Conference 18209?page:int=-1
on Software Engineering (pp. 148-157). Edinburgh:
Vishing. (2006). Secure computing warns of vishing.
ACM-IEEE.
Retrieved April 15, 2007, from http://www.darkreading.
van Lamsweerde, A., Brohez, S., Landtsheer, R., & com/document.asp?doc_id=98732
Janssens, D. (2003, September). From system goals to
von Ahn, L. (2005). Utilizing the power of human
intruder anti-goals: Attack generation and resolution for
cycles. Unpublished doctoral dissertation, Carnegie
security requirements engineering. Proceedings of the
Mellon University.
RE’03 Workshop on Requirements for High Assurance
Systems (RHAS’03) (pp. 49-56). Monterey, CA. von Ahn, L., Blum, M., Hopper, N., & Langford, J. (2003).
Captcha: Using hard AI problems for security. In Pro-
Venkatesh, V. (2000). Determinants of perceived ease of
ceedings of Eurocrypt (pp. 294-311). 
use: integrating control, intrinsic motivation, and emo-
tion into the technology acceptance model. Information von Ahn, L., Blum, M., & Langford, J. (2002). Telling
Systems Research, 11(4), 342-365. humans and computers apart automatically or how
lazy cryptographers do AI (Tech. Rep. No. CMU-CS-
Venkatraman, N., & Camillus, J. C. (1984). Exploring
02-117). Carnegie Mellon University.
the concept of ‘fit’ in strategic management. Academy
of Management Review, 9, 513-525. von Ahn, L., Blum, M., & Langford, J. (2004). Telling
humans and computers apart automatically. Communi-
Vetterling, M., Wimmel, G., & Wisspeintner, A. (2002,
cations of the ACM, 47(2), 56-60.
November). Requirements analysis: Secure systems
development based on the common criteria: the PalME von Ahn, L., & Dabbish, L. (2004, April). Labeling
images with a computer game. In Proceedings of the

370
Compilation of References

SIGCHI Conference on Human Factors in Comput- Weinberg, G. (1971). The psychology of computer pro-
ing Systems (CHI ’04) (pp. 319-326). Vienna, Austria: gramming. New York: Van Nostrand Reinhold.
ACM Press.
Weng, J., Miao, C., & Goh, A. (2006, April).
von Solms, B. & von Solms, R. (2004). The ten deadly Improving  collaborative filtering with trust-based met-
sins of information security management. Computers & rics. In Proceedings of SAC’06.  Dijon, France.
Security, 23, 371-376.
Westin, A. (1967). Privacy and freedom. Talk at the
Von Solms, B. (2000). Information security—the third Atheneum. New York.
wave? Computers and Security, 19(7), 615-620.
White, B. (2006). The implications of Web 2.0 on Web
Vu, K. P. L., Bhargav, A., & Proctor, R. W. (2003). Impos- information systems. Invited talk, Webist 2006.
ing password restrictions for multiple accounts: Impact
White, G. W., & Pearson, S. J. (2001). Controlling corpo-
on generation and recall of passwords. In Proceedings
rate e-mail, PC use and computer security. Information
of the 47th Annual Meeting of the Human Factors and
Management and Computer Security, 9(2), 88-92.
Ergonomics Society, USA (pp. 1331-1335).
Whitman, M. E., & Mattord, H. J. (2003). Principles of
Vu, K. P. L., Tai, B. L., Bhargav, A., Schultz, E. E., &
information security. Boston, London: Thomson Course
Proctor, R. W. (2004). Promoting memorability and
Technology.
security of passwords through sentence generation. In
Proceedings of the Human Factors and Ergonomics Whitman. (2004). In defense of the realm: Understanding
Society 48th Annual Meeting, USA (pp.1478-1482). threats to information security. International Journal of
Information Management, 24, 3-4.
Wadlow, T.A. (2000). The process of network security.
Reading, MA: Addison-Wesley. Whitten, A., & Tygar, J. D. (1999). Why Johnny can’t
encrypt: A usability evaluation of PGP 5.0. In Proceed-
Waldron, V. R. (1986). Interviewing for knowledge.
ings of the 8th USENIX Security Symposium. USENIX,
IEEE Transactions on Professional Communications,
Washington, D.C., USA.
PC29(2), 31-34.
Whitten, A., & Tygar, J. D. (1999, August 23-26). Why
Waltz, E. (1998). Information warfare: Principles and
Johnny can’t encrypt: A usability evaluation of PGP 5.0.
operations. Boston & London: Artech House.
In Proceedings of the 8th USENIX Security Symposium,
Wang, Y., & Kobsa, A. (2006, April). Impacts of privacy Washington, DC, USA.
laws and regulations on personalized systems. Paper pre-
Wickens, C. D. (1992). Engineering psychology and
sented at the CHI 2006 Workshop on Privacy-Enhanced
human performance (2nd ed.). New York: HarperCollins
Personalization (PEP 2006), Montreal, Canada.
Publishers.
Warman, A. (1993). Computer security within organisa-
Wiedenbeck, S., Waters, J., Birget, J. C., Brodskiy, A., &
tions. MacMillan.
Memon, N. (2005). PassPoints: Design and longitudinal
Washkuch, F. (2007). Newspaper: Medical information of evaluation of a graphical password system. International
75,000 Empire Blue Cross members lost. SC Magazine. Journal of Human Computer Studies, 63, 102-127.
http://scmagazine.com/us/news/article/643807/news-
Wiedenbeck, S., Waters, J., Sobrado, L., & Birget, J.
paper-medical-information-75000-empire-blue-cross-
(2006, May 23-26). Design and evaluation of a shoul-
members-lost/
der-surfing resistant graphical password scheme. In
Web Accessibility Initiative. (n.d.). Retrieved April 29, Proceedings of the Working Conference on Advanced
2007, from http://www.w3.org/WAI/ Visual interfaces AVI ‘06, Venezia, Italy,(pp. 177-184).

371
Compilation of References

ACM Press, New York: ACM Press. http://doi.acm. Wright, P. (1993). Computer security in large corpora-
org/10.1145/1133265.1133303 tions: Attitudes and practices of CEOs. Management
Decision, 31(7), 56-60.
Willcocks, L., & Margetts, H. (1994). Risk assessment
and information systems. European Journal of Informa- Wylder, J. (2003). Strategic information security. Au-
tion Systems, 3, 127-138. erbach.

Wilson, T. (2007). Five myths about black hats. Retrieved Xu, J., Lipton, R., Essa, I., & Sung, M. (2001). Mandatory
April 15, 2007, from http://www.darkreading.com/docu- human participation: A new scheme for building secure
ment.asp?doc_id=118169 systems (Tech. Rep. No. GIT-CC-01-09). Georgia Insti-
tute of Technology.
Wogalter, M. S. & Mayhorn, C. B. (2006). Is that in-
formation from a credible source? On discriminating Xu, J., Lipton, R., & Essa, I. (2000, November). Are you
Internet domain names. In Proceedings of the 16th World human? (Tech. Rep. No. GIT-CC-00028). Georgia Insti-
Congress of the International Ergonomics Association. tute of Technology: Georgia Institute of Technology.
Maastricht, The Netherlands.
Yan, J., Blackwell, A., Anderson, R., & Grant, A. (2004).
Wogalter, M. S. (2006). Handbook of warnings. Mahwah, Password memorability and security: Empirical
NJ: Lawrence Erlbaum Associates. results. IEEE Security and Privacy, 2(5), 25-31.

Wogalter, M. S., & Mayhorn, C. B. (2005). Providing Yavagal, D. S., Lee, S.-W., Ahn, G.-J., & Gandhi, R. A.
cognitive support with technology-based warning sys- (2005, March). Security: Common criteria requirements
tems. Ergonomics, 48(5), 522-533. modeling and its uses for quality of information assurance
(QoIA). In Proceedings of the 43rd Annual Southeast
Wogalter, M. S., Dejoy, D. M., & Laughery, K. R. (1999).
Regional Conference ACM-SE 43 (Vol. 2).
Warnings and risk communication. London: Taylor and
Francis. Yu, E. (1993, January). Modelling organizations for infor-
mation systems requirements engineering. Proceedings
Wogalter, M. S., Racicot, B. M., Kalsher, M. J., & Simpson,
of the 1st IEEE International Symposium on Requirements
S. N. (1994). The role of perceived relevance in behavioral
Engineering (pp. 34-41). San Diego, CA.
compliance in personalized warning signs. International
Journal of Industrial Ergonomics, 14, 233-242. Yu, E. (1995). Modelling strategic relationships for
process reengineering. Ph.D. Thesis, Department of
Wolff, J. S., & Wogalter, M. S. (1998). Comprehension
Computer Science, University of Toronto, Canada.
of pictorial symbols: Effects of context and test method.
Human Factors, 40, 173-186. Yu, E. (1997, January 6-8). Towards modelling and rea-
soning support for early-phase requirements engineering.
Wood, C. W., & Banks, W. W. (1993). Human error: an
Proceedings of the 3rd IEEE International Symposium
overlooked but significant information security problem.
on Requirements Engineering (RE’97) (pp. 226-235).
Computers & Security, 12, 51-60.
Washington, DC.
Wood, C.C. (1996). Writing infosec policies. Computers
Yu, E. (2001, April). Agent orientation as a modelling
& Security, 14(8), 667-674.
paradigm. Wirtschaftsinformatik, 43(2), 123-132.
Wood, C.C. (2000). An unappreciated reason why
Yu, E. (2001). Agent-oriented modelling: Software versus
information security policies fail. Computer Fraud &
the world. Agent-Oriented Software Engineering AOSE-
Security, 10, 13-14.
2001 Workshop Proceedings (LNCS 222, pp. 206-225).
Springer Verlag. 

372
Compilation of References

Yu, E., & Cysneiros, L. (2002, October 16). Designing Zakaria, O., Jarupunphol, P., & Gani, A. (2003). Paradigm
for privacy and other competing requirements. The 2nd mapping for information security culture approach. In J.
Symposium on Requirements Engineering for Informa- Slay (Ed.), Proceedings of the 4th Australian Conference
tion Security (SREIS’02). Raleigh, NC. on Information Warfare and IT Security (pp. 417-426).
Adelaide, Australia: University of South Australia.
Yu, E., & Liu, L. (2000, June 3-4). Modelling trust in
Avison, D. E. (1989). An overview of information sys-
the i* strategic actors framework. Proceedings of the 3rd
tems development methodologies. In R. L. Flood, M.
Workshop on Deception, Fraud and Trust in Agent Societ-
C. Jackson, & P. Keys (Eds.), Systems prospects: The
ies, Barcelona, Catalonia, Spain (at Agents2000).
next ten years of systems research.(pp. 189-193). New
Yu, E., & Liu, L. (2001). Modelling trust for system design York: Plenum.
using the i* strategic actors framework. In R. Falcone,
Zaslow, J. (2002). If TiVo thinks you are gay, here’s how to
M. Singh, & Y. H. Tan (Eds.), Trust in cyber-societies-
set it straight: What you buy affects recommendations on
-integrating the human and artificial perspectives (pp.
Amazon.com, too; why the cartoons? The Wallstreet Jour-
175-194). LNAI-2246. Springer.
nal, Retrieved November 26, 2002, from http://online.
Yu, E., Liu, L., & Li, Y. (2001, November 27-30). Model- wsj.com/article_email/0,,SB1038261936872356908,00.
ling strategic actor relationships to support intellectual html
property management. The 20th International Conference
Zhang, Z., Rui, Y., Huang, T., & Paya, C. (2004). Breaking
on Conceptual Modelling (ER-2001) (LNCS 2224, pp.
the clock face hip.  ICME(pp. 2167-2170). 
164-178). Yokohama, Japan: Spring Verlag.
Zhao, W., Chellappa, R., Phillips, P.J., & Rosenfeld,
Zakaria, O., & Gani, A. (2003). A conceptual checklist
A. (2003). Face recognition—a literature survey. ACM
of information security culture. In B. Hutchinson (Ed.),
Computer Survey, 35(4), 399-458.
Proceedings of the 2nd European Conference on Infor-
mation Warfare and Security (pp. 365-372). Reading,
UK: MCIL, Reading.

373
374

About the Contributors

Manish Gupta is an information security professional at M&T Bank, Buffalo, and also a PhD candidate at the State University of New York, Buffalo. He received his Bachelor's degree in mechanical
engineering from the Institute of Engineering and Technology, Lucknow, India in 1998 and an MBA in
information systems from the State University of New York, Buffalo, USA in 2003. He has more than
ten years of industry experience in information systems, policies, and technologies. He has published
three books in the area of information security and assurance (one is forthcoming). He has published
more than 30 research articles in leading journals, conference proceedings and books including DSS,
ACM Transactions, IEEE, and JOEUC. He serves on the editorial boards of six international journals and has served on the program committees of several international conferences. He has also received advanced
certificates in information assurance (SUNY, Buffalo), IT benchmarking (Stanford University), and
cyber law (Asian School of Cyber Law). He is listed in Cambridge Who’s Who Among Executives and
Professionals, 2007, and Who's Who Among Students in American Universities and Colleges, 2003.
He holds several professional designations including CISSP, CISA, CISM, ISSPCS, and PMP. He is a
member of ACM, AIS, IEEE, INFORMS, APWG, ISACA, and ISC2.

Raj Sharman is an associate professor in the Management Science and Systems Department at
SUNY Buffalo, NY. He received his BTech and MTech degrees from IIT Bombay, India, and his MS
degree in industrial engineering and PhD in computer science from Louisiana State University. His
research streams include information assurance, extreme events, and improving performance on the
Web. His papers have been published in a number of national and international journals. He is also the
recipient of several grants from the university as well as external agencies. He serves as an associate
editor for the Journal of Information Systems Security.

***

C. Warren Axelrod is a senior vice president of U.S. Trust, Bank of America Private Wealth Man-
agement, where he is the chief privacy officer and business information security officer. He interfaces
with the firm’s business units and the parent company to identify and assess privacy and security risks
and mitigate them, and to ensure that employees are familiar with and follow privacy and security
policies, standards, and procedures. He has worked in many areas of financial services for firms such
as SIAC, HSBC, and Pershing. He is involved at both industry and national levels with security, pri-
vacy, and critical infrastructure protection issues. He is a member of the FSSCC R&D Committee, the
SIFMA Privacy and Data Protection Committee, and the Information Security Subcommittee, and several BITS committees and working groups. He was honored with a Computerworld 2003 Premier
100 IT Leaders Award and his department’s implementation of an intrusion detection system earned
a Best in Class award. Dr. Axelrod represented financial services security interests at the Y2K com-
mand center in Washington over the century date rollover. He was a founder of the FS/ISAC (financial
services information sharing and analysis center), and served two terms on its Board of Managers. He
testified at a congressional hearing on cyber security in November 2001, and contributed to the Bank-
ing and Finance Sector’s National Strategy for Critical Infrastructure Assurance, published in May
2002. Dr. Axelrod is the author of Outsourcing Information Security (Artech House, September 2004),
and contributed a chapter to Managing Information Assurance in Financial Services (IGI Publishing,
July 2007). He has published two previous books on computer management and numerous articles on
many aspects of information technology, including computer and network security, contingency plan-
ning, and computer-related risks. He holds a PhD in managerial economics from the Johnson Graduate
School of Management at Cornell University. He has a Bachelor’s degree in electrical engineering and
Master’s degree in economics and statistics from the University of Glasgow. He is certified as a CISSP
and CISM, and has NASD Series 7 and 24 licenses.

Anne Boyer is a professor at the Université Nancy 2. She works in the field of artificial intelligence
and her research interests include machine learning, real-time decision making, and probabilistic model-
ing. She is involved in many European projects and international collaborations. She has addressed, in
many international conferences, fundamental problems linked to user modeling and information search
and retrieval based on an analysis of usage. She combines several approaches, such as natural language processing, collaborative filtering, and content-based filtering, to deal with scalability, privacy, sparsity,
portability, context, trust, quality, and security when designing recommender systems.

Mahil Carr holds a doctorate in software engineering from the City University of Hong
Kong. He completed his MCA from the St. Joseph’s College, Bharatidasan University, Trichy and held
the position of director (in-charge), Department of Computer Science, American College, Madurai, for
over 3 1/2 years. Currently, Dr. Mahil Carr is assistant professor at the Institute for Research and De-
velopment in Banking Technology. Dr. Carr has published in Information Technology & Management and the Journal of Services Research. His research interests include e-commerce and e-governance.

Deborah Sater Carstens has a PhD in industrial engineering, a BS in business administration from the University of Central Florida, and an MBA from the Florida Institute of Technology. She has
been an assistant professor of management information systems since 2003 at the Florida Institute of
Technology. Her interests are in information security, human-computer interaction, and system design.
She was previously employed for more than 10 years at NASA Kennedy Space Center, where she was a principal investigator on human factors research projects, the industrial engineering for safety program study manager, and the process and human factors engineering roadmap manager.

Sylvain Castagnos joined the LORIA laboratory of research in Nancy as a member of University
Nancy 2 after obtaining a Master’s degree in computer science. He is a third year PhD student in ar-
tificial intelligence and his topics of interest include collaborative filtering, user modeling, distributed
algorithms, and probabilistic approaches. His work starts from the fact that there is too much information on the Internet. Nowadays, the question is no longer where to find information, but how to find the most suitable information in a very short time. A good way to provide useful items is to generate personalized recommendations within an information system. Through several international publications, he has highlighted the value of distributing collaborative filtering processes to deal with scalability, privacy, and other industrial constraints.

Steve Clarke received a BSc in economics from The University of Kingston upon Hull, an MBA
from the Putteridge Bury Management Centre, The University of Luton, and a PhD in human centred
approaches to information systems development from Brunel University—all in the United Kingdom.
He is professor of information systems in the University of Hull Business School. He has extensive ex-
perience in management systems and information systems consultancy and research, focusing primarily
on the identification and satisfaction of user needs and issues connected with knowledge management.
His research interests include: social theory and information systems practice, strategic planning, and
the impact of user involvement in the development of management systems. Major current research is
focused on approaches informed by critical social theory.

Mahi Dontamsetti is president of M3 Security, a security consulting firm. A CISSP, Mr. Dontamsetti
currently serves on the board of OWASP-NY/NJ. He is the author of a couple of books on wireless technologies and has spoken on security issues at conferences. During his professional career, which spans over 15 years, he has held executive leadership positions at Fortune 50 companies such as Lockheed Martin and has been a founding employee at startups. He has served in the roles of CTO, VP, and chief
technologist during his career. Mr. Dontamsetti has an MS in computer science from the University of
Missouri-Kansas City.

Paul Drake is currently a director within a multinational pharmaceutical organisation. He has over
25 years of experience in information technology, 15 of these aligned with information security, during which he has worked in a variety of organisations spanning local authority, utilities, finance and trading,
and retail and leisure. He is one of the original authors of the British Standards for information security
and sits on the committee that maintains them. He holds a Master’s degree in supervisory management
and a PhD from Hull University in communicative action in information security systems. In the time left after work, six sigma studies, membership of the British Standards Institute, research, being a husband, and being a father to two children, he likes to sleep!

Jean-Noël Ezingeard is dean of the Faculty of Business and Law at Kingston University (London).
His research is focused on information assurance, information security, and enterprise risk manage-
ment, topics which he has researched, taught, and consulted about in Europe, North America, and
South Africa. His work on information assurance has been used in publications by QinetiQ, Axa, and
the Federation against Software Theft. He is a founding member of the British Computer Society’s
Information Assurance working group. He joined the business school world 9 years ago. Prior to this
he worked as a chartered manufacturing engineer (operations management) and a lecturer in computer
integrated manufacturing.

Steven Furnell is the head of the Information Security & Network Research Group at the University
of Plymouth in the United Kingdom, and an adjunct associate professor with Edith Cowan University in
Western Australia. His research interests include the usability of security technology, user authentication, and security management. Professor Furnell is a fellow and branch chair of the British Computer
Society (BCS), a senior member of the Institute of Electrical and Electronics Engineers (IEEE), and
a UK representative in International Federation for Information Processing (IFIP) working groups
relating to information security management (of which he is the current chair), network security, and
information security education. He is the author of over 170 papers in refereed international journals
and conference proceedings, as well as the books Cybercrime: Vandalizing the Information Society
(2001) and Computer Insecurity: Risking the System (2005). Further details can be found at www.network-research-group.org.

Jefferson B. Hardee is a graduate student pursuing his PhD in the ergonomics/experimental psy-
chology program at North Carolina State University in Raleigh, North Carolina. He received his MS in
psychology in 2007 and his BS in computer science from North Carolina State University in 2003.

Corey Hirsch is CIO of LeCroy Corporation, headquartered in Chestnut Ridge, New York. He
also serves as visiting executive fellow at Henley Management College, UK, in the School of Projects,
Processes, and Systems. His research is focused on information security and enterprise risk manage-
ment. LeCroy is a recognized innovator in these areas, and has been the subject of numerous recent case
studies in these and related fields. Dr. Hirsch researches, lectures, e-tutors, and supervises students in
competitor intelligence, customer relationship management systems, and managing the ICT function.
His practice includes responsibility for development and maintenance of the physical and information
infrastructure of LeCroy, a $160M leader in test and measurement instrumentation.

Bogdan Hoanca is an associate professor of management information systems at the University of Alaska Anchorage (UAA). Before joining UAA, he co-founded, started up, and sold a company that builds
components for fiber optic communications. He also helped start and consulted with a number of other
startup companies in optical fiber communications. He received a PhD in electrical engineering from
the University of Southern California in 1999. His current research interests revolve around technology,
in particular e-learning and societal implications of technology, as well as privacy and security.

Maria Karyda is a lecturer at the University of the Aegean, Greece (Department of Information and Communication Systems Engineering) and a member of the Laboratory of Information and Communica-
tion Systems Security of the University of the Aegean. She holds a PhD in information systems secu-
rity management and an MSc in information systems. She has participated in several EU and national
funded research projects in the areas of IT security, electronic government, and information systems
security management, and has also collaborated with many private and public institutions as a security consultant. She has published several journal papers and chapters in books and has participated in many
international conferences.

Cynthia Kuo is a doctoral candidate in the Engineering and Public Policy Department at Carnegie
Mellon University. Her research focuses on how security applications can be designed to minimize end
user errors. Previously, she worked as a user experience designer.

Rauno Kuusisto currently works as a senior researcher at the Finland Futures Research Centre at Turku School of Economics. He is an adjunct professor of network enabled defense at Finnish National Defense University, as well. Kuusisto received his PhD from Helsinki University of Technology in 2004 in the area of corporate security and futures studies, and he also holds a general staff officer qualification. He has 30 years of experience, mainly as a developer and researcher of heavy-duty communication systems, intelligence systems, and decision support systems, as well as in educating personnel at several degree levels up to doctoral programs. He has about 40 scientific publications and research reports in the areas of networked management, situation understanding, information in decision-making, and safety and security. Nowadays his interest is directed toward networked management, comprehensive information availability for decision-making, and creating futures information. Kuusisto is an active participant in several scientific advisory boards, and he is a conference and journal reviewer.

Tuija Kuusisto works as a chief information officer for the Ministry of Defence in Finland. She
is adjunct professor of information management for decision-making at National Defence University
and adjunct professor of geoinformatics at Helsinki University of Technology in Finland. Her research
interests lie around information, knowledge, and information systems. Her viewpoints on research include management and tactics, software business, and geoinformatics. She has about 60 scientific publications in international and national journals, conference proceedings, and books. She has over 10 years of work experience in the national and global software business and telecommunications industry, and over 10 years of work experience in local and national IT administration.

Christopher B. Mayhorn is an associate professor in the ergonomics/experimental psychology program at North Carolina State University in Raleigh, North Carolina. He received his PhD in cogni-
tive/experimental psychology from the University of Georgia in 1999.

Jeremy Mendel is a graduate student pursuing his MS in the Psychology Department at Clemson University in Clemson, South Carolina. He received his BS in psychology from North Carolina State
University in 2007.

Deapesh Misra works as a security analyst with iDefense, a VeriSign company. Previously he
worked as a graduate research assistant at the Cryptography and Network Security Implementations
Lab at George Mason University. He can be contacted at [email protected].

Lilian Mitrou is an assistant professor at the University of the Aegean, Greece (Department of Information and Communication Systems Engineering) and a visiting professor at the University of Athens (postgraduate studies program). She holds a PhD in data protection (University of Frankfurt, Germany)
and teaches information law and data protection law. She has served as a member of the Hellenic Data
Protection Authority (1999-2003) and as an advisor to the former Prime Minister K. Simitis in sectors of
information society and public administration (1996-2004). From 1998 till 2004 she was the national
representative in the EC-Committee on the Protection of Individuals with regard to the processing of
personal data. She served as a member of many committees working on law proposals in the fields of
privacy and data protection, electronic commerce, e-government, and so forth. Her professional experi-
ence includes senior consulting and researcher positions in a number of private and public institutions.
She has published books and chapters in books (in Greek, German, and English), as well as many journal papers and national and international conference papers.

Kenrick Mock received his PhD in computer science from the University of California, Davis,
in 1996. He currently holds the position of associate professor of computer science at the University
of Alaska Anchorage. His research centers on complex systems, information management, artificial
intelligence, computer security, and technological innovations in education. Kenrick has previously
held positions as a research scientist at Intel Corporation and as CTO of an Internet startup company,
Unconventional Wisdom.

Anup Narayanan is an information security professional and the founding director of First Legion
Consulting, an information security management company based in Bangalore, India. His current focus
is on developing a framework that he has christened “HIM-IS” (human impact management—information security) for managing the human impact on information security. He is a CISSP and a CISA, and is currently pursuing a master's degree in applied psychology. He has more than 8 years of experience in
information security and contributes to the development of ISM3 (information security management
maturity model).

Marcus Nohlberg used his BSc in information systems to begin working as a consultant in one
of Sweden’s major Internet firms, but soon found an interest in the business side and received an
MBA with a focus on e-business. Still, the lure to specialize further was there, so he got started as
an industrial PhD student with a focus on information security at the University of Skövde, Sweden,
supported by The Logic Planet AB. His research interest lies specifically in social engineering and the
manipulation of humans. His private interests are politics, sport pistol shooting at a national level, his
wirehaired dachshund Greta, and general geekiness. He lives in Skövde, Sweden, and his Web page is
at: http://www.nohlberg.com

Adrian Perrig is an associate professor at Carnegie Mellon University. He has appointments in the
departments of Electrical and Computer Engineering, Engineering and Public Policy, and Computer
Science. His research focuses on networking and systems security, security for mobile computing, and
sensor networks. Other research interests include human interfaces for security, networking, operating
systems, and cryptography.

G. Lawrence Sanders, PhD, is a professor in the Department of Management Science and Systems
in the School of Management at the State University of New York at Buffalo. He has also served as a
department chair and the chair of the PhD program in the School of Management. He has taught MBA
courses in the People's Republic of China and Singapore. His research interests are in the ethics and
economics of digital piracy, systems success measurement, cross-cultural implementation research,
systems development, and decision processes. He has published papers in outlets such as The Journal
of Business, MIS Quarterly, Information Systems Research, the Journal of Management Information
Systems, the Journal of Strategic Information Systems, the Journal of Management Systems, Decision
Support Systems, and Decision Sciences. He has also published a book on database design and co-edited
two other books.

Jesse Walker is a principal engineer in Intel Corporation’s communications technology lab. His
primary interest concerns network security protocols. Dr. Walker served as editor for IEEE 802.11i, and
has contributed to many other IEEE 802.11 amendments. He also has contributed to numerous IETF
standards. Prior to joining Intel, he worked at Shiva, Raptor Systems, Digital Equipment Corporation,
Rockwell International, Datapoint, and Iowa State. He holds a PhD in mathematics from the University of
Texas.

Ryan T. West is a user experience researcher who has studied enterprise-class systems administra-
tion at Microsoft, SAS Institute, and now Dell Inc. He has conducted academic research in risk and
decision-making and applied research in areas ranging from medical mistakes to computer security.
Ryan West has a PhD in cognitive psychology from the University of Florida.


Index

A
Actor 152
agent 157
alignment 303
ARTiFACIAL 228
artificial intelligence (AI) 220
Attacker 160
attackers 137
automated programs 220

B
belief 153
bridging the gap 283
BS7799 103

C
CAPTCHAs 220
clear-cut criteria 181
cognitive science 116
collaborative filtering 247
commitment and consistency 20
common criteria for information technology security evaluation (CCITSE) 319
computer-mediated trust 65
Computer Aided Software Engineering (CASE) 62
computer crime 15
computer security 43
computer system verification, trust and the social 64
conscious competence model 27
contribution link 155
converged networks 133
convergence 137
Countermeasure Analysis 163
Customer 150

D
deception 16
Decomposition link 155
dependency 153
Dependency Vulnerability Analysis 161
Digital Stored Value Card 165
disasters waiting to happen 55
DOMAIN REQUIREMENT 158
dynamic acting environment 88

E
electronic workplace monitoring 283
employee monitoring 284
employee privacy 289
employee surveillance 284
end-users 196
exploit 264

F
fair practices 296
financial inclusion 122
fixed-action patterns 17

G
Gimpy 224
goal dependency 153

H
how-to-manual 15
human-centred 102
human aspects 1
human behavior 116
human characteristics 265
human considerations 113
human element 27
human emotion 15
human error 5
human factors 2, 119, 316
human factors research 43
human firewall 312
human impact management 37
Human interactive proofs (HIPs) 220
human memory 6
human weakness 15

I
i* agent 149
IEEE 802.11 179
IMAGINATION 229
in-person attack 136
incident 264
Information Assurance Advisory Council (IAAC) 105
information security 3, 27
information security (ISec) 98
information security competence 36
information security culture 77
information security policy (InSPy) 326
information security policy, the role of the 327
information security policy adoption, best practice in 332
information security practice 263
information sharing 90
Information Systems Modelling 169
information technology (IT) 284
intolerance to risk 302

K
key stakeholder influences 307
known threats 43

L
lifeworld 114
lock and key controls 116

M
manipulation 16
mathematical proof, social nature of 63
military aircraft company (MAC) 68
model for automated profile specification 169
modeling preferences 247

N
networked working environment 77

P
password authentication 1
philosophy 116
philosophy of security 118
phishing 138
physical security 265
position 157
pre-configured 192
Prepaid Phone Card System 166
privacy rules 247
psychology framework 27
public domain 284

Q
Qualitative Goal-Reasoning Mechanism 164

R
reciprocation 20
recommender systems 247
removable media 138
requirements elicitation 317
resource 153
resource dependency 154
risk 265
risk assessment 302
risk averse 306
risk culture 309
risk management 301
risk perception 302
risky actions 43
role 157

S
secure information systems 116
security-related functionality 197
security breaches 103, 326
security breaches, eight most common 331
security configuration 179
security literature 101
Security Model 167
security requirements engineering 322
security techniques 4
security threat 196
smart-card system 158
smart card 124
social aspects 1
social behavior 265
social engineering 133
social factors 120
socially acceptable 181
social processes 304
social proof 21
social psychological aspects 16
social relationships 303
social system 77
softgoal 153
softgoal dependency 154
software quality assurance (SQA) 62
software quality assurance (SQA) procedures 61
software quality assurance, and military standards for software 67
software verification process 61
Strategic Dependency Model 153
Strategic Rationale Model 155
strategic rationale model 158
system-initiated events 200
System Configuration 165
system model approach 43

T
Task 153
task dependency 153
telephone networks 139
threat 264
threat-vulnerability model 262
tolerance for risk 302
Trust 165
trust, and automatically generated program code 61
trust, building into a computer system 66
Turing test 222

U
user-initiated events 200
user behavior 43
users' behavior 190

V
Vishing 139
VoIP phishing 139
vulnerability 264

W
Web services protocol 151
workplace monitoring 288