
A Moving Target—The Evolution of Human-Computer Interaction

Jonathan Grudin

Revision of a chapter in J. Jacko (Ed.), Human-Computer Interaction Handbook (3rd Edition), Taylor & Francis, 2012.

PREAMBLE: HISTORY IN A TIME OF RAPID OBSOLESCENCE
    Why Study the History of Human-Computer Interaction?
    Definitions: HCI, CHI, HF&E, IT, IS, LIS
    Shifting Context: Moore's Law and Inflation
HUMAN-TOOL INTERACTION AND INFORMATION PROCESSING AT THE DAWN OF COMPUTING
    Origins of Human Factors
    Origins of the Focus on Information
    Paul Otlet and the Mundaneum
    Vannevar Bush and Microfilm Machines
1945–1955: MANAGING VACUUM TUBES
    Three Roles in Early Computing
    Grace Hopper: Liberating Computer Users
1955–1965: TRANSISTORS, NEW VISTAS
    Supporting Operators: First Formal HCI Studies
    Visions and Demonstrations
    J.C.R. Licklider at BBN and ARPA
    John McCarthy, Christopher Strachey, Wesley Clark
    Ivan Sutherland and Computer Graphics
    Douglas Engelbart: Augmenting Human Intellect
    Ted Nelson's Vision of Interconnectedness
    From Documentalism to Information Science
    Conclusion: Visions, Demos, and Widespread Use
1965–1980: HCI PRIOR TO PERSONAL COMPUTING
    HF&E Embraces Computer Operation
    IS Addresses the Management of Computing
    Programming: Subject of Study, Source of Change
    Computer Science: A New Discipline
    Computer Graphics: Realism and Interaction
    Artificial Intelligence: Winter Follows Summer
    Library Schools Embrace Information Science
1980–1985: DISCRETIONARY USE COMES INTO FOCUS
    Discretion in Computer Use
    Minicomputers and Office Automation
    The Formation of ACM SIGCHI
    CHI and Human Factors Diverge
    Workstations and Another AI Summer
1985–1995: GRAPHICAL USER INTERFACES SUCCEED
    CHI Embraces Computer Science
    HF&E Maintains a Nondiscretionary Use Focus
    IS Extends Its Range
    Collaboration Support: OIS Gives Way to CSCW
    Participatory Design and Ethnography
    LIS: A Transformation Is Underway
1995–2010: THE INTERNET ERA ARRIVES AND SURVIVES A BUBBLE
    The Formation of AIS SIGHCI
    Digital Libraries and the Rise of Information Schools
    HF&E Embraces Cognitive Approaches
    A Wave of New Technologies & CHI Embraces Design
    The Dot-Com Collapse
LOOKING BACK: CULTURES AND BRIDGES
    Discretion as a Major Differentiator
    Disciplinary, Generational, and Regional Cultures
LOOKING FORWARD: TRAJECTORIES
    The Optional Becomes Conventional
    Ubiquitous Computing, Invisible HCI?
    Human Factors and Ergonomics
    Information Systems
    Computer-Human Interaction
    Information
CONCLUSION: THE NEXT GENERATION
APPENDIX: PERSONAL OBSERVATIONS
    1970: A Change in Plans
    1973: Three Professions
    1975: A Cadre of Discretionary Hands-on Users
    1983: Chilly Reception for a Paper on Discretion in Use
    1984: Encountering IS, Human Factors, and Design
    1985: The GUI Shock
    1986: Beyond “The User”: Groups and Organizations
    1989: Development Contexts: A Major Differentiator
    1990: Just Words: Terminology Can Matter
    2005: Considering HCI History
    2012: Reflections on Bridging Efforts
    2012: Predictions
ACKNOWLEDGMENT
REFERENCES

PREAMBLE: HISTORY IN A TIME OF RAPID OBSOLESCENCE

“What is a typewriter?” my six-year-old daughter asked.


I hesitated. “Well, it’s like a computer,” I began.

Why Study the History of Human-Computer Interaction?

A paper widely read 25 years ago advised designing a word processor by analogy to something familiar to every-
one: a typewriter. Even then, one of my Danish students questioned this reading assignment, noting that “the type-
writer is a species on its last legs.” For most of the computing era, interaction involved 80-column punch cards, paper
tape, line editors, 1920-character displays, 1-megabyte diskettes, and other extinct species. Are the interaction issues
of those times relevant today? No.
Of course, some aspects of the human side of human-computer interaction change slowly or not at all. Much of
what was learned about our perceptual, cognitive, social, and emotional processes when we interacted with older
technologies applies to our interaction with emerging technologies as well. Aspects of how we organize and retrieve
information persist, even as the specific technologies that we use change. The handbook in which an earlier version
of this essay appeared surveyed relevant knowledge of human psychology.
Nevertheless, there are reasons to understand the field’s history. Paradoxically, the rapid pace of technology
change could strengthen them.
1. Several disciplines are engaged in HCI research and application, but few people are exposed to more than one.
By seeing how each evolved, we can identify some benefits of expanding our focus and obstacles to doing so.
2. Celebrating the accomplishments of past visionaries and innovators is part of building a community and inspiring future
contributors, even when some of the past achievements are difficult to appreciate today.
3. Some visions and prototypes were quickly converted to widespread application, others took decades to influence
use, and some remain unrealized. By understanding the reasons for different outcomes, we can assess today’s vi-
sions more realistically.
4. Crystal balls are notoriously unreliable, but anyone planning or managing a career in a rapidly-changing field must
consider the future. Our best chance to anticipate change is to find trajectories that extend from the past to the
present. One thing is certain: The future will not resemble the present.
This account does not emphasize engineering “firsts.” It focuses on technologies and practices as they became widely
used, reflected in the spread of systems and applications. This was often paralleled by the formation of new research
fields, changes in existing disciplines, and the creation and evolution of professional associations and publications. More a
social history than a conceptual history, this survey identifies trends that you might download into your crystal balls.
An historical account is a perspective. It emphasizes some things while it de-emphasizes or omits others. A history
can be wrong in details, but it is never right in any final sense. Your questions and your interests will determine how
useful a perspective is to you. This essay covers several disciplines; software engineering, communication, design,
and marketing receive less attention than other accounts might provide.
A blueprint for intellectual histories of HCI was established by Ron Baecker in the 1987 and 1995 editions of Read-
ings in Human-Computer Interaction. It was followed by Richard Pew in the 2003 first edition of the Human-Computer
Interaction Handbook. Further historical insights and references can be found in Brian Shackel’s (1997) account of European
contributions and specialized essays by Brad Myers (1998) on HCI engineering history and Alan Blackwell (2006) on
the history of metaphor in design. Perlman et al. (1995) is a compendium of early HCI papers that appeared in the
human factors literature. HCI research in management information systems is covered by Banker and Kaufmann
(2004) and Zhang et al. (2009). Rayward (1983; 1998) and Burke (1994; 2007) review the pre-digital history of infor-
mation science; Burke (1998) is a focused study of an early digital effort.
A wave of popular books has addressed the history of personal computing (e.g., Hiltzik, 1999; Bardini, 2000;
Hertzfeld, 2005; Markoff, 2005; 2015; Moggridge, 2007). This essay builds on Timelines columns that ACM Interac-
tions published from 2006 to 2013.
Few of the authors above are trained historians. Many lived through much of the computing era as participants
and witnesses, yielding rich insights and questionable objectivity. This account draws on extensive literature and
hundreds of formal interviews and informal discussions, but everyone has biases. Personal experiences can enliven
an account by conveying human consequences of changes that otherwise appear abstract or distant. Some readers
enjoy anecdotes, others find them irritating. I try to satisfy both groups by including personal examples in a short ap-
pendix, akin to “deleted scenes” on a DVD.
I include links to freely accessible digital reproductions of some early works that have appeared in recent years.
The reproductions often do not preserve the original pagination, but quoted passages can be found with a search
tool.

Definitions: HCI, CHI, HF&E, IT, IS, LIS

HCI is often used narrowly to refer to work in one discipline. I define it very broadly to cover major threads of re-
search in four disciplines: human factors, information systems, computer science, and library & information sci-
ence. Later I discuss how differences in the use of simple terms make it difficult to explore the literature. Here I
explain my use of key disciplinary labels. CHI (Computer-Human Interaction) is given a narrower focus than HCI;
CHI is associated primarily with computer science, the Association for Computing Machinery Special Interest
Group (ACM SIGCHI), and the latter's annual CHI conference. I use human factors and ergonomics interchange-
ably, and refer to the discipline as HF&E. (Some writers define ergonomics more narrowly around hardware.) The
Human Factors Society (HFS) became the Human Factors and Ergonomics Society (HFES) in 1992. IS (infor-
mation systems) refers to the management discipline that has also been labeled data processing (DP) and man-
agement information systems (MIS). I follow common parlance in referring to organizational information systems
specialists as IT professionals or IT pros. LIS (library and information science) represents an old field with a new
digital incarnation that includes important HCI research. With IS taken, I do not abbreviate information science, a
discipline that often goes by simply 'information,' as in "Information School" or "School of Information."

Shifting Context: Moore's Law and Inflation

A challenge in interpreting past events and the literature is to keep in mind the radical differences in what a typi-
cal computer was from one decade to the next. Conceptual development can be detached from hardware to
some extent, but the evolving course of research and development cannot. We are familiar with Moore's law, but
we do not reason well about supralinear or exponential growth. We often failed to anticipate how rapidly change
would come, and then when it came, we did not credit the role played by the underlying technology.
Moore's law specifies the number of transistors on an integrated circuit; we will consider the broader range of
phenomena that exhibit exponential growth. Narrowly defined, Moore's law may be revoked, but broadly defined,
the health of the technology industry is tied to ongoing hardware innovation and efficiency gains. Let's not under-
estimate human ingenuity when so much is at stake, whether advances come through novel materials and cool-
ing techniques, three-dimensional architectures, optical computing, more effective parallelism, or other means.
Increased software efficiency is another area of opportunity. Finally, much of the historical literature forgets to
update costs to account for inflation. One dollar when the first commercial computers appeared was equivalent to
ten dollars today. I have converted prices, costs and grant funding to U.S. dollars as of 2015.
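To make the two adjustments concrete, the short Python sketch below compounds a Moore's-law-style doubling and applies a single inflation multiplier. The figures in it (a two-year doubling period, a tenfold price-level change, a $1 million machine) are illustrative assumptions drawn loosely from the rough numbers above, not precise data from this chapter.

```python
# Illustrative sketch only; the doubling period and inflation multiplier are assumptions.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Capacity growth after `years` if capacity doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def to_2015_dollars(nominal: float, inflation_multiplier: float = 10.0) -> float:
    """Convert an early-1950s price to 2015 dollars using a single assumed multiplier."""
    return nominal * inflation_multiplier

# Fifty years of doubling every two years is a roughly 33-million-fold increase;
# intuition tuned to linear change badly underestimates this.
print(f"Growth over 50 years: {growth_factor(50):,.0f}x")

# Under the tenfold assumption, a $1,000,000 machine when commercial computing
# began corresponds to about $10,000,000 in 2015 dollars.
print(f"${1_000_000:,} then is about ${to_2015_dollars(1_000_000):,.0f} now")
```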

HUMAN-TOOL INTERACTION AND INFORMATION


PROCESSING AT THE DAWN OF COMPUTING

In the century prior to the arrival of the first digital computers, new technologies gave rise to two fields of research that
later contributed to human-computer interaction. One focused on making the human use of tools more efficient, the
other focused on ways to represent and distribute information more effectively.

Origins of Human Factors

Frederick Taylor (1911) employed technologies and methods developed in the late 19th century—photography,
moving pictures, and statistical analysis—to improve work practices by reducing performance time. Time-and-
motion studies were applied to assembly-line manufacturing and other manual tasks. Despite the uneasiness with
“Taylorism” reflected in Charlie Chaplin’s popular satire Modern Times, scientists and engineers continued applying
this approach to boost efficiency and productivity.
Lillian Gilbreth (1914) and her husband Frank were the first engineers to add psychology to Taylor's "scientific
management." Lillian Gilbreth's PhD was the first degree awarded in industrial psychology. She studied and de-
signed for efficiency and worker experience as a whole; some consider her the founder of modern Human Factors.
She advised five U.S. presidents and was the first woman inducted into the National Academy of Engineering.
World War I and World War II gave rise to efforts to match people to jobs, to train them, and to design equipment
that was more easily mastered. Engineering psychology was born in World War II after simple flaws in the design of
aircraft controls (Roscoe, 1997) and escape hatches (Dyson, 1979) led to plane losses and thousands of casualties.
Among the legacies of World War II were respect for the potential of computing, based on code-breaking successes,
and an enduring interest in behavioral requirements for design.

During the war, aviation engineers, psychologists, and physicians formed the Aeromedical Engineering Associa-
tion. After the war, the terms 'human engineering,' 'human factors,' and 'ergonomics' came into use, the latter primari-
ly in Europe. For more on this history, see Roscoe (1997), Meister (1999), and HFES (2010).
Early tool use, whether by assembly-line workers or pilots, was not discretionary. If training was necessary, people
were trained. One research goal was to reduce training time, but more important was to increase the speed and reli-
ability of skilled performance.

Origins of the Focus on Information

H. G. Wells, best known for his science fiction, campaigned for decades to improve society by improving information
dissemination. In 1905 he proposed a system based on a new technology: index cards!
These index cards might conceivably be transparent and so contrived as to give a photographic copy
promptly whenever it was needed, and they could have an attachment into which would slip a ticket bearing
the name of the locality in which the individual was last reported. A little army of attendants would be at work
on this index day and night… An incessant stream of information would come, of births, of deaths, of arrivals
at inns, of applications to post-offices for letters, of tickets taken for long journeys, of criminal convictions,
marriages, applications for public doles and the like. A filter of offices would sort the stream, and all day and
all night for ever a swarm of clerks would go to and fro correcting this central register, and photographing
copies of its entries for transmission to the subordinate local stations, in response to their inquiries…
Would such a human-powered “Web 2.0” be a tool for social control or public information access? The image
evokes both the potential and the challenges of the information era that is taking shape now, a century later.
In the late 19th century, technologies and practices for compressing, distributing, and organizing information
bloomed. Index cards, folders, and filing cabinets—models for icons on computer displays much later—were im-
portant inventions that influenced the management of information and organizations in the early 20th century (Yates,
1989). Typewriters and carbon paper facilitated information dissemination, as did the mimeograph machine, patented
by Thomas Edison. Hollerith cards and electromechanical tabulation, celebrated steps toward computing, were heavi-
ly used to process information in industry.
Photography was used to record information as well as behavior. For almost a century, microfilm was the most ef-
ficient way to compress, duplicate, and disseminate large amounts of information. Paul Otlet, Vannevar Bush, and
other microfilm advocates played a major role in shaping the future of information technology.
As the cost of paper, printing, and transportation dropped in the late 19th and early 20th centuries, information dis-
semination and the profession of librarianship grew explosively. Library Associations were formed. The Dewey Deci-
mal and Library of Congress classification systems were developed. Thousands of relatively poorly-funded public
libraries sprang up to serve local demand in the United States. In Europe, government-funded libraries were estab-
lished to serve scientists and other specialists in medicine and the humanities. This difference led to different ap-
proaches to technology development on either side of the Atlantic.
In the United States, library management and the training of thousands of librarians took precedence over tech-
nology development and the needs of specialists. Public libraries adopted the simple but inflexible Dewey Decimal
Classification System. The pragmatic focus of libraries and emerging library schools meant that research into tech-
nology was in the province of industry. Research into indexing, cataloging, and information retrieval was variously
referred to as Bibliography, Documentation, and Documentalism.
In contrast, the well-funded European special libraries elicited sophisticated reader demands and pressure for li-
braries to share resources, which promoted interest in technology and information management. The Belgian Paul
Otlet obtained Melvil Dewey’s permission to create an extended version of his classification system to support what
we would today call hypertext links. Otlet agreed not to implement his Universal Decimal Classification (UDC) in Eng-
lish for a time, an early example of a legal constraint on technology development. Nevertheless, UDC is still in use in
some places.
In 1926, the Carnegie Foundation dropped a bombshell by endowing the Graduate Library School (GLS) at the
University of Chicago to focus solely on research. For two decades Chicago was the only university granting PhDs
in library studies. GLS positioned itself in the humanities and social sciences, with research into the history of pub-
lishing, typography, and other topics (Buckland, 1998). An Introduction to Library Science, the dominant library re-
search textbook for forty years, was written at Chicago (Butler, 1933). It did not mention information technology at
all. Library Science was shaped by the prestigious GLS program until well into the computer era and human-tool
interaction was not among its major concerns. Documentalists, researchers who did focus on technology, were con-
centrated in industry and government agencies.
Burke (2007, p. 15) summarized the early history, with its emphasis on training librarians and other specialists:
Most information professionals … were focusing on providing information to specialists as quickly as pos-
sible. The terms used by contemporary specialists appeared to be satisfactory for many indexing tasks
and there seemed no need for systems based on comprehensive and intellectually pleasing classification
schemes. The goal of creating tools useful to non-specialists was, at best, of secondary importance.
My account emphasizes the points at which computer technologies came into what might be called 'non-specialist
use.' This early history of information management is significant, however, because the Web and declining
digital storage costs have made it evident that everyone will soon become their own information manager, just as
we are all now telephone operators. But I am getting ahead of our story. This section concludes with accounts of
two individuals who, in different ways, shaped the history of information research and development.

Paul Otlet and the Mundaneum. Like his contemporary H.G. Wells, Otlet envisioned a vast network of information.
But unlike Wells, Otlet and his collaborators built one. Otlet established a commercial research service around facts
that he had been cataloging on index cards since the late 19th century. In 1919 the Belgian government financed the
effort, which moved to a record center called the Mundaneum. By 1934, 15 million index cards and millions of images
were organized and linked or cross-referenced using UDC. Curtailed by the Depression and damaged during World
War II, the work was largely forgotten. It was not cited by developers of the metaphorically identical Xerox NoteCards,
an influential hypertext system of the 1980s.
Technological innovation continued in Europe with the development of mechanical systems of remarkable ingenui-
ty (Buckland, 2009). Features included the use of photoreceptors to detect light passing through holes in index cards
positioned to represent different terms, enabling rapid retrieval of items on specific topics. These innovations inspired
the work of a well-known American scientist and research manager.

Vannevar Bush and Microfilm Machines. MIT professor Vannevar Bush was one of the most influential scientists in
American history. He advised Presidents Franklin Roosevelt and Harry Truman, served as director of the Office of
Scientific Research and Development, and was president of the Carnegie Institute.
Bush is remembered today for As We May Think, his 1945 Atlantic Monthly essay. It described the MEMEX, a hy-
pothetical microfilm-based electromechanical information processing machine. The MEMEX was to be a personal
workstation that enabled a professional to quickly index and retrieve documents or pictures and create hypertext-like
associations among them. The essay, excerpted below, inspired computer engineers and computer scientists who
made major contributions to HCI in the 1960s and beyond.
Not well known is that Bush wrote the core of his essay in the early 1930s, after which, shrouded in secrecy, he
spent two decades and unprecedented resources on the design and construction of several machines that comprised
a subset of MEMEX features. None were successful. The details are recounted in Colin Burke’s comprehensive book
Information and secrecy: Vannevar Bush, Ultra, and the other Memex.
Microfilm—photographic miniaturization—had qualities that attracted Bush, as they had Otlet. Microfilm was light,
could be easily transported, and was as easy to duplicate as paper records (Xerox photocopiers did not appear until
1959). The cost of handling film was brought down by technology created for the moving picture industry. Barcode-
like patterns of small holes could be punched on a film and read very quickly by passing the film between light beams
and photoreceptors. Microfilm was tremendously efficient as a storage medium. Memory based on relays or vacuum
tubes would never be competitive, and magnetic memory, when it eventually arrived, was less versatile and far more
expensive. It is easy today to overlook the compelling case that existed for basing information systems on microfilm.
Bush’s machines failed because of overly ambitious compression and speed goals and patent disputes, but ulti-
mately most critical was that Bush was unaware of decades of research on classification systems. American docu-
mentalists had been active, albeit not well-funded. In 1937, the American Documentation Institute (ADI) was formed,
predecessor of the American Society for Information Science and Technology (ASIST). Had he worked with them,
Bush, an electrical engineer by training, could have avoided the fatal assumption that small sets of useful indexing
terms would easily be defined and agreed upon. Metadata design was a research challenge then, and still is.
Bush described libraries and the public as potential users, but his machines cost far too much for that. He focused
on the FBI and CIA as customers, as well as military uses of cryptography and information retrieval. Despite the clas-
sified nature of this work, through his academic and government positions, his writings, the vast resources he com-
mandeered, and the scores of brilliant engineers he enlisted to work on microfilm projects, Bush exerted influence for
two decades, well into the computer era.

Bush’s vision emphasized both associative linking of information sources and discretionary use:
Associative indexing, the basic idea of which is a provision whereby any item may be caused at will to se-
lect immediately and automatically another. This is the essential feature of the MEMEX… Any item can be
joined into numerous trails… New forms of encyclopedias will appear, ready made with a mesh of associ-
ative trails [which a user could extend]…
The lawyer has at his touch the associated opinions and decisions of his whole experience and of the ex-
perience of friends and authorities. The patent attorney has on call the millions of issued patents, with fa-
miliar trails to every point of his client’s interest. The physician, puzzled by a patient’s reactions, strikes
the trail established in studying an earlier similar case and runs rapidly through analogous case histories,
with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with
the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with
trails following the analogies of compounds and side trails to their physical and chemical behavior.
The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only
on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a
particular epoch. There is a new profession of trail blazers, those who find delight in the task of establish-
ing useful trails through the enormous mass of the common record. (Bush, 1945.)
Bush knew that the MEMEX was unrealistic. None of his many projects included the “essential” associative linking.
Nevertheless, his descriptions of hands-on, discretionary use of powerful machines by professionals were inspirational.
His vision was realized 50 years later, built on technologies undreamt of in the 1930s and 1940s. Bush did not initially
support computer development—computers’ slow, bulky, and expensive information storage was clearly inferior to microfilm.

1945–1955: MANAGING VACUUM TUBES

World War II changed everything. Until then, most government research funding was managed by the Department of
Agriculture. The war brought unprecedented investment in science and technology, culminating in the atomic bomb.
This showed that huge sums could be found for academic or industrial research that addressed national goals. Re-
search expectations and strategies would never be the same.
Sophisticated electronic computation machines built before and during World War II were designed for specific
purposes, such as solving equations or breaking codes. Each of the expensive cryptographic machines that helped
win the war was designed to attack one encryption device. Whenever the enemy changed machines, a new one was
needed. This spurred interest in developing general-purpose computational devices. War-time improvements in vac-
uum tubes and other technologies made it more feasible, and their deployment brought human-computer interaction
into the foreground.
When engineers and mathematicians emerged from military and government laboratories and from secret project
rooms on university campuses, the public became aware of some breakthroughs. Development of ENIAC, arguably
the first general-purpose computer, began in secret during the war; the 'giant brain' was revealed publicly only when it
was completed in 1946. Its first use was not publicized: calculations supporting hydrogen bomb development. ENIAC
stood eight to ten feet high, occupied about 1800 square feet, and consumed as much energy as a small town. It pro-
vided far less computation and memory than you can slip into your pocket and run on a small battery today.
Memory was inordinately expensive. Even the largest computers of the time had little memory, so they were used
for computation and not for symbolic representation or information processing. The HCI focus was to reduce operator
burden: enabling a person to replace or reset vacuum tubes more quickly, and load stored-program computers from
tape rather than by manually attaching cables and setting switches. Such 'knobs and dials' human factors improve-
ments enabled one computer operator to accomplish work that had previously required a team.
Libraries installed simple microfilm readers to assist with information retrieval as publication of scholarly and popu-
lar material soared, but interest in technology was otherwise limited. The GLS orientation still dominated, focused on
librarianship, social science, and historical research. Independently, the foundation of information science was com-
ing into place, built on alliances that had been forged during the war among documentalists, electrical engineers, and
mathematicians interested in communication and information management. These included Vannevar Bush and his
collaborators Claude Shannon and Warren Weaver, co-authors in 1949 of a seminal work on information theory
(called communication theory at that time), and Ralph Shaw, a prominent American documentalist. The division be-
tween the two camps widened. Prior to the war, the technology-oriented American Documentation Institute had in-
cluded librarians and support for systems that spanned humanities and sciences; during the war and continuing
thereafter, ADI focused on government and 'Big Science.'

Three Roles in Early Computing

Early computer projects employed people in three roles: managers, programmers, and operators. Managers oversaw
design, development, and operation. They specified the programs to be written and distributed the output. Scientists
and engineers wrote programs, working with mathematically adept programmers who decomposed tasks into com-
ponents that a computer could manage. For ENIAC, this was a team of six women. In addition, a small army of op-
erators was needed. Once written, a program could take days to load by setting switches, positioning dials, and con-
necting cables. Despite innovations that boosted reliability, such as operating vacuum tubes at lower power to in-
crease lifespan and building in visible indicators of tube failure, ENIAC was often stopped to locate and replace failed
tubes. Vacuum tubes were reportedly wheeled around in shopping carts.
Eventually, each occupation—computer operation, management and systems analysis, and programming—
became a major focus of HCI research, centered respectively in Human Factors, (Management) Information Sys-
tems, and Computer Science. Computers and our interactions with them have evolved, but our research spectrum
still reflects this tripartite division of labor—extended to include a fourth, Information Science, when the cost of digital
memory declined sufficiently.

Grace Hopper: Liberating Computer Users. As computers became more reliable and capable, programming became
a central activity. Computer languages, compilers, and constructs such as subroutines facilitated ‘programmer-
computer interaction.’ Grace Hopper was a pioneer in all of these areas. She described her goal as freeing mathema-
ticians to do mathematics (Hopper, 1952; Sammet, 1992). It was echoed years later in the HCI goal of freeing users
to do their work. In the early 1950s, mathematicians were the users! Just as HCI professionals often feel marginalized
by software developers, Hopper's pioneering accomplishments in human-computer interaction were arguably under-
valued by other computer scientists, although she received recognition through the annual Grace Hopper Celebration
of Women in Computing, initiated in 1994.

1955–1965: TRANSISTORS, NEW VISTAS

Early forecasts that the world would need only a few computers reflected the limitations of vacuum tubes. Solid-state
computers became available commercially in 1958. Computers were still used primarily for scientific and engineering
tasks, but they were reliable enough not to require a staff of engineers. Less-savvy operators needed better interfac-
es. Although transistor-based computers were still very expensive and had limited capabilities, researchers could
envision previously unimaginable possibilities.
The Soviet Union's launch of the Sputnik satellite in 1957 challenged the West to invest in science and technology.
The development of lighter and more capable computers was an integral part of the well-funded program that put
men on the moon twelve years later.

Supporting Operators: The First Formal HCI Studies

“In the beginning, the computer was so costly that it had to be kept gainfully occupied for every second;
people were almost slaves to feed it.”
—Brian Shackel (1997)

Almost all computer use in the late 1950s and early 1960s involved programs and data that were read in from
cards or tape. A program then ran without interruption until it terminated, producing printed, punched or tape output.
This 'batch processing' restricted human interaction to operating the hardware, programming, and using the output.
Of these, the only job involving hands-on computer use was the least challenging and lowest-paying, the computer
operator. Programs were typically written on paper and keypunched onto cards or tape.
Computer operators loaded and unloaded cards and magnetic or paper tapes, set switches, pushed buttons, read
lights, loaded and burst printer paper, and put printouts into distribution bins. Operators interacted directly with the
system via a teletype: Typed commands interleaved with computer responses and status messages were printed on
paper that scrolled up one line at a time. Eventually, printers yielded to 'glass tty’s' (glass teletypes), also called cath-
ode ray tubes (CRTs) and visual display units or terminals (VDUs/VDTs). These displays also scrolled commands
and computer responses one line at a time. The price of a monochrome terminal that could only display alphanumeric
characters was $50,000 in today’s dollars, a small fraction of the cost of the computer. A large computer might have
one or more consoles. Programmers did not use interactive consoles until later. Although the capabilities were far
less than a tablet today, the charge to use an IBM 650 was $7500 an hour (Markoff, 2015).
Improving the design of buttons, switches, and displays was a natural extension of human factors or ergonomics.
In 1959, British ergonomist Brian Shackel published the first HCI paper, “Ergonomics for a Computer,” followed in
1962 by “Ergonomics in the Design of a Large Digital Computer Console.” These described console redesign for ana-
log and digital computers called the EMIac and EMIdec 2400, the latter being the largest computer at the time
(Shackel, 1997).
In the United States, in 1956 aviation psychologists created the Human Engineering Society, focused on improving
skilled performance through greater efficiency, fewer errors, and better training. The next year it adopted the more
elegant title Human Factors Society and in 1958 it initiated the journal Human Factors. Sid Smith’s (1963) “Man–
Computer Information Transfer” marked the start of his long career in the human factors of computing.

Visions and Demonstrations

As transistors replaced vacuum tubes, a wave of imaginative writing, conceptual innovation, and prototype-building
swept through the research community. Some of the terms and language quoted below are dated, notably in the use
of male generics, but many of the key concepts still resonate.

J.C.R. Licklider at BBN and ARPA. Licklider, a psychologist, played a dual role in the development of the field. He
wrote influential essays, and he backed important research projects as a manager at Bolt Beranek and Newman
(BBN) from 1957 to 1962 and director of the Information Processing Techniques Office (IPTO) of the Department of
Defense Advanced Research Projects Agency (called ARPA and DARPA at different times) from 1962 to 1964.
BBN employed dozens of influential researchers on government-funded computer-related projects. These included
John Seely Brown, Richard Pew, and many MIT faculty members including John McCarthy, Marvin Minsky, and Lick-
lider himself. IPTO funding created computer science departments and established artificial intelligence as a disci-
pline in the 1960s. It is best known for a Licklider project that created the forerunner of the Internet called the AR-
PANET, which was in use until 1990.
In 1960, Licklider outlined a vision of man–machine symbiosis: “There are many man–machine systems. At pre-
sent, however, there are no man–computer symbioses—answers are needed.” The computer, he wrote, was “a fast
information-retrieval and data-processing machine” destined to play a larger role: “One of the main aims of man–
computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems.”
This would require rapid, real-time interaction, which batch systems did not support. In 1962, Licklider and Wes
Clark outlined the requirements of a system for “on-line man–computer communication.” They identified capabilities
that they felt were ripe for development: time-sharing of a computer among many users; electronic input–output sur-
faces to display and communicate symbolic and pictorial information; interactive, real-time support for programming
and information processing; large-scale information storage and retrieval systems; and facilitation of human coopera-
tion. They were right: The ensuing decades of HCI work filled in this outline. They also foresaw that other desirable
technologies would be more difficult to achieve, such as speech recognition and natural language understanding.
In a memorandum that cleverly tied computing to the post-Sputnik space program, Licklider addressed his col-
leagues as “the members and affiliates of the Intergalactic Computer Network” and identified many features of a fu-
ture Internet (Licklider, 1963). His 1965 book Libraries of the Future expanded this vision. Licklider’s role in advancing
computer science and HCI is detailed by Waldrop (2001).

John McCarthy, Christopher Strachey, Wesley Clark. McCarthy and Strachey worked on time-sharing, which made
interactive computing possible (Fano & Corbato, 1966). Apart from a few researchers who had access to computers
built with spare-no-expense military funding, computer use was too expensive to support exclusive individual access.
Time-sharing allowed several simultaneous users (and later dozens) to work at terminals cabled to a single computer.
Languages were developed to facilitate control and programming of time-sharing systems (e.g., JOSS in 1964).
Wesley Clark was instrumental in building the TX-0 and TX-2 at MIT’s Lincoln Laboratory. These ma-
chines, which cost on the order of US$10 million apiece, demonstrated time-sharing and other innovative concepts,
and helped establish the Boston area as a center for computer research. The TX-2 was the most powerful and capa-
ble computer in the world at the time. It was much less powerful than a smartphone is today. Buxton (2006) includes
a recording of Clark and Ivan Sutherland discussing this era in 2005.

Ivan Sutherland and Computer Graphics. Sutherland’s 1963 Ph.D. thesis may be the most influential document in the
history of HCI. His Sketchpad system, built on the TX-2 to make computers “more approachable,” launched computer
graphics, which had a decisive impact on HCI twenty years later. A nicely restored version is available at
http://www.cl.cam.ac.uk/TechReports/UCAM-CL-TR-574.pdf.
Sutherland demonstrated iconic representations of software constraints, object-oriented programming concepts,
and the copying, moving, and deleting of hierarchically organized objects. He explored novel interaction techniques,
such as picture construction using a light pen. He facilitated visualization by separating the coordinate system used to
define a picture from the one used to display it, and demonstrated animated graphics, noting the potential for digitally
rendered cartoons 30 years before Toy Story. His frank descriptions enabled others to make rapid progress—when
engineers found Sketchpad too limited for computer-assisted design (CAD), he called the trial a “big flop,” described
why, and soon CAD was thriving.
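The separation Sutherland demonstrated, between the coordinates in which a picture is defined and those in which it is displayed, remains a basic pattern in interactive graphics. The Python sketch below illustrates the general idea in present-day terms; it is a minimal example under assumptions of my own, not a description of Sketchpad's implementation, and every name in it is invented for the illustration.

```python
# Minimal illustration of separating model ("world") coordinates from display
# ("screen") coordinates; not Sketchpad's actual design.
from dataclasses import dataclass

@dataclass
class Viewport:
    """Maps a rectangular region of world coordinates onto a pixel grid."""
    world_x: float       # left edge of the visible world region
    world_y: float       # bottom edge of the visible world region
    world_width: float
    world_height: float
    screen_width: int    # display width in pixels
    screen_height: int   # display height in pixels

    def to_screen(self, x: float, y: float) -> tuple[int, int]:
        """Convert a point defined in world coordinates to screen pixels."""
        sx = (x - self.world_x) / self.world_width * self.screen_width
        sy = (y - self.world_y) / self.world_height * self.screen_height
        # Flip y because the screen origin is at the top-left.
        return round(sx), self.screen_height - round(sy)

# The drawing is stored once, in world coordinates...
unit_square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# ...and can be shown through different viewports (panned or zoomed)
# without changing the stored picture.
overview = Viewport(0, 0, 10, 10, 640, 480)
close_up = Viewport(0, 0, 2, 2, 640, 480)
print([overview.to_screen(x, y) for x, y in unit_square])
print([close_up.to_screen(x, y) for x, y in unit_square])
```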
In 1964, Sutherland succeeded Licklider as the director of IPTO. Among those he funded was Douglas Engelbart
at the Stanford Research Institute (SRI).

Douglas Engelbart: Augmenting Human Intellect. In 1962, Engelbart published “Augmenting Human Intellect: A Con-
ceptual Framework.” In the years that followed, he built systems that made astonishing strides toward realizing his
vision. He also supported and inspired engineers and programmers who made major contributions.
Echoing Bush and Licklider, Engelbart saw the potential for computers to become congenial tools that people
would choose to use interactively:
By 'augmenting human intellect' we mean increasing the capability of a man to approach a complex prob-
lem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems… By
‘complex situations’ we include the professional problems of diplomats, executives, social scientists, life
scientists, physical scientists, attorneys, designers… We refer to a way of life in an integrated domain
where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful
concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.

Engelbart used his ARPA funding to develop and integrate an extraordinary set of prototype applications into his
NLS system. He conceptualized and implemented the foundations of word processing, invented and refined input
devices including the mouse and the multikey control box, and built multi-display environments that integrated text,
graphics, and video. These unparalleled advances were demonstrated in a sensational 90-minute live event at the
1968 Fall Joint Computer Conference in San Francisco (http://sloan.stanford.edu/MouseSite/1968Demo.html). The
focal point for interactive systems research in the United States was shifting from the East Coast to the West Coast.
Engelbart, an engineer, supported human factors testing to improve efficiency and reduce errors in skilled use
arising from fatigue and stress. To use the systems required training. Engelbart felt that people should be willing to
tackle a difficult interface if it delivered great power once mastered. Unfortunately, difficulty with initial use was a fac-
tor in Engelbart's loss of funding. His 1968 demonstration became in one respect a success disaster: DARPA in-
stalled NLS in its offices and found it too difficult (Bardini, 2000). Years later, the question "Is it more important to opti-
mize for skilled use or initial use?" was actively debated by CHI researchers and still resurfaces.

Ted Nelson’s Vision of Interconnectedness. In 1960, Ted Nelson, then a graduate student in sociology, founded Pro-
ject Xanadu and three years later coined the term hypertext in describing the goal, an easily-used computer network.
In 1965, he published an influential paper titled “A File Structure for the Complex, the Changing and the Indetermi-
nate.” Nelson wrote stirring calls for systems that would democratize computing through a highly interconnected, ex-
tensible network of digital objects (e.g., Nelson, 1973). Xanadu was never fully realized, and Nelson did not consider
the early World Wide Web to be a realization of his vision, but subsequent, lightweight technologies such as weblogs,
wikis, collaborative tagging, and search enable many of the activities he envisioned.
Later, Nelson (1996) anticipated intellectual property issues that would arise in digital domains. He coined the term
'micropayment' and again drew attention to possibilities that are being realized in different ways.

From Documentalism to Information Science

The late 1950s saw the final major investments in microfilm and other pre-digital systems. The most ambitious were
Vannevar Bush's final military and intelligence projects (Burke, 1994). Some documentalists saw that the declining
cost of digital memory would enable computation engines to become information processing machines. As mathema-
ticians and engineers began engaging in technology development, initiatives were launched that bore few ties to con-
temporary librarianship or the humanities orientation of library schools. The conceptual evolution was incremental, but
institutional change could come swiftly, organized around a new banner.
Merriam-Webster dates the term 'information science' to 1960. Conferences held at Georgia Institute of Technolo-
gy in 1961 are credited with shifting the focus from information as a technology to information as an incipient science.
In 1963, chemist-turned-documentalist Jason Farradane taught the first information science courses at City Universi-
ty, London. Much earlier, the profession of chemistry had taken the lead in organizing its literature systematically.
Another chemist-turned-documentalist, Allen Kent, was central to an information science initiative at the University of
Pittsburgh (Aspray, 1999). In the early 1960s, Anthony Debons, a psychologist and friend of Licklider, organized a
series of NATO-sponsored congresses at Pittsburgh. Guided by Douglas Engelbart, these meetings centered on
people and on how technology could augment their activities. In 1964 the Graduate Library School at the University of
Pittsburgh became the Graduate School of Library and Information Sciences, and Georgia Tech formed a “School of
Information Science,” initially with one full-time faculty member.

Conclusion: Visions, Demos, and Widespread Use

Progress in HCI can be reflected in inspiring visions, conceptual advances that enable aspects of those visions to be
demonstrated in working prototypes, and the evolution of design and application. The engine that drove it all, inspiring
visions and enabling some to be realized and eventually widely deployed, was the relentless advance of hardware
that produced devices that were millions of times more powerful than the far more expensive systems designed and
used by the pioneers.
At the conceptual level, much of the basic foundation for today’s graphical user interfaces was in place by 1965.
As Sutherland worked on the custom-built US$10 million TX-2 at MIT, a breakthrough occurred nearby: the Digital
Equipment Corporation (DEC) PDP-1 minicomputer, “truly a computer with which an individual could interact” (Pew,
2003). First appearing in 1960, the PDP-1 came with a CRT display, keyboard, light pen, and paper tape reader. It
cost only about US$1 million. It had only about the capacity that a Radio Shack TRS-80 would have twenty years
later, and it required considerable technical and programming support. Nevertheless, the PDP-1 and its descendants
became the machines of choice for computer-savvy researchers.
Licklider’s "man–computer symbiosis,” Engelbart’s “augmented human intellect,” and Nelson’s “conceptual frame-
work for man–machine everything” described a world that did not exist: A world in which attorneys, doctors, chemists,
and designers chose to be hands-on users of computers. For some time to come, the reality would be that most
hands-on use was by computer operators engaged in routine, nondiscretionary tasks. As for the visions, 50 years
later many of the capabilities are taken for granted, some are just being realized, and a few remain elusive.

1965–1980: HCI PRIOR TO PERSONAL COMPUTING

Control Data Corporation launched the transistor-based 6000 series computers in 1964. In 1965, commercial com-
puters based on integrated circuits arrived with the IBM System/360. These systems, later called mainframes to dis-
tinguish them from minicomputers, firmly established computing in the business realm. Each of the three computing
roles—operation, management, and programming—became a significant profession in this period.
Operators still interacted directly with computers for routine maintenance and operation. As timesharing spread,
hands-on use expanded to include data entry and other repetitive tasks. Managers and systems analysts oversaw
hardware acquisition, software development, operation, and the use of output. Managers who relied on printed output
and reports were called 'computer users,' although they did not interact directly with the computers.
Few programmers were direct users until late in this period. Most prepared flowcharts and wrote programs on pa-
per forms. Keypunch operators then punched the program instructions onto cards. These were sent to computer cen-
ters for computer operators to load and run. Printouts and other output were picked up later. Some programmers
used computers directly when they could, but outside of research centers, the cost of computer time generally dictat-
ed this more efficient division of labor.
I have focused on broad trends. Business computing took off in the late 1960s, but the 1951 LEO is claimed to be
the first commercial business computer. To learn more about this venture, which ended with the arrival of the main-
frame, see Wikipedia (under 'LEO computer') and the books and articles cited there.

Human Factors and Ergonomics Embrace Computer Operation

In 1970 at Loughborough University, Brian Shackel founded the Human Sciences and Advanced Technology (HU-
SAT) center, devoted to ergonomics research and emphasizing HCI. In the U.S., Sid Smith and other human factors
engineers worked on input and output issues, such as the representation of information on displays (Smith, Farquhar
& Thomas, 1965) and computer-generated speech (Smith & Goodwin, 1970). The Computer Systems Technical
Group (CSTG) of the Human Factors Society was formed in 1972 and became the society's largest technical group.
The Human Factors journal published some HCI articles and in 1969 the International Journal of Man-Machine
Studies (IJMMS) appeared. The first widely-read HCI book was James Martin’s 1973 Design of Man–Computer Dia-
logues. His comprehensive survey of interfaces for computer operators and data entry began with an arresting open-
ing chapter that described a world in transition. Extrapolating from declining hardware prices, he wrote:
The terminal or console operator, instead of being a peripheral consideration, will become the tail that
wags the whole dog. . . . The computer industry will be forced to become increasingly concerned with the
usage of people, rather than with the computer’s intestines.
In the mid-1970s, U.S. government agencies responsible for agriculture and social security initiated large-scale
data processing projects (Pew, 2003). Although these efforts were unsuccessful, they led to methodological innova-
tions in the use of style guides, usability labs, prototyping, and task analysis.
In 1980, three significant HF&E books were published: two on VDT design (Cakir, Hart & Stewart, 1980;
Grandjean & Vigliani, 1980) and one on general guidelines (Damodaran, Simpson & Wilson, 1980). Drafts of German
work on VDT standards, made public in 1981, provided an economic incentive to design for human capabilities by
threatening to ban noncompliant products. Later that year, an ANSI standards group for 'office and text systems'
formed in the U.S.

Information Systems Addresses the Management of Computing

Companies acquired expensive business computers to address major organizational concerns. Even when the
principal concern was simply to appear modern (Greenbaum, 1979), the desire to show benefits from a multi-million
dollar investment chained some managers to a computer almost as tightly as it did the operator and data entry
'slaves.' They were expected to make use of output and manage any employee resistance to using a system.
In 1967, the journal Management Science initiated a column titled “Information Systems in Management Science.”
Early definitions of 'Information systems' included “an integrated man/machine system for providing information to
support the operation, management, and decision-making functions in an organization” and “the effective design,
delivery and use of information systems in organizations” (Davis, 1974, & Keen, 1980, quoted in Zhang, Nah &
Preece, 2004). In 1968, a Management Information Systems center and degree program was established at the Uni-
versity of Minnesota. It initiated influential research streams and in 1977 launched MIS Quarterly, which became the
leading journal in the field. The MIS field focused on specific tasks in organizational settings while emphasizing gen-
eral theory and precise measurement, a challenging combination.

An historical survey by Banker and Kaufmann (2004) identifies HCI as one of five major IS research streams, da-
ting back to a 1967 paper by Ackoff that described challenges in handling computer-generated information. Some
MIS research covered hands-on operator issues such as data entry and error message design, but for a decade most
HCI work in information systems dealt with users of information, typically managers. Research included the design of
printed reports. The drive for theory led to a strong focus on cognitive styles: individual differences in how people
(especially managers) perceive and process information. Some MIS articles on HCI appeared in the human factors-
oriented IJMMS as well as in the management journals.
Sociotechnical approaches to system design (Mumford, 1971; 1976; Bjørn-Andersen & Hedberg, 1977) were de-
veloped in response to user difficulties and resistance. They involved educating representative workers about techno-
logical possibilities and involving them in design, in part to increase employee acceptance of a system being devel-
oped. Sophisticated views of the complex social and organizational dynamics around system adoption and use
emerged, such as Rob Kling's 'Social analyses of computing: Theoretical perspectives in recent empirical research'
(1980) and Lynne Markus's 'Power, politics, and MIS implementation' (1983).

Programming: Subject of Study, Source of Change

Programmers who were not hands-on users nevertheless interacted with computers. More than 1,000 research pa-
pers on variables affecting programming performance were published in the 1960s and 1970s (Baecker & Buxton,
1987). Most examined programmer behavior in isolation, independent of organizational context. Influential reviews of
this work included Gerald Weinberg’s landmark The Psychology of Computer Programming in 1971, Ben Shnei-
derman's Software Psychology: Human Factors in Computer and Information Systems in 1980, and Beau Sheil's
1981 review of studies of programming notation (conditionals, control flow, data types), practices (flowcharting, in-
denting, variable naming, commenting), and tasks (learning, coding, debugging).
Software developers changed the field through invention. In 1970, Xerox Palo Alto Research Center (PARC) was
created to develop new computer hardware, programming languages, and programming environments. It attracted
researchers and system builders from the laboratories of Engelbart and Sutherland. In 1971, Allen Newell of Carne-
gie Mellon University proposed a project to PARC to focus on the psychology of cognitive behavior, writing that “cen-
tral to the activities of computing—programming, debugging, etc.—are tasks that appear to be within the scope of this
emerging theory.” It was launched in 1974 (Card & Moran, 1986).
HUSAT and PARC were both founded in 1970 with broad charters. HUSAT focused on ergonomics, anchored in
the tradition of nondiscretionary use, one component of which was the human factors of computing. PARC focused
on computing, anchored in visions of discretionary use, one component of which was also the human factors of com-
puting. Researchers at PARC, influenced by cognitive psychology, extended the primarily perceptual-motor focus of
human factors to higher-level cognition, whereas HUSAT, influenced by sociotechnical design, extended human fac-
tors by considering organizational factors.

Computer Science: A New Discipline

The first university computer science departments formed in the mid-1960s. Their orientation depended on their
origin: some branched from engineering, others from mathematics. Computer graphics was an engineering speciali-
zation of particular relevance to HCI. Applied mathematics provided many of the early researchers in artificial intelli-
gence, which has interacted with HCI in complex ways that are described below.
The early machines were funded without consideration of the cost by branches of the military. Technical success
was the sole evaluation criterion (Norberg & O’Neill, 1996). Directed by Licklider, Sutherland, and their successors,
ARPA played a major role. The need for heavy funding concentrated researchers in a few research centers, which
bore little resemblance to the business computing environments of that era. Users and user needs differed: Techni-
cally savvy hands-on users in research settings did not press for low-level interface efficiency enhancements.
Therefore, the computer graphics and AI perspectives that arose in these centers differed from the perspectives of
HCI researchers who focused on less expensive, more widely deployed systems. Computer graphics and AI required
processing power—for this research, hardware advances led to declining costs for the same high level of computa-
tion. For HCI research, hardware advances led to greater computing capability at the same low price. Later this dif-
ference would diminish, when widely-available machines could support graphical interfaces and some AI programs.
Nevertheless, between 1965 and 1980 some computer science researchers focused on interaction; many had been
influenced by the central role of discretionary interactive use in the early writings of Bush, Licklider, and Engelbart.

Computer Graphics: Realism and Interaction. In 1968, Sutherland joined David Evans to establish an influential com-
puter graphics laboratory at the University of Utah, which had established a Computer Science department in 1965.
Utah contributed to the western migration as graduates went to California, including Alan Kay and William Newman,
and later Jim Blinn and Jim Clark. Most graphics systems at the time were built on the DEC PDP-1 and PDP-7. The
list price of one high-resolution display was more than US$100,000 in today’s dollars. These machines were in princi-
ple capable of multitasking, but in practice most graphics programs required all of a processor's cycles.

In 1973, the Xerox Alto arrived, a major step toward realizing Alan Kay’s vision of computation as a medium for
personal computing (Kay and Goldberg, 1977). It wasn't powerful enough to support high-end graphics research, but
it included more polished versions of many of the graphical interface features that Engelbart prototyped five years
earlier. Less expensive than the PDP-1 but too costly for wide use as a personal device, the Alto was never widely
marketed. However, it signaled the approach of inexpensive, interactive, personal machines capable of supporting
graphics.
Computer graphics researchers had a choice: High-end graphics, or more primitive features that could run on
widely affordable machines. William Newman, co-author in 1973 of the influential Principles of Interactive Computer
Graphics, described the shift: “Everything changed—the Computer Graphics community got interested in realism, I
remained interested in interaction, and I eventually found myself doing HCI” (personal communication). He was not
alone. Other graphics researchers whose focus shifted to broader interaction issues included Ron Baecker and Jim
Foley. Foley and Wallace (1974) identified requirements for designing “interactive graphics systems whose aim is
good symbiosis between man and machine.” The shift was gradual but steady: Eighteen papers in the first SIG-
GRAPH conference in 1974 had the words “interactive” or “interaction” in their titles. A decade later, none did.
Prior to the Alto, the focus had been on trained, expert performance. At Xerox, Larry Tesler and Tim Mott took the
step of considering how a graphical interface could best serve users with no training or technical background. By ear-
ly 1974 they had developed the Gypsy text editor. Gypsy and Xerox’s Bravo editor developed by Charles Simonyi
preceded and influenced Microsoft Word (Hiltzik, 1999).
In 1976 SIGGRAPH sponsored a two-day workshop in Pittsburgh titled “User Oriented Design of Interactive
Graphics Systems.” Participants who were later active in CHI included Jim Foley, William Newman, Ron Baecker,
John Bennett, Phyllis Reisner, and Tom Moran. J.C.R. Licklider and Nicholas Negroponte presented vision papers.
The conference was managed by the chair of Pittsburgh’s computer science department. One participant was Antho-
ny Debons, Licklider’s friend in Pittsburgh’s influential Information Science program. UODIGS’76 marked the end of
the visionary period that had promoted an idea whose time had not quite come. Licklider saw it clearly:
Interactive computer graphics appears likely to be one of the main forces that will bring computers directly
into the lives of very large numbers of people during the next two or three decades. Truly user-oriented
graphics of sufficient power to be useful to large numbers of people has not been widely affordable, but it
will soon become so, and, when it does, the appropriateness and quality of the products offered will to a
large extent determine the future of computers as intellectual aids and partners of people. (Licklider, 1976.)
Despite the stature of the participants, the 150-page proceedings was not cited. The next “user oriented design”
conference was not held until five years later, at which point they became annual events. Application of graphics was
not yet at hand; HCI research remained focused on interaction driven by commands, forms, and full-page menus.

Artificial Intelligence: Winter Follows Summer. AI burst onto the scene in the late 1960s and early 1970s. Logically, AI
and HCI are closely related. What are intelligent machines for if not to interact with people? AI research influenced
HCI: Speech recognition and natural language are perennial HCI topics; expert, knowledge-based, adaptive and
mixed-initiative systems were tried, as were HCI applications of production systems, neural networks, and fuzzy logic.
However, AI did not transform HCI. Some AI features made it into systems and applications, but predictions that
powerful AI technologies would come into wide use were not borne out. AI did not come into focus in the HCI re-
search literature, and few AI researchers showed interest in HCI.
To understand how this transpired requires a brief review of early AI history. The term artificial intelligence first ap-
peared in a 1955 call by John McCarthy for a meeting on machine intelligence held at Dartmouth. In 1956, Alan Tu-
ring’s prescient essay, “Computing Machinery and Intelligence,” attracted attention when it was reprinted in The
World of Mathematics. (It was first published in 1950, as were Claude Shannon’s “Programming a Computer for Play-
ing Chess” and Isaac Asimov’s collection of science fiction stories I, Robot, a thoughtful exploration of ethical issues).
Also in 1956, Newell and Simon outlined a logic theory machine, after which they focused on developing a general
problem solver. The LISP programming language was invented in 1958 (McCarthy, 1960).
Many AI pioneers were trained in mathematics and logic, fields that can be largely derived from a few axioms and
a small set of rules. Mathematical ability is considered a high form of intelligence, even by non-mathematicians. AI
researchers anticipated that machines that operate logically and tirelessly would make profound advances by apply-
ing a small set of rules to a limited number of objects. Early AI focused on theorem-proving, problems with a strong
logical focus, and games such as chess and go, which like math start with a small number of rules and a fixed set of
objects. McCarthy (1988), who espoused predicate calculus as a foundation for AI, summed it up:
As suggested by the term 'artificial intelligence,' we weren’t considering human behavior except as a clue
to possible effective ways of doing tasks. The only participants who studied human behavior were Newell
and Simon. (The goal) was to get away from studying human behavior and consider the computer as a
tool for solving certain classes of problems. Thus, AI was created as a branch of computer science and
not as a branch of psychology.
Unfortunately, by ignoring psychology, mathematicians overlooked the complexity and inconsistency that mark
human thought and social constructs. Underestimating the complexity of intelligence, they overestimated the pro-
spects for creating it artificially. Hyperbolic predictions and AI have been close companions from the start. In the
summer of 1949, the British logician and code-breaker Alan Turing wrote in the London Times:
I do not see why [the computer] should not enter any one of the fields normally covered by the human intel-
lect, and eventually compete on equal terms. I do not think you can even draw the line about sonnets,
though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better ap-
preciated by another machine.
Optimistic forecasts by the 1956 Dartmouth workshop participants attracted considerable attention. When they col-
lided with reality, a pattern was established that was to play out repeatedly. Hans Moravec (1998) wrote:
In the 1950s, the pioneers of AI viewed computers as locomotives of thought, which might outperform hu-
mans in higher mental work as prodigiously as they outperformed them in arithmetic, if they were harnessed
to the right programs… By 1960 the unspectacular performance of the first reasoning and translation pro-
grams had taken the bloom off the rose.
When interest in AI declined, HCI thrived on human and computational resources that were freed, a recurring pat-
tern. In 1960, with the bloom off the AI rose, the managers of MIT’s Lincoln Laboratory sought uses for their massive
computers. Ivan Sutherland's Sketchpad and other early computer graphics were a result.
The response to Sputnik soon reversed the downturn in AI prospects. J.C.R. Licklider, as director of ARPA’s In-
formation Processing Techniques Office from 1962 to 1964, provided extensive support for computer science in gen-
eral and AI in particular. MIT’s Project Mac, founded in 1963 by Marvin Minsky and others, initially received US$13M
per year, rising to $24M by 1969. ARPA sponsored the AI Laboratory at SRI, AI research at CMU, and Nicholas Ne-
groponte’s Machine Architecture Group at MIT. A dramatic early achievement, SRI’s Shakey the Robot, was featured
in articles in Life (Darrach, 1970) and National Geographic (White, 1970). Given a simple but non-trivial task, Shakey
could apparently go to the specified location, scan and reason about the surroundings, and move objects as needed
to accomplish the goal (to see Shakey at work, go to http://www.ai.sri.com/shakey/).
In 1970, Negroponte outlined a case for machine intelligence: “Why ask a machine to learn, to understand, to as-
sociate courses with goals, to be self-improving, to be ethical—in short, to be intelligent?” He noted common reserva-
tions: “People generally distrust the concept of machines that approach (and thus why not pass?) our own human
intelligence,” and identified a key problem: “Any design procedure, set of rules, or truism is tenuous, if not subversive,
when used out of context or regardless of context.” This insight, that it is risky to apply algorithms without understand-
ing the situation at hand, led Negroponte to a false inference: “It follows that a mechanism must recognize and un-
derstand the context before carrying out an operation.”
An alternative is that the mechanism is guided by humans who understand the context: Licklider’s human-machine
symbiosis. Overlooking this, Negroponte sought funding for an ambitious AI research program:
Therefore, a machine must be able to discern changes in meaning brought about by changes in context,
hence, be intelligent. And to do this, it must have a sophisticated set of sensors, effectors, and processors to
view the real world directly and indirectly… A paradigm for fruitful conversations must be machines that can
speak and respond to a natural language… But, the tete-à-tete (sic) must be even more direct and fluid; it is
gestures, smiles, and frowns that turn a conversation into a dialogue... Hand-waving often carries as much
meaning as text. Manner carries cultural information: the Arabs use their noses, the Japanese nod their
heads…
Imagine a machine that can follow your design methodology, and at the same time discern and assimilate
your conversational idiosyncrasies. This same machine, after observing your behavior, could build a predic-
tive model of your conversational performance. Such a machine could then reinforce the dialogue by using
the predictive model to respond to you in a manner that is in rhythm with your personal behavior and con-
versational idiosyncrasies… The dialogue would be so intimate—even exclusive—that only mutual persua-
sion and compromise would bring about ideas, ideas unrealizable by either conversant alone. No doubt, in
such a symbiosis it would not be solely the human designer who would decide when the machine is rele-
vant.
Also in 1970, Negroponte’s MIT colleague Minsky went further, as reported in Life:
In from three to eight years we will have a machine with the general intelligence of an average human being.
I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, and
have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months, it
will be at genius level and a few months after that its powers will be incalculable. (Darrach, 1970.)
Other AI researchers told Darrach that Minsky’s timetable was ambitious:
Give us 15 years’ was a common remark—but all agreed that there would be such a machine and that it
would precipitate the third Industrial Revolution, wipe out war and poverty and roll up centuries of growth in
science, education and the arts.
Minsky later suggested to me that he was misquoted, but such predictions were everywhere. In 1960, Nobel lau-
reate and AI pioneer Herb Simon had written, “Machines will be capable, within twenty years, of doing any work that a
man can do.” In 1963, John McCarthy obtained ARPA funding to produce a "fully intelligent machine within a dec-
ade," (Moravec, 1988). In 1965, I. J. Good, an Oxford mathematician, wrote, “the survival of man depends on the
early construction of an ultra-intelligent machine” that “could design even better machines; there would then unques-
tionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” Indeed, if Minsky had not
made such a prediction in 1970, he might have had difficulty getting funded.

Darrach ended by quoting Ross Quillian:
I hope that man and these ultimate machines will be able to collaborate without conflict. But if they can’t,
we may be forced to choose sides. And if it comes to choice, I know what mine will be. My loyalties go to
intelligent life, no matter in what medium it may arise. (p. 68).
Responding to this collective sense of urgency, ARPA initiated major programs in speech recognition and natural
language understanding in 1971.
It is important to understand both the anxieties of the time and the consequences of such claims. The world had
barely avoided a devastating thermonuclear war during the Cuban missile crisis of 1962. Leaders seemed powerless
to defuse the Cold War. The machines could save us! The consequences for HCI were not good. Funds went to AI
and good students followed. An ultra-intelligent machine would be able to clean up all of the world's user interfaces,
so why should anyone focus on such trivialities?
Ironically, central to funding the AI research was a psychologist who was not wholly convinced. In 1960, citing an
Air Force study that predicted that super-intelligent machines might take 20 years to build, J.C.R. Licklider noted that
until that happened, HCI ("man-computer symbiosis") would be useful: “That would leave, say, five years to develop
man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually
the most creative and exciting in the history of mankind.” Ten to five hundred years represents breathtaking uncer-
tainty. Recipients of Licklider’s AI funding were on the optimistic end of this spectrum. Speech and language recogni-
tion were well-funded, considered by Licklider to be integral to achieving symbiosis, steps on the path to intelligence.
Five years later, disappointed with the lack of progress, ARPA ended support. An AI winter ensued. A similar story
unfolded in Great Britain. Through the 1960s, AI research expanded, spearheaded by Turing’s former colleague
Donald Michie. In 1973, the Lighthill report, commissioned by the Science and Engineering Research Council, was
generally pessimistic about AI scaling up to address real-world problems. Government funding was cut.
The next decade has been called an AI winter, a recurring season in which research funding is withheld due to dis-
illusionment over unfulfilled promises. The bloom was again off the rose, but it would prove to be a hardy peren-
nial. (For a more detailed account of AI summers and winters see Grudin, 2009, and Markoff, 2015).

Library Schools Embrace Information Science

Work on information science and “human information behavior” in the 1960s and 1970s focused on scholarship and
application in science and engineering (Fidel, 2011). With 'big science' alive and well post-Sputnik, many researchers aligned their work with national priorities.
The terms 'information science,' 'information technology,' and 'information explosion' came into use in this period.
In 1968, the American Documentation Institute became the American Society for Information Science. Two years
later, the journal American Documentation became Journal of the American Society for Information Science. In 1978
the ACM Special Interest Group on Information Retrieval (SIGIR) was formed and launched the annual Information
Storage and Retrieval conference (since 1982, 'Information Retrieval'), modeled on a conference held seven years
earlier. By 1980, schools at over a dozen universities had added 'information' to their titles, many of them library
schools in transition. In 1984, the American Library Association belatedly embraced the i-word by creating the Asso-
ciation for Library and Information Science Education (ALISE), which convenes an annual research conference.
The pioneering Pittsburgh and Georgia Tech programs flourished. Pittsburgh created the first U.S. information sci-
ence Ph.D. program in 1970, declaring humans to be “the central factor in the development of an understanding of
information phenomena” (Aspray, 1999). The program balanced behavioral science (psychology, linguistics, commu-
nication) and technical grounding (automata theory, computer science). In 1973, Pittsburgh established the first in-
formation science department, which developed an international reputation. The emphasis shifted slowly from behav-
ior to technology. The Georgia Tech School of Information Science expanded after being awarded an NSF center
grant in 1966. In 1970 it became a Ph.D.-granting school, rechristened 'Information and Computer Science.'
Terminal-based computing costs declined. The ARPANET debuted in 1969. It supported email in 1971 and file-
sharing in 1973, which spurred visions of a 'network society' (Hiltz & Turoff, 1978). However, developing and deploy-
ing transformative information technology could be difficult. A prominent example was MIT's Project Intrex, the largest
unclassified information research project of the time. From 1965 to 1972, the Ford and Carnegie Foundations, NSF,
DARPA, and the American Newspaper Publishers Association invested over US$30 million to create a "library of the
future." Online catalogs were to include up to 50 index fields per item, accessible on CRT displays, with full text of
books and articles converted to microfilm and read via television displays. None of it proved feasible (Burke, 1998).
As an aside, the optimism of AI and networking proponents of this era lacks the psychological insight and nuance
of the novelist E.M. Forster, who in 1909 anticipated both developments in his remarkable story The Machine Stops.

1980–1985: DISCRETIONARY USE COMES INTO FOCUS

In 1980, HF&E and IS were focused on the down-to-earth business of making efficient use of expensive mainframes.
The beginning of a major shift went almost unnoticed. Less expensive but highly capable minicomputers based on
LSI technology were making inroads into the mainframe market. At the low end, home computers were gaining trac-
tion. Students and hobbyists were drawn to these minis and micros, creating a population of hands-on discretionary
users. There were also experiments with online library catalogs and electronic journals.
Then, between 1981 and 1984, a flood of innovative, serious computers were released: Xerox Star, IBM PC, Ap-
ple Lisa, Lisp machines from Symbolics and Lisp Machines, Inc. (LMI), workstations from Sun Microsystems and Sili-
con Graphics, and the Apple Macintosh. All supported interactive, discretionary use.
A change that gave rise to more consumer technology choices than all of these machines combined occurred on
January 1, 1984, when AT&T’s breakup into competing companies took effect. AT&T, with more employees and more
customers than any other U.S. company, had been a monopoly: Neither customers nor employees had much discre-
tion in technology use. Accordingly, AT&T and its Bell Laboratories research division had employed human factors to
improve training and operational efficiency. Customers of AT&T and the new regional operating companies now had
choices. As a monopoly, AT&T had been barred from the computer business. In 1985 it launched the Unix PC. AT&T
lacked experience designing for discretionary use, which is why you haven't heard of the Unix PC. The HCI focus of
telecommunications companies had to broaden (Israelski & Lund, 2003).
The lower-priced computers created markets for shrinkwrap software as, for the first time, non-technical hands-on
users who would get little or no formal training were targeted. It had taken twenty years, but the early visions were
being realized! Non-programmers were choosing to use computers to do their work. The psychology of discretionary
users intrigued two groups: (i) experimental psychologists who liked to use computers, and (ii) technology companies
who wanted to sell to discretionary users. Not surprisingly, computer and telecommunication companies started hiring
more experimental psychologists, especially those who liked to use computers.

Discretion in Computer Use

Technology use lies on a continuum, bracketed by the assembly line nightmare of Modern Times and the utopian
vision of completely empowered individuals. To use a technology or not to use it—sometimes we have a choice, oth-
er times we don’t. On the phone, we sometimes have to wrestle with speech recognition routing systems. At home,
computer use is largely discretionary. The workplace often lies in-between: Technologies are prescribed or pro-
scribed, but we ignore some injunctions or obtain exceptions, we use some features but not others, and we join with
colleagues to press for changes.
For early computer builders, the work was more a calling than a job, but operation required a staff to carry out es-
sential but less interesting tasks. For the first half of the computing era, most hands-on use was by people with a
mandate. Hardware innovation, more versatile software, and progress in understanding the tasks and psychology of
users—and transferring that understanding to software developers—led to hands-on users with more choice over
how they work. Rising expectations played a role; people heard that software is flexible and expected it to be more
congenial. Competition among vendors produced alternatives. With more emphasis on marketing to consumers came
more emphasis on user-friendliness.
Discretion is not all-or-none. No one must use a computer, but many jobs and pastimes require it. People can re-
sist, sabotage, or quit, but a clerk or a systems administrator has less discretion than someone engaged in an online
leisure activity. For an airline reservation clerk, computer use is mandatory. For a traveler booking a flight, computer
use is still discretionary. This distinction and the shift toward greater discretion were at the heart of the history of HCI.
The shift was gradual. Over thirty-five years ago, John Bennett (1979) predicted that discretionary use would lead
to more emphasis on usability. The 1980 book Human Interaction with Computers, edited by Harold Smith and
Thomas Green, stood on the cusp. A chapter by Jens Rasmussen, “The Human as a Systems Component,” covered
the nondiscretionary perspective. One-third of the book covered research on programming. The remainder addressed
“non-specialist people,” discretionary users who are not computer-savvy. Smith and Green wrote, “It’s not enough just
to establish what computer systems can and cannot do; we need to spend just as much effort establishing what peo-
ple can and want to do” (italics in the original).
A decade later, Liam Bannon (1991) noted broader implications of a shift “from human factors to human actors.”
The trajectory is not always toward choice. Discretion can be curtailed—for example, word processor and email use
became job requirements, not an alternative to using typewriters and phones. You can still talk with an airline reser-
vation clerk, but there may be a surcharge if you do. Increased competition, customization, and specialization lead to
more choices, but how discretion is exercised varies over time and across contexts. It is only one factor in understand-
ing HCI history, but analysis of its role casts light on how efforts in different HCI disciplines differed and why they did
not interact more over the years.

Minicomputers and Office Automation

Cabinet-sized mini-computers that could support several people simultaneously had arrived in the mid-1960s. By the
late 1970s, super-minis such as the VAX 11/780 supported integrated suites of productivity tools. In 1980, the leading
minicomputer companies, Digital Equipment Corporation, Data General, and Wang Laboratories, were a growing
presence outside Boston.
A minicomputer could handle personal productivity tools or a database of moderate size. Users sat at terminals. A
terminal could be 'dumb,' passing each keystroke to the central processor, or it could contain a processor that sup-
ported a user entering a screenful of data that was then sent on command as a batch to a nearby central processor.
Minis provided a small group (or 'office') with file sharing, word processing, spreadsheets, and email, and managed
output devices. They were marketed as 'office systems,' 'office automation systems,' or 'office information systems.'
The 1980 Stanford International Symposium on Office Automation (Landau, Bair & Siegman, 1982) had two pa-
pers by Douglas Engelbart. It marked the emergence of a research field that remained influential for a decade, then
faded away. Also in 1980, ACM formed the Special Interest Group on Office Automation (SIGOA), and AFIPS (Amer-
ican Federation of Information Processing Societies, the parent organization of ACM and IEEE at the time) held the
first of seven annual Office Automation conferences and product exhibitions. In 1982, SIGOA initiated the biennial
Conference on Office Information Systems (COIS) and the first issue of the journal Office: Technology and People
appeared, followed in 1983 by ACM Transactions on Office Information Systems (TOOIS).
You might be wondering, "what is all this with offices?" Minicomputers brought the cost of a computer down from an enterprise-level decision to something within the budget of a small work group: an office. (The attentive reader will anticipate: The
personal computer era was approaching! It was next!) Office Information Systems, focused on minicomputer use,
was positioned alongside Management Information Systems, which focused on mainframes. The research scope was
reflected in the charter of TOOIS: database theory, artificial intelligence, behavioral studies, organizational theory,
and communications. Database researchers could afford to buy minis. Digital’s PDP series was a favorite of AI re-
searchers until LISP machines arrived. Minis were familiar to behavioral researchers who used them to run and ana-
lyze psychology experiments. Computer-mediated communication (CMC) was an intriguing new capability: Networks
were rare, but people at different terminals, often in different rooms, could exchange email or chat in real time. Minis
were the interactive computers of choice for many organizations. Digital became the second largest computer com-
pany in the world. Dr. An Wang, founder of Wang Labs, became the fourth wealthiest American.
Researchers were discretionary users, but few office workers chose their tools. The term 'automation' (in 'office
automation') was challenging and exciting to researchers, but it conjured up less pleasant images for office workers.
And some researchers also preferred Engelbart’s focus on augmentation rather than automation.
Papers in the SIGOA newsletter, COIS, and TOOIS included database theory and technology, a modest stream of
AI papers (the AI winter of the mid-1970s had not yet ended), decision support and CMC papers from the Information
Systems community, and behavioral studies by researchers who later joined CHI. The newsletter published complete
IS papers. TOOIS favored more technical work and was also a major outlet for behavioral studies until the journal
Human-Computer Interaction started in 1985.
Although OA/OIS research was eventually absorbed by other fields, it identified important emerging topics, includ-
ing object-oriented databases, hypertext, computer-mediated communication, and collaboration support. OIS research conceptually reflected aspects of the technical side of information science, notably information retrieval and lan-
guage processing.

Figure 1. Four fields with major HCI research threads. Acronym expansions are in the text.

The Formation of ACM SIGCHI

Figure 1 identifies research fields with a direct bearing on HCI. Human Factors and Information Systems have distinct
subgroups that focus on digital technology use. The relevant Computer Science research is concentrated in CHI, the
interest group primarily concerned with discretionary hands-on computer use. Other CS influences—computer
graphics, artificial intelligence, and office systems—are discussed in the text but are not broken out in Figure 1. The
fourth field, Information, began as support for specialists but has broadened, especially since 2000.
In the 1970s, decreasing microcomputer prices attracted discretionary hobbyists. In 1980, as IBM prepared to
launch its PC, a groundswell of attention to computer user behavior was building. IBM had decided to make software
a product focus. Four years earlier, Bill Gates had made a novel argument that software had commercial value when
he protested the unauthorized copying of his Altair BASIC interpreter. Until then, computer companies generally bun-
dled all available software with hardware. Hardware had been so expensive and profitable that companies could afford to develop and distribute software, in some cases without bothering to track its cost.
Experimental psychologists began using minicomputers to run and analyze experiments. Cognitive psychologists
naturally introspected on their experiences programming and using computers, and computer company interest coin-
cided with a weakening academic job market. One of several psychologists forming an IBM group was John Gould,
who had published human factors research since the late 1960s. The group initiated empirical studies of program-
ming and studies of software design and use. Other psychologists who led recently formed HCI groups in 1980 in-
cluded Phil Barnard at the Medical Research Council Applied Psychology Unit in Cambridge, England (partly funded
by IBM and British Telecom); Tom Landauer at Bell Laboratories; Donald Norman at the University of California, San
Diego; and John Whiteside at Digital Equipment Corp.
From one perspective, CHI was formed by psychologists who saw an opportunity to shape a better future. From
another, it was formed by managers in computer and telecommunications companies who saw that digital technology
would soon be in the hands of millions of technically unsophisticated users with unknown interaction needs. Inven-
tion?—Or incremental improvements based on empirical observations? Competing views of CHI's mission were pre-
sent from the outset.
The influence of Xerox PARC and its CMU collaborators is described in this section and the next. The 1981 Xerox
Star with its carefully designed graphical user interface was not a commercial success, nor were a flurry of GUIs that
followed, including the Apple Lisa. However, the Star influenced researchers and developers—and the designers of

18
the Macintosh.
Communications of the ACM established a “Human Aspects of Computing” focus in 1980, edited by Whiteside's
mentor Henry Ledgard. The next year, Tom Moran edited a special issue of Computing Surveys on “The Psychology
of the Computer User.” Also in 1981, the ACM Special Interest Group on Social and Behavioral Science Computing
(SIGSOC) extended its workshop to cover interactive software design and use. A 1982 conference in Gaithersburg,
Maryland on “Human Factors in Computing Systems” was unexpectedly well-attended. SIGSOC immediately shifted
its focus to Computer-Human Interaction and changed its name to SIGCHI (Borman, 1996).
In 1983, the first CHI conference attracted more than 1000 people. Half of the 58 papers were from the seven or-
ganizations listed above. Cognitive psychologists working in industry dominated the program, but the Human Factors
Society co-sponsored the conference and was represented by program chair Richard Pew, committee members Sid
Smith, H. Rudy Ramsay, and Paul Green, and several paper presenters. Brian Shackel and HFS president Robert
Williges gave tutorials the first day. The alliance continued in 1984, when both HF&E and CHI researchers attended
the first International Conference on Human-Computer Interaction (INTERACT) in London, chaired by Shackel.
The first profession to adopt discretionary hands-on use was computer programming. Paper coding sheets were
discarded in favor of text editing at interactive terminals, PCs, and small minicomputers. Early CHI papers by Ruven
Brooks, Bill Curtis, Thomas Green, Ben Shneiderman, and others continued the psychology-of-programming re-
search thread. Shneiderman formed the influential HCI Laboratory (HCIL) at Maryland in 1983. IBM's Watson Re-
search Center contributed, as noted by John Thomas (personal communication, October 2003):
One of the main themes of the early work was basically that we in IBM were afraid that the market for
computing would be limited by the number of people who could program complex systems, so we wanted
to find ways for “nonprogrammers” to be able, essentially, to program.
The prevalence of experimental psychologists studying text editing was captured by Thomas Green at INTER-
ACT'84 when he observed, “Text editors are the white rats of HCI.” As personal computing spread, experimental
methods were applied to examine other discretionary use contexts. Studies of programming gradually disappeared
from HCI conferences.
Attention centered on so-called 'novice use.' The initial experience matters to people who can choose whether or not to use a system, so it was important to vendors developing software for them. Novice users were also a natural focus for a brand-new technology: each year, more people took up computer use than the year before.
Of course, routinized expert use remained widespread. Databases were used by airlines, banks, government
agencies, and other organizations. In these settings, hands-on activity was rarely discretionary. Managers oversaw
software development and data analysis, leaving data entry and information retrieval to people hired for those jobs.
To improve skilled performance required a human factors approach. CHI studies of database use were few—I count
three over the first decade, all focused on novice or casual use.
CHI's emphasis differed from the HCI arising at the same time in Europe. Few European companies produced
mass-market software. European HCI research focused on in-house development and use, as reflected in articles
published in Behaviour & Information Technology, a journal launched in 1982 by Tom Stewart in London. In the per-
ceptive essay cited earlier, Liam Bannon urged that more attention be paid to discretionary use, but he also criticized
CHI’s emphasis on initial experiences, reflecting a European perspective. At Loughborough University, HUSAT fo-
cused on job design (the division of labor between people and systems) and collaborated with the Institute for Con-
sumer Ergonomics, particularly on product safety. In 1984, Loughborough initiated an HCI graduate program drawing
on human factors, industrial engineering, and computer science.
Many CHI researchers had not read the early visionaries, even as they labored to realize some of the visions.
Many of the 633 references in the 58 CHI'83 papers were to the work of well-known cognitive scientists. Vannevar
Bush, Ivan Sutherland, and Doug Engelbart were not cited. A few years later, when computer scientists and engi-
neers (primarily from computer graphics) joined CHI, the psychologists learned about the pioneers who shared their
interest in discretionary use. By appropriating this history, CHI acquired a sense of continuity that bestowed legitima-
cy on a young enterprise seeking to establish itself academically and professionally.

CHI and Human Factors Diverge

Hard science, in the form of engineering, drives out soft science, in the form of human factors. — Newell and Card (1985)

Card, Moran, and Newell (1980a,b) introduced a “keystroke-level model for user performance time with interactive
systems.” This was followed by the cognitive model GOMS—goals, operators, methods, and selection rules—in their
landmark 1983 book, The Psychology of Human–Computer Interaction. Although highly respected by CHI cognitive
psychologists, these models did not address discretionary, novice use. They focused on the repetitive expert use
studied in Human Factors. GOMS was explicitly positioned to counter the stimulus-response bias of human factors
research:
Human–factors specialists, ergonomists, and human engineers will find that we have synthesized ideas from
modern cognitive psychology and artificial intelligence with the old methods of task analysis… The user is
not an operator. He does not operate the computer, he communicates with it…

Newell and Card (1985) noted that human factors had a role in design, but continued:
Classical human factors… has all the earmarks of second-class status. (Our approach) avoids continuation
of the classical human–factors role (by transforming) the psychology of the interface into a hard science.
In 2004, Card noted in an email discussion:
Human Factors was the discipline we were trying to improve… I personally changed the call [for CHI'86 par-
ticipation], so as to emphasize computer science and reduce the emphasis on cognitive science, because I
was afraid that it would just become human factors again.
Human performance modeling drew a modest but fervent CHI following, whose goals differed from those of most
other researchers and many practitioners as well. “The central idea behind the model is that the time for an expert to
do a task on an interactive system is determined by the time it takes to do the keystrokes,” wrote Card, Moran &
Newell (1980b). It did not speak to novice experience. Although subsequently extended to a range of cognitive pro-
cesses, the modeling was most useful when designing for nondiscretionary users, such as telephone operators en-
gaged in repetitive tasks (e.g., Gray et al., 1990). Its role in augmenting human intellect was not evident.
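The keystroke-level model's core claim can be conveyed with a small calculation. The sketch below is only illustrative: the operator names follow the published model, the timing constants are commonly cited approximate values, and the encoded task is a hypothetical example rather than one taken from the original papers.

```python
# Minimal keystroke-level model (KLM) sketch. Operator times are the
# commonly cited approximations; the encoded task is hypothetical.
OPERATOR_TIMES = {          # seconds
    "K": 0.28,              # press a key or button (average skilled typist)
    "P": 1.10,              # point to a target with a mouse
    "H": 0.40,              # home hands between keyboard and mouse
    "M": 1.35,              # mental preparation
}

def klm_estimate(encoding: str) -> float:
    """Sum operator times for a task written as a string of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in encoding)

# Hypothetical task: point to a word, click it, then type a five-letter
# replacement: about 6.3 seconds with these operator values.
task = "MHPK" + "HM" + "KKKKK"
print(f"Estimated expert completion time: {klm_estimate(task):.2f} s")
```

Predictions of this kind could be tested against observed expert performance, which is part of what made the approach attractive as 'hard science.'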
“Human Factors in Computing Systems” remains the CHI conference subtitle, but CHI and Human Factors moved
apart without ever being highly integrated. Most of the cognitive psychologists had turned to HCI after earning their
degrees and were unfamiliar with the human factors literature. The Human Factors Society did not cosponsor CHI
after 1985 and their researchers disappeared from the CHI program committee. Most CHI researchers who had pub-
lished in the human factors literature shifted to the CHI conference, Communications of the ACM, and the journal
Human–Computer Interaction launched in 1985 by Thomas Moran and published by Erlbaum, a publisher of psychol-
ogy books and journals.
The shift was reflected at IBM. John Gould trained in human factors, Clayton Lewis in cognitive psychology. To-
gether they authored a CHI'83 paper that best captured the CHI focus on user-centered, iterative design based on
building and testing prototypes. Gould became president of the Human Factors Society four years later, as cognitive
scientists at IBM helped shape CHI. In 1984 the Human Factors Group at IBM Watson faded away and a User Inter-
face Institute emerged.
CHI researchers and developers, wanting to identify with 'hard' science and engineering, adopted the terms 'cogni-
tive engineering' and 'usability engineering.' In the first paper presented at CHI'83, “Design Principles for Human–
Computer Interfaces,” Donald Norman applied engineering techniques to discretionary use, creating ‘user satisfaction
functions’ based on technical parameters. These functions did not hold up long—people are fickle, yesterday's satis-
fying technology is not as gratifying today—but it would be years before CHI loosened its identification with engineer-
ing enough to welcome disciplines such as ethnography and design.

Workstations and another AI Summer

High-end workstations from Apollo, Sun, and Silicon Graphics arrived between 1981 and 1984. Graphics researchers
no longer had to flock to heavily-financed laboratories, as they had to MIT and Utah in the 1960s; MIT, NYIT, and
PARC in the 1970s. Workstations were priced beyond the reach of the mass market, so graphics work on photoreal-
ism and animation, which required the processing power of these machines, did not directly influence HCI.
The Xerox Star (formally named Office Workstation), Apple Lisa, and other commercial GUIs appeared, but at the
time of the first CHI conference in December, 1983, none were commercial successes. They cost too much or ran on
processors that were too weak to exploit interactive graphics effectively.
In 1981, Symbolics and LMI introduced workstations that were optimized for the LISP programming language fa-
vored by AI researchers. The timing was fortuitous. In October of that year, a conference called Next Generation
Technology was held in the National Chamber of Commerce auditorium in Tokyo. In 1982, the Japanese government
established the Institute for New Generation Computer Technology (ICOT) and launched a ten-year Fifth Generation
project. AI researchers in Europe and the United States sounded the alarm. Donald Michie of Edinburgh called it a
threat to Western computer technology. In 1983, Raj Reddy of CMU and Victor Zue of MIT criticized ARPA's mid-
1970s abandonment of speech processing research and Stanford's Ed Feigenbaum and Pamela McCorduck wrote:
The Japanese are planning the miracle product… They’re going to give the world the next generation—the
Fifth Generation—of computers, and those machines are going to be intelligent… We stand, however, be-
fore a singularity, an event so unprecedented that predictions are almost silly… Who can say how univer-
sal access to machine–intelligence—faster, deeper, better than human intelligence—will change science,
economics, and warfare, and the whole intellectual and sociological development of mankind?
Parallel distributed processing (also called 'neural network') models seized the attention of researchers and the
media. A conceptual advance over early AI work on perceptrons, neural networks were used to model signal detec-
tion, motor control, semantic processing, and other phenomena. Minicomputers and workstations were powerful
enough to support simulation experiments. Another computer-intensive AI modeling approach with a psychological
foundation, called production systems, was developed at CMU.
An artificial intelligence gold rush ensued. As with actual gold rushes, most of the money was made by those who
outfitted and provisioned the prospectors, with generous government funding again flowing to researchers. The European ESPRIT and UK Alvey programs invested over US$200M per year starting in 1984 (Oakley, 1990). In the Unit-
ed States, funding for the DARPA Strategic Computing AI program began in 1983 and rose to almost $400M for 1988
(Norberg & O’Neill, 1996). AI investments by 150 U.S. corporations were estimated to total $2B in 1985, with 400
academics and 800 corporate employees working on natural language processing alone (Kao, 1998; Johnson, 1985).
These estimates did not include substantial classified work in intelligence and military agencies.
The unfulfilled promises of the past led to changes this time around. General problem-solving was emphasized
less, domain-specific problem-solving was emphasized more. The term AI was used less; intelligent knowledge-
based systems, knowledge engineering, expert systems, machine learning, language understanding, image under-
standing, neural networks, and robotics were used more.
U.S. antitrust laws were relaxed in 1984 to permit 20 U.S. technology companies to form a consortium, the Microe-
lectronics and Computer Technology Corporation (MCC). MCC was an explicit response to the Japanese Fifth Gen-
eration project. It also embraced AI, reportedly becoming the leading customer of both Symbolics and LMI. Within its
broad scope were a large natural language understanding effort and work on intelligent advising. The centerpiece
was Doug Lenat's CYC (as in “encyclopedic”), an ambitious effort to build a common-sense knowledge base that
other programs could exploit. Lenat predicted in 1984 that by 1994 CYC would be intelligent enough to educate itself.
In 1989, he reported that CYC would soon “spark a vastly greater renaissance in [machine learning].” This did not
materialize, but a quarter century later renewed optimism flourishes.
A mismatch between the billions of dollars invested annually in speech and language processing and the revenue
produced was documented in comprehensive Ovum reports (Johnson, 1985; Engelien & McBride, 1991). In 1985,
"revenue" (mostly income from grants and investor capital, not sales) reached US$75 million. Commercial natural
language understanding interfaces to databases were developed and marketed: In 1983, Artificial Intelligence Corpo-
ration's Intellect became the first third-party application marketed by IBM for their mainframes; in 1984, Clout was the
first AI product for microcomputers.
With no consumer market materializing, the few sales were to government agencies and large companies where
use was by trained experts, not discretionary. This shaped the AI approach to HCI. AI systems developers focused
on "knowledge engineering": representing and reasoning about knowledge obtained from experts. European funding
directives explicitly dictated that work cover both technology and behavior. Few AI researchers and developers on either continent were interested in interaction details. They loved powerful tools such as EMACS and UNIX and were not bothered by the painful weeks required to master badly designed command languages. The difficulty of eliciting
knowledge from experts led frustrated AI researchers to collaborate with HF&E, which shared the focus on non-
discretionary use. The journal IJMMS became a major outlet for both HF&E and AI research in the 1980s, but not for CHI
researchers. Early CHI conferences had a few papers on speech and language, cognitive modeling, knowledge-
based help, and knowledge elicitation, but AI was not a significant focus.
Hope springs eternal. Ovum concluded its 1985 review by predicting a 1000% increase in speech and language
revenue in 5 years: $750 million by 1990 and $2.75 billion by 1995. Its 1991 review reported that revenue had in-
creased by less than 20%, not reaching $90 million. Undaunted, Ovum forecast a 500% increase to $490 million by
1995 and $3.6 billion by 2000. But soon another winter set in. AI Corporation and MCC disappeared.

1985–1995: GRAPHICAL USER INTERFACES SUCCEED

“There will never be a mouse at the Ford Motor Company.”


— High-level acquisition manager, 1985

When graphical user interfaces finally succeeded commercially, they were a disruptive revolution that transformed
HCI. As with previous major shifts—to stored programs, and to interaction based on commands, full-screen forms
and full-screen menus—some people were affected before others. GUIs were especially attractive to consumers.
They appealed to new users and "casual users," people who were not chained to a computer but used programs oc-
casionally. GUI success in late 1985 immediately transformed CHI, but only after Windows 3.0 succeeded in 1990 did
GUIs influence the government agencies and business organizations that were the focus of HF&E and IS research.
By then, the technology was better understood. The early 1990s also saw the maturation of local area networking and
the Internet, which produced a second transformation: computer-mediated communication and information sharing.

CHI Embraces Computer Science

Apple launched the Macintosh with a 1984 Super Bowl ad describing office work, but sales did not follow. By mid-
1985, Apple was in trouble. Steve Jobs was forced out. And then, months later, the "Fat Mac" was released. With four
times as much random-access memory (RAM), it could run Aldus PageMaker, Adobe Postscript, the Apple La-
serWriter, and Microsoft’s Excel and Word for Macintosh as they were released. The more powerful Mac Plus arrived in January, 1986. Rescued by hardware and software advances, the Mac succeeded where the commercial GUIs
that preceded it had not. It was popular with consumers and became the platform for desktop publishing.
Within CHI, GUIs were initially controversial. They had disadvantages. An extra level of interface code increased
development complexity and decreased reliability. They consumed processor cycles and distanced users from the
underlying system that, many believed, experienced users would eventually want to master. Carroll and Mazur (1986)
showed that GUIs confused and created problems for people familiar with existing interfaces. An influential essay on
direct manipulation interfaces, Hutchins, Hollan, and Norman (1986), concluded that “It is too early to tell” how GUIs
would fare. GUIs might well prove useful for novices, they wrote, but “we would not be surprised if experts are slower
with Direct Manipulation systems than with command language systems" (italics in original). As noted above, most
prior HCI research had focused on expert use, but first-time use was critical in the rapidly-expanding consumer mar-
ket, and GUIs were here to stay. CHI was quickly on board, contributing to hardware and software improvements to ad-
dress GUI limitations and abandoning command names, text editing, and the psychology of programming.
As topics such as 'user interface management systems' became prominent, psychology gave way to computer
science as the driving force in interaction design. Early researchers had worked one formal experiment at a time to-
ward a comprehensive psychological framework (Newell & Card, 1985; Carroll & Campbell, 1986; Long, 1989; Bar-
nard, 1991). Such a theoretical framework was conceivable when interaction was limited to commands and forms, but
it could not be scaled to design spaces that included color, sound, animation, and an endless variety of icons, menu
designs, window arrangements, and input devices. The new mission: To identify the most pressing problems and find
satisfactory rather than optimal solutions. Rigorous experimentation, a skill of cognitive psychologists, gave way to
quicker, less precise assessment methods (Nielsen, 1989; Nielsen & Molich, 1990).
To explore this dynamically evolving design space required software engineering expertise. In the late 1980s, the
CHI community enjoyed an influx of computer scientists focused on interactive graphics, software engineers interest-
ed in interaction, and a few AI researchers working on speech recognition, language understanding, and expert sys-
tems. Many CS departments added HCI to their curricula. In 1994, ACM launched Transactions on Computer-Human
Interaction (TOCHI). Early PCs and Macs were not easily networked, but as local area networks spread, CHI’s focus
expanded to include collaboration support, bringing CHI into contact with MIS and OIS research, discussed below.

HF&E Maintains a Nondiscretionary Use Focus

After SIGCHI formed, the Human Factors Society undertook a study to determine the effect on membership in its
Computer Systems Technical Group. It found an unexpectedly small overlap (Richard Pew, personal communication,
September 2004). The two organizations had different customers and methods, directly linked to the distinctions be-
tween discretionary and non-discretionary use. Human Factors and Ergonomics addressed military, aviation, and
telecommunications industries, and government use in general, for census, tax, social security, health and welfare,
power plant operation, air traffic control, space missions, military logistics, intelligence analysis, and so on. Govern-
ment remained the largest customer of computing. Technology was assigned and training was provided. For repeti-
tive tasks such as data entry, a very small efficiency gain in an individual transaction can yield huge benefits over
time. This justified rigorous experimental human factors studies to make improvements that would go unnoticed by
consumers and were of no interest to the fast-moving commercial software developers employing CHI researchers.
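A rough calculation illustrates the point; the numbers below are hypothetical, chosen only to show the scale at which tiny per-transaction savings accumulate across a large, mandated-use workforce.

```python
# Back-of-the-envelope sketch with hypothetical numbers: a half-second saved
# per transaction, multiplied across operators, transactions, and working days.
seconds_saved_per_transaction = 0.5
transactions_per_operator_per_day = 800
operators = 2_000
working_days_per_year = 230

hours_saved = (seconds_saved_per_transaction
               * transactions_per_operator_per_day
               * operators
               * working_days_per_year) / 3600
print(f"Roughly {hours_saved:,.0f} operator-hours saved per year")
# With these assumptions: about 51,000 hours, i.e. roughly 25 staff-years.
```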
Government agencies also promoted the development of ergonomic standards. A standard enables a government
agency to define system requirements for competitive contracts while remaining at arms’ length from potential bid-
ders. Standards were regarded warily by competitive commercial software developers, who feared they would con-
strain innovation. In 1986, human factors researchers Sid Smith and Jane Mosier published the last of a series of
government-sponsored interface guidelines, 944 in all, organized into sections titled Data Entry, Data Display, Data
Transmission, Data Protection, Sequence Control, and User Guidance. Smith and Mosier saw the implications of the
Macintosh's very recent success: GUIs would expand the design space beyond the reach of an already cumbersome
document, one that omitted icons, pull-down and pop-up menus, mouse button assignments, sound, and animation.
They foresaw that future contracts would specify predefined interface styles and design processes, rather than specif-
ic interface features.
In the meantime, there was a lot of business for human factors engineers. A keyboard optimized for data entry
was a reasonable goal, but much effort went into doomed projects. DARPA’s heavily-funded Strategic Computing AI program
set out to develop an Autonomous Land Vehicle, a Pilot’s Associate, and a Battle Management system, to include
interactive technologies such as speech recognition, language understanding, and heads-up displays for pilots, driv-
ers of autonomous vehicles, and officers under stressful conditions. Speech and language recognition might assist
translators and intelligence analysts, and people trapped by a phone answering system or in circumstances that limit
keyboard use. Unfortunately, these technologies remained as elusive in the 1980s and 1990s as they had been when Licklider funded them at DARPA in the 1960s and 1970s.

IS Extends Its Range

Although GUIs were not quickly adopted by organizations, spreadsheets and business graphics (charts and tables)
were important to managers. They became foci of Information Systems research. Remus (1984) contrasted tabular
and graphic presentations. Benbasat and Dexter (1985) added color as a factor, although color displays were rare in
the 1980s. Many studies contrasted on-line and paper presentation, because most managers worked with printed
reports. Although research into individual cognitive styles had been abandoned following a devastating critique (Hu-
ber, 1983), the concept of cognitive fit between task and tool was introduced to explain apparently contradictory re-
sults in the technology adoption literature (Vessey and Galletta, 1991).
In 1986 Jane Carey initiated a series of symposia and books on Human Factors in Information Systems (e.g.,
Carey, 1988). Topics included design, development, and user interaction with information. As corporate adoption of
minicomputers and intranets matured, studies of email and other collaboration support appeared. The focus shifted
away from maximizing computer use—screen savers had become the main consumer of processor cycles.
Hands-on managerial use remained atypical, but group decision support system research sought to change that.
GDSSs emerged from work on decision support systems designed for an executive or a manager. Central to GDSS
was technology for meetings, including brainstorming, idea organization, and online voting features. The standalone
systems were initially too expensive for widespread group support; hence the focus on 'decision-makers' and re-
search in schools of management, not computer science departments or software companies. Nunamaker et al.
(1997) summarizes the extensive research conducted over a few decades.
As computing costs dropped and client-server local area networks spread in the mid-1980s, more laboratories built
meeting facilities to conduct research (Begeman et al., 1986; DeSanctis and Gallupe, 1987; Dennis et al., 1988).
GDSSs morphed into 'group support systems' that included support for non-managerial workers, catching the atten-
tion of CHI researchers and becoming a major IS contribution to Computer Supported Cooperative Work (CSCW),
discussed in the next section. In 1990, three GDSSs were marketed, including a University of Arizona spin-off and an
IBM version that licensed the same technology. The systems did well in laboratory studies but were generally not
liked by executives and managers, who felt that their control of meetings was undermined by the technology (Dennis
& Reinicke, 2004). Executives and managers are discretionary users whose involvement was critical; the products
were unsuccessful.
With the notable exception of European sociotechnical and participatory design movements discussed below, non-
managerial end user involvement in design and development was rare, as documented in Friedman's (1989) com-
prehensive survey and analysis. Then Davis (1989), exposed to CHI usability research, introduced the influential
Technology Acceptance Model (TAM). A managerial view of individual behavior, TAM and its offspring focused on
perceived usefulness and perceived usability to improve “white collar performance” that is “often obstructed by users’
unwillingness to accept and use available systems.” “An element of uncertainty exists in the minds of decision makers
with respect to the successful adoption,” wrote Bagozzi, Davis, and Warshaw (1992).
TAM is the most cited HCI work that I have found in the IS literature. Its emphasis on the perception of usability
and utility is a key distinction. CHI assumed utility—consumers choose technologies that they believe will be useful
and abandon any that are not—and demonstrated usability improvements. TAM researchers could not assume that an
acquired system was useful. They observed that perceptions of usability influenced acceptance and improved the
chance it would become useful. CHI addressed usability a decade before TAM, albeit actual usability rather than per-
ceived usability. In CHI, perception was a secondary 'user satisfaction' measure. The belief, not entirely correct, was
that measurable reductions in time, errors, questions, and training would eventually translate into positive percep-
tions. 'Acceptance,' the 'A' in TAM, is not in the CHI vocabulary: Discretionary users adopt, they do not accept.
The IS and CHI communities rarely mixed. When CHI was more than a decade old, Harvard Business Review, a
touchstone for IS researchers, published “Usability: The New Dimension of Product Design” (March, 1994). The arti-
cle did not mention CHI at all and concluded, “user-centered design is still in its infancy.”

Collaboration Support: OIS Gives Way to CSCW

Previous sections have described three research communities that by the late 1980s were engaged with support for
small-group communication and information-sharing:
(i) Office Automation / Office Information Systems;
(ii) IS group decision support;
(iii) CHI researchers and developers looking past individual productivity tools for 'killer apps' to support emerging
workgroups connected by local area networks.
OA/OIS had led the way, but it was fading as the minicomputer platform succumbed to competition from PCs and
workstations. The concept of 'office' was proving to be problematic; even 'group' was elusive: Organizations and indi-
viduals are persistent entities with long-term goals and needs, but small groups frequently have ambiguous member-
ship and shift in character when even a single member comes or goes. In addition, people in an organization who
need to communicate are often in different groups and fall under different budgets, which complicated technology
acquisition decisions at a time when applications were rarely made available to everyone in an organization.
The shift was reflected in terminology use. First, ‘automation’ fell out of favor. In 1986, ACM SIGOA shifted to SI-
GOIS and the annual AFIPS OA conference was discontinued. In 1991, ‘office’ followed: Transactions on Office In-
formation Systems became Transactions on Information Systems; Office: Information and People became Infor-
mation Technology and People; and ACM's Conference on Office Information Systems became Conference on Or-
ganizational Communication Systems (COOCS, which in 1997 became GROUP).
The AI summer of the 1980s, a contributor to OA/OIS, ended when AI again failed to meet expectations: Extensive
funding did not deliver a Pilot’s Associate, an Autonomous Land Vehicle, or a Battle Management System. Nor were
many offices automated by 1995. CHI conference sessions on language processing had diminished earlier, but ses-
sions on modeling, adaptive interfaces, advising systems, and other uses of intelligence increased through the 1980s
before declining in the 1990s. As funding became scarce, AI employment opportunities dried up and conference par-
ticipation dropped off.
In 1986, the banner 'Computer Supported Cooperative Work' attracted a diverse set of researchers working on communication, information sharing, and coordination. The conference built on a successful 1984 workshop (Greif, 1985); participants came primarily from IS, OIS, CHI, distributed AI, and anthropology. Four of the thirteen program committee members, and
several of the papers, were from schools of management.
CSCW coalesced in 1988 with the publication of the book Computer-Supported Cooperative Work, edited by Irene
Greif. SIGCHI launched a biennial North American CSCW conference. A European series (ECSCW) began in 1989.
With heavy participation from technology companies, North American CSCW focused on small groups of PC, work-
station, and minicomputer users. Most were within an organization, some were linked by ARPANET, BITNET, or oth-
er networks. European participation was primarily from academia and government agencies, focused instead on or-
ganizational use of technology, and differed methodologically from North American IS research. Scandinavian ap-
proaches, described in the next section, were central to ECSCW and presented at CSCW.
Much as human factors researchers left CHI after a few years, most IS researchers left CSCW in the early 1990s.
Most IS submissions to CSCW were packed with acronyms and terminology unfamiliar to the dominant SIGCHI re-
viewers, and were rejected. The IS approach to studying teams was shifting to organizational behavior from
social psychology, which continued to be favored by CSCW researchers. The organizational focus conflicted with the
interest of influential computer and software companies in context-independent small-group support, the latter being a
realm for social psychology. The Hawaii International Conference on System Sciences (HICSS) became a major IS
pre-journal publication venue for group support research. Some IS researchers participated in COOCS and a Group-
ware conference initiated in 1992. Due primarily to the paper rejections, the split was not amicable; the IS newsletter
Groupware Report did not include CSCW on its list of relevant conferences.
The pace of change created challenges. In 1985, to support a small team was a major technical accomplishment
and in the early 1990s, an application that provided awareness of the activity of distant collaborators was a celebrat-
ed achievement. By 1995, the Web had arrived and too much awareness was on the horizon: privacy concerns and
information overload. Phenomena were no sooner identified than they vanished in new studies: a 'productivity para-
dox' in which IT investments were not returning benefits came and went; adverse health effects of Internet use by
young people disappeared a few years after being widely reported. European and North American CSCW gradually
came into greater alignment as European organizations began acquiring the commercial software products studied
and built in North America, and North Americans discovered that organizational context, the ECSCW focus, was often
decisive when deploying tools to support teams. Organizational behaviorists and social theorists remained in their
home disciplines, but ethnographers, who by studying technology use were marginalized in traditional anthropology
departments, were welcomed by both CSCW and ECSCW.
Despite the challenges of building on sand that was swept by successive waves of technology innovation, CSCW
continued to attract a broad swath of HCI researchers. Content ranged from highly technical to thick ethnographies of
workplace activity, from studies of instant messaging dyads to scientific collaboratories involving hundreds of people
dispersed in space and time. A handbook chapter on Collaboration Technologies (Olson & Olson, 2011) covers the
technical side in depth.

Participatory Design and Ethnography

European efforts to involve future users in designing a system predated 1985. Typically the system was being developed within a large enterprise, and the employees would have to use it upon completion. Sociotechnical design took a managerial
perspective, with user involvement intended both to improve functioning and increase acceptance (Mumford, 1971;
1976). Another approach, participatory or cooperative design, was rooted in a Scandinavian trade union movement
and focused on empowering hands-on users (Nygaard, 1977).
These approaches influenced human factors and ergonomics (Rasmussen, 1986). This is understandable, given
that the practice of consulting workers when improving non-discretionary tasks in organizational settings dated back to Lillian Gilbreth.
A 1985 conference in Aarhus, Denmark (Bjerknes et al., 1987) had a more surprising consequence. Participatory
design critiqued IS systems development approaches for non-discretionary users, yet it resonated with CHI research-
ers who had a commercial application focus on discretionary use. Why? They shared the goal of empowering hands-
on users, and both sets of researchers were primarily baby boomers, unlike the World War II generation that still
dominated HF&E and IS.
Ethnography was another source of deep insights into potential users. Lucy Suchman managed a Xerox PARC
group that presented studies of workplace activity at CSCW. She published an influential critique of artificial intelli-
gence in 1987. In 1988 she published a widely read review of the Aarhus collection and, as program chair, brought Scandinavi-
ans to CSCW 1988.

LIS: A Transformation Is Underway

Research universities support prestigious professional schools, but the prestige of library schools declined as libraries
lost their near-monopoly on information. Between 1978 and 1995, 15 American library schools were shut down (Cro-
nin, 1995). Many survivors were rechristened Library and Information Science. The humanities orientation was giving
way. Librarianship was changed by technology, and IT staff salaries rivaled those of librarians.
Change was not smooth. The closer a discipline is to pure information, the faster Moore's law and networks disrupt
it once a tipping point is reached. Photography, music, news… and libraries. Library school exclusion of information
technology studies had once been understandable given the cost and the limitations of early systems, but when the
need arrived, there was little time to prepare. Young information scientists, eyes fixed on a future in which many past
lessons might not apply, were reluctant to absorb a century of work on indexing, classifying, and accessing complex
information repositories. Knowledge and practices that still applied would have to be adapted or rediscovered. The
conflicts are exposed in a landmark 1983 collection, The Study of Information: Interdisciplinary Messages (Machlup
and Mansfield, 1983). In it, W. Boyd Rayward outlines the humanities-oriented and the technological perspectives
and argues that they had converged. His essay is followed by commentaries attacking him from both sides.
For several years starting in 1988, deans of library & information schools at Pittsburgh, Syracuse, Drexel, and
Rutgers converged annually to share their approaches to explaining and managing multidisciplinary schools. Despite
this progressive effort, Cronin (1995) depicted LIS at loggerheads and in a “deep professional malaise.” He suggest-
ed that librarianship be cut loose and that the schools establish ties to the cognitive and computer sciences. Through
the 1990s, several schools dropped 'Library' and became schools of Information (Figure 2). More would follow.

Figure 2. University schools, colleges and faculties and when "information" came into their names (as of
2010).

1995–2010: THE INTERNET ERA ARRIVES AND SURVIVES A BUBBLE

How did the spread of the Internet and the emergence of the Web affect HCI research threads? Internet-savvy CHI
researchers were excited by the prospects and took the new technologies in stride. The Internet and Web did not
disrupt HF&E immediately, as neither was initially a locus of routine work; in fact, the Web initially revived the form-
driven interaction style that had long been a focus of human factors. However, the Web had a seismic impact on In-
formation Systems and Information Science, so this section begins with these disciplines.

The Formation of AIS SIGHCI

The focus of IT professionals and Information Systems researchers had been on the internal use of systems. The
Internet created more porous organizational boundaries. Employees downloaded instant messaging clients, music
players, web apps, and other software despite management concerns about productivity and IT worries about securi-
ty. Facebook, Twitter, and other applications and services were accessed in a web browser without a download. Ex-
perience at home increased impatience with poor software at work. Managers who had been hands-off users became
late adopters, or they were replaced by younger managers. More managers and executives became hands-on early
adopters of some tools.
Significant as these changes were, the Web had a more dramatic effect. Corporate IT groups had previously fo-
cused on internal operations: systems used by employees. Suddenly, organizations were scrambling to create Web
interfaces to customers and external vendors. Discretionary users! The bursting of the Internet bubble revealed that
many IT professionals and IS experts had not understood Web phenomena. Nevertheless, millions of the people who
had bought PCs continued to look for ways to use them. On-line sales and services, and business-to-business sys-
tems, continued to grow. As the Web became an essential business tool, IS researchers faced issues that CHI had
confronted 20 years earlier, whether they realized it or not.
Some realized it. In 2001, the Association for Information Systems (AIS) established the Special Interest Group in
Human–Computer Interaction (SIGHCI). The founders defined HCI by citing 12 CHI research papers (Zhang et al.,
2004) and declared that bridging to CHI and Information Science was a priority. The charter of SIGHCI included a
broad range of issues, but early research emphasized interface design for e-commerce, online shopping, online be-
havior “especially in the Internet era,” and the effects of Web-based interfaces on attitudes and perceptions (Zhang,
2004). SIGHCI sponsored special issues of journals; eight of the first ten papers covered Internet and Web behavior.
Eight years later, SIGHCI launched AIS Transactions on Human–Computer Interaction. Zhang et al.’s (2009) anal-
ysis of the IS literature from 1990 to 2008 documented the shift from an organizational focus to the Web and broader
end-user computing. Yet her survey omitted CHI from a list of fields related to AIS SIGHCI. The bridging effort had
foundered, as had earlier efforts to bridge to CHI from Human Factors, Office Information Systems, and IS in CSCW.
The dynamics that undermined bridging efforts are explored below in the section Looking Back.

Digital Libraries and the Rise of Information Schools

As seen in Figure 2, an information wave was traveling through many universities by 1995. Digital technology was in
LIS curricula and technology use was a prerequisite for librarianship. However, innovative research had not kept pace
with professional training (Cronin, 1995).
The Internet was growing exponentially, but Internet use was still a niche activity, found mainly in colleges and
universities. In the mid-1990s, Gopher, a system for downloading files over the Internet, attracted attention as a pos-
sible springboard for indexing digital materials. Wells’s (1938) 'world brain' seemed to be within reach. Then the Web
hit. It accelerated the transformation of information distribution. From 1994 to 1999, the research community was gal-
vanized by digital library research and development awards totaling close to US$200M, jointly sponsored by NSF,
DARPA, NASA, National Library of Medicine, Library of Congress, National Endowment for the Humanities, and the
FBI. This was unparalleled for a field still close to its roots in humanities and ambivalent about the role of technology.
In 2000, the American Society for Information Science appended 'and Technology' and became ASIST, and by then ten universities had a school or college with 'Information' as the only discipline in its name. The next year, 'deans
meetings' modeled on those of the late 1980s began. Three original participants were joined by Michigan, Berkeley,
and the University of Washington. In 2005, the first annual 'iConference' drew participants from 19 universities. By
2010, the 'iCaucus' had 28 dues-paying members with five more ready to join. Some had been library schools, others
had roots in different disciplines, and a few had formed recently as a School of Information. Their faculty included
people trained in the four HCI disciplines covered in this work.
Expansion came with growing pains. The iConference competes with established conferences in each field. Within
each school, conflicts arose among academic subcultures. A shift to a field called Information seemed underway, but
many faculty still considered themselves “a researcher in [X] who is located in an information school,” where X could
be Library Science, HCI, CSCW, IS, Communication, Education, Computer Science, or another discipline.

Human Factors & Ergonomics Embraces Cognitive Approaches

In 1996, a Cognitive Engineering and Decision Making technical group formed and quickly became the largest in
HFES. As noted earlier, one factor in the formation of CHI a decade earlier was strident opposition in human factors
to cognitive approaches. The term ‘cognitive engineering’ was used in CHI at that time (Norman, 1982; 1986).
In another surprising reversal, in 2005 Human Performance Modeling was a new and thriving HFES technical
group. Card, Moran, and Newell (1983) had introduced human performance modeling to reform the discipline of hu-
man factors from the outside. But the reform effort logically belonged within HF&E, as both focused largely on non-
discretionary expert performance. The HFES technical group was initiated by Wayne Gray and Dick Pew, who had
been active in CHI in the 1980s. The last major CHI undertaking in this area was a special issue of Human–Computer
Interaction in late 1997.
Government funding of HCI still went largely to HF&E. The Interactive Systems Program of the U.S. National Sci-
ence Foundation—subsequently renamed Human–Computer Interaction—was described thus:
The Interactive Systems Program considers scientific and engineering research oriented toward the en-
hancement of human–computer communications and interactions in all modalities. These modalities in-
clude speech/language, sound, images and, in general, any single or multiple, sequential, or concurrent,
human–computer input, output, or action. (National Science Foundation, 1993)
An NSF program manager confided that his proudest accomplishment was doubling the already ample funding for
natural language understanding. Even after NSF established a separate Human Language and Communication Pro-
gram in 2003, speech and language research was heavily supported by both the HCI and Accessibility Programs,
with additional support from AI and elsewhere. Subsequent NSF HCI program managers emphasized 'direct brain
interfaces' or 'brain–computer interaction,' which were not of interest in discretionary home and office contexts. A re-
view committee noted that a random sample of NSF HCI grants included none by prominent CHI researchers (Na-
tional Science Foundation, 2003). NSF program managers rarely attended CHI conferences, which in this period had
little on speech, language, or direct brain interaction.

A Wave of New Technologies, and CHI Embraces Design

With a steady flow of new hardware, software features, applications, and systems, people continually encountered
novel technologies. This sustained technology producers and their CHI allies, who focused primarily on innovations
that had started to attract wide audiences. At that point, a good interface could be a positive differentiator.
As an application matured and use became routine, it got less attention. When email and word processing ceased
being discretionary for most of us, CHI researchers moved to the discretionary use frontier of the moment: Web de-
sign, ubiquitous and mobile computing, social computing, Wikipedia use, and so on. New issues arose: information
overload, privacy, and the effects of multitasking. Ethnography and data mining were methods that gained currency in
this period. From a higher vantage point we see continuity in this churn: the exploration of input devices, communica-
tion channels, information visualization techniques, and design methods. At the most abstract level, aspirational pro-
posals to build HCI theory were still heard (Barnard et al., 2000; Carroll, 2003).
Internet reliability, bandwidth, and penetration increased steadily through the mid-1990s. Real-time and quasi-real-
time communication technologies such as Mbone (multicast backbone) videoconferencing appeared. If you are un-
familiar with Mbone, it is because the Web arrived and sucked the oxygen out of the room. Attention shifted to asyn-
chronous interaction with static sites. With Web 2.0 and greater support for animation and video, the pace quickened.
New real-time applications such as Skype surfaced.
The Web was like a new continent. The first explorers posted flags here and there. Then came attempts at settle-
ment. Virtual world research and development blossomed in the mid-1990s, but few pioneers survived: There was
little to do in virtual worlds other than chat and play games. This did not prevent real-estate speculation and a land
rush: the Internet bubble of the late 1990s that burst in 2001. Then, slowly, people shifted major portions of their work
and play online, coming to rely on online information sources, digital photo management, social software, digital doc-
uments, online shopping, and multiplayer games.
CSCW. The convergence of CSCW and ECSCW came apart. CSCW in North America glommed onto the bur-
geoning activity around social networking, Wikipedia, multiplayer games, and other Web phenomena. These
spawned Silicon Valley companies and interested the students and researchers that many would hire or bring in as
consultants. In contrast, the organizational and governmental consumers of European CSCW research preferred
basic research in vertical domains. The division resembled that of 20 years earlier, brought about once again by the
influx of a new generation of technology.
AI. The Web curtailed one branch of AI research: efforts to build powerful, self-contained productivity tools. The ef-
fort to embed deep knowledge in application software could be justified when access to external information sources
was limited, but not when reaching information and knowledgeable people is easy. In contrast, adaptive systems that
filter and merge local and Internet-based information became more appealing with the Web. Machine learning slowly
enhanced some productivity tools—and forecasts of an ultra-intelligent Singularity still bloomed.
Design. To the psychologists and computer scientists of the early CHI community, interface design was a matter of
science and engineering. They focused on performance and assumed that people would eventually choose efficient
alternatives. 'Entertainment' was left to SIGGRAPH. But humans have valued aesthetics since at least as far back as
the French cave paintings, no doubt accompanied by marketing and non-rational persuasion. As computing costs
dropped, the engineering orientation weakened. CHI researchers eventually came around. The study of enjoyment
was labeled “funology” lest someone think we were having too good a time (Blythe et al., 2003).
Some visual designers participated in graphical interface research very early. Aaron Marcus worked full time on
computer graphics in the late 1960s. William Bowman’s 1968 book Graphic Communication influenced the design of
the Xerox Star, which used icons designed by Norm Cox (Bewley et al., 1983). However, graphic design was consid-
ered a secondary activity (Evenson, 2005). Even a decade later, the cost of memory could severely constrain design-
ers, as documented in Moody's (1995) ethnography.
In 1995, building on workshops at previous conferences, SIGCHI initiated the Designing Interactive Systems (DIS)
conference. DIS aspired to be broader, but drew more system designers than visual designers. In 2003, SIGCHI,
SIGGRAPH, and the American Institute of Graphic Arts (AIGA) initiated the Designing for User Experience (DUX)
conference series. DUX fully embraced visual and commercial design. It lasted only through 2007, but established
the significance of Design, which had not typically been assessed in research papers. The changing sensibility is
reflected in ACM Interactions, a magazine launched by CHI in 1994 that steadily increased its focus on visual design,
both in content and appearance.
Marketing. Design’s first cousin was poorly regarded for years by the CHI community (Marcus, 2004), but Web
sites introduced complications that forced a broadening of perspective on what contributes to user experiences. Con-
sumers may want to quickly conduct business on a site and get off, whereas site owners want to trap users on the
site to view ads or make ancillary purchases. An analogy is supermarkets: items that most shoppers want are posi-
tioned far apart, forcing shoppers to traverse aisles where other products beckon. CHI professionals who long identi-
fied with 'end users' faced a conflict when designing for web site owners. It was different in the past: Designers of
personal productivity tools felt fully aligned with prospective customers.
The evolution of CHI through 2010 is reflected in the influential contributions of Donald Norman. He was a cogni-
tive scientist who introduced the term cognitive engineering. He presented the first CHI 83 paper, which defined 'user
satisfaction functions' based on speed of use, ease of learning, required knowledge, and errors. His influential 1988
book Psychology of Everyday Things (POET) focused on pragmatic usability. Its 1990 reissue as Design of Everyday
Things reflected a field refocusing on invention. Fourteen years later he published Emotional Design: Why We Love
(or Hate) Everyday Things, stressing the role of aesthetics in our response to objects.

The Dot-Com Collapse

The foundation for the World Wide Web was laid around 1990, but the spread of the Mosaic browser in 1994 and the
HTTP communication protocol in 1996 led to accelerating growth and intense speculation in online businesses. It is
relevant that commercial activity had been prohibited on the ARPANET and subsequently NSFNET, which in 1995
became the major backbone of the Internet. The status of commercial activity became hazy in late 1992, but the point
is that with little history of commercial use, the Web was a frontier into which speculators poured like homesteaders in
the Oklahoma Land Rush a century earlier. Some homesteaders knew how to farm, but few people knew how to
make money on the Web. Optimism created a massive stock market bubble that peaked in March, 2000. As Internet
companies went out of business, the NASDAQ stock index dropped from over 5000 to just above 1000.
The bubble and collapse particularly affected consumer-oriented CHI, which had turned attention to Web site de-
sign and interaction models, and Information Systems, due to the emergence of well-financed organizations with little-
understood methods. HCI flourished in CS departments and IS in Management schools—until the collapse.
A tide had come in and gone out, but it left something behind: Millions of people had bought computers and got on
the Internet, even if they did less shopping than many had hoped. Seeking to get some value from their devices and
broadband access, they provided opportunities for a new generation of entrepreneurs.

LOOKING BACK: CULTURES AND BRIDGES

Despite overlapping interests, in a dynamic environment with shifting alliances, the major threads of human-
computer interaction research—human factors and ergonomics, information systems, library and information sci-
ence, and computer-human interaction—have not merged. They have interacted with each other only sporadically,
although not for lack of bridge-building efforts. The Human Factors Society co-organized the first CHI conference.
CSCW sought to link CHI and IS. Mergers of OIS with CHI and later CSCW were considered. AIS SIGHCI tried to
engage with CHI. Researchers recently hired into information schools remain active in the other fields.
Even within computer science, bridging is difficult. Researchers interested in interaction left SIGGRAPH to join
the CHI community rather than form a bridge. A second opportunity arose 20 years later when standard platforms
powerful enough for photorealism loomed, but the DUX conference series managed only three meetings. In the
case of artificial intelligence, SIGART and SIGCHI cosponsor the Intelligent User Interface series, but participation
has remained outside mainstream HCI. What are the obstacles to more extensive interaction across fields?

Discretion as a Major Differentiator

HF&E and IS arose before discretionary hands-on use was common. The information field only slowly distanced
itself from supporting specialists. CHI occupied a new niche: discretionary use by non-experts. HF&E and especially
IS researchers considered organizational factors; CHI with few exceptions avoided domain-dependent work. As a
consequence, HF&E and IS researchers shared journals. For example, Benbasat and Dexter (1985) was published
in Management Science and cited five Human Factors articles. Apart from the LIS, they quickly focused on broad
populations. IS countered its organizational focus by insisting that work be framed by theory, which distanced it from
generally atheoretical CHI in particular.
The appropriateness of a research method is tied to the motivation of the people being studied. HF&E and CHI were
shaped by psychologists trained in experimental testing of hypotheses about behavior, and hypothesis-driven exper-
imentation was also embraced by IS. Experimental subjects agree to follow instructions for an extrinsic reward. This
is a reasonable model for nondiscretionary use, but not for discretionary use. CHI researchers relabeled 'subjects'
as 'participants,' which sounds volitional, and found that formal experimental studies were usually inappropriate:
There were too many variables to test formally and feedback from a few participants was often enough. Laboratory
studies of initial or casual discretionary use usually require confirmation in real-world settings anyway, more so than
studies of expert or trained behavior, due to the artificial motivation of laboratory study participants.
The same goals apply—fewer errors, faster performance, quicker learning, greater memorability, and being en-
joyable— but the emphasis differs. For power plant operation, error reduction is critical, performance enhancement
is good, and other goals are less important. For telephone order entry takers, performance is critical, and testing an
interface that could shave a few seconds from a repetitive operation requires a formal experiment. In contrast, con-
sumers often respond to visceral appeal and initial experience. In assessing designs for mass markets, avoiding
obvious problems can be more important than striving for an optimal solution. Less-rigorous discount usability or
cognitive walkthrough methods (Nielsen, 1989; Lewis et al., 1990) can be enough. Relatively time-consuming quali-
tative approaches, such as contextual design or persona use (Beyer and Holtzblatt, 1998; Pruitt and Adlin, 2006),
can provide a deeper understanding when context is critical or new circumstances arise.
CHI largely abandoned its roots in scientific theory and engineering, which does not impress researchers from
HF&E or theory-oriented IS. The controversial psychological method of verbal reports, developed by Newell and
Simon (1972) and foreshadowed by Gestalt psychology, was applied to design by Clayton Lewis as 'thinking-aloud'
(Lewis and Mack, 1982; Lewis, 1983). Perhaps the most widely used CHI method, it led some in the other fields to
characterize CHI people as wanting to talk about their experiences instead of doing research.

Disciplinary, Generational, and Regional Cultures

In the humanities, journals are venues for work in progress and serious work is published in books. In engineering
and the sciences, conferences are generally for work in progress and journals are repositories for polished work. The
disciplines of HF&E, IS, and LIS follow the latter practice. In contrast, for computer science in the United States, con-
ference proceedings became the final destination of most work and journals lost relevance; outside the United States, computer science retained a journal focus. A key factor was arguably the decision of ACM to archive conference proceedings
once it became practical to assemble and publish them prior to the conference (Grudin, 2010). A difference in pre-
ferred channel impedes communication. Researchers in journal cultures chafe at CHI’s insistence on high selectivity
and polished work. CHI researchers are dismayed by what they see at other conferences and, having abandoned
journals, they do not seek out the strong work in other fields. Low acceptance rates also damaged the bridge be-
tween academic and practitioner cultures: Few practitioner papers are accepted and fewer practitioners attend the
conferences.
CHI conferences generally accept 20%-25% of submissions. With a few exceptions, HF&E and IS conferences
accept twice that proportion or more. By my estimate, at most 15% of the work in CHI-sponsored conferences reach-
es journal publication. In contrast, an IS track organizer for HICSS estimated that 80% of research there progressed
to a journal (Jay Nunamaker, opening remarks at HICSS-38, January 2004).
Information schools draw on researchers from both the journals-as-archival and conferences-as-archival camps.
They struggle with this issue, as have computer scientists outside North America. In both cases, the trend is toward
increasing the selectivity and valuation of conference publication.
Even within the English language, differences in language use impede communication. Where CHI referred to 'us-
ers,' HF&E and IS used the term 'operators.' Especially in IS, a user could be a manager whose only contact with the
computer was reading printed reports. For CHI, 'operator' was demeaning and users were always hands-on users. (In
software engineering, 'user' often means 'tool user'—which is to say, developers.) These and many other distinctions
may not seem critical, but they lead to serious confusions or misconceptions when reading or listening to work from
another discipline.
In HF&E and IS streams, 'task analysis' refers to an organizational decomposition of work, perhaps considering
external factors; in CHI 'task analysis' is a cognitive decomposition, such as breaking a text editing move operation
into select, cut, select, and paste. In IS, 'implementation' meant organizational deployment; in CHI it was a synonym
for development. The terms 'system,' 'application,' and 'evaluation' also had different connotations or denotations in
the different fields. Significant misunderstandings resulted from failures to appreciate these differences.
Different perspectives and priorities were also reflected in attitudes toward standards. Many HF&E researchers
contributed to standards development, believing that standards contribute to efficiency and innovation. A view wide-
spread in the CHI community was that standards inhibit innovation. Both views have elements of truth, and the posi-
tions partly converged as Internet and Web standards were tackled. However, the attitudes reflected the different
demands of government contracting and commercial software development. Specifying adherence to standards is a
useful tool for those preparing requests for proposals, whereas compliance with standards can make it more difficult
for a product to differentiate itself.
Competition for resources was another factor. Computers of modest capability were extremely expensive for much
of the time span we have considered. CHI was initially largely driven by the healthy tech industry, whereas research
in the other fields was more dependent on government funding that waxed and waned. On upswings, demand for
researchers outstripped supply. HCI prospered during AI winters, starting with Sutherland’s use of the TX-2 when AI
suffered its first setback and recurring with the emergence of major HCI laboratories during the severe AI winter of
the late 1970s. When computer science thrived, library schools laboring to create information science programs had
to compete with expanding computer science departments that were themselves desperate enough to award faculty
positions to graduates of masters programs.
A generational divide was evident in the case of CHI researchers who grew up in the 1960s and 1970s. Many did
not share the prior generation’s orientation toward military, government, and business systems, and reacted negative-
ly to the lack of gender neutrality that is still occasionally encountered in the HF&E and IS 'man-machine interaction'
literature. Only in 1994 did International Journal of Man-Machine Studies become International Journal of Human-
Computer Studies. Such differences diminished enthusiasm for building bridges and exploring literatures.
Challenges presented by regional cultures merit a study of their own. The presence in North America of the strong
non-profit professional organizations ACM and IEEE led to developments not experienced elsewhere. Whether due to
an entrepreneurial culture, a large consumer market, or other factors, the most successful software and Web applica-
tions originated in the United States and shaped the direction of research there. In Europe, the government role re-
mained central; research favored technology use in large organizations. To protect domestic industries, some gov-
ernments propped up mainframe companies longer than their U.S. counterparts did, and discouraged imports of
new U.S. technologies. The convergence and divergence of North American and European CSCW factions illustrates
how the resulting differences in perspective can impede co-mingling. HCI research in Japan followed the United
States' emphasis on the consumer market. After focusing on methods for designing and developing Japanese-language individual productivity tools for the domestic market on computers oriented toward English, much research turned to
language-independent communication tools for the international market (Tomoo Inoue, personal communication,
March 2012).
Interdisciplinarity and multicultural exploration are intellectually seductive. Could we not learn by looking over
fences? But another metaphor is the Big Bang. Digital technology is exploding, streaming matter and energy in every
direction, forming worlds that at some later date might discover one another and find ways to communicate, and then
again, might not.

LOOKING FORWARD: TRAJECTORIES

The future of HCI will be dynamic and full of surprises. The supralinear growth of hardware capability confounds ef-
forts at prediction—we rarely experience exponential change and do not reason well about it. In the Unit-
ed States, NSF is tasked with envisioning the future and providing resources to take us there, yet two major recent
HCI initiatives, 'Science of Design' and 'CreativIT' (focused on creativity), were short-lived. Nevertheless, extrapola-
tions from observations about the past and present suggest possible developments, providing a prism through which
to view other work and perhaps some guidance in planning a career.

Figure 3. From invention to maturity.

The Optional Becomes Conventional

We exercise prerogative when we use digital technology—sometimes. More often when at home, less often at work.
Sometimes we have no choice, as when confronted by a telephone answering system. Those who are young and
healthy have more choices than those constrained by injury or aging.
Many technologies follow the maturation path shown in Figure 3. Software that was discretionary yesterday is in-
dispensable today. Collaboration forces us to adopt shared conventions. Consider a hypothetical team that has
worked together for 20 years. In 1990, members exchanged printed documents. One person still used a typewriter,
whereas others used different word processors. One emphasized words by underlining, another by italicizing, and a
third by bolding. In 2000, the group decided to exchange digital documents. They had to adopt the same word pro-
cessor. Choice was curtailed; it was only exercised collectively. Today, this team is happy sharing documents in PDF
format, so they can again use different word processors. Perhaps tomorrow software will let them personalize their
view of a single underlying document, so one person can again use and see in italics what another sees as bold or
underlined.
Shackel (1997, p. 981) noted this progression under the heading “From Systems Design to Interface Usability and
Back Again.” Early designers focused at the system level; operators had to cope. When the PC merged the roles of
operator, output user, and program provider, the focus shifted to the human interface and choice. Then individual
users again became components in fully networked organizational systems. Discretion can evaporate when a tech-
nology becomes mission-critical, as word processing and email did in the 1990s.
The converse also occurs. Discretion increases when employees can download free software, bring smartphones
to work, and demand capabilities that they enjoy at home. Managers are less likely to mandate the use of a technolo-
gy that they use and find burdensome. For example, language understanding systems appealed to military officers—
until they themselves became hands-on users:
Our military users… generally flatly refuse to use any system that requires speech recognition… Over and
over and over again, we were told ‘If we have to use speech, we will not take it. I don’t even want to waste
my time talking to you if it requires speech.’ … I have seen generals come out of using, trying to use one
of the speech-enabled systems looking really whipped. One really sad puppy, he said ‘OK, what’s your
system like, do I have to use speech?’ He looked at me plaintively. And when I said ‘No,’ his face lit up,
and he got so happy. (Forbus, 2003; see also Forbus, Usher & Chapman, 2003)
In domains where specialized applications become essential and where security concerns curtail openness, dis-
cretion can recede. But Moore’s law (broadly construed), competition, and the ease of sharing bits should guarantee
a steady flow of experimental technologies with unanticipated and thus initially discretionary uses.

Ubiquitous Computing, Invisible HCI?

Norman (1988, p. 185) wrote of “the invisible computer of the future.” Like motors, he speculated, computers would
be present everywhere and visible nowhere. We interact with clocks, refrigerators, and cars. Each has a motor, but
who studies human–motor interaction? Mark Weiser subsequently introduced a similar concept, 'ubiquitous compu-
ting.' A decade later, at the height of the Y2K crisis and the Internet bubble, computers were more visible than ever.
But after a quarter century, while we may always want a large display or two, would anyone call a smartphone or a
book reader a computer? The visions of Norman and Weiser may be materializing.
With digital technology embedded everywhere, concern with interaction is everywhere. HCI may become invisible
through omnipresence. As interaction with digital technology becomes part of everyone’s research, the three long-
standing HCI fields are losing participation.

Human Factors and Ergonomics. David Meister, author of The History of Human Factors and Ergonomics (1999),
stresses the continuity of HF&E in the face of technology change:
Outside of a few significant events, like the organization of HFS in 1957 or the publication of Proceedings
of the annual meetings in 1972, there are no seminal occurrences . . . no sharp discontinuities that are
memorable. A scientific discipline like HF has only an intellectual history; one would hope to find major
paradigm changes in orientation toward our human performance phenomena, but there is none, largely
because the emergence of HF did not involve major changes from pre-World War II applied psychology.
In an intellectual history, one has to look for major changes in thinking, and I have not been able to dis-
cover any in HF. (e-mail, September 7, 2004)
Membership in the Computer Systems Technical Group has declined. Technology is heavily stressed in technical
groups such as Cognitive Engineering and Decision Making, Communication, Human Performance Modeling, Inter-
net, System Development, and Virtual Environment. Nor do Aging, Medical Systems, or other groups avoid 'invisible
computers.'

Information Systems. While IS thrived during the Y2K crisis and the Internet bubble, other management disciplines—
finance, marketing, operations research, and organizational behavior—became more technically savvy. When the
bubble burst and enrollments declined, the IS niche became less well-defined. The research issues remain signifi-
cant, but this cuts two ways. As IT organizations standardize on products and outsource IT functions, more IT atten-
tion is focused on business-to-business and Web portals for customers. These raise finance and marketing consider-
ations, which in turn lead to HCI functions migrating to other management disciplines.

Computer–Human Interaction. This nomadic group started in psychology, then won a grudgingly-bestowed seat at
the computer science table. Several senior CHI people moved to information schools. Lacking a well-defined aca-
demic niche, CHI ties its identity to the SIGCHI organization and the CHI conference. Membership in SIGCHI
peaked in 1992 and conference attendance peaked in 2001. As new technologies become widely used, thriving spe-
cialized conferences are formed, often started by younger researchers. World Wide Web conferences included pa-
pers on HCI issues from the outset. HCI is an 'invisible' presence in conferences on agents, design, and computing
that is ubiquitous, pervasive, accessible, social and sustainable. High rejection rates for conference submissions and
a new generational divide could accelerate the dispersion of research.
CHI attendance has become more exclusively academic, despite industry's need for basic research in specific ar-
eas. Apart from education and health, which have broad appeal, and software design and development, CHI remains
largely focused on general phenomena and resistant to domain-specific work. This creates additional opportunities
for regional and specialized conferences.

Information

Early in the computer era, there were no networks and memory was fantastically expensive. Computers were for
computation, not information processing. Today, the situation is reversed: Memory and bandwidth are so plentiful that
most computation is in the service of processing and distributing information. And the shift to an emphasis on infor-
mation, with computation present but less visible, could well accelerate.
Cronin (2005) proposed that information access, in terms of intellectual, physical, social, economic, and spa-
tial/temporal factors, is the focus of the information field. Information is acquired from sensors and human input, it
flows over networks including the Web, and is aggregated, organized, and transformed. The routing and manage-
ment of information within enterprises, as well as the consequences of ever-more-permeable organizational bounda-
ries, is evolving. Approaches to personal information management are also rapidly changing. We once contended
with shoeboxes of photographs and boxes of old papers; now many of us must make significant online information
management decisions, choosing what to keep locally, what to maintain in the cloud, and how to organize it to ensure
its future accessibility. CHI has over a decade of work on information design and visualization.
In speculating about the future, Cronin (1995, p. 56) quotes Wersig (1992) who argued that concepts around in-
formation might function “like magnets or attractors, sucking the focus-oriented materials out of the disciplines and
restructuring them within the information scientific framework.” Could this happen? Information schools have hired
senior and junior people from many relevant areas. Andrew Dillon, dean of the University of Texas School of Infor-
mation, worked at Loughborough with Brian Shackel and Ken Eason. Syracuse, the first extant school of information
(since 1974), has faculty with IS training and orientation. CHI faculty have migrated to information schools and de-
partments of several leading universities.
Communication studies is a discipline to watch. Rooted in humanities and social sciences, it is gradually assuming
a quantitative focus. Centered in studies of television and other mass media, the field blossomed in the 1980s and
1990s. Only in the last several years has computer-mediated communication reached the scale of significance of the
other media. HCI is in a position to draw on past work in communication as communication focuses more on digital
media.
The rise of specialized programs—biomedical informatics, social informatics, community informatics, and infor-
mation and communication technology for development (ICT4D)—could work against the consolidation of information
studies. Information, like HCI, could become invisible through ubiquity. The annual iConference is a ba-
rometer. In 2005 and 2006, the halls echoed with active discussions and disagreement about directions. Should new
journals and research conferences be pursued, or should researchers stick with the established venues in the various
contributing disciplines? In the years since, faculty from the different fields worked out pidgin languages in order to
communicate with each other. Assistant professors were hired and graduate students enlisted whose initial jobs and
primary identities are with 'Information.' Will they creolize the pidgin language?
One can get a sense that the generals may still be arguing over directions, but the troops are starting to march. It
is not clear where they will go. The generals are reluctant to turn over command to less busy and more fluent junior
officers. The iConference has grown but vies with the less international although more established ASIST conference.
However this evolves, in the long term Information is likely to be the major player in human–computer interaction.
Design and Information are active HCI foci today, but the attention to design is compensating for past neglect. Infor-
mation is being reinvented.

CONCLUSION: THE NEXT GENERATION

Looking back, cyclic patterns and cumulative influences are evident. New waves of hardware enable different ways to
support the same activities. Email arrived as an informal communication medium, was embraced by students, re-
garded with suspicion by organizations, and eventually became more formal and used everywhere. Then texting and
instant messaging came along as an informal medium, were embraced by students, regarded with suspicion by or-
ganizations, and eventually became used everywhere. Social networking came along…
Mindful of Edgar Fiedler's admonition that "he who lives by the crystal ball soon learns to eat ground glass," con-
sider this: In the mid-1980s, the mainframe market lost the spotlight. Organizations were buying hundreds of PCs, but
these were weak devices with little memory, hard to network. They didn’t need more mainframes, but what about a
massive, parallel supercomputer? Government and industry invested vast sums in high performance computing, only
to discover that it was hard to decompose most computational problems into parallel processes whose output could
be reassembled. As these expensive and largely ineffective efforts proceeded, PCs slowly got stronger, added some
memory, got networked together, and without vast expenditures and almost unnoticed at first, the Internet and the
Web emerged.
Today the desktop computer has lost the spotlight to portable devices, but it won't stop there. Organizations buy
hundreds of embedded systems, sensors and effectors, but these are weak devices with little memory, hard to net-
work. Some tasks can be handed off to a second processor, but how far can parallel multicore computers take us?
Government and industry are investing large sums in parallel computing. They are rediscovering the limitations. Sen-
sors and effectors will add processing and memory, harvest energy, and get networked. What will that lead to? The
desktop computer may become a personal control station with large displays enabling us to monitor vast quantities of
information on anything of interest—work and professional, family and health, the state of household appliances, In-
ternet activity, and so forth—with a work area that supports exchanging tasks and objects with portable or distributed
devices.
New technologies capture our attention, but of equal importance is the rapid maturation of technologies such as
digital video and document repositories, as well as the complex specialization occurring in virtually all domains of
application. Different patterns of use emerge in different cultures, different industries. Accessibility and sustainability
are wide-open, specialized research and development areas. Tuning technologies for specific settings can bring hu-
man factors approaches to the fore; designing for efficient heavy use could revive command-driven interfaces,
whether the commands are typed, spoken, or gestural.
Digital technology has inexorably increased the visibility of activity. We see people behaving not as we thought
they would or as we think they should. Rules, conventions, policies, regulations, and laws are not consistently fol-
lowed; sanctions for violating them are not uniformly applied. Privacy and our evolving attitudes toward it are a small
piece of this powerful progression. Choosing how to approach these complex and intensifying challenges as individu-
als, families, organizations, and societies—Should or could we create more nuanced rules? When do we increase
enforcement or tolerate deviance?—will be a perpetual preoccupation as technology exposes the world as it is.
Until well after it is revoked, Moore’s law broadly construed will ensure that digital landscapes provide new forms
of interaction to explore and new practices to improve. The first generation of computer researchers, designers, and
users grew up without computers. The generation that followed used computers as students, entered workplaces,
and changed the way technology was used. Now a generation has grown up with computers, game consoles, and
cell phones. They absorbed an aesthetic of technology design and communicate by messaging. They are developing
skills at searching, browsing, assessing, and synthesizing information. They use smartphones, acquire multimedia
authoring talent, and embrace social networking sites. They have different takes on privacy and multitasking. They
are entering workplaces, and everything will be changed once again. However it is defined and wherever it is studied,
human–computer interaction will for some time be in its early days.

APPENDIX: PERSONAL OBSERVATIONS

This appendix describes some of my experiences to add texture and a sense of the human impact of various devel-
opments. It is of potential interest because I followed a common path for many years, working as a computer pro-
grammer, studying cognitive psychology, spending time as an HCI professional in industry, and then joining academ-
ia. My interest in history arose from the feeling of being swept along by invisible forces, often against my intention. My
first effort at making these forces visible was titled “The Computer Reaches Out” (Grudin, 1990) because I saw com-
puters evolving and slowly engaging with the world in ways that we, their developers, had not entirely foreseen.

1970: A Change in Plans. As a student, I was awed by a Life magazine article that quoted experts who said that
computers with super-human intelligence would arrive very soon. If we survived a few years, we could count on ma-
chines to do all necessary work! Human beings should focus on what they enjoy, not what they had thought might be
useful. I shifted from physics and politics to mathematics and literature.

1973: Three Professions. Looking for work, I found three computer job categories in the Boston Globe classifieds: (1) operators, (2) programmers, and (3) systems analysts. Not qualified to be a highly paid analyst, I considered low-
paid, hands-on operator jobs but landed a programming job with a small electronics company, Wang Laboratories.
For two years, I never saw the computer that my programs ran on. I flowcharted on paper and coded on coding
sheets that a secretary sent to be punched and verified. A van carried the stack of cards 20 miles to a computer cen-
ter, and later that day or the next morning I got the printout. It might say something like “Error in Line 20.”

1975: A Cadre of Discretionary Hands-On Users. In 1975, Wang acquired a few teletype terminals with access to the
WYLBUR line editor developed at the Stanford Linear Accelerator. Some of us programmers chose to abandon paper
and became hands-on computer users.

1983: Chilly Reception for a Paper on Discretion in Use. After time out to get a PhD in cognitive psychology, I pub-
lished my first HCI work as a postdoc at the MRC Applied Psychology Unit in Great Britain (Grudin & MacLean,
1984). Allan MacLean and I found that some people choose a slower interface for aesthetic or other reasons, even
when they are familiar with a more efficient alternative. A senior colleague asked us not to publish it. He was part of a
large effort to improve expert efficiency through cognitive modeling. A demonstration that greater efficiency could be
undesirable would be a distraction, he said: “Sometimes the larger enterprise is more important than a small study.”

1984: Encountering Moore's Law, Information Systems, Human Factors, and Design. I returned to Wang, which had
become a leading minicomputer company. Moore's law had changed the industry. Hardware was now often ordered
from catalogs. The reduced cost of memory changed the priorities and programming skills needed for software de-
sign and development. Another cognitive psychologist, Susan Ehrlich, worked in a marketing research group and
later managed a human factors group. She introduced me to the IS literature and I attended local chapter meetings of
both HFS and SIGCHI. In a gesture to counter CHI antipathy toward human factors I began calling myself a human
factors engineer. I drove to Cambridge to see the newly released Macintosh. I realized that few software engineers
had the visual design skills that would become important, so at work I encouraged industrial designers of hardware
('boxes') to look into software interface design, which one did.

1985: The GUI Shock. In the early 1980s, Phil Barnard and I were among the many cognitive psychologists working on
command naming, an important topic in the era of command-line interfaces. Our ambition was to develop a comprehen-
sive theoretical foundation for HCI. But the success of the Mac in 1985 curtailed interest in command names. No one
would build on our past work, a depressing thought, and the hope for a comprehensive theoretical foundation for HCI was dashed. Time to choose: Were we cognitive psychologists or computer professionals? Phil remained a psychologist.

1986: Beyond “The User”: Groups and Organizations. I joined MCC, an industry research consortium. Between jobs I
worked on two papers, each addressing a major challenge that I had encountered in product development. (i) From
1984 to 1986, I worked on several products or features to support groups rather than individual users. These did not
do well with users. Why was group support so challenging? (ii) Organizational structures and software development
processes were painfully unsuitable for developing interactive software. What could be done about it? These ques-
tions formed the basis of much of my subsequent research.

1989: Development Contexts: A Major Differentiator. Within weeks of arriving at Aarhus University, where I would
spend two years in a country that then had little commercial software development, I saw that differences in the con-
ditions that govern the development of interactive commercial applications, in-house software, and contracted sys-
tems, shape practices and perceptions found in CHI, IS, and software engineering, respectively. Sorting this out led
to my first library research for historical purposes (Grudin, 1991). Perusing long-forgotten journals and magazines in
dusty library corridors felt like wandering through an archaeological site. Understanding articles from these different
fields surfaced the significant challenge discussed next.

1990: Just Words? Terminology Can Matter. A premonition had arisen in 1987, when my IS-oriented colleague Susan Ehrlich titled a paper “Successful Implementation of Office Communication Systems.” To me, implementation was a synonym for coding or development. To her, implementation meant introduction into organizations. Sure enough, the
ACM editor asked her to change 'implementation' to 'adoption' (Ehrlich, 1987). Also, what she called systems, I called
applications. Language, usually an ally, was getting in the way.
In 1990, I described my planned HCI course at Aarhus as featuring “user-interface evaluation.” My new colleagues
seemed embarrassed. Weeks later, a book authored by one of them was published (Bødker, 1990). Its first sentence
quoted Allen Newell and Stu Card: “Design is where the action is, not evaluation.” Now I was embarrassed. In their in-
house development world, projects could take ten years. Design was the first phase. Evaluation, coming at the end
when only cosmetic changes were possible, had a negative stigma. In the world of commercial products, evaluation
of previous versions, competitive products, and (ideally) prototypes was integral. Evaluation drew on an experimental
psychologist's skillset, was central to iterative design, and was considered a good thing.
Later in 1990, I joined a panel on task analysis at a European conference. To my dismay, this IS-oriented group
had a different definition of 'task analysis.' In CHI, it meant a cognitive task analysis: breaking a simple task into com-
ponents; for example, is “move text” thought of as 'select-delete-paste' or as 'select-move-place'? In IS, it meant an
organizational task analysis: tasks as components of a broad work process. Some Europeans, unfamiliar with the
consumer-oriented context, felt that for us to call what we did a task analysis was disgraceful.
Also in 1990, en route to give a job talk at UC Irvine, my lecture to an IS audience at the UCLA Anderson School
of Management ended badly. The department head asked a question that seemed meaningless, so I replied cautious-
ly. He rephrased the question. I rephrased my response. He started again, then stopped and shrugged as if to say,
“This fellow is hopeless.” When I saw him a few months later, he seemed astonished to learn that his Irvine friends
were hiring me. Only later did I discover the basis of our failure to communicate: We attached different meanings to
the word 'users.' To me, it meant hands-on computer users. He had asked about the important users in IS: managers
who specified database requirements and read reports, but were usually not hands-on computer users. To me, use
was by definition hands-on; his question had made no sense.
A book could be written about the word 'user.' From a CHI perspective, the IS user was the 'customer.' Consult-
ants called them 'clients.' In IS, the hands-on user was the 'end-user.' In CHI parlance, 'end-user' and 'user' were one
and the same—a person who entered data or commands and used the output. 'End-user' seemed superfluous or an
affectation. Human factors used 'operator,' which CHI considered demeaning. In software engineering, 'user' usually
denoted a tool user, which is to say, a software engineer.
A final terminology note: the male generic. CHI eschewed it but the other fields held onto it. I avoided submitting to
International Journal of Man-Machine Studies and turned down an invitation to speak at a 'man-machine' interaction
event.
I generally consider words a necessary but uninteresting medium for conveying meaning, but these experiences
led me to write an essay on unintended consequences of language (Grudin, 1993).

2005: Considering HCI History. My intermittent efforts to understand the past came together in an article published in
IEEE Annals of the History of Computing. ACM Interactions magazine then asked me to write and collect essays on
historical topics from people with different perspectives and interests. A list is on my web page.

2015: Reflections on Bridging Efforts. I've worked with others to try to bridge CHI and human factors, office infor-
mation systems, information systems, Design, and information science. None succeeded. I interviewed people who
participated in two or more areas before withdrawing into one. They identified the obstacles described in this history.
As a boomer and cognitive psychologist, I experienced generational and cultural divides. Many of my MCC col-
leagues went there to avoid 'Star Wars' military projects, which were a focus of human factors, with its antipathy (at
that time) to cognitive approaches. We shifted from journals to conferences as the primary publication venue, and
from hypothesis-driven experimentation to qualitative field research or prototyping approaches. As I have noted,
these differences and actions separated fields.
Some differences faded, others persist. Reviewers are often irritated by unfamiliar acronyms used by authors from
other fields. Writing a chapter for an IS-oriented book, my coauthor and I wrangled at great length with the editor over
terminology (Palen & Grudin, 2002). When reviewing the literature on IS models of white-collar employee perceptions
of technology, I searched online for TAM references and twice came up blank. I was mystified. Eventually I saw the
problem: TAM stands for 'Technology Acceptance Model,' but I had quickly typed 'Technology Adoption Model.' Non-
discretionary acceptance vs. discretionary adoption: Different foci lead to different terminology, and confusion.

2016: Predictions. Detailed forecasts rarely hold up well to close inspection. But understanding the forces from the past that shaped the present improves the odds of anticipating or reacting quickly to future events. It also provides a sense of which efforts will likely be futile. Common errors are to underestimate the immovable object that is human nature, the irresistible force of technological change, or both. Check my projections in ACM Interactions (the last 2006 and first 2007 issues) to see how I'm doing. All of my ACM publications can be freely and legally accessed via
my web site.

ACKNOWLEDGMENTS

Hundreds of people shared recollections and information, and pointed to additional sources. Through the years I
worked part-time on this, Phil Barnard and Ron Baecker provided encouragement and advice without which I would
not have persisted and with which I improved the account. The courses I taught with Steve Poltrock for 20 years pro-
vided opportunities to explore some of these perspectives. Dick Pew's passing of the HCI Handbook history franchise
to me was a generous enabler. Finally, my daughters Eleanor and Isobel and my wife Gayna Williams took in stride
my hours at the computer and lost in thought.

REFERENCES

Note: All URLs were accessed December 9, 2011.

Ackoff, R. L. (1967). Management misinformation systems. Management Science, 14, B147–B156.
Asimov, I. (1950). I, robot. New York: Gnome Press.
Aspray, W. (1999). Command and control, documentation, and library science: The origins of information science at the University
of Pittsburgh. IEEE Annals of the History of Computing, 21(4), 4-20.
Baecker, R. & Buxton, W. (1987). A historical and intellectual perspective. In R. Baecker & W. Buxton, Readings in HCI: A multidis-
ciplinary approach (pp. 41–54). San Francisco: Morgan Kaufmann.
Baecker, R., Grudin, J., Buxton, W. & Greenberg, S. (1995). A historical and intellectual perspective. In R. Baecker, J. Grudin, W.
Buxton & S. Greenberg, Readings in HCI: Toward the Year 2000 (pp. 35–47). San Francisco: Morgan Kaufmann.
Bagozzi, R. P., Davis, F. D. & Warshaw, P. R. (1992). Development and test of a theory of technological learning and usage. Hu-
man Relations, 45(7), 660–686.
Banker, R. D. & Kaufmann, R. J. (2004). The evolution of research on Information Systems: A fiftieth-year survey of the literature in
Management Science. Management Science, 50(3), 281–298.
Bannon, L. (1991). From human factors to human actors: The role of psychology and HCI studies in system design. In J. Green-
baum & M. Kyng (Eds.), Design at Work (pp. 25–44). Hillsdale, NJ: Erlbaum.
Bardini, T. (2000). Bootstrapping: Douglas Engelbart, coevolution, and the origins of personal computing. Stanford University.
Barnard, P. (1991). Bridging between basic theories and the artifacts of HCI. In J. M. Carroll (Ed.), Designing interaction: Psycholo-
gy at the human-computer interface (pp. 103–127). Cambridge: Cambridge University Press.
Barnard, P., May, J, Duke, D. & Duce, D. (2000). Systems, interactions, and macrotheory. ACM Trans. Computer-Human Interac-
tion, 7(2), 222 - 262.
Begeman, M., Cook, P., Ellis, C., Graf, M., Rein, G. & Smith, T. (1986). Project Nick: Meetings augmentation and analysis. Proc.
Computer-Supported Cooperative Work 1986, 1–6.
Benbasat, I. & Dexter, A. S. (1985). An experimental evaluation of graphical and color-enhanced information presentation. Management Science, 31(11), 1348–1364.
Bennett, J. L. (1979). The commercial impact of usability in interactive systems. In B. Shackel (Ed.), Man-computer communica-
tion (Vol. 2, pp. 1-17). Maidenhead: Pergamon-Infotech.
Bewley, W. L., Roberts, T. L., Schroit, D. & Verplank, W. L. (1983). Human factors testing in the design of Xerox’s 8010 “Star” office
workstation. Proc. CHI’83, 72–77. New York: ACM.
Beyer, H. & Holtzblatt, K. (1998). Contextual Design—Defining customer-centered systems. San Francisco: Morgan Kaufmann.
Bjerknes, G., Ehn, P. & Kyng, M. (Eds.). (1987). Computers and Democracy—a Scandinavian Challenge. Aldershot, UK: Avebury.
Björn-Andersen, N. & Hedberg, B. (1977). Design of information systems in an organizational perspective. In P.C. Nystrom & W.H.
Starbuck (Eds.), Prescriptive models of organizations (pp. 125-142). TIMS Studies in the Management Sciences, Vol. 5. Amster-
dam: North-Holland.
Blackwell, A. (2006). The reification of metaphor as a design tool. ACM Trans. Computer-Human Interaction, 13(4), 490–530.
Blythe, M. A., Monk, A. F., Overbeeke, K. & Wright, P. C. (Eds.). (2003). Funology: From usability to user enjoyment. New York: Kluwer.
Borman, L. (1996). SIGCHI: the early years. SIGCHI Bulletin, 28(1), 1–33. New York: ACM.
Bowman, William J. (1968). Graphic Communication. New York: John Wiley.
Buckland, M. (1998). Documentation, information science, and library science in the U.S.A. In Hahn, T.B. & Buckland, M. (Eds.),
Historical studies in Information Science, pp. 159-172. Medford, NJ: Information Today / ASIS.
Buckland, M. (2009). As we may recall: Four forgotten pioneers. ACM Interactions, 16(6), 76-69.
Burke, C. (1994). Information and secrecy: Vannevar Bush, Ultra, and the other Memex. Lanham, MD: Scarecrow Press.
Burke, C. (1998). A rough road to the information highway: Project INTREX. In Hahn, T.B. & Buckland, M. (Eds.), Historical studies
in Information Science, pp. 132-146. Medford, NJ: Information Today / ASIS.
Burke, C. (2007). History of information science. In Cronin, B. (Ed.), Annual review of information science and technology 41, pp. 3-
53. Medford, NJ: Information Today / ASIST.
Bush, V. (1945). As we may think. The Atlantic Monthly, 176, 101–108.
http://www.theatlantic.com/magazine/archive/1969/12/as-we-may-think/3881/
Butler, P. (1933). Introduction to library science. Chicago: Univ. of Chicago Press.
Buxton, W.A.S. (2006). Early interactive graphics at MIT Lincoln Labs. http://www.billbuxton.com/Lincoln.html
Bødker, S. (1990). Through the interface: A human activity approach to user interface design. Mahwah, NJ: Lawrence Erlbaum.
Cakir, A., Hart, D. J. & Stewart, T. F. M. (1980). Visual display terminals. New York: Wiley.
Card, S. K. & Moran, T. P. (1986). User technology: From pointing to pondering. Proc. Conference on the History of Personal Work-
stations, 183–198. New York: ACM.
Card, S. K., Moran, T. P. & Newell, A. (1980a). Computer text-editing: An information-processing analysis of a routine cognitive skill.
Cognitive Psychology, 12, 396–410.
Card, S. K., Moran, T. P. & Newell, A. (1980b). Keystroke-level model for user performance time with interactive systems. Comm.
ACM, 23(7), 396–410. New York: ACM.
Card, S., Moran, T. P. & Newell, A. (1983). The psychology of human-computer interaction. Mahwah, NJ: Lawrence Erlbaum.
Carey, J. (1988). Human Factors in Management Information Systems. Greenwich, CT: Ablex.
Carroll, J.M. (Ed.) (2003). HCI models, theories and frameworks: Toward a multidisciplinary science. San Francisco: Morgan Kaufmann.
Carroll, J. M. & Campbell, R. L. (1986). Softening up hard science: Response to Newell and Card. Human-Computer Interaction,
2(3), 227–249.
Carroll, J. M. & Mazur, S. A. (1986). Lisa learning. IEEE Computer, 19(11), 35–49.
Cronin, B. (1995). Shibboleth and substance in North American Library and Information Science education. Libri, 45, 45-63.
Damodaran, L., Simpson, A. & Wilson, P. (1980). Designing systems for people. Manchester, UK: NCC Publications.
Darrach, B. (1970, November 20). Meet Shaky: The first electronic person. Life Magazine, 69(21), 58B–68.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarter-
ly, 13(3), 319–339.
Davis, G. B. (1974). Management information systems: Conceptual foundations, structure, and development. New York: McGraw-Hill.
Dennis, A.R. & Reinicke, B.A. (2004). Beta versus VHS and the acceptance of electronic brainstorming technology. MIS Quarterly,
28(1), 1-20.
Dennis, A., George, J., Jessup, L., Nunamaker, J. & Vogel, D. (1988). Information technology to support electronic meetings. MIS
Quarterly, 12(4), 591–624.
DeSanctis, G. & Gallupe, R. B. (1987). A foundation for the study of group decision support systems. Management Science, 33,
589–610.
Dyson, F. (1979). Disturbing the universe. New York: Harper & Row.
Ehrlich, S. F. (1987). Strategies for encouraging successful adoption of office communication systems. ACM Trans. Office Infor-
mation Systems, 5(4), 340–357.
Engelbart, D. (1962). Augmenting human intellect: A conceptual framework. SRI Summary report AFOSR-3223. Reprinted in P.
Howerton & D. Weeks (Eds.), Vistas in information handling, Vol. 1 (pp. 1-29). Washington, D.C.: Spartan Books, 1963.
http://www.dougengelbart.org/pubs/augment-3906.html
Engelien, B. & McBryde, R. (1991). Natural language markets: Commercial strategies. London: Ovum Ltd.
Evenson, S. (2005). Design and HCI highlights. Presented at the HCIC 2005 Conference. Winter Park, Colorado, February 6, 2005.
Fano, R. & Corbato, F. (1966). Timesharing on computers. Scientific American 214(9), 129–140.
Feigenbaum, E. A. & McCorduck, P. (1983). The Fifth Generation: Artificial Intelligence and Japan’s computer challenge to the
world. Reading, MA: Addison-Wesley.
Fidel, R. (2011). Human information interaction: an ecological approach to information behavior. Cambridge, MA: MIT Press.
Foley, J. D. & Wallace, V. L. (1974). The art of natural graphic man-machine conversation. Proc. of the IEEE, 62(4), 462–471.
Forbus, K. (2003). Sketching for knowledge capture. Lecture at Microsoft Research, Redmond, WA, May 2.
Forbus, K. D., Usher, J. & Chapman, V. (2003). Qualitative spatial reasoning about sketch maps. Proc. Innovative Applications of AI,
pp. 85-92. Menlo Park: AAAI.
Forster, E.M. (1909). The machine stops. Oxford and Cambridge review, 8, November, 83-122.
Friedman, A. (1989). Computer systems development: History, organization and implementation. New York: Wiley.
Gilbreth, L. (1914). The psychology of management: The function of the mind in determining teaching and installing methods of least
waste. NY: Sturgis and Walton.
Good, I.J. (1965). Speculations concerning the first ultra-intelligent machine. Advances in Computers, 6, 31-88.
http://commonsenseatheism.com/wp-content/uploads/2011/02/Good-Speculations-Concerning-the-First-Ultraintelligent-
Machine.pdf
Gould, J.D. & Lewis, C. (1983). Designing for usability—Key principles and what designers think. Proc. CHI’83, 50–53. NY: ACM.
Grandjean, E. & Vigliani, A. (1980). Ergonomics aspects of visual display terminals. London: Taylor and Francis.
Gray, W. D., John, B. E., Stuart, R., Lawrence, D. & Atwood, M. E. (1990). GOMS meets the phone company: Analytic modeling
applied to real-world problems. Proc. Interact’90, 29–34. Amsterdam: North Holland.
Greenbaum, J. (1979). In the name of efficiency. Philadelphia: Temple University.
Greif, I. (1985). Computer-Supported Cooperative Groups: What are the issues? Proc. AFIPS Office Automation Conference, 73-76.
Montvale, NJ: AFIPS Press.
Greif, I. (ed.) (1988). Computer-Supported Cooperative Work: A book of readings. San Mateo, CA: Morgan Kaufmann.
Grudin, J. (1990). The computer reaches out: The historical continuity of interface design. Proc. CHI’90, 261–268. NY: ACM.
Grudin, J. (1991). Interactive systems: Bridging the gaps between developers and users. IEEE Computer, 24(4), 59–69.
Grudin, J. (1993). Interface: An evolving concept. Comm. ACM, 36(4), 110–119
Grudin, J. (2009). AI and HCI: Two fields divided by a common focus. AI Magazine, 30(4), 48-57.
Grudin, J. (2010). Conferences, community, and technology: Avoiding a crisis. Proc. iConference 2010.
https://www.ideals.illinois.edu/handle/2142/14921
Grudin, J. (2011). Human-computer interaction. In Cronin, B. (Ed.), Annual review of information science and technology 45, pp.
369-430. Medford, NJ: Information Today (for ASIST).
Grudin, J. & MacLean, A. (1984). Adapting a psychophysical method to measure performance and preference tradeoffs in human-
computer interaction. Proc. INTERACT’84, 338–342. Amsterdam: North Holland.
Hertzfeld, A. (2005). Revolution in the valley: The insanely great story of how the Mac was made. Sebastopol, CA: O’Reilly Media.
HFES (2010). HFES history. In HFES 2010–2011, directory and yearbook (pp. 1–3). Santa Monica: Human Factors and Ergonom-
ics Society. Also found at http://www.hfes.org/web/AboutHFES/history.html
Hiltz, S. R. & Turoff, M. (1978). The network nation. Reading, MA: Addison-Wesley.
Hiltzik, M. A. (1999). Dealers of lightning: Xerox PARC and the dawn of the computer age. New York: HarperCollins.
Hopper, G. (1952). The education of a computer. Proc. ACM Conference, reprinted in Annals of the History of Computing, 9(3–4),
271–281, 1987.
Huber, G. (1983). Cognitive style as a basis for MIS and DSS designs: Much ado about nothing? Management Science, 29(5), 567-579.
Hutchins, E. L., Hollan, J. D. & Norman, D. A. (1986). Direct manipulation interfaces. In D. A. Norman & S. W. Draper (Eds.), User
Centered System Design (pp. 87–124). Mahwah, NJ: Lawrence Erlbaum.
Israelski, E. & Lund, A. M. (2003). The evolution of HCI during the telecommunications revolution. In J. A. Jacko & A. Sears (Eds.),
The human-computer interaction handbook, pp. 772-789. Mahwah, NJ: Lawrence Erlbaum.
Johnson, T. (1985). Natural language computing: The commercial applications. London: Ovum Ltd.
Kao, E. (1998). The history of AI. Retrieved March 13, 2007, from http://www.generation5.org/content/1999/aihistory.asp
Kay, A. & Goldberg, A. (1977). Personal dynamic media. IEEE Computer 10(3), 31–42.
Keen, P. G. W. (1980). MIS research: reference disciplines and a cumulative tradition. In First International Conference on Infor-
mation Systems, 9–18. Chicago: Society for Management Information Systems.
Kling, R. (1980). Social analyses of computing: Theoretical perspectives in recent empirical research. Computing Surveys, 12(1),
61-110.
Landau, R., Bair, J. & Siegman, J. (Eds.). (1982). Emerging office systems: Extended Proceedings of the 1980 Stanford Internation-
al Symposium on Office Automation, Norwood, NJ.
Lenat, D. (1989). When will machines learn? Machine Learning, 4, 255–257.
Lewis, C. (1983). The ‘thinking aloud’ method in interface evaluation. Tutorial given at CHI’83. Unpublished notes.
Lewis, C. & Mack, R. (1982). Learning to use a text processing system: Evidence from “thinking aloud” protocols. Proc. Conference
on Human Factors in Computing Systems, 387–392. New York: ACM.
Lewis, C., Polson, P., Wharton, C. & Rieman, J. (1990). Testing a walkthrough methodology for theory-based design of walk-up-and-
use Interfaces. Proc. CHI'90, 235–242. New York: ACM.
Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11.
http://groups.csail.mit.edu/medg/people/psz/Licklider.html
Licklider, J.C.R. (1963). Memorandum For: Members and Affiliates of the Intergalactic Computer Network. April 23.
http://www.kurzweilai.net/memorandum-for-members-and-affiliates-of-the-intergalactic-computer-network
Licklider, J. C. R. (1965). Libraries of the future. Cambridge, MA: MIT Press.
Licklider, J. C. R. (1976). User-oriented interactive computer graphics. In Proc. SIGGRAPH workshop on user-oriented design of
interactive graphics systems, 89–96. New York: ACM.
Licklider, J. C. R & Clark, W. (1962). On-line man-computer communication. AFIPS Conference Proc., 21, 113–128.
Lighthill, J. (1973). Artificial intelligence: A general survey. In J. Lighthill, N. S. Sutherland, R. M. Needham, H. C. Longuet-Higgins &
D. Michie (Eds.), Artificial intelligence: A paper symposium. London: Science Research Council of Great Britain.
http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm
Long, J. (1989). Cognitive ergonomics and human-computer interaction. In J. Long & A. Whitefield (Eds.), Cognitive ergonomics and
human-computer interaction (pp. 4–34). Cambridge: Cambridge University Press.
Machlup, F. & Mansfield, U. (Eds.) (1983). The study of information: Interdisciplinary messages. New York: Wiley.
March, A. (1994). Usability: the new dimension of product design. Harvard Business Review, 72(5), 144–149.
Marcus, A. (2004). Branding 101. ACM Interactions, 11(5), 14–21.
Markoff, J. (2005). What the dormouse said: How the 60s counter-culture shaped the personal computer. London: Viking.
Markoff, J. (2015). Machines of loving grace: The quest for common ground between humans and robots. New York: HarperCollins.
Markus, M.L. (1983). Power, politics, and MIS implementation. Comm. of the ACM, 26(6), 430-444.
Martin, J. (1973). Design of man-computer dialogues. New York: Prentice-Hall.
McCarthy, J. (1960). Functions of symbolic expressions and their computation by machine, part 1. Comm. ACM, 3(4), 184–195.
McCarthy, J. (1988). B. P. Bloomfield, The question of artificial intelligence: Philosophical and sociological perspectives. Annals of
the History of Computing, 10(3), 224–229. http://www-formal.stanford.edu/jmc/reviews/bloomfield/bloomfield.html
Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Lawrence Erlbaum.
Moggridge, B. (2007). Designing interactions. Cambridge: MIT Press.
Moody, F. (1995). I sing the body electronic: A year with Microsoft on the multimedia frontier. Viking.
Moravec, H. (1988). Mind children: The future of robot and human intelligence. Cambridge: Harvard University Press.
Moravec, H. (1998). When will computer hardware match the human brain? Journal of evolution and technology, 1, 1.
http://www.transhumanist.com/volume1/moravec.htm
Mumford, E. (1971). A comprehensive method for handling the human problems of computer introduction. IFIP Congress, 2, 918–923.
Mumford, E. (1976). Toward the democratic design of work systems. Personnel management, 8(9), 32-35.
Myers, B. A. (1998). A brief history of human computer interaction technology. ACM Interactions, 5(2), 44–54.
National Science Foundation. (1993). NSF 93–2: Interactive Systems Program Description. 13 January 1993.
http://www.nsf.gov/pubs/stis1993/nsf932/nsf932.txt
National Science Foundation. (2003). NSF Committee of Visitors Report: Information and Intelligent Systems Division. 28 July 2003.
Negroponte, N. (1970). The architecture machine: Towards a more humane environment. Cambridge: MIT Press.
Nelson, T. (1965). A file structure for the complex, the changing, and the indeterminate. Proc. ACM National Conference, 84–100.
New York: ACM.
Nelson, T. (1973). A conceptual framework for man-machine everything. Proc. National Computer Conference (pp. M21–M26).
Montvale, New Jersey: AFIPS Press.
Nelson, T. (1996). Generalized links, micropayment and transcopyright.
http://www.almaden.ibm.com/almaden/npuc97/1996/tnelson.htm
Newell, A. & Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-computer interac-
tion, 1(3), 209–242.
Newell, A. & Simon, H. A. (1956). The logic theory machine: A complex information processing system. IRE Transactions on Information Theory, IT-2, 61–79.
Newell, A. & Simon, H. A. (1972). Human problem solving. New York: Prentice-Hall.
Newman, W.M. & Sproull, R.F. (1973). Principles of interactive computer graphics. New York: McGraw-Hill.
Nielsen, J. (1989). Usability engineering at a discount. In G. Salvendy & M.J. Smith (Eds.), Designing and using human-computer
interfaces and knowledge based systems (pp. 394–401). Amsterdam: Elsevier.
Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. Proc. CHI'90, 249-256. New York: ACM.
Norberg, A. L. & O’Neill, J. E. (1996). Transforming computer technology: Information processing for the Pentagon 1962–1986.
Baltimore: Johns Hopkins.
Norman, D. A. (1982). Steps toward a cognitive engineering: Design rules based on analyses of human error. Proc. Conference on
Human Factors in Computing Systems, 378–382. New York: ACM.
Norman, D. A. (1983). Design principles for human-computer interfaces. Proc. CHI’83, 1–10. New York: ACM.
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 31–61).
Mahwah, NJ: Lawrence Erlbaum.
Norman, D. A. (1988). Psychology of everyday things. Reissued in 1990 as Design of everyday things. New York: Basic Books.
Norman, D. A. (2004). Emotional design: Why we love (or hate) everyday things. New York: Basic Books.
Nunamaker, J., Briggs, R. O., Mittleman, D. D., Vogel, D. R. & Balthazard, P. A. (1997). Lessons from a dozen years of group sup-
port systems research: A discussion of lab and field findings. Journal of Management Information Systems, 13(3), 163–207.
Nygaard, K. (1977). Trade union participation. Presentation at CREST Conference on Management Information Systems. Stafford, UK.
Oakley, B. W. (1990). Intelligent knowledge-based systems—AI in the U.K. In R. Kurzweil (Ed.), The age of intelligent machines (pp.
346–349). Cambridge, MA: MIT Press.
Olson, G.M. & Olson, J.S. (2012) Collaboration technologies. In J. Jacko (Ed.), Human-Computer Interaction Handbook (3rd edi-
tion) (pp. 549-564). Boca Raton, FL: CRC Press.
Palen, L. & Grudin, J. (2002). Discretionary adoption of group support software. In B.E. Munkvold, Implementing collaboration tech-
nology in industry, (pp. 159-190). London: Springer-Verlag.
Perlman, G., Green, G.K. & Wogalter, M.S. (1995). Human factors perspectives on human-computer interaction. Santa Monica:
Human Factors and Ergonomics Society.
Pew, R. (2003). Evolution of HCI: From MEMEX to Bluetooth and beyond. In J. A. Jacko & A. Sears (Eds.), The Human-Computer
Interaction handbook (pp. 1–17). Mahwah, NJ: Lawrence Erlbaum.
Proc. Joint Conference on Easier and More Productive Use of Computer Systems, Part II: Human Interface and User Interface.
(1981). New York: ACM. http://doi.acm.org/10.1145/800276.810998
Pruitt, J. & Adlin, T. (2006). The persona lifecycle: Keeping people in mind throughout product design. Morgan Kaufmann.
Rasmussen, J. (1980). The human as a system component. In H.T. Smith & T.R.G. Green (Eds.), Human interaction with comput-
ers, pp. 67-96. London: Academic.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York:
North-Holland.
Rayward, W.B. (1983). Library and information sciences: Disciplinary differentiation, competition, and convergence. Edited section
of F. Machlup & U. Mansfield (Eds.), The study of information: Interdisciplinary messages (pp. 343-405). New York: Wiley.
Rayward. W.B. (1998). The history and historiography of Information Science: Some reflections. In Hahn, T.B. & Buckland, M.
(Eds.), Historical studies in Information Science, pp. 7-21. Medford, NJ: Information Today / ASIS.
Remus, W. (1984). An empirical evaluation of the impact of graphical and tabular presentations on decision-making. Management
Science, 30(5), 533–542.
Roscoe, S. N. (1997). The adolescence of engineering psychology. Santa Monica, CA: Human Factors and Ergonomics Society.
Sammet, J. (1992). Farewell to Grace Hopper—End of an era! Comm. ACM, 35(4), 128–131.
Shackel, B. (1959). Ergonomics for a computer. Design, 120, 36–39.
Shackel, B. (1962). Ergonomics in the design of a large digital computer console. Ergonomics, 5, 229–241.
Shackel, B. (1997). HCI: Whence and whither? Journal of ASIS, 48(11), 970–986.
Shannon, C. E. (1950). Programming a computer for playing chess. Philosophical magazine, 7(41), 256–275.
Shannon, C.E. & Weaver, W. (1949). The mathematical theory of communication. Urbana: Univ. of Illinois Press.
Sheil, B. A. (1981). The psychological study of programming. ACM Computing Surveys, 13(1), 101–120.
Shneiderman, B. (1980). Software psychology: Human factors in computer and information systems. Cambridge, MA: Winthrop.
Simon, H.A. (1960). The new science of management decision. New York: Harper. Reprinted in The Shape of Automation for Men
and Management, Harper & Row, 1965.
Smith, H. T. & Green, T. R. G. (Eds.). (1980). Human interaction with computers. Orlando, FL: Academic.
Smith, S. L. (1963). Man-computer information transfer. In J. H. Howard (Ed.), Electronic information display systems (pp. 284–299).
Washington, DC: Spartan Books.
Smith, S. L., Farquhar, B. B. & Thomas, D. W. (1965). Color coding in formatted displays. Journal of Applied Psychology, 49, 393–398.
Smith, S. L. & Goodwin, N. C. (1970). Computer-generated speech and man-computer interaction. Human Factors, 12, 215–223.
Smith, S. L. & Mosier, J. N. (1986). Guidelines for designing user interface software (ESD-TR-86-278). Bedford, MA: MITRE.
Suchman, L. (1987). Plans and situated action: The problem of human-machine communication. Cambridge University Press.
Suchman, L. (1987). Designing with the user: Review of 'Computers and democracy: A Scandinavian challenge.' ACM TOIS, 6(2), 173-183.
Sutherland, I. (1963). Sketchpad: A man-machine graphical communication system. Doctoral dissertation, MIT.
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf
Taylor, F. W. (1911). The principles of scientific management. New York: Harper.
Turing, A. (1949). Letter in London Times, June 11. See Highlights from the Computer Museum report Vol. 20, Summer/Fall 1987,
p. 12. http://ed-thelen.org/comp-hist/TCMR-V20.pdf
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Republished as “Can a machine think?” in J. R. Newman
(Ed.), The world of mathematics, Vol. 4 (pp. 2099–2123). New York: Simon & Schuster.
Vessey, I. & Galletta, D. (1991). Cognitive fit: An empirical test of information acquisition. Information Systems Research, 2(1), 63–84.
Waldrop, M. M. (2001). The dream machine: J.C.R. Licklider and the revolution that made computing personal. New York: Viking.
Weinberg, G. (1971). The psychology of computer programming. New York: Van Nostrand Reinhold.
Wells, H.G. (1905). A modern utopia. London: Jonathan Cape. http://www.gutenberg.org/etext/6424
Wells, H.G. (1938). World brain. London: Methuen.
Wersig, G. (1992). Information science and theory: A weaver bird’s perspective. In P. Vakkari & B. Cronin (Eds.), Conceptions of
library and information science: Historical, empirical, and theoretical perspectives (pp. 201-217). London: Taylor Graham.
White, P.T. (1970). Behold the computer revolution. National Geographic, 38(5), 593-633.
http://blog.modernmechanix.com/2008/12/22/behold-the-computer-revolution/
Yates, J. (1989). Control through communication: The rise of system in American management. Baltimore: Johns Hopkins.
Zhang, P. (2004). AIS SIGHCI three-year report. SIGHCI newsletter, 3(1), 2–6.
Zhang, P., Nah, F. F.-H. & Preece, J. (2004). HCI studies in management information systems. Behaviour & Information Technolo-
gy, 23(3), 147–151.
Zhang, P., Li, N., Scialdone, M.J. & Carey, J. (2009). The intellectual advancement of Human-Computer Interaction Research: A critical assessment of the MIS Literature. AIS Trans. Human-Computer Interaction, 1(3), 55-107.
