
Osiris, Volume 32: Data Histories

 
Language: English
Release date: Jul 10, 2019
ISBN: 9780226462356


    Contents

    Acknowledgments

    Introduction

    Elena Aronova, Christine von Oertzen, and David Sepkoski: Introduction: Historicizing Big Data

    Personal Data

    Rebecca Lemov: Anthropology’s Most Documented Man, Ca. 1947: A Prefiguration of Big Data from the Big Social Science Era

    Joanna Radin: Digital Natives: How Medical and Indigenous Histories Matter for Big Data

    Markus Friedrich: Genealogy as Archive-Driven Research Enterprise in Early Modern Europe

    Dan Bouk: The History and Political Economy of Personal Data over the Last Two Centuries in Three Acts

    Epistemologies and Technologies of Data

    Staffan Müller-Wille: Names and Numbers: Data in Classical Natural History, 1758–1859

    Christine von Oertzen: Machineries of Data Power: Manual versus Mechanical Census Compilation in Nineteenth-Century Europe

    Hallam Stevens: A Feeling for the Algorithm: Working Knowledge and Big Data in Biology

    David Sepkoski: The Database before the Computer?

    Judith Kaplan: From Lexicostatistics to Lexomics: Basic Vocabulary and the Study of Language Prehistory

    Markus Krajewski: Tell Data from Meta: Tracing the Origins of Big Data, Bibliometrics, and the OPAC

    Economies of Data

    W. Patrick McCray: The Biggest Data of All: Making and Sharing a Digital Universe

    Mirjam Brusius: The Field in the Museum: Puzzling Out Babylon in Berlin

    Etienne S. Benson: A Centrifuge of Calculation: Managing Data and Enthusiasm in Early Twentieth-Century Bird Banding

    Elena Aronova: Geophysical Datascapes of the Cold War: Politics and Practices of the World Data Centers in the 1950s and 1960s

    Epilogue

    Bruno J. Strasser and Paul N. Edwards: Big Data Is the Answer … But What Is the Question?

    Notes on Contributors

    Index

    OSIRIS 2017, 32 : iv–iv

    Acknowledgments

    This volume is the culmination of a working group at the Max Planck Institute for the History of Science titled Historicizing Big Data, which convened between 2013 and 2015. First and foremost, our gratitude goes to the participants in two workshops—the first in October 2013, and the second in October 2014—most of whom have contributed essays to this collection. Many of the group’s external members spent several months at the Max Planck Institute as visiting scholars while working on their contributions and, in addition to attending the formal workshops, participated in numerous reading groups and impromptu discussions on themes relating to the history of data. The institutional and intellectual support of Department II at the Max Planck Institute—and particularly its director, Lorraine Daston—allowed this publication to evolve through extended interaction among the group members, and we thank all the contributors to this volume for making this project a truly collaborative experience.

    We have benefited from conversations about data in a variety of conferences and institutional settings, with interlocutors who are too numerous to list here. However, our particular gratitude goes to Soraya de Chadarevian, Lorraine Daston, Cathy Gere, Matt Jones, Sabina Leonelli, Ted Porter, Dan Rosenberg, and the late John Pickstone for their generous and stimulating interventions and comments. We would also like to express our deep thanks to Andrea Rusnock and the Osiris Editorial Board for their support for and engagement with this project. The entire volume benefited enormously from the detailed and constructive comments of two anonymous reviewers, who were exemplary referees.

    © 2017 by The History of Science Society. All rights reserved.

    OSIRIS 2017, 32 : 1–17

    Introduction

    Historicizing Big Data

    Elena Aronova, Christine von Oertzen, and David Sepkoski

    Abstract

    The history of data brings together topics and themes from a variety of perspectives in history of science: histories of the material culture of information and of computing, the history of politics on individual and global scales, gender and women’s history, as well as the histories of many individual disciplines, to name just a few of the areas covered by essays in this volume. But the history of data is more than just the sum of its parts. It provides an emerging new rubric for considering the impact of changes in cultures of information in the sciences in the longue durée, and an opportunity for historians to rethink important questions that cross many of our traditional disciplinary categories.

    We live in a world of data. Or, more precisely, we live in a world where we have become used to understanding our lives, the economic fortunes of our societies, the information we surround ourselves with, and the objects and phenomena of science, as bits stored electronically on digital computers. Our era is not just an era of data, we are told, but of Big Data, a phenomenon that was first named in the computer industry of the 1990s and now serves as a label for everything that is bad or good (depending on one’s perspective) about twenty-first-century technological society.¹ From Google Analytics to NSA surveillance, from biodiversity databases and genomics archives to the interactions of subatomic particles, we now often assume that anything worth knowing can be counted, quantified, digitized, and reduced to binary electronic signals residing on huge servers or floating somewhere in the cloud. Big Data refers to the sheer scope of modern information technology—a world measured in terabytes and petabytes, or even yottabytes (a trillion terabytes)—as well as to the ubiquity of data in every aspect of modern existence.²

    Big Data also signals, to some observers, a profound change in the very nature of science: it has been heralded as a “fourth paradigm” or even “the end of science,” a model of investigation in which algorithms do much of the work of interpreting the world around us.³ Practitioners in business and industry praise Big Data as a “new oil” or “new asset class” affording unprecedented opportunities for production, marketing, and social engineering.⁴ Many of these enthusiastic accounts of the Big Data revolution bear more than a whiff of technological determinism. It sometimes seems as if Big Data is the inevitable product of the introduction of electronic computers, or that the developments in infrastructure—such as the giant server farms built to sustain and optimize the data flow—from the late twentieth century onward have led to a qualitative transformation in the nature of scientific practice and epistemology.

    Sometimes called the digital age, our era is perhaps more than anything defined by a reliance on a particular technology: the electronic digital computer. Computers are ubiquitous—they are in our pockets, on our wrists, in our cars, in our kitchens and bathrooms, at the supermarket, and on the subway. Computers large and small form nodes in a vast network of information services we access on an almost continual basis, from checking the weather forecast to predicting the performance of the global stock market. Smaller devices, faster processors, bigger volumes of data: we are so constantly bombarded by the forward-looking rhetoric celebrating the multitude of gadgets surrounding us that we can forget that these technologies have histories, and that those histories stretch back well before the advent of electronic computing. Data, a term that originally referred to the givens in a geometric proof, does not necessarily reside in electronic computers. The same applies to digitization, the process by which continuous phenomena (whether the light emitted by a distant star or a Beethoven sonata) are translated into discrete (or numerical) format. Even computing has been performed in a variety of technological contexts, both pre- and postelectronic.

    The notion of Big Data calls to mind another alleged transformation in twentieth-century science that has been much discussed by historians of science and technology in the past several decades: the advent of Big Science.⁶ A term first coined in the 1960s by physicist Alvin Weinberg, Big Science represented a shift toward enormous scientific undertakings that were incredibly costly, involved hundreds or even thousands of investigators, adopted a corporate-style management structure, and tended to monopolize support and attention from public and private sources. There are parallels—and indeed direct overlaps—between Big Science and Big Data. Many projects that involve Big Data—the Human Genome Project, CERN, the Very Large Telescope array—unquestionably fit the definition of Big Science. Big Data has also become a buzzword whose deployment can loosen purse strings from funding agencies, attract journalists, and confer importance and prestige on scientific projects. But at a time when the smartphones most of us carry in our pockets are as powerful as 1990s-era supercomputers, it makes sense to question whether Big Data is big in the same way that Big Science is.

    Historian Jon Agar has suggested distinguishing between Big Science as a mode of organization of scientific research that had pride of place in the nineteenth century, and the labeling of ‘Big Science’ as something of concern … a product of the … long 1960s.⁷ Likewise, it might be useful to draw a distinction between Big Data as a temporal cultural phenomenon of the twenty-first century and Big Data as a data-driven mode of doing science with deeper roots in the past. As a label, Big Science has specific connotations—including a quantitative threshold for size, costliness, and scope—that distinguish it from the science practiced (in most cases) before the Second World War.⁸ In contrast, Big Data is something of a moving target. When the ENIAC (Electronic Numerical Integrator and Computer) was introduced in 1946, it boasted a processing capability some three orders of magnitude more powerful than its electromechanical predecessors. For the scientists and engineers engaged in nuclear weapons research, this increased processing capability enabled Big Data indeed. However, the 30-ton ENIAC was less powerful than a 1970s-era programmable calculator (in fact, a 1995 project at the University of Pennsylvania managed to re-create the original ENIAC on a 7.44- by 5.29-mm chip), and current smartphones and tablets have up to 10⁹ times as much computing power.⁹ Although computing technology has tended toward exponential growth in recent decades (Moore’s law states that the number of transistors on a microchip will tend to double every two years), it is worth remembering that even exponential growth is a continuous function. As a phenomenon, then, Big Data seems to lack the decisive qualitative distinction present in the transition to Big Science. Unlike Big Science, which was typified, as a historical phenomenon, in the Manhattan Project and the National Laboratories, what constitutes Big Data is contested by practitioners and observers alike.¹⁰

    What, then, is the history of Big Data? Is it a history of ever more powerful technologies, quantitative increase of data, and qualitative changes associated with these processes, as the proponents of Big Data suggest? It is undeniable that computing technologies have altered the practice of science in some profound ways: examples such as the Human Genome Project, the Large Hadron Collider, and the Global Biodiversity Information Facility, among many others, testify to the intellectual, political, and economic significance of Big Data. At the same time, however, practitioners and scholars have begun to critically reflect on the more breathless and hyperbolic proposals regarding the revolutionary and transformative character of today’s data-driven science. Considering the legal, ethical, and political implications of today’s information technologies and data practices for our societies, they ask how the growth of data shapes the emergence of new elites and the accumulation of wealth and capital, and how it produces second-order harms such as profiling, government surveillance, and discrimination. If data is a new asset class, as some Big Data enthusiasts claim, what is the source of the new value?¹¹ The sheer volume and diversity of the expanding Big Data literature attest to the fact that data itself has become an object of study, manifesting an important and growing conceptual space in the natural and social sciences alike.

    All these questions have important historical dimensions. Historians, however, have been a rarity in the growing literature on Big Data. Humanities scholars are usually skeptical about claims framed in explicitly presentist terms. Historians of science have been especially reluctant to take part in such discussions: for a relatively young field that established its identity as part of history rather than the sciences, the projection of categories of the present into the distant past involved not only methodological concerns about producing bad history (what historian Herbert Butterfield famously dubbed Whig historiography) but also political caution regarding disciplinary identity.¹² However, historians of science have an important stake in the conversation on and around data, big or small.

    History of Data and the History of Science

    Data has long been a key category in the history and philosophy of science. A fundamental epistemological category, data has been, and is, a central notion of empiricist epistemology. At the same time, data figured prominently in struggles to free the history of science as a field from a particular vision of science as a steady accumulation of gathered empirical data—the givens in the original Latin sense of the word.¹³ This vision of science narrated as a story of upward, linear, and univocal progress was part and parcel of the Enlightenment project, which integrated a particular model of positivist epistemology with the notion of progress. No fewer than three generations of historians of science have been introduced to the profession through Thomas Kuhn’s Structure of Scientific Revolutions (1962), which shattered this received view of a cumulative progression of knowledge. Few, however, would recall now what philosopher Gerald Doppelt and others described as the central implication of Kuhn’s Structure: data loss caused by change of paradigms.¹⁴ According to John Zammito, the demonstration of even the short-term failures of cumulation in data was the main problem for philosophers who, for this reason, rejected Kuhn’s proposal as a lapse into irrationalism, while at the same time turning their attention away from traditional philosophical appraisals of scientific theories and focusing instead on actual scientific processes.¹⁵

    Historians and other students of science in the wake of Kuhn have confronted similar scenarios. When post-Kuhnian students of science reinvented the history of science as the history of scientific practices rather than scientific ideas, the observation of actual practices went hand in hand with discussions of how experience and observations have been shaped and transformed into scientific data, and what the stakes of these processes were. Laboratory studies pioneered by Bruno Latour, Karin Knorr-Cetina, Michael Lynch, and Sharon Traweek, among others, have demonstrated not only that data are gradually construed through routine negotiation and tinkering until stabilized, but also that data-mining techniques, quantitative algorithms, and qualitative visual selections are all part of making data meaningful. Latour has argued that scientific abstractions and propositions result from ever more rapid merging and condensing of one set of inscriptions about empirical data into another, simpler one, made possible by ever more sophisticated techniques of quantification, condensing, and visualization. It is this merging and condensing of complex piles of inscriptions into compact, movable, and manipulable “immutable mobiles” that make scientific data from an array of empirical observations.¹⁶

    Other scholars saw these types of transformations as transitions from traces, signs, and indices to data themselves.¹⁷ Hans-Jörg Rheinberger, for instance, pointed out that "traces only start to make sense and acquire meaning as data, if one could relate and condense them into an epistemic thing, a suspected Sachverhalt, a possible ‘fact’ in the everyday language of the sciences … that in turn could be used to create data patterns: curves, maps, schemes, and the like—potentially the whole plethora of forms of visualization the sciences have been so inventive to generate."¹⁸ From this perspective, a meaningful visualization of the data is what constitutes knowledge. As Latour argued, by reducing the complexity and uncertainty of observations, visualizations—diagrams, technical drawings, lists, and graphs—can have an impact beyond the realm of language-based arguments. Consequently, for Latour the history of science is the history of innovations in visualizations: “every possible innovation that offers any of these advantages will be selected by eager scientists and engineers: new photographs, new dyes to color more cell cultures, new reactive paper, a more sensitive physiograph, a new indexing system for librarians, a new notation for algebraic function, a new heating system to keep specimens longer. History of science is the history of these innovations.”¹⁹

    Kuhn’s Structure did not have much to say about technology—indeed, Kuhn explicitly bracketed the entire question of the role of technology, as well as techniques of making and moving data, in understanding science and its change. Post-Kuhnian historians of science who turned to study the material cultures of science have found that strategies and technologies developed to deal with information and data played a vital role in making knowledge itself, demonstrating that the interconnection between data manipulation and ordering knowledge can be seen throughout the history of the natural sciences.²⁰ New technologies allowed not only new kinds of data analysis but also ever larger data production. Over the past decades, historians of science have explored how previous societies coped with their own problems of information overload, whether that meant a superabundance of manuscripts and printed works in the medieval and early modern periods, the inexhaustible supply of observations of natural history during the age of European expansion, or the bureaucratic accumulation of an avalanche of numbers in the nineteenth century.²¹ Against this backdrop, today’s Big Data can be seen as a chapter in a longer history (or, rather, histories) of observation, quantification, statistical methods, models, and computing technologies.²²

    Practices of observing, collecting, and sorting data are by nature collective endeavors, often involving long-distance cross-cultural interactions, sponsored by state governments, industries, or powerful stakeholders with their own interests. A long-distance collective empiricism enabled through distributed collection, classification, transportation, and negotiation of specimens, things, and images necessarily involves the arrangements of power and authority. New technologies that allowed new kinds of data analysis were thus coproduced with cultural and political orders. As Steven Shapin and Simon Schaffer put it succinctly in their classic Leviathan and the Air-Pump, “The problem of generating … knowledge is a problem in politics, and, conversely, … the problem of political order always involves solutions to the problem of knowledge.”²³ Making data, much like the conventions about how knowledge is made, is inherently political.²⁴

    The examination of the epistemologies, technologies, and politics behind the practices developed for disciplining, recording, and systematizing observational experiences, turning them into scientific evidence, has thus been embedded in post-Kuhnian history of science. Building on this rich historiography, this volume represents a new step toward a more comprehensive history of data in the natural, social, and human sciences. By focusing on data, this volume sharpens and develops further the insights that were often merely secondary themes, or were the explanans rather than the explanandum, in the historiography focused on questions of knowledge production and contextualization. A more explicit focus on data and questions motivated by current issues around Big Data helps us to revisit earlier historical studies in the light of these present concerns, and to connect historiographies that have so far had little to say to each other. We hope to set out—in this introduction, in our individual contributions, and in the reflective essay at the end of the volume—some of the key questions that emerge when we historicize modern data culture, and to point to some preliminary conclusions about how and why data has come to matter so much for science and society at large.

    Themes and Questions

    In historicizing and problematizing our current fascination with Big Data over the longue durée, we are as committed to examining continuities in the practice and moral economy of data as we are to identifying points of rupture. We take as a starting point, for example, that a history of data depends on an understanding of the material culture of data—the tools and technologies used to collect, store, and analyze information—that makes data science possible. However, we do not share with many of the recent popular appraisals of Big Data the implicit view that this has been a progressive or teleological development. Rather, we see data as immanent to the practices and technologies that support it: not only are the epistemologies of data embodied in the tools that scientists use, but in a concrete sense data themselves cannot exist apart from that material culture. That said, the precise relationship between technologies, practices, and epistemologies of data is complex.

    Big Data is often, for example, associated with the era of digital electronic databases, but this association potentially overlooks important continuities with data practices stretching back to much earlier material cultures. While technologies have changed—from paper-based to mechanical to electronic devices—database practices have, as several contributions to the volume reveal, been more continuous than the technologies and tools scientists used to organize, analyze, and represent their data. What is apparent in the case of recent and dramatic technological innovation, though, is that electronic computers have accelerated and amplified features of data-driven science already present or latent in earlier data cultures. These features include the automation of data collection, storage, and analysis; the mechanization of hypothesis testing; an increasing division of labor between data collectors and consumers; and the technological black-boxing of many data practices. This volume aims to present a nuanced genealogy of these features of modern data-driven science that recognizes underlying continuities (automation and division of labor are not unique to the computer era, for example), as well as genuine discontinuities (such as the black-boxing of statistical analysis in many current scientific disciplines).

    The very notion of size—of bigness—in the history of data is also contingent on factors that need to be historicized. Like the analogous phenomenon of Big Science, the term Big Data invokes the consequences of increasing economies of scale on many different levels. The term ostensibly refers to the enormous amount of information collected, stored, and processed in fields as varied as genomics, archaeology, climate science, astronomy, and geology. But it is also big in terms of the major investments of time, energy, personnel, and capital it requires to function and in the way it has reoriented scientific priorities: Big Data has contributed to a hierarchical organization of science in which large, collaborative projects involving impressive-sounding databases often have pride of place in funding decisions and media attention. In other words, not only have new technologies emerged that allow new kinds of analysis and production of data, but a new cultural and political landscape has shaped and defined the meaning, significance, and politics of data-driven science.

    The making of Big Data has implied the creation of new communities—of scientists, technicians, and the public—committed to the project of collecting, indexing, organizing, and analyzing large amounts of data.²⁵ But while it has taken on new forms and contexts, the project of translating the world into data, as the contributions to this volume show, has been under way for centuries. More recent iterations of data culture may embody distinct political and technological idioms, but they are also built upon conventions, structures, epistemologies, and political relations of the past. We therefore gain insight from identifying continuities that run through distinct eras of data, as well as from comparing the historically contingent differences and relations that emerge from the constellation of case studies we have collected in this volume.

    Four basic historical questions serve as our starting point. First, how have scientists understood what data are, and what are the relationships between data and the physical objects or phenomena that the data represent in this and earlier periods? Second, what technologies, techniques, or practices are emblematic of data-driven science, and how have they developed or changed over time? Third, what has been at stake—politically, epistemically, economically, and even morally—in the increasing orientation of the sciences toward data-driven approaches? And finally, to what extent (if at all) does the era of Big Data represent a genuinely new paradigm in modern science?

    Structure of the Volume

    This volume is organized around three major themes. The contributions in the opening section, Personal Data, bear witness to a persisting, though often overlooked, feature of data history. While most scientific data may not be about individual people, the four essays in this section highlight that individuality, intimacy, and personal ownership not only figure prominently in current debates around social media but also formed a crucial part of earlier data practices in science, commerce, and bureaucracies. Rebecca Lemov reconsiders the historical context of today’s obsession with personal data, as she sets out to examine the sharing and amassing of uniquely intensive and private personal information in the history of modern anthropology. Hers is the case of a Hopi Indian who became the most-documented single individual in post–World War II social science, long before such practices were common. Lemov uses the lived biography and the pioneering data set compiled from it to scrutinize the differences and distinctness of Big Data in relation to the personal, psychological realm.

    Joanna Radin reinserts the historical and political context of personal data into the allegedly neutral testing tools of today’s Big Data machine learning. She reconstructs the circumstances that have led to the formation of a comprehensive long-term data set on rates of diabetes and obesity in the Native American Akimel O’odham (known in science as Pima) tribe living at the Gila River Indian Community Reservation. Originally collected in agreement with the data’s donors in a strictly medical context, the database produced from this information has since been mobilized far beyond the reservation and the donors’ reach to serve as an openly accessible tool for any kind of algorithmic manipulation. Taking the itinerary of the Pima data set as a case study, Radin argues for an approach to data ethics grounded in history and lived experience that helps us understand how data come into being.

    Markus Friedrich examines Europe’s early modern genealogical craze, a time when ever more comprehensive reconstructions of family lineage were required and requested for the validation of claims to gentility, proofs often necessary to hold an office. Friedrich unveils how genealogical information was generated, circulated, vetted, and compiled: families were stripped down to individuals, who in turn were reduced to a few bare dates and names. Producing these basic data and establishing relationships among them required complex archival research, turning genealogy into a social and epistemological battleground where standards of evidence and the reliability of data were constantly challenged.

    Dan Bouk completes this section by highlighting changes in the political economy of personal data over the last two centuries. Crucial to the operation of modern states and companies and deeply embedded in culture and practice, personal data, as Bouk claims, have come to exist in two types: as individual data sets (“data doubles”) chained to individuals or as data aggregates generated from such individual inscriptions. Examining the relationship between these two kinds of personal data, Bouk reveals staggering shifts of power since the nineteenth century that gained enormous momentum during the 1970s. In Bouk’s view, traditional aggregates of personal data have lost value, whereas data doubles of individuals rule our current era, often beyond the control of those they represent.

    The contributions in the second section, Epistemologies and Technologies of Data, focus on the methods, tools, and practices to collect and process data that have emerged since the late eighteenth century. In particular, these essays analyze the contexts and driving forces behind crucial changes in the history of data and ask what kinds of new insights are offered by the identification of such shifts. While the first three contributions zoom in on particular turning points in botany, population statistics, and bioinformatics, the second three span a wider temporal arc to showcase long-term continuities and ruptures in fields such as paleontology, linguistics, and librarianship. Staffan Müller-Wille reveals how the study of nature became a data-driven activity during the late eighteenth and early nineteenth centuries. Taking up the epochal historical turn in botany during that time, Müller-Wille offers a new explanation for why naturalists developed powerful temporal visions of the earth’s flora and fauna. His close reading of the many posthumous updates, translations, and adaptations of Carl Linnaeus’s taxonomic works reveals the practices of Linnaean nomenclature and classification as information science, where names and taxa were reduced to labels and containers. As these infrastructures—brought into play to manage and enhance the circulation of data—themselves became a research subject and an object of experimentation and manipulation, they generated surprising and never-before-seen phenomena unsettling long-held intuitions of nature as stable or uniform.

    Christine von Oertzen unearths a similarly unremarked transition in mid-nineteenth-century European population statistics. She considers the concepts, tools, and logistics of manual census compilation in Prussia from 1860 to 1914, showing that the use of the term data coincided with a fundamental reform in census taking. At the core of this transformation lay a new movable paper tool encompassing all relevant data of one person on a single page, which enabled statisticians to sort and combine data in new ways. This method yielded an unprecedented refinement of census statistics, produced by a carefully nurtured workforce, including many women toiling from home. Von Oertzen concludes that the concepts, techniques, and manual sorting methods introduced since the mid-nineteenth century revolutionized European census taking and statistical complexity long before punch cards and Hollerith machines came to the fore, a fact that also explains why the Prussian census bureau chose to abstain from mechanization for many years.

    While Müller-Wille and von Oertzen explore the emergence of new paper-based infrastructures and novel, data-driven sensibilities and practices at the outset of what we might call the information age, Hallam Stevens fast-forwards us to very recent shifts he sees taking shape in skills associated with Big Data work in the biomedical sciences. Stevens uncovers complex interactions between human users and computers that rely on new ways of thinking and working. He demonstrates that computers and computational practices have generated specific approaches to solving problems: producing knowledge under the label of Big Data requires thing knowledge and a feeling for algorithms, analogous to the intuition, tacit knowledge, and close attention required in working with organisms.

    In contrast to Stevens’s emphasis on the distinct ruptures brought about by computers in current biomedical research, David Sepkoski reflects on the long-term continuities between recent technologies and practices of data and earlier archival or collecting practices. By reconstructing and contrasting the ways in which nineteenth- and late twentieth-century paleontologists collected, sorted, compiled, stored, and visualized their data on paper, Sepkoski traces a long and continuous genealogy of data-driven science. He shows how practices that originated in the nineteenth century migrated from collection-based to paper-based, then to mechanical, and finally to computer-based information technologies, and concludes that the history of databases in the natural sciences stretches back well before the advent of computers. This earlier context, he reasons, has contingently shaped the concepts and structures deployed in electronic tools.

    Judith Kaplan unveils similar—yet unacknowledged—diachronic conjunctions in historical linguistics. She scrutinizes current claims that statistical phylogenetic methods as well as Big Data computing technologies and practices bring about a radical, revolutionizing data-driven change to this field. By tracing the methods of historical linguistics from the nineteenth century onward, Kaplan shows that the tools and the actual data of current large-scale research on historical linguistic evolution are by no means new. Wordlists of 100–200 terms defined as basic vocabulary go back to nineteenth-century comparative philology. Current large-scale computational efforts to unravel humanity’s linguistic deep past continue to rely on such long-vetted lists, a fact that leads Kaplan to characterize such methods as data drag rather than recent innovations enabled by Big Data.

    In the closing contribution of this section, Markus Krajewski argues for a still longer trajectory of data history with reference to librarianship. Krajewski claims that ancient bibliometrics knowingly distinguished between data and metadata to control and navigate vast amounts of information: headwords and indexing made any book’s content accessible, as did classification and ordering on shelves. Tracing continuities of information processing from eighteenth-century classical card catalogs to Online Public Access Catalogs (OPACs) in libraries, Krajewski identifies smart catalog software as an immanent and radically new feature of today’s Big Data, capable of merging data (content) and metadata (structure) into one, infinitely browsable absolute book.

    The contributions in the third section, Economies of Data, examine how changing data practices in specific disciplinary contexts, particular institutional cultures, and political frameworks impacted epistemologies as well as scientific and social values (and vice versa). Patrick McCray explores how changes in computer-assisted astronomy from the 1970s onward altered communal elements of scientific practice, such as collective empiricism, open access, and notions of intellectual property. Paying special attention to crucial moments of data and social friction, McCray analyzes practices and activities beyond the telescope itself to highlight how the moral economy in astronomy changed along with the knowledge infrastructure, creating new frameworks of data circulation and sharing.

    Mirjam Brusius ventures into nineteenth-century archaeology to discuss the contested history of the excavation of the Ishtar Gate in Babylon and its subsequent reconstruction in Berlin’s Pergamon Museum. Following the excavated rubble on its complex journey from Mesopotamia to Berlin, Brusius illustrates how difficult it can be to pin down the relationship between objects and data, especially if we take into account how epistemic values, political objectives, and power relations change from one context to another. She thus unravels the malleability of data: what the rubble found on-site represented to the excavators on the one hand and to museum curators on the other differed so dramatically that the reconstruction in Berlin barely resembled what the experts in the field considered authentic or truthful.

    Etienne Benson examines the power relations between center and periphery with regard to early and mid-twentieth-century ornithology. He considers the efforts of amateur bird banders and professional ornithologists in North America in order to study the ways in which data-management practices walked the fine line between disciplining collecting efforts and encouraging enthusiasms. With reference to today’s buzz around citizen science projects, Benson uncovers a backhanded, centripetal dynamic of power: whereas early efforts to engage amateurs in large-scale collecting programs also encouraged individual and regional forms of scientific sociality, current data-centric visions of science privilege quantity and pattern matching over quality and interpretation—visions that narrow the role of amateurs to mere sensors for science and contribute to eroding participatory notions of citizenship.

    Elena Aronova concludes this section by tracing economies of data sharing at the intersection of mid-twentieth-century Big Science and world politics. She explores how the charged political climate of the Cold War affected the ways in which data were produced, collected, analyzed, and used in geophysics. Focusing on the International Geophysical Year and the World Data Centers in the USSR and the United States, Aronova analyzes a regime of exchange and secrecy in which data became a form of currency, enabling an unprecedented global circulation of data on various aspects of the physical environment and shaping distinct technological approaches that differed on the two sides of the Iron Curtain.

    The volume closes with an epilogue by Paul Edwards and Bruno Strasser, who formulate eight provocations about what can be learned from the contributions in this volume regarding the changing meanings, characteristics, and practices of data to understand the current moment(um) of data-driven science. As they argue, the data histories presented here not only help us to see the multitude of practices and modes of knowledge production that are embedded in today’s Big Data but also suggest new questions in need of attention from historians.

    Conclusions

    It would be difficult—if not impossible—to summarize the conclusions of such a diverse group of contributions as are represented in this volume, on a topic as voluminous as data. However, we feel that several general lessons can be drawn from the essays presented here. We firmly believe that using data as a category for historical analysis not only opens up new conceptual space for considering a diverse array of practices, technologies, and cultures in longue durée context but also promotes new disciplinary conversations among scholars (e.g., historians of computing and technology, historians of natural and social sciences, sociologists, anthropologists, and philosophers) who have not always been in direct communication. We offer, then, our conclusions—as influenced by working with the essays in this volume—about three broad questions: how data should be defined, how its history can be periodized, and how the current culture of data in the sciences can (or should) relate to its past. Our conclusions are not meant to be definitive or prescriptive but rather should be seen as reflections or encouragements for further historical study of the role of data in the natural and human sciences over the past several centuries.

    What Is Data?

    What does this volume as a whole tell us about what data was and is? It might be worth noting that none of the authors offers a definition of the term. Nor do they always use the same vocabulary to identify the many different meanings of data, its attributes, and its behavior. Analogies and metaphors abound: Radin, Stevens, and Sepkoski appeal to Lisa Gitelman’s evocative phrase of cooking data to analyze how physicians, bioinformatics specialists, and paleontologists aggregated their collected findings.²⁶ McCray, Bouk, and Aronova problematize the notion of data flow, unpacking the challenges of moving information from one context to another. Müller-Wille and Krajewski similarly unpack and scrutinize the fluid metaphors data flood and data deluge, pointing to the tension between scarcity and abundance of evidence inherent in most scholarly work. Benson contextualizes a case of information overload when he describes how data management structures proved inefficient to process the evidence provided by amateur bird banders, while von Oertzen uses the notion data jam to describe what happened when millions of records accumulated in the American census office in 1880. Lemov uses data exhaust to denote the current perception that the mundane details of our daily lives are worthy of constant monitoring and recording. Invoking an alternate meaning of the term, Radin refers to digital exhaust to characterize the silent accumulation of the digital traces produced by personal devices such as smartphones and computers, which manifest as a virtual commons and as a source that can be used to trace how people think and behave.

    Moreover, some authors use identical notions to describe contrary procedures. To give just one example: whereas Friedrich refers to basic data as a hard-won product distilled by early modern genealogists from rich and messy family lives, Aronova employs this expression to characterize the assumption behind the archives of Cold War Big Science. In Aronova’s case, basic data was seen as material waiting to be used and processed toward more refined data products. Rather than imposing a common vocabulary and definitions, we as editors regard such divergences as highly productive. They underscore that the data histories presented in this volume refocus recent approaches in the history of science devoted to practices, technologies, and materiality on data itself—the most fundamental and apparently simplest entity of empirical research. And thus, instead of defining what data was or is, our authors analyze what data did or do, and what was or is done with them.

    This discrepancy speaks to the fact that the authors in this volume emphatically refuse to see data as acting of their own accord, or as givens, as the term’s Latin etymology suggests. Rather, the essays highlight the complexity and diversity of processes involved in making data, spanning the natural and the human sciences to render visible the myriad choices that went into creating data in different ways and diverse contexts: whether they were extracted from individual blood samples donated by members of the Native American Akimel O’odham tribe (Radin); derived from celestial photographs for astronomical calculations (McCray); culled from a Hopi Indian’s descriptions of his dreams and visions to form a social science archive (Lemov); recovered from archaeological rubble in drawings for reconstruction purposes far from the excavation site (Brusius); retrieved from aluminum bands in the search for patterns of bird migration (Benson); or selected from vast vocabularies and compressed into basic word lists for linguistic comparison (Kaplan). Data could also be chained to individuals (Bouk) or detached from their original context (Radin). They were mustered to reconstruct narratives about the past—of human genealogy (Friedrich), languages (Kaplan), or life itself (Sepkoski)—and they were gathered and processed automatically without human intervention (Stevens).

    What is (or are) data? On this question, we as editors are adamantly pluralistic.²⁷ Philosophers may find profit in debating the ontology of data, but as historians, we feel that it is more fruitful to adopt the principle that data is what its makers and users have considered it to be.²⁸ This point is underlined by the fact that the term data was, for nearly all of the cases considered in this collection, an actor’s category—a term that changed its meanings across time and contexts in important ways. For example, Müller-Wille draws on Alexander von Humboldt’s use of the term in 1790 to illuminate the beginnings of a new, innovative reflexivity toward cumulative scientific methods, often dismissed as a naïve, Baconian approach. Sepkoski and von Oertzen spot similar conjunctions in nineteenth-century paleontology and population statistics, respectively, in the sense that seeing collections of fossils or gathered census information as data went hand in hand with developing methods, tools, and practices to classify, count, and correlate data in new ways. One important feature of this novel practice was to turn data into numerical values, a development that fed into the avalanche of printed numbers and nineteenth-century statistical thinking, phenomena that have been so aptly described by Porter, Hacking, and others.²⁹

    As Paul Edwards has helpfully clarified, data have mass and are subject to friction.³⁰ The essays in this volume illustrate in multiple ways that data did not exist in some insubstantial Platonic realm but rather occupied physical space—in specimen drawers, in compendia and catalogs, in reams of census records or insurance forms, on microfiche cards, or on magnetic storage media. The contributions show that collecting, processing, recombining, and transferring data required energy and labor, which carried tangible costs, whether in direct economic terms or otherwise. Data could also serve as a kind of currency or medium of exchange, and a fairly elaborate economy of data transfer and sharing has emerged over the past several hundred years in many scientific disciplines. However, the very physicality of data made them volatile; some essays point to the challenges of data storage and preservation, shedding light on dead-end technologies, archives lost (Lemov), and roads abandoned (Aronova). Such narratives show that histories of data are open-ended and help us to see what was at stake at crossroads of change.

    Data themselves are usually mute and require association, processing, reconfiguration, and representation to be made to speak. Expanding earlier insights of science studies scholars, a number of the essays in this volume show that the history of data is intimately intertwined with histories of visualization. In a range of disciplines, data were conceptualized in relational and spatial terms, a perception that led to new insights through visualization, whether in the form of statistical tables, genealogical pedigrees, paleontological spindle graphs, language trees, or computer-animated visuals for genomic comparisons. Several authors show that just as data themselves occupied tangible physical space, data representations occupied visual space—for example, the measured graphical space of a simple line graph, or the more complex and even multidimensional virtual space of computer models in genetics, physics, and other fields. Aiming at making hidden patterns visible, these products of data-driven analysis were by no means hypothesis free, as various examples in this volume document. Rather, they resulted from long and arduous processes, in which the data in question had to be made commensurable before they were recontextualized, recombined, and subjected to various sorting and classifying procedures. The resulting tables, graphs, maps, and images served both as an end in themselves and as new tools for the production of further knowledge.

    What we see as a real strength of the collective contributions to the volume is that they illuminate data histories from multiple perspectives. Our contributors show us how data were collected, where they have been stored, who has handled them, how they have been processed and recombined, the practical economy of their exchange, and the moral economy of their use in scientific, political, and cultural contexts. Though not comprehensive or encyclopedic, this volume illuminates the entire life cycle of data through its historical case studies. We consider this to be the best way of illustrating the diverse forms and roles that data can occupy.

    Continuities and Ruptures in the History of Data

    Our second question relates to the periodization of data histories. Is the history of data essentially continuous, proceeding by incremental steps through succeeding technological, scientific, social, and epistemological contexts, or is it instead marked by sudden rupture and discontinuity? To put it in the current language of Silicon Valley, has it been dependent on periodic technological or social disruptions? This has been one of the crucial issues in the workshops that preceded this volume, and it is a central concern in a number of the essays. It is fair to say that no collective consensus emerged on how to periodize the history of data, but there is general agreement among our contributors on a few key points.

    In the first place, we see no reason to treat the choice of continuity versus rupture as a mutually exclusive one. Depending on a variety of factors—the discipline being examined, the span of time considered, the social and political contexts—the history of data can appear either broadly continuous or punctuated by distinctive breaks. Some disciplines with long traditions—such as astronomy, geology, or linguistics—seem less prone to epistemic or practical disruption or reinvention by new technologies or material cultures, while others with shorter traditions (e.g., biomedicine) have been more dramatically shaped by particular technologies or institutional contexts. The appearance of continuity or discontinuity also sometimes relates to the level of resolution at which the history is being examined: as scientists who study prehistoric extinctions have learned, an event that might appear as an instantaneous and anomalous spike when viewed from the perspective of the immensity of geologic time can resolve to a more drawn-out smear, lasting millions or tens of millions of years, from the perspective of ecological time. The same, we feel, is true of our own more modest historical investigations: in some of the longue durée case studies in this volume (e.g., Krajewski, Sepkoski, Kaplan), empirical, conceptual, and epistemological conventions established at an early point seem more stable against periodic disruptions by new technologies or practices when viewed from the perspective of centuries, but they might appear more volatile if decomposed to shorter units of time.

    It is impossible, however, to ignore the impact that electronic computers (both analog and digital) have had in the history of data. While we suspect that many historians’ view of the computer as the paradigm-defining technological innovation of late modernity is partly attributable to some quite reasonable and forgivable presentism, it is nonetheless the case that computers have transformed the storage, processing, and even interpretation of data in profound ways in many disciplines, as a number of the essays here show. What we urge, though, is to avoid making the introduction of computers a decisive Rubicon in a broader history of data—to avoid, in other words, thinking of data histories as being B.C. (before computers) or A.C. Computers take on enormous significance in disciplines that are roughly contemporaneous with their emergence: fields like molecular genetics or particle physics, as Hallam Stevens and Peter Galison have documented, had many of their essential concepts and practices (notions of causality, tools for analysis, and even models of phenomena) shaped by the computer technologies and applications of the 1940s and 1950s.³¹ But for disciplines with longer histories—bibliometrics, taxonomy, paleontology, linguistics, population statistics, astronomy—the computer is just one of a number of innovations in technology or material culture over the past few centuries.

    Furthermore—as essays by Sepkoski, Kaplan, Bouk, von Oertzen, and others in this volume show—in many disciplines, data practices involving computers were strongly conditioned and constrained by practices developed around earlier technologies, such as punched-card tabulators, printed tables, index cards, and even simple lists. We as editors view the history of the relationship between material culture and practice around data as an exemplary illustration of the phenomenon of historical contingency: there is no predetermined goal that historical development is destined to reach, but particular contingent decisions—the adoption of particular tools or techniques at one point in time—have often strongly constrained subsequent developments. We see strong evidence that particular collective decisions at one point in history—such as the adoption of numerical tables as the standard format for presenting quantitative data—had downstream consequences for later developments—such as the format and function of early computerized databases.³²

    We also broadly agree with Jon Agar’s argument that the initial adoption of computers in disciplines with established traditions has tended to be followed by a period when older techniques were applied to the new technology.³³ That is to say, the arrival of computers did not immediately and automatically change the way scientists collected, analyzed, or interpreted data, even if genuinely new tools (simulation models, machine learning, automated algorithmic analysis) have eventually come to the fore in some disciplines. What we see in many of our case studies is that computers promoted other kinds of important changes around what Bruno Strasser has called the moral economy of data: new divisions of labor between investigators and technicians (or data jockeys), new political tensions and relationships (e.g., around data sharing and access), and new economies of scale for data collection and analysis. Additionally, our contributions yield a tangle of further deep and momentous changes with regard to today’s Big Data, within and beyond science. In the pre-electronic era, huge amounts of data were collected on a global scale, with the explicit aim of creating data archives that could be endlessly mined, for future uses yet unknown, but data was also bound in space and time to physical archives and analog infrastructures (Aronova). In stark contrast, today’s Big Data radically transcends the circumstances and locality of its production (Radin). What is more, data and metadata can be swapped at will, potentially erasing the distinction between content and structure (Krajewski). Similarly, personal data chained to individuals empower consumers, but much more so states and corporations (Bouk), and many social situations have come to resemble laboratories in which each individual who enters becomes a de facto experimental subject (Lemov).

    Are We Presentists?

    Finally, we want to say a few words about the relationship between our current moment of datafication—the Big Data era—and our studies of data’s past. Is it acceptable to be inspired by the ubiquity of data in our own lives to investigate the history of data? Naomi Oreskes has made a compelling case for a kind of motivational presentism, or allowing ourselves to be guided in our historical interests by the concerns of the present, that we believe applies well to our collective project.³⁴ One can be motivated as a historian to explore the genealogy of technologies, institutions, or social and cultural phenomena that matter to us today without resorting to the triumphalism or teleology associated with Whiggishness. Indeed, one way of consciously avoiding Whiggism is to avoid privileging the emblematic data technology of the twenty-first century—the computer—in our study of data in the past. The methodology of historicizing is genealogical, and the essence of genealogy is contingency. We do not attempt to show that the past inevitably led us to our current data moment but rather to highlight the many unpredictable, contingent decisions and events that produced the present that we actually have, rather than one of the innumerable ones that did not come to pass. This strategy serves also to make the familiar strange by showing how something as apparently self-evident as data has encompassed many surprising and unfamiliar contexts, associations, and practices. Time and again, these essays illustrate that changing technologies or practices often did not make the work of scientists easier or better—to say nothing of the societies they have participated in—but rather that they introduced new challenges, new social arrangements, and new kinds of friction to be confronted.

    In the end, we as editors strongly believe that data history is an important category not because many of its individual components—histories of technology and quantification, bureaucratic history, history of measurement and assessment—have been overlooked, but rather because a focus around data brings many often independent conversations and perspectives together under a broader umbrella. This matters because tracing the genealogy of data-driven science reveals essential contexts and assumptions underlying our current material culture, politics, and epistemology of data. By presenting historical analyses of the materiality, practices, and political ramifications of data collection and analysis, we gain new insights in many distinct scientific disciplines and eras relevant to historians of science, while adding a much-needed comparative dimension and historical depth to the ongoing discussion of the revolutionary potential of data-driven modes of knowledge production. Our hope is that this volume will inspire scholarship that will further enrich our historical understanding of data and its consequences beyond the cases presented in this collection. If Big Data is the question, data histories hold the answers.

    * Department of History, University of California, Santa Barbara, Santa Barbara, CA 93106-9410; [email protected].

    §Max Planck Institute for the History of Science, Boltzmannstrasse 22, 14195 Berlin, Germany; [email protected].

    #Max Planck Institute for the History of Science, Boltzmannstrasse 22, 14195 Berlin, Germany; [email protected].

    ¹On the introduction of the term Big Data to general currency, see Rebecca Lemov, Anthropology’s Most Documented Man, Ca. 1947: A Prefiguration of Big Data from the Big Social Science Era, in this volume. We refer to Big Data in capital letters simply as a reference to current usage in the literature, which does not signify any endorsement of the exclusive significance of present-day data practices.

    ²The emphasis on the size of data derives its impulse from estimates of data flows in social media. As of 2012, Facebook collected more than 500 terabytes of data on a daily basis, Google processed dozens of petabytes, and Twitter produced nearly 12 terabytes. See Hamid Ekbia, Michael Mattioli, Inna Kouper, G. Arave, Ali Ghazinejad, Timothy Bowman, Venkata Ratandeep Suri, Andrew Tsou, Scott Weingart, and Cassidy R. Sugimoto, Big Data, Bigger Dilemmas: A Critical Review, JASIST 66 (2015): 1523–1746.

    ³Tony Hey, Stewart Tansley, and Kristine Tolle, eds., The Fourth Paradigm: Data-Intensive Scientific Discovery (Redmond, Wash., 2009); Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, Wired, 23 June 2008.

    ⁴World Economic Forum, Personal Data: The Emergence of a New Asset Class (2011), https://www.weforum.org/reports/personal-data-emergence-new-asset-class (accessed 17 January 2017); P. Rotella, Is Data the New Oil?, Forbes, 2 April 2012, http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/ (accessed 17 January 2017).

    ⁵Often, computing tasks were performed by women. See Pamela Mack, Strategies and Compromises: Women in Astronomy at Harvard College Observatory, 1870–1920, J. Hist. Astron. 21 (1990): 65–75; Jennifer Light, When Computers Were Women, Tech. & Cult. 40 (1999): 455–83; David Alan Grier, When Computers Were Human (Princeton, N.J., 2005); Denise Kiernan, The Girls of Atomic City: The Untold Story of the Women Who Helped Win World War II (New York, 2013).

    ⁶See, e.g., James H. Capshew and Karen Rader, Big Science: Price to the Present, Osiris 7 (1992): 3–25; Peter Galison and Bruce Hevly, eds., Big Science: The Growth of Large-Scale Research (Palo Alto, Calif., 1992); Catherine Westfall, "Rethinking Big Science: Modest, Mezzo, Grand Science
