The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. In this chapter, we engage in a critical discussion of the concept of trustworthy AI by probing the concept both on theoretical and practical grounds, assessing its substance and the feasibility of its intent. We offer a concise overview of the guidelines and their vision for trustworthy AI and examine the conceptual underpinnings of trustworthy AI by considering how notions of 'trust' and 'trustworthiness' have been discussed in the philosophical literature. We then discuss several epistemic obstacles and moral requirements when striving to achieve trustworthy AI in practice before concluding with an argument in support of the establishment of a trustworthy AI culture that respects and protects foundational values.
As the digital revolution continues and our lives become increasingly governed by smart technologies, there is a rising need for reflection and critical debate about where we are, where we are headed, and where we want to be. Against this background, the paper suggests that one way to foster such discussion is by engaging with the world of fiction, with imaginative stories that explore the spaces, places, and politics of alternative realities. Hence, after a concise discussion of the concept of speculative fiction, we introduce the notion of datafictions as an umbrella term for speculative stories that deal with the datafication of society in both imaginative and imaginable ways. We then outline and briefly discuss fifteen datafictions subdivided into five main categories: surveillance; social sorting; prediction; advertising and corporate power; hubris, breakdown, and the end of Big Data. In a concluding section, we argue for the increased use of speculative fiction in education, but also as a tool to examine how specific technologies are culturally imagined and what kind of futures are considered plausible given current implementations and trajectories.
For some time now, there has been renewed interest in so-called evidence-based policy making. Enticed by the grand promises of Big Data, political decision makers seem increasingly willing to experiment with more data-driven forms of government. Yet while the rise of Big Data and its associated dangers has been critically examined by scholars, there have so far been few attempts to develop a better understanding of the historical contexts and foundations of these developments. This commentary addresses that gap by situating the current pursuit of numerical evidence within a broader socio-political context, thereby showing how the epistemic promises of Big Data intersect with specific forms of trust, truth, and objectivity. We argue that the excessive trust in number-based evidence can be attributed to a particular political culture, namely a representative democracy marked by public distrust and great uncertainty about the future.
Across the globe, the notion of Big Data has received much attention, not only in technology and business circles but also among political authorities. Public officials in Europe, the U.S., and beyond have formulated Big Data strategies that will steer I(C)T development towards certain goals and aspirations. Drawing on official European Commission documents and using the notion of sociotechnical imaginaries as a sensitising concept, this chapter investigates the values, beliefs, and interests that guide European policymakers' Big Data rhetoric, making the argument that while the Commission's embrace of a strong free-market position can be partly explained in terms of vexing economic, institutional, and epistemic challenges, its push for Big Data solutions threatens to undermine democratic rights and principles as well as efforts towards responsible research and innovation. The chapter concludes with recommendations for further research, emphasising the need for cross-disciplinary dialogue and scholarship.
With its promise to transform how we live, work, and think, Big Data has captured the imaginations of governments, businesses, and academia. However, the grand claims of Big Data advocates have been accompanied by concerns about potential detrimental implications for civil rights and liberties, leading to a climate of conflict and mutual distrust between different stakeholders. Throughout the years, the interdisciplinary field of technology assessment (TA) has gained considerable experience in studying socio-technical controversies and as such is exceptionally well equipped to assess the premises and implications of Big Data practices. However, the relationship between Big Data as a socio-technical phenomenon and TA as a discipline assessing such phenomena is a peculiar one: Big Data may be the first topic TA deals with that is not only an object of inquiry, but also a major competitor, rivaling TA in several of its core functions, including the assessment of public views and visions, means and methods for exploring the future, and the provision of actionable knowledge and advice for political decision making. Our paper explores this dual relationship between Big Data and TA before concluding with some considerations on how TA might contribute to more responsible data-based research and innovation.
The paper investigates the rise of Big Data in contemporary society. It examines the most prominent epistemological claims made by Big Data proponents, calls attention to the potential socio-political consequences of blind data trust, and proposes a possible way forward. The paper's main focus is on the interplay between an emerging new empiricism and an increasingly opaque algorithmic environment that challenges democratic demands for transparency and accountability. It concludes that a responsible culture of quantification requires epistemic vigilance as well as a greater awareness of the potential dangers and pitfalls of an ever more data-driven society.
Recently, there has been renewed interest in so-called evidence-based policy making. Enticed by the grand promises of Big Data, public officials seem increasingly inclined to experiment with more data-driven forms of governance. But while the rise of Big Data and related consequences has been a major issue of concern across different disciplines, attempts to develop a better understanding of the phenomenon's historical foundations have been rare. This short commentary addresses this gap by situating the current push for numerical evidence within a broader socio-political context, demonstrating how the epistemological claims of Big Data science intersect with specific forms of trust, truth, and objectivity. We conclude by arguing that regulators' faith in numbers can be attributed to a distinct political culture, a representative democracy undermined by pervasive public distrust and uncertainty.
In late 2019, Philip Alston, the United Nations Special Rapporteur on extreme poverty and human rights, published a much-discussed report on the worldwide emergence of digital welfare states where "digital data and technologies are used to automate, predict, identify, surveil, detect, target and punish" (UN OHCHR, 2019). Outlining a number of related societal risks – from privacy violations to the reinforcement or exacerbation of existing inequalities – the report warns of the dangers of "stumbling zombie-like into a digital welfare dystopia", a human-rights free zone that is "especially problematic when the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state" (ibid.). Against this background, and given that the political push for "automating society" (AlgorithmWatch, 2019) is clearly gaining momentum, this paper takes a closer look at the political economy of welfare AI, with particular attention to the expanding role of the private sector in either supporting or delivering public services. More specifically, we wish to shed some light on the business behind welfare tech, the types of corporations involved, and the nature of the emerging public-private partnerships in some of the most sensitive areas of government operation. Drawing on examples from EU member states, the paper thus seeks to contribute to current discussions around the datafication and industrialization of the public sector (see, e.g., Dencik et al., 2019) and the changing power relations these processes initiate, raising questions of ownership, control, accountability, and democratic legitimacy.
In an era marked by crisis and uncertainty, policy makers have shown interest in more anticipatory forms of governance, facilitated by analytical tools and techniques that promise to provide not just insight into past and present events, but foresight into what the future holds. Big Data-based predictive algorithms are meant to shift the focus from reactive measures to proactive prevention, from monitoring and responding to the continuous assessment of the ‘not yet’. This talk considers the increasingly central role of predictive analytics in many areas of public life, reflecting on the opportunities and benefits, but also the potential social, ethical, and epistemic pitfalls of an ever more data-driven society.
The future is currently enjoying great popularity. Throughout different spheres of society, the ‘not yet’ is at the centre of debates. While technoscience articulates great promises about futures expected to be brought about by novel innovations, research policy is busy attempting to anticipate and govern future lifeworlds.
Science and Technology Studies (STS) has devoted considerable attention to the study of the future. Collectively held imaginations about the future are explored as instrumental in the stabilization of societal and scientific orderings, dynamics of promise and expectation are studied in order to understand processes of technoscientific innovation and the making of particular futures, and varying anticipatory techniques are developed, compared, and refined in an effort to delineate new modes of governing in increasingly complex, uncertain worlds. While these approaches to studying the future differ considerably in their empirical focus, they usually have one thing in common: they deal with particular futures, e.g. the future of nuclear energy, the future potentials of nanotechnology, and so on, while paying considerably less attention to questions concerning how the future as a temporal concept is constituted.
In this talk we attempt to remedy this omission by bringing together insights from both STS and narratological research to ask how futures are narrated. This means exploring the future as a ‘narrated temporality’ and thus focusing on how stories about the future are constructed, related, and ordered temporally. We will provide an empirically grounded understanding of what ‘future’ means in policy contexts – both for policymakers and advisory institutions. We argue that what is understood as ‘the future’ varies significantly across different fields, and that these variations have epistemic, social, and moral implications for how we choose to engage with the future, including ideas about which issues are legitimate to tackle, how to produce knowledge to deal with them, and who is supposed to take action and thus can be held responsible. This means asking how particular stories of the future are co-constitutive of political spaces. Narrating the future then becomes an intrinsically political and ontological activity of negotiating and ordering the present or, to put it differently, a practice of world-making. We argue that thinking about futures as narrated temporalities is a promising avenue to better understand processes of politicization.
Using a comparative approach, we show how different stories about the future relate to different conceptualizations of the political (participation, representation, responsibility, …). To this end we analyse (science) policy documents from the fields of sustainability research, big data and climate engineering.
Nanoscience and nanotechnology have been referred to as one of the most important technoscientific breakthrough areas of the 21st century, residing in a post-normal state of uncertainty that is subject to both utopian dreams and dystopian nightmares. As the field is considered to hold great economic potential, policymakers have been keen to ensure its social acceptability early on, calling for responsible and sustainable R&D based on democratic principles and public participation. However, such discursive shifts towards a more inclusive governance of technoscientific innovation have been undermined by deeply entrenched but conceptually questionable policy framings such as the deficit model or the risk paradigm. As a result, public 'engagement' initiatives have all too often taken the form of unidirectional, expert-led information dissemination exercises, more prone to 'downstream'-dominated rather than 'upstream'-oriented modes of future deliberation.
Internet Governance oder: Wer regiert den Cyberspace? Eine Darstellung der Entwicklung der admini... more Internet Governance oder: Wer regiert den Cyberspace? Eine Darstellung der Entwicklung der administrativen Verwaltung des Internet und die aktuelle Diskussion zur Problematik der Domainvergabe. Das Internet ist ein weltweites Netzwerk welches sich wiederum aus voneinander unabhängigen Netzwerken zusammensetzt (interconnected networks). Auf Grundlage verschiedener Protokolle wird es den unterschiedlichen Systemen erlaubt, auf einer einheitlichen Ebene miteinander in Verbindung zu treten. Zunächst wurde das Internet vor allem für militärische und wissenschaftliche Zwecke gebraucht, dies änderte sich jedoch schnell als Ende der 80iger Jahre das World Wide Web (www) entstand. Es ermöglichte durch seine grafische Darstellung eine einfachere Anwendung als dies zuvor der Falls gewesen war, was zur Folge hatte, dass im Laufe von weniger als zwei Dekaden über eine Milliarde Menschen die Möglichkeiten des Internet und seiner zahlreichen Dienste zu nutzen begannen. Kaum wurde das Internet zu einem globalen Marktplatz mit großer ökonomischer Bedeutung, meldeten sich auch immer mehr Regierungen zu Wort die mehr Mitsprache bei Verwaltungs-und Entscheidungsprozessen forderten. Zum jetzigen Zeitpunkt regelt ICANN (Internet Corporation for Assigned Names and Numbers) auf höchster Ebene die Gesetzesgrundlagen und technische Spezifikationen. ICANN ist eine Firma mit Sitz in Kalifornien und bezieht seine Legitimation aus einem Vertrag mit dem US-amerikanischen Wirtschaftsministerium. Die Amerikaner sehen sich als die Erfinder des Internet und fordern das Recht auf die Verwaltung der Kernressourcen für sich. Die Amerikaner als führende Internet-Nation kommen jedoch aufgrund globaler Entwicklungen in immer stärker in Bedrängnis. Schon heute benutzen wesentlich mehr Leute in Asien das Internet als in Nord Amerika. 
Spitzenreiter China liegt zwar mit 111 Millionen Usern noch klar unter dem Wert der Amerikaner (227 Millionen) die Wachstumsquote in den Jahren zwischen 2000 und 2005 lag jedoch bei den Chinesen etwa viermal so hoch. Mit anhaltendem Wirtschaftsaufschwung ist hier kein Ende dieser Tendenz abzusehen. Auch in Europa nutzen mittlerweile mehr Menschen das Internet als in Nordamerika. Von diesen Zahlen ausgehend verwundert es kaum, dass allgemeiner Unmut herrscht über die aktuelle Hierarchie in der Domainvergabe. Ersten Höhepunkt sollte die Diskussion um eine Neuausrichtung der Regulierung im Jahr 2003 in Genf finden wo der erste "World Summit on the Information Society" (WSIS) stattfand. Das Thema wurde damals jedoch ausgeklammert da schon auf den Vorbereitungskongressen keine Einigung erzielt werden konnte. Stattdessen wurden zwei Arbeitsgruppen eingerichtet die vorbereitend für eine Lösung des Disputs bis zum zweiten Gipfel im November 2005 arbeiten sollten. Eine endgültige Lösung die alle Parteien zufrieden stellt schien jedoch unmöglich und ist auch bis zum heutigen Tage noch nicht gefunden. Erst kurz vor Beginn des zweiten Gipfels in Tunis konnte man sich auf einen einstweiligen Kompromiss einigen. Die Gründung eines Internet Governance Forums (IGF) wurde beschlossen, welches den UNO Staaten als eine neue dezentralisierte Kommunikationsplattform die Möglichkeit zur Mitsprache geben könnte, auch wenn sich an den eigentlichen Strukturen zunächst einmal nichts ändert. Herausgearbeitet soll werden, dass das Internet aufgrund seiner einzigartigen technologischen Struktur eine andere Form von Regulierung benötigt dies etwas bei traditionellen Print-oder Rundfunkmedien der Fall ist. Aufgrund seiner Grenzen überschreitenden Architektur geht es hierbei nicht nur um einen rein diplomatischen Lösungsansatz, eine globale Regulierung des Netzes kann nicht ohne die Partizipation privater Unternehmen und des Zivilsektors erfolgen. 
That the international debate appears to have recognized this, yet still struggles with the successful implementation of such "multistakeholderism", is one of the issues this paper seeks to address. Starting from some basic technical explanations, the paper traces the process of Internet administration in brief stages, from its early phase to the current debate, with a clear focus on the period surrounding the two World Summits on the Information Society in 2003 and 2005. This account of the historical processes should make it possible to answer the question of whether a unipolar model of administration can endure in an application system that is, in keeping with the characteristics of the technology, fundamentally open, or whether restructuring processes are inevitable. The attempt to approach the problem of regulating the Internet rests on an extensive literature review, drawing on selected publications on the topic and, above all, on Internet sources, as well as on publicly accessible documents from individual negotiations. The topic of Internet governance receives little attention from most media in the German-speaking world. This is a serious omission, given that the decision-making processes at stake will shape the future and that their outcomes have consequences of global reach.
Papers by Gernot Rieder
The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. In this chapter, we engage in a critical discussion of the concept of trustworthy AI by probing the concept both on theoretical and practical grounds, assessing its substance and the feasibility of its intent. We offer a concise overview of the guidelines and their vision for trustworthy AI and examine the conceptual underpinnings of trustworthy AI by considering how notions of 'trust' and 'trustworthiness' have been discussed in the philosophical literature. We then discuss several epistemic obstacles and moral requirements when striving to achieve trustworthy AI in practice before concluding with an argument in support of the establishment of a trustworthy AI culture that respects and protects foundational values.
Talks by Gernot Rieder
Science and Technology Studies (STS) has devoted considerable attention to the study of the future. Collectively held imaginations about the future are explored as instrumental in the stabilization of societal and scientific orderings, dynamics of promise and expectation are studied in order to understand processes of technoscientific innovation and the making of particular futures, and varying anticipatory techniques are developed, compared and refined in an effort to delineate new modes of governing in increasingly complex, uncertain worlds. While these approaches of studying the future differ considerably in their empirical focus, they usually have one thing in common: they deal with particular futures, e.g. the future of nuclear energy, the future potentials of nanotechnology and so on, while paying considerably less attention to questions concerning how the future as a temporal concept is constituted.
In this talk we attempt to remedy this omission by bringing together insights from both STS and narratological research to ask how futures are narrated. This means exploring the future as a ‘narrated temporality’ and thus focusing on how stories about the future are constructed, related and ordered temporally. We will provide an empirically grounded understanding of what ‘future’ means in policy contexts – both for policymakers and advisory institutions. We argue that what is understood as ‘the future’ varies significantly in different fields, and that these variations have epistemic, social and moral implications for how we choose to engage with the future, including ideas about what are legitimate issues to be tackled, how to produce knowledge to deal with them, and who is supposed to take action and thus can be held responsible. This means asking how particular stories of the future are co-constitutive of political spaces. Narrating the future then becomes an intrinsically political and ontological activity of negotiating and ordering the present or, to put it differently, a practise of world-making. We argue that thinking about futures as narrated temporalities is a promising avenue to better understand processes of politicization.
Using a comparative approach, we show how different stories about the future relate to different conceptualizations of the political (participation, representation, responsibility, …). To this end we analyse (science) policy documents from the fields of sustainability research, big data and climate engineering.