Philosophy of AI, Robotics and Cognitive Science by Guglielmo Tamburrini
Journal of Responsible Technology, 2024
This article analyses the negative impact of heuristic biases on the main goals of AI ethics. These biases are found to hinder the identification of ethical issues in AI, the development of related ethical policies, and their application. This pervasive impact has been mostly neglected, giving rise to what is called here the heuristics gap in AI ethics. This heuristics gap is illustrated using the AI carbon footprint problem as an exemplary case. Psychological work on biases hampering climate warming mitigation actions is specialized to this problem, and novel extensions are proposed by considering heuristic mentalization strategies that one uses to design and interact with AI systems. To mitigate the effects of this heuristics gap, interventions on the design of ethical policies and suitable incentives for AI stakeholders are suggested. Finally, a checklist of questions helping one to investigate systematically this heuristics gap throughout the AI ethics pipeline is provided.
Nuclear Risks and Arms Control - Problems and Progresses in the Time of Pandemics and War, 2023
This contribution provides an overview of nuclear risks emerging from the militarization of AI technologies and systems. These include AI enhancements of cyber threats to nuclear command, control and communication infrastructures, proposed uses of AI systems affected by inherent vulnerabilities in nuclear early warning, and AI-powered unmanned vessels trailing submarines armed with nuclear ballistic missiles. Taken together, nuclear risks emerging from the militarization of AI add significant new motives for nuclear non-proliferation and disarmament.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Italian Philosophy of Technology, Chiodo & Schiaffonati eds., 2021
Games have played a significant role throughout the history of artificial intelligence and robotics. Machine intelligence games are examined here from a methodological perspective, focusing on their role as generators of research programs. These research programs are schematized in terms of framework building, subgoaling, and outcome appraisal processes. The latter process is found to involve a rather intricate system of rewards and penalties, which take into account the double allegiance of participating scientists, trading and sharing interchanges occurring in multidisciplinary research environments, in addition to expected industrial payoffs and a variety of other research benefits in the way of research outreach and results dissemination, recruitment of junior researchers and students' enrolment. Examples used to illustrate these various aspects of the outcome appraisal process include RoboCup and computer chess, Go, Poker and video-games. On the whole, a reflection on research programs that are based on machine game playing opens a window on central features of the complex systems of rewards and penalties that come into play to appraise machine intelligence investigations.
A book series for those who believe that in life one never stops learning. Self-driving vehicles can help reduce the number of road deaths, but they have already been involved in serious traffic accidents. Autonomous weapons can attack legitimate military targets without requiring approval from a human operator, but they might strike civilians who have no part in the conflict.
ICRAC WORKING PAPERS SERIES #4, 2019
The present ICRAC Report contributes to moving forward the debate on Meaningful Human Control (MHC) of Autonomous Weapons Systems (AWS) (i) by filling the MHC placeholder with more precise contents, and (ii) by identifying on this basis some key aspects of any legal instrument enshrining the MHC requirement (such as, e.g., a Protocol VI to the CCW).
This paper provides a methodological analysis of Executable Cell Biology (ECB), a current simulative approach to computational biology, showing how ECB took up again the general idea of constructing theoretical models that are also executable, pursued over fifty years ago by Herbert Simon and Allen Newell within the Information Processing Psychology approach. It is highlighted, however, that ECB focuses on a more abstract model of the biological system. On the one hand, the processes of abstraction involved in the construction of ECB theoretical models allow one to omit those implementation details of the simulative program that have no theoretical value. On the other hand, the executability of the abstract model makes it possible, in general, to expand the class of predictions that can be extracted from observing the simulative programs' executions. Finally, focusing on ECB executable theoretical models, which are distinct from the simulative programs, poses new problems for the methodological analysis of the sciences of the artificial, in particular with reference to the role that both abstraction and idealization processes have in the construction of theoretical models and in the exploration of their relationship with the biological reality of the modelled systems.
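For illustration only: the kind of executable, implementation-abstracted model that ECB-style approaches work with can be rendered as a small Boolean network. The sketch below is a minimal, hypothetical Python example; the node names and update rules are invented and do not come from the paper or from any actual ECB study.

```python
# Illustrative sketch only: a tiny Boolean-network "executable model" of a
# hypothetical gene-regulation fragment. Node names and update rules are
# invented placeholders, not taken from the article.

from itertools import product

# Each node's next state is a Boolean function of the current state.
UPDATE_RULES = {
    "geneA": lambda s: not s["geneC"],             # A is repressed by C
    "geneB": lambda s: s["geneA"],                 # B is activated by A
    "geneC": lambda s: s["geneA"] and s["geneB"],  # C needs both A and B
}

def step(state):
    """Synchronously apply all update rules to obtain the next state."""
    return {node: rule(state) for node, rule in UPDATE_RULES.items()}

def run(state, n_steps=10):
    """Execute the abstract model: the trajectory is its 'prediction'."""
    trajectory = [state]
    for _ in range(n_steps):
        state = step(state)
        trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    # Exhaustively execute the model from every initial condition, the kind
    # of systematic exploration an executable (rather than verbal) model permits.
    for bits in product([False, True], repeat=3):
        init = dict(zip(UPDATE_RULES, bits))
        final = run(init, n_steps=5)[-1]
        print(init, "->", final)
```

Such an abstract model leaves the simulative program's implementation details (scheduling, data structures, visualization) entirely out of the theoretical picture, which is the methodological point the abstract emphasizes.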
Games and competitions have played a significant role throughout the history of artificial intelligence and robotics. Machine intelligence games are examined here from a distinctive methodological perspective, focusing on their use as generators of multidisciplinary research programs. In particular, RoboCup is analyzed as an exemplary case of a contemporary research program developing from machine intelligence games. These research programs are schematized in terms of framework building, subgoaling, and outcome appraisal processes. The latter process is found to involve a rather intricate system of rewards and penalties, which takes into account the double allegiance of participating scientists and the trading and sharing interchanges taking place in a multidisciplinary research environment, in addition to expected industrial payoffs and a variety of other fringe research benefits in the way of research outreach and results dissemination, recruitment of junior researchers, and student enrollment.
Robots are being extensively used for the purpose of discovering and testing empirical hypotheses about biological sensorimotor mechanisms. We examine here methodological problems that have to be addressed in order to design and perform "good" experiments with these machine models. These problems notably concern the mapping of biological mechanism descriptions into robotic mechanism descriptions; the distinction between theoretically unconstrained "implementation details" and robotic features that carry a modeling weight; the role of preliminary calibration experiments; and the monitoring of experimental environments for disturbing factors that affect both modeling features and theoretically unconstrained implementation details of robots. Various assumptions that are gradually introduced in the process of setting up and performing these robotic experiments become integral parts of the background hypotheses that are needed to bring experimental observations to bear on biological mechanism descriptions.
The sustainability of social robotics, like other ambitious research programs, depends on the identification of lines of inquiry that are coherent with its visionary goals while satisfying more stringent constraints of feasibility and near-term payoffs. Within these constraints, this article outlines one line of inquiry that seems especially viable: development of a society of robots operating within the physical environments of everyday human life, developing rich robot–robot social exchanges, and yet refraining from any physical contact with human beings. To pursue this line of inquiry effectively, sustained interactions between specialized research communities in robotics are needed. Notably, suitable robotic hand design and control principles must be adopted to achieve proper robotic manipulation of objects designed for human hands that one finds in human habitats. The Pisa-IIT SoftHand project promises to meet these manipulation needs by a principled combination of sensorimotor synergies and soft robotics actuation, which aims at capturing how the biomechanical structure and neural control strategies of the human hand interact so as to simplify and solve both control and sensing problems.
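As a purely illustrative aside, the idea of sensorimotor synergies can be conveyed by a linear map from a few synergy activations to many joint angles. The following minimal Python sketch uses an invented six-joint toy hand and made-up numbers; it is not a description of the Pisa-IIT SoftHand design or its controller.

```python
# Illustrative sketch only: a linear postural-synergy map from a small number
# of synergy activations to many hand joint angles. All numbers are invented
# placeholders; this does not describe the Pisa-IIT SoftHand.

import numpy as np

N_JOINTS = 6     # toy hand with 6 joints (real hands have many more DOFs)
N_SYNERGIES = 2  # a low-dimensional control space

# Columns are synergy postures: joint displacements (radians) per unit activation.
SYNERGY_MATRIX = np.array([
    [0.8, 0.1],
    [0.7, 0.2],
    [0.6, -0.1],
    [0.5, 0.4],
    [0.4, 0.3],
    [0.3, -0.2],
])  # shape: (N_JOINTS, N_SYNERGIES)

REST_POSTURE = np.zeros(N_JOINTS)

def joint_angles(synergy_activation):
    """Map a low-dimensional synergy activation to full joint angles."""
    sigma = np.asarray(synergy_activation)
    return REST_POSTURE + SYNERGY_MATRIX @ sigma

if __name__ == "__main__":
    # Driving just the first synergy already produces a coordinated,
    # grasp-like closure of all joints: control reduces to one or two variables.
    print(joint_angles([1.0, 0.0]))
    print(joint_angles([0.5, 0.5]))
```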
Psychological attitudes towards service and personal robots are selectively examined from the vantage point of psychoanalysis. Significant case studies include the uncanny valley effect, brain-actuated robots evoking magic mental powers, parental attitudes towards robotic children, idealizations of robotic soldiers, and persecutory fantasies involving robotic components and systems. Freudian theories of narcissism, animism, infantile complexes, ego ideal, and ideal ego are brought to bear on the interpretation of these various items. The horizons of human-robot interaction are found to afford new and fertile grounds for psychoanalytic theorizing beyond strictly therapeutic contexts.
It is difficult to give a definition of AI and its goals that is unanimously shared by researchers. Part of the difficulty lies in the fact that AI has always presented itself under a twofold profile: that of an engineering discipline, whose goal is to build machines able to assist humans, and perhaps to compete with them, chiefly in intellectual tasks; and that of a psychological discipline, whose goal is to build machines which, by closely reproducing the essential features of human cognitive activity, shed new light on some traditional puzzles about the mind, for example the so-called mind-body problem. Perhaps the most widely accepted programmatic basis for AI is still the one used in the presentation of the workshop organized by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon in June 1956 in the United States, at Dartmouth (New Hampshire), in which one reads...
It is reductive, and in some respects even misleading, to center the analysis of Alan Turing's contributions to the birth and early development of Artificial Intelligence (AI) on the so-called "imitation game", also known as the Turing Test (TT). First of all, the intended function of the TT seems to have been, from the very beginning, purely popularizing: Turing's aim was to reach a broad audience of educated people, to whom he could convey and illustrate the technological possibility of developing machines capable of processing symbolic structures and exhibiting intelligent behavior. Moreover, the rules proposed for conducting and passing the TT turn out to be very vague, and fail to satisfy the intersubjectivity requirements for evaluating the results of an empirical test. Finally, it is a relevant fact that the TT has not been used as a benchmark or empirical test for the systems actually developed within AI. The goal of passing the TT, whatever that may mean, has been pursued mainly in socio-cultural events such as the Loebner Prize. Nevertheless, the strategy of centering the discussion of the relationship between Turing and AI precisely on the TT is rather widespread.
In this work we follow a different strategy for analyzing the relations between Turing and AI, above all in view of the substantial extraneousness of the TT to the actual conduct of research carried out within AI. In brief, this is what we set out to do: starting from a schematic reconstruction of AI viewed as a research program, we highlight the modeling, epistemological, methodological, and also technological contributions that Turing made to its development.
Proceedings of the 1988 IEEE International Conference on Systems, Man, and Cybernetics, 1988
Turing's test (TT) is a game involving three participants, introduced by Turing in [1] under the name of "imitation game" and presented together with the proposal of replacing the question "Can machines think?" with a question concerning the game. The latter question, Turing ...
Ethics of HCI and HRI by Guglielmo Tamburrini
Paradigmi, 2022
In the ethics of autonomous vehicles, dilemmatic frameworks have been widely used to discuss unavoidable collision scenarios. However, doubts have been raised about the appropriateness of framing unavoidable collisions as moral dilemmas. We claim that dilemmatic frameworks take on new roles in this context and that acknowledging these changes is essential to any assessment of their methodological productivity. We suggest that in the ethics of autonomous vehicles dilemmatic frameworks are pointers to social deliberation issues and related trade-offs between competing values, helping one to identify ethically motivated policies and specifications for autonomous driving controllers.
Philosophies, 2022
This article examines ethical implications of the growing AI carbon footprint, focusing on the fair distribution of prospective responsibilities among the groups of actors involved. First, major groups of involved actors are identified, including AI scientists, the AI industry, and AI infrastructure providers, from data centers to electrical energy suppliers. Second, responsibilities of AI scientists concerning climate warming mitigation actions are disentangled from the responsibilities of other involved actors. Third, to implement these responsibilities, nudging interventions are suggested, leveraging AI competitive games that would prize research combining better system accuracy with greater computational and energy efficiency. Finally, in addition to the AI carbon footprint, it is argued that another ethical issue with a genuinely global dimension is now emerging on the AI ethics agenda. This issue concerns the threats that AI-powered cyberweapons pose to the digital command, control, and communication infrastructure of nuclear weapons systems.
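A minimal sketch of the kind of efficiency-aware scoring that such competitive games might adopt is given below. The scoring rule, the weights, and the example entries are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch only: a toy leaderboard that rewards accuracy jointly
# with energy efficiency, in the spirit of the nudging proposal above.
# The score formula, weights, and entries are assumptions, not from the article.

import math
from dataclasses import dataclass

@dataclass
class Submission:
    name: str
    accuracy: float      # e.g. test-set accuracy in [0, 1]
    energy_kwh: float    # estimated training + inference energy

def score(sub: Submission, alpha: float = 1.0, beta: float = 0.25) -> float:
    """Higher accuracy raises the score; higher energy use lowers it."""
    return alpha * sub.accuracy - beta * math.log10(1.0 + sub.energy_kwh)

if __name__ == "__main__":
    entries = [
        Submission("big-model", accuracy=0.92, energy_kwh=5000.0),
        Submission("lean-model", accuracy=0.90, energy_kwh=50.0),
    ]
    # Under this toy rule the leaner model ranks first despite slightly
    # lower accuracy, which is exactly the incentive the proposal gestures at.
    for s in sorted(entries, key=score, reverse=True):
        print(f"{s.name}: score={score(s):.3f}")
```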
Automi e persone, 2021
Automi e persone
Introduzione all'etica dell'intelligenza artificiale e della robotica
Edited by Fabio Fossa, Viola Schiaffonati, Guglielmo Tamburrini
Carocci Editore, Roma 2021, pp. 320, euro 29
ISBN 978-88-290-1170-4
http://www.carocci.it/index.php?option=com_carocci&task=schedalibro&Itemid=72&isbn=9788829011704
In brief
From algorithmic decisions to purchase recommendations, from sex robots to social surveillance, from cybersecurity to the operational autonomy of vehicles and weapons, the impact of artificial intelligence and robotics on people's lives is ever more ramified and pervasive. The volume offers an overview of the ethical questions raised by the encounter between automata and individuals in contemporary society: the protection of autonomy in the face of the minute collection of personal data, the forms of collective well-being to be promoted through automation, the transparency and fairness of decisions made with the support of an algorithm, and the role of intelligent systems in the environmental crisis. Automation is neither an incontrovertible sign of doom nor a sure indication of progress. What will become of people, societies, and life on our planet depends crucially on how we manage to address the ethical challenges of the age of automata.
Contents
Introduction, by Fabio Fossa, Viola Schiaffonati and Guglielmo Tamburrini
Part One
Decisions and Responsibility
1. Ethics of intelligent and autonomous systems: a map to find one's bearings, by Fabio Fossa, Viola Schiaffonati and Guglielmo Tamburrini
2. Machine learning and human decision-making, by Giovanni Grandi and Teresa Scantamburlo
Introduction / Machines that decide? / The ethical and social impact of machine learning on decision-making processes / Philosophical concepts and interpretive keys for an analysis of deciding / Moral demands in decision-making processes / AI and moral demands
3. Digital medicine and AI: ethical profiles, by Alessandro Blasimme
Introduction / Fundamental concepts and case studies / Ethical profiles / General aspects
4. Ethics of autonomous vehicles, by Fabio Fossa and Guglielmo Tamburrini
Introduction / Unavoidable collisions / Beyond unavoidable collisions / From applied ethics to ethical policies / Sense and limits of moral analysis
Part Two
Persons and Institutions
5. Algorithmic governance: surveillance, censorship and fundamental rights, by Francesca Musiani
Introduction / Where can algorithmic governance be observed? / Multi- and interdisciplinary perspectives on algorithmic governance / Algorithmic governance, values and rights / Conclusions
6. Cybersecurity, by Michele Loi
Introduction / Examples of value trade-offs / Ethics of cybersecurity, robotics and AI / Conclusions
7. Autonomous weapons and meaningful human control, by Viola Schiaffonati and Guglielmo Tamburrini
Introduction / Ethics of duties and autonomous weapons / Ethics of consequences and autonomous weapons / Meaningful human control and autonomous weapons / Ethical policies for autonomous weapons
8. Environmental sustainability of the information society, by Federica Lucivero
Introduction / Environmental sustainability, sustainable development and big data / Digital pollution: what is it? / Digital pollution and ethics / Conclusions
Part Three
Persons and Interactions
9. Social robotics: persuasion, deception and the ethics of design, by Fabio Fossa
Introduction / Social robotics / Captology and persuasive technologies / Nudging and social robotics / Robotic deceptions / A methodological tool: Value Sensitive Design
10. Sex robots, by Maurizio Balistreri
Introduction / What are sex robots? / Is someone who has sex with a robot a depraved person? / Is there something wrong with "raping" a robot? / Do sex robots promote violence against women? / Conclusions
11. The ethical implications of educational and didactic uses of robots, by Edoardo Datteri and Luisa Zecca
From "teaching machines" to robotic teachers / Scenarios of robot use in educational and didactic contexts / Privacy, deception, responsibility / Conclusions
12. Ethics, video games and gamification, by Francesca Dagnino, Marcello Passarelli and Donatella Persico
Introduction / Video games and gamification / Ethical issues of video games / Ethical issues of gamification / Promoting values and ethical awareness through games / Conclusions
Part Four
Signposts
13. Ethical codes and guidance documents, by Fabio Fossa and Viola Schiaffonati
Introduction / ACM Code of Ethics and Professional Conduct / IEEE Ethically Aligned Design / Ethics Guidelines for Trustworthy AI / Criticisms, limits and sense of the approach
14. Horizons, by Fabio Fossa, Viola Schiaffonati and Guglielmo Tamburrini
Introduction / Automation and work / Computer and robotic enhancement / Machine ethics / Robot rights / Automata and global ethical challenges
Bibliography
The authors
Ethics and International Affairs, 2021
First, we briefly review the main stumbling blocks in building a satisfactory definition of AWS; in other words, in building one that is sufficiently precise and that is neither overly restrictive nor overly permissive. As mentioned above, these persisting impediments speak clearly in favor of shifting the focus of ethical and legal debates away from definitions of AWS and toward a specification of MHC contents. To lay the groundwork for identifying the core components of MHC, we examine ethical and legal arguments from the AWS debate, which selectively concern jus in bello principles of distinction, proportionality, and precaution; responsibility ascription; and human dignity protection. We then argue that these ethical and legal arguments concur to pinpoint distinctive human obligations regarding weapons systems control. These obligations constrain human-weapon shared control by retaining for human agents the roles of "fail-safe actor," "accountability attractor," and "moral agency enactor." We maintain that uniform models of human control, that is, those applying one size of human control to all weapons systems and uses thereof, fail to properly accommodate these normative requirements. Hence the need for an MHC framework that is both "differentiated," in rejecting uniform solutions to the issue of human control, and "principled," in favoring solutions that invariably retain the fail-safe, accountability, and moral agency roles for humans in human-weapon interactions. We additionally argue for "prudential" solutions, chiefly by appealing to epistemic uncertainties about AWS behaviors. The prudential solution we advance here imposes by default higher levels of human control of target selection and engagement processes; designated exceptions to this default rule are admitted solely on the basis of an international agreement entered into by states for specific weapons systems and uses thereof, provided that lower levels of human control are by consensus found sufficient to meet the fail-safe actor, accountability attractor, and moral agency enactor requirements. Finally, we suggest that the outlined differentiated, principled, and prudential framework provides a most appropriate normative basis for both national arms review policies and any international legal instrument enshrining the MHC requirement (such as a possible Protocol VI to the CCW).
Title translated into English: "Increasingly Autonomous Robotic Systems and Their Meaningful Human Control."
ABSTRACT IN ENGLISH: To be counted as operationally autonomous relative to the execution of some given task, a robotic system must be capable of performing that task without any human intervention after its activation. Recent progress in the fields of robotics and AI has paved the way for robots autonomously performing tasks that may significantly affect individual and collective interests that are worthy of protection from both ethical and legal perspectives. The present contribution provides an overview of the ensuing normative problems and identifies some ethically and legally grounded solutions to them. To this end, three case studies are more closely scrutinized: increasingly autonomous weapons systems, vehicles, and surgical robots. These exemplary cases are used to illustrate, respectively, the ground problem of whether we want to grant certain forms of autonomy to robotic systems, the problem of selecting appropriate ethical policies to control the behavior of autonomous robotic systems, and the problem of how to retain responsibility for misdoings of autonomous robotic systems. The analysis of these case studies brings out the key role played by human control in ethical and legal problem-solving strategies concerning the operational autonomy of robotic and AI systems.
KEY WORDS: Autonomous Weapons Systems; Self-driving Cars; Surgical Robots; Deontological Ethics and Consequentialism; International Law
CONTENTS: 1. Introduction. 2. The autonomy of robotic systems as "operational autonomy". 3. The ethical and legal (un)acceptability of autonomy in robotic systems: the debate on autonomous weapons. 4. How to regulate operational autonomy in ethically and legally complex situations: autonomous vehicles and unavoidable collisions. 5. Machine autonomy and human (professional) responsibility: the case of surgical robots. 6. Conclusions.
Purpose of Review: To provide readers with a compact account of ongoing academic and diplomatic debates about autonomy in weapons systems, that is, about the moral and legal acceptability of letting a robotic system unleash destructive force in warfare and take attendant life-or-death decisions without any human intervention.
Recent Findings: A précis of current debates is provided, which focuses on the requirement that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC) in order to be ethically acceptable and lawfully employed. Main approaches to MHC are described and briefly analyzed, distinguishing between uniform, differentiated, and prudential policies for human control on weapons systems.
Summary: The review highlights the crucial role played by the robotics research community to start ethical and legal debates about autonomy in weapons systems. A concise overview is provided of the main concerns emerging in those early debates: respect of the laws of war, responsibility ascription issues, violation of the human dignity of potential victims of autonomous weapons systems, and increased risks for global stability. It is pointed out that these various concerns have been jointly taken to support the idea that all weapons systems, including autonomous ones, should remain under meaningful human control (MHC). Main approaches to MHC are described and briefly analyzed. Finally, it is emphasized that the MHC idea looms large on shared control policies to adopt in other ethically and legally sensitive application domains for robotics and artificial intelligence.
Keywords: Autonomous weapons systems, Roboethics, International humanitarian law, Human-robot shared control, Meaningful human control
CAROCCI EDITORE - 152 pp.
Quality paperbacks series - ISBN: 9788843098965
http://www.carocci.it/index.php?option=com_carocci&task=schedalibro&Itemid=72&isbn=9788843098965
CONTENTS
INTRODUCTION
1. MORAL RULES FOR AUTONOMOUS VEHICLES
The engineer and the ethics of consequences / Road safety and autonomous vehicles / Personal freedom and the ethics of consequences / Moral dilemmas in unavoidable collisions / Tensions within the ethics of consequences / Tensions among duties, consequences and roles
2. ETHICAL POLICIES FOR AUTONOMOUS VEHICLES
The ethics of consequences and the prohibition of torture / Priorities and the resolution of moral conflicts / Cultural preferences and ethical policies / Libertarian and segregationist policies / Supererogation, freedom of choice and moral paternalism / Autonomous driving and the climate crisis
3. AUTONOMY OF THE ARTIFICIAL AND APPLIED ETHICS
Increasingly autonomous vehicles / Operational autonomy: from the thermostat to soccer-playing robots / Ethics of autonomous machines: aims and tools / Ethics of autonomous machines: expectations and underlying problems / The algorithmic study of games and its unexpected applications
4. BANNING AUTONOMOUS WEAPONS?
The researcher as ethical lookout / From drones to autonomous weapons: ethical issues / From drones to autonomous weapons: technological developments / Ethics of duties: the principles of distinction and proportionality in war / Ethics of duties: responsibility and human dignity / Ethics of consequences and autonomous weapons
5. ETHICAL POLICIES FOR AUTONOMOUS WEAPONS
Not everything is permitted in war / The definition of autonomous weapon in the international debate / From definitions to the problem of human control / The "Petrov condition" in the context of artificial intelligence / Uniform ethical policies for human control / Differentiated and prudential ethical policies
6. THE FOREST OF AUTONOMOUS MACHINES AND ETHICS
Alan Turing on the future of work / Norbert Wiener on automation, ethics and workers' rights / Surveilling, educating and punishing in the age of artificial intelligence / Surveillance in the age of drones / The surgeon and his increasingly autonomous robot / Moral choices in search of an author
A Report by Daniele Amoroso, Frank Sauer, Noel Sharkey, Lucy Suchman and Guglielmo Tamburrini.
Volume 49 of the Publication Series on Democracy
Edited by the Heinrich Böll Foundation.
ISBN 978-3-86928-173-5
Published under the following Creative Commons License:
http://creativecommons.org/licenses/by-nc-nd/3.0
Philosophical motives of interest in AI and robotic autonomous systems prominently stem from distinctive ethical concerns: in which circumstances ought autonomous systems to be permitted or prohibited from performing tasks that have significant implications for human responsibilities, moral duties, or fundamental rights? Deontological and consequentialist approaches to ethical theorizing are brought to bear on these ethical issues in the context afforded by the case studies of autonomous vehicles and autonomous weapons. Local solutions to intertheoretic conflicts concerning these case studies are advanced towards the development of a more comprehensive ethical platform guiding the design and use of autonomous machinery.
research laboratories jointly with information about long-term goals of technological inquiry they are lined up with and about the short-term objectives guiding daily laboratory activities. These various ingredients play crucial roles in the pursuit of what are called here technological research programs. A comprehensive ethical framing of technological research programs is decomposed here into the ethical framing of their long-term and short-term goals, respectively. This approach to the ethical framing of technological research is exemplified by reference to fundamental rights in the context of technological research programs on elderly care and child care robots. Moreover, its significance is highlighted in connection with democratic decision-making about new and emerging technologies, as well as in connection with the cultural production of ignorance which is induced by missing information about the protection and promotion of fundamental rights in the specific context of robotic technologies.
Keywords: Applied ethics; Robotics; Technological research programs; Elderly care robots; Child care robots; Fundamental rights; Agnotology
of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account technological imminence and pervasiveness of BCI technologies. By focussing on imminent technological developments, ethical reflection is informatively grounded in realistic protocols of brain-to-computer communication. In particular, it is argued that human-machine adaptation and shared control distinctively shape autonomy and responsibility issues in current BCI interaction environments. Novel personhood issues are identified and analyzed too. These notably concern (i) the "sub-personal" use of human beings in BCI-enabled cooperative problem solving, and (ii) the pro-active protection of personal identity which BCI rehabilitation therapies may afford, in the light of so-called motor theories of thinking, for the benefit of patients affected by severe motor disabilities.
Novel ethical issues arise in connection with more distant prospects for BCI enhancement of unimpaired motor capabilities. Ethical policy formation about BCI-enabled enhancements appears to be premature in view of technological lack of imminence. Nevertheless, watchful monitoring of BCI research is presently called for, in order to anticipate prospective ethical tensions between the claims of personal freedom to enhancement and the claims deriving from social justice, fairness, and mental and physical integrity considerations.
On the whole, BCI systems afford unique potential solutions for protecting the autonomy, the action, and even the thinking capabilities of people affected by severe motor impairments. However, trust building between BCI researchers and various groups of stakeholders requires the development of communication strategies which enable one to appreciate the rapid advancements in BCI research without underestimating at the same time the formidable challenges one has to meet before various forms of BCI-enabled communication and motor control become more widely available.
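A minimal, hypothetical sketch of confidence-weighted shared control, of the general kind mentioned in the abstract above, is given below in Python. The blending rule, threshold, and command semantics are assumptions for illustration; they do not describe any specific BCI system discussed in the abstract.

```python
# Illustrative sketch only: confidence-weighted blending of a decoded BCI
# command with an autonomous assistance command (e.g. for wheelchair steering).
# The blending rule and threshold are assumptions, not from the article.

def shared_control(bci_command, machine_command, decoder_confidence,
                   min_confidence=0.3):
    """Blend user intent and machine assistance.

    bci_command, machine_command: desired turning rates in [-1, 1].
    decoder_confidence: decoder's reliability estimate for bci_command, in [0, 1].
    """
    if decoder_confidence < min_confidence:
        # Decoded intent too unreliable: let the machine keep the system safe.
        return machine_command
    w = decoder_confidence
    return w * bci_command + (1.0 - w) * machine_command

if __name__ == "__main__":
    # User tries to turn left; obstacle avoidance suggests going straight.
    print(shared_control(bci_command=-0.8, machine_command=0.0,
                         decoder_confidence=0.9))   # mostly user intent
    print(shared_control(bci_command=-0.8, machine_command=0.0,
                         decoder_confidence=0.2))   # machine takes over
```

The sketch illustrates why responsibility questions become subtle in such settings: the executed command is jointly determined by the user's decoded intent and the machine's contribution.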
the behaviour of software and hardware systems, is examined on the basis of reflective work in the philosophy of science concerning the ontology of scientific theories and model-based reasoning. The empirical theories of computational systems that model checking techniques enable one to build are identified, in the light of the semantic conception of scientific theories, with families of models that are interconnected by simulation relations. And the mappings between these scientific theories and computational systems in their scope are analyzed in terms of suitable specializations of the notions of model of experiment and model of data. Furthermore, the extensively mechanized character of model-based reasoning in model checking is highlighted by a comparison with proof procedures adopted by other formal methods in computer science. Finally, potential epistemic benefits flowing from the application of model checking in other areas of scientific inquiry are emphasized in the context of computer simulation studies of biological information processing.
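For illustration, the following minimal Python sketch shows the kind of exhaustive, mechanized state-space exploration that model checking rests on, here reduced to a simple safety (reachability) check over an invented toy transition system; it is not an account of any particular model checker or of the article's examples.

```python
# Illustrative sketch only: exhaustive reachability analysis of a toy
# transition system, the simplest kind of property check that model checking
# mechanizes. The system and the "bad" state are invented for illustration.

from collections import deque

# States of a toy two-process protocol: (proc1, proc2) locations.
INITIAL = ("idle", "idle")

def successors(state):
    """Enumerate all next states (interleaving of the two processes)."""
    moves = {"idle": "waiting", "waiting": "critical", "critical": "idle"}
    p1, p2 = state
    nexts = []
    if not (moves[p1] == "critical" and p2 == "critical"):  # naive guard
        nexts.append((moves[p1], p2))
    if not (moves[p2] == "critical" and p1 == "critical"):
        nexts.append((p1, moves[p2]))
    return nexts

def violates_safety(state):
    """Safety property: the two processes are never both in 'critical'."""
    return state == ("critical", "critical")

def check():
    """Breadth-first exploration of the whole reachable state space."""
    seen, frontier = {INITIAL}, deque([INITIAL])
    while frontier:
        state = frontier.popleft()
        if violates_safety(state):
            return f"Violation reachable: {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return f"Safety holds over {len(seen)} reachable states"

if __name__ == "__main__":
    print(check())
```

The exploration itself, rather than a hand-written proof, certifies the property over every reachable state, which is the mechanized character of model-based reasoning the abstract points to.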
OPEN ACCESS ARTICLE