2023, AI and Ethics
When this article was first published, the author's given and family names were transposed as "Firt Erez" instead of the correct "Erez Firt". The original publication has been corrected.
Minds and Machines, 2004
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. 
That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
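The abstract above notes that this view is especially tractable when the agent is a software system and the observables are numerical. A minimal sketch of that idea follows — not code from the paper, but a hypothetical illustration of the three agenthood criteria (interactivity, autonomy, adaptability) at a fixed level of abstraction, with morality modeled as a threshold on a numerical observable. All names (`Agent`, `HARM_THRESHOLD`, `morally_good`) are assumptions introduced here for illustration.

```python
class Agent:
    """A minimal state-transition system observed at one LoA.

    Illustrative only: the single numerical attribute `state` plays
    the role of the observable fixed by the chosen interface.
    """

    def __init__(self):
        self.state = 0
        self.rule = lambda s, stimulus: s + stimulus  # transition rule

    def interact(self, stimulus):
        # Interactivity: state changes in response to an external stimulus.
        self.state = self.rule(self.state, stimulus)

    def act_autonomously(self):
        # Autonomy: state changes without any external stimulus.
        self.state = self.rule(self.state, 1)

    def adapt(self, new_rule):
        # Adaptability: the transition rule itself can change.
        self.rule = new_rule


HARM_THRESHOLD = 10  # hypothetical moral threshold on the observable


def morally_good(observed_states):
    # An agent is morally good if every observed state respects the
    # threshold; a single violation makes it morally evil.
    return all(s <= HARM_THRESHOLD for s in observed_states)


a = Agent()
history = []
a.interact(3); history.append(a.state)          # responds to stimulus
a.act_autonomously(); history.append(a.state)   # changes state unprompted
a.adapt(lambda s, x: s + 2 * x)                 # rewrites its own rule
a.interact(2); history.append(a.state)
print(morally_good(history))
```

On this toy reading, the moral verdict depends entirely on which observables the chosen LoA exposes — change the interface or the threshold and the same behavior can be classified differently, which is the point of the Method of Abstraction.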
Springer, 2019
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins’ (2018) paper Critiquing the Reasons for Making Artificial Moral Agents critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique.
Ethics and Information Technology, 2018
The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take into consideration the work of Bostrom and Dietrich, who have radically assumed this viewpoint and thoroughly explored its implications. Thirdly, I present an alternative approach to AMAs—the Discontinuity Approach—which underscores an essential difference between human moral agents and AMAs by tackling the matter from another angle. In this section I concentrate on the work of Johnson and Bryson and I highlight the link between their claims and Heidegger’s and Jonas’s suggestions concerning the relationship between human beings and technological products. In conclusion I argue that, although the Continuity Approach turns out to be a necessary postulate to the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents. On this account, the Discontinuity Approach generates a clearer understanding of what AMAs are, of how we should face the moral issues they pose, and, finally, of the difference that separates machine ethics from moral philosophy.
International Journal of Technoethics (IJT), 2011
Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status, with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question I develop a respect-based account of the ethical criteria for the moral status of persons. From this account I derive an empirical test that must be passed in order for artificial agents to be considered alongside persons as having the corresponding rights and duties.
Humanities and Social Sciences Communications
Moral implications of the decision-making process based on algorithms require special attention within the field of machine ethics. Specifically, the research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily mean that they meet the requirements of autonomous moral agents, such as human beings. For the purposes of exemplifying some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test. These are Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility f...
2021
This thesis examines and defends the philosophical possibility of the emergence of an authentic artificial moral agent. The plausibility of passing the Turing Test, the Chinese Room argument and the Ada Lovelace Test is taken as a presupposition, as is the possible emergence of an authentic artificial moral agent with intentional deliberations in a first-person perspective. Thus, the possibility of a computational code capable of giving rise to such emergence is accepted. The main problem of this study is to investigate the philosophical possibility of an artificial ethics, as a result of the will and rationality of an artificial subject, that is, of artificial intelligence as a moral subject. An artificial ethical agent must act from its own characteristics and not according to a predetermined external program. Authentic artificial ethics is internal, not external, to the automaton. A proposed and increasingly accepted model that demonstrates this computational possibility is that of a bottom-up morality, in which the system can independently acquire moral capabilities. This model is close to the Aristotelian ethics of virtues. Another possible route is to combine such a bottom-up computational model with models based on deontology, with their more general formulation of duties and maxims. It is shown that, in at least one case, a viable and autonomous model of artificial morality can be constructed, and there is no unequivocal demonstration that artificial moral agents cannot possess artificial emotions. The conclusion that several programming scientists have reached is that an artificial agency model founded on machine learning, combined with the ethics of virtue, is a natural, cohesive, coherent, integrated and “seamless” path. Thus, there is a coherent, consistent, and well-founded answer indicating that the impossibility of an authentic artificial moral agent has not been proven.
Finally, a responsible ethical theory must consider the concrete possibility of the emergence of full moral agents and all the consequences of this watershed phenomenon in human history.
Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe ("rational" and "free") while also satisfying the perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly, and not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. Wallach pushes for redoubled efforts toward a comprehensive account of ethics to guide machine ethicists on the issue of artificial moral agency. Options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, or give up on the possibility and "muddle through" regardless. This series of papers pursues the first option, meets Tonkens' challenge and pursues Wallach's ends through Beavers' proposed means, by "landscaping" traditional moral theory into a comprehensive account of moral agency. This first paper establishes the challenge and sets out the tradition in terms of which an adequate solution should be assessed. The next paper in this series responds to the challenge in Kantian terms, and shows that a Kantian AMA is not only a possibility for machine ethics research, but a necessary one.
2020
In this paper, one of my primary objectives is to analyze why adopting particular machine-learning techniques and using a moral AI as an adviser is an insufficient condition for eradicating racist human attitudes. By outlining some difficulties in justifying what artificial "explicit ethical agents" in Moor's sense should look like, I explore why, even if the development of machine-learning techniques can be accepted in epistemic terms, it does not follow that the techniques in question will have a positive impact in changing immoral human behavior.