Papers by Andreas Matthias
Science and Engineering Ethics
Handbook of Research on Technoethics, 2000
Creation of autonomously acting, learning artifacts has reached a point where humans can no longer justly be held responsible for the actions of certain types of machines. Such machines learn during operation, continuously changing their original behaviour in ways the initial manufacturer cannot control. They act without effective supervision and hold an epistemic advantage over humans: their extended sensory apparatus, superior processing speed, and perfect memory make it impossible for humans to supervise the machine's decisions in real time. We survey the techniques of artificial intelligence engineering, showing that the role of the programmer of such machines has shifted from that of a coder (who has complete control over the program in the machine) to that of a mere creator of software organisms which evolve and develop by themselves. We then discuss the problem of ascribing responsibility to such machines, trying to avoid the metaphysical pitfalls of the mind-body problem. We propose five criteria for purely legal responsibility, which accord both with the findings of contemporary analytic philosophy and with legal practice. We suggest that Stahl's (2006) concept of "quasi-responsibility" might also be a way to handle the responsibility gap.
Kennedy Institute of Ethics Journal, 2015
Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to hold an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This raises the question whether misleading or even actively deceiving the user of an autonomous artifact about the machine's capabilities is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.
Law, Innovation and Technology, 2011
igi-global.com, 2009
Ethics and Information Technology, 2004
Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms, and agent architectures, create a new situation in which the manufacturer/operator is in principle no longer capable of predicting the machine's future behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between no longer using this kind of machine (which is not a realistic option) and facing a responsibility gap that cannot be bridged by traditional concepts of responsibility ascription.