What Social Robots Can and Should Do. Robophilosophy 2016 / TRANSOR 2016, Proceedings. Eds: Johanna Seibt, Marco Nørskov, Søren Schack Andersen, 2016
We argue that social robots should be designed to behave similarly to humans, and furthermore that social norms constitute the core of human interaction. Whether robots can be designed to behave in human-like ways turns on whether they can be designed to organize and coordinate their behavior with others' social expectations. We suggest that social norms regulate interaction in real time, where agents rely on dynamic information about their own and others' attention, intention and emotion to perform social tasks.
Papers by Ingar Brinck
starts to develop in the first year, emerging as a practical skill that depends on participatory engagement. Three sources of compliance are discussed: emotional engagement, nonverbal agreement, and conversation.
and value instability in decision-making that concerns morally controversial issues. Value uncertainty and value instability are distinguished from moral uncertainty, and several types of value uncertainty and value instability are defined and discussed. The relations between value uncertainty and value instability are explored, and value uncertainty is illustrated with examples drawn from the social sciences, medicine and everyday life. Several types of factors producing value uncertainty and/or value instability are then identified. They are grouped into three categories and discussed under the headings ‘value framing’, ‘ambivalence’ and ‘lack of self-knowledge’. The paper then discusses the role of value uncertainty in decision-making. The concluding remarks summarize what has been achieved and what remains to be done in this area.