Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
Belonging to a group is a natural need for human beings. Being left out and rejected represents a negative event, which can cause discomfort and stress to the excluded person and to other group members. Social robots have been shown to be promising tools for studying influence in group interactions, providing valuable insights into how human group dynamics can be modeled, replicated, and leveraged. In this work, we aim to study the effect of being excluded by a social robot in a teenager-robot interaction. We propose a conversational turn-taking game, inspired by the Cyberball paradigm and rooted in social exclusion mechanisms, to explore how the humanoid robot iCub can affect group dynamics by excluding one of the group members. Preliminary results show that the included player tries to re-engage the one excluded by the robot. We interpret this dynamic as the included player's attempt to compensate for the exclusion and reestablish a balance, in line with findings from human-human interaction research. Furthermore, the paradigm we developed seems a suitable tool for researching social influence in different Human-Robot Interaction contexts. CCS CONCEPTS • Human-centered computing → Scenario-based design; User studies; • Applied computing → Psychology.
Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
Human perception and motion are continuously influenced by prior experience. However, when humans have to share the same space and time, different prior experiences can lead to opposing percepts and actions, and consequently to failures of coordination. This study presents a novel experimental setup that aims at exploring the interplay between human perceptual mechanisms and motor strategies during human-robot interaction. To achieve this goal, we developed a complex system enabling an interactive perceptual task in which the participant has to perceive and estimate temporal durations together with iCub, with the goal of coordinating with the robotic partner. Results show that the experimental setup can continuously monitor how participants implement their perceptual and motor behavior during the interaction with a controllable interacting agent. It will therefore be possible to produce quantitative models describing the interplay between perceptual and motor adaptation during an interaction.
The ability to recognize human partners is an important social skill for building personalized and long-term human-robot interactions, especially in scenarios like education, care-giving, and rehabilitation. Faces and voices constitute two important sources of information enabling artificial systems to reliably recognize individuals. Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task. However, when those networks are applied to different and unprecedented scenarios not covered by the training set, they can suffer a drop in performance. For example, with robotic platforms in ever-changing and realistic environments, where new sensory evidence is constantly acquired, the performance of those models degrades. One solution is to make robots learn from their first-hand sensory data with self-supervision. This allows coping with the inherent variability of the data gathered in realistic and interactive contexts. To this aim, we propose a cognitive architecture integrating low-level perceptual processes with a spatial working memory mechanism. The architecture autonomously organizes the robot's sensory experience into a structured dataset suitable for human recognition. Our results demonstrate the effectiveness of our architecture and show that it is a promising solution in the quest to make robots more autonomous in their learning process.
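To make the working-memory idea concrete, here is a minimal, purely illustrative Python sketch of how a spatial working memory might group incoming face/voice observations by position and accumulate them into a pseudo-labeled dataset. All class, method, and parameter names are our own assumptions, not the paper's implementation.

```python
# Hypothetical sketch: a spatial working memory that organizes perceptual
# detections into a pseudo-labeled dataset for self-supervised recognition
# training. Names and thresholds are illustrative, not from the paper.
from collections import defaultdict

class SpatialWorkingMemory:
    def __init__(self, distance_threshold=0.5):
        self.distance_threshold = distance_threshold  # metres (invented value)
        self.tracks = {}                  # track_id -> last known (x, y, z)
        self.samples = defaultdict(list)  # track_id -> list of feature vectors
        self._next_id = 0

    def update(self, position, features):
        """Assign an observation to a spatially close existing track,
        otherwise open a new track (a new person hypothesis)."""
        for tid, last_pos in self.tracks.items():
            if self._distance(position, last_pos) < self.distance_threshold:
                self.tracks[tid] = position
                self.samples[tid].append(features)
                return tid
        tid = self._next_id
        self._next_id += 1
        self.tracks[tid] = position
        self.samples[tid].append(features)
        return tid

    @staticmethod
    def _distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    def dataset(self):
        """Return (features, pseudo-label) pairs usable as a training set."""
        return [(f, tid) for tid, feats in self.samples.items() for f in feats]
```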
Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization
Personalization and adaptation are key aspects of designing and developing effective and acceptable social robot tutors. They make it possible to tailor interactions to individual needs and preferences, improve engagement and sense of familiarity over time, and facilitate trust between the user and the robot. To foster the development of autonomous adaptive social robots, we present a tutoring framework that recognizes new or previously met pupils and adapts the training experience through feedback about real-time performance and the tailoring of exercises and interaction based on users' past encounters. The framework is suitable for multiparty scenarios, allowing for deployment in real-world tutoring contexts unfolding in groups. A preliminary evaluation of the framework during pilot studies and demonstration events in yoga-based training and game scenarios showed that it could be adapted to different contexts and populations, including children and adolescents. The robot's ability to recognize people and personalize its behavior based on the performance of previous sessions was appreciated by participants, who reported feeling followed and cared for by the robot. Overall, the framework can support autonomous robot-led training by allowing monitoring of both daily performance and improvements over multiple encounters. It also lends itself to further expansion toward more complex behaviors, with the organic and modular inclusion of more advanced social capabilities, such as redirecting the robot's attention to different learners or estimating participant engagement. CCS CONCEPTS • Computer systems organization → Robotics; • Computing methodologies → Artificial intelligence.
Investigating emerging adults' expectations about the development of the next generation of robots is a fundamental challenge for narrowing the gap between expectations and actual technological advances, a gap that can potentially impact the effectiveness of future interactions between humans and robots. Furthermore, the literature highlights the important role played by negative attitudes toward robots in setting people's expectations. To better explore these expectations, we administered the Scale for Robotic Needs and performed a latent profile analysis to describe different expectation profiles about the development of future robots. The profiles identified through this methodology can be placed along a continuum of robot humanization: from a group that desires mainly technical features to a group that imagines a humanized robot in the future. Finally, the analysis of emerging adults' knowledge about robots and their negative attitudes toward robots allowed us to understand how these factors shape their expectations.
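For readers unfamiliar with latent profile analysis, the hedged Python sketch below conveys the general idea using a Gaussian mixture model, a close relative of LPA (which is more commonly run with dedicated R packages). The data matrix and candidate profile counts are placeholders, not the study's data.

```python
# Illustrative only: latent profile analysis amounts to fitting mixture
# models with different numbers of profiles and selecting one by an
# information criterion such as BIC. `X` is placeholder data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))              # stand-in for scale responses

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 7):                      # candidate numbers of profiles
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bic = gmm.bic(X)
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gmm

profiles = best_model.predict(X)           # profile membership per respondent
print(f"selected {best_k} profiles (BIC={best_bic:.1f})")
```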
When interacting with others in everyday life, we prefer the company of those who share our desire for closeness and intimacy (or lack thereof), since this determines whether our interaction will be more or less pleasant. This sort of compatibility can be inferred from our innate attachment style. The attachment style represents our characteristic way of thinking, feeling, and behaving in close relationships, and beyond its behavioural expression, it can also affect us biologically via our hormonal dynamics. When looking for ways to enrich human-robot interaction (HRI), one potential solution is to enable robots to understand their partners' attachment style, which could improve their perception of their partners and help them behave adaptively during the interaction. We propose to use the relationship between attachment style and the hormone cortisol to endow the humanoid robot iCub with an internal cortisol-inspired framework that allows it to infer a participant's attachment style from the effect of the interaction on its cortisol levels (referred to as R-cortisol). In this work, we present our cognitive framework and its validation during the replication of a well-known paradigm on hormonal modulation in human-human interaction (HHI): the Still Face paradigm.
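The abstract does not spell out the R-cortisol dynamics, so the following is only a guessed first-order model illustrating how such an internal signal could decay toward a baseline while rising in response to stressful interaction events; all constants and names are invented for illustration.

```python
# Hypothetical dynamics for the robot's internal "R-cortisol" signal:
# decay toward a baseline plus increments driven by stressful events.
def r_cortisol_step(level, stress_input, baseline=1.0, decay=0.05,
                    gain=0.3, dt=1.0):
    """One Euler step of dL/dt = -decay * (L - baseline) + gain * stress."""
    return level + dt * (-decay * (level - baseline) + gain * stress_input)

level = 1.0
# e.g., a Still-Face-like sequence: interaction, unresponsive phase, reunion
for t, stress in enumerate([0, 0, 1, 1, 0, 0, 0]):
    level = r_cortisol_step(level, stress)
    print(f"t={t}  R-cortisol={level:.3f}")
```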
To most people, magicians seem to surpass human abilities, combining skill and deception to perform mesmerizing tricks. Robots performing magic tricks could similarly fascinate and engage the audience, potentially establishing a novel rapport with human partners. However, robot magicians are usually operated via Wizard of Oz. This study presents an autonomous framework for performing a magic trick in a quick and game-like human-robot interaction. The iCub humanoid robot plays the role of a magician in a card game, autonomously inferring which card the human partner is lying about. We exploited cognitive load assessment via pupil reading to infer the mental state of the player. The validation results show an accuracy of 90.9% and the possibility of simplifying the game to improve its portability. This suggests the feasibility of our approach and paves the way toward a real-world application of the game.
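As a rough illustration of the inference step, assuming (as the abstract suggests) that lying about a card raises cognitive load and hence pupil dilation while that card is questioned, a minimal decision rule could compare baseline-corrected dilation across card epochs. All numbers and names below are invented, not the paper's pipeline.

```python
# Minimal decision rule in the spirit of the paper: flag the card whose
# questioning epoch shows the largest baseline-corrected pupil dilation.
import numpy as np

def infer_lied_card(pupil_by_card, baseline):
    """Return the card with the largest mean dilation above baseline."""
    dilation = {card: np.mean(d) - baseline
                for card, d in pupil_by_card.items()}
    return max(dilation, key=dilation.get)

baseline = 3.1  # mm, rest-phase pupil diameter (illustrative values)
epochs = {
    "ace":   [3.2, 3.1, 3.2],
    "king":  [3.6, 3.7, 3.5],   # elevated dilation -> likely the lie
    "queen": [3.0, 3.2, 3.1],
}
print(infer_lied_card(epochs, baseline))  # -> "king"
```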
Modern robotics is interested in developing humanoid robots with meta-cognitive capabilities, in order to create systems that can deal efficiently with novel situations and unforeseen inputs. Given the relational nature of human beings, and with a glimpse into the future of assistive robots, it seems relevant to start thinking about the nature of the interaction with such robots, which are increasingly human-like not only in appearance but also in behavior. The question posed in this abstract concerns the possibility of ascribing to the robot not only a mind but a more profound dimension: a Self.
We propose a self-supervised generative model for addressing the perspective translation problem. In particular, we focus on third-person to first-person view translation as the primary and most common form of perspective translation in human-robot interaction. Evidence shows that this skill develops in children within the very first months of life, and it has also been found in many animal species. Endowing robots with perspective translation would be an important contribution to research fields such as imitation learning and action understanding. We trained our model on simple RGB videos representing actions seen from different perspectives, specifically the first person (ego-vision) and third person (allo-vision). We demonstrate that the learned model generates visually consistent results. We also show that our solution automatically learns an embedded representation of the action that can be useful for tasks like action/scene recognition.
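A toy sketch of the general idea, not the paper's architecture: a convolutional encoder-decoder trained with a self-supervised reconstruction loss to map a third-person frame onto its paired first-person frame, with the bottleneck playing the role of the embedded action representation mentioned above. Layer sizes, image shapes, and names are invented.

```python
# Toy third-person -> first-person translator (PyTorch); the bottleneck
# `z` stands in for the learned action embedding. Purely illustrative.
import torch
import torch.nn as nn

class Third2First(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(             # 3x64x64 -> 64x16x16
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(             # 64x16x16 -> 3x64x64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, third_person):
        z = self.encoder(third_person)    # embedded action representation
        return self.decoder(z), z

model = Third2First()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
third = torch.rand(8, 3, 64, 64)   # placeholder paired video frames
first = torch.rand(8, 3, 64, 64)
pred, z = model(third)
loss = nn.functional.mse_loss(pred, first)  # self-supervised reconstruction
loss.backward()
opt.step()
```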
Human motion understanding has been studied for decades, yet it remains a challenging research field that attracts interest from different disciplines. This book aims to provide a comprehensive view of the topic, closing the loop between perception and action: it starts from humans' action perception skills and then moves to computational models of motion perception and control adopted in robotics. To achieve this aim, the book collects contributions from experts in different fields, spanning neuroscience, computer vision, and robotics. The first part focuses on the features of human motion perception and its neural underpinnings. The second part considers motion perception from the computational perspective, providing a view of cutting-edge machine learning solutions. Finally, the third part takes into account the implications for robotics, exploring how motion and gestures should be generated by communicative artificial agents to establish intuitive and effective human-robot interaction.
The high demand for autonomous and flexible HRI implies the necessity of deploying Machine Learning (ML) mechanisms in robot control. However, the use of ML techniques, such as Reinforcement Learning (RL), makes the robot's behaviour during the learning process opaque to the observing user. In this work, we propose an emotional model to improve transparency in RL tasks for human-robot collaborative scenarios. The architecture we propose supports the RL algorithm with an emotional model able both to receive human feedback and to exhibit emotional responses based on the learning process. The model is entirely based on the Temporal Difference (TD) error. The architecture was tested in an isolated laboratory with a simple setup. The results highlight that showing its internal state through an emotional response is enough to make a robot transparent to its human teacher. People also prefer to interact with a responsive robot because they are used to understanding their intent...
The high demand for autonomous human-robot interaction (HRI), combined with the potential of machine learning (ML) techniques, allows us to deploy ML mechanisms in robot control. However, the use of ML can make robots' behavior unclear to the observer during the learning phase. Recently, transparency in HRI has been investigated as a way to make such interactions more comprehensible. In this work, we propose a model to improve transparency during reinforcement learning (RL) tasks in HRI scenarios: the model supports transparency by having the robot show nonverbal emotional-behavioral cues. Our model treats human feedback as the reward of the RL algorithm and presents emotional-behavioral responses based on the progress of the robot's learning; it is driven solely by the temporal-difference error. We tested the architecture in a teaching scenario with the iCub humanoid robot. The results highlight that when the robot expresses its emotional-behavioral response, the human teacher is better able to understand its learning process. Furthermore, people prefer to interact with an expressive robot rather than a mechanical one. Movement-based signals proved to be more effective than facial expressions in revealing the internal state of the robot. In particular, gaze movements were effective in showing the robot's next intentions. In contrast, communicating uncertainty through robot movements sometimes led to action misinterpretation, highlighting the importance of balancing transparency against the legibility of the robot's goal. We also found a reliable temporal window in which to register teachers' feedback, which can be used by the robot as a reward.
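A minimal sketch of the mechanism described above, under our own assumptions about the cue mapping: a tabular TD update uses the teacher's feedback as the reward, and the sign and magnitude of the TD error select a nonverbal emotional cue. The cue labels and threshold are invented for illustration.

```python
# Tabular TD learning where the reward is the human teacher's feedback
# and the TD error drives an emotional-behavioral response (illustrative).
from collections import defaultdict

Q = defaultdict(float)          # state-action values
alpha, gamma = 0.1, 0.9

def emotional_cue(td_error, threshold=0.2):
    if td_error > threshold:
        return "happy gesture"              # better than expected
    if td_error < -threshold:
        return "sad gesture"                # worse than expected
    return "neutral / uncertain gaze"

def step(state, action, next_state, human_feedback, actions):
    """One TD update; returns the cue the robot would display."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_error = human_feedback + gamma * best_next - Q[(state, action)]
    Q[(state, action)] += alpha * td_error
    return emotional_cue(td_error)

actions = ["left", "right"]
print(step("s0", "left", "s1", human_feedback=+1.0, actions=actions))
```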
2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), 2018
Lie detection is a necessary skill for a variety of social professions, including teachers, reporters, therapists, and law enforcement officers. Autonomous systems and robots should acquire this skill to support professionals in numerous working contexts. Inspired by the literature on human-human interaction, this work investigates whether the behavioral cues associated with lying, including eye movements and temporal response features, are also apparent during human-humanoid interaction and can be leveraged by the robot to detect deception. The results highlight strong similarities between lying behavior toward humans and toward the robot. Further, the study proposes an implementation of a machine learning algorithm that can detect lies with an accuracy of 75% when trained with a dataset collected during human-human and human-robot interaction. Consequently, this work proposes a technological solution for humanoid interviewers that can be trained with knowledge about lie detection and reuse it to counteract deception.
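As a hedged illustration of the kind of classifier the abstract mentions, the sketch below feeds invented behavioral features (gaze-shift counts and response latency) to a standard off-the-shelf model. The real feature set, data, and the reported 75% accuracy come from the paper's interaction corpus, not from this placeholder code.

```python
# Placeholder lie/truth classifier over behavioral cue features; the data
# here is random, so scores will hover near chance, unlike the paper's 75%.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 120
gaze_shifts = rng.poisson(3, n)            # invented cue: gaze shifts per answer
latency_s = rng.normal(1.5, 0.4, n)        # invented cue: response latency (s)
X = np.column_stack([gaze_shifts, latency_s])
y = rng.integers(0, 2, n)                  # 1 = lie, 0 = truth (placeholder)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```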
Trust is essential in human-robot interactions, and at a time when machines are not yet fully reliable, it is important to study how robotic hardware faults can affect the human counterpart. This experiment builds on previous research that studied trust changes in a game-like scenario with the humanoid robot iCub. Several robot hardware failures (validated in a separate online study) were introduced in order to measure changes in trust due to the unreliability of the iCub. A total of 68 participants took part in this study. For half of them, the robot adopted a transparent approach, explaining each failure after it happened. Participants' behaviour was also compared to that of the 61 participants who played the same game with a fully reliable robot in the previous study. Against all expectations, introducing manifest hardware failures does not seem to significantly affect trust, while transparency mainly deteriorates the quality of interaction with the robot.
Humans are very good at interacting and collaborating with each other. This ability is based on mutual understanding and is supported by a continuous exchange of information that is only minimally mediated by language. The majority of messages are covertly embedded in the way the two partners move their eyes and their body. It is this silent, movement-based flow of information that enables seamless coordination. It occurs without the two partners' awareness and the…
Previous research has shown that perceiving one's partner investing effort in a joint action can generate a sense of commitment, leading participants to persist longer despite increasing boredom. The current research extends this finding to human-robot interaction. We implemented a two-player version of the classic snake game which became increasingly boring over the course of each round, and operationalized commitment in terms of how long participants persisted before pressing a 'finish' button to conclude each round. Participants were informed that they would be linked via the internet with their partner, a humanoid robot. Our results reveal that participants persisted longer when they perceived what they believed to be cues of their robot partner's effortful contribution to the joint action. This provides evidence that the perception of a robot partner's effort can elicit a sense of commitment to human-robot interaction.
Proceedings of the 10th International Conference on Computer Vision Theory and Applications, 2015
This paper deals with the problem of estimating the affinity level between different types of human actions observed from different viewpoints. We analyse simple repetitive upper-body human actions with the goal of producing a view-invariant model from simple motion cues inspired by studies on human perception. We adopt a simple descriptor that summarizes the evolution of the spatio-temporal curvature of the trajectories, which we use for evaluating the similarity between action pairs through multi-level matching. We experimentally verified the presence of semantic connections between actions across views, inferring a relations graph that shows such affinities.
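For reference, the curvature of a spatio-temporal trajectory is commonly computed from its first and second derivatives via the standard formula below; the paper's exact descriptor may differ in detail.

```latex
% Standard curvature of a trajectory r(t); the descriptor summarizes how
% this quantity evolves along the motion.
\[
  \kappa(t) \;=\; \frac{\lVert \dot{\mathbf{r}}(t) \times \ddot{\mathbf{r}}(t) \rVert}
                       {\lVert \dot{\mathbf{r}}(t) \rVert^{3}}
\]
```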