Papers by Kostas Karpouzis
Big Data and Cognitive Computing
The goal of this paper is to utilize available big and open data sets to create content for a board and a digital game and implement an educational environment to improve students' familiarity with concepts and relations in the data and, in the process, their academic performance and engagement. To this end, we used Wikipedia data to generate content for a Monopoly clone called Geopoly and designed a game-based learning experiment. Our research examines whether this game had any impact on the students' performance, which is related to identifying implied ranking and grouping mechanisms in the game, whether performance is correlated with interest, and whether performance differs across genders. Student performance and knowledge about the relationships contained in the data improved significantly after playing the game, while the positive correlation between student interest and performance illustrated the relationship between them. This was also verified by a digital version of the game…
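The abstract does not spell out how the open data are turned into board content, so the following is a minimal sketch, assuming the general recipe it implies: rank the records, group them, and price them Monopoly-style. The sample figures, the build_board helper and the grouping/pricing rules are illustrative placeholders, not the Geopoly implementation.

```python
# Minimal sketch (hypothetical): turning an open data set into Monopoly-style
# board content, in the spirit of Geopoly. The real system draws on Wikipedia
# data; here a small hard-coded sample stands in for the harvested records.
from dataclasses import dataclass

@dataclass
class Property:
    name: str
    group: str   # colour group, implied by the data's grouping
    price: int   # implied by the data's ranking

# Placeholder records; a real pipeline would query Wikipedia/DBpedia instead.
COUNTRIES = {
    "Luxembourg": 125_000, "Ireland": 104_000, "Norway": 89_000,
    "Greece": 20_900, "Bulgaria": 13_900, "India": 2_600,
}  # GDP per capita (USD), illustrative values only

def build_board(records: dict[str, float], groups: int = 3) -> list[Property]:
    """Rank records, split them into colour groups and price them by rank."""
    ranked = sorted(records.items(), key=lambda kv: kv[1])   # low -> high
    per_group = max(1, len(ranked) // groups)
    board = []
    for i, (name, _value) in enumerate(ranked):
        board.append(Property(
            name=name,
            group=f"group-{i // per_group + 1}",
            price=60 + 40 * i,   # cheap to expensive, Monopoly-style
        ))
    return board

if __name__ == "__main__":
    for prop in build_board(COUNTRIES):
        print(f"{prop.name:12s} {prop.group:8s} ${prop.price}")
```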
The current work investigates issues of expressivity and personality traits for Embodied Conversational Agents in environments that allow for dynamic interactions with human users. Such environments are defined and modelled with the use of state-of-the-art game engine technology. We focus on generating simple ECA behaviours, comprising facial expressions and gestures, in a well-defined context of non-verbal interaction.
Pattern Recognition Letters, 2010
Journal on Multimodal User Interfaces, 2010
Affective and human-centered computing have attracted considerable attention during the past years, mainly due to the abundance of environments and applications able to exploit and adapt to multimodal input from the users. The combination of facial expressions with prosody information allows us to capture the users' emotional state in an unintrusive manner, relying on the best-performing modality in cases where one modality suffers from noise or bad sensing conditions. In this paper, we describe a multi-cue, dynamic approach to detecting emotion in naturalistic video sequences, where input is taken from nearly real-world situations, contrary to the controlled recording conditions of audiovisual material. Recognition is performed via a recurrent neural network, whose short-term memory and approximation capabilities cater for modeling dynamic events in facial and prosodic expressivity. This approach also differs from existing work in that it models user expressivity using a dimensional representation, instead of detecting discrete 'universal emotions', which are scarce in everyday human…
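The abstract names the ingredients (fused facial and prosodic expressivity features, a recurrent network, a dimensional output) without implementation detail, so the following is a minimal PyTorch-style sketch under those assumptions; the feature dimensions, the plain Elman RNN and the valence/arousal head are illustrative choices, not the authors' architecture.

```python
# Minimal sketch: a recurrent network mapping fused facial + prosodic feature
# sequences to a dimensional (valence/arousal) emotion estimate. Layer sizes
# and feature dimensionalities are assumptions for illustration.
import torch
import torch.nn as nn

class DimensionalEmotionRNN(nn.Module):
    def __init__(self, face_dim=12, prosody_dim=6, hidden=32):
        super().__init__()
        # Feature-level fusion: facial and prosodic cues concatenated per frame.
        self.rnn = nn.RNN(face_dim + prosody_dim, hidden, batch_first=True)
        # Two continuous outputs: valence and arousal (dimensional labels).
        self.head = nn.Linear(hidden, 2)

    def forward(self, face_feats, prosody_feats):
        x = torch.cat([face_feats, prosody_feats], dim=-1)  # (B, T, F)
        _, h_n = self.rnn(x)                                 # last hidden state
        return torch.tanh(self.head(h_n[-1]))                # values in [-1, 1]

# Usage on a dummy 3-second clip sampled at 25 frames/s:
model = DimensionalEmotionRNN()
face = torch.randn(1, 75, 12)     # e.g. facial expressivity features per frame
prosody = torch.randn(1, 75, 6)   # e.g. pitch/energy statistics per frame
print(model(face, prosody).shape)  # torch.Size([1, 2])
```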
This article presents preliminary research work on defining and extracting full-body expressivity features within the framework of using natural interaction in games and game-based learning. Behavior expressiveness is an integral part of the communication process, since it can provide information on the current emotional state and personality of the interlocutor, as well as on their performance when the aim of the interaction is measurable. Many researchers have studied characteristics of human movement and coded them in binary categories such as slow/fast, restricted/wide, weak/strong, small/big, and unpleasant/pleasant in order to properly model expressivity. Expressivity dimensions are selected as the most complete approach to body expressivity modeling, since they cover the entire spectrum of expressivity parameters related to emotion and affect. Five parameters derived from the field of expressivity synthesis have been computationally defined following different approaches; comparing these approaches aims to identify the most suitable one for representing each expressivity feature.
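As a rough illustration of what computationally defining such parameters can look like, the snippet below sketches two dimensions commonly named in the expressivity-synthesis literature (overall activation and spatial extent) from a tracked hand trajectory; the formulas are assumptions made for the example and not the ones compared in the paper.

```python
# Illustrative definitions of two expressivity dimensions computed from a
# tracked hand trajectory; both formulas are assumptions for illustration.
import numpy as np

def overall_activation(hand_positions: np.ndarray, fps: float = 30.0) -> float:
    """Mean magnitude of hand velocity over a gesture (metres per second).

    hand_positions: array of shape (T, 3), one 3-D hand position per frame.
    """
    velocities = np.diff(hand_positions, axis=0) * fps   # (T-1, 3)
    return float(np.linalg.norm(velocities, axis=1).mean())

def spatial_extent(hand_positions: np.ndarray) -> float:
    """Diagonal of the bounding box swept by the hand (how 'wide' the gesture is)."""
    span = hand_positions.max(axis=0) - hand_positions.min(axis=0)
    return float(np.linalg.norm(span))

# Usage with synthetic data standing in for tracked skeleton joints:
t = np.linspace(0, 2 * np.pi, 60)
trajectory = np.stack([0.3 * np.sin(t), 0.1 * t, np.zeros_like(t)], axis=1)
print(overall_activation(trajectory), spatial_extent(trajectory))
```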
Proceedings of the international conference on Multimedia - MM '10, 2010
The 3rd International Workshop on Affective Interaction in Natural Environments, AFFINE, follows a number of successful AFFINE workshops and events commencing in 2008. A key aim of AFFINE is the identification and investigation of significant open issues in real-time, affect-aware applications 'in the wild', and especially in embodied interaction, for example with robots or virtual agents. AFFINE seeks to bring together researchers working on the real-time interpretation of user behaviour with those who are concerned with social …
Very Low Bitrate Video Coding, 2000
A lifelike human face can enhance interactive applications by providing straightforward feedback to and from the users and stimulating emotional responses from them. An expressive, realistic avatar should not "express himself" only within the narrow confines of the six archetypal expressions. In this paper, we present a system which generates intermediate expression profiles (sets of FAPs) by combining profiles of the six…
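A small sketch of the core idea of mixing archetypal profiles into an intermediate one, assuming a profile is a mapping from FAP index to intensity; the linear blend and the placeholder FAP values below are illustrative, not the combination rules of the presented system.

```python
# Minimal sketch: blend two archetypal expression profiles (FAP index ->
# intensity) into an intermediate profile. Values and the blend rule are
# placeholders for illustration.
def blend_profiles(profile_a: dict[int, float], profile_b: dict[int, float],
                   weight_a: float = 0.5) -> dict[int, float]:
    """Weighted combination of two FAP profiles (missing FAPs count as 0)."""
    faps = set(profile_a) | set(profile_b)
    return {f: weight_a * profile_a.get(f, 0.0)
               + (1.0 - weight_a) * profile_b.get(f, 0.0) for f in faps}

# Placeholder profiles (FAP number -> normalised intensity), illustrative only.
joy = {3: 0.6, 5: 0.4, 33: 0.8, 41: 0.7}
surprise = {3: 0.9, 5: 0.9, 31: 0.6, 35: 0.6}
pleasant_surprise = blend_profiles(joy, surprise, weight_a=0.4)
print(pleasant_surprise)
```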
IEEE Transactions on Affective Computing, 2014
Kostas Karpouzis (M'07-SM'12) is an associate researcher at the Image, Video and Multimedia Systems Lab of the Institute of Communication and Computer Systems in Athens, Greece. His research interests include natural interaction, games-based learning and serious games, and emotion recognition. He was an active contributor to "The HUMAINE Handbook: Emotion-Oriented Systems" and the "Blueprint for Affective Computing".
Lecture Notes in Computer Science, 2011
Computer games are unique elicitors of emotion. Recognition of player emotion, dynamic construction of affective player models, and modelling emotions in non-playing characters represent challenging areas of research and practice at the crossroads of cognitive and affective science, psychology, artificial intelligence and human-computer interaction. Techniques from AI and HCI can be used to recognize player affective states and to model emotion in non-playing characters. Multiple input modalities provide novel means for measuring player satisfaction and engagement. These data can then be used to adapt the gameplay to the player's state, to maximize player engagement and to close the affective game loop. The Emotion in Games workshop (EmoGames 2011, http://sirenproject.eu/content/acii-2011-workshop-emotion-games) will bring together researchers and practitioners in affective computing, user experience research, social psychology and cognition, machine learning, AI and HCI, to explore topics in player experience research, affect induction, sensing and modelling, affect-driven game adaptation, and modelling of emotion in non-playing characters. It will also provide new insights on how gaming can be used as a research platform to induce and capture affective interactions with single and multiple users, and to model affect- and behaviour-related concepts, helping to operationalize concepts such as flow and engagement. The workshop will include a keynote, paper and poster presentations, and panel discussions. Selected papers will appear in a special issue of the IEEE Transactions on Affective Computing, "Emotion in Games", in mid-2013. The EmoGames 2011 workshop is organized in coordination with the newly formed 'Emotion in Games' Special Interest Group (SIG) of the Humaine Association and the IEEE Computational Intelligence Society (CIS) Task Force on Player Satisfaction Modelling. We would like to thank all participants, as well as the members of the Program Committee, for their reviews of the workshop submissions.
Computer Communications, 2007
The MELISA system is a distributed platform for multi-platform sports content broadcasting, providing end users with a wide range of real-time interactive services during the sport event, such as statistics, visual aids or enhancements, betting, and user- and context-specific advertisements. In this paper, we present the revamped design of the complete system and the implementation of a middleware entity…
Wikipedia defines Human-Computer Interaction (HCI) as being targeted "… to improve the interaction between users and computers by making computers more usable and receptive to the user's needs". During the recent decades, especially since the advent of the term 'affective computing' by R. Picard, computing is no longer considered a 'number crunching' discipline, but should be thought of as an interfacing means between humans and machines, and sometimes even between humans alone. To achieve this, application …
Journal on Multimodal User Interfaces, 2010
A vital requirement for social robots and virtual agents is the ability to perceive and interpret social, affective expressions and states of humans, so as to be able to engage in and behave appropriately during sustained social interactions [1, 2]. Examples include ensuring that the user is interested in maintaining the interaction or providing suitable empathic responses. A fundamental component in these "mentalising" and "empathising" capabilities is the interpretation of human behaviour [3] from sensory input, which must be conducted …
2005 IEEE International Conference on Multimedia and Expo, 2005
The paper presents the framework of a special session that aims at investigating the best possible techniques for multimodal emotion recognition and expressivity analysis in human-computer interaction, based on a common psychological background. The session mainly deals with audio and visual emotion analysis, with physiological signal analysis serving as supplementary to these modalities. Specific topics that are examined include the extraction of emotional features and signs from each modality separately, the integration of the outputs of single-mode emotion analysis systems, and the recognition of the user's emotional state, taking into account emotion models and existing knowledge or demands from both the analysis and synthesis perspective. Various labelling schemes, the supply of accordingly labelled test databases, as well as the synthesis of expressive avatars and affective interactions, are issues brought up and examined in the proposed framework.
Cognitive Technologies, 2010
Emotional intelligence is an indispensable facet of human intelligence and one of the most important factors for a successful social life. Endowing machines with this kind of intelligence towards affective human-machine interaction, however, is not an easy task. It becomes more complex given that human beings use several modalities jointly to interpret affective states, since emotion affects almost all modes: audiovisual (facial expression, voice, gesture, posture, etc.), physiological (respiration, skin temperature, etc.), and contextual (goal, preference, environment, social situation, etc.) states. Compared to common unimodal approaches, many specific problems arise in multimodal emotion recognition, especially concerning the fusion architecture of the multimodal information. In this chapter, we first give a short review of these problems and then present research results of various multimodal architectures based on the combined analysis of facial expression, speech, and physiological signals. Lastly, we introduce the design of an adaptive neural network classifier that is capable of deciding whether adaptation is necessary in response to environmental changes.
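As an illustration of the decision-level end of the fusion-architecture spectrum the chapter discusses, here is a small sketch that combines per-modality posteriors and flags low-confidence decisions as a cue for adaptation; the weights, emotion classes and threshold are assumed for the example and are not taken from the chapter.

```python
# Decision-level fusion sketch: weighted combination of per-modality class
# posteriors, plus a simple low-confidence trigger standing in for the
# adaptation decision. All numbers are illustrative.
import numpy as np

EMOTIONS = ["neutral", "happy", "angry", "sad"]

def fuse(posteriors: dict[str, np.ndarray],
         weights: dict[str, float]) -> np.ndarray:
    """Weighted sum of per-modality class posteriors, renormalised."""
    fused = sum(weights[m] * p for m, p in posteriors.items())
    return fused / fused.sum()

def needs_adaptation(fused: np.ndarray, threshold: float = 0.4) -> bool:
    """Flag low-confidence decisions as a cue that conditions have changed."""
    return float(fused.max()) < threshold

posteriors = {
    "face":       np.array([0.10, 0.70, 0.10, 0.10]),
    "speech":     np.array([0.20, 0.50, 0.20, 0.10]),
    "physiology": np.array([0.25, 0.40, 0.25, 0.10]),
}
weights = {"face": 0.5, "speech": 0.3, "physiology": 0.2}
fused = fuse(posteriors, weights)
print(EMOTIONS[int(fused.argmax())], needs_adaptation(fused))
```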
This study presents an integrated approach to locating and presenting the medical practitioner with salient regions in a CT scan when focusing on the area of the liver. A number of image processing tasks are performed in successive scans to extract areas with a different texture than that of the greater part of the organ. In general, these areas do not always correspond to pathological patterns, but may be the result of noise in the scanned image or related to veins passing through the tissue. The result of the algorithm is the original image with a mask indicating these regions, so that the attention of the medical practitioner is drawn to them for further examination. The algorithm also calculates a measure of confidence of the system, with respect to the extraction of the salient region, based on the fact that a region with a similar pattern is also located in successive scans. This essentially represents the hypothesis that the volume of both pathological patterns and blood vessels …
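A rough sketch of the two ideas in the abstract: flag regions whose intensity statistics depart from the bulk of the organ, then score confidence by whether a similar region reappears in a neighbouring slice. The simple median-deviation test below stands in for the paper's actual texture analysis and is not its algorithm.

```python
# Sketch: salient-region mask per slice plus a cross-slice consistency score.
# The deviation test and the IoU-based confidence are illustrative assumptions.
import numpy as np

def salient_mask(slice_img: np.ndarray, k: float = 3.5) -> np.ndarray:
    """Mark pixels deviating strongly from the organ's typical intensity."""
    med = np.median(slice_img)
    mad = np.median(np.abs(slice_img - med))
    return np.abs(slice_img - med) > k * (mad + 1e-6)

def cross_slice_confidence(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Overlap (IoU) of salient regions in successive slices, in [0, 1]."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 0.0

# Usage on synthetic slices with a bright blob appearing in both:
a = np.random.normal(100, 2, (64, 64))
a[20:28, 20:28] += 40
b = np.random.normal(100, 2, (64, 64))
b[21:29, 21:29] += 40
print(cross_slice_confidence(salient_mask(a), salient_mask(b)))
```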
Neural …, 2005
Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man-machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state …
Influence of the manual reaching preparation movement on visuo-spatial attention during a visual search task (Alexandre Coutté & Gérard Olivier)
Finding the visual information used in driving around a bend: An experimental approach