Affective Computing: A Social Aspect
Madhusudan
Department of Computer Science
Himachal Pradesh University, Shimla
[email protected]

Dr. Aman Kumar Sharma
Department of Computer Science
Himachal Pradesh University, Shimla
[email protected]
Abstract: Recent developments in the field of Human-Computer Interaction have shifted the focus of design towards a user-centered rather than a computer-centered approach. This has led to interfaces that are highly effective, intelligent and adaptive, and that can adjust themselves to the user's behavioral changes. In this study, the interaction challenges faced by disabled people are listed, and possible alternatives based on intelligent and affective devices are proposed for such users.
Keywords: HDI, human computing, affective interfaces, intelligent
interfaces, emotions.
I. INTRODUCTION
Recent developments in the field of Human-Computer Interaction (HCI) and affective computing have shifted design paradigms, of both hardware and software, towards a user-centric approach. This new aspect of computing is termed human computing, where the development cycle of machines incorporates user factors. During interaction with machines, data is acquired, analysed and utilized to produce better designs; this has emerged as a distinct field known as Human-Data Interaction (HDI). The analysis of this data has revealed several facts about the impact of interfaces on perceived usage, perceived ease of use, job performance, and user satisfaction levels [1].
The last two decades of computer science research have seen computing machines converge from mainframes to Personal Computers (PCs) and, ultimately, to mobile devices. The services once provided by large PCs and laptops are now available on small-scale computing machines such as tablets and smartphones [2]. The intensity with which computing machines have penetrated our lives and day-to-day activities has presented scientists with several new challenges in the design of software and related hardware devices. The convergence of machines has also contributed to a new field of research called Big Data. The network of internet-oriented devices and applications has grown manyfold in the last three years; reports have shown that 80% of the data on the internet has been accumulated in the last three years alone [5]. On the one hand, this confluence of information technology presents challenges of storage, security, and data management; on the other hand, this variety-rich, high-velocity data can be structured and analysed to gain useful insights. The techniques used to store and manage such voluminous data are collectively called Big Data, and the techniques used to analyse it for new data products and information needs are collectively called Data Science [6].
Although many new applications and software systems have been developed to make life easier and to automate our daily tasks, most of them are prone to failure for several reasons. The most prominent reason for these failures is the gap in communicating or understanding the true requirements and the actual performance levels of interaction in the designed machines. Another reason is the designer-oriented, rather than user-centred, approach to software and interface development [4]. There are several hidden complexities in carrying the gathered requirements into the design and implementation process, such as the inefficiency of users in conveying requirements, the inability of business analysts to understand user requirements, the non-feasibility of elicited requirements, the constraints of the computing machines on which the software is designed, and several other issues [7]. Nevertheless, if we take a close look at the available software, we can conclude that, as far as typical users are concerned, most software is usable and sufficient to fulfil the user's tasks, though with some effort.
The real success and aim of human computing will be achieved only if these machines are able to serve the information needs of almost all people in the world, regardless of their abilities: whether or not they are computer literate, whether or not they are disabled, and regardless of their background. There are several possibilities for making computing machines accessible to different types of people: designing simple and easy interfaces that can be operated in natural language, and designing the flow of software according to the users' psychological understanding and perception of the computing device, hiding the underlying complexities of its architecture and design. On the other hand, if we consider differently-abled people, or people with disabilities, the computing machines available to them are not up to the mark; that is, they are not capable of understanding the interaction needs and capabilities of disabled people, and they suffer from design issues and assumptions about how these users understand computing devices. The psychological levels of operation are quite different in disabled people as compared to other users. As far as our country is concerned, there are about 30 million disabled people, and the available design paradigms are not capable of capturing their real requirements and needs. Disabilities are of different types; in this study we consider physically disabled people only, since the sphere of mentally challenged people is entirely different and needs considerable further research to incorporate their design needs [8][9].
Citizens, whether abled or disabled, all contribute to the development of the nation, though their contributions are on varied scales. All of us contribute, directly or indirectly, to the growth or decline of the nation; hence, instead of neglecting the contribution of people with disabilities, we can adapt the design of interfaces and interactions to accommodate the needs of such users and provide them with better satisfaction levels, producing better performance. If we want computing to emerge as a social benefit and as true human computing, we need to understand the difficulties physically challenged people face in interacting with computing devices and in drawing their information needs from them. In this study, Section II highlights the different types of disabilities and the challenges they present. Section III describes interaction techniques using intelligent devices and affective interfaces for enhancing the performance levels of disabled people. Section IV outlines an architecture for real-time affective computing, and the conclusion is presented in Section V.
II. DISABILITY: DEFINITION & TYPES
Disability is defined as an "activity limitation or participation restriction associated with a physical or mental condition or health problem". Persons with disabilities often report difficulties with daily living activities or indicate that a physical or mental condition or health problem reduces their participation in society [10][11]. Though persons with disabilities display the same range and levels of interests, talents, aspirations and concerns as any other group, their disabilities restrict the performance levels they can reach. Physical disabilities are categorized into different classes: hearing, sight, mobility, chronic pain, and autoimmune diseases. Neurological disabilities comprise acquired brain injury, epilepsy, Tourette syndrome and hyperactivity disorder. This range of disabilities presents several barriers and challenges to the participation of disabled people in society [12]. The barriers and challenges include financial concerns, environmental issues, systemic barriers, social challenges, and personal challenges. Financial concerns relate to a person's financial assistance; environmental issues include inaccessibility of the workspace, lack of transportation and limited financial support; systemic barriers include inflexibility of employers, irrelevant job requirements, misinterpretation of legal duties and co-ordination challenges; social challenges include lack of understanding, lack of awareness, unfair treatment, and exclusion from work-related and social activities; and personal challenges include low levels of education, minimal work experience, low self-confidence, low self-esteem, communication difficulties, and underdeveloped social and interpersonal skills [13][14][15].
Earlier we listed physical disabilities under different classes: hearing disabilities, sight disabilities, mobility challenges, chronic pain, and autoimmune diseases. People with hearing disabilities include those with hearing loss, those who are hard of hearing, and those who are culturally deaf; sign languages are often used to communicate with them. People with vision or sight disabilities experience problems such as visual impairment, limited light perception, impaired vision, or deaf-blindness. Visual disabilities are categorized as blind, legally blind, partially sighted, low vision, and cortically visually impaired. Mobility refers to the ability to move. Persons with mobility limitations may have lost arms, hands, feet or legs due to amputation or a congenital condition. Mobility impairment also results from medical conditions such as arthritis, multiple sclerosis, cerebral palsy, spina bifida, diabetes, muscular dystrophy and paraplegia. Persons with chronic pain disorders experience a range of intensity, from a dull, annoying ache to intense, stabbing pain. Autoimmune diseases are due to disorders of the immune system and are not considered in this study [16][17].
The range of disabilities listed above limits the interaction of disabled people with computing devices and often reduces their performance levels. The underlying difficulty in designing successful computing machines and work environments lies in the failure to meet their information needs and to adapt interfaces for such users; the languages and modes of interaction on offer are not suitable for people with disability. Generally, interaction with a computing device takes place at three different levels, namely the physical, cognitive, and affective levels [3]. The physical level involves devices such as the keyboard and mouse; the cognitive level concerns the way the user interprets and understands the system; and the affective level concerns the user's experience while interacting with the system. The input devices used for interaction fall into three classes: vision-based devices such as the mouse, audio-based devices such as speech analysers, and touch-based haptic devices such as touchscreens [16][26]. The various modes and channels that can be used to communicate with a machine determine its intelligent behaviour; if there is more than one way of interacting, such machines support multimodal human-computer interaction (MMHCI) [17]. Within MMHCI, interfaces are categorized as intelligent or affective. An interface is termed intelligent if it is able to capture information from the user with minimal physical input; such devices require some intelligence in perceiving the user's response [18]. Affective HCIs are those that are able to sense and interpret gestures and human emotions and adapt themselves to a particular gesture or emotion. This aspect of HCI deals entirely with user experience and with mechanisms to improve the level of satisfaction during interaction, whereas intelligent HCIs deal with the ways information is gathered [19]. Recent developments have turned the research paradigm towards the design of gesture-based interaction systems that can provide pleasant experiences to users through different modes of interaction [20][21].
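To make the distinction concrete, the following is a minimal, hypothetical Python sketch of an MMHCI-style input dispatcher that routes events from several interaction channels and keeps a crude estimate of the user's affective state. The channel names, the "valence" field, and the adaptation rule are illustrative assumptions, not part of any particular toolkit.

# Hypothetical sketch: a multimodal (MMHCI-style) input dispatcher.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class InputEvent:
    channel: str   # e.g. "speech", "gaze", "gesture", "touch"
    payload: dict  # raw or pre-processed data from that channel

class MultimodalDispatcher:
    """Routes events from several interaction channels to their handlers
    and keeps a crude estimate of the user's affective state."""
    def __init__(self) -> None:
        self.handlers: Dict[str, List[Callable[[InputEvent], None]]] = {}
        self.affect_score = 0.0  # >0 ~ positive, <0 ~ negative (assumed scale)

    def register(self, channel: str, handler: Callable[[InputEvent], None]) -> None:
        self.handlers.setdefault(channel, []).append(handler)

    def dispatch(self, event: InputEvent) -> None:
        for handler in self.handlers.get(event.channel, []):
            handler(event)
        # Affective adaptation: nudge the estimate if the channel reports valence.
        self.affect_score += event.payload.get("valence", 0.0)

# An intelligent interface may use a single channel; a multimodal one uses several.
dispatcher = MultimodalDispatcher()
dispatcher.register("speech", lambda e: print("speech input:", e.payload.get("text")))
dispatcher.register("gesture", lambda e: print("gesture:", e.payload.get("name")))
dispatcher.dispatch(InputEvent("speech", {"text": "open mail", "valence": 0.2}))
dispatcher.dispatch(InputEvent("gesture", {"name": "thumbs_up", "valence": 0.5}))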
III. AFFECTIVE COMPUTING & INTELLIGENT INTERFACES FOR
DISABLED PEOPLE
Language is a medium for carrying information from one 'place' to another, where 'place' may refer to the brain, a storage device, a file, media and so on. Languages are categorized as natural languages and programming languages. The day-to-day medium of information exchange among people is natural language, such as English, Hindi, or some other language; at this level of interaction, people share ideas, thoughts and information among themselves. The languages we use to interact with computing devices and to program them according to our information needs are called programming languages. Languages for interaction with computers have also emerged in other forms, such as haptic devices and gesture-based interfaces.
Several of the challenges faced by people with disability at workplaces and homes can be addressed, to some extent, with gesture-based interaction and affective computing. The range of MMHCIs includes gesture-based devices with facial-feature recognition, hand recognition and body-movement tracking; speech-based devices such as speech-to-text converters, speech synthesizers and speech recognizers; gaze-detection devices that work on the gaze and viewing angle of the user's eye; wearable devices, including gloves and jackets that receive input via skin conductance, temperature and so on; and brain-sensing devices that detect input from brain signals and neural impulses.
A careful study of the information needs of disabled people, combined with user feedback gathered through intelligent interfaces, can be analysed using mathematical tools and techniques to gain useful insights from user data. Disabled people cannot provide feedback in the way other people do, because of the challenges they face in operating computing devices. Their data and feedback, collected through intelligent sensors and interfaces, is processed with data-analytic techniques to find patterns of use, perceived usage, ease of use, and performance levels. Studying these patterns and visualizations helps reveal the advantages and disadvantages posed to the user when interacting through a given interface. This process is repeated with several differently designed interaction techniques to study different parameters of user interaction, and the findings are fed back into the design process to make interaction easier and to enhance satisfaction levels. Such design-process models fit well with version control and iterative development.
Studies in Human-Computer Interaction (HCI) have revealed several issues that are often overlooked in the design of systems and software [22]. The requirements themselves are the greatest challenge, since they change with time and are difficult to incorporate into software still under development. There are other factors, such as feasibility constraints, communication barriers and the limited experience of software teams. The most important factor identified by HCI studies, however, is the lack of a user-centred design approach [23]. User factors are often not considered in the design process, and interfaces end up designer-oriented rather than user-centred. This leads to lower performance and satisfaction levels. In the case of disabled people, such systems are abandoned altogether, since they yield almost no performance or satisfaction. To improve these factors, intelligent and affective devices can be used for interaction, where the channels of interaction are not the keyboard, mouse or programs but gaze, speech, touch, gestures and the like [24][25]. In this study, several alternative modes of interaction are listed under different classes of disability. The available affective and intelligent devices can be adapted to provide some degree of flexibility so that they can be suitably configured for different classes of disabled people.
a.) People with Visual Disabilities: As discussed earlier, there are several classes of people with visual disabilities, and the degree of vision in visually challenged people varies widely. Several points need to be considered when designing interfaces for them. The speech mechanism used for such people should use a normal tone and volume and should address people directly by name to ensure attention. People with vision loss cannot interact using physical or vision-based devices because of accessibility issues and restrictions in the proper positioning of interaction techniques. Since input and output to and from a computing device can be handled in multiple ways, people with such disabilities can use speech-based MMHCIs that operate on the pitch, tone and voice of the user. Other possible interaction techniques for such users are facial-recognition and gesture-recognition interfaces, which can receive input and supply the user's information needs through body movements and gestures alone. Speech synthesizers with personal calibration provide better levels of interaction for people with vision loss (a minimal calibration sketch is given after this list).
b.) People with Hearing Disabilities: The major barriers for people with hearing disabilities stem from loss of hearing, low levels of hearing, or being hard of hearing. Generally, people with such disabilities are accommodated in the workplace using sign languages such as American Sign Language (ASL) or using symbolic codes and gestures. As far as interaction with computing devices is concerned, such people can easily operate vision-based devices such as the keyboard and mouse. However, for higher-level interactions such as conference calls and video conferencing, they are restricted by the challenges mentioned in the previous section. Compared to visually challenged people, hearing-disabled people can be accommodated to newer technologies and computing machines with relatively little effort. Facial-feature extraction and recognition techniques can be used to acquire input from such users, and their information needs can be met with the help of visualization techniques and gesture-recognition interfaces.
Emotions are a medium for expressing mental activity, states and acknowledgements; they describe the state of different variables in the human mind. Facial expressions and gestures express this mental state and can therefore be used to receive input from users; similarly, gestures can supply input to machines through intelligent interfaces. Studies in HCI have recognized that facial expressions and body movements are among the media of interaction between people. These expressions and gestures are rich in information and can be used as a language or medium of interaction with computing devices; intelligent and affective interfaces use these codes of expression and movement to supply input to the devices.
For people with certain disabilities, where the possibilities of interaction are very low or negligible and where participation is restricted by interaction challenges, these gesture-based and facial-detection interfaces have proven to be a boon. People with hearing disabilities can be further accommodated in the workplace using gaze-detection mechanisms and speech synthesizers.
c.) People with Mobility Issues: People with mobility impairments generally find interfaces appropriate and can easily interpret the information available, but are restricted at the physical level of interaction due to the loss of legs, arms, hands, or fingers. For this class of disability, participation in the workplace is limited by several factors, and computing environments may not suit the varied demands of such users. Some mobility issues can be overcome by incorporating intelligent interfaces into computing devices. First, input from such users can be received through gestures, body movements, facial features, speech, gaze or touch. Output, and the information needs of such users, can be designed to require the least physical activity, for example through speech synthesizers. Gaze-detection mechanisms, facial-feature recognition techniques and body-movement tracking reduce the physical level of interaction and the effort required to interact with computing devices. Other possible interaction alternatives for such users are wearable intelligent devices, in which input from the disabled user is received through skin conductance, temperature, humidity and blood-pressure levels. The various challenges confronting mobility-disabled people can thus be addressed with MMHCIs, in which input and output are supplied through different levels of interaction: physical, cognitive, and affective.
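As an illustration of the speech-based alternative mentioned in item a.), the following is a minimal sketch of a per-user calibrated speech synthesizer, written against the pyttsx3 offline text-to-speech library for Python. The calibration values, the helper name make_speaker, and the sample utterance are illustrative assumptions rather than recommendations drawn from this study.

# Minimal sketch: a per-user calibrated speech synthesizer for users with vision loss.
# The calibration values below are assumptions, not validated settings.
import pyttsx3

def make_speaker(rate: int = 150, volume: float = 0.9):
    """Return a function that speaks text with the given per-user calibration."""
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)      # speaking rate in words per minute
    engine.setProperty("volume", volume)  # output volume, 0.0 to 1.0
    def speak(text: str) -> None:
        engine.say(text)
        engine.runAndWait()
    return speak

# Address the user directly by name to ensure attention, as suggested in item a.).
speak = make_speaker(rate=140, volume=0.8)
speak("Asha, you have two new messages.")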
If we clearly understand the issues that restrict the participation of disabled people in interaction, we realize that the underlying complexities of design do not lower Perceived Usage (PU) and performance levels nearly as much as poor, ill-designed interfaces do. A strong focus on interaction design within the software development life cycle, supported by version control, results in better PU and satisfaction levels. Since computers have proved beneficial in almost all spheres of life and daily activity, these machines can be configured for people with disabilities to suit their information needs in the workplace, and this will enhance their levels of participation. As a result, the various factors and parameters of User Experience (UX) can be improved for people with disability, which will ultimately lead to better workplaces and less dependence on the support of others for meeting information needs. These intelligent interfaces also lower the effort required to interact with computing devices. The emotional and psychological needs of people with disability can be met with affective interfaces that sense, interpret and generate feedback according to the gestures and mood of the user. Whereas people with disability are often overlooked and live a segregated life in our society, affective machines can serve as companions to them, adapting to the user's emotional state. Ultimately, these machines will reduce the effort of interaction and the frustration of disabled people, and will provide them with a harmonious work environment.
IV. ARCHITECTURE FOR REAL TIME AFFECTIVE COMPUTING
In the proposed model, n sensors capture the behavioural features of human beings. These sensors collect data that must be processed in real time, so all acquired data is fed to a feature-classification algorithm. Multiple processing units are used to make the processing faster and to make it feasible to design a system that can supply real-time decisions. The results of the processed data from the individual processing units are fed to the fuzzy inference system.
Fig. 2. Architecture of proposed framework using Fuzzy Inference System
A fuzzy system is used because, although the individual inputs are crisp, a particular value may belong to multiple features; for example, a person's facial expression may indicate happiness while his hand gestures indicate anger. This means a value may belong to multiple sets of behaviour. To handle such data, we define IF-THEN rules and associated membership functions; for simplicity, Gaussian membership functions are used for each of them. On the basis of the rule base, a resultant is obtained; this result then goes through defuzzification, and the system produces the desired output. Figure 2 depicts the complete system in detail. The sensors are generally video cameras or other affective sensors such as wearable devices. The output of each camera is fed to an individual processing unit, which processes it according to the body component on which that camera is focused. The results from the feature-extraction algorithms are provided to the fuzzy inference system, which applies the rule base and produces results accordingly. A massively parallel and distributed architecture is required for real-time results: the processors should be fast, the storage should be fast and large, the sensors should be of high quality, and the classification algorithms should be robust.
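To make the fusion step concrete, the following is a minimal sketch of the fuzzy inference described above, written from scratch in Python (no fuzzy-logic library assumed). The feature names (face_happiness, hand_anger), the two-rule rule base and all Gaussian parameters are illustrative assumptions, not the calibrated values of the proposed system.

# Sketch of Mamdani-style fuzzy fusion with Gaussian membership functions,
# mu(x) = exp(-(x - c)^2 / (2 * sigma^2)), and centroid defuzzification.
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function centred at c with width sigma."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Output variable "affect" on a 0..1 universe (0 ~ negative, 1 ~ positive).
universe = np.linspace(0.0, 1.0, 201)
out_negative = gaussian_mf(universe, c=0.2, sigma=0.15)
out_positive = gaussian_mf(universe, c=0.8, sigma=0.15)

def infer(face_happiness: float, hand_anger: float) -> float:
    """Two IF-THEN rules fused by max, defuzzified by centroid."""
    # Fuzzify the crisp inputs (each assumed to lie in 0..1).
    happy = gaussian_mf(face_happiness, c=1.0, sigma=0.3)
    angry = gaussian_mf(hand_anger, c=1.0, sigma=0.3)
    # Rule 1: IF the face shows happiness THEN affect is positive.
    # Rule 2: IF the hands show anger THEN affect is negative.
    clipped_pos = np.minimum(happy, out_positive)  # Mamdani implication (min)
    clipped_neg = np.minimum(angry, out_negative)
    aggregated = np.maximum(clipped_pos, clipped_neg)
    # Centroid defuzzification of the aggregated output set.
    return float(np.sum(universe * aggregated) / np.sum(aggregated))

# Conflicting evidence from two channels, as in the example in the text.
print(infer(face_happiness=0.9, hand_anger=0.7))  # close to 0.5: mixed affect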
Real-time processing of multimodal information from different sensors needs a high-performance computing environment. Since information from channels such as facial features, hand movements, body movements, gaze, skin conductance and speech is fused to judge the affective state of users, processing this varied and voluminous data requires distributed and parallel processing. Input signals received by the sensors are fed into a processing machine that analyses the features via feature-extraction and pattern-recognition algorithms. Machines with limited computing and storage capacity are simply not sufficient for this. Keeping in mind user satisfaction, we need to acquire and process information at very high rates with great accuracy and precision. These processing requirements can only be met in massively parallel, tightly coupled distributed environments.
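As a hedged sketch of this per-channel parallelism, the following Python fragment extracts features for each sensor channel on its own worker process before the fusion step. The channel list mirrors the channels named above, while extract_features() is a placeholder assumption standing in for real vision and speech pipelines.

# Sketch: parallel per-channel feature extraction, one worker per sensor channel.
from concurrent.futures import ProcessPoolExecutor

CHANNELS = ["facial_features", "hand_movements", "body_movements",
            "gaze", "skin_conductance", "speech"]

def extract_features(channel: str):
    """Placeholder per-channel feature extractor (stands in for real algorithms)."""
    # e.g. a CNN over video frames or an MFCC pipeline over audio, per channel.
    return channel, [0.0, 0.0, 0.0]

if __name__ == "__main__":
    # Each channel runs on its own worker so no single modality bottlenecks fusion.
    with ProcessPoolExecutor(max_workers=len(CHANNELS)) as pool:
        features = dict(pool.map(extract_features, CHANNELS))
    print(features)  # feature vectors, ready for the fuzzy inference step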
V. CONCLUSION & FUTURE SCOPE
In this study, we presented several types of disability and the challenges faced by people with disabilities. The work environments available to such users are not adequate to meet their information needs. We proposed several affective and intelligent interfaces that enable better levels of interaction for such people. In future work, these techniques can be analysed further to improve results and to reduce the challenges faced by people with disabilities at work and at home.
REFERENCES:
[1]. Rosalind W. Picard, “Affective computing’’, MIT Press, 1997.
[2]. Alan Dix, Janet Finlay, Gregory D. Abowd, Russell Beale, “Human
Computer Interaction”, Third Edition, Pearson.
[3]. Fakhreddine Karray et al., "Human-Computer Interaction: Overview on State of the Art", International Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 1, March 2008.
[4]. Liam J. Bannon, "From Human Factors to Human Actors: The Role of Psychology and Human-Computer Interaction Studies in Systems Design", book chapter in Greenbaum, J. & Kyng, M. (Eds.), 1991.
[5]. Zan Mo, Yanfei Li, “Research of Big Data Based on the Views of
Technology and Application”, American Journal of Industrial and
Business Management, 2015, 5, 192-197.
[6]. Ericsson White Paper: Translating user experience into KPIs, August
2015.
[7]. Alejandro Jaimes et al., "Multimodal human-computer interaction: A survey", Computer Vision and Image Understanding, pp. 116-134, Elsevier, 2007.
[8].http://www.ccdisabilities.nic.in/page.php?s=&t=yb&p=disab_ind
[9]. What Works: Career-building strategies for people from diverse groups.
[10]. “Profile of Disability among adults: a report”, 2001.
[11]. Luca E. Conte, “Vocational Development theories and the disabled
person: Oversight or deliberate omission”, Rehabilitation Counseling
Bulletin, 1983.
[12]. A report: Person with Disabilities profiles 2006 census analysis,
2009.
[13]. Roeher Institute, "Factors affecting the employment of people with disabilities: a review of literature", 2001.
[14]. Access Centre for independent living, “History of independent
living”, 2010.
[15]. Bill Eaton, “Making Employment a reality”, JVR, 2001.
[16]. Terry Winograd, “Shifting viewpoints: Artificial intelligence and
human–computer interaction”, Elsevier, 1 November 2006.
[17]. Jianhua Tao and Tieniu Tan, "Affective Computing: A Review", ACII 2005, LNCS 3784, pp. 981-995, 2005.
[18]. K. R. Scherer, “Vocal affect expression: A review and a model for
future research”, Psychological Bulletin, vol. 99, pp. 143–165, 1986.
[19]. V. Petrushin, "Emotion Recognition in Speech Signal: Experimental Study, Development and Application", ICSLP 2000, Beijing.
[20]. Chuang Ze-Jing and Wu Chung-Hsien, ‘‘Emotion Recognition from
Textual Input using an Emotional Semantic Network,’’ In Proceedings of
International Conference on Spoken Language Processing, ICSLP 2002,
Denver, 2002.
[21]. F. Yu et. al., ‘‘Emotion Detection from Speech to Enrich Multimedia
Content’’, in the second IEEE Pacific-Rim Conference on Multimedia,
October 24-26, 2001, Beijing, China.
[22]. Alan Dix, Janet Finlay, Gregory D. Abowd, Russell Beale, "Human Computer Interaction", Third Edition, Pearson.
[23]. J. K. Aggarwal, Q. Cai, ‘‘Human Motion Analysis: A Review’’,
Computer Vision and Image Understanding, Vol. 73, No. 3, 1999.
[24]. Rosalind W. Picard, "Affective Computing: From Laughter to IEEE", IEEE Transactions on Affective Computing, vol. 1, no. 1, Jan-Jun 2010.
[25]. Rosalind W. Picard, “Affective computing’’, MIT Press, 1997.
[26]. Fakhreddine Karray et al., "Human-Computer Interaction: Overview on State of the Art", International Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 1, March 2008.