Ongoing Challenges of Evaluating Mulsemedia QoE
ALEPH CAMPOS DA SILVEIRA∗ , Universidade Federal do Espirito Santo, Brazil
CELSO ALBERTO SAIBEL SANTOS, Universidade Federal do Espirito Santo, Brazil
Multimedia applications are usually limited to stimulating only two human senses: vision and hearing.
Recent studies seek to expand the definition of multimedia applications to include stimuli for other human
senses. In this way, sensory effects that should be triggered in synchrony with the audiovisual content being
presented are included in the applications. By including sensory effects in multimedia, we aim to improve the
Quality of Experience (QoE) of these mulsemedia environments. Two approaches are typically used to perform QoE evaluations in these environments. The first, and more common, relies on subjective evaluation, i.e., questionnaires, interviews, oral responses, etc. The second, rarer but growing, uses objective approaches that collect physiological data from the user while interacting with the system under evaluation. Such data may or may not be gathered in real time; it is considered objective because it is "involuntary", that is, it is not the result of the user's intention. This paper addresses both of these methods for evaluating QoE and the respective obstacles they face in mulsemedia systems.
CCS Concepts: • Human-centered computing → Interaction paradigms; HCI design and evaluation
methods; • Information systems → Multimedia information systems.
Additional Key Words and Phrases: Mulsemedia, Quality of Experience, Objective Evaluations, Subjective
Evaluations
ACM Reference Format:
Aleph Campos da Silveira and Celso Alberto Saibel Santos. 2022. Ongoing Challenges of Evaluating Mulsemedia QoE. In SensoryX ’22: 2nd Workshop on Multisensory Experiences, together with IMX 2022: ACM International
Conference on Interactive Media Experiences. June 22-24, 2022. Aveiro, Portugal. 8 pages.
1 INTRODUCTION
For many years, multimedia applications were limited to stimulating only two of the human senses: sight and
hearing. This situation is at odds with the fact that 60% of human communications are non-verbal,
and that most of us perceive the world through the combination of the five senses: sight, hearing,
touch, taste and smell [12]. Based on this, efforts have been made to study and understand how to
expand the definition of multimedia to include other sensory stimuli besides sight and hearing [3].
The last decade has witnessed a growing shift in emphasis toward studying the senses alongside
other media devices. Calvert and Thesen [4] argue that the adoption of a multisensory perspective
on human sensory perception has evolved in part as a consequence of developments in both
technology and sensory neurophysiology. These advances in technology have coincided with the
increasing knowledge about the mechanisms involved in the sensory systems. A natural extension
of this was the realization that a complete understanding of our perceptual systems would require
the inclusion of how each sense was integrated with input from different sensory systems. Thus,
the basis of mulsemedia capabilities is clear: the integration of multiple sensory sources besides audio
and video to improve the user's feeling of presence.
∗Both authors contributed equally to this research.
Published in accordance with the terms of the Creative Commons Attribution 4.0 International Public License (CC BY 4.0).
Permission to reproduce or distribute this work, in part or in whole, verbatim, adapted, or remixed, is granted without fee,
provided that the appropriate credits are given to the original work, not implying any endorsement by the authors or by
SBC.
© 2022 Brazilian Computing Society
However, to understand how mulsemedia works, evaluations of QoE are necessary. There are
two ways to acquire QoE data: subjective and objective.
Subjective measures refer to self-report methods that are typically combined with a task during
which participants are asked to indicate how they are feeling during the evaluation. These are
the most common method of user evaluation, comprising user questionnaires, think-aloud
protocols, interviews, etc. Subjective measures, however, are not without problems [19, 22],
as they can suffer from what is called "response bias", a phenomenon in which participants
respond inaccurately or falsely to the questions asked. For example, participants often set their own
criteria for assessing what they are feeling, and those who are unsure may not
report confidence unless they are absolutely certain [19]. However, as these self-reported
measures are relatively easy to use, they can be applied in a multitude of environments at
little cost.
Objective measures were proposed to address the aforementioned imprecision of voluntary,
self-reported methods. This view is not new: authors such as Huynh et al. [16] and
Ciolacu and Svasta [6] have proposed models and architectures for evaluating QoE using objective
measures. Lin et al. [17] have also stated that, although subjective evaluation is an essential
element of usability evaluations, it is not enough, and objective physiological measures
may need to be integrated into traditional usability evaluations. Objective measures,
e.g., EMG, accelerometers, video recordings, and biosignals [22], are more costly and, so far, have
mostly been applied in academic and laboratory environments. For instance, EngageMon [16] is a system
that uses a combination of sensors from the smartphone, a wristband, and an external camera to
accurately determine the engagement level of a mobile game. Aiming at learning and education,
Ciolacu and Svasta [6] presented a model that uses biofeedback to measure and control learning
processes during user interaction with learning content, as the authors argue that the
teaching-learning process should not be measured only at the final exam, but also during the learning
experience.
In summary, objective measures are information captured from the user's biofeedback and
biosignal responses. To avoid misunderstandings about these terms, it is important to make clear
what they mean. According to McKee [20], biofeedback is as much a process as the instrumentation
used in that process. The former takes physiological information that is being monitored and
returns it, through biofeedback instruments, to be used elsewhere. The latter refers to biofeedback
instruments capable of monitoring one or more physiological processes, measuring what
is monitored, and transforming that measurement into understandable information, such as
images or audio cues, presenting what is monitored and measured in a simple, direct, and immediate way.
Biosignals are closely related to biofeedback data. Giannakakis et al. [13] state that biosignals
are measures of human body processes that can be divided into two main categories: physical
signals and physiological signals. The former are measurements of bodily tension resulting from
muscle activity, such as pupil size, eye movements, blinking, semi-voluntary head and body
positions/movements, breathing, facial expressions, and voice. As part of these are not governed by the
Autonomic Nervous System (ANS), they are not entirely objective measurements. Physiological
signals, in turn, are more directly related to the ANS, such as cardiac activity, brain function, exocrine
activity, and some muscle excitability assessed by electromyography. Being closely tied to
the ANS, they are seen as objective metrics. However, objective measurements are not easy to
collect: almost always expensive, they are usually limited to controlled environments and are not
commonly applied in real situations.
Biofeedback is an umbrella term for capturing physiological data from biosignals and returning
it for another purpose. With the advent of the Internet of Things (IoT) and a more ubiquitous
computing world with mobiles, smart watches, heart rate monitors, etc., we believe that the data
collected through this multitude of connected devices and new ways of interacting with information
can, in addition to being used for evaluations, also be used as a control mechanism for the delivery
of mulsemedia (multiple sensorial media) content. IoT improvements are increasing the ubiquity
of the Internet by integrating everyday objects into interactions between systems, leading to a highly
distributed network of devices that communicate with humans and with each other, opening up
opportunities for a host of new applications that promise to improve the quality of our lives [30].
What this paper proposes is to highlight some of the ongoing problems and challenges of
objective and subjective approaches for mulsemedia QoE evaluation. This work is divided into
the following sections. Section 2 is subdivided into three subsections: Section 2.1 (Subjective Measures)
addresses approaches such as questionnaires and think-aloud protocols; Section 2.2 (Objective Measures) addresses
approaches such as biosignal gathering and involuntary data; and Section 2.3 (Challenges)
addresses the ongoing challenges of both approaches. Finally, Section 3 presents our conclusions.
2 EVALUATION APPROACHES
2.1 Subjective Measures
Evaluation of QoE refers to a collection of methods and tools used to discover how a person perceives
a system (product, service, non-commercial item, or a combination thereof) before, during, and
after interacting with it. When investigating momentary user experiences, we can evaluate the
level of positive affect, negative affect, joy, surprise, frustration, etc., via vocal statements (e.g., the Think
Aloud protocol) or post-experience self-reports (e.g., the User Experience Questionnaire, or UEQ). It is not trivial to
assess user experience, as it is subjective, context-dependent, and dynamic over time;
when dealing with mulsemedia, the challenge is magnified by the plethora of multisensory
devices.
Murray et al. [21] stated that user perceived QoE capture of mulsemedia is non-trivial mainly
due to the number and various types of media components that are presented synchronously. As
there are no standardized methodologies to conduct subjective assessment of mulsemedia quality,
researchers use different approaches to assess the user QoE of mulsemedia applications, most of
them with questionnaires. Which questionnaires are administered depends on the mulsemedia
environment being assessed, with no standard questionnaire found to date. Furthermore, according
to Murray et al.'s [21] review on QoE evaluation of olfaction-based multisensorial media, only 20%
of the experiments provided details on the questionnaires used, allowing very little opportunity for repeatability.
There are, however, already tested subjective methods involving mulsemedia content, such
as Covaci et al. [7], who proposed a method to improve subjective QoE in 360° Virtual Reality
(VR) videos through a QoE questionnaire that comprised a series of questions focused on the
user experience. The answer to each question was expressed on a 5-point Likert scale. In addition,
participants also answered a set of 8 more questions aimed at olfactory and wind effects.
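As an illustration of how such questionnaire data are typically summarized, the sketch below aggregates 5-point Likert responses into per-question mean opinion scores. The question labels and values are invented for the example and do not come from the study above.

```python
# Minimal sketch: aggregating 5-point Likert responses into per-question
# mean opinion scores (MOS) with a rough 95% confidence interval.
# Question labels and values are invented for illustration only.
import math
import statistics

responses = {
    "overall_video_quality": [4, 5, 3, 4, 4, 5, 2, 4],
    "olfactory_effect":      [3, 4, 4, 2, 5, 3, 4, 3],
    "wind_effect":           [5, 4, 4, 5, 3, 4, 4, 5],
}

for question, scores in responses.items():
    mos = statistics.mean(scores)
    sd = statistics.stdev(scores)
    ci95 = 1.96 * sd / math.sqrt(len(scores))  # normal approximation
    print(f"{question}: MOS = {mos:.2f} +/- {ci95:.2f} (n = {len(scores)})")
```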
2.2 Objective Measures
When dealing with objective measurements, there is a plethora of devices being used. The most
common and easiest to use is the measurement of electrodermal activity (EDA), also known as Galvanic Skin
Response (GSR), which is mostly used to measure various psychological states, including arousal,
attention, and stress. Because of its low cost, non-intrusiveness, and sensitivity to psychological
processes, EDA is one of the most popular response systems in psychophysiology [1]. Since it is measured
at the skin, it is also common for Heart Rate (HR) to be measured along with EDA. The most
common method is Photoplethysmography (PPG), an optically obtained plethysmogram that
can be used to detect changes in blood volume in the microvascular bed of tissue. PPG is usually
obtained using a pulse oximeter that illuminates the skin and measures changes in light absorption.
A conventional pulse oximeter monitors blood perfusion in the dermis and subcutaneous tissue of
the skin. As EDA and PPG are non-invasive, they hold promise for stress detection because they
rely on passive sensors to provide pulse and electrodermal data that can be analyzed and have been
shown to be reliable alternatives for easy and inexpensive objective user evaluations. Houzangbe
et al. [15] and Wang et al. [29] stated, however, that both are only able to show "arousal", which is the
physiological and psychological state of being awake or of the sense organs being stimulated to a point of
perception, leading to increased heart rate, electrodermal activity, and blood pressure. This means
that arousal triggered by fear or by happiness is detected equally and is difficult to differentiate from the
collected data.
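To make the kind of processing involved more concrete, the sketch below estimates heart rate from a raw PPG trace by band-pass filtering around typical cardiac frequencies and counting peaks. The sampling rate, filter band, and synthetic signal are assumptions for illustration and are not tied to any particular device.

```python
# Minimal sketch: estimating heart rate from a raw PPG trace by band-pass
# filtering around cardiac frequencies and counting peaks.
# Sampling rate, filter band, and test signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_bpm(ppg: np.ndarray, fs: float = 64.0) -> float:
    # Band-pass 0.7-3.5 Hz (~42-210 bpm) to isolate the cardiac component.
    b, a = butter(2, [0.7, 3.5], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ppg)
    # Peaks must be at least 0.3 s apart (caps the rate at ~200 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs))
    return 60.0 * len(peaks) / (len(ppg) / fs)

# Synthetic 30 s trace at ~72 bpm for demonstration.
fs = 64.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(f"Estimated heart rate: {heart_rate_bpm(ppg, fs):.1f} bpm")
```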
To address this limitation, Electroencephalography (EEG), which is capable of detecting emotions with greater precision, is often preferred. EEG is a method of recording an electrogram of the electrical
activity on the scalp, which has been shown to represent the macroscopic activity of the surface layer
of the brain. This measure can be used to explore physiological information about the user and can
be a useful tool for user experience as EEGs can be taken as an indicator to assess user perception
when using products without interruption [9]. However, EEG data are expensive to collect and analyze;
EEG suffers from high variability between subjects and requires a long setup by highly trained
specialists to acquire a good-quality signal [11]. As EEG is increasingly used for QoE
evaluations, with EEG-based emotion recognition studies gaining popularity in many disciplines
[8], there are now commercial EEG products that promise to make the data easier to measure
and understand, such as the Emotiv EPOC X, Emotiv EPOC+, Emotiv INSIGHT, Emotiv EPOC FLEX,
OpenBCI, and NeuroSky MindWave. Some of these EEG devices have one or more extra channels for
capturing physiological signals, such as Electrocardiogram (ECG), Electrooculography (EOG) and
Electromyography (EMG) [28], capable of collecting psychological (brain) and physiological (blood
pressure, muscle activity, heart rate, etc.) data, being an improvement over the cheaper devices
mentioned earlier. EMG and EOG, however, are not necessarily "objective" biosignals, as
some of the bio-data recorded result from the user's intention and are not a product of
the autonomic nervous system (ANS), which jeopardizes the "objectivity" of these data.
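As an illustration of what such EEG data processing can look like, the sketch below computes a coarse beta/alpha band-power ratio, sometimes used as a rough engagement or arousal index, from a single channel via Welch's method. The sampling rate, band limits, and synthetic signal are assumptions and are not specific to any of the headsets listed above.

```python
# Minimal sketch: coarse engagement index from one EEG channel as the ratio
# of beta to alpha band power (Welch PSD). Sampling rate, band limits, and
# the synthetic signal are illustrative assumptions only.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, low, high):
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])

def engagement_index(eeg: np.ndarray, fs: float = 128.0) -> float:
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    alpha = band_power(freqs, psd, 8.0, 13.0)
    beta = band_power(freqs, psd, 13.0, 30.0)
    return beta / alpha  # higher values loosely suggest more engagement

# Synthetic signal: a strong 10 Hz (alpha) plus a weaker 20 Hz (beta) component.
fs = 128.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
print(f"beta/alpha ratio: {engagement_index(eeg, fs):.2f}")
```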
2.3 Challenges for Mulsemedia Quality of Experience Evaluations
Subjective and objective approaches for QoE evaluations are prone to problems. Subjective measurements, although cheaper, are also a source of criticism: questionnaires and surveys can break
user immersion when dealing with immersive experiences as in VR because the transition from
the virtual world to the physical world to respond to VR experience questionnaires can lead to
systematic biases [23, 24]. A better approach was proposed by Schwind et al. [27], who argued that
applying questionnaires directly in the VR environment is preferable: the results indicate that,
in addition to reducing the duration of a study and decreasing disorientation, filling
out questionnaires in VR does not change the measured presence and can increase the consistency
of variance. Almost the same conclusions were reached by Putze et al. [23], who stated
that the application of in-VR questionnaires (inVRQs) is becoming more common in contemporary
research. This builds on the intuitive notion that inVRQs can facilitate participation, reduce Break
in Presence (BIP), and avoid bias. Also, Safikhani et al. [24], despite not having reached a definitive
conclusion, highlighted that users preferred inVRQ designs to traditional ones. For the same
reason, a think-aloud method is more suitable for evaluations of mulsemedia environments,
as it captures the user's feelings in real time rather than in a post-hoc report after the user's
interaction with the environment, thus avoiding biases and false statements. In short,
traditional approaches may not be suitable for mulsemedia QoE evaluation because mulsemedia
explores new fields of experience that cannot be reached by such methods.
While objective measurements seek to be accurate, when dealing with mulsemedia environments
the devices and sensors used may lack synchronization and suffer from latency, a current challenge of
using these devices together. Synchronization and latency are a major challenge for the inclusion
of other sensory stimuli than audio and video in these environments, which depends on a complete
synchronization between traditional multimedia and devices that deliver haptic, olfactory or taste
sensations [26, 33]. When wearable devices are used to capture and detect biosignals, the problem
is magnified.
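One pragmatic way to cope with independently sampled sensor streams is to align them on a common timeline after capture. The sketch below does this with nearest-timestamp matching; the column names, sampling instants, and tolerance are assumptions for illustration.

```python
# Minimal sketch: aligning an EDA stream and a heart-rate stream onto a
# common timeline by nearest-timestamp matching. Column names, sampling
# instants, and the 500 ms tolerance are illustrative assumptions.
import pandas as pd

eda = pd.DataFrame({
    "t": pd.to_datetime([0.00, 0.25, 0.50, 0.75, 1.00], unit="s"),
    "eda_uS": [0.41, 0.43, 0.47, 0.52, 0.55],
})
hr = pd.DataFrame({
    "t": pd.to_datetime([0.10, 0.60, 1.10], unit="s"),
    "hr_bpm": [72.0, 74.5, 76.0],
})

# Match each EDA sample with the nearest heart-rate sample in time,
# discarding matches that are more than 500 ms apart.
aligned = pd.merge_asof(eda.sort_values("t"), hr.sort_values("t"),
                        on="t", direction="nearest",
                        tolerance=pd.Timedelta("500ms"))
print(aligned)
```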
Some sensory effects, such as olfactory and gustatory effects, are gradual, whereas EDA
and PPG are highly responsive body signals. Synchronizing these devices can be challenging, as Ho
et al. [14] showed that electrodermal activity increases rapidly, within seconds, in response to
small physiological and mental stimuli. This means that correctly choosing which devices to use to
measure biosignals when evaluating multisensory environments can be tricky. For example, when
dealing with emotions such as fear and excitement, both produce an increase in heart rate and
electrodermal activity, so trying to distinguish them with just EDA and PPG
devices can lead to confusion. As noted before, both positive ("happy" or "joyful") and negative
("threatening" or "saddening") stimuli can result in an increase in arousal and, thereby, in an increase
in skin conductance and heart rate. EDA and PPG signals are therefore not representative
of the type of emotion, but of its intensity.
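Given that EDA and PPG reflect intensity rather than valence, one common objective summary is simply the number of rapid skin conductance responses (SCRs) in a time window. The sketch below counts phasic peaks after removing a slow tonic baseline; the window length, amplitude threshold, and synthetic signal are illustrative assumptions.

```python
# Minimal sketch: counting rapid skin conductance responses (SCRs) as a
# coarse arousal-intensity indicator. The tonic baseline is approximated
# with a moving median; window and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks

def scr_count(eda: np.ndarray, fs: float = 4.0,
              min_amplitude_uS: float = 0.05) -> int:
    # Slow tonic level estimated with a ~10 s moving median.
    tonic = median_filter(eda, size=int(10 * fs))
    phasic = eda - tonic
    # SCRs: phasic peaks above the threshold, at least 1 s apart.
    peaks, _ = find_peaks(phasic, height=min_amplitude_uS, distance=int(fs))
    return len(peaks)

# Synthetic 60 s trace: drifting tonic level plus three SCR-like bumps.
fs = 4.0
t = np.arange(0, 60, 1 / fs)
eda = 0.40 + 0.002 * t
for onset in (10, 25, 40):
    eda += 0.10 * np.exp(-0.5 * ((t - onset) / 1.5) ** 2)
print(f"SCRs detected: {scr_count(eda, fs)}")
```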
Another challenge is how to adapt these devices to "real-world environments", that is, outside
of a controlled area. EDA devices, for instance, can have their data compromised by outside
temperatures: while EDA has a strong association with emotional arousal, it
is also linked to the regulation of our internal temperature [2]. Since PPG depends on
light, its signal can be affected by the light spectrum and intensity of the environment [18].
When dealing with even more complex devices such as the EEG, the range of problems is widened.
EEG devices, even commercial ones, can be uncomfortable [5]. Other still ongoing challenges
are cost, sensor accuracy (EEG sensors often need a saline solution or gel to facilitate skin
conduction), data transfer errors or inconsistencies, and the ease of use of the devices [10, 31, 32].
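Some of these practical problems, such as dropped samples or a disconnected sensor, can at least be flagged automatically during collection. The sketch below checks for timestamp gaps and flat-lined signals; the thresholds are illustrative assumptions.

```python
# Minimal sketch: flagging two common wearable-data problems, dropped samples
# (timestamp gaps) and flat-lined channels. Thresholds are illustrative.
import numpy as np

def quality_report(timestamps: np.ndarray, signal: np.ndarray,
                   fs: float, flat_std: float = 1e-6) -> dict:
    expected_dt = 1.0 / fs
    gaps = np.diff(timestamps) > 1.5 * expected_dt   # missing samples
    flat = float(np.std(signal)) < flat_std          # dead / disconnected sensor
    return {"dropped_sample_gaps": int(gaps.sum()), "flatlined": flat}

fs = 128.0
ts = np.arange(0, 5, 1 / fs)
ts = np.delete(ts, [100, 101, 102])   # simulate a short dropout
sig = np.random.randn(ts.size)
print(quality_report(ts, sig, fs))
```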
However, some challenges are about to be faced with 5G, which has the potential to create new
interfaces for our everyday devices and network components. With 5G able to connect more
users and provide smarter and faster communications, we are about to see a boom in wearables. How
5G will work together with mulsemedia devices is still an object of research [25], but we can assume
that 5G technology will provide faster and more reliable communication with high data rates and
low latency, partially addressing the ongoing challenge of synchronization and latency in
mulsemedia devices, as cited in the work by Yuan et al. [33].
Table 1 summarizes the advantages and disadvantages of each approach.
Table 1. Evaluation Approaches in Mulsemedia Environments

Subjective
Advantages
1.1 Cheap.
1.2 Can be done anywhere, without the need for many resources, compared to other evaluation approaches.
1.3 A well-explored and diverse field, with a high number of references and standardizations.
Disadvantages
1.4 Questionnaires and surveys can break user immersion when dealing with immersive experiences.
1.5 Can suffer from "response bias" and is subject to inaccuracies.
1.6 Mulsemedia explores new fields of experience that cannot be evaluated by this method.

Objective
Advantages
2.1 More accurate data.
2.2 Does not suffer from the user bias or inaccuracies present in subjective approaches.
2.3 Can detect hidden information, usually not detected with subjective methods.
Disadvantages
2.4 Expensive.
2.5 Requires resources and a controlled environment.
2.6 Inherent complexities of mulsemedia environments can interfere with the execution of objective approaches.
2.7 Wearable data collection devices can be uncomfortable.
3 CONCLUSION
In conclusion, the usability evaluation of mulsemedia environments presents a
higher level of difficulty than that of standard multimedia. The interaction of multiple systems makes
it difficult to evaluate sensory stimuli in isolation, compromising the standard forms of usability
evaluation, especially the subjective ones.
A growing alternative is the use of wearable devices for real-time analysis of the
user's level of satisfaction when interacting with the system; with the advancement of 5G
networks, it is expected that challenges such as latency and synchronization will be mitigated and
that better human-machine interaction will be achieved.
However, the plethora of available devices makes it difficult to generalize how evaluations of
QoE should be performed, as each device has different peculiarities and structures: some use
Peltier heating systems, others simulate haptic effects by "tricking" the human senses (such as
using a mint scent to give the impression of being cold). This said, it is not just a technical issue
of the devices, but also a question of how the human senses are stimulated by them. This increases the scope of
usability evaluations, expanding the field to psychology, medicine, design, etc.
Furthermore, seeing how this area of study is growing every day, even more so with the arrival
of 5G, we expect this to be a fertile field of research, even outside academia, as we can see
with the advent of Virtual Reality environments such as Meta's.
We also do not seek to substitute one method for the other, as both can be mutually complementary.
What we want to highlight in this paper is the opportunity to use objective data collection devices to
better understand the functioning of mulsemedia systems.
ACKNOWLEDGMENTS
This study was financed in part by the Brazilian Agencies FAPES, CNPq, and CAPES.
REFERENCES
[1] Ebrahim Babaei, Benjamin Tag, Tilman Dingler, and Eduardo Velloso. 2021. A Critique of Electrodermal Activity
Practices at CHI. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411764.3445370
[2] Mathias Benedek and Christian Kaernbach. 2010. A continuous measure of phasic electrodermal activity. Journal of
Neuroscience Methods 190, 1 (2010), 80–91. https://doi.org/10.1016/j.jneumeth.2010.04.028
[3] Estêvão B. Saleme, Celso A. S. Santos, Ricardo A. Falbo, Gheorghita Ghinea, and Frederic Andres. 2019. MulseOnto: a
Reference Ontology to Support the Design of Mulsemedia Systems. JUCS - Journal of Universal Computer Science 25,
13 (2019), 1761–1786. https://doi.org/10.3217/jucs-025-13-1761
[4] Gemma A. Calvert and Thomas Thesen. 2004. Multisensory integration: methodological approaches and emerging
principles in the human brain. Journal of Physiology-Paris 98, 1 (2004), 191–205. https://doi.org/10.1016/j.jphysparis.
2004.03.018 Representation of 3-D Space Using Different Senses In Different Species.
[5] Thibault Chabin, Damien Gabriel, Emmanuel Haffen, Thierry Moulin, and Lionel Pazart. 2021. Are the new mobile
wireless EEG headsets reliable for the evaluation of musical pleasure? PLOS ONE 15, 12 (12 2021), 1–19. https:
//doi.org/10.1371/journal.pone.0244820
[6] Monica Ionita Ciolacu and Paul Svasta. 2021. Education 4.0: AI Empowers Smart Blended Learning Process with
Biofeedback. In 2021 IEEE Global Engineering Education Conference (EDUCON). 1443–1448. https://doi.org/10.1109/
EDUCON46332.2021.9453959
[7] Alexandra Covaci, Ramona Trestian, Estêvão Bissoli Saleme, Ioan-Sorin Comsa, Gebremariam Assres, Celso A. S. Santos,
and Gheorghita Ghinea. 2019. 360° Mulsemedia: A Way to Improve Subjective QoE in 360° Videos. In Proceedings of
the 27th ACM International Conference on Multimedia (Nice, France) (MM ’19). Association for Computing Machinery,
New York, NY, USA, 2378–2386. https://doi.org/10.1145/3343031.3350954
[8] Didar Dadebayev, Wei Wei Goh, and Ee Xion Tan. 2021. EEG-based emotion recognition: Review of commercial EEG
devices and machine learning techniques. Journal of King Saud University - Computer and Information Sciences (2021).
https://doi.org/10.1016/j.jksuci.2021.03.009
[9] Yi Ding, Yaqin Cao, Qingxing Qu, and Vincent G. Duffy. 2020. An Exploratory Study Using Electroencephalography
(EEG) to Measure the Smartphone User Experience in the Short Term. International Journal of Human–Computer
Interaction 36, 11 (2020), 1008–1021. https://doi.org/10.1080/10447318.2019.1709330
[10] Aviv Elor, Michael Powell, Evanjelin Mahmoodi, Nico Hawthorne, Mircea Teodorescu, and Sri Kurniawan. 2020. On
Shooting Stars: Comparing CAVE and HMD Immersive Virtual Reality Exergaming for Adults with Mixed Ability.
ACM Trans. Comput. Healthcare 1, 4, Article 22 (sep 2020), 22 pages. https://doi.org/10.1145/3396249
[11] Jérémy Frey, Gilad Ostrin, May Grabli, and Jessica R. Cauchard. 2020. Physiologically Driven Storytelling: Concept and
Software Tool. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.
3376643
[12] Gheorghita Ghinea, Christian Timmerer, Weisi Lin, and Stephen R. Gulliver. 2014. Mulsemedia: State of the Art,
Perspectives, and Challenges. ACM Trans. Multimedia Comput. Commun. Appl. 11, 1s, Article 17 (oct 2014), 23 pages.
https://doi.org/10.1145/2617994
[13] Giorgos Giannakakis, Dimitris Grigoriadis, Katerina Giannakaki, Olympia Simantiraki, Alexandros Roniotis, and
Manolis Tsiknakis. 2019. Review on psychological stress detection using biosignals. IEEE Transactions on Affective
Computing (2019), 1–1. https://doi.org/10.1109/TAFFC.2019.2927337
[14] Ai Van Thuy Ho, Karin Toska, and Jarlis Wesche. 2020. Rapid, Large, and Synchronous Sweat and Cardiovascular
Responses Upon Minor Stimuli in Healthy Subjects. Dynamics and Reproducibility. Frontiers in Neurology 11 (2020).
https://doi.org/10.3389/fneur.2020.00051
[15] Samory Houzangbe, Olivier Christmann, Geoffrey Gorisse, and Simon Richir. 2020. Effects of voluntary heart rate
control on user engagement and agency in a virtual reality game. Virtual Reality 24, 4 (01 Dec 2020), 665–681.
https://doi.org/10.1007/s10055-020-00429-7
[16] Sinh Huynh, Seungmin Kim, JeongGil Ko, Rajesh Krishna Balan, and Youngki Lee. 2018. EngageMon: Multi-Modal
Engagement Sensing for Mobile Games. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 1, Article 13 (mar
2018), 27 pages. https://doi.org/10.1145/3191745
[17] Tao Lin, Masaki Omata, Wanhua Hu, and Atsumi Imamiya. 2005. Do Physiological Data Relate to Traditional
Usability Indexes?. In Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online:
Considerations for Today and the Future (Canberra, Australia) (OZCHI ’05). Computer-Human Interaction Special
Interest Group (CHISIG) of Australia, Narrabundah, AUS, 1–10.
[18] He Liu, Yadong Wang, and Lei Wang. 2014. The Effect of Light Conditions on Photoplethysmographic Image
Acquisition Using a Commercial Camera. IEEE Journal of Translational Engineering in Health and Medicine 2 (2014),
1–11. https://doi.org/10.1109/JTEHM.2014.2360200
[19] Ryo Maie and Robert M. DeKeyser. 2020. Conflicting Evidence of Explicit and Implicit Knowledge from Objective and Subjective Measures. Studies in Second Language Acquisition 42, 2 (2020), 359–382.
https://doi.org/10.1017/S0272263119000615
[20] M. G McKee. 2008. Biofeedback: an overview in the context of heart-brain medicine. Cleveland Clinic Journal of
Medicine 75, Suppl_2 (March 2008), S31–S31. https://doi.org/10.3949/ccjm.75.suppl_2.s31
[21] Niall Murray, Oluwakemi A. Ademoye, Gheorghita Ghinea, and Gabriel-Miro Muntean. 2017. A Tutorial for Olfaction-Based Multisensorial Media Application Design and Evaluation. ACM Comput. Surv. 50, 5, Article 67 (sep 2017),
30 pages. https://doi.org/10.1145/3108243
[22] Jodi Oakman, Margo Ketels, and Els Clays. 2021. Low back and neck pain: objective and subjective measures of
workplace psychosocial and physical hazards. International Archives of Occupational and Environmental Health 94, 7
(01 Oct 2021), 1637–1644. https://doi.org/10.1007/s00420-021-01707-w
[23] Susanne Putze, Dmitry Alexandrovsky, Felix Putze, Sebastian Höffner, Jan David Smeddinck, and Rainer Malaka. 2020.
Breaking The Experience: Effects of Questionnaires in VR User Studies. Association for Computing Machinery, New
York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376144
[24] Saeed Safikhani, Michael Holly, Alexander Kainz, and Johanna Pirker. 2021. The Influence of In-VR Questionnaire
Design on the User Experience. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology
(Osaka, Japan) (VRST ’21). Association for Computing Machinery, New York, NY, USA, Article 12, 8 pages. https:
//doi.org/10.1145/3489849.3489884
[25] E. B. Saleme, Alexandra Covaci, Gebremariam Assres, Ioan-Sorin Comsa, Ramona Trestian, Celso A.S. Santos, and
Gheorghita Ghinea. 2021. The influence of human factors on 360 mulsemedia QoE. International Journal of Human-Computer Studies 146 (2021), 102550. https://doi.org/10.1016/j.ijhcs.2020.102550
[26] Estêvão Bissoli Saleme, Celso A. S. Santos, and Gheorghita Ghinea. 2019. Coping With the Challenges of Delivering
Multiple Sensorial Media. IEEE MultiMedia 26, 2 (2019), 66–75. https://doi.org/10.1109/MMUL.2018.2873565
[27] Valentin Schwind, Pascal Knierim, Nico Haas, and Niels Henze. 2019. Using Presence Questionnaires in Virtual Reality.
In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19).
Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300590
[28] Mahsa Soufineyestani, Dale Dowling, and Arshia Khan. 2020. Electroencephalography (EEG) Technology Applications
and Available Devices. Applied Sciences 10, 21 (2020). https://doi.org/10.3390/app10217453
[29] Yingding Wang, Nikolai Fischer, and François Bry. 2019. Pervasive Persuasion for Stress Self-Regulation. In 2019
IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). 724–730.
https://doi.org/10.1109/PERCOMW.2019.8730850
[30] Feng Xia, Laurence T. Yang, Lizhe Wang, and Alexey Vinel. 2012. Internet of Things. Int. J. Commun. Syst. 25, 9 (sep
2012), 1101–1102. https://doi.org/10.1002/dac.2417
[31] Xu Yan, Li-Ming Zhao, and Bao-Liang Lu. 2021. Simplifying Multimodal Emotion Recognition with Single Eye Movement
Modality. Association for Computing Machinery, New York, NY, USA, 1057–1063. https://doi.org/10.1145/3474085.
3475701
[32] Xiaozhen Ye, Huansheng Ning, Per Backlund, and Jianguo Ding. 2021. Flow Experience Detection and Analysis for
Game Users by Wearable-Devices-Based Physiological Responses Capture. IEEE Internet of Things Journal 8, 3 (2021),
1373–1387. https://doi.org/10.1109/JIOT.2020.3010853
[33] Zhenhui Yuan, Ting Bi, Gabriel-Miro Muntean, and Gheorghita Ghinea. 2015. Perceived Synchronization of Mulsemedia Services. IEEE Transactions on Multimedia 17, 7 (2015), 957–966. https://doi.org/10.1109/TMM.2015.2431915