
CHAPTER 5

Mathematical models for dynamic, multisensory spatial orientation perception
Torin K. Clarka,*, Michael C. Newmanb, Faisal Karmalic,d, Charles M. Omane, Daniel M. Merfeldf,g
a Smead Aerospace Engineering Sciences, University of Colorado-Boulder, Boulder, CO, United States
b Environmental Tectonics Corporation, Southampton, PA, United States
c Jenks Vestibular Physiology Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA, United States
d Otolaryngology, Harvard Medical School, Boston, MA, United States
e Human Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
f Otolaryngology—Head and Neck Surgery, The Ohio State University, Columbus, OH, United States
g Naval Aerospace Medical Research Lab (NAMRL), Naval Medical Research Unit—Dayton (NAMRUD), Dayton, OH, United States
*Corresponding author: Tel.: +1-303-492-4015, e-mail address: [email protected]

Abstract
Mathematical models have been proposed for how the brain interprets sensory information
to produce estimates of self-orientation and self-motion. This process, spatial orientation
perception, requires dynamically integrating multiple sensory modalities, including visual,
vestibular, and somatosensory cues. Here, we review the progress in mathematical modeling
of spatial orientation perception, focusing on dynamic multisensory models, and the experi-
mental paradigms in which they have been validated. These models are primarily “black box”
or “as if” models for how the brain processes spatial orientation cues. Yet, they have been
effective scientifically, in making quantitative hypotheses that can be empirically assessed,
and operationally, in investigating aircraft pilot disorientation, for example. The primary
family of models considered, the observer model, implements estimation theory approaches,
hypothesizing that internal models (i.e., neural systems replicating the behavior/dynamics of
physical systems) are used to produce expected sensory measurements. Expected signals are
then compared to actual sensory afference, yielding sensory conflict, which is weighted to

Progress in Brain Research, Volume 248, ISSN 0079-6123, https://doi.org/10.1016/bs.pbr.2019.04.014


© 2019 Elsevier B.V. All rights reserved.

drive central perceptions of gravity, angular velocity, and translation. This approach effec-
tively predicts a wide range of experimental scenarios using a small set of fixed free param-
eters. We conclude with limitations and applications of existing mathematical models and
important areas of future work.

Keywords
Computational, Vestibular, Internal models, Sensory conflict, Visual-vestibular integration

1 Introduction
Spatial orientation perception is the process of integrating and interpreting sensory
information to estimate one’s orientation and self-motion. Orientation is the position
of the body (i.e., tilt relative to gravity and heading); motion is linear or angular
movement of the body; spatial orientation broadly includes both (Benson, 1978;
Merfeld, 2017). During activities experienced in everyday life, healthy humans are
able to estimate their own spatial orientation fairly well (Guedry, 1974). However,
patients with vestibular or central nervous dysfunction (Lewis et al., 2011; Merfeld
et al., 2010; Wright et al., 2017), as well as normal individuals exposed to unusual
environments (Clark, 2019; Clement and Wood, 2014; de Winkel et al., 2012;
Oman, 2007) or motions (Clark et al., 2015b) (e.g., in a high performance aircraft;
Tribukait et al., 2016) often misperceive their orientation and motion. The brain
combines multisensory cues to make dynamic estimates of self-orientation and
motion, enabling a host of behaviors including reflexive eye movements, balance,
locomotion, and control activities (e.g., natural coordinated feats such as ice skating,
but also pilot manual control) (Merfeld, 2017).
Over the last few decades, mathematical models have been proposed to explain
how the brain dynamically integrates sensory information in an effort to mimic
spatial orientation perceptual responses observed empirically. These models are
primarily black box or as if models, hypothesizing the processes performed by the
brain in spatial orientation perception. This paper reviews the evolution of models,
the current state-of-the-art, the various experimental paradigms in which they have
been validated so far, applications, and areas of future work.
Previous reviews have included models of the dynamics of a single sensor
(e.g., the semicircular canals) (Young, 2011), focused on perception in paradigms
that primarily use that sensor, such as in Earth-vertical rotation (MacNeilage
et al., 2008), or assessed the relationships between families of models (Selva and
Oman, 2012). Here, we will focus on models that predict dynamic spatial orientation
perception (as opposed to static, stationary tilt perception) and include integration
of multiple sensory cues. Further, we will primarily focus on models predicting
spatial orientation perception, as opposed to reflexive eye movements, though we
will note when a model was simulated to mimic recordings of the vestibulo-ocular
reflex (VOR).

1.1 Multisensory integration including the vestibular system


To estimate spatial orientation, the brain integrates sensory information from avail-
able sources. This includes the vestibular system in the inner ear, visual information,
somatosensory cues, etc. The vestibular system senses inertial motion and consists of
two components. The semicircular canals, across the frequency range typically ex-
perienced in everyday life (Grossman et al., 1988), transduce angular velocity in
three dimensions (Goldberg and Fernandez, 1971). The otolith organs sense
gravito-inertial stimulation (Fernandez and Goldberg, 1976b) and include two mac-
ulae: the utricle and the saccule, which are roughly orthogonal to one another. Math-
ematically, the net gravito-inertial force (as a specific force, f, with bold denoting a
three-dimensional vector) acting on the otoliths is the vector difference between
gravity (g) and head linear acceleration (a):
f = g − a    (1)

Further, head rotations (angular velocity, ω) that are not parallel to gravity, yield
a change in the direction of gravity:
ġ = −ω × g    (2)
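Eqs. (1) and (2) can be expressed numerically; below is a minimal sketch (assuming SI units, head-fixed coordinates, and simple Euler integration):

```python
import numpy as np

def specific_force(g, a):
    """Eq. (1): gravito-inertial specific force f = g - a sensed by the otoliths."""
    return g - a

def gravity_rate(omega, g):
    """Eq. (2): in head-fixed coordinates, gravity rotates as dg/dt = -omega x g."""
    return -np.cross(omega, g)

# Euler-integrate Eq. (2) through a 90-degree roll at 90 deg/s: gravity,
# initially along -Z (head upright), rotates into the interaural (Y) axis.
dt, g = 0.0001, np.array([0.0, 0.0, -9.81])
omega = np.array([np.radians(90.0), 0.0, 0.0])  # constant roll rate about X
for _ in range(10000):                          # 1 s of rotation
    g = g + dt * gravity_rate(omega, g)
```

Note that Eq. (2) only rotates the gravity vector, so its magnitude is (up to integration error) preserved.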

The vestibular system provides primary cues for orientation perception, highlighted by patients with bilateral vestibular dysfunction being 2–50 times less sensitive to perceiving their own motion in the dark (Valko et al., 2012). However,
other sensory cues, particularly vision, are integrated in estimation of self-orientation
(Fetsch et al., 2012; Karmali et al., 2014), as well as for VOR and optokinetic
nystagmus (OKN) responses (Raphan et al., 1977; Robinson, 1977). In fact, previous
studies have found integration of multiple sensory cues, particularly visual and
vestibular, to be consistent with static Bayesian optimal predictions for rotation
(Jurgens and Becker, 2006) and translation (Butler et al., 2010; Fetsch et al.,
2009, 2012; Gu et al., 2008) thresholds.
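Static Bayesian cue fusion of this kind reduces to inverse-variance weighting of the cues; a short sketch (the means and variances are illustrative, not values from the cited studies):

```python
# Maximum-likelihood (static Bayesian) fusion of two independent Gaussian
# cues, e.g., visual and vestibular heading estimates. All numbers are
# illustrative, not values from the cited studies.
def fuse(mu_vis, var_vis, mu_ves, var_ves):
    w_vis = var_ves / (var_vis + var_ves)          # weight by inverse variance
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_ves
    var = var_vis * var_ves / (var_vis + var_ves)  # fused variance shrinks
    return mu, var

mu, var = fuse(10.0, 4.0, 14.0, 4.0)  # equally reliable cues: mean of means
```

The fused variance is always smaller than either cue's alone, which is the hallmark prediction tested in the threshold studies cited above.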

1.2 Motivation for modeling spatial orientation perception


The dynamics of vestibular sensory transduction have been well quantified using
transfer function approximations (Fernandez and Goldberg, 1971, 1976a,c). The
semicircular canals transduce a neural signal proportional to head angular velocity
at mid to high frequencies, but fail to reliably signal low frequency or constant (DC)
stimulation due to their mechanical properties. This sensory behavior can be well-
modeled with a high-pass filter transfer function.
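This high-pass behavior can be sketched in discrete time; the 5.7 s dominant time constant used here is a commonly cited approximation, assumed for illustration:

```python
import numpy as np

def canal_afferent(omega, dt, tau=5.7):
    """First-order high-pass model of semicircular canal transduction,
    afferent(s)/omega(s) = tau*s / (tau*s + 1). tau ~ 5.7 s is an assumed,
    commonly cited dominant time constant; exact values vary."""
    x = 0.0                      # low-pass internal state
    out = np.empty_like(omega)
    for i, w in enumerate(omega):
        out[i] = w - x           # high-pass = input minus low-pass
        x += dt * (w - x) / tau  # Euler update of the low-pass state
    return out

# Constant-velocity rotation: the afferent signal decays toward zero,
# i.e., the canals fail to signal sustained (DC) rotation.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
omega = np.full_like(t, 60.0)    # 60 deg/s constant yaw rotation
aff = canal_afferent(omega, dt)
```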
Furthermore, the challenge of estimating states from noisy, imperfect, dynamic
sensory signals has a rich history in control system engineering, robotics, and avia-
tion. For example, modern aircraft use an internal measurement unit, consisting of
gyroscopes and accelerometers, to estimate the attitude of the aircraft. The well-
developed history of mathematical formulations for solving this estimation problem
(Maybeck, 1982) has suggested approaches to model how the human brain estimates
self-orientation (Laurens and Droulez, 2007; MacNeilage et al., 2008).

As noted earlier, the use of these mathematical formulations toward modeling human spatial orientation perception leads to primarily black box/as if models. Nonetheless, there are scientific (e.g., quantifying hypotheses to be tested experimentally), clinical (e.g., velocity storage models), and operational (e.g., analyzing aircraft spatial disorientation accidents) benefits to such an approach. Further, there is evidence
from neural recordings (Oman and Cullen, 2014; Roy and Cullen, 2004) that the
brain uses similar mechanisms as hypothesized in the models, particularly the
observer-family of models, described below.

2 Early modeling approaches


Mathematical models were first applied to perception of motion well over a century
ago (Mach, 1875; Young et al., 2001). However, here we focus on models for
multi-modal integration, which were first proposed more recently (Mayne, 1974).
For example, Young and colleagues proposed a model for integration of semicircular
canal and otolith cues (Ormsby and Young, 1977; Young, 1970; Young et al., 1966).
As a primary hypothesis of the model, non-otolith cues (e.g., semicircular canals) were
proposed to disambiguate the gravito-inertial stimulation to the otolith organs, into
perceptions of linear acceleration and gravity. Previously, frequency segregation had been proposed (Mayne, 1974), in which low frequency otolith stimulation is hypothesized to be perceived as gravity (i.e., tilt) while higher frequencies are presumed to be linear acceleration (i.e., translation). Young and colleagues' model accounted for the brain
having knowledge of the frequency dependence of canal transduction, as well as when
head rotation components not parallel to gravity yield a change in the direction of
gravity (Eq. 2). However, Ormsby and Young did not pose these as internal models.
Briefly (also see next section and Tin and Poon, 2005, for a review), internal models
are a neural system that replicates the behavior/dynamics of a physical system
(Angelaki et al., 2004; Green and Angelaki, 2004; Merfeld et al., 1999; Nooij
et al., 2008; Wolpert et al., 1998). Mimicking the response of physical systems
may enable the brain to better process sensory stimuli for spatial orientation perception.
Building upon these early sensory integration models, Borah and colleagues
proposed and implemented a Kalman filter model for spatial orientation perception
(Borah et al., 1979, 1988). Briefly, Kalman filtering (Kalman, 1960; Kalman and
Bucy, 1961) is a mathematical approach for estimating states in a linear system
with Gaussian noise (process and measurement noise). Under these assumptions,
Kalman filtering can produce Bayesian optimal integration of multisensory cues,
which we will return to in the next section. Including vestibular, but also visual,
proprioceptive, and tactile cues, Borah’s model implicitly suggested internal
models of sensory dynamics and noise statistics in order to determine the Kalman
gains. However, the Kalman filter frameworks assume linearity. As seen in
Eq. (2), spatial orientation is inherently non-linear. To maintain this assumption,
Borah’s model was limited to simulating scenarios where the person’s orienta-
tion was upright (e.g., Earth-vertical yaw rotation) or tilted to only small angles
(i.e., where sin(θ) ≈ θ). In an effort to overcome these limitations, Pommellet


(Pommellet, 1990) and later Bilien (Bilien, 1993) modified Borah’s model and
attempted to implement Extended Kalman filter models. These models effectively
re-linearize the state equations about the current orientation; however, both were
limited by numerical stability issues.
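The linear-Gaussian machinery that Borah's approach relies on can be illustrated with a scalar Kalman filter; the state and the Q and R values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar Kalman filter for a random-walk state (e.g., a small tilt angle),
# fusing noisy sensory afference into a lower-variance estimate.
# Q, R: process and measurement noise variances (illustrative values).
Q, R = 0.01, 1.0
x_true, x_hat, P = 0.0, 0.0, 1.0
errs = []
for _ in range(500):
    x_true += rng.normal(0.0, np.sqrt(Q))     # true state drifts
    z = x_true + rng.normal(0.0, np.sqrt(R))  # noisy sensor measurement
    P += Q                                    # predict (state is constant)
    K = P / (P + R)                           # Kalman gain
    x_hat += K * (z - x_hat)                  # update with the innovation
    P *= (1.0 - K)
    errs.append(x_hat - x_true)
```

Note the gain K is computed from Q and R rather than set directly, which is exactly the distinction from the observer framework discussed later.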

3 Observer models
The observer framework allows for capturing the non-linearity (e.g., Eq. 2) of ori-
entation perception as well as full six degree-of-freedom motion, while generally
remaining numerically stable. The observer-family of spatial orientation models
are inspired by the engineering estimation structure of a classic Luenberger observer (Luenberger, 1971). In this approach, the state of a dynamical system is estimated by
computationally simulating a model of the system and sensor dynamics (i.e., an
internal model) to produce expected measurements. The expected measurements
are compared to actual measurements and the difference is weighted to help update
the state estimate, so that it better converges with reality. This internal model based
scheme is implicit in the design of the Kalman filter (Kalman and Bucy, 1961), which
is a class of observer models, as well as in the design of Output Feedback Control
Systems (Kwakernaak and Sivan, 1972) for model referenced control of a linear sys-
tem. An analogous approach was the basis for a heuristic model for sensory conflict
in motion perception, movement control, and motion sickness (Oman, 1982, 1990,
1991), which was built upon the concept of sensory expectancy conflict and related
concepts (Sperry, 1950), including sensory reafference (Held and Freedman, 1963;
Held and Hein, 1958; Von Holst, 1954; Von Holst and Mittelstaedt, 1950) and neural
mismatch (Reason, 1978).
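The core Luenberger feedback structure (an internal model corrected by weighted conflict) can be sketched for a scalar plant; the dynamics and the gain value are illustrative:

```python
# Minimal Luenberger observer for a scalar plant x' = a*x + b*u with
# measurement y = x. The observer runs an internal model of the plant and
# corrects it with weighted conflict e = y - y_hat (gain K illustrative).
a, b, K, dt = -0.5, 1.0, 2.0, 0.01
x, x_hat = 1.0, 0.0          # true state and (initially wrong) estimate
for _ in range(1000):        # 10 s of simulation
    u = 0.2                  # known input (efference copy)
    y = x                    # actual measurement (noise-free here)
    y_hat = x_hat            # expected measurement from internal model
    e = y - y_hat            # "sensory conflict"
    x     += dt * (a * x + b * u)
    x_hat += dt * (a * x_hat + b * u + K * e)
```

The estimation error decays at rate (K − a), so the estimate converges to the true state even from a wrong initial guess.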
The central hypothesis is that the brain generates an expected sensory measurement,
which is used to compute the sensory conflict signal. Here, “sensory conflict” is de-
fined as the difference between expected and actual sensory signals and not between
various sources of sensory signals (e.g., visual-vestibular conflict). This hypothesis
was new in that the previous Kalman filter models did not explicitly compute sensory
conflict (Borah et al., 1978, 1988), though the Kalman filter framework does not pre-
clude doing so. There is now evidence of neural populations in the vestibular nuclei
(Cullen, 2011; Roy and Cullen, 2001, 2004) and the cerebellum (Brooks and Cullen,
2009, 2013; Brooks et al., 2015) that behave similarly to the hypothesized sensory conflict signal (i.e., passively-generated motions produce a robust neural response, but actively-generated motions, for which the brain presumably can estimate the expected sensory measurement, do not) (Oman and Cullen, 2014). These neuron
populations appear to be involved with head posture stabilization and likely motion sickness, but apparently not the VOR. Future work should aim to elucidate their role in spatial orientation perception. To compute expected sensory measurements, the brain
requires an internal model, which has been hypothesized to reside in the cerebellum
(Brooks and Cullen, 2013; Laurens and Angelaki, 2016; Oman and Cullen, 2014).

3.1 Observer model framework


This framework was first applied to the relationship between motion disturbances, sensory rearrangements, and motion sickness (Oman, 1982, 1990), as well as spatial orientation, as depicted in Fig. 1.
In this application, the physical systems include body/world dynamics (including
Eqs. 1 and 2) and sensory dynamics (e.g., the high-pass filter characteristics of the
semicircular canals), and thus in this framework the brain is hypothesized to use in-
ternal models of these systems (Fig. 2). Nearly all models to date have focused on
perception of passive motion and therefore active control (producing self-generated
motion), shown in gray in Fig. 1, is omitted; one exception (Laurens and Angelaki,
2017) is discussed later.

[Figure: block diagram. External disturbances and motor efference drive body/world dynamics, whose orientation passes through sensory dynamics (plus sensory noise) to produce actual sensory afference. In parallel, an efference copy drives internal models of body/world and sensory dynamics to produce expected sensory afference. A comparison (C) of actual and expected afference yields sensory conflict, which is weighted (K) to update perceived orientation; motor planning compares perceived and desired orientation within the central nervous system.]

FIG. 1
Observer model framework for spatial orientation perception. Internal models in the
brain are used to compute expected sensory afference which is compared (C) to actual
sensory afference to yield sensory conflict. This is weighted (gain of K) to drive a
central perception of orientation. We focus on passive spatial orientation perception;
however, the active control (producing self-generated motion) pathways are included
in gray. Orientation in the figure refers to spatial orientation, including tilt relative to
gravity, linear, and angular motion.
After Oman, C.M., 1982. A heuristic mathematical-model for the dynamics of sensory conflict
and motion sickness. Acta Otolaryngol. Suppl. 392, 3–44; Oman, C.M., 1990. Motion sickness—a
synthesis and evaluation of the sensory conflict theory. Can. J. Physiol. Pharmacol. 68 (2), 294–303;
and Merfeld, D.M., Young, L.R., Oman, C.M., Shelhammer, M.J., 1993a. A multidimensional model of
the effect of gravity on the spatial orientation of the monkey. J. Vestib. Res. 3 (2), 141–161.

FIG. 2
Vestibular observer model. Bold denotes three dimensional vectors. The model primarily
uses the standard head-fixed, right handed coordinate system with X out the nose, Y out the
left ear, and Z out the top of the head (other coordinate systems are discussed below).
Hat symbols correspond to brain estimates of that parameter. Body/world dynamics
(pink, showing Eqs. 1 and 2) produce physical stimulation (f and ω) to the otoliths (OTO)
and semicircular canals (SCC, orange), yielding sensory afferent signals (αOTO and αSCC).
The brain compares these with expected sensory afferent signals (α̂OTO and α̂SCC). Vector
differences (red circles) between actual and expected sensory afferent signals produce
two sensory conflict signals (ea and eω). Similarly, ef represents the rotation vector error
(direction and magnitude) required to align the actual and expected otolith measurements
(red box; see Merfeld et al. (1993a), Eqs. 15 and 16 for details). Sensory conflict
signals are weighted by feedback gains (K's, green) to produce central estimates (i.e.,
perceptions of) angular velocity, gravity, and linear acceleration. In Merfeld et al.'s
implementation (1993a), the eω conflict signal was applied directly to the angular velocity
estimate and not its rate of change, as in a formal Kalman filter approach. Merfeld et al.
(1993a) originally proposed four scalar feedback gains (Ka, Kf, Kfω, and Kω). Highlighted
with a thick border, Newman (2009) subsequently added K1 = Kω/(Kω + 1) to ensure the
closed loop gain for angular velocity was unity. Expected sensory afferent signals are
produced using internal models of the otoliths and semicircular canals (ÔTO and ŜCC,
respectively, in purple) and internal models of body/world dynamics (light green, mimicking
Eqs. 1 and 2). It is assumed the brain has learned internal models that match the
physical dynamics well. For example, the internal model of the semicircular canals (ŜCC)
has the same high pass filter transfer function as that of the physical canals (SCC). See
Merfeld et al. (1993a) showing that model predictions of spatial orientation perception are
fairly robust to differences between internal model and sensory dynamics.
After Merfeld, D.M., Young, L.R., Oman, C.M., Shelhammer, M.J., 1993a. A multidimensional
model of the effect of gravity on the spatial orientation of the monkey. Journal of Vestibular Research 3 (2),
141–161 with updates from Newman, M.C., 2009. A Multisensory Observer Model for Human Spatial
Orientation Perception. S.M., Massachusetts Institute of Technology.

3.2 Vestibular observer model


The observer model (Merfeld and Zupan, 2002; Merfeld et al., 1993a) makes four
fundamental hypotheses: (1) The brain computes central estimates of linear acceleration
(â, where hat symbols represent internal estimates), gravity (ĝ), and angular
velocity (ω̂). (2) These central estimates are not produced by direct processing of
sensory signals (e.g., αOTO and αSCC), but instead the brain uses internal models
of sensory dynamics (in the vestibular-only model: ÔTO and ŜCC) and physical laws
(f̂ = ĝ − â for Eq. (1) and dĝ/dt = −ω̂ × ĝ for Eq. (2)) to produce expected sensory
afferent signals (α̂OTO and α̂SCC). (3) These are compared (red in Fig. 2) to produce
sensory conflict signals (ea, ef, and eω) which are then weighted (K gains). (4) The brain
combines these weighted sensory conflicts through sensory integration (blue circles
in Fig. 2), such that semicircular canal pathways influence the perception of orientation
(and in turn linear acceleration) and otolith pathways influence the perception
of angular velocity.
Using these hypotheses the model outputs a time history of predicted perceptions
of angular velocity (ω̂), gravity (ĝ), and linear acceleration (â) in response to an
inputted time history of physical motions defined by angular velocity (ω) and linear
acceleration (a). From these inputs, Eq. (2) and an initial orientation (g(t0)) are used
to compute the gravito-inertial stimulation (f) to the otoliths. The model uses
quaternions (a stable parameterization of orientation) and quaternion integration to track
the non-linearity of orientation (Newman, 2009).
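A hypothetical one-axis (yaw-only) reduction of this feedback loop illustrates the internal model, conflict, and weighting hypotheses, including the Kω/(Kω + 1) closed-loop factor (Newman's K1 in Fig. 2); the canal time constant and feedback gain here are illustrative, not fitted values:

```python
import numpy as np

tau, K_w, dt = 5.7, 8.0, 0.01    # canal time constant (s), feedback gain, step (s)

def simulate(omega):
    """One-axis observer: canal afference vs. internal-model expectation."""
    x = x_hat = 0.0              # canal low-pass state and its internal-model copy
    w_hat_hist, aff_hist = [], []
    for w in omega:
        alpha = w - x            # actual canal afference (high-pass of omega)
        # Conflict applied directly to the estimate: w_hat = K_w*(alpha - alpha_hat)
        # with expected afference alpha_hat = w_hat - x_hat; solving this
        # algebraic loop yields the K_w/(K_w + 1) factor (Newman's K1).
        w_hat = K_w / (K_w + 1.0) * (alpha + x_hat)
        x += dt * (w - x) / tau              # physical canal dynamics
        x_hat += dt * (w_hat - x_hat) / tau  # internal model of canal dynamics
        w_hat_hist.append(w_hat)
        aff_hist.append(alpha)
    return np.array(w_hat_hist), np.array(aff_hist)

# 60 deg/s yaw rotation for 60 s, then a sudden stop.
n = int(60.0 / dt)
omega = np.concatenate([np.full(n, 60.0), np.zeros(n)])
w_hat, aff = simulate(omega)
```

In this sketch the perceived velocity persists well after the canal afference has decayed, both during prolonged rotation and after the stop, reflecting the extended time constant characteristic of velocity storage discussed below.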

3.3 Extensions to vestibular observer model


Earlier efforts applied the vestibular observer model (Fig. 2) to predicting eye move-
ments (i.e., the rotational and translation vestibular ocular reflex, VOR) (Merfeld
et al., 1993a), providing a sensory conflict based explanation of VOR velocity stor-
age (a well-studied behavior in which the bandwidth of rotational responses is ex-
tended to lower frequencies than those captured by the semicircular canal
dynamics). Specifically, the internal estimates of angular velocity, linear accelera-
tion, and gravity were modeled to predict the VOR (Angelaki et al., 1999; Green
and Angelaki, 2004; Merfeld and Zupan, 2002), even for complex three-dimensional
eye movements (Haslwanter et al., 2000), enabling comparison to temporal record-
ings of slow phase reflexive eye movements. This approach built upon other models
specifically focused on predicting VOR responses (Raphan et al., 1977, 1979;
Robinson, 1977, 1981), also extended to three dimensions (Raphan and Cohen,
2002). However, more recent empirical evidence has shown that there are qualitative
response differences in the processing of VOR versus perception of spatial orienta-
tion (Merfeld et al., 2005a,b). Thus, here we primarily focus on modeling to predict
human perception of orientation and motion (i.e., reported by the subject verbally
and/or using a psychophysical task).
The model has been applied successfully to predict the dynamics of human self-
orientation perception, even for complex paradigms such as off-vertical axis rotation

(OVAR) (Vingerhoets et al., 2006, 2007, 2008). In these efforts, the concept of a
leaky integrator was added to produce predicted perception of linear velocity from
that of linear acceleration (Vingerhoets et al., 2006). This approach was extended
further (Newman, 2009) to predict perceptions of linear position and angular azi-
muth. Newman modeled this with different time constants of leaky integration
depending upon the axis in a limbic coordinate system, which is defined by the per-
ceived direction of gravity and motivated by physiological evidence (Best et al.,
2001; Calton and Taube, 2005; Hafting et al., 2005; Knierim et al., 2000). Further,
Vingerhoets et al. (2007) proposed the inclusion of an idiotropic vector in the dy-
namic observer model. The concept of the idiotropic vector (h), proposed earlier
for static orientation (Mittelstaedt, 1983), is a cognitive driver of perception of ver-
tical toward the body longitudinal axis. This approach is useful for predicting under-
estimation of subjective visual vertical (SVV) for large angles of roll tilt (Baccini
et al., 2014; Barnett-Cowan and Harris, 2008; Muller, 1916).
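The leaky integration used above to derive perceived linear velocity from perceived acceleration can be sketched as follows; the 16.67 s time constant is used as an illustrative horizontal-plane value:

```python
import numpy as np

def leaky_integrate(u, dt, tau):
    """Leaky integrator dv/dt = u - v/tau, mapping perceived linear
    acceleration to perceived linear velocity (tau illustrative)."""
    v, out = 0.0, []
    for ui in u:
        v += dt * (ui - v / tau)
        out.append(v)
    return np.array(out)

# A 1 s, 1 m/s^2 forward acceleration pulse in the dark: the velocity
# percept builds, then fades away over ~2 min even though true velocity
# would remain constant, mimicking imprecise translation perception.
dt = 0.01
accel = np.concatenate([np.ones(100), np.zeros(12000)])
vel_percept = leaky_integrate(accel, dt, 16.67)
```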
Another concern was that the observer model originally failed to mimic the systematic overestimation of roll tilt observed in hyper-gravity, such as experienced
on a centrifuge (Miller and Graybiel, 1966). Early models of static tilt in hyper-gravity
(Correia et al., 1968; Schone, 1964) hypothesized that tilt perception was directly processed from gravito-inertial shear stimulation in the utricular plane of the otoliths.
We note the utricular maculae have complex three-dimensional surface geometry and
thus the concept of a utricular plane is a considerable simplification. However, when
considering an average utricular plane, the saccule is primarily sensitive perpendicular
to this plane with potentially different afferent characteristics (Fernandez and Goldberg,
1976a). The utricular plane is pitched up relative to head level by approximately
30 degrees (Corvera et al., 1958; Curthoys et al., 1999), but level in roll. Motivated
by earlier models which differentially considered utricular stimulation (Mittelstaedt,
1983; Ormsby and Young, 1976) or that in the vertical direction (Dai et al., 1989;
Haslwanter et al., 2000), we proposed a modification to the observer model in which
sensory conflict (ea) is differentially weighted in the utricular plane (Kau) versus that
perpendicular to the utricular plane (i.e., primarily in the saccular direction) (Kau⊥)
(Clark et al., 2015b,c). This approach effectively predicts (1) dynamic roll tilt in
hyper-gravity, which is overestimated but less so for faster tilts due to the integration
of semicircular canal cues that are unaffected by altered gravity (Clark et al., 2015b);
(2) illusory static pitch tilt in hyper-gravity (Clark et al., 2015c; Cohen, 1973;
Correia et al., 1968); and (3) static roll tilt in hypo-gravity in a single subject (Clark
and Young, 2017) and in a hypo-gravity analog (Galvan-Garza et al., 2018). This
modification to the model (differential weighting of sensory conflicts in the utricular
plane) extends observer model predictions to scenarios in altered gravity.
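The differential weighting can be sketched by decomposing the otolith conflict ea relative to an assumed average utricular plane; the plane normal and the gain values below are illustrative placeholders, not the fitted parameters of Clark et al.:

```python
import numpy as np

pitch = np.radians(30.0)  # assumed pitch of the average utricular plane
n_hat = np.array([np.sin(pitch), 0.0, np.cos(pitch)])  # assumed unit plane normal
K_au, K_au_perp = 4.0, 1.0  # illustrative in-plane vs. perpendicular gains

def weight_conflict(e_a):
    """Split the otolith conflict into components within the utricular plane
    and perpendicular to it, then weight the two components differently."""
    e_perp = np.dot(e_a, n_hat) * n_hat  # component perpendicular to plane
    e_in = e_a - e_perp                  # component within the utricular plane
    return K_au * e_in + K_au_perp * e_perp
```

A conflict vector lying in the plane is scaled by Kau; one along the plane normal is scaled by Kau⊥; any other vector is a mixture of the two.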

3.4 Visual-vestibular observer model


Building upon the framework of the vestibular-only observer model (Merfeld and
Zupan, 2002), visual pathways have been incorporated (Newman, 2009). Fig. 3
depicts the added visual pathways, corresponding to the visual system extracting

FIG. 3
Visual-vestibular observer model, after Newman (2009). Bolding, symbols, coordinate
frames, and colors are the same as in Fig. 2. Four visual pathways are added to the
vestibular model: visual linear position (pV), visual linear velocity (vV), visual angular
velocity (ωV), and visual "gravity" (gV). Each uses the observer feedback framework of an
internal model (e.g., V̂ISp) to compute an expected sensory measurement (e.g., α̂pV),
which is compared to the actual sensory measurement (e.g., αpV) to produce a sensory
conflict signal (e.g., epV). The visual error/sensory conflict signals are weighted (e.g., KpV)
and, as in a traditional Luenberger observer, are summed to yield the state estimate of
the derivative (e.g., weighted linear position error is summed to yield the estimate of
linear velocity, v̂). Additional internal models capture linear position being the integral of
linear velocity, which is the integral of linear acceleration. To model imprecise translation
perception in the dark, leaky integration (time constant of τ) is used between perceived
acceleration and velocity. This linear translation integration process occurs in a limbic
coordinate frame, defined by the perceived direction of gravity (not explicitly shown in
figure). Newman (2009) proposed differing linear translation integration time constants
in the vertical limbic direction (τz = 1 s) versus the horizontal plane (τx = τy = 16.67 s).
Visual pathways can be activated and/or deactivated during a simulation through gating
(not shown in figure).

four visual cues from its environment: visual linear position (pV), visual linear ve-
locity (vV), visual angular velocity (ωV) (Gu et al., 2006), and visual "gravity"
(gV) (derived by cues presumed to be parallel or perpendicular to vertical objects
such as trees or the horizon, respectively). Visual contributions are combined with
those from the vestibular system through summation. In Newman's implementation (2009), the sensory dynamics of each of the four visual sensors (VIS) was assumed to be unity (i.e., accurate with no added dynamics) and, for simplicity, focal and ambient vision were not distinguished, as previous efforts had done (Borah et al., 1988; Pommellet, 1990). Similarly, it was assumed the brain uses accurate internal models
for this visual processing (i.e., also unity). Sensory conflicts for visual pathways were
produced by simple vector differences (epV, evV, eωV) or using the gravito-inertial ro-
tation vector error (egV), mimicking that for the otoliths in Merfeld et al. (1993a). The
sensory conflict signals for the added visual pathways are weighted by feedback
gains, as defined in Table 2. Unlike the vestibular pathways which are presumably
always active for a healthy individual, the model’s visual pathways can be activated/
deactivated throughout or during a simulation to replicate scenarios where visual
cues become available or are lost (e.g., turning off the lights, opening one’s eyes,
or a pilot flying in and out of clouds) (Newman, 2009).
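This gating reduces to multiplying each visual conflict by a binary gate before its feedback gain is applied; the pathway names and gain values below are hypothetical, not Newman's fitted values (which appear in Table 2):

```python
# Gating visual pathways on/off mid-simulation (e.g., a pilot flying into
# cloud): each visual conflict is multiplied by a 0/1 gate before its
# feedback gain. Pathway keys and gain values are hypothetical.
K_vis = {"pos": 0.1, "vel": 0.75, "ang_vel": 10.0, "grav": 0.1}

def visual_feedback(conflicts, vision_available):
    gate = 1.0 if vision_available else 0.0
    return {name: gate * K_vis[name] * err for name, err in conflicts.items()}
```

With the gate at zero the visual conflicts contribute nothing and the model reverts to its vestibular-only behavior.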

3.5 Experimental validation


One of the strengths of the observer model is the number of different experimental
paradigms which it is able to predict (scenarios which the model is known to poorly
predict are discussed in the next section). Table 1 lists the various experimental par-
adigms mimicked and the source(s) which provides the quantitative comparison. We
note that these can be replicated with a single set of user defined gains (e.g., Table 2).

3.6 Limitations
While the observer model approach has a history of successfully replicating a wide
range of experimental paradigms (Table 1), there are limitations worth noting. First,
the observer model contains feedback gains (K’s in Figs. 2 and 3), which are free
parameters, set by the modeler to predict perceptions observed experimentally. As
such, an oft-noted criticism is that there is no physiological significance or motiva-
tion for the feedback gain values. In other models, such as Kalman filter models
(Borah et al., 1988; Karmali et al., 2018; Laurens and Angelaki, 2017; Lim et al.,
2017) and particle filter models (Karmali and Merfeld, 2012; Laurens and
Droulez, 2007) the gains are not set by the modeler explicitly. Instead the measure-
ment noise and process noise values (Karmali and Merfeld, 2012; Sadeghi et al.,
2007) define the feedback gains (e.g., using the Riccati equation in Kalman filtering
or statistically estimated across parallel pathways in particle filtering) that weight the
error optimally (Karmali, 2019). However, the selection of measurement and process
noise values by the modeler is analogous to selecting feedback gains. In fact, for
small angles at which the Kalman filter model is valid, its computation of feedback

Table 1 Experimental motion paradigms for which the observer model has successfully been validated.

Motion paradigm: References

Earth-vertical rotation: Merfeld et al. (1993a)
Off-vertical axis rotation (OVAR): Haslwanter et al. (2000), Merfeld et al. (1993a), and Vingerhoets et al. (2006, 2007)
Post-rotational tilt: Merfeld et al. (1993a,b) and Zupan et al. (2000)
Constant linear acceleration (somatogravic illusion): Cohen et al. (1973) and Newman (2009)
Sinusoidal linear translation in different axes: Walsh (1964), Malcolm and Jones (1974), Israel et al. (1997), and Newman (2009)
Roll tilt: Merfeld and Zupan (2002) and Vingerhoets et al. (2007)
Tilt versus translation across frequencies: Merfeld and Zupan (2002)
Variable-radius centrifugation: Seidman et al. (1998) and Merfeld et al. (2001)
"Coriolis" cross-coupled illusion: Guedry and Benson (1978), Newman (2009), and Vincent et al. (2018)
Circular (earth-vertical) visual vection: Waespe and Henn (1977), Cohen et al. (1981), and Newman (2009)
Linear (earth-horizontal) visual vection: Berthoz et al. (1975) and Newman (2009)
Visual pseudo-Coriolis illusion: Dichgans and Brandt (1973), Newman (2009), and Newman et al. (2011)
Visual-vestibular earth-vertical rotation: Cohen et al. (1981) and Newman (2009)
Visual-vestibular constant linear acceleration (visual somatogravic): Tokumaru et al. (1998) and Newman (2009)
Hyper-gravity static roll tilt: Clark et al. (2015b), Correia et al. (1968), and Schone (1964)
Hyper-gravity dynamic roll tilt: Clark et al. (2015b,c) and Guedry and Rupert (1991)
Hyper-gravity static pitch tilt ("elevator illusion"): Clark et al. (2015c), Cohen (1973), and Correia et al. (1968)
Hypo-gravity static roll tilt: Clark et al. (2015c), Clark and Young (2017), and Galvan-Garza et al. (2018)

Where appropriate, references are provided for both the experimental dataset and the associated modeling comparison.

gains, model structure, and associated responses are equivalent to those used in observer (Selva and Oman, 2012). Estimating measurement and process noise values based upon neural recordings is appealing (no free parameters selected by the modeler); however, these quantities are rough approximations, not known in humans, and not available for all sensory channels (e.g., visual pathways). Further, as previously noted (MacNeilage et al., 2008), the bandwidth of noise estimates is often tweaked to better fit perceptual responses (Borah et al., 1978, 1988), in which case this appeal is lost.
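The Riccati computation mentioned above can be made concrete with a minimal scalar sketch. This is an illustration only, not a fitted perceptual model: the dynamics, measurement model, and noise variances below are arbitrary assumptions chosen to show how the feedback gain emerges from the assumed noise values rather than from hand-tuning.

```python
# Hypothetical scalar example: estimate one state from one noisy sensor.
# In a Kalman filter the feedback gain K is not chosen by the modeler;
# it follows from the process noise Q and measurement noise R via the
# discrete Riccati recursion (all values here are illustrative).
A, H = 1.0, 1.0      # trivial state dynamics and measurement model
Q, R = 0.01, 0.1     # assumed process and measurement noise variances

P = 1.0              # initial state-estimate covariance
for _ in range(1000):            # iterate to the steady-state solution
    P = A * P * A + Q            # a priori covariance
    K = P * H / (H * P * H + R)  # Kalman gain: set by Q and R, not by hand
    P = (1 - K * H) * P          # a posteriori covariance

print(K)             # steady-state gain implied by the chosen Q and R
```

Changing Q or R changes K, which is the sense in which selecting noise values is analogous to selecting feedback gains directly.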

Table 2 Observer model sensory conflict feedback gain values.

Gain    Description of feedback gain and state estimate influenced    Value

Kω      Semicircular canal error on angular velocity estimate    8 [unitless]
Kau     Otolith error in the utricular plane on linear acceleration estimate    2 [unitless]
Kau⊥    Otolith error perpendicular to the utricular plane on linear acceleration estimate    4 [unitless]
Kf      Otolith rotation error on gravity estimate    4 [1/s]
Kfω     Otolith rotation error on angular velocity estimate    8 [1/s]
Kωv     Visual angular velocity error on angular velocity estimate    10 [unitless]
Kxv     Visual position error on linear position estimate    0.1 [1/s]
Kẋv     Visual position error on linear velocity estimate    0.75 [1/s]
Kgv     Visual gravity error on gravity estimate    10 [1/s]

Different sets of feedback gains have been used to fit VOR responses (Merfeld
et al., 1993a) versus perception of orientation and motion (Vingerhoets et al., 2007).
A comprehensive experimental effort has not yet been undertaken to systematically
determine the best set of feedback gains across paradigms. For now, we suggest
using the feedback gain parameters in Table 2. Notably, this single set of gains
can effectively produce perceptions for each of the paradigms in Table 1 (i.e., gains
do not need to be modified to predict each paradigm), suggesting the model is not
over-parameterized.
The four original vestibular parameters (Kω, Ka, Kf, Kfω) were the only set selected specifically by fitting to spatial orientation perception, using an OVAR paradigm (Vingerhoets et al., 2007). In addition, we suggest altering the feedback gain for otolith errors in the utricular plane (Kau) versus that perpendicular to it (Kau⊥ = Ka) to extend the model to altered-gravity environments (Clark et al., 2015b,c). Visual parameters were defined by Newman (2009), fit to mimic previous modeling behavior (Borah et al., 1978, 1988), primarily for paradigms such as visual rotational and translational vection with experimental visual scenes such as moving/rotating dot/line patterns. These gains may require modification depending upon the reliability/saliency of natural visual scenes, as there is neural evidence that the brain dynamically reweights visual versus vestibular stimuli based upon cue reliability (Fetsch et al., 2012). Further, the model's predicted dynamic responses when visual cues are activated/deactivated during a simulation have yet to be experimentally validated.
Second, the model outputs a single time-history of perceived spatial orientation. However, it is well known that there are substantial inter-individual differences in spatial orientation perception (i.e., the same motion profile is perceived markedly differently by different subjects, particularly those with different histories of motion experience, such as pilots; Tribukait et al., 2011). Further, within-subject variation in perception is common (i.e., the same motion profile is perceived differently by the same subject on repeated presentations). The observer model does not predict this variation (or even attempt to). Instead, the single time-history of perceived spatial orientation can be considered an "average" perception (either within or across individuals). One approach to modeling inter-individual variations is to modify the feedback gains, which presumably differ across individuals and in turn create differences in model predictions for the same motion input.
Physiologically, neural pathways are distributed (e.g., thousands of afferent neurons innervate the semicircular canals). In addition, neural pathways are noisy (i.e., like other sensors, afferent transduction includes noise; orange circle in Fig. 1). While the observer model can be (and has been) extended to include distributed parallel pathways with simulated sensory noise, typically this is not done, as it is not critical to the observer framework. This contrasts with, for example, a particle filter model (Karmali and Merfeld, 2012), in which the simulated sensory noise across parallel pathways is used to determine optimal feedback gains. Instead, the single, deterministic (non-noisy) pathways in observer can again be considered an "average" of the signal physiologically carried across a distributed network of neurons. The single-pathway approach of observer is beneficial in terms of computational complexity, enabling faster-than-real-time simulation (i.e., 1 s of an input motion profile can be simulated in much less than 1 s of real time using a standard laptop computer).
Finally, while the observer model has been successful in predicting perceptions from a range of paradigms, there are known situations that are not predicted well. Notably, the model applies the hypothesis that linear acceleration is estimated using an internal model of Eq. (1) (f̂ = ĝ - â). Since the estimate of gravity is assumed to be near 1 Earth G (|ĝ| = 1), paradigms that produce gravito-inertial stimulation greater than 1 Earth G yield a nonzero prediction of perceived linear acceleration. In some scenarios, such as sinusoidal tilt, linear translation, or combinations of tilt and translation in the dark (Merfeld and Zupan, 2002; Merfeld et al., 2005a,b), the model's predictions of linear acceleration/translation match empirical perceptual reports across a range of motion frequencies. However, in other scenarios the direction of the gravito-inertial stimulation is constant relative to the subject (while the magnitude is greater than 1 Earth G), such as during circular motion (Nooij et al., 2016) or in a hyper-gravity environment (Clark et al., 2015a,b; Correia et al., 1968; Schone, 1964). In these scenarios, observer's internal model of Eq. (1) predicts a sustained perception of linear acceleration (in the absence of conflicting visual cues). With leaky integration to yield linear velocity and position perceptions (Newman, 2009), this results in predicted perceptions of linear translation that are typically not reported by subjects (Clark et al., 2015c). In the case of circular trajectories, this creates errors in predicted heading perception (Nooij et al., 2016). Potential explanations for the model discrepancy in translation perception include (1) influences from non-vestibular cues (e.g., somatosensory) (Clark et al., 2015b,c) and/or (2) subjects' cognitive knowledge about feasible motion dynamics or familiar stimulus combinations (Clark et al., 2015b; Nooij et al., 2016). These may act to quench the perception of sustained linear translation; since they are not generally included in the model, their absence may cause the discrepancy. In fact, subject knowledge of feasible motions has been shown to influence perceptual reports (Wertheim et al., 2001), and in a swing paradigm, predictions were improved when these geometric constraints were included in the observer model's processing (Rader et al., 2009). Finally, we note that psychophysical tasks for reporting perceived linear acceleration, velocity, and position (i.e., translation) are less well developed, particularly for non-sinusoidal stimuli, making it difficult to properly assess these model predictions.
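The Eq. (1) discrepancy can be sketched numerically. This is a minimal illustration under stated assumptions (a sustained head-vertical 1.2 G stimulus, with the gravity estimate constrained to unit magnitude), not a run of the full observer model.

```python
import numpy as np

# Minimal sketch of the hyper-gravity limitation: the measured
# gravito-inertial force f exceeds 1 G, but the internal gravity
# estimate g_hat stays near unit magnitude, so by Eq. (1),
# f = g - a, the residual is attributed to linear acceleration.
f = np.array([0.0, 0.0, -1.2])    # sustained 1.2 G along head vertical (in G)
g_hat = f / np.linalg.norm(f)     # unit-magnitude gravity estimate, aligned with f
a_hat = g_hat - f                 # residual attributed to linear acceleration

print(np.linalg.norm(a_hat))      # sustained 0.2 G of predicted acceleration
```

Integrating this sustained 0.2 G over time (even leakily) yields the predicted translation that subjects typically do not report.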

3.7 Applications
A computational model for dynamic, multisensory spatial orientation perception
(even if it is black box) is particularly valuable scientifically because it produces
quantitative predictions, which can then be assessed experimentally. For example,
hypothesis formulation can be more rigorous (Robinson, 1977) by making specific
predictions (i.e., magnitude, direction, time-history, phase, each as a function of the
stimulus characteristics) (Sadeghpour et al., 2019).
In addition, the model has been used or proposed to be used for non-scientific
applications (Newman et al., 2012). For example, it could be used to identify mis-
perceptions in postural tasks, potentially assisting with instability in populations such
as the elderly (Bermudez Rey et al., 2017; Karmali et al., 2017) and those conducting
piloting-like tasks (Rosenberg et al., 2018). Flight simulators are designed to best
replicate the perceptions of motions experienced during flight. The computational
model can help in the design of flight simulators and their motion drive algorithms
(Bussolari et al., 1989; Sivan et al., 1982). For example, the model can be used to
quantitatively predict perceived orientation during a desired flight profile, then pre-
dict that produced on the flight simulator (using different motions), and compare the
two sets of perceptions (Dixon et al., 2019).
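As a toy illustration of such perceptual prediction, the sketch below approximates the canal/velocity-storage pathway as a single first-order high-pass filter on angular velocity; the ~16 s time constant and the 30 deg/s turn are illustrative assumptions, not the full observer model.

```python
import numpy as np

# Why a sustained coordinated turn becomes imperceptible: the
# semicircular-canal pathway behaves roughly as a high-pass filter on
# angular velocity (illustrative perceptual time constant ~16 s).
tau, dt = 16.0, 0.01
t = np.arange(0, 60, dt)
omega = np.where(t > 1.0, 30.0, 0.0)     # step input: enter a 30 deg/s turn

perceived = np.zeros_like(omega)
for i in range(1, len(t)):               # first-order high-pass: tau*y' + y = tau*u'
    du = omega[i] - omega[i - 1]
    perceived[i] = perceived[i - 1] + du - dt * perceived[i - 1] / tau

print(perceived[-1])   # decays toward zero: the turn comes to feel like level flight
```

A motion-drive algorithm exploits exactly this decay, washing the simulator cab back to neutral below perceptual thresholds while the predicted perception matches that of the actual flight profile.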
Accurate spatial orientation perception is critical in piloted aviation. Misperceptions of orientation, or spatial disorientation, defined as the pilot's "failure to correctly perceive attitude, position, and motion of the aircraft" (Benson, 1978), are a
leading cause of accidents in high performance aircraft (Neubauer, 2000). Using the
time-history of aircraft trajectory as inputs, the model can predict what the pilot’s
perception of aircraft orientation may have been (Newman et al., 2014). This can
be useful in determining if spatial disorientation may have been a factor post-
accident, particularly in fatal accidents, but also given that spatial disorientation
may be unrecognized (Type I) by pilots (Previc and Ercoline, 2004). More proac-
tively, it has also been proposed to use the model on-board the aircraft to identify
potential pilot disorientation in real-time (Rupert et al., 2016). In the event of pre-
dicted pilot disorientation, appropriate countermeasures may be taken in an effort
to prevent disorientation from leading to an accident (e.g., alerting the pilot via a warning of potentially unrecognized disorientation, reducing or removing control authority, adjusting automation level, etc.). Finally, the model can be used ahead of
time to assess the potential for pilot disorientation during certain motion trajectories,
including planetary landing (Clark et al., 2010, 2011, 2014), Space Shuttle landings
(Clark et al., 2012), and artificial gravity (Vincent et al., 2018).
However, there are still challenges to applying the model to piloted aerospace vehicle scenarios. Strictly, the model predicts perception of passive motions. While pilots do not make typical whole-body active motions (e.g., self-locomotion) and are subject to some passive motion (e.g., turbulence), they typically do have active control of the aircraft trajectory through their control inputs. Second, to properly simulate the observer model, information beyond the aircraft motion trajectory is needed. The inputs of head linear acceleration and angular
velocity require knowing pilot head movements within the aircraft (both voluntary
and reflexive; Gallimore et al., 1999, 2000). Visual inputs require knowing what
the pilot is viewing (e.g., gaze out the window or on instruments) (Newman et al.,
2014). Non-invasive head and eye tracking systems are not yet readily available in
operational aircraft.

4 Mathematical model frameworks beyond observer


With the focus on dynamic, multisensory spatial orientation perception, the majority
of this review has focused on the family of observer models. However, there are
alternative frameworks worthy of discussion. First, while observer combines cues
from multiple sensory modalities through weighted linear summing, there are alter-
nate frameworks. Early efforts proposed a non-linear sensory conflict switch, in
which the weight applied to visual cues depended upon their relative agreement with
vestibular estimates (Zacharias and Young, 1981). Other (non-observer) models aimed at multisensory integration for spatial orientation perception have been proposed (Groen et al., 2007; Small et al., 2006); however, these models use only forward processing of sensory cues and do not include the sensory conflict framework.
A "perceived down" conflict model (Bos and Bles, 2002) is a close cousin of the
observer family, but differs in that it does not explicitly compare actual and expected
measurements to yield sensory conflict. Similarly, a “sensory weighting” model
(Zupan et al., 2002) is closely related in that it uses internal models, sensory conflict,
and sensory integration, but differs from the observer framework in its use of weighting rather than feedback gains, in how gravito-inertial force is proposed to be resolved, and in its inclusion of the proposed idiotropic vector. This approach builds upon the family of
“coherence constraint” models (Darlot, 1993; Droulez and Darlot, 1989; Zupan
et al., 1994) focused on the brain applying internal models of sensory dynamics, body
dynamics, and physical relationships. Another approach for multisensory integration
is through application of Bayesian inference, in which cues are weighted based upon
their reliability using prior probabilities of motion/orientation (Ernst and Banks,
2002; Laurens and Droulez, 2007; MacNeilage et al., 2007). Kalman filter models
similarly weight cues based upon their reliability. Recently, a Kalman filter model has been extended to include closed-loop, active control (Laurens and Angelaki, 2017); however, as noted previously, Kalman filters require linearity and thus are limited to small angles. Another variant, the "unscented Kalman filter" model, has been implemented (Selva, 2009), with only passive control, in an effort to overcome the limitations of linearity in orientation perception.
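For two static cues, the reliability-based weighting shared by the Bayesian and Kalman approaches reduces to inverse-variance (maximum-likelihood) weighting. The sketch below is a standard illustration of that rule; the heading values and noise levels are arbitrary assumptions, not data.

```python
# Reliability-weighted cue fusion: each cue is weighted by its inverse
# variance, so the fused estimate leans toward the more reliable cue
# and is more precise than either cue alone (values illustrative).
visual, sigma_v = 10.0, 2.0        # visual heading estimate (deg) and its noise SD
vestibular, sigma_s = 16.0, 4.0    # vestibular heading estimate (deg) and noise SD

w_v = sigma_v**-2 / (sigma_v**-2 + sigma_s**-2)        # weight on the visual cue
fused = w_v * visual + (1 - w_v) * vestibular          # fused heading estimate
sigma_fused = (sigma_v**-2 + sigma_s**-2) ** -0.5      # fused noise SD

print(fused, sigma_fused)   # fused estimate lies nearer the reliable (visual) cue
```

The same inverse-variance logic is what the Riccati recursion of a Kalman filter computes dynamically, updating the weights at every time step.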

5 Future work
Future efforts should aim to extend and further assess the model. Currently, observer
includes vestibular (otolith and semicircular canals) and visual pathways, which
are the primary cues used for spatial orientation perception. Integration of somato-
sensory cues has been modeled for static perception (Bortolami et al., 2006),
but only preliminary efforts have been made to incorporate them into the observer
dynamic model (personal communication, Newman). Somatosensory sensors, which are presumably distributed throughout the body, require modeling the transformation across the neck (Mergner et al., 1983, 1991, 1997), a transformation that has thus far not been needed because the visual and vestibular sensors are fixed in the head. Nonetheless,
this modeling extension is critical for addressing how the brain estimates body
versus head orientation and combines cues across dynamically rotating coordinate
frames. Further, the model has only considered natural sensory cues (i.e., vestibular
and natural visual cues, such as optical flow from the surrounding environment).
However, other “artificial cues” may be provided that influence orientation
perception. Such artificial (or augmented) cues include a haptic/tactile belt or vest
(Rupert, 2000) that vibrates to indicate orientation, an attitude display indicator, or
three-dimensional auditory stimulation systems. As mentioned earlier, the observer model provides an "average" predicted perception. Specifically, it does not typically include sensory noise or aim to capture individual differences, though these could be added. In addition to innate individual differences, higher-level cognitive effects (McGrath et al., 2016) of attention, focus, and knowledge of feasible or likely motions could be incorporated in the model. In each case, these model additions need to be validated experimentally, and the added free parameters (e.g., feedback gains) must be sufficiently constrained.
Scientifically, an important extension is to computationally implement the
pathways for closed-loop, active control of spatial orientation (those grayed out
in Fig. 1). In an extensive recent effort (Laurens and Angelaki, 2017) this has been
accomplished using the Kalman filter framework, but as noted earlier it is restricted
to small angles due to the linearity assumption. Implementing this in an observer
framework would capture the non-linearity of full three-dimensional orientation
perception. One of the challenges of modeling closed-loop, active control is that
it is specific to the control scenario. For example, modeling postural control
(Peterka, 2009; Ting, 2007) requires a different system plant than control of an
aircraft’s orientation. Finally, it has been conceptually hypothesized that sensory
conflict is used to update internal models. Specifically, a change in the body/world
(e.g., microgravity in space) or sensory dynamics (e.g., peripheral vestibular
damage) would lead to incorrect orientation estimates and in turn cause sustained
sensory conflict. Updating (or reinterpreting/adapting) the internal models to
reduce/minimize sensory conflict should cause them to better match the changed
body/world or sensory dynamics. However, this conceptual hypothesis has not yet
been implemented computationally.
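Although, as stated above, this hypothesis has not yet been implemented computationally in the observer framework, the core idea can be caricatured in a few lines. This is a toy gradient sketch, not a validated implementation: the gain values, adaptation rate, and stimulus are arbitrary assumptions.

```python
# Toy illustration of conflict-driven internal-model adaptation: after
# the true sensor gain changes (e.g., peripheral vestibular damage
# halves it), the internal model's gain g_int is nudged on each trial
# to reduce squared sensory conflict (all values illustrative).
g_true, g_int = 0.5, 1.0    # actual sensor gain vs internally modeled gain
eta = 0.05                  # assumed adaptation rate
stimulus = 1.0              # constant test stimulus

for _ in range(500):
    conflict = g_true * stimulus - g_int * stimulus   # measured minus expected
    g_int += eta * conflict * stimulus                # gradient step on conflict**2

print(g_int)                # converges toward the changed gain, quenching conflict
```

Once g_int matches g_true, sensory conflict returns to zero, which is the sense in which minimizing conflict drives the internal model to track the changed body/world or sensory dynamics.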

Acknowledgments
This work was partially supported by SBIR Phase II to Environmental Tectonics Corp.,
Agreement No. W911W6-17-C-0011 (TKC, CMO, MCN, DMM). Dr. Merfeld is a federal/
contracted employee of the United States government. This work was prepared as part of his
official duties. Title 17 U.S.C. 105 provides that “copyright protection under this title is not
available for any work of the United States Government.” Title 17 U.S.C. 101 defines a
U.S. Government work as work prepared by a military service member or employee of the
U.S. Government as part of that person’s official duties. Dr. Merfeld was supported/funded
by work unit number H1705 and funded by the Defense Health Program JPC-5 Aviation Mishap
Prevention Working Group. The views expressed in this article reflect the results of research
conducted by the authors and do not necessarily reflect the official policy or position of the
Department of the Navy, Department of Defense, nor the United States Government.

References
Angelaki, D.E., McHenry, M.Q., Dickman, J.D., Newlands, S.D., Hess, B.J.M., 1999. Com-
putation of inertial motion: neural strategies to resolve ambiguous otolith information.
J. Neurosci. 19 (1), 316–327.
Angelaki, D.E., Shaikh, A.G., Green, A.M., Dickman, J.D., 2004. Neurons compute internal
models of the physical laws of motion. Nature 430 (6999), 560–564.
Baccini, M., Paci, M., Del Colletto, M., Ravenni, M., Baldassi, S., 2014. The assessment of
subjective visual vertical: comparison of two psychophysical paradigms and age-related
performance. Atten. Percept. Psychophys. 76 (1), 112–122.
Barnett-Cowan, M., Harris, L.R., 2008. Perceived self-orientation in allocentric and egocen-
tric space: effects of visual and physical tilt on saccadic and tactile measures. Brain Res.
1242, 231–243.
Benson, A.J., 1978. Spatial disorientation—general aspects. In: Ernsting, J., Nicholson, A.N.,
Rainford, D.J. (Eds.), Aviation Medicine. Butterworth Heinemann, Oxford, England, UK,
pp. 2772–2796.
Bermudez Rey, M.C., Clark, T.K., Merfeld, D.M., 2017. Balance screening of vestibular func-
tion in subjects aged 4 years and older: a living laboratory experience. Front. Neurol.
8, 631.
Berthoz, A., Pavard, B., Young, L.R., 1975. Perception of linear horizontal self-motion
induced by peripheral vision (linearvection)—basic characteristics and visual-vestibular
interactions. Exp. Brain Res. 23 (5), 471–489.
Best, P.J., White, A.M., Minai, A., 2001. Spatial processing in the brain: the activity of
hippocampal place cells. Annu. Rev. Neurosci. 24, 459–486.
Bilien, V., 1993. Modeling Human Spatial Orientation Perception in a Centrifuge using
Estimation Theory. S.M. Thesis, Massachusetts Institute of Technology.
Borah, J., Young, L.R., Curry, R.E., 1978. Sensory Mechanism Modeling. AFHRL-TR-78-83,
Air Force Human Resources Laboratory, Air Force Systems Command.
Borah, J., Young, L.R., Curry, R.E., 1979. Optimal estimator model for human spatial
orientation. In: Joint Automatic Control Conference. IEEE. vol. 16, pp. 800–805.
Borah, J., Young, L.R., Curry, R.E., 1988. Optimal estimator model for human spatial
orientation. Ann. N. Y. Acad. Sci. 545, 51–73.
Bortolami, S.B., Rocca, S., Daros, S., Dizio, P., Lackner, J.R., 2006. Mechanisms of human
static spatial orientation. Exp. Brain Res. 173 (3), 374–388.

Bos, J.E., Bles, W., 2002. Theoretical considerations on canal-otolith interaction and an
observer model. Biol. Cybern. 86 (3), 191–207.
Brooks, J.X., Cullen, K.E., 2009. Multimodal integration in rostral fastigial nucleus provides
an estimate of body movement. J. Neurosci. 29 (34), 10499–10511.
Brooks, J.X., Cullen, K.E., 2013. The primate cerebellum selectively encodes unexpected
self-motion. Curr. Biol. 23 (11), 947–955.
Brooks, J.X., Carriot, J., Cullen, K.E., 2015. Learning to expect the unexpected: rapid updating
in primate cerebellum during voluntary self-motion. Nat. Neurosci. 18 (9), 1310–1317.
Bussolari, S.R., Young, L.R., Lee, A.T., 1989. The use of vestibular models for design
and evaluation of flight simulator motion. In: AGARD Conference Proceedings,
433:91-1/9-11.
Butler, J.S., Smith, S.T., Campos, J.L., Bulthoff, H.H., 2010. Bayesian integration of visual
and vestibular signals for heading. J. Vis. 10 (11), 1–13.
Calton, J.L., Taube, J.S., 2005. Degradation of head direction cell activity during inverted
locomotion. J. Neurosci. 25 (9), 2420–2428.
Clark, T.K., 2019. Effects of spaceflight on the vestibular system. In: Pathak, Y., Araujo dos
Santos, M., Zea, L. (Eds.), Handbook of Space Pharmaceuticals. Springer, Cham,
pp. 1–39.
Clark, T.K., Young, L.R., 2017. A case study of human roll tilt perception in hypogravity.
Aerosp. Med. Hum. Perform. 88 (7), 682–687.
Clark, T.K., Stimpson, A.J., Young, L.R., Oman, C.M., Duda, K.R., 2010. Analysis of
human spatial perception during lunar landing. In: IEEE Aerospace Conference. Big
Sky, MT, pp. 1–13.
Clark, T.K., Young, L.R., Stimpson, A.J., Duda, K.R., Oman, C.M., 2011. Numerical simu-
lation of human orientation perception during lunar landing. Acta Astronaut. 69 (7–8),
420–428.
Clark, T.K., Young, L.R., Duda, K.R., Oman, C.M., 2012. Simulation of astronaut perception
of vehicle orientation during planetary landing trajectories. In: IEEE Aerospace
Conference. Big Sky, MT, pp. 1–12.
Clark, T.K., Stimpson, A.J., Young, L.R., Oman, C.M., Natapoff, A., Duda, K.R., 2014.
Human spatial orientation perceptions during simulated lunar landing motions. AIAA J.
Spacecr. Rocket. 51 (1), 267–280.
Clark, T.K., Newman, M.C., Merfeld, D.M., Oman, C.M., Young, L.R., 2015a. Human manual
control performance in hyper-gravity. Exp. Brain Res. 233, 1409–1420.
Clark, T.K., Newman, M.C., Oman, C.M., Merfeld, D.M., Young, L.R., 2015b. Human
perceptual overestimation of whole-body roll tilt in hyper-gravity. J. Neurophysiol.
113 (7), 2062–2077.
Clark, T.K., Newman, M.C., Oman, C.M., Merfeld, D.M., Young, L.R., 2015c. Modeling
human perception of orientation in altered gravity. Front. Syst. Neurosci. 9, 1–13.
Clement, G., Wood, S.J., 2014. Rocking or rolling—perception of ambiguous motion after
returning from space. PLoS One 9 (10), 1–8.
Cohen, M.M., 1973. Elevator illusion—influences of otolith organ activity and neck
proprioception. Percept. Psychophys. 14 (3), 401–406.
Cohen, M.M., Crosbie, R.J., Blackburn, L.H., 1973. Disorienting effects of aircraft catapult
launchings. Aerosp. Med. 44 (1), 37–39.
Cohen, B., Henn, V., Raphan, T., Dennett, D., 1981. Velocity storage, nystagmus, and
visual-vestibular interactions in humans. Ann. N. Y. Acad. Sci. 374 (Nov), 421–433.
Correia, M.J., Hixson, W.C., Niven, J.I., 1968. On predictive equations for subjective
judgments of vertical and horizon in a force field. Acta Otolaryngol. Suppl. 230, 1–20.

Corvera, J., Hallpike, C.S., Schuster, E.H.J., 1958. A new method for the anatomical recon-
struction of the human macular planes. Acta Otolaryngol. 49 (1), 4–16.
Cullen, K.E., 2011. The neural encoding of self-motion. Curr. Opin. Neurobiol. 21 (4),
587–595.
Curthoys, I.S., Betts, G.A., Burgess, A.M., MacDougall, H.G., Cartwright, A.D.,
Halmagyi, G.M., 1999. The planes of the utricular and saccular maculae of the guinea
pig. Ann. N. Y. Acad. Sci. 871, 27–34.
Dai, M.J., Curthoys, I.S., Halmagyi, G.M., 1989. A model of otolith stimulation. Biol. Cybern.
60 (3), 185–194.
Darlot, C., 1993. The cerebellum as a predictor of neural messages. I. The stable estimator
hypothesis. Neuroscience 56 (3), 617–646.
de Winkel, K.N., Clement, G., Groen, E.L., Werkhoven, P.J., 2012. The perception of verti-
cality in lunar and Martian gravity conditions. Neurosci. Lett. 529 (1), 7–11.
Dichgans, J., Brandt, T., 1973. Optokinetic motion sickness and pseudo-Coriolis effects in-
duced by moving visual-stimuli. Acta Otolaryngol. 76 (5), 339–348.
Dixon, J.B., Etgen, C., Clark, T.K., Folga, R., 2019. Optimizing the Kraken: integration of a
vestibular model and state estimator for disorientation research device (DRD) motion al-
gorithm application. Aerosp. Med. Hum. Perform. (under review).
Droulez, J., Darlot, C., 1989. The geometric and dynamic implications of the coherence con-
straints in three-dimensional sensorimotor interactions. In: Jeannerod, M. (Ed.), Attention
and Performance. Erlbaum, New York, pp. 495–526.
Ernst, M.O., Banks, M.S., 2002. Humans integrate visual and haptic information in a statis-
tically optimal fashion. Nature 415 (6870), 429–433.
Fernandez, C., Goldberg, J.M., 1971. Physiology of peripheral neurons innervating semicir-
cular canals of squirrel monkey. II. Response to sinusoidal stimulation and dynamics of
peripheral vestibular system. J. Neurophysiol. 34 (4), 661–675.
Fernandez, C., Goldberg, J.M., 1976a. Physiology of peripheral neurons innervating otolith
organs of squirrel-monkey. II. Directional selectivity and force-response relations.
J. Neurophysiol. 39 (5), 985–995.
Fernandez, C., Goldberg, J.M., 1976b. Physiology of peripheral neurons innervating otolith
organs of the squirrel monkey. I. Response to static tilts and to long-duration centrifuga-
tion. J. Neurophysiol. 39 (5), 970–984.
Fernandez, C., Goldberg, J.M., 1976c. Physiology of peripheral neurons innervating
otolith organs of the squirrel monkey. III. Response dynamics. J. Neurophysiol.
39, 996–1008.
Fetsch, C.R., Turner, A.H., DeAngelis, G.C., Angelaki, D.E., 2009. Dynamic reweighting of
visual and vestibular cues during self-motion perception. J. Neurosci. 29 (49), 15601–15612.
Fetsch, C.R., Pouget, A., DeAngelis, G.C., Angelaki, D.E., 2012. Neural correlates of
reliability-based cue weighting during multisensory integration. Nat. Neurosci. 15 (1),
146–154.
Gallimore, J.J., Brannon, N.G., Patterson, F.R., Nalepka, J.P., 1999. Effects of FOV and air-
craft bank on pilot head movement and reversal errors during simulated flight. Aviat.
Space Environ. Med. 70 (12), 1152–1160.
Gallimore, J.J., Patterson, F.R., Brannon, N.G., Nalepka, J.P., 2000. The opto-kinetic cervical
reflex during formation flight. Aviat. Space Environ. Med. 71 (8), 812–821.
Galvan-Garza, R.C., Clark, T.K., Sherwood, D., Diaz-Artiles, A., Rosenberg, M.,
Natapoff, A., Karmali, F., Oman, C.M., Young, L.R., 2018. Human perception of whole
body roll-tilt orientation in a hypogravity analog: underestimation and adaptation.
J. Neurophysiol. 120, 3110–3121.

Goldberg, J.M., Fernandez, C., 1971. Physiology of peripheral neurons innervating semicir-
cular canals of squirrel monkey. I. Resting discharge and response to constant angular ac-
celerations. J. Neurophysiol. 34 (4), 635–660.
Green, A.M., Angelaki, D.E., 2004. An integrative neural network for detecting inertial motion
and head orientation. J. Neurophysiol. 92 (2), 905–925.
Groen, E.L., Smaili, M.H., Hosman, R.J.A.W., 2007. Perception model analysis of flight sim-
ulator motion for a decrab maneuver. J. Aircr. 44 (2), 427–435.
Grossman, G.E., Leigh, R.J., Abel, L.A., Lanska, D.J., Thurston, S.E., 1988. Frequency and
velocity of rotational head perturbations during locomotion. Exp. Brain Res. 70 (3),
470–476.
Gu, Y., Watkins, P.V., Angelaki, D.E., DeAngelis, G.C., 2006. Visual and nonvisual contri-
butions to three-dimensional heading selectivity in the medial superior temporal area.
J. Neurosci. 26 (1), 73–85.
Gu, Y., Angelaki, D.E., DeAngelis, G.C., 2008. Neural correlates of multisensory cue integra-
tion in macaque MSTd. Nat. Neurosci. 11 (10), 1201–1210.
Guedry, F.E., 1974. Psychophysics of vestibular sensation. In: Handbook of Sensory Physiol-
ogy. Springer-Verlag, Berlin, pp. 3–154.
Guedry Jr., F.E., Benson, A.J., 1978. Coriolis cross-coupling effects: disorienting and nauseo-
genic or not? Aviat. Space Environ. Med. 49 (1 Pt. 1), 29–35.
Guedry, F.E., Rupert, A.H., 1991. Steady-state and transient G-excess effects. Aviat. Space
Environ. Med. 62 (3), 252–253.
Hafting, T., Fyhn, M., Molden, S., Moser, M.B., Moser, E.I., 2005. Microstructure of a spatial
map in the entorhinal cortex. Nature 436 (7052), 801–806.
Haslwanter, T., Jaeger, R., Mayr, S., Fetter, M., 2000. Three-dimensional eye-movement re-
sponses to off-vertical axis rotations in humans. Exp. Brain Res. 134 (1), 96–106.
Held, R., Freedman, S.J., 1963. Plasticity in human sensorimotor control. Science
142 (359), 455.
Held, R., Hein, A.V., 1958. Adaptation of disarranged hand-eye coordination contingent upon
re-afferent stimulation. Percept. Mot. Skills 8, 87–90.
Israel, I., Grasso, R., GeorgesFrancois, P., Tsuzuku, T., Berthoz, A., 1997. Spatial memory and
path integration studied by self-driven passive linear displacement. 1. Basic properties.
J. Neurophysiol. 77 (6), 3180–3192.
Jurgens, R., Becker, W., 2006. Perception of angular displacement without landmarks: evi-
dence for Bayesian fusion of vestibular, optokinetic, podokinesthetic, and cognitive infor-
mation. Exp. Brain Res. 174 (3), 528–543.
Kalman, R.E., 1960. A new approach to linear filtering and prediction problems. J. Basic Eng.
82D, 35–45.
Kalman, R.E., Bucy, R.S., 1961. New results in linear filtering and prediction problems.
J. Basic Eng. 83D, 95–108.
Karmali, F., 2019. The velocity storage time constant: balances between accuracy and
precision. Prog. Brain Res. 248, 269–276.
Karmali, F., Merfeld, D.M., 2012. A distributed, dynamic, parallel computational model: the
role of noise in velocity storage. J. Neurophysiol. 108 (2), 390–405.
Karmali, F., Lim, K., Merfeld, D.M., 2014. Visual and vestibular perceptual thresholds each
demonstrate better precision at specific frequencies and also exhibit optimal integration.
J. Neurophysiol. 111, 2393–2403.
Karmali, F., Bermudez Rey, M.C., Clark, T.K., Wang, W., Merfeld, D.M., 2017. Multivariate
analyses of balance test performance, vestibular thresholds, and age. Front. Neurol.
8, 578.
86 CHAPTER 5 Mathematical models for orientation perception

Karmali, F., Whitman, G.T., Lewis, R.F., 2018. Bayesian optimal adaptation explains age-
related human sensorimotor changes. J. Neurophysiol. 119, 509–520.
Knierim, J.J., McNaughton, B.L., Poe, G.R., 2000. Three-dimensional spatial selectivity of
hippocampal neurons during space flight. Nat. Neurosci. 3 (3), 209–210.
Kwakernaak, H., Sivan, R., 1972. Linear Optimal Control Systems. Wiley Interscience.
Laurens, J., Angelaki, D.E., 2016. How the vestibulocerebellum builds an internal model of
self-motion. In: Neuronal Codes of the Cerebellum. Elsevier Inc., pp. 97–115.
Laurens, J., Angelaki, D.E., 2017. A unified internal model theory to resolve the paradox of
active versus passive self-motion sensation. Elife 6, e28074.
Laurens, J., Droulez, J., 2007. Bayesian processing of vestibular information. Biol. Cybern.
96 (4), 389–404.
Lewis, R.F., Priesol, A.J., Nicoucar, K., Lim, K., Merfeld, D.M., 2011. Abnormal motion per-
ception in vestibular migraine. Laryngoscope 121 (5), 1124–1125.
Lim, K., Karmali, F., Nicoucar, K., Merfeld, D.M., 2017. Perception precision of passive body
tilt is consistent with statistically optimal cue integration. J. Neurophysiol. 117 (5),
2037–2052.
Luenberger, D.G., 1971. An introduction to observers. IEEE Trans. Autom. Control 16, 596–602.
Mach, E., 1875. Grundlinien der Lehre von den Bewegungsempfindungen. Wilhelm
Engelmann, Leipzig.
MacNeilage, P.R., Banks, M.S., Berger, D.R., Bulthoff, H.H., 2007. A Bayesian model of the
disambiguation of gravitoinertial force by visual cues. Exp. Brain Res. 179 (2), 263–290.
MacNeilage, P.R., Ganesan, N., Angelaki, D.E., 2008. Computational approaches to spatial
orientation: from transfer functions to dynamic Bayesian inference. J. Neurophysiol.
100 (6), 2981–2996.
Malcolm, R., Jones, G.M., 1974. Erroneous perception of vertical motion by humans seated in
upright position. Acta Otolaryngol. 77 (4), 274–283.
Maybeck, P.S., 1982. Stochastic Models, Estimation, and Control. Academic Press.
Mayne, R., 1974. A systems concept of the vestibular organs. In: Kornhuber, H.H. (Ed.),
Vestibular System. Part 2—Psychophysics, Applied Aspects, and General Interpretations.
vol. 2. Springer-Verlag, Berlin, pp. 493–580.
McGrath, B.J., Mortimer, B., French, J., Brakunov, S., 2016. Mathematical Multi-Sensory
Model of Spatial Orientation. AIAA SciTech, San Diego, CA.
Merfeld, D.M., 2017. Vestibular sensation. In: Sensation and Perception. Sinauer Associates,
Inc., Sunderland, MA.
Merfeld, D.M., Zupan, L.H., 2002. Neural processing of gravitoinertial cues in humans. III.
Modeling tilt and translation responses. J. Neurophysiol. 87 (2), 819–833.
Merfeld, D.M., Young, L.R., Oman, C.M., Shelhamer, M.J., 1993a. A multidimensional
model of the effect of gravity on the spatial orientation of the monkey. J. Vestib. Res.
3 (2), 141–161.
Merfeld, D.M., Young, L.R., Paige, G.D., Tomko, D.L., 1993b. Three dimensional eye move-
ments of squirrel monkeys following postrotatory tilt. J. Vestib. Res. 3 (2), 123–139.
Merfeld, D.M., Zupan, L., Peterka, R.J., 1999. Humans use internal models to estimate gravity
and linear acceleration. Nature 398 (6728), 615–618.
Merfeld, D.M., Zupan, L.H., Gifford, C.A., 2001. Neural processing of gravito-inertial cues in
humans. II. Influence of the semicircular canals during eccentric rotation. J. Neurophysiol.
85 (4), 1648–1660.
Merfeld, D.M., Park, S., Gianna-Poulin, C., Black, F.O., Wood, S., 2005a. Vestibular percep-
tion and action employ qualitatively different mechanisms. I. Frequency response of VOR
and perceptual responses during translation and tilt. J. Neurophysiol. 94 (1), 186–198.
References 87

Merfeld, D.M., Park, S., Poulin, C.G., Black, F.O., Wood, S., 2005b. Vestibular perception and
action employ qualitatively different mechanisms. II. VOR and perceptual responses dur-
ing combined Tilt&Translation. J. Neurophysiol. 94 (1), 199–205.
Merfeld, D.M., Priesol, A., Lee, D., Lewis, R.F., 2010. Potential solutions to several vestibular
challenges facing clinicians. J. Vestib. Res. 20 (1–2), 71–77.
Mergner, T., Nardi, G.L., Becker, W., Deecke, L., 1983. The role of canal-neck interaction for
the perception of horizontal trunk and head rotation. Exp. Brain Res. 49 (2), 198–208.
Mergner, T., Siebold, C., Schweigart, G., Becker, W., 1991. Human perception of horizontal
trunk and head rotation in space during vestibular and neck stimulation. Exp. Brain Res.
85 (2), 389–404.
Mergner, T., Huber, W., Becker, W., 1997. Vestibular-neck interaction and transformation of
sensory coordinates. J. Vestib. Res. 7 (4), 347–367.
Miller, E.F., Graybiel, A., 1966. Magnitude of gravitoinertial force, an independent variable in
egocentric visual localization of the horizontal. J. Exp. Psychol. 71 (3), 452–460.
Mittelstaedt, H., 1983. A new solution to the problem of the subjective vertical.
Naturwissenschaften 70 (6), 272–281.
Muller, G.E., 1916. Uber das Aubertsche Phanomen. Z. Sinnesphysiol. 49, 109–246.
Neubauer, J.C., 2000. Classifying spatial disorientation mishaps using different definitions—
analysis of five years of USAF class A mishaps. IEEE Eng. Med. Biol. Mag. 19 (2), 28–34.
Newman, M.C., 2009. A Multisensory Observer Model for Human Spatial Orientation Percep-
tion. S.M. thesis, Massachusetts Institute of Technology.
Newman, M.C., Oman, C.M., Clark, T.K., Mateus, J., Kaderka, J.D., 2011. Pseudo-Coriolis
effect: a 3D angular velocity phenomenon described by a left-hand rule. In: Eighth
Symposium on the Role of the Vestibular Organs in Space Exploration, Houston, TX.
Journal of Vestibular Research Special Issue.
Newman, M.C., Lawson, B., Rupert, A.H., McGrath, B.J., 2012. The role of perceptual model-
ing in the understanding of spatial disorientation during flight and ground-based simulator
training. In: AIAA Modeling and Simulation Technologies Conference.
Newman, M.C., Lawson, B.D., McGrath, B.J., Rupert, A.H., 2014. Perceptual modeling as a
tool to prevent aircraft upset associated with spatial disorientation. In: AIAA Guidance,
Navigation, and Control Conference. National Harbor, MD.
Nooij, S.A.E., Bos, J.E., Groen, E.L., 2008. Velocity storage activity is affected after sustained
centrifugation: a relationship with spatial disorientation. Exp. Brain Res. 190 (2), 165–177.
Nooij, S.A.E., Nesti, A., Bulthoff, H.H., Pretto, P., 2016. Perception of rotation, path, and
heading in circular trajectories. Exp. Brain Res. 234, 2323–2337.
Oman, C.M., 1982. A heuristic mathematical-model for the dynamics of sensory conflict and
motion sickness. Acta Otolaryngol. Suppl. 392, 3–44.
Oman, C.M., 1990. Motion sickness—a synthesis and evaluation of the sensory conflict the-
ory. Can. J. Physiol. Pharmacol. 68 (2), 294–303.
Oman, C.M., 1991. Sensory conflict in motion sickness: an observer theory approach. In:
Ellis, S.R., Kaiser, M.K., Grunwald, A. (Eds.), Pictorial Communication in Virtual and
Real Environments. Taylor & Francis, London, pp. 362–367.
Oman, C.M., 2007. Spatial orientation and navigation in microgravity. In: Mast, F.W.,
Janeke, L. (Eds.), Spatial Processing in Navigation, Imagery and Perception. Springer
Verlag, New York, pp. 208–248.
Oman, C.M., Cullen, K.E., 2014. Brainstem processing of vestibular sensory exafference: im-
plications for motion sickness etiology. Exp. Brain Res. 232, 2483–2492.
Ormsby, C.C., Young, L.R., 1976. Perception of static orientation in a constant gravito-inertial
environment. Aviat. Space Environ. Med. 47 (2), 159–164.
Ormsby, C.C., Young, L.R., 1977. Integration of semicircular canal and otolith information for
multi-sensory orientation stimuli. Math. Biosci. 34 (1–2), 1–21.
Peterka, R.J., 2009. Comparison of human and humanoid robot control of upright stance.
J. Physiol. Paris 103 (3–5), 149–158.
Pommellet, P.E., 1990. Suboptimal Estimator for the Spatial Orientation of a Pilot. S.M. thesis,
Massachusetts Institute of Technology.
Previc, F.H., Ercoline, W.R., 2004. Spatial Disorientation in Aviation. American Institute of
Aeronautics and Astronautics, Reston, Virginia.
Rader, A.A., Oman, C.M., Merfeld, D.M., 2009. Motion perception during variable-radius
swing motion in darkness. J. Neurophysiol. 102 (4), 2232–2244.
Raphan, T., Cohen, B., 2002. The vestibulo-ocular reflex in three dimensions. Exp. Brain Res.
145 (1), 1–27.
Raphan, T., Matsuo, V., Cohen, B., 1977. A velocity storage mechanism responsible for
optokinetic nystagmus (OKN), optokinetic after-nystagmus (OKAN) and vestibular
nystagmus. In: Baker, R., Berthoz, A. (Eds.), Control of Gaze by Brain Stem Neurons. Elsevier/
North-Holland Biomedical Press, Amsterdam, pp. 37–47.
Raphan, T., Matsuo, V., Cohen, B., 1979. Velocity storage in the vestibulo-ocular reflex arc
(VOR). Exp. Brain Res. 35 (2), 229–248.
Reason, J.T., 1978. Motion sickness adaptation—neural mismatch model. J. R. Soc. Med.
71 (11), 819–829.
Robinson, D.A., 1977. Vestibular and optokinetic symbiosis: an example of explaining by
modeling. In: Baker, R., Berthoz, A. (Eds.), Control of Gaze by Brain Stem Neurons.
Elsevier/North-Holland Biomedical Press, Amsterdam, pp. 49–58.
Robinson, D.A., 1981. The use of control-systems analysis in the neurophysiology of eye-
movements. Annu. Rev. Neurosci. 4, 463–503.
Rosenberg, M.J., Galvan-Garza, R.C., Clark, T.K., Sherwood, D.P., Young, L.R., Karmali, F.,
2018. Human manual control precision depends on vestibular sensory precision and grav-
itational magnitude. J. Neurophysiol. 120 (6), 3187–3197. https://doi.org/10.1152/
jn.00565.2018.
Roy, J.E., Cullen, K.E., 2001. Selective processing of vestibular reafference during self-
generated head motion. J. Neurosci. 21 (6), 2131–2142.
Roy, J.E., Cullen, K.E., 2004. Dissociating self-generated from passively applied head motion:
neural mechanisms in the vestibular nuclei. J. Neurosci. 24 (9), 2102–2111.
Rupert, A.H., 2000. Tactile situation awareness system: proprioceptive prostheses for sensory
deficiencies. Aviat. Space Environ. Med. 71 (9), A92–A99.
Rupert, A.H., Brill, J.C., Woo, G., Lawson, B., 2016. Countermeasures for loss of situation
awareness: spatial orientation modeling to reduce mishaps. In: Aerospace IEEE
Conference.
Sadeghi, S.G., Chacron, M.J., Taylor, M.C., Cullen, K.E., 2007. Neural variability, detection
thresholds, and information transmission in the vestibular system. J. Neurosci. 27 (4),
771–781.
Sadeghpour, S., Zee, D.S., Leigh, R.J., 2019. Clinical applications of control systems models:
the neural integrators for eye movements. Prog. Brain Res. 248, 103–114.
Schone, H., 1964. On the role of gravity in human spatial orientation. Aerosp. Med.
35, 764–772.
Seidman, S.H., Telford, L., Paige, G.D., 1998. Tilt perception during dynamic linear acceler-
ation. Exp. Brain Res. 119 (3), 307–314.
Selva, P., 2009. Modeling of the Vestibular System and Nonlinear Models for Human Spatial
Orientation Perception. Ph.D. thesis, L'Universite de Toulouse.
Selva, P., Oman, C.M., 2012. Relationships between observer and Kalman filter models for
human dynamic spatial orientation. J. Vestib. Res. 22 (2–3), 69–80.
Sivan, R., Ish-Shalom, J., Huang, J.K., 1982. An optimal control approach to the design of
moving flight simulators. IEEE Trans. Syst. Man Cybern. 12 (6), 818–827.
Small, R.L., Keller, J.W., Wickens, C.D., Socash, C.M., Ronan, A.M., Fischer, A.M., 2006.
Multisensory Integration for Pilot Spatial Orientation. Micro Analysis and Design,
Boulder, CO.
Sperry, R.W., 1950. Neural basis of the spontaneous optokinetic response produced by visual
inversion. J. Comp. Physiol. Psychol. 43 (6), 482–489.
Tin, C., Poon, C.S., 2005. Internal models in sensorimotor integration: perspectives from adap-
tive control theory. J. Neural Eng. 2 (3), S147–S163.
Ting, L.H., 2007. Dimensional reduction in sensorimotor systems: a framework for under-
standing muscle coordination of posture. In: Computational Neuroscience: Theoretical
Insights Into Brain Function. vol. 165. Elsevier, pp. 299–321.
Tokumaru, O., Kaida, K., Ashida, H., Mizumoto, C., Tatsuno, J., 1998. Visual influence on the
magnitude of somatogravic illusion evoked on advanced spatial disorientation demonstra-
tor. Aviat. Space Environ. Med. 69 (2), 111–116.
Tribukait, A., Gronkvist, M., Eiken, O., 2011. The perception of roll tilt in pilots during a sim-
ulated coordinated turn in a gondola centrifuge. Aviat. Space Environ. Med. 82 (5),
523–530.
Tribukait, A., Strom, A., Bergsten, E., Eiken, O., 2016. Vestibular stimulus and perceived roll
tilt during coordinated turns in aircraft and gondola centrifuge. Aerosp. Med. Hum. Per-
form. 87 (5), 454–463.
Valko, Y., Lewis, R.F., Priesol, A.J., Merfeld, D.M., 2012. Vestibular labyrinth contributions
to human whole-body motion discrimination. J. Neurosci. 32 (39), 13537–13542.
Vincent, G.R., Gruber, J., Newman, M.C., Clark, T.K., 2018. Analysis of artificial gravity par-
adigms using a mathematical model of spatial orientation. Acta Astronaut. 152, 602–610.
Vingerhoets, R.A.A., Medendorp, W.P., Van Gisbergen, J.A.M., 2006. Time course and mag-
nitude of illusory translation perception during off-vertical axis rotation. J. Neurophysiol.
95 (3), 1571–1587.
Vingerhoets, R.A.A., Van Gisbergen, J.A.M., Medendorp, W.P., 2007. Verticality perception
during off-vertical axis rotation. J. Neurophysiol. 97 (5), 3256–3268.
Vingerhoets, R.A.A., Medendorp, W.P., Van Gisbergen, J.A.M., 2008. Body-tilt and visual
verticality perception during multiple cycles of roll rotation. J. Neurophysiol. 99 (5),
2264–2280.
Von Holst, E., 1954. Relations between the central nervous system and the peripheral organs.
Br. J. Anim. Behav. 2, 89–94.
Von Holst, E., Mittelstaedt, H., 1950. Das Reafferenzprinzip. Naturwissenschaften
37, 464–476.
Waespe, W., Henn, V., 1977. Neuronal-activity in vestibular nuclei of alert monkey during
vestibular and optokinetic stimulation. Exp. Brain Res. 27 (5), 523–538.
Walsh, E.G., 1964. The perception of rhythmically repeated linear motion in the vertical plane.
Q. J. Exp. Physiol. Cogn. Med. Sci. 49, 58–65.
Wertheim, A.H., Mesland, B.S., Bles, W., 2001. Cognitive suppression of tilt sensations dur-
ing linear horizontal self-motion in the dark. Perception 30 (6), 733–741.
Wolpert, D.M., Miall, R.C., Kawato, M., 1998. Internal models in the cerebellum. Trends
Cogn. Sci. 2 (9), 338–347.
Wright, W.G., Tierney, R.T., McDevitt, J., 2017. Visual-vestibular processing deficits in mild
traumatic brain injury. J. Vestib. Res. 27 (1), 27–37.
Young, L.R., 1970. On visual-vestibular interaction. In: Fifth Symposium on the Role of the
Vestibular Organs in Space Exploration, Houston, TX.
Young, L.R., 2011. Optimal estimator models for spatial orientation and vestibular nystagmus.
Exp. Brain Res. 210 (3–4), 465–476.
Young, L.R., Meiry, J.L., Li, Y.T., 1966. Control engineering approaches to human dynamic
space orientation. In: Second Symposium on the Role of the Vestibular Organs in Space
Exploration, Moffett Field, CA.
Young, L.R., Henn, V.S., Scherberger, H., 2001. Fundamentals of the Theory of Movement
Perception by Dr. Ernst Mach: Translated and Annotated. Kluwer Academic/Plenum
Publishers, New York.
Zacharias, G.L., Young, L.R., 1981. Influence of combined visual and vestibular cues on hu-
man perception and control of horizontal rotation. Exp. Brain Res. 41 (2), 159–171.
Zupan, L.H., Droulez, J., Darlot, C., Denise, P., Maruani, A., 1994. Modelization of the
vestibulo-ocular reflex (VOR) and motion sickness prediction. In: Proceedings of the In-
ternational Conference on Artificial Neural Networks, Sorrento, Italy.
Zupan, L.H., Peterka, R.J., Merfeld, D.M., 2000. Neural processing of gravito-inertial cues in
humans. I. Influence of the semicircular canals following post-rotatory tilt.
J. Neurophysiol. 84 (4), 2001–2015.
Zupan, L.H., Merfeld, D.M., Darlot, C., 2002. Using sensory weighting to model the influence
of canal, otolith and visual cues on spatial orientation and eye movements. Biol. Cybern.
86 (3), 209–230.
