Action perception as hypothesis testing
Research report
Article history: Received 30 May 2016; Reviewed 12 September 2016; Revised 21 November 2016; Accepted 18 January 2017; Published online 31 January 2017. Action editor: Laurel Buxbaum.

Keywords: Active inference; Action observation; Hypothesis testing; Active perception; Motor prediction.

Abstract: We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions, and the underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing.

© 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
* Corresponding author. Institute of Cognitive Sciences and Technologies, National Research Council, Via S. Martino della Battaglia, 44, 00185 Rome, Italy. E-mail address: [email protected] (G. Pezzulo).
http://dx.doi.org/10.1016/j.cortex.2017.01.016
In principle, the recognition of action goals might be implemented in perceptual and associative brain areas, similar to the way other events such as visual scenes are (believed to be) recognized, predicted and understood semantically. However, two decades of research on action perception and mirror neurons have shown that parts of the motor system deputed to specific actions are also selectively active during the observation of the same actions when others perform them. Based on this body of evidence, several researchers have proposed that the motor system might support, partially or totally, action understanding and other functions in social cognition (Kilner & Lemon, 2013; Rizzolatti & Craighero, 2004). Some theories propose an automatic mechanism of motor resonance, according to which the action goals of the performer are "mirrored" in the motor system of the perceiver, thus permitting an automatic understanding (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). Other theories highlight the importance of (motor) prediction and the covert reuse of our own motor repertoire and internal models in this process. For example, one influential proposal is that STS, premotor and parietal areas are arranged hierarchically (in a so-called predictive coding architectural scheme) and form an internal generative model that predicts action patterns (at the lowest hierarchical level) as well as understanding action goals (at the higher hierarchical level). These hierarchical processes interact continuously through reciprocal top-down and bottom-up exchanges between hierarchical levels, so that action understanding can be variously influenced by action dynamics as well as various forms of prior knowledge, such as the context in which the action occurs (Friston, Mattout, & Kilner, 2011; Kilner, Friston, & Frith, 2007). Numerous other theories point to the importance of different mechanisms besides mirroring and motor prediction, such as Hebbian plasticity or visual recognition (Fleischer, Caggiano, Thier, & Giese, 2013; Heyes, 2010; Keysers & Perrett, 2004); see Giese and Rizzolatti (2015) for a recent review. However, these theories implicitly or explicitly consider action observation as a rather passive task, disregarding its enactive aspects, such as the role of active information sampling and proactive eye movements.

In everyday activities involving goal-directed arm movements, perception is an active and not a passive task (Ahissar & Assa, 2016; Bajcsy, Aloimonos, & Tsotsos, 2016; O'Regan & Noe, 2001); and eye movements are proactive, foraging for information required in the near future. Indeed, eyes typically shift toward objects that will eventually be acted upon, while being rarely attracted to action-irrelevant objects (Land, 2006; Land, Mennie, & Rusted, 1999; Rothkopf, Ballard, & Hayhoe, 2007). A seminal study (Flanagan & Johansson, 2003) showed that when people observe object-related manual actions (e.g., block-stacking actions), the coordination between their gaze and the actor's hand is very similar to the gaze-hand coordination when they perform those actions themselves. In both cases, people proactively shift their gaze to the target sites, thus anticipating the outcome of the actions. These findings suggest that oculomotor plans that support action performance can be reused for action observation (Flanagan & Johansson, 2003) and might also support learning and causal understanding of these tasks (Gredebäck & Falck-Ytter, 2015; Sailer, Flanagan, & Johansson, 2005).

Here we describe and test a novel computational model of action understanding and accompanying eye movements. The model elaborates the predictive coding framework of action observation (Friston et al., 2011; Kilner et al., 2007) but significantly extends it by considering the specific role of active information sampling. The model incorporates two main hypotheses. First, while most studies implicitly describe action observation as a passive task, we cast it as an active, hypothesis-testing process that uses a generative model of how different actions are performed to generate hypothesis-specific predictions, and directs saccades to the most informative (i.e., salient) parts of the visual scene, in order to test these predictions and in turn disambiguate among the competing hypotheses (Friston, Adams, Perrinet, & Breakspear, 2012). Second, the generative model that drives oculomotor plans across action performance and observation is the same, which implies that the motor system drives predictive eye movements in ways that are coherent with the unfolding of goal-directed action plans (Costantini, Ambrosini, Cardellicchio, & Sinigaglia, 2014; Elsner, D'Ausilio, Gredebäck, Falck-Ytter, & Fadiga, 2013).

We tested our computational model against human data on eye movement dynamics during an action observation task (Ambrosini, Costantini, & Sinigaglia, 2011). In the action observation study, participants' eye movements were recorded while they viewed videos of an actor performing an unpredictable goal-directed hand movement toward one of two objects (targets) mandating two different kinds of grip (i.e., a small object requiring a precision grip or a big object requiring a power grasp). To counterbalance the hand trajectories and ensure hand position was not informative about the actor's goal, actions were recorded from the side using four different target layouts. Before the hand movement (lasting 1000 msec), the videos showed the actor's hand resting on a table (immediately in front of his torso) with a fixation cross superimposed on the hand (1000 msec). Participants were asked to fixate the cross and to simply watch the videos without further instructions. In half of the videos, the actor performed a reach-to-grasp action during which the preshaping of the hand (either a precision or a power grasp, depending on the target) was clearly visible as soon as the movement started (preshape condition), whereas in the remaining half, the actor merely reached for, and touched, one of the objects with a closed fist; that is, without preshaping his hand according to the target features (no-shape condition). Therefore, there were four movement types, corresponding to the four conditions of a two-factor design (pre-shape and target size); namely, no shape–big target, no shape–small target, pre-shape–big target and pre-shape–small target. The four conditions were presented in random order so that the actor's movement and goal could not be anticipated. The main result of this study was that participants' gaze proactively reached the target object significantly earlier when motor cues (i.e., the preshaping hand) were available. In what follows, we offer a formal explanation of this anticipatory visual foraging in terms of active inference.
Fig. 1 – Scheme of the computational model adopted in the study. (A) The system implicitly encodes a (probabilistic) model of which visual stimuli should be expected under the different perceptual hypotheses (e.g., if the action target is the big object, when doing a saccade to the next hand position I should see a power grasp) and uses the saccades to check whether these expectations are correct, and in turn to revise the probability of the two hypotheses. Details of the procedure can be found in the main text and in Friston et al. (2012). (B) The pulvinar saliency map receives as input the (expected) position of task-relevant variables (e.g., expected hand position, to-be-grasped objects), weighted by their saliency, which in turn depends on the probability of the two competing hypotheses. Neurophysiologically, we assume that a hierarchically organized "action observation" brain network computes both the expected hand position (at lower hierarchical levels) and the probability of the two competing hypotheses (at higher hierarchical levels). The inset shows a schematic of the functioning of the action observation network according to predictive coding (Kilner et al., 2007). Here, action observation depends on reciprocal message passing between areas that lie lower in the predictive coding hierarchy (STS) and higher areas (parietal and prefrontal). The functioning of the action observation network is abstracted here using a Bayesian model (Dindo et al., 2011); see the Methods section for details. (C) This panel represents graphically the two competing hypotheses considered here. Note that the hypotheses are not (only) about final states (hand on big vs small object) but also describe how the action will unfold in time: they correspond to sequences of (superimposed) images of hand trajectories (here we consider 6 time frames). As is evident in the figure, the hypothesis that one is reaching for a small (or big) object entails the hypothesis that the hand will be configured in a precision grip (or power grasp) during action execution, and this hypothesis can be tested during action observation.
states; and the tilde notation $\tilde{\mu}$ denotes variables in generalized coordinates of motion $(\mu, \mu', \mu'', \ldots)$ (Friston, 2008). In the generative model, causal states link hierarchical levels (i.e., the output of one level provides input to the next); hidden states encode dynamics over time; and hidden controls encode representations of actions that affect transitions between hidden states. It is these control states from which actions (e.g., saccades) are sampled.
At the first hierarchical layer of the architecture, sensory signals ($v^{(0)} := s$) are generated in two modalities: proprioception (p) and vision (q):

- Proprioceptive signals, encoded in $s_p \in \mathbb{R}^2$, represent the centre of gaze and have an associated (precision-weighted) prediction error $\xi_{v,p}$; i.e., the difference between conditional expectations and predicted values.
- Visual signals, encoded in an array $s_q \in \mathbb{R}^{256}$, sample a visual scene uniformly with a grid of 16 × 16 sensors, and have an associated (precision-weighted) prediction error $\xi_{v,q}$.

2.2. Hidden states

Hidden states include:

- Proprioceptive internal states, which encode an internal representation of the centre of oculomotor fixation. Their corresponding expectation (i.e., neuronal activity) is denoted as $\tilde{\mu}_{x,p} \in \mathbb{R}^2$ and their prediction error as $\xi_{x,p}$.
- Perceptual internal states, encoding the (logarithm of the) probability that each hypothesis is the cause of the visual input. Their corresponding variational mode (i.e., neuronal activity) is denoted as $\tilde{\mu}_{x,q} \in \mathbb{R}^N$ and their prediction error as $\xi_{x,q}$.

Hidden controls $\tilde{u} = \tilde{\eta}_u + \tilde{\omega}_u$ are modelled as 2D points $\tilde{\eta}_u$ plus a Gaussian noise perturbation $\tilde{\omega}_u$, and determine the location that attracts gaze. Their corresponding expectation is denoted as $\tilde{\mu}_u \in \mathbb{R}^2$ and their prediction error as $\xi_u$.

Action $a$ is modelled as a classical reflex arc suppressing proprioceptive prediction errors and producing saccadic movements by solving the following equation:

$$\dot{a} = -\frac{\partial \tilde{s}^{(1)}}{\partial a}\,\xi_v$$
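To fix ideas, the following minimal sketch (ours, not the authors' implementation; all names, array shapes and the Euler step are illustrative assumptions) collects the variables defined above and the reflex-arc update in one place:

```python
# A minimal illustrative sketch (ours, not the authors' code) of the variable
# partition described above. Generalized coordinates are truncated to
# (position, velocity, acceleration) for brevity; all names are ours.
from dataclasses import dataclass, field
import numpy as np

N_HYPOTHESES = 2   # reaching for the big vs the small object
GRID = 16          # the visual scene is sampled with a 16 x 16 sensor grid

@dataclass
class AgentState:
    s_p: np.ndarray = field(default_factory=lambda: np.zeros(2))            # proprioception: centre of gaze
    s_q: np.ndarray = field(default_factory=lambda: np.zeros(GRID * GRID))  # vision: 256 uniform samples
    mu_x_p: np.ndarray = field(default_factory=lambda: np.zeros((3, 2)))    # expected centre of oculomotor fixation
    mu_x_q: np.ndarray = field(default_factory=lambda: np.zeros((3, N_HYPOTHESES)))  # log-probability of each hypothesis
    mu_u: np.ndarray = field(default_factory=lambda: np.zeros(2))           # hidden control: 2D point that attracts gaze

def reflex_arc_step(a: np.ndarray, ds_da: np.ndarray, xi_v: np.ndarray,
                    dt: float = 0.012) -> np.ndarray:
    """One Euler step of the oculomotor reflex arc described above: action
    changes so as to suppress the (precision-weighted) proprioceptive
    prediction errors xi_v; ds_da is the sensitivity of sensations to action."""
    return a - dt * ds_da.T @ xi_v
```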
Defining $q(\tilde{x}, \tilde{v}, \tilde{u} \mid \tilde{\mu}_x(t+\tau), \tilde{\mu}_v(t+\tau), \tilde{\eta}_j)$ as the conditional density over hidden controls, parameterized by hidden states and causes in the future, salience $S$ can be defined as the negentropy (inverse uncertainty) of the conditional density $q$:

$$S(\tilde{\eta}_j) = -H\left[q\left(\tilde{x}, \tilde{v}, \tilde{u} \mid \tilde{\mu}_x(t+\tau), \tilde{\mu}_v(t+\tau), \tilde{\eta}_j\right)\right]$$

Thus, the system aims to find the (eye) control that maximizes salience; i.e.,

$$\tilde{\eta}_u = \operatorname*{argmax}_{\tilde{\eta}_j} S(\tilde{\eta}_j)$$

Or, more intuitively, sampling the most informative locations (given the current agent's belief state).
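In discrete form, this selection amounts to scoring a set of candidate fixation points and taking the best one. Here is a minimal sketch (ours, purely illustrative; `salience_of` stands in for the full evaluation of the negentropy $S(\tilde{\eta}_j)$ defined above):

```python
# Illustrative sketch (ours): choose the next fixation point by maximizing
# salience over a discrete set of candidate locations eta_j.
import numpy as np

def next_fixation(candidates: np.ndarray, salience_of) -> np.ndarray:
    """candidates: (K, 2) array of candidate fixation points.
    salience_of: callable evaluating S(eta_j), the negentropy of the
    predictive density conditioned on fixating eta_j.
    Returns the most informative candidate location."""
    scores = np.array([salience_of(eta) for eta in candidates])
    return candidates[int(np.argmax(scores))]
```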
2.3. Generative models

The computational scheme introduced so far is generic and implements active sampling of information in a variety of perceptual tasks (Friston et al., 2012). In this paper, we use it for an action observation task (Ambrosini et al., 2011), in which the agent (observer) has two hypotheses about the hidden causes of visual input. These hypotheses correspond to reaching for a big object (with a power grip) or reaching for a smaller object in a nearby location (with a precision grip). To test these competing hypotheses, the architecture needs to generate predictions about the current and future sensory outcomes (i.e., observed hand movements and configurations). These predictions are generated from a forward or generative model of reach-to-grasp actions, enabling one to accumulate evidence for different hypotheses, and to evaluate a salience map for the next saccade (see below). In keeping with embodied and motor cognition theories, we consider these generative models to be embodied in the so-called action observation brain network, a network of sensorimotor brain regions that may support action understanding via the simulation of one's own action (Dindo, Zambuto, & Pezzulo, 2011; Friston et al., 2011; Grafton, 2009; Kilner et al., 2007; Pezzulo, 2013) and that includes both cortical and subcortical structures (Bonini, 2016; Caligiore, Pezzulo, Miall, & Baldassarre, 2013); see also Fig. 1B.

For simplicity, we implemented four generative sub-models predicting the location and configuration of the hand (preshape) under the two hypotheses (reaching for a big or small object) separately. This allows the agent to accumulate sensory evidence in two modalities (hand position and configuration) for each of the two hypotheses. Furthermore, these sub-models provided predictions of hand position and configuration in the future, under the two hypotheses in question.

These four probabilistic sub-models were learned on the basis of hand movement data collected from six adult male participants. Each participant executed 50 precision grip movements directed to a small object (the small ball) and 50 power grasp movements directed to a big object (the big ball), and data on finger and wrist angles were collected using a dataglove (HumanGlove, Humanware S.r.l., Pontedera, Pisa, Italy) endowed with 16 sensors (3 angles for each finger and 1 angle for the wrist). The four sub-models used in the simulations were obtained by regressing the aforementioned data (300 trials for each sub-model), to obtain probability distributions over the angles of the fingers and wrist, over time. To regress each sub-model, we used a separate Echo State Gaussian Process (ESGP) (Chatzis & Demiris, 2011): an algorithm that produced a predictive distribution over trajectories of angles, under a particular sub-model; see Fig. 2A. The ESGP sub-models were trained off-line to predict the content of the next frame of the videos used in the experiments (6 frames) and to map the angles of the fingers and wrist to the visual appearance (preshape) and position in space of the hand, respectively.

After the off-line learning phase, the four forward sub-models generate a probabilistic prediction of the next hand preshape and position based on all previous sensory images. This enables the probability of the two competing hypotheses to be evaluated, using the method described in Dindo et al. (2011).

More formally, the first two sub-models encode the trajectories traced by subjects' hands during the trials, thus predicting the probability of the hand position in the image (as Gaussians) under the hypothesis of grasping a small object (SMALL):

$$p_{SMALL}(hPos(t)) = p(hPos(t) \mid hPos(t-1), G = SMALL)$$

and grasping a big object (BIG):

$$p_{BIG}(hPos(t)) = p(hPos(t) \mid hPos(t-1), G = BIG)$$

respectively.

Analogously, the second two sub-models encode the probability of the hand configuration (preshape) in the image under the hypothesis of grasping a small object (SMALL):

$$p_{SMALL}(hShape(t)) = p(hShape(t) \mid hShape(t-1), G = SMALL)$$

and grasping a big object (BIG):
$$p_{BIG}(hShape(t)) = p(hShape(t) \mid hShape(t-1), G = BIG)$$

respectively.

Similarly, we encode the positions of the two objects, small: $p_{SMALL}(gPos(t))$, and big: $p_{BIG}(gPos(t))$, respectively.
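To illustrate how these sub-model densities enter hypothesis evaluation, here is a rough sketch of sequential Bayesian evidence accumulation in the spirit of Dindo et al. (2011); this is our own simplification, not the authors' exact implementation, and `p_pos`/`p_shape` stand in for the learned ESGP densities above:

```python
# Illustrative sketch (ours): accumulate evidence for the SMALL vs BIG
# hypotheses by combining the position and preshape sub-model likelihoods.
def update_hypotheses(prior, obs_t, obs_prev, p_pos, p_shape):
    """prior: {'SMALL': p, 'BIG': 1 - p}.
    obs_t / obs_prev: dicts with 'hPos' and 'hShape' for the current and
    previous video frames. p_pos[G] and p_shape[G] are callables returning
    the likelihood of the current observation under hypothesis G.
    Returns the normalized posterior after one frame."""
    posterior = {}
    for G in ('SMALL', 'BIG'):
        likelihood = (p_pos[G](obs_t['hPos'], obs_prev['hPos'])
                      * p_shape[G](obs_t['hShape'], obs_prev['hShape']))
        posterior[G] = prior[G] * likelihood
    Z = sum(posterior.values())
    return {G: v / Z for G, v in posterior.items()}
```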
For the object salience map (Fig. 2B), we used a Bayesian model average of Gaussian windows centred on the object (which is fixed), weighted by the probability of reaching the big or small object and the relative hand-object distance. This captures the fact that the identity of the target object resolves more uncertainty about the intended movement when the hand is closer; i.e., approaching the object. Finally, the hand and object salience maps were combined and downsampled (using on-off centre-surround sampling) to obtain a smaller (16 × 16 grid) saliency map that is computationally more tractable (Fig. 2C). Note that for clarity the combined map shown in Fig. 2C is illustrative and is not the true superimposition of the four images above.

In detail, we compute $S_k = S(\tilde{\eta}_u) - \min(S(\tilde{\eta}_u))$, the differential salience for the kth saccade, and enhance it by $R_k$, i.e., $S_k = S_k + R_k$, with $R_k$ corresponding to the map

$$R_k = \sum_{j=1}^{4} w_j\,\rho(S_k, c_j) + \alpha \cdot R_{k-1}$$

with $\alpha$ representing the weight of previous estimates, which is set to 1/2 for coherence with Friston et al. (2012). The elements of the equation are computed on the basis of the preceding ESGP models:

- $\rho$ is a Gaussian function (with a standard deviation of 1/16 of the image size) of the distance from the points $c_j$;
- $c_1 \sim p_{SMALL}(hPos(t+1))$ and $c_2 \sim p_{BIG}(hPos(t+1))$ are predicted points of the position of the hand;
- $c_3 \sim p_{SMALL}(gPos(t+1))$ and $c_4 \sim p_{BIG}(gPos(t+1))$ are predicted points of the goal position;
- $w_1 = p(G = SMALL \mid hShape(1{:}t))$ and $w_2 = p(G = BIG \mid hShape(1{:}t))$ are predictions of the grasping action computed on the basis of the hand preshape models;
- $w_3 = p(G = SMALL \mid OBS(1{:}t))$ and $w_4 = p(G = BIG \mid OBS(1{:}t))$ are beliefs about the currently performed grasping action;

where $OBS(1{:}t)$ denotes the sequence of previous observations.
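A compact sketch of this recursive map update (ours, purely illustrative; the Gaussian blobs implement $\rho$, and the four $(w_j, c_j)$ pairs follow the definitions in the list above):

```python
# Illustrative sketch (ours) of R_k = sum_j w_j * rho(., c_j) + alpha * R_{k-1},
# where rho is a Gaussian of the distance from c_j, std = 1/16 of the image size.
import numpy as np

def gaussian_blob(shape, centre, sigma):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - centre[0]) ** 2 + (xs - centre[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def update_R(R_prev, weights, centres, alpha=0.5):
    """weights: the four w_j (hypothesis probabilities from the sub-models);
    centres: the four c_j (predicted hand and goal positions under the two
    hypotheses); alpha = 1/2 as in the text."""
    sigma = R_prev.shape[0] / 16.0
    R = alpha * R_prev
    for w, c in zip(weights, centres):
        R += w * gaussian_blob(R_prev.shape, c, sigma)
    return R
```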
The coefficients of the map and the relative salience of the elements within it (hand and objects) depend on the outputs of the generative models described earlier. For the hand salience maps, the centre of the Gaussians was based on the forward models of hand position under the precision grip (or power grasp) hypothesis, while the "weight" of the map $w_1$ (or $w_2$) is calculated based on the forward model of preshape information under the precision grip (or power grasp) hypothesis. In other words, the salience of the hand position expected under the precision grip (or power grasp) hypothesis is higher when the hand is correctly configured for a precision grip (or power grasp). This is because, in the empirical study we are modelling, only preshape depends on the performer's goal (while hand position is uninformative); however, the same model can be readily expanded to integrate (in a Bayesian manner) other sources of evidence, such as the actor's hand position and gaze (Ambrosini, Pezzulo, & Costantini, 2015). Furthermore, the salience of the small (or the big) object, and the "weight" of the map $w_3$ (or $w_4$), corresponds to the probability that the performer agent is executing a precision grip (or power grasp), given the current observations. Specifically, it is calculated as the posterior probability of the small (or big) object hypothesis multiplied by a Gaussian term $N(hPos; gPos, \sigma)$ that essentially describes hand-object distance. The Gaussian is centred on the object position (gPos), and hPos is the hand position. The $\sigma$ of the Gaussian is the uncertainty about the posterior probability of the small (or big) object hypothesis. Overall, $R_k$ represents a dynamic (and fading) snapshot of the current belief about the perceived action, based on the observation of the trajectories and preshape of the subjects' hands.

The most salient zones of the saliency map of Fig. 2C represent the most informative locations of the visual scene; i.e., those that are expected to disambiguate alternative hypotheses. Therefore, the map does not simply include spatial information (e.g., the expected position of the hand), but also information about the (epistemic) value of the observations (e.g., a hand preshaped for a power grasp) one can harvest by looking at these positions, given the current belief state of the agent. Hence, hypothesis testing, or the active sampling of the most relevant information, corresponds to selecting the most salient location for the next saccade. Note that this is a dynamical process: the saliency map is continuously updated, reflecting the changing beliefs of the agent.

2.5. Modelling perceptual decisions in action observation

In the action observation paradigm we simulated, participants were not explicitly asked to decide (between the "small" or "big" hypotheses), but their "decision" was inferred by measuring their gaze behaviour; i.e., a saccade towards one of the two objects, big or small (Ambrosini et al., 2011). In the same way, in the computational model, updates of the agent's belief and saliency map terminate when the (artificial) eye lands on one of the two objects, signalling the agent's decision. As we will see, in both the human experiment and the model, with sufficient information, saccades can be proactive rather than just tracking the moving hand, and participants fixate the selected target before the action is completed.

Note that, in the model, the decision (i.e., the fixation of the selected object) emerges naturally from saliency dynamics, which in turn reflect belief updating during hypothesis testing, without an explicit decision criterion (e.g., "look at the big object when you are certain about it"). This is because actions are always sampled from the same salience map, which implicitly indicates whether the hand or one of the objects is most contextually salient. In other words, the decision is made when the target location becomes more salient than the other locations (e.g., the hand location), not when the agent has reached a predefined criterion, e.g., a fixed confidence level. This lack of a "threshold" or criterion for the decision marks an important difference with commonplace models of decision-making such as the drift diffusion model (Ratcliff, 1978) and is a hallmark of embodied models of choice that consider action and perception as mutually interactive rather than modularized systems (Lepora & Pezzulo, 2015).
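The resulting decision rule is therefore very simple, as in this sketch (ours, purely illustrative): the trial ends as soon as the most salient location coincides with one of the objects, with no explicit confidence threshold being tested:

```python
# Illustrative sketch (ours): the "decision" is simply the first saccade whose
# target (the most salient location) falls on one of the two objects.
import numpy as np

def decide_or_track(salience_map, object_locations):
    """object_locations: e.g., {'BIG': (y1, x1), 'SMALL': (y2, x2)}, given on
    the same 16 x 16 grid as the salience map. Returns the chosen object
    label, or None while gaze keeps tracking the hand."""
    target = np.unravel_index(np.argmax(salience_map), salience_map.shape)
    for label, loc in object_locations.items():
        if tuple(target) == tuple(loc):
            return label
    return None  # most salient location is elsewhere (typically the hand)
```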
Key to this result, and to the implicit shift from hand-tracking to the fixation of the selected object, is the fact that the posterior probability that one of the two objects will be grasped is continuously updated as new visual samples are collected, and can eventually become high enough to drive a saccade (i.e., one of the objects can assume more salience than the hand). This, in turn, depends on the fact that when the probability of a power versus precision grip is updated (Fig. 2A), the probability of the big versus small object is also updated (Fig. 2B), reflecting implicit knowledge of the intentionality of the action (e.g., that big objects require a power grasp). In sum, if the agent does not know the goal, as in this perceptual paradigm, it has to accumulate evidence first by looking at the hand, and then by looking at the target once it has resolved its uncertainty.

As an illustrative example, Fig. 3 shows a sequence of (unfiltered) saliency maps along the six time frames of a sample run. Here, the brighter areas correspond to the most salient locations (recall that the most salient area is selected for the next saccade). One can see a shift in the saliency map, such that, by the third frame, the most salient object is the to-be-grasped big object. Below we test the behaviour of the model by directly comparing it with human data.

3. Results

We tested the computational model on the visual stimuli used by Ambrosini et al. (2011), which include action observation in four (2 × 2) conditions, which derive from the combination of 2 target conditions (big or small object) and 2 shape conditions (pre-shape or no-shape). As a result, the four conditions correspond to four types of hand actions: "no-shape–big target", "no-shape–small target" (i.e., a hand movement with the fist closed to the big or small target, respectively), "pre-shape–big target", and "pre-shape–small target" (i.e., a hand movement with a power grasp or a precision grip to grasp the big or small object, respectively).

To compare the results of the original study and the simulations, we calculated the arrival time for the simulated saccades as the difference between the time when the hand (of the actor) and the saccade (of the simulated agent) land on the target object. Note that arrival time is negative when the eye lands on the object before the hand. Note also that our simulations include one simplification: saccades have a fixed duration (of 192 msec, which stems from the fact that before a saccade the inference algorithm performs 16 iterations, each assumed to last 12 msec). These parameters were selected for consistency with previous work using the saccadic eye movement model (Friston et al., 2012) and to ensure that the simulated saccadic duration is within the average range for humans (Leigh & Zee, 2015). Given that both saccades and videos have fixed durations, every trial comprises exactly 6 epochs.
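For concreteness, the arrival-time bookkeeping can be written out as follows (our sketch of the computation just described; the names are ours):

```python
# Illustrative sketch (ours) of the arrival-time measure: negative values mean
# that the (simulated) eye reached the target before the actor's hand did.
ITERATIONS_PER_SACCADE = 16
MSEC_PER_ITERATION = 12
SACCADE_DURATION_MSEC = ITERATIONS_PER_SACCADE * MSEC_PER_ITERATION  # 192 msec

def arrival_time_msec(eye_landing_msec, hand_landing_msec):
    """Difference between when the saccade and the hand land on the target."""
    return eye_landing_msec - hand_landing_msec
```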
The results of our simulations are remarkably similar to those of the original study (Fig. 4). The key result is a
Fig. 3 – A sample saliency map, shown during 6 time frames. The figure shows how the saliency map (as in Fig. 2C) evolves over time as the actor's action unfolds. This map encodes perceptual aspects of the observed scene (e.g., hand position and configuration) as well as the expected informational or epistemic value (salience) of the percept. Bright areas correspond to high-saliency locations. Note that the saliency map is updated during action observation, reflecting the changing belief state of the observer or agent. At time frame T2, the most salient location is the big object. Since actions (gazes) are sampled from the most salient locations in the saliency map, the agent is more likely to make a proactive saccade to the big object, even if the hand has not yet reached it.
Fig. 5 – Results of sample simulations of a power grasp, without preshape (left) or with preshape (right). Panel A shows the expected probability of the two competing hypotheses (here, big vs small) during an exemplar trial. Panels B and C show the location of the saccade in the video frame and the saliency map, respectively. Panel D shows the hidden (oculomotor) states as computed by the model. Panel E shows the actual content of what is sampled by a saccade (in the filtered map). Panel F shows the posterior beliefs about the 'true' hypothesis (expectations are in blue and the associated uncertainty in grey). The posterior beliefs are plotted in terms of expected log probabilities and the associated 90% confidence interval. A value of zero corresponds to an expected probability of one. Increases in conditional confidence about the expected log probability correspond to a shrinking of the confidence intervals. Panel G shows the "percept" of the system, that is, the mixture of hypotheses weighted by the posterior expectation, which in this study is represented with a superposition of all the frames of the previous time steps. Panel H shows the sequence of saccades during the experiment (where the first saccade to the hand depends on participants' instructions and can be ignored; see the main text). Note that in the (left) case without preshape, gaze follows a reactive, hand-following strategy (panels G and H) and the action is disambiguated fairly late in the trial (panels A and F). The scenario is different in the (right) case with preshape information.
Fig. 6 – Results of sample simulations of a precision grip, without preshape (left) or with preshape (right). Labels as in Fig. 5. Note that even in these sample simulations, the (right) scenario with preshape entails faster recognition and proactive movements compared to the (left) scenario without preshape.
Furthermore, our approach entails a systems-level perspective on action understanding. The importance of brain mechanisms such as mirror neurons in action recognition has often been recognized, but clearly these neurons (like any other neuron) operate within much larger brain networks for adaptive action and perception. This implies the necessity of a systems-level view of action recognition, which clearly recognizes the role of large cortical areas and cortico-subcortical
loops (Bonini, 2016; Caligiore et al., 2013). The systems-level architectural scheme of Fig. 1, despite being necessarily simplified and incomplete, represents a first step in this direction. Addressing action understanding within a large-scale biological model like active inference makes it possible to generate specific predictions on the role of different brain areas in this process.

Finally, it is worth highlighting that we have tested the validity of the model at the behavioural level, and its capacity to explain different patterns of (proactive or reactive) eye movements by appealing to a single imperative of uncertainty (i.e., expected surprise or variational free energy) minimization. Clearly, there are several other aspects of the proposal that remain to be tested in more detail. One advantage of our computational approach is that it enables the estimation of hidden variables from behavioural data. For example, panels D and F of Figs. 5 and 6 show the hidden (oculomotor) states and the agent's current uncertainty, respectively. These measures (and others) are automatically inferred by the model and can be used for model-based, trial-by-trial computational analysis of neurobiological data, for example dynamical measures of brain activation such as EEG or MEG (Daw, 2011; Friston et al., 2014), thus productively linking the various levels and timescales of action observation, behavioural and neuronal. This reflects the fact that the proposed model generates a variety of empirical predictions, concerning for example the ways action- or belief-related brain signals (panels D and F in Figs. 5 and 6) change during trials with high or low uncertainty, or when the motor system is temporarily inactivated (Costantini et al., 2014), which can be tested empirically. Another prediction is that, because action understanding is an active process, modulations of the hypothesis testing mechanism would influence it; for example, it should be possible to bias action understanding by restricting eye movements.

Compared to the original model of Friston et al. (2012), there are three main differences. The first difference is that the perceptual stimulus is dynamical (a video and not an image), and for this reason the two perceptual hypotheses correspond to image sequences, not images. The second difference lies in the way the saliency map is computed: here, it does not depend on perceptual features of the to-be-recognized objects but on motor predictions. The third important difference between the current scheme and that described by Friston and colleagues is that we eschew an ad hoc inhibition of return, which they included because their generative model did not have any memory. This meant that their simulated agent forgot what it had learned from sampling a previous location and would keep returning to the most salient visual features in the absence of inhibition of return. Our more realistic setup precludes this because the model generates trajectories that unfold over time. This means that what was salient on the previous saccade is usually less salient on the subsequent saccade. This follows from the fact that our generative model encodes trajectories and therefore has an implicit memory, in the sense that it can accumulate information over time about the underlying causes of sensory information.

The idea of a reuse of motor strategies to support perceptual functions has been raised in several domains. One early example was (the motor theory of) speech perception (Liberman & Mattingly, 1985; Liberman, Cooper, Shankweiler, & Kennedy, 1967). Our proposal here is in accordance with one central claim of this and other motor theories of cognition (Jeannerod, 2006), namely, that perceptual processing reuses the generative or forward models implied in motor control. In our study, however, the contribution of the generative models (and the motor system) is quite specific: guiding eye movements and supporting active hypothesis testing. As our simulations and the experimental data show, engaging the generative models is not mandatory for action recognition, but improves it by making eye movements more proactive. In other words, our simulations show that one can assign saliency to current stimuli (observed movements) and solve the same task in various ways: reactively (by following the hand), by extrapolating perceptual variables over time, or by engaging the generative models (and the motor system). However, reactive strategies may be limited, and visual extrapolation may fail to correctly represent sequential events that are generated by hidden causes (e.g., the dynamics of the motor system) and have an intrinsic intentionality; otherwise, the generative model underlying visual extrapolations would be essentially a duplicate of the generative model underlying action execution. Another problem with a visual extrapolation explanation is that it is not immediately clear why eye movements should go proactively to the object (and not, for example, to any future predicted location before the object) without a notion that grasping the object is the agent's goal. While it may not be mandatory to engage the (generative model of the) motor system to solve this specific task, doing so would automatically produce an advance understanding of the situation that speaks to one's own action goals ("motor understanding"); in turn, this may have additional benefits, such as segmenting action observation into meaningful elements (e.g., in goal- and subgoal-related ways; Donnarumma, Maisto, & Pezzulo, 2016; Stoianov, Genovesio, & Pezzulo, 2015) and permitting fast planning of complementary or adversarial actions in social settings (Pezzulo, 2013; Pezzulo, Iodice, Donnarumma, Dindo, & Knoblich, 2017).

In this illustration of epistemic foraging under active inference, we have focused on information gain in the context of action observation. On this view, salience becomes a sort of "epistemic affordance", where the affordance of different locations (hand or objects) changes dynamically as a function of the agent's beliefs, and therefore becomes inherently context sensitive. It is interesting to note that other studies using active inference (but in simplified, Markovian or discrete-time schemes) appeal to exactly the same idea, but in the domain of goal-directed action, e.g., finding reward in a maze. In these studies, when agents are uncertain about reward locations, they first need to resolve their uncertainty through epistemic action that entails information gain (e.g., they search for cues that disambiguate a reward location). Resolving this uncertainty is a prerequisite for successively executing a pragmatic action (e.g., reaching the reward location). The resulting mixture of epistemic and pragmatic value turns out to be the free energy expected under any sequence of actions or policy. In short, the active inference we have demonstrated in this work has a construct validity in terms of recent work on more abstract formulations of exploration and exploitation (Friston et al., 2015; Friston, FitzGerald, Rigoli, Schwartenbeck, O'Doherty, et al., 2016; Friston, FitzGerald, Rigoli, Schwartenbeck, & Pezzulo, 2016; Pezzulo & Rigoli, 2011; Pezzulo, Rigoli, & Friston, 2015; Pezzulo, Cartoni, Rigoli, Pio-Lopez, & Friston, 2016).
5. Conclusions

This paper offers a potentially important and novel formulation of action observation that generalizes active inference based on epistemic foraging (foraging for information) and visual salience. In short, we consider the driving force behind saccadic eye movements to be the resolution of uncertainty about competing explanations for the causes of sensory information: in our case study, whether an actor is reaching for a small or a big object. This can be formulated in terms of saliency maps that encode the information gain (or epistemic value) of sampling the next location in the visual field. In turn, this depends upon predictions about the likely configuration of the world based upon a forward or generative model of unfolding events (i.e., the prediction of the hand movement and shape, depending on the actor's goal of grasping a small or a big object). This construction is both principled and straightforward: it differs fundamentally from previous treatments of salience, because salience becomes an explicit function of beliefs and predictions about the future and can be constructed on line in a Bayes-optimal fashion. Furthermore, our work provides a formal perspective on mirror neuron-like activity and the key role of active vision in coupling perception and action. This paper presents the basic ideas and establishes their construct validity by showing that one can reproduce (with remarkable accuracy) key phenomena observed in empirical studies of eye movement dynamics during action observation. The ability to model action observation in formal terms may have important implications for the modelling of both eye movements and their neuronal correlates.

Acknowledgements

KJF is funded by the Wellcome Trust [088130/Z/09/Z]. GP is funded by the European Community's Seventh Framework Programme (FP7/2007–2013) project Goal-Leaders (Grant No: FP7-ICT-270108) and the HFSP (Grant No: RGY0088/2014).

References

Aglioti, S. M., Cesari, P., Romani, M., & Urgesi, C. (2008). Action anticipation and motor resonance in elite basketball players. Nature Neuroscience, 11, 1109–1116.
Ahissar, E., & Assa, E. (2016). Perception as a closed-loop convergence process. eLife, 5, e12830. http://dx.doi.org/10.7554/eLife.12830.
Ambrosini, E., Costantini, M., & Sinigaglia, C. (2011). Grasping with the eyes. Journal of Neurophysiology, 106, 1437–1442.
Ambrosini, E., Pezzulo, G., & Costantini, M. (2015). The eye in hand: Predicting others' behavior by integrating multiple sources of information. Journal of Neurophysiology, 113, 2271–2279. http://dx.doi.org/10.1152/jn.00464.2014.
Ambrosini, E., Reddy, V., de Looper, A., Costantini, M., Lopez, B., & Sinigaglia, C. (2013). Looking ahead: Anticipatory gaze and motor ability in infancy. PLoS One, 8, e67916. http://dx.doi.org/10.1371/journal.pone.0067916.
Ambrosini, E., Sinigaglia, C., & Costantini, M. (2012). Tie my hands, tie my eyes. Journal of Experimental Psychology: Human Perception and Performance, 38, 263.
Bajcsy, R., Aloimonos, Y., & Tsotsos, J. K. (2016). Revisiting active perception. ArXiv:1603.02729 [Cs].
Bonini, L. (2016). The extended mirror neuron network: Anatomy, origin, and functions. The Neuroscientist. http://dx.doi.org/10.1177/1073858415626400.
Caligiore, D., Pezzulo, G., Miall, R. C., & Baldassarre, G. (2013). The contribution of brain sub-cortical loops in the expression and acquisition of action understanding abilities. Neuroscience and Biobehavioral Reviews, 37, 2504–2515.
Chatzis, S. P., & Demiris, Y. (2011). Echo state Gaussian process. IEEE Transactions on Neural Networks, 22, 1435–1445. http://dx.doi.org/10.1109/TNN.2011.2162109.
Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.
Costantini, M., Ambrosini, E., Cardellicchio, P., & Sinigaglia, C. (2014). How your hand drives my eyes. Social Cognitive and Affective Neuroscience, 9, 705–711. http://dx.doi.org/10.1093/scan/nst037.
Costantini, M., Ambrosini, E., & Sinigaglia, C. (2012). Out of your hand's reach, out of my eyes' reach. The Quarterly Journal of Experimental Psychology, 65, 848–855. http://dx.doi.org/10.1080/17470218.2012.679945.
Cross, E. S., Hamilton, A. F., & Grafton, S. T. (2006). Building a motor simulation de novo: Observation of dance by dancers. NeuroImage, 31, 1257–1267. http://dx.doi.org/10.1016/j.neuroimage.2006.01.033.
Daw, N. D. (2011). Trial-by-trial data analysis using computational models. In Decision making, affect, and learning: Attention and performance XXIII. Oxford: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199600434.003.0001.
Demiris, Y. (2007). Prediction of intent in robotics and multi-agent systems. Cognitive Processing, 8, 151–158.
Demiris, Y., & Khadhouri, B. (2005). Hierarchical attentive multiple models for execution and recognition (HAMMER). Robotics and Autonomous Systems, 54, 361–369.
Dindo, H., Zambuto, D., & Pezzulo, G. (2011). Motor simulation via coupled internal models using sequential Monte Carlo. In Proceedings of IJCAI 2011 (pp. 2113–2119).
Donnarumma, F., Dindo, H., & Pezzulo, G. (2017). Sensorimotor coarticulation in the execution and recognition of intentional actions. Frontiers in Psychology, 8. http://dx.doi.org/10.3389/fpsyg.2017.00237.
Donnarumma, F., Maisto, D., & Pezzulo, G. (2016). Problem solving as probabilistic inference with subgoaling: Explaining human successes and pitfalls in the tower of Hanoi. PLoS Computational Biology, 12, e1004864. http://dx.doi.org/10.1371/journal.pcbi.1004864.
Elsner, C., D'Ausilio, A., Gredebäck, G., Falck-Ytter, T., & Fadiga, L. (2013). The motor cortex is causally related to predictive eye movements during action observation. Neuropsychologia, 51, 488–492. http://dx.doi.org/10.1016/j.neuropsychologia.2012.12.007.
Engel, A. K., Maye, A., Kurthen, M., & König, P. (2013). Where's the action? The pragmatic turn in cognitive science. Trends in Cognitive Sciences, 17, 202–209. http://dx.doi.org/10.1016/j.tics.2013.03.006.
Flanagan, J. R., & Johansson, R. S. (2003). Action plans used in action observation. Nature, 424, 769–771.
Fleischer, F., Caggiano, V., Thier, P., & Giese, M. A. (2013). Physiologically inspired model for the visual recognition of transitive hand actions. The Journal of Neuroscience, 33, 6563–6580.
Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4, e1000211.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138. http://dx.doi.org/10.1038/nrn2787.
Friston, K. (2011). What is optimal about motor control? Neuron, 72, 488–498.
Friston, K., Adams, R. A., Perrinet, L., & Breakspear, M. (2012). Perceptions as hypotheses: Saccades as experiments. Frontiers in Psychology, 3, 151. http://dx.doi.org/10.3389/fpsyg.2012.00151.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., O'Doherty, J., & Pezzulo, G. (2016). Active inference and learning. Neuroscience and Biobehavioral Reviews, 68, 862–879. http://dx.doi.org/10.1016/j.neubiorev.2016.06.022.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2016). Active inference: A process theory. Neural Computation, 29, 1–49. http://dx.doi.org/10.1162/NECO_a_00912.
Friston, K., Mattout, J., & Kilner, J. (2011). Action understanding and active inference. Biological Cybernetics, 104, 137–160. http://dx.doi.org/10.1007/s00422-011-0424-z.
Friston, K., Rigoli, F., Ognibene, D., Mathys, C., Fitzgerald, T., & Pezzulo, G. (2015). Active inference and epistemic value. Cognitive Neuroscience, 1–28. http://dx.doi.org/10.1080/17588928.2015.1020053.
Friston, K., Schwartenbeck, P., FitzGerald, T., Moutoussis, M., Behrens, T., & Dolan, R. J. (2014). The anatomy of choice: Dopamine and decision-making. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 369, 20130481. http://dx.doi.org/10.1098/rstb.2013.0481.
Gibson, J. J. (1966). The senses considered as perceptual systems. Boston, MA: Houghton Mifflin.
Giese, M. A., & Poggio, T. (2003). Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience, 4, 179–192. http://dx.doi.org/10.1038/nrn1057.
Giese, M. A., & Rizzolatti, G. (2015). Neural and computational mechanisms of action processing: Interaction between visual and motor representations. Neuron, 88, 167–180. http://dx.doi.org/10.1016/j.neuron.2015.09.040.
Grafton, S. T. (2009). Embodied cognition and the simulation of action to understand others. The Annals of the New York Academy of Sciences, 1156, 97–117. http://dx.doi.org/10.1111/j.1749-6632.2009.04425.x.
Gredebäck, G., & Falck-Ytter, T. (2015). Eye movements during action observation. Perspectives on Psychological Science, 10, 591–598. http://dx.doi.org/10.1177/1745691615589103.
Hayhoe, M., & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9, 188–194. http://dx.doi.org/10.1016/j.tics.2005.02.009.
Heyes, C. (2010). Where do mirror neurons come from? Neuroscience and Biobehavioral Reviews, 34, 575–583. http://dx.doi.org/10.1016/j.neubiorev.2009.11.007.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.
Jeannerod, M. (2006). Motor cognition. Oxford University Press.
Keysers, C., & Perrett, D. (2004). Demystifying social cognition: A Hebbian perspective. Trends in Cognitive Sciences, 8, 501–507.
Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8, 159–166.
Kilner, J. M., & Lemon, R. N. (2013). What we know currently about mirror neurons. Current Biology, 23, R1057–R1062. http://dx.doi.org/10.1016/j.cub.2013.10.051.
Kilner, J. M., Paulignan, Y., & Blakemore, S. J. (2003). An interference effect of observed biological movement on action. Current Biology, 13, 522–525.
Land, M. F. (2006). Eye movements and the control of actions in everyday life. Progress in Retinal and Eye Research, 25, 296–324.
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 1311–1328.
Leigh, R. J., & Zee, D. S. (2015). The neurology of eye movements. USA: Oxford University Press.
Lepora, N. F., & Pezzulo, G. (2015). Embodied choice: How action influences perceptual decision making. PLoS Computational Biology, 11(4), e1004110. http://dx.doi.org/10.1371/journal.pcbi.1004110.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Kennedy, M. S. (1967). Perception of the speech code. Psychological Review, 74, 431–461.
Liberman, A. M., & Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1–36. http://dx.doi.org/10.1016/0010-0277(85)90021-6.
Ognibene, D., Chinellato, E., Sarabia, M., & Demiris, Y. (2013). Contextual action recognition and target localization with an active allocation of attention on a humanoid robot. Bioinspiration & Biomimetics, 8, 035002. http://dx.doi.org/10.1088/1748-3182/8/3/035002.
Ognibene, D., & Demiris, Y. (2013). Towards active event recognition. In The 23rd international joint conference on artificial intelligence (IJCAI 2013).
O'Regan, J. K., & Noe, A. (2001). A sensorimotor account of vision and visual consciousness. The Behavioral and Brain Sciences, 24, 883–917.
Penny, W., Mattout, J., & Trujillo-Barreto, N. (2006). Bayesian model selection and averaging. In Statistical parametric mapping: The analysis of functional brain images. London: Elsevier.
Pezzulo, G. (2008). Coordinating with the future: The anticipatory nature of representation. Minds and Machines, 18, 179–225. http://dx.doi.org/10.1007/s11023-008-9095-5.
Pezzulo, G. (2011). Grounding procedural and declarative knowledge in sensorimotor anticipation. Mind and Language, 26, 78–114.
Pezzulo, G. (2013). Studying mirror mechanisms within generative and predictive architectures for joint action. Cortex, 49(10), 2968–2969. http://dx.doi.org/10.1016/j.cortex.2013.06.008.
Pezzulo, G., Cartoni, E., Rigoli, F., Pio-Lopez, L., & Friston, K. (2016). Active inference, epistemic value, and vicarious trial and error. Learning & Memory, 23, 322–338. http://dx.doi.org/10.1101/lm.041780.116.
Pezzulo, G., & Cisek, P. (2016). Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Trends in Cognitive Sciences. http://dx.doi.org/10.1016/j.tics.2016.03.013.
Pezzulo, G., Iodice, P., Donnarumma, F., Dindo, H., & Knoblich, G. (2017). Avoiding accidents at the champagne reception: A study of joint lifting and balancing. Psychological Science. http://dx.doi.org/10.1177/0956797616683015.
Pezzulo, G., Barsalou, L. W., Cangelosi, A., Fischer, M. H., McRae, K., & Spivey, M. (2011). The mechanics of embodiment: A dialogue on embodiment and computational modeling. Frontiers in Psychology, 2, 1–21.
Pezzulo, G., & Rigoli, F. (2011). The value of foresight: How prospection affects decision-making. Frontiers in Neuroscience, 5.
Pezzulo, G., Rigoli, F., & Friston, K. J. (2015). Active inference, homeostatic regulation and adaptive behavioural control. Progress in Neurobiology, 134, 17–35. http://dx.doi.org/10.1016/j.pneurobio.2015.09.001.
Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 358, 435–445. http://dx.doi.org/10.1098/rstb.2002.1221.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85, 59–108.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141.
Rothkopf, C. A., Ballard, D. H., & Hayhoe, M. M. (2007). Task and context determine where you look. Journal of Vision, 7, 16.
Sailer, U., Flanagan, J. R., & Johansson, R. S. (2005). Eye-hand coordination during learning of a novel visuomotor task. The Journal of Neuroscience, 25, 8833–8842. http://dx.doi.org/10.1523/JNEUROSCI.2658-05.2005.
Stoianov, I., Genovesio, A., & Pezzulo, G. (2015). Prefrontal goal codes emerge as latent states in probabilistic value learning. Journal of Cognitive Neuroscience, 28, 140–157.
Tatler, B. W., Hirose, Y., Finnegan, S. K., Pievilainen, R., Kirtley, C., & Kennedy, A. (2013). Priorities for selection and representation in natural tasks. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 368, 20130066. http://dx.doi.org/10.1098/rstb.2013.0066.
Umiltà, M. A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Keysers, C., et al. (2001). I know what you are doing: A neurophysiological study. Neuron, 31, 155–165.
Wolpert, D. M., Doya, K., & Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 358, 593–602. http://dx.doi.org/10.1098/rstb.2002.1238.