
The Journal of Neuroscience, April 3, 2013 • 33(14):5930–5938

Behavioral/Cognitive

Body Posture Modulates Action Perception


Marius Zimmermann, Ivan Toni, and Floris P. de Lange
Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, 6500 HE Nijmegen, The Netherlands

Recent studies have highlighted cognitive and neural similarities between planning and perceiving actions. Given that action planning
involves a simulation of potential action plans that depends on the actor’s body posture, we reasoned that perceiving actions may also be
influenced by one’s body posture. Here, we test whether and how this influence occurs by measuring behavioral and cerebral (fMRI)
responses in human participants predicting goals of observed actions, while manipulating postural congruency between their own body
posture and postures of the observed agents. Behaviorally, predicting action goals is facilitated when the body posture of the observer
matches the posture achieved by the observed agent at the end of his action (action’s goal posture). Cerebrally, this perceptual postural
congruency effect modulates activity in a portion of the left intraparietal sulcus that has previously been shown to be involved in updating
neural representations of one’s own limb posture during action planning. This intraparietal area showed stronger responses when the
goal posture of the observed action did not match the current body posture of the observer. These results add two novel elements to the
notion that perceiving actions relies on the same predictive mechanism as planning actions. First, the predictions implemented by this
mechanism are based on the current physical configuration of the body. Second, during both action planning and action observation,
these predictions pertain to the goal state of the action.

Introduction

Several studies have suggested that perception of others' actions engages the observer's motor system (Cattaneo et al., 2011; Press et al., 2011). More precisely, observed movements are thought to be simulated internally via forward models (Jeannerod, 2001; Oztop et al., 2005). Forward models are also computed during production of movements, imagined or actual (Shadmehr and Krakauer, 2008), and those computations are modulated by the spatial relationship between the current and the intended body posture of an action, the latter being the action-goal posture, which is the body posture occurring when the action goal is achieved (Shenton et al., 2004; de Lange et al., 2006; Lorey et al., 2009; Ionta et al., 2012; Zimmermann et al., 2012). This modulation can be seen as an instance of the end-state comfort principle, according to which action plans are hierarchically organized around temporally distal goals and goal postures (Rosenbaum et al., 1995; Hommel, 2003; Grafton and Hamilton, 2007; Kilner et al., 2007). Here we test whether action perception also follows this principle, considering the relation between an observer's body posture and the action-goal posture.

Suggestive evidence for the general idea that the state of the observer's body influences action observation comes from a study showing that chronically deafferented patients are impaired in inferring motoric expectations of an actor (Bosbach et al., 2005). This suggests that lack of somatosensory information about one's own body influences perception of others' actions. However, chronically deafferented patients might experience substantial functional reorganization (Chen et al., 2002), and it remains unclear whether body posture influences action observation through cerebral regions involved in action planning and state estimation. Recently, Ambrosini et al. (2012) showed that having one's hand tied behind one's back impairs proactive eye movements during action observation. Others did not find any influence of body posture on action observation, either on behavior (Fischer, 2005) or cerebral motor structures (Lorey et al., 2009), making it unclear whether and at which level of the action hierarchy the observer's body posture might influence action perception.

Here we assess whether and how the body posture of an observer influences action perception. Participants predicted the goal state of visually presented actions, while their cerebral activity was monitored with fMRI and their right arm was either pronated or supinated. The visually presented actions showed an actor grasping a bar with a pronated or supinated right arm, using either a rotation or a translation movement to move the bar. This procedure allowed us to disentangle the effects of participants' own arm posture on the perception of actions across different goal postures and biomechanical complexities of the observed actions. We expected that action perception would be facilitated when participants' body posture matches the actions' goal posture, and that this modulatory effect would be supported by cerebral regions generating state estimates of one's own body using proprioceptive or visual information [i.e., portions of the intraparietal sulcus (IPS) and the extrastriate body area (EBA)] (Wolpert et al., 1998; Hommel, 2003; Pellijeff et al., 2006; Urgesi et al., 2007; Desmurget and Sirigu, 2009; Parkinson et al., 2010).

Received Dec. 5, 2012; revised Jan. 25, 2013; accepted Feb. 17, 2013.
Author contributions: M.Z., I.T., and F.P.d.L. designed research; M.Z. performed research; M.Z., I.T., and F.P.d.L. analyzed data; M.Z., I.T., and F.P.d.L. wrote the paper.
Correspondence should be addressed to Marius Zimmermann, Donders Institute for Brain, Cognition and Behaviour, PO Box 9101, 6500 HB, Nijmegen, The Netherlands. E-mail: [email protected].
DOI:10.1523/JNEUROSCI.5570-12.2013
Copyright © 2013 the authors 0270-6474/13/335930-09$15.00/0

Materials and Methods

Figure 1. Action prediction task. A, B, D, E, In each trial, participants were shown videos of an action (A, D: eight representative still frames) that involved either a bar translation (schematically
illustrated in B) or a bar rotation (E). C, F, Participants were lying in a scanner while the spatial relation between the posture of their right hand and the start/goal posture of the observed action was
manipulated. Participants used their left hand to indicate their prediction of the goal state of the observed action when required (100% of the trials during the behavioral session, 10% of the trials
during the fMRI session).

Participants. Twenty-nine healthy, naive participants [17 female; age, 24.1 ± 3.9 (mean ± SD) years] participated after giving informed consent according to institutional guidelines (Commissie Mensgebonden Onderzoek region Arnhem-Nijmegen, The Netherlands) for payment of 10 €/h or course credit. All participants were consistent right-handers and had normal or corrected-to-normal vision. Two participants were excluded from the analysis due to technical problems with the MR imaging system. Three participants were excluded because of poor behavioral performance (showing error rates and/or reaction times that were >2.5 SDs larger than the group mean). The remaining 24 participants (13 female; age, 24.3 ± 3.9 years) were included in the analyses.

Experimental paradigms. The experiment consisted of three parts completed in a fixed order, spread over two sessions. A bar-grasping task (see below for task descriptions) was performed during the first session only. An action-prediction task was performed during both sessions. To collect behavioral data, the first session took place in a dummy MR scanner identical in appearance to a real MR scanner. Several days later (average, 3.2 d) the action-prediction task was performed in a functional MR scanner.

Bar-grasping task. The purpose of the bar-grasping task was to familiarize participants with the actions they were about to observe in the prediction tasks later on. The participants were seated at a table with three cradles positioned next to each other at 5 cm distance between adjacent cradles. Participants were instructed to use a power grip to grasp the bar (length, 25 cm; diameter, 2.5 cm; one end black, one end white), positioned horizontally on the middle cradle, and place it on either the left or right cradle according to instructions presented on a screen. Instructions involved both a direction (i.e., whether to place the bar on the left or right cradle) and a goal orientation of the bar (i.e., where the white and black ends of the bar should point).

Some actions required a translation of the bar from the middle cradle to the left or right cradle (16 trials). Other actions required an additional clockwise or counterclockwise rotation of the bar by 90° (16 trials) or 180° (16 trials). All actions were performed using the right hand, and participants were free to choose whether to use an overhand or underhand power grip when grasping the bar. Task duration was ~15 min.

Action-prediction task. Participants performed the action-prediction task both outside and inside the MR environment. First, they performed the task in a dummy MR scanner, where we collected behavioral data concerning their predictions on the observed actions. In the second session, the participants performed an adapted version of the task in the MR scanner, where we measured BOLD responses. Below we describe the task in general, followed by a description of the aspects that differed between the two versions of the task.

In the action-prediction tasks (Fig. 1), participants watched short videos of actions while they were asked to predict the goal state of the observed actions as quickly as possible. The stimulus videos lasted ~2 s. In each video, an actor sitting at a table grasped and moved a bar with his right hand to one of the two cradles. Each video started with a static image of the actor in a rest position with his right hand on the table and his left hand out of view (below the table, on the actor's lap). After a variable delay (250–500 ms), the video started, showing the actor moving his right arm to grasp the bar with either an overhand or an underhand grip. Subsequently, the actor moved the bar to the left or right cradle, using either a rotation or translation movement. It has been shown that participants choose between different grip configurations depending on the action goal (Table 1) (Zimmermann et al., 2012).

Table 1. Frequencies of grip strategies

                                Target location
Action type        Left                      Right
Translation        94% overhand              82% overhand
                   6% underhand              18% underhand
Rotation           19% overhand              71% overhand
                   71% underhand             19% underhand

Frequencies were measured as a percentage of participants that preferably chose a particular grip (overhand, underhand) for a given combination of action type (translation, rotation) and target location (left, right; actor's perspective). High-frequency grip strategies are in bold. Data are from Zimmermann et al. (2012): 20 participants, 16 trials for every combination of action type and target location.

Namely, translation actions are more likely to be executed with an overhand grip; rotation actions to the left (actor's perspective) are more likely to be performed with an underhand grip of the bar; and rotation actions to the right are more likely to be performed with an overhand grip of the bar. We refer to these action preferences as low-frequency and high-frequency grip strategies. The set of videos used in this study displayed combinations of rest posture (overhand, underhand), grasp posture (overhand, underhand), initial bar orientations (black end on the left or on the right), movement direction (left, right), and action types (translation, rotation) in an equiprobable distribution, including both highly frequent and less frequent grip strategies. Time until the bar was grasped (~800 ms) and total duration of the grasping movement (~1600 ms) were standardized across trials. Videos were stopped when the goal was achieved (i.e., the actor's hand rested on the bar in its final configuration) and the last frame was shown until 2 s from video onset had elapsed.

Participants were asked to predict the goal state of each observed action as quickly as possible. Goal state was defined as the final orientation of the bar on the cradle to which the actor moved the bar. Therefore, for each trial there were four possible goal states (black bar end "pointing" up, left, down, or right). Participants indicated their decision using one of four buttons on a button box held in their left hand. Each button was assigned to one final state, defined as the bar orientation on the target cradle (white end pointing up, left, down, or right), irrespective of the movement used to achieve that final state. The mapping between final states and buttons was constant throughout the experiment. The mapping was displayed during practice and during breaks between trials.

During the task, we manipulated the arm posture of each participant's right arm (Fig. 1). Participants could either have their arm in a prone posture (i.e., palm down) or in a supine posture (i.e., palm up), lying to the right side of their body on the scanner table. Posture was changed after every block of nine trials. The posture manipulation resulted in different patterns of congruency between participants' own arm posture and the observed arm posture(s) in the videos. During translation trials, participants' posture could either be "overall congruent" or "overall incongruent" with the observed action (because start posture and goal posture are the same for these actions). During rotation trials, the participant's posture could either be in a "goal-posture congruent" state or in a "goal-posture incongruent" state. After each arm posture change instruction, there was a short break (5 s) to allow for arm repositioning.
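For illustration, the congruency coding just described can be summarized in a short sketch (Python). The function and label names below are ours, not part of the original experiment code; an overhand grip corresponds to a pronated (prone) arm and an underhand grip to a supinated (supine) arm.

```python
# Illustrative sketch (not the authors' code) of how a trial's congruency label
# could be derived from the observed action and the observer's current arm posture.

def goal_posture(action_type: str, grasp_posture: str) -> str:
    """Arm posture of the actor once the action goal is reached."""
    if action_type == "translation":
        return grasp_posture                       # grip is kept, so posture is unchanged
    flip = {"prone": "supine", "supine": "prone"}
    return flip[grasp_posture]                     # a bar rotation ends in the opposite arm posture

def congruency(action_type: str, grasp_posture: str, observer_posture: str) -> str:
    """'congruent' if the observer's current arm posture matches the action-goal posture."""
    match = observer_posture == goal_posture(action_type, grasp_posture)
    prefix = "overall" if action_type == "translation" else "goal-posture"
    return f"{prefix} {'congruent' if match else 'incongruent'}"

# Example: a rotation grasped with a supinated (underhand) grip ends with a prone arm,
# so an observer whose own arm is prone is goal-posture congruent.
print(congruency("rotation", "supine", "prone"))   # -> "goal-posture congruent"
```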
Participants engaged in a total of 432 trials. On average, ~11% of trials were filler trials. In these trials, the bar was placed vertically rather than horizontally on a cradle. These trials, in which the bar was rotated 90° (instead of 0° or 180° as occurring during the experimental conditions), were introduced to increase the number of possible observed movements and reduce predictability. These filler trials were excluded from subsequent analyses. Of the remaining trials (N = 372), half were translation trials (N = 186). The other half were rotation trials. In each group, half of the trials (N = 93) were goal-posture congruent (rotation trials) or overall congruent (translation trials), and half were goal-posture/overall incongruent. Sessions were divided into six blocks of 72 trials each, with self-paced rest breaks between blocks. Trials were presented in pseudorandom order, such that each block consisted of the same number of trials of each condition, and the same action was not presented twice in a row.

The goal of the prediction task performed in the dummy scanner was to examine whether postural congruency affected decision speed on the prediction task. Therefore, in this session, participants were asked to give a response as quickly as possible after they inferred the action goal. The video was stopped when the subjects pressed the button to indicate that they could predict the action goal. The intertrial interval (ITI) varied between 0.5 and 1 s. Before the task, participants practiced the task until they could correctly predict 8 of 10 consecutive trials, with a reaction time <2 s. The behavioral session lasted ~40 min.

During functional imaging, we were interested in how postural congruency affected neural responses during the prediction task, while avoiding any motor preparation processes related to responding (i.e., button presses). Therefore, participants were probed to respond only to a small number of "catch" trials (10% of all trials), during which the stimulus video was replaced by a green exclamation mark at an unpredictable moment during the video, between 1000 and 1500 ms after stimulus onset. Participants then, using one of four buttons, had to choose the likely goal of the observed action. These catch trials (as well as other trials where participants mistakenly pressed a button) were modeled separately in the fMRI analysis. The ITI varied between 2 and 4 s. Before the fMRI session, participants engaged in a number of practice trials until they could correctly respond to 8 of 10 consecutive catch trials within 2 s. During the neuroimaging session, eye movements were measured using an MR-compatible infrared camera (MRI-LR, SensoMotoric Instruments). Muscle activity of participants' right forearms (approximately above musculus pronator teres and musculus supinator, to optimally detect pronosupination of the forearm) was measured using an MR-compatible EMG system (Brain Products) and silver/silver-chloride (Ag/AgCl) electrodes (Easycap). The fMRI session lasted ~55 min.

EBA localizer task. As detailed in the introduction, we wanted to test for the presence of posture congruency effects in the EBA. To functionally localize the EBA we used a set of previously validated stimuli (http://pages.bangor.ac.uk/~pss811/page7/page7.html). This set consisted of 20 pictures of human bodies without heads and 20 pictures of chairs. Stimuli were presented in an alternating blocked design with stimulus presentation time of 300 ms on and 450 ms off, and 20 stimuli per block. Two stimuli of each block were presented twice in succession. Participants were instructed to detect stimulus repetitions (1-back task) to ensure attention to the stimuli. To prevent low-level adaptation, the location of each stimulus on the screen was slightly shifted at random. The functional localizer took ~10 min and was administered after the prediction task was completed.

Analysis of behavioral data. We obtained the time required to predict the goal state of observed actions [prediction time (PT)] and error rate from the button box responses. Trials with prediction times exceeding 2.5 SDs above a participant's condition mean were removed from the analysis (on average, 1.7% of the trials were removed by this procedure). Mean PTs were computed from all remaining, correct responses. Given the low error rate (7.5%), we did not analyze error trials.

PTs were defined as the time elapsed between the first video frame when the actor grasped the bar and the moment the participant pressed a button. We investigated the influence of three task-related factors on PT. The effect of action complexity was assessed by comparing PTs during translation actions with PTs during rotation actions. To probe the orthogonal effect of action frequency on performance, we compared PTs of actions performed with high-frequency and low-frequency grip strategies. Finally, we assessed the effect of postural congruency during translation and rotation actions on PTs. For translation actions, we compared PTs for translation trials with overall congruent and overall incongruent body posture. For rotation actions, we compared PTs for rotation trials where participants' own posture was either congruent or incongruent with the goal posture of the observed action.

We used two-tailed paired-sample t tests for all comparisons on behavioral data. Comparisons that exceeded t values corresponding to p values <0.05 were considered significant.

To assess performance during the fMRI version of the prediction task, we analyzed the error rate during the catch trials as a function of viewing duration (i.e., the time before video playback was stopped and the catch trial signal was presented). We calculated the error rate for trials depending on viewing duration in bins of 100 ms. Note that we cannot calculate PTs during the fMRI version of the prediction task, since the decision moment was imposed by the experimenter, rather than the participant.

Eye movement and EMG data. To regress out potential interpretational confounds related to cerebral effects of eye and muscle movements during the action-prediction task, regressors describing eye movement and EMG activity recorded during the fMRI session of the prediction task were included in the first-level fMRI analysis. For eye movements, we computed trajectory length and number of eye blinks for each MR volume. For EMG activity, we computed the root mean square (RMS) activity for each MR volume. These eye-movement and EMG time series were included as additional nuisance regressors in the first-level analysis of imaging data (see below).

The eye-movement recordings were also used to compare eye movements between conditions (see Analysis of behavioral data) by segmenting the recordings into trials and time-locking each segment to video onset.

Image data acquisition. We used a 3 T Trio MR scanner (Siemens) with a 32-channel head coil for signal reception to acquire whole-brain T2*-weighted multiecho echo-planar images (TR, 2070 ms; TE(1), 9.4 ms; TE(2), 21.2 ms; TE(3), 33.0 ms; TE(4), 45.0 ms; voxel size, 3.5 × 3.5 × 3.0 mm; gap size, 0.5 mm) during all functional scans. For each participant, we collected ~1400 volumes for the prediction task and 180 volumes for the EBA localizer. The first 30 volumes of each scan were used for echo weighting (see Imaging data analysis) and were discarded from the analysis. This also ensured signal equilibration of T1. Anatomical images were acquired with a T1-weighted MP-RAGE sequence (TR/TE, 2300/3.03 ms; voxel size, 1.0 × 1.0 × 1.0 mm) after the EBA localizer task.

The head of each participant was carefully constrained using cushions on both sides of the head. Participants were instructed to remain as still as possible during the experiment. For additional somatosensory feedback on head movements, the forehead of each participant was taped, with tape extending to both sides of the head coil. Data inspection showed that no head movements of participants ever exceeded 2 mm.

Imaging data analysis. Imaging data were analyzed using MatLab (MathWorks) and SPM8 (Wellcome Department of Cognitive Neurology). First, functional images were spatially realigned using a sinc interpolation algorithm that estimates rigid body transformations (translations, rotations) by minimizing head movements between the first echo of each image and the reference image (Friston et al., 1995). Next, the four echoes were combined to form a single volume. For this, the first 30 volumes of each scan were used to estimate the best echo combination to optimally capture the BOLD response over the brain (Poser et al., 2006). These weights were then applied to the entire time series. Subsequently, the time series for each voxel were temporally realigned to the acquisition of the first slice. Images were normalized to a standard EPI template centered in Talairach space (Ashburner and Friston, 1999) by using linear and nonlinear parameters and resampled at an isotropic voxel size of 2 mm. The normalized images were smoothed with an isotropic 8 mm full-width-at-half-maximum Gaussian kernel. Anatomical images were spatially coregistered to the mean of the functional images and spatially normalized by using the same transformation matrix applied to the functional images. The ensuing preprocessed fMRI time series were analyzed on a subject-by-subject basis using an event-related approach in the context of the general linear model.

For each trial type, square-wave functions were constructed with a duration corresponding to the stimulus duration and convolved with a canonical hemodynamic response function (hrf) and its temporal derivative (Friston et al., 1996). Additionally, the statistical model included 34 separate regressors of no interest, modeling catch trials and false alarms, residual head movement-related effects by including Volterra expansions of the six rigid-body motion parameters (Lund et al., 2005), and compartment signals from white matter, cerebrospinal fluid, and out-of-brain regions (Verhagen et al., 2008). Volterra expansions consisted of linear and quadratic effects of the six movement parameters for each volume, and included temporal derivatives. Finally, to covary out any potential confounding effects of eye and muscle movements, hrf-convolved metrics of eye movements (path trajectory and number of eye blinks) and muscle activity data were included as additional regressors of no interest.
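As an illustration of how such motion and EMG nuisance regressors can be assembled, consider the following sketch (Python/NumPy). This is an assumed implementation of the general idea, not the authors' SPM code; the exact Volterra expansion used in SPM (Lund et al., 2005) may differ in detail.

```python
# Sketch: 24 motion nuisance regressors (linear and quadratic terms of the six rigid-body
# parameters plus their temporal derivatives) and an RMS-EMG value per MR volume.
import numpy as np

def motion_nuisance(rp: np.ndarray) -> np.ndarray:
    """rp: (n_volumes, 6) realignment parameters -> (n_volumes, 24) nuisance matrix."""
    linear = rp
    quadratic = rp ** 2
    d_linear = np.vstack([np.zeros((1, rp.shape[1])), np.diff(linear, axis=0)])
    d_quadratic = np.vstack([np.zeros((1, rp.shape[1])), np.diff(quadratic, axis=0)])
    return np.hstack([linear, quadratic, d_linear, d_quadratic])

def emg_rms_per_volume(emg: np.ndarray, samples_per_volume: int) -> np.ndarray:
    """Root-mean-square EMG amplitude within each MR volume (one value per volume)."""
    n_vol = emg.size // samples_per_volume
    chunks = emg[: n_vol * samples_per_volume].reshape(n_vol, samples_per_volume)
    return np.sqrt((chunks ** 2).mean(axis=1))
```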
Parameter estimates for all regressors were obtained by maximum-likelihood estimation, using a temporal high-pass filter (cutoff, 128 s) and modeling temporal autocorrelation as an AR(1) process. Linear contrasts pertaining to the main effects of the functional design were calculated based on parameter estimates of canonical hrfs.

For analysis of the experimental task, we looked at the same comparisons as those we looked at during the behavioral prediction task, including those related to action complexity, frequency, and postural congruency. Contrasts of the parameter estimates for these comparisons constituted the data for the second-stage analyses, which treated participants as a random effect (Friston et al., 1999). Contrasts were thresholded, if not otherwise specified, at p < 0.05 after familywise error (FWE) correction for multiple comparisons at the voxel level. Anatomical details of significant clusters were obtained by superimposing the statistical parametric maps onto the structural images of the MNI template. Brodmann areas (BAs) were assigned based on the SPM anatomy toolbox (Eickhoff et al., 2005).

Apart from a whole-brain search for significant differences, we specifically focused on two predefined regions of interest (ROIs; spherical, radius: 5 mm). The first ROI consisted of individually localized EBA (on the basis of a separate EBA localizer session), to test whether action-prediction effects were visible in this area, which is sensitive to observation of body parts, as has been previously suggested (Downing et al., 2001). The second ROI was a region in the IPS, which has been found to be sensitive to body-posture manipulations during planning of goal-directed actions. Here we used previously published stereotactic coordinates (MNI: −22, −60, 58; Zimmermann et al., 2012) to extract the difference in brain activation for contrasts related to posture congruency.

Effective connectivity analysis. After having identified that regions in parietal and dorsal premotor cortex and the left EBA are more strongly involved in predicting goals of low-frequency actions compared with high-frequency actions (see Results), we assessed whether there were changes in effective connectivity between EBA and parietal or premotor regions as a function of action frequency.

More specifically, we expected an increased connectivity between EBA and parietal/premotor cortex during prediction of unlikely (i.e., low frequency compared with high frequency) observed actions, under the hypothesis that EBA forms predictions about potential goal states during observation of another agent. Moreover, it has previously been shown that predictions about observed actions are influenced by one's own, previously executed actions (Cattaneo et al., 2011). Therefore predictions may also be influenced by the likelihood with which the observed action would be chosen to reach a particular goal state in general. With accumulating evidence, predictions in EBA can be updated, and this updated information may be forwarded to the parietal or precentral regions to inform the action plan. This would result in the hypothesized increase in connectivity.

To analyze changes in connectivity, we performed a psychophysiological interaction (PPI) analysis (Friston et al., 1997). PPI analysis tries to model regionally specific responses based on an interaction between a psychological factor and physiological activity of one specific (seed) brain region. Here, the analysis was set up to test for differences in connectivity (measured by correlation strength between activity of two areas) between left EBA and all remaining brain areas, depending on the grip strategy used in the observed video (low frequency or high frequency). To define activity in EBA, we used the peak location of the left EBA from the independent localizer task as a starting point. We drew a 5-mm-radius sphere around that voxel and extracted the first eigenvariate of voxels in this sphere that showed a relative increase in BOLD signal during observation of low-frequency actions (first level, p < 0.05 uncorrected). First, a PPI analysis was performed for each subject. Then, contrasts of parameter estimates for the interaction term constituted the data for the second-stage PPI analysis, treating participants as a random effect. Finally, contrasts were corrected for multiple comparisons by applying FWE correction at the cluster level (p < 0.05) over the search volume (whole brain, IPS-ROI), on the basis of an intensity-based voxelwise threshold of p < 0.001 uncorrected.
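A simplified sketch of the PPI regressor construction is shown below (Python/NumPy), assuming the seed time course has already been extracted (e.g., the first eigenvariate of the 5-mm sphere). Note that SPM's PPI machinery additionally deconvolves the seed signal to an estimate of neural activity before forming the interaction; that step is omitted here, so this is a schematic illustration rather than the authors' implementation.

```python
# Schematic PPI design-matrix block: interaction of a psychological factor with the
# (mean-centered) physiological signal of a seed region, plus both main effects.
import numpy as np

def ppi_regressors(seed: np.ndarray, psych: np.ndarray) -> np.ndarray:
    """seed:  (n_volumes,) BOLD time course of the seed region (e.g., left EBA).
    psych: (n_volumes,) psychological factor, e.g., +1 during observation of
           low-frequency actions, -1 during high-frequency actions, 0 elsewhere.
    Returns columns [interaction, seed, psych]; the interaction term is the
    regressor of interest, and its contrast is taken to the group level."""
    seed_c = seed - seed.mean()          # mean-center the physiological signal
    interaction = seed_c * psych         # psychophysiological interaction term
    return np.column_stack([interaction, seed_c, psych])
```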

Results

In this section, we describe behavioral and neuroimaging results during the different tasks. Each set of results is structured along the three dimensions assessed in this study, namely action complexity (rotation trials vs translation trials), action frequency (high-frequency vs low-frequency actions), and observer's posture (congruent vs incongruent with actor's goal posture).

Behavioral results
Action complexity
As can be seen from Figure 2A, PTs for observed rotation actions were longer than those for observed translation actions [translation, 753 ± 124 ms (mean ± SD); rotation, 944 ± 152 ms; t(23) = 14.04, p < 0.001]. This finding indicates that, even when the timing of observed actions of different motoric complexity is comparable, it takes longer to predict the goal state of the more complex actions.
Action strategy frequency
Within each action type, PTs differed depending on the frequency of the grip orientation used by actors when picking up the bar (Fig. 2B). For translation actions, participants were faster to predict pronated than supinated translation actions (pronated, 732 ± 24 ms; supinated, 774 ± 25 ms; t(23) = 7.97, p < 0.001). For rotation actions, PTs were faster for supinated rotations to the left (pronated, 971 ± 32 ms; supinated, 892 ± 29 ms; t(23) = 9.09, p < 0.001) and for pronated rotations to the right (pronated, 898 ± 31 ms; supinated, 1016 ± 32 ms; t(23) = 7.37, p < 0.001). This pattern of results is fully in line with the frequency of different action strategies (Table 1). Namely, the more frequently an action is executed (the more likely participants are to use a particular grasp orientation in a condition), the faster its goal state is predicted during action observation.
Effect of observer’s body posture
Next we assessed the effect of one's own arm posture on predicting the goal state of the observed actions. When observing translation actions, there was no effect of the observer's arm posture on prediction times (congruent, 752 ± 25 ms; incongruent, 755 ± 24 ms; t(23) < 1, p > 0.10; Fig. 2C). For rotation actions, however, participants were faster in predicting action goals when their arm posture matched the goal posture of the observed action (goal-posture congruent, 936 ± 30 ms; goal-posture incongruent, 952 ± 30 ms; t(23) = 2.44, p = 0.022; Fig. 2D). That is, when participants observed a rotation action performed with a supinated grip and thus ending with a prone arm posture, prediction of the action goal was faster when the participant's own arm was also in a prone posture. Similarly, when observing a rotation action performed with a pronated grip, PTs were faster when the participant's arm was in a supine posture.
Behavioral performance during fMRI session
We analyzed the performance (error rate) on catch trials during the fMRI session of the action-prediction task as a function of viewing duration (i.e., in 100 ms bins, with an average of 12 trials per participant in each bin). Performance increased with an increase in viewing duration (linear increase of performance across subsequent bins, β = 0.368; t(23) = 4.295; p < 0.001, R² = 0.135). This finding indicates that the longer participants could watch the action, the better they could predict it. For the first bin (1000–1100 ms) participants correctly predicted 75.7% of the actions, which increased to 89.7% correctly predicted actions in the two last bins (1300–1400 ms, 1400–1500 ms).
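For reference, a sketch of this catch-trial analysis (Python; variable names are assumptions, and the group-level test of the slope is only indicated in a comment):

```python
# Sketch: bin catch-trial accuracy by viewing duration (100 ms bins, 1000-1500 ms) and
# estimate a linear trend per participant.
import numpy as np
from scipy import stats

def accuracy_by_bin(viewing_ms, correct, width=100.0):
    """Bin centers and proportion of correct catch-trial responses per bin."""
    edges = np.arange(1000.0, 1500.0 + width, width)
    idx = np.digitize(viewing_ms, edges) - 1
    centers = edges[:-1] + width / 2
    acc = np.array([correct[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(len(centers))])
    return centers, acc

def subject_slope(viewing_ms, correct):
    """Linear trend of accuracy over viewing duration for one participant."""
    centers, acc = accuracy_by_bin(viewing_ms, correct)
    ok = ~np.isnan(acc)
    return stats.linregress(centers[ok], acc[ok]).slope

# Group level: one-sample t test of per-participant slopes against zero, e.g.
# stats.ttest_1samp([subject_slope(v, c) for v, c in per_subject_data], 0.0)
```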

Figure 2. A, B, Action prediction times increase for biomechanically complex (A) and low-frequency actions (B). C, D, Prediction times increase when the observer's hand posture does not match the action goal posture during biomechanically complex (rotation trials, D), but not simple actions (translation trials, C). *p < 0.05; ***p < 0.001. n.s., Not significant.

Neuroimaging results
Action complexity modulates activity in intraparietal and precentral regions

During observation of actions of higher motoric complexity (rotation trials, compared with translation trials), neural activity increased bilaterally in the IPS and the precentral gyrus, as well as the EBA (Fig. 3A, Table 2). These activity increases were localized in the superior parietal lobe [BA7; 40–60% probability (Eickhoff et al., 2005)] on the upper bank of the IPS, and extended ventrally into the inferior parietal lobe. There were also complexity-related activity increases in the frontal cortex, restricted to the dorsal premotor cortex (BA6; 20–50%) and ventral premotor cortex (BA44; 40–50%). Activity differences within the middle occipital gyrus overlapped with EBA: within the individually localized left EBA, activity was stronger for rotation trials compared with translation trials (t(23) = 4.58, p < 0.001).

Figure 3. A, B, Activation maps, illustrating areas that show stronger activation during observation of complex actions compared to observation of simple actions (A) and observation of low-frequency action strategies compared to observation of high-frequency action strategies (B). Both contrasts show stronger activations in the EBA, and posterior parietal and premotor cortices (p < 0.001, uncorrected, for illustration purposes).

Action strategy frequency modulates activity in intraparietal and precentral regions
Observation of low-frequency actions (compared with high-frequency actions) increased activity in cortical regions partially overlapping those sensitive to action complexity (Fig. 3B; Table 3). These activity differences were observed in the left posterior parietal cortex (upper bank of IPS, BA7; 10–20%), the left and right dorsal premotor cortex (BA6; 20–40%), as well as the left and right middle occipital cortex. The latter regions overlapped with the individually localized EBA, where activity was stronger for low-frequency compared with high-frequency actions for translation (t(23) = 3.31, p = 0.003) as well as rotation actions (t(23) = 3.75, p = 0.001).

Effect of body-posture congruency in IPS
We next assessed whether there were any activity differences related to the congruency of the participants' arm posture with the action-goal posture. Focusing on the a priori defined intraparietal ROI, we observed increased activity in the left IPS when participants' body posture was incongruent to the goal posture of the observed action, compared with trials in which the two postures were congruent (t(23) = 2.48, p = 0.021; Fig. 4). This region did not show an activity difference as a function of body-posture congruency during translation actions (t(23) = 0.03, p = 0.974), which was similar to the behavioral results. A whole-brain search for differences in neural activation associated with postural congruency found none in other regions examined in either translation or rotation trials.

Effective connectivity between EBA and IPS is modulated by action frequency
If EBA is involved in action prediction, then its activity should modulate (or should be modulated by) processes occurring in intraparietal and precentral regions that support action perception. Using PPI analysis, which is designed to assess changes in effective connectivity between brain regions (Friston et al., 1997), we found that activity in the left IPS, at the same site as the above-mentioned effect of observer's body posture (IPS-ROI at MNI: −22, −58, 60), correlated with activity in left EBA as a function of action frequency. Namely, observing low-frequency actions increases the coupling between EBA and IPS (t(23) = 3.66, p = 0.046). There were no differences in connectivity when searching over the whole brain.

Eye movements
To control for the possibility that the cerebral effects described above are due to differences in eye movements, we tested for between-condition differences in eye-movement trajectory length. There was no difference in trajectory length between eye movements corresponding to different trial types (i.e., rotation vs translation, posture effects, grip choice within trial types; all t < 1.50, all p > 0.10).

Discussion
This study investigated whether and how one's body posture influences one's observations of the actions of others and affects predictions of the goals of those actions. The results provide empirical support for a direct influence of the observer's own body posture on action observation, indicating that the prediction of an action goal is facilitated when the observer's body posture matches the action-goal posture. In neural terms, postural incongruency between the observer's body posture and the action-goal posture leads to increased activity within a region of the left IPS known to be implicated in generating state estimates of one's own body (Wolpert and Ghahramani, 2000; Pellijeff et al., 2006; Parkinson et al., 2010).

Table 2. Brain regions associated with increased activity during observation and prediction of rotation actions compared to translation actions

Anatomical region           Hemisphere   Cluster size (voxels)   MNI coordinates (x, y, z)   t value (df)
Posterior parietal cortex   Left         1133                    −34, −52, +54               10.98 (23)
Posterior parietal cortex   Right        705                     +32, −48, +48               11.38 (23)
Dorsal precentral gyrus     Left         471                     −26, −6, +56                11.29 (23)
Dorsal precentral gyrus     Right        225                     +26, −8, +62                9.47 (23)
Ventral precentral gyrus    Left         60                      −50, +4, +34                8.08 (23)
Ventral precentral gyrus    Right        5                       +52, +10, +28               6.57 (23)
Middle occipital gyrus      Left         695                     −48, −74, +2                12.53 (23)
Middle occipital gyrus      Right        432                     +44, −66, +2                10.57 (23)

Table 3. Brain regions associated with increased activity during observation and prediction of low-frequency actions compared to high-frequency actions

Anatomical region           Hemisphere   Cluster size (voxels)   MNI coordinates (x, y, z)   t value (df)
Posterior parietal cortex   Left         29                      −30, −42, +44               7.53 (23)
Dorsal precentral gyrus     Left         135                     −26, −6, +52                9.35 (23)
Dorsal precentral gyrus     Right        20                      +30, −2, +48                7.57 (23)
Middle occipital gyrus      Left         106                     −46, −76, +4                8.51 (23)
Middle occipital gyrus      Right        35                      +52, −66, +0                7.60 (23)

Figure 4. BOLD signal amplitude in a region of interest of the left IPS (indicated in the rendered brain image) during observation of translation (left bar) and rotation actions (right bar). BOLD signal increases when the observer's body posture does not match (is incongruent with) the actor's goal posture during rotation actions only. *p < 0.05. n.s., Not significant.

Behavioral effects
PTs were modulated by the biomechanical complexity of the observed action, the frequency of those actions (as assessed in an independent production task), and the spatial relationship between the observer's body posture and the actor's goal posture. In detail, it took observers longer to predict goals of actions when the actions' biomechanical complexity was higher, and it took them longer to predict goals of actions that they would make less frequently. Moreover, when the body posture of an observer matched the goal posture of the observed action, predicting the action goal required less time than when those postures did not match. These effects closely resemble the pattern of reaction times observed when participants planned actions of the same type as shown in the videos used in this study (Zimmermann et al., 2012).

Action-observation effects in parietal and precentral cortex
Posterior parietal and precentral regions were sensitive to the complexity and frequency of the observed actions. These brain regions showed a stronger response to complex actions compared with simple ones, and they also showed a stronger response to low-frequency actions compared with high-frequency ones. This activation pattern fits with earlier observations of planning-related activity (Zimmermann et al., 2012) and with studies showing brain regions that allow decoding of action intentions in object-directed actions (Gallivan et al., 2011).

Sensitivity to frequency and complexity of the observed actions, together with the known involvement of these regions in motor preparatory processes (Thoenissen et al., 2002), further supports the idea that planning and observation of actions engage overlapping brain regions. The increased parietal and precentral activity during the observation of low-frequency actions might reflect competition among multiple forward models (Oztop et al., 2005) or familiarity with the observed action (Calvo-Merino et al., 2005; Neal and Kilner, 2010). Future studies may want to use individual priors for different action strategies to test for effects of expertise and individual preferences. In particular, it seems plausible that individual differences in forward models (e.g., due to differences in exposure to particular motor programs) may have consequences for both perception and action. Incidentally, this may also explain the qualitative differences in action perception observed in deafferented patients (Bosbach et al., 2005).

A region within IPS was sensitive to the postural congruency between the observer's body posture and the kinematics of observed actions. It has been shown earlier that the same region maintains a body-state estimate (Wolpert et al., 1998; Pellijeff et al., 2006; Parkinson et al., 2010), and that it is modulated by one's body posture during action production (Shenton et al., 2004; de Lange et al., 2006; Lorey et al., 2009; Ionta et al., 2012; Zimmermann et al., 2012). Modulatory effects of one's body posture during observation of others' actions in the same region within IPS suggest that it not only represents one's own estimated body states, but also the estimated goal states of others' actions. However, it is also possible that there are two classes of neurons in the same region, with some neurons representing one's own body state and other neurons representing the body state of others.

Action-observation effects in the EBA
Observing actions of higher complexity evoked stronger responses in the EBA. Given that the actor's hand during rotation actions is visible from both sides, these trials might provide more structural information about that body part and the tool being manipulated than the less complex translation trials. These features have been suggested to increase EBA activity (Downing et al., 2001), and the lateral occipitotemporal cortex is particularly responsive during the perception of hands (Bracci et al., 2010) and visually presented man-made tools (Bracci et al., 2012). However, these features cannot explain the larger EBA activity during observation of rotation actions when the action is executed infrequently, with structural body and tool information being matched between these conditions.

EBA, rather than having only perceptual functions, may, during motor control, represent desired goal postures for future actions, which can be used to guide selection of an appropriate motor plan (van Nuenen et al., 2012; Zimmermann et al., 2012). If action observation makes use of the same processes that underlie action planning, EBA could potentially provide a visual representation of a predicted goal state of the observed action, which can be used to guide action simulation. In case a low-frequency action strategy is observed, the initial prediction may be inaccurate (since other actions/goal states are more likely) and updated when more evidence is available, thereby increasing overall brain activity. Because we perform many actions with our hands, and many actions involve tools, desired goal states for action production may be tool-specific (i.e., different tools may require specific grip strategies), and the same combined representations of tools and body parts may be used to infer goals of observed actions. To infer an action goal, it is important not only to know how and where someone is moving, but also, to resolve ambiguity, to anticipate what can be done with the object(s) that are part of the scene and to understand the action's context in general (Kilner et al., 2007).

The suggestion that parietal and precentral regions are engaged during action observation, guided by predicted goal states from EBA, is further supported by the finding that the functional connectivity between EBA and IPS is strengthened during observation of infrequent actions. Drawing from earlier explanations, the increase in connection strength may reflect the updating of information about the predicted goal state after the initially false representation within EBA is corrected based on additional evidence about the action's goal state.

Goal-state estimation and body posture
Ambrosini et al. (2012) recently showed that proactive eye movements are impaired when observers have their hands tied behind their back. This demonstrated that the observer's body posture can modulate action perception. Here we extend this finding by showing that prediction of others' actions is facilitated by congruence of one's body posture with the goal state of the observed action. This finding is consistent with studies showing that simultaneous execution of congruent actions during action observation can assist perception of these actions (Hamilton et al., 2004; Miall et al., 2006). We assume that in these situations an estimated goal state of one's own action is congruent with a predicted goal state of the observed action. The characteristics of the goal-state effect observed in the current study may also explain why previous studies did not find an effect of observers' posture on action observation. For instance, in Lorey et al. (2009), the observed actions lacked a clear goal, and in Fischer (2005), the actions were unfamiliar to the observers (i.e., reaching a dot from a bent posture while sitting on a chair).

Conclusions
This study has shown that planning and perceiving actions rely on a common predictive mechanism that generates internal simulations of these actions. In both situations, predictions pertain to the goal state of the action, and they take into account the current state of the body. During planning, predicted goal states may be evaluated with respect to the task goal of the actor, to anticipate future states, adjust for movement errors, and improve perception (Desmurget et al., 1999; Wolpert and Ghahramani, 2000; Voss et al., 2008). During observation, predicted goal states can be used to anticipate another's actions or help to understand the intentions of the observed agent (Kilner et al., 2007; Urgesi et al., 2010).

Overall, our results are in line with theories assuming a tight link between action observation and execution (Jeannerod, 2001; Oztop et al., 2005), and suggest that action observation (prediction) is organized around the prediction of goal postures, as appears to be the case during action planning (Rosenbaum et al., 1990, 2012; Graziano et al., 2002).

References
Ambrosini E, Sinigaglia C, Costantini M (2012) Tie my hands, tie my eyes. J Exp Psychol Hum Percept Perform 38:263–266. CrossRef Medline
Ashburner J, Friston KJ (1999) Nonlinear spatial normalization using basis functions. Hum Brain Mapp 7:254–266. CrossRef Medline
Bosbach S, Cole J, Prinz W, Knoblich G (2005) Inferring another's expectation from action: the role of peripheral sensation. Nat Neurosci 8:1295–1297. CrossRef Medline
Bracci S, Ietswaart M, Peelen MV, Cavina-Pratesi C (2010) Dissociable neural responses to hands and nonhand body parts in human left extrastriate visual cortex. J Neurophysiol 103:3389–3397. CrossRef Medline
Bracci S, Cavina-Pratesi C, Ietswaart M, Caramazza A, Peelen MV (2012) Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. J Neurophysiol 107:1443–1456. CrossRef Medline
Calvo-Merino B, Glaser DE, Grèzes J, Passingham RE, Haggard P (2005) Action observation and acquired motor skills: an FMRI study with expert dancers. Cereb Cortex 15:1243–1249. Medline
Cattaneo L, Barchiesi G, Tabarelli D, Arfeller C, Sato M, Glenberg AM (2011) One's motor performance predictably modulates the understanding of others' actions through adaptation of premotor visuo-motor neurons. Soc Cogn Affect Neurosci 6:301–310. CrossRef Medline
Chen R, Cohen LG, Hallett M (2002) Nervous system reorganization following injury. Neuroscience 111:761–773. CrossRef Medline
de Lange FP, Helmich RC, Toni I (2006) Posture influences motor imagery: an fMRI study. Neuroimage 33:609–617. CrossRef Medline
Desmurget M, Sirigu A (2009) A parietal-premotor network for movement intention and motor awareness. Trends Cogn Sci 13:411–419. CrossRef Medline
Desmurget M, Epstein CM, Turner RS, Prablanc C, Alexander GE, Grafton ST (1999) Role of the posterior parietal cortex in updating reaching movements to a visual target. Nat Neurosci 2:563–567. CrossRef Medline
Downing PE, Jiang Y, Shuman M, Kanwisher N (2001) A cortical area selective for visual processing of the human body. Science 293:2470–2473. CrossRef Medline
Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, Amunts K, Zilles K (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335. CrossRef Medline
Fischer MH (2005) Action simulation for others is not constrained by one's own postures. Neuropsychologia 43:28–34. CrossRef Medline
Friston KJ, Holmes AP, Poline JB, Grasby PJ, Williams SC, Frackowiak RS, Turner R (1995) Analysis of fMRI time-series revisited. Neuroimage 2:45–53. CrossRef Medline
Friston KJ, Holmes A, Poline JB, Price CJ, Frith CD (1996) Detecting activations in PET and fMRI: levels of inference and power. Neuroimage 4:223–235. CrossRef Medline
Friston KJ, Buechel C, Fink GR, Morris J, Rolls E, Dolan RJ (1997) Psychophysiological and modulatory interactions in neuroimaging. Neuroimage 6:218–229. CrossRef Medline
Friston KJ, Holmes AP, Worsley KJ (1999) How many subjects constitute a study? Neuroimage 10:1–5. CrossRef Medline
Gallivan JP, McLean DA, Valyear KF, Pettypiece CE, Culham JC (2011) Decoding action intentions from preparatory brain activity in human parieto-frontal networks. J Neurosci 31:9599–9610. CrossRef Medline
Grafton ST, Hamilton AF (2007) Evidence for a distributed hierarchy of action representation in the brain. Hum Mov Sci 26:590–616. CrossRef Medline
Graziano MS, Taylor CS, Moore T (2002) Complex movements evoked by microstimulation of precentral cortex. Neuron 34:841–851. CrossRef Medline
Hamilton A, Wolpert D, Frith U (2004) Your own action influences how you perceive another person's action. Curr Biol 14:493–498. CrossRef Medline
Hommel B (2003) Planning and representing intentional action. ScientificWorldJournal 3:593–608. CrossRef Medline
Ionta S, Perruchoud D, Draganski B, Blanke O (2012) Body context and posture affect mental imagery of hands. PLoS One 7:e34382. CrossRef Medline
Jeannerod M (2001) Neural simulation of action: a unifying mechanism for motor cognition. Neuroimage 14:S103–S109. CrossRef Medline
Kilner JM, Friston KJ, Frith CD (2007) Predictive coding: an account of the mirror neuron system. Cogn Process 8:159–166. CrossRef Medline
Lorey B, Bischoff M, Pilgramm S, Stark R, Munzert J, Zentgraf K (2009) The embodied nature of motor imagery: the influence of posture and perspective. Exp Brain Res 194:233–243. CrossRef Medline
Lund TE, Nørgaard MD, Rostrup E, Rowe JB, Paulson OB (2005) Motion or activity: their role in intra- and inter-subject variation in fMRI. Neuroimage 26:960–964. CrossRef Medline
Miall RC, Stanley J, Todhunter S, Levick C, Lindo S, Miall JD (2006) Performing hand actions assists the visual discrimination of similar hand postures. Neuropsychologia 44:966–976. CrossRef Medline
Neal A, Kilner JM (2010) What is simulated in the action observation network when we observe actions? Eur J Neurosci 32:1765–1770. CrossRef Medline
Oztop E, Wolpert D, Kawato M (2005) Mental state inference using visual control parameters. Brain Res Cogn Brain Res 22:129–151. CrossRef Medline
Parkinson A, Condon L, Jackson SR (2010) Parietal cortex coding of limb posture: in search of the body-schema. Neuropsychologia 48:3228–3234. CrossRef Medline
Pellijeff A, Bonilha L, Morgan PS, McKenzie K, Jackson SR (2006) Parietal updating of limb posture: an event-related fMRI study. Neuropsychologia 44:2685–2690. CrossRef Medline
Poser BA, Versluis MJ, Hoogduin JM, Norris DG (2006) BOLD contrast sensitivity enhancement and artifact reduction with multiecho EPI: parallel-acquired inhomogeneity-desensitized fMRI. Magn Reson Med 55:1227–1235. CrossRef Medline
Press C, Cook J, Blakemore SJ, Kilner J (2011) Dynamic modulation of human motor activity when observing actions. J Neurosci 31:2792–2800. CrossRef Medline
Rosenbaum DA, Marchak F, Barnes HJ, Vaughan J, Slotta J, Jorgensen M (1990) Constraints for action selection: overhand versus underhand grips. In: Attention and performance XIII: motor representation and control (Jeannerod M, ed), pp 321–342. Hillsdale, NJ: Erlbaum.
Rosenbaum DA, Loukopoulos LD, Meulenbroek RG, Vaughan J, Engelbrecht SE (1995) Planning reaches by evaluating stored postures. Psychol Rev 102:28–67. CrossRef Medline
Rosenbaum DA, Chapman KM, Weigelt M, Weiss DJ, van der Wel R (2012) Cognition, action, and object manipulation. Psychol Bull 138:924–946. CrossRef Medline
Shadmehr R, Krakauer JW (2008) A computational neuroanatomy for motor control. Exp Brain Res 185:359–381. CrossRef Medline
Shenton JT, Schwoebel J, Coslett HB (2004) Mental motor imagery and the body schema: evidence for proprioceptive dominance. Neurosci Lett 370:19–24. CrossRef Medline
Thoenissen D, Zilles K, Toni I (2002) Differential involvement of parietal and precentral regions in movement preparation and motor intention. J Neurosci 22:9024–9034. Medline
Urgesi C, Candidi M, Ionta S, Aglioti SM (2007) Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nat Neurosci 10:30–31. CrossRef Medline
Urgesi C, Maieron M, Avenanti A, Tidoni E, Fabbro F, Aglioti SM (2010) Simulating the future of actions in the human corticospinal system. Cereb Cortex 20:2511–2521. CrossRef Medline
van Nuenen BF, Helmich RC, Buenen N, van de Warrenburg BP, Bloem BR, Toni I (2012) Compensatory activity in the extrastriate body area of Parkinson's disease patients. J Neurosci 32:9546–9553. CrossRef Medline
Verhagen L, Dijkerman HC, Grol MJ, Toni I (2008) Perceptuo-motor interactions during prehension movements. J Neurosci 28:4726–4735. CrossRef Medline
Voss M, Ingram JN, Wolpert DM, Haggard P (2008) Mere expectation to move causes attenuation of sensory signals. PLoS One 3:e2866. CrossRef Medline
Wolpert DM, Ghahramani Z (2000) Computational principles of movement neuroscience. Nat Neurosci 3[Suppl]:1212–1217. Medline
Wolpert DM, Goodbody SJ, Husain M (1998) Maintaining internal representations: the role of the human superior parietal lobe. Nat Neurosci 1:529–533. CrossRef Medline
Zimmermann M, Meulenbroek RG, de Lange FP (2012) Motor planning is facilitated by adopting an action's goal posture: an fMRI study. Cereb Cortex 22:122–131. CrossRef Medline
