Imap3.5 Manual: Technical Report
iMap3.5
iMap toolbox for eye-tracking data analysis - Version 3.5 (May 2014)
http://perso.unifr.ch/roberto.caldara/index.php?page=3
Sébastien Miellet, Yingdi Liu, Cyril R. Pernet, Guillaume A. Rousselet, Michael Papinutto, Junpeng
Lao, Luca Vizioli & Roberto Caldara (2014). University of Fribourg
Junpeng Lao (2012) for the indtorgb function. Cyril Pernet & Guillaume Rousselet (2013) for the
tfce2d function
[email protected]
CITATION
If you use iMap, please cite us. You could use the following sentence:
Eye movement data were analysed with iMap3.5 (Caldara and Miellet, 2011; Miellet, Lao &
Caldara, 2014), which implements TFCE for signal enhancement (Pernet, Chauveau, Gaspar,
Rousselet, 2011; Smith and Nichols, 2009).
References:
Caldara, R., & Miellet, S. (2011). iMap: A Novel Method for Statistical Fixation Mapping of Eye Movement data.
Behavior Research Methods, 43(3), 864-878.
Miellet, S., Lao, J. & Caldara, R. (2014). An appropriate use of iMap produces correct statistical results: a reply to
McManus (2013) iMAP and iMAP2 produce erroneous statistical maps of eye-movement differences.
Perception, 43(5), 451-457. DOI:10.1068/p7682.
Pernet, C.R., Chauveau, N., Gaspar, C.M., & Rousselet, G.G. (2011). LIMO EEG: A Toolbox for Hierarchical LInear
MOdeling of ElectroEncephaloGraphic Data. Computational Intelligence and Neuroscience. Article ID 831409,
doi:10.1155/2011/831409
Smith S.M., & Nichols, T.E. (2009). Threshold-free cluster enhancement: addressing problems of smoothing, threshold
dependence and localisation in cluster inference. Neuroimage, 44(1), 83-98. doi:
10.1016/j.neuroimage.2008.03.061.
Table of Contents
1. Update details
2. Input data
3. How to use iMap3.5
4. Configuration structure
5. Output
6. Example
7. Approach description
8. Flow chart of iMap3 processing pipeline
9. Credits
10. Disclaimer
11. Update details for older versions
1. Update details
Version 3.5
This version adds random noise to the data to avoid the problem of empty cells. The
arbitrary choice of a minimal number of data points per pixel included in the analysis
(required in version 3) is therefore no longer necessary. The noise level depends on the
amplitude of the individual data.
1- Suppression of the edge effect in the spatial smoothing of the fixation maps (Yingdi Liu).
2- Possibility to set the input and output data paths (Michael Papinutto).
3- No need to set the minimal number of data points required to include a pixel in the
analysis. As a consequence, iMap no longer generates a logical mask indicating the
parts of the stimulus space included in the analysis. Instead of using logical masks
to select pixels with enough data points, we add random noise to prevent empty
cells (Yingdi Liu).
4- The minimal number of unique values in the bootstrap procedures (minBootUnique) can
be set individually. In the previous version (3) it was set equal to mindatapoints
(Minimal number of data points required to include a pixel in the analysis).
5- iMap3.5 generates the sigShading maps, showing on a single picture the significant
areas corresponding to different significance thresholds (Yingdi Liu).
6- The toolbox now generates individual fixation maps - smoothed and normalized in the
stimulus space (Michael Papinutto).
7- All the numerical outputs are now presented in a single txt document (iMap_report.txt).
Version 3
1- New "statistical engine". iMap3 now uses t-values across observers and bootstrapped TFCE
transformed scores to correct for multiple comparisons (TFCE: threshold-free cluster-
enhancement; Smith & Nichols, 2009), instead of being based on the Random Field Theory
(used in previous versions up to iMap2.1).
The associated benefits are: a) a more appropriate and direct estimate of data variability, b)
specific tests for independent and paired samples, c) a better control for false positives.
Many thanks to Cyril R. Pernet and Guillaume A. Rousselet for their help with the implementation
of this approach.
2- A bug due to the impossibility of calculating effect sizes when there is no effect has been fixed.
3- This version includes an option to generate the raw fixation maps.
4- This version no longer creates temporary working files.
5- A normalization of the individual maps is done across the search space in order to represent the
individual fixation bias.
6- Improved appearance of the statistical fixation maps.
7- Possibility to set the transparency of the background image and the statistical maps.
2. Input data
An iMap input data file is a matrix with one fixation per line. The only required columns
are the fixation coordinates (x and y), the fixation durations and the trial numbers. Any
other column can be used to specify your conditions.
Typically, the data files are .mat files (named data1.mat, data2.mat, ...) containing
matrices called "summary". These can be obtained from any txt file (e.g. a fixation
report from EyeLink Data Viewer, or custom preprocessing code).
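For illustration, a minimal "summary" matrix could be assembled and saved as follows. Note that the column order and the condition column here are assumptions for this sketch, not a specification of the toolbox; adapt them to your own coding scheme.

```matlab
% Illustrative sketch only: column order [x, y, duration, trial, condition]
% is an assumption for this example, one row per fixation.
summary = [ 191  195  220  1  1;   % fixation 1, trial 1
            150  240  180  1  1;   % fixation 2, trial 1
            205  170  260  2  2 ]; % fixation 1, trial 2
save('data1.mat', 'summary');      % one .mat file per participant/dataset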
3. How to use iMap3.5
Copy iMap3.5 and its support functions into the folder containing the data files (or
alternatively set the paths for input and output data).
Set the parameters (see configuration structure section) in a configuration structure (e.g.
cfg.xSize=400). Default values will be used for non-specified parameters.
Call the iMap3.5 function with the configuration structure (e.g. imap3_5(cfg)).
See examples at the end of this document and the example folders for more specific
explanations.
Please keep in mind that running iMap3.5 takes some time due to the use of bootstrapping
and TFCE. We included wait bars so the user can keep track of the analysis
progress.
Some analyses with small stimulus sizes can be performed on a computer with 8 GB of
RAM. Please note that you might encounter "Out of Memory" issues when using a
32-bit OS and/or Matlab, or depending on the memory allocation settings in Matlab.
However, we strongly recommend using a computer with 16 GB of RAM or more. This
code was tested on a 72 GB node.
We recommend first using iMap3.5 with the default settings to get a general view of the
results. Then, in a second step for the final analysis, we advise using a higher
number of bootstraps (1000 or more for a better estimate) and, if necessary, setting
the color scale and transparency of the maps.
4. Configuration structure
Variables can be set in the cfg structure, e.g. cfg.xSize=400. Default values are used for
non-specified parameters. See the examples at the end of this document.
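For instance, the parameters mentioned in this manual can be collected in a cfg structure as follows. The values are illustrative; any field left unset falls back to its default.

```matlab
cfg = [];
cfg.xSize          = 382;    % stimulus width in pixels
cfg.ySize          = 390;    % stimulus height in pixels
cfg.dataset1       = 1:15;   % data file indices for group/condition 1
cfg.dataset2       = 16:30;  % data file indices for group/condition 2
cfg.twosampletest  = 2;      % 1 = paired, 2 = independent samples
cfg.backgroundfile = 'facebackground.tif';
cfg.nboot          = 500;    % bootstrap resamples (see section 7)
cfg.sigthres       = .05;    % pixel intensity threshold alpha
cfg.avgnum         = 50;     % noise-averaging repetitions (see section 7.1)
imap3_5(cfg);
```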
5. Output
1- Statistical maps: iMap3.5 creates .tif pictures of the single and contrast statistical
maps merged with a background picture. They display the significant areas (as heat
maps) after correction for multiple comparisons. The color coding of the heat
maps indicates the t-values. The scales and transparencies of the background and
statistical maps can be set.
2- Cumulative raw data: Raw (without normalization and smoothing) fixation locations
and durations for each data set, averaged across participants. The raw maps are
generated by default.
3- Individual fixation maps: individual fixation maps (smoothed and normalized in the
stimulus space) located in a folder called Individualmaps.
4- sigShading maps, showing on a single picture the significant areas corresponding to
different significance thresholds.
5- All the numerical outputs are presented in a single txt document
(iMap_report.txt). These numerical outputs include:
a- Global eye-tracking measures for each participant: number of fixations, total fixation
duration, mean fixation duration, path length and mean saccade length.
b- Eye-tracking measures in the significant areas (and the rest of the picture): mean
fixation duration for area 1, then for area 2, then for the rest of the picture; the same
applies to path length, total fixation duration and number of fixations. Please refer to
the code for the exact output structure, which might vary depending on the observed
significant effects.
6. Example
Please note that these data are provided only to illustrate how to use the toolbox.
We do not intend to prove the validity of the method or to run simulations with this
example.
Also note that you might observe slightly different results when trying these examples. This
is due to imperfect convergence with 500 bootstrap resamples with replacement. We
recommend using 500 bootstraps for a quick look at the output, but a higher number of
bootstraps to obtain the final results.
Subsample of data from Miellet, Vizioli, He, Zhou & Caldara (2013) and Miellet, He, Zhou,
Lao & Caldara (2012): East-Asian (EA, 15 observers) and Western-Caucasian (WC, 15
observers) participants performed an old-new task on EA and WC faces.
The data in this example are in .mat files (named data1.mat, data2.mat, ...); the matrices
are called "summary". Stimulus presentation used the Psychophysics Toolbox. The raw
data were recorded in Matlab using the EyeLink Toolbox. The eye position was recorded
every 8 ms; we then computed fixations and saccades (with a custom algorithm) and
centered them in the stimulus space. A central fixation cross preceded each trial, then
the 382x390 stimulus was randomly placed on an 800x600 screen.
Western Caucasian [1:15] vs East Asian [16:30] participants (WC and EA face stimuli for
both groups). This example shows an eye bias for WC observers (warm colors on the
contrast map) and a center-of-the-face bias for EA participants (cold colors on the
contrast map).
Configuration structure:
clear all
cfg=[];
cfg.xSize=382;
cfg.ySize=390;
cfg.dataset1=1:15;
cfg.dataset2=16:30;
cfg.twosampletest=2; % twosampletest: 1=paired or 2=independent
cfg.backgroundfile='facebackground.tif';
imap3_5(cfg)
Outputs:
1- Cumulative raw data
2- Statistical maps
3- Exploratory analysis
4- Numerical report
5- Individual fixation maps
7. Approach description
In 2011, we released iMap, a freely available open-source Matlab toolbox dedicated to
data-driven, robust statistical mapping of eye movement data (Caldara and Miellet, 2011).
Our approach was strongly inspired by the development of open-source toolboxes in
neuroimaging, such as SPM (Friston, Worsley, Frackowiak, Mazziotta & Evans, 1994),
EEGLab (Delorme and Makeig, 2004), Fieldtrip (Oostenveld, Fries, Maris & Schoffelen,
2011), etc. Open-source toolboxes are constantly updated and improved thanks to users'
feedback, comments and contributions. This approach and its scientific philosophy permit
constructive and reactive updates to the software, made necessary by continuous
developments in statistics, novel theoretical and practical interests, types of data,
methodology and equipment.
The initial main goal of iMap was to offer solutions for data-driven analysis of eye-
movements, inspired by methods used in functional Magnetic Resonance Imaging. We
were mainly aiming to avoid the use of subjective a-priori defined regions of interest
(ROIs), as discussed in the original paper (Caldara and Miellet, 2011).
iMap offers a free, open-source, flexible and user-friendly toolbox for analyzing eye
movement data with a robust data-driven approach that generates statistical fixation maps.
Importantly, iMap does not require the a-priori segmentation of the experimental images
into Regions Of Interest.
With iMap, fixation data are first smoothed by convolution with a Gaussian kernel (whose
width embodies the eye-tracker accuracy) to generate three-dimensional fixation maps.
The individual smoothed fixation maps are then Z-scored across the stimulus space in
order to represent the individual fixation bias.
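The two preprocessing steps just described (Gaussian smoothing, then Z-scoring across the stimulus space) can be sketched as follows. This is an illustration of the principle, not the toolbox's actual code; the kernel width sigma is an arbitrary choice here, and summary/cfg are assumed from the input-data section.

```matlab
% Build a raw fixation map: accumulate fixation durations at fixation locations.
fixmap = zeros(cfg.ySize, cfg.xSize);
for f = 1:size(summary, 1)
    x = round(summary(f, 1)); y = round(summary(f, 2));
    fixmap(y, x) = fixmap(y, x) + summary(f, 3);   % weight by duration
end

% Smooth by convolution with a Gaussian kernel (width ~ eye-tracker accuracy).
sigma    = 10;                                     % arbitrary width, in pixels
[gx, gy] = meshgrid(-3*sigma:3*sigma);
kernel   = exp(-(gx.^2 + gy.^2) / (2*sigma^2));
kernel   = kernel / sum(kernel(:));
smoothmap = conv2(fixmap, kernel, 'same');

% Z-score across the stimulus space to represent the individual fixation bias.
zmap = (smoothmap - mean(smoothmap(:))) / std(smoothmap(:));
```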
7.1. Observed statistical maps
iMap3.5 adds random noise to the original data to avoid empty cells and computes
independent- or paired-samples t-values on each pixel across datasets. This process is
repeated avgnum times (default = 50) and the resulting statistical maps are averaged in
order to cancel out the random noise.
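The principle of section 7.1 can be sketched as follows for an independent-samples contrast. This is an illustration only: the noise amplitude and the pixel-wise t formula are simplified assumptions, and maps1/maps2 stand for stacks of individual Z-scored fixation maps.

```matlab
% maps1, maps2: ySize x xSize x nSubjects arrays of individual Z-scored maps.
nAvg = 50;                                  % cfg.avgnum, default 50
tAvg = zeros(size(maps1, 1), size(maps1, 2));
for rep = 1:nAvg
    % Add small random noise scaled to the individual data amplitude.
    n1 = maps1 + randn(size(maps1)) * std(maps1(:)) * 1e-6;
    n2 = maps2 + randn(size(maps2)) * std(maps2(:)) * 1e-6;
    % Pixel-wise independent-samples t-values.
    m1 = mean(n1, 3); m2 = mean(n2, 3);
    v1 = var(n1, 0, 3); v2 = var(n2, 0, 3);
    t  = (m1 - m2) ./ sqrt(v1/size(n1, 3) + v2/size(n2, 3));
    tAvg = tAvg + t / nAvg;                 % average the maps to cancel the noise
end
```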
7.2. Threshold-Free Cluster-Enhancement
The averaged statistical maps are then TFCE transformed, using a function derived from
LIMO-EEG (Pernet et al., 2011). The “threshold-free cluster-enhancement” (TFCE)
approach (Smith & Nichols, 2009) takes into account both amplitude and extent of the
signal. The TFCE approach aims to enhance areas of signal that exhibit some spatial
contiguity without relying on hard-threshold-based clustering. The image is passed through
an algorithm, which enhances the intensity within cluster-like regions more than within
background (noise) regions. The output image is therefore not intrinsically clustered /
thresholded, but after TFCE enhancement, thresholding will better discriminate between
noise and spatially-extended signal. This generic form of non-linear image processing
boosts the height of spatially distributed signals without changing the location of their local
maxima. Hence, it optimises the detection of both diffuse, low-amplitude signals and
sharp, focal signals. Strong control over family-wise error is obtained by using a bootstrap
threshold of the maximum TFCE scores expected by chance.
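In essence, TFCE integrates, over intensity thresholds h, the extent e(h) and height of the supra-threshold clusters: TFCE(p) = Σ_h e(h)^E · h^H · dh, with E = 0.5 and H = 2 in Smith and Nichols (2009). A minimal sketch of this transform follows; it is not the LIMO-EEG tfce2d implementation, and it assumes bwlabel from the Image Processing Toolbox for connected-component labelling.

```matlab
function tfce = tfce_sketch(tmap, dh)
% Minimal TFCE sketch for a 2D map of positive values (E = 0.5, H = 2).
if nargin < 2, dh = max(tmap(:)) / 100; end   % integration step
tfce = zeros(size(tmap));
for h = dh:dh:max(tmap(:))
    clusters = bwlabel(tmap >= h);            % connected supra-threshold regions
    for c = 1:max(clusters(:))
        idx    = (clusters == c);
        extent = sum(idx(:));                 % cluster extent e(h)
        tfce(idx) = tfce(idx) + extent^0.5 * h^2 * dh;
    end
end
end
```

Because the enhancement only adds nonnegative increments within clusters, local maxima keep their locations while spatially extended signal is boosted relative to isolated noise peaks.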
Regarding the contrast, it should be noted that our TFCE implementation produces only
positive values. In the bootstrap procedure, taking the maximum value would confound the
biases for dataset 1 and dataset 2. For instance, in the case of independent samples, the
distribution of maximum values could originate from only one of the two samples. For this
reason, the positive and negative parts of the t-value maps are considered independently
and the pixel intensity threshold is set to α/2 for both. Note that a similar approach is used
in Fieldtrip (Maris & Oostenveld, 2007).
7.3. Nonparametric multiple comparisons correction
In order to avoid false positives in the statistical maps due to multiple comparisons, we
implemented a nonparametric multiple comparisons correction. A bootstrap procedure
(cfg.nboot parameter, 500 resamples by default) is performed under H0. H0 is obtained by
centering each dataset, hence retaining the real data variability (Wilcox, 2012). The
maximum value of each bootstrapped TFCE enhanced map (pixel with the largest TFCE
intensity) is kept in order to create a distribution of max intensities. These are then
sorted and the 95th percentile is taken, yielding a one-sided 95% confidence
interval. The intensity of each pixel in the statistical map of the original data is then
compared with this threshold. Any pixel of the true data whose intensity falls below the
pixel intensity threshold (α) derived from the maximum intensity distribution is
eliminated from the statistical map. The pixel intensity threshold is set to .05 by
default (cfg.sigthres parameter).
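The correction can be sketched as follows. This illustrates the principle under simplified assumptions: tfce_score is a hypothetical helper standing for the TFCE transform of a pixel-wise t-map between two stacks of maps, and the centering implements H0 as described above.

```matlab
% Center each dataset so that H0 holds while retaining the real variability.
c1 = maps1 - repmat(mean(maps1, 3), [1 1 size(maps1, 3)]);
c2 = maps2 - repmat(mean(maps2, 3), [1 1 size(maps2, 3)]);

nboot  = 500;                                % cfg.nboot, default 500
maxval = zeros(nboot, 1);
for b = 1:nboot
    r1 = c1(:, :, randi(size(c1, 3), size(c1, 3), 1));  % resample with replacement
    r2 = c2(:, :, randi(size(c2, 3), size(c2, 3), 1));
    maxval(b) = max(max(tfce_score(r1, r2)));  % max TFCE intensity under H0
end

sorted  = sort(maxval);
thresh  = sorted(ceil(0.95 * nboot));        % one-sided threshold, alpha = .05
sigmask = tfce_score(maps1, maps2) > thresh; % significant pixels in the true data
```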
8. Flow chart of iMap3 processing pipeline
9. Credits
tfce2d is adapted from LIMO-EEG.
Pernet, C.R., Chauveau, N., Gaspar, C.M., & Rousselet, G.G. (2011). LIMO EEG: A
Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data.
Computational Intelligence and Neuroscience. Article ID 831409,
doi:10.1155/2011/831409
10. Disclaimer
iMap3.5 is free, open-source software; you can redistribute it and/or modify it.
We cannot be held responsible for any damage that may (appear to) be caused by the use
of iMap. Use at your own risk.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE.
Please cite us and let us know if you have any comments or suggestions.
Thank you to all the users who have sent us feedback.
Any comments, suggestions and constructive criticisms are very welcome. Any offer of
code optimisation as well!
References:
Caldara, R., & Miellet, S. (2011). iMap: A novel method for statistical fixation mapping of
eye movement data. Behavior Research Methods, 43(3), 864-878.
Delorme, A., & Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-
trial EEG dynamics including independent component analysis. Journal of
neuroscience methods, 134(1), 9-21.
Friston, K.J., Worsley, K.J., Frackowiak, R.S.J., Mazziotta, J.C. & Evans, A.C. (1994).
Assessing the significance of focal activations using their spatial extent. Human Brain
Mapping, 1, 214-220.
Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S. F., and Baker, C. I. (2009). Circular
analysis in systems neuroscience – the dangers of double dipping. Nature
Neuroscience, 12, 535–540.
Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-
data. Journal of Neuroscience Methods, 164(1), 177–190.
Miellet, S., He, L., Zhou, X., Lao, J. & Caldara, R. (2012). When East meets West: gaze-
contingent Blindspots abolish cultural diversity in eye movements for faces. Journal
of Eye Movement Research, 5(2):4, 1-12.
Miellet, S., Lao, J. & Caldara, R. (2014). An appropriate use of iMap produces correct
statistical results: a reply to McManus (2013) iMAP and iMAP2 produce erroneous
statistical maps of eye-movement differences. Perception, 43(5), 451-457.
Miellet, S., Vizioli, L., He, L., Zhou, X., & Caldara, R. (2013). Mapping face recognition
information use across cultures. Frontiers in Perception Science, 4:34.
Oostenveld, R., Fries, P., Maris, E., Schoffelen, J.–M. (2011). FieldTrip: Open Source
Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological
Data. Computational Intelligence and Neuroscience, 156869.
Pernet, C.R., Chauveau, N., Gaspar, C.M., & Rousselet, G.G. (2011). LIMO EEG: A
Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data.
Computational Intelligence and Neuroscience. Article ID 831409,
doi:10.1155/2011/831409
Smith S.M., & Nichols, T.E. (2009). Threshold-free cluster enhancement: addressing
problems of smoothing, threshold dependence and localisation in cluster inference.
Neuroimage, 44(1), 83-98. doi: 10.1016/j.neuroimage.2008.03.061.
Wilcox, R. R. (2012). Introduction to robust estimation and hypothesis testing (3rd ed.).
Amsterdam; Boston: Academic Press.