Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound in conditions in which a neutral visual stimulus was either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias, and hence perceptual translocation of the auditory stimulus, appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold: here the influence of the visual cue was apparent only when the two cues were spatially coincident, and it resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
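The salience-dependent visual bias described above is often modelled as reliability-weighted cue combination. The sketch below is a generic maximum-likelihood fusion model, not the analysis used in the paper; all parameter values are illustrative assumptions.

```python
import numpy as np

def mle_fused_estimate(x_aud, sigma_aud, x_vis, sigma_vis):
    """Maximum-likelihood fusion of auditory and visual position estimates.

    Each cue contributes in proportion to its reliability (inverse variance),
    so a salient (low-noise) visual cue dominates and biases the perceived
    sound location toward the light, while a near-threshold (noisy) cue
    contributes little bias.
    """
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_aud**2)
    fused = w_vis * x_vis + (1 - w_vis) * x_aud
    fused_sigma = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_aud**2))
    return fused, fused_sigma

# Supra-threshold (reliable) visual cue 10 deg from the sound: strong capture.
print(mle_fused_estimate(x_aud=0.0, sigma_aud=8.0, x_vis=10.0, sigma_vis=2.0))
# At-threshold (noisy) visual cue: the bias largely disappears.
print(mle_fused_estimate(x_aud=0.0, sigma_aud=8.0, x_vis=10.0, sigma_vis=20.0))
```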
Haptic object recognition is usually an efficient process, although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared to the sighted. We may therefore expect congenitally blind persons to recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature has provided mixed results. Furthermore, most studies on haptic object recognition have focused on performance, devoting little attention to the exploration procedures that led to that performance. In this study, we used iCube, an instrumented cube recording its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind, and age- and gender-matched blindfolded sighted participants were asked to explore the cube faces, on which small pins were positioned in varying numbers. Participants were required to ex…
Electrotactile stimulation has been commonly used in human–machine interfaces to provide feedback to the user, thereby closing the control loop and improving performance. The encoding approach, which defines the mapping of the feedback information into stimulation profiles, is a critical component of an electrotactile interface. Ideally, the encoding will provide a high-fidelity representation of the feedback variable while being easy to perceive and interpret by the subject. In the present study, we performed a closed-loop experiment wherein discrete and continuous coding schemes are combined to exploit the benefits of both techniques. Subjects performed a muscle activation-matching task relying solely on electrotactile feedback representing the generated myoelectric signal (EMG). In particular, we investigated the performance of two different coding schemes (spatial, and spatial combined with frequency) at two feedback resolutions (low: 3 intervals; high: 5 intervals). In both schemes, th…
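As a rough illustration of how a normalized feedback variable can be mapped onto discrete spatial (and optionally frequency) stimulation profiles, consider the sketch below. The interval counts mirror the 3- and 5-level resolutions mentioned above, but the channel indices and frequency range are invented for illustration and are not taken from the paper.

```python
import numpy as np

def encode_feedback(emg_norm, n_intervals=3, use_frequency=False):
    """Map a normalized EMG level (0..1) to an electrotactile profile.

    Spatial coding: each interval activates a different electrode pad.
    Spatial + frequency coding: the pulse rate additionally encodes the
    position of the signal inside its interval (continuous cue).
    Channel numbering and the 10-100 Hz range are assumptions.
    """
    emg_norm = float(np.clip(emg_norm, 0.0, 1.0))
    interval = min(int(emg_norm * n_intervals), n_intervals - 1)
    profile = {"channel": interval}
    if use_frequency:
        # Position within the current interval, rescaled to 10-100 Hz.
        within = emg_norm * n_intervals - interval
        profile["pulse_rate_hz"] = 10 + 90 * within
    return profile

print(encode_feedback(0.55, n_intervals=5))                      # spatial only
print(encode_feedback(0.55, n_intervals=5, use_frequency=True))  # combined
```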
One role of visual attention is the construction of a visual narrative, a description of visual events in a language-like format. Recent work on the architecture of attention shows that it has simultaneous access to multiple selections, and luckily so, because other studies show that it is impossible to scrutinise the details within a single selection. As a result, integrating elements, deriving their relations, and identifying the actions they are engaged in is only possible by comparing information across multiple selections. The product of this cross-selection comparison is a language-like description of the visual event that is then exported to other modules in the brain. A basic distinction immediately arises between objects ('nouns'), which can be selected rapidly with a single operation, and actions ('verbs'), which always require multiple selections. Results from parietal patients demonstrate a candidate case of syntax for motion and transient events.
The products of unisensory and multisensory integration within the Superior Colliculus have been found to be appreciably different. Moreover, recent data from cats suggest that multiple stimuli from the same sensory modality only marginally enhance localization compared to cross-modal stimulus combinations. In the present study, we investigated whether the integration of stimuli from different modalities (cross-modal) and from the same modality (within-modal) has a different impact on spatial orienting in humans. To this aim, we asked subjects to perform a simple reaction time task (Experiment 1) and a localization task (Experiment 2), each comprising modality-specific stimuli (visual or auditory), cross-modal stimulus pairs (visual–auditory), and within-modal stimulus pairs (visual–visual). Although both integrative modes shortened RTs compared to the best unimodal condition, the redundancy gain was significantly greater for cross-modal than for within-modal stimulus combinations. Moreover, a violation of the race model inequality was observed only in the cross-modal condition. In addition, cross-modal stimulus combinations yielded a greater improvement in stimulus localization, in accordance with a Bayesian model of spatial integration. The present results suggest that the integration of stimuli from different modalities and from the same modality has a different impact on covert and overt orienting, and they support the hypothesis that the behavioural products of multisensory integration are not attributable to simple target redundancy.
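The race model inequality referenced here (Miller, 1982) bounds the cumulative RT distribution for redundant targets by the sum of the unimodal distributions: G_AV(t) ≤ G_A(t) + G_V(t). A minimal test on empirical CDFs might look like the following sketch; the simulated RTs are placeholders, not data from the study.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.asarray(rts)
    return np.array([np.mean(rts <= t) for t in t_grid])

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Amount by which G_AV(t) exceeds G_A(t) + G_V(t) (Miller, 1982).

    Any positive value means redundant-target responses are faster than any
    race (probability-summation) model allows, implying neural coactivation.
    """
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

rng = np.random.default_rng(0)
t = np.linspace(150, 500, 50)
rt_a = rng.normal(320, 40, 200)   # auditory-only RTs (simulated)
rt_v = rng.normal(300, 40, 200)   # visual-only RTs (simulated)
rt_av = rng.normal(250, 35, 200)  # audio-visual RTs (simulated)
print("max violation:", race_model_violation(rt_a, rt_v, rt_av, t).max())
```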
Multisensory integration is the perceptual enhancement deriving from the integration of stimuli from different sensory channels, due to a neural coactivation mechanism. Multisensory integration has been suggested to be different from unisensory integration (i.e., the statistical facilitation of a behavioural response when two stimuli of the same sensory modality are presented) and to be subserved by the activity of the Superior Colliculus (SC). To verify these hypotheses, in the present study a group of patients with subcortical lesions involving the SC and a control group of healthy subjects were tested in a speeded detection task (Experiment 1) and a localization task (Experiment 2). Subjects were presented with modality-specific stimuli (visual or auditory), cross-modal stimulus pairs (audio-visual), and within-modal stimulus pairs (visual–visual). In Experiment 1, control subjects showed a multisensory enhancement effect with a violation of the race model inequality only for audio-visual stimuli, whereas visual–visual stimuli induced a statistical facilitation effect. By contrast, SC patients did not show any significant violation of the race model inequality, demonstrating only a statistical facilitation effect for both audio-visual and visual–visual stimuli. In Experiment 2, control subjects exhibited significantly enhanced localization accuracy for audio-visual stimuli, while SC patients showed no differences in localization performance. Overall, these results suggest that multisensory and unisensory integration are two distinct phenomena and that the former, due to a neural coactivation mechanism, requires SC activity to occur, whereas the latter is due to statistical facilitation and is independent of SC activity.
A brief experience of using a tool to act upon far, unreachable objects quickly modifies the action space, transiently extending the limits of the Peripersonal Space (PPS), i.e., a limited space surrounding the body in which stimuli from different sensory modalities are integrated. Here we investigated whether a long-term extension of the PPS could be shown in professional fencers, who train every day with their weapon to “invade” another person's PPS and “defend” their own. Subjects performed an audio-tactile interaction task to assess the PPS around their right hand, either while holding a short handle or while holding their weapon. While holding the handle, the perception of a tactile stimulus at the hand was affected by a sound only when the sound was perceived near the hand, and not in far space. Conversely, when professional fencers held their weapon, far auditory stimuli interacted with tactile stimuli at the hand, suggesting that the boundaries of the multisensory PPS shifted to the tip of the weapon. Preliminary results suggest that the extension effect on the PPS differs between foil and sabre users, owing to the different functional use of these two kinds of weapon.
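In audio-tactile paradigms of this kind, PPS extent is commonly estimated by fitting a sigmoid to tactile RTs as a function of sound distance and taking the central point as the boundary. The sketch below shows that generic fitting step; the functional form is common in the PPS literature, but the parameter names and data values are assumptions here, not the paper's exact method.

```python
import numpy as np
from scipy.optimize import curve_fit

def pps_sigmoid(d, rt_far, gain, d_center, slope):
    """Tactile RT as a function of sound distance d: RTs drop when the
    sound enters peripersonal space. d_center estimates the PPS boundary."""
    return rt_far - gain / (1 + np.exp((d - d_center) / slope))

# Simulated tactile RTs (ms) at sound distances (cm); placeholder values.
d = np.array([10, 30, 50, 70, 90, 110], dtype=float)
rt = np.array([405, 410, 430, 465, 480, 485], dtype=float)

params, _ = curve_fit(pps_sigmoid, d, rt, p0=[480, 80, 60, 10])
print(f"estimated PPS boundary: {params[2]:.1f} cm")
```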
Previous studies show that visual stimuli can influence auditory localization. The present study investigates the role of primary visual cortex (V1) in multisensory-mediated auditory localization, using rTMS (inhibitory theta-burst stimulation, iTBS). Subjects were asked to localize an auditory stimulus alone or with a concurrent near-threshold visual stimulus presented at the same spatial position or at a spatial disparity, in two counterbalanced sessions performed outside (baseline) or within the inhibitory effects created by iTBS of V1. Compared to baseline, after iTBS visual capture (i.e., the perceptual translocation of the auditory stimulus toward the visual one when audio-visual stimuli are spatially disparate) disappeared in the visual field contralateral to the stimulated site, whereas no effect was found in the ipsilateral visual field. However, when audio-visual stimuli were spatially coincident, an enhancement of acoustic localization of the same magnitude in the contralateral and ipsilateral fields was found in both sessions, suggesting an audio-visual integration effect. These results suggest that visual capture and multisensory integration for spatially coincident audio-visual stimuli are functionally independent and mediated by different neural circuits: V1 activity is necessary for visual capture, whereas audio-visual integration is unaffected by V1 inhibition and may be mediated by subcortical structures such as the Superior Colliculus.
Haptic exploration strategies have traditionally been studied by focusing on hand movements while neglecting how objects are moved in space. However, in daily-life situations touch and movement cannot be disentangled. Furthermore, the relation between object manipulation, performance in haptic tasks, and spatial skill is still little understood. In this study, we used iCube, a sensorized cube recording its orientation in space as well as the location of the points of contact on its faces. Participants had to explore the cube faces, on which small pins were positioned in varying numbers, and count the pins on the faces with either an even or an odd number of pins. At the end of this task, they also completed a standard visual mental rotation test (MRT). Results showed that higher MRT scores were associated with better performance in the iCube task, both in terms of accuracy and exploration speed, and exploration strategies associated with better performance were identified. High performers tended to rotate the cube so that the explored face kept the same spatial orientation (i.e., they preferentially explored the upward face and rotated iCube to bring the next face into the same orientation). They also explored the same face twice less often, and were faster and more systematic in moving from one face to the next. These findings indicate that iCube could be used to infer subjects' spatial skill in a more natural and unobtrusive fashion than standard MRTs.
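One building block of such exploration-strategy analyses is deciding, from the cube's recorded orientation, which face currently points up. A minimal version using a rotation matrix and the gravity direction is sketched below; the face-labelling convention is an assumption for illustration, since the iCube's actual API is not described here.

```python
import numpy as np

# Outward unit normals of the six faces in the cube's own frame
# (labelling convention assumed for illustration).
FACE_NORMALS = {
    "+X": np.array([1, 0, 0]), "-X": np.array([-1, 0, 0]),
    "+Y": np.array([0, 1, 0]), "-Y": np.array([0, -1, 0]),
    "+Z": np.array([0, 0, 1]), "-Z": np.array([0, 0, -1]),
}

def upward_face(rotation_matrix):
    """Return the label of the face whose world-frame normal points most
    closely along +Z (up). rotation_matrix maps cube frame -> world frame."""
    up = np.array([0, 0, 1])
    scores = {label: up @ (rotation_matrix @ n)
              for label, n in FACE_NORMALS.items()}
    return max(scores, key=scores.get)

print(upward_face(np.eye(3)))  # identity orientation -> '+Z'
theta = np.pi / 2              # 90-degree rotation about the x-axis
Rx = np.array([[1, 0, 0],
               [0, np.cos(theta), -np.sin(theta)],
               [0, np.sin(theta),  np.cos(theta)]])
print(upward_face(Rx))         # the '+Y' face now points up
```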
Proceedings of the 2nd Workshop on Multimedia for Accessible Human Computer Interfaces - MAHCI '19
We investigate the role of refreshable tactile displays in supporting the learning of cognitive maps, followed by actual exploration of a real environment matching the map. We test both blind and low-vision persons and compare maps displayed in three information modes: with a pin-array matrix, with raised paper, and with verbal descriptions. We find that the pin matrix leads to better externalization of a cognitive map and reduces the performance gap between blind and low-vision people. The entire evaluation was performed by participants autonomously, suggesting that refreshable tactile displays may be used to train blind persons in orientation and mobility tasks.
Proceedings of the 2018 Workshop on Multimedia for Accessible Human Computer Interface, 2018
Pin-array displays are a promising technology that allows visual information to be displayed through touch, a crucial issue for blind and partially sighted users. Such displays are programmable and can therefore considerably increase, vary, and tailor the amount of information compared to common embossed paper; beyond Braille, they also allow graphics to be displayed. Because the resolution needed to understand simple graphical concepts has not been established, we evaluated the discriminability of tactile symbols at different resolutions and complexity levels in blind, blindfolded low-vision, and sighted participants. We report no differences in discrimination accuracy between tactile symbols organized in 3×3 compared to 4×4 arrays. A metric based on search and discrimination speed does not change across resolutions in blind and low-vision participants, whereas in sighted participants it significantly increases as resolution increases. We suggest possible guidelines for designing dictionaries of low-resolution tactile symbols. Our results can help designers, ergonomists, and rehabilitators develop usable human–machine interfaces with tactile symbol coding.
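When assembling such a low-resolution symbol dictionary, one simple proxy for discriminability is the pairwise Hamming distance between pin patterns: symbols differing in only one or two pins are more likely to be confused. The sketch below illustrates this check with made-up 3×3 symbols and an assumed design threshold; it is not a metric from the study.

```python
import numpy as np
from itertools import combinations

# Hypothetical 3x3 tactile symbols (1 = raised pin), for illustration only.
symbols = {
    "bar":   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "cross": np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]),
    "frame": np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]),
}

def hamming(a, b):
    """Number of pins that differ between two patterns."""
    return int(np.sum(a != b))

MIN_DIST = 3  # assumed design threshold
for (n1, s1), (n2, s2) in combinations(symbols.items(), 2):
    d = hamming(s1, s2)
    flag = "  <- too similar" if d < MIN_DIST else ""
    print(f"{n1} vs {n2}: distance {d}{flag}")
```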