Papers by Leonardo Fernandino
The Journal of Neuroscience, 2022
Neuroimaging, neuropsychological, and psychophysical evidence indicates that concept retrieval selectively engages specific sensory and motor brain systems involved in the acquisition of the retrieved concept. However, it remains unclear which supramodal cortical regions contribute to this process and what kind of information they represent. Here, we used representational similarity analysis of two large fMRI datasets with a searchlight approach to generate a detailed map of human brain regions where the semantic similarity structure across individual lexical concepts can be reliably detected. We hypothesized that heteromodal cortical areas typically associated with the default mode network encode multimodal experiential information about concepts, consistent with their proposed role as cortical integration hubs. In two studies involving different sets of concepts and different participants (both sexes), we found a distributed, bihemispheric network engaged in concept representation, composed of high-level association areas in the anterior, lateral, and ventral temporal lobe; inferior parietal lobule; posterior cingulate gyrus and precuneus; and medial, dorsal, ventrolateral, and orbital prefrontal cortex. In both studies, a multimodal model combining sensory, motor, affective, and other types of experiential information explained significant variance in the neural similarity structure observed in these regions that was not explained by unimodal experiential models or by distributional semantics (i.e., word2vec similarity). These results indicate that during concept retrieval, lexical concepts are represented across a vast expanse of high-level cortical regions, especially in the areas that make up the default mode network, and that these regions encode multimodal experiential information.
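The core searchlight RSA computation described above, correlating the neural similarity structure with a semantic model's similarity structure, can be sketched as follows. This is a minimal illustration on simulated data: the array sizes and variable names are ours, not taken from the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Simulated data: activation patterns for n_concepts words within one
# searchlight sphere, plus experiential feature vectors for the same words.
rng = np.random.default_rng(0)
n_concepts, n_voxels, n_features = 40, 123, 65
patterns = rng.standard_normal((n_concepts, n_voxels))
features = rng.standard_normal((n_concepts, n_features))

# Representational dissimilarity matrices (condensed upper triangles).
neural_rdm = pdist(patterns, metric="correlation")
model_rdm = pdist(features, metric="correlation")

# RSA statistic: rank correlation between the two similarity structures.
# With real data, this is computed per searchlight center to build a map.
rho, p = spearmanr(neural_rdm, model_rdm)
print(rho)
```

In a whole-brain analysis this correlation is recomputed at every searchlight center and the resulting map is tested for reliability across participants.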
Proceedings of the National Academy of Sciences, 2022
The nature of the representational code underlying conceptual knowledge remains a major unsolved problem in cognitive neuroscience. We assessed the extent to which different representational systems contribute to the instantiation of lexical concepts in high-level, heteromodal cortical areas previously associated with semantic cognition. We found that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in frontal, parietal, and temporal cortex. In most of these areas, we found a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective, and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organization. These results were found independently for object and event concepts. Our findings indicate that concept representations in heteromodal cortex are based, at least in part, on experiential information. They also reveal that, in most heteromodal areas, event concepts have more heterogeneous representations (i.e., they are more easily decodable) than object concepts, and that other areas beyond the traditional “semantic hubs” contribute to semantic cognition, particularly the posterior cingulate gyrus and the precuneus.
While major advances have been made in uncovering the neural processes underlying perceptual representations, our grasp of how the brain gives rise to conceptual knowledge remains relatively poor. Recent work has provided strong evidence that concepts rely, at least in part, on the same sensory and motor neural systems through which they were acquired, but it is still unclear whether the neural code for concept representation uses information about sensory-motor features to discriminate between concepts. In the present study, we investigate this question by asking whether an encoding model based on five semantic attributes directly related to sensory-motor experience – sound, color, visual motion, shape, and manipulation – can successfully predict patterns of brain activation elicited by individual lexical concepts. We collected ratings on the relevance of these five attributes to the meaning of 820 words and used these ratings as predictors in a multiple regression model of the fMRI signal associated with the words in a separate group of participants. The five resulting activation maps were then combined by linear summation to predict the distributed activation pattern elicited by a novel set of 80 test words. The encoding model predicted the activation patterns elicited by the test words significantly better than chance. As expected, prediction was successful for concrete but not for abstract concepts. Comparisons between encoding models based on different combinations of attributes indicate that all five attributes contribute to the representation of concrete concepts. Consistent with embodied theories of semantics, these results show, for the first time, that the distributed activation pattern associated with a concept combines information about different sensory-motor attributes according to their respective relevance.
Future research should investigate how additional features of phenomenal experience contribute to the neural representation of conceptual knowledge.
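The regression-based encoding procedure described above, fitting one activation map per attribute on training words and predicting held-out words as a rating-weighted sum of those maps, can be sketched on simulated data. All sizes, names, and the noise level below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_voxels = 820, 80, 500  # word counts match the study; data simulated

# Ratings of sound, color, visual motion, shape, and manipulation per word.
train_ratings = rng.random((n_train, 5))
test_ratings = rng.random((n_test, 5))

# Simulated ground-truth attribute maps generate the training fMRI patterns.
true_maps = rng.standard_normal((5, n_voxels))
train_bold = train_ratings @ true_maps + 0.5 * rng.standard_normal((n_train, n_voxels))

# Fit one activation map per attribute by multiple regression (least squares,
# solved jointly for all voxels).
est_maps, *_ = np.linalg.lstsq(train_ratings, train_bold, rcond=None)

# Predict each held-out word's pattern as the rating-weighted sum of the maps.
predicted = test_ratings @ est_maps
print(predicted.shape)
```

With real data, the predicted patterns would then be compared against the observed test-word patterns to assess above-chance identification.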
We introduce an approach that predicts neural representations of word meanings contained in sentences, then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences.
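The superposition step described above, predicting a sentence's activation pattern by averaging the predicted patterns of its component words, reduces to a few lines. The word patterns here are random stand-ins for the attribute-model predictions, and the example sentence is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 300

# Stand-ins for word-level activation patterns synthesized from the
# attribute model (random here; real ones come from the regression step).
word_patterns = {w: rng.standard_normal(n_voxels)
                 for w in ["child", "kicked", "ball"]}

def predict_sentence(words):
    """Predict a sentence's pattern as the average of its words' patterns."""
    return np.mean([word_patterns[w] for w in words], axis=0)

pred = predict_sentence(["child", "kicked", "ball"])
print(pred.shape)
```

The predicted sentence pattern can then be correlated against observed sentence-level fMRI patterns, region by region, to score prediction accuracy.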
The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this “general semantic network” (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence–divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features.
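A common way to score the above-chance identification reported here (and consistent with the chance level of .5 given in the poster version of this work) is pairwise matching of predicted against observed patterns. The sketch below assumes that metric; the data are simulated and the function name is ours.

```python
import numpy as np

def pairwise_accuracy(predicted, observed):
    """Fraction of concept pairs for which the correct assignment of
    predicted to observed patterns beats the swapped assignment
    (by summed Pearson correlation). Chance performance is 0.5."""
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    n = len(predicted)
    wins, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            correct = r(predicted[i], observed[i]) + r(predicted[j], observed[j])
            swapped = r(predicted[i], observed[j]) + r(predicted[j], observed[i])
            wins += correct > swapped
            total += 1
    return wins / total

# Simulated example: predictions are noisy versions of the true patterns,
# so identification should land well above the 0.5 chance level.
rng = np.random.default_rng(3)
observed = rng.standard_normal((10, 200))
predicted = observed + 0.8 * rng.standard_normal((10, 200))
acc = pairwise_accuracy(predicted, observed)
print(acc)
```

Restricting `predicted`/`observed` to a voxel mask (e.g., the GSN versus the word-form areas) is how region-specific identification accuracy would be compared.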
Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The present availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly "embodied" in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within-versus between-category similarity metric showed much greater separation between categories than representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions.
We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to largescale brain networks and biologically plausible accounts of concept acquisition.
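The within- versus between-category similarity metric mentioned above can be illustrated on toy attribute vectors. The clusters, category labels, and noise level here are invented for the example; only the metric's logic reflects the text.

```python
import numpy as np

def category_separation(vectors, labels):
    """Mean within-category minus mean between-category cosine similarity.
    Larger values indicate cleaner category structure in the feature space."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = v @ v.T
    within, between = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            (within if labels[i] == labels[j] else between).append(sim[i, j])
    return float(np.mean(within) - np.mean(between))

# Toy 65-dimensional attribute vectors: two tight clusters stand in for
# two conceptual categories.
rng = np.random.default_rng(4)
center_animal = rng.standard_normal(65)
center_tool = rng.standard_normal(65)
vecs = np.vstack([center_animal + 0.1 * rng.standard_normal((5, 65)),
                  center_tool + 0.1 * rng.standard_normal((5, 65))])
labels = ["animal"] * 5 + ["tool"] * 5
sep = category_separation(vecs, labels)
print(sep)
```

Comparing this separation score for experiential attribute vectors against distributional (latent semantic) vectors over the same words is the kind of contrast the study reports.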
Cerebral Cortex, 2015
Recent research indicates that sensory and motor cortical areas play a significant role in the neural representation of concepts. However, little is known about the overall architecture of this representational system, including the role played by higher-level areas that integrate different types of sensory and motor information. The present study addressed this issue by investigating the simultaneous contributions of multiple sensory-motor modalities to semantic word processing. With a multivariate fMRI design, we examined activation associated with five sensory-motor attributes—color, shape, visual motion, sound, and manipulation—for 900 words. Regions responsive to each attribute were identified using independent ratings of the attributes’ relevance to the meaning of each word. The results indicate that these aspects of conceptual knowledge are encoded in multimodal and higher-level unimodal areas involved in processing the corresponding types of information during perception and action, in agreement with embodied theories of semantics. They also reveal a hierarchical system of abstracted sensory-motor representations incorporating a major division between object interaction and object perception processes.
Neuropsychologia, 2013
According to an influential view of conceptual representation, action concepts are understood through motoric simulations, involving motor networks of the brain. A stronger version of this embodied account suggests that even figurative uses of action words (e.g., grasping the concept) are understood through motoric simulations. We investigated these claims by assessing whether Parkinson's disease (PD), a disorder affecting the motor system, is associated with selective deficits in comprehending action-related sentences. Twenty PD patients and 21 age-matched controls performed a sentence comprehension task, where sentences belonged to one of four conditions: literal action, non-idiomatic metaphoric action, idiomatic action, and abstract. The same verbs (referring to hand/arm actions) were used in the three action-related conditions. Patients, but not controls, were slower to respond to literal and idiomatic action than to abstract sentences. These results indicate that sensory-motor systems play a functional role in semantic processing, including processing of figurative action language.
Brain and Language, 2012
The problem of how word meaning is processed in the brain has been a topic of intense investigation in cognitive neuroscience. While considerable correlational evidence exists for the involvement of sensory-motor systems in conceptual processing, it is still unclear whether they play a causal role. We investigated this issue by comparing the performance of patients with Parkinson’s disease (PD) with that of age-matched controls when processing action and abstract verbs. To examine the effects of task demands, we used tasks in which semantic demands were either implicit (lexical decision and priming) or explicit (semantic similarity judgment). In both tasks, PD patients’ performance was selectively impaired for action verbs (relative to controls), indicating that the motor system plays a more central role in the processing of action verbs than in the processing of abstract verbs. These results argue for a causal role of sensory-motor systems in semantic processing.
Brain and language, Jan 1, 2010
The embodied cognition approach to the study of the mind proposes that higher-order mental processes such as concept formation and language are essentially based on perceptual and motor processes. Contrary to the classical approach in cognitive science, in which concepts are viewed as amodal, arbitrary symbols, embodied semantics argues that concepts must be “grounded” in sensorimotor experiences in order to have meaning. In line with this view, neuroimaging studies have shown a roughly somatotopic pattern of activation along cortical motor areas (broadly construed) for the observation of actions involving different body parts, as well as for action-related language comprehension. These findings have been interpreted in terms of a mirror-neuron system, which automatically matches observed and executed actions. However, the somatotopic pattern of activation found in these studies is very coarse, with significant overlap between body parts, and sometimes with multiple representations for the same body part. Furthermore, the localization of the respective activations varies considerably across studies. Based on recent work on the motor cortex in monkeys, we suggest that these discrepancies result from the organization of the primate motor cortex, which probably includes maps of the coordinated actions making up the individual’s motor repertoire, rather than a single, continuous map of the body. We review neurophysiological and neuroimaging data supporting this hypothesis and discuss ways in which this framework can be used to further test the links between neural mirroring and linguistic processing.
Cerebral Cortex, 2010
The ability to draw analogies requires 2 key cognitive processes: relational integration and resolution of interference. The present study aimed to identify the neural correlates of both component processes of analogical reasoning within a single, nonverbal analogy task using event-related functional magnetic resonance imaging. Participants verified whether a visual analogy was true by considering either 1 or 3 relational dimensions. On half of the trials, there was an additional need to resolve interference in order to make a correct judgment. Increase in the number of dimensions to integrate was associated with increased activation in the lateral prefrontal cortex as well as the lateral frontal pole in both hemispheres. When there was a need to resolve interference during reasoning, activation increased in the lateral prefrontal cortex but not in the frontal pole. We identified regions in the middle and inferior frontal gyri which were exclusively sensitive to demands on each component process, in addition to a partial overlap between these neural correlates of each component process. These results indicate that analogical reasoning is mediated by the coordination of multiple regions of the prefrontal cortex, of which some are sensitive to demands on only one of these 2 component processes, whereas others are sensitive to both.
Brain and cognition, Jan 1, 2007
We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the stage of the process in which the distractor is affecting the decision about the target; and third, to determine whether the interaction between the lexicality of the target and the lexicality of the distractor (“lexical redundancy effect”) is due to facilitation or inhibition of lexical processing. Unilateral and bilateral trials were presented in separate blocks. Target stimuli were always underlined. Regarding our first goal, we found that bilateral presentations (a) increased the effect of visual hemifield of presentation (right visual field advantage) for words by slowing down the processing of word targets presented to the left visual field, and (b) produced an interaction between visual hemifield of presentation (VF) and target lexicality (TLex), which implies the use of different strategies by the two hemispheres in lexical processing. For our second goal of determining the processing stage that is affected by the distractor, we introduced a third condition in which targets were always accompanied by “perceptual” distractors consisting of sequences of the letter “x” (e.g., xxxx). Performance on these trials indicated that most of the interaction occurs during lexical access (after basic perceptual analysis but before response programming). Finally, a comparison between performance patterns on the trials containing perceptual and lexical distractors indicated that the lexical redundancy effect is mainly due to inhibition of word processing by pseudoword distractors.
Posters by Leonardo Fernandino
• How is conceptual information represented in the cortex?
• According to hierarchical embodied models, heteromodal convergence zones encode information about the co-activation of sensory-motor areas during concept formation.
• According to hierarchical embodied models, heteromodal convergence zones encode information about the co-activation of sensory-motor areas during concept formation.
• We tested this hypothesis by investigating whether a forward encoding model based on 5 sensory-motor attributes of word meaning (the “semantic model”) could decode conceptual information from heteromodal areas involved in semantic word processing (the "general semantic network", or GSN).
• For voxels in the GSN, the semantic model identified the activation patterns corresponding to individual concepts significantly above chance (chance performance = .5), while the model based on perceptual attributes of the written word form performed at chance level.
• As expected, the semantic model failed to decode abstract words when they were analyzed separately (due to the low variance of their attribute ratings).
• This pattern was reversed when the analysis was restricted to the visual word form network.
• Heteromodal cortical areas involved in semantic word processing can discriminate between individual concepts based on sensory-motor information alone.
• Classical feature-based theories of concept categorization make little contact with neurobiology or learning mechanisms.
• We investigated whether a model based on embodied features, rooted in known brain systems, succeeds in classifying words into superordinate categories.
• The model consists of 65 attributes related to sensory, motor, spatial, temporal, affective, social, and cognitive processes.
• The model was able to classify previously unseen words into superordinate categories with high accuracy.
• “Lesioning” the model produced category-specific deficits in word categorization.
• Category-specific semantic impairments observed in stroke patients may be explained by embodied componential models.