Neuron, 2013
In this issue of Neuron, Kornblith et al. (2013) identify two regions in macaque occipitotemporal cortex that encode both spatial and nonspatial aspects of visual scenes and might be the homolog of the human parahippocampal place area.
Journal of Vision, 2010
Defining the exact mechanisms by which the brain processes visual objects and scenes remains an unresolved challenge. Valuable clues to this process have emerged from the demonstration that clusters of neurons ("modules") in inferior temporal cortex apparently respond selectively to specific categories of visual stimuli, such as places/scenes. However, the higher-order "category-selective" response could also reflect specific lower-level spatial factors. Here we tested this idea in multiple functional MRI experiments, in humans and macaque monkeys, by systematically manipulating the spatial content of geometrical shapes and natural images. These tests revealed that visual spatial discontinuities (as reflected by an increased response to high spatial frequencies) selectively activate a well-known place-selective region of visual cortex (the "parahippocampal place area") in humans. In macaques, we demonstrate a homologous cortical area, and show that it also responds selectively to higher spatial frequencies. The parahippocampal place area may use such information for detecting object borders and scene details during spatial perception and navigation.
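A rough sense of the spatial-frequency manipulation described above can be given with a short sketch: an image is split into low- and high-frequency versions by masking its Fourier spectrum at a radial cutoff. The cutoff value, the synthetic input image, and the hard mask below are illustrative assumptions, not the authors' actual stimulus pipeline.

```python
# Minimal sketch of splitting an image into low- and high-spatial-frequency
# versions with a hard radial cutoff in the Fourier domain. The cutoff and
# input image are illustrative; the published stimuli were built differently.
import numpy as np

def spatial_frequency_filter(image, cutoff_cycles, high_pass=True):
    """Return a filtered copy of a 2-D grayscale image.

    cutoff_cycles: radial cutoff in cycles per image.
    high_pass: keep frequencies above the cutoff if True, below it otherwise.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequencies, cycles per image
    fx = np.fft.fftfreq(w) * w          # horizontal frequencies, cycles per image
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    mask = radius >= cutoff_cycles if high_pass else radius < cutoff_cycles
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Example: a synthetic 256x256 "scene" split at 8 cycles per image.
rng = np.random.default_rng(0)
scene = rng.standard_normal((256, 256))
low_sf = spatial_frequency_filter(scene, 8, high_pass=False)
high_sf = spatial_frequency_filter(scene, 8, high_pass=True)
```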
The Journal of …, 2011
Behavioral and computational studies suggest that visual scene analysis rapidly produces a rich description of both the objects and the spatial layout of surfaces in a scene. However, there is still a large gap in our understanding of how the human brain accomplishes these diverse functions of scene understanding. Here we probe the nature of real-world scene representations using multivoxel functional magnetic resonance imaging pattern analysis. We show that natural scenes are analyzed in a distributed and complementary manner by the parahippocampal place area (PPA) and the lateral occipital complex (LOC) in particular, as well as other regions in the ventral stream. Specifically, we study the classification performance of different scene-selective regions using images that vary in spatial boundary and naturalness content. We discover that, whereas both the PPA and LOC can accurately classify scenes, they make different errors: the PPA more often confuses scenes that have the same spatial boundaries, whereas the LOC more often confuses scenes that have the same content. By demonstrating that visual scene analysis recruits distinct and complementary high-level representations, our results testify to distinct neural pathways for representing the spatial boundaries and content of a visual scene.
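The multivoxel pattern analysis described above can be sketched as cross-validated classification of scene category from a region's voxel patterns, followed by inspection of the confusion matrix to see which categories that region mixes up. The simulated data, classifier choice (a linear SVM), and cross-validation scheme below are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of region-of-interest pattern classification with a confusion
# matrix, in the spirit of the analysis described above. The data are simulated;
# in the study the rows would be voxel patterns from the PPA or LOC.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n_trials, n_voxels = 160, 300
labels = np.repeat(np.arange(4), n_trials // 4)   # e.g. open/closed x natural/urban
patterns = rng.standard_normal((n_trials, n_voxels)) + labels[:, None] * 0.1

# Cross-validated predictions, then a confusion matrix: the off-diagonal
# structure shows which scene categories a region tends to confuse.
predicted = cross_val_predict(LinearSVC(max_iter=10000), patterns, labels, cv=5)
print(confusion_matrix(labels, predicted))
```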
Nature, 2012
Place-modulated activity among neurons in the hippocampal formation presents a means to organize contextual information in the service of memory formation and recall (1, 2). One particular spatial representation, that of grid cells, has been observed in the entorhinal cortex (EC) of rats and bats (3–5), but has yet to be described in single units in primates. Here, we examined spatial representations in the EC of head-fixed monkeys performing a free-viewing visual memory task (6, 7). Individual neurons were identified in the primate EC that emitted action potentials when the monkey fixated multiple discrete locations in the visual field across the presentation of up to hundreds of novel images. These firing fields possess spatial periodicity similar to a triangular tiling, with a corresponding well-defined hexagonal structure in the spatial autocorrelation. Further, these neurons demonstrated theta-band oscillatory activity and a spatial scale that changed as a function of distance from the rhinal sulcus, consistent with previous findings in rodents (4, 8–10). These spatial representations may provide a framework to anchor the encoding of stimulus content in a complex visual scene. Together, our results provide a direct demonstration of grid cells in the primate and suggest that EC neurons encode space during visual exploration, even without locomotion.
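The hexagonal structure referred to above is conventionally assessed from the two-dimensional spatial autocorrelation of a firing-rate map, in which grid-like firing produces a ring of six peaks around the center. The sketch below computes such an autocorrelogram for a simulated rate map; the map construction and normalization are illustrative assumptions, not the study's analysis code.

```python
# Minimal sketch of the 2-D spatial autocorrelation used to reveal hexagonal
# (grid-like) periodicity in a firing-rate map. The rate map is simulated here;
# in the study it would be built from fixation-location-binned firing rates.
import numpy as np
from scipy.signal import correlate2d

def autocorrelogram(rate_map):
    """Normalized 2-D autocorrelation of a firing-rate map."""
    centered = rate_map - rate_map.mean()
    ac = correlate2d(centered, centered, mode="full")
    return ac / ac.max()

# Simulated grid-like map: sum of three plane waves oriented 60 degrees apart.
x, y = np.meshgrid(np.linspace(0, 4, 80), np.linspace(0, 4, 80))
angles = np.deg2rad([0, 60, 120])
rate_map = sum(np.cos(2 * np.pi * (np.cos(a) * x + np.sin(a) * y)) for a in angles)
ac = autocorrelogram(rate_map)
# A hexagonal ring of six peaks around the center of `ac` indicates grid structure.
```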
Journal of Neuroscience, 2009
It is well established that the primate parietal cortex plays an important role in visuospatial processing. Parietal cortex damage in both humans and monkeys can lead to behavioral deficits in spatial processing, and many parietal neurons, such as in the macaque lateral intraparietal area (LIP), are strongly influenced by visual-spatial factors. Several recent studies have shown that LIP neurons can also convey robust signals related to nonspatial factors, such as color, shape, and the behavioral context or rule that is relevant for solving the task at hand. But what is the relationship between the encoding of spatial factors and more abstract, nonspatial influences in LIP? To examine this, we trained monkeys to group visual motion patterns into two arbitrary categories, and recorded the activity of LIP neurons while monkeys performed a categorization task in which stimuli were presented either inside each neuron's receptive field (RF) or at a location in the opposite visual field. While the activity of nearly all LIP neurons showed strong spatial dependence (i.e., greater responses when stimuli were presented within neurons' RFs), we found that many LIP neurons also showed reliable encoding of the category membership of stimuli even when the stimuli were presented away from neurons' RFs. This suggests that both spatial and nonspatial information can be encoded by individual LIP neurons, and that parietal cortex may be a nexus for the integration of visuospatial signals and more abstract task-dependent information during complex visually based behaviors.
Journal of Neurophysiology, 2006
We investigated object representation in area TE, the anterior part of monkey inferotemporal (IT) cortex, with a combination of optical and extracellular recordings in anesthetized monkeys. We found neurons that respond to visual stimuli composed of naturally distinguishable parts. These neurons were sensitive to a particular spatial arrangement of parts but less sensitive to differences in local features within individual parts. Thus these neurons were activated when arbitrary local features were arranged in a particular spatial configuration, suggesting that they may be responsible for representing the spatial configuration of object images. Previously it has been reported that many neurons in area TE respond to visual features less complex than natural objects, but it has remained unclear whether these features are related to local features of object images or to more global features. These results indicate that TE neurons represent not only local features but also global feature...
European Journal of …, 1997
Robertson, Robert G., Edmund T. Rolls, and Pierre Georges-François. Spatial view cells in the primate hippocampus: effects of removal of view details. … monkeys perform object-place memory tasks in which they must remember where on a video monitor a picture has been …
Journal of Vision, 2010
We used functional magnetic resonance imaging (fMRI) to investigate the reference frames used to encode visual information in scene-responsive cortical regions. At early levels of the cortical visual hierarchy, neurons possess spatially selective receptive fields (RFs) that are yoked to specific locations on the retina. In lieu of this eye-centered organization, we speculated that visual areas implicated in scene processing, such as the parahippocampal place area (PPA), the retrosplenial complex (RSC), and the transverse occipital sulcus (TOS), might instead possess RFs defined in head-, body-, or world-centered reference frames. To test this, we scanned subjects while they viewed objects and scenes presented at four screen locations and maintained fixation at one of three possible gaze positions. We then examined response profiles as a function of either fixation-referenced or screen-referenced position. Contrary to our prediction, the PPA and TOS exhibited position-response curves that moved with the fixation point rather than being anchored to the screen, a pattern indicative of eye-centered encoding. RSC, on the other hand, did not exhibit a position-response curve in either reference frame. By showing an important commonality between the PPA/TOS and other visually responsive regions, the results emphasize the critical involvement of these regions in the visual analysis of scenes.
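The reference-frame comparison described above amounts to re-expressing each stimulus position in either screen coordinates or fixation-referenced (retinal) coordinates and asking in which frame responses at the same position stay consistent across gaze positions. The sketch below illustrates this logic on simulated responses; the condition layout, tuning shape, and consistency measure are assumptions, not the study's analysis code.

```python
# Minimal sketch of the fixation- vs screen-referenced comparison described
# above: responses from a 4 (stimulus location) x 3 (gaze position) design are
# grouped by position in each candidate frame, and the frame in which responses
# at a shared position vary least across gaze is taken as the coding frame.
import numpy as np

screen_pos = np.array([-7.5, -2.5, 2.5, 7.5])   # stimulus locations on screen (deg)
gaze_pos = np.array([-5.0, 0.0, 5.0])            # fixation positions (deg)

# Simulated eye-centered region: the response depends only on where the
# stimulus falls relative to fixation (its retinal position).
retinal_pos = screen_pos[None, :] - gaze_pos[:, None]        # shape (3 gazes, 4 stimuli)
responses = np.exp(-(retinal_pos - 2.0) ** 2 / 20.0)

def within_position_spread(responses, positions):
    """Mean spread of responses across gaze positions, for trials sharing the
    same position in the candidate reference frame. Smaller = more consistent."""
    spreads = [responses[positions == p].std()
               for p in np.unique(positions)
               if (positions == p).sum() > 1]
    return float(np.mean(spreads))

screen_frame = np.broadcast_to(screen_pos, responses.shape)
print("screen-referenced spread:  ", within_position_spread(responses, screen_frame))
print("fixation-referenced spread:", within_position_spread(responses, retinal_pos))
# An eye-centered region (like the PPA/TOS result above) gives near-zero spread
# in the fixation-referenced frame and a larger spread in the screen frame.
```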
Hippocampus, 2001
Hippocampal spatial view cells found in primates respond to a region of visual space being looked at, relatively independently of where the monkey is located. Rat place cells have responses which depend on where the rat is located. We investigate the hypothesis that in both types of animal, hippocampal cells respond to a combination of visual cues in the correct spatial relation to each other. In rats, which have a wide visual field, such a combination might define a place. In primates, including humans, which have a much smaller visual field and a fovea which is directed towards a part of the environment, the same mechanism might lead to spatial view cells. A computational model in which the neurons become organized by learning to respond to a combination of a small number of visual cues spread within an angle of a 30° receptive field resulted in cells with visual properties like those of primate spatial view cells. The same model, but operating with a receptive field of 270°, prod...
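The model described above can be caricatured as competitive learning on conjunctions of the landmark cues that fall inside a restricted receptive field: narrow fields yield cells driven by small cue combinations (view-like tuning), whereas very wide fields make each cell's input pool most cues at once. The learning rule, cue layout, thresholds, and parameters in the sketch below are illustrative assumptions, not the published implementation.

```python
# Toy competitive-learning sketch in the spirit of the model described above:
# input vectors mark which landmark cues fall inside the receptive field for a
# random gaze direction; a winner-take-all Hebbian rule with weight
# normalization makes output cells respond to particular cue combinations.
import numpy as np

rng = np.random.default_rng(2)
n_cues, n_cells, lr = 24, 20, 0.1
cue_dirs = np.linspace(0.0, 360.0, n_cues, endpoint=False)   # cue bearings (deg)

def visible_cues(gaze_deg, field_deg):
    """Binary vector of cues whose bearing lies within +/- field_deg/2 of gaze."""
    diff = (cue_dirs - gaze_deg + 180.0) % 360.0 - 180.0
    return (np.abs(diff) <= field_deg / 2.0).astype(float)

def train(field_deg, n_steps=2000):
    w = rng.random((n_cells, n_cues))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(n_steps):
        x = visible_cues(rng.uniform(0.0, 360.0), field_deg)
        winner = np.argmax(w @ x)                      # winner-take-all competition
        w[winner] += lr * x                            # Hebbian update for the winner
        w[winner] /= np.linalg.norm(w[winner])         # keep weights normalized
    return w

# Rough diagnostic: how many cues carry strong weight onto each cell. Narrow
# (primate-like) fields concentrate weight on a few cues (view-like tuning);
# very wide fields spread weight over most cues.
w_view = train(field_deg=30.0)
w_wide = train(field_deg=270.0)
print("strong weights per cell, 30 deg field: ", (w_view > 0.2).sum(axis=1).mean())
print("strong weights per cell, 270 deg field:", (w_wide > 0.2).sum(axis=1).mean())
```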