
When Seeing a Dog Activates the Bark

2013, Experimental Psychology (formerly Zeitschrift für Experimentelle Psychologie)


Research Article

When Seeing a Dog Activates the Bark: Multisensory Generalization and Distinctiveness Effects

Lionel Brunel (1), Robert L. Goldstone (2), Guillaume Vallet (3), Benoit Riou (3), and Rémy Versace (3)

(1) Laboratoire Epsylon, Université Paul-Valéry, Montpellier 3, France; (2) Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; (3) Laboratoire d'Etude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, France

Experimental Psychology, 2012. DOI: 10.1027/1618-3169/a000176

Abstract. The goal of the present study was to find evidence for a multisensory generalization effect (i.e., generalization from one sensory modality to another sensory modality). The authors used an innovative paradigm (adapted from Brunel, Labeye, Lesourd, & Versace, 2009) involving three phases: a learning phase, consisting of the categorization of geometrical shapes, which manipulated the rules of association between shapes and a sound feature, and two test phases.
The first of these was designed to examine the priming effect of the geometrical shapes seen in the learning phase on target tones (i.e., priming task), while the aim of the second was to examine the probability of recognizing the previously learned geometrical shapes (i.e., recognition task). When a shape category was mostly presented with a sound during learning, all of the primes (including those not presented with a sound in the learning phase) enhanced target processing compared to a condition in which the primes were mostly seen without a sound during learning. A pattern of results consistent with this initial finding was also observed during recognition, with the participants being unable to pick out the shape seen without a sound during the learning phase. Experiment 1 revealed a multisensory generalization effect across the members of a category when the objects belonging to the same category share the same value on the shape dimension. However, a distinctiveness effect was observed when a salient feature distinguished the objects within the category (Experiment 2a vs. 2b).

Keywords: multisensory memory, exemplar-based memory, featural vs. dimensional selective attention, generalization, distinctiveness

The starting point for our study is the observation that, when asked to think about an object, people tend (implicitly or explicitly) to infer that the object possesses perceptual properties that other objects do not. Although these properties are often visual, they frequently involve other sensory modalities. For example, we might have learned that an Object i (e.g., a dog) possesses not only certain visual properties (e.g., four legs) but also auditory properties (e.g., it barks). If we see a new object, we can readily distinguish it from Object i.
However, if this new object is identified as belonging to the same category as Object i, we can generalize some of i's properties to this new object even if these properties are not perceptually present in the new object. In other words, we could infer that a dog can bark even if, in the present situation, it does not do so.

Generalization effects have been explained and operationalized within several different memory perspectives (i.e., prototype-based, Rosch & Mervis, 1975; boundary-based, Ashby & Townsend, 1986; feature-based, Tversky, 1977; and exemplar-based, Logan, 2002, and Nosofsky, 1986, 1991). However, none of these approaches has explicitly considered the possibility that generalization might occur with modalities other than vision or indeed between two distinct sensory modalities.

A basic claim of the exemplar-based perspective is that our memories are based on a collection of multiple instances represented by their locations within a multidimensional psychological space. The generalization of inferences from one exemplar to another is then inversely related to their distance in that psychological space (Shepard, 1987). The effectiveness of the memory system is simply determined by an interaction between a perceptual cue (given by the task) and a set of exemplars activated by the cue (see also Hintzman, 1986). In this view, the probability that a given cue will activate an exemplar is directly linked to the similarity relation, across a set of dimensions (e.g., size, color), between a currently perceived object and exemplars in memory. The weight of a dimension is influenced by its diagnosticity for categorization or recognition (see Nosofsky, 1991).
Thus, the probability of generalization (i.e., assimilation of a presented object's inferred dimensions to the corresponding dimension values of exemplars already known to belong to a category) increases as the similarity of an object to exemplars increases, particularly along dimensions that are diagnostic for categorization. In other words, the generalization effect depends on: (1) the distribution of the exemplars in the psychological space (see Shepard, 1987), and (2) the dimensional similarity between a probed object and exemplars, with dimensions weighted according to their diagnosticity for relevant categorizations. However, this kind of model tells us nothing about: (1) the formation of memory exemplars, and (2) the nature of these exemplars.

There is now an increasing amount of evidence for the existence of multisensory memory traces or exemplars (for a review, see Versace, Labeye, Badard, & Rose, 2009), deriving from both brain imaging studies (e.g., Beisteiner, Höllinger, Lindinger, Lang, & Berthoz, 1995; Bensafi, Sobel, & Khan, 2007; Martin, Haxby, Lalonde, Wiggs, & Ungerleider, 1995; Wheeler, Petersen, & Buckner, 2000) and behavioral studies (Brunel, Labeye, et al., 2009; Brunel, Lesourd, Labeye, & Versace, 2010; Connell & Lynott, 2010; Pecher, Zeelenberg, & Barsalou, 2004; Riou, Lesourd, Brunel, & Versace, 2011; Topolinski, 2012; Vallet, Brunel, & Versace, 2010; Van Dantzig, Pecher, Zeelenberg, & Barsalou, 2008). Once perceived, the perceptual properties of a multisensory object can be preserved in memory in the form of an exemplar. This is due to an integration mechanism that allows for the creation of durable links between perceptual properties within the same memory representation (see Brunel, Labeye, et al., 2009; Hommel, 1998; Labeye, Oker, Badard, & Versace, 2008).
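The exemplar-based account sketched above (weighted dimensional similarity combined with Shepard's exponential generalization gradient) can be illustrated with a minimal sketch. The dimension names, weights, and sensitivity parameter below are illustrative assumptions for a toy example, not values taken from the study.

```python
import math

def similarity(probe, exemplar, weights, c=1.0):
    """Shepard (1987)-style similarity: exponential decay of a
    weighted city-block distance between dimension values."""
    d = sum(w * abs(probe[dim] - exemplar[dim]) for dim, w in weights.items())
    return math.exp(-c * d)

def summed_activation(probe, exemplars, weights):
    """Summed similarity of a probe to stored exemplars (cf. Nosofsky, 1986);
    generalization strength rises with similarity on diagnostic dimensions."""
    return sum(similarity(probe, ex, weights) for ex in exemplars)

# Two illustrative exemplars of one category on two dimensions; 'shape' is
# weighted more heavily than 'color', mimicking a diagnostic dimension.
category = [{"shape": 0.0, "color": 0.1}, {"shape": 0.0, "color": 0.9}]
weights = {"shape": 3.0, "color": 0.5}

same_shape = summed_activation({"shape": 0.0, "color": 0.5}, category, weights)
diff_shape = summed_activation({"shape": 1.0, "color": 0.5}, category, weights)
print(same_shape > diff_shape)  # a same-shape probe activates the category more
```

With the shape dimension weighted heavily, a probe matching the category's shape value activates the stored exemplars far more strongly than a probe differing in shape, regardless of its color, which is the sense in which generalization tracks the diagnostic dimension.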
In contrast to simple associative learning (see Hall, 1991), once features are integrated within an exemplar, it is difficult to access the individual features (see Labeye et al., 2008; Richter & Zwaan, 2010). This new unit, once acquired, becomes a functional ''building block'' for subsequent processing and learning (in language, Richter & Zwaan, 2010; in memory, Labeye et al., 2008; or in attention, Delvenne, Cleeremans, & Laloyaux, 2009). Once two features have become integrated, the presence of one feature automatically suggests the presence of the other. In this view, the integration mechanism is a fundamental mechanism of perceptual learning (see the unitization mechanism, Goldstone, 2000) or contingency learning (see Schmidt & De Houwer, 2012; Schmidt, De Houwer, & Besner, 2010). Multisensory memory representations can be seen as a logical extension of an exemplar-based memory model, whose basic claim is that an exemplar is represented by a set of features. Consequently, the current study focuses on finding evidence of generalization within a multisensory memory perspective. The implications of a multisensory view of memory for generalization have not yet been empirically tested.

The Present Study

The present study focuses on generalization across two sensory modalities, specifically vision and audition. We selected two visual dimensions (color and shape) and the auditory dimension of pitch. We intentionally chose pitch because: (1) previous work has shown strong multisensory integration between these visual dimensions and pitch (Brunel, Labeye, et al., 2009; Brunel, Lesourd, et al., 2010; Vallet et al., 2010; see also Hommel, 1998; Labeye et al., 2008; Meyer, Baumann, Marchina, & Jancke, 2007), and (2) it is easy to experimentally manipulate both the visual and auditory modalities.
There is clear evidence that categorization involves the multisensory representation of stimulus information (e.g., Chen & Spence, 2010; Cooke, Jäkel, Wallraven, & Bülthoff, 2007; Schneider, Engel, & Debener, 2008; Vallet et al., 2010) and also mediates perceptual learning (Brunel, Labeye, et al., 2009; Brunel, Lesourd, et al., 2010; Goldstone, 1994; Goldstone, Gerganov, Landy, & Roberts, 2008). For instance, the categorization of an object presented in a given modality is associated with the activation of properties of the object in other modalities. A well-known example of this comes from the literature on cross-modal priming (Schneider et al., 2008; Vallet et al., 2010). Vallet et al. (2010) demonstrated that cross-modal priming can be a robust phenomenon, which they explained in terms of the multisensory view of memory. In their experiment, participants first categorized visual stimuli (e.g., a dog) and then, shortly afterwards, the corresponding sounds (e.g., a dog's bark). When the visual stimuli were presented simultaneously with an auditory mask, the corresponding sounds were categorized as new sounds, unlike the unmasked items, which were categorized faster than the new ones (i.e., cross-modal priming). The authors explained their main results as follows: ''The sensory mask interfered with the automatic activation of the other sensory component of the memory trace activated by the stimulus'' (Vallet et al., 2010, p. 381). In the same vein, Brunel and coworkers have demonstrated within a behavioral context that the activation of an auditory memory component (a component that is not actually present) can influence the perceptual processing of a sound (Brunel, Labeye, et al., 2009) or of an object that is accompanied by a sound (Brunel, Lesourd, et al., 2010; see also Van Dantzig et al., 2008).
More precisely, using a short-term priming paradigm, they demonstrated that the presentation of a prime shape that had been associated with a sound during a learning phase automatically activates the auditory component (see also Meyer et al., 2007) and influences the processing of target sounds (irrespective of whether they are low or high pitched), unlike prime shapes seen without an accompanying sound during learning. Given that our previous studies used a systematic rule of association (a particular shape was systematically presented with a sound during learning), they did not provide us with any information about whether learned multisensory integration is specific or general. Multisensory generalization could be described as the probability that a unisensory cue (e.g., visual) will activate previously learned multisensory exemplars (e.g., visual and auditory) and will then ''benefit'' from a property belonging to a different modality (e.g., sound). In other words, our aim was to provide evidence that if, during learning, we find out that X (e.g., a shape) is a dimension frequently associated with Y (e.g., a sound) and that X is a diagnostic dimension, then further processing of X alone as a cue should automatically activate XY exemplars irrespective of the current task.

Experiment 1

To reveal evidence relating to such an effect, we employed a three-phase paradigm adapted from our previous work (Brunel, Labeye, et al., 2009; Brunel, Lesourd, et al., 2010). In the first phase, the participants had to categorize geometrical shapes into categories that were well known to them (i.e., circles and squares). In addition to the categorization task, we manipulated the rules of association between shapes and a sound feature, as well as the colors and sounds that were used (see Figure 1).
For example, in the ''Sound Category'' condition, the non-isolated squares were presented simultaneously with a sound feature whereas a single isolated square was not. Similarly, in the ''No-Sound Category'' condition, the non-isolated circles were presented alone whereas one isolated circle was presented in combination with a sound feature. As Figure 1 indicates, isolated objects were always displayed in a color consistent with their isolation status (i.e., if the isolated square in the Sound Category condition was red, all shapes displayed in red were presented without sound). At the end of the learning phase, we predicted that the shape dimension should be a diagnostic dimension for further processing, because categorization that depends on one dimension leads to perceptual sensitization to this dimension and desensitization to variation along other dimensions (Goldstone, 1994).

The second phase consisted of a tone categorization task (with either low-pitched or high-pitched tones). Each tone was preceded by a visual prime shape as part of a short-term priming paradigm. These shapes were the same as those presented in the first phase. However, in this phase they were systematically presented without an accompanying sound.

The third phase took the form of a recognition memory task which referred back to the learning phase. The participants completed two successive recognition tasks. They saw the shapes from the Sound Category condition and had to indicate which of them had been presented without sound during the learning phase. Similarly, they saw shapes from the No-Sound Category condition and had to determine which of them had been presented with sound during the learning phase. Our predictions in this experiment focused directly on the probability that isolated prime objects would or would not activate the sound feature (i.e., the priming effect induced by isolated objects for both categories).
Basically, if the probability of a multisensory exemplar being activated depends on the similarity of the exemplars on a diagnostic dimension, we predicted that activation in response to the isolated prime shape seen in the Sound Category condition would be greater along the shape dimension (because it was the diagnostic dimension for the categorization) than along the color dimension. There is therefore a high probability that this prime shape will activate the sound feature (i.e., a priming effect) even though it was previously experienced without sound. If this is the case, participants should also have difficulty identifying the colors associated with the isolated objects. In the case of the isolated prime shape seen in the No-Sound Category condition, activation should likewise be greater along the shape dimension (because it was the diagnostic dimension for the categorization) than along the color dimension. There is therefore a high probability that this prime shape will not activate the sound feature even though it was experienced with this feature. Consequently, participants should also have difficulty identifying the colors associated with the isolated objects. In sum, we predicted (for both isolated shapes) a generalization effect along the shape dimension in terms of both priming and recognition accuracy.

Figure 1. Illustration of the basic manipulations used in the learning phase in all experiments. For each trial, the participants had to categorize the shape displayed on the screen as a ''square'' or a ''circle.'' Any given shape could belong to one category or the other (Sound or No-Sound) and could be Non-isolated or Isolated. Isolation refers to the status of the shape with reference to its category. In Experiment 1, all the shapes were displayed in color (respectively, red, green, yellow, and blue), whereas in Experiment 2, the shapes were displayed in shades of gray.
More precisely, we expected to observe a ''multisensory'' generalization effect (i.e., generalization of multisensory integration) for the isolated object seen in the Sound Category condition. Even though this object was experienced without sound, if it is assimilated with the other objects possessing the same shape, then, when used as a prime, it should enhance the categorization of the target tones and should not be recognized as having been presented without a sound. Similarly, we expected a ''unisensory'' generalization effect for the isolated object seen in the No-Sound Category condition. This means that even though this object was experienced with a sound, it should not, when used as a prime, significantly influence the categorization of the target tones, and it should also not be recognized as having been presented with a sound. Our predictions therefore suggest that we should only observe a significant main effect of Category Type. Each prime shape seen in the Sound Category condition should enhance the categorization of tones compared to those seen in the No-Sound Category condition. Similarly, the performance of the participants should not differ from chance level (i.e., 25%) in either recognition task.

Method

Participants

Thirty-two right-handed volunteers were recruited for Experiment 1. All of them were students at the University Lumière Lyon 2 (France) and had normal or corrected-to-normal vision.

Apparatus

The visual stimuli were geometric shapes in the form of either a 7 cm square or a circle with a radius of 3.66 cm. The square and circle could be displayed in four different colors (see Footnote 1; CIE L*a*b* setting values are indicated in parentheses): yellow (L: 95.66, a: 7.96, b: 77.70), red (L: 54.36, a: 66.26, b: 54.44), blue (L: 39.24, a: 28.38, b: 91.03), and green (L: 85.44, a: 55.74, b: 68.19).
The auditory stimuli consisted of a white noise, which was used in the learning task, and two pure tones, namely a high-pitched tone (312 Hz) and a low-pitched tone (256 Hz), which were used for the priming task. All the auditory stimuli were presented in mono through headphones and had a duration of 500 ms.

Procedure

After providing written consent, each participant was tested individually in a session lasting approximately 15 min. The first phase (we use the term ''learning task'' to refer to this task) consisted of a shape categorization task involving two categories: circle and square. Each circle and square could be displayed in one of four colors. Within this categorization task, we manipulated the pattern of association between shapes and the sound feature, as well as the colors and sounds used (see Figure 1). For example, in the Sound Category condition, each square (Non-isolated) except for one (Isolated) was presented simultaneously with a sound feature. Similarly, in the No-Sound Category condition, each circle (Non-isolated) except for one (Isolated) was presented without a sound feature. The isolated object in the No-Sound Category condition was always displayed in a color presented with sound (irrespective of the category), while the opposite was true for the isolated object seen in the Sound Category condition (i.e., it was always displayed in a color presented without sound, irrespective of the category). The same design was used for each of the two shapes. For half of the participants, the squares were presented in the Sound Category condition and the circles in the No-Sound Category condition, with the assignment being reversed for the other half. In the same way, the colors were counterbalanced between participants so that each color was also seen in each condition. In each trial, a shape (square or circle) was presented for 500 ms either alone or together with a white noise.
The participants were told that their task was to judge, as quickly and accurately as possible, whether the shape was a square or a circle and to respond by pressing the appropriate key on the keyboard. All the visual stimuli were presented in the center of the screen, and the intertrial interval was 1,500 ms. Each shape was presented 40 times (10 times in each of the four colors) in random order. Half of the participants used their right index finger to respond ''square'' and their right middle finger to respond ''circle.'' For the other half of the participants, the response fingers were reversed.

The second phase of the experiment (the ''Priming task'') consisted of a tone categorization task in which each target tone was preceded by a visual prime (a short-term priming paradigm). The prime was one of the shapes presented during the learning phase. For all the participants, the prime was presented for 500 ms. It was immediately followed by a target, which was either a high-pitched or a low-pitched sound. The participants had to judge as quickly and accurately as possible whether the target sound was low pitched or high pitched, and to respond by pressing the appropriate key on the keyboard. The participants were told to keep their eyes open throughout this phase. Given that the target appeared as soon as the prime disappeared, the SOA between the prime and the target was also 500 ms. All of the visual stimuli were presented in the center of the screen, and the intertrial interval was 1,500 ms. Each participant saw a total of 80 trials, that is, 40 with each target sound, half of them (20) with a prime shape seen in the Sound Category condition and the other half (20) with a prime shape seen in the No-Sound Category condition. The order of the different experimental conditions was randomized.
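The priming-phase trial structure described above (80 trials: 2 target tones × 2 prime categories × 20 repetitions, in random order) can be sketched as follows. The labels and the fixed seed are illustrative assumptions; the split between isolated and non-isolated primes within each category is not restated here, so it is omitted from the sketch.

```python
import itertools
import random

def build_priming_trials(seed=0):
    """Build the 80-trial priming list: 40 trials per target tone,
    half preceded by a Sound Category (SC) prime and half by a
    No-Sound Category (NSC) prime, shuffled into random order."""
    trials = [
        {"target": target, "prime_category": prime}
        for target, prime, _ in itertools.product(
            ("low", "high"), ("SC", "NSC"), range(20)
        )
    ]
    random.Random(seed).shuffle(trials)
    return trials

trials = build_priming_trials()
print(len(trials))  # 80
```

Counterbalancing of shape-to-category assignment and colors across participants, as described in the Procedure, would sit one level above this function (e.g., by swapping the category labels for half of the participant list).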
The third phase consisted of a forced-choice recognition task (referred to as the ''Recognition task''), during which the participants had to complete two successive recognition tests. Importantly, all the participants were informed at the beginning of the experiment that they would have to perform a recognition task, but they were not told what kind of question they would have to answer. The first recognition test related to the isolated shape seen in the Sound Category condition. The participants had to recognize the shape presented with no accompanying sound from among those presented with a sound during the learning phase. The second recognition test related to the isolated shape seen in the No-Sound Category condition, with the participants having to identify the shape that was presented with a sound from among those presented without sound during the learning phase. In each of the tests, the participants indicated their response by pressing the appropriate key on the keyboard. The order of the questions was counterbalanced across subjects.

Footnote 1. We ran a preexperiment (16 participants) on our visual stimuli in order to ensure that participants represented objects in terms of shape and color dimensions and that neither of these dimensions was more salient than the other. We recorded the similarity judgments of participants given in response to pairs of objects, which we then subjected to a multidimensional scaling analysis (for a review, see Nosofsky, 1992). The participants rated the similarity between the objects in terms of both their shape and color (CIE L*a*b*). Each of these dimensions seemed to be approximately equally relevant for the similarity judgments. Consequently, the combination of these dimensions meant that each object had a relatively homogeneous level of similarity for either color or shape.

Table 1. Mean response times (RTs, in ms) and mean correct response (CR) rates in each experimental condition (standard errors in parentheses) in the learning task in Experiments 1, 2a, and 2b

                      Non-isolated              Isolated
Category              RT         CR             RT         CR
Experiment 1
  SC                  479 (13)   0.95 (.01)     543 (14)   0.92 (.01)
  NSC                 521 (14)   0.96 (.01)     476 (11)   0.97 (.01)
Experiment 2a
  SC                  462 (16)   0.93 (.01)     540 (18)   0.88 (.03)
  NSC                 512 (17)   0.93 (.01)     462 (17)   0.94 (.01)
Experiment 2b
  SC                  496 (15)   0.95 (.01)     567 (22)   0.91 (.01)
  NSC                 523 (16)   0.94 (.01)     480 (14)   0.95 (.02)

Notes. SC = Sound Category; NSC = No-Sound Category.

Results and Discussion

Learning Task

The mean correct response latencies and mean percentages of correct responses were calculated across subjects for each experimental condition. Latencies below 250 ms and above 1,250 ms were removed (this same cut-off was used throughout all the experiments and never resulted in the exclusion of more than 3% of the data; see Footnote 2). The participants performed the shape categorization task very accurately (overall correct response rate of 95%, see Table 1). Separate analyses of variance were performed on latencies and correct response rates, with Subject as a random variable, and Category Type (Sound Category vs. No-Sound Category) and Isolation (isolated vs. non-isolated shapes) as within-subject factors.
The same analyses were conducted for the learning task in all the experiments (see Table 1). The analyses revealed a main effect of Category Type for both latencies, F(1, 31) = 5.26, p < .05, ηp² = .15, and correct response rates, F(1, 31) = 9.49, p < .05, ηp² = .23. The participants were significantly worse at categorizing shapes from the Sound Category (RT = 512 ms, SE = 12 ms; CR = .93, SE = .01) than from the No-Sound Category (RT = 499 ms, SE = 12 ms; CR = .96, SE = .01). Most interestingly, we observed a significant interaction between Category Type and Isolation: F(1, 31) = 103.06, p < .05, ηp² = .77, for latencies, and F(1, 31) = 7.22, p < .01, ηp² = .19, for correct responses. In the Sound Category condition, the isolated object took longer to process than the non-isolated ones, F(1, 31) = 51.86, p < .05, and it was also processed marginally less accurately, F(1, 31) = 3.41, p = .07. In contrast, in the No-Sound Category condition, the isolated object was processed faster, F(1, 31) = 34.65, p < .05, but not significantly more accurately, F < 1, than the non-isolated ones. Every shape that was presented with a sound was categorized faster than those presented without a sound, irrespective of category. These results support our predictions by revealing that the participants paid attention to the sound feature during the visual shape categorization task and used it as a cue for categorization.

Priming Task

Separate analyses of variance were performed on latencies and percentages of correct responses, with subjects as random variables, and Category Type (prime based on the shapes seen in the Sound Category or No-Sound Category conditions) and Isolation (isolated vs. non-isolated) as within-subject factors. Once again, the participants performed this task accurately (94% correct on average, see Table 2).
The analyses revealed a significant main effect of Category Type for both latencies, F(1, 31) = 10.19, p < .05, ηp² = .24, and correct response rates, F(1, 31) = 23.46, p < .05, ηp² = .43. Neither a significant main effect of Isolation nor an interaction between Category Type and Isolation was observed for either latencies, F < 1, ηp² = .03, or correct response rates, F < 1, ηp² = .04. The responses were significantly faster and more accurate (RT = 521 ms, SE = 24; CR = 0.96, SE = 0.01) when the target tone was preceded by an exemplar from the Sound Category rather than from the No-Sound Category (RT = 542 ms, SE = 25; CR = 0.93, SE = 0.01). As Figure 2 shows, the isolated exemplars produced the same direction of priming as the non-isolated exemplars of the same category. This means, for example, that a priming effect on tone identification was observed for the isolated exemplar from the Sound Category, even though it was presented without a sound feature.

Footnote 2. We chose this particular cut-off in the light of our previous work (Brunel, Labeye, et al., 2009; Brunel, Lesourd, et al., 2010). It did not affect the results in any noteworthy way.

Table 2. Mean reaction times (RTs, in ms) and correct response (CR) rates in each experimental condition (standard errors in parentheses) during the priming task in Experiments 1, 2a, and 2b

                      Non-isolated              Isolated
Category              RT         CR             RT         CR
Experiment 1
  SC prime            527 (23)   0.96 (.01)     515 (25)   0.97 (.02)
  NSC prime           540 (24)   0.93 (.01)     544 (25)   0.92 (.02)
Experiment 2a
  SC prime            526 (24)   0.94 (.02)     550 (25)   0.94 (.02)
  NSC prime           553 (23)   0.92 (.02)     528 (23)   0.92 (.02)
Experiment 2b
  SC prime            571 (26)   0.91 (.02)     558 (24)   0.91 (.02)
  NSC prime           587 (25)   0.91 (.02)     600 (25)   0.91 (.02)

Notes. SC = Sound Category; NSC = No-Sound Category.
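The effect sizes reported with the F statistics follow the standard identity for partial eta squared, ηp² = (F × df_effect) / (F × df_effect + df_error). As a consistency check (not part of the original analysis), the reported learning-task values can be recovered from the F values and their degrees of freedom:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Reported learning-task effects, all F(1, 31):
print(round(partial_eta_squared(5.26, 1, 31), 2))    # 0.15
print(round(partial_eta_squared(9.49, 1, 31), 2))    # 0.23
print(round(partial_eta_squared(103.06, 1, 31), 2))  # 0.77
print(round(partial_eta_squared(7.22, 1, 31), 2))    # 0.19
```

Each recovered value matches the ηp² reported in the text, confirming that the garbled ''g2p'' strings in the extraction were partial eta squared values.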
Likewise, no significant priming effect was observed for the isolated exemplar in the No-Sound Category, despite the fact that it was presented accompanied by a sound feature.

Recognition Task

A correct recognition percentage was calculated for the two recognition tests, and Student's t-tests (two-tailed) were run to compare each score with chance level (25%). When the participants were required to pick out the isolated shape seen in the Sound Category condition, the correct recognition rate was 12.5%, a score that was actually significantly below chance level, t(31) = 2.10, p < .05. This low accuracy score did not reflect a bias toward a particular color: the distribution of recognition responses among participants (irrespective of whether the response was correct) was relatively homogeneous, with each color indicated with equal frequency. It seems reasonable to assume that the participants were biased toward non-isolated shapes because they thought that the isolated shape had been presented together with a sound. When the participants had to identify the isolated shape seen in the No-Sound Category condition, the correct recognition rate was 28.13% and did not differ significantly from chance level, t < 1.

Overall, the results of this experiment can be explained in terms of multisensory generalization. In the case of the learning task, there is evidence that the participants were sensitized to the manipulation of the implicit association between visual objects (color and shape) and sound. In general, categorization was faster and more accurate when the shape was accompanied by a sound feature, irrespective of its category. This result suggests that the participants paid attention to the sound feature and could use it as a cue when performing the task. However, the most interesting results concerned the two following tasks (i.e., the priming and recognition tasks).
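The recognition result reported above (12.5% correct against a 25% chance level, t(31) = 2.10) can be reproduced with a minimal sketch, under the assumption that each of the 32 participants contributed a single correct/incorrect forced choice (4 of 32 correct gives exactly 12.5%):

```python
import math

def one_sample_t_vs_chance(n_correct, n, chance):
    """One-sample t-test of a proportion of correct choices against
    chance, treating each participant's 0/1 outcome as one observation."""
    scores = [1] * n_correct + [0] * (n - n_correct)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    t = (mean - chance) / math.sqrt(var / n)
    return t, n - 1

# 12.5% of 32 participants = 4 correct, tested against 25% chance.
t, df = one_sample_t_vs_chance(4, 32, 0.25)
print(round(abs(t), 2), df)  # 2.1 31
```

The negative sign of t (|t| ≈ 2.10 on 31 degrees of freedom) reflects performance below chance, consistent with the interpretation that participants were biased toward the non-isolated shapes.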
First, we showed that only the prime shapes belonging to the Sound Category made discrimination of the target sound easier, regardless of whether the shape was associated with the white noise (non-isolated prime) or not (isolated prime). This is, in fact, a replication of the results observed by Brunel, Labeye, and coworkers (2009) in their Experiment 1 (see that publication for a full discussion of the nature of this effect). A shape previously associated with a sound and presented as a prime is able to preactivate, and as a consequence to facilitate, immediate sound processing (in this case, in a tone categorization task). However, our main interest resided in the results for the isolated exemplars in both the priming and recognition tasks. The results obtained for the isolated exemplar in the Sound Category condition provide evidence of multisensory generalization, whereas those obtained for the isolated exemplars in both categories indicate that the generalization effect is directly determined by the summed activation between objects and exemplars in memory. When the isolated exemplar in the Sound Category was a prime, it helped in tone categorization and was not recognized significantly above chance level. When the isolated exemplar in the No-Sound Category was a prime, it did not significantly influence categorization of the tone and was not recognized significantly above chance level. This pattern of data was probably due to the homogeneous distribution of the exemplars within the shape and color dimensions. In other words, within each category, each exemplar (including its color and shape) was equally effective in activating all of the other exemplars and was also able to activate the corresponding exemplar with the same color across categories. However, because the summed activation was stronger within the categories (due to the learning task), we observed generalization on the shape dimension (i.e., within category). At this point, it is not possible to say whether the same pattern of results will typically be found for the isolated exemplars.

Our results are consistent with explanations provided within the framework of pure similarity-based models (e.g., Nosofsky, 1986), which hold that attention can be selectively focused on category-relevant dimensions. However, there is evidence that summed activation based only on similarity to relevant dimensions is not sufficient to predict the influence of exemplar variability on categorization (Cohen, Nosofsky, & Zaki, 2001), the distinctiveness effect in recognition (Nosofsky & Zaki, 2003), or the inverse base-rate effect (Johansen, Fouquet, & Shanks, 2010). Nosofsky and Zaki (2003) proposed a "Hybrid-similarity exemplar model" to account for distinctiveness effects in recognition memory. This model is an extension of the Generalized Context Model (Nosofsky, 1986): it is based on the summed activation of exemplars along diagnostic dimensions while also making use of discrete-feature matching. A stimulus with a highly salient feature will be more similar to itself (i.e., have a greater self-match) than a stimulus without such a feature, because matching common features increases similarity. The result is that the computation of summed similarities is biased toward the exemplar with a salient feature. Johansen et al. (2010) subsequently formalized this idea in terms of a contrast between selective dimensional attention and selective featural attention: in order to account for the inverse base-rate effect, their exemplar-based model includes both dimensional and featural selective attention. As far as the present study is concerned, if one feature, a particular color for example, happens to be salient (Johansen et al., 2010; Nosofsky & Zaki, 2003), then there might be relatively little generalization based on shape during either the priming or recognition tasks, and a distinctiveness effect might be observed. According to Nosofsky and Zaki (2003), exemplars that are both isolated in psychological space and have a salient feature tend to show a distinctiveness effect, at least in a recognition task. Accordingly, we predicted that an isolated object that possesses a salient feature should not be generalized with the other exemplars of the corresponding category. The aim of our second experiment was to test this prediction.

Figure 2. Mean reaction times (RTs) for tone categorization as a function of Isolation and for each prime type in Experiment 1. Error bars represent standard errors. SC = Sound Category; NSC = No-Sound Category.

Experiment 2

Overall, Experiment 2 used the same design as Experiment 1 but with a brightness dimension instead of a color dimension. Brightness and shape are separable dimensions (Goldstone, 1994), and brightness varies along a continuum on which extreme values are, in general, less similar to the other values than the central values are. We predicted that there would be a distinctiveness effect (including at a multisensory level) when the objects that were presented in isolation during learning were associated with extreme brightness values. More specifically, we predicted that the isolated prime shape from the Sound Category would not activate the sound, whereas the isolated prime shape from the No-Sound Category would. Accordingly, we expected to observe a significant interaction between Category Type and Isolation in the tone categorization task. If our predictions are correct, both isolated exemplars should be recognized above chance level (i.e., 25%), given their salient, extreme brightness values.
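The summed-activation computation of the Generalized Context Model, extended with the discrete-feature self-match of the hybrid-similarity model described above, can be sketched as follows. This is an illustrative Python sketch under our own assumptions: the stimulus coding, the scaling parameter c, and the feature-match multiplier m are arbitrary choices, not fitted values from either model.

```python
import math

def similarity(probe, exemplar, weights, c=1.0, m=3.0):
    """Attention-weighted exponential similarity (GCM-style), multiplied by a
    discrete-feature match bonus (hybrid-similarity extension).
    A stimulus is (tuple of dimension values, set of salient discrete features)."""
    (pv, pf), (ev, ef) = probe, exemplar
    d = sum(w * abs(a - b) for w, a, b in zip(weights, pv, ev))  # weighted city-block
    return math.exp(-c * d) * (m ** len(pf & ef))  # shared salient features boost s

def summed_activation(probe, category, weights):
    """Summed similarity of a probe to all stored exemplars of a category."""
    return sum(similarity(probe, ex, weights) for ex in category)

# A stimulus with a salient feature has a greater self-match than one without:
plain = ((0.0, 0.0), frozenset())
salient = ((0.0, 0.0), frozenset({"extreme_gray"}))
weights = (1.0, 0.0)  # selective dimensional attention: all weight on shape
```

With all attention on the shape dimension, every same-shape exemplar activates the others equally (generalization); the feature-match term biases the summed activation toward an exemplar carrying a salient feature (distinctiveness).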
Similarly, if the central brightness values are less salient, we should observe a within-category generalization effect (i.e., on the shape dimension) as in Experiment 1 (i.e., a significant main effect of Category Type in the tone categorization task), and the recognition performances should not differ from chance level (i.e., 25%). In Experiment 2a, extreme values of gray (Gray 1 and Gray 4) were used for the isolated objects, and in Experiment 2b, central values of gray (Gray 2 and Gray 3) were used as the colors for these objects.

Experiment 2a

Method

Participants

Thirty-two right-handed volunteer participants were recruited for Experiment 2a. All of them were students at the University Lumière Lyon 2 (France) and had normal or corrected-to-normal vision.

Apparatus and Procedure

Our aim was to directly test our predictions concerning feature matching during activation by manipulating intra-category similarities in such a way that the generalization effects previously observed for isolated exemplars would be replaced by distinctiveness effects. In this experiment, we used the same stimuli as in Experiment 1 except for the colors of the shapes, which were replaced by values along a brightness dimension. The CIE L*a*b* values for each gray were set in such a way that they differed only in terms of luminance (ranging from darker to lighter, Gray 1: 25.74, Gray 2: 48.03, Gray 3: 68.48, Gray 4: 87.75). The learning and association conditions in Experiment 2a were counterbalanced in the same way as in Experiment 1, and the values Gray 1 and Gray 4 were used only for the isolated objects in the two categories.
For half of the participants, the isolated object in the Sound Category condition was presented in Gray 1 without a sound feature (the corresponding non-isolated objects were displayed in Gray 2, 3, and 4 with a sound feature), while the isolated object in the No-Sound Category condition was displayed in Gray 4 with a sound feature (the corresponding non-isolated objects were displayed in Gray 1, 2, and 3 without a sound feature). For the other half of the participants, the reverse pattern was used. There were no other changes between Experiments 1 and 2a.

Results and Discussion

Learning Task

The participants performed the shape categorization task accurately (95% overall correct response rate, see Table 1). The analyses revealed a significant main effect of Category Type on latencies, F(1, 31) = 12.14, p < .05, ηp² = .28, but not on correct response rates, F < 1. They also revealed a significant interaction between Category Type and Isolation for both latencies, F(1, 31) = 31.91, p < .05, ηp² = .50, and correct response rates, F(1, 31) = 5.22, p < .05, ηp² = .14. In the Sound Category condition, the non-isolated objects were processed faster than the isolated one, F(1, 31) = 22.74, p < .05, and also marginally more accurately, F(1, 31) = 3.82, p = .06. By contrast, in the No-Sound Category condition, the isolated object was processed faster than the non-isolated ones, F(1, 31) = 18.77, p < .05, but not more accurately, F < 1. Overall, these results were similar to those of Experiment 1.

Priming Task

As in Experiment 1, the participants performed the categorization task accurately (with an overall correct response rate of 93%, see Table 2). The analyses of the correct response rates revealed neither any main effects (F(1, 31) = 2.32, p = .14, for Category Type and F < 1 for Isolation) nor any interaction, F < 1. As expected, in the case of latencies the analysis revealed only a significant interaction between Category Type and Isolation, F(1, 31) = 14.48, p < .05, ηp² = .32; neither the main effects of Category Type nor those of Isolation were significant, F < 1. This interaction is presented in Figure 3.

Figure 3. Mean reaction times for tone categorization as a function of Isolation and for each prime type in Experiment 2a. Error bars represent standard errors. SC = Sound Category; NSC = No-Sound Category.

As Figure 3 indicates, the non-isolated prime in the Sound Category and the isolated prime in the No-Sound Category enhanced the categorization of the target tones compared to the non-isolated prime in the No-Sound Category and the isolated prime in the Sound Category. A planned comparison revealed a significant difference between the categorization performances induced by isolated and non-isolated primes in the Sound Category, F(1, 31) = 6.27, p < .05. The corresponding planned comparison for the No-Sound Category showed a significant difference between categorization latencies for isolated and non-isolated shapes, F(1, 31) = 10.18, p < .05. The isolated shapes presented with white noise during the learning phase facilitated target sound discrimination. Unlike in Experiment 1, the non-isolated primes from the Sound Category significantly enhanced tone categorization (shorter latencies) compared to the non-isolated primes from the No-Sound Category, F(1, 31) = 9.62, p < .05 (see Brunel, Labeye, et al., 2009). To summarize, unlike in Experiment 1, when participants saw objects that had been associated with sounds, sound discrimination was always enhanced.

Recognition Task

When the participants were required to pick out the isolated exemplar seen in the Sound Category condition, the correct recognition rate was 50.0%, which was significantly above the chance level of 25%, t(31) = 2.78, p < .05. Similarly, when they had to identify the isolated exemplar seen in the No-Sound Category, the correct recognition rate was 46.9%, which again differed significantly from the 25% chance level, t(31) = 2.44, p < .05.

As in Experiment 1, we conducted a similarity rating study (16 participants) on our stimuli in order to see how people represented these stimuli in a psychological space. In the same way as for color, the participants used both the brightness and the shape dimensions to represent the objects in multidimensional space. Most importantly, and as expected, the two extreme gray values (Gray 1 and Gray 4) were judged to be less similar to each other than the two central values (Gray 2 and Gray 3), and the distance between them was greater in the multidimensional scaling solution. These extreme gray values therefore mean that the within-category similarities are not homogeneous.

In the learning task, the pattern of results was similar to that observed in Experiment 1 in that categorization was always faster and more accurate when the shape was accompanied by a sound feature. Once again, the most interesting results were observed in the priming and recognition tasks. Unlike in Experiment 1, the isolated prime exemplar in the Sound Category did not enhance tone categorization and was recognized significantly above chance level. In the No-Sound Category, the isolated prime exemplar enhanced tone categorization and was recognized significantly above chance level. Taken together, these results support our main hypothesis that feature saliency (extreme values of gray) can neutralize generalization along the shape dimension, with the result that exemplars with extreme values of gray are treated as distinct.
We therefore conducted Experiment 2b in order to ensure that when the isolated objects are displayed with the less salient and less discriminable central gray values (Gray 2 and Gray 3), this distinctiveness effect is no longer observed and, more specifically, that generalization is observed along the shape dimension. In other words, we predicted that Experiment 2b would replicate the results of Experiment 1.

Experiment 2b

Method

Participants

Thirty-two right-handed volunteer participants were recruited for Experiment 2b. All of them were students at the University Lumière Lyon 2 (France) and had normal or corrected-to-normal vision.

Stimulus, Material, Procedure, and Design

The stimuli, material, procedure, and design were identical to those used in Experiment 2a. The only difference was that Gray 2 and Gray 3 were used as the colors of the isolated objects in each category.

Results and Discussion

Learning Task

The participants performed the shape categorization task accurately (95% overall correct response rate, see Table 1). The analyses revealed a significant main effect of Category Type on latencies, F(1, 31) = 12.14, p < .05, ηp² = .28, but not on correct response rates, F < 1. They also revealed a significant interaction between Category Type and Isolation for both latencies, F(1, 31) = 31.91, p < .05, ηp² = .50, and correct response rates, F(1, 31) = 5.22, p < .05, ηp² = .14. In the Sound Category condition, the non-isolated objects were processed faster than the isolated one, F(1, 31) = 22.74, p < .05, and also marginally more accurately, F(1, 31) = 3.82, p = .06. By contrast, in the No-Sound Category condition, the isolated object was processed faster than the non-isolated ones, F(1, 31) = 18.77, p < .05, but not more accurately, F < 1. Overall, these results were similar to those of Experiments 1 and 2a.

Priming Task

In the same way as in the previous experiments, the participants performed the task accurately (91% overall correct response rate, see Table 2). The analyses performed on the correct responses revealed neither a significant main effect of either Category Type or Isolation, F < 1, nor any interaction, F < 1. As expected, the latency analysis revealed a significant effect of Category Type, F(1, 31) = 21.55, p < .05, ηp² = .41 (see Figure 4), and no significant effect of Isolation, F < 1. The responses were significantly faster (RT = 561 ms, SE = 24) when the target tone was preceded by an exemplar from the Sound Category rather than from the No-Sound Category (RT = 593 ms, SE = 25). It should be mentioned that we observed a trend toward an interaction between Category Type and Isolation, F(1, 31) = 4.10, p = .052, ηp² = .11. The trend was due to the fact that the responses for the isolated shape seen in the Sound Category condition were even faster than those for the non-isolated shapes, while the opposite was true for the shapes seen in the No-Sound Category condition.

Figure 4. Mean reaction times for tone categorization as a function of Isolation and for each prime type in Experiment 2b. Error bars represent standard errors. SC = Sound Category; NSC = No-Sound Category.

Recognition Task

When the participants were required to pick out the isolated exemplar from the Sound Category, the correct recognition rate was 6.3%, which was significantly below the chance level of 25%, t(31) = 4.31, p < .05. This low accuracy score did not reflect a bias toward the extreme gray values. In fact, the distribution of recognition responses among participants (irrespective of whether the responses were correct or false) was relatively homogeneous. As in Experiment 1, the participants were biased toward the non-isolated shapes, as if they assumed that the isolated shape had systematically been presented with a sound (the trend observed for the interaction between Isolation and Category Type supports this interpretation). When they had to identify the isolated exemplar seen in the No-Sound Category condition, the correct recognition rate was 25.0%, that is, exactly the chance response level, t < 1.

Once again, the pattern of results obtained in the learning task was consistent with those obtained in Experiments 1 and 2a. Most importantly, and as expected, this experiment replicated the results of Experiment 1 in both the priming and recognition tasks: the isolated prime in the Sound Category enhanced tone categorization and was not recognized above chance level (i.e., a multisensory generalization effect), whereas the isolated prime in the No-Sound Category did not enhance tone categorization and was also not recognized above chance level (i.e., a unisensory generalization effect).

General Discussion

The aim of this research was to assess generalization within a multisensory memory perspective. Here, we defined the multisensory generalization effect as the probability of a visual exemplar activating a sound feature even when it has been experienced without that sound feature. By manipulating the learning conditions, we were able to isolate this effect. In the learning phase, participants performed a shape categorization task in which there were unstated but strong contingencies between shapes, colors, and sounds (see also Schmidt & De Houwer, 2012; Schmidt et al., 2010). In the Sound Category condition, three differently colored but identically shaped objects ("non-isolated") were presented with a white noise, whereas a final exemplar of the same shape was not associated with this sound ("isolated").
In the No-Sound Category condition, the opposite pattern applied (i.e., an isolated shape was presented with a sound, whereas three non-isolated shapes were not). The observed pattern of results (which was consistent across all our experiments) showed that when the participants had to categorize shapes as members of the No-Sound Category, adding a sound feature to one of these shapes (the isolated shape) facilitated its categorization. In contrast, when they had to categorize shapes as members of the Sound Category, the presentation of one of these shapes without a sound feature interfered with its categorization. This result shows that the participants paid attention to the sound feature during the shape categorization task. It is now well known that providing a multisensory cue during perceptual processing can help people to be more accurate (e.g., Lehmann & Murray, 2005) or can bias their performance (see the McGurk effect, McGurk & MacDonald, 1976), even if this cue is not necessarily related to the task (e.g., Giard & Peronnet, 1999). However, the shortening of latencies is not, in itself, sufficient to conclude that integration takes place during the learning task. That is why we shall not seek to go beyond the reported results and simply state that the sound feature helped our participants to perform the shape categorization task. Could the sound feature have functioned as an alerting stimulus? Each shape presented with a sound could have benefited from the simultaneous sound because the sound provided an additional, redundant cue for the onset of a trial. However, such an explanation would only help explain the results obtained during the learning task. In fact, it is more likely that the other effects that we observed can be explained in terms of multisensory memory accounts rather than by a mere perceptual fluency effect (see Brunel, Labeye, et al., 2009; Brunel, Lesourd et al., 2010).
Based on our assumption that category exemplars are activated due to their similarity to presented objects (see also Logan, 2002; Nosofsky, 1986), and in line with our previous work (see Brunel, Labeye, et al., 2009; Brunel, Lesourd et al., 2010), we made predictions concerning the priming and recognition phases. The first finding of our experiments is that there is a priming effect between a shape prime and a sound target based on the integration that occurs during the learning phase. These data replicate the results previously obtained by Brunel and coworkers and confirm that activation of an auditory memory component (a component that is not perceptually present) can influence the perceptual processing of a sound that is presented later. This finding can be explained by the fact that the preactivation of a modality influences processing in the same modality (see Pecher et al., 2004; Van Dantzig et al., 2008). In our study, the visual prime automatically activated the corresponding associated sound (white noise) and prepared the system to process information in the same modality or information evoking this modality (Brunel, Lesourd et al., 2010; Vallet et al., 2010). Once again, this provides evidence in support of the assumption that memory mechanisms are not dissociated from perceptual mechanisms, but instead involve shared processes (Goldstone & Barsalou, 1998; for a review, see Versace et al., 2009). In Experiment 1, in particular, we found that visual objects from the Sound Category enhanced the categorization of auditory target tones more than the prime shapes from the No-Sound Category (either Isolated or Non-Isolated), irrespective of whether the prime shape had (Non-Isolated) or had not (Isolated) been experienced with sound. Even the isolated object from the No-Sound Category, which was experienced with a sound feature, did not facilitate tone categorization.
Furthermore, isolated exemplars were not subsequently recognized at significantly above chance level. Taken together, these results suggest that the participants were sensitized along the shape dimension during category learning, with the result that generalization occurred on this dimension in the priming and recognition tasks. It seems likely that there was some sensitization to shape due to its relevance for categorization, because the similarity ratings did not indicate that shape was a priori a more salient cue than color. Following category training, the participants generalized their responses from the more frequent non-isolated category exemplars to the less frequent isolated category exemplar. These results indicate three types of "spreading" of activation which occur simultaneously. First, activation spreads across modalities, that is, from a shape to the sound that has been frequently paired with this shape. Second, response patterns spread from the exemplars of a category to other similar exemplars of the same category. Third, integrated cross-modal responses spread from white noise to tones. An alternative explanation is that isolated exemplars are similar overall to the exemplars of their respective category across all dimensions (color and shape), and that the dimensions are equally weighted. If this were the case, then we should observe this result irrespective of the learning task. However, there is evidence that categorization induces sensitization along a diagnostic dimension (see Goldstone, 1994) and also that objects that are categorized together tend to be judged more similar by participants along this dimension (see Goldstone, Lippa, & Shiffrin, 2001).
We observed a multisensory generalization effect, with the response on the non-isolated exemplars in the Sound Category being generalized to the isolated exemplar, and a "unisensory" generalization effect, with the response on the non-isolated exemplars from the No-Sound Category being generalized to the isolated exemplar. These results are consistent with both the existence of multisensory exemplars that integrate shape and sound in the psychological space and classical selective attention accounts of exemplar models, such as the Generalized Context Model (Nosofsky, 1986, 1991). Employing these background theories, we provide evidence for generalization within a multisensory perspective. Despite the many studies conducted within a multisensory memory perspective (for reviews, see Barsalou, 2008; Glenberg, 2010; Versace et al., 2009), this is the first observation of categorization-based generalization across modalities, that is, from visual shape to sound. Moreover, we found evidence that priming can be employed as an implicit measure of this generalization effect. This reinforces the idea that experimental designs that make use of priming are useful tools for exploring the nature of exemplar representations in memory. Exemplar-based accounts of selective attention to category-diagnostic dimensions would have predicted a systematic generalization based on a common shape, irrespective of the dissimilarity of the objects on the category-irrelevant dimensions. In Experiment 2, we were able to show that this was not the case. Furthermore, there are other results in the literature that cannot be explained solely in terms of summed similarity along diagnostic dimensions (Cohen et al., 2001; Johansen et al., 2010; Nosofsky & Zaki, 2003).
For example, the advantage observed for distinctive stimuli in recognition memory (Nosofsky & Zaki, 2003) is mediated by discrete-feature matching that allows a given exemplar to have a better self-matching activation value (i.e., a bias in the summed activation computation toward this exemplar). By manipulating brightness instead of color, we were able to isolate this effect. When an isolated exemplar was displayed with a brightness value that strongly differentiated it from the other exemplars of its category (Experiment 2a), the learning results were similar to those found in Experiment 1. However, cross-modal generalization from the non-isolated to the isolated exemplars was no longer observed in the subsequent priming and recognition tasks. The extreme brightness values served to differentiate the isolated exemplar from the others (Johansen et al., 2010), with the result that generalization did not spread across these exemplars (Nosofsky & Zaki, 2003). However, Experiment 2b provided evidence that distinctiveness effects are observed only if the relevant features are salient; otherwise, generalization along the shape dimension is found for the isolated exemplars in both categories. When the isolated exemplars were assigned non-extreme values along the brightness dimension, we replicated the cross-modal generalization results obtained in Experiment 1. The generalization of learned cross-modal integrations across the members of a category was observed when objects belonging to the same category shared values on the shape dimension (Experiment 1) and when there was no salient feature to distinguish the objects to which responses could be generalized (Experiment 2a vs. 2b).
To summarize, we conclude that (1) generalization can occur both within a modality and across modalities, and (2) retrieval from memory depends on an activation mechanism based on selective attention to a dimension or to a feature during a global matching process (see also Nairne, 2006). Finally, the present research suggests that the distinctiveness effects we observed in earlier studies (Brunel, Oker, Riou, & Versace, 2010) occur not only at a featural level but also at an exemplar level. One remaining issue relates to whether multiple levels of representation (i.e., feature and exemplar, see Navarro & Lee, 2002), or multiple levels of processing (i.e., dimensional and featural), or both, are involved during retrieval. This is an interesting issue that should be addressed in future research.

Acknowledgments

Preliminary results of Experiment 2 were presented at the 31st Annual Conference of the Cognitive Science Society (Brunel, Vallet, Riou, & Versace, 2009). However, the present article contains original analyses and results that have not been published. This research was supported by the FYSSEN Foundation (94, rue de Rivoli, 75001 Paris, France). We would like to thank Diana Pecher, Jason Arndt, Thorsten Meiser, James R. Schmidt, and two anonymous reviewers for their helpful comments on previous versions of this article.

References

Ashby, F. G., & Townsend, J. T. (1986). Varieties of perceptual independence. Psychological Review, 93, 154–179.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645. doi: 10.1146/annurev.psych.59.103006.093639
Beisteiner, R., Höllinger, P., Lindinger, G., Lang, W., & Berthoz, A. (1995). Mental representations of movements. Brain potentials associated with imagination of hand movements. Electroencephalography and Clinical Neurophysiology, 96, 183–193.
Bensafi, M., Sobel, N., & Khan, R. M. (2007).
Hedonic-specific activity in the piriform cortex during odor imagery mimics that during odor perception. Journal of Neurophysiology, 98, 3254–3262. doi: 10.1152/jn.00349.2007
Brunel, L., Labeye, E., Lesourd, M., & Versace, R. (2009). The sensory nature of episodic memory: Sensory priming due to memory trace activation. Journal of Experimental Psychology: Learning, Memory and Cognition, 35, 1081–1088. doi: 10.1037/a0015537
Brunel, L., Lesourd, M., Labeye, E., & Versace, R. (2010). The sensory nature of knowledge: Sensory priming effect in semantic categorization. The Quarterly Journal of Experimental Psychology, 63, 955–964. doi: 10.1080/17470210903134369
Brunel, L., Oker, A., Riou, B., & Versace, R. (2010). Memory and consciousness: Trace distinctiveness in memory retrievals. Consciousness & Cognition, 19, 926–937. doi: 10.1016/j.concog.2010.08.006
Brunel, L., Vallet, G., Riou, B., & Versace, R. (2009). The sensory nature of knowledge: Generalization vs. specification mechanisms. In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2789–2794). Austin, TX: Cognitive Science Society.
Chen, Y.-C., & Spence, C. (2010). When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures. Cognition, 114, 389–404. doi: 10.1016/j.cognition.2009.10.012
Cohen, A., Nosofsky, R., & Zaki, S. (2001). Category variability, exemplar similarity, and perceptual classification. Memory & Cognition, 29, 1165–1175.
Connell, L., & Lynott, D. (2010). Look but don't touch: Tactile disadvantage in processing of modality-specific words. Cognition, 115, 1–9. doi: 10.1016/j.cognition.2009.10.005
Cooke, T., Jäkel, F., Wallraven, C., & Bülthoff, H. H. (2007).
Multisensory similarity and categorization of novel, three-dimensional objects. Neuropsychologia, 45, 484–495. doi: 10.1016/j.neuropsychologia.2006.02.009
Delvenne, J.-F., Cleeremans, A., & Laloyaux, C. (2009). Feature bindings are maintained in visual short-term memory without sustained focused attention. Experimental Psychology, 57, 108–116. doi: 10.1027/1618-3169/a000014
Giard, M. H., & Peronnet, F. (1999). Auditory-visual integration during multisensory object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience, 11, 473–490.
Glenberg, A. M. (2010). Embodiment as a unifying perspective for psychology. Wiley Interdisciplinary Reviews: Cognitive Science, 1, 586–596. doi: 10.1002/wcs.55
Goldstone, R. L. (1994). Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General, 123, 178–200.
Goldstone, R. L. (2000). Unitization during category learning. Journal of Experimental Psychology: Human Perception and Performance, 26, 86–112.
Goldstone, R. L., & Barsalou, L. W. (1998). Reuniting perception and conception. Cognition, 65, 231–262.
Goldstone, R. L., Gerganov, A., Landy, D., & Roberts, M. E. (2008). Learning to see and conceive. In L. Tommasi, M. Peterson, & L. Nadel (Eds.), The new cognitive sciences (pp. 163–188). Cambridge, MA: MIT Press.
Goldstone, R. L., Lippa, Y., & Shiffrin, R. M. (2001). Altering object representations through category learning. Cognition, 78, 27–43.
Hall, G. (1991). Perceptual and associative learning. New York, NY: Clarendon Press/Oxford University Press.
Hintzman, D. L. (1986). "Schema abstraction" in a multiple-trace model. Psychological Review, 95, 528–551.
Hommel, B. (1998). Event files: Evidence for automatic integration of stimulus-response episodes. Visual Cognition, 5, 183–216.
Johansen, M. K., Fouquet, N., & Shanks, D. R. (2010). Featural selective attention, exemplar representation, and the inverse base-rate effect.
Psychonomic Bulletin & Review, 17, 637–643. doi: 10.3758/PBR.17.5.637
Labeye, E., Oker, A., Badard, G., & Versace, R. (2008). Activation and integration of motor components in a short-term priming paradigm. Acta Psychologica, 129, 108–111. doi: 10.1016/j.actpsy.2008.04.010
Lehmann, S., & Murray, M. (2005). The role of multisensory memories in unisensory object discrimination. Cognitive Brain Research, 24, 326–334. doi: 10.1016/j.cogbrainres.2005.02.005
Logan, G. (2002). An instance theory of attention and memory. Psychological Review, 109, 376–400.
Martin, A., Haxby, J. V., Lalonde, F. M., Wiggs, C. L., & Ungerleider, L. G. (1995). Discrete cortical regions associated with knowledge of color and knowledge of action. Science, 270, 102–105.
McGurk, H., & MacDonald, J. W. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
Meyer, M., Baumann, S., Marchina, S., & Jancke, L. (2007). Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation. BMC Neuroscience, 8, 14. doi: 10.1186/1471-2202-8-14
Nairne, J. (2006). Modeling distinctiveness: Implications for general memory theory. In R. R. Hunt & J. B. Worthen (Eds.), Distinctiveness and memory (pp. 27–46). Oxford, NY: Oxford University Press.
Navarro, D. J., & Lee, M. D. (2002). Combining dimensions and features in similarity-based representations. In S. Becker, S. Thrun, & K. Obermayer (Eds.), Advances in neural information processing systems 15: Proceedings of the 2002 conference (pp. 67–74). Cambridge, MA: MIT Press.
Nosofsky, R. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39–57.
Nosofsky, R. (1991). Tests of an exemplar model for relating perceptual classification and recognition memory. Journal of Experimental Psychology: Human Perception and Performance, 17, 3–27.
Nosofsky, R. (1992). Similarity scaling and cognitive process models.
Annual Review of Psychology, 43, 25–53.
Nosofsky, R., & Zaki, S. (2003). A hybrid-similarity exemplar model for predicting distinctiveness effects in perceptual old-new recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 1194–1209.
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2004). Sensorimotor simulations underlie conceptual representations: Modality-specific effects of prior activation. Psychonomic Bulletin & Review, 11, 164–167.
Richter, T., & Zwaan, R. (2010). Integration of perceptual information in word access. The Quarterly Journal of Experimental Psychology, 63, 81–107. doi: 10.1080/17470210902829563
Riou, B., Lesourd, M., Brunel, L., & Versace, R. (2011). Visual memory and visual perception: When memory improves visual search. Memory & Cognition, 39, 1094–1102. doi: 10.3758/s13421-011-0075-2
Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7, 573–605.
Schmidt, J. R., & De Houwer, J. (2012). Contingency learning with evaluative stimuli. Experimental Psychology, 59, 175–182. doi: 10.1027/1618-3169/a000141
Schmidt, J. R., De Houwer, J., & Besner, D. (2010). Contingency learning and unlearning in the blink of an eye: A resource dependent process. Consciousness & Cognition, 19, 235–250. doi: 10.1016/j.concog.2009.12.016
Schneider, T., Engel, A., & Debener, S. (2008). Multisensory identification of natural objects in a two-way crossmodal priming paradigm. Experimental Psychology, 55, 121–132. doi: 10.1027/1618-3169.55.2.121
Shepard, R. (1987). Toward a universal law of generalization for psychological science. Science, 237, 1317–1323.
Topolinski, S. (2012). The sensorimotor contribution to implicit memory, familiarity, and recollection. Journal of Experimental Psychology: General, 141, 260–281.
doi: 10.1037/a0025658
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327–352.
Vallet, G., Brunel, L., & Versace, R. (2010). The perceptual nature of the cross-modal priming effect: Arguments in favor of a sensory-based conception of memory. Experimental Psychology, 57, 376–382. doi: 10.1027/1618-3169/a000045
Van Dantzig, S., Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2008). Perceptual processing affects conceptual processing. Cognitive Science, 32, 579–590. doi: 10.1080/03640210802035365
Versace, R., Labeye, E., Badard, G., & Rose, M. (2009). The contents of long-term memory and the emergence of knowledge. The European Journal of Cognitive Psychology, 21, 522–560. doi: 10.1080/09541440801951844
Wheeler, M., Petersen, S., & Buckner, R. (2000). Memory's echo: Vivid remembering reactivates sensory-specific cortex. PNAS, 97, 11125–11129.

Received October 15, 2011
Revision received July 13, 2012
Accepted July 17, 2012
Published online October 10, 2012

Lionel Brunel
Laboratoire Epsylon
Université Paul Valery (campus Saint Charles)
Route de Mende
34199 Montpellier
France
E-mail [email protected]