Previous research has disagreed about whether a difficult cognitive skill is best learned by beginning with easy or difficult examples. Two experiments are described that clarify this debate. Participants in both experiments received one of three types of training on a difficult perceptual categorization task. In one condition participants began with easy examples, then moved to examples of intermediate difficulty, and finished with the most difficult examples. In a second condition this order was reversed, and in a third condition, participants saw examples in a random order. The results depended on the type of categories that participants were learning. When the categories could be learned via explicit reasoning (a rule-based task), all three training procedures were equally effective. However, when the categorization rule was difficult to describe verbally (an information-integration task), participants who began with the most difficult items performed much better than participants in the other two conditions. Conventional wisdom suggests that the best way to learn a difficult cognitive skill is to begin with easy examples, master those, and then gradually increase example difficulty. A variety of evidence supports this general hypothesis. For example, a popular training procedure, called the method of errorless learning (Baddeley, 1992; Terrace, 1964), adopts an extreme form of this strategy in which the initial examples are so easy and each subsequent increase in difficulty is so small that participants never make errors. The basic assumption of this method is that errors that occur during training strengthen incorrect associations and are therefore harmful to the learning process. Errorless learning has proven to be an effective training procedure in a wide variety of tasks (e.g.
Objective: To provide a select review of our applications of quantitative modeling to highlight the utility of such approaches for better understanding the neuropsychological deficits associated with various neurologic and psychiatric diseases. Method: We review our work examining category learning in various patient populations, including individuals with basal ganglia disorders (Huntington's disease and Parkinson's disease), amnesia, and eating disorders. Results: Our review suggests that the use of quantitative models has enabled a better understanding of the learning deficits often observed in these conditions and has allowed us to form novel hypotheses about the neurobiological bases of those deficits. Conclusions: We feel that the use of neurobiologically inspired quantitative modeling holds great promise in neuropsychological assessment and that future clinical measures should incorporate such models as part of their standard scoring. Appropriate studies need to be completed, however, to determine whether such modeling techniques adhere to the rigorous psychometric properties necessary for valid and reliable application in a clinical setting. This article selectively reviews our work using quantitative modeling to better understand learning deficits associated with various neurologic conditions. While there have been many applications of modeling in various patient populations, the literature that we review allows us to describe a series of studies that have made use of a specific technique, the Perceptual Categorization Task (PCT; also called the General Recognition Randomization Technique; Ashby & Gott, 1988), which enables the consistent application of a set of quantitative models at the individual-participant level, thus making these models highly relevant for evaluating individual differences in cognitive functioning. In this paper,
Journal of Experimental Psychology: Learning, Memory and Cognition, Nov 1, 2018
Interventions for drug abuse and other maladaptive habitual behaviors may yield temporary success, but are often fragile and relapse is common. This implies that current interventions do not erase or substantially modify the representations that support the underlying addictive behavior; that is, they do not cause true unlearning. One example of an intervention that fails to induce true unlearning comes from Crossley, Ashby, and Maddox (2013, Journal of Experimental Psychology: General), who reported that a sudden shift to random feedback did not cause unlearning of category knowledge obtained through procedural systems, and they also reported results suggesting that this failure is because random feedback is non-contingent on behavior. These results imply the existence of a mechanism that (1) estimates feedback contingency, and (2) protects procedural learning from modification when feedback contingency is low (i.e., during random feedback). This article reports the results of an experiment in which increasing cognitive load via an explicit dual-task during the random feedback period facilitated unlearning. This result is consistent with the hypothesis that the mechanism that protects procedural learning when feedback contingency is low depends on executive function.
The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning.
Determining whether perceptual properties are processed independently is an important goal in perceptual science, and tools to test independence should be widely available to experimental researchers. The best analytical tools to test for perceptual independence are provided by General Recognition Theory (GRT), a multidimensional extension of signal detection theory. Unfortunately, there is currently a lack of software implementing GRT analyses that is ready to use by experimental psychologists and neuroscientists with little training in computational modeling. This paper presents grtools, an R package developed with the explicit aim of providing experimentalists with the ability to perform full GRT analyses using only a couple of command lines. We describe the software and provide a practical tutorial on how to perform each of the analyses available in grtools. We also provide advice to researchers on best practices for experimental design and interpretation of results.
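The grtools package automates full GRT analyses; as a flavor of the summary statistics involved, the sketch below computes marginal d' values on one dimension at each level of a second dimension. Roughly equal marginal d' across levels is consistent with (though not by itself proof of) perceptual separability. The hit and false-alarm rates are invented for illustration, not data from any study discussed here.

```python
# Minimal sketch of a GRT-style marginal sensitivity check.
# All rates below are hypothetical illustration values.
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Marginal d' on dimension A, computed separately at each of the two
# levels of dimension B. Similar values suggest the representation of
# A does not change with the level of B.
d_A_at_B1 = dprime(0.84, 0.16)  # roughly 2.0
d_A_at_B2 = dprime(0.82, 0.18)  # roughly 1.8
```

A full GRT analysis also examines marginal response invariance and fits the multivariate model itself, which is what grtools wraps into a few commands.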
In recent years there has been explosive growth in the number of neuroimaging studies performed u... more In recent years there has been explosive growth in the number of neuroimaging studies performed using functional Magnetic Resonance Imaging (fMRI). The field that has grown around the acquisition and analysis of fMRI data is intrinsically interdisciplinary in nature and involves contributions from researchers in neuroscience, psychology, physics and statistics, among others. A standard fMRI study gives rise to massive amounts of noisy data with a complicated spatio-temporal correlation structure. Statistics plays a crucial role in understanding the nature of the data and obtaining relevant results that can be used and interpreted by neuroscientists. In this paper we discuss the analysis of fMRI data, from the initial acquisition of the raw data to its use in locating brain activity, making inference about brain connectivity and predictions about psychological or disease states. Along the way, we illustrate interesting and important issues where statistics already plays a crucial role. We also seek to illustrate areas where statistics has perhaps been underutilized and will have an increased role in the future.
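The workhorse of the activation-localization step described above is a voxelwise general linear model: each voxel's time series is regressed on a task predictor. This minimal pure-Python sketch estimates the single regression weight (beta) for one simulated voxel and a boxcar task predictor; the time series and amplitude are invented illustration values, not real fMRI data.

```python
# Minimal single-regressor GLM for one voxel (simulated data).

def fit_beta(signal, regressor):
    """Least-squares slope of the voxel signal on a task regressor."""
    n = len(regressor)
    mx = sum(regressor) / n
    my = sum(signal) / n
    num = sum((x - mx) * (y - my) for x, y in zip(regressor, signal))
    den = sum((x - mx) ** 2 for x in regressor)
    return num / den

# Boxcar task predictor (task off/on) and a noisy voxel time series
# that responds to the task with an amplitude of roughly 2.
task = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
voxel = [0.1, -0.2, 0.0, 2.1, 1.9, 2.2, 0.2, -0.1, 0.0, 1.8, 2.0, 2.1]
beta = fit_beta(voxel, task)  # close to 2
```

Real analyses convolve the predictor with a hemodynamic response function, add nuisance regressors, and correct the resulting statistical maps for massive multiple comparisons, which is where the statistical issues the paper discusses arise.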
In two experiments, observers learned two types of category structures: those in which perfect accuracy could be achieved via some explicit rule-based strategy and those in which perfect accuracy required integrating information from separate perceptual dimensions at some predecisional stage. At the end of training, some observers were required to switch their hands on the response keys, whereas the assignment of categories to response keys was switched for other observers. With the rule-based category structures, neither change in response instructions interfered with categorization accuracy. However, with the information-integration structures, switching response key assignments interfered with categorization performance, but switching hands did not. These results are consistent with the hypothesis that abstract category labels are learned in rule-based categorization, whereas response positions are learned in information-integration categorization. The association to response positions also supports the hypothesis of a procedural-learning-based component to information-integration categorization.
Traditionally, category learning has been investigated by examining how response accuracy changes with experience. Often, such data are presented in the form of a learning curve, which plots proportion correct against trial or block number. Learning curves are a good nonparametric method for investigating category learning, because no model needs to be specified during their construction. Learning curves are also relatively simple to compute and often provide an effective method for comparing task difficulty across different conditions of an experiment (e.g., Shepard, Hovland, & Jenkins, 1961). On the other hand, the use of learning curves to test among competing models is severely limited because, in most cases, a variety of different models will be capable of predicting the same learning curves. For example, a popular assumption of many category-learning models is that the trial-by-trial learning of categories is accomplished via a process of gradient descent on the error surface.
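The learning-curve construction described above is simple enough to sketch directly: score each trial as correct or not, partition trials into consecutive blocks, and plot the proportion correct per block. The block size and the simulated trial data below are illustrative assumptions, not values from any experiment discussed here.

```python
# Minimal sketch: a learning curve as proportion correct per block.

def learning_curve(correct, block_size):
    """Return the proportion correct in each consecutive block of trials.

    correct: sequence of 1 (correct) / 0 (error) trial outcomes.
    """
    curve = []
    for start in range(0, len(correct), block_size):
        block = correct[start:start + block_size]
        curve.append(sum(block) / len(block))
    return curve

# 12 hypothetical trials scored 1/0, summarized in blocks of 4 trials:
trials = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
curve = learning_curve(trials, 4)  # rises across blocks
```

Note that nothing here commits to a learning model, which is exactly the nonparametric appeal (and the model-discrimination weakness) the abstract describes.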
The biased choice model (BCM) of Luce (1963) and general recognition theory (GRT) of Ashby and Townsend (1986) are compared with respect to their ability to account for data from identification experiments. Specifically, by using GRT we investigate the ability of the BCM to account for data characterized by violations of perceptual independence and perceptual and decisional separability, and by homoscedasticity of the perceptual distributions. The effects of varying the dimensionality of the perceptual space are also studied. It is shown that the BCM has extreme difficulty accounting for data from experiments in which: (1) the stimuli are constructed from two physical components; (2) the perceptual means are positioned unevenly in the perceptual space; and (3) different stimuli are characterized by different amounts of perceptual variability or perceptual dependence. In general, however, the BCM has no difficulty with violations of perceptual or decisional separability. It is shown that GRT is not a special case of the BCM but that the two models do intersect. The BCM is shown to be closely tied to a number of different forms of symmetry that can be found in identification data and these are used to construct specific GRT models that are compatible with the BCM.
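The BCM itself is compact: the probability of giving response j to stimulus i is proportional to the response bias for j times the similarity between stimuli i and j. The sketch below computes a predicted identification confusion matrix from that rule; the similarity and bias values are made-up illustration numbers, not parameters from the paper.

```python
# Sketch of Luce's (1963) biased choice model:
#   P(respond j | stimulus i) = b_j * eta_ij / sum_k b_k * eta_ik
# where eta is a symmetric similarity matrix with eta_ii = 1 and
# b holds the response biases.

def bcm_probabilities(eta, bias):
    """Predicted confusion matrix: rows = stimuli, columns = responses."""
    probs = []
    for row in eta:
        denom = sum(b * s for b, s in zip(bias, row))
        probs.append([b * s / denom for b, s in zip(bias, row)])
    return probs

# Two-stimulus example: cross-similarity 0.4 and a mild bias toward
# the second response (hypothetical values).
eta = [[1.0, 0.4],
       [0.4, 1.0]]
bias = [0.4, 0.6]
P = bcm_probabilities(eta, bias)  # each row sums to 1
```

GRT, by contrast, derives such matrices from multivariate perceptual distributions and decision bounds, which is why the two models can be compared on the same identification data.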
Identifying the strategy that participants use in laboratory experiments is crucial in interpreting the results of behavioral experiments. This article introduces a new modeling procedure called iterative decision-bound modeling (iDBM), which iteratively fits decision-bound models to the trial-by-trial responses generated from single participants in perceptual categorization experiments. The goals of iDBM are to identify: (1) all response strategies used by a participant, (2) changes in response strategy, and (3) the trial number at which each change occurs. The new method is validated by testing its ability to identify the response strategies used in noisy simulated data. The benchmark simulation results show that iDBM is able to detect and identify strategy switches during an experiment and accurately estimate the trial number at which the strategy change occurs in low to moderate noise conditions. The new method is then used to reanalyze data from Ell and Ashby (2006). Applying iDBM revealed that increasing category overlap in an
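The building block that iDBM fits repeatedly is a decision-bound model: a bound partitions the stimulus space, and noise in perception and criterion placement makes responses near the bound probabilistic. This sketch shows one such component, a noisy linear bound over a two-dimensional stimulus; all parameter values are illustrative assumptions, and the full iDBM procedure (iterative fitting and change-point detection) is not reproduced here.

```python
# Minimal noisy linear decision-bound model for a 2-D stimulus (x, y).
# The bound is h(x, y) = a*x + b*y + c; respond "A" when h > 0, with
# Gaussian perceptual/criterial noise of standard deviation sigma.
from math import erf, sqrt

def p_respond_A(x, y, a, b, c, sigma):
    """Probability of responding 'A' given a noisy linear bound."""
    h = a * x + b * y + c
    # Normal CDF of h / sigma, written with the error function.
    return 0.5 * (1.0 + erf(h / (sigma * sqrt(2.0))))

# A stimulus far on the 'A' side of the bound is classified almost
# surely; a stimulus exactly on the bound yields p = 0.5.
p_far = p_respond_A(3.0, 1.0, a=1.0, b=-1.0, c=0.0, sigma=0.5)
p_on = p_respond_A(2.0, 2.0, a=1.0, b=-1.0, c=0.0, sigma=0.5)
```

Fitting such a model to a window of trials by maximum likelihood, sliding that window across the session, and flagging where the best-fitting bound family changes is the general flavor of the iterative procedure.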
Contents: F.G. Ashby, Multivariate Probability Distributions. Part I: Similarity, Preference, and Choice. J.L. Zinnes, D.B. MacKay, A Probabilistic Multidimensional Scaling Approach: Properties and Procedures. G. De Soete, J.D. Carroll, Probabilistic Multidimensional Models of Pairwise Choice Data. U. Böckenholt, Multivariate Models of Preference and Choice. D.M. Ennis, K. Mullen, A General Probabilistic Model for Triad Discrimination, Preferential Choice, and Two-Alternative Identification. N.A. Perrin, Uniting Identification, Similarity and Preference: General Recognition Theory. Part II: Interactions Between Perceptual Dimensions. W.T. Maddox, Perceptual and Decisional Separability. H. Kadlec, J.T. Townsend, Signal Detection Analyses of Dimensional Interactions. T.D. Wickens, L.A. Olzak, Three Views of Association in Concurrent Detection Ratings. Part III: Detection, Identification, and Categorization. J.P. Thomas, L.A. Olzak, Simultaneous Detection and Identification. D.M. Ennis, Modeling Similarity and Identification When There Are Momentary Fluctuations in Psychological Magnitudes. A.A.J. Marley, Developing and Characterizing Multidimensional Thurstone and Luce Models for Identification and Preference. Y. Takane, T. Shibayama, Structures in Stimulus Identification Data. R.M. Nosofsky, Exemplar-Based Approach to Relating Categorization, Identification, and Recognition. M.M. Cohen, D.W. Massaro, On the Similarity of Categorization Models. F.G. Ashby, Multidimensional Models of Categorization.
Papers by Gregory Ashby