Articles/Chapters by Corrine Occhino
Journal of Deaf Studies and Deaf Education, 2019
When deaf bilinguals are asked to make semantic similarity judgments of two written words, their responses are influenced by the sublexical relationship of the signed language translations of the target words. This study investigated whether the observed effects of ASL activation on English print depend on (a) an overlap in the syllabic structure of the signed translations or (b) initialization, an effect of contact between ASL and English that has resulted in a direct representation of English orthographic features in ASL sublexical form. Results demonstrate that neither of these conditions is required for, or enhances, effects of cross-language activation. The experimental outcomes indicate that deaf bilinguals discover the optimal mapping between their two languages in a manner that is not constrained by privileged sublexical associations.
The Construction of Words: Advances in Construction Morphology, 2018
In this chapter, we extend a usage-based theory of Construction Morphology to the analysis of sign language structure, to address two long-standing categorization problems in sign language linguistics. Sign language linguistics traditionally distinguishes monomorphemic core lexical signs from multimorphemic classifier construction signs, based on whether or not a sign form exhibits analyzable morphological structure ("the Core vs. Classifier problem"). In this tradition, core signs are retrieved from the lexicon, while classifier signs are derived productively via grammatical rules. Sign linguists are also accustomed to classifying discrete and listable aspects of sign structure as language, while aspects of signing that exhibit more holism or gradience are considered to be gesture ("the Language vs. Gesture problem"). These categories of core vs. classifier on the one hand and language vs. gesture on the other derive from a shared source: the assumption that linguistic forms are built up from discrete building blocks. Instead, we analyze multimodal usage events in terms of constructions, conventional patterns of meaning and form containing both fixed elements and variable slots and organized in a structured network. We argue that the Construction Morphology approach leads to a uniform analysis of core and classifier signs alike, without resorting to an a priori distinction between language and gesture.
Bilingualism: Language and Cognition 20 (2), 2017
What is the time course of cross-language activation in deaf sign-print bilinguals? Prior studies demonstrating cross-language activation in deaf bilinguals used paradigms that would allow strategic or conscious translation. This study investigates whether cross-language activation can be eliminated by reducing the time available for lexical processing. Deaf ASL-English bilinguals and hearing English monolinguals viewed pairs of English words and judged their semantic similarity. Half of the stimuli had phonologically related translations in ASL, but participants saw only English words. We replicated prior findings of cross-language activation despite the introduction of a much faster rate of presentation. Further, the deaf bilinguals were as fast or faster than hearing monolinguals despite the fact that the task was in their second language. The results allow us to rule out the possibility that deaf ASL-English bilinguals only activate ASL phonological forms when given ample time for strategic or conscious translation across their two languages. doi:10.1017/S136672891500067X
Behavioral and Brain Sciences, 2017
Goldin-Meadow & Brentari (G-M&B) rely on a formalist approach to language, leading them to seek objective criteria by which to distinguish language and gesture. This results in the assumption that gradient aspects of signs are gesture. Usage-based theories challenge this view, maintaining that all linguistic units exhibit gradience. Instead, we propose that the distinction between language and gesture is a categorization problem.
Complutense Journal of English Studies, 2017
While the arbitrariness of the sign has occupied a central space in linguistic theory for a century, counter-evidence to this basic tenet has been mounting. Recent findings from cross-linguistic studies on spoken languages have suggested that, contrary to purely arbitrary distributions of phonological content, languages often exhibit systematic and regular phonological and sub-phonological patterns of form-meaning mappings. To date, studies of distributional tendencies of this kind have not been conducted for signed languages.
In an investigation of phoneme distribution in American Sign Language (ASL) and Língua Brasileira de Sinais (Libras), tokens of the claw-5 handshape were extracted and analyzed for whether the handshape contributed to the overall meaning of the sign. The data suggest that the claw-5 handshape is not randomly distributed across the lexicon, but clusters around six form-meaning patterns: convex-concave, unitary elements, non-compact matter, hand-as-hand, touch, and interlocking. Interestingly, feature-level motivations were uncovered as the source of the mappings.
These findings are considered within a new cognitive framework to better understand how and why sub-morphemic units develop and maintain motivated form-meaning mappings. The model proposed here, Embodied Cognitive Phonology, builds on cognitive and usage-based approaches but incorporates theories of embodiment to address the source of the claw-5 mappings. Embodied Cognitive Phonology provides a unifying framework for understanding the perceived differences in phonological patterning and organization across the modalities. Both language-internal and language-external sources of motivation contribute to the emergence of form-meaning mappings. Arbitrariness is argued to be but one possible outcome from the process of emergence and schematization of phonological content, and exists alongside motivation as a legitimate state of linguistic units of all sizes of complexity. Importantly, because language is dynamic, these states are not fixed, but are in continuous flux, as language users reinvent and reinterpret form and meaning over time.
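To make the corpus procedure more concrete, the following is a minimal sketch of the kind of tally that underlies a token-based distributional analysis like this one. The example tokens, glosses, and pattern labels are illustrative assumptions only, not the study's actual data or code.

```python
# Hypothetical sketch: each extracted claw-5 token is hand-coded for language
# and for which (if any) of the six form-meaning patterns it instantiates,
# then counts per pattern are tallied.
from collections import Counter

# (language, gloss, pattern) -- illustrative tokens, not the actual data set
tokens = [
    ("ASL", "RAIN", "non-compact matter"),
    ("ASL", "BALL", "convex-concave"),
    ("ASL", "AUDIENCE", "unitary elements"),
    ("Libras", "GARRA", "hand-as-hand"),
    ("ASL", "MESH", "interlocking"),
    ("Libras", "PEGAR", "touch"),
    ("ASL", "OTHER-SIGN", None),  # a token judged not to carry a motivated mapping
]

pattern_counts = Counter(p for _, _, p in tokens if p is not None)
unmotivated = sum(1 for _, _, p in tokens if p is None)

for pattern, n in pattern_counts.most_common():
    print(f"{pattern}: {n} tokens ({n / len(tokens):.0%} of claw-5 tokens)")
print(f"no clear form-meaning mapping: {unmotivated}")
```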
Gesture, 2017
A renewed interest in understanding the role of iconicity in the structure and processing of signed languages is hampered by the conflation of iconicity and transparency in the definition and operationalization of iconicity as a variable. We hypothesize that iconicity is fundamentally different from transparency, since it arises from individuals' experience with the world and their language, and is subjectively mediated by the signers' construal of form and meaning. We test this hypothesis by asking American Sign Language (ASL) signers and German Sign Language (DGS) signers to rate iconicity of ASL and DGS signs. Native signers consistently rate signs in their own language as more iconic than foreign language signs. The results demonstrate that the perception of iconicity is intimately related to language-specific experience. Discovering the full ramifications of iconicity for the structure and processing of signed languages requires operationalizing this construct in a manner that is sensitive to language experience.
Cognitive Linguistics, 2016
This paper presents a usage-based, Cognitive Grammar analysis of Place as a symbolic structure in signed languages. We suggest that many signs are better viewed as constructions in which schematic or specific formal properties are extracted from usage events alongside specific or schematic meaning. We argue that pointing signs are complex constructions composed of a pointing device and a Place, each of which are symbolic structures having form and meaning. We extend our analysis to antecedent-anaphora constructions and directional verb constructions. Finally, we discuss how the usage-based approach suggests a new way of understanding the relationship between language and gesture.
Signed languages are natural human languages used by deaf people around the world as their primary language. This chapter explores the linguistic study of signed languages, their linguistic properties, and aspects of their genetic and historical relationships. The chapter focuses on historical change that has occurred in signed languages, showing that the same linguistic processes that contribute to historical change in spoken languages, such as lexicalization, grammaticization, and semantic change, contribute to historical change in signed languages. Historical influences unique to signed languages are also discussed, including the educational practice of borrowing and adapting signs, efforts to create systems for representing the surrounding spoken/written language, and the incorporation of lexicalized fingerspelling.
In grammar books and dictionaries of American Sign Language (ASL), the word HAPPEN is generally described as a conjunction. We challenge the idea that HAPPEN functions as a 'conjunction' and instead propose an analysis of meaning and form which leads to the conclusion that HAPPEN functions as an evidential marker that is grammaticalizing from the canonical verbal use. Based on the Morford and MacFarlane (2003) corpus of 4,000 words in ASL, as well as ASL video blogs (VLOGs), interviews, and public service announcements (PSAs) collected from public YouTube channels, we extracted 50 tokens of HAPPEN from various native signers. Our analysis bears out at least three distinct uses of HAPPEN based on syntactic distribution: verbal (34%), nominal (12%), and what we call an evidential marker (54%), denoted respectively by the notation HAPPEN, HAPPEN+, and HAPPEN1. In addition to varied syntactic distribution, we also observe variations in phonological form and a shift in semantics toward a more subjective meaning in HAPPEN1. We conclude that the canonical form of HAPPEN is undergoing a grammaticalization process evidenced by reduction in phonological form, syntactic constriction, and semantic bleaching.
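As a quick sanity check on the reported distribution, here is a minimal sketch of the tally. The raw counts (17, 6, 27) are inferred from the reported percentages of the 50 extracted tokens; they are an assumption for illustration, not figures taken from the study's materials.

```python
# Sketch of the HAPPEN token tally: 34% verbal, 12% nominal, 54% evidential of 50 tokens.
tokens = {"HAPPEN (verbal)": 17, "HAPPEN+ (nominal)": 6, "HAPPEN1 (evidential)": 27}
total = sum(tokens.values())  # 50 coded corpus tokens

for use, n in tokens.items():
    print(f"{use}: {n}/{total} = {n / total:.0%}")
```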
Dissertation by Corrine Occhino
This dissertation uses corpus data from ASL and Libras (Brazilian Sign Language) to investigate the distribution of a series of static and dynamic handshapes across the two languages. While traditional phonological frameworks argue that handshape distribution is a facet of well-formedness constraints and articulatory ease (Brentari, 1998), the data analyzed here suggest that the majority of handshapes cluster around schematic form-meaning mappings. Furthermore, these schematic mappings are shown to be motivated by both language-internal and language-external construals of formal articulatory properties and embodied experiential gestalts.
Usage-based approaches to phonology (Bybee, 2001) and cognitively oriented constructional approaches (Langacker, 1987) have recognized that phonology is not modular. Instead, phonology is expected to interact with all levels of grammar, including semantic association. In this dissertation I begin to develop a cognitive model of phonology which views phonological content as similar in kind to other constructional units of language. I argue that, because formal units of linguistic structure emerge from the extraction of commonalities across usage events, phonological form is not immune from an accumulation of semantic associations. Finally, I demonstrate that appealing to such approaches allows one to account for both idiosyncratic, unconventionalized mappings seen in creative language use, as well as motivation in highly conventionalized form-meaning associations.
Drafts by Corrine Occhino
Investigations of iconicity in signed language processing often rely on non-signer ratings to determine whether signs are iconic, implying that iconicity can be objectively evaluated by individuals with no prior exposure to a linguistic form. We question the assumption that iconicity is an objective property of the form of a sign and argue that iconicity arises from individuals' experience with the world and their language, and is subjectively mediated by the signer's construal of form and meaning. We test this hypothesis by asking American Sign Language (ASL) signers and German Sign Language (DGS) signers to rate iconicity of 86 ASL and DGS signs. Native signers consistently rate signs in their own language as more iconic than foreign language signs under a wide range of conditions. The results demonstrate that iconicity is not an objective characteristic of a sign form, and is instead specific to individual construals of form and meaning.
Conference Presentations by Corrine Occhino
Does Language Experience affect Perceived Iconicity?
When operationalizing ‘iconicity’ in signed languages, researchers often conflate iconicity with transparency. Instructions to raters generally include definitions such as “iconic signs look like what they mean” and offer transparent signs as ‘good examples’ of iconicity (1)(4). As a result, it has become standard practice to have non-signers provide sign iconicity ratings, since transparent mappings should be easily accessible to anyone. Recent research on signers’ evaluation of iconicity across languages has suggested, however, that signers rate signs in their native language as more iconic than signs, matched across a variety of measures, in a foreign signed language (3). This suggests that iconicity is subjectively constructed in the minds of language users, and that experience with one’s own language influences perceptions of iconic mappings. One possible explanation of why signers consider signs from their own language to be more iconic than signs from another signed language is that signers’ iconicity judgements are sensitive to language-internal mappings, such as construing the fist with thumb pointing upward as a human body, as opposed to construing the same handshape as an upward pointer indicating positive valence. While any one signed language may include both construals, the extent to which one construal is prevalent within the language may influence signers’ judgements of iconicity.
To investigate this hypothesis, non-signers from Amazon’s Mechanical Turk rated images of 32 ASL signs (with glosses) for iconicity on a Likert scale, given the standard instructions: “how much does the sign look like what it means?” Subsequently, L1-ASL expert signers (14) and English-ASL L2 novice signers (14) viewed ASL sentences containing the same 32 ASL signs and responded with a keypress when they detected target handshapes. ASL proficiency was assessed using the ASL Sentence Reproduction Test (2). Using a linear mixed-effects regression, we found that reaction times were significantly modulated by non-signer iconicity ratings for novices, but not for experts. Handshapes in signs with higher iconicity ratings were identified more quickly by signers with lower proficiency, but at equal speeds by signers with higher proficiency (β = 1.02, t = 1.87, p = .06).
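A rough sketch of how such an analysis could be set up is given below. The table layout, column names, file name, and random-effects structure are assumptions for illustration; the abstract specifies only that a mixed linear regression of reaction times on iconicity ratings and proficiency group was used.

```python
# Sketch of a linear mixed-effects model over handshape-detection trials,
# assuming a long-format table with one row per trial.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("handshape_detection_trials.csv")
# assumed columns: participant, sign, rt_ms, iconicity_rating (non-signer norms),
# group ("novice" vs. "expert")

model = smf.mixedlm(
    "rt_ms ~ iconicity_rating * group",  # fixed effects: rating, group, interaction
    data=trials,
    groups=trials["participant"],        # random intercept per participant
)
result = model.fit()
print(result.summary())
```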
Without taking into account both language-internal and language-external motivations, investigations of iconicity effects in signed languages risk skewing results toward behaviors that are present primarily early in second language acquisition, and overlooking effects that emerge only after a language's patterns have been more fully learned and internalized. Further, these results suggest that the construct of iconicity varies along several dimensions, and that transparency does not capture all of them. Careful review of the theoretical implications of the definition and operationalization of iconicity will be crucial to future investigations.
Bibliography
(1) Caselli, Naomi K., Zed Sevcikova Sehyr, Ariel M. Cohen-Goldberg, and Karen Emmorey. (2016). “ASL-LEX: A Lexical Database of American Sign Language.” Behavior Research Methods, 1–18. doi:10.3758/s13428-016-0742-0.
(2) Hauser, Peter, Raylene Paludnevičienė, Ted Supalla, and Daphne Bavelier. (2008). “American Sign Language - Sentence Reproduction Test: Development and implications.” In R. de Quadros (Ed.), Sign Language: Spinning and unraveling the past, present and future (pp. 160–172). Petrópolis, Brazil: Arara Azul.
(3) Occhino, Corrine, Benjamin Anible, Jill P. Morford, and Erin Wilkinson. (2017). “Iconicity Is in the Eye of the Beholder: How Language Experience Affects Perceived Iconicity.” Gesture, 16:1, 101–127. doi:10.1075/gest.16.1.04occ.
(4) Vinson, David P., Kearsy Cormier, Tanya Denmark, Adam Schembri, and Gabriella Vigliocco. (2008). “The British Sign Language (BSL) Norms for Age of Acquisition, Familiarity, and Iconicity.” Behavior Research Methods, 40, no. 4: 1079–87. doi:10.3758/BRM.40.4.1079.
Mounting evidence indicates that both languages are active for spoken language bilinguals even in monolingual contexts (Schwartz & Kroll, 2006; Van Hell & Dijkstra, 2002), such that neither language is ever fully turned off. Recently, linguists have turned their attention to deaf bilinguals in order to see whether signs are activated when processing in a print-only context, despite the lack of phonological overlap between signs and spoken words. Morford et al. (2011) investigated whether deaf bilinguals who use American Sign Language (ASL) for face-to-face communication and English for reading and writing also experience cross-language activation. Although translation was not necessary for the task, responses were facilitated when the ASL translations of the English words were phonologically related.
In a follow-up experiment, we used a similar methodology to uncover which phonological parameters influenced this co-activation. Stimulus pairs were controlled for overlap among the three major sign parameters: handshape, movement, and location (e.g., sign pairs might share handshape and location but differ in movement). In addition to controlling for overlapping parameters, we also tested whether initialization, the phonological process of substituting a handshape from the ASL fingerspelling alphabet to represent the first letter of an English word, had any effect on lexical processing. The co-activation effect was replicated in this experiment. However, although location and movement have been proposed to be the syllable core of signs, this configuration of overlap was not necessary to produce facilitation or inhibition, nor did any other subset of parameter overlap facilitate responses more than any other. A slight facilitation effect was found for initialization. This may seem like a contentious finding, but when situated within a Usage-Based approach it is not only easily explained; we consider it predictable.
Finally, we considered whether this effect is pre-lexical or post-lexical; that is, whether this cross-language activation is merely an expression of post-lexical semantic activation, or whether the two languages also directly activate each other. By manipulating the presentation time between stimuli, we were able to test whether phonological processes could be excluded. We replicated our inhibition and facilitation findings even at a short 50 ms inter-stimulus interval, suggesting that semantics alone is not responsible for these effects. Implications of these findings will be discussed from a Usage-Based approach.
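For illustration only, the sketch below shows one way stimulus pairs could be coded for parameter overlap and condition means compared across inter-stimulus intervals. The file name and column names are hypothetical and do not come from the study's materials.

```python
# Sketch: label each trial by which ASL-translation parameters overlap, then
# compare mean RTs per overlap configuration and ISI; the "none" configuration
# serves as the unrelated baseline for reading off facilitation or inhibition.
import pandas as pd

pairs = pd.read_csv("semantic_judgment_trials.csv")
# assumed columns per trial: rt_ms, shared_handshape, shared_movement,
# shared_location, initialized (all bool), isi_ms (e.g., 50 vs. a longer interval)

def overlap_label(row):
    shared = [p for p in ("handshape", "movement", "location") if row[f"shared_{p}"]]
    return "+".join(shared) if shared else "none"

pairs["overlap_type"] = pairs.apply(overlap_label, axis=1)

print(pairs.groupby(["isi_ms", "overlap_type"])["rt_ms"].mean())
print(pairs.groupby(["isi_ms", "initialized"])["rt_ms"].mean())
```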
I propose a new framework for the phonological analysis of signed languages, centered on Usage-based approaches to phonology (Bybee, 2001; 2010) and also incorporating the frameworks of Cognitive Grammar (Langacker, 1987, 1991, 2008) and Cognitive Iconicity (Wilcox, 2004). Usage-based phonology begins with the simple tenet that linguistic knowledge is built up from real usage events: words are stored as full units, from which schemas are then abstracted. Thus, traditional notions of segment, syllable, phoneme, and morpheme are in fact secondary effects, gleaned from frequency patterns. The tools afforded by Cognitive Grammar and Cognitive Iconicity ground the data in cognitively realistic explanations of internal structure and motivation, and help to illuminate the role of iconicity.
Consider a set of signs that share the handshape (HS) Claw-5 (formed by slightly bending each joint of a 5 handshape to form the shape of a claw), such as RIBS, TIGER, RAIN, and AUDIENCE. In formal theories (Brentari, 1989; Sandler, 1989; Liddell & Johnson, 1989; inter alia), the phonological relatedness of these signs is considered random and unmotivated. However, a Cognitive approach reveals that these signs are related based on iconic domain mappings. Another example comes as a challenge to claims that the handshape parameter can be specified only once per lexeme, meaning that each sign contains only one handshape (Brentari, 1989). There are some signs, though, which express a change in handshape, such as the ASL sign THROW, which changes from HS:S to HS:U. The resolution to this problem in previous phonological models is to posit an underlying specification for finger selection, wherein changes in handshape are actually a change on the movement tier (i.e., a change in aperture) (Sandler, 1989). According to this standard model, the observation that a given sign begins and ends with different handshapes is not a matter of handshape at all, but a matter of movement. My newly proposed model posits neither underlying structures nor hierarchical sub-phonemic structures that dictate the behavior of lexemes. Instead, I advocate a cognitive rationale for the change in handshape: the change is motivated by Conceptual Archetypes (Langacker, 1991; 2006), which are themselves born from embodied cognition, repetition, and schematization.
As previously described, these types of data challenge traditional understandings of handshape as phoneme and provide grounds for advocating an entirely new approach to the phonological analysis of signed languages. This method of analysis is a radical departure from current theories of signed language phonology, which are grounded in generative frameworks and focus on hierarchical structure, the definition of segments and syllables, and the identification of contrastive elements. A hybrid Cognitive/Usage-based approach allows one to understand grammatical structure, which includes phonological structure, as inherently symbolic. By not relegating grammar and lexicon to separate domains, and by viewing linguistic knowledge as an accumulation of usage events, we can account for variability, frequency effects, and language change, which current theories struggle to explain.
Occhino-Kehoe, C., Morford, J. P., Twitchell, P., Piñar, P., Kroll, J. F., & Wilkinson, E. The time course of bilingual lexical access in deaf ASL-English bilinguals.
Consider a set of signs that share the handshape Claw-5 (formed by spreading the fingers and the thumb in a 5 handshape and then slightly bending each joint to form the shape of a claw), such as BALL, LION, RIBS, FAT, TIGER, and RAIN. In formal theories, the phonological relatedness of these signs is considered to be random; there would be no more reason for these signs to share Claw-5 than there is reason for the English words ‘dog,’ ‘day,’ and ‘dime’ to share an initial ‘d.’ When examined from a cognitive approach, we find that these signs fall into groupings according to iconic domain mappings. We can profile the concave shape of the entire palm, manifested in signs like BALL, or profile the fingers themselves as long slender objects, as seen in signs such as RIBS, TIGER, and RAIN. Alternatively, the Claw-5 handshape can map onto the domain of non-compact matter, represented by the gaps between the bent fingers. This mapping presents itself in signs such as FAT and LION, where the handshape represents the idea that spread fingers are not connected in space and are therefore permeable.
This short list of mappings is not exhaustive, as there are other metaphoric and metonymic uses of Claw-5; however, it can serve as a data set that challenges the traditional understanding of handshape as phoneme. It is only when we consider that all languages are built up from real usage events, embodied in our experience with the world, that we can begin to see language as a reflection of our cognitive processes and a window into the mind through which we can observe the complex interplay between our language and other cognitive systems.
We were able to test this hypothesis by investigating effects of sign language knowledge on written word recognition. In spite of the lack of cognates between American Sign Language (ASL) and English, cross-language activation effects were recently documented in deaf (Morford et al., 2011) and hearing (Shook et al., 2012) ASL-English bilinguals. This panel explores the nature of cross-language influences in different populations of deaf and hearing bilinguals who are fluent in a signed language.
The first paper in our panel clarifies the basic finding of cross-language activation effects in deaf ASL-English bilinguals, and extends them to two new populations of signing bilinguals: deaf ASL-dominant bilinguals and hearing English-dominant bilinguals. These effects were found using a monolingual English task in which participants decided whether two words were semantically related. Unbeknownst to participants, half of the stimuli had phonologically related translation equivalents in ASL, and half had unrelated translation equivalents. Because the task does not require translating the stimuli into ASL, effects of the ASL manipulation are a strong indication that bilinguals access the ASL translations during English word recognition.
The second paper in our panel reports the results of a study that investigates whether deaf bilinguals in Germany also exhibit cross-language activation effects. The study modified the semantic judgment task for use with deaf German Sign Language (DGS)-German bilinguals. Results indicate that DGS-German bilinguals activate DGS signs during German word recognition. Implications of these results for reading development in deaf German bilinguals are discussed.
The third paper in our panel explores the time course of cross-language activation in deaf ASL-English bilinguals. When deaf bilinguals see a written word, does activation spread directly to ASL phonological forms, or are ASL forms only activated after the semantics of the English word are activated? The paradigm used by Morford et al. (2011) included a 1-second stimulus onset asynchrony (SOA), allowing ample time for activation from the English word to spread to semantics, and then from semantics to ASL phonological forms. We present results from a replication study in which SOA was manipulated such that participants had a 750 ms SOA in one condition but only a 250 ms SOA in a second condition. We replicated the cross-language activation effect at both SOAs, strongly indicating that activation spreads directly from English words to ASL phonological forms.