Research

This lab draws upon the methods of cognitive psychology, cognitive neuropsychology, linguistic analysis, and computational simulation to understand how we produce and comprehend spoken, written, and signed language. Our focus is on single-word processing, with a particular emphasis on the interactions between the lexicon and the grammar and between phonology/orthography and morphology.

Phonology

What types of phonological information do listeners and speakers store about words and grammatical sound patterns?  

Our research has investigated the extent to which phonological knowledge is concrete (contains word-specific phonetic detail, e.g., exemplars) and abstract (discrete and context-independent, e.g., phonemes). We have shown that listeners use abstract generalizations about phonetic detail to guide lexical access (Blazej & Cohen-Goldberg, 2015) and that both word-specific and abstract phonological knowledge underlie the grammatical phonological process of English /r/-sandhi (Cohen-Goldberg, 2015). Further evidence for abstract levels of sound representation can be found in Cohen-Goldberg (2012) and Cohen-Goldberg, Cholin, Miozzo, & Rapp (2013). These results indicate that concrete and abstract phonological information both play important roles in speech processing, and they shed light on the scope of this knowledge and on how the two types of information interact during processing.

How are a word's sounds prepared for production?

We have also investigated how speakers select and order a word's sounds for production. In Cohen-Goldberg (2012), we showed that similar consonants slow production latencies (e.g., kick, whose consonants are identical, is produced more slowly than tick), an effect that holds among all of a word's consonants. This indicates that all consonants are active and compete with each other during phonological planning. These effects occurred over relatively abstract phonological representations, providing further support for the existence of abstract sound levels in production.
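
To make the notion of consonant similarity concrete, here is a minimal sketch of one way a similarity score can be computed from distinctive features. The feature inventory and scoring rule are simplified illustrations for exposition, not the coding scheme used in Cohen-Goldberg (2012).

```python
# A simplified, hypothetical feature inventory; real analyses use richer
# distinctive-feature sets.
FEATURES = {
    "k": {"voice": False, "place": "velar",    "manner": "stop"},
    "t": {"voice": False, "place": "alveolar", "manner": "stop"},
    "g": {"voice": True,  "place": "velar",    "manner": "stop"},
    "s": {"voice": False, "place": "alveolar", "manner": "fricative"},
}

def similarity(c1: str, c2: str) -> float:
    """Proportion of distinctive features shared by two consonants."""
    f1, f2 = FEATURES[c1], FEATURES[c2]
    shared = sum(f1[feat] == f2[feat] for feat in f1)
    return shared / len(f1)

# 'kick' (/k/.../k/) has maximally similar consonants; 'tick' (/t/.../k/) does not.
print(similarity("k", "k"))  # 1.0   -> more competition, slower naming
print(similarity("t", "k"))  # ~0.67 -> less competition, faster naming
```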

How does social identity manifest in speech?

Speakers regularly adjust the way they speak to fit their environment. We use different words and pronunciations depending on the social context (work vs. home) and our interlocutors (boss vs. friend). We investigated whether style-switching can be influenced by the internal social state of the speaker regardless of context (Gaither, Cohen-Goldberg, Gidney, & Maddox, 2015). We found that Black/White biracial individuals whose Black identity had recently been primed (by asking them to write about their Black parent) sounded relatively more "Black" to raters, while individuals primed with their White identity (by writing about their White parent) sounded relatively more "White". Analyses suggest that these shifts in identity were marked by phonological/phonetic properties, though we were unable to identify consistent patterns. These results indicate that subtle shifts in one's social self-representation can lead to rapid shifts in the way we speak, a result that begins to bridge the fields of social psychology and sociolinguistics.

Morphology

Morphology in Speaking

How are morphologically complex words stored in long-term memory? 

Using corpus analyses of spontaneous speech, we have shown that the articulation of English inflected words is influenced by both their root and whole-word frequency, suggesting that both morpheme-based (e.g., <jump>+<ing>) and whole-word (e.g., <jumping>) representations are accessed in spoken production (Caselli, Caselli, & Cohen-Goldberg, 2015). In that study, we also showed for the first time that a monomorphemic word's inflected neighborhood density (the number of inflected words that are phonologically similar to the target word, e.g., side: spied, signed, sighs, sides, tied, etc.) influences its duration, providing further evidence for whole-word representations. We have also shown that listeners can very quickly use phonetic information to determine whether a word is unsuffixed or suffixed (e.g., hope vs. hopeful), a fact that may encourage the storage of whole-word representations for suffixed words (Blazej & Cohen-Goldberg, 2015).
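
For readers unfamiliar with neighborhood density, the sketch below shows how such a count is standardly computed: a word's phonological neighbors are the words whose transcriptions differ from it by exactly one phoneme substitution, insertion, or deletion. The toy lexicon is illustrative only and is not the corpus material from Caselli, Caselli, & Cohen-Goldberg (2015).

```python
def edit_distance(a: list, b: list) -> int:
    """Standard Levenshtein distance over phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (pa != pb))
    return dp[-1]

# Toy transcriptions as phoneme lists: 'side' and some of its inflected neighbors.
LEXICON = {
    "side":   ["s", "aɪ", "d"],
    "sides":  ["s", "aɪ", "d", "z"],
    "spied":  ["s", "p", "aɪ", "d"],
    "signed": ["s", "aɪ", "n", "d"],
    "sighs":  ["s", "aɪ", "z"],
    "tied":   ["t", "aɪ", "d"],
}

def neighborhood_density(target: str) -> int:
    """Count lexicon entries exactly one phoneme away from the target."""
    return sum(
        edit_distance(LEXICON[target], phones) == 1
        for word, phones in LEXICON.items()
        if word != target
    )

print(neighborhood_density("side"))  # 5: every other toy entry is one phoneme away
```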

How does a word’s morphological organization affect the way its sounds are processed?

It is unknown whether the processes that assemble words from component morphemes produce phonological representations that are identical to or different from those of non-composed (monomorphemic) words. Is shyness phonologically /ʃaɪnəs/ (similar to minus /maɪnəs/) or does the assembly process leave some sort of trace (e.g., /ʃaɪ-nəs/)? In a case study (Cohen-Goldberg, Cholin, Miozzo, & Rapp, 2013), we found that an aphasic individual made speech errors that improved a word's phonological form (specifically, coda sonority and stress pattern), but these errors only occurred in suffixed words at morpheme boundaries (e.g., packed→[pækɪt] but act→[ækt]). We conclude that phonemes from different morphemes must be 'stitched' together, producing phonological representations that are morphologically organized (e.g., /ʃaɪ-nəs/). This claim was confirmed in Cohen-Goldberg (2013), where we showed that the consonant similarity effect described in Cohen-Goldberg (2012) is weaker for consonants in different morphemes (bayed) than in the same morpheme (bide).

Morphology in Reading

How are letter strings recognized as morphemes in reading?

We have recently begun to investigate how morphologically complex words are processed in reading. Although it is virtually uncontested that multimorphemic words are decomposed during visual word recognition, relatively little is known about this process. One important question is how readers represent the position of a word's morphemes. In Blazej & Cohen-Goldberg (under review a), we show that English suffix position is represented categorically rather than gradiently; readers only recognize suffixes in nonwords when they appear after a root (e.g., readers recognize -ment in forgetment but not in formentget or mentforget). This rigidity contrasts with the extreme flexibility with which letters and roots are recognized (e.g., caniso is easily recognized as casino). We have also investigated how letter recognition interacts with morphological decomposition. Mixed results have been reported on whether masked transposed-letter (TL) priming occurs across morpheme boundaries. In two experiments, we show that TL priming occurs across vowel-initial but not consonant-initial suffixes (Blazej & Cohen-Goldberg, in prep). These results resolve the discrepancy in the literature and indicate that letter recognition and decompositional processes interact.
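
As an illustration, here is a minimal sketch of how transposed-letter primes might be generated for a suffixed word, flagging the single swap that straddles the morpheme boundary. The stimulus-construction details (function name, inputs) are hypothetical and are not the materials used in the experiments.

```python
def tl_primes(root: str, suffix: str):
    """Yield (prime, crosses_boundary) pairs for every adjacent-letter swap."""
    word = root + suffix
    boundary = len(root)  # index of the first suffix letter
    for i in range(len(word) - 1):
        # Swap the letters at positions i and i+1.
        swapped = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        crosses = (i == boundary - 1)  # this swap straddles the root/suffix join
        yield swapped, crosses

# e.g., 'hopeful' -> 'hopfeul' is the one prime whose swap crosses the boundary
for prime, crosses in tl_primes("hope", "ful"):
    print(prime, "<- crosses morpheme boundary" if crosses else "")
```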

We have also investigated how morphological boundaries interact with letter-color and lexical-color synesthesia, a relatively rare condition in which individuals perceive letters and words as having colors. In a case study of a synesthetic individual (Blazej & Cohen-Goldberg, under review b), we found that he often perceived multimorphemic words as bearing two colors (e.g., peppermint, biosphere) but did not do so for monomorphemic words (e.g., person, brochure) or pseudo-compounds (e.g., pumpkin, brimstone). These results support decompositional theories of morphological processing, provide support for non-modular theories of reading, and indicate that synesthetic percepts can be influenced by morphological structure.

Sign Language

ASL-LEX

In collaboration with Karen Emmorey's lab, we have created the first large-scale lexical database of American Sign Language (ASL). This database, to be publicly released in the late summer, contains frequency judgments, iconicity judgments, phonological transcriptions, and neighborhood density counts for nearly 1,000 ASL signs. A sample video clip accompanies each sign, and users will be able to search for signs using various criteria. Check back for more information!
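
As a preview of the kinds of searches the database is meant to support, here is a hypothetical sketch; the field names and values are placeholders, not ASL-LEX's actual schema or data.

```python
# Placeholder entries standing in for database rows; not real ASL-LEX data.
TOY_ENTRIES = [
    {"gloss": "SIGN-A", "frequency": 5.8, "iconicity": 4.5, "neighborhood_density": 12},
    {"gloss": "SIGN-B", "frequency": 2.1, "iconicity": 6.0, "neighborhood_density": 3},
]

# e.g., find frequent, highly iconic signs with many phonological neighbors
matches = [
    e["gloss"]
    for e in TOY_ENTRIES
    if e["frequency"] > 5.0 and e["iconicity"] > 4.0 and e["neighborhood_density"] > 10
]
print(matches)  # ['SIGN-A']
```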

Lexical Access in Sign Language

While much is known about how phonological and orthographic forms are stored in the mental lexicon, relatively little is known about how the signed lexicon is organized. We have been investigating two primary questions.  First, how do phonologically similar signs interact with each other during processing? In spoken and written language, formally similar words tend to compete with each other during comprehension (e.g., cap is partially activated when hearing cat). Previous research suggests that sign neighbors may have different effects in processing depending on their similarity to the target sign: signs sharing location have been found to interfere with recognition while signs sharing their handshape tend to facilitate recognition of the target. We have developed a computational model that can account for this 'pattern of reversals' using the same mechanisms as have been proposed for spoken and written language (Caselli & Cohen-Goldberg, 2014). This suggests that a core mechanism may be responsible for lexical access across all modalities of natural language. 
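
The toy simulation below illustrates the two opposing forces found in interactive-activation-style models: feedback from shared sublexical units (e.g., handshape) speeds a target's recognition, while strongly active lexical neighbors (e.g., signs sharing location) slow it. All parameters and the update rule are hypothetical simplifications and do not reproduce the published model in Caselli & Cohen-Goldberg (2014).

```python
# Hypothetical model parameters chosen for illustration only.
PARAMS = {"input": 0.10, "feedback": 0.05, "inhibition": 0.08, "decay": 0.02}

def cycles_to_recognition(n_shared_units: int, neighbor_activation: float,
                          threshold: float = 0.8) -> int:
    """Cycles for the target's activation to reach threshold (fewer = faster)."""
    activation, cycles = 0.0, 0
    while activation < threshold and cycles < 1000:
        excite = PARAMS["input"] + PARAMS["feedback"] * n_shared_units
        inhibit = PARAMS["inhibition"] * neighbor_activation
        activation += excite - inhibit - PARAMS["decay"] * activation
        cycles += 1
    return cycles

# Neighbors contributing mostly sublexical feedback yield net facilitation;
# strongly active neighbors yield net interference (slower recognition).
print(cycles_to_recognition(n_shared_units=2, neighbor_activation=0.2))  # fast
print(cycles_to_recognition(n_shared_units=0, neighbor_activation=0.9))  # slow
```
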
Second, we have been investigating the effect that language deprivation may have on sign access. Does exposure to a sign language from birth lead signers to retrieve signs differently than exposure only after a delay?

Central Taurus Sign Language (CTSL)

We have been documenting the linguistic properties of an emerging sign language in Turkey that we have come to call Central Taurus Sign Language (CTSL). This village sign language, which is approximately seven generations old, was developed by a group of congenitally deaf individuals who did not have access to Turkish Sign Language (TİD). Village sign languages and other emerging sign languages offer us the opportunity to observe how languages arise and mature, providing a window onto the language faculty. What grammatical elements (syntactic, morphological, and phonological) are present at the earliest stages of a language's development, and which take time to develop? Through comparison with other emerging sign languages (e.g., Nicaraguan Sign Language (NSL), Al-Sayyid Bedouin Sign Language (ABSL)) and mature sign languages, we hope to learn how grammatical, processing, and social factors jointly influence a language's structure.