Memory and Neurolinguistic Function in the Deaf

The Deaf community’s relationship with language offers a new angle from which to study the role of phonemic elements in the interaction between lexemes and memory. By departing from a hearing canon and exploring a broader spectrum of language perception and production, we can revise long-standing questions about, and develop new insight into, the cognitive processes that shape Deaf memory.

This paper will not attempt to ascertain whether the Deaf or the hearing have better memories. Neurocognitive discrepancies between the groups prevent certain abilities from being compared under the same criteria: studies using the Corsi block-tapping test and the Knox cube test found that Deaf participants performed better on visuospatial memory tasks, while hearing participants performed better on acoustic tests invoking prosodic memory, such as metrical rhymes. There is no comprehensive “better,” only the space afforded by these discrepancies, which allows for a deeper understanding of neurolinguistic processes in conjunction with memory.

To investigate the processes that influence Deaf memory, I compare studies on American Sign Language (ASL) and Japanese Sign Language (JSL). Alphabetic writing systems typically have an innate connection between orthography and phonology, relying on phonological encoding to build stronger cognitive links. But in languages like Japanese, where logograms build upon one another, this is not the case (Hamilton 412). In Japanese, meaning trumps sound, unlike in the English alphabet; in this respect, Japanese parallels sign language. The comparison shows that when research on memory in the Deaf relative to the hearing is limited to English, the Deaf’s supposed deficit reflects a property of the English language rather than a lack of phonological information. Linguistic models allow for a detailed understanding of how the brain retrieves information and, relatedly, of the role of memory within language production. They can also narrow down the variables of language-specific stimuli and pinpoint when and where in cognition they come into play. Current standardized models of language production do not account for the absence of auditory perception; thus, a separate model must be created to represent language production in the Deaf. This paper concludes by proposing a new Deaf language production model that combines Levelt’s (1989) general structure with Grosjean’s (2008) phases, influenced by the production models of Fromkin (1971), Garrett (1975), and Butterworth (1979) and by de Bot’s (2004) bilingual model.

This article is published as part of the conference Proceedings.