Original Articles

Neural correlates of fingerspelling, text, and sign processing in deaf American Sign Language–English bilinguals

Pages 749-767 | Received 21 Oct 2014, Accepted 28 Jan 2015, Published online: 23 Feb 2015
 

Abstract

We used functional magnetic resonance imaging to identify the neural regions that support comprehension of fingerspelled words, printed words, and American Sign Language (ASL) signs in deaf ASL–English bilinguals. Participants made semantic judgements (concrete–abstract?) to each lexical type, and hearing non-signers served as controls. All three lexical types engaged a left frontotemporal circuit associated with lexical semantic processing. Both printed and fingerspelled words activated the visual word form area for deaf signers only. Fingerspelled words were more left lateralised than signs, paralleling the difference between reading and listening for spoken language. Greater activation in left supramarginal gyrus was observed for signs compared to fingerspelled words, supporting its role in processing sign-specific phonology. Fingerspelling ability was negatively correlated with activation in left occipital cortex, while ASL ability was negatively correlated with activation in right angular gyrus. Overall, the results reveal both overlapping and distinct neural regions for comprehension of signs, text, and fingerspelling.

Acknowledgements

We would like to thank all of our participants and Allison Bassett, Lucinda O'Grady, and Jen Petrich for their assistance with the study.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. By convention, fingerspelled words are written with hyphenated capital letters. ASL signs are denoted by their English translation written in all capital letters without hyphens.

Additional information

Funding

This research was supported by National Science Foundation (NSF) grants awarded to Karen Emmorey and San Diego State University (BCS-0823576; BCS-1154313) and by NSF grant SBE-0541953 awarded to Gallaudet University for the Visual Language and Visual Learning (VL2) Science of Learning Center.
