October 12, 2010 Audiology

Sentence Recognition for Non-Native Speakers

Researchers Reduce Linguistic Bias in Audiology Assessment

Understanding speech in adverse listening conditions is difficult for most listeners, but it's even more challenging for non-native speakers of a language. As the U.S. population continues to become more diverse, audiologists will serve increasing numbers of clients whose native language is not English.

Researchers have learned a great deal about non-native speech perception in the past 15 years (e.g., Bradlow & Pisoni, 1999; Cutler et al., 2008). However, there are no standards for determining whether poor scores obtained by non-native listeners on English-language speech-recognition tests are due to limited experience with English or to an auditory impairment.

Jennifer Weintraub (left) and Paulina Kulesza, research assistants in communication sciences and disorders, conduct an experimental protocol for the sentence-recognition project at the Queens College Auditory Research Laboratory.

One approach to address this issue is to test non-native speakers of English in their native languages. This approach, however, has disadvantages: All clinics would need to be equipped with speech-recognition tests in many different languages; standardized test materials may not be available in a particular language; and the examiner would need to be fluent (or at least highly proficient) in the test language to score the listener's responses, or would need access to a trained interpreter who could score them.

It also is important to consider those listeners who communicate predominantly in English but do not speak English as their native language. For these listeners, communication disorders would primarily (though not exclusively) be experienced in English. Thus, evaluating speech recognition in English, rather than in the native language, may be functionally more relevant.

Tools to assess the English-language speech perception of non-native English speakers are limited, a situation that poses problems for clinicians and researchers alike. As a first step in the development of a clinical tool, and to stimulate further research with non-native English speakers, the Speech and Auditory Research Laboratory at Queens College of the City University of New York and the Phonetics Laboratory at the University of Texas–Austin have been working to develop new English-sentence test materials designed for non-native speakers of English. This project is supported by ASHA's 2009–2010 Grant Program for Projects on Multicultural Activities.

To develop the lexicon used to create the sentences, research assistants interviewed 100 non-native speakers of English representing 28 nationalities and 15 native languages. These interviews consisted of short conversations initiated with a brief, scripted introduction of one of 20 topics such as sports, vacations, or shopping. Topics were designed to elicit a large lexicon while limiting the scope of the conversation to increase the potential for vocabulary repetition across participants.

Researchers digitally recorded and orthographically transcribed more than 26 hours of conversational speech by non-native English speakers. On average, each subject discussed eight topics for approximately two minutes per topic. The lexicon developed from these transcriptions contained more than 200,000 total words, of which 4,100 were unique.
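The frequency counts behind such a lexicon can be derived mechanically from the orthographic transcripts. Below is a minimal sketch in Python, assuming the transcripts are stored as plain-text files; the directory name and the tokenization rule are illustrative assumptions, not the project's actual tools.

    # Hypothetical sketch: build a frequency-ranked lexicon from
    # plain-text orthographic transcripts. The directory layout and
    # the tokenization regex are assumptions for illustration.
    from collections import Counter
    from pathlib import Path
    import re

    def build_lexicon(transcript_dir):
        """Count word frequencies across all transcript files."""
        counts = Counter()
        for path in Path(transcript_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8").lower()
            # Treat runs of letters (with internal apostrophes) as words.
            counts.update(re.findall(r"[a-z]+(?:'[a-z]+)*", text))
        return counts

    lexicon = build_lexicon("transcripts")
    print(sum(lexicon.values()), "total words;", len(lexicon), "unique")
    print(lexicon.most_common(20))  # high-frequency candidates for sentences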

The use of vocabulary familiar to non-native English speakers of varied linguistic backgrounds and levels of English proficiency was critical to the development of the test sentences. Therefore, the words most frequently spoken by the subjects were used to create 500 sentences. Five different syntactic structures using simple English grammar were developed to promote consistency across sentences. Each sentence consisted of five to seven words, including four keywords. The 500 sentences were divided into 20 test lists (each containing 25 sentences and 100 keywords) balanced for vocabulary, syntactic structure distribution, syllable count, and high-frequency speech information (i.e., the distribution of fricative and affricate sounds).
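Balancing 20 lists on several features at once is essentially a partitioning problem. One way to approach it, sketched below in Python, is a greedy heuristic that assigns the "heaviest" sentences first and always places the next sentence into the least-loaded list that still has room. The feature definitions here (a vowel-letter syllable proxy and an orthographic fricative count) are rough illustrative assumptions, not the criteria the researchers actually used.

    # Hypothetical sketch: divide 500 sentences into 20 lists of 25
    # while keeping per-list totals of two features roughly even.
    # Both feature definitions below are crude orthographic proxies.
    FRICATIVE_LETTERS = set("fvszh")

    def features(sentence):
        text = sentence.lower()
        syllables = sum(ch in "aeiouy" for ch in text)   # rough syllable proxy
        fricatives = sum(ch in FRICATIVE_LETTERS for ch in text)
        return (syllables, fricatives)

    def balance(sentences, n_lists=20, list_size=25):
        lists = [[] for _ in range(n_lists)]
        totals = [(0, 0)] * n_lists
        # Place feature-heavy sentences first so later picks can even out.
        for s in sorted(sentences, key=features, reverse=True):
            open_ids = [i for i in range(n_lists) if len(lists[i]) < list_size]
            i = min(open_ids, key=lambda j: totals[j])   # least-loaded list
            lists[i].append(s)
            f = features(s)
            totals[i] = (totals[i][0] + f[0], totals[i][1] + f[1])
        return lists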

High-quality digital recordings of all 500 sentences were made with three different talkers (two female, one male). To gather normative data for non-native speakers, the sentences are now being tested with a large population of non-native listeners representing a variety of native languages and degrees of English-language proficiency.

These sentences will provide a valuable tool for researchers and clinicians. Although linguistic bias can never be eliminated completely from English speech-recognition testing for those who do not speak English as their native language, a large pool of sentences has been developed in an ecologically valid way (i.e., using vocabulary naturally elicited from non-native speakers of English) with simple grammar. These sentences can be used to test listeners' recognition of English speech in quiet or in noise, and will help researchers move one step closer to correctly identifying whether poor English speech perception is due to linguistic inexperience or to specific types of auditory impairment.
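In practice, responses to sentence lists of this kind are typically scored as the percentage of keywords repeated correctly; with four keywords per sentence and 25 sentences per list, each list yields a score out of 100 keywords. A minimal scoring sketch follows, with the function name, matching rule, and example sentence assumed purely for illustration.

    # Hypothetical sketch: score one sentence response by counting how
    # many of its keywords appear in the listener's repetition.
    def score_sentence(keywords, response):
        heard = response.lower().split()
        return sum(k.lower() in heard for k in keywords)

    # Example: 3 of 4 keywords correct for this made-up sentence.
    print(score_sentence(["people", "watch", "soccer", "weekend"],
                         "the people watch soccer every day"))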

Lauren Calandruccio, PhD, CCC-A, is an assistant professor in the Department of Linguistics and Communication Disorders at Queens College of the City University of New York. She has collaborated with Dr. Rajka Smiljanic at the University of Texas–Austin on work related to this project. Contact Calandruccio at lauren.calandruccio@qc.cuny.edu.

cite as: Calandruccio, L. (2010, October 12). Sentence Recognition for Non-Native Speakers: Researchers Reduce Linguistic Bias in Audiology Assessment. The ASHA Leader.

References

Bradlow, A. R., & Pisoni, D. B. (1999). Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors. Journal of the Acoustical Society of America, 106(4), 2074–2085.

Cutler, A., Garcia Lecumberri, M. L., & Cooke, M. (2008). Consonant identification in noise by native and non-native listeners: Effects of local context. Journal of the Acoustical Society of America, 124(2), 1264–1268.



  
