November 1, 2013 | Departments

From the Journals: November 2013

Catch up on the latest findings by researchers in communication sciences and disorders in this roundup of study results.

Preschool Teachers Perceive Hearing Assistive Technology Positively

Hearing assistive technology is frequently used in classrooms of preschoolers who are deaf or hard of hearing, with generally positive teacher perceptions of the benefits of using such technology, according to a study published in the July 2013 issue of Language, Speech, and Hearing Services in Schools.

Using a cross-sectional survey design, University of Utah researchers—led by Lauri H. Nelson—explored how often sound-field amplification and personal frequency-modulated systems are used in preschool classrooms, teacher perceptions of advantages and disadvantages of using hearing assistive technology, and teacher recommendations for hearing assistive technology use.

The authors sent 306 surveys to 162 U.S. deaf education programs. Ninety-nine completed surveys were returned, a 32 percent response rate.

The authors received surveys from teachers working at listening and spoken-language preschool programs (65 percent) and at bilingual-bicultural and total communication preschool programs (35 percent). Most respondents said hearing assistive technology improved students' academic performance (71 percent), speech and language development (79 percent), and attention in the classroom (67 percent). Most respondents also reported that they definitely or probably would recommend a sound-field or personal FM system to other educators.

First Full Genome Sequencing for Autism

A collaborative formed by Autism Speaks, a science and advocacy organization, has performed full genome sequencing and examined the entire DNA code of people with autism spectrum disorder and their family members. The findings provide evidence that whole-genome sequencing data can aid in the detection and clinical evaluation of people with ASD, and also provide a look at the wide-ranging genetic variations associated with ASD.

The study, led by Yong-hui Jiang of the Duke University School of Medicine and published online July 11, 2013, in the American Journal of Human Genetics, reports on full genome sequencing of 32 unrelated Canadians with autism and their families.

The researchers found genetic risk variants associated with clinical manifestations of ASD, or with accompanying symptoms, in 50 percent of the participants tested. This finding is promising because current diagnostic technology has been able to determine a genetic basis in only about 20 percent of tested people with ASD. The large number of families identified with genetic alterations of concern stems in part from the comprehensive and uniform examination of the genome that whole-genome sequencing makes possible.

Researchers identified genetic variations associated with risk for ASD including de novo, X-linked and other inherited DNA lesions in four genes not previously recognized for ASD; nine genes previously determined to be associated with ASD risk; and eight candidate ASD-risk genes. Some families had a combination of genes involved. In addition, risk alterations were found in genes associated with fragile X or related syndromes, social-cognitive deficits, epilepsy, and ASD-associated CHARGE syndrome—a genetic syndrome that is the leading cause of congenital deaf-blindness.

In this pilot effort, 99 people were tested, including the 32 people with ASD (25 male and seven female) and their parents, as well as three members of one control family not on the autism spectrum. The initiative will ultimately perform whole-genome sequencing on more than 2,000 participating families who have two or more children with ASD. Data from these 10,000 Autism Genetic Resource Exchange participants will enable new research in the genomics of ASD.

More SLPs Use Traditional Interventions for Speech Sound Disorder

Speech-language pathologists provided 30 or 60 minutes of weekly treatment to children ages 3 to 6 who had speech sound disorders, regardless of whether services were delivered in group or individual sessions, according to survey results published in the July 2013 issue of Language, Speech, and Hearing Services in Schools. The study confirms previous findings about the amount of service provided to this population.

Kansas State University researchers, led by Klaire Mann Brumbaugh, e-mailed a survey to 2,084 SLPs who worked in pre-elementary settings across the United States, asking about service delivery and interventions with children ages 3 to 6 who have speech sound disorders. More SLPs indicated that they used traditional intervention—focused on the correction of individual phonemes—than other types of intervention. However, many SLPs also reported using aspects of phonological intervention and providing phonological awareness training. Fewer SLPs indicated that they used nonspeech oral-motor exercises than in a previous survey. Recently graduated SLPs were no more familiar with recent advances in phonological intervention than their more experienced colleagues.

Brain Tracks Frequency and Time to Hear Salient Sounds

Research reveals how our brains track frequency and time to pick out important sounds from the noisy world around us. The findings, published online July 23, 2013, in the journal eLife, could lead to new diagnostic tests for hearing disorders.

Ears effortlessly pick out the sounds we need to hear from a noisy environment—a mobile phone ringtone in the middle of a carnival, for example—but how the brain processes this information (the "cocktail party problem") has been a longstanding question.

Researchers, led by Sundeep Teki of University College London, used complicated sounds representative of those in real life—"machine-like beeps" that overlap in frequency and time—to re-create a busy sound environment and obtain new insights into how the brain solves this problem.

Ten groups of eight to 10 volunteers (male and female, ages 19–47) with normal hearing and no history of audiological or neurological disorders identified target sounds in a noisy background in a series of 10 experiments. Participants could detect complex target sounds from the background noise, even when the target sounds were delivered at a faster rate or there was a loud, disruptive noise between them.

Previous models based on simple tones suggest that people differentiate sounds based on differences in frequency, or pitch. The study shows that time is also an important factor, with sounds grouped as belonging to one object by virtue of being correlated in time. These findings provide insight into a fundamental brain mechanism for detecting sound patterns, and identify a process that can go wrong in hearing disorders. The results may lead to better tests for disorders that affect the ability to hear sounds in noisy environments.
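The grouping-by-time idea can be made concrete with a toy simulation. The Python sketch below is a hypothetical illustration only, not the stimuli or analysis used in the eLife study; the channel counts, chord counts, figure components, and the persistence rule are invented for the example. A random set of background components is redrawn on every "chord," a small figure set repeats together across chords, and a listener model that simply groups components recurring together over time separates figure trials from noise-only trials.

```python
# Hypothetical sketch of grouping sound components by temporal coherence.
# All parameters are illustrative, not taken from the published study.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 40                       # coarse frequency channels
N_CHORDS = 30                         # time bins ("chords") per trial
BG_PER_CHORD = 8                      # random background components in each chord
FIGURE = np.array([5, 12, 21, 33])    # components that always occur together

def make_trial(with_figure: bool) -> np.ndarray:
    """Binary frequency-by-time matrix of active components for one trial."""
    trial = np.zeros((N_CHANNELS, N_CHORDS), dtype=int)
    for t in range(N_CHORDS):
        bg = rng.choice(N_CHANNELS, size=BG_PER_CHORD, replace=False)
        trial[bg, t] = 1              # temporally uncorrelated background
        if with_figure:
            trial[FIGURE, t] = 1      # temporally coherent "figure"
    return trial

def detect_figure(trial: np.ndarray, persistence: float = 0.8, min_size: int = 3) -> bool:
    """Report a figure when enough channels stay active across most chords."""
    recurrence = trial.mean(axis=1)   # fraction of chords in which each channel is active
    return int((recurrence >= persistence).sum()) >= min_size

# Components correlated in time stand out; random ones almost never persist.
hits = np.mean([detect_figure(make_trial(True)) for _ in range(200)])
false_alarms = np.mean([detect_figure(make_trial(False)) for _ in range(200)])
print(f"hit rate ~ {hits:.2f}, false-alarm rate ~ {false_alarms:.2f}")
```

In this toy version, a background channel is active on any given chord only about 20 percent of the time, so it almost never recurs across most of a trial, whereas the figure components do; grouping by shared timing, rather than by frequency differences alone, is what makes the target detectable.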


  
