Children with high-functioning autism spectrum disorder (ASD) can outgrow a critical social communication disability in adolescence, according to a study published in the journal Cerebral Cortex.
Younger children with ASD have trouble integrating auditory and visual cues with speech, but researchers at the Albert Einstein College of Medicine at Yeshiva University in New York have found that the problem clears up once the children are older.
"This is an extremely hopeful finding," said lead author and Professor of Paediatrics, John Foxe.
"It suggests that the neurophysiological circuits for speech in these children aren't fundamentally broken and that we might be able to do something to help them recover sooner."
According to Professor Foxe, the ability to integrate "heard" and "seen" speech signals is crucial to effective communication.
"Children who don't appropriately develop this capacity have trouble navigating educational and social settings," he said.
In a previous study, Professor Foxe and his colleagues demonstrated that children with ASD integrate multisensory information such as sound, touch and vision differently from typically developing children. Among typically developing children, multisensory integration (MSI) abilities were known to continue improving late into childhood.
The current study looked at whether one aspect of MSI — integrating audio and visual speech signals — continues to develop in high-functioning children with ASD as well. In the study, 222 children aged between 5 and 17, including both typically developing children and high-functioning children with ASD, were tested for how well they could understand speech with increasing levels of background noise.
In one test, the researchers played audio recordings of simple words. In a second test, the researchers played a video of the speaker articulating the words, but no audio. A third test presented the children with both the audio and video recordings.
The test mimics the so-called "cocktail party" effect: a noisy environment with many different people talking. In such settings, people naturally rely on both auditory and facial cues to understand what another person is saying.
"You get a surprisingly big boost out of lip-reading, compared with hearing alone," said Professor Foxe. "It's an integrative process."
- In the first test (audio alone), the children with ASD performed almost as well as typically developing children across all age groups and all background noise levels.
- In the second test (video alone), the children with ASD performed significantly worse than the typically developing children across all age groups and all background noise levels.
"But the typically developing children didn't perform very well, either," said Professor Foxe. "Most people are fairly terrible at lip-reading."
- In the third test (audio and video), the younger children with ASD, aged 6-12 years, performed much worse than the typically developing children of the same age, particularly at higher levels of background noise. Among the older children, however, there was no difference in performance between the typically developing children and the children with ASD.
"In adolescence, something amazing happens and the kids with ASD begin to perform like the typically developing kids," said Professor Foxe.
"At this point, we can't explain why. It may be a function of a physiological change in their brain or of interventions they've received, or both. That is something we need to explore."
The researchers acknowledge some limitations to their study.
"Instead of doing a cross-sectional study like this, where we tested children at various ages, we would prefer to do a longitudinal study that would involve the same kids who'd be followed over the years from childhood through adolescence.
"We also need to find a way to study what is happening with low- and mid-functioning children with ASD. They are much less tolerant of testing and thus harder to study."
According to Professor Foxe, the work highlights the need to develop more effective therapies to help children with ASD better integrate audio and visual speech signals.