Authors:
Presenting Author: Liberty Hamilton, PhD – The University of California, Berkeley
Maansi Desai, PhD – The University of Texas at Austin
Alyssa Field, MEd – The University of Texas at Austin
Jacob Leisawitz, BS – Baylor College of Medicine
Sandra Georges, MS – Baylor College of Medicine
Anne Anderson, MD – Baylor College of Medicine/Texas Children's Hospital
Dave Clarke, MD – The University of Texas at Austin
Rosario DeLeon, PhD – The University of Texas at Austin
Nancy Nussbaum, PhD – The University of Texas at Austin
Elizabeth Tyler-Kabara, MD, PhD – The University of Texas at Austin, Dell Medical School
Andrew Watrous, PhD – Baylor College of Medicine
Howard Weiner, MD – Texas Children's Hospital & Baylor College of Medicine
Rationale:
Infants typically learn the phonemes of their native language between 6 months and one year of age. Still, many speech-related skills, including understanding speech in noise, do not mature until adolescence. In children with epilepsy, language networks may be affected by ongoing seizures and other comorbidities. Understanding how these networks develop in the brain is important for identifying the basis of typical and atypical language development, and for guiding interventions in children with communication disorders. Here, we investigate this development using intracranial recordings during speech tasks from children, adolescents, and young adults with drug-resistant epilepsy.
Methods:
We acquired intracranial recordings from 50 patients aged 4–21 undergoing Phase 2 monitoring for drug-resistant epilepsy while they watched audiovisual movies. Recordings included bilateral coverage of speech-sensitive peri-Sylvian cortex including Heschl’s gyrus (HG), planum temporale (PT), superior temporal gyrus (STG), superior temporal sulcus (STS), and middle temporal gyrus (MTG). Neural data were preprocessed by manually rejecting epileptiform activity, applying a common average reference, and extracting high gamma band power (70–150 Hz). Phonemes in the stimulus were transcribed and synchronized with the neural data. We fit regression models to predict neural activity from acoustic or phonetic information in the stimulus. Model performance was evaluated by calculating the correlation between the neural response predicted by the model and the actual neural response to held-out data. To measure neural processing speed, we calculated the peak latency of speech responses and analyzed trends across age groups. We also correlated neural selectivity with behavioral measures of verbal comprehension, attention, and reading as assessed through preoperative neuropsychological testing.
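The model-fitting and evaluation procedure can be sketched as follows. This is a minimal illustration with simulated data, not the authors' analysis code: the abstract does not specify the regression method, so closed-form ridge regression is assumed here, with stimulus features standing in for the acoustic or phonetic predictors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stimulus features (e.g., time-lagged acoustic or phoneme
# predictors) and a neural response that depends linearly on them.
n_times, n_features = 2000, 40
X = rng.standard_normal((n_times, n_features))
true_w = rng.standard_normal(n_features)
y = X @ true_w + rng.standard_normal(n_times)  # simulated high gamma power

# Split into training and held-out segments.
split = int(0.8 * n_times)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Ridge regression in closed form: w = (X'X + aI)^{-1} X'y.
alpha = 1.0
w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_features),
                    X_train.T @ y_train)

# Evaluate as in the abstract: correlation between the predicted and
# actual neural response on held-out data.
y_pred = X_test @ w
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"held-out correlation r = {r:.2f}")
```

In practice, encoding models of this kind typically use time-lagged stimulus representations and cross-validated regularization; the single train/test split above is kept deliberately simple.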
Results:
We found robust responses to acoustic information in HG (rmax=0.48) across the entire age range. However, robust responses to phoneme categories did not emerge until early adolescence (rmax=0.68). Response latencies in speech-selective cortex (STG, STS, MTG) became faster and more precise with age, while auditory responses in HG were relatively more stable. Measures of neural selectivity were most strongly correlated with chronological age rather than with developmental measures of language ability.
Conclusions:
Our results suggest that robust cortical representations of phonetic information emerge relatively late, concordant with behavioral reports of later crystallization of speech-in-noise processing. Our results also suggest that phoneme processing develops along a biological timeline, relatively independent of language and attention measures. By incorporating movie-watching tasks, we were able to assess speech and language development across a wide age range with a child-friendly research paradigm. Our results may have implications for language-related disorders including dyslexia and auditory processing disorder.
Funding:
NIH R01018579 (PI: Hamilton)