Authors:
Presenting Author: Radha Kodali, PhD – University of Tennessee Health Science Center, Le Bonheur Children's Hospital
Negar Noorizadeh, PhD – University of Tennessee Health Science Center and Le Bonheur Neuroscience Institute, Le Bonheur Children's Hospital, Memphis, TN, USA
Taylor Jones, BS – University of Tennessee Health Science Center, Le Bonheur Children's Hospital
Victoria Tryba, BS – Le Bonheur Neuroscience Institute, Le Bonheur Children's Hospital, Memphis, TN, USA
James Wheless, BScPharm, MD, FAAP, FACP, FAAN, FAES, FCNS – University of Tennessee Health Science Center and Le Bonheur Children's Hospital
Shalini Narayana, PhD – University of Tennessee Health Science Center and Le Bonheur Neuroscience Institute, Le Bonheur Children's Hospital, Memphis, TN, USA
Rationale:
For nearly 30% of patients with epilepsy, seizures become medically intractable, and surgery is an effective alternative for these patients. Epilepsy surgery requires accurate mapping of eloquent cortices, and noninvasive transcranial magnetic stimulation (TMS) offers a safer alternative to invasive methods for localizing language cortices, especially in children. However, its utility is limited by subjective analysis, lack of standardization, and low sensitivity. We investigated whether incorporating automated detection of TMS-induced speech errors could enhance the accuracy and efficiency of TMS language mapping.
Methods:
This retrospective study included 122 children who underwent TMS language mapping as part of phase I evaluation (67 females; mean age 12.8 years, SD 3.93). Speech recordings were segmented into 19,865 word-level audio clips (44,100 Hz) and categorized by response time (RT) relative to TMS: 17,898 normal speech (NS) clips (RT < 800 ms), 1,630 performance errors (PE) (RT > 800 ms), and 337 speech arrests (SA) (no response within 3,500 ms), totaling ~20 hours of audio. Data were split 80/20 into training and test sets with 10-fold cross-validation. The proposed DirichNet used 80 Dirichlet filters (length 251), followed by two convolutional layers (60 filters, length 5), three fully connected layers, batch normalization, and Leaky-ReLU activation. Unlike traditional convolutional neural networks (CNNs), which use learned finite impulse response filters, DirichNet replaces the convolution filters with the Dirichlet kernel, defined as D_M(x) = sin((M+1)x/2) / sin(x/2), which acts as a low-pass filter preserving frequency components up to the Mth harmonic. This allows DirichNet to adaptively learn frequency ranges during training, enabling flexible bandpass filtering and enhanced modeling of the complex speech dynamics in TMS data.
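A minimal PyTorch sketch of how a Dirichlet-kernel filter bank of this kind could serve as the first network layer is given below. It is not the authors' implementation: the band-edge parameterization, the mapping from band edges to harmonic counts, the initial values, and all names (DirichletFilterBank, low_, band_) are illustrative assumptions; only the kernel form D_M(x) = sin((M+1)x/2) / sin(x/2), the filter count (80), and the filter length (251) come from the abstract.

```python
# Hypothetical sketch (PyTorch) of a Dirichlet-kernel filter bank as a first
# network layer. NOT the authors' code: the band-edge parameterization,
# initial values, and class/variable names are illustrative assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class DirichletFilterBank(nn.Module):
    """Each of n_filters kernels is built from the Dirichlet kernel
    D_M(x) = sin((M+1)x/2) / sin(x/2), a low-pass filter that keeps
    harmonics up to M. A band-pass response is approximated here as the
    difference of two low-pass kernels with learnable band edges."""

    def __init__(self, n_filters=80, kernel_size=251):
        super().__init__()
        assert kernel_size % 2 == 1, "odd length keeps the kernel symmetric"
        self.n_filters = n_filters
        self.kernel_size = kernel_size
        # Learnable band edges in normalized frequency; initial values are
        # illustrative, not the trained values reported in the abstract.
        self.low_ = nn.Parameter(torch.linspace(0.01, 0.30, n_filters))
        self.band_ = nn.Parameter(torch.full((n_filters,), 0.05))
        # Symmetric sample grid over one period (-pi, pi) in radians.
        n = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1).float()
        self.register_buffer("x", 2.0 * math.pi * n / kernel_size)

    @staticmethod
    def _dirichlet(M, x):
        # D_M(x) = sin((M+1)x/2) / sin(x/2); x is nudged away from 0 so the
        # ratio approaches its limit (M+1) instead of producing 0/0.
        x_safe = torch.where(x.abs() < 1e-6, torch.full_like(x, 1e-6), x)
        return torch.sin((M + 1.0) * x_safe / 2.0) / torch.sin(x_safe / 2.0)

    def forward(self, waveform):
        # waveform: (batch, 1, time) raw audio clips.
        low = torch.clamp(self.low_.abs(), 1e-3, 0.45)
        high = torch.clamp(low + self.band_.abs(), 2e-3, 0.50)
        # Map normalized band edges to harmonic counts M for this kernel
        # length (assumption: M ~ edge * kernel_size).
        M_low = (low * self.kernel_size).unsqueeze(1)    # (n_filters, 1)
        M_high = (high * self.kernel_size).unsqueeze(1)
        x = self.x.unsqueeze(0)                          # (1, kernel_size)
        band_pass = self._dirichlet(M_high, x) - self._dirichlet(M_low, x)
        band_pass = band_pass / (band_pass.abs().max(dim=1, keepdim=True).values + 1e-8)
        filters = band_pass.unsqueeze(1)                 # (n_filters, 1, kernel_size)
        return F.conv1d(waveform, filters, padding=self.kernel_size // 2)


if __name__ == "__main__":
    layer = DirichletFilterBank()
    clips = torch.randn(4, 1, 44100)   # four 1-second clips at 44,100 Hz
    print(layer(clips).shape)          # torch.Size([4, 80, 44100])
```

The difference of two low-pass Dirichlet kernels is used here to obtain a learnable band-pass response, by analogy with sinc-based filter banks; the abstract itself only states that the Dirichlet kernels act as adaptive low-pass/band-pass filters whose frequency ranges are learned during training.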
Results:
CNN and DirichNet models were trained to identify TMS-induced speech errors versus NS and to classify clips into NS, PE, and SA. DirichNet was trained with 131.6 million parameters across 24 layers, including 80 adaptive filters with learned center frequencies from 0.0012 to 0.472 Hz. In the binary task (Figure 1a), DirichNet achieved 57.1% sensitivity, 85.7% positive predictive value, and 91.9% accuracy, while the CNN showed 0% sensitivity, misclassifying all speech errors as NS. In the multiclass setting (Figure 1b), DirichNet detected 46.4% of PE, maintained 94.6% accuracy on NS, and reached 93.7% overall accuracy. However, SA was not detected, likely due to limited sample size. The CNN failed to detect any speech errors (0% sensitivity) for either PE or SA.
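For reference, the reported metrics follow their standard definitions; the short sketch below (with made-up labels, not the study's data) shows how sensitivity, positive predictive value, and accuracy for the binary task are conventionally computed with scikit-learn.

```python
# Illustrative only: standard computation of sensitivity, positive predictive
# value (PPV), and accuracy. Labels are fabricated examples, not study data.
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # 0 = normal speech, 1 = speech error
y_pred = [0, 0, 0, 0, 0, 1, 1, 1, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)              # recall for the speech-error class
ppv = tp / (tp + fp)                      # positive predictive value (precision)
accuracy = accuracy_score(y_true, y_pred)
print(f"sensitivity={sensitivity:.3f}, PPV={ppv:.3f}, accuracy={accuracy:.3f}")
```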
Conclusions:
DirichNet improved detection of TMS-induced speech errors, achieving 57% and 46% sensitivity in the binary and multiclass tasks, respectively, addressing both the low sensitivity of traditional CNNs and the subjectivity of current clinical analysis. Integrating the DirichNet model into the current analysis pipeline has the potential to improve the accuracy and efficiency of TMS language mapping.
Funding: Pediatric Epilepsy Research Foundation