Automatic Detection of Epileptic Spasms from Home Videos
Abstract number: 1.179
Submission category: 2. Translational Research / 2A. Human Studies
Year: 2024
Submission ID: 1354
Source: www.aesnet.org
Presentation date: 12/7/2024
Authors:
Presenting Author: Gadi Miron, MD – Charité – Universitätsmedizin Berlin
Mustafa Halimeh, MSc – Charité – Universitätsmedizin Berlin
Christian Meisel, MD – Charité – Universitätsmedizin Berlin
Rationale: Epileptic spasms (ES) are the hallmark seizures of infantile epileptic spasms syndrome (IESS). Timely and accurate recognition of ES is critical, since delayed treatment may lead to severe cognitive and developmental impairments. However, diagnosis is often delayed because events are misrecognized and wait times for evaluation by pediatric epilepsy specialists are long. While smartphone videos have been shown to shorten the time to diagnosis of ES, they still require analysis by a specialist. Our aim was to evaluate whether artificial intelligence (AI)-based video analysis can automatically detect ES in home videos.
Methods: In this phase 2 retrospective study, we collected smartphone videos from infants under two years of age with ES. The inclusion criteria were that the infant was clearly visible and that the seizure semiology was recognizable by a specialist; we did not otherwise restrict the recording device, resolution, or setting. Five-second video segments were annotated as either containing a seizure or not. For model development, we used a transfer learning-based approach, extracting features with a foundation human action recognition model that was further trained using low-rank adaptation. Nested 5-fold cross-validation was used to train and evaluate the classification model: we split the cohort into 5 non-overlapping infant groups, ensuring that all video segments from a given child were in the same group, and for each test split we trained and validated the model on the remaining four splits, repeating this step 5 times. We report the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. In addition, the false alarm rate (FAR) was calculated by applying the fixed models to an additional independent, out-of-sample dataset of smartphone videos of freely behaving, healthy infants.
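The subject-wise nested cross-validation described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration only: it assumes each 5-second segment has already been embedded by a pretrained action-recognition backbone, uses scikit-learn's GroupKFold to keep all segments from one infant in the same fold, and substitutes a simple logistic-regression head for the authors' low-rank-adapted model. All arrays are synthetic placeholders, not the study data.

```python
# Sketch of subject-wise nested 5-fold cross-validation (assumptions noted above).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

n_segments, n_features = 1588, 512              # 991 seizure + 597 non-seizure segments
X = rng.normal(size=(n_segments, n_features))   # placeholder backbone embeddings
y = rng.integers(0, 2, size=n_segments)         # 1 = seizure segment, 0 = non-seizure
infant = rng.integers(0, 152, size=n_segments)  # all segments of a child share one id

outer_cv = GroupKFold(n_splits=5)               # outer folds split by infant, not by segment
outer_aucs = []

for train_idx, test_idx in outer_cv.split(X, y, groups=infant):
    # Inner split (also grouped by infant) holds out validation infants from the
    # four training folds; it would be used for hyperparameter and threshold tuning.
    inner_train, inner_val = next(
        GroupKFold(n_splits=4).split(X[train_idx], y[train_idx], groups=infant[train_idx])
    )

    clf = LogisticRegression(max_iter=1000)     # stand-in for the fine-tuned classifier
    clf.fit(X[train_idx][inner_train], y[train_idx][inner_train])

    # Final evaluation on infants never seen during training or validation.
    test_scores = clf.predict_proba(X[test_idx])[:, 1]
    outer_aucs.append(roc_auc_score(y[test_idx], test_scores))

print(f"mean outer-fold AUC: {np.mean(outer_aucs):.2f}")
```

Grouping the splits by infant rather than by segment is the key design choice here: it prevents segments from the same child appearing in both training and test folds, which would otherwise inflate performance estimates.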
Results: We included a total of 152 infants with ES (991 seizure and 597 non-seizure 5-second video segments; 9±8 segments per child, range 1-42) and 127 healthy infants (1385 video segments; 11±10 segments per child, range 1-70). We detected ES with an AUC of 0.94, sensitivity of 78%, specificity of 85%, and accuracy of 81% (Figure 1). For evaluation of FAR, we examined an additional 67 healthy children (666 video segments; 10±15 segments per child, range 1-124) and found an average FAR of 5.6%, corresponding to 37/666 segments incorrectly classified as seizures. When the detection threshold was set to achieve 90% sensitivity on the validation data, we obtained an AUC of 0.94, 88% sensitivity, 66% specificity, 79% accuracy, and a FAR of 7.3% on the out-of-sample test data.
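The fixed operating point reported above (a threshold chosen at 90% sensitivity on validation data and then applied unchanged to the out-of-sample normal-infant videos) follows a standard thresholding step, sketched below under stated assumptions: the validation scores, labels, and normal-infant scores are synthetic placeholders, and only the segment counts mirror the abstract.

```python
# Sketch of choosing a 90%-sensitivity threshold and estimating FAR (placeholder data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)

# Placeholder validation labels and classifier scores from cross-validation.
val_labels = rng.integers(0, 2, size=1588)
val_scores = np.clip(val_labels * 0.4 + rng.normal(0.3, 0.2, size=1588), 0, 1)

# Pick the largest threshold whose validation sensitivity (TPR) is at least 90%.
fpr, tpr, thresholds = roc_curve(val_labels, val_scores)
threshold = thresholds[np.argmax(tpr >= 0.90)]

# Out-of-sample segments from freely behaving healthy infants: every segment
# scored above the fixed threshold counts as a false alarm.
normal_scores = np.clip(rng.normal(0.3, 0.2, size=666), 0, 1)
far = np.mean(normal_scores >= threshold)
print(f"threshold={threshold:.2f}, FAR={far:.1%}")
```

Because roc_curve returns thresholds in decreasing order with monotonically increasing TPR, taking the first index where TPR reaches 0.90 yields the most conservative threshold that still meets the sensitivity target.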
Conclusions: We evaluated the largest dataset of home videos of children with ES reported in the literature and show that deep learning can detect ES with high performance in a highly heterogeneous, in-field cohort. Automated analysis of home seizure videos has the potential to address a critical clinical need by accelerating the diagnosis of ES through a low-barrier, highly accessible approach.
Funding: This study is funded by a grant from the Berlin Institute of Health.