
Artificial Intelligence Based Detection of Epileptic Seizures from Video Data

Abstract number : 2.54
Submission category : 3. Neurophysiology / 3A. Video EEG Epilepsy-Monitoring
Year : 2024
Submission ID : 1468
Source : www.aesnet.org
Presentation date : 12/8/2024

Authors :
Presenting Author: Aidan Boyne, BS – Baylor College of Medicine

Anthony Allam, BS – Baylor College of Medicine
Hsiang Yeh, BS – UCLA
Brandon Brown, MD – University of Utah Health Academic Medical Center
Mohammad Taba, MD – University of California, Los Angeles
R. James Cotton, MD, PhD – Shirley Ryan AbilityLab
Zulfi Haneef, MBBS, MD – Baylor College of Medicine

Rationale: Epilepsy affects over 3.5 million patients in the United States, with an estimated 1.5 million refractory to pharmaceutical management alone. Limited availability of epilepsy monitoring units (EMU) and of experienced epileptologists to interpret video EEG (vEEG) results presents a significant bottleneck for patients who might benefit from surgical treatment but have not yet been screened for surgical candidacy. Automated video analysis using machine learning offers a promising solution for first-pass seizure detection without the need for expensive and resource-intensive EEG monitoring. Several previous models have employed 2D convolutional neural networks (CNNs) and long short-term memory networks to detect and classify seizures from video with general success, but they often suffer from high false-positive rates or fail to generalize to new patients. We employ a novel approach, using a competitive activity-recognition architecture to identify seizure events from EMU videos by incorporating the temporal nature of video data.

Methods: We utilize a two-stream inflated 3D ConvNet (I3D) architecture to classify video clips as seizure or non-seizure. A pretrained human-action classification model was fine-tuned and tested on eleven hours of video data containing 49 tonic-clonic seizure events from 25 patients monitored at a large academic hospital (site A), using a leave-one-patient-out cross-validation scheme. Model performance was evaluated by comparing predictions for each video against ground-truth annotations obtained from vEEG review by a fellowship-trained epileptologist, both on videos from site A and on a separate video dataset from a large academic hospital with a different EMU setup (site B).
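The abstract does not include implementation details, but the general approach can be illustrated with a brief, hypothetical sketch. The following Python code uses torchvision's Kinetics-pretrained r3d_18 as a stand-in for the RGB stream of the two-stream I3D described above, replaces its action-classification head with a binary seizure/non-seizure head for fine-tuning, and uses scikit-learn's LeaveOneGroupOut to generate leave-one-patient-out splits. The model choice, clip shapes, hyperparameters, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: fine-tuning a Kinetics-pretrained 3D CNN for binary
# seizure/non-seizure clip classification with leave-one-patient-out splits.
# torchvision's r3d_18 stands in here for the RGB stream of the two-stream I3D.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights
from sklearn.model_selection import LeaveOneGroupOut

def build_model() -> nn.Module:
    model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)  # pretrained on human actions
    model.fc = nn.Linear(model.fc.in_features, 2)          # seizure vs. non-seizure head
    return model

# clips: (N, 3, T, H, W) float tensor; labels: (N,) long tensor of 0/1;
# patient_ids: (N,) array of patient identifiers used as grouping variable.
def loocv_splits(clips, labels, patient_ids):
    logo = LeaveOneGroupOut()
    for train_idx, test_idx in logo.split(clips, labels, groups=patient_ids):
        yield train_idx, test_idx  # one held-out patient per fold

def finetune(model, clips, labels, train_idx, epochs=5, batch_size=4, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for start in range(0, len(train_idx), batch_size):
            batch = train_idx[start:start + batch_size]
            logits = model(clips[batch])           # (b, 2) seizure/non-seizure logits
            loss = loss_fn(logits, labels[batch])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

In the two-stream setup described, a second, identically structured network would take optical-flow clips as input, with the two streams' predictions fused (e.g., logits averaged) at inference; that fusion step is omitted here for brevity.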

Results: The model classified previously unseen videos from site A with a mean accuracy of 0.944 ± 0.064 and a mean area under the receiver operating characteristic curve (AUC) of 0.991 ± 0.015. The mean latency between seizure onset and model detection was 0.61 ± 2.01 seconds, surpassing prior best-in-class seizure detection algorithms. Similar performance was achieved when the model was trained and tested on site B data. Cross-site evaluation (site A model tested on site B data and vice versa) demonstrated the generalizability of the model, with some loss of performance.
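As an illustration of how per-video metrics like those reported might be computed, the following sketch derives accuracy, AUC, and onset-detection latency from clip-level seizure probabilities and ground-truth annotations. The clip-level prediction format, clip duration, and decision threshold are assumptions, not details taken from the study.

```python
# Hypothetical sketch: computing accuracy, AUC, and onset-detection latency
# for one video from per-clip seizure probabilities and vEEG-derived labels.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_video(probs, labels, clip_len_s=1.0, threshold=0.5):
    """probs: (n_clips,) model seizure probabilities for consecutive clips.
    labels: (n_clips,) ground-truth 0/1 annotations from vEEG review.
    clip_len_s: assumed clip duration in seconds.
    Assumes the video contains both seizure and non-seizure clips."""
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    preds = (probs >= threshold).astype(int)

    acc = accuracy_score(labels, preds)
    auc = roc_auc_score(labels, probs)

    # Latency: time from annotated seizure onset to the first detected seizure clip.
    onset_idx = int(np.argmax(labels == 1))        # first ground-truth seizure clip
    detected = np.flatnonzero(preds == 1)
    latency_s = (int(detected[0]) - onset_idx) * clip_len_s if detected.size else None
    return acc, auc, latency_s
```

Per-video values computed this way would then be averaged across the leave-one-patient-out test folds to obtain means and standard deviations analogous to those reported.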

Conclusions: Our model demonstrates the high performance achievable with 3D-CNN models in identifying epileptic seizures from video data and outperforms all similar models in the literature. The results suggest high generalizability of the underlying I3D architecture, validate the incorporation of the temporal nature of video data into seizure detection models, and provide a foundation for at-home video seizure monitoring.

Funding: Department of Defense (DoD), CDMRP Virtual Post-Traumatic Epilepsy Research Center (P-TERC) Faculty Award. Proposal Number EP230097; Award Number HT9425-24-1-0355.
