
Natural Language Processing for Identification of Refractory Status Epilepticus

Abstract number : 1.134
Submission category : 2. Translational Research / 2E. Other
Year : 2021
Submission ID : 1825726
Source : www.aesnet.org
Presentation date : 12/9/2021 12:00:00 PM
Published date : Nov 22, 2021, 06:50 AM

Authors :
Latania Reece, BS – Boston Children's Hospital; Assaf Landschaft, MSc – Boston Children's Hospital; Justice Clark, MPH – Boston Children's Hospital; Amir Kimia, MD – Boston Children's Hospital; Tobias Loddenkemper, MD – Boston Children's Hospital

Rationale: Manual chart review to identify rare events typically focuses on notes that are highly likely to describe the event, with unknown sensitivity. Natural Language Processing (NLP) and machine learning (ML) models can screen a larger corpus of data that includes ancillary notes. We sought to assess the performance of an NLP algorithm trained by a lay NLP user compared with current manual review practice.

Methods: Our rare event is refractory status epilepticus (rSE) among patients aged 28 days to 21 years. The Pediatric Status Epilepticus Research Group (pSERG) used the following rSE inclusion criteria: ongoing or intermittent seizure(s) that fail to respond to first- or second-line anti-seizure medication, or that require continuous infusion for seizure cessation.

Human review data: we used the 2012 pSERG screening log of notes, which had been used prospectively at the time to identify rSE patients eligible for a study. The notes selected for manual screening were chosen by consensus as those most likely to describe rSE. A total of 500 notes were reviewed, identifying 27 rSE cases and thus establishing a positive predictive value (PPV) of 5.5% for this screen.
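
For concreteness, the screening yield above is a positive predictive value in the standard sense: confirmed cases divided by notes reviewed. A minimal sketch of that arithmetic in Python, using the counts reported above (the helper name is ours, not part of the study):

def screening_ppv(confirmed_cases: int, notes_reviewed: int) -> float:
    # PPV of a manual screen = confirmed rSE cases / notes manually reviewed
    return confirmed_cases / notes_reviewed

# Counts from the 2012 pSERG screening log described above: 27 cases in 500 notes.
print(f"{screening_ppv(27, 500):.1%}")  # roughly 5%, the review yield the NLP cutoff is later tuned to match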

Training the NLP model: we manually reviewed 780 medical notes nested within the Boston Children’s Hospital (BCH) data repository. We added ancillary notes (consult notes, nursing notes, etc.), creating a larger and more diverse corpus to train the model while accepting a lower prevalence of rSE. We used a locally developed NLP screening tool (DrT) to train an NLP and ML algorithm to screen for rSE, assigning each note a continuous score (ML score) in which higher scores correspond to a higher likelihood of rSE and vice versa. Next, we chose a cutoff to create a binary model, with a plan to manually review notes that scored at or above the cutoff. We aimed for high sensitivity but with a PPV > 5%, mirroring the review burden of the human review set, while estimating a prevalence of < 0.01% across our more inclusive dataset (Figure 1).
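
The abstract does not show DrT's scoring interface, so the Python sketch below illustrates the cutoff-selection step under assumed data structures: each labeled training note is a (ML score, true-rSE) pair, and we keep the lowest cutoff whose flagged notes still yield a PPV above 5%, favoring sensitivity while holding the manual-review burden near that of the human screen.

from typing import List, Optional, Tuple

def choose_cutoff(scored_notes: List[Tuple[float, bool]],
                  min_ppv: float = 0.05) -> Optional[float]:
    # Return the lowest cutoff whose flagged notes keep PPV >= min_ppv.
    # Lower cutoffs flag more notes (higher sensitivity, more manual review),
    # so the lowest acceptable cutoff favors sensitivity while keeping the
    # review burden comparable to the manual screen.
    for cutoff in sorted({score for score, _ in scored_notes}):
        flagged = [label for score, label in scored_notes if score >= cutoff]
        if flagged and sum(flagged) / len(flagged) >= min_ppv:
            return cutoff
    return None  # no cutoff meets the PPV floor

The actual pipeline was built in DrT; this sketch only illustrates the sensitivity-versus-review-burden trade-off described above.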