
An Attention-based Multi-instance Learning Framework for 18F-FDG PET Imaging Diagnosis in Autoimmune Encephalitis

Abstract number : 1.218
Submission category : 2. Translational Research / 2C. Biomarkers
Year : 2024
Submission ID : 1094
Source : www.aesnet.org
Presentation date : 12/7/2024 12:00:00 AM
Published date :

Authors :
Presenting Author: Yueqian Sun, PhD – Beijing Tiantan Hospital

Ruizhe Sun, PhD – Beijing Tiantan Hospital
Qun Wang, MD – Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, China

Rationale: This study aims to explore the 18F-FDG PET/CT imaging characteristics of patients with autoimmune encephalitis (AE) and to classify patients precisely through image feature extraction and fusion, achieving diagnosis of AE and its subtypes.


Methods: This study included data from three centers. Beijing Tiantan Hospital provided 137 patients with AE and 40 healthy controls (HC) as the training set. Yanda Hospital contributed 32 AE patients and 40 HC, and Jining Medical College provided 40 HC; both served as independent external validation datasets for the multicenter analysis. MATLAB and SPM12 were used for normalization, smoothing, and other preprocessing of the 18F-FDG PET images. Image features were extracted with the deep convolutional neural network ResNet18, and multiple-instance learning (MIL) with an attention mechanism was applied for feature fusion and classification. A multimodal MIL model (m-MIL) was developed by integrating patients' sex and age, and logistic regression (LR) and random forest (RF) classifiers were employed for comparative analysis.
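For illustration, the following is a minimal sketch of an attention-based MIL classifier of this kind, assuming PyTorch and torchvision; all class and variable names are hypothetical, not the authors' code. Axial PET slices form a bag of instances, a ResNet18 backbone embeds each slice, attention pooling aggregates the bag, and age/sex covariates are concatenated before classification (the multimodal fusion step).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class AttentionMIL(nn.Module):
    """Attention-based MIL over PET slices, fused with age/sex covariates."""

    def __init__(self, n_classes: int = 2, embed_dim: int = 512, attn_dim: int = 128):
        super().__init__()
        backbone = resnet18(weights=None)   # ImageNet weights optional
        backbone.fc = nn.Identity()         # expose the 512-d slice embedding
        self.backbone = backbone
        # One attention score per instance (slice) in the bag
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # Classify the pooled bag embedding concatenated with 2 covariates
        self.classifier = nn.Linear(embed_dim + 2, n_classes)

    def forward(self, slices, covariates):
        # slices: (n_slices, 3, H, W) — single-channel PET slices replicated
        # to 3 channels for ResNet18's input; covariates: (2,) = age, sex
        h = self.backbone(slices)                    # (n_slices, 512)
        a = torch.softmax(self.attention(h), dim=0)  # (n_slices, 1) weights
        bag = (a * h).sum(dim=0)                     # attention-weighted pooling
        fused = torch.cat([bag, covariates])         # multimodal fusion
        return self.classifier(fused.unsqueeze(0)), a  # logits, attention map

# Usage: one patient as a bag of 60 slices, with normalized age and binary sex
model = AttentionMIL(n_classes=2)
logits, attn = model(torch.randn(60, 3, 224, 224), torch.tensor([0.45, 1.0]))
```

Returning the attention weights alongside the logits lets one inspect which slices drove a prediction, which is the usual appeal of attention-based MIL over plain average pooling.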


Results: This study effectively differentiated patients with AE from HC, as well as among AE subtypes, including LGI1-AE, NMDA-AE, GABAb-AE, and GAD65-AE. On the training dataset, for AE versus HC, the m-MIL method outperformed the LR and RF methods (m-MIL: ACC = 92.31%, SEN = 91.91%, SPE = 100%, AUC = 96.39%; LR: ACC = 84.62%, SEN = 100%, SPE = 84.62%, AUC = 91.66%; RF: ACC = 84.62%, SEN = 100%, SPE = 84.62%, AUC = 91.66%). In AE subtype diagnosis, m-MIL showed a similar advantage (ACC = 95.05%). On the validation dataset, for AE versus HC, m-MIL achieved ACC = 73.95%, SEN = 91.89%, SPE = 54.84%, and AUC = 81%, whereas LR achieved ACC = 62.18%, SEN = 91.89%, SPE = 44.74%, and AUC = 81%, and RF achieved ACC = 44.53%, SEN = 94.59%, SPE = 35.35%, and AUC = 80%. Likewise, in AE subtype diagnosis on the validation set, m-MIL retained its advantage (ACC = 72.97%).
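For reference, the reported metrics can be computed as in the brief sketch below, assuming scikit-learn; y_true and y_score are placeholder arrays for illustration, not study data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true  = np.array([1, 1, 1, 0, 0, 0])               # 1 = AE, 0 = HC
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6])   # predicted AE probability
y_pred  = (y_score >= 0.5).astype(int)               # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = accuracy_score(y_true, y_pred)
sen = tp / (tp + fn)                                 # sensitivity (AE recall)
spe = tn / (tn + fp)                                 # specificity (HC recall)
auc = roc_auc_score(y_true, y_score)                 # threshold-independent
print(f"ACC={acc:.2%} SEN={sen:.2%} SPE={spe:.2%} AUC={auc:.2%}")
```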

Conclusions: The image feature extraction and fusion method developed in this study effectively enhances the accuracy and specificity of diagnosing AE and its subtypes, demonstrating the advantage of multimodal fusion models in improving diagnostic performance. These findings are of significant importance for the early diagnosis and treatment selection of AE, offering new perspectives and tools for future clinical practice.


Funding: The National Key R&D Program of China (2022YFC2503800), the National Natural Science Foundation of China (82371449), the Beijing Natural Science Foundation (7232045 and Z200024), and the Capital Health Research and Development Special Fund (2024-1-2041).

