Multimodal Feature Fusion and AI-Driven Framework for Early Detection of Alzheimer’s Disease
Date of Award
4-2025
Degree Name
Doctor of Philosophy
Department
Electrical and Computer Engineering
First Advisor
Ikhlas Abdel-Qader, Ph.D.
Second Advisor
Bradley Bazuin, Ph.D.
Third Advisor
Alessander Santos, Ph.D.
Fourth Advisor
Saad Shebrain, Ph.D.
Keywords
Early detection of Alzheimer’s disease, multimodal feature fusion
Abstract
Alzheimer's disease (AD) is a progressive neurological disorder in which neurodegeneration begins long before clinical symptoms such as memory loss appear. Early detection is essential for timely intervention, yet much research focuses on the AD and cognitively normal (CN) stages and often neglects mild cognitive impairment (MCI). To address this, a multimodal feature fusion framework is introduced that combines features from different sources to improve the accuracy and robustness of AD detection algorithms. In the first stage, fusion vectors are generated by combining hippocampal texture features, derived from the 2D Gray-Level Co-occurrence Matrix (2D-GLCM), with hippocampal volumes from 3D Magnetic Resonance Imaging (MRI). These fused features are then used to classify subjects into AD, MCI, and CN groups. Data from the Medical Decathlon Hippocampus Dataset are classified using K-Nearest Neighbor (KNN), Probabilistic Neural Network (PNN), and Random Forest (RF) algorithms. The second stage expands the feature set to include engineered texture features from the hippocampus and entorhinal cortex alongside Standardized Uptake Value Ratios (SUVR) from Positron Emission Tomography (PET) images sourced from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. This process involves segmenting 3D brain regions, including skull stripping and creating aligned masks for the regions of interest (ROIs). After segmentation, histogram-based feature engineering is applied to capture value distributions, extracting a single feature from each ROI’s 2D-GLCM texture features. Subsequently, four classifiers (Linear Support Vector Machine (L-SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), and Logistic Regression with Stochastic Gradient Descent (LRSGD)) evaluate and rank the contribution of different features in distinguishing AD-negative (stable MCI) from AD-positive (MCI conversion) cases.
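To make the first-stage fusion concrete, the sketch below shows how a 2D-GLCM can be computed from a quantized image slice, reduced to a few classic Haralick-style texture features, and concatenated with a volume scalar into a fusion vector. This is an illustrative simplification, not the dissertation's actual pipeline: the function names, the choice of 8 gray levels, the single (1, 0) pixel offset, and the four-feature subset are all assumptions for demonstration.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized 2D gray-level co-occurrence matrix for offset (dx, dy).
    `img` is an integer array with values in [0, levels)."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1.0
    return g / g.sum()

def haralick_features(g):
    """Contrast, homogeneity, energy, and correlation from a normalized GLCM."""
    i, j = np.indices(g.shape)
    contrast = np.sum(g * (i - j) ** 2)
    homogeneity = np.sum(g / (1.0 + (i - j) ** 2))
    energy = np.sum(g ** 2)
    mu_i, mu_j = np.sum(i * g), np.sum(j * g)
    sd_i = np.sqrt(np.sum(g * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(g * (j - mu_j) ** 2))
    corr = np.sum(g * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
    return np.array([contrast, homogeneity, energy, corr])

def fusion_vector(slice_img, hippocampal_volume_mm3, levels=8):
    """Concatenate GLCM texture features with a hippocampal volume scalar."""
    return np.concatenate([haralick_features(glcm(slice_img, levels)),
                           [hippocampal_volume_mm3]])
```

A fusion vector built this way (here, four texture features plus one volume feature) is what would be handed to a classifier such as KNN, PNN, or RF in the first stage.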
The final stage focuses on multimodal feature fusion across memory, vision, and speech brain regions, examining biomarkers such as 2D-GLCM texture, volume, SUVR, sex, and obesity. The results from these stages reveal significant insights. In the first stage, binary classification of MCI vs. CN achieves 97% accuracy, demonstrating the enhanced performance of fused hippocampal 2D-GLCM texture and 3D MRI volumes over individual modalities. In the second stage, the 2D-GLCM texture features excel, achieving 90% sensitivity in identifying AD-positive (MCI conversion) cases with few false positives, underscoring the importance of incorporating texture features from the hippocampus and entorhinal cortex alongside SUVR from PET images. The third stage reveals sex-specific differences: males show more pronounced associations with texture features in memory regions, volume in vision regions, and SUVR in speech regions, whereas females show significant texture features in memory and speech regions and SUVR in vision regions. These findings highlight the necessity of exploring brain regions beyond memory for comprehensive AD detection. By leveraging multimodal feature fusion and advanced AI integration, the framework significantly improves the accuracy and robustness of early AD detection. Fusing multimodal features into a unified AI-driven system sets a new benchmark for predictive performance, offering a scalable and adaptable solution for diverse datasets in clinical diagnostics. Moreover, the study emphasizes the critical role of multimodal feature fusion and feature importance analysis in identifying early biomarkers of AD, particularly sex-specific patterns, enabling personalized early interventions that can improve outcomes and slow disease progression.
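One way to picture the feature-importance analysis described above is to train a logistic regression with stochastic gradient descent on standardized features and rank features by the magnitude of their learned weights. The sketch below is a minimal, self-contained illustration of that idea; the feature names, learning rate, and synthetic data are assumptions for demonstration and do not reflect the dissertation's actual data or rankings.

```python
import numpy as np

def lr_sgd(X, y, lr=0.1, epochs=200, seed=0):
    """Logistic regression trained with plain stochastic gradient descent.
    X should be standardized so weight magnitudes are comparable."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # sigmoid
            grad = p - y[i]                             # dLoss/dlogit
            w -= lr * grad * X[i]
            b -= lr * grad
    return w, b

def rank_features(X, y, names):
    """Rank features by |weight| of an LR-SGD model on standardized inputs."""
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)
    w, _ = lr_sgd(Xs, y)
    order = np.argsort(-np.abs(w))
    return [names[i] for i in order]
```

On synthetic data where only one feature drives the AD-positive vs. AD-negative label, that feature surfaces at the top of the ranking, which is the intuition behind using linear-model weights to compare biomarker contributions.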
Access Setting
Dissertation-Abstract Only
Restricted to Campus until
5-1-2027
Recommended Citation
Hassouneh, Aya, "Multimodal Feature Fusion and AI-Driven Framework for Early Detection of Alzheimer’s Disease" (2025). Dissertations. 4153.
https://scholarworks.wmich.edu/dissertations/4153
Comments
Fifth Advisor: Ilgin P Acar, Ph.D.