Please use this identifier to cite or link to this item: http://dspace.aiub.edu:8080/jspui/handle/123456789/2651
Full metadata record
DC Field                      Value                                                      Language
dc.contributor.author         Khanom, Fahmida                                            -
dc.contributor.author         Biswas, Shuvo                                              -
dc.contributor.author         Shorif Uddin, Mohammad                                     -
dc.contributor.author         Mostafiz, Rafid                                            -
dc.date.accessioned           2025-03-04T03:55:12Z                                       -
dc.date.available             2025-03-04T03:55:12Z                                       -
dc.date.issued                2024-10-16                                                 -
dc.identifier.citation        Scopus                                                     en_US
dc.identifier.issn            Electronic: 1572-8110. Print: 1381-2416                    -
dc.identifier.uri             http://dspace.aiub.edu:8080/jspui/handle/123456789/2651    -
dc.description.abstract       Parkinson's disease (PD) is a progressive neurological disorder that worsens over time, making early diagnosis difficult. Traditionally, diagnosis relies on a neurologist's detailed assessment of the patient's medical history and multiple scans. Recently, artificial intelligence (AI)-based computer-aided diagnosis (CAD) systems have demonstrated superior performance by capturing complex, nonlinear patterns in clinical data. However, the opaque nature of many AI models, often referred to as "black box" systems, has raised concerns about their transparency, resulting in hesitation among clinicians to trust their outputs. To address this challenge, we propose an explainable ensemble machine learning framework, XEMLPD, designed to provide both global and local interpretability in PD diagnosis while maintaining high predictive accuracy. Our study utilized two clinical datasets, carefully curated and optimized through a two-step data preprocessing technique that handled outliers and ensured data balance, thereby reducing bias. Several ensemble machine learning (EML) models (boosting, bagging, stacking, and voting) were evaluated, with optimized features selected using techniques such as SelectKBest, mRMR, PCA, and LDA. Among these, the stacking model combined with LDA feature optimization consistently delivered the highest accuracy. To ensure transparency, we integrated explainable AI methods, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), into the stacking model. These methods were applied post-evaluation, ensuring that each prediction is accompanied by a detailed explanation. By offering both global and local interpretability, the XEMLPD framework provides clear insights into the decision-making process of the model. This transparency aids clinicians in developing better treatment strategies and enhances the overall prognosis for PD patients. Additionally, our framework serves as a valuable tool for clinical data scientists in creating more reliable and interpretable CAD systems.    en_US
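The pipeline the abstract describes (LDA feature optimization feeding a stacking ensemble, followed by a post-hoc, model-agnostic explanation step) can be sketched roughly as follows. This is not the authors' code: the stand-in dataset, the choice of base learners, and the use of permutation importance in place of SHAP/LIME are all assumptions for illustration only.

```python
# Illustrative sketch of an LDA -> stacking -> post-hoc explanation pipeline.
# Dataset, base learners, and parameters are assumptions, not the paper's setup.
from sklearn.datasets import load_breast_cancer  # stand-in for a clinical dataset
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# LDA projects the inputs onto at most (n_classes - 1) discriminant axes;
# for a binary problem that is a single optimized feature.
model = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(),
    StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        final_estimator=LogisticRegression(),
    ),
)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)

# Post-hoc, model-agnostic global explanation on the held-out set
# (a simple stand-in for SHAP's global feature attributions).
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
top = int(imp.importances_mean.argmax())
print(f"held-out accuracy={acc:.3f}, most influential feature index={top}")
```

Because the explanation step only queries the fitted pipeline's predictions, it applies unchanged to any of the ensemble variants the abstract compares (boosting, bagging, stacking, or voting).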
dc.language.iso               en                                                         en_US
dc.publisher                  Springer Nature                                            en_US
dc.relation.ispartofseries    3;                                                         -
dc.subject                    Parkinson's disease · Ensemble machine learning · Feature optimization · Explainable AI · CAD systems    en_US
dc.title                      XEMLPD: an explainable ensemble machine learning approach for Parkinson disease diagnosis with optimized features    en_US
dc.type                       Article                                                    en_US
Appears in Collections:Publications: Journals

Files in This Item:
File                Description    Size       Format
Dspace (4).docx     Information    4.57 MB    Microsoft Word XML


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.