Please use this identifier to cite or link to this item:
http://dspace.aiub.edu:8080/jspui/handle/123456789/2651
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Khanom, Fahmida | - |
dc.contributor.author | Biswas, Shuvo | - |
dc.contributor.author | Shorif Uddin, Mohammad | - |
dc.contributor.author | Mostafiz, Rafid | - |
dc.date.accessioned | 2025-03-04T03:55:12Z | - |
dc.date.available | 2025-03-04T03:55:12Z | - |
dc.date.issued | 2024-10-16 | - |
dc.identifier.citation | Scopus | en_US |
dc.identifier.issn | Electronic: 1572-8110. Print: 1381-2416 | - |
dc.identifier.uri | http://dspace.aiub.edu:8080/jspui/handle/123456789/2651 | - |
dc.description.abstract | Parkinson's disease (PD) is a progressive neurological disorder whose symptoms worsen gradually over time, making early diagnosis difficult. Traditionally, diagnosis relies on a neurologist's detailed assessment of the patient's medical history and multiple scans. Recently, artificial intelligence (AI)-based computer-aided diagnosis (CAD) systems have demonstrated superior performance by capturing complex, nonlinear patterns in clinical data. However, the opaque nature of many AI models, often referred to as "black box" systems, has raised concerns about their transparency, resulting in hesitation among clinicians to trust their outputs. To address this challenge, we propose an explainable ensemble machine learning framework, XEMLPD, designed to provide both global and local interpretability in PD diagnosis while maintaining high predictive accuracy. Our study utilized two clinical datasets, carefully curated and optimized through a two-step data preprocessing technique that handled outliers and ensured data balance, thereby reducing bias. Several ensemble machine learning (EML) models—boosting, bagging, stacking, and voting—were evaluated, with optimized features selected using techniques such as SelectKBest, mRMR, PCA, and LDA. Among these, the stacking model combined with LDA feature optimization consistently delivered the highest accuracy. To ensure transparency, we integrated explainable AI methods—SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME)—into the stacking model. These methods were applied post-evaluation, ensuring that each prediction is accompanied by a detailed explanation. By offering both global and local interpretability, the XEMLPD framework provides clear insights into the decision-making process of the model. This transparency aids clinicians in developing better treatment strategies and enhances the overall prognosis for PD patients. Additionally, our framework serves as a valuable tool for clinical data scientists in creating more reliable and interpretable CAD systems. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Springer Nature | en_US |
dc.relation.ispartofseries | 3; | - |
dc.subject | Parkinson's disease · Ensemble machine learning · Feature optimization · Explainable AI · CAD systems | en_US |
dc.title | XEMLPD: an explainable ensemble machine learning approach for Parkinson disease diagnosis with optimized features | en_US |
dc.type | Article | en_US |
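
The abstract above describes a stacking ensemble combined with LDA feature optimization and post-hoc SHAP/LIME explanations. The following is a minimal illustrative sketch of such a pipeline using scikit-learn and the shap package; it is not the authors' implementation, and the dataset, base learners, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch: stacking ensemble + LDA feature optimization + post-hoc SHAP.
# The dataset, base learners, and hyperparameters are placeholders, not the
# configuration reported in the paper.
import shap
from sklearn.datasets import load_breast_cancer  # stand-in for a PD clinical dataset
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# LDA as the feature-optimization step (for a binary task, n_components is at most 1).
lda = LinearDiscriminantAnalysis(n_components=1)

# Stacking ensemble: two tree-based base learners with a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

model = make_pipeline(StandardScaler(), lda, stack)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Post-evaluation, model-agnostic local explanations with KernelSHAP on the whole pipeline.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X_train, 50))
shap_values = explainer.shap_values(X_test[:5])  # SHAP values for a few test samples
```

Local explanations with LIME could be produced analogously, e.g. with lime.lime_tabular.LimeTabularExplainer applied to the same fitted pipeline; the preprocessing steps (outlier handling and class balancing) mentioned in the abstract are omitted here for brevity.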
Appears in Collections: Publications: Journals
Files in This Item:
File | Description | Size | Format
---|---|---|---
Dspace (4).docx | Information | 4.57 MB | Microsoft Word XML