Please use this identifier to cite or link to this item: http://dspace.aiub.edu:8080/jspui/handle/123456789/2918
Full metadata record
DC Field | Value | Language
dc.contributor.authorSourav, Datto-
dc.contributor.authorMustakim, Ahmed-
dc.contributor.authorMd. Eaoumoon, Haque-
dc.contributor.authorKazi, Redwan-
dc.contributor.authorSajedul, Islam-
dc.contributor.authorNasif, Hannan-
dc.contributor.authorMohammad Shah, Paran-
dc.contributor.authorAbu, Shufian-
dc.date.accessioned2025-12-14T08:56:00Z-
dc.date.available2025-12-14T08:56:00Z-
dc.date.issued2025-10-27-
dc.identifier.citation930en_US
dc.identifier.issn2772-7831-
dc.identifier.urihttp://dspace.aiub.edu:8080/jspui/handle/123456789/2918-
dc.description.abstractSmart grids are under increasing pressure from rising energy demand, growing shares of renewable generation, and unpredictable consumption. When control systems fail to respond in time, the result can be frequency deviations, power imbalances, and reduced grid reliability. This paper introduces a reinforcement learning (RL) framework based on Q-learning that addresses these challenges through real-time, adaptive control. The approach applies Principal Component Analysis (PCA) to reduce data dimensionality and discretizes continuous variables to make learning more efficient. A custom Markov Decision Process (MDP) models the grid environment, in which the agent chooses one of three actions (Increase, Decrease, or Hold) based on the current state. A tabular Q-learning algorithm learns the best decision in each state by maximizing cumulative reward over time. Results show that the RL agent improves power stability by 22% over baseline methods and responds accurately to supply and demand shifts, with action preferences distributed as Increase (58%), Hold (31%), and Decrease (11%). Heatmaps and 3D plots reveal clear action patterns and strong decision confidence, with more than 85% of states showing a single decisive optimal action. The model adapts well to changing conditions, making it useful for intelligent and stable grid control and supporting smarter energy systems.en_US
dc.language.isoenen_US
dc.publisherIEEEen_US
dc.subjectClimate changeen_US
dc.subjectPower Systemen_US
dc.subjectDecarbonizationen_US
dc.subjectTechno-economic feasibilityen_US
dc.subjectRenewable energyen_US
dc.subjectEnergy storageen_US
dc.subjectMicrogriden_US
dc.subjectProsumeren_US
dc.titleReinforcement Learning for Smart Grid Stability Using Adaptive Control and State Abstractionen_US
dc.typeArticleen_US
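The abstract's control loop (a tabular Q-learning agent choosing Increase, Decrease, or Hold over a discretized grid state) can be sketched as follows. The toy environment dynamics, reward shape, bin count, and hyperparameters below are illustrative assumptions for demonstration, not the paper's actual implementation.

```python
import random

random.seed(0)

# Illustrative tabular Q-learning for three-action grid control.
# Assumed setup: the state is a discretized power-imbalance bin,
# and the reward penalizes distance from the balanced (center) bin.
ACTIONS = ["Increase", "Decrease", "Hold"]
N_STATES = 11                      # assumed bins for the imbalance signal
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Toy dynamics: the action nudges the imbalance bin, plus noise."""
    delta = {"Increase": 1, "Decrease": -1, "Hold": 0}[action]
    nxt = max(0, min(N_STATES - 1, state + delta + random.choice([-1, 0, 1])))
    reward = -abs(nxt - N_STATES // 2)  # best when the imbalance is centered
    return nxt, reward

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def choose(state):
    """Epsilon-greedy action selection over the Q-table row."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[state][a])

for episode in range(500):
    s = random.randrange(N_STATES)
    for _ in range(50):
        a = choose(s)
        s2, r = step(s, ACTIONS[a])
        # Standard Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy per state, analogous to the "decisive optimal action"
# the abstract reports for most states.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[s][a])]
          for s in range(N_STATES)]
```

After training, reading out the greedy action per state yields the kind of per-state action map the abstract visualizes with heatmaps.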
Appears in Collections:Publications From Faculty of Engineering

Files in This Item:
File | Description | Size | Format
Shufian_2025_Malaysia Conf.docx | Shufian_2025_Malaysia Conf | 3.28 MB | Microsoft Word XML


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.