Please use this identifier to cite or link to this item: http://dspace.aiub.edu:8080/jspui/handle/123456789/2918

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Sourav, Datto | - |
| dc.contributor.author | Mustakim, Ahmed | - |
| dc.contributor.author | Md. Eaoumoon, Haque | - |
| dc.contributor.author | Kazi, Redwan | - |
| dc.contributor.author | Sajedul, Islam | - |
| dc.contributor.author | Nasif, Hannan | - |
| dc.contributor.author | Mohammad Shah, Paran | - |
| dc.contributor.author | Abu, Shufian | - |
| dc.date.accessioned | 2025-12-14T08:56:00Z | - |
| dc.date.available | 2025-12-14T08:56:00Z | - |
| dc.date.issued | 2025-10-27 | - |
| dc.identifier.citation | 930 | en_US |
| dc.identifier.issn | 2772-7831 | - |
| dc.identifier.uri | http://dspace.aiub.edu:8080/jspui/handle/123456789/2918 | - |
| dc.description.abstract | Smart grids are under pressure from rising energy use, a growing share of renewable sources, and unpredictable consumption. When control systems fail to respond in time, the result can be frequency deviations, power imbalances, and reduced grid reliability. This paper introduces a reinforcement learning (RL) framework based on Q-learning to address these challenges through real-time, adaptive control. The approach uses Principal Component Analysis (PCA) to reduce data complexity and discretizes continuous variables to make learning more efficient. A custom Markov Decision Process (MDP) models the grid environment, in which the agent chooses one of three actions (Increase, Decrease, or Hold) based on the current state. A tabular Q-learning algorithm lets the agent learn the best decisions by maximizing cumulative reward over time. Results show that the RL agent improves power stability by 22% over baseline methods and reacts accurately to supply and demand shifts, with action preferences distributed as Increase (58%), Hold (31%), and Decrease (11%). Heatmaps and 3D plots reveal clear action patterns and strong confidence in decisions, with more than 85% of states showing a decisive optimal action. The model adapts well to changes, making it useful for intelligent and stable grid control and supporting smarter energy systems. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | IEEE | en_US |
| dc.subject | Climate change | en_US |
| dc.subject | Power System | en_US |
| dc.subject | Decarbonization | en_US |
| dc.subject | Techno-economic feasibility | en_US |
| dc.subject | Renewable energy | en_US |
| dc.subject | Energy storage | en_US |
| dc.subject | Microgrid | en_US |
| dc.subject | Prosumer | en_US |
| dc.title | Reinforcement Learning for Smart Grid Stability Using Adaptive Control and State Abstraction | en_US |
| dc.type | Article | en_US |
| Appears in Collections: | Publications From Faculty of Engineering | |
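The abstract above describes a two-stage pipeline: state abstraction (PCA followed by discretization of continuous grid variables) and tabular Q-learning over three control actions (Increase, Decrease, Hold). The sketch below illustrates that control loop on a toy, one-dimensional grid-balance signal. It is a minimal sketch, not the authors' implementation: the environment dynamics, number of bins, reward shape, and hyperparameters are all assumptions chosen for readability.

```python
import numpy as np

# Minimal tabular Q-learning sketch for the Increase / Decrease / Hold control
# loop named in the abstract. The toy environment, bin count, reward, and
# hyperparameters are illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)

N_BINS = 10                      # bins for the discretized (PCA-reduced) state
ACTIONS = ("Increase", "Decrease", "Hold")
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
NOMINAL = 0.5                    # target normalized supply-demand balance

Q = np.zeros((N_BINS, len(ACTIONS)))

def discretize(x: float) -> int:
    """Map a continuous state in [0, 1] to one of N_BINS bins."""
    return min(int(x * N_BINS), N_BINS - 1)

def step(x: float, action: str) -> tuple[float, float]:
    """Toy transition: the action nudges the balance; demand adds noise."""
    nudge = {"Increase": 0.05, "Decrease": -0.05, "Hold": 0.0}[action]
    x_next = float(np.clip(x + nudge + rng.normal(0, 0.02), 0.0, 1.0))
    reward = -abs(x_next - NOMINAL)          # penalize deviation from nominal
    return x_next, reward

x = rng.uniform()                            # initial normalized grid state
for _ in range(5000):
    s = discretize(x)
    # epsilon-greedy action selection
    a = rng.integers(len(ACTIONS)) if rng.random() < EPSILON else int(np.argmax(Q[s]))
    x, r = step(x, ACTIONS[a])
    s_next = discretize(x)
    # standard tabular Q-learning update
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

# Greedy action per discretized state, analogous to the paper's action maps
for s in range(N_BINS):
    print(f"state bin {s}: {ACTIONS[int(np.argmax(Q[s]))]}")
```

In the paper the state would come from PCA-reduced, discretized grid measurements rather than this single synthetic variable, but the epsilon-greedy selection and the Q-table update shown here are the standard tabular Q-learning components the abstract refers to.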
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Shufian_2025_Malaysia Conf.docx | Shufian_2025_Malaysia Conf | 3.28 MB | Microsoft Word XML | View/Open |