Please use this identifier to cite or link to this item:
http://dspace.aiub.edu:8080/jspui/handle/123456789/2799
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Alam, Muhammad Morshed | - |
dc.contributor.author | Trina, Sayma Akter | - |
dc.contributor.author | Hossain, Tamim | - |
dc.contributor.author | Mahmood, Shafin | - |
dc.contributor.author | Ahmed, Md. Sanim | - |
dc.contributor.author | Arafat, Muhammad Yeasir | - |
dc.date.accessioned | 2025-06-24T07:07:50Z | - |
dc.date.available | 2025-06-24T07:07:50Z | - |
dc.date.issued | 2025-02-19 | - |
dc.identifier.citation | 0 | en_US |
dc.identifier.uri | http://dspace.aiub.edu:8080/jspui/handle/123456789/2799 | - |
dc.description.abstract | Autonomous unmanned aerial vehicle (UAV) swarm networks (UAVSNs) can efficiently perform surveillance, connectivity, computing, and energy transfer services for ground users (GUs). These missions require trajectory planning, UAV-GUs association, task offloading, next-hop selection, and resource allocation, including transmit power, bandwidth, timeslots, caching, and computing resources, to enhance network performance. Owing to the highly dynamic topology, limited resources, stringent quality of service requirements, and lack of global knowledge, optimizing network performance in UAVSNs is very intricate. To address this, an adaptive joint optimization framework is required to handle both discrete and continuous decision variables, ensuring optimal performance under various dynamic constraints. A multi-agent deep reinforcement learning-based adaptive actor–critic framework offers an effective solution by leveraging its ability to extract hidden features through agent interactions, generate hybrid actions under uncertainty, and adaptively learn with scalable generalization in dynamic conditions. This paper explores the recent evolutions of actor–critic frameworks to deal with joint optimization problems in UAVSNs by proposing a novel taxonomy based on the modifications in the internal actor–critic neural network structure. Additionally, key open research challenges are identified, and potential solutions are suggested as directions for future research in UAVSNs. | en_US |
dc.language.iso | en | en_US |
dc.publisher | MDPI (Switzerland) | en_US |
dc.subject | Multi-agent deep reinforcement learning | en_US |
dc.subject | actor-critic frameworks | en_US |
dc.subject | UAV swarm networks | en_US |
dc.subject | Trajectory control | en_US |
dc.subject | task offloading | en_US |
dc.subject | Resource allocation | en_US |
dc.title | Variations in Multi-Agent Actor–Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions | en_US |
dc.type | Article | en_US |
Appears in Collections: | Publications From Faculty of Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Dr Alam_2025_JOMA-DRL.docx | Variations in Multi-Agent Actor–Critic Frameworks for Joint Optimizations in UAV Swarm Networks: Recent Evolution, Challenges, and Directions | 3.01 MB | Microsoft Word XML | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
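
The abstract describes actor–critic agents that must emit hybrid actions, combining discrete decisions (e.g., UAV–GU association, next-hop selection) with continuous ones (e.g., transmit power, trajectory steps), typically trained with a critic that sees more than any single agent. As a hedged illustration only, not code from the paper, the Python sketch below pairs a hybrid-action actor (categorical head plus Gaussian head over a shared trunk) with a centralized critic in the CTDE style; all dimensions, layer sizes, and action interpretations are placeholder assumptions.

```python
# Illustrative sketch (not from the paper): a hybrid-action actor and a
# centralized critic of the kind surveyed in the article. The actor emits one
# discrete action (e.g., next-hop index) and one continuous action vector
# (e.g., transmit power, trajectory step) from a shared trunk; the critic
# scores the joint observations and actions of all agents (CTDE).
# All dimensions and layer sizes are placeholder assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal


class HybridActor(nn.Module):
    def __init__(self, obs_dim: int, n_discrete: int, cont_dim: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.discrete_head = nn.Linear(hidden, n_discrete)  # categorical logits
        self.mu_head = nn.Linear(hidden, cont_dim)          # Gaussian mean
        self.log_std = nn.Parameter(torch.zeros(cont_dim))  # state-independent std

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        disc_dist = Categorical(logits=self.discrete_head(h))
        cont_dist = Normal(self.mu_head(h), self.log_std.exp())
        return disc_dist, cont_dist


class CentralCritic(nn.Module):
    """Centralized critic: conditions on every agent's observation and action."""
    def __init__(self, joint_obs_dim: int, joint_act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))


if __name__ == "__main__":
    actor = HybridActor(obs_dim=16, n_discrete=4, cont_dim=2)
    obs = torch.randn(1, 16)
    disc_dist, cont_dist = actor(obs)
    hop = disc_dist.sample()                # e.g., next-hop choice
    power = torch.tanh(cont_dist.sample())  # e.g., normalized transmit power
    print(hop.item(), power)

    # Hypothetical two-agent joint input: 2 x obs_dim observations, 2 x 3 actions.
    critic = CentralCritic(joint_obs_dim=32, joint_act_dim=6)
    q = critic(torch.randn(1, 32), torch.randn(1, 6))
    print(q.item())
```

In a policy-gradient update (e.g., a PPO-style objective), the log-probability of the hybrid action would be the sum of the discrete and continuous log-probabilities; the two-agent critic dimensions above are purely illustrative.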