Analysis of Learning-Based Methods for Intelligent Edge Computing

How to Cite

Sandar, Nay Myo, and Thinzar Aung Win. 2026. “Analysis of Learning-Based Methods for Intelligent Edge Computing”. Journal of Electronics and Informatics 7 (4): 281-97. https://doi.org/10.36548/jei.2025.4.003.

Keywords

— Edge Computing
— Deep Reinforcement Learning
— Task Offloading
— Resource Allocation
— Internet of Things (IoT)
— Vehicular Edge Computing
Published: 07-01-2026

Abstract

Edge computing is an important computing paradigm that processes data and makes decisions close to the user, reducing reliance on cloud infrastructure. It is particularly valuable in latency-sensitive fields such as smart vehicles, IoT services, health monitoring, UAVs, and smart cities. Edge computing environments face challenges including limited computational power, network variability, user mobility, and high volumes of diverse tasks, which are difficult to handle with traditional optimization methods. Deep Reinforcement Learning (DRL) algorithms are widely used in edge computing systems as intelligent decision-making tools that optimize decisions through interaction with the environment. This review presents a comprehensive analysis of recent and representative research applying DRL techniques to edge computing systems. The reviewed papers cover applications such as vehicular edge computing, IoT-based multi-access edge computing, UAV-assisted edge computing, collaboration between fog and cloud computing, and blockchain- and federated-learning-based edge computing systems. The discussion of DRL applications encompasses task offloading, resource management, workload management, caching, and mobility management. A comparative analysis examines the most commonly used DRL algorithms, their applications, and the performance improvements they achieve. Finally, this review highlights key drawbacks of DRL, including training complexity, scalability, privacy, and deployment, and identifies areas requiring improvement to build safe and efficient DRL-based edge computing solutions.
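To make the decision loop concrete: the abstract describes DRL agents that learn offloading decisions by interacting with the edge system. The sketch below is a toy illustration, not a method from any of the reviewed papers (which use deep neural function approximators rather than a Q-table). It trains a tabular Q-learning agent to choose between local execution and offloading over a simulated task queue, where local latency grows with queue length and offloading has a fixed transmission cost; the environment, cost model, and all parameter values are illustrative assumptions.

```python
import random


class OffloadEnv:
    """Toy edge environment: each step, choose local execution (action 0)
    or offloading (action 1). State is the local queue length; rewards are
    negative latency costs."""

    def __init__(self, max_queue=5, seed=0):
        self.max_queue = max_queue
        self.rng = random.Random(seed)
        self.queue = 0

    def reset(self):
        self.queue = 0
        return self.queue

    def step(self, action):
        if action == 0:  # local: latency grows with the backlog
            cost = 1.0 + self.queue
            self.queue = min(self.max_queue, self.queue + 1)
        else:            # offload: fixed transmission cost, queue drains
            cost = 2.0
            self.queue = max(0, self.queue - 1)
        if self.rng.random() < 0.5:  # random task arrival
            self.queue = min(self.max_queue, self.queue + 1)
        return self.queue, -cost


def train(episodes=300, steps=50, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    env = OffloadEnv(seed=seed)
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(env.max_queue + 1)]  # Q[state][action]
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r = env.step(a)
            # one-step Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q


q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(len(q))]
```

Under this cost model, offloading becomes preferable once the local queue is long (local cost 1 + queue exceeds the fixed offload cost of 2), and the learned policy reflects this at the maximum queue state. The surveyed DRL methods generalize exactly this loop to high-dimensional states (channel conditions, vehicle mobility, server loads) via neural networks.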

References

  1. Li, Mushu, Jie Gao, Lian Zhao, and Xuemin Shen. "Deep Reinforcement Learning for Collaborative Edge Computing in Vehicular Networks." IEEE Transactions on Cognitive Communications and Networking 6, no. 4 (2020): 1122-1135.
  2. Zhao, Rui, Xinjie Wang, Junjuan Xia, and Lisheng Fan. "Deep Reinforcement Learning Based Mobile Edge Computing for Intelligent Internet of Things." Physical Communication 43 (2020): 101184.
  3. Chen, Xianfu, Honggang Zhang, Celimuge Wu, Shiwen Mao, Yusheng Ji, and Mehdi Bennis. "Performance Optimization in Mobile-Edge Computing Via Deep Reinforcement Learning." In 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), IEEE, 2018, 1-6.
  4. Tang, Ming, and Vincent WS Wong. "Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing Systems." IEEE Transactions on Mobile Computing 21, no. 6 (2020): 1985-1997.
  5. Liu, Yi, Huimin Yu, Shengli Xie, and Yan Zhang. "Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks." IEEE Transactions on Vehicular Technology 68, no. 11 (2019): 11158-11168.
  6. Ning, Zhaolong, Peiran Dong, Xiaojie Wang, Joel JPC Rodrigues, and Feng Xia. "Deep Reinforcement Learning for Vehicular Edge Computing: An Intelligent Offloading System." ACM Transactions on Intelligent Systems and Technology (TIST) 10, no. 6 (2019): 1-24.
  7. He, Ying, Yuhang Wang, Chao Qiu, Qiuzhen Lin, Jianqiang Li, and Zhong Ming. "Blockchain-Based Edge Computing Resource Allocation in IoT: A Deep Reinforcement Learning Approach." IEEE Internet of Things Journal 8, no. 4 (2020): 2226-2237.
  8. Huang, Liang, Suzhi Bi, and Ying-Jun Angela Zhang. "Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks." IEEE Transactions on Mobile Computing 19, no. 11 (2019): 2581-2593.
  9. Zheng, Tao, Jian Wan, Jilin Zhang, and Congfeng Jiang. "Deep Reinforcement Learning-Based Workload Scheduling for Edge Computing." Journal of Cloud Computing 11, no. 1 (2022): 3.
  10. Qiao, Guanhua, Supeng Leng, Sabita Maharjan, Yan Zhang, and Nirwan Ansari. "Deep Reinforcement Learning for Cooperative Content Caching in Vehicular Edge Computing and Networks." IEEE Internet of Things Journal 7, no. 1 (2019): 247-257.
  11. Li, Yunzhao, Feng Qi, Zhili Wang, Xiuming Yu, and Sujie Shao. "Distributed Edge Computing Offloading Algorithm Based on Deep Reinforcement Learning." IEEE Access 8 (2020): 85204-85215.
  12. Liu, Qian, Long Shi, Linlin Sun, Jun Li, Ming Ding, and Feng Shu. "Path Planning for UAV-Mounted Mobile Edge Computing with Deep Reinforcement Learning." IEEE Transactions on Vehicular Technology 69, no. 5 (2020): 5723-5728.
  13. Zhan, Wenhan, Chunbo Luo, Jin Wang, Chao Wang, Geyong Min, Hancong Duan, and Qingxin Zhu. "Deep-Reinforcement-Learning-Based Offloading Scheduling for Vehicular Edge Computing." IEEE Internet of Things Journal 7, no. 6 (2020): 5449-5465.
  14. Alfakih, Taha, Mohammad Mehedi Hassan, Abdu Gumaei, Claudio Savaglio, and Giancarlo Fortino. "Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA." IEEE Access 8 (2020): 54074-54084.
  15. Sheng, Shuran, Peng Chen, Zhimin Chen, Lenan Wu, and Yuxuan Yao. "Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing." Sensors 21, no. 5 (2021): 1666.
  16. Wang, Liang, Kezhi Wang, Cunhua Pan, Wei Xu, Nauman Aslam, and Arumugam Nallanathan. "Deep Reinforcement Learning Based Dynamic Trajectory Control for UAV-Assisted Mobile Edge Computing." IEEE Transactions on Mobile Computing 21, no. 10 (2021): 3536-3550.
  17. Yamansavascilar, Baris, Ahmet Cihat Baktir, Cagatay Sonmez, Atay Ozgovde, and Cem Ersoy. "DeepEdge: A Deep Reinforcement Learning Based Task Orchestrator for Edge Computing." IEEE Transactions on Network Science and Engineering 10, no. 1 (2022): 538-552.
  18. Luo, Quyuan, Changle Li, Tom H. Luan, and Weisong Shi. "Collaborative Data Scheduling for Vehicular Edge Computing Via Deep Reinforcement Learning." IEEE Internet of Things Journal 7, no. 10 (2020): 9637-9650.
  19. Peng, Haixia, and Xuemin Shen. "Deep Reinforcement Learning Based Resource Management for Multi-Access Edge Computing in Vehicular Networks." IEEE Transactions on Network Science and Engineering 7, no. 4 (2020): 2416-2428.
  20. Goudarzi, Mohammad, Marimuthu Palaniswami, and Rajkumar Buyya. "A Distributed Deep Reinforcement Learning Technique for Application Placement in Edge and Fog Computing Environments." IEEE Transactions on Mobile Computing 22, no. 5 (2021): 2491-2505.
  21. Zhang, Liang, Bijan Jabbari, and Nirwan Ansari. "Deep Reinforcement Learning Driven UAV-Assisted Edge Computing." IEEE Internet of Things Journal 9, no. 24 (2022): 25449-25459.
  22. Chen, Zhao, and Xiaodong Wang. "Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach." EURASIP Journal on Wireless Communications and Networking 2020, no. 1 (2020): 188.
  23. Lei, Lei, Huijuan Xu, Xiong Xiong, Kan Zheng, Wei Xiang, and Xianbin Wang. "Multiuser Resource Control with Deep Reinforcement Learning in IoT Edge Computing." IEEE Internet of Things Journal 6, no. 6 (2019): 10119-10133.
  24. Geng, Liwei, Hongbo Zhao, Jiayue Wang, Aryan Kaushik, Shuai Yuan, and Wenquan Feng. "Deep-Reinforcement-Learning-Based Distributed Computation Offloading in Vehicular Edge Computing Networks." IEEE Internet of Things Journal 10, no. 14 (2023): 12416-12433.
  25. Yu, Shuai, Xu Chen, Zhi Zhou, Xiaowen Gong, and Di Wu. "When Deep Reinforcement Learning Meets Federated Learning: Intelligent Multitimescale Resource Management for Multiaccess Edge Computing in 5G Ultradense Network." IEEE Internet of Things Journal 8, no. 4 (2020): 2238-2251.