Path Planning of Mobile Robot Using Reinforcement Learning

Keywords

Path planning
reinforcement learning
Q-learning
mobile robot
robot operating system

How to Cite

Krishnan, Kiran G, Abhishek Mohan, S. Vishnu, Steve Abraham Eapen, Amith Raj, and Jeevamma Jacob. 2022. “Path Planning of Mobile Robot Using Reinforcement Learning”. Journal of Trends in Computer Science and Smart Technology 4 (3): 153-62. https://doi.org/10.36548/jtcsst.2022.3.004.

Abstract

Modern robots are designed to complement or fully replace humans in complex planning and control tasks such as manipulating objects, assisting experts in various fields, navigating outdoor environments, and exploring uncharted territory. Even for those skilled in robot programming, designing a control scheme for such tasks is typically challenging: each task usually requires a new and distinct controller built from scratch, and the designer must account for the wide range of circumstances the robot might encounter. This kind of manual programming is expensive and time-consuming. It would be far more useful if a robot could learn a task on its own rather than being preprogrammed for every one. In this paper, a method for the path planning of a robot in a known environment is implemented using Q-Learning, which finds an optimal path between a specified start point and end point.
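The full paper is not reproduced on this page, but the Q-Learning approach named in the abstract can be sketched in a few dozen lines. The snippet below is a minimal illustration on a toy grid world; the grid layout, reward values, and hyperparameters are assumptions for the example, not the paper's actual setup:

```python
import random

def train_q_learning(grid, start, goal, episodes=2000,
                     alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a grid map: 0 = free cell, 1 = obstacle."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    q = {}  # (state, action_index) -> Q-value

    def step(state, a):
        r, c = state[0] + actions[a][0], state[1] + actions[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return state, -5.0, False   # blocked move: stay put, penalty
        if (r, c) == goal:
            return (r, c), 100.0, True  # goal reached
        return (r, c), -1.0, False      # small step cost favors short paths

    for _ in range(episodes):
        state, done = start, False
        while not done:
            if rng.random() < epsilon:  # epsilon-greedy exploration
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((state, i), 0.0))
            nxt, reward, done = step(state, a)
            best_next = max(q.get((nxt, i), 0.0) for i in range(4))
            old = q.get((state, a), 0.0)
            # Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def greedy_path(q, start, goal, max_steps=100):
    """Extract a path by greedily following the learned Q-values."""
    path, state = [start], start
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(max_steps):
        if state == goal:
            break
        a = max(range(4), key=lambda i: q.get((state, i), 0.0))
        state = (state[0] + actions[a][0], state[1] + actions[a][1])
        path.append(state)
    return path
```

After training, the greedy policy traces a shortest collision-free route from start to goal; in a real setup the same update rule would run against a robot simulator (e.g. Gazebo under ROS) instead of a hand-coded grid.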

