Abstract—Path planning for mobile robots is inseparable from effective navigation and collision-free motion. Moreover, dynamic path planning in unknown environments has always been a challenge for mobile robots because of the lack of information about the surrounding environment. This paper proposes a Deep Reinforcement Learning (DRL) based collision-free path planning architecture for mobile robots. While navigating, the mobile robot learns the unknown environment through DRL, and the control parameters predicted by DRL are used as inputs to the robot at the next time step. In addition, the architecture does not require any supervision. The experimental results are compared with well-known approaches and show that the architecture can be successfully applied to solve complex navigation problems in dynamic environments.

Index Terms—deep reinforcement learning, autonomous navigation, collision free, path planning, mobile robot

Cite: Kiwon Yeom, "Deep Reinforcement Learning Based Autonomous Driving with Collision Free for Mobile Robots," International Journal of Mechanical Engineering and Robotics Research, Vol. 11, No. 5, pp. 338-344, May 2022. DOI: 10.18178/ijmerr.11.5.338-344

Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution, and reproduction in any medium, provided that the article is properly cited, the use is non-commercial, and no modifications or adaptations are made.
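The full paper is not reproduced here, so the Python sketch below only illustrates the kind of control loop the abstract describes: a trained DRL policy maps the robot's sensor observation to control parameters, which become the robot's input at the next time step. Every name and shape in the sketch (ToyEnv, PolicyNetwork, navigate, the 24-dimensional observation, the two-dimensional velocity command) is a hypothetical placeholder, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual network architecture, state/action
# definitions, and training procedure are not given in the abstract, so every
# name below is a hypothetical stand-in.
import numpy as np

class ToyEnv:
    """Stand-in environment that returns a fake range-sensor observation each step."""
    def __init__(self, obs_dim=24, seed=0):
        self.obs_dim = obs_dim
        self.rng = np.random.default_rng(seed)

    def reset(self):
        return self.rng.uniform(0.0, 1.0, self.obs_dim)

    def step(self, action):
        # A real simulator would move the robot by `action` and detect collisions.
        obs = self.rng.uniform(0.0, 1.0, self.obs_dim)
        collided = False
        return obs, collided

class PolicyNetwork:
    """Tiny stand-in for a trained DRL policy: observation -> control parameters."""
    def __init__(self, obs_dim, act_dim=2, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, act_dim))

    def predict(self, obs):
        h = np.tanh(obs @ self.w1)
        return np.tanh(h @ self.w2)  # e.g. [linear velocity, angular velocity]

def navigate(env, policy, max_steps=200):
    """Control loop: the policy's predicted control parameters become the
    robot's input at the next time step, as the abstract describes."""
    obs = env.reset()
    for _ in range(max_steps):
        action = policy.predict(obs)
        obs, collided = env.step(action)
        if collided:
            return False
    return True

if __name__ == "__main__":
    env = ToyEnv()
    policy = PolicyNetwork(obs_dim=env.obs_dim)
    print("collision-free run:", navigate(env, policy))
```

In a real deployment the placeholder policy would be replaced by the network trained with the paper's DRL method, and ToyEnv by the robot or simulator providing sensor readings and collision feedback.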