IJMERR 2022 Vol.11(6): 373-378
DOI: 10.18178/ijmerr.11.6.373-378

Robotic Path Planning by Q Learning and a Performance Comparison with Classical Path Finding Algorithms

Phalgun Chintala, Rolf Dornberger, and Thomas Hanne
University of Applied Sciences and Arts Northwestern Switzerland, Basel/Olten, Switzerland

Abstract—Q Learning is a form of reinforcement learning for path finding problems that does not require a model of the environment. It allows the agent to explore the given environment, and learning is achieved by maximizing the rewards for the actions it takes. In recent times, Q Learning approaches have proven successful in applications ranging from navigation systems to video games. This paper proposes a Q Learning based method that supports path planning for robots. The paper also discusses the choice of parameter values and suggests optimized parameters for such a method. The performance of popular path finding algorithms such as A* and Dijkstra's algorithm is compared with the Q Learning approach; both classical algorithms outperform Q Learning with respect to computation time and resulting path length.

Index Terms—reinforcement learning, Q learning, robot navigation, path planning, path finding, shortest path
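To illustrate the kind of tabular Q Learning the abstract describes, the sketch below trains an agent to find a path on a small grid world. This is a minimal, self-contained example, not the authors' implementation: the grid size, reward values, and parameters (learning rate, discount factor, exploration rate) are assumptions chosen for illustration.

```python
import random

# Illustrative tabular Q learning on a 4x4 grid (not the paper's exact setup).
# The agent learns to move from a start cell to a goal cell.
ROWS, COLS = 4, 4
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
START, GOAL = (0, 0), (3, 3)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2          # learning rate, discount, exploration

# Q table: one value per (state, action) pair, initialized to zero
Q = {((r, c), a): 0.0
     for r in range(ROWS) for c in range(COLS) for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; return (next_state, reward)."""
    dr, dc = ACTIONS[action]
    r, c = state[0] + dr, state[1] + dc
    if not (0 <= r < ROWS and 0 <= c < COLS):
        return state, -1.0                     # hitting the boundary is penalized
    nxt = (r, c)
    return nxt, (10.0 if nxt == GOAL else -0.1)  # step cost encourages short paths

random.seed(0)
for _ in range(500):                           # training episodes
    s = START
    while s != GOAL:
        # epsilon-greedy action selection
        a = (random.randrange(4) if random.random() < EPSILON
             else max(range(4), key=lambda a: Q[(s, a)]))
        s2, reward = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in range(4))
        # Q learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy rollout of the learned policy yields the resulting path
path, s = [START], START
while s != GOAL and len(path) < 20:
    a = max(range(4), key=lambda a: Q[(s, a)])
    s, _ = step(s, a)
    path.append(s)
```

After training, the greedy rollout reaches the goal; on this grid the shortest path needs 6 moves. Classical algorithms such as A* or Dijkstra would compute the same path directly from the known map, which is why they are faster here, whereas Q Learning needs no model and learns purely from trial-and-error rewards.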

Cite: Phalgun Chintala, Rolf Dornberger, and Thomas Hanne, "Robotic Path Planning by Q Learning and a Performance Comparison with Classical Path Finding Algorithms," International Journal of Mechanical Engineering and Robotics Research, Vol. 11, No. 6, pp. 373-378, June 2022. DOI: 10.18178/ijmerr.11.6.373-378

Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.