Abstract— One of the biggest challenges in implementing assistive companion robots is navigating around obstacles while remaining visually tethered to a human subject. The task is further complicated when advanced hardware and computation-heavy algorithms, such as Light Detection and Ranging (LiDAR) modules or Simultaneous Localization and Mapping (SLAM), are not readily available. This research aims to validate a robot navigation model that relies on multi-sensor fusion of a depth camera, a proximity sensor array, and an active IR marker tracking system, all built from commercial off-the-shelf (COTS) components. Common indoor robot navigation solutions rely on prior environmental mapping to plot routes beyond obstacles in the immediate vicinity. This model differentiates itself by considering the general direction of the target person and the mid-range depth landscape in addition to the robot's immediate surroundings. To examine its performance, a set of three scenarios was created to emulate the testing conditions of several similar robot navigation studies in the existing literature. The simulation results show that the implemented navigation system can maintain a consistent distance from the target while traversing a route that is shorter and less impeded by obstructions than those of the benchmark studies.