iAMEC, an Intelligent Autonomous Mover for Navigation in Indoor, People-Rich Environments

This paper presents the key sensing and navigation techniques, together with the edge AI computing chip design, for an autonomous mover, iAMEC, tailored to indoor, people-rich environments such as shopping centers. iAMEC is designed to serve as a platform for service robots and features swift maneuverability and collision avoidance during navigation. It is equipped with a smart sensing module consisting of a Lidar, a camera, and an ultrasonic array radar. Camera images are analyzed on the fly by two-stage CNN models, for both object recognition and pedestrian behavior prediction. The ultrasonic array radar detects both the distance and the direction of surrounding objects at short range. The low-cost single-line (1-ray) Lidar performs SLAM as well as longer-range scanning. The fused data from these three sensors are passed to the navigation/control module, which plans the optimal path and steers iAMEC to the destination. Navigation is based on a reinforcement learning model trained in a virtual environment built with the Unity simulation engine. A deep learning accelerator (DLA) chip design and the associated model-to-DLA mapping tool are also developed to enable real-time edge computing for CNN-based image analysis. A four-wheeled autonomous mover prototype has been built and fitted with the developed sensing, navigation, and control modules. The evaluation results indicate preliminary success of iAMEC in navigating a controlled environment.
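The abstract states that navigation is learned with reinforcement learning in a simulated environment before deployment. As a minimal illustration of that idea (not the authors' implementation, which trains in Unity), the sketch below uses tabular Q-learning on a toy grid with obstacles: the agent is trained entirely in simulation, then the learned greedy policy is rolled out to reach the goal. The grid layout, rewards, and hyperparameters are all hypothetical.

```python
# Toy sketch of simulation-trained navigation (assumed setup, not iAMEC's):
# tabular Q-learning on a 5x5 grid; '#' cells are obstacles, 'S' start, 'G' goal.
import random

GRID = [
    "S..#.",
    ".#..#",
    "..#..",
    "#....",
    "...#G",
]
ROWS, COLS = len(GRID), len(GRID[0])
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Simulated transition: bumping a wall or obstacle keeps position and costs -1."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < ROWS and 0 <= nc < COLS) or GRID[nr][nc] == "#":
        return state, -1.0, False
    if GRID[nr][nc] == "G":
        return (nr, nc), 10.0, True
    return (nr, nc), -0.1, False  # small step cost encourages short paths

def train(episodes=3000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning in the simulated grid world."""
    rng = random.Random(seed)
    q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[state][i])
            nxt, reward, done = step(state, a)
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
            if done:
                break
    return q

def greedy_path(q, max_steps=50):
    """Roll out the learned policy greedily from the start cell."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: q[state][i])
        state, _, done = step(state, a)
        path.append(state)
        if done:
            break
    return path
```

After training, `greedy_path(train())` ends at the goal cell `(4, 4)`. The paper's setting differs in scale (continuous state from fused sensors, a Unity physics simulation, and a learned policy network), but the train-in-simulation, deploy-the-policy structure is the same.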
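The ultrasonic array radar is described as providing both range and bearing to nearby objects. A common way an array obtains these two quantities (shown here as a hedged sketch of the standard physics, not a description of iAMEC's hardware) is round-trip time of flight for distance, and time-difference-of-arrival between two receivers for direction:

```python
# Standard ultrasonic ranging/bearing formulas (illustrative; parameter
# values such as the 5 cm receiver baseline are assumptions).
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(tof_s):
    """Range from the round-trip time of flight of an ultrasonic pulse."""
    return SPEED_OF_SOUND * tof_s / 2.0

def arrival_angle(delta_t_s, baseline_m):
    """Bearing (degrees) from the time-difference-of-arrival between
    two receivers separated by baseline_m: sin(theta) = c * dt / d."""
    return math.degrees(math.asin(SPEED_OF_SOUND * delta_t_s / baseline_m))
```

For example, a 10 ms echo corresponds to a target about 1.7 m away, and a time difference putting `c * dt / d` at 0.5 corresponds to a 30-degree bearing; sweeping these estimates across the array yields the short-range distance-and-direction map the abstract refers to.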