We demonstrate multi-mobile-robot navigation based on visible light positioning (VLP). Our experiments show that VLP can accurately locate the robots' positions during navigation.

1. Overview

With the rapid development of robotics, robotic control is promising in both research and commerce, and mobile robots are widely deployed in a variety of environments. In automated warehouses in particular, robots are cheaper and more efficient than human workers for simple, repetitive tasks. In such complex environments, an integrated multi-robot system is more stable than a collection of independent robots, and positioning is a fundamental capability of any multi-mobile-robot system. Compared with other indoor positioning technologies, such as wireless local area network (WLAN), radio frequency identification (RFID), and ultra-wideband (UWB), visible light positioning (VLP) is theoretically highly resistant to electromagnetic interference because the signal reaches the sensor optically [1]. VLP has accordingly been widely adopted in robotic systems: VLP localization has been implemented on the Robot Operating System (ROS) [2], the robustness and accuracy of positioning under continuously changing robot pose have been improved [3], and VLP-based methods have advanced multi-robot collaboration systems (MRCS) [4]. As navigation has become increasingly common in robotic systems in recent years, the demands on navigation quality, and hence on positioning accuracy, have risen. Given these advantages of VLP, we applied image-sensor-based VLP to a multi-robot navigation system to determine the position and movement of each robot with high precision, and we present a high-accuracy multi-robot localization and navigation framework.

2. Innovation

We propose and demonstrate a VLP-based navigation framework. The accuracy and real-time performance of each mobile robot during navigation were observed using an image-based VLP localization method, and our experimental results were verified in various environments. The main contributions of our work are as follows:

1. Based on VLP positioning, we designed a multi-robot positioning and navigation framework that accurately determines the position of each robot.
2. For varied multi-agent environments, we tuned the adaptability of the VLP system and optimized its localization algorithm accordingly.
3. Using the algorithms above, we realized a multi-robot navigation demonstration with high accuracy and stability.

3. Description of Demonstration

For this demonstration, we built an 832 × 480 cm² multi-robot navigation experimental platform. The system consists of two TurtleBot3 robots, each running Ubuntu 16.04 and ROS Kinetic, and an intelligent LED mounted at a height of 220 cm. The structure of the TurtleBot3 robots and the experimental environment are shown in Fig. 1. The LiDAR indicates each robot's location by perceiving the boundary distances of the environment. The rolling-shutter-effect (RSE) camera mounted on each TurtleBot3 is essential to this experiment: it receives the optical information of the LED, from which the robot's position is determined. The LED emits optical signals containing its ID within a region of interest (ROI); the ROI bounds the emission region, which simplifies image processing. The host for the two robots is a laptop running Ubuntu 18.04 and ROS Melodic, connected to the robots via WiFi. Through RVIZ (the 3D visualization tool for ROS), the host not only receives both robots' real-time locations during navigation but also sets navigation goals with the 2D Nav Goal button. The map of the navigation environment was built with gmapping before multi-robot navigation.
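As an illustration of this goal-setting step, a goal equivalent to the one published by RVIZ's 2D Nav Goal button can also be sent programmatically through ROS's standard move_base action interface. The following is a minimal sketch assuming the stock ROS navigation stack; the robot namespaces and goal coordinates are hypothetical, not values from our setup.

```python
#!/usr/bin/env python
# Minimal sketch: send a navigation goal to one robot's move_base action
# server, mirroring RVIZ's "2D Nav Goal" button. The namespaces "tb3_0" /
# "tb3_1" and the goal coordinates are hypothetical, for illustration only.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(ns, x, y):
    client = actionlib.SimpleActionClient(ns + '/move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'   # goals are given in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # identity orientation
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('vlp_nav_goal_demo')
    # Hypothetical goals: one inside, one outside the LED coverage area.
    send_goal('tb3_0', 2.0, 1.5)
    send_goal('tb3_1', 6.5, 3.0)
```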
During navigation, the optical signals emitted in the LED's ROI are first captured by the RSE camera as a grayscale image and converted into a stripe pattern by binarization. The stripes are then decoded to obtain the LED's ID, and the robot's pose is estimated from that ID and the LED's known position at the moment the camera receives the optical signal. Because it captures only the ROI, the RSE camera can also reject ambient light interference, so the system adapts well to different environments.
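A minimal sketch of this decoding chain is given below, assuming a fixed binarization threshold, a constant stripe height per bit, and a hypothetical LED position table; the demonstration's actual thresholding, coding scheme, and framing may differ.

```python
# Sketch of the RSE decoding chain: grayscale ROI -> binarized stripe
# pattern -> bit sequence -> LED ID -> LED world position. The threshold,
# stripe height, and position table are illustrative assumptions.
import cv2
import numpy as np

# Hypothetical lookup table: decoded LED ID -> surveyed world position (cm).
LED_POSITIONS = {0b10110010: (416.0, 240.0, 220.0)}

def decode_led_id(gray_roi, threshold=128, stripe_h=8):
    """Binarize the ROI and read the rolling-shutter stripes as an ID."""
    _, binary = cv2.threshold(gray_roi, threshold, 255, cv2.THRESH_BINARY)
    # The rolling shutter renders the LED's temporal on-off keying as
    # horizontal bright/dark stripes; classify each image row by its mean.
    rows = (binary.mean(axis=1) > 127).astype(int)
    # Sample one bit per stripe (constant stripe height assumed here; a
    # real system derives it from the exposure time and handles framing).
    bits = rows[::stripe_h]
    led_id = 0
    for b in bits:
        led_id = (led_id << 1) | int(b)
    return led_id

def led_world_position(led_id):
    """Map a decoded ID to the LED's known mounting position, if any."""
    return LED_POSITIONS.get(led_id)
```

In practice, the transmitted bit stream would carry a preamble and error checking so that partial stripe patterns or residual ambient-light artifacts in the ROI can be rejected before the ID is accepted.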
To test the positioning accuracy of the VLP-based multi-robot system, we navigated two robots from different initial positions: one inside the LED coverage area and one outside it. We then navigated the robots into and out of the coverage area while observing the localization accuracy. For the first robot, the peak offset angle of the LiDAR-perceived boundary was approximately 5 degrees while it was outside the LED coverage area, but did not exceed 1 degree once it entered the coverage area and received the optical signals. When both robots were inside the coverage area, the peak distance between the boundary perceived by the first robot and that perceived by the second was less than 3 cm; after the second robot left the coverage area and ran straight toward its navigation destination, the peak distance between its perceived boundary and the map boundary increased noticeably. The latency of VLP positioning does not affect navigation efficiency: the robots' maximum speed inside the LED coverage area is nearly the same as outside it. After eliminating errors caused by small environmental changes between the gmapping map-building stage and the experiment, our results therefore show that the VLP framework equips a multi-mobile-robot system with highly stable, precise, and efficient positioning. More of our work on optical camera communication (OCC) can be found in [5]-[9], and more of our work on VLP in [10]-[12].

Fig. 1. Demonstration setup and experimental environment for dual-robot navigation and localization based on VLP.

Demo link: https://www.bilibili.com/video/BV1Df4y1T7u3?spm_id_from=333.999.0.0

4. OFC Relevance

Our demonstration implements both multi-agent high-precision indoor positioning and a VLP framework, two topics of strong current interest in the OFC community. As OFC is the largest conference on optical networking and communication, the proposed design can help spark widespread novel applications of VLP and provides a new approach for microwave- and photonics-based positioning systems.

Acknowledgement

This research was funded by the Research and Development Program in Key Areas of Guangdong Province (2019B010116002), the National Undergraduate Innovation and Entrepreneurship Training Program (202110561007), and the Guangdong Science and Technology Project (2017B010114001).

References

[1] Guan W, Huang L, Wen S, et al. Robot localization and navigation using visible light positioning and SLAM fusion[J]. Journal of Lightwave Technology, 2021.
[2] Huang L, Wen S, Yan Z, et al. Single LED positioning scheme based on angle sensors in robotics[J]. Applied Optics, 2021, 60(21): 6275-6287.
[3] Guan W, Chen S, Wen S, et al. High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning[J]. IEEE Photonics Journal, 2020, 12(2): 1-16.
[4] Yan Z, Guan W, Wen S, et al. Multi-robot cooperative localization based on visible light positioning and odometer[J]. IEEE Transactions on Instrumentation and Measurement, 2021.
[5] Zhou Z, Guan W, Wen S, et al. RSE-based underwater optical camera communication impeded by bubble degradation[C]//Optical Sensors. Optical Society of America, 2021: JTu5A.8.
[6] Zhou Z, Wen S, Guan W. RSE-based optical camera communication in underwater scenery with bubble degradation[C]//Optical Fiber Communication Conference. Optical Society of America, 2021: M2B.2.
[7] Song H, Wen S, Yang C, et al. Universal and effective decoding scheme for visible light positioning based on optical camera communication[J]. Electronics, 2021, 10(16): 1925.
[8] Zhou Z, Wen S, Li Y, et al. Performance enhancement scheme for RSE-based underwater optical camera communication using de-bubble algorithm and binary fringe correction[J]. Electronics, 2021, 10(8): 950.
[9] Xiao Y, Guan W, Wen S, et al. The optical bar code detection method based on optical camera communication using discrete Fourier transform[J]. IEEE Access, 2020, 8: 123238-123252.
[10] Song H, Wen S, Yuan D, et al. Robust LED region-of-interest tracking for visible light positioning with low complexity[J]. Optical Engineering, 2021, 60(5): 053102.
[11] An F, Xu H, Wen S, et al. A tilt visible light positioning system based on double LEDs and angle sensors[J]. Electronics, 2021, 10(16): 1923.
[12] Xu H, An F, Wen S, et al. Three-dimensional indoor visible light positioning with a tilt receiver and a high efficient LED-ID[J]. Electronics, 2021, 10(11): 1265.