Abstract. The increasing demand for reliable indoor navigation systems is leading the research community to investigate various approaches to obtain effective solutions usable on mobile devices. Among the recently proposed strategies, Ultra-Wide Band (UWB) positioning systems are worth mentioning because of their good performance over a wide range of operating conditions. However, this performance can be significantly degraded by large UWB ranging errors, mostly due to non-line-of-sight (NLOS) measurements. This paper considers the integration of UWB with vision to support navigation and mapping applications. In particular, this work compares the positioning results obtained with a simultaneous localization and mapping (SLAM) algorithm, exploiting a standard camera and a Time-of-Flight (ToF) camera, with those obtained with UWB alone, and then with the integration of UWB and vision. For the latter, a deep learning-based recognition approach was developed to detect UWB devices in camera frames. This information is both introduced into the navigation algorithm and used to detect NLOS UWB measurements. Integrating this information yielded a 20% reduction in positioning error in the considered case study.