Human sensing, motion tracking, and identification are central to numerous applications such as customer analysis, public safety, smart cities, and surveillance. To enable such capabilities, existing solutions mostly rely on vision-based approaches, e.g., facial recognition, which is widely perceived as too privacy invasive. Other camera-based approaches that rely on body appearance lack long-term re-identification capability, while WiFi-based approaches require the installation and maintenance of multiple units. We propose a novel system, called EyeFi [2], that overcomes these limitations on a standalone device by fusing camera and WiFi data. EyeFi measures WiFi Channel State Information (CSI) on a three-antenna WiFi chipset and estimates the Angle of Arrival (AoA) with a neural network trained using a novel student-teacher model. It then performs cross-modal (WiFi, camera) trajectory matching to identify individuals by the MAC addresses of the incoming WiFi packets. We demonstrate our work using real-world data and show improvements in accuracy and speed over traditional optimization-based methods.
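To illustrate the cross-modal matching step described above, the sketch below pairs WiFi-derived AoA trajectories (keyed by MAC address) with camera-derived trajectories by minimizing total trajectory distance with the Hungarian algorithm. This is a minimal sketch under stated assumptions, not the exact EyeFi formulation: the function names, the use of SciPy, the mean absolute angular distance, and the assumption that camera tracks have already been projected into AoA space and time-aligned are all illustrative choices.

```python
# Illustrative sketch (not the exact EyeFi pipeline): match WiFi AoA trajectories,
# keyed by MAC address, to camera trajectories by minimizing total pairwise cost.
import numpy as np
from scipy.optimize import linear_sum_assignment


def trajectory_distance(wifi_traj: np.ndarray, cam_traj: np.ndarray) -> float:
    """Mean absolute angular difference (degrees) between two time-aligned AoA sequences.

    Assumes both inputs are 1-D arrays sampled on a common time grid; in practice the
    camera trajectory would first be projected into the WiFi AoA space.
    """
    n = min(len(wifi_traj), len(cam_traj))
    return float(np.mean(np.abs(wifi_traj[:n] - cam_traj[:n])))


def match_trajectories(wifi_trajs: dict, cam_trajs: list) -> dict:
    """Assign each MAC address to the camera trajectory giving the lowest total cost."""
    macs = list(wifi_trajs.keys())
    cost = np.array([[trajectory_distance(wifi_trajs[m], c) for c in cam_trajs]
                     for m in macs])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm on the cost matrix
    return {macs[r]: int(c) for r, c in zip(rows, cols)}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cam = [np.linspace(-30, 30, 50), np.linspace(45, 10, 50)]      # two camera tracks (AoA, degrees)
    wifi = {"aa:bb:cc:dd:ee:01": cam[1] + rng.normal(0, 2, 50),    # noisy AoA estimates per MAC
            "aa:bb:cc:dd:ee:02": cam[0] + rng.normal(0, 2, 50)}
    print(match_trajectories(wifi, cam))  # expect ...:01 -> 1, ...:02 -> 0
```

The Hungarian assignment is one common way to resolve the one-to-one matching once per-pair costs are available; the paper reports replacing a traditional optimization-based formulation with a learned AoA estimator, so the cost definition here should be read as a placeholder.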
[1] Sachin Katti et al. SpotFi: Decimeter Level Localization Using WiFi. SIGCOMM, 2015.
[2] Omprakash Gnawali et al. SonicDoor: Scaling Person Identification with Ultrasonic Sensors by Novel Modeling of Shape, Behavior and Walking Patterns. BuildSys@SenSys, 2017.
[3] Fabien Moutarde et al. Person Re-identification in Multi-camera System by Signature Based on Interest Point Descriptors Collected on Short Video Sequences. 2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, 2008.
[4] Sirajum Munir et al. Person Tracking and Identification Using Cameras and Wi-Fi Channel State Information (CSI) from Smartphones: Dataset. DATA@SenSys, 2020.