Improving human activity recognition with neural translator models.

Multiple sensor modalities provide more accurate Human Activity Recognition (HAR) than a single modality, yet a single modality is more convenient and less intrusive. It is advantageous to train a model on all available sensors, but challenging to deploy such a model in an environment with fewer sensors while maintaining reliable performance. We address this challenge with a Neural Translator that generates missing or 'privileged' modalities from the modalities available at deployment, which can then be used to improve HAR. We evaluate the translator with k-NN classifiers on the SelfBACK HAR dataset and achieve performance improvements of up to 4.28% with generated modalities. This suggests that non-intrusive modalities suited for deployment benefit from translators that generate missing modalities at deployment.
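
As a concrete illustration of the idea (a minimal sketch, not the paper's implementation), the code below uses an MLP regressor as a stand-in translator that maps one modality's windowed features to another's, then compares k-NN accuracy with and without the generated modality. The synthetic data, dimensions, and variable names (N_WINDOWS, DIM_WRIST, DIM_THIGH) are illustrative assumptions; the wrist/thigh naming borrows from the accelerometer placements in SelfBACK.

```python
# Sketch of translator-augmented HAR: assumes windowed feature vectors
# for two modalities; all sizes and data below are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_WINDOWS, DIM_WRIST, DIM_THIGH = 500, 60, 60  # placeholder sizes

# Synthetic stand-ins for windowed sensor features and activity labels.
X_wrist = rng.normal(size=(N_WINDOWS, DIM_WRIST))
X_thigh = X_wrist @ rng.normal(size=(DIM_WRIST, DIM_THIGH)) * 0.1
y = rng.integers(0, 6, size=N_WINDOWS)  # e.g. 6 activity classes

train = slice(0, 400)
test = slice(400, None)  # at deployment, only the wrist modality exists

# Train the stand-in "translator": wrist features -> thigh features.
translator = MLPRegressor(hidden_layer_sizes=(128,),
                          max_iter=500, random_state=0)
translator.fit(X_wrist[train], X_thigh[train])

# Generate the missing (privileged) thigh modality for test windows.
X_thigh_gen = translator.predict(X_wrist[test])

# k-NN on the wrist modality alone vs. wrist + generated thigh.
knn_single = KNeighborsClassifier().fit(X_wrist[train], y[train])
knn_multi = KNeighborsClassifier().fit(
    np.hstack([X_wrist[train], X_thigh[train]]), y[train])

acc_single = knn_single.score(X_wrist[test], y[test])
acc_multi = knn_multi.score(np.hstack([X_wrist[test], X_thigh_gen]),
                            y[test])
print(f"wrist only:        {acc_single:.3f}")
print(f"wrist + generated: {acc_multi:.3f}")
```

Concatenating the real and generated feature vectors before k-NN classification mirrors the deployment setting described above, where the privileged modality is absent at test time and must be synthesized from the modality that remains.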