CML-IoT 2019: The First Workshop on Continual and Multimodal Learning for Internet of Things

The Internet of Things (IoT) generates large-scale, streaming, and multimodal sensing data over time. The statistical properties of these data vary considerably across time and across sensing modalities, and are hard to capture with conventional learning methods. Continual and multimodal learning allows knowledge learned from previous, heterogeneous experiential data to be integrated, adapted, and generalized to new situations. It is therefore an important step toward improving the estimation, utilization, and security of real-world data from IoT devices.

Combining continual learning and multimodal learning on real-world data raises several major challenges: 1) how to accurately match, fuse, and transfer knowledge across multimodal data collected from fast-changing, dynamic physical environments; 2) how to learn accurately despite missing, imbalanced, or noisy data in continual learning under multimodal sensing scenarios; 3) how to effectively combine information from different sensing modalities to improve the understanding of cyber-physical systems (CPS) while preserving privacy and security; and 4) how to build usable systems that handle high-volume streaming multimodal data on mobile devices.

We organize this workshop to bring together researchers from different disciplines to tackle these challenges. The workshop explores the intersection of continual machine learning and multimodal modeling, with applications in the Internet of Things. We welcome work that addresses these issues in different applications and domains, as well as algorithmic and systems approaches that leverage continual learning on multimodal data. We further seek to build a community that systematically handles the streaming multimodal data widely available in real-world ubiquitous computing systems. Preliminary and ongoing work is welcome.