Smartphone sensing and inference of human behavior and context
The smartphones we carry every day represent the first truly context-aware mobile devices. By embedding specialized sensors and pushing AI into phones, it is now possible to continuously sense and infer people's physical activities, social interactions, and surrounding context. Ultimately, the phone will be able to monitor our movements, predict our needs, offer suggestions, and even understand our moods; as a result, it will operate far more autonomously.
However, many technical barriers remain before this vision can be realized. First, performing continuous sensing and robust inference in the real world is challenging because of uncertainty in the phone's mobility and context. Second, in many cases, the classification models that turn raw sensor data into inferences do not scale well, because activities and daily routines vary widely across users and environments. Finally, to enable novel classes of mobile applications, new system designs and inference methods are needed to reason about more subtle and complex user states.
This thesis makes three key contributions to smartphone sensing and inference research by proposing new sensing paradigms, inference algorithms, system designs, and prototype applications. We first present Jigsaw, a robust motion-based physical activity classification and flexible location tracking system for smartphones. Jigsaw takes user interactions and phone context into account: it performs automatic calibration of the accelerometer and delivers activity classification that is robust to the phone's orientation and body placement (e.g., in a pocket or bag). Jigsaw's mobility-aware location tracking adaptively balances localization accuracy against battery consumption across individual users and devices.
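To make the orientation-robustness point concrete, here is a minimal sketch of the kind of orientation-independent accelerometer features such a classifier can use; the specific features and the decision-tree classifier are illustrative assumptions, not Jigsaw's actual implementation.

    # A minimal sketch of orientation-independent feature extraction for
    # activity classification (illustrative; not Jigsaw's actual pipeline).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def extract_features(window):
        """One window of raw 3-axis accelerometer samples, shape (n, 3)."""
        # The magnitude of the acceleration vector does not depend on how
        # the phone is oriented inside a pocket or bag.
        mag = np.linalg.norm(window, axis=1)
        return np.array([
            mag.mean(),                   # average motion intensity
            mag.std(),                    # variability (walking vs. standing)
            np.abs(np.diff(mag)).mean(),  # jerkiness of the movement
        ])

    # Given labeled windows, train once and classify new windows regardless
    # of placement: clf.fit(X_train, y_train); clf.predict([extract_features(w)])
    clf = DecisionTreeClassifier(max_depth=5)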
Next, we propose SoundSense, the first smartphone sensing system to leverage the ubiquitous (but underutilized) microphone on the phone. SoundSense listens for and learns the most important sound events in people's everyday lives. User diversity is one of the key hurdles that mobile inference systems must overcome: different people encounter different sounds because of their locations, surroundings, and lifestyles, so it is impractical to train a one-size-fits-all classifier for all users across different acoustic environments. To address this barrier, SoundSense uses an active learning approach that automatically personalizes inference models to individual users, acquiring class labels from users as they carry and interact with their phones. We believe this approach is fundamental to building large-scale smartphone sensing applications and systems.
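As an illustration, the following sketch shows one simple form of such an active learning loop, using uncertainty sampling to decide when to ask the user for a label; the model choice and the confidence threshold are assumptions made for illustration, not SoundSense's published design.

    # A minimal sketch of uncertainty-driven active learning for sound
    # classification (the GaussianNB model and the 0.6 confidence
    # threshold are illustrative assumptions, not SoundSense's design).
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    model = GaussianNB()
    CONFIDENCE = 0.6  # below this posterior, ask the user for a label

    # Seed the model once with a small generic training set, declaring
    # the full label space so partial_fit can update it incrementally:
    # model.partial_fit(X_seed, y_seed, classes=ALL_SOUND_CLASSES)

    def classify_or_query(features, ask_user):
        """Label one acoustic feature vector, querying the user only when
        the model is uncertain, and fold the answer back into the model."""
        probs = model.predict_proba(features.reshape(1, -1))[0]
        if probs.max() >= CONFIDENCE:
            return model.classes_[probs.argmax()]
        label = ask_user(features)  # e.g. a prompt on the phone's screen
        model.partial_fit(features.reshape(1, -1), [label])
        return label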
The final contribution of this dissertation is a new smartphone sensing application for health called StressSense, which detects an individual's stress state by analyzing speech captured by the phone's microphone. Stress is a far more subtle phenomenon to infer than, say, physical activity or context. This dissertation presents the features, modeling, and system design needed for robust stress inference on the phone. The work pioneers unobtrusive and continuous mental health monitoring in everyday life, which we believe is a key missing piece of ubiquitous healthcare research.
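As a rough illustration of speech-based stress features, the sketch below computes simple pitch and energy statistics over voiced frames; autocorrelation-based pitch estimation and this particular feature set are assumptions for illustration, not the dissertation's full StressSense pipeline.

    # A minimal sketch: simple prosodic features from speech frames for
    # stress inference (illustrative; not the full StressSense pipeline).
    import numpy as np

    def pitch_autocorr(frame, sr, fmin=75.0, fmax=400.0):
        """Estimate the fundamental frequency (Hz) of one speech frame
        via autocorrelation; stress tends to shift pitch statistics."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(sr / fmax), int(sr / fmin)  # plausible pitch lags
        lag = lo + np.argmax(corr[lo:hi])
        return sr / lag

    def prosodic_features(frames, sr):
        """Summarize pitch and energy over a window of voiced frames."""
        pitches = np.array([pitch_autocorr(f, sr) for f in frames])
        energies = np.array([np.mean(f ** 2) for f in frames])
        return np.array([pitches.mean(), pitches.std(),
                         energies.mean(), energies.std()])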
Smartphone sensing and inference is a rapidly emerging research field, and one that will have a profound impact on our modern digital lives. The work presented in this dissertation identifies new challenges and provides novel solutions that advance our understanding of this fast-evolving area of research.