Audio-Kinetic Model for Automatic Dietary Monitoring with Earable Devices
Monitoring dietary intake has profound implications for healthcare and well-being services in everyday life. Over the last decade, extensive research efforts have been made to enable automatic dietary monitoring by leveraging multi-modal sensory data on wearable devices [1, 2]. In this poster, we explore an audio-kinetic model of well-formed multi-sensory earable devices for dietary monitoring. We envision that earable devices are ideal for dietary monitoring by virtue of their placement: they are worn close to a user’s mouth, jaw, and throat, which makes them capable of capturing acoustic events originating from these body parts. Inertial sensors can potentially capture movements of the head and jaw that are often associated with food intake. As such, fusing the inertial and acoustic data carries the potential for accurate detection of food intake-relevant activities. We showcase two primitive activities with our audio-kinetic model: chewing and drinking. These primitives are simple but provide useful contextual cues. For example, analysing food intake behaviour from chewing can provide insights into the development of obesity and eating disorders. Similarly, tracking drinking events is useful for estimating a user’s water intake over a period of time.
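The poster does not spell out the fusion pipeline, but the idea of combining acoustic and inertial windows can be illustrated with a minimal feature-level (early) fusion sketch. The window lengths, sampling rates, feature choices, and classifier below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of audio-kinetic feature fusion for chew/drink detection.
# Assumes synchronised windows of audio (e.g. 16 kHz) and 6-axis IMU data
# (e.g. 100 Hz); all parameter values here are illustrative assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier

def audio_features(audio, sr=16000):
    """Mean log spectral power per frequency bin over the window."""
    _, _, Z = stft(audio, fs=sr, nperseg=512)
    power = np.abs(Z) ** 2
    return np.log1p(power.mean(axis=1))

def imu_features(imu):
    """Simple time-domain statistics per accel/gyro axis (window x 6 array)."""
    return np.concatenate([
        imu.mean(axis=0),
        imu.std(axis=0),
        np.abs(np.diff(imu, axis=0)).mean(axis=0),  # mean absolute jerk
    ])

def fuse(audio_window, imu_window):
    """Feature-level fusion: concatenate per-modality feature vectors."""
    return np.concatenate([audio_features(audio_window), imu_features(imu_window)])

# Hypothetical training data: X_audio (N windows of raw audio), X_imu (N x T x 6),
# y with labels such as {"chew", "drink", "other"}.
# clf = RandomForestClassifier(n_estimators=100).fit(
#     np.stack([fuse(a, m) for a, m in zip(X_audio, X_imu)]), y)
```

Feature-level fusion is only one option; decision-level fusion (training one classifier per modality and combining their outputs) is an equally plausible reading of the audio-kinetic model described above.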
[1] Min Zheng et al. Multimodality sensing for eating recognition. PervasiveHealth, 2016.
[2] Gregory D. Abowd et al. EarBit: Using Wearable Sensors to Detect Eating Episodes in Unconstrained Environments. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2017.