Using multiple contexts to distinguish standing from sitting with a single accelerometer

Activity recognition with a single accelerometer placed on the torso is a fairly common task, but distinguishing standing from sitting in this way is difficult: the torso is oriented the same way during both activities, and the transition between them is hard to classify as sitting down or standing up. We propose a novel approach based on the Multiple Contexts Ensemble (MCE) algorithm, which classifies the activity with an ensemble of classifiers, each of which considers the problem in the context of a single feature. The improvement stems from using multiple viewpoints, based on accelerometer data only, designed specifically to distinguish standing from sitting. This approach improves the accuracy on the two activities by 24 percentage points compared to conventional machine learning.
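The ensemble idea described above can be illustrated with a minimal sketch. This is not the paper's MCE implementation: the per-context learner here is a hypothetical single-feature threshold rule, the toy training windows are invented, and the paper's actual context classifiers and features are assumed to be more elaborate. The sketch only shows the structure of one classifier per feature context, combined by majority vote.

```python
from collections import Counter

def train_context_classifier(samples, labels, feature_idx):
    """Illustrative per-context learner (not the paper's method):
    find the threshold on one feature that best separates the two classes,
    also trying the inverted rule (predict 1 when the feature is below it)."""
    best = None  # (accuracy, threshold, inverted?)
    for t in sorted({s[feature_idx] for s in samples}):
        # accuracy of the rule "predict 1 when feature > t"
        correct = sum((s[feature_idx] > t) == (y == 1)
                      for s, y in zip(samples, labels))
        acc = correct / len(samples)
        for flip in (False, True):
            a = 1 - acc if flip else acc
            if best is None or a > best[0]:
                best = (a, t, flip)
    _, t, flip = best

    def predict(sample):
        pred = 1 if sample[feature_idx] > t else 0
        return 1 - pred if flip else pred
    return predict

def mce_predict(context_classifiers, sample):
    """Majority vote over the per-context classifiers."""
    votes = Counter(clf(sample) for clf in context_classifiers)
    return votes.most_common(1)[0][0]

# Toy, invented feature windows: label 0 = sitting, 1 = standing.
train = [(0.1, 0.2, 0.1), (0.2, 0.1, 0.3),
         (0.8, 0.9, 0.7), (0.9, 0.8, 0.9)]
labels = [0, 0, 1, 1]
classifiers = [train_context_classifier(train, labels, i) for i in range(3)]
```

Each classifier sees the data only through its own feature context, so the ensemble can recover a decision even when no single viewpoint is reliable on its own.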