Standing on the Shoulders of Other Researchers - A Position Statement

Abstract—Activity recognition has made significant progress in the past years. We strongly believe, however, that we could make far greater progress if we built more systematically on each other's work. Comparing the activity recognition community with other, more mature communities (e.g., those of computer vision and speech recognition), two key ingredients appear to be missing in ours. First, the more mature communities have established a set of well-defined and accepted research problems, and second, they have a tradition of comparing their algorithms on established and shared benchmark datasets. Establishing both of these ingredients and evolving them over time in a more explicit manner should enable us to advance our field more rapidly.

Index Terms—Activity Recognition, Evaluation, Code and Database Sharing

I. WHERE DOES THE COMMUNITY NEED TO IMPROVE?

In this paper we argue that our community of activity recognition has to improve on two fronts:

1. Research Problems: Develop and evolve well-defined and accepted research questions that we believe are essential to make progress in activity recognition.

2. Evaluate, Analyze, and Share: In order to make progress in activity recognition we have to understand and thoroughly analyze the strengths and weaknesses of different approaches. Therefore we need to a) share datasets and establish benchmarks to enable direct comparison, and b) enable reproducibility of algorithms and results so that others can profit from our work and we can build upon each other's work.

The first front, the definition and maturation of well-defined research problems, seems obvious but is, in our view, one of the weaknesses of our area. Other communities have established such well-defined problems to tackle (take again the example of computer vision, where object class recognition and optical flow estimation exist as challenges). In our community, however, we often take our subjective ideas about activity recognition, motivate why we think this is an important problem, and then record our own, typically non-shared, datasets to evaluate our algorithms. While this is fine at an early stage of a community, we strongly believe that we have to rethink this practice and establish attractive research problems that are relevant to pursue and to which the community consequently dedicates its work. It is important to note that these research problems will have to evolve over time, one reason, though not the only one, being the progress we make on previous problems. Still, such well-defined problems are absolutely essential to enable comparison as well as to analyze and understand our progress.

The second front is equally important and again a weak spot of our field. As already mentioned, most of us evaluate our great new algorithm on a new dataset, which makes it hard to understand the progress that was made. Instead, we should develop (or even enforce) the practice that every new algorithm is compared to previous ones, either on common datasets or using code shared by the authors of previous algorithms. As many of our algorithms originate from machine learning research, it is often inappropriately taken for granted that a trendy algorithm there translates to superior performance in activity recognition. It is also worth noting that it is not enough to simply state performance numbers in such comparative studies; one has to analyze and discuss why the algorithms perform differently. While this is, again, standard practice in other research areas, this type of analysis and scientific knowledge generation is nearly completely absent in our field.
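To make the second front concrete, the sketch below shows one possible form such a comparative evaluation could take; it is not taken from this paper. Several off-the-shelf classifiers are evaluated on the same shared dataset with identical cross-validation folds, and a paired significance test is applied instead of reporting raw averages alone. The function load_shared_benchmark is a hypothetical placeholder for a shared activity recognition dataset; everything else uses standard scikit-learn and SciPy calls.

```python
# A minimal sketch (not from the paper) of comparing several classifiers on one
# shared benchmark with identical cross-validation folds, so that reported
# differences are directly comparable and can be tested for significance.
import numpy as np
from scipy import stats
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier


def load_shared_benchmark():
    """Hypothetical placeholder for a shared activity recognition dataset:
    X is a (samples x features) matrix of sensor features, y the activity labels."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 24))
    y = rng.integers(0, 5, size=600)
    return X, y


X, y = load_shared_benchmark()

# Fix the folds once so every algorithm sees exactly the same train/test splits.
folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "svm_rbf": SVC(kernel="rbf", C=1.0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

per_fold_scores = {}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=folds, scoring="accuracy")
    per_fold_scores[name] = scores
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

# Raw averages are not enough: a paired test over the shared folds indicates
# whether the observed difference between two methods is likely to be real.
t_stat, p_value = stats.ttest_rel(per_fold_scores["random_forest"],
                                  per_fold_scores["svm_rbf"])
print(f"random_forest vs. svm_rbf: t = {t_stat:.2f}, p = {p_value:.3f}")
```

The essential design choice is that the splits are defined once and reused for every method; publishing those splits together with the dataset is what makes numbers from different papers directly comparable.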
II. OUR BEST RECOMMENDATIONS

The discussion of the previous section is based on the following cyclic approach to research. Each cycle comprises four steps:

1) Start with a clear problem definition.
2) Evaluate the state of the art.
3) Synthesize, propose, and implement a (typically novel) problem solution.
4) Analyze the proposed solution on real-world data.

These cycles require both points mentioned in the previous section. Without a set of well-defined problems we can neither start with a clear problem definition nor, equally important, evaluate the state of the art. In current activity recognition research it is often unclear how a particular approach might perform on a chosen problem, because the respective papers do not formulate the problem definition clearly enough. A clear definition, in turn, is essential for developing a problem solution, which is typically synthesized from previous research and often contains novel aspects. These novel aspects again rely on a better understanding of the respective algorithms' strengths and weaknesses. Probably the most important part of the cycle, however, is the analysis of the proposed solution, where most of the novel scientific knowledge is created. In this last step the ...

[Figure: Feasibility studies / Interesting for application / Fundamental research]
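To illustrate what the analysis step (step 4 above) can look like beyond a single overall number, the short sketch below continues the hypothetical benchmark from the previous listing (reusing its X, y, and folds, which are placeholder names) and produces a per-class breakdown and a confusion matrix, the kind of evidence needed to discuss why one approach outperforms another.

```python
# A minimal sketch (continuing the hypothetical benchmark above) of analysing a
# proposed method per class instead of reporting only one overall accuracy.
# Assumes X, y, and folds are defined as in the previous listing.
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.ensemble import RandomForestClassifier

proposed = RandomForestClassifier(n_estimators=100, random_state=42)

# Out-of-fold predictions on the shared splits: every sample is predicted by a
# model that never saw it during training.
y_pred = cross_val_predict(proposed, X, y, cv=folds)

# Per-class precision/recall/F1 reveals which activities drive the overall score.
print(classification_report(y, y_pred))

# The confusion matrix shows which activities are mistaken for one another,
# a starting point for discussing why an algorithm fails where it does.
print(confusion_matrix(y, y_pred))
```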
