AVEC'15 Keynote Talk: From Facial Expression Analysis to Multimodal Mood Analysis

In this talk, I will give an overview of our research into developing multimodal technology that analyses the affective state and, more broadly, the behaviour of humans. Such technology is useful in a number of domains, with healthcare applications, e.g. mental health disorders, being a particular focus for us. Depression and other mood disorders are common and disabling, and their impact on individuals and families is profound. The WHO Global Burden of Disease reports identify depression as the leading cause of disability worldwide. Despite this high prevalence, current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. There currently exist no laboratory-based measures of illness expression, course and recovery, and no objective markers of end-points for interventions in either clinical or research settings. Using a multimodal analysis of facial expressions and movements, body posture, head movements and vocal expressions, we are developing affective sensing technology that supports clinicians in the diagnosis and monitoring of treatment progress. Encouraging results from a recently completed pilot study demonstrate that this approach can achieve over 90% agreement with clinical assessment. I will also discuss the lessons learnt over more than eight years of research on this project, such as measuring spontaneous expressions of affect, subtle expressions, and affect intensity using multimodal approaches. We are currently extending this line of research to other disorders such as anxiety, post-traumatic stress disorder, dementia and autism spectrum disorders. For the latter in particular, a natural progression is to analyse dyadic and group social interactions. At the core of our research is a focus on robust approaches that work in real-world environments.