Moving as a Leader: Detecting Emergent Leadership in Small Groups using Body Pose

Detecting leadership, and understanding the behavior underlying it, is an important research topic in social and organizational psychology, and it has also begun to attract attention from the social signal processing community. Visual activity is known to be a useful cue for investigating social interactions, yet previously applied nonverbal features based on head and body activity have not performed well enough to identify emergent leaders (ELs) in small group meetings. Starting from these premises, in this study we propose an effective method that uses nonverbal features based on 2D body pose to represent a person's visual activity. Our results suggest that i) overall, the proposed pose-based nonverbal features perform better than existing visual activity based features, ii) classification results can be improved by applying unsupervised feature learning as a preprocessing step, and iii) the proposed nonverbal features further improve EL identification performance when combined with other types of nonverbal features.
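
The abstract describes the pipeline only at a high level: estimate each participant's 2D body pose from video, derive activity features from the pose tracks, optionally apply unsupervised feature learning, and classify who the emergent leader is. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation: the 18-joint OpenPose-style keypoint layout, the frame-difference motion statistics, the PCA stand-in for the unsupervised feature learner, the SVM classifier, and the synthetic data are all assumptions introduced here for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def pose_activity_features(keypoints):
    """Summarize one person's visual activity from 2D body pose.

    keypoints: array of shape (T, J, 2) -- T video frames, J joints,
    (x, y) image coordinates as produced by a 2D pose estimator
    (an OpenPose-style 18-joint layout is assumed here).
    """
    # Frame-to-frame joint displacement approximates body motion.
    disp = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)  # (T-1, J)
    # Simple per-joint motion statistics pooled over the whole meeting.
    return np.concatenate([disp.mean(axis=0),   # average joint movement
                           disp.std(axis=0),    # movement variability
                           disp.max(axis=0)])   # peak movement


# Hypothetical data: one feature vector per participant and a binary
# label marking whether that participant was rated the emergent leader.
rng = np.random.default_rng(0)
X = np.stack([pose_activity_features(rng.normal(size=(300, 18, 2)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

# Unsupervised feature learning as a preprocessing step (PCA stands in
# for the method used in the paper), followed by a supervised classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```

In a multimodal setting, the pose-based feature vector produced above would be fused with other nonverbal cues (e.g., speaking activity) before classification, which is what point iii) of the abstract refers to.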
