Markerless 2D kinematic analysis of underwater running: A deep learning approach

Kinematic analysis is often performed with a camera system combined with reflective markers placed over bony landmarks. This method is restrictive (and often expensive) and limits the ability to perform analyses outside of the lab. In the present study, we used a markerless, deep learning-based method to perform 2D kinematic analysis of deep water running, a task that poses several challenges to image processing methods. A single GoPro camera recorded sagittal plane lower limb motion. A deep neural network was trained using data from 17 individuals and then used to predict the locations of markers that approximated joint centres. We found that 300-400 labelled images were sufficient to train the network to position joint markers with an accuracy similar to that of a human labeller (mean difference < 3 pixels, approximately 1 cm). This level of accuracy is sufficient for many 2D applications, such as sports biomechanics, coaching/training, and rehabilitation. The method was sensitive enough to differentiate between closely spaced running cadences (45-85 strides per minute in increments of 5). We also found high test-retest reliability of mean stride data, with between-session correlation coefficients of 0.90-0.97. Our approach represents a low-cost, adaptable solution for kinematic analysis and could easily be modified for use in other movements and settings. With additional cameras, this approach could also be extended to 3D analyses. The method presented here may have broad applications in different fields, for example by enabling markerless motion analysis in rehabilitation, training, or even competition settings.
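
The study trains a deep neural network on a few hundred manually labelled video frames and then uses it to predict joint-centre locations in new footage. The abstract does not name the software used, so the sketch below is only a minimal outline of how such a single-camera, markerless pipeline could be assembled, assuming the open-source DeepLabCut toolbox (a widely used implementation of this kind of transfer-learning pose estimation); the project name, body-part labels, video paths, and iteration count are illustrative assumptions, not the authors' actual configuration.

    # Minimal sketch of a DeepLabCut-style markerless 2D workflow.
    # All names and paths below are illustrative assumptions.
    import deeplabcut

    # Create a project from the sagittal-plane GoPro recordings.
    config_path = deeplabcut.create_new_project(
        "deep-water-running", "lab",
        ["videos/participant01_trial01.mp4"],
        copy_videos=True,
    )

    # In the generated config.yaml, define the tracked points, e.g. markers
    # approximating lower limb joint centres:
    #   bodyparts: [hip, knee, ankle, toe]

    # Extract and manually label frames (the study reports that 300-400
    # labelled images were sufficient).
    deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
    deeplabcut.label_frames(config_path)

    # Build the training dataset, train and evaluate the network, then
    # analyse unseen videos; predicted coordinates are written per frame.
    deeplabcut.create_training_dataset(config_path)
    deeplabcut.train_network(config_path, maxiters=200000)
    deeplabcut.evaluate_network(config_path)
    deeplabcut.analyze_videos(
        config_path,
        ["videos/participant01_trial02.mp4"],
        save_as_csv=True,
    )

The exported per-frame coordinates could then feed downstream analyses such as the stride cadence and test-retest comparisons reported above, for example by peak detection on the vertical ankle trajectory.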
