Human pose estimation; Machine learning; Clinical gait metrics; Infant pose estimation
Serrano, Miguel M.; Vela, Patricio A. (Electrical and Computer Engineering); Howard, Ayanna M.; Chen, Yu-ping; Yezzi, Anthony J.; Bloch, Matthieu R.
The objective of this work is to present the Robust Articulated Point-set Tracking (RAPTr) system, which synthesizes components from articulated model-based and machine learning methods into a single framework for pose estimation. Purely machine-learning-based pose estimation methods are robust to image artifacts, but they require large annotated datasets. Articulated model-based methods, on the other hand, can emulate an unlimited number of poses while respecting the subject's geometry, yet they are susceptible to local minima because they are sensitive to the artifacts that arise under realistic imaging conditions (e.g., subtle background noise due to shadows or movement). The proposed work describes how the same articulated models used for model fitting can drive dataset generation to produce a representative training set, and how the trained detector's response can be incorporated into the model-fitting strategy to introduce robustness to artifacts and enlarge the solution's region of attraction. Furthermore, the articulated model serves as a generator of shape and moment-based features, and a linear regression model trained on these features predicts the final pose estimate. Where necessary, an intermediate representation is defined so that the two approaches operate on compatible inputs. The proposed solution is applied to articulated pose estimation problems where pose estimation accuracy is the priority.
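To make the final regression stage concrete, the sketch below illustrates one plausible reading of the abstract's "shape and moment-based features plus linear regression" step. It is a minimal, hypothetical example rather than the RAPTr implementation: the feature set (centroid-normalized second-order moments of a silhouette), the ridge regularization, and the names `moment_features` and `LinearPoseRegressor` are all assumptions made for illustration.

```python
import numpy as np


def moment_features(silhouette):
    """Shape/moment descriptors from a binary silhouette mask (illustrative).

    Centroid-normalized second-order moments stand in for the richer
    shape and moment-based features that the articulated model would
    generate in the described system.
    """
    ys, xs = np.nonzero(silhouette)
    m00 = len(xs)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20 = (x ** 2).sum() / m00 ** 2   # normalized central moments
    mu02 = (y ** 2).sum() / m00 ** 2
    mu11 = (x * y).sum() / m00 ** 2
    return np.array([mu20, mu02, mu11, mu20 + mu02])


class LinearPoseRegressor:
    """Ridge-regularized linear map from moment features to pose parameters."""

    def __init__(self, reg=1e-3):
        self.reg = reg
        self.W = None

    def fit(self, features, poses):
        # features: (N, d) feature matrix; poses: (N, p) pose-parameter vectors,
        # e.g. joint angles rendered from the articulated model (synthetic data).
        X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
        A = X.T @ X + self.reg * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ poses)
        return self

    def predict(self, features):
        X = np.hstack([features, np.ones((features.shape[0], 1))])
        return X @ self.W


# Hypothetical usage: train on model-generated silhouettes and their known
# pose parameters, then predict the pose of a new observation.
# F_train = np.vstack([moment_features(s) for s in synthetic_silhouettes])
# regressor = LinearPoseRegressor().fit(F_train, synthetic_poses)
# pose_hat = regressor.predict(moment_features(observed_silhouette)[None, :])
```

This mirrors the abstract's idea that the articulated model both generates the training set and supplies the features on which the final pose estimate is regressed; the specific moment definitions and solver are placeholders.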