A Novel Human Pose Estimation Method with Bayesian Network
Kuang-You Cheng (鄭匡佑)
June 2009
Fu Jen Catholic University --- Intelligent Systems Laboratory --- Cloud Vision Group
Overview ‧Human pose estimation is important for the development of behavior recognition, human-robot interaction, and visual surveillance. Markerless human pose estimation provides non-intrusive motion capture without restricting movement, but it is highly challenging because of the large variations in motion and clothing. We propose a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. A directed probabilistic graphical model estimates the joint positions using a devised annealing Gibbs sampling inference method. Experiments are conducted on the HumanEva dataset to show the effectiveness of the proposed method. Subjects in HumanEva are not constrained by special clothing or markers, and the experimental data are image sequences of walking motion around a region with a large range of pose variation. Experimental results show that the proposed method can estimate human pose from monocular images efficiently.
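As a rough illustration of the kind of directed graphical model referred to above, the sketch below (not the authors' code) represents the pose as a kinematic tree in which each joint depends on its parent, so the pose prior factorises as p(x) = p(x_root) · ∏_j p(x_j | x_parent(j)). The joint names, limb lengths, and Gaussian conditionals are illustrative assumptions only.

```python
import numpy as np

# Hypothetical kinematic tree: each joint's parent (None marks the root).
PARENT = {
    "torso": None, "head": "torso",
    "l_shoulder": "torso", "l_elbow": "l_shoulder", "l_wrist": "l_elbow",
    "r_shoulder": "torso", "r_elbow": "r_shoulder", "r_wrist": "r_elbow",
    "l_hip": "torso", "l_knee": "l_hip", "l_ankle": "l_knee",
    "r_hip": "torso", "r_knee": "r_hip", "r_ankle": "r_knee",
}
# Assumed mean limb lengths (metres) and a shared standard deviation.
LIMB_LEN = {j: 0.3 for j, p in PARENT.items() if p is not None}
LIMB_STD = 0.05

def log_prior(pose):
    """Log of the factorised pose prior; `pose` maps joint name -> 3D numpy position."""
    lp = 0.0
    for joint, parent in PARENT.items():
        if parent is None:
            continue  # flat prior on the root joint
        length = np.linalg.norm(pose[joint] - pose[parent])
        lp += -0.5 * ((length - LIMB_LEN[joint]) / LIMB_STD) ** 2
    return lp
```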
Method ‧Here we elucidate the proposed Bayesian network for pose estimation. A Bayesian network, which combines probability theory with a graphical model, is widely used for human pose estimation because it can achieve good estimation performance with limited information. The proposed Bayesian network is a belief propagation network that uses an annealing Gibbs sampling algorithm to estimate 3D human joint positions. Reconstructing a 3D human model from monocular 2D images is difficult because depth information is missing. Moreover, human motion has many degrees of freedom and often causes self-occlusion and unpredictability, which forces monocular 3D motion capture systems to approximate body parts that are not visible to the sensor. The proposed framework therefore infers the 3D human pose from the 2D human pose: 2D poses are first estimated from image observations, and these 2D results then serve as the observations for the 3D pose estimation procedure. We infer 3D from 2D for two reasons. First, a 2D human pose carries more model information than a human silhouette. Second, the 2D estimation results are low-complexity features that reduce the computational complexity of 3D human pose estimation.
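The following is a minimal, self-contained sketch of how an annealed Gibbs-style sweep can lift 2D joint estimates to 3D, in the spirit of the procedure described above but not the paper's implementation: each joint is revisited in turn, a handful of perturbed 3D candidates are scored under an annealed posterior combining a projection likelihood with a limb-length prior, and one candidate is sampled. The orthographic camera, noise scales, candidate count, and annealing schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x3d):
    """Assumed orthographic camera: drop the depth (z) coordinate."""
    return x3d[:2]

def log_likelihood(x3d, obs2d, sigma=0.05):
    """How well a 3D joint position explains its 2D observation (same units)."""
    d = project(x3d) - obs2d
    return -0.5 * np.dot(d, d) / sigma ** 2

def anneal_gibbs(obs2d, parent, limb_len, sweeps=50, n_cand=20, step=0.1):
    """obs2d: joint -> 2D estimate; parent/limb_len define the kinematic tree."""
    pose = {j: np.append(obs2d[j], 0.0) for j in obs2d}    # crude initialisation at zero depth
    for s in range(sweeps):
        T = max(1.0 - s / sweeps, 0.05)                    # annealing temperature
        for j in pose:
            # Candidate 3D positions around the current estimate.
            cands = pose[j] + rng.normal(scale=step, size=(n_cand, 3))
            logp = np.empty(n_cand)
            for k, c in enumerate(cands):
                lp = log_likelihood(c, obs2d[j])
                if parent[j] is not None:                  # limb-length (skeleton) prior
                    length = np.linalg.norm(c - pose[parent[j]])
                    lp += -0.5 * ((length - limb_len[j]) / 0.05) ** 2
                logp[k] = lp / T                           # posterior sharpens as T decreases
            w = np.exp(logp - logp.max())
            pose[j] = cands[rng.choice(n_cand, p=w / w.sum())]
    return pose
```

Using the hypothetical kinematic tree from the earlier sketch, `anneal_gibbs(obs2d, PARENT, LIMB_LEN)` would return a 3D position for every joint that has a 2D estimate.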
Conclusions ‧The proposed method tracks 2D and 3D human poses from markerless monocular images, in which joint positions can be observed more clearly. It achieves an average joint node error of less than 20 cm on the HumanEva I database. We overcome three problems in markerless human pose tracking: self-occlusion, clothing limitation, and highly free motion without spatio-temporal features. Our proposal could support human pose reconstruction in a specified region and human behavior analysis for HRI, HCI, and surveillance.
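For reference, an average joint node error of this kind is commonly computed as the mean Euclidean distance between estimated and ground-truth 3D joint positions over all joints and frames. The sketch below assumes arrays in centimetres and illustrates that metric only; it is not the paper's evaluation code.

```python
import numpy as np

def mean_joint_error(pred, gt):
    """pred, gt: arrays of shape (n_frames, n_joints, 3), in centimetres."""
    per_joint = np.linalg.norm(pred - gt, axis=-1)   # Euclidean error per joint, per frame
    return per_joint.mean()                          # averaged over frames and joints
```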