Exploring Latent Cross-Channel Embedding for Accurate 3D Human Pose Reconstruction in a Diffusion Framework
Junkun Jiang (1) and Jie Chen* (1)
(1) Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
* Corresponding author
Code | Paper (preprint) | IEEE SigPort
Abstract
Monocular 3D human pose estimation poses significant challenges due to the inherent depth ambiguity that arises during the 2D-to-3D reprojection process. Conventional approaches that rely on estimating an over-fitted projection matrix struggle to resolve this ambiguity and often produce noisy outputs. Recent advances in diffusion models have shown promise in incorporating structural priors to address reprojection ambiguities. However, there is still ample room for improvement, as these methods often overlook the correlation between 2D and 3D joint-level features. In this study, we propose a novel cross-channel embedding framework that fully explores the correlation between the joint-level features of 3D coordinates and their 2D projections. In addition, we introduce a context guidance mechanism to facilitate the propagation of joint graph attention across latent channels during the iterative diffusion process. To evaluate the effectiveness of the proposed method, we conduct experiments on two benchmark datasets, Human3.6M and MPI-INF-3DHP. Our results demonstrate a significant improvement in reconstruction accuracy over state-of-the-art methods. The code for our method will be made available online for further reference.
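To make the cross-channel embedding idea concrete, here is a minimal PyTorch sketch of how joint-level 2D and 3D features could be correlated via cross-attention. The module name, feature dimensions, and layer choices are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class CrossChannelEmbedding(nn.Module):
    """Illustrative sketch: correlate joint-level 2D projection features
    with 3D coordinate features via cross-attention. All dimensions and
    layer choices are assumptions, not the paper's exact architecture."""
    def __init__(self, num_joints=17, dim=64, heads=4):
        super().__init__()
        self.embed_2d = nn.Linear(2, dim)   # per-joint 2D projection features
        self.embed_3d = nn.Linear(3, dim)   # per-joint 3D coordinate features
        # 3D joint tokens (queries) attend to 2D joint tokens (keys/values)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 3)       # regress refined 3D offsets

    def forward(self, pose_2d, pose_3d):
        # pose_2d: (B, J, 2), pose_3d: (B, J, 3)
        f2d = self.embed_2d(pose_2d)
        f3d = self.embed_3d(pose_3d)
        fused, _ = self.cross_attn(query=f3d, key=f2d, value=f2d)
        return pose_3d + self.head(fused)   # refined 3D pose estimate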
Presentation
Pipeline
System diagram of the proposed framework during inference. The distributions of the 2D and initial 3D pose predictions are fitted with a Gaussian Mixture Model, from which $h_K = \{\mathbf{p}_K, \mathbf{d}_K\}$ is sampled and passed through $K$ iterations of the reverse diffusion process until the high-quality 3D pose $\hat{\mathbf{d}}_0$ is predicted.
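The following is a minimal sketch of this inference loop, assuming a generic denoiser interface. The GMM fitting granularity, the sampling count, and the step schedule are assumptions for illustration rather than the paper's exact procedure.

import torch
from sklearn.mixture import GaussianMixture

def fit_gmm_and_sample(pose_2d, pose_3d_init, n_components=5, n_samples=1):
    """Fit a GMM to concatenated 2D and initial 3D pose predictions,
    then draw h_K = {p_K, d_K} samples (hypothetical granularity)."""
    # pose_2d: (N, J, 2), pose_3d_init: (N, J, 3) candidate predictions
    feats = torch.cat([pose_2d.flatten(1), pose_3d_init.flatten(1)], dim=1)
    gmm = GaussianMixture(n_components=n_components).fit(feats.numpy())
    h_K, _ = gmm.sample(n_samples)
    return torch.as_tensor(h_K, dtype=torch.float32)

@torch.no_grad()
def reverse_diffusion(denoiser, h_K, K=50):
    """Run K reverse diffusion iterations, refining the sampled pose
    hypothesis h_K; `denoiser` is an assumed callable interface."""
    h_k = h_K
    for k in reversed(range(K)):
        # The denoiser predicts the less-noisy state at step k-1
        h_k = denoiser(h_k, timestep=k)
    return h_k  # the 3D part of h_0 yields the final pose \hat{d}_0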
Qualitative results
Citation
@inproceedings{jiang2024exploring,
  title     = {Exploring Latent Cross-Channel Embedding for Accurate 3D Human Pose Reconstruction in a Diffusion Framework},
  author    = {Jiang, Junkun and Chen, Jie},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2024}
}