CVAM-Pose: Conditional Variational Autoencoder for Multi-Object Monocular Pose Estimation

Zhao, Jianyu (ORCID: 0000-0002-1531-8658), Quan, Wei (ORCID: 0000-0003-2099-9520) and Matuszewski, Bogdan (ORCID: 0000-0001-7195-2509) (2024) CVAM-Pose: Conditional Variational Autoencoder for Multi-Object Monocular Pose Estimation. In: 35th British Machine Vision Conference 2024, 25-28 November 2024, Glasgow, Scotland, United Kingdom.

PDF (AAM) - Accepted Version. Available under License Creative Commons Attribution. (2MB)
PDF - Supplemental Material (3MB)

Official URL: https://bmvc2024.org/

Abstract

Estimating the poses of rigid objects is one of the fundamental problems in computer vision, with a range of applications across automation and augmented reality. Most existing approaches adopt a one-network-per-object-class strategy, depend heavily on objects' 3D models and depth data, and employ time-consuming iterative refinement, which can be impractical for some applications. This paper presents a novel approach, CVAM-Pose, for multi-object monocular pose estimation that addresses these limitations. The CVAM-Pose method employs a label-embedded conditional variational autoencoder network to implicitly abstract regularised representations of multiple objects in a single low-dimensional latent space. This autoencoding process uses only images captured by a projective camera and is robust to object occlusion and scene clutter. The classes of objects are one-hot encoded and embedded throughout the network. The proposed label-embedded pose regression strategy interprets the learnt latent space representations using continuous pose representations. Ablation tests and systematic evaluations demonstrate the scalability and efficiency of the CVAM-Pose method for multi-object scenarios. The proposed CVAM-Pose outperforms competing latent space approaches: for example, it is respectively 25% and 20% better than the AAE and Multi-Path methods when evaluated using the AR_VSD metric on the Linemod-Occluded dataset. It also achieves results somewhat comparable to methods reliant on 3D models reported in the BOP challenges.
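The architecture outlined in the abstract (a conditional VAE with one-hot class labels embedded in both the encoder and decoder, plus a regression head that reads pose off the latent code) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of that idea, not the authors' implementation: the layer sizes, the 128-dimensional latent space, the 128x128 input crops, and the 6D-rotation-plus-translation pose output are all assumptions made for the example.

import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Label-conditioned VAE: the one-hot class label is appended to the
    encoder input (as extra channels) and to the latent code fed to the
    decoder, so multiple objects share a single latent space."""
    def __init__(self, num_classes: int, latent_dim: int = 128):
        super().__init__()
        self.num_classes = num_classes
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + num_classes, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 128 * 16 * 16  # assumes 128x128 input crops
        self.fc_mu = nn.Linear(feat_dim, latent_dim)
        self.fc_logvar = nn.Linear(feat_dim, latent_dim)
        self.fc_dec = nn.Linear(latent_dim + num_classes, feat_dim)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, labels):
        onehot = nn.functional.one_hot(labels, self.num_classes).float()
        # Tile the one-hot label over the spatial grid and append as channels.
        lab_map = onehot[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        h = self.encoder(torch.cat([x, lab_map], dim=1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        recon = self.decoder(self.fc_dec(torch.cat([z, onehot], dim=1)))
        return recon, mu, logvar

class LatentPoseRegressor(nn.Module):
    """Hypothetical pose head: maps the latent code, again conditioned on the
    one-hot label, to a continuous pose representation (here a 6D rotation
    parameterisation plus a 3D translation, assumed for illustration)."""
    def __init__(self, num_classes: int, latent_dim: int = 128):
        super().__init__()
        self.num_classes = num_classes
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, 6 + 3),
        )

    def forward(self, z, labels):
        onehot = nn.functional.one_hot(labels, self.num_classes).float()
        return self.mlp(torch.cat([z, onehot], dim=1))

# Example usage with 8 object classes and a batch of 4 crops.
model = ConditionalVAE(num_classes=8)
pose_head = LatentPoseRegressor(num_classes=8)
x = torch.rand(4, 3, 128, 128)
labels = torch.randint(0, 8, (4,))
recon, mu, logvar = model(x, labels)
pose = pose_head(mu, labels)  # regress pose from the latent mean

In a setup like this, the VAE would typically be trained with a reconstruction loss plus a KL term, and the pose head fitted on the resulting latent codes; how CVAM-Pose actually combines these objectives is detailed in the paper itself.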

