Journal Article Details
IEEE Access
Graph-Based Hand-Object Meshes and Poses Reconstruction With Multi-Modal Input
Murad Almadani [1]  Didier Stricker [1]  Jameel Malik [2]  Ahmed Elhayek [3]
[1] Augmented Vision Group, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
[2] School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan
[3] University of Prince Mugrin, Medina, Saudi Arabia
Keywords: hand pose estimation; hand shape estimation; hand-object interaction; graph convolution; machine learning
DOI: 10.1109/ACCESS.2021.3117473
Source: DOAJ
【 Abstract 】

Estimating hand-object meshes and poses is a challenging computer vision problem with many practical applications. In this paper, we introduce a simple yet efficient hand-object reconstruction algorithm. To this end, we exploit the fact that both the poses and the meshes are graph-based representations of the hand-object with different levels of detail. This allows us to take advantage of powerful Graph Convolutional Networks (GCNs) to build a coarse-to-fine, graph-based hand-object reconstruction algorithm. We start by estimating a coarse graph that represents the 2D hand-object poses. Then, more detail (e.g., the third dimension and mesh vertices) is gradually added to the graph until it represents the dense 3D hand-object meshes. This paper also explores the problem of representing the RGBD input in different modalities (e.g., voxelized RGBD). Hence, we adopt a multi-modal representation of the input by combining a 3D representation (i.e., voxelized RGBD) and a 2D representation (i.e., RGB only). We include extensive experimental evaluations that demonstrate the ability of our simple algorithm to achieve state-of-the-art accuracy on the most challenging datasets (i.e., HO-3D and FPHAB).
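The coarse-to-fine refinement described in the abstract (a sparse pose graph that is progressively lifted to a denser 3D mesh graph and refined with graph convolutions) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the layer sizes, the fixed upsampling matrix, and the placeholder adjacency below are illustrative assumptions only.

```python
# Minimal sketch (not the paper's released code) of one coarse-to-fine
# graph refinement stage: coarse per-joint features are lifted onto a
# denser graph (e.g. mesh vertices) and refined with graph convolutions.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Basic graph convolution: aggregate neighbours via a normalized
    adjacency matrix, then apply a shared linear transform."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (batch, num_nodes, in_dim), adj: (num_nodes, num_nodes)
        return torch.relu(self.linear(adj @ x))


class CoarseToFineStage(nn.Module):
    """One refinement stage: lift node features from a coarse graph to a
    finer one, refine them with GCNs, and regress 3D coordinates."""

    def __init__(self, in_dim, hid_dim, upsample):
        super().__init__()
        # upsample: fixed (fine_nodes, coarse_nodes) lifting matrix
        # (an assumption; the paper may define or learn this differently).
        self.register_buffer("upsample", upsample)
        self.gcn1 = GraphConv(in_dim, hid_dim)
        self.gcn2 = GraphConv(hid_dim, hid_dim)
        self.to_coords = nn.Linear(hid_dim, 3)  # per-node 3D position

    def forward(self, x, adj_fine):
        x = self.upsample @ x        # coarse nodes -> fine nodes
        x = self.gcn1(x, adj_fine)
        x = self.gcn2(x, adj_fine)
        return self.to_coords(x)


if __name__ == "__main__":
    batch, n_joints, n_verts, feat = 2, 21, 778, 64
    coarse = torch.randn(batch, n_joints, feat)   # per-joint features
    adj_fine = torch.eye(n_verts)                 # placeholder mesh adjacency
    upsample = torch.rand(n_verts, n_joints)      # placeholder lifting matrix
    stage = CoarseToFineStage(feat, 32, upsample)
    verts = stage(coarse, adj_fine)               # (2, 778, 3) vertex positions
    print(verts.shape)
```

In practice the fine-graph adjacency would come from a hand/object mesh template, and the joint-to-vertex lifting could be learned rather than fixed; both are placeholders here.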

【 License 】

Unknown   
