Proceedings
Data-Driven Representation of Soft Deformable Objects Based on Force-Torque Data and 3D Vision Measurements
Tawbe, Bilal
Keywords: deformation; force-torque sensor; Kinect; RGB-D data; neural gas; clustering; mesh simplification; 3D object modeling
DOI: 10.3390/ecsa-3-E006
Subject classification: Social Sciences, Humanities and Arts (General)
Source: MDPI
【 Abstract 】
The realistic representation of deformations is still an active area of research, especially for soft objects whose behavior cannot be simply described in terms of elasticity parameters. Most existing techniques assume that the parameters describing the object's behavior are known a priori based on assumptions about the object material, such as its isotropy or linearity, or that values for these parameters are chosen by manual tuning until the results seem plausible. This is a subjective process and cannot be employed where accuracy is expected. This paper proposes a data-driven, neural-network-based model for implicitly capturing the deformations of a soft object, without requiring any knowledge of the object material. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, are collected over an object while forces are exerted by means of the probing tip of a force-torque sensor. A novel approach that advantageously combines distance-based clustering, stratified sampling, and neural-gas-tuned mesh simplification is then proposed to describe the particularities of the deformation. The representation is denser in the region of the deformation (an average of 97% perceptual similarity with the collected data in the deformed area), while still preserving the overall shape of the object (74% similarity over the entire surface) and using on average only 30% of the number of vertices in the mesh.
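The abstract names a pipeline of distance-based stratified sampling followed by neural-gas vertex placement. The sketch below illustrates that idea under assumed parameters; it is not the authors' implementation, and the contact point, bin count, unit count, and decay schedules (`contact`, `n_bins`, `n_units`, `eps0`/`eps_f`, `lam0`/`lam_f`) are all illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not the paper's code): oversample the
# point cloud near the probed contact, then run classic neural gas so that
# vertex density follows sample density, i.e. is highest in the deformation.
import numpy as np

def stratified_resample(points, contact, n_samples, n_bins=5, seed=0):
    """Bin points by distance to the probed contact point and draw more
    samples from the closest bins, so the deformed region dominates."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(points - contact, axis=1)
    edges = np.quantile(d, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
    bins = np.digitize(d, edges)              # 0 = closest distance band
    weights = 1.0 / (1.0 + bins)              # favor points near the contact
    weights /= weights.sum()
    return points[rng.choice(len(points), n_samples, p=weights)]

def neural_gas(samples, n_units=300, n_iters=20_000,
               eps0=0.5, eps_f=0.005, lam0=10.0, lam_f=0.01, seed=0):
    """Classic neural gas (Martinetz & Schulten): units adapt toward the
    samples with a rank-based, exponentially decaying neighborhood."""
    rng = np.random.default_rng(seed)
    units = samples[rng.choice(len(samples), n_units, replace=False)].copy()
    for t in range(n_iters):
        x = samples[rng.integers(len(samples))]
        frac = t / n_iters
        eps = eps0 * (eps_f / eps0) ** frac    # learning-rate decay
        lam = lam0 * (lam_f / lam0) ** frac    # neighborhood-range decay
        # Rank of each unit by its distance to the current sample.
        ranks = np.argsort(np.argsort(np.linalg.norm(units - x, axis=1)))
        units += eps * np.exp(-ranks / lam)[:, None] * (x - units)
    return units  # candidate vertex positions for the simplified mesh

# Hypothetical usage: `cloud` is an (N, 3) Kinect point cloud, `contact`
# the force-torque probe tip position.
# vertices = neural_gas(stratified_resample(cloud, contact, 5000))
```

Because the training set is biased toward the contact region, the neural-gas units concentrate there, which matches the paper's reported behavior of a mesh that is dense around the deformation yet sparse elsewhere.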
【 License 】
CC BY
【 Preview 】
File | Size | Format | View
---|---|---|---
RO201902024836075ZK.pdf | 836 KB | PDF | download