Open access paper

Viewpoint Estimation using Triplet Loss with A Novel Viewpoint-based Input Selection Strategy

Changjian Gu et al.

Published under licence by IOP Publishing Ltd
Citation: Changjian Gu et al 2019 J. Phys.: Conf. Ser. 1207 012009. DOI: 10.1088/1742-6596/1207/1/012009


Abstract

Viewpoint estimation is a fundamental procedure in vision-based robot tasks. A good viewpoint of the camera relative to the objects helps the visual system perform better in both observation and manipulation. Recently, CNN-based algorithms, which can effectively extract discriminative features from images under challenging conditions, have been utilized to handle the viewpoint estimation problem. However, most existing algorithms focus on how to leverage the extracted deep features while neglecting the spatial relationship among images captured from various viewpoints. In this paper, we present a deep metric learning method for solving the viewpoint estimation problem. A triplet loss with a novel viewpoint-based input selection strategy is introduced, which learns more powerful features by incorporating the spatial relationship between viewpoints. Combined with a traditional classification loss, the proposed loss further enhances the discriminative power of the features. To evaluate the performance of our method, we build a dataset containing a large number of images generated from five different texture-less workpieces; the experimental results show the effectiveness of the proposed method.
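To make the idea of a viewpoint-based input selection strategy concrete, the sketch below shows one way a triplet loss could be combined with a classification loss when triplets are chosen by the angular distance between camera viewpoints. This is a minimal illustration in PyTorch, not the authors' implementation: the angle threshold, margin, weighting factor, hard-mining rule, and all function names are assumptions, since the abstract does not specify these details.

```python
# Minimal sketch (not the paper's released code): triplet loss with
# viewpoint-based triplet selection, combined with a classification loss.
# The threshold, margin, and alpha below are illustrative assumptions.
import torch
import torch.nn.functional as F


def viewpoint_triplets(features, viewpoints, pos_thresh_deg=15.0):
    """Select (anchor, positive, negative) indices by viewpoint proximity.

    viewpoints: (N, 3) unit vectors from the object to the camera.
    A positive is a sample whose viewpoint lies within pos_thresh_deg of the
    anchor's; a negative is taken from the remaining, more distant viewpoints.
    """
    with torch.no_grad():
        cos = viewpoints @ viewpoints.t()                      # pairwise viewpoint cosines
        ang = torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))  # pairwise angles in degrees
        dist = torch.cdist(features, features)                 # pairwise feature distances

    anchors, positives, negatives = [], [], []
    n = features.size(0)
    for a in range(n):
        near = ((ang[a] < pos_thresh_deg) & (torch.arange(n) != a)).nonzero().flatten()
        far = (ang[a] >= pos_thresh_deg).nonzero().flatten()
        if len(near) == 0 or len(far) == 0:
            continue
        p = near[dist[a, near].argmax()]   # hardest positive: farthest in feature space
        q = far[dist[a, far].argmin()]     # hardest negative: closest in feature space
        anchors.append(a)
        positives.append(p.item())
        negatives.append(q.item())
    return anchors, positives, negatives


def combined_loss(features, logits, labels, viewpoints, margin=0.2, alpha=1.0):
    """Classification loss plus triplet loss on viewpoint-selected triplets."""
    ce = F.cross_entropy(logits, labels)
    a, p, q = viewpoint_triplets(features, viewpoints)
    if len(a) == 0:
        return ce
    triplet = F.triplet_margin_loss(features[a], features[p], features[q], margin=margin)
    return ce + alpha * triplet
```

In this sketch the triplet term pulls together embeddings of images taken from nearby viewpoints and pushes apart those from distant viewpoints, while the cross-entropy term preserves class (or viewpoint-bin) discriminability, mirroring the combination described in the abstract.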


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
