Abstract
The growing collections of free online 3D shapes have driven research on 3D retrieval.
There has, however, been active debate on (i) the best input modality for triggering retrieval, and (ii) the ultimate usage scenario for such retrieval. In
this paper, we offer a different perspective towards answering these questions: we study the use of 3D sketches as an input modality and advocate a VR scenario in which retrieval is conducted. Thus, the ultimate vision is that
users can freely retrieve a 3D model by air-doodling in a VR environment. As a
first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four
contributions. First, we develop a VR utility to collect 3D VR-sketches and
conduct retrieval. Second, we collect the first set of $167$ 3D VR-sketches on
two shape categories from ModelNet. Third, we propose a novel approach to
generate a synthetic dataset of human-like 3D sketches at different abstraction levels for training deep networks. Finally, we compare the common multi-view and volumetric approaches: we show that, in contrast to 3D shape to 3D shape
retrieval, volumetric point-based approaches exhibit superior performance on 3D
sketch to 3D shape retrieval due to the sparse and abstract nature of 3D
VR-sketches. We believe these contributions will collectively serve as enablers
for future attempts at this problem. The VR interface, code, and datasets are
available at https://tinyurl.com/3DSketch3DV.
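To make the final comparison concrete, the following is a minimal sketch of a point-based retrieval pipeline in the spirit of the abstract's conclusion. It is not the paper's implementation: the PointNet-style encoder `PointEncoder`, its layer sizes, the embedding dimension, and the cosine-similarity `retrieve` helper are all illustrative assumptions.

```python
# Minimal, illustrative sketch (NOT the authors' implementation) of
# point-based sketch-to-shape retrieval. Architecture and hyper-parameters
# are assumptions chosen for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """Per-point MLP followed by max-pooling (PointNet-style)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3). Max-pooling is order-invariant and
        # indifferent to point count, so a sparse VR-sketch and a densely
        # sampled shape can share one embedding space.
        feats = self.mlp(points)                  # (batch, num_points, embed_dim)
        return F.normalize(feats.max(dim=1).values, dim=-1)

def retrieve(sketch: torch.Tensor, gallery: torch.Tensor,
             encoder: PointEncoder) -> int:
    """Index of the gallery shape whose embedding is closest to the sketch's."""
    with torch.no_grad():
        q = encoder(sketch.unsqueeze(0))          # (1, embed_dim)
        g = encoder(gallery)                      # (num_shapes, embed_dim)
        return int((q @ g.T).argmax(dim=1))       # cosine similarity (unit norms)

# Hypothetical usage: a sparse 256-point sketch queried against 100 shapes of
# 1024 points each (random tensors stand in for real ModelNet samples).
encoder = PointEncoder()
sketch = torch.randn(256, 3)
gallery = torch.randn(100, 1024, 3)
print(retrieve(sketch, gallery, encoder))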