Abstract
The growing number of free online 3D shape collections has driven research on 3D retrieval. Active debate has, however, continued over (i) what the best input modality is to trigger retrieval, and (ii) the ultimate usage scenario for such retrieval. In this paper, we offer a different perspective towards answering these questions: we study the use of 3D sketches as an input modality and advocate a VR scenario in which retrieval is conducted. The ultimate vision is thus that users can freely retrieve a 3D model by air-doodling in a VR environment. As a first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four contributions. First, we develop a VR utility to collect 3D VR-sketches and conduct retrieval. Second, we collect the first set of 167 3D VR-sketches on two shape categories from ModelNet. Third, we propose a novel approach to generate a synthetic dataset of human-like 3D sketches at different levels of abstraction to train deep networks. Finally, we compare the common multi-view and volumetric approaches, and show that, in contrast to 3D shape retrieval, the volumetric point-based approach exhibits superior performance due to the sparse and abstract nature of 3D VR-sketches. We believe these contributions will collectively serve as enablers for future attempts at this problem, and we will make the VR interface, code, and datasets publicly available to facilitate such research.