Abstract
Rising concerns about the preservation of privacy and anonymity in deep
learning models have motivated research in data-free learning (DFL). For the first
time, we identify that for data-scarce tasks like Sketch-Based Image Retrieval
(SBIR), where the difficulty in acquiring paired photos and hand-drawn sketches
limits data-dependent cross-modal learning algorithms, DFL can prove to be a
much more practical paradigm. We thus propose Data-Free (DF)-SBIR, where,
unlike existing DFL problems, pre-trained, single-modality classification
models must be leveraged to learn a cross-modal metric space for retrieval
without access to any training data. The widespread availability of pre-trained
classification models, along with the difficulty of acquiring paired
photo-sketch datasets for SBIR, justifies the practicality of this setting. We
present a methodology for DF-SBIR, which can leverage knowledge from models
independently trained to perform classification on photos and sketches. We
evaluate our model on the Sketchy, TU-Berlin, and QuickDraw benchmarks,
designing a variety of baselines based on state-of-the-art DFL literature, and
observe that our method surpasses all of them by significant margins. Our
method also achieves mAP scores competitive with data-dependent approaches, all
the while requiring no training data. Our implementation is available at
\url{https://github.com/abhrac/data-free-sbir}.