Abstract
Generalised zero-shot learning (GZSL) methods aim to classify previously seen
and unseen visual classes by leveraging the semantic information of those
classes. In the context of GZSL, semantic information is non-visual data, such as text descriptions of both the seen and unseen classes. Previous GZSL methods
have utilised transformations between visual and semantic embedding spaces, as
well as the learning of joint spaces that include both visual and semantic
information. In either case, classification is then performed in a single learned space. We argue that each embedding space contains complementary information for the GZSL problem; by using only a visual, semantic or joint space, some of this information is invariably lost. In this paper, we
demonstrate the advantages of our new GZSL method, which combines classifiers trained on the visual, semantic and joint spaces. Most importantly, this ensemble allows more information from the source domains to be exploited during classification. An additional contribution of our work is the
application of a calibration procedure to each classifier in the ensemble, which mitigates the problem of model selection when combining the classifiers. Lastly, our proposed method achieves state-of-the-art results on
the CUB, AWA1 and AWA2 benchmark data sets and provides competitive performance
on the SUN data set.