Abstract
Dense word vectors, or 'word embeddings', which encode semantic properties of
words, have become integral to NLP tasks such as Machine Translation (MT),
Question Answering (QA), Word Sense Disambiguation (WSD), and Information
Retrieval (IR). In this paper, we use various existing approaches to create
multiple word embeddings for 14 Indian languages. We place these embeddings for
all 14 languages, viz., Assamese, Bengali, Gujarati, Hindi, Kannada,
Konkani, Malayalam, Marathi, Nepali, Odia, Punjabi, Sanskrit, Tamil, and
Telugu, in a single repository. More recent approaches that cater to
context (BERT, ELMo, etc.) have shown significant improvements, but
require substantial computational resources to generate usable models. We release
pre-trained embeddings generated using both contextual and non-contextual
approaches. We also use MUSE and XLM to train cross-lingual embeddings for all
pairs of the aforementioned languages. To show the efficacy of our embeddings,
we evaluate our embedding models on the XPOS, UPOS, and NER tasks for all these
languages. We release a total of 436 models using 8 different approaches. We
hope they are useful for resource-constrained Indian language NLP. The
title of this paper refers to the famous novel 'A Passage to India' by E.M.
Forster, first published in 1924.