Abstract
Recently, prompt-based learning has gained popularity across many natural
language processing (NLP) tasks by reformulating them into a cloze-style format
to better align pre-trained language models (PLMs) with downstream tasks.
However, applying this approach to relation classification poses unique
challenges. Specifically, it is difficult to associate the natural language
words that fill the masked token with semantic relation labels
(\textit{e.g.}, \textit{``org:founded\_by''}). To address this challenge, this
paper presents a novel prompt-based learning method, namely LabelPrompt, for
the relation classification task. Motivated by the intuition to ``GIVE MODEL
CHOICES!'', we first define additional tokens to represent relation labels,
regard these tokens as a verbaliser with semantic initialisation, and
explicitly construct them with a prompt template method. Then, to mitigate
inconsistency between predicted relations and given entities, we implement an
entity-aware module with contrastive learning. Finally, we apply an attention
query strategy within the self-attention layer to distinguish prompt tokens
from sequence tokens. Together, these strategies enhance the adaptability of
prompt-based learning, especially when only a small labelled dataset is
available. Comprehensive experiments on benchmark datasets demonstrate the
superiority of our method, particularly in the few-shot scenario.