Abstract
In this paper, we propose FrameBERT, a RoBERTa-based model that explicitly
learns and incorporates FrameNet Embeddings for concept-level metaphor
detection. FrameBERT not only achieves performance better than or comparable
to the state of the art, but is also more explainable and interpretable than
existing models, owing to its ability to account for external knowledge from
FrameNet.