Abstract
Prompt-tuning models output relation types as verbalised type tokens instead of predicting confidence scores for each relation type. However, existing prompt-tuning models cannot perceive the named entities of a relation instance because they normally operate on raw input, which is too weak to encode the contextual features and semantic dependencies of the instance. This study proposes a cue prompt adapting (CPA) model for relation extraction (RE) that encodes contextual features and semantic dependencies by implanting task-relevant cues into a sentence. In addition, a new transformer architecture is proposed to adapt pre-trained language models (PLMs) to perceive the named entities in a relation instance. Finally, in the decoding process, a goal-oriented prompt template is designed to exploit the latent semantic features of a PLM. The proposed model is evaluated on three public corpora: ACE, ReTACRED, and SemEval. It outperforms existing state-of-the-art models on all three corpora. The experiments indicate that the proposed model is effective at learning task-specific contextual features and semantic dependencies in a relation instance.
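To make the cue-implanting and verbalised-token ideas concrete, the following is a minimal sketch of a cue-implanted, goal-oriented prompt evaluated with a masked-language-model head. The cue markers ([E1], [/E1], [E2], [/E2]) and the template wording are illustrative assumptions, not the paper's exact design, and a plain `roberta-base` checkpoint stands in for the adapted PLM described above.

```python
# Minimal sketch: implant entity cues into a sentence, then let the PLM
# verbalise the relation type at a mask position instead of scoring
# every relation class. Markers and template are hypothetical.
from transformers import pipeline

# Weights obtained from https://huggingface.co/models, as in the paper's note.
fill_mask = pipeline("fill-mask", model="roberta-base")

sentence = "Steve Jobs founded Apple in 1976."
head, tail = "Steve Jobs", "Apple"

# Implant task-relevant cues so the PLM can perceive the named entities.
cued = sentence.replace(head, f"[E1] {head} [/E1]").replace(tail, f"[E2] {tail} [/E2]")

# Goal-oriented template: the relation type is decoded as the token
# predicted at the <mask> position (RoBERTa's mask token).
prompt = f"{cued} The relation between {head} and {tail} is <mask>."

for candidate in fill_mask(prompt, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 4))
```

In practice the predicted tokens would be mapped back to relation labels through a verbaliser; this sketch only illustrates how a cue-implanted prompt elicits a relation word from the PLM.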
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
2 The weights of BART and RoBERTa were downloaded from https://huggingface.co/models.