Research Article

Cue prompt adapting model for relation extraction

Article: 2161478 | Received 03 Aug 2022, Accepted 19 Dec 2022, Published online: 28 Dec 2022

Abstract

Prompt-tuning models output relation types as verbalised type tokens instead of predicting confidence scores for each relation type. However, existing prompt-tuning models cannot perceive the named entities of a relation instance because they normally operate on the raw input alone, which is too weak to encode the contextual features and semantic dependencies of the instance. This study proposes a cue prompt adapting (CPA) model for relation extraction (RE) that encodes contextual features and semantic dependencies by implanting task-relevant cues into the sentence. Additionally, a new transformer architecture is proposed to adapt pre-trained language models (PLMs) to perceive named entities in a relation instance. Finally, in the decoding process, a goal-oriented prompt template is designed to exploit the latent semantic features of a PLM. The proposed model is evaluated on three public corpora: ACE, ReTACRED, and SemEval. It achieves substantial improvements, outperforming existing state-of-the-art models. The experiments indicate that the proposed model effectively learns task-specific contextual features and semantic dependencies in a relation instance.
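To make the decoding idea concrete, the sketch below shows how a cue-implanted sentence and a goal-oriented prompt template might drive a masked PLM to emit a verbalised relation token. Only the use of a RoBERTa-style masked LM follows the authors' footnote; the entity cue markers ([E1], [/E1]), the template wording, and the verbaliser vocabulary are illustrative assumptions, not the paper's actual design.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

# Relation instance with task-relevant cues implanted around the entities
# (the [E1]/[E2] markers are an assumed cue scheme, not the paper's).
sentence = "[E1] Steve Jobs [/E1] co-founded [E2] Apple [/E2] in 1976."

# Goal-oriented prompt: the PLM fills the mask with a verbalised relation
# token rather than producing per-class confidence scores.
prompt = f"{sentence} Steve Jobs is the {tokenizer.mask_token} of Apple."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Hypothetical verbaliser: surface words mapped to relation types.
verbaliser = {"founder": "org:founded_by", "employee": "per:employee_of"}
mask_idx = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()

scores = {}
for word, relation in verbaliser.items():
    # RoBERTa tokenises mid-sentence words with a leading space.
    token_id = tokenizer.encode(" " + word, add_special_tokens=False)[0]
    scores[relation] = logits[0, mask_idx, token_id].item()

print(max(scores, key=scores.get))  # highest-scoring verbalised relation

Restricting the mask prediction to a small verbaliser vocabulary is what lets a generative or masked PLM act as a relation classifier without a task-specific softmax head; the CPA model's contribution, per the abstract, is additionally adapting the encoder so the PLM perceives the marked entities rather than treating the cues as plain tokens.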

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

2 The weights of BART and RoBERTa were downloaded from https://huggingface.co/models.

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [grant numbers 62066008 and 62166007], and the Key Projects of Science and Technology Foundation of Guizhou Province [grant number [2020]1Z055].