Research Article

Toward Prompt-Enhanced Sentiment Analysis with Mutual Describable Information Between Aspects

Article: 2186432 | Received 29 Dec 2022, Accepted 27 Feb 2023, Published online: 15 Mar 2023
