Research Article

Investigating Opinions on Public Policies in Digital Media: Setting up a Supervised Machine Learning Tool for Stance Classification

