XFlag: Explainable Fake News Detection Model on Social Media
Pages 1808-1827 | Received 10 Apr 2021, Accepted 29 Mar 2022, Published online: 20 Apr 2022
