Research Article

“It is Luring You to Click on the Link With False Advertising” - Mental Models of Clickbait and Its Impact on User’s Perceptions and Behavior Towards Clickbait Warnings

Received 08 Nov 2023, Accepted 19 Feb 2024, Published online: 08 Mar 2024

