Speciesism and Preference of Human–Artificial Intelligence Interaction: A Study on Medical Artificial Intelligence

Pages 2925–2937 | Received 25 Aug 2022, Accepted 01 Feb 2023, Published online: 14 Feb 2023

References

  • Alexander, V., Blinder, C., & Zak, P. J. (2018). Why trust an algorithm? Performance, cognition, and neurophysiology. Computers in Human Behavior, 89, 279–288. https://doi.org/10.1016/j.chb.2018.07.026
  • Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 310. https://doi.org/10.1186/s12911-020-01332-6
  • Amann, J., Vetter, D., Blomberg, S. N., Christensen, H. C., Coffee, M., Gerke, S., Gilbert, T. K., Hagendorff, T., Holm, S., Livne, M., Spezzatti, A., Strümke, I., Zicari, R. V., & Madai, V. I. (2022). To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. PLoS Digital Health, 1(2), e0000016. https://doi.org/10.1371/journal.pdig.0000016
  • Amiot, C. E., Sukhanova, K., & Bastian, B. (2020). Social identification with animals: Unpacking our psychological connection with other animals. Journal of Personality and Social Psychology, 118(5), 991–1017. https://doi.org/10.1037/pspi0000199
  • Angerschmid, A., Zhou, J., Theuermann, K., Chen, F., & Holzinger, A. (2022). Fairness and explanation in AI-informed decision making. Machine Learning and Knowledge Extraction, 4(2), 556–579. https://doi.org/10.3390/make4020026
  • Aoki, N. (2021). The importance of the assurance that “humans are still in the decision loop” for public trust in artificial intelligence: Evidence from an online experiment. Computers in Human Behavior, 114, 106572. https://doi.org/10.1016/j.chb.2020.106572
  • Babic, B., Gerke, S., Evgeniou, T., & Cohen, I. G. (2021). Beware explanations from AI in health care. Science, 373(6552), 284–286. https://doi.org/10.1126/science.abg1834
  • Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 1–16 (online). https://doi.org/10.1080/10447318.2022.2138826
  • Belanche, D., Casaló, L. V., Flavián, C., & Schepers, J. (2020). Service robot implementation: A theoretical framework and research agenda. The Service Industries Journal, 40(3–4), 203–225. https://doi.org/10.1080/02642069.2019.1672666
  • Beldad, A. D., & Hegner, S. M. (2018). Expanding the technology acceptance model with the inclusion of trust, social influence, and health valuation to determine the predictors of German users’ willingness to continue using a fitness App: A structural equation modeling approach. International Journal of Human–Computer Interaction, 34(9), 882–893. https://doi.org/10.1080/10447318.2017.1403220
  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
  • Botti, S., Orfali, K., & Iyengar, S. S. (2009). Tragic choices: Autonomy and emotional responses to medical decisions. Journal of Consumer Research, 36(3), 337–352. https://doi.org/10.1086/598969
  • Brislin, R. W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1(3), 185–216. https://doi.org/10.1177/135910457000100301
  • Bruers, S. (2021). Speciesism, arbitrariness and moral illusions. Philosophia, 49(3), 957–975. https://doi.org/10.1007/s11406-020-00282-7
  • Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
  • Castillo, D., Canhoto, A. I., & Said, E. (2021). The dark side of AI-powered service interactions: Exploring the process of co-destruction from the customer perspective. The Service Industries Journal, 41(13–14), 900–925. https://doi.org/10.1080/02642069.2020.1787993
  • Caviola, L., Everett, J. A. C., & Faber, N. S. (2019). The moral standing of animals: Towards a psychology of speciesism. Journal of Personality and Social Psychology, 116(6), 1011–1029. https://doi.org/10.1037/pspp0000182
  • Choi, J. K., & Ji, Y. G. (2015). Investigating the importance of trust on adopting an autonomous vehicle. International Journal of Human–Computer Interaction, 31(10), 692–702. https://doi.org/10.1080/10447318.2015.1070549
  • Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 1–13 (online). https://doi.org/10.1080/10447318.2022.2050543
  • Consumer Technology Association (2021). The use of artificial intelligence in health care: Trustworthiness (Report No. ANSI/CTA-2090).
  • Devaraj, S., Easley, R. F., & Crant, J. M. (2008). Research note—How does personality matter? Relating the five-factor model to technology acceptance and use. Information Systems Research, 19(1), 93–105. https://doi.org/10.1287/isre.1070.0153
  • Dhont, K., Hodson, G., Costello, K., & MacInnis, C. C. (2014). Social dominance orientation connects prejudicial human–human and human–animal relations. Personality and Individual Differences, 61–62, 105–108. https://doi.org/10.1016/j.paid.2013.12.020
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology. General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
  • Dufour, L., Maoret, M., & Montani, F. (2020). Coupling high self‐perceived creativity and successful newcomer adjustment in organizations: The role of supervisor trust and support for authentic self‐expression. Journal of Management Studies, 57(8), 1531–1555. https://doi.org/10.1111/joms.12547
  • Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056
  • Fan, W., Liu, J., Zhu, S., & Pardalos, P. M. (2020). Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Annals of Operations Research, 294(1–2), 567–592. https://doi.org/10.1007/s10479-018-2818-y
  • Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019). Adversarial attacks on medical machine learning. Science, 363(6433), 1287–1289. https://doi.org/10.1126/science.aaw4399
  • Frank, D.-A., Elbæk, C. T., Børsting, C. K., Mitkidis, P., Otterbring, T., & Borau, S. (2021). Drivers and social implications of artificial intelligence adoption in healthcare during the COVID-19 pandemic. PLoS One, 16(11), e0259928. https://doi.org/10.1371/journal.pone.0259928
  • Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519
  • Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607. https://doi.org/10.1016/j.chb.2020.106607
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Grand View Research (2021). Artificial intelligence in healthcare market (Report No. GVR-3-68038-951-7).
  • Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical–statistical controversy. Psychology, Public Policy, and Law, 2(2), 293–323. https://doi.org/10.1037/1076-8971.2.2.293
  • Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human–computer trust scale. Behaviour & Information Technology, 38(10), 1004–1015. https://doi.org/10.1080/0144929X.2019.1656779
  • Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
  • Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2006). Multivariate data analysis (6th ed.). Prentice Hall.
  • Hasani, N., Morris, M. A., Rahmim, A., Summers, R. M., Jones, E., Siegel, E., & Saboury, B. (2022). Trustworthy artificial intelligence in medical imaging. PET Clinics, 17(1), 1–12. https://doi.org/10.1016/j.cpet.2021.09.007
  • Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). The Guilford Press.
  • Hayes, A. F. (2018). Partial, conditional, and moderated moderated mediation: Quantification, inference, and interpretation. Communication Monographs, 85(1), 4–40. https://doi.org/10.1080/03637751.2017.1352100
  • Hayes, A. F., & Rockwood, N. J. (2020). Conditional process analysis: Concepts, computation, and advances in the modeling of the contingencies of mechanisms. American Behavioral Scientist, 64(1), 19–54. https://doi.org/10.1177/0002764219859633
  • He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30–36. https://doi.org/10.1038/s41591-018-0307-0
  • Hegner, S. M., Beldad, A. D., & Brunswick, G. J. (2019). In automatic we trust: Investigating the impact of trust, control, personality characteristics, and extrinsic and intrinsic motivations on the acceptance of autonomous vehicles. International Journal of Human–Computer Interaction, 35(19), 1769–1780. https://doi.org/10.1080/10447318.2019.1572353
  • Holzinger, A. (2021). The next frontier: AI we can really trust. In M. Kamp, I. Koprinska, A. Bibal, T. Bouadi, B. Frenay, L. Galarraga, J. Oramas, & L. Adilova (Eds.), Machine learning and principles and practice of knowledge discovery in databases. ECML PKDD 2021 (pp. 427–440). Springer.
  • Holzinger, A., Dehmer, M., Emmert-Streib, F., Cucchiara, R., Augenstein, I., Ser, J. D., Samek, W., Jurisica, I., & Díaz-Rodríguez, N. (2022). Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence. Information Fusion, 79, 263–278. https://doi.org/10.1016/j.inffus.2021.10.007
  • Jaquet, F. (2019). Is speciesism wrong by definition? Journal of Agricultural and Environmental Ethics, 32(3), 447–458. https://doi.org/10.1007/s10806-019-09784-1
  • Kawakami, K., Amodio, D. M., & Hugenberg, K. (2017). Intergroup perception and cognition: An integrative framework for understanding the causes and consequences of social categorization. In J. M. Olson (Ed.), Advances in experimental social psychology (pp. 1–80). Academic Press.
  • Kim, J., Giroux, M., & Lee, J. C. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing, 38(7), 1140–1155. https://doi.org/10.1002/mar.21498
  • Lai, P. C. (2017). The literature review of technology adoption models and theories for the novelty technology. Journal of Information Systems and Technology Management, 14(1), 21–38. https://doi.org/10.4301/S1807-17752017000100002
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Leung, E., Paolacci, G., & Puntoni, S. (2018). Man versus machine: Resisting automation in identity-based consumer behavior. Journal of Marketing Research, 55(6), 818–831. https://doi.org/10.1177/0022243718818423
  • Lin, Y., Huang, G., Ho, Y., & Lou, M. (2020). Patient willingness to undergo a two‐week free trial of a telemedicine service for coronary artery disease after coronary intervention: A mixed‐methods study. Journal of Nursing Management, 28(2), 407–416. https://doi.org/10.1111/jonm.12942
  • Liu, H., Wang, Y., Fan, W., Liu, X., Li, Y., Jain, S., Liu, Y., Jain, A. K., & Tang, J. (2021). Trustworthy AI: A computational perspective (arXiv:2107.06641). arXiv. http://arxiv.org/abs/2107.06641
  • Liu, L. L., He, Y. M., & Liu, X. D. (2019). Investigation on patients’ cognition and trust in artificial intelligence medicine. Chinese Medical Ethics, 32(8), 986–990. https://doi.org/10.12026/j.issn.1001-8565.2019.08.07
  • Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
  • London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. The Hastings Center Report, 49(1), 15–21. https://doi.org/10.1002/hast.973
  • Lynn, M., & Snyder, C. R. (2002). Uniqueness seeking. In C. R. Snyder & S. J. Lopez (Eds.), Handbook of positive psychology: Part V. Self-based approaches (pp. 395–410). Oxford University Press.
  • Lysaght, T., Lim, H. Y., Xafis, V., & Ngiam, K. Y. (2019). AI-assisted decision-making in healthcare: The application of an ethics framework for big data in health and research. Asian Bioethics Review, 11(3), 299–314. https://doi.org/10.1007/s41649-019-00096-0
  • MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149. https://doi.org/10.1037/1082-989X.1.2.130
  • Maddox, T. M., Rumsfeld, J. S., & Payne, P. R. O. (2019). Questions for artificial intelligence in health care. JAMA, 321(1), 31–32. https://doi.org/10.1001/jama.2018.18932
  • McDonald, R. P., & Ho, M.-H R. (2002). Principles and practice in reporting structural equation analyses. Psychological Methods, 7(1), 64–82. https://doi.org/10.1037/1082-989X.7.1.64
  • Montalan, B., Lelard, T., Godefroy, O., & Mouras, H. (2012). Behavioral investigation of the influence of social categorization on empathy for pain: A minimal group paradigm study. Frontiers in Psychology, 3, 389. https://doi.org/10.3389/fpsyg.2012.00389
  • Morley, J., Machado, C. C. V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 260, 113172. https://doi.org/10.1016/j.socscimed.2020.113172
  • Narayanan, Y. (2018). Cow protectionism and bovine frozen-semen farms in India: Analyzing cruelty, speciesism, and climate change. Society & Animals, 26(1), 13–33. https://doi.org/10.1163/15685306-12341481
  • Nissenbaum, H., & Walker, D. (1998). Will computers dehumanize education? A grounded approach to values at risk. Technology in Society, 20(3), 237–273. https://doi.org/10.1016/S0160-791X(98)00011-6
  • Ostherr, K. (2022). Artificial intelligence and medical humanities. Journal of Medical Humanities, 43, 211–232. https://doi.org/10.1007/s10912-020-09636-4
  • Ostrom, A. L., Fotheringham, D., & Bitner, M. J. (2019). Customer acceptance of AI in service encounters: Understanding antecedents and consequences. In P. Maglio, C. Kieliszewski, J. Spohrer, K. Lyons, L. Patrício, & Y. Sawatani (Eds.), Handbook of service science: Vol. II. Service science: Research and innovations in the service economy (pp. 77–103). Springer.
  • Ploug, T., & Holm, S. (2020). The four dimensions of contestable AI diagnostics—A patient-centric approach to explainable AI. Artificial Intelligence in Medicine, 107, 101901. https://doi.org/10.1016/j.artmed.2020.101901
  • Preacher, K. J., Curran, P. J., & Bauer, D. J. (2006). Computational tools for probing interactions in multiple linear regression, multilevel modeling, and latent curve analysis. Journal of Educational and Behavioral Statistics, 31(4), 437–448. https://doi.org/10.3102/10769986031004437
  • Qi, Q., Tao, F., Hu, T., Anwer, N., Liu, A., Wei, Y., Wang, L., & Nee, A. Y. C. (2021). Enabling technologies and tools for digital twin. Journal of Manufacturing Systems, 58, 3–21. https://doi.org/10.1016/j.jmsy.2019.10.001
  • Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y
  • Roberts, M., Driggs, D., Thorpe, M., Gilbey, J., Yeung, M., Ursprung, S., Aviles-Rivero, A. I., Etmann, C., McCague, C., Beer, L., Weir-McCall, J. R., Teng, Z., Gkrania-Klotsas, E., Ruggiero, A., Korhonen, A., Jefferson, E., Ako, E., Langs, G., Gozaliasl, G., … Schönlieb, C.-B. (2021). Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nature Machine Intelligence, 3(3), 199–217. https://doi.org/10.1038/s42256-021-00307-0
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
  • Sahal, R., Alsamhi, S. H., & Brown, K. N. (2022). Personal digital twin: A close look into the present and a step towards the future of personalised healthcare industry. Sensors, 22(15), 5918. https://doi.org/10.3390/s22155918
  • Schmitt, B. (2020). Speciesism: An obstacle to AI and robot adoption. Marketing Letters, 31(1), 3–6. https://doi.org/10.1007/s11002-019-09499-3
  • Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337–351. https://doi.org/10.1162/pres.16.4.337
  • Shaffer, V. A., Probst, C. A., Merkle, E. C., Arkes, H. R., & Medow, M. A. (2013). Why do patients derogate physicians who use a computer-based diagnostic support system? Medical Decision Making, 33(1), 108–118. https://doi.org/10.1177/0272989X12453501
  • Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541–565. https://doi.org/10.1080/08838151.2020.1843357
  • Silva, A., Schrum, M., Hedlund-Botti, E., Gopalan, N., & Gombolay, M. (2022). Explainable artificial intelligence: Evaluating the objective and subjective impacts of xAI on human–agent interaction. International Journal of Human–Computer Interaction, 1–15 (online). https://doi.org/10.1080/10447318.2022.2101698
  • Şimşek, Ö. F., & Yalınçetin, B. (2010). I feel unique, therefore I am: The development and preliminary validation of the personal sense of uniqueness (PSU) scale. Personality and Individual Differences, 49(6), 576–581. https://doi.org/10.1016/j.paid.2010.05.006
  • Singer, P. (1973). Animal liberation. In R. Garner (Ed.), Animal rights (pp. 7–18). Palgrave Macmillan.
  • Singer, P., & Mason, J. (2007). The ethics of what we eat: Why our food choices matter. Rodale Books.
  • Söllner, M., Hoffmann, A., & Leimeister, J. M. (2016). Why different trust relationships matter for information systems users. European Journal of Information Systems, 25(3), 274–287. https://doi.org/10.1057/ejis.2015.17
  • Thompson, W. R., Reinisch, A. J., Unterberger, M. J., & Schriefl, A. J. (2019). Artificial intelligence-assisted auscultation of heart murmurs: Validation by virtual clinical trial. Pediatric Cardiology, 40(3), 623–629. https://doi.org/10.1007/s00246-018-2036-z
  • Vanman, E. J., & Kappas, A. (2019). “Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass, 13(8), e12489. https://doi.org/10.1111/spc3.12489
  • Wagner, B. (2019). Liable, but not in control? Ensuring meaningful human agency in automated decision-making systems. Policy & Internet, 11(1), 104–122. https://doi.org/10.1002/poi3.198
  • Wang, J., Molina, M. D., & Sundar, S. S. (2020). When expert recommendation contradicts peer opinion: Relative social influence of valence, group identity and artificial intelligence. Computers in Human Behavior, 107, 106278. https://doi.org/10.1016/j.chb.2020.106278
  • Wang, M., Dai, X., & Yao, S. (2011). Development of the Chinese big five personality inventory (CBF-PI) III: Psychometric properties of CBF-PI brief version. Chinese Journal of Clinical Psychology, 19(4), 454–457. https://doi.org/10.16128/j.cnki.1005-3611.2011.04.004
  • Watson, D. (2019). The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines, 29(3), 417–440. https://doi.org/10.1007/s11023-019-09506-6
  • Wiersema, M. F., & Bowen, H. P. (1997). Empirical methods in strategy research: Regression analysis and the use of cross-section versus pooled time-series, cross-section data. In M. Ghertman, J. Obadia, & J. L. Arregle (Eds.), Statistical models for strategic management (pp. 201–219). Springer.
  • Yokoi, R., Eguchi, Y., Fujita, T., & Nakayachi, K. (2021). Artificial intelligence is trusted less than a doctor in medical treatment decisions: Influence of perceived care and value similarity. International Journal of Human–Computer Interaction, 37(10), 981–990. https://doi.org/10.1080/10447318.2020.1861763
  • Yu, K.-H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719–731. https://doi.org/10.1038/s41551-018-0305-z
  • Zhang, J., & Curley, S. P. (2018). Exploring explanation effects on consumers’ trust in online recommender agents. International Journal of Human–Computer Interaction, 34(5), 421–432. https://doi.org/10.1080/10447318.2017.1357904
  • Zhao, X., Lynch, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197–206. https://doi.org/10.1086/651257
  • Zhao, Y., Ni, Q., & Zhou, R. (2018). What factors influence the mobile health service adoption? A meta-analysis and the moderating role of age. International Journal of Information Management, 43, 342–350. https://doi.org/10.1016/j.ijinfomgt.2017.08.006
  • Zhou, T., & Lu, Y. (2011). The effects of personality traits on user acceptance of mobile commerce. International Journal of Human–Computer Interaction, 27(6), 545–561. https://doi.org/10.1080/10447318.2011.555298
  • Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human–Computer Studies, 100, 48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008
