Research Article

Understanding the impact and design of AI teammate etiquette

Received 06 Jul 2022, Accepted 02 Mar 2023, Published online: 24 Mar 2023

