Research Article

Driving Factors of Generative AI Adoption in New Product Development Teams from a UTAUT Perspective

Received 17 Feb 2024, Accepted 30 Jun 2024, Published online: 12 Jul 2024

References

  • Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9(2), 204–215. https://doi.org/10.1287/isre.9.2.204
  • Ali, I. (2019). Personality traits, individual innovativeness and satisfaction with life. Journal of Innovation & Knowledge, 4(1), 38–46. https://doi.org/10.1016/j.jik.2017.11.002
  • An, X., Chai, C. S., Li, Y., Zhou, Y., & Yang, B. (2023). Modeling students’ perceptions of artificial intelligence assisted language learning. Computer Assisted Language Learning, 1–22. https://doi.org/10.1080/09588221.2023.2246519
  • Apotheker, J., Duranton, S., Lukic, V., de Bellefonds, N., Iyer, S., Bouffault, O., & de Laubier, R. (2024). BCG AI radar: From potential to profit with GenAI. BCG Global. https://www.bcg.com/publications/2024/from-potential-to-profit-with-genai
  • AQSIQ & SAC. (2017). Industrial classification for national economic activities (GB/T 4754–2017).
  • Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
  • Ariff, M. S. M., Yeow, S. M., Zakuan, N., Jusoh, A., & Bahari, A. Z. (2012). The effects of computer self-efficacy and technology acceptance model on behavioral intention in internet banking systems. Procedia – Social and Behavioral Sciences, 57, 448–452. https://doi.org/10.1016/j.sbspro.2012.09.1210
  • Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv. https://doi.org/10.48550/arXiv.1409.0473
  • Barth, S., & de Jong, M. D. T. (2017). The privacy paradox – Investigating discrepancies between expressed privacy concerns and actual online behavior – A systematic literature review. Telematics and Informatics, 34(7), 1038–1058. https://doi.org/10.1016/j.tele.2017.04.013
  • Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71–81. https://doi.org/10.1007/s12369-008-0001-3
  • Batey, M., & Furnham, A. (2006). Creativity, intelligence, and personality: A critical review of the scattered literature. Genetic, Social, and General Psychology Monographs, 132(4), 355–429. https://doi.org/10.3200/MONO.132.4.355-430
  • Bell, J. J., Pescher, C., Tellis, G. J., & Füller, J. (2023). Can AI help in ideation? A theory-based model for idea screening in crowdsourcing contests. Marketing Science, 43(1), 54–72. https://doi.org/10.1287/mksc.2023.1434
  • Bettman, J. R. (1973). Perceived risk and its components: A model and empirical test. Journal of Marketing Research, 10(2), 184–190. https://doi.org/10.2307/3149824
  • Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. https://doi.org/10.1007/s11747-020-00762-y
  • Booz, Allen & Hamilton. (1982). New products management for the 1980s.
  • Bouschery, S. G., Blazevic, V., & Piller, F. T. (2023). Augmenting human innovation teams with artificial intelligence: Exploring transformer‐based language models. Journal of Product Innovation Management, 40(2), 139–153. https://doi.org/10.1111/jpim.12656
  • Cabrera-Sánchez, J.-P., Villarejo-Ramos, Á. F., Liébana-Cabanillas, F., & Shaikh, A. A. (2021). Identifying relevant segments of AI applications adopters – Expanding the UTAUT2’s variables. Telematics and Informatics, 58(2), 101529. https://doi.org/10.1016/j.tele.2020.101529
  • Cai, A., Rick, S. R., Heyman, J. L., Zhang, Y., Filipowicz, A., Hong, M., Klenk, M., & Malone, T. (2023). DesignAID: Using generative AI and semantic diversity for design inspiration. Proceedings of the ACM Collective Intelligence Conference (pp. 1–11). ACM. https://doi.org/10.1145/3582269.3615596
  • Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation, 106(5), 102312. https://doi.org/10.1016/j.technovation.2021.102312
  • Cao, W., & Yu, Z. (2023). Exploring learning outcomes, communication, anxiety, and motivation in learning communities: A systematic review. Humanities and Social Sciences Communications, 10(1), 866. https://doi.org/10.1057/s41599-023-02325-2
  • Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv. http://arxiv.org/abs/2303.04226
  • Celik, V., & Yesilyurt, E. (2013). Attitudes to technology, perceived computer self-efficacy and computer anxiety as predictors of computer supported education. Computers & Education, 60(1), 148–158. https://doi.org/10.1016/j.compedu.2012.06.008
  • Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A quantitative analysis using structural equation modelling. Education and Information Technologies, 25(5), 3443–3463. https://doi.org/10.1007/s10639-020-10159-7
  • Chen, B., Wu, Z., & Zhao, R. (2023). From fiction to fact: The growing role of generative AI in business and finance. Journal of Chinese Economic and Business Studies, 21(4), 471–496. https://doi.org/10.1080/14765284.2023.2245279
  • Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., … Zaremba, W. (2021). Evaluating large language models trained on code. arXiv:2107.03374. https://doi.org/10.48550/arXiv.2107.03374
  • Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59(3), 102940. https://doi.org/10.1016/j.ipm.2022.102940
  • Chuang, S.-C., Lin, F.-M., & Tsai, C.-C. (2015). An exploration of the relationship between Internet self-efficacy and sources of Internet self-efficacy among Taiwanese university students. Computers in Human Behavior, 48, 147–155. https://doi.org/10.1016/j.chb.2015.01.044
  • Chuma, E. L., & Oliveira, G. G. de (2023). Generative AI for business decision-making: A case of ChatGPT. Management Science and Business Decisions, 3(1), 5–11. https://doi.org/10.52812/msbd.63
  • Chung, J. J. Y., Kim, W., Yoo, K. M., Lee, H., Adar, E., & Chang, M. (2022). TaleBrush: Sketching stories with generative pretrained language models. CHI Conference on Human Factors in Computing Systems (pp. 1–19). https://doi.org/10.1145/3491102.3501819
  • Coenen, A., Davis, L., Ippolito, D., Reif, E., & Yuan, A. (2021). Wordcraft: A human-AI collaborative editor for story writing. arXiv:2107.07430. https://doi.org/10.48550/arXiv.2107.07430
  • Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Lawrence Erlbaum Associates.
  • Cooper, R. (2008). Perspective: The Stage-Gate® idea-to-launch process—Update, what’s new, and NexGen systems. Journal of Product Innovation Management, 25(3), 213–232. https://www.semanticscholar.org/paper/Perspective%3A-The-Stage%E2%80%90Gate%C2%AE-Idea%E2%80%90to%E2%80%90Launch-What%27s-Cooper/d5a51046068664b31bcf9a1ee13d79b25eab2d60
  • Cooper, R. G. (1990). Stage-gate systems: A new tool for managing new products. Business Horizons, 33(3), 44–54. https://doi.org/10.1016/0007-6813(90)90040-I
  • Cooper, R. G. (2010). The stage–gate idea to launch system. In J. Sheth & N. Malhotra (Eds.), Wiley international encyclopedia of marketing (1st ed.). Wiley. https://doi.org/10.1002/9781444316568.wiem05014
  • Cooper, R. G., & Kleinschmidt, E. J. (1986). An investigation into the new product process: Steps, deficiencies, and impact. Journal of Product Innovation Management, 3(2), 71–85. https://doi.org/10.1016/0737-6782(86)90030-5
  • Creswell, J. W., & Clark, V. L. P. (2007). Designing and conducting mixed methods research. Sage Publications.
  • Dean, D., Hender, J., Rodgers, T., & Santanen, E. (2006). Identifying quality, novel, and creative ideas: Constructs and scales for idea evaluation. Journal of the Association for Information Systems, 7(10), 646–699. https://doi.org/10.17705/1jais.00106
  • Diederich, S., Lichtenberg, S., Brendel, A., Trang, S. (2019). Promoting sustainable mobility beliefs with persuasive and anthropomorphic design: Insights from an experiment with a conversational agent. International Conference on Interaction Sciences. https://www.semanticscholar.org/paper/Promoting-Sustainable-Mobility-Beliefs-with-and-an-Diederich-Lichtenberg/bd50e86b714159582301f1cc6aea44e8488c6b35
  • Dollinger, S. J. (2012). Openness to experience. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 2522–2524). Springer US. https://doi.org/10.1007/978-1-4419-1428-6_87
  • Eapen, T. T., Finkenstadt, D. J., Folk, J., & Venkataswamy, L. (2023). How generative AI can augment human creativity. Harvard Business Review, July–August 2023. https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity
  • Fakhimi, A., Garry, T., & Biggemann, S. (2023). The effects of anthropomorphised virtual conversational assistants on consumer engagement and trust during service encounters. Australasian Marketing Journal, 31(4), 314–324. https://doi.org/10.1177/14413582231181140
  • Fang, Y.-M. (2023). The role of generative AI in industrial design: Enhancing the design process and learning. International Conference on Innovation, Communication and Engineering (ICICE 2023), Bangkok, Thailand (pp. 135–136). https://doi.org/10.1049/icp.2024.0303
  • Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Qin, B., Liu, T., Jiang, D., & Zhou, M. (2020). CodeBERT: A pre-trained model for programming and natural languages. arXiv:2002.08155. https://doi.org/10.48550/arXiv.2002.08155
  • Ferioli, M., Dekoninck, E., Culley, S., Roussel, B., & Renaud, J. (2010). Understanding the rapid evaluation of innovative ideas in the early stages of design. International Journal of Product Development, 12(1), 67. https://doi.org/10.1504/IJPD.2010.034313
  • Florén, H., & Frishammar, J. (2012). From preliminary ideas to corroborated product definitions: Managing the front end of new product development. California Management Review, 54(4), 20–43. https://doi.org/10.1525/cmr.2012.54.4.20
  • Foroughi, B., Senali, M. G., Iranmanesh, M., Khanfar, A., Ghobakhloo, M., Annamalai, N., & Naghmeh-Abbaspour, B. (2023). Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA. International Journal of Human–Computer Interaction, 1–20. https://doi.org/10.1080/10447318.2023.2226495
  • Gansser, O. A., & Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535
  • García De Blanes Sebastián, M., Sarmiento Guede, J. R., & Antonovica, A. (2022). Application and extension of the UTAUT2 model for determining behavioral intention factors in use of the artificial intelligence virtual assistants. Frontiers in Psychology, 13, 993935. https://doi.org/10.3389/fpsyg.2022.993935
  • Girotra, K., Terwiesch, C., & Ulrich, K. T. (2010). Idea generation and the quality of the best idea. Management Science, 56(4), 591–605. https://doi.org/10.1287/mnsc.1090.1144
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Go, E., & Sundar, S. S. (2019). Humanizing Chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. https://doi.org/10.1016/j.chb.2019.01.020
  • Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236. https://doi.org/10.2307/249689
  • Gosling, S. D., Rentfrow, P. J., & Swann, W. B. (2003). A very brief measure of the Big-Five personality domains. Journal of Research in Personality, 37(6), 504–528. https://doi.org/10.1016/S0092-6566(03)00046-1
  • Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., Zhou, L., Duan, N., Svyatkovskiy, A., Fu, S., Tufano, M., Deng, S. K., Clement, C., Drain, D., Sundaresan, N., Yin, J., Jiang, D., & Zhou, M. (2021). GraphCodeBERT: Pre-training code representations with data flow. arXiv:2009.08366. https://doi.org/10.48550/arXiv.2009.08366
  • Hair, J. F. (2009). Multivariate data analysis (7th ed.). Pearson Prentice Hall. http://www.researchgate.net/publication/311573712_Multivariate_Data_Analysis
  • Hong, J.-W. (2022). I was born to love AI: The influence of social status on AI self-efficacy and intentions to use AI. International Journal of Communication, 16, 172–191. https://ijoc.org/index.php/ijoc/article/view/17728
  • Hsiung, C.-P., León, G. A., Stinson, D., & Chiou, E. K. (2023). Blaming yourself, your partner, or an unexpected event: Attribution biases and trust in a physical coordination task. Human Factors and Ergonomics in Manufacturing & Service Industries, 33(5), 379–394. https://doi.org/10.1002/hfm.20998
  • Huang, L., Yin, Y., & Ong, S. K. (2022). A novel deep generative model based on imaginal thinking for automating design. CIRP Annals, 71(1), 121–124. https://doi.org/10.1016/j.cirp.2022.04.053
  • Hui, Z., Khan, A., Chenglong, Z., & Khan, N. (2023). When service quality is enhanced by human-artificial intelligence interaction: An examination of anthropomorphism, responsiveness from the perspectives of employees and customers. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2023.2266254
  • Hussein, B. A., Pigagaite, G., & Silva, P. P. (2014). Identifying and dealing with complexities in new product and process development projects. Procedia – Social and Behavioral Sciences, 119, 702–710. https://doi.org/10.1016/j.sbspro.2014.03.078
  • Jain, R., Garg, N., & Khera, S. N. (2022). Adoption of AI-enabled tools in social development organizations in India: An extension of UTAUT model. Frontiers in Psychology, 13, 893691. https://doi.org/10.3389/fpsyg.2022.893691
  • Janson, A. (2023). How to leverage anthropomorphism for chatbot service interfaces: The interplay of communication style and personification. Computers in Human Behavior, 149(2), 107954. https://doi.org/10.1016/j.chb.2023.107954
  • Joibi, J., & Eune, J. (2023). Exploring the impact of integrating engineering, product, and affective design semantics on the performance of text-to-image GAI (Generative AI) for drone designs. Korea Society of Design Studies Autumn International Conference (KSDS). Hanyang University.
  • Joinson, A. N., Reips, U.-D., Buchanan, T., & Schofield, C. B. P. (2010). Privacy, trust, and self-disclosure online. Human-Computer Interaction, 25(1), 1–24. https://doi.org/10.1080/07370020903586662
  • Jokisch, M. R., Schmidt, L. I., Doh, M., Marquard, M., & Wahl, H.-W. (2020). The role of internet self-efficacy, innovativeness and technology avoidance in breadth of internet use: Comparing older technology experts and non-experts. Computers in Human Behavior, 111(2), 106408. https://doi.org/10.1016/j.chb.2020.106408
  • Kanbach, D. K., Heiduk, L., Blueher, G., Schreiter, M., & Lahmann, A. (2023). The GenAI is out of the bottle: Generative artificial intelligence from a business model innovation perspective. Review of Managerial Science, 18(4), 1189–1220. https://doi.org/10.1007/s11846-023-00696-z
  • Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2023). Trust in artificial intelligence: Meta-analytic findings. Human Factors, 65(2), 337–359. https://doi.org/10.1177/00187208211013988
  • Kehr, F., Wentzel, D., Kowatsch, T. (2014). Privacy paradox revised: Pre-existing attitudes, psychological ownership, and actual disclosure. International Conference on Interaction Sciences, December 15. https://www.semanticscholar.org/paper/Privacy-Paradox-Revised%3A-Pre-Existing-Attitudes%2C-Kehr-Wentzel/95646c1324bdc6d4258342795b66297992dd8b6e
  • Kim, J. S., Kim, M., & Baek, T. H. (2024). Enhancing user experience with a generative AI Chatbot. International Journal of Human–Computer Interaction, 1–13. https://doi.org/10.1080/10447318.2024.2311971
  • Kim, J., Giroux, M., & Lee, J. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing, 38(7), 1140–1155. https://doi.org/10.1002/mar.21498
  • Kim, S., Zhao, J., Tian, Y., & Chandra, S. (2021). Code prediction by feeding trees to transformers. arXiv:2003.13848. https://doi.org/10.48550/arXiv.2003.13848
  • Klein, M., & Garcia, A. C. (2014). The bag of stars: High-speed idea filtering for open innovation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2387180
  • Kraus, J., Miller, L., Klumpp, M., Babel, F., Scholz, D., Merger, J., & Baumann, M. (2023). On the role of beliefs and trust for the intention to use service robots: An integrated trustworthiness beliefs model for robot acceptance. International Journal of Social Robotics. https://doi.org/10.1007/s12369-022-00952-4
  • Kshetri, N., Dwivedi, Y. K., Davenport, T. H., & Panteli, N. (2023). Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. International Journal of Information Management, 75(6), 102716. https://doi.org/10.1016/j.ijinfomgt.2023.102716
  • Kuo, Y.-F., & Feng, L.-H. (2013). Relationships among community interaction characteristics, perceived benefits, community commitment, and oppositional brand loyalty in online brand communities. International Journal of Information Management, 33(6), 948–962. https://doi.org/10.1016/j.ijinfomgt.2013.08.005
  • Lachaux, M.-A., Roziere, B., Chanussot, L., & Lample, G. (2020). Unsupervised translation of programming languages. arXiv:2006.03511. https://doi.org/10.48550/arXiv.2006.03511
  • Lan, H., Tang, X., Ye, Y., & Zhang, H. (2024). Abstract or concrete? The effects of language style and service context on continuous usage intention for AI voice assistants. Humanities and Social Sciences Communications, 11(1), 99. https://doi.org/10.1057/s41599-024-02600-w
  • Larson, L., & DeChurch, L. (2020). Leading teams in the digital age: Four perspectives on technology and what they mean for leading teams. The Leadership Quarterly, 31(1), 101377. https://doi.org/10.1016/j.leaqua.2019.101377
  • Lee, K. M., & Nass, C. (2003). Designing social presence of social actors in human computer interaction. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 289–296). https://doi.org/10.1145/642611.642662
  • Leenders, R. Th. A. J., van Engelen, J. M. L., & Kratzer, J. (2003). Virtuality, communication, and new product team creativity: A social network perspective. Journal of Engineering and Technology Management, 20(1-2), 69–92. https://doi.org/10.1016/S0923-4748(03)00005-5
  • Li, M., & Suh, A. (2022). Anthropomorphism in AI-enabled technology: A literature review. Electronic Markets, 32(4), 2245–2275. https://doi.org/10.1007/s12525-022-00591-7
  • Li, W. (2024). A study on factors influencing designers’ behavioral intention in using AI-generated content for assisted design: Perceived anxiety, perceived risk, and UTAUT. International Journal of Human–Computer Interaction, 1–14. https://doi.org/10.1080/10447318.2024.2310354
  • Li, X., & Sung, Y. (2021). Anthropomorphism brings us closer: The mediating role of psychological distance in user–AI assistant interactions. Computers in Human Behavior, 118(4), 106680. https://doi.org/10.1016/j.chb.2021.106680
  • Liebrecht, C., Sander, L., & van Hooijdonk, C. (2021). Too informal? How a Chatbot’s communication style affects brand attitude and quality of interaction. In A. Følstad, T. Araujo, S. Papadopoulos, E. L.-C. Law, E. Luger, M. Goodwin, & P. B. Brandtzaeg (Eds.), Chatbot research and design (pp. 16–31). Springer International Publishing. https://doi.org/10.1007/978-3-030-68288-0_2
  • Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790
  • Liu, M., & Zhou, B. (2022). Innovative design of Chinese traditional textile patterns based on conditional generative adversarial network. In M. Rauterberg (Ed.), Culture and computing (Vol. 13324, pp. 234–245). Springer International Publishing. https://doi.org/10.1007/978-3-031-05434-1_15
  • Maaravi, Y., Heller, B., Shoham, Y., Mohar, S., & Deutsch, B. (2021). Ideation in the digital age: Literature review and integrative model for electronic brainstorming. Review of Managerial Science, 15(6), 1431–1464. https://doi.org/10.1007/s11846-020-00400-5
  • Maican, C. I., Sumedrea, S., Tecau, A., Nichifor, E., Chitu, I. B., Lixandroiu, R., & Bratucu, G. (2023). Factors influencing the behavioural intention to use AI-generated images in business: A UTAUT2 perspective with moderators. Journal of Organizational and End User Computing, 35(1), 1–32. https://doi.org/10.4018/JOEUC.330019
  • Meijer, B. (2023). A comparative analysis of human and A.I. feedback on business idea evaluation [Bachelor’s thesis, University of Twente].
  • Mesbah, S., Arous, I., Yang, J., & Bozzon, A. (2023). HybridEval: A human-AI collaborative approach for evaluating design ideas at scale. Proceedings of the ACM Web Conference 2023 (pp. 3837–3848). https://doi.org/10.1145/3543507.3583496
  • Moussawi, S., & Benbunan-Fich, R. (2021). The effect of voice and humour on users’ perceptions of personal intelligent agents. Behaviour & Information Technology, 40(15), 1603–1626. https://doi.org/10.1080/0144929X.2020.1772368
  • Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31(2), 343–364. https://doi.org/10.1007/s12525-020-00411-w
  • Murad, M. A. A., & Martin, T. (2007). Similarity-based estimation for document summarization using fuzzy sets. International Journal of Computer Science and Security, 1(4), 1–12. https://www.cscjournals.org/library/manuscriptinfo.php?mc=IJCSS-20
  • Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45(6), 669–678. https://doi.org/10.1006/ijhc.1996.0073
  • Nov, O., & Ye, C. (2008). Personality and technology acceptance: Personal innovativeness in IT, openness and resistance to change. Proceedings of the 41st Annual Hawaii International Conference on System Sciences HICSS 2008 (pp. 448–448). https://doi.org/10.1109/HICSS.2008.348
  • Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586
  • Organisciak, P., Acar, S., Dumas, D., & Berthiaume, K. (2023). Beyond semantic distance: Automated scoring of divergent thinking greatly improves with large language models. Thinking Skills and Creativity, 49, 101356. https://doi.org/10.1016/j.tsc.2023.101356
  • Osta, A., Kokkinaki, A., & Chedrawi, C. (2022). Online health communities: The impact of AI conversational agents on users. In M. Themistocleous & M. Papadaki (Eds.), Information systems. (Vol. 437, pp. 488–501). Springer International Publishing. https://doi.org/10.1007/978-3-030-95947-0_35
  • Page, A. L. (1993). Assessing new product development practices and performance: Establishing crucial norms. Journal of Product Innovation Management, 10(4), 273–290. https://doi.org/10.1016/0737-6782(93)90071-W
  • Pahl, G., Beitz, W., Feldhusen, J., & Grote, K.-H. (2007). Engineering design: A systematic approach. Springer. https://doi.org/10.1007/978-1-84628-319-2
  • Park, J., & Woo, S. E. (2022). Who likes artificial intelligence? Personality predictors of attitudes toward artificial intelligence. The Journal of Psychology, 156(1), 68–94. https://doi.org/10.1080/00223980.2021.2012109
  • Patterson, F., Kerrin, M., Gatto-Roissard, G. (2009). Characteristics & behaviours of innovative people in organisations. https://www.semanticscholar.org/paper/Characteristics-%26-Behaviours-of-Innovative-People-Dr-Gatto-Roissard/695b72b177326b9da2f01f7f0fc1b7b6cbf1ee75
  • Pelau, C., Dabija, D.-C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855. https://doi.org/10.1016/j.chb.2021.106855
  • Peter, J. P., & Ryan, M. J. (1976). An investigation of perceived risk at the brand level. Journal of Marketing Research, 13(2), 184–188. https://doi.org/10.2307/3150856
  • Ribera, M., & Lapedriza García, À. (2019). Can we do better explanations? A proposal of user-centered explainable AI. Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, USA, March 20, 2019. https://openaccess.uoc.edu/handle/10609/99643
  • Riedl, R. (2022). Is trust in artificial intelligence systems related to user personality? Review of empirical evidence and future research directions. Electronic Markets, 32(4), 2021–2051. https://doi.org/10.1007/s12525-022-00594-4
  • Rietz, T., Benke, I., & Maedche, A. (2019). The impact of anthropomorphic and functional Chatbot design features in enterprise collaboration systems on user acceptance. 14th International Conference on Wirtschaftsinformatik. https://www.semanticscholar.org/paper/The-Impact-of-Anthropomorphic-and-Functional-Design-Rietz-Benke/70c00ac5c69eb707610a5ece0c1bd28112243209
  • Roh, T., Park, B. I., & Xiao, S. S. (2023). Adoption of AI-enabled Robo-advisors in fintech: Simultaneous employment of UTAUT and the theory of reasoned action. Journal of Electronic Commerce Research, 24(1), 29–47. http://www.jecr.org/node/676
  • Romero-Rodríguez, J.-M., Ramírez-Montoya, M.-S., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use of ChatGPT at university as a tool for complex thinking: Students’ perceived usefulness. Journal of New Approaches in Educational Research, 12(2), 323. https://doi.org/10.7821/naer.2023.7.1458
  • Rose, J. (2023). Generative AI for summarization and information retrieval. TheBlue.Ai. https://theblue.ai/blog/genai-summarization/
  • Russo, D. (2024). Navigating the complexity of generative AI adoption in software engineering. ACM Transactions on Software Engineering and Methodology, 33(5), 1–50. https://doi.org/10.1145/3652154
  • Said, N., Potinteu, A. E., Brich, I., Buder, J., Schumm, H., & Huff, M. (2023). An artificial intelligence perspective: How knowledge and confidence shape risk and benefit perception. Computers in Human Behavior, 149, 107855. https://doi.org/10.1016/j.chb.2023.107855
  • Saville, J. D., & Foster, L. L. (2021). Does technology self-efficacy influence the effect of training presentation mode on training self-efficacy? Computers in Human Behavior Reports, 4(6), 100124. https://doi.org/10.1016/j.chbr.2021.100124
  • Sawyer, K. (2011). The cognitive neuroscience of creativity: A critical review. Creativity Research Journal, 23(2), 137–154. https://doi.org/10.1080/10400419.2011.571191
  • Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of Management Information Systems, 37(3), 875–900. https://doi.org/10.1080/07421222.2020.1790204
  • Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. https://doi.org/10.1016/j.im.2019.103174
  • Seeger, A.-M., Pfeiffer, J., & Heinzl, A. (2021). Texting with humanlike conversational agents: designing for anthropomorphism. Journal of the Association for Information Systems, 22(4), 931–967. https://doi.org/10.17705/1jais.00685
  • Selker, T. (2023). AI for the generation and testing of ideas towards an AI supported knowledge development environment. arXiv:2307.08876. https://doi.org/10.48550/arXiv.2307.08876
  • Selya, A., Rose, J., Dierker, L., Hedeker, D., & Mermelstein, R. (2012). A practical guide to calculating Cohen’s f², a measure of local effect size, from PROC MIXED. Frontiers in Psychology, 3, 111. https://doi.org/10.3389/fpsyg.2012.00111
  • Shamszare, H., & Choudhury, A. (2023). The impact of performance expectancy, workload, risk, and satisfaction on trust in ChatGPT: Cross-sectional survey analysis. arXiv:2311.05632. https://doi.org/10.48550/arXiv.2311.05632
  • Sharma, S., Islam, N., Singh, G., & Dhir, A. (2024). Why do retail customers adopt artificial intelligence (AI) based autonomous decision-making systems? IEEE Transactions on Engineering Management, 71, 1846–1861. https://doi.org/10.1109/TEM.2022.3157976
  • Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146(83), 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
  • Shorten, A., & Smith, J. (2017). Mixed methods research: Expanding the evidence base. Evidence-Based Nursing, 20(3), 74–75. https://doi.org/10.1136/eb-2017-102699
  • Siemon, D. (2022). Elaborating team roles for artificial intelligence-based teammates in human-AI collaboration. Group Decision and Negotiation, 31(5), 871–912. https://doi.org/10.1007/s10726-022-09792-z
  • Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118. https://doi.org/10.2307/1884852
  • Sindermann, C., Yang, H., Elhai, J. D., Yang, S., Quan, L., Li, M., & Montag, C. (2022). Acceptance and Fear of Artificial Intelligence: Associations with personality in a German and a Chinese sample. Discover Psychology, 2(1), 8. https://doi.org/10.1007/s44202-022-00020-y
  • Spreafico, C., & Sutrisno, A. (2023). Artificial intelligence assisted social failure mode and effect analysis (FMEA) for sustainable product design. Sustainability, 15(11), 8678. https://doi.org/10.3390/su15118678
  • Stone, R. N., & Grønhaug, K. (1993). Perceived risk: Further considerations for the marketing discipline. European Journal of Marketing, 27(3), 39–50. https://doi.org/10.1108/03090569310026637
  • Strzelecki, A. (2023). Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. https://doi.org/10.1007/s10755-023-09686-1
  • Syverson, B. (2020). The rules of brainstorming change when artificial intelligence gets involved. Here’s how. IDEO. https://www.ideo.com/journal/the-rules-of-brainstorming-change-when-artificial-intelligence-gets-involved-heres-how
  • Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Allyn & Bacon/Pearson Education.
  • Thomke, S., & Fujimoto, T. (2000). The effect of “front-loading” problem-solving on product development performance. Journal of Product Innovation Management, 17(2), 128–142. https://doi.org/10.1111/1540-5885.1720128
  • Ulfert, A.-S., Georganta, E., Centeio Jorge, C., Mehrotra, S., & Tielman, M. (2023). Shaping a multidisciplinary understanding of team trust in human-AI teams: A theoretical framework. European Journal of Work and Organizational Psychology, 33(2), 158–171. https://doi.org/10.1080/1359432X.2023.2200172
  • Ulfert-Blank, A.-S., & Schmidt, I. (2022). Assessing digital self-efficacy: Review and scale development. Computers & Education, 191(1), 104626. https://doi.org/10.1016/j.compedu.2022.104626
  • Van Pinxteren, M. M. E., Pluymaekers, M., & Lemmink, J. G. A. M. (2020). Human-like communication in conversational agents: A literature review and research agenda. Journal of Service Management, 31(2), 203–225. https://doi.org/10.1108/JOSM-06-2019-0175
  • Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
  • Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
  • Von Hippel, E., & Tyre, M. J. (1995). How learning by doing is done: Problem identification in novel process equipment. Research Policy, 24(1), 1–12. https://doi.org/10.1016/0048-7333(93)00747-H
  • Wang, Y.-Y., & Chuang, Y.-W. (2023). Artificial intelligence self-efficacy: Scale development and validation. Education and Information Technologies, 29(4), 4785–4808. https://doi.org/10.1007/s10639-023-12015-w
  • Weitzman, C. (2023). Unlock the power of AI video summarization now. https://speechify.com/blog/ai-video-summarization/
  • Wilson, N., Keni, K., & Tan, P. H. (2021). The role of perceived usefulness and perceived ease-of-use toward satisfaction and trust which influence computer consumers’ loyalty in China. Gadjah Mada International Journal of Business, 23(3), 262–294. https://doi.org/10.22146/gamaijb.32106
  • Wu, W., Zhang, B., Li, S., & Liu, H. (2022). Exploring factors of the willingness to accept AI-assisted learning environments: An empirical investigation based on the UTAUT model and perceived risk theory. Frontiers in Psychology, 13, 870777. https://doi.org/10.3389/fpsyg.2022.870777
  • Xi, Z., Chen, W., Guo, X., He, W., Ding, Y., Hong, B., Zhang, M., Wang, J., Jin, S., Zhou, E., Zheng, R., Fan, X., Wang, X., Xiong, L., Zhou, Y., Wang, W., Jiang, C., Zou, Y., Liu, X., … Gui, T. (2023). The rise and potential of large language model based agents: A survey. arXiv:2309.07864. http://arxiv.org/abs/2309.07864
  • Xie, Y., Zhu, K., Zhou, P., & Liang, C. (2023). How does anthropomorphism improve human-AI interaction satisfaction: A dual-path model. Computers in Human Behavior, 148, 107878. https://doi.org/10.1016/j.chb.2023.107878
  • Xiong, Y., Shi, Y., Pu, Q., & Liu, N. (2023). More trust or more risk? User acceptance of artificial intelligence virtual assistant. Human Factors and Ergonomics in Manufacturing & Service Industries, 34(3), 190–205. https://doi.org/10.1002/hfm.21020
  • Yang, Q., Pang, C., Liu, L., Yen, D. C., & Michael Tarn, J. (2015). Exploring consumer perceived risk and trust for online payments: An empirical study in China’s younger generation. Computers in Human Behavior, 50, 9–24. https://doi.org/10.1016/j.chb.2015.03.058
  • Yesil, S., & Sozbilir, F. (2013). An empirical investigation into the impact of personality on individual innovation behaviour in the workplace. Procedia – Social and Behavioral Sciences, 81, 540–551. https://doi.org/10.1016/j.sbspro.2013.06.474
  • Yeşilyurt, E., Ulaş, A. H., & Akan, D. (2016). Teacher self-efficacy, academic self-efficacy, and computer self-efficacy as predictors of attitude toward applying computer-supported education. Computers in Human Behavior, 64(2), 591–601. https://doi.org/10.1016/j.chb.2016.07.038
  • Yi, M., & Choi, H. (2023). What drives the acceptance of AI technology?: The role of expectations and experiences. arXiv. https://doi.org/10.48550/arXiv.2306.13670
  • Yin, M., Han, B., Ryu, S., & Hua, M. (2023). Acceptance of generative AI in the creative industry: Examining the role of AI anxiety in the UTAUT2 model. In H. Degen, S. Ntoa, & A. Moallem (Eds.), HCI international 2023 – Late breaking papers (Vol. 14059, pp. 288–310). Springer Nature. https://doi.org/10.1007/978-3-031-48057-7_18
  • Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., & Yang, Q. (2023). Why Johnny can’t prompt: How non-AI experts try (and fail) to design LLM prompts. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–21). https://doi.org/10.1145/3544548.3581388
  • Zhang, A., & Patrick Rau, P.-L. (2023). Tools or peers? Impacts of anthropomorphism level and social role on emotional attachment and disclosure tendency towards intelligent agents. Computers in Human Behavior, 138, 107415. https://doi.org/10.1016/j.chb.2022.107415
  • Zhang, C., Wang, W., Pangaro, P., Martelaro, N., & Byrne, D. (2023). Generative image AI using design sketches as input: Opportunities and challenges. Proceedings of the 15th Conference on Creativity and Cognition (pp. 254–261). https://doi.org/10.1145/3591196.3596820
  • Zhang, R., McNeese, N. J., Freeman, G., & Musick, G. (2021). “An ideal human”: Expectations of AI teammates in human-AI teaming. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3), 1–25. https://doi.org/10.1145/3432945
  • Zhu, W., Huang, L., Zhou, X., Li, X., Shi, G., Ying, J., & Wang, C. (2024). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. International Journal of Human–Computer Interaction, 1–23. https://doi.org/10.1080/10447318.2024.2323277
