Editorial

Understanding the Role of Cultural Context and User Interaction in Artificial Intelligence Based Systems


Introduction

Organizations have begun to adopt and implement artificial intelligence (AI) as part of their IT initiatives. Though researchers have focused on the adoption, implementation, and use of IS for many decades, AI-based systems (AIS) are opening new vistas for research in the field. Among the many aspects of AIS implementation and usage, this essay focuses on one key characteristic of AI that is frequently mentioned in contemporary practitioner literature but little researched – the interaction between users and their AI, called mutual learning. That is, AI should learn from its users’ decision-making behaviors, and users should better understand how AI can support and influence their decision-making, as well as its limitations. Based on a discussion of socio-technological contexts for the successful implementation of AI in organizations, this essay describes the unique nature of AIS that sets them apart from traditional IS, particularly on the issue of mutual learning. It also identifies the possible role of users’ cultural context and its influence on user-AI interactions and the progressive customization of AIS.

Unique Nature of AI Contexts: Mutual Learning

Organizations have implemented AIS for various purposes such as factory automation, cancer detection, investment recommendations, and legal support. Some of these (e.g., factory automation and cancer detection) may not need to understand users’ preferences, as the AI can learn from historical data. However, other AIS may need to recognize and understand users’ behaviors and preferences to better support them. For instance, although AI-based stock recommendation agents can recommend investment choices based on historical financial performance, users may be reluctant to strictly follow the suggestions due to other factors, such as risk tolerance and assessments of the future prospects of the recommended firms. In addition, users need to recognize how and when, and the extent to which, they may rely on AIS. For instance, users are likely to follow their individualized AI’s recommendations, which conform to their expected decisions and preferences. However, despite such conformity, AIS recommendations may not always improve decision quality.

This situation arises frequently because correct answers often exist only stochastically: AI cannot always make a precise judgment, but it can help users make decisions under uncertainty by providing better predictions (Choudhury et al., 2018). Furthermore, individuals may seek psychological satisfaction in their decision-making when considering the utility of AIS. In this context, AIS would need to learn a user’s decision-making and behavioral patterns to increase the adoption and use of AI systems. This is particularly true as prediction is useful only when users employ it in their decision-making processes (Agrawal et al., 2019). It is also likely that users may change their own decision making, consciously or unconsciously, in the process of adapting to AIS. Thus, unlike in traditional IS environments, the interactions between the system and users can play an important role in the process of emerging usage patterns.

Theoretical Understanding of Mutual Learning between AI and Users

IS research interests have traditionally centered on 1) the influence of technologies on users and 2) users’ influence on the development and use of technologies. These streams of research typically investigate one-way influence from IS to users (e.g., the influence of IS characteristics on continued use), or vice versa (e.g., the influence of developers’ characteristics on IS quality). However, a relatively small group of researchers has investigated sociomateriality, highlighting the inextricable nature of systems consisting of people (social) and things such as buildings, layout, equipment, hardware, software, and procedures (material), and the need to consider them together for a better understanding of IS use behaviors (Orlikowski, 2010). This notion of sociomateriality posits that “the social and the material are constitutively entangled in everyday life” (Orlikowski, 2010, p. 1437). As mutual learning should take place for both AIS and users during AIS implementation processes, researchers can benefit by identifying the interactions between users and technologies, and the usage patterns that may emerge differently across individual users. In this regard, sociomateriality, with its emphasis on constitutive entanglement, can provide a useful (meta-)theoretical framework. We should note that though constitutive entanglement does not presume an ontological separation between the entities, we treat users and AIS as distinct for analytical purposes (Orlikowski, 2010).

Importance of Cultural Issues

Cultural values that individuals and/or their societies uniquely possess can influence their learning processes in everyday practices. In turn, AI that mimics its users’ decision-making patterns can also begin to reflect the users’ cultural values. Figure 1 depicts a view of how cultural values can influence the learning process and the emerging usage patterns, suggesting several research questions that should be investigated. Cultural influences can be broadly observed in three areas: users’ adoption and use behaviors, AI’s recommendation patterns, and emerging usage patterns, which are discussed below.

Figure 1. Role of cultural influences on mutual learning and emerging usage patterns.


Adoption and Use Decisions

AIS adoption and use can be understood as users’ effort to mitigate decision uncertainty by obtaining possible predicted answers to the problems given to them (Louis, 1980). In this context, individuals’ cognitive components in decision-making processes cannot be insulated from culture and/or cultural values (see Kastanakis and Voyer (2014) for a detailed review). For instance, as individuals need to take responsibility for their decisions, individuals from low uncertainty avoidance cultures may broadly be more willing to bear decision risks than those from high uncertainty avoidance cultures (Leidner & Kayworth, 2006). Therefore, individuals from high uncertainty avoidance cultures may have a greater tendency to adopt and rely on AIS than those from low uncertainty avoidance cultures. This is in line with Hofstede’s (2001) argument that technological solutions appeal more to high uncertainty avoidance cultures because they can increase predictability, and with the findings of a follow-up study by the GLOBE group, which reported that societies with high uncertainty avoidance cultures are more likely to invest in technologies (de Luque & Javidan, 2004).

However, because norms and values in individualistic Western cultures emphasize self-sufficiency, autonomy, and self-promotion (Hofstede, 2001), people in these cultures tend to prefer an internal locus of control – decision-making situations within their own control – and thus may not be as inclined to depend on AIS recommendations as those in Eastern cultures. This does not preclude the role of other factors that may influence the adoption and use of AIS by users in individualistic cultures. The above rationale suggests that, given the contrasting influences of culture, we do not yet have concrete answers for how culture and/or cultural values may influence individuals’ adoption and use decisions. Thus, the first group of questions is: Do cultures influence individuals’ adoption and use of AIS? Which cultural variables affect the adoption and use of AIS more saliently?

AI’s Recommendation Patterns

AI can become a reflection of its user as it learns the user’s cognitive and behavioral patterns. That is, although AI may itself be an artifact, the cognitions and behaviors embedded in AI may grow similar to those of its culturally oriented users. If so, AI that absorbs its users’ decision-making patterns can diverge culturally: the decision-making patterns of AIS can differ based on the cultural backgrounds of their users, just as users’ behaviors differ across cultures.

A useful analogy is CogniToys, which are connected to IBM Watson for machine learning from their users. Though children initially have identical toys, the toys become different as they interact with their users over time. This illustrates how culture can manifest itself through technological artifacts (Leidner & Kayworth, 2006) in sociomaterial routines, much as numerous researchers have identified in languages and rituals. This is similar to arguments that IS development and use processes can differ across cultures (Kappos & Rivard, 2008), but extends them by considering that AIS as artifacts may diverge while interacting with users. Hence, the second group of questions worthy of consideration is: Do the recommendations suggested by AIS culturally evolve over time? Do the recommendations of AIS vary across cultures?

Emerging Usage Patterns

In continuous interactions between an AIS and its user, several relationship quality issues may arise, depending on the user’s cultural background. We should note that emerging usage patterns should be considered separately from adoption and use decisions: the former concerns the dynamics of usage behaviors over time, while the latter mainly concerns an adoption and use decision at an initial point. Identifying emerging usage patterns is particularly important in that AIS and users are likely to be entangled in everyday practices in organizations, through which an AIS may come to understand a user’s preferences and suggest recommendations corresponding to them. However, even if AI starts emulating users’ decision-making patterns and provides recommendations that confirm users’ expected decisions, users may not depend entirely on AIS due to their background and cultural factors.

Individuals from collectivistic cultures can process contradictory pieces of information and maintain conflicting statements, whereas people from individualistic cultures try to reduce cognitive dissonance by rejecting one statement in favor of the other (Peng & Nisbett, 1999). If so, people from individualistic cultures may be more likely to engage in higher usage than those from collectivistic cultures when AIS provide recommendations that confirm their decisions and reduce cognitive dissonance between users’ decisions and AIS recommendations. Additionally, as opposed to those from individualistic cultures, individuals from collectivistic cultures are less likely to make egocentric errors, as they have a more holistic mode of thinking (Wu & Keysar, 2007). Therefore, they may be more likely to attribute their decision-making errors to the AIS, which plays an important role in their decision-making environment. If so, when users from collectivistic cultures employ AIS and experience low levels of performance, they may be more likely to attribute the low performance to the AIS. Thus, they may exhibit lower levels of dependence on the system and/or higher levels of variation in their usage patterns. In this regard, usage patterns can include measures such as increase or decrease, as well as variance, in use over time. Even if AIS, with their predictive abilities and extensive data, can complement the expertise of individuals from collectivistic cultures, such users may not fully benefit from using AIS.

This issue is critical in that users’ initial performance experiences may influence emerging usage patterns over time, contingent on their cultures. For example, negative performance feedback may reinforce an individual’s negative attitudes toward AI, creating a vicious cycle of interactions between an AIS and its user. This is not what organizations anticipate when implementing AIS. Conversely, users’ initial higher performance may strengthen their positive attitudes, generating a virtuous cycle of interactions. More interestingly, these negative (positive) feedback loops may be stronger for users from collectivistic cultures than for those from individualistic cultures. Hence, the third group of questions is: Are emerging usage patterns different across cultures? Which cultural variables influence the usage patterns?

Conclusion

Researchers and practitioners commonly agree that AI will shape new patterns in our daily lives in organizations. Additionally, AIS will, in many cases, progressively learn their users’ decision-making preferences and knowledge as they interact with them. In this regard, we highlight the importance of viewing AIS in terms of the entanglement of social and material/technical objects in organizational routines (Gaskin, Berente, Lyytinen, & Yoo, 2014). We discuss how different cultures and/or cultural values may influence not only user behaviors and emerging usage patterns (e.g., Martinsons & Davison, 2007) but also AI’s recommendation patterns. We then suggest several critical research questions that are worthy of consideration. We believe that, due to its socio-technological nature, the diffusion of AIS in organizations can pave new avenues for understanding the role of cultural values in the dynamics of AIS adoption and use.

Additional information

Funding

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (2019S1A5A2A03034742).

Notes on contributors

Kyootai Lee

Kyootai Lee is Professor at the Graduate School of Management of Technology at Sogang University, Republic of Korea. He received his PhD from University of Missouri–St. Louis. His research has been published in Journal of Organizational Behavior, Leadership Quarterly, Family Business Review, Information Systems Journal, Journal of Global Information Technology Management, European Management Journal, Journal of Business Research, and Technological Forecasting and Social Change, among others. His recent research is focused on decision making in the innovation implementation and on the impact of artificial intelligence on decision making.

Kailash Joshi

Kailash Joshi is Professor of Information Systems at the University of Missouri–St. Louis. He obtained his PhD in Business Administration from Indiana University, Bloomington. He has published in MIS Quarterly, Information Systems Journal, Decision Sciences, IEEE Engineering Transactions, and other journals. He also serves as Senior Associate Editor of JITCAR.

References

  • Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Artificial intelligence: The ambiguous labor market impact of automating prediction. Journal of Economic Perspectives, 33(2), 31–50. doi:10.1257/jep.33.2.31
  • Choudhury, P., Starr, E., & Agarwal, R. (2018). Machine learning and human capital: Experimental evidence on productivity complementarities. SSRN working paper.
  • de Luque, M. S., & Javidan, M. (2004). Uncertainty avoidance. In R. J. House, P. J. Hanges, M. Javidan, M. Dorfman, & V. Gupta (Eds.), Culture, leadership, and organizations: The GLOBE study of 62 societies (pp. 602–654). Thousand Oaks, CA: Sage.
  • Gaskin, J. E., Berente, N., Lyytinen, K., & Yoo, Y. (2014). Toward generalizable sociomaterial inquiry: A Computational approach for zooming in and out of sociomaterial routines. MIS Quarterly, 38(3), 849–871. doi:10.25300/MISQ/2014/38.3.10
  • Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. London: Sage Publications.
  • Kappos, A., & Rivard, S. (2008). A three-perspective model of culture, information systems, and their development and use. MIS Quarterly, 32(3), 601–634. doi:10.2307/25148858
  • Kastanakis, M. N., & Voyer, B. G. (2014). The effect of culture on perception and cognition: A conceptual framework. Journal of Business Research, 67(4), 425–433. doi:10.1016/j.jbusres.2013.03.028
  • Leidner, D. E., & Kayworth, T. (2006). A review of culture in information systems research: Toward a theory of information technology culture conflict. MIS Quarterly, 30(2), 357–399. doi:10.2307/25148735
  • Louis, M. R. (1980). Surprise and sense making: What newcomers experience in entering unfamiliar organizational settings. Administrative Science Quarterly, 25, 226–251. doi:10.2307/2392453
  • Martinsons, M. G., & Davison, R. M. (2007). Strategic decision making and support systems: Comparing American, Japanese and Chinese management. Decision Support Systems, 43(1), 284–300. doi:10.1016/j.dss.2006.10.005
  • Orlikowski, W. J. (2010). The sociomateriality of organizational life: Considering technology in management research. Cambridge Journal of Economics, 34(1), 125–141. doi:10.1093/cje/bep058
  • Peng, K., & Nisbett, R. E. (1999). Culture, dialectics, and reasoning about contradiction. American Psychologist, 54(9), 741–754. doi:10.1037/0003-066X.54.9.741
  • Wu, S., & Keysar, B. (2007). The effect of culture on perspective taking. Psychological Science, 18(7), 600–606. doi:10.1111/j.1467-9280.2007.01946.x
