Research Article

How to Apply Artificial Intelligence for Social Innovations

Article: 2031819 | Received 08 Oct 2021, Accepted 18 Jan 2022, Published online: 31 Jan 2022

ABSTRACT

This study investigates how to apply artificial intelligence for social innovations using two socio-political factors critical to a Behavioral Theory of the Firm (BTF): uncertainty and conflict. The analysis leads to four approaches for applying artificial intelligence for social innovations: the agent-based modeling, social entrepreneurship, stakeholder capital, and social contract approaches. The valuation of artificial intelligence in such projects emerges endogenously as social innovators evaluate their environments, goals, and technologies. The study offers step-by-step guidance for assessing the performance of, and creating implementation strategies for, social innovation projects combined with artificial intelligence.

Introduction

This study addresses how social changes affect organizations, which has been a critical subject not only for philosophers but also for business scholars for many decades (Miles Citation1987; Oliver Citation1991). Among various social changes, we focus on social innovations that are related to organizational survival. More importantly, we propose how to address social innovations effectively and efficiently with advanced technologies and illustrate various ways to apply different artificial intelligence techniques at a project level.

Generally defined, social innovations are new social practices that aim to meet social needs in a better way than the existing solutions, resulting from, for example, working conditions, education, community development or health. Over the last decade, social innovations have become critical for organizational survival and an important subject in the literature (Berrone et al. Citation2010; Crilly, Zollo, and Hansen Citation2012; Delmas and Toffel Citation2008; Okhmatovskiy and David Citation2012; Porter and Kramer Citation2011; Waldron, Navis, and Fisher Citation2013).

In fact, the existing literature uses various perspectives to address social innovations, including the cognitive perspective (Hahn et al. Citation2014; Lange and Washburn Citation2012), institutional perspective (Campbell Citation2007), social perspective (Aguilera et al. Citation2007), and individual organizational perspective (McWilliams and Siegel Citation2001). However, it seldom addresses what drives heterogeneity in social innovation activities or proposes a general theory to explain the mechanism behind that heterogeneity. In practice, while social innovators seek the digital transformation of their projects using advanced technologies such as artificial intelligence, they lack a theoretical foundation that would help them understand its diverse aspects (Mulgan et al. Citation2007; Morrar, Arman, and Mousa Citation2017). Moreover, the uncertainty embedded in, for example, data ownership and AI ethics heightens stakeholder conflict around the use of such technologies within organizations. Therefore, this study intends to clarify why artificial intelligence is relevant and important for organizational success in addressing social innovations by generating a general framework that can be used by scholars and practitioners.

The foundational theory for this study is a Behavioral Theory of the Firm (BTF) (Cyert and March Citation1963). In particular, this study uses two key variables in the BTF: uncertainty and conflict. Social innovation projects have internal and external stakeholders. Under uncertainty, the outcomes of the dynamic interactions among the stakeholders determine how social innovation projects can be combined with advanced technologies and how well the combination works, which is consistent with the perspective of Cyert and March (Citation1963).

In applying artificial intelligence for social innovation projects, entrepreneurship becomes most important when uncertainty is high (Knight Citation1921), and the creation of shared value becomes most important when stakeholder conflict is high (Porter and Kramer Citation2011). Drawing on these insights, we derive four approaches to deploying artificial intelligence for social innovation projects based on the different levels of uncertainty and stakeholder conflict: an agent-based modeling (ABM) approach, a social entrepreneurship approach, a stakeholder capital approach, and a social contract approach. We posit that these four approaches and their relationships form a general theory for using digital transformation techniques (i.e. artificial intelligence) for social innovations. Furthermore, the propositions outline the methods of applying artificial intelligence for evaluating the performance of and forming implementation strategies for social innovation projects. Ultimately, the study intends to create a general theory that helps social innovators understand the diverse aspects of using artificial intelligence techniques in social innovation projects and to use that theory for the valuation of, and the creation of implementation strategies for, social innovation projects that use artificial intelligence.

The contributions this study makes are three-fold. First, it provides a framework (i.e. the classification framework) for researchers and practitioners to identify the state of social innovation that uses artificial intelligence from the four categories determined by the levels of Knightian uncertainty and stakeholder conflict. Each of the four states is accompanied by the most suitable approach for social innovators: an Agent-Based Modeling (ABM) approach, a social entrepreneurship approach, a stakeholder capital approach, or a social contract approach. Second, it provides practical guidance for assessing the performance of social innovation projects that use artificial intelligence technologies. Third, based on the classification framework, this study provides a step-by-step guideline for incorporating AI technologies in social innovation projects, which includes a macro-level strategy and an implementation strategy.

Finally, this study extends prior studies in the value sensitive design (VSD) literature. For example, Friedman and Kahn (1992), Friedman and Hendry (Citation2019), and Umbrello and van de Poel (Citation2021) explore the VSD approach, an established method to integrate values into technical design. Friedman and Hendry (Citation2019) discuss how to apply people’s moral and technological imagination to the design of technology. For academicians and practitioners, they explore theories, methods, and applications to create responsible innovations. Furthermore, Umbrello and van de Poel (Citation2021) explore how VSD is applied to AI. They argue that AI systems pose new challenges to design and innovation and require that the existing VSD approach be modified to overcome such challenges. While such studies broadly address technological innovations in relation to human values or treat AI as a sociotechnical system (i.e., incorporating stakeholder dialogue and values into AI social innovations), our study extends them by addressing how to design AI technologies in combination with social innovation projects, specifically to address various situations defined by different levels of uncertainty and stakeholder conflict that are not solely created by AI systems but exist ex-ante.

Methodology

The study treats each social innovation project as the unit of analysis, conducting a project-level analysis. The main reason is that the purpose of this study is to create a general theory that helps explain the heterogeneity in digital transformation (e.g. applying artificial intelligence for social innovations), which in turn provides a ground for creating an evaluation method and implementation strategies for social innovation projects. Among the technologies used for digital transformation, this study focuses on artificial intelligence in particular.

The main independent variables are Knightian uncertainty (Keynes Citation1921; Knight Citation1921) and stakeholder conflict. As explained by Keynes (Citation1937), Knightian uncertainty arises when a decision-maker cannot quantitatively assess the risks that arise in his/her decision-making process. In our study, Knightian uncertainty arises when a certain aspect of social innovation is so ambiguous or complicated that it is difficult to predict the outcomes of the projects or even to calculate the probability distribution for the outcomes. Keynes (Citation1937) offers “a European war, the price of copper, the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system” as examples that illustrate qualitative uncertainty. These examples have one thing in common: the odds of occurrence are not quantitatively measurable. On the contrary, risk is measurable uncertainty whose probability distribution can be calculated quantitatively. In this paper, uncertainty refers to Knightian uncertainty and risk refers to quantifiable uncertainty. Today’s increased hyperconnectivity among the members of society also increases the complexity and ambiguity embedded in social innovations, which indicates that many critical issues of social innovations are in fact qualitative and unpredictable.

In addition to uncertainty being a critical element in the BTF, there are other reasons that we focus on uncertainty. First, despite its importance, uncertainty has been overlooked in the discussion of stakeholder management. Second, prior studies use uncertainty as a central concept for analyzing organizational decision-making processes and their consequences (Thompson Citation1967). Third, uncertainty is deeply related to another important variable relevant to social innovations (i.e. stakeholder conflict). For example, increasing uncertainty may lead to increasing stakeholder conflict, but decreasing uncertainty may also lead to increasing stakeholder conflict because a state of low uncertainty reveals conflict more clearly. On the other hand, a state of low stakeholder conflict may induce some stability in society, but only to make it more prone to experiencing unexpected changes, hence high uncertainty.

Stakeholder conflict is the other main independent variable in this study. It is a critical element for the BTF, which stresses the relationships among organizational members, and it is often discussed in the literature on organizational politics and conflict management. It occurs when the goals of organizational members diverge (Burton, Obel, and Håkonsson Citation2015; Coff Citation1999; Weick Citation1979). Stakeholder conflict is particularly important for social innovations because their value-laden goals are subject to diverging interpretations by the stakeholders. Indeed, other variables also affect the relationship between social innovations and artificial intelligence, such as the scale of the project, the number of stakeholders, the kinds of artificial intelligence techniques, or the types of underlying data for analysis; however, we believe that using uncertainty and stakeholder conflict provides a simple but rational framework (Kang et al. Citation2018).

We construct four approaches using different levels of the two variables. Table 1 summarizes a general theory of social innovation, digital transformation, and their relationship. Using this table, one can understand the heterogeneous nature of social innovations combined with artificial intelligence and evaluate the performance of and create implementation strategies for social innovations.

Table 1. States of social innovations and digital-transformation strategies

The combination of artificial intelligence and social innovations

Low uncertainty and low conflict: An agent-based modeling approach

The primary reason for applying artificial intelligence for social innovation projects would be to have a more scientific way of understanding the states of the projects and decision-making while reducing costs. One should understand the state of the relevant issues, seek available options to achieve the goal based on the understanding, and predict possible outcomes from each option. The most efficient way to do these is to model the circumstances and variables related to the project. Once a model is constructed, it can collect data and update itself automatically in real-time, allowing the users to understand the state of their projects at a glance. The users can also incorporate various options related to the project in the model and observe the outcomes (e.g. scenario planning) before making decisions. Simply put, the model allows the users to run simulations that help them decide how to proceed with social innovations.

The process described above is called agent-based modeling (ABM). ABM is a quantitative modeling technique that simulates different situations by modeling or assuming independent variables and their interactions to understand their impact (Railsback and Grimm Citation2019). Using this approach, Peng et al. (Citation1999) state that “a set of agents with specialized expertise can be quickly assembled to help with the gathering of relevant information and knowledge, to cooperate with each other and with other parts of the production management system and humans to arrive at timely decisions in dealing with various enterprise scenarios.” As such, ABM is deeply related to artificial intelligence theoretically and practically (Jennings Citation2000; Klügl and Bazzan Citation2012).
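To make the idea concrete, the sketch below shows a toy agent-based simulation in Python. The agents, their binary “support” state, and the peer-influence rule are hypothetical placeholders chosen for illustration, not a model taken from the studies cited above; a real project would model its own stakeholders, variables, and interaction rules.

```python
# A minimal agent-based modeling (ABM) sketch in plain Python, illustrating the
# simulate-then-decide loop described above. All behavioral rules are assumed.
import random

random.seed(42)

class Stakeholder:
    """An agent whose support for a social innovation project can change."""
    def __init__(self, supports):
        self.supports = supports

    def interact(self, other):
        # Simple peer-influence rule: with some probability, adopt the other
        # agent's stance (an assumed behavioral rule, for illustration only).
        if random.random() < 0.1:
            self.supports = other.supports

def run_scenario(n_agents=100, initial_support=0.3, steps=500):
    """Simulate pairwise stakeholder interactions; return the final support rate."""
    agents = [Stakeholder(random.random() < initial_support) for _ in range(n_agents)]
    for _ in range(steps):
        a, b = random.sample(agents, 2)
        a.interact(b)
    return sum(agent.supports for agent in agents) / n_agents

# Scenario planning: compare simulated outcomes under different starting
# conditions before committing resources to the project.
for share in (0.2, 0.4, 0.6):
    print(f"initial support {share:.0%} -> simulated final support "
          f"{run_scenario(initial_support=share):.0%}")
```

Running the loop under different starting conditions is the scenario-planning use described above: the innovator compares simulated outcomes before deciding how to proceed.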

In a state of low uncertainty and stakeholder conflict, the advantages of quantitative modeling become more distinct while the disadvantages become less distinct, making an ABM approach most suitable for applying artificial intelligence for social innovations. If uncertainty is low, even when risk is high, one can readily identify or statistically describe the state of the project; however, if uncertainty is high, artificial intelligence is difficult to use because it requires quantitative data as input. Alternative data (e.g. text, image, voice) that involve qualitative information are hard to obtain, are expensive even when obtainable, and require heavy computing power to process. On the contrary, risk is quantifiable, making it easier to interpret, collect relevant data, and analyze the data regardless of its degree.

On the other hand, a state of low stakeholder conflict lowers the need to model the strategic behaviors of the stakeholders, while a state of high stakeholder conflict makes it difficult to model them at all. For example, representative artificial intelligence techniques such as deep learning are optimization algorithms. However, a state of high stakeholder conflict implies goal incongruence, which obscures the objective to optimize. Furthermore, although it increases the need to model the situation, a state of high stakeholder conflict makes it harder to obtain data that require stakeholders’ consent for release.

To summarize, in a state of low uncertainty and stakeholder conflict, it is appropriate to use an ABM approach to apply artificial intelligence for social innovations.

Proposition 1 (ABM: agent-based modeling). The lower the Knightian uncertainty and stakeholder conflict are, the more effective the agent-based modeling (ABM) approach becomes in applying artificial intelligence to accomplish social innovations.

Example #1: Digital twins for upgrading projects

Upgrading an existing project by using artificial intelligence techniques involves low uncertainty and stakeholder conflict. An existing project has low uncertainty because it has accumulated a certain amount of data and its key features have been revealed, and the relationships among the stakeholders are likely more stable in the middle of a project than at its beginning. Therefore, in this section, we focus on the case of upgrading an existing project using artificial intelligence and explain why an ABM approach is most suitable for using advanced technologies such as artificial intelligence for social innovations in the state of low uncertainty and stakeholder conflict. In particular, we propose that social innovators use a special form of ABM, digital twins, to expand their projects. This is because digital twins enable social innovators to go beyond exploiting the benefits of technologies and to focus on their strategies and project models, enhancing their ability to utilize artificial intelligence.

Organizations that actively use digital twins are called circulating businesses. Circulating businesses establish business processes based on real-time data, and many organizations are embracing this model today. From the perspective of scholars, digital twins are considered one type of ABM, so digital twins can be considered real-time ABM. As explained earlier, ABM models the actions and interactions of individuals or organizations to analyze their influence on the whole system under diverse scenarios. If one uses ABM to replicate the organization and its stakeholder interactions virtually, the result is a digital twin.

In practice, many companies pay attention to digital twins and implement them for simulation modeling. Companies collect data, conduct research, and run various tests before making decisions. This is called evidence-based policy. However, evidence-based policy is costly and time-consuming to act on. If companies can simulate relevant situations, including variables internal and external to the organization (e.g. the project, partners, customers), they can conduct less costly and faster analysis. Digital twins also help create strategies more scientifically, thereby allowing the digital transformation of projects to be driven by strategies rather than technologies.

Another advantage of using digital twins is that constant updating is possible. For example, real-time data accumulation using the Internet of Things (IoT) produces big data, which gives more accuracy and clarity to the model. As simulation and updating continue, digital twins become a tool for organizational learning and continue to develop. In the process, the feedback from the stakeholders plays an important role as it is constantly reflected in the digital twins. In other words, the process of gathering, managing, and reflecting the responses of the stakeholders is accumulated in the code and data of the digital twins, and this accumulation enables the constant development of the digital twins. Perhaps this opens the possibility that digital twins can automatically provide relevant data and predictions of the states of projects and organizations, acting as a technical advisor. In the future, not just the stakeholders but the entire ecosystem may be replicated by digital twins, expanding the coverage of ABM. Organizations that do not implement digital twins will inevitably fall behind those that do in their digital competency for the fourth industrial revolution. (see Online Appendix for examples from the financial and auto industries)
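As a rough illustration of the “real-time ABM” idea, the sketch below keeps a rolling window of (simulated) IoT readings and answers a simple what-if query. The sensor stream, the load metric, and the projection rule are assumptions made for this example only, not a description of any particular digital-twin product.

```python
# A minimal "digital twin" sketch: the twin mirrors an operational process by
# ingesting (here, simulated) real-time readings and re-running scenarios on the
# updated state.
from collections import deque
from statistics import mean

class DigitalTwin:
    def __init__(self, window=20):
        self.readings = deque(maxlen=window)   # rolling window of recent data

    def ingest(self, value):
        """Update the twin with a new real-time observation (e.g. from IoT)."""
        self.readings.append(value)

    def current_load(self):
        return mean(self.readings) if self.readings else 0.0

    def simulate(self, extra_demand):
        """What-if query: projected load if demand rises by `extra_demand`."""
        return self.current_load() + extra_demand

twin = DigitalTwin()
for reading in [52, 55, 61, 58, 64]:      # stand-in for an IoT data stream
    twin.ingest(reading)

print("current load:", twin.current_load())
print("projected load under +20 demand:", twin.simulate(20))
```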

High uncertainty and low conflict: A social entrepreneurship approach

A state of low stakeholder conflict indicates that there is a possibility of creating shared value. Social innovators can then leverage this to the benefit of their projects (Crane et al. Citation2014; Porter and Kramer Citation2011). In addition, a state of high uncertainty increases the possibility of creating or discovering opportunities by using artificial intelligence because the absence of uncertainty means the absence of opportunities. One can understand this in the context of “high risk and high return.” Therefore, social innovators should try to seize new opportunities to apply artificial intelligence for social innovations. In sum, a state of low stakeholder conflict and high uncertainty makes a social entrepreneurship approach, which achieves social innovations through the creation of shared value, the most appropriate way to utilize artificial intelligence for social innovations. This is consistent with the definition of a social entrepreneur: compared to ordinary people, social entrepreneurs accept qualitative uncertainty to find new opportunities in the process (Knight Citation1921). Therefore, the proposition follows:

Proposition 2 (social entrepreneurship). The higher the Knightian uncertainty and the lower stakeholder conflict are, the more effective the social entrepreneurship approach becomes in applying artificial intelligence to accomplish social innovations.

In practice, uncertainty creates opportunities for social innovators through artificial intelligence. The modern era is called a hyperconnected society, which means that there are more channels through which people, companies, and goods can access each other, and the amount of information that can be obtained through such access increases. Increasing connectivity implies increasing complexity in networking, hence increasing uncertainty. Increasing connectivity and complexity also make it harder to hide information, increase the occurrence of external shocks to the network, and make it more difficult to predict the effect of such shocks on the network. Even minor shocks to the network can create a butterfly effect that damages the entire network. All of these heighten uncertainty.

For example, unethical behaviors of corporate managers who abuse their authority over their employees or affiliated parties that provide products and services for them can seriously jeopardize the reputation capital of their companies. Some call it an Instagram risk, which shows the influence of networks. On the other hand, companies can utilize the influence of networks to create value as well. The case of BTS, a Korean singer group that has become popular worldwide, is a good example of the successful utilization of the influence of networks. Taking advantage of the hyper-connected society, BTS builds strong ties with its global fans, who actively contribute to its success. In addition, innovative companies like Dollar Shave Club or Warby Parker are known to use a direct-to-customer (D2C) approach to take advantage of the hyper-connected society (Ingrassia Citation2020). According to research by Edelman, two-thirds of survey respondents say that they care about social scandals or issues of the companies from which they purchase products and services (Edelman Citation2018). These examples, such as BTS and the enterprises that succeeded with an SNS-related strategy or a D2C sales approach, together with the survey results that indicate a shift in consumer demand, show how important it is for organizations to pay attention to social issues to survive in a hyper-connected society.

This is an opportunity for social innovators. The ripple effect of social issues heightens uncertainty, thereby creating opportunities for social innovators to undertake social innovation projects. Furthermore, the demand by companies for alliances with social innovators who have a social entrepreneurship mind-set may increase so that the companies can address diverse social issues that lie outside their organizational capacity. Social innovators can find good opportunities under such circumstances. At the same time, a hyper-connected society generates a large amount of complicated data, thereby increasing the demand for the application of artificial intelligence. In sum, in a state of high uncertainty and low stakeholder conflict, social innovators had better take a social entrepreneurship approach to using artificial intelligence for social innovations in order to seize new opportunities.

Example #2: AI for the environment

Concerns over environmental disruptions are widespread and require little persuasion for people to realize their gravity. Therefore, stakeholder conflict is low when it comes to environmental issues. However, environmental issues entail complexity and high uncertainty. For example, it is impossible to accurately predict when or how any environmental disruption may arise. People build scenarios based on scientific research, but the research only provides a hint of what might happen, which cannot give people sufficient time or resources to prepare for the disruptions. The Paris Agreement set a goal of limiting the increase in global temperature to 1.5 degrees Celsius above pre-industrial levels, but it is not certain whether the current state has already breached the tipping point. Furthermore, the causes and impacts of environmental issues are hard to measure ex-ante, making a precise prediction of the occurrence of such issues difficult. Most of the uncertainty around measuring and preparing for environmental disruptions is related to the limitation of human intelligence and objectivity and to structural complexity. In summary, environmental disruptions entail high uncertainty albeit low stakeholder conflict (see Note 1), for there is a strong consensus among the stakeholders on the importance and gravity of the issues.

Various studies mention the merits of advanced technologies for resolving environmental issues. For example, Rolnick et al. (Citation2019) investigate how machine learning reduces greenhouse gas emissions and helps people handle climate change better. They propose a number of different methods for utilizing artificial intelligence, illustrated with thirteen cases: electrical systems, transportation, buildings/cities, industry, farms/forests, carbon dioxide removal, climate prediction, societal impacts, solar geoengineering, individual action, collective decisions, education, and finance. They emphasize that climate change is characterized by its “complexity, scale, and fundamental uncertainty” and explain how machine learning is the suitable technology to alleviate them. They also argue that their propositions can help generate innovative ideas for valuable ventures and social entrepreneurs. In other studies, a weather forecasting model using computational intelligence tools (Yahya and Seker Citation2019) and the development of an ESRI ArcGIS tool that implements an artificial neural network (ANN) have also been suggested as ways to address climate change.
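For readers who want a concrete picture of the ANN-style forecasting models mentioned above, the following is a minimal sketch using scikit-learn on synthetic data. The features, the data-generating formula, and the network size are invented for illustration and are not taken from Yahya and Seker (Citation2019) or from the ArcGIS tool.

```python
# A minimal sketch of an artificial neural network (ANN) for a weather-style
# forecasting task. The data are synthetic; real applications would use
# observed meteorological records.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical features: [humidity %, pressure hPa, yesterday's temperature degC]
X = rng.uniform([30, 990, 5], [90, 1030, 30], size=(500, 3))
# Hypothetical target: tomorrow's temperature, a noisy function of the features
y = 0.3 * X[:, 2] + 0.05 * (X[:, 1] - 1000) - 0.02 * X[:, 0] + rng.normal(0, 0.5, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
model.fit(X[:400], y[:400])                      # train on the first 400 samples

print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))
print("forecast for [60% RH, 1012 hPa, 18 degC]:",
      round(float(model.predict([[60, 1012, 18]])[0]), 2), "degC")
```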

Social innovators should also encourage the use of existing technologies to solve environmental issues. In addition, they must seek creative ideas to combine social innovations with artificial intelligence to solve environmental issues such as climate change that entail high uncertainty. Therefore, the entrepreneurship mind-set is important for social innovators.

In summary, environmental disruptions such as climate change exemplify why a social entrepreneurship approach is effective in applying artificial intelligence for social innovations in a state of high uncertainty and low stakeholder conflict. Innovators need to seek ways to solve environmental problems by utilizing artificial intelligence with a social entrepreneurship approach.

Low uncertainty and high conflict: A stakeholder capital approach

In a state of low uncertainty and high stakeholder conflict, stakeholder capital becomes crucial for social innovations or for using artificial intelligence for social innovations. Social innovators should accumulate substantial social capital, especially stakeholder capital (Dorobantu, Henisz, and Nartey Citation2017; Dorobantu, Nartey, and Henisz Citation2013; Henisz, Dorobantu, and Nartey Citation2014) based on virtues. Then, an effective approach to combine social innovations with artificial intelligence would be to use accumulated stakeholder capital. In this section, stakeholder capital can be seen as social capital formed among the stakeholders.

A state of low uncertainty reveals stakeholder conflict more clearly. For example, there is a mobile application called LawTalk in Korea that connects users who seek legal advice with lawyers who provide it. Usually, it is complicated and expensive for ordinary people to find attorneys to help them solve legal issues. However, the LawTalk application lowers the barrier to entry for both the users and entry-level lawyers. For this reason, stakeholder conflict is high, especially among established lawyers. Meanwhile, uncertainty is low because only a simple technology (i.e., a platform that connects the users to the lawyers) is used.

Social innovation projects also often face this kind of situation. Similar to the case of LawTalk, in a state of low uncertainty and high stakeholder conflict, social innovators have no choice but to respond to the demand from the stakeholders. However, social innovators must find opportunities to accumulate stakeholder capital in the process of concession rather than providing ungrounded concessions to the stakeholders unconditionally. In addition, stakeholder capital should be accumulated strategically based on virtues, which should involve the strategic selection of virtues and development of characteristics or reputation of social innovators based on the chosen virtues. Through this, social innovators can be regarded as virtuous beings, which will later help them gain support from the stakeholders for their strategy to apply artificial intelligence for social innovation projects. Of course, for this purpose, social innovators must practice the chosen virtues by themselves through training and formation of habits.

The virtues here can be best understood through virtue ethics as developed by Aristotle or, more recently, by Vallor (Citation2016), who discusses virtues in relation to technological developments. While Reijers and Gordijn (Citation2019) distinguish virtues from values, or VSD from virtuous practice design (VPD), which departs from virtue ethics rather than heuristic values, Umbrello (Citation2020) argues that VPD extends VSD and is not an alternative to it. In accordance with Umbrello (Citation2020), this study does not distinguish virtues from values.

Meanwhile, a state of low uncertainty is helpful for accumulating stakeholder capital by selecting and practicing the chosen virtues because social innovators can identify the main issues and sources of stakeholder conflict, which helps them create an issue list (Kang et al. Citation2018). In addition, social innovators and stakeholders can accumulate stakeholder capital and utilize complex artificial intelligence technologies in solving the issues together. This can heighten the potential of artificial intelligence by allowing social innovators to focus on their strategies or the issues per se rather than the technology itself (Kane et al. Citation2015). Communication and negotiation processes that take place in the midst of solving the issues also provide an opportunity for social innovators to accumulate social capital and become virtuous beings. This is because, in the process of communication and negotiation, social innovators can be perceived as pursuing the common interests of the community, not their own, which gives them the appearance of a protector of the community against irresponsibility and corruption. In fact, this line of argument is similar to the VSD approach discussed by Friedman and Hendry (Citation2019) or by Umbrello and van de Poel (Citation2021), who argue that, by addressing AI as a sociotechnical system (something inextricably linked to the context and stakeholders in which the system is embedded), one can incorporate stakeholder dialogue and values into AI social innovations to overcome the new challenges to design and innovation created by AI systems.

Accumulated stakeholder capital from the interactions with the stakeholders becomes quite useful for social innovation projects that apply artificial intelligence. This is because accumulated stakeholder capital and virtues become a core competence or strategic resource for organizations, as indicated by prior studies (Blyler and Coff Citation2003; Coff Citation1999). Also, virtues that form stakeholder capital play an important role in risk management (Godfrey Citation2005; Godfrey, Merrill, and Hansen Citation2009; Kim, Lee, and Kang Citation2021; Koh, Qian, and Wang Citation2014; Minor and Morgan Citation2011; Shiu and Yang Citation2017). This is because even if social innovators fail to achieve their original goals and cause losses to the stakeholders, the stakeholders will more likely tolerate such failure in consideration of the positive (i.e., virtuous) characteristics of the social innovators. Therefore, from both a practical and an ethical standpoint, it is recommended that social innovators focus on virtues rather than on deontology or consequentialism, since such actions influence the accumulation of stakeholder capital. The proposition below follows:

Proposition 3 (stakeholder capital). The higher stakeholder conflict and the lower the Knightian uncertainty are, the more effective the stakeholder-capital approach becomes in applying artificial intelligence to accomplish social innovations.

Social capital, including stakeholder capital, and virtue ethics are theoretically closely related; depending on the perspective, they are even inseparable, which is particularly evident in Aristotle’s argument. Aristotle saw political or civic friendship as a fundamental requirement of a healthy society. Political friendship here is in fact social capital, as expressed in Aristotle’s Nicomachean Ethics (Ameriks and Clarke Citation2000). Virtue ethicists, influenced by Aristotle, argued that communities and institutions should play a role as the infrastructure of virtue, consistent with the claims of the social capital literature.

Then, if stakeholder capital consists of a strategically selected portfolio of virtues, the question remains as to how to build it from the perspective of social entrepreneurs. An easy way would be to use the list of common virtues to be pursued and vices to be avoided as described in Aristotle’s Nicomachean Ethics (see Note 2). The necessary items from the list should then be adjusted and selected according to the characteristics of the stakeholders. Aristotle argues that virtues reflect the peculiarities of society, implying the relation between stakeholder capital and virtue ethics. MacIntyre (Citation1981) also argues that virtue should be based on the community of stakeholders, which implies that virtue and social capital are closely related to one another. He states, “a virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices and the lack of which effectively prevents us from achieving any such goods” (p. 191). Such a statement suggests that virtues and vices can vary depending on the perspective or the community on which they are based, a commonly criticized weakness of virtue ethics. However, this is rather an advantage for social innovators who want to use artificial intelligence by accumulating stakeholder capital.

Example #3: Data ownership

Deep learning has driven the rapid development of artificial intelligence, and the tremendous amount of high-quality data has enabled deep learning. The problem, then, lies with the ownership of input data. Currently, most input data are owned by large conglomerates, which unnerves the affected stakeholders such as the government. Considering the importance of data for artificial intelligence, the debate over data ownership is so critical that it ultimately determines the competitiveness of artificial intelligence. Since the question of who owns data can be resolved institutionally, the uncertainty embedded in the problem itself may be low. However, since ownership itself is zero-sum, stakeholder conflict is high, making it difficult to settle the conflict over data ownership. Therefore, the problem of data ownership can be considered a social problem that entails high stakeholder conflict and low Knightian uncertainty.

The Cambridge Analytica case, which seriously undermined Facebook’s stakeholder capital, is a good example of stakeholder conflict over data. According to the whistleblower Wylie, companies similar to Cambridge Analytica are actively influencing the privacy of individuals and politics in many countries (Wylie Citation2019). (see Online Appendix II for more description of its relation to democracy)

Originally, the Internet was considered a means to distribute power from centralized bodies to individuals. However, the development of cloud computing and smart devices has led to a de-facto centralization of data ownership, which encourages the centralization of power around data on the Internet. In fact, a few monopolistic companies and countries have unmatched influence on the Internet, and their positions become stronger as they accumulate more data and develop more algorithms to analyze the data. Social innovators may demand that the government manage, and open to public use, data that are related to infrastructure and have large external effects, such as detailed spatial information like maps. This requires stakeholder capital, and soft regulations that private companies voluntarily create to provide ethical guidelines regarding data use are ideal for industrial development.

Additionally, there are technologies that social innovators can use to facilitate the accumulation of stakeholder capital for social innovations. First, blockchain technology enables the dissemination and management of data and collective decision-making by the users without transferring data ownership to particular individuals or groups, helping to mitigate data monopolies. Once its contribution to stakeholder capital is established through social innovations based on such public benefits, the odds of receiving government and private sector funding will increase. Sufficient funding will facilitate the development of blockchain technology, which, in turn, will increase its contribution to accumulating stakeholder capital in social innovations.
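The following is a minimal, self-contained sketch of the hash-chained ledger idea behind such blockchain-based data governance. The record fields and the two-block example are hypothetical; a production system would add digital signatures, consensus among nodes, and distribution, which are omitted here.

```python
# A toy hash-chained ledger: blockchain-style records can document data usage
# and consent without handing ownership of the data itself to one party.
import hashlib, json, time

def make_block(record, prev_hash):
    """Create a block whose hash covers the record and the previous block's hash."""
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Check that no block has been altered and that the links are intact."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical consent and usage records appended to the ledger.
chain = [make_block({"owner": "citizen_017", "consent": "map data, research use"}, prev_hash="0")]
chain.append(make_block({"user": "city_lab", "action": "read map data"}, prev_hash=chain[-1]["hash"]))

print("ledger intact:", verify(chain))
```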

Second, the sharing economy using advanced technologies is well known for its efficient distribution of resources, which contributes to the protection of the environment and to sustainable growth (e.g. the efficient use of parked cars or empty spaces via sharing). However, the suppliers for the sharing economy are micro-innovators who may be subject to unfavorable working conditions without proper regulations and monitoring. If a company becomes a monopolist with a dominant platform under the guise of the sharing economy, it may engage in predatory behaviors. In the end, there is a risk that the initial aims of the sharing economy do not match its outcomes. This creates an opportunity for social innovators, who can step in to point out the abuses and negative effects of the sharing economy and provide proper monitoring, which in turn will contribute to accumulating stakeholder capital for the social entrepreneurs.

For example, Airbnb, Upwork, and Facebook represent companies based on the sharing economy, where users voluntarily provide data and conduct important transactions on their servers. Currently, major sharing economy companies take the lion’s share of the value they create in the form of brokerage fees and continuously extract value from the accumulated big data using artificial intelligence. The key issue of the sharing economy lies in how to distribute the ownership, usage, and value of data. Yet, leading sharing economy companies are primarily interested in maximizing profits through data monopoly rather than accumulating stakeholder capital. This should present great opportunities for social entrepreneurs.

Social innovators must develop relevant knowledge about how the sharing economy can solve the problems faced by humans and society, how to allocate the wealth and surplus created by the sharing economy, and how to make the relevant decisions. If stakeholder capital is accumulated this way, social innovators can develop nonmarket leadership for sharing economy companies, which will most likely be challenged on their legitimacy in the future, and guide them in the direction of social innovations. (see Online Appendix III for additional examples illustrating how to use stakeholder capital for social innovation projects)

High uncertainty and high conflict: A social contract approach

In a state of high uncertainty and stakeholder conflict, a social contract approach can be effective for using artificial intelligence. A social contract solves social issues and justifies organizational or individual actions amid high uncertainty and conflict. For example, in an intense state of conflict such as ‘bellum omnium contra omnes’ (the war of all against all), a social contract is born in the form of a state (Hobbes Citation1651), or it is designed to provide benefits for the least advantaged people behind the veil of ignorance (Rawls Citation1971). Social contract theory is embedded in our history. For example, it prevailed from the seventeenth to the early nineteenth century through Hobbes, Locke, and Rousseau, when Europe experienced its dual revolutions (Hobsbawm Citation1962). The theory was created to resolve social conflict and confusion. The same happened in the U.S., where the theory formed an important basis for the Declaration of Independence during the American Revolution.

Will the technological singularity ever happen? Will artificial general intelligence be possible in the near future? How severe, then, is the existential threat for humans? Such questions address the uncertainty and conflict related to the use of artificial intelligence. What approach, then, is most appropriate for combining such controversial and uncertain technologies with social innovations, which themselves entail much ambiguity and conflict? A social contract approach gives some answers to these questions and, thus, the following proposition is formed.

Proposition 4 (social contract). The higher the Knightian uncertainty and stakeholder conflict are, the more effective the social-contract approach becomes in applying artificial intelligence to accomplish social innovations.

A social contract theory shows how an organization is established by the stakeholders amid high uncertainty and conflict and how it is justified. Therefore, it provides a solid theoretical ground for justifying the use of artificial intelligence for social innovations under high uncertainty and high conflict. The social contract theory emphasizes authority and sovereignty, which can be interpreted as nonmarket leadership in the context of social innovations. Therefore, nonmarket leadership is an important element when selecting the social contract approach for social innovations. In particular, in the rapidly changing innovation field where political authority and institutional development lag behind, social innovators should achieve their goals by quickly seeking the understanding and consent of stakeholders while pursuing social contracts, making the resulting state better than the state of nature or a void of authority.

Example #4: Humans versus machines

This study argues that a social contract theory can be used to develop social innovations and artificial intelligence in situations involving the human-machine relationship that entail high uncertainty and high stakeholder conflict. First, a social contract is important for solving externalities. This is because externalities and social costs caused by technologies can be best addressed economically when institutional support exists. Artificial intelligence not only can replace manpower but also can empower humans (Korinek and Stiglitz Citation2017). It can help humans develop the competency for solving complicated tasks and make difficult decisions more efficiently. From the perspective of companies, whose goal is to maximize profits, using artificial intelligence would be ideal regardless of its impact. However, it may not be ideal from a social perspective or for long-term sustainable economic growth. A similar case would be that emitting pollutants may be inevitable from a business perspective, but not from a social perspective. Having different perspectives on the same phenomenon also applies to the case of ruining social capital. In the end, social innovators should use artificial intelligence based on careful consideration of its benefits and costs (i.e. externalities) to society, and this is only possible if there is institutional support and a social contract.

Second, the social contract based on the complementary relationship between humans and machines contributes to profit maximization for organizations. In applying artificial intelligence, combining human and artificial intelligence can generate quite successful outcomes. There is an interesting example introduced in Zero to One (Thiel and Masters Citation2014). PayPal operates an online payment system founded in 1998. The company was making losses from transaction fraud up until the mid-2000s, and it was nearly impossible to use manpower to catch fraudulent transactions among the thousands of transactions happening per minute. Therefore, PayPal hired mathematicians and data scientists to create its own artificial intelligence technology. However, its programs were soon hacked by outsiders, so PayPal chose a hybrid method that uses both manpower and artificial intelligence to catch fraudulent transactions. The method was to let the machine first identify suspicious transactions and then have humans verify the results. It was successful and was even adopted by the FBI. Based on this experience, Peter Thiel founded another company called Palantir that mainly uses this hybrid method. The company has not only caught a number of fraudulent financial transactions but also helped shut down abusive websites, predict and investigate the causes of multiple contagious viruses, and even helped find Osama bin Laden by applying the technology to tracking terrorists. Today, the company is acclaimed for overcoming the limitations of the CIA, which heavily relies on manpower, and the NSA, which heavily relies on computers. A case like this highlights the importance of the relationship between humans and machines. Similar to PayPal and Palantir, the human-machine relationship can be utilized by lawyers, doctors, educators, and corporate managers to generate tremendous synergies in our society.
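The hybrid human-machine pattern described in this example can be sketched as a simple triage pipeline: a model scores each transaction, low-risk items are cleared automatically, and the rest are queued for human analysts. The scoring rule, thresholds, and transaction fields below are invented for illustration and do not represent PayPal’s or Palantir’s actual methods.

```python
# A minimal sketch of machine-first, human-verify fraud triage. All weights and
# thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    new_account: bool
    country_mismatch: bool

def risk_score(tx):
    """Toy risk model: higher score means more suspicious (assumed weights)."""
    score = min(tx.amount / 10_000, 1.0) * 0.5
    score += 0.3 if tx.new_account else 0.0
    score += 0.2 if tx.country_mismatch else 0.0
    return score

def triage(transactions, review_threshold=0.5):
    """Machine pass first; anything above the threshold goes to human analysts."""
    auto_cleared, for_human_review = [], []
    for tx in transactions:
        (for_human_review if risk_score(tx) >= review_threshold else auto_cleared).append(tx)
    return auto_cleared, for_human_review

txs = [Transaction(120, False, False),
       Transaction(9500, True, True),
       Transaction(4800, True, False)]
cleared, queued = triage(txs)
print(f"auto-cleared: {len(cleared)}, queued for analysts: {len(queued)}")
```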

Third, in the era of the pure machine economy, the social contract between humans and machines becomes more important. The pure machine economy has extremely high labor productivity. Even short of a pure machine economy, artificial intelligence can significantly improve labor productivity, which may increase or decrease the supply of labor. When labor productivity increases, for example, the demand for labor by companies may increase but the labor supply may decrease. This is because increased productivity and individual income lead to increased demand for leisure. The increased need for leisure would then contribute to the development of leisure-related industries and increase employment, in service industries in particular. To succeed in leisure-related industries, it is essential to have a genuine understanding of humans and businesses. This provides important implications for companies. As artificial intelligence technologies develop, it becomes more important to understand the needs of humans and society so that companies can adapt to the changes in the tastes and behaviors of the stakeholders, which brings business opportunities. If companies simply aim to replace humans with machines without a thorough understanding of these changes, significant business opportunities can be lost. Therefore, social innovators need to focus on the potential outcomes of using artificial intelligence to replace manpower and create common values between machines and human society. In other words, social innovators should prepare social contracts that benefit humans when machines replace humans, based on a genuine understanding of societal needs.

Fourth, to upgrade their business models with disruptive innovation, companies had better make social contracts with their stakeholders. If the companies do not have the capacity to build such social contracts, they should partner with social innovators to do so. The confrontation between humans and machines may jeopardize capital competitiveness. The most critical factor that determines the relationship between employment and technological development lies in how easily businesses can adopt technological advances (Baldwin and Lin Citation2002; D’Este et al. Citation2012; Galia and Legros Citation2004). When it is difficult for businesses to upgrade technologies due to institutional reasons or a lack of competency, technological development will render the existing capital obsolete and destroy employment. Therefore, to prevent advanced technologies such as artificial intelligence from destroying employment, companies must constantly upgrade their technologies and perhaps should eliminate the barriers to entry for technology ventures. If society is reluctant to allow companies to adopt disruptive innovations based on concerns over disruptive innovations per se or on stakeholder pressure, employment will likely remain vulnerable to the hostile interference of such developing technologies. More specifically, if capital loses competitiveness, companies and the economy will lose competitiveness and, as a result, employment will be destroyed. In addition, a country should secure the capital competitiveness within its economy, which also makes it important to share the surplus generated from disruptive innovations equitably. All of this explains why innovative companies that rely on sensitive technologies such as artificial intelligence should pay attention to the social contract, which social innovators can help with. (see Online Appendix IV for additional information regarding why the human-machine relationship is uncertain and controversial)

Discussion of practical implications

We offer a step-by-step guideline for the valuation of social innovations combined with various artificial intelligence techniques in Table 2. First, one must identify the state of the project from the four suggested states. Second, an appropriate approach is automatically chosen according to the state. Third, one must check whether the combination of the state and approach is appropriate for the social project. If it is not appropriate, one must identify the reasons and make modifications accordingly. Fourth, one must finally evaluate whether the chosen approach has successfully carried out the goal of the project.

Table 2. Practical guidance for the valuation of AI-based social innovations

Lastly, our study generates detailed implications for the use of artificial intelligence for social innovations with a step-by-step guideline for establishing implementation strategies at a macro level, as described in Table 3. For example, one must first evaluate the level of Knightian uncertainty for both the chosen artificial intelligence and the social innovations, and then the level of stakeholder conflict. Combining the outcomes of the first two assessments, one must find the most suitable approach to construct a macro strategy. Then, an implementation strategy must be made for the chosen approach. Lastly, if the strategies do not contribute to the successful application of the chosen artificial intelligence technology, one must search for an alternative technology and repeat the process from the beginning.

Table 3. A step-by-step guideline for implementing AI technologies for social innovations
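Read as a decision rule, the classification framework summarized in Tables 1-3 can be expressed as a small helper function. The binary high/low inputs are a simplification of the qualitative assessments described above; the mapping itself follows Propositions 1-4.

```python
# A minimal sketch of the classification framework as a decision helper: given
# assessed levels of Knightian uncertainty and stakeholder conflict, return the
# approach suggested by Propositions 1-4.
def recommend_approach(high_uncertainty: bool, high_conflict: bool) -> str:
    if not high_uncertainty and not high_conflict:
        return "agent-based modeling (ABM) approach"        # Proposition 1
    if high_uncertainty and not high_conflict:
        return "social entrepreneurship approach"           # Proposition 2
    if not high_uncertainty and high_conflict:
        return "stakeholder capital approach"               # Proposition 3
    return "social contract approach"                       # Proposition 4

# Example: a project with low uncertainty but strong stakeholder disagreement
print(recommend_approach(high_uncertainty=False, high_conflict=True))
```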

Conclusion

This study offers a general theory as well as practical guidance for using artificial intelligence for social innovations for the first time in the literature. So far, there has been no framework that helps analyze such projects or form macro-level strategies. Therefore, the classification framework we create can be used by the evaluators and participants of social innovation projects. This would give them a clear understanding of the status of their projects, enabling them to choose the most appropriate strategy to implement and preventing time and resources from being wasted.

In the future, related studies can test the propositions using different AI techniques for each case or use the framework to test actual projects. In addition, the step-by-step guidance for evaluation and implementation of social innovation projects that deploy AI technologies can facilitate communication among stakeholders based on the clear guidelines, thereby contributing to the development of social innovation projects.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. While the urgency and importance of climate change entail low conflict, proposed solutions can generate large stakeholder conflicts. We highlight the former in this example.

References

  • Aguilera, R. V., D. E. Rupp, C. A. Williams, and J. Ganapathi. 2007. Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations. Academy of Management Review 32 (3):836–63. doi:10.5465/amr.2007.25275678.
  • Ameriks, K., and D. M. Clarke. 2000. Aristotle: nicomachean ethics. Cambridge, England: Cambridge University Press.
  • Baldwin, J., and Z. Lin. 2002. Impediments to advanced technology adoption for Canadian manufacturers. Research Policy 31 (1):1–18. doi:10.1016/S0048-7333(01)00110-X.
  • Berrone, P., C. Cruz, L. R. Gomez-Mejia, and M. Larraza-Kintana. 2010. Socioemotional wealth and corporate responses to institutional pressures: Do family-controlled firms pollute less? Administrative Science Quarterly 55 (1):82–113. doi:10.2189/asqu.2010.55.1.82.
  • Blyler, M., and R. W. Coff. 2003. Dynamic capabilities, social capital, and rent appropriation: Ties that split pies. Strategic Management Journal 24 (7):677–86. doi:10.1002/smj.327.
  • Burton, R. M., B. Obel, and D. D. Håkonsson. 2015. Organizational design: A step-by-step approach. Cambridge, England: Cambridge University Press.
  • Campbell, J. L. 2007. Why would corporations behave in socially responsible ways? An institutional theory of corporate social responsibility. Academy of Management Review 32 (3):946–67. doi:10.5465/amr.2007.25275684.
  • Coff, R. W. 1999. When competitive advantage doesn’t lead to performance: The resource-based view and stakeholder bargaining power. Organization Science 10 (2):119–33. doi:10.1287/orsc.10.2.119.
  • Crane, A., G. Palazzo, L. J. Spence, and D. Matten. 2014. Contesting the value of “creating shared value.” California Management Review 56 (2):130–53. doi:10.1525/cmr.2014.56.2.130.
  • Crilly, D., M. Zollo, and M. T. Hansen. 2012. Faking it or muddling through? Understanding decoupling in response to stakeholder pressures. Academy of Management Journal 55 (6):1429–48. doi:10.5465/amj.2010.0697.
  • Cyert, R. M., and J. G. March. 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.
  • D’Este, P., S. Iammarino, M. Savona, and N. Von Tunzelmann. 2012. What hampers innovation? Revealed barriers versus deterring barriers. Research Policy 41 (2):482–88. doi:10.1016/j.respol.2011.09.008.
  • Delmas, M. A., and M. W. Toffel. 2008. Organizational responses to environmental demands: Opening the black box. Strategic Management Journal 29 (10):1027–55. doi:10.1002/smj.701.
  • Dorobantu, S., L. Nartey, and W. J. Henisz. 2013. First impressions: stakeholder networks, proactive engagement & stakeholder opinions of companies. Academy of Management Proceedings 2013 (1):17448. doi:10.5465/ambpp.2013.17448abstract.
  • Dorobantu, S., W. J. Henisz, and L. Nartey. 2017. Not all Sparks light a fire: Stakeholder and shareholder reactions to critical events in contested markets. Administrative Science Quarterly 62 (3):561–97. doi:10.1177/0001839216687743.
  • Edelman. 2018. Two-thirds of consumers worldwide now buy on beliefs. https://www.edelman.com/news-awards/two-thirds-consumers-worldwide-now-buy-beliefs
  • Friedman, B., and D. G. Hendry. 2019. Value sensitive design: Shaping technology with moral imagination. Cambridge, MA: MIT Press.
  • Galia, F., and D. Legros. 2004. Complementarities between obstacles to innovation: Evidence from France. Research Policy 33 (8):1185–99. doi:10.1016/j.respol.2004.06.004.
  • Godfrey, P. C., C. B. Merrill, and J. M. Hansen. 2009. The relationship between corporate social responsibility and shareholder value: An empirical test of the risk management hypothesis. Strategic Management Journal 30 (4):425–45. doi:10.1002/smj.750.
  • Godfrey, P. C. 2005. The relationship between corporate philanthropy and shareholder wealth: A risk management perspective. Academy of Management Review 30 (4):777–98. doi:10.5465/amr.2005.18378878.
  • Hahn, T., L. Preuss, J. Pinkse, and F. Figge. 2014. Cognitive frames in corporate sustainability: Managerial sensemaking with paradoxical and business case frames. Academy of Management Review 39 (4):463–87. doi:10.5465/amr.2012.0341.
  • Henisz, W. J., S. Dorobantu, and L. J. Nartey. 2014. Spinning gold: The financial returns to stakeholder engagement. Strategic Management Journal 35 (12):1727–48. doi:10.1002/smj.2180.
  • Hobbes, T. (1651). Leviathan (Project Gutenberg eBook of Leviathan, 2009). http://www.gutenberg.org/etext/3207
  • Hobsbawm, E. 1962. The age of revolution: Europe 1789–1848. London: Weidenfeld and Nicolson.
  • Ingrassia, L. 2020. Billion dollar brand club: how dollar shave club, warby parker, and other disruptors are remaking what we buy. New York, NY: Henry Holt and Co.
  • Jennings, N. R. 2000. On agent-based software engineering. Artificial Intelligence 117 (2):277–96. doi:10.1016/S0004-3702(99)00107-1.
  • Kane, G. C., D. Palmer, A. N. Phillips, D. Kiron, and N. Buckley. 2015. Strategy, not technology, drives digital transformation. 14 (1–25). Cambridge, MA: MIT Sloan Management Review and Deloitte University Press.
  • Kang, H.-G., W. Woo, R. M. Burton, and W. Mitchell. 2018. Constructing M&A valuation: How do merger evaluation methods differ as uncertainty and conflict vary? Journal of Organization Design 7 (1):2. doi:10.1186/s41469-017-0025-y.
  • Keynes, J. M. 1921. A treatise on probability. North Chelmsford, Massachusetts: Courier Corporation.
  • Keynes, J. M. 1937. The general theory of employment. The Quarterly Journal of Economics 51 (2):209–23. doi:10.2307/1882087.
  • Kim, S., G. Lee, and H.-G. Kang. 2021. Risk management and corporate social responsibility. Strategic Management Journal 42 (1):202–30. doi:10.1002/smj.3224.
  • Klügl, F., and A. L. Bazzan. 2012. Agent-based modeling and simulation. AI Magazine 33 (3):29–40. doi:10.1609/aimag.v33i3.2425.
  • Knight, F. H. 1921. Risk, uncertainty and profit. North Chelmsford, Massachusetts: Courier Corporation.
  • Koh, P.-S., C. Qian, and H. Wang. 2014. Firm litigation risk and the insurance value of corporate social performance. Strategic Management Journal 35 (10):1464–82. doi:10.1002/smj.2171.
  • Korinek, A., and J. E. Stiglitz. 2017. Artificial intelligence and its implications for income distribution and unemployment. Cambridge, Massachusetts: National Bureau of Economic Research.
  • Lange, D., and N. T. Washburn. 2012. Understanding attributions of corporate social irresponsibility. Academy of Management Review 37 (2):300–26. doi:10.5465/amr.2010.0522.
  • MacIntyre, A. 1981. After virtue: A study in moral theory. Notre Dame, Indiana: University of Notre Dame Press.
  • McWilliams, A., and D. Siegel. 2001. Corporate social responsibility: A theory of the firm perspective. Academy of Management Review 26 (1):117–27. doi:10.5465/amr.2001.4011987.
  • Miles, R. H. 1987. Managing the Corporate Social Environment: A Grounded Theory. Hoboken, New Jersey: Prentice Hall Direct.
  • Minor, D., and J. Morgan. 2011. CSR as reputation insurance: Primum non nocere. California Management Review 53 (3):40–59. doi:10.1525/cmr.2011.53.3.40.
  • Morrar, R., H. Arman, and S. Mousa. 2017. The fourth industrial revolution (Industry 4.0): A social innovation perspective. Technology Innovation Management Review 7 (11):12–20. doi:10.22215/timreview/1117.
  • Mulgan, G., S. Tucker, R. Ali, and B. Sanders. 2007. Social innovation: What it is, why it matters, how it can be accelerated.
  • Okhmatovskiy, I., and R. J. David. 2012. Setting your own standards: Internal corporate governance codes as a response to institutional pressure. Organization Science 23 (1):155–76. doi:10.1287/orsc.1100.0642.
  • Oliver, C. 1991. Strategic responses to institutional processes. Academy of Management Review 16 (1):145–79. doi:10.2307/258610.
  • Peng, Y., T. Finin, Y. Labrou, R. S. Cost, B. T. Chu, J. Long, W. J. Tolone, and A. Boughannam. 1999. Agent-based approach for manufacturing integration: The CIIMPLEX experience. Applied Artificial Intelligence 13 (1–2):39–63. doi:10.1080/088395199117487.
  • Porter, M. E., and M. R. Kramer. 2011. Creating shared value. Harvard Business Review, January–February 2011. https://hbr.org/2011/01/the-big-idea-creating-shared-value
  • Railsback, S. F., and V. Grimm. 2019. Agent-based and individual-based modeling: A practical introduction. Princeton, New Jersey: Princeton university press.
  • Rawls, J. 1971. A theory of justice. Cambridge, MA: Harvard university press.
  • Reijers, W., and B. Gordijn. 2019. Moving from value sensitive design to virtuous practice design. Journal of Information, Communication and Ethics in Society 17 (2):196–209.
  • Rolnick, D., P. L. Donti, L. H. Kaack, K. Kochanski, A. Lacoste, K. Sankaran, A. S. Ross, N. Milojevic-Dupont, N. Jaques, and A. Waldman-Brown (2019). Tackling climate change with machine learning. ArXiv Preprint ArXiv:1906.05433.
  • Shiu, Y.-M., and S.-L. Yang. 2017. Does engagement in corporate social responsibility provide strategic insurance-like effects? Strategic Management Journal 38 (2):455–70. doi:10.1002/smj.2494.
  • Thiel, P. A., and B. Masters. 2014. Zero to one: Notes on startups, or how to build the future. New York, NY: Currency.
  • Thompson, J. D. 1967. Organizations in action: Social science bases of administrative theory. New York, NY: McGraw-Hill.
  • Umbrello, S. 2020. Combinatory and Complementary Practices of Values and Virtues in Design: A Reply to Reijers and Gordijn. Filosofia 65:107–121. https://doi.org/10.13135/2704-8195/5236
  • Umbrello, S., and I. van de Poel. 2021. Mapping value sensitive design onto AI for social good principles. AI and Ethics 1:283–96.
  • Vallor, S. 2016. Technology and the virtues: A philosophical guide to a future worth wanting. England: Oxford University Press.
  • Waldron, T. L., C. Navis, and G. Fisher. 2013. Explaining differences in firms’ responses to activism. Academy of Management Review 38 (3):397–417. doi:10.5465/amr.2011.0466.
  • Weick, K. E. 1979. The Social Psychology of Organizing. 2nd ed. New York, NY: McGraw-Hill Humanities/Social Sciences/Languages.
  • Wylie, C. 2019. Mindf*ck: Cambridge Analytica and the Plot to Break America. New York, NY: Random House.
  • Yahya, B. M., and D. Z. Seker. 2019. Designing weather forecasting model using computational intelligence tools. Applied Artificial Intelligence 33 (2):137–51. doi:10.1080/08839514.2018.1530858.