Time to Revisit Sumantra Ghoshal’s Legacy: One AI Algorithm at a Time

What is the role of a corporation? Is shareholder value maximization the prime goal of a corporation? Sumantra Ghoshal raised a fundamental question: “We also know that the value a company creates is produced through a combination of resources contributed by different constituencies. Employees, including managers, contribute their human capital, for example, while shareholders contribute financial capital. If the value creation is achieved by combining the resources of both employees and shareholders, why should the value distribution favor only the latter? Why must the mainstream of our theory be premised on maximizing the returns to just one of these various contributors?” (Ghoshal, 2005, p. 80). The question becomes even sharper in the context of a global organization, which must grapple with another tension, local vs. global: how much should it share with local communities and local employees?

That leads to our next question: what should be the goal of an AI algorithm? Should an AI algorithm’s sole purpose be to optimize efficiency, maximize profit, or maximize revenue? Should an algorithm work only to maximize shareholder wealth? These questions matter; otherwise, our AI algorithms will be relentlessly, and even more obsessively, directed toward unilateral goals driven by shareholder wealth maximization.

There are many stakeholders and many forms of capital. It is the theoretical lens we choose that justifies placing one type of stakeholder, and one type of capital, above the others. To examine the relative importance of different stakeholders and different forms of capital (Ghoshal, 2005; Nahapiet & Ghoshal, 1998), financial and social to name two, we need to understand what a good theory is and how to distinguish a good theory from a bad one. Ghoshal (2005) suggests that a theory’s main function is to explain the underlying phenomenon. A good theory is falsifiable, and a theory holds until it is displaced by a better one. Theory generation in social science, Ghoshal suggests, is a little different from that in natural science, because in social science a wrong theory can masquerade as a right one. Social science deals with humans, who react differently depending on how you “theorize” about and treat them.

Ghoshal (2005) explains the difference between natural science theories and social science theories. According to him, if a natural science theory is wrong, the underlying natural phenomenon does not change to match the flawed theory. When people believed that the sun goes around the earth, this did nothing to the sun’s behavior; the sun kept doing what it had always done. But when a flawed social science theory is advocated, it changes human behavior to match the flawed theoretical prediction. When transaction cost theory suggests that employees cannot be trusted and hence need to be monitored and controlled, it erodes trust and creates the very circumstances in which employees must be closely monitored and controlled. Such bad theories create a self-fulfilling prophecy by breeding distrust and lowering intrinsic motivation: “Surveillants come to distrust their targets as a result of their own surveillance and targets in fact become unmotivated and untrustworthy. The target is now demonstrably untrustworthy and requires more intensive surveillance, and the increased surveillance further damages the target. Trust and trustworthiness both deteriorate” (Enzle & Anderson, 1993; Ghoshal, 2005, p. 85). Ghoshal places agency theory, social network analysis, game theory, and negotiation theory in the same category of bad management theories: bad because they create a self-fulfilling prophecy by eroding trust and lowering intrinsic motivation.
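
The feedback loop Ghoshal describes can be made vivid with a toy simulation. The sketch below is purely illustrative: the coupling constants and starting values are arbitrary assumptions, chosen only to show how surveillance, distrust, and lost motivation reinforce one another once the spiral starts.

```python
# Toy simulation of the surveillance-trust spiral described above.
# All constants are illustrative assumptions, not empirical estimates.

def simulate_spiral(steps=10, trust=0.9, motivation=0.9, surveillance=0.2):
    """Each round: surveillance erodes trust and intrinsic motivation;
    lower trust, in turn, invites more surveillance."""
    history = []
    for _ in range(steps):
        trust = max(0.0, trust - 0.3 * surveillance)
        motivation = max(0.0, motivation - 0.2 * surveillance)
        # Managers respond to the (now lower) trust with more monitoring.
        surveillance = min(1.0, surveillance + 0.4 * (1.0 - trust))
        history.append((trust, motivation, surveillance))
    return history

for t, m, s in simulate_spiral():
    print(f"trust={t:.2f}  motivation={m:.2f}  surveillance={s:.2f}")
```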

Before we revisit the assumptions behind our algorithms, we need to pause and revisit the assumptions behind our management theories first. The impact of these theories is not trivial. These bad management theories, according to Ghoshal (2005), produce the archetypal manager who is a “ruthlessly hard-driving, strictly top-down, command-and-control focused, shareholder-value-obsessed, win-at-any-cost business leader” (p. 85). Ghoshal (2005) argues that our management theories share several characteristics: first, they are not empirically supported; second, they are based on partial analysis and on unrealistic, biased assumptions; and third, despite these facts, they are widely used to drive our research and business decision making. The state of affairs in bad management theories closely parallels the state of affairs in our AI algorithms. Many AI algorithms are based on flawed assumptions, their inaccuracy and non-representativeness hidden behind their opaque nature; despite these facts, they are highly scalable, are used universally, and cause universal harm (Eubanks, 2018; O’Neil, 2016). Ghoshal (2005) cautions that he is not worried about the flawed assumptions behind these theories so much as about the fact that “these theories can accurately [owing to the self-fulfilling prophecy] predict the outcomes” (p. 80).

Before we invest further in flawed AI and let it guide us, we need to pause and understand how we should guide it instead, before time runs out. We need to reflect on whether we are building a flawed executive layer on top of our flawed guiding theories. These AI executioners will not pause; they will carry out their task with utmost precision, relentlessly, and with exactly the same human biases that we are not even aware exist in our decision-making processes. This understanding is important for learning the how, what, when, where, and why of AI development and human-AI coexistence. I therefore discuss the five pillars of theory building in the context of AI.

  1. How can we make AI more explainable and transparent? In this category, future research must examine questions such as: how can we measure the impact of low AI transparency on global business practices and governmental services? How can AI algorithm transparency be measured and improved? There is already some traction in this direction, for instance through the efforts of DARPA and companies like Capital One, but we need more systematic and dedicated research in this area. Louis D. Brandeis, the former US Supreme Court Justice, aptly observed that sunlight is said to be the best of disinfectants. Making AI more explainable is an important step toward making AI more accountable (a minimal measurement sketch follows this list), and it will also help answer our next big question.

  2. What variables and data do we want the AI to use as it learns about us, and in what context? Research is needed to understand how our AI is being developed, how it generates and uses training/test data, and whether that data is relevant and appropriate. Research should also examine to what extent our AI algorithms use the correct variables, as opposed to easy, convenient, and only remotely related proxies (a simple screening sketch follows the list). Another question is whether these algorithms use the right variables for the right job, and whether they use opted-in data for the purpose it was sanctioned or merely take user data for granted. Research can help us understand when AI developed in one part of the world, on one population, is safe to use in another part of the world on a different population. We also need research to identify the barriers to implementing transparent and responsible AI. This leads to our next important question, about the accuracy of AI algorithms.

  3. When can we allow AI to take over control? Can we allow AI to be used if it is less than perfectly accurate, especially considering that humans blindly rely on the recommendations provided by AI (Eubanks, 2018)? Research should examine the technical factors, along with a host of others, such as behavioral, managerial, and governance factors, that could help improve the accuracy of AI models. Research should also examine whether we need different accuracy measures, such as ones that minimize overall “cost” (Bansal, Sinha, & Zhao, 2008) rather than directionless squared errors; a sketch of this idea follows the list.

  4. Where do we set the geographical boundaries on AI algorithms? AI has global implications, so our approach to handling AI mishaps should be global as well. Events like Facebook/Cambridge Analytica have global implications. Data breaches affect users in more than one country, yet data privacy protection laws vary from country to country. The UN Conference on Trade and Development (UNCTAD, 2019) reports that about a third of the countries in the world have no data privacy legislation on their books. We need research into what global laws can be created, and how, to provide a minimum level of privacy to global citizens. And lastly, we need to revisit Sumantra Ghoshal’s legacy at every step, but most importantly to answer the why.

  5. Why do we want AI to take control in the first place: to maximize wealth for just one stakeholder, or to maximize wellbeing for all stakeholders? Research is needed to determine whether alternative AI optimization objectives, beyond shareholder wealth maximization, can increase the size of the overall pie (a toy formulation appears below).
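
To make item 1 concrete: one crude way to quantify explainability is to fit an interpretable surrogate model to a black box and report how faithfully the surrogate reproduces the black box’s predictions. The sketch below is a minimal illustration using scikit-learn; treating surrogate fidelity as a transparency score is an assumption of this sketch, not an established standard.

```python
# A minimal sketch: surrogate-model fidelity as one rough, assumed
# proxy for how "explainable" a black-box model is.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow, human-readable tree to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")
```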
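
For item 2, a first-pass screen for “remotely related proxies” might simply flag features that correlate strongly with a protected attribute. The sketch below is deliberately simple; the column names, the toy data, and the 0.5 threshold are all assumptions, and real proxy detection would require far more care.

```python
# A minimal sketch of proxy screening: flag features that are highly
# correlated with a protected attribute and may act as hidden proxies.
import pandas as pd

def flag_proxies(df: pd.DataFrame, protected: str, threshold: float = 0.5):
    """Return features whose absolute correlation with the protected
    attribute exceeds the threshold (an assumed cutoff)."""
    corr = df.corr(numeric_only=True)[protected].drop(protected)
    return corr[corr.abs() > threshold].sort_values(key=abs, ascending=False)

# Hypothetical data: 'zip_income' may proxy for the protected attribute.
df = pd.DataFrame({
    "protected_attr": [0, 0, 1, 1, 0, 1, 0, 1],
    "zip_income":     [80, 75, 30, 25, 70, 35, 85, 28],
    "years_on_job":   [3, 7, 4, 6, 2, 8, 5, 1],
})
print(flag_proxies(df, "protected_attr"))
```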
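
Item 3’s contrast between directionless squared errors and overall “cost” can also be made concrete. In settings such as loan charge-off forecasting (Bansal, Sinha, & Zhao, 2008), over- and under-prediction carry very different business costs, yet mean squared error treats them identically. A minimal sketch, assuming an arbitrary 5:1 cost ratio purely for illustration:

```python
import numpy as np

def mse(y_true, y_pred):
    """Standard squared error: blind to the direction of the mistake."""
    return np.mean((y_true - y_pred) ** 2)

def asymmetric_cost(y_true, y_pred, under_cost=5.0, over_cost=1.0):
    """Penalize under-prediction (e.g., underestimating charge-offs)
    more heavily than over-prediction. The 5:1 ratio is an assumption."""
    err = y_true - y_pred
    return np.mean(np.where(err > 0, under_cost * err**2, over_cost * err**2))

y_true = np.array([100.0, 120.0, 80.0])
optimistic = np.array([80.0, 100.0, 60.0])     # under-predicts everywhere
pessimistic = np.array([120.0, 140.0, 100.0])  # over-predicts everywhere

# Identical under MSE, very different once costs are directional.
print(mse(y_true, optimistic), mse(y_true, pessimistic))    # 400.0 400.0
print(asymmetric_cost(y_true, optimistic),                  # 2000.0
      asymmetric_cost(y_true, pessimistic))                 # 400.0
```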
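
Finally, item 5 can be posed as a question about the objective function itself. A minimal sketch, assuming a toy setting in which a fixed surplus is split among three stakeholders: maximizing a weighted welfare objective yields a balanced split, whereas a shareholder-only objective drives the allocation to a corner. The log utilities and equal weights are illustrative assumptions, not a policy recommendation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy allocation problem: split a surplus of 1.0 among three stakeholders.
# Diminishing-returns (log) utilities and equal weights are assumptions.
WEIGHTS = np.array([1.0, 1.0, 1.0])  # shareholders, employees, community

def neg_welfare(alloc):
    return -np.sum(WEIGHTS * np.log(alloc + 1e-9))

constraint = {"type": "eq", "fun": lambda a: a.sum() - 1.0}
bounds = [(0.0, 1.0)] * 3

result = minimize(neg_welfare, x0=np.array([0.4, 0.3, 0.3]),
                  bounds=bounds, constraints=constraint)
print("Welfare-maximizing split:", result.x.round(3))  # ~[0.333, 0.333, 0.333]

# By contrast, maximizing only alloc[0] under the same constraint gives
# the corner solution [1, 0, 0]: all surplus to a single stakeholder.
```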

These questions are all the more important in the context of the future world order (Lee, 2018). Data-driven economies will lead to better AI; better AI will generate more data; and this cycle will allow data-rich economies to get richer at the expense of data-poor economies, just as it will reward data-rich companies over data-poor ones. Lee (2018) argues in his book that AI engines will produce both global inequality (some countries far richer than others) and corporate inequality (some companies far richer than others). Inequality is not only an economic phenomenon; it has non-economic consequences as well. Virginia Eubanks (2018) and Cathy O’Neil (2016) rightfully raise the issue of the collateral damage caused by these ruthless, single-minded AI algorithms, which are based on biased and flawed assumptions and on tainted, unrepresentative data.

In the final analysis, before we start examining the assumptions behind our AI algorithms, we need to revisit Sumantra Ghoshal’s guiding legacy and check the assumptions of our theories first. Otherwise, we would simply be treating the symptoms while the cause lies elsewhere.

Notes on contributors

Gaurav Bansal

Gaurav Bansal is the Frederick E. Baer Professor in Business and Professor of MIS/Statistics at the Austin E. Cofrin School of Business at UW-Green Bay. He is the Founding Academic Director and former chair of the Master of Science in Data Science program at UW-Green Bay. He currently serves as senior editor (SE) for the Journal of Information Technology Case and Application Research and as associate editor (AE) for the Journal of Midwest AIS. Before starting his academic career, he worked as a Quality Assurance Engineer for General Motors India (1998-2000) and Daewoo Motors India (1996-1998).

References

  • Bansal, G., Sinha, A. P., & Zhao, H. (2008). Tuning data mining methods for cost-sensitive regression: A study in loan charge-off forecasting. Journal of Management Information Systems, 25(3), 315–336. doi:10.2753/MIS0742-1222250309
  • Enzle, M. E., & Anderson, S. C. (1993). Surveillant intentions and intrinsic motivation. Journal of Personality and Social Psychology, 64(2), 257. doi:10.1037/0022-3514.64.2.257
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York, NY: St. Martin’s Press.
  • Ghoshal, S. (2005). Bad management theories are destroying good management practices. Academy of Management Learning & Education, 4(1), 75–91. doi:10.5465/amle.2005.16132558
  • Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. New York, NY: Houghton Mifflin Harcourt.
  • Nahapiet, J., & Ghoshal, S. (1998). Social capital, intellectual capital, and the organizational advantage. Academy of Management Review, 23(2), 242–266. doi:10.5465/amr.1998.533225
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York, NY: Crown Publishing Group.
  • UNCTAD. (2019, May 29). Data protection and privacy legislation worldwide. Retrieved from https://unctad.org/en/Pages/DTL/STI_and_ICTs/ICT4D-Legislation/eCom-Data-Protection-Laws.aspx
