
Data Science in Military Decision-Making: Foci and Gaps

Received 22 Nov 2023, Accepted 03 May 2024, Published online: 02 Jun 2024

ABSTRACT

In contemporary warfare, data science is crucial for the military in achieving information superiority. To gain an overview of the topic, 158 peer-reviewed articles were analysed through a semi-systematic literature review. The proportion of social science literature focusing on the risks of data science implies that policymakers are disproportionately influenced by a pessimistic view of military data science. These perceived risks, however, are hardly addressed in the formal science literature. Additionally, when the levels of war are taken into account, relatively little attention is paid to the operational level. The literature emphasises the tactical level when it comes to opportunities for military data science, whereas studies examining the risks of military data science mostly consider the strategic level. Consequently, domain-specific requirements for military strategic data science may not be expressed. Lacking such applications ultimately leads to suboptimal strategic decisions in today’s warfare.

1. Introduction

Data science and related concepts attract popular attention these days. The number of sensors that capture data on everything and everyone results in an ever-increasing amount of data. In combination with a dramatic increase in computational power, this brings opportunities for analytic purposes in a wide range of industries, for instance cancer research (Steff 2021), finance (Wen et al. 2021), and public services (Band et al. 2022). The same holds true for the military. In addition to these opportunities, decision-makers are faced with challenges, for instance regarding the disruptive effects Artificial Intelligence (AI) potentially has on the military (Hunter et al. 2022). Other challenges include: How to integrate all relevant data in the decision-making process? Which algorithms do we use and why? Are we allowed to use all available data for all purposes? In a competitive environment, however, the main challenge and one of the deciding factors between winning and losing may be the ability to process more reliable and detailed data faster than one’s competitor. Hence, the military must strive for an authoritative information position, as also acknowledged by policymakers (e.g. Ministry of Defence 2020, Department of Defense 2020).

To achieve this information superiority, it is necessary to deal with the so-called fog of war, the uncertainty that is inseparable from warfare. In the contemporary information environment, data science is crucial in achieving that superiority. In other words, data science is a prerequisite both for exploiting the opportunities of big data and for answering the challenges that come with that voluminous amount of data.

In addition to the opportunities for military data science, scholars also acknowledge the risks (e.g. Katagiri 2023; Johnson 2020c; Yarhi-Milo 2013). To the best of our knowledge, however, no literature survey on the opportunities and risks of data science for military decision-making has been conducted. To address this gap, we focus on the following research questions:

RQ1: To what extent is scholarly literature on data science in military decision-making focused on opportunities or risks?

RQ2: Does this focus differ per level of war on which the research concentrates? If so, what does this imply?

The principal goal of our study is to gain insight into the current foci and gaps of the academic body of knowledge on data science in military decision-making. Therefore, we conducted an integrative, semi-systematic literature review to provide a thorough evaluation of current research on this topic.

The remainder of this article is structured as follows. To put our study in a broader perspective, Section 2 provides a brief overview of prior research on data science in military decision-making, along with definitions of data science and of the levels of war at which military decision-making takes place. Section 2 is not intended as an in-depth analysis of the existing literature on these topics; rather, it exemplifies the topics discussed and explains why we defined the research questions as described above. Section 3 describes our research methodology. We present and discuss our main findings in Section 4. In the concluding section we summarise our key insights, provide suggestions for further research and describe the limitations of our study.

2. Data science and military decision-making introduced

As our study combines the data science domain with military decision-making at different levels of war, we provide a brief overview of related work. We do not intend to provide an in-depth analysis of the available literature on these topics, but we introduce the main concepts before diving into the methodology and the findings of our literature review.

2.1. Data science

Data science is a combination of mathematics, computer science, and business knowledge. Xu et al. define data science as “the basic theory and methodology related to the realisation of the data value chain. It uses modelling, analyzing, computing, and learning to study the conversion from data to information, from information to knowledge, and from knowledge to decision-making” (Xu et al. 2021). We emphasise the relation of data science to decision support. Data science supports and guides the extraction of information and knowledge from data via (automated) analysis, in order to improve decision-making (Provost and Fawcett 2013). Data science is thus the study of extracting value from data, where value is subject to interpretation by the decision-maker at the end of the data life cycle and extracting refers to the whole process (Wing 2019).

The emergence of big data spurred the development of data science, since the analysis of data at this scale requires computational power and new algorithms. Computer science is thus a foundation of data science (Xu et al. 2021). Data science includes AI and Machine Learning (ML), but it also encompasses, for instance, data governance and information management.

Data science is thus a field of study drawing knowledge from many other domains. Provost and Fawcett (2013) describe a successful data scientist as being “able to view business problems from a data perspective”. Data science therefore requires, among other things, the accurate formulation of business problems and includes the design of a strategy to ultimately solve or elucidate those problems with data as a fundamental resource.

2.2. Military decision-making and the levels of war

A common contemporary description of military strategy is that it is “about maintaining a balance between ends, ways, and means; about identifying objectives; and about the resources and methods available for meeting such objectives” (Freedman 2013). Strategists translate political goals into military objectives and develop an overarching game plan on how to achieve those.

The operational level of war is concerned with campaign planning: pursuing strategic goals by designing schemes of warfare, i.e. the combination of tactics (Luttwak 1980). The tactical level considers warfare on the actual battlefield, although battle has evolved over time from man-to-man fighting to the use of, for example, long-range missiles and cyber-attacks.

At the tactical level, one is more concerned with the type of technology or technique that should be used than with choosing which political objective to support (Matheny 2016); yet, tactical actions are eventually tied to those objectives (that is, when all levels are properly aligned) and can have strategic effects on their own (Liddy 2004).

2.3. Prior work on data science in a military context

Since data science supports decision-making and military decision-making is conceptually divided into three levels of war, data science is relevant to all these levels. In this section, we briefly review relevant literature on data science in military decision-making.

Although AI and ML are only a subset of data science, a plethora of literature on military AI/ML exists. Several lines of research within this body of literature stand out. First, with regard to autonomy and automation of military platforms, the role of humans in applying AI/ML is discussed (referred to as human in, on, or out of the loop) (e.g. RAND 2020, Nurkin and Rodriguez 2019, Z. Davis 2019). Other topics considered in this line of research are meaningful human control and appropriate levels of human judgement (RAND 2020, Bode and Qiao-Franco 2024). Some criticise this framing, arguing that it “puts the machine conceptually in the centre and the human at the periphery”; they argue that each phase of decision-making requires a different kind of AI/ML system (Blair et al. 2021).

Other lines of research on military AI/ML focus on how AI/ML techniques can be applied in military operations. Machine vision, speech recognition, and waveform phenomenology are examples thereof (RAND 2021a). Some foresee an “oracle-like AI” counselling human decision-makers without carrying the burden of information overload, fatigue, and other emotions (Ayoub and Payne 2016). Other examples are Nurkin and Rodriguez (2019), Pashakhanlou (2019), and Yan (2020). The use of AI/ML in the field of nuclear deterrence attracts ample attention, and here the debate about autonomy is ubiquitous. Some envision algorithms that may instigate a nuclear war (e.g. Johnson 2019b, Horowitz, Scharre, and Velez-Green 2019, Fitzpatrick 2019, Johnson 2020b, Price, Walker, and Wiley 2018).

Apart from the lines of research concerning AI/ML, the US DoD demonstrates a broader scope on data science. The US DoD Data Strategy, for example, provides guidance on data quality, standards, architecture, governance, availability, ethics, etc., and considers data a strategic asset (Department of Defense 2020). In addition, the Dutch MoD recognises the strategic relevance of an authoritative information position and declares the aspiration to be information-driven “from the strategic level to the level of individual military personnel in the field” (Ministry of Defence 2020). Notwithstanding the importance of these approaches, the policies referred to essentially constitute corporate strategy. Likewise, literature can be found on strategic HR planning (e.g. RAND 2021b) or the acquisition process (e.g. National Academies of Sciences, Engineering, and Medicine 2021). Although relevant for the military, those subjects are not military in their essence.

3. Methods

We conducted a semi-systematic literature review (Snyder 2019), resulting in a research agenda as an integrative contribution (Torraco 2005). In this section we first describe the literature search and selection process and conclude with the data extraction from the selected literature.

3.1. Literature search

We conducted an electronic search (in February 2022) through several search engines and databases: Google Scholar, ScienceDirect, Taylor and Francis journals, WorldCat.org, IEEE Publications Database, SAGE journals, ProQuest Central, SpringerLink, and Wiley Online Library. The following search terms were applied: military, strategy, strategic, strategic decision, decision support, algorithm, data science, data, artificial intelligence, level of war, and information age. To achieve a more effective and comprehensive search strategy, the Boolean operators AND/OR were used to create meaningful combinations of two to three search terms, as illustrated in the sketch below. We searched for publications since 2010, since data science emerged as a discipline in its modern form around 2011 (Song and Zhu 2016, Ahmad et al. 2022).
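
For illustration, the following sketch generates such two- and three-term AND-combinations programmatically. The term list mirrors the one above; the exact query set actually submitted to each search engine is an assumption for demonstration only.

```python
from itertools import combinations

# Search terms listed in Section 3.1
terms = [
    "military", "strategy", "strategic", "strategic decision",
    "decision support", "algorithm", "data science", "data",
    "artificial intelligence", "level of war", "information age",
]

def quote(term: str) -> str:
    # Multi-word terms are quoted so search engines treat them as phrases
    return f'"{term}"' if " " in term else term

# All two- and three-term AND-combinations of the search terms
queries = [
    " AND ".join(quote(t) for t in combo)
    for size in (2, 3)
    for combo in combinations(terms, size)
]

print(len(queries), "candidate queries, e.g.:", queries[0])
```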

At this stage of our sample selection, we applied the following inclusion criteria. Only English-language, peer-reviewed articles were included; books and non-scholarly works were therefore excluded. Although these works may provide valuable insights on the use of data science in a military context, we assume those insights can also be found in scholarly works. Conference papers were also excluded, since most conference proceedings are likely to be published as peer-reviewed articles as well. Moreover, books, non-scholarly works, and conference papers are considered so-called grey literature (Adams, Smart, and Huff 2017). Regarding our research questions, the available white literature suffices to form a coherent set of high-quality articles for our analysis.

In the last phase of our sample selection, we filtered the articles that met our inclusion criteria through a two-step process. First, based upon article titles and abstracts, we selected the articles relevant to data science in military decision-making. The inclusion criterion used was that the data science technique or method presented in the article could theoretically be of value in military decision-making. To further reduce our sample set, the second step involved reading the entire article to further assess its relevance. The availability of a full-text article was therefore another inclusion criterion. Our selection process resulted in a final set of 158 articles, published across 46 journals.

3.2. Data extraction

We identified several variables from our final set and recorded them in a spreadsheet (see Table 1). Next, we categorised the articles using these variables (see Table 2, A–E; Footnote 1). The variables journal categorisation (JC) and type of subject (T) require some additional explanation. Regarding JC, we define journals with prevalent formal science publications as formal science journals; we define all other journals as social science journals, since no articles were selected from the branch of natural sciences. Considering the variable T, we classified articles as having a military or non-military subject. For the purpose of our study, an article with a military subject is defined as having a focus on military operations. Although, for instance, the comparison of multiple candidate fighter aircraft for the Air Force is an element of strategic decision-making within the military, in essence it concerns an acquisition process. This is reinforced by the fact that strategy in this specific article primarily entails corporate strategy (Ardil 2020). This article is thus categorised as non-military. This is not to say that the very technique used (multiple criteria decision-making) cannot be of value for decision-making in military operations.
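
As a minimal, hypothetical illustration of this coding scheme (not the authors’ actual extraction sheet), the records could be represented as follows; the article keys, the level-of-war codes, and the non-military code “N” are invented for demonstration.

```python
import pandas as pd

# Illustrative extraction records following the coding described in Section 3.2:
# JC = journal categorisation (FO/SO), RM = research method (QL/QN),
# F = focus (O = opportunities, R = risks), T = type of subject (M = military,
# here illustratively N = non-military), L = level of war (only coded when T == "M").
records = pd.DataFrame([
    {"article": "example_2020a", "JC": "FO", "RM": "QN", "F": "O", "T": "M", "L": "tactical"},
    {"article": "example_2021b", "JC": "SO", "RM": "QL", "F": "R", "T": "M", "L": "strategic"},
    {"article": "example_2019c", "JC": "SO", "RM": "QL", "F": "O", "T": "N", "L": None},
])
print(records)
```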

Table 1. Data extraction of selected literature.

Table 2. Categorisation of selected articles.

4. Findings and discussion

In this section we present the findings and main insights of our literature review. First, we discuss the focus on risks and opportunities as observed in the literature. Then, we examine the differences per level of war regarding that focus. We conclude this section by addressing the promise that non-military data science literature holds for military decision-making.

To present our results we use the following (slight abuse of) notation. If X and Y (and Z) are variables and x1, x2 and y1, y2 (z1, z2) their respective values, then we denote by X ∼ (a% x1, b% x2) the distribution of X, i.e. X takes the value x1 in a% of the cases and the value x2 in b% of the cases. Distributions conditioned on a specific value of one (or more) other variable(s) are denoted by [X | Y = y & Z = z] ∼ (a% x1, b% x2), meaning that X takes the value x1 in a% of the cases and the value x2 in b% of the cases, given that Y takes the value y and that Z takes the value z.
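
A minimal sketch of how such empirical marginal and conditional distributions can be computed from a categorisation spreadsheet is given below; the three records are placeholders, not articles from the review, so the printed percentages are illustrative only.

```python
import pandas as pd

# Placeholder records coded with the variables from Section 3.2
df = pd.DataFrame({
    "JC": ["FO", "SO", "SO"],   # journal categorisation
    "RM": ["QN", "QL", "QL"],   # research method
    "F":  ["O",  "R",  "O"],    # focus: opportunities / risks
})

# X ~ (a% x1, b% x2): marginal distribution of a variable over all articles
print(df["F"].value_counts(normalize=True))            # here F ~ (67% O, 33% R)

# [X | Y = y]: distribution of X restricted to the articles with Y = y
print(df.loc[df["JC"] == "SO", "F"].value_counts(normalize=True))
```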

4.1. Risks versus opportunities in general (RQ1)

The number of articles (n = 158) in our final set is almost evenly divided over journal categorisation, JC ∼ (53% FO, 47% SO), and research method, RM ∼ (56% QL, 44% QN). More than two thirds of all articles focus on opportunities (F = O) rather than risks (F = R), i.e. F ∼ (68% O, 32% R). In social science publications (JC = SO), however, we observe the opposite: here, more than half of the articles focus on risks rather than opportunities, [F | JC = SO] ∼ (41% O, 59% R). In formal science publications (JC = FO), more than 90% of the articles focus on opportunities, i.e. [F | JC = FO] ∼ (92% O, 8% R). An explanation for this can be found in the fact that formal science journals mainly publish data scientific models and algorithms that solve a particular problem; the risks of data science are therefore less emphasised. Stressing those risks is typically a qualitative endeavour, and qualitative (QL) studies are more present in social science (SO) journals, i.e. [JC | RM = QL] ∼ (23% FO, 77% SO).

This raises the question, however, whether qualitative studies in formal and social science journals are evenly divided when it comes to the focus on either risks or opportunities of data science. They are not. We observe [F | JC = SO & RM = QL] ∼ (35% O, 65% R), i.e. 65% of the qualitative (QL) studies published in social science (SO) journals focus on the risks (F = R) of data science. By contrast, [F | JC = FO & RM = QL] ∼ (75% O, 25% R), meaning only 25% of the qualitative (QL) studies in formal science (FO) journals focus on risks (F = R).

Moreover, only one article in the latter category, i.e. [F = R; JC = FO; RM = QL], elaborates on military applications (Omohundro 2014). The implications of these observations are twofold. First, concerns regarding the risks of the application of data science voiced by ethicists, political scientists, academics in the field of security studies, etc., are typically not addressed to the formal science audience that builds and develops quantitative data science models and algorithms. Second, if the policymaking audience in the domain of politics and the military (the business knowledge part of data science in a military context) is mainly informed by publications in social science journals, an imbalance is introduced; in that case, policymakers are influenced by the predominant focus on the risks of data science.

The interest in the risks of data science in military decision-making can be partly explained by (ethical) concerns about the autonomous use of violence by platforms (e.g. Roff 2014, Tóth et al. 2022). Yet, military data science applications entail more than autonomous platforms, and military decision-making is more than the use of force.

Our general observation is that, within the academic literature, the interest in the risks of data science seems to increase when it is studied in a military context. The academic debate can be balanced with more research on the opportunities for military data science in social science journals and more publications on its risks in formal science journals. Hence, we recommend cross-disciplinary research on the pros and cons of military data science.

4.2. Risks versus opportunities per level of war (RQ2)

After analysing our entire set of articles, we presented some general observations in the previous section regarding the focus on opportunities versus risks of the application of data science. Since we mainly focus our research on data science in military decision-making, we narrow down our review in this section to the subset of 77 articles of Type military (T = M).

In Figure 1 we present the division per level of war (L) of the subset of articles with a military subject (T = M). An article is labelled as general if no specific level applies. The focus on the operational level of war is relatively low compared with the other two levels, suggesting a research gap with reference to data science applications at the military operational level. The need for more research on this level has already been pointed out by Davis, although his study is limited to the use of AI in military operations (S. I. Davis 2022).

Figure 1. Division per level of war for articles with a military topic; [L | T = M].

When we take research method (RM) and focus (F) into account, we can distil additional insights. In Figure 2 we present the level of war (L) for quantitative (QN) studies with a focus on opportunities (F = O). In Figures 3a and 3b the level of war (L) is depicted for qualitative (QL) studies with a focus on, respectively, opportunities (F = O) and risks (F = R). Indeed, no quantitative (QN) studies were selected with both a military subject (T = M) and a focus on risks (F = R). As mentioned earlier, this can be explained by the fact that stressing the risks of data science typically requires qualitative research methods.

Figure 2. Division per level of war for quantitative studies focused on opportunities of data science; [L | RM = QN & F = O].

Figure 3. (a) Division per level of war for qualitative studies focused on opportunities of data science; [L | RM = QL & F = O]. (b) Division per level of war for qualitative studies focused on risks of data science; [L | RM = QL & F = R].

From both a quantitative (QN) and a qualitative (QL) methodological perspective, most of the articles consider the tactical level while focusing on the opportunities of data science in the military domain (see Figures 2 and 3a). When the risks are emphasised, however, most of the articles consider the strategic level of war (Figure 3b). Although our analysis shows correlation rather than causality, we can draw insights from the observed distinction between the focus on opportunities at the tactical level and the focus on risks at the strategic level.

First, the risks that come with military data science specifically at the tactical level might be perceived as a neglected line of research. The focus on opportunities, however, can be explained by the fact that there are ample possibilities for tactical data science and every possible application could yield a publishable paper, while a more thematic approach suffices to examine the risks of those applications (e.g. debates on autonomy or security). Although this is also true for the strategic level, we observe a high interest in this level (relative to the tactical and operational levels) when articles focus on the risks of data science (Figure 3b).

Additionally, among the social science publications regarding data science at the strategic level, almost twice as many articles focus on risks as on opportunities (Figures 3a and 3b), a level at which the consequences are much greater. This attention for risks may be reinforced by the amount of attention paid to nuclear deterrence at the strategic level. Concentrating on major risks in a (however relevant) subfield of strategic decision-making may paralyse academics and policymakers, resulting in other possible applications being overlooked. As a consequence, domain-specific requirements for the development of potentially valuable data science applications in military strategic decision-making may not be expressed. The opportunities for strategic data science can thus also be regarded as a research gap in the current literature.

Subsequently, presuming data science enhances decision quality (Provost and Fawcett 2013), military strategic decisions will remain suboptimal as long as research on the possibilities for strategic data science is not forthcoming. Since major consequences are inherent to strategic decisions, this is detrimental for policymakers and strategic decision-makers. Data science will not prevent strategic failure (which in some cases is a euphemism for needless casualties), but when the chance of such a failure can be reduced using data science, strategic decision-makers have a moral obligation to do so. Obviously, this requires more research on how data science can aid rigorous military strategic decision-making.

Can this observed distinction between the tactical and strategic level be explained? Do we face, for instance, more risks upon implementing data science at the strategic level? As already mentioned, a characteristic of strategic decisions is that they potentially have major consequences. Theoretically, they encompass almost every aspect of conflict, implying complexity. On the one hand, it is always uncertain whether all relevant aspects are taken into account and, when a certain aspect is actually considered, available information might be inaccurate. On the other hand, it is always uncertain what consequences a certain decision may have on all different facets.

So, we argue that strategic decision-making might indeed be riskier than operational or tactical decision-making. Implementing strategic data science might therefore come with higher risks than operational or tactical data science applications. We also argue, however, that the added value of data science increases by definition when faced with an increasing amount of data, since the analysis requires computational power and modern algorithms. So, given the all-encompassing character of strategic decision-making, the need for strategic data science is undeniable. The risks of strategic data science are first and foremost related to strategic decision-making, not to data science as such.

The difference between the strategic- and tactical-level literature regarding the focus on opportunities and risks of data science thus cannot be fully explained by the risks attached to the strategic level. Another explanation is that problems at the tactical level can be more easily defined in terms of data, databases can be built and maintained more easily, less data is needed (more data to collect and analyse requires more capacity but also increases the chance of ethical issues), and data science applications might be more suitable for dual use (i.e. technology that can be used for both peaceful and military aims) without dramatic modifications to the specific military context.

So, an emphasis on the tactical level when examining military data science applications can be explained. However, this does not alter the fact that we need to explore how data science can aid military strategic and operational decision-making as well. In the next section we provide some suggestions on how to bridge this gap.

4.3. Discussion on non-military data science for military strategic decision-making

In the previous sections we found an emphasis on the strategic level in literature focusing on the risks of military data science. We seek to initiate a more balanced discourse within the literature on data science in military decision-making. Hence, in this section, we present a discussion of the opportunities for data science applications in a military strategic context.

Military strategic decision-making (MSDM), theoretically encompassing every aspect of conflict and often conducted under high pressure, is challenging. Supporting decision-makers in “the continuous trade-off between the effective acquisition of more information and cost-efficient decision making” (Voorberg et al. 2021) can thus be valuable, although the costs in a military context may be defined in terms of relative (military) power, the passing of windows of opportunity, losing morale, etc.

The use of AI in designing future scenarios (Erspamer et al. 2022) may also have value for MSDM. It may support preparation for a multipolar world order, but the technique itself may also prove useful for modelling the shifting powers of multiple belligerents in a certain conflict or theatre.

A human-interpretable text analysis method able to analyse real-world large datasets addresses ethical concerns regarding the transparency and accountability of AI (Kim, Park, and Suh 2020). Trained on military language and reports, it may support military decision-makers as well. The aspects of transparency and accountability are especially important when the use of violence comes into play (Martin 2019, Tóth et al. 2022, Roff and Danks 2018).

That MSDM is, by definition, a multi-criteria decision analysis process under uncertainty may explain why data science is not yet established as a norm in MSDM. Fortunately, there is ample research available on this as well, as presented by Durbach and Stewart (2012). Another example is a framework for strategic decision-making in forest management (Álvarez-Miranda et al. 2018). Once one understands the plethora of aspects this involves, and how the uncertainty of climate change affects this management, one cannot unsee the parallels with MSDM.
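
To make the parallel concrete, the sketch below illustrates one family of approaches surveyed by Durbach and Stewart (2012): propagating uncertainty about criterion scores through an additive multi-criteria value model by Monte Carlo simulation. The courses of action, criteria, weights, and distributions are invented for illustration and do not reproduce any cited model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical courses of action scored on three criteria (higher is better).
# Rows: alternatives; columns: e.g. effectiveness, risk exposure, sustainability.
alternatives = ["COA-A", "COA-B", "COA-C"]
mean_scores = np.array([[0.7, 0.5, 0.6],
                        [0.6, 0.8, 0.4],
                        [0.5, 0.6, 0.9]])
std_scores = 0.1 * np.ones_like(mean_scores)   # assumed score uncertainty
weights = np.array([0.5, 0.3, 0.2])            # assumed criterion weights

# Monte Carlo propagation of score uncertainty through an additive value model
draws = rng.normal(mean_scores, std_scores, size=(10_000, *mean_scores.shape))
values = draws @ weights                        # shape: (samples, alternatives)

for name, v in zip(alternatives, values.T):
    p_best = np.mean(values.argmax(axis=1) == alternatives.index(name))
    print(f"{name}: expected value {v.mean():.3f}, P(best) {p_best:.2f}")
```

Such a simulation yields not only an expected value per course of action but also the probability that each one is best, which is arguably the kind of uncertainty statement a strategic decision-maker needs.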

In order to unlock the undiscovered treasure of data science for MSDM, we should not only explore the non-military literature; bridges can also be built between the levels of war themselves. At the tactical level, for instance, literature is available on the algorithmic selection of the most suitable UAV for a certain mission (Lin and Hung 2011) or on weapon assignment under certain threats (Naeem and Masood 2010). Such algorithms may also be useful in course of action selection at other levels of war, when adapted to the specific context.
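
By way of illustration, the sketch below casts a toy weapon–target assignment as a linear sum assignment problem. This is a generic textbook formulation, not the algorithm of Lin and Hung (2011) or Naeem and Masood (2010), and the effectiveness values are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented effectiveness matrix: expected kill probability of weapon i against threat j
effectiveness = np.array([
    [0.80, 0.35, 0.10],
    [0.40, 0.75, 0.30],
    [0.20, 0.50, 0.90],
])

# Maximise total effectiveness (one weapon per threat) by minimising its negation
rows, cols = linear_sum_assignment(-effectiveness)
for w, t in zip(rows, cols):
    print(f"weapon {w} -> threat {t} (P_kill = {effectiveness[w, t]:.2f})")
print("total expected effectiveness:", effectiveness[rows, cols].sum())
```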

To conclude, it has been extensively argued in this article that the risks of applying data science in a military context should not be ignored. However, we must distinguish the risks of military operations as such from the risks of implementing data science. Data science alters decision-making, enabling faster analysis while improving the quality of decision-making by integrating more details for optimised decisions. Concerns about data integrity, attribution of responsibility, cognitive biases, etc., are equally valid for human decision-making. Furthermore, data science implementation need not replace human decision-making; it can at least inform and aid that decision-making, e.g. by combining AI and human-centred wargames (Barzashka 2023). Yet, concerns about whether humans can intervene in a timely manner when decisions have lethal consequences (M. C. Horowitz 2019) and to what extent machines are able to interpret and judge data (Goldfarb and Lindsay 2022) must be examined. Further research is required on how to build in safeguards to avoid nuclear flash wars (Price, Walker, and Wiley 2018, Altmann and Sauer 2017), and on how the programming and use of lethal autonomous weapons can be regulated (Crootof 2015, Umbrello, Torres, and Bellis 2020, Garcia 2024).

5. Conclusions

This study provides a review of the current body of knowledge on data science in military decision-making, instigating an agenda for further research in order to advance the exploitation of data science in military decision-making. In this study, we did not intend to summarise the literature under review; instead, we assessed to what extent the literature is focused on the opportunities and risks of data science and, in more detail, to what extent this focus differs per level of war. Additionally, we discussed the value of non-military data science literature for military decision-making.

We conducted an integrative, semi-systematic literature review of 158 scholarly articles to gain insight into the current foci and gaps in the academic body of knowledge on data science in military decision-making. We have categorised these articles by the type of journal in which they were published (social or formal science), their main research method (qualitative or quantitative), their focus on opportunities or risks regarding data science, and their type of subject (military or non-military). The articles with a military subject were subsequently divided per level of war the study focused on (tactical, operational or strategic). In case no specific level of war applied, the article was labelled as general. This categorisation allows for a clear view of the landscape of the reviewed literature.

Hence, the main contributions of our study, which we briefly elaborate on subsequently, are as follows:

  1. A structured analysis of the academic body of knowledge regarding its focus on the opportunities or risks of data science; to the best of our knowledge, such an analysis does not yet exist in the literature. This analysis includes a categorisation of the available data science literature in a military context per level of war.

  2. The ensuing observation that the perceived risks of military data science are hardly addressed in formal science literature.

  3. The observation of a predominant focus in social science literature on the risks of military data science.

  4. The identification of a research and knowledge gap considering the military operational level, in terms of opportunities as well as risks of data science applications.

  5. The identification of a research and knowledge gap considering the opportunities for data science at the military strategic level of war.

  6. An instigation to explore the value that non-military data science literature has when examining the opportunities for military strategic data science applications.

As mentioned, we are – to the best of our knowledge – the first to provide a structured analysis of scholarly work regarding its focus on the opportunities or risks of data science and the first to provide a categorisation of this literature per level of war, which enables deeper insights into current research and knowledge gaps. This provides us with four key insights.

First, our analysis shows that the perceived risks of data science are hardly addressed in the formal science literature. This suggests that concerns about the military application of data science do not reach the audience that can actually develop and enhance data science models and algorithms, while it seems necessary to tailor the further development of those models to the specific needs of military decision-making.

Second, and in contrast, most of the social science literature focuses on the risks of data science. Policymakers interested in military decision-making – assuming they are mainly informed by social science literature – may therefore be disproportionately influenced by a pessimistic view of the application of data science in the military domain.

We are convinced that cross-disciplinary research on both the opportunities and risks of military data science can address the observed research gaps. This may result in a more balanced academic debate allowing for the exploration of opportunities for military data science applications and for the development of technical solutions mitigating the risks involved.

Third, when we zoom in on military decision-making, we observe relatively little attention to the operational level of war compared with the other two levels, suggesting a research gap with reference to military operational data science. Researchers can further examine whether and how the specific characteristics of the military operational level influence the adoption of data science applications at this particular level. A survey among military experts facing practical decision-making situations at the operational level, for instance, may provide valuable insights.

Fourth, we observed that studies emphasising the risks of military data science are often heavily concentrated at the strategic level, while literature on the opportunities mostly considers the tactical level. Consequently, domain-specific requirements for military strategic data science applications may not be expressed. Lacking military strategic data science will, in the contemporary information environment, result in suboptimal strategic decisions. This is an ethical concern in itself. Hence, further research into military strategic data science capabilities is invaluable to the military. Interviewing practitioners at the military strategic level (e.g. officers within the Supreme Headquarters Allied Powers Europe (SHAPE) of NATO) may yield insightful understanding of possible strategic data science applications.

The final contribution of this article is that we instigated the exploration of non-military data science literature for valuable knowledge that may benefit military decision-making. This may inspire the generation of a research agenda for military strategic data science. Further research may benefit from an analytical framework enabling a structured examination of this non-military literature for applications in a military context.

As our findings show, there is a need for cross-disciplinary research on the application of data science in military decision-making. This cross-disciplinarity is twofold. First, social and formal scientists can work together to communicate and mitigate the risks of military data science and to explore its opportunities. Second, research into military decision-making and military operations can benefit from data science research projects outside the security realm. Specifically for the strategic and operational levels of war, this may require a structured research agenda.

This is not only an academic responsibility. It requires the development of future strategic leadership focused on entrepreneurship (Metz 2020). After all, data science only succeeds when mathematics, computer science, and business knowledge go hand in hand. The latter, in turn, depends on how business leaders envision their future. However, a balanced academic debate can aid the envisioning of such a future. Only if those opportunities are thoroughly examined can the risks of military data science be evaluated properly. Concerns about the diminishing role of humans in military decision-making, for instance, need to be taken seriously, but we should at least explore the possibilities to best inform military decision-making with the use of data science applications.

5.1. Limitations

First, since we conducted a cross-disciplinary literature review spanning wide-ranging domains, it is impossible to select all relevant literature. Therefore, we used a semi-systematic approach (Snyder 2019), allowing a more flexible search strategy. Although this literature search method is more flexible, we conducted a quantitative, systematic analysis of our sample set. We are convinced that the description of our research method allows for reproduction of the study.

Second, we exclusively reviewed academic literature. Future research may benefit from including non-academic sources regarding the application of data science in military decision-making.

Third, we did not include an in-depth analysis of practical applications of data science within military decision-making, although such an analysis may provide valuable insights on the opportunities and risks thereof. We limited the scope of our study to the macro level in order to gain insights into the foci and gaps in the current discourse.

To conclude, even though the categorisation of whether certain articles emphasise the risks or opportunities of data science might be considered debatable, we are convinced that a careful reading of the final set of papers results in an assessment similar to the one presented here.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Most articles do not explicitly state the focus (F), being either risks or opportunities of data science. The same holds true for the level of war variable (L). By carefully interpreting the articles, however, it was possible to determine the values of these variables.

References

  • Adams, Richard J., Palie Smart, and Anne Sigismund Huff. 2017. “Shades of Grey: Guidelines for Working with the Grey Literature in Systematic Reviews for Management and Organizational Studies.” International Journal of Management Reviews 19 (4): 432–454. https://doi.org/10.1111/ijmr.12102.
  • Ahmad, Norita, Areeba Hamid, Vian Ahmed, and Preeti Chauhan. 2022. “Data Science: Hype and Reality.” Computer 55 (2): 95–101. https://doi.org/10.1109/MC.2021.3130365.
  • Alaimo, Cristina, and Jannis Kallinikos. 2022. “Organizations Decentered: Data Objects, Technology and Knowledge.” Organization Science 33 (1): 19–37.
  • Allen, Ryan T., and Prithwiraj (Raj) Choudhury. 2022. “Algorithm-Augmented Work and Domain Experience: The Countervailing Forces of Ability and Aversion.” Organization Science 33 (1): 149–169. https://doi.org/10.1287/orsc.2021.1554.
  • Altmann, Jürgen, and Frank Sauer. 2017. “Autonomous Weapon Systems and Strategic Stability.” Survival 59 (5): 117–142. https://doi.org/10.1080/00396338.2017.1375263.
  • Álvarez-Miranda, Eduardo, Jordi Garcia-Gonzalo, Felipe Ulloa-Fierro, Andrés Weintraub, and Susana Barreiro. 2018. “A Multicriteria Optimization Model for Sustainable Forest Management Under Climate Change Uncertainty: An Application in Portugal.” European Journal of Operational Research 269 (1): 79–98. https://doi.org/10.1016/j.ejor.2017.04.052.
  • Amoroso, Daniele, and Guglielmo Tamburrini. 2021. “In Search of the ‘Human Element’: International Debates on Regulating Autonomous Weapons Systems.” The International Spectator 56 (1): 20–38. https://doi.org/10.1080/03932729.2020.1864995.
  • Aouni, Belaid, Fouad Ben Abdelaziz, and Davide La Torre. 2012. “The Stochastic Goal Programming Model: Theory and Applications.” Journal of Multi-Criteria Decision Analysis 19 (5–6): 185–200. https://doi.org/10.1002/mcda.1466.
  • Ardil, C. 2020. “A Comparative Analysis of Multiple Criteria Decision Making Analysis Methods for Strategic, Tactical, and Operational Decisions in Military Fighter Aircraft Selection.” International Journal of Aerospace and Mechanical Engineering 14 (7): 275–288.
  • Arvidsson, Viktor, Jonny Holmström, and Kalle Lyytinen. 2014. “Information Systems use as Strategy Practice: A Multi-Dimensional View of Strategic Information System Implementation and use.” Journal of Strategic Information Systems 23 (1): 45–61. https://doi.org/10.1016/j.jsis.2014.01.004.
  • Aversa, Paolo, Laure Cabantous, and Stefan Haefliger. 2018. “When Decision Support Systems Fail: Insights for Strategic Information Systems from Formula 1.” Journal of Strategic Information Systems 27 (3): 221–236. https://doi.org/10.1016/j.jsis.2018.03.002.
  • Ayoub, Kareem, and Kenneth Payne. 2016. “Strategy in the Age of Artificial Intelligence.” The Journal of Strategic Studies 39: 793–819. https://doi.org/10.1080/01402390.2015.1088838.
  • Band, Shahab S., Sina Ardabili, Mehdi Sookhak, Anthony Theodore Chronopoulos, Said Enaffar, Massoud Moslehpour, Mako Csaba, Bernat Torok, Hao-Ting Pai, and Amir Mosavi. 2022. “When Smart Cities Get Smarter via Machine Learning: An In-Depth Literature Review.” IEEE Access 10: 60985–61015. https://doi.org/10.1109/ACCESS.2022.3181718.
  • Barzashka, Ivanka. 2023. “Seeking Strategic Advantage: The Potential of Combining Artificial Intelligence and Human-Centred Wargaming.” The RUSI Journal 30: 11. https://doi.org/10.1080/03071847.2023.2282862.
  • Bayrak, Alparslan Emrah, Christopher McComb, Jonathan Cagan, and Kenneth Kotovsky. 2021. “A Strategic Decision-Making Architecture Toward Hybrid Teams for Dynamic Competitive Problems.” Decision Support Systems 144. https://doi.org/10.1016/j.dss.2020.113490.
  • Beraldi, P., A. Violi, and F. De Simone. 2011. “A Decision Support System for Strategic Asset Allocation.” Decision Support Systems 51 (3): 549–561. https://doi.org/10.1016/j.dss.2011.02.017.
  • Berger, Jean, Abdeslem Boukhtouta, Abdelhamid Benmoussa, and Ossama Kettani. 2012. “A new Mixed-Integer Linear Programming Model for Rescue Path Planning in Uncertain Adversarial Environment.” Computers and Operations Research 39 (12): 3420–3430. https://doi.org/10.1016/j.cor.2012.05.002.
  • Bhagat, Kaushal Kumar, Wei-Kai Liou, and Chun-Yen Chang. 2016. “A Cost-Effective Interactive 3D Virtual Reality System Applied to Military Live Firing Training.” Virtual Reality 20 (2): 127–140. https://doi.org/10.1007/s10055-016-0284-x.
  • Blair, Dave, Joseph O. Chapa, Scott Cuomo, and Jules Hurst. 2021. “Humans and Hardware: An Exploration of Blended Tactical Workflows Using John Boyd's OODA Loop.” In The Conduct of War in the 21st Century: Kinetic, Connected and Synthetic, edited by Robert Johnson, Martijn Kitzen, and Tim Sweijs, 93–115. Milton: Taylor & Francis Group.
  • Bode, I., and G. Qiao-Franco. 2024. “AI Geopolitics and International Relations: A Divided World Behind Contested Conceptions of Human Control. Manuscript Submitted for Publication.” In Handbook on Public Policy and Artificial Intelligence, edited by R. Paul, E. Carmel, and J. Cobbe. Cheltenham: Edward Elgar Publishing. https://findresearcher.sdu.dk/ws/portalfiles/portal/224959944/Bode_Qiao_Franco_AI_Geopolitics_2023.pdf
  • Bodrožić, Zlatko, and Paul S. Adler. 2022. “Alternative Futures for the Digital Transformation: A Macro-Level Schumpeterian Perspective.” Organization Science 33 (1): 105–125. https://doi.org/10.1287/orsc.2021.1558.
  • Brandt, Tobias, Sebastian Wagner, and Dirk Neumann. 2021. “Prescriptive Analytics in Public-Sector Decision-Making: A Framework and Insights from Charging Infrastructure Planning.” European Journal of Operational Research 291 (1): 379–393. https://doi.org/10.1016/j.ejor.2020.09.034.
  • Bravo, Mila, David Pla-Santamaria, and Ana Garcia-Bernabeu. 2010. “Portfolio Selection from Multiple Benchmarks: A Goal Programming Approach to an Actual Case.” Journal of Multi-Criteria Decision Analysis 17 (5-6): 155–166. https://doi.org/10.1002/mcda.460.
  • Caballero, William N., and Brian J. Lunday. 2020. “Robust Influence Modeling Under Structural and Parametric Uncertainty: An Afghan Counternarcotics Use Case.” Decision Support Systems 128. https://doi.org/10.1016/j.dss.2019.113161.
  • Caddell, John, Matthew Dabkowski, Patrick J. Driscoll, and Patrick DuBois. 2020. “Improving Stochastic Analysis for Tradeoffs in Multi-Criteria Value Models.” Journal of Multi-Criteria Decision Analysis 27 (5-6): 304–317. https://doi.org/10.1002/mcda.1717.
  • Cavdur, Fatih, and Asli Sebatli. 2019. “A Decision Support Tool for Allocating Temporary-Disaster-Response Facilities.” Decision Support Systems 127. https://doi.org/10.1016/j.dss.2019.113145.
  • Chen, Wenbin, Kun Fu, Jiawei Zuo, Xinwei Zheng, Tinglei Huang, and Wenjuan Ren. 2017. “Radar Emitter Classification for Large Data set Based on Weighted-Xgboost.” IET Radar, Sonar & Navigation 11 (8): 1203–1207. https://doi.org/10.1049/iet-rsn.2016.0632.
  • Chen, Haipeng, Qian Han, Sushil Jajodia, Roy Lindelauf, V. S. Subrahmanian, and Yanhai Xiong. 2020. “Disclose or Exploit? A Game-Theoretic Approach to Strategic Decision Making in Cyber-Warfare.” IEEE Systems Journal 14 (3): 3779–3790. https://doi.org/10.1109/JSYST.2020.2964985.
  • Constantiou, Ioanna, Arisa Shollo, and Morten Thanning Vendelø. 2019. “Mobilizing Intuitive Judgement During Organizational Decision Making: When Business Intelligence is not the Only Thing That Matters.” Decision Support Systems 121: 51–61. https://doi.org/10.1016/j.dss.2019.04.004.
  • Coombs, Crispin, Donald Hislop, Stanimira K. Taneva, and Sarah Barnard. 2020. “The Strategic Impacts of Intelligent Automation for Knowledge and Service Work: An Interdisciplinary Review.” Journal of Strategic Information Systems 29 (4): 101600.
  • Crootof, Rebecca. 2015. “The Killer Robots Are Here: Legal and Policy Implications.” Cardozo Law Review 36: 1837–1915.
  • Davis, Zachary. 2019. “Artificial Intelligence on the Battlefield.” PRISM (Institute for National Strategic Security, National Defense University) 8 (2): 114–131. https://www.jstor.org/stable/10.2307/26803234.
  • Davis, Steven I. 2022. “Artificial Intelligence at the Operational Level of war.” Defense & Security Analysis 38 (1): 74–90. https://doi.org/10.1080/14751798.2022.2031692.
  • Davis, Michael T., Matthew J. Robbins, and Brian J. Lunday. 2017. “Approximate Dynamic Programming for Missile Defense Interceptor Fire Control.” European Journal of Operational Research 259 (3): 873–886. https://doi.org/10.1016/j.ejor.2016.11.023.
  • Dear, Keith. 2019a. “Artificial Intelligence and Decision Making.” RUSI Journal 164 (5-6): 18–25. https://doi.org/10.1080/03071847.2019.1693801.
  • Dear, Keith. 2019b. “Will Russia Rule the World Through AI? Assessing Putin’s Rhetoric Against Russia’s Reality.” RUSI Journal 164 (5-6): 36–60. https://doi.org/10.1080/03071847.2019.1694227
  • Degaut, Marcos. 2016. “Spies and Policymakers: Intelligence in the Information Age.” Intelligence and National Security 31 (4): 509–531. https://doi.org/10.1080/02684527.2015.1017931.
  • Department of Defense. 2020. DoD Data Strategy. Washington: Department of Defense.
  • Durbach, Ian N., and Theodor J. Stewart. 2012. “Modeling Uncertainty in Multi-Criteria Decision Analysis.” European Journal of Operational Research 223 (1): 1–14. https://doi.org/10.1016/j.ejor.2012.04.038.
  • Durst, Carolin, Michael Durst, Thomas Kolonko, Andreas Neef, and Florian Greif. 2015. “A Holistic Approach to Strategic Foresight: A Foresight Support System for the German Federal Armed Forces.” Technological Forecasting & Social Change 97: 91–104. https://doi.org/10.1016/j.techfore.2014.01.005.
  • Eldridge, Christopher, Christopher Hobbs, and Matthew Moran. 2018. “Fusing Algorithms and Analysts: Open-Source Intelligence in the age of ‘Big Data’.” Intelligence and National Security 33 (3): 391–406. https://doi.org/10.1080/02684527.2017.1406677.
  • Ermağan, Umut, Barış Yıldız, and F. Sibel Salman. 2022. “A Learning Based Algorithm for Drone Routing.” Computers and Operations Research 137. https://doi.org/10.1016/j.cor.2021.105524.
  • Erspamer, Christopher, Francesca Della Torre, Giulia Massini, Guido Ferilli, Pier Luigi Sacco, and Paolo Massimo Buscema. 2022. “Global World (dis-)Order? Analyzing the Dynamic Evolution of the Micro-Structure of Multipolarism by Means of an Unsupervised Neural Network Approach.” Technological Forecasting & Social Change 175. https://doi.org/10.1016/j.techfore.2021.121351.
  • Ferrari, Jair Feldens, and Mingyuan Chen. 2020. “A Mathematical Model for Tactical Aerial Search and Rescue Fleet and Operation Planning.” International Journal of Disaster Risk Reduction 50. https://doi.org/10.1016/j.ijdrr.2020.101680.
  • Ferretti, Valentina, Irene Pluchinotta, and Alexis Tsoukiàs. 2019. “Studying the Generation of Alternatives in Public Policy Making Processes.” European Journal of Operational Research 273 (1): 353–363. https://doi.org/10.1016/j.ejor.2018.07.054.
  • Fertier, Audrey, Anne-Marie Barthe-Delanoë, Aurélie Montarnal, Sébastien Truptil, and Frédérick Bénaben. 2020. “A new Emergency Decision Support System: The Automatic Interpretation and Contextualisation of Events to Model a Crisis Situation in Real-Time.” Decision Support Systems 133. https://doi.org/10.1016/j.dss.2020.113260.
  • Fitzpatrick, Mark. 2019. “Artificial Intelligence and Nuclear Command and Control.” Survival 61 (3): 81–92. https://doi.org/10.1080/00396338.2019.1614782.
  • Frank, Aaron. 2017. “Computational Social Science and Intelligence Analysis.” Intelligence and National Security 32 (5): 579–599. https://doi.org/10.1080/02684527.2017.1310968.
  • Freedman, Lawrence. 2013. Strategy: A History. Oxford: Oxford University Press.
  • Frikha, Ahmed, and Hela Moalla. 2015. “Analytic Hierarchy Process for Multi-Sensor Data Fusion Based on Belief Function Theory.” European Journal of Operational Research 241 (1): 133–147. https://doi.org/10.1016/j.ejor.2014.08.024.
  • Frutos-Pascual, Maite, and Begoñya García Zapirain. 2017. “Review of the Use of AI Techniques in Serious Games: Decision Making and Machine Learning.” IEEE Transactions on Computational Intelligence and AI in Games 9 (2): 133–152. https://doi.org/10.1109/TCIAIG.2015.2512592.
  • Galliott, Jai. 2017. “The Limits of Robotic Solutions to Human Challenges in the Land Domain.” Defence Studies 17 (4): 327–345. https://doi.org/10.1080/14702436.2017.1333890.
  • Ganor, Boaz. 2021. “Artificial or Human: A New Era of Counterterrorism Intelligence?” Studies in Conflict & Terrorism 44 (7): 605–624. https://doi.org/10.1080/1057610X.2019.1568815.
  • García-Fernández, Luis Enrique, and Mercedes Garijo. 2010. “Modeling Strategic Decisions Using Activity Diagrams to Consider the Contribution of Dynamic Planning in the Profitability of Projects Under Uncertainty.” IEEE Transactions on Engineering Management 57 (3): 463–476. https://doi.org/10.1109/TEM.2009.2033048.
  • Garcia, Denise. 2024. “Algorithms and Decision-Making in Military Artificial Intelligence.” Global Society 38: 24–33. https://doi.org/10.1080/13600826.2023.2273484.
  • Ghasemaghaei, Maryam. 2019. “Does Data Analytics Use Improve Firm Decision Making Quality? The Role of Knowledge Sharing and Data Analytics Competency.” Decision Support Systems 120: 14–24. https://doi.org/10.1016/j.dss.2019.03.004.
  • Ghasemaghaei, Maryam, and Goran Calic. 2019. “Can big Data Improve Firm Decision Quality? The Role of Data Quality and Data Diagnosticity.” Decision Support Systems 120: 38–49. https://doi.org/10.1016/j.dss.2019.03.008.
  • Ghasemaghaei, Maryam, Sepideh Ebrahimi, and Khaled Hassanein. 2018. “Data Analytics Competency for Improving Firm Decision Making Performance.” Journal of Strategic Information Systems 27 (1): 101–113. https://doi.org/10.1016/j.jsis.2017.10.001.
  • Goldfarb, Avi, and Jon R. Lindsay. 2022. “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War.” International Security 46 (3): 7–50. https://doi.org/10.1162/isec_a_00425.
  • Grønsund, Tor, and Margunn Aanestad. 2020. “Augmenting the Algorithm: Emerging Human-in-the-Loop Work.” Journal of Strategic Information Systems 29: 2.
  • Günther, Wendy Arianne, Mohammad H. Rezazade Mehrizi, Marleen Huysman, and Frans Feldberg. 2017. “Debating big Data: A Literature Review on Realizing Value from big Data.” Journal of Strategic Information Systems 26 (3): 191–209. https://doi.org/10.1016/j.jsis.2017.07.003.
  • Gutjahr, Walter J., and Pamela C. Nolz. 2016. “Multicriteria Optimization in Humanitarian aid.” European Journal of Operational Research 252 (2): 351–366. https://doi.org/10.1016/j.ejor.2015.12.035.
  • He, Shawei. 2022. “A Time Sensitive Graph Model for Conflict Resolution with Application to International air Carbon Negotiation.” European Journal of Operational Research 302 (2): 652–670.
  • Hipel, Keith W., and Liping Fang. 2021. “The Graph Model for Conflict Resolution and Decision Support.” IEEE Transactions on Systems, Man, and Cybernetics 51 (1): 131–141. https://doi.org/10.1109/TSMC.2020.3041462.
  • Hoffman, Frank G. 2019. “Squaring Clausewitz’s Trinity in the Age of Autonomous Weapons.” Orbis 63 (1): 44–63. https://doi.org/10.1016/j.orbis.2018.12.011.
  • Holford, W. David. 2020. “An Ethical Inquiry of the Effect of Cockpit Automation on the Responsibilities of Airline Pilots: Dissonance or Meaningful Control?” Journal of Business Ethics: 1–17. https://doi.org/10.1007/s10551-020-04640-z.
  • Horowitz, Michael C. 2019. “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability.” Journal of Strategic Studies 42 (6): 764–788. https://doi.org/10.1080/01402390.2019.1621174.
  • Horowitz, Michael C., Lauren Kahn, and Casey Mahoney. 2020. “The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures?” Orbis 64 (4): 528–543. https://doi.org/10.1016/j.orbis.2020.08.003.
  • Horowitz, M. C., P. Scharre, and A. Velez-Green. 2019. “A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence.” arXiv preprint arXiv:1912.05291: 1–35. https://arxiv.org/pdf/1912.05291.
  • Hunter, Lance Y., Craig Albert, Josh Rutland, and Chris Hennigan. 2022. “The Fourth Industrial Revolution, Artificial Intelligence, and Domestic Conflict.” Global Society 37 (3): 375–396. https://doi.org/10.1080/13600826.2022.2147812.
  • Islam, Gazi. 2022. “Business Ethics and Quantification: Towards an Ethics of Numbers.” Journal of Business Ethics 176 (2): 195–211. https://doi.org/10.1007/s10551-020-04694-z.
  • Islam, Gazi, and Michelle Greenwood. 2022. “The Metrics of Ethics and the Ethics of Metrics.” Journal of Business Ethics 175 (1): 1–5. https://doi.org/10.1007/s10551-021-05004-x.
  • Jabbari, Mona, Shaya Sheikh, Meysam Rabiee, and Asil Oztekin. 2022. “A Collaborative Decision Support System for Multi-Criteria Automatic Clustering.” Decision Support Systems 153. https://doi.org/10.1016/j.dss.2021.113671.
  • Jafarzadeh, Hamed, Jalil Heidary-Dahooie, Pouria Akbari, and Alireza Qorbani. 2022. “A Project Prioritization Approach Considering Uncertainty, Reliability, Criteria Prioritization, and Robustness.” Decision Support Systems 156. https://doi.org/10.1016/j.dss.2022.113731.
  • Jaspersen, Johannes G., and Gilberto Montibeller. 2020. “On the Learning Patterns and Adaptive Behavior of Terrorist Organizations.” European Journal of Operational Research 282 (1): 221–234. https://doi.org/10.1016/j.ejor.2019.09.011.
  • Jenkins, Phillip R., Matthew J. Robbins, and Brian J. Lunday. 2021. “Approximate Dynamic Programming for the Military Aeromedical Evacuation Dispatching, Preemption-Rerouting, and Redeployment Problem.” European Journal of Operational Research 290 (1): 132–143. https://doi.org/10.1016/j.ejor.2020.08.004.
  • John-Mathews, Jean-Marie, Dominique Cardon, and Christine Balagué. 2022. “From Reality to World. A Critical Perspective on AI Fairness.” Journal of Business Ethics: 1–15. https://doi.org/10.1007/s10551-022-05055-8.
  • Johnson, James. 2019a. “Artificial Intelligence & Future Warfare: Implications for International Security.” Defense & Security Analysis 35 (2): 147–169. https://doi.org/10.1080/14751798.2019.1600800.
  • Johnson, James. 2019b. “The AI-Cyber Nexus: Implications for Military Escalation, Deterrence and Strategic Stability.” Journal of Cyber Policy 4 (3): 442–460. https://doi.org/10.1080/23738871.2019.1701693.
  • Johnson, James. 2020a. “Artificial Intelligence, Drone Swarming and Escalation Risks in Future Warfare.” The RUSI Journal 165 (2): 26–36. https://doi.org/10.1080/03071847.2020.1752026.
  • Johnson, James. 2020b. “Deterrence in the age of Artificial Intelligence & Autonomy: A Paradigm Shift in Nuclear Deterrence Theory and Practice?” Defense & Security Analysis 36 (4): 422–448. https://doi.org/10.1080/14751798.2020.1857911.
  • Johnson, James. 2020c. “Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?” Journal of Strategic Studies: 1–39. https://doi.org/10.1080/01402390.2020.1759038.
  • Jones, Matthew. 2019. “What we Talk About When we Talk About (big) Data.” Journal of Strategic Information Systems 28 (1): 3–16. https://doi.org/10.1016/j.jsis.2018.10.005.
  • Kadar, Cristina, Rudolf Maculan, and Stefan Feuerriegel. 2019. “Public Decision Support for Low Population Density Areas: An Imbalance-Aware Hyper-Ensemble for Spatio-Temporal Crime Prediction.” Decision Support Systems 119: 107–117. https://doi.org/10.1016/j.dss.2019.03.001.
  • Kania, Elsa B. 2019. “Chinese Military Innovation in the AI Revolution.” RUSI Journal 164 (5-6): 26–34. https://doi.org/10.1080/03071847.2019.1693803.
  • Katagiri, Nori. 2023. “Artificial Intelligence and Cross-Domain Warfare: Balance of Power and Unintended Escalation.” Global Society. https://doi.org/10.1080/13600826.2023.2248179.
  • Kim, Buomsoo, Jinsoo Park, and Jihae Suh. 2020. “Transparency and Accountability in AI Decision Support: Explaining and Visualizing Convolutional Neural Networks for Text Information.” Decision Support Systems 134. https://doi.org/10.1016/j.dss.2020.113302.
  • Konrad, Renata A., Andrew C. Trapp, Timothy M. Palmbach, and Jeffrey S. Blom. 2017. “Overcoming Human Trafficking via Operations Research and Analytics: Opportunities for Methods, Models, and Applications.” European Journal of Operational Research 259 (2): 733–745. https://doi.org/10.1016/j.ejor.2016.10.049.
  • Krieg, Andreas, and Jean-Marc Rickli. 2018. “Surrogate Warfare: The Art of War in the 21st Century?” Defence Studies 18 (2): 113–130. https://doi.org/10.1080/14702436.2018.1429218.
  • Laan, Corine M., Ana Isabel Barros, Richard J. Boucherie, Herman Monsuur, and Wouter Noordkamp. 2020. “Optimal Deployment for Anti-Submarine Operations with Time-Dependent Strategies.” Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 17 (4): 419–434. https://doi.org/10.1177/1548512919855435.
  • Laguir, Issam, Shivam Gupta, Indranil Bose, Rebecca Stekelorum, and Lamia Laguir. 2022. “Analytics Capabilities and Organizational Competitiveness: Unveiling the Impact of Management Control Systems and Environmental Uncertainty.” Decision Support Systems 156. https://doi.org/10.1016/j.dss.2022.113744.
  • Lawson, Ewan, and Richard Barrons. 2016. “Warfare in the Information Age.” RUSI Journal 161 (5): 20–26. https://doi.org/10.1080/03071847.2016.1253371.
  • Ledwith, Matthew C., Brandon J. Hufstetler, and Mark A. Gallagher. 2021. “Stochastic Preemptive Goal Programming to Balance Goal Achievements Under Uncertainty.” Journal of Multi-Criteria Decision Analysis 28 (1-2): 85–98. https://doi.org/10.1002/mcda.1734.
  • Leicht-Deobald, Ulrich, Thorsten Busch, Christoph Schank, Antoinette Weibel, Simon Schafheitle, Isabelle Wildhaber, and Gabriel Kasper. 2019. “The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity.” Journal of Business Ethics 160 (2): 377–392. https://doi.org/10.1007/s10551-019-04204-w.
  • Lewis, Larry. 2019. “Resolving the Battle Over Artificial Intelligence in War.” RUSI Journal 164 (5-6): 62–71. https://doi.org/10.1080/03071847.2019.1694228.
  • Li, Xiong, Wei Pu, Jiang Rong, Xian Xiao, and Xiaodong Zhao. 2022. “Terrain Visualization Information Integration in Agent-Based Military Industrial Logistics Simulation.” Journal of Industrial Information Integration 25. https://doi.org/10.1016/j.jii.2021.100260.
  • Liddy, Lynda. 2004. “The Strategic Corporal: Some Requirements in Training and Education.” Australian Army Journal 11 (2): 139–148.
  • Liesiö, Juuso, Ahti Salo, Jeffrey M. Keisler, and Alec Morton. 2021. “Portfolio Decision Analysis: Recent Developments and Future Prospects.” European Journal of Operational Research 293 (3): 811–825. https://doi.org/10.1016/j.ejor.2020.12.015.
  • Lim, Kevjn. 2016. “Big Data and Strategic Intelligence.” Intelligence and National Security 31 (4): 619–635. https://doi.org/10.1080/02684527.2015.1062321.
  • Lin, Kuo-Ping, and Kuo-Chen Hung. 2011. “An Efficient Fuzzy Weighted Average Algorithm for the Military UAV Selecting Under Group Decision-Making.” Knowledge-Based Systems 24 (6): 877–889. https://doi.org/10.1016/j.knosys.2011.04.002.
  • Lobo, Benjamin J., Donald E. Brown, Matthew S. Gerber, and Peter J. Grazaitis. 2018. “A Transient Stochastic Simulation–Optimization Model for Operational Fuel Planning in-Theater.” European Journal of Operational Research 264 (2): 637–652. https://doi.org/10.1016/j.ejor.2017.06.057.
  • Luoma, Jukka. 2016. “Model-based Organizational Decision Making: A Behavioral Lens.” European Journal of Operational Research 249 (3): 816–826. https://doi.org/10.1016/j.ejor.2015.08.039.
  • Luttwak, Edward N. 1980. “The Operational Level of War.” International Security 5 (3): 61–79. https://doi.org/10.2307/2538420.
  • Maas, Matthijs M. 2019. “How Viable is International Arms Control for Military Artificial Intelligence? Three Lessons from Nuclear Weapons.” Contemporary Security Policy 40 (3): 285–311. https://doi.org/10.1080/13523260.2019.1576464.
  • Marabelli, Marco, Sue Newell, and Valerie Handunge. 2021. “The Lifecycle of Algorithmic Decision-Making Systems: Organizational Choices and Ethical Challenges.” Journal of Strategic Information Systems 30 (3): 101683.
  • Margolis, Joshua T., Yongjia Song, and Scott J. Mason. 2022. “A Markov Decision Process Model on Dynamic Routing for Target Surveillance.” Computers and Operations Research. https://doi.org/10.1016/j.cor.2022.105699.
  • Marjanovic, Olivera, and Dubravka Cecez-Kecmanovic. 2017. “Exploring the Tension Between Transparency and Datification Effects of Open Government IS Through the Lens of Complex Adaptive Systems.” Journal of Strategic Information Systems 26 (3): 210–232. https://doi.org/10.1016/j.jsis.2017.07.001.
  • Martin, Kirsten. 2019. “Ethical Implications and Accountability of Algorithms.” Journal of Business Ethics 160 (4): 835–850. https://doi.org/10.1007/s10551-018-3921-3.
  • Martin, Kirsten, and Ari Waldman. 2022. “Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions.” Journal of Business Ethics 177: 1–18. https://doi.org/10.1007/s10551-020-04725-9.
  • Marttunen, Mika, Judit Lienert, and Valerie Belton. 2017. “Structuring Problems for Multi-Criteria Decision Analysis in Practice: A Literature Review of Method Combinations.” European Journal of Operational Research 263 (1): 1–17. https://doi.org/10.1016/j.ejor.2017.04.041.
  • Matheny, Michael R. 2016. “The Fourth Level of War.” Joint Force Quarterly 80 (1): 62–66.
  • Melançon, Andrée-Anne. 2020. “What's Wrong with Drones? Automatization and Target Selection.” Small Wars & Insurgencies 31 (4): 801–821. https://doi.org/10.1080/09592318.2020.1743486.
  • Metz, Steven. 2020. “The Future of Strategic Leadership.” The US Army War College Quarterly: Parameters 50 (2): 61–67.
  • Miller, K. C., A. Bordetsky, J. C. Mun, R. W. Maule, and A. G. Pollman. 2021. “Merging Future Knowledgebase System of Systems with Artificial Intelligence Machine Learning Engines to Maximize Reliability and Availability for Decision Support.” Military Operations Research 26 (4): 77–93. https://doi.org/10.5711/1082598326477.
  • Ministry of Defence. 2020. Defence Vision 2035: Fighting for a Safer Future. The Hague: Ministry of Defence.
  • Mora, Manuel, Francisco Cervantes-Pérez, Ovsei Gelman-Muravchik, and Guisseppi A. Forgionne. 2012. “Modeling the Strategic Process of Decision-Making Support Systems Implementations: A System Dynamics Approach Review.” IEEE Transactions on Systems, Man, and Cybernetics 42 (6): 899–912. https://doi.org/10.1109/TSMCC.2011.2171482.
  • Moreira, Catarina, Yu-Liang Chou, Mythreyi Velmurugan, Chun Ouyang, Renuka Sindhgatta, and Peter Bruza. 2021. “LINDA-BN: An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models.” Decision Support Systems 150. https://doi.org/10.1016/j.dss.2021.113561.
  • Moskowitz, Herbert, Paul Drnevich, Okan Ersoy, Kemal Altinkemer, and Alok Chaturvedi. 2011. “Using Real-Time Decision Tools to Improve Distributed Decision-Making Capabilities in High-Magnitude Crisis Situations.” Decision Sciences 42 (2): 477–493. https://doi.org/10.1111/j.1540-5915.2011.00319.x.
  • Mufalli, Frank, Rajan Batta, and Rakesh Nagi. 2012. “Simultaneous Sensor Selection and Routing of Unmanned Aerial Vehicles for Complex Mission Plans.” Computers and Operations Research 39 (11): 2787–2799. https://doi.org/10.1016/j.cor.2012.02.010.
  • Nadj, Mario, Alexander Maedche, and Christian Schieder. 2020. “The Effect of Interactive Analytical Dashboard Features on Situation Awareness and Task Performance.” Decision Support Systems 135. https://doi.org/10.1016/j.dss.2020.113322.
  • Naeem, Huma, and Asif Masood. 2010. “An Optimal Dynamic Threat Evaluation and Weapon Scheduling Technique.” Knowledge-Based Systems 23 (4): 337–342. https://doi.org/10.1016/j.knosys.2009.11.012.
  • National Academies of Sciences, Engineering, and Medicine. 2021. Empowering the Defense Acquisition Workforce to Improve Mission Outcomes Using Data Science. Washington, DC: The National Academies Press. https://doi.org/10.17226/25979.
  • Newell, Sue, and Marco Marabelli. 2015. “Strategic Opportunities (and Challenges) of Algorithmic Decision-Making: A Call for Action on the Long-Term Societal Effects of ‘Datification’.” Journal of Strategic Information Systems 24 (1): 3–14. https://doi.org/10.1016/j.jsis.2015.02.001.
  • Nieto, Yuri, Vicente García-Díaz, Carlos Montenegro, Claudio Camilo González, and Rubén González Crespo. 2019. “Usage of Machine Learning for Strategic Decision Making at Higher Educational Institutions.” IEEE Access 7: 75007–75017. https://doi.org/10.1109/ACCESS.2019.2919343.
  • Nurkin, Tate, and Stephen Rodriguez. 2019. “A Framework for Understanding Applied AI.” Atlantic Council: 22–41. http://www.jstor.com/stable/resrep20946.6.
  • Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26 (3): 303–315. https://doi.org/10.1080/0952813X.2014.895111.
  • Onderco, Michal, and Madeline Zutt. 2021. “Emerging Technology and Nuclear Security: What Does the Wisdom of the Crowd Tell us?” Contemporary Security Policy 42 (3): 286–311. https://doi.org/10.1080/13523260.2021.1928963.
  • Ormerod, R. J. 2014. “Critical Rationalism in Practice: Strategies to Manage Subjectivity in OR Investigations.” European Journal of Operational Research 235 (3): 784–797. https://doi.org/10.1016/j.ejor.2013.12.018.
  • Ormerod, Richard J., and Werner Ulrich. 2013. “Operational Research and Ethics: A Literature Review.” European Journal of Operational Research 228 (2): 291–307. https://doi.org/10.1016/j.ejor.2012.11.048.
  • Pang, Min-Seok, and Paul A. Pavlou. 2019. “On Information Technology and the Safety of Police Officers.” Decision Support Systems 127. https://doi.org/10.1016/j.dss.2019.113143.
  • Pashakhanlou, Arash Heydarian. 2019. “AI, Autonomy, and Airpower: The End of Pilots?” Defence Studies 19 (4): 337–352. https://doi.org/10.1080/14702436.2019.1676156.
  • Paul, Jomon A., and Minjiao Zhang. 2021. “Decision Support Model for Cybersecurity Risk Planning: A two-Stage Stochastic Programming Framework Featuring Firms, Government, and Attacker.” European Journal of Operational Research 291 (1): 349–364. https://doi.org/10.1016/j.ejor.2020.09.013.
  • Payne, Kenneth. 2018a. “Artificial Intelligence: A Revolution in Strategic Affairs?” Survival 60 (5): 7–32. https://doi.org/10.1080/00396338.2018.1518374.
  • Pethig, Florian, and Julia Kroenung. 2022. “Biased Humans, (Un)Biased Algorithms?” Journal of Business Ethics: 1–16. https://doi.org/10.1007/s10551-022-05071-8.
  • Phillips-Wren, Gloria, Mary Daly, and Frada Burstein. 2021. “Reconciling Business Intelligence, Analytics and Decision Support Systems: More Data, Deeper Insight.” Decision Support Systems 146. https://doi.org/10.1016/j.dss.2021.113560.
  • Phillips, Peter J., and Gabriela Pohl. 2020. “Countering Intelligence Algorithms.” RUSI Journal 165 (7): 22–32. https://doi.org/10.1080/03071847.2021.1893126.
  • Ploumis, Michail. 2022. “AI Weapon Systems in Future War Operations; Strategy, Operations and Tactics.” Comparative Strategy 41 (1): 1–18. https://doi.org/10.1080/01495933.2021.2017739.
  • Price, Matthew, Stephen Walker, and Will Wiley. 2018. “The Machine Beneath: Implications of Artificial Intelligence in Strategic Decision.” PRISM 7 (4): 92–105.
  • Provost, Foster, and Tom Fawcett. 2013. “Data Science and Its Relationship to Big Data and Data-Driven Decision Making.” Big Data 1: 51–59. https://doi.org/10.1089/big.2013.1508.
  • RAND. 2020. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. Santa Monica: RAND Corporation.
  • RAND. 2021a. Technology Innovation and the Future of Air Force Intelligence Analysis: Volume 1, Findings and Recommendations. Santa Monica: RAND Corporation.
  • RAND. 2021b. Developing an Air Force Retention Early Warning System: Concept and Initial Prototype. Santa Monica: RAND Corporation.
  • Razmak, Jamil, and Belaid Aouni. 2015. “Decision Support System and Multi-Criteria Decision Aid: A State of the Art and Perspectives.” Journal of Multi-Criteria Decision Analysis 22 (1-2): 101–117. https://doi.org/10.1002/mcda.1530.
  • Regens, James L. 2019. “Augmenting Human Cognition to Enhance Strategic, Operational and Tactical Intelligence.” Intelligence and National Security 34 (5): 673–687. https://doi.org/10.1080/02684527.2019.1579410.
  • Rocha, Clara, Luis C. Dias, and Isabel Dimas. 2013. “Multicriteria Classification with Unknown Categories: A Clustering–Sorting Approach and an Application to Conflict Management.” Journal of Multi-Criteria Decision Analysis 20 (1-2): 13–27. https://doi.org/10.1002/mcda.1476.
  • Roff, Heather M. 2014. “The Strategic Robot Problem: Lethal Autonomous Weapons in War.” Journal of Military Ethics 13 (3): 211–227. https://doi.org/10.1080/15027570.2014.975010.
  • Roff, Heather M., and David Danks. 2018. “‘Trust but Verify’: The Difficulty of Trusting Autonomous Weapons Systems.” Journal of Military Ethics 17 (1): 2–20. https://doi.org/10.1080/15027570.2018.1481907.
  • Roponen, Juho, David Ríos Insua, and Ahti Salo. 2020. “Adversarial Risk Analysis Under Partial Information.” European Journal of Operational Research 287 (1): 306–316. https://doi.org/10.1016/j.ejor.2020.04.037.
  • Saaty, Thomas L., and H. J. Zoffer. 2012. “A New Approach To The Middle East Conflict: The Analytic Hierarchy Process.” Journal of Multi-Criteria Decision Analysis 19 (5-6): 201–225. https://doi.org/10.1002/mcda.1470.
  • Sáenz-Royo, Carlos, Vicente Salas-Fumás, and Álvaro Lozano-Rojo. 2022. “Authority and Consensus in Group Decision Making with Fallible Individuals.” Decision Support Systems 153. https://doi.org/10.1016/j.dss.2021.113670.
  • Saifer, Adam, and M. Tina Dacin. 2021. “Data and Organization Studies: Aesthetics, Emotions, Discourse and our Everyday Encounters with Data.” Organization Studies. https://doi.org/10.1177/01708406211006250.
  • Sari, Onur, and Sener Celik. 2021. “Legal Evaluation of the Attacks Caused by Artificial Intelligence-Based Lethal Weapon Systems Within the Context of Rome Statute.” Computer Law & Security Review 42. https://doi.org/10.1016/j.clsr.2021.105564.
  • Schätter, Frank, Ole Hansen, Marcus Wiens, and Frank Schultmann. 2019. “A Decision Support Methodology for a Disaster-Caused Business Continuity Management.” Decision Support Systems 118: 10–20. https://doi.org/10.1016/j.dss.2018.12.006.
  • Schneider, Jacquelyn. 2019. “The Capability/Vulnerability Paradox and Military Revolutions: Implications for Computing, Cyber, and the Onset of War.” Journal of Strategic Studies 42 (6): 841–863. https://doi.org/10.1080/01402390.2019.1627209.
  • Semiz, Fatih, and Faruk Polat. 2020. “Solving the Area Coverage Problem with UAVs: A Vehicle Routing with Time Windows Variation.” Robotics and Autonomous Systems 126. https://doi.org/10.1016/j.robot.2020.103435.
  • Sharkey, Noel. 2010. “Saying ‘No!’ to Lethal Autonomous Targeting.” Journal of Military Ethics 9 (4): 369–383. https://doi.org/10.1080/15027570.2010.537903.
  • Shrestha, Yash Raj, Vivianna Fang He, Phanish Puranam, and Georg von Krogh. 2021. “Algorithm Supported Induction for Building Theory: How Can We Use Prediction Models to Theorize?” Organization Science 32 (3): 856–880. https://doi.org/10.1287/orsc.2020.1382.
  • Snyder, Hannah. 2019. “Literature Review as a Research Methodology: An Overview and Guidelines.” Journal of Business Research 104: 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039.
  • Song, Il-Yeol, and Yongjun Zhu. 2016. “Big Data and Data Science: What Should we Teach?” Expert Systems 33 (4): 364–373. https://doi.org/10.1111/exsy.12130.
  • Speigel, Ian. 2021. “Adopting and Improving a New Forecasting Paradigm.” Intelligence and National Security 36 (7): 961–977. https://doi.org/10.1080/02684527.2021.1946955.
  • Srivastava, Saurabh Ranjan, Yogesh Kumar Meena, and Girdhari Singh. 2021. “The Landscape of Soft Computing Applications for Terrorism Analysis: A Review.” Applied Soft Computing Journal 113. https://doi.org/10.1016/j.asoc.2021.107977.
  • Steff, Brittany. 2021. “Data Science Pairs with Cancer Research for Better Diagnostics, Therapies.” Purdue University, May 19. Accessed June 10, 2022. https://www.purdue.edu/newsroom/releases/2021/Q2/data-science-pairs-with-cancer-research-for-better-diagnostics,-therapies.html.
  • Sullivan, Yulia W., and Samuel Fosso Wamba. 2022. “Moral Judgments in the Age of Artificial Intelligence.” Journal of Business Ethics: 1–27. https://doi.org/10.1007/s10551-022-05053-w.
  • Sun, Xiaolei, Jianping Li, Dengsheng Wu, and Shanli Yi. 2011. “Energy Geopolitics and Chinese Strategic Decision of the Energy-Supply Security: A Multiple-Attribute Analysis.” Journal of Multi-Criteria Decision Analysis 18 (1-2): 151–160. https://doi.org/10.1002/mcda.479.
  • Tappe, Jonathan, and Fredrik Doeser. 2021. “A Machine Learning Approach to the Study of German Strategic Culture.” Contemporary Security Policy 42 (4): 450–474. https://doi.org/10.1080/13523260.2021.1992150.
  • Tavakoli, Asin, Daniel Schlagwein, and Detlef Schoder. 2017. “Open Strategy: Literature Review, re-Analysis of Cases and Conceptualisation as a Practice.” Journal of Strategic Information Systems 26 (3): 163–184. https://doi.org/10.1016/j.jsis.2017.01.003.
  • Tedjopurnomo, David Alexander, Zhifeng Bao, Baihua Zheng, Farhana Murtaza Choudhury, and A. K. Qin. 2022. “A Survey on Modern Deep Neural Network for Traffic Prediction: Trends, Methods and Challenges.” IEEE Transactions on Knowledge and Data Engineering 34 (4): 1544–1561.
  • Telkamp, Jake B., and Marc H. Anderson. 2022. “The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence.” Journal of Business Ethics: 1–16. https://doi.org/10.1007/s10551-022-05057-6.
  • Torraco, Richard J. 2005. “Writing Integrative Literature Reviews: Guidelines and Examples.” Human Resource Development Review 4 (3): 356–367. https://doi.org/10.1177/1534484305278283.
  • Tóth, Zsófia, Robert Caruana, Thorsten Gruber, and Claudia Loebbecke. 2022. “The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability.” Journal of Business Ethics: 1–22. https://doi.org/10.1007/s10551-022-05050-z.
  • Umbrello, Steven, Phil Torres, and Angelo F. De Bellis. 2020. “The Future of War: Could Lethal Autonomous Weapons Make Conflict More Ethical?” AI & Society 35 (1): 273–282. https://doi.org/10.1007/s00146-019-00879-x.
  • Valencia-Parra, Álvaro, Luisa Parody, Ángel Jesús Varela-Vaca, Ismael Caballero, and María Teresa Gómez-López. 2021. “DMN4DQ: When Data Quality Meets DMN.” Decision Support Systems 141. https://doi.org/10.1016/j.dss.2020.113450.
  • Veldhuis, Guido A., Nico M. de Reus, and Bas M.J. Keijser. 2020. “Concept Development for Comprehensive Operations Support with Modeling and Simulation.” Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 17 (1): 99–116. https://doi.org/10.1177/1548512918814407.
  • Vemprala, Naga, Charles Zhechao Liu, and Kim-Kwang Raymond Choo. 2021. “From Puzzles to Portraits: Enhancing Situation Awareness During Natural Disasters Using a Design Science Approach.” Decision Sciences: 1–21. https://doi.org/10.1111/deci.12527.
  • Verboven, Sam, Jeroen Berrevoets, Chris Wuytens, Bart Baesens, and Wouter Verbeke. 2021. “Autoencoders for Strategic Decision Support.” Decision Support Systems 150. https://doi.org/10.1016/j.dss.2020.113422.
  • Vilkkumaa, Eeva, Juuso Liesiö, Ahti Salo, and Leena Ilmola-Sheppard. 2018. “Scenario-based Portfolio Model for Building Robust and Proactive Strategies.” European Journal of Operational Research 266 (1): 205–220. https://doi.org/10.1016/j.ejor.2017.09.012.
  • Vogel, Kathleen M., Gwendolynne Reid, Christopher Kampe, and Paul Jones. 2021. “The Impact of AI on Intelligence Analysis: Tackling Issues of Collaboration, Algorithmic Transparency, Accountability, and Management.” Intelligence and National Security 36 (6): 827–848. https://doi.org/10.1080/02684527.2021.1946952.
  • Voorberg, S., R. Eshuis, W. van Jaarsveld, and G. J. van Houtum. 2021. “Decisions for Information or Information for Decisions? Optimizing Information Gathering in Decision-Intensive Processes.” Decision Support Systems 151: 113632. https://doi.org/10.1016/j.dss.2021.113632.
  • Wall, Christopher. 2021. “The (Non) Deus Ex Machina: A Realistic Assessment of Machine Learning for Countering Domestic Terrorism.” Studies in Conflict & Terrorism: 1–23. https://doi.org/10.1080/1057610X.2021.1987656.
  • Warren, Aiden, and Alek Hillas. 2020. “Friend or Frenemy? The Role of Trust in Human-Machine Teaming and Lethal Autonomous Weapons Systems.” Small Wars & Insurgencies 31 (4): 822–850. https://doi.org/10.1080/09592318.2020.1743485.
  • Wen, Chunhui, Jinhai Yang, Liu Gan, and Yang Pan. 2021. “Big Data Driven Internet of Things for Credit Evaluation and Early Warning in Finance.” Future Generation Computer Systems 124: 295–307. https://doi.org/10.1016/j.future.2021.06.003.
  • Westin, Carl, Clark Borst, and Brian Hilburn. 2016. “Strategic Conformance: Overcoming Acceptance Issues of Decision Aiding Automation?” IEEE Transactions on Human-Machine Systems 46 (1): 41–52. https://doi.org/10.1109/THMS.2015.2482480.
  • Wing, Jeannette M. 2019. “The Data Life Cycle.” Harvard Data Science Review.
  • Wright, George, George Cairns, Frances A. O’Brien, and Paul Goodwin. 2019. “Scenario Analysis to Support Decision Making in Addressing Wicked Problems: Pitfalls and Potential.” European Journal of Operational Research 278 (1): 3–19. https://doi.org/10.1016/j.ejor.2018.08.035.
  • Xu, Zongben, Niansheng Tang, Chen Xu, and Xueqi Cheng. 2021. “Data Science: Connotation, Methods, Technologies, and Development.” Data Science and Management 1 (1): 32–37.
  • Yan, Guilong. 2020. “The Impact of Artificial Intelligence on Hybrid Warfare.” Small Wars & Insurgencies 31 (4): 898–917. https://doi.org/10.1080/09592318.2019.1682908.
  • Yarhi-Milo, Keren. 2013. “In the Eye of the Beholder: How Leaders and Intelligence Communities Assess the Intentions of Adversaries.” International Security 38 (1): 7–51. https://doi.org/10.1162/ISEC_a_00128.
  • Yearsley, James M., and Jerome R. Busemeyer. 2016. “Quantum Cognition and Decision Theory: A Tutorial.” Journal of Mathematical Psychology 74: 99–116. https://doi.org/10.1016/j.jmp.2015.11.005.
  • You, Shixun, Ming Diao, Lipeng Gao, Fulong Zhang, and Huan Wang. 2020. “Target Tracking Strategy Using Deep Deterministic Policy Gradient.” Applied Soft Computing Journal 95. https://doi.org/10.1016/j.asoc.2020.106490.
  • Yuan, Hua, Jie Zheng, Qiongwei Ye, Yu Qian, and Yan Zhang. 2021. “Improving Fake News Detection with Domain-Adversarial and Graph-Attention Neural Network.” Decision Support Systems 151: 113633. https://doi.org/10.1016/j.dss.2021.113633.
  • Zeleny, Milan. 2011. “Multiple Criteria Decision Making (MCDM): From Paradigm Lost to Paradigm Regained?” Journal of Multi-Criteria Decision Analysis 18 (1-2): 77–89. https://doi.org/10.1002/mcda.473.
  • Zhdanov, Dmitry, Sudip Bhattacharjee, and Mikhail A. Bragin. 2022. “Incorporating FAT and Privacy Aware AI Modeling Approaches Into Business Decision Making Frameworks.” Decision Support Systems 155. https://doi.org/10.1016/j.dss.2021.113715.