
Establishing and theorising data analytics governance: a descriptive framework and a VSM-based view

Pages 101-122 | Received 27 Nov 2020, Accepted 07 Jul 2021, Published online: 16 Jul 2021

ABSTRACT

The rise of big data has led to many new opportunities for organisations to create value from data. However, an increasing dependence on data also poses many challenges for organisations. To overcome these challenges, organisations have to establish data analytics governance. Leading IT and information governance literature shows that governance can be implemented through mechanisms. The data analytics literature, however, offers little detail on specific governance mechanisms. Hence, there is a need to identify and describe specific data analytics governance mechanisms. To this end, a preliminary framework based on literature was developed and validated using a multiple case study design. This resulted in an extended descriptive framework that can aid managers in implementing data analytics governance. Furthermore, we draw on viable system model (VSM) theory to make a theoretical contribution by discussing how data analytics governance can continue to fulfil its purpose of creating (business) value from data.

1. Introduction

The rise of big data technologies and advanced data analytics tools has led to new opportunities for organisations to create value from data. For instance, it can help organisations to improve decision-making, or it can enable the creation of smart services to advance their service offerings (Davenport et al., Citation2012; Grover et al., Citation2018). Consequently, organisations are allocating an increasing number of resources to data analytics activities in an attempt to create a competitive advantage (McAfee et al., Citation2012; Mikalef et al., Citation2017).

Despite the fact that data analytics provides organisations with great opportunities, the increasing dependence on data also poses many challenges for organisations in managing data analytics. These challenges are more managerial than technological in nature, as demonstrated by two independent surveys amongst executives and professionals (Lavalle et al., Citation2011; Wegener & Sinha, Citation2013). An example of such a managerial challenge is achieving alignment between management and data analytics practitioners. While management often aims for a quick return on investment, data analytics practitioners aim for accurate results (Yamada & Peran, Citation2018). Another example involves the distribution of data analytics activities across organisational units, which potentially leads to the creation of silos. Such fragmentation prevents the organisation from realising the full potential of its data analytics activities (Avery & Cheek, Citation2015). Furthermore, the ethical use of data is a challenge for many organisations (Vidgen et al., Citation2017). Although analysing highly sensitive data, such as healthcare data, might seem attractive for solving certain business issues or generating business value, ethical considerations arise when doing so could potentially breach a person's privacy (Günther et al., Citation2017).

In order to address such challenges, organisations have to govern their data analytics activities (Gröger, Citation2018). Currently, academic research on data analytics governance is mainly limited to addressing the need for an effective data analytics governance framework (Avery & Cheek, Citation2015; Espinosa & Armour, Citation2016; Gröger, Citation2018; Grover et al., Citation2018). Although some data analytics governance frameworks have been described in the practitioner’s literature, these frameworks lack sufficient empirical validation (Oestreich, Citation2016). Therefore, our research aims to increase understanding of how organisations can systematically organise and implement data analytics governance, by answering the following research question: How can organisations realise viable governance over their data analytics activities?

To answer this question, we first conducted a literature review to develop a preliminary data analytics governance framework, consisting of data analytics governance mechanisms (i.e., structures, processes, and relational mechanisms). Subsequently, we applied a multiple case study approach to validate and extend this framework. In total, we conducted three case studies and collected qualitative data from 21 interviews. To analyse these data, we applied a combined deductive and inductive coding approach. This resulted in a descriptive framework of data analytics governance mechanisms. Finally, we draw on the Viable System Model (VSM) as a theoretical lens. While a descriptive data analytics governance framework based on a typology of governance mechanisms provides useful insights on the mechanisms to use for implementing data analytics governance, it does not provide insights on the interrelatedness of mechanisms in a holistic data analytics governance approach. Furthermore, leveraging the VSM allows for a theoretical discussion on the viability of a system, i.e., why and how data analytics governance, via specific data analytics governance mechanisms, can continue to fulfil its purpose of creating (business) value from data, through enabling a more sustainable execution of data analytics activities within the organisation. More specifically, by building on the VSM’s “essential elements of organisation”, i.e., its five systemic functions, insights are provided on the key systemic preconditions for viable data analytics governance. The instantiation of these systemic functions, to ensure that these systemic preconditions are met, happens through the implementation of specific data analytics governance mechanisms (such as the ones proposed in our descriptive framework).
In summary, our VSM-based reflection provides additional insights on the systemic aspects of data analytics governance that are necessary for its viability, and provides specific illustrations of the instantiation of these aspects by means of the specific data analytics governance mechanisms that were identified in the context of our descriptive framework.

The remainder of this paper is structured as follows. Section 2 presents the theoretical background and the preliminary data analytics governance framework. In the next section, the VSM is introduced and discussed. Then, section 4 describes the method of our research. Thereafter, section 5 presents the research results including our descriptive data analytics governance framework. Finally, a discussion and a conclusion are presented in section 6, using VSM to discuss the viability of data analytics governance based on our framework and to give directions for future research.

2. Theoretical background

This section starts with presenting the main concept in this research: data analytics. Then, it continues by explaining what governance means for data analytics, culminating in the preliminary data analytics governance framework.

2.1. Data analytics

Data analytics is defined as the “realization of business objectives through reporting of data to analyze trends, creating predictive models to foresee future problems and opportunities and analyzing/optimizing business processes to enhance organizational performance” (Delen & Demirkan, Citation2013, p. 361). Data analytics techniques play a central role in data analytics and draw upon different disciplines including software engineering, statistics, and machine learning (Lavalle et al., Citation2011). The discipline of data analytics is considered more advanced than traditional reporting, the latter being mainly descriptive in nature. Conversely, the outcome of data analytics is more predictive and prescriptive in nature, and as such concerned with what will or should happen (Abbasi et al., Citation2016; Kiron et al., Citation2011). Amongst academics and practitioners, different terms are used interchangeably for data analytics, including: data mining, big data analytics, business analytics, knowledge discovery, and data science. In essence, all these terms refer to an activity involving analysis and exploration of data to find new and interesting patterns in data to improve decision making (Davenport, Citation2006). Some of these terms were more popular in the past, e.g., data mining and knowledge discovery. Some accentuate a specific focus or application, e.g., big data analytics and business analytics. Finally, data science is considered a broader concept that refers to the scientific discipline that studies and advances data analytics.

An important development that fuelled interest and investments in data analytics is the advent of big data (Watson, Citation2014). New technologies and smart devices contributed to the generation of data characterised by high volume, variety and velocity, also referred to as big data (Chen et al., Citation2012). While this made the tools and techniques used on a dataset critical factors of attention (Ward & Barker, Citation2013), it also provided organisations with unrivalled potential in terms of what they could do with their data. Consequently, different types of organisations have been able to develop their own data analytics-based improvements to remain competitive (Davenport, Citation2013). Furthermore, data analytics can influence organisational performance directly by improving efficiency, coordination or decision making, but also indirectly by improving the image and reputation of the organisation (Grover et al., Citation2018).

To effectively use big data, organisations have to overcome challenges at different organisational levels in order to create social or economic value (Günther et al., Citation2017). In that context, one stream of research focused on process models and methodologies, which provide guidelines for conducting data analytics projects. These research efforts started in the late 1990s and have by now resulted in an abundance of proposed process models and methodologies (Baijens & Helms, Citation2019; Mariscal et al., Citation2010). The most well-known model, CRISP-DM, was developed by a consortium consisting of industry and academic representatives (Chapman et al., Citation2000). This model comprises the following steps: business understanding, data understanding, data preparation, modelling, evaluation and deployment. Despite the fact that process models and methodologies often provide detailed descriptions, they do not provide an answer to all managerial and cultural barriers related to data analytics. Ultimately, organisations strive to overcome these barriers whilst establishing a big data analytics capability. In order to achieve this, governance is needed to apply policies and give strategic direction to data analytics activities (Mikalef et al., Citation2017).
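For readers unfamiliar with how the CRISP-DM phases chain together in practice, the sequence above can be sketched as a minimal pipeline. This is an illustrative sketch only: CRISP-DM prescribes phases and tasks, not code, and all function names and the toy churn dataset below are hypothetical.

```python
# Illustrative sketch of the six CRISP-DM phases as a linear pipeline.
# The dataset and the "model" are deliberately trivial.

def business_understanding():
    # Phase 1: define the business objective the analysis should serve.
    return {"objective": "reduce customer churn"}

def data_understanding(raw):
    # Phase 2: explore the raw data (size, available fields).
    return {"rows": len(raw), "fields": sorted(raw[0])}

def data_preparation(raw):
    # Phase 3: clean and transform; here, drop records with missing values.
    return [r for r in raw if all(v is not None for v in r.values())]

def modelling(prepared):
    # Phase 4: fit a toy model: predict churn when usage is below the mean.
    threshold = sum(r["usage"] for r in prepared) / len(prepared)
    return lambda r: r["usage"] < threshold

def evaluation(model, prepared):
    # Phase 5: measure accuracy against known outcomes.
    correct = sum(model(r) == r["churned"] for r in prepared)
    return correct / len(prepared)

def deployment(model, accuracy, goal):
    # Phase 6: deploy only if the model meets a minimum quality bar.
    return {"deployed": accuracy >= 0.5, "objective": goal["objective"]}

if __name__ == "__main__":
    raw = [
        {"usage": 10, "churned": True},
        {"usage": 90, "churned": False},
        {"usage": None, "churned": True},
        {"usage": 20, "churned": True},
    ]
    goal = business_understanding()
    profile = data_understanding(raw)
    prepared = data_preparation(raw)
    model = modelling(prepared)
    accuracy = evaluation(model, prepared)
    print(deployment(model, accuracy, goal))
```

In a real project the phases are iterative rather than strictly linear (e.g., evaluation results routinely send teams back to data preparation), which is precisely the gap that governance mechanisms, rather than the process model itself, must address.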

2.2. Data analytics governance

In general, (corporate) governance refers to the rules and practices by which the board of directors ensures strategies are in place, monitored, and achieved. Governance initially starts at the corporate level and covers a variety of organisational issues to ensure that investments and activities are aligned with the firm’s strategy (Rau, Citation2004). IT governance is an integral part of corporate governance that supports organisations in achieving control over their present and future IT use. By implementing governance mechanisms, IT governance enables business and IT stakeholders to fulfil their responsibilities, contributing to the alignment of business and IT and to ensuring and maintaining business value through the use of IT (De Haes et al., Citation2020).

Traditionally, the focus of IT governance has been on IT artefacts in general (De Haes & Van Grembergen, Citation2009). However, the continuous adoption and growth of digital technology within organisations impacts the governance of IT, and reinforces the need to revisit IT governance research to improve organisational agility and provide a more inclusive view of the IT artefact (Gregory et al., Citation2018). Thus, scholars have also increasingly focused on other, more specific, issues such as the governance of cloud computing (e.g., Choudhary & Vithayathil, Citation2013; Vithayathil, Citation2018; Winkler & Brown, Citation2013), the governance of service-oriented architectures (Joachim et al., Citation2013), and the governance of information (Tallon et al., Citation2013).

Similarly, the increasing adoption of data analytics for value creation led to a new endeavour to focus on the governance of data analytics activities. While prior governance research focused on IT artefacts in general (i.e., IT governance) (De Haes & Van Grembergen, Citation2009), or the content of these artefacts (i.e., information governance) (Tallon et al., Citation2013), data analytics governance focuses on the transformation of IT artefact content. It aims at establishing structures, policies and controls to coordinate activities and align interests to maximise the value of data analytics (Gröger, Citation2018; Yamada & Peran, Citation2018). A distinct but related governance domain deserves to be mentioned, i.e., business intelligence (BI) governance (Niño et al., Citation2020; Watson & Wixom, Citation2007). Traditionally, BI is considered to involve business reporting, OLAP, data warehousing and dashboarding (Chen et al., Citation2012). Although the goal is to achieve insights, BI looks back on what happened (Mortenson et al., Citation2015) and mainly concerns integrating and aggregating data to obtain those insights. In other words, BI mainly focuses on descriptive analysis. Data analytics, in turn, is broader and also includes predictive and prescriptive data analysis techniques (Gröger, Citation2018).

2.3. Data analytics governance mechanisms

Leading (IT) governance literature asserts that governance can be implemented through different types of mechanisms (Alhassan et al., Citation2016; De Haes & Van Grembergen, Citation2009; He & Mahoney, Citation2006; Tallon et al., Citation2013; Zogaj & Bretschneider, Citation2014). Not all studies use exactly the same terminology when it comes to these typologies of governance mechanisms, but the meaning of the categories is often very similar. For example, in IT governance, Almeida et al. (Citation2013) refer to process mechanisms, while Tallon et al. (Citation2013) describe these as procedural mechanisms. In this paper, for data analytics governance, we use the following typology, consisting of three types of mechanisms: structural, process, and relational (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Tallon et al., Citation2013; Wu et al., Citation2015). This typology is the most frequently used in scholarly (IT) governance research.

Structural mechanisms refer to the structural blueprint of how governance will be organised. This category includes organisational structures, roles, and responsibilities (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Tallon et al., Citation2013; Wu et al., Citation2015). Process mechanisms refer to the formal processes for ensuring that daily behaviours are consistent with policies and provide input back to decisions. Specific examples of process mechanisms include routines for the realisation, monitoring, evaluation, and maturity of processes (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Tallon et al., Citation2013; Wu et al., Citation2015). Finally, relational mechanisms are aimed at developing collaboration and shared understanding between different stakeholders or stakeholder groups. This category includes specific governance mechanisms such as communication, participation, collaboration, education, training, shared understanding, and conflict resolution (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Luo et al., Citation2016; Tallon et al., Citation2013; Wu et al., Citation2015).

This typology of mechanisms forms the basis of our preliminary framework for data analytics governance, with nine sub mechanisms, as depicted in Figure 1. The next sections provide a more detailed description of these mechanisms according to key literature, while also elaborating on them for data analytics governance.

Figure 1. Preliminary Framework for Data Analytics Governance.


2.3.1. Data analytics structural mechanisms

The core of structural governance mechanisms for data analytics focuses on organising data analytics functions and related decision rights. Three sub mechanisms emerged in the literature: organisational structure, roles and responsibilities, and coordination and alignment (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Tallon et al., Citation2013).

First, organisational structure embeds data analytics within the organisation so that the needs across different business units are understood (Grossman & Siegel, Citation2014). Three different organisational structures are identified: a centralised, a decentralised, and a hybrid structure. A centralised organisational structure places all data analytics activities (e.g., decision making, problem prioritisation) in one unit. In a decentralised structure, the activities are spread across different units. In a hybrid structure, some activities are coordinated from one unit while other activities are spread across different units (Grossman & Siegel, Citation2014).

None of these structures is perfect and all come with certain tradeoffs. The decision for a specific structure should be based on the organisation’s specific context, as “a universal best [IT] governance structure does not exist” (Brown & Grant, Citation2005, p. 703). For example, a centralised structure helps to combine activities and avoid unnecessary duplication of activities. However, this structure can create a high dependence on one unit, as it owns all resources. Furthermore, such a centralised unit might have no clear view on pressing data analytics questions at other levels (Schüritz et al., Citation2017). In contrast, a decentralised structure allows for short lines of communication with the end user and a low threshold for adoption of the end product. However, the reuse of resources can only be realised locally and not across the entire organisation. Lastly, a hybrid structure offers a middle ground between a centralised and a decentralised structure, but still has potential shortcomings. For example, standards are established centrally while they are safeguarded decentrally, which makes it difficult to keep them consistent.

Second, data analytics requires new roles with a diverse set of skills in order to be successful. Therefore, new roles and responsibilities need to be clearly defined to achieve a successful implementation of data analytics (Dremel et al., Citation2017; Grover et al., Citation2018). The following key roles have been observed by Schüritz et al. (Citation2017): data scientist, project manager, data architect and business users. The different roles share responsibilities for creating leverage in the organisation regarding the availability of resources, monitoring various data analytics activities, development of a data analytics platform and design of the data and information architecture for data analytics solutions (Schüritz et al., Citation2017). Additionally, Kiron et al. (Citation2011) propose to have specific roles to ensure data quality for data analytics activities.

Finally, a mechanism is required to achieve coordination and alignment among people and organisational departments. Coordination is much needed, since data analytics activities are carried out across the organisation (Espinosa & Armour, Citation2016). Therefore, a dedicated committee structure is proposed to promote the business value of data analytics activities, to ensure that data analytics projects get the required support, but also to take care of the prioritisation of candidate projects (Dremel et al., Citation2018; Grossman, Citation2018; Grossman & Siegel, Citation2014; Kiron et al., Citation2011). Such a committee should consist of (high-level) managers from different departments to oversee the work of data analytics (Dremel et al., Citation2017). In addition, alignment of different organisational norms, values and outcomes is needed to generate business value (Kiron et al., Citation2014). However, organisations often struggle to align data analytics activities with the traditional way of decision making (Akter et al., Citation2016). A steering committee for data analytics can help to create alignment between data analytics activities and strategy, by building understanding between data analytics objectives and business priorities (Akter et al., Citation2016).

2.3.2. Data analytics process mechanisms

According to key literature concerning process mechanisms, governance should be used to set up routines for the realisation, monitoring, evaluation, and development of analytics processes (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Tallon et al., Citation2013). Three sub mechanisms emerge for process mechanisms: process models, monitoring, and evaluation and development.

First, process models support a structured and controlled way of conducting data analytics projects. The most well-known process model for data analytics is the CRISP-DM model, which provides a set of steps and tasks that need to be performed in order to deliver data analytics insights (Mariscal et al., Citation2010). More recently, it has become a popular approach to apply agile practices in data analytics processes (Dremel et al., Citation2017). This accommodates the typically volatile requirements of data analytics projects and allows teams to react quickly to changes therein (do Nascimento & de Oliveira, Citation2012; Schmidt & Sun, Citation2018). Although many organisations do not use a process model in practice (neither one from the literature nor a home-grown one), there is consensus amongst academics that maintaining a well-defined, repeatable process for data analytics projects improves efficiency (Saltz et al., Citation2018).

Second, monitoring and evaluating the efficiency and effectiveness of data analytics projects is required. Monitoring of data analytics projects enables the organisation to intervene when problems arise. This ensures that data analytics efforts are leading to the desired business results (Grossman & Siegel, Citation2014). Furthermore, organisations need to become better at measuring the ROI of their data analytics projects, and at making the connection between data analytics efforts and business outcomes (Grover et al., Citation2018).

Finally, a development roadmap should exist to ensure data analytics will develop towards the analytics goals of the organisation. As data analytics can involve a diverse set of goals, organisations should have in mind how to reach these goals and how to improve on them. A maturity model is one instrument that can support organisations in creating such a roadmap for developing their data analytics capability (Grossman, Citation2018).

2.3.3. Data analytics relational mechanisms

Concerning the key literature on relational governance mechanisms, organisations should organise work in terms of interrelationships between people and groups. From this literature three sub mechanisms emerge: shared perceptions, collaboration, and transfer of knowledge and expertise (Almeida et al., Citation2013; De Haes & Van Grembergen, Citation2009; Tallon et al., Citation2013). First, shared perceptions on data analytics activities are crucial. For example, people in the organisation should share the idea that the outcome of data analytics is often uncertain. Furthermore, organisations should continue to support data analytics activities even after some initial disappointing results. This requires a strong organisational attitude that is open towards failure (Dremel et al., Citation2017). Organisations should provide sufficient autonomy to individuals to form their own judgement on their data analytics work (Barbour et al., Citation2018). Furthermore, an organisational culture embracing data analytics is crucial for its success (Grover et al., Citation2018). In sum, such a culture should prefer data over gut feeling, give room for experimentation and testing, and be open to failure, with the intention to learn from it (Abbasi et al., Citation2016; Berndtsson et al., Citation2018; Kiron et al., Citation2011). Moreover, a changed managerial mindset contributes towards consensus and supports the team in conducting its work in a planned manner (Yamada & Peran, Citation2018).

Second, data analytics is perceived as multidisciplinary knowledge work and highly depends on collaboration between individuals with complementary skills (Grover et al., Citation2018). While previous research showed that working across disciplines provides many opportunities, it also comes with multiple challenges (Barbour et al., Citation2018). For instance, it increases the complexity and difficulty of managing individuals and it makes communication about data analytics work more difficult (Barbour et al., Citation2018). Therefore, organisations should promote communication among individuals and groups that are involved in data analytics activities. According to Dremel et al. (Citation2018), organisations can establish a social community to support employee-level collaboration by using enterprise social software to promote communication. Furthermore, to foster novel collaboration between data analytics stakeholders, organisations should locate them physically close to each other.

Third, transfer of knowledge and expertise is crucial, since organisations should ensure that they acquire and retain the right skills, including: technology, modelling and analytic skills, and knowledge of the data and the business (Davenport et al., Citation2001). For this purpose, there should be opportunities to share know-how and to learn from others (Kiron et al., Citation2011). Consequently, organisations should take care of the development of their employees’ competencies. According to Dremel et al. (Citation2018), organisations need to implement a central education programme to improve data analytics skills. Currently, only a limited number of organisations appear to train employees in data analytics-related disciplines (EYGM Limited, Citation2015). As a result, companies are not able to realise the full potential of their data. Therefore, they are forced to hire external consultants to support their data analytics activities. Indeed, another option to (temporarily) acquire the required skills is intense collaboration with external data analytics consultants (Dremel et al., Citation2018).

3. The Viable System Model (VSM)

This section introduces and discusses the Viable System Model (VSM), which will be used as a theoretical lens in the context of the present research to make a theoretical contribution by discussing why and how data analytics governance, via the proposed data analytics governance mechanisms, can continue to fulfil its purpose of creating (business) value from data.

The VSM represents a systems thinking approach that focuses on adaptation and control (Espejo & Reyes, Citation2011). Developed by Beer (Citation1979, Citation1981, Citation1985), the VSM is theoretically grounded in “management cybernetics”, which is a grand theory described as “the science of effective organization” (Beer, Citation1985, p. ix). The VSM’s key underlying concept is “viability”, which refers to the capacity of a system to continue to fulfil its purpose, despite a changing environment. As such, a viable system has the capacity of co-evolution and adaptation within a dynamic environment (Espejo & Reyes, Citation2011). The VSM identifies five systemic functions, referred to as Systems 1 through 5, that are necessary and sufficient systemic preconditions for viability (Beer, Citation1979). These functions represent “essential elements of organisation”, and are interconnected through communication channels. This interconnected whole of systemic functions enables the system to be effective over time (Beer, Citation1979).

The essence of the VSM is visually summarised in Figure 2. System 1 is the action-oriented component of the viable system, which is responsible for achieving the viable system’s purpose. It is the “object of control”, while all other systemic functions (i.e., Systems 2 through 5 – together referred to as the metasystem) are controlling-oriented. The figure also illustrates that the viable system is an open system, i.e., that information exchange takes place between the viable system and its external environment (Beer, Citation1979, Citation1981, Citation1985).

Figure 2. The Viable System Model (VSM), simplified (based on Beer (Citation1985)).


The VSM has been leveraged before in the context of IS research in general (Richter & Basten, Citation2014), and IT governance research specifically (Huygh & De Haes, Citation2019, Citation2020; Peppard, Citation2005). In line with the justification of the VSM’s applicability in the context of IT governance (Huygh & De Haes, Citation2020), there are multiple arguments for its applicability as a theoretical lens in the context of data analytics governance. First, the VSM is well-suited to investigate systems that are dealing with a dynamic environment, as it provides an answer of “organisation” to deal with the challenges arising from operating in such an environment. Because of digitisation, data is becoming more important in the contemporary business environment. The growing availability of data is recognised as one of the major disrupting factors in the business environment, fuelled by technical advancements (e.g., big data, cloud computing, social networking), the need for quick decisions, and the transition to fact-based decision-making (Delen & Ram, Citation2018; Vial, Citation2019). As a result, new opportunities to create value from data frequently arise for organisations, relying on data analytics activities (Wixom et al., Citation2013). Second, as the VSM is theoretically grounded in (management) cybernetics, it can be considered a suitable candidate theory for investigating data analytics governance. Indeed, cybernetics is positioned as the science of control (Espejo & Reyes, Citation2011), and data analytics governance is concerned with controlling data analytics activities.

As the present research takes a specific focus on data analytics governance within the overall context of IT governance, the insights provided in extant research related to leveraging the VSM in the context of IT governance are especially relevant to the present research. In that context, Huygh and De Haes (Citation2020) position “the use of IT” as System 1, and “IT governance” as the metasystem (i.e., the combination of Systems 2 through 5) of a viable system that is concerned with leveraging and controlling the use of IT. Extending this logic, “data analytics activities” can be positioned as System 1, and “data analytics governance” can be positioned as the metasystem (i.e., the combination of Systems 2 through 5) of a viable system that is concerned with leveraging and controlling data analytics activities.

The systemic functions comprising a viable system are summarised in Table 1. More specifically, the table provides (1) a generic description, (2) a translation to the context of IT governance based on prior research, and (3) our translation to the specific context of data analytics governance.

Table 1. Summary of the systemic functions comprising a viable system.

In the discussion section of the present paper, we explore the parallels between each of these systemic functions (i.e., Systems 1 through 5) and data analytics governance in more detail. Furthermore, we provide specific illustrations of possible instantiations of each of these systemic functions by means of the specific data analytics governance mechanisms that were identified in the context of our multiple case study research. This provides tangible insights on how the VSM’s systemic functions relate to the governance of data analytics, and the specific mechanisms through which this can be implemented. In summary, by building on the VSM’s systemic functions, insights are provided on the key systemic preconditions for viable data analytics governance. The instantiation of these systemic preconditions happens through the implementation of specific data analytics governance mechanisms, such as the ones included in our descriptive framework.

4. Research methodology

In this research, our main aim is to develop a descriptive data analytics governance framework. As a first step, the previous section presented a preliminary version of the framework based on a review of the governance and data analytics literature. The next step is to validate this preliminary framework by collecting empirical evidence for each of the mechanisms in the framework. A useful method for evaluating our framework is case study research, since it allows for exploring and observing a new phenomenon, such as data analytics governance, in a real-life context (Darke et al., Citation1998). Furthermore, it allows for a more in-depth qualitative analysis that fosters a more holistic understanding of the data analytics governance mechanisms in their context. More specifically, we chose to apply a multiple case strategy as it enables us to compare and contrast findings from different cases.

4.1. Data collection

In total, three case studies (referred to herein as A, B, and C) were selected for inclusion in this research. The main criterion for selecting case organisations was that the organisation had invested significantly in data analytics to improve its business results by conducting data analytics projects. To obtain access to relevant case organisations, a master thesis topic was formulated for which master students in data science management could apply; students who did not have access to such an organisation could not participate. This provided the research team with access to relevant case organisations engaged in data analytics.

In the case organisations, the students collected data using interviews, a technique commonly used for data collection in case studies (Dul & Hak, Citation2008; Yin, Citation2017). The selection of respondents was based on their involvement in data analytics activities. More specifically, we looked for respondents who were accountable for data analytics, responsible for putting it into practice, or responsible for executing data analytics. Furthermore, respondents needed to have been active in data analytics for at least one year. Respondents who met these criteria were considered to have enough experience to understand how the organisation conducts its data analytics. Each interview followed a semi-structured approach using an interview protocol consisting of a number of questions devised by the research team. The interview questions were informed by the data analytics governance mechanisms that resulted from the literature review. An example of the questions used is: “How does the organisation enable or promote the collaboration and teamwork between the people and groups who are involved in the data analytics work?”. Before actual use of the interview protocol, the questions were tested in a pilot interview to check whether anything needed clarification and whether all mechanisms could be covered in roughly sixty minutes of interview.

In total, we conducted twenty-one interviews; the number of interviews varied with the size of each case organisation. More specifically, we conducted thirteen interviews at case A, five at case B, and three at case C. The sample of interview respondents included persons in different positions in the organisation, as shown in .

Table 2. Positions of the interview respondents.

Each case study was conducted by a different master student, which required upfront training to familiarise them with the interview protocol. During the interviews, each researcher was encouraged to follow the interview protocol carefully, but was allowed to extend it with probing and clarifying questions if deemed necessary. Interviews were held in a personal face-to-face setting to establish trust and provide a comfortable setting for sharing data and experiences. The interviews took place from October 2018 until the end of November 2018, and each of the interviews lasted approximately one hour. Two case organisations (A and B) allowed us to record the interviews, and these interviews were transcribed verbatim afterwards. The third case organisation (C) did not allow us to record the interviews; therefore, the researcher made a summary of each interview, which was subsequently crosschecked with the relevant interviewee for validity purposes.

4.2. Data analysis

The analysis of the case study data aimed at finding empirical evidence for the mechanisms comprising the data analytics governance framework. More precisely, we looked for instantiations of the governance mechanisms in each of the case organisations. The three master students responsible for the data collection initially coded their own data. Coding was guided by a coding framework, which was derived from the nine mechanisms in our preliminary data analytics governance framework (). To ensure consistency and completeness, the supervising researcher of the study analysed the students’ codes. This allowed the supervising researcher to get a sense of the data collected in the interviews in preparation for the next round of coding, in which he applied his own coding. Furthermore, the student coding helped the supervising researcher to verify that no vital information was missing from his own coding.

For this purpose, he used template analysis, which combines a deductive and an inductive coding approach for the analysis of qualitative data (Saunders et al., Citation2009). This enabled the identification of concepts or main ideas hidden in the data that relate to the phenomenon of interest (Saldaña, Citation2015). The deductive approach involved the use of a priori codes to start the coding process, which were derived from the nine mechanisms in our preliminary data analytics governance framework and the students’ codes. These codes were used in a first round of coding to mark large portions of the interview data that relate to a specific mechanism.

The next round of analysis focused on the texts marked in the previous round, and each governance mechanism was analysed separately. In this round an inductive approach, i.e., open coding, was used to identify instantiations of each governance mechanism in the marked texts. This resulted in a first set of unique codes that best described the newly identified instantiations, as shown in the second column of . After processing all the marked text for one mechanism, the codes were discussed with a fellow researcher to resolve any issues, group the newly found codes, and identify overlapping or similar codes (Burant et al., Citation2007; Saldaña, Citation2015; Strauss, Citation1987). This resulted in a second set of unique codes, as shown in the third column of . The computer-assisted qualitative data analysis (CAQDAS) software package NVivo 12 was used for coding the data.

Table 3. Number of unique codes per inductive coding round.

4.3. Theorising data analytics governance through leveraging the viable system model

Besides developing a descriptive data analytics governance framework, we make a theoretical contribution by drawing on the Viable System Model (VSM) to discuss why and how data analytics governance, via specific data analytics governance mechanisms, can continue to fulfil its purpose of creating (business) value from data. To achieve this, we first draw parallels between the VSM and data analytics governance. As the VSM’s systemic functions represent the key preconditions for a system to maintain viability (i.e., have the capacity to continue to fulfil its purpose) (Espejo & Reyes, Citation2011), these parallels will provide insights on the systemic aspects of data analytics governance that are necessary for its viability. Furthermore, we provide specific illustrations of the instantiation of these aspects by means of the specific data analytics governance mechanisms that were identified in the context of our empirical research.

In summary, tangible insights are provided on how the VSM’s systemic functions (i.e., Systems 1 through 5) relate to the governance of data analytics, and the specific mechanisms through which these systemic functions can be implemented. From an epistemological point of view, the VSM is herein thus used as a functionalist instrument, which enables the classification of surface elements (i.e., data analytics governance mechanisms) based on empirical observation (Jackson, Citation1992).

5. Case study results

The results of the case studies revealed that all nine sub-mechanisms of the preliminary data analytics governance framework were identified in at least one case organisation. Moreover, no new mechanisms were found in the cases, despite explicitly asking the participants about this. A complete overview of the identified instantiations of the mechanisms is provided in , resulting in the descriptive framework of data analytics governance.

Table 4. Descriptive data analytics governance framework.

5.1. Introducing the case organisations

Case organisation A is a global biopharmaceutical company and the most advanced organisation in terms of using data analytics. Their operating conditions have become increasingly challenging under the global pressures of competition. As a result, the company continually takes measures to evaluate, adapt, and improve its business practices to better meet customer needs. Therefore, they leverage digital and data capabilities across the organisation. About four years ago, the company began using data analytics more purposefully. For instance, it created a central data science competency centre, invested in tools and data platforms, and organised internal data science conferences. They conducted data analytics projects for creating supply chain metrics and dashboards, commercial forecasting across various markets, and optimisation of equipment train setups in chemical factories. In these projects, the organisation used different types of analysis including descriptive, predictive and prescriptive analysis.

Case organisation B is a producer of bicycles for the mid-range and higher segments, as well as bicycle parts and accessories. They are implementing a low-cost strategy to remain competitive and to rationalise their footprint. Therefore, they focus their data analytics efforts on applications in supply chain planning (e.g., budget and demand forecasting) and on analysing consumer information (e.g., using data from IoT and social media). The type of analytics applied in this organisation is mainly descriptive and predictive in nature, and status reports are used to present the outcomes to end users.

Case organisation C is a professional trade association for pharmacies, whose main objective is to support and promote the supply of medicines. To this end, they collect detailed data on medicine use in the Netherlands. With data analytics, they provide regular reports (mainly descriptive) on medicine use for their clients. Furthermore, they are experimenting in projects with more advanced predictive data analytics methods (e.g., clustering groups of prescribers, regression analysis to predict seasonal influence, and network analytics to predict shifts between products). Although not all of these activities are successful, the results are considered useful enough to continue the experiments.

5.2. Organisational structure

The case studies revealed that the case organisations use three different structures for positioning the data analytics function in the organisation. Case B uses a decentralised structure and distributes its data analytics activities across the whole organisation. This structure leads to multiple data analytics islands and prevents them from being competitive, although they recognise that other factors may also play a role here. Alternatively, case C uses a centralised structure in which one department conducts all data analytics activities. This centralised structure causes a high dependency on that department. Finally, case A uses a hybrid structure with a centre of excellence at the global level and multiple data analytics activities distributed across different units at the functional and divisional levels. They recognise three advantages of this approach. First, the business side can itself request changes to analytics models instead of depending on IT, which gives it control over the different requests. Second, it provides opportunities for sharing information and experience among different departments, supporting each other, and generating ideas and solutions. Last, this structure enables the business side to create its own KPIs. However, a hybrid structure also has several drawbacks. One example is the bureaucracy that comes with it, which delays changes or the delivery of analytical models. Another drawback is the number of people involved, which makes it difficult to know who is responsible for what.

5.3. Coordination and alignment

According to the three cases, coordination ensures resource allocation decisions are based on prioritised data analytics activities. In case C the manager of the centralised unit is responsible for prioritisation of analytics activities and the required resources. Similarly, in case A, management decides where to allocate resources, although the individual units also have some autonomy in allocating resources. For example, the IT group has a standard demand management process for all their IT and data analytics projects. This supports the prioritisation and funding of projects. Nevertheless, for the near future they aim to create a council that meets regularly to review results, take action, and identify new opportunities. In contrast, case B tries to coordinate activities by means of a project group consisting of people from multiple disciplines. This group decides which new projects can start.

Concerning alignment, all three cases reported that alignment takes place through informal communication. Although this is considered valuable, case A highlights that informal discussions may result in not all the right people being informed. To support alignment, case A has two initiatives in place. First, there are monthly formal meetings between the responsible people of the different groups, who discuss which demands will be approved. Second, across the organisation there are stand-up meetings that span multiple IT and data analytics topics. These stand-up meetings initiate interactions among individuals and groups, and support collective decision-making.

5.4. Roles and responsibilities

The three cases revealed four main role categories involved in data analytics, each with corresponding responsibilities. The four categories are: analytics roles, data science roles, business roles, and platform roles. First, the analytics roles are responsible for data engineering and visualisation. They use data from IT systems and transform it into meaningful results based on business requirements. These roles are well connected to the other role categories. For example, they submit requests to the platform role, they seek advice from the data science role in complex situations, and they work on demand of the business role. Formalised roles in the analytics category at case A are the data engineer and the data analyst. The data engineer transforms data to fit into a data analytics model. The data analyst focuses on turning data into something useful and presenting it in a comprehensible way. Furthermore, case C has developers who process data (ETL), conduct analyses, create programs for automatic analyses, and build and maintain the results of analyses. In case B, ETL is performed by the BI developers.

Second, the data science role focuses on more advanced analysis. Data scientists deliver this on request to the end user and provide support when users themselves struggle with analytics activities. In case A there is a formalised data scientist role. These data scientists are responsible for transforming and harmonising data together with the business, but also teach the business to do so on their own. The data scientist role is more experienced in using advanced data processing methods than the data analyst role. Case B hired a data scientist in the past, but this was not a success because the organisation did not provide enough guidance on the topics the data scientist could work on.

Third, the business role is responsible for identifying opportunities within the business and collaborating with stakeholders to get results. This role focuses on creating a demand for identified opportunities and uses metrics and KPIs to monitor results. Formalised business roles in case A are business users who define requirements and definitions with regard to the requested insights. Furthermore, global process stewards ensure that these requirements are defined enterprise-wide. Case B defined different areas for analysts: digital, market, sales, stakeholder, and supply chain analyst. Moreover, in cases A and B the business roles independently conduct data analytics activities. They refer to this as self-service: it allows them to gather the required information themselves in a specific environment in which they can create graphs from prepared data.

Last, the responsibility of the platform role is to operate and support the operational systems that generate and maintain the data. They make sure that data models reflect the requirements and definitions of the business function, and ensure the availability of the data. For the platform role, case A formalised IT roles that focus on activities from a technology perspective. Examples of these activities include deciding where to put the servers, or which specific databases to use. Furthermore, case A has a formalised data architect role (as does case B). The data architect determines how an information request can be met, what table structure is needed, and what the other relevant details are. They understand the existing landscape, including projects and data artefacts that already exist. In case C, formalised IT roles are responsible for the delivery of data and tools that enable data analytics, as well as for the technical maintenance and design of their systems. Furthermore, they have an administrator who is responsible for the quality of the data and the ETL procedures.

5.5. Process model

The use of a process model is only identified in case A, by means of a standardised innovation process that is used across the organisation. It provides them with a level of structure, sets expectations with stakeholders, and helps utilising resources based on demand. Despite the fact that it is not formally documented, it consists of the following three steps: prototype, operationalise, and industrialise.

First, the prototype step uses multiple iterations to create prototypes for data analytics solutions. Depending on the complexity, this step can take six to twenty weeks to complete. The prototype step consists of five phases: understand problem, understand data, clean and curate data, analyse, and communicate. These phases are comparable to the first three steps of CRISP-DM. However, case A states that using a method such as CRISP-DM increases frustration on the business side, because it is too sequential and limits their ability to deliver results quickly. Therefore, they keep the process flexible by giving the end user the possibility to change requirements during development.

Once the prototype is built, the operationalise step gives ownership of the prototype to IT. During this step the software code is verified to ensure everything is coded correctly and the formulas are valid. Last, in the industrialise step, the solution is fully documented and tested. Upon completion, the solution is deployed in an operational environment, making it available to business users. From that moment, there should be a regular refresh of the data, monitoring and maintenance of the solution, and first-line support for business users.

Cases B and C did not have a standard process for conducting data analytics projects. Case B uses an ad-hoc method that starts with a request for a report. Although case B does not have a standard process, they expect that one would help them manage projects across different units. Case C does not see value in a standard process, because they expect that the effort to describe and maintain the process would not outweigh the benefits. However, process steps similar to those of CRISP-DM, such as problem understanding, data understanding, and cleaning and curating data, are recognised, but not obligatory.

5.6. Monitoring and evaluation

A mechanism regarding monitoring is identified in case A at the end of the prototype step. In this step, the business side verifies whether the prototype meets their demands, and IT checks the functionality of the prototype. Furthermore, monitoring is supported by a process-tracking tool, i.e., Jira, which is structured according to their five core process phases. Moreover, they state that fast iterations during the prototype step help them with monitoring, since they provide visibility of the latest requirements.

According to case A, monitoring is done informally, through collaboration and discussion. This discussion happens during the regular strategy meetings, where results are reviewed on a routine basis. Cases B and C also rely on personal checks when data analytics results are shared among colleagues. Additionally, case B has a person responsible for safeguarding data analytics activities. This person is usually the project owner or, alternatively, someone working on the project.

For evaluation, two mechanisms were identified across the cases. First, case C performs benchmarking to compare projects and evaluate how they perform. Second, cases A and B favour strong interaction with the end users to evaluate whether the outcomes are satisfactory.

5.7. Development

To pursue the development of data analytics activities, case A created a strategic roadmap for the next two to three years. According to them, the road to becoming a data-driven organisation is a lengthy journey that includes all aspects of strategy. Following the multiyear roadmap is important, as is meeting the small milestones along the way. Case A is already three years into their journey, and the problems they are currently facing in year three are different from those faced in year one. For example, access to data and technology was problematic in the beginning. That issue is now solved, and they currently face problems in getting value from projects and finishing them quickly.

In order to develop this roadmap, they have frequent management meetings to discuss goals and direction. These goals are incorporated into day-to-day activities and tracked using a balanced scorecard to ensure they operate appropriately. Case B grouped its BI people together to create a BI community, as demanded by their strategy. In addition, individual efforts were made to measure the organisation’s maturity, revealing that it was rather immature and had multiple opportunities for improvement. Still, they lack a structural plan, and hence developments are uncoordinated and disconnected. Consequently, the goal for the future is to create a BI board tasked with strategy formulation. In case C there is no development plan, although some developments are implemented without a roadmap.

5.8. Shared perceptions

Mechanisms for shared perceptions are important to establish trust in data analytics activities and to create a data-driven mindset. Therefore, in case A, the data-driven mindset is promoted from the top level of the organisation (i.e., tone at the top). Management is aware of and supports the use of data analytics through sponsorship and by motivating employees, instilling a desire to move forward and explaining why data analytics is beneficial. Furthermore, case A invests heavily in building employees’ trust in data to support the use of data analytics. Formerly, they experienced a constant struggle with employees challenging the data and the KPIs. To overcome this, they now show the tool used to create the KPIs and the data used by the tool, instead of only showing the results. This enables employees to see the potential of the tool and the functionality it offers.

To further support trust in data analytics, cases A and B spread early adopters of data analytics throughout the organisation. These adopters share success stories and mentor inexperienced people to convince them of the usefulness of data analytics. In addition, in case C this trust has grown over the past years through collaboration and an informal way of working.

5.9. Collaboration

According to case A, collaboration is crucial, as no single person has all the knowledge to solve problems in the context of data analytics activities. Problems need to be approached from different perspectives by people with different but complementary skills. In cases A and B, people naturally lean on each other to solve a problem or seize an opportunity.

According to cases A and B, communication contributes to better collaboration. Hence, case A hosts frequent team meetings where they speak as a group to facilitate communication among team members. Moreover, when physical meetings are not possible, Webex meetings are hosted to communicate in an online environment. In addition, case C purposely puts different departments physically close to each other. For example, the IT and data analytics departments are located next to each other to stimulate more communication between them.

5.10. Transfer of know-how

According to cases A and B, the transfer of know-how among people is crucial to initiate learning, and it partly occurs through employees’ personal connections. Therefore, case A connects people in communities of practice by having employees regularly attend external conferences (e.g., data mining seminars) to share stories with other companies. In addition, the organisation hosts regular internal data science conferences to give its employees the opportunity to share data analytics-related experiences. In case B, a similar aspiration exists, as they plan to organise a hackathon where employees can pitch data analytics ideas in front of a diverse audience. Hitherto, they have only organised short presentation sessions to share knowledge in an informal setting.

Furthermore, knowledge is also shared using online tools. For example, case A and C provide a platform to support sharing of ideas and insights. In case C, they have an internal online platform to share experience on a diverse set of topics. Similarly, case A has a variety of Yammer groups to enable the sharing of new insights, techniques and opportunities that exist within or outside the organisation.

Case A uses yet another mechanism for knowledge sharing, i.e., job rotation. This enables data analytics practitioners to work at different places in the organisation and learn about different data analytics activities. Nevertheless, case A states that despite people’s willingness to learn, lessons learned from others are not always accepted, with the result that the same mistakes are sometimes repeated.

6. Discussion

This section starts with discussing the empirical validation of the descriptive data analytics governance framework. Next, the VSM is used as a theoretical lens to discuss why and how data analytics governance, via the proposed data analytics governance mechanisms, can continue to fulfil its purpose of creating (business) value from data.

6.1. Reflection on the validity of the data analytics governance framework

Given the lack of research on governance mechanisms for data analytics, the primary objective of this study is to achieve a better understanding of the governance mechanisms implemented by organisations to govern their data analytics efforts, and to develop a data analytics governance framework. As a first step, a preliminary framework was developed based on a literature review. The framework consists of two levels (refer back to ), of which the first level comprises three categories of governance mechanisms, i.e., structural, process, and relational. At the second level, more detailed sub-mechanisms are specified in each of these three categories, resulting in a total of nine data analytics governance sub-mechanisms.

In the next step, three case studies were conducted, which confirmed the existence of all nine governance sub-mechanisms in an empirical setting (refer back to ). This adds to the construct validity of the framework, since we found at least one instantiation for each of the nine governance sub-mechanisms identified in the framework. Furthermore, these instantiations are a valuable addition to our framework, as they provide concrete examples of how governance mechanisms can be implemented in practice. During our analysis of the data, we also looked for mechanisms that did not relate to any of the nine governance sub-mechanisms in the framework, but none were discovered in our case study data. This suggests that the current set of sub-mechanisms is comprehensive enough for describing and classifying governance mechanisms for data analytics efforts.

To a large extent, the mechanisms in the descriptive framework are not unique and are in fact very similar to the mechanisms identified in extant literature for IT governance and information governance (De Haes & Van Grembergen, Citation2004; Tallon et al., Citation2013). This is also demonstrated in , which compares the mechanisms of our data analytics governance framework with the mechanisms identified for IT governance and information governance in leading prior studies. From this comparison, it is easy to spot the similarities. This is not surprising since the overall mechanisms of governance are considered to be generic and independent of what is being governed (i.e., the focus of governance) (Joachim et al., Citation2013; Tallon et al., Citation2013; Tiwana et al., Citation2014).

Table 5. Comparison of data analytics governance mechanisms, IT governance mechanisms, and information governance mechanisms.

Nevertheless, our descriptive data analytics governance framework provides some unique insights, which become apparent when focusing on the instantiations of the mechanisms in the domain of data analytics specifically. In this domain, the structural mechanisms organise the location of the data analytics function, the accompanying roles, and the prioritisation of data analytics initiatives. These are unique to the data analytics domain and differ from those found in the domains of IT or information governance. Furthermore, the process mechanisms are used to set up the routines for the realisation, monitoring, and evaluation of data analytics initiatives, and for the development of the data analytics function. Here, too, the focus on data analytics initiatives is unique. Finally, the relational mechanisms aim to realise commitment to value creation through data analytics, organise the work in terms of interrelationships between people, and enable the transfer of data analytics knowledge. A summary is shown in ; it clearly shows that the overall data analytics governance mechanisms (which are similar to those found in other governance domains, as previously shown in ) become unique in their specific instantiations. Hence, our descriptive framework can be regarded as an integrated model for data analytics governance, which is unique in its own right and complements existing IT and information governance models by explicitly focusing on data analytics governance activities.

Table 6. Focus of data analytics governance mechanisms.

6.2. Leveraging the viable system model to investigate data analytics governance and its mechanisms

Besides developing a descriptive data analytics governance framework, we make a theoretical contribution by drawing on the Viable System Model (VSM) to discuss why and how data analytics governance can continue to fulfil its purpose of creating (business) value from data (i.e., viability). While a descriptive framework based on a typology of governance mechanisms provides useful insights on the mechanisms that may be used to implement data analytics governance, it does not provide insights on the interrelatedness of mechanisms in a holistic and viable data analytics governance approach. Indeed, leveraging the VSM as a theoretical lens allows for providing insights on the key systemic preconditions for viable data analytics governance. The instantiation of these systemic functions happens through the implementation of specific data analytics governance mechanisms, such as the ones included in our descriptive data analytics governance framework. Below, we draw theoretical parallels between the VSM (i.e., its underlying concepts and systemic functions) and data analytics governance.

In the context of data analytics governance, the VSM’s underlying concept of viability implies that data analytics activities are leveraged and controlled to continue to fulfil the purpose of creating (business) value from data (Avery & Cheek, Citation2015; Espinosa & Armour, Citation2016). The VSM identifies five systemic functions, referred to as Systems 1 through 5, that are the necessary and sufficient systemic preconditions for maintaining this viability. As illustrated in Figure 3, the action-oriented component (or System 1) of the viable system maps to data analytics activities. All other systemic functions (i.e., Systems 2 through 5 – together referred to as the metasystem) are controlling-oriented, and as such map to (aspects of) data analytics governance.

Figure 3. A VSM-based view on data analytics governance (based on Beer (Citation1985)).

Drawing theoretical parallels between the systemic functions of the VSM (i.e., Systems 1 through 5) and data analytics governance provides insights on the systemic aspects of data analytics governance that are necessary for its viability. Furthermore, we provide specific illustrations of the instantiation of these systemic functions by means of the specific data analytics governance mechanisms that were identified in the context of our empirical research. This provides tangible insights on how the VSM’s systemic functions relate to the governance of data analytics, and the specific mechanisms through which this can be implemented. Below, the theoretical parallels between the VSM’s systemic functions and data analytics governance, and corresponding illustrations of specific mechanisms for implementation (based on our descriptive framework), are discussed for each of the VSM’s five systemic functions.

6.2.1. System 1

In general, the System 1 function represents the action-oriented component of the viable system, which is responsible for achieving the viable system’s purpose. It is the “object of control”, while all other systemic functions (i.e., Systems 2 through 5 – together referred to as the metasystem) are controlling-oriented (Espejo & Reyes, Citation2011). As such, System 1 embodies the viable system’s reason for existence, while the combination of Systems 2 through 5 is concerned with ensuring that System 1 fulfils the purpose of the viable system over time (Beer, Citation1985).

In the context of data analytics governance, the System 1 function represents the data analytics activities that are performed to enhance organisational performance by the realisation of business objectives. Extant literature identifies numerous categories of data analytics activities, including: discovering new and valuable sources of data (data source exploration), finding business goals which can be achieved in a data-driven way (goal exploration), finding out what value might be extracted from the data (data value exploration), relating data science results to the business goals (result exploration), extracting valuable stories (e.g., visual or textual) from the data (narrative exploration), and finding ways to turn the value extracted from the data into a service or app that delivers something new and valuable to users and customers (product exploration) (Delen & Ram, Citation2018; Martínez-Plumed et al., Citation2019).

The case studies illustrate that organisations continually use data analytics to measure, evaluate, adapt, and improve their business practices to better meet customer needs. For example, case B leverages data analytics to implement a low-cost strategy to stay more competitive and to rationalise their footprint. More specifically, case B conducts data analytics projects to create metrics and dashboards, and uses data for forecasting and optimisation.

6.2.2. System 2

In general, the System 2 function coordinates the System 1 activities. System 2 mechanisms are aimed at avoiding problems at the level of System 1, rather than solving these problems once they manifest. These mechanisms may reflect managerial decisions, but System 2 does not make such decisions itself. The need for System 2 mechanisms is positively related to the complexity of System 1. Typical System 2 mechanisms include the use of common standards, the development of a common language, and an appropriate culture (Beer, Citation1985).

In the context of data analytics governance, the System 2 function represents the coordination between, and within, data analytics activities (Beer, Citation1979). As such, System 2 mechanisms aim to avoid issues like the diffusion of data analytics activities across organisational units, which can potentially lead to the creation of silos (Avery & Cheek, Citation2015). Additionally, organisations can establish a social community to support employee-level collaboration by using enterprise social software (Dremel et al., Citation2018). To stimulate novel collaborations between data analytics stakeholders, organisations can furthermore physically locate their employees close to each other in a centralised organisational unit (Dremel et al., Citation2018; Schüritz et al., Citation2017).

The case study data illustrates that organisations make use of “sharing success stories” (cases A and B) to build a shared perception about data analytics across the organisation. Such a mechanism may indeed establish coordination between, and within, data analytics activities, and may therefore contribute to anticipating and avoiding problems at the level of the data analytics activities (i.e., System 1).

6.2.3. System 3

In general, the System 3 function is responsible for the cohesiveness of System 1. It is tasked with managerial decision-making that steers System 1 towards achieving the purpose of the viable system. To effectuate this, System 3 can (1) use the command axis (which consists of resource management, performance measurement, and establishing and enforcing policies), (2) establish and promote appropriate System 2 mechanisms, and (3) use the audit channel (i.e., System 3*) to obtain information directly from System 1 (Beer, Citation1979, Citation1985).

In the context of data analytics governance, the System 3 function is concerned with ensuring that all data analytics activities are operating as a cohesive whole. This is effectuated through resource management, performance measurement, and establishing and enforcing policies. The responsibility for resource management can be taken up by a dedicated committee structure, which promotes the business value of data analytics activities to ensure that data analytics projects get the required support, and takes care of the prioritisation of projects (Dremel et al., Citation2018; Espinosa & Armour, Citation2016; Grossman, Citation2018; Grossman & Siegel, Citation2014; Kiron et al., Citation2011). Performance measurement can, for instance, be established by monitoring data analytics projects. Such monitoring ensures that data analytics efforts are leading to the desired business results, and that management can intervene if problems arise (Grossman, Citation2018). Lastly, policies should be in place that provide a norm for how teams should conduct data analytics activities. Such policies might manifest as a formalised process model or project methodology, with a clear description of all activities that are required in the context of a data analytics project (Baijens & Helms, Citation2019; Mariscal et al., Citation2010; Schmidt & Sun, Citation2018).

Case B illustrates that a steering committee can be tasked with the prioritisation of data analytics projects. Whether such a committee should be dedicated to the prioritisation of data analytics projects or not is an organisation-specific issue that may ultimately depend on the importance of data analytics for the organisation, and the number of data analytics projects that are to be prioritised.

6.2.4. System 4

In general, the System 4 function provides the viable system with the capacity of adaptation, through sensing the current state of the external environment and anticipating its potential future states. This information is crucial in the context of strategic decision-making (i.e., decision-making about (the path to) desired future states of the viable system), which is a mutual process between System 4 and System 3 (Beer, Citation1979).

In the context of data analytics governance, the System 4 function is responsible for sensing opportunities and threats in the environment that are data analytics-related (e.g., new analytics tools, relevant legal/regulatory issues, how competitors are leveraging data analytics, etc.). Therefore, organisations should ensure that they acquire and retain the right knowledge and expertise, and promote the transfer of that relevant knowledge and expertise (Davenport et al., Citation2001; Dremel et al., Citation2018). If the knowledge and expertise are not available in-house, organisations can (temporarily) acquire skills through collaboration with external data analytics consultants, which may also result in knowledge transfer to the organisation (Dremel et al., Citation2018).

Case A illustrates that organisations may enable their employees to visit (external) data analytics conferences, with the goal of building knowledge and expertise. This indeed illustrates how organisations are trying to collect the necessary information to keep up with exogenous change.

6.2.5. System 5

In general, the System 5 function is responsible for establishing the overall direction and purpose of the viable system, and ensuring that this purpose is ultimately achieved. To effectuate this responsibility, System 5 uses information about the overall direction and purpose of the system to overview its strategic decision-making (i.e., the decision-making process between System 4 and System 3) (Beer, Citation1979).

In the context of data analytics governance, the System 5 function should establish what the organisation is seeking to achieve and what it is seeking to avoid with its data analytics activities. In other words, the concern of System 5 is ensuring optimal value delivery from data analytics activities, while also ensuring that the risks are understood and do not exceed the organisation’s overall risk appetite. Data analytics literature indeed asserts that an organisation should set its overall direction for data analytics and monitor progress to ensure that data analytics develops in that direction (Grossman, Citation2018).

Case A illustrates that organisations may opt for hosting regular strategy meetings in the context of data analytics to review results and monitor progress on a routine basis. The discussions held during these strategy meetings may ultimately result in an update of the data analytics strategy, or at least in a critical evaluation of the current strategy, which as such constitutes a good example of a System 5 mechanism in practice.

6.2.6. Viable data analytics governance

The VSM’s five systemic functions (i.e., Systems 1 through 5) represent the key systemic preconditions for viability (Beer, Citation1985). Drawing on the previously-discussed parallels between these systemic functions and data analytics governance provides insights on the key systemic preconditions for viable data analytics governance. The instantiation of these systemic functions, to ensure that these systemic preconditions are met, happens through the implementation of specific data analytics governance mechanisms, such as the ones included in our descriptive framework.

Mapping specific data analytics governance mechanisms as implemented by organisations, such as the ones identified in the context of our case studies, to the VSM’s systemic functions illustrates that organisations’ existing data analytics governance approaches can be described through the VSM lens. Table 7 summarises the theoretical description of the five systemic functions (both in general terms and in the specific context of data analytics governance), and provides tangible insights concerning the mechanisms that may be implemented in practice to instantiate these systemic functions in the context of data analytics governance.

Table 7. Data analytics governance mechanisms mapped to VSM systemic functions.

Finally, using the VSM to describe an organisation’s data analytics governance approach has the potential to take into account the interrelatedness of data analytics governance mechanisms in a holistic data analytics governance approach. While our purpose of leveraging the VSM herein was limited to discussing theoretical parallels between the VSM’s systemic functions and data analytics governance, and illustrating specific data analytics governance mechanisms that may be used to instantiate these systemic functions, we nevertheless would like to briefly illustrate this aspect as well. However, it should be noted that the following example draws on specific data analytics governance mechanisms as identified in several individual case organisations, and therefore does not describe the holistic data analytics governance approach of a single organisation. Future research may thus take a similar approach to holistically describe, and diagnose, an organisation’s data analytics governance approach.

Organisations may, for instance, enable their employees to visit (external) data analytics conferences with the goal of building knowledge and expertise (case A). Such a mechanism allows for collecting potentially interesting information to keep up with exogenous change in the context of (the use of) data analytics, and is as such an example of the instantiation of the System 4 function. This information can then be used in the context of strategy meetings that are hosted to evaluate and (potentially) update the (data analytics) strategy (case A). Indeed, it is important to evaluate whether and how data analytics is, or can be, leveraged to meet strategic business goals. Such strategy meetings are thus an example of the instantiation of the System 5 function. The result of this strategic discussion may be a planned (data analytics) strategy, which outlines how data analytics should contribute to achieving business goals. A dedicated steering committee may be tasked with the evaluation and prioritisation of data analytics projects (case B), while keeping this strategy in mind, which is an example of the instantiation of the System 3 function. Through approving and launching data analytics projects, data analytics activities will ultimately manifest themselves at the level of the System 1 function. An example of this may be the forecasting of customer demand (case B). Finally, a (relational) mechanism like sharing success stories (cases A and B), which contributes to building a shared perception across the organisation, may help in the auto-regulatory coordination (i.e., anticipating and avoiding problems, instead of solving them when they happen) of data analytics activities. Such a mechanism is as such an example of the instantiation of the System 2 function.

7. Conclusion

This paper presented a preliminary framework for data analytics governance that was developed by drawing on extant data analytics (governance) literature, and as such represents the first scientifically grounded framework for data analytics governance. Subsequently, the framework was used within three case organisations to elicit illustrations of the implementation of data analytics governance, resulting in the descriptive data analytics governance framework and thereby providing a deeper understanding of the possibilities to govern data analytics in practice. The data analytics governance framework is a response to the call for action by Espinosa and Armour (Citation2016) and Gröger (Citation2018), who both suggest that a data analytics governance framework is needed to provide practical guidance for managers to improve their data analytics governance practices.

Besides developing a descriptive data analytics governance framework, we made a theoretical contribution by drawing on the Viable System Model (VSM) to discuss why and how data analytics governance, via specific data analytics governance mechanisms, can continue to fulfil its purpose of creating (business) value from data. To achieve this, we first established parallels between the VSM and data analytics governance. As the VSM’s systemic functions represent the key preconditions for a system to maintain viability (i.e., have the capacity to continue to fulfil its purpose), these parallels provide insights on the systemic aspects of data analytics governance that are necessary for its viability. Furthermore, we provided specific illustrations of the instantiation of these aspects by means of the specific data analytics governance mechanisms that were identified in the context of our empirical research.

The research also has several practical implications. The results of our research are valuable for those responsible for setting up and maintaining data analytics governance in an organisation. The different governance mechanisms of our framework provide concrete levers for practice regarding how data analytics governance can be configured and what aspects are important in the context of data analytics governance. Furthermore, the illustrations collected from the real-life cases guide practitioners by providing concrete examples of how these governance mechanisms can manifest in a real organisation. Based on the insights provided herein, practitioners could choose and customise a set of data analytics governance mechanisms (or instantiations thereof) from the framework, which they consider most appropriate for their organisation and its specific context. Finally, practitioners can use the parallels between the VSM’s systemic functions and data analytics governance to gauge whether their data analytics governance approach takes into account all five systemic functions that are necessary for its viability. In summary, the insights provided by our research can provide practitioners with a starting point for governing their data analytics activities.

There are also some limitations to take into account when using the results of this research. First of all, the limited number of case studies (three) prevents a conclusive validation of the framework, and leaves room to suppose that more mechanisms might be found in other organisations. However, none of the three case studies provided information on governance mechanisms that were not yet covered by our framework. Second, despite the fact that the use of a preliminary framework helped us not to be overwhelmed by the complexity of the case study data, it could have biased the collection and interpretation of the data, and thereby limited the identification of additional mechanisms. A third but related limitation is that the case study data was not collected with the VSM in mind. In other words, the VSM was not leveraged to develop the interview protocol, but was leveraged to develop the coding frame (to guide the mapping of mechanisms to the VSM’s systemic functions). As such, we acknowledge that a VSM-based interview protocol may lead to the identification of additional data analytics governance mechanisms in the case organisations. However, the semi-structured interview protocol that was used during data collection specifically probed for the existence and identification of additional data analytics governance mechanisms that were not yet discussed during the interview. Fourth and finally, the VSM was used herein as a functionalist instrument, which enables the classification of surface elements (i.e., data analytics governance mechanisms) based on empirical observation. However, using the VSM as a functionalist instrument does not unlock its full potential. Indeed, using the VSM as a structuralist instrument makes it possible to uncover the aspects underneath the surface (e.g., the violation of cybernetic principles) that might manifest in various pathologies at the surface level. This more advanced interpretation of the VSM would allow for uncovering underlying issues and suggesting remedial actions. In other words, it would allow for a more advanced VSM-based diagnosis of an organisation’s data analytics governance approach.

As for future research, scholars could conduct additional research on the instantiations of data analytics governance mechanisms that can be used by organisations, but also on how specific sets of mechanisms would be effective for different types of organisations. Furthermore, these mechanisms can be evaluated on dimensions such as “effectiveness” and “ease of implementation” (e.g., by an expert panel), allowing a prioritisation of data analytics governance mechanisms. Another future research opportunity could be the investigation of contextual factors (i.e., contingency factors), such as the role of data analytics for the organisation and the organisation’s compliance requirements, and the influence of such factors on an organisation’s data analytics governance approach. Moreover, future research could investigate the difference in how organisations apply data analytics and how that affects the governance approach. For example, if organisations use data analytics as a self-service within their organisation, they may want to consider a platform-based governance approach, as suggested by Gregory et al. (Citation2018). Furthermore, future research could also investigate how the structure of the data analytics function impacts the governance approach, or how this is influenced by the organisational context. Finally, scholars could continue drawing on the VSM for a further investigation of data analytics governance. Future research in this area can conduct more in-depth case study research to describe and diagnose organisations’ data analytics governance approaches and to examine the interrelatedness of mechanisms in a holistic data analytics governance approach. In other words, the VSM can be leveraged as a structuralist instrument in future empirical research to investigate data analytics governance. To put this into practice, future research may draw on Pérez Ríos and Schwaninger’s (Citation2010) three-step approach for a VSM-based diagnosis.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Province of Limburg, The Netherlands [SAS-2020-03117].

References

  • Abbasi, A., Sarker, S., & Chiang, R. H. L. (2016). Big data research in information systems: Toward an inclusive research agenda. Journal of the Association of Information Systems, 17(2), 1–32. https://doi.org/10.17705/1jais.00423
  • Akter, S., Wamba, S. F., Gunasekaran, A., Dubey, R., & Childe, S. J. (2016). How to improve firm performance using big data analytics capability and business strategy alignment? International Journal of Production Economics, 182, 113–131. https://doi.org/10.1016/j.ijpe.2016.08.018
  • Alhassan, I., Sammon, D., & Daly, M. (2016). Data governance activities: An analysis of the literature. Journal of Decision Systems, 25(sup1), 64–75. https://doi.org/10.1080/12460125.2016.1187397
  • Almeida, R., Pereira, R., & Mira Da Silva, M. (2013). IT governance mechanisms: A literature review. In International Conference on Exploring Services Science (pp. 186–199). Berlin, Heidelberg.
  • Avery, A. A., & Cheek, K. (2015). Analytics governance: Towards a definition and framework. In Twenty-first Americas Conference on Information Systems (pp. 1–8). Puerto Rico.
  • Baijens, J., & Helms, R. W. (2019). Developments in knowledge discovery processes and methodologies: Anything new? In Twenty-fifth Americas Conference on Information Systems (pp. 1–10). Cancun, Mexico.
  • Barbour, J. B., Treem, J. W., & Kolar, B. (2018). Analytics and expert collaboration: How individuals navigate relationships when working with organizational data. Human Relations, 71(2), 256–284. https://doi.org/10.1177/0018726717711237
  • Beer, S. (1979). The heart of the enterprise. John Wiley & Sons.
  • Beer, S. (1981). Brain of the firm (2nd ed.). John Wiley & Sons.
  • Beer, S. (1985). Diagnosing the system for organizations. John Wiley & Sons.
  • Berndtsson, M., Forsberg, D., Stein, D., & Svahn, T. (2018). Becoming a data-driven organization. In Twenty-Sixth European Conference on Information Systems. Portsmouth, UK.
  • Brown, A. E., & Grant, G. G. (2005). Framing the frameworks: A review of IT governance research. Communications of the Association for Information Systems, 15(1), 696–712. https://doi.org/10.17705/1CAIS.01538
  • Burant, T. J., Gray, C., Ndaw, E., McKinney-Keys, V., & Allen, G. (2007). The rhythms of a teacher research group. Multicultural Perspectives, 9(1), 10–18. https://doi.org/10.1080/15210960701333674
  • Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T., Shearer, C., & Wirth, R. (2000). CRISP-DM 1.0: Step-by-step data mining guide. Technical report, CRISP-DM. https://www.the-modeling-agency.com/crisp-dm.pdf
  • Chen, H., Chiang, R. H. L., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 1165–1188. https://doi.org/10.1145/2463676.2463712
  • Choudhary, V., & Vithayathil, J. (2013). The impact of cloud computing: Should the IT department be organized as a cost center or a profit center? Journal of Management Information Systems, 30(2), 67–100. https://doi.org/10.2753/MIS0742-1222300203
  • Darke, P., Shanks, G., & Broadbent, M. (1998). Successfully completing case study research: Combining rigour, relevance and pragmatism. Information Systems Journal, 8(4), 273–289. https://doi.org/10.1046/j.1365-2575.1998.00040.x
  • Davenport, T. H. (2006). Competing on analytics. Harvard Business Review, 84 (1), 98–107. http://www.ncbi.nlm.nih.gov/pubmed/16447373
  • Davenport, T. H. (2013). Analytics 3.0. Harvard Business Review, 91(12), 64–72. https://doi.org/10.1017/CBO9781107415324.004
  • Davenport, T. H., Barth, P., & Bean, R. (2012). How ‘Big Data’ is different. MIT Sloan Management Review, 54(1), 22–24. https://www.hbs.edu/ris/Publication%20Files/SMR-How-Big-Data-Is-Different_782ad61f-8e5f-4b1e-b79f-83f33c903455.pdf
  • Davenport, T. H., Harris, J. G., Long, D. W., & Jacobson, A. L. (2001). Data to knowledge to results: Building an analytic capability. California Management Review, 43(2), 117–138. https://doi.org/10.2307/41166078
  • De Haes, S., & Van Grembergen, W. (2004). IT governance and its mechanisms. Information Systems Control Journal, 1, 27–33.
  • De Haes, S., & Van Grembergen, W. (2009). An exploratory study into IT governance implementations and its impact on business/IT alignment. Information Systems Management, 26(2), 123–137. https://doi.org/10.1080/10580530902794786
  • De Haes, S., Van Grembergen, W., Joshi, A., & Huygh, T. (2020). Enterprise governance of information technology (3rd ed.). Springer.
  • Delen, D., & Demirkan, H. (2013). Data, information and analytics as services. Decision Support Systems, 55(1), 359–363. https://doi.org/10.1016/j.dss.2012.05.044
  • Delen, D., & Ram, S. (2018). Research challenges and opportunities in business analytics. Journal of Business Analytics, 1(1), 2–12. https://doi.org/10.1080/2573234X.2018.1507324
  • do Nascimento, G. S., & de Oliveira, A. A. (2012). An agile knowledge discovery in databases software process. In The Second International Conference on Advances in Information Mining and Management (pp. 343–351). Berlin, Heidelberg.
  • Dremel, C., Herterich, M. M., Wulf, J., & vom Brocke, J. (2018). Actualizing big data analytics affordances: A revelatory case study. Information & Management, 57(1), 103121. https://doi.org/10.1016/j.im.2018.10.007
  • Dremel, C., Herterich, M. M., Wulf, J., Waizmann, J.-C., & Brenner, W. (2017). How Audi AG established big data analytics in its digital transformation. MIS Quarterly Executive, 16(2), 81–100. https://aisel.aisnet.org/misqe/vol16/iss2/3
  • Dul, J., & Hak, T. (2008). Case study methodology in business research. Routledge. https://doi.org/10.1007/s13398-014-0173-7.2
  • Espejo, R., & Reyes, A. (2011). Organizational systems: Managing complexity with the viable system model. Springer Science & Business Media.
  • Espinosa, J. A., & Armour, F. (2016). The big data analytics gold rush: A research framework for coordination and governance. In Proceedings of the Annual Hawaii International Conference on System Sciences (pp. 1112–1121). Hawaii.
  • EYGM Limited. (2015). Becoming an analytics-driven organization to create value: A report in collaboration with Nimbus Ninety. https://www.ey.com/Publication/vwLUAssets/EY-global-becoming-an-analytics-driven-organization/$FILE/ey-global-becoming-an-analytics-driven-organization.pdf
  • Gregory, R. W., Kaganer, E., Henfridsson, O., & Ruch, T. J. (2018). IT consumerization and the transformation of IT governance. MIS Quarterly, 42(4), 1225–1253. https://doi.org/10.25300/MISQ/2018/13703
  • Gröger, C. (2018). Building an industry 4.0 analytics platform. Datenbank-Spektrum, 18(1), 5–14. https://doi.org/10.1007/s13222-018-0273-1
  • Grossman, R. L. (2018). A framework for evaluating the analytic maturity of an organization. International Journal of Information Management, 38(1), 45–51. https://doi.org/10.1016/j.ijinfomgt.2017.08.005
  • Grossman, R. L., & Siegel, K. P. (2014). Organizational models for big data and analytics. Journal of Organization Design, 3(1), 20–25. https://doi.org/10.7146/jod.9799
  • Grover, V., Chiang, R. H. L., Liang, T., & Zhang, D. (2018). Creating strategic business value from big data analytics: A research framework. Journal of Management Information Systems, 35(2), 388–423. https://doi.org/10.1080/07421222.2018.1451951
  • Günther, W. A., Mehrizi, M. H. R., Huysman, M., & Feldberg, F. (2017). Debating big data: A literature review on realizing value from big data. Journal of Strategic Information Systems, 26(3), 191–209. https://doi.org/10.1016/j.jsis.2017.07.003
  • He, J., & Mahoney, J. (2006). Firm capability, corporate governance, and firm competitive behavior: A multi-theoretic framework. International Journal of Strategic Change Management, 1(4), 293–318. https://doi.org/10.1504/IJSCM.2009.031408
  • Huygh, T., & De Haes, S. (2019). Investigating IT governance through the viable system model. Information Systems Management, 36(2), 168–192. https://doi.org/10.1080/10580530.2019.1589672
  • Huygh, T., & De Haes, S. (2020). Towards a viable system model-based organizing logic for IT governance. In ICIS 2020 Proceedings. India.
  • Jackson, M. C. (1992). The soul of the viable system model. Systems Practice, 5(5), 561–564. https://doi.org/10.1007/BF01140507
  • Joachim, N., Beimborn, D., & Weitzel, T. (2013). The influence of SOA governance mechanisms on IT flexibility and service reuse. The Journal of Strategic Information Systems, 22(1), 86–101. https://doi.org/10.1016/j.jsis.2012.10.003
  • Kiron, D., Prentice, P., & Boucher Ferguson, R. (2014). The analytics mandate. MIT Sloan Management Review, 55(4), 1. https://sloanreview.mit.edu/projects/analytics-mandate/
  • Kiron, D., Shockley, R., Kruschwitz, N., Finch, G., & Haydock, M. (2011). Analytics: The widening divide advantage through analytics. MIT Sloan Management Review, 52(2), 1–21. https://sloanreview.mit.edu/projects/analytics-the-widening-divide/
  • Lavalle, S., Lesser, E., Shockley, R., Hopkins, M. S., & Kruschwitz, N. (2011). Big data, analytics and the path from insights to value. MIT Sloan Management Review, 52(2), 21–32. https://sloanreview.mit.edu/article/big-data-analytics-and-the-path-from-insights-to-value/
  • Luo, J., Wu, Z., Huang, Z., & Wang, L. (2016). Relational IT governance, its antecedents and outcomes: A study on Chinese firms. In 2016 International Conference on Information Systems (ICIS) (pp. 1–18). Dublin, Ireland.
  • Mariscal, G., Marbán, Ó., & Fernández, C. (2010). A survey of data mining and knowledge discovery process models and methodologies. Knowledge Engineering Review, 25(2), 137–166. https://doi.org/10.1017/S0269888910000032
  • Martínez-Plumed, F., Contreras-Ochando, L., Ferri, C., Hernández-Orallo, J., Kull, M., Lachiche, N., Ramírez-Quintana, M. J., & Flach, P. A. (2019). CRISP-DM twenty years later: From data mining processes to data science trajectories. IEEE Transactions on Knowledge and Data Engineering, 33(8), 3048–3061. https://doi.org/10.1109/TKDE.2019.2962680
  • McAfee, A., Brynjolfsson, E., Davenport, T. H., Patil, D. J., & Barton, D. (2012). Big data: The management revolution. Harvard Business Review, 90(10), 60–68. https://hbr.org/visual-library/2015/03/big-data-the-management-revolution-hbr-slide-deck
  • Mikalef, P., Pappas, I. O., Krogstie, J., & Giannakos, M. (2017). Big data analytics capabilities: A systematic literature review and research agenda. Information Systems and E-Business Management, 16(3), 547–578. https://doi.org/10.1007/s10257-017-0362-y
  • Mortenson, M. J., Doherty, N. F., & Robinson, S. (2015). Operational research from Taylorism to terabytes: A research agenda for the analytics age. European Journal of Operational Research, 241(3), 583–595. https://doi.org/10.1016/j.ejor.2014.08.029
  • Niño, H. A. C., Niño, P. C. J., & Ortega, R. M. (2020). Business intelligence governance framework in a university: Universidad de la Costa case study. International Journal of Information Management, 50, 405–412. https://doi.org/10.1016/j.ijinfomgt.2018.11.012
  • Oestreich, T. (2016). Establish a framework for analytics governance (No. G00268221). Gartner Business Intelligence and Analytics Summit. https://www.gartner.com/en/documents/2892417/establish-a-framework-for-analytics-governance
  • Peppard, J. (2005). The application of the viable systems model to information technology governance. In ICIS 2005 Proceedings (pp. 11–24). Las Vegas, NV, USA.
  • Pérez Ríos, J., & Schwaninger, M. (2010). Models of organizational cybernetics for diagnosis and design. Kybernetes, 39(9/10), 1529–1550. https://doi.org/10.1108/03684921011081150
  • Rau, K. G. (2004). Effective governance of it: Design objectives, roles, and relationships. Information Systems Management, 21(4), 35–42. https://doi.org/10.1201/1078/44705.21.4.20040901/84185.4
  • Richter, J., & Basten, D. (2014). Applications of the viable systems model in IS research - A comprehensive overview and analysis. In Proceedings of the Annual Hawaii International Conference on System Sciences (pp. 4589–4598). Hawaii. https://doi.org/10.1109/HICSS.2014.565
  • Saldaña, J. (2015). The coding manual for qualitative researchers. Sage.
  • Saltz, J., Wild, D., Hotz, N., & Stirling, K. (2018). Exploring project management methodologies used within data science teams. In Twenty-fourth Americas Conference on Information Systems, (pp. 1–5). New Orleans, LA, USA.
  • Saunders, M., Lewis, P., & Thornhill, A. (2009). Research methods for business students (5th ed.). Pearson Education Ltd.
  • Schmidt, C., & Sun, W. N. (2018). Synthesizing agile and knowledge discovery: Case study results. Journal of Computer Information Systems, 58(2), 142–150. https://doi.org/10.1080/08874417.2016.1218308
  • Schüritz, R., Brand, E., Satzger, G., & Bischhoffshausen, J. (2017). How to cultivate analytics capabilities within an organization? – Design and types of analytics competency centers. In Proceedings of the 25th European Conference on Information Systems (ECIS) (pp. 1–15). Guimarães, Portugal.
  • Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge University Press.
  • Tallon, P. P., Ramirez, R. V., & Short, J. E. (2013). The information artifact in IT governance: Toward a theory of information governance. Journal of Management Information Systems, 30(3), 141–178. https://doi.org/10.2753/MIS0742-1222300306
  • Tiwana, A., Konsynski, B., & Venkatraman, N. (2014). Special issue: Information technology and organizational governance: The IT governance cube. Journal of Management Information Systems, 30(3), 7–12. https://doi.org/10.2753/MIS0742-1222300301
  • Vial, G. (2019). Understanding digital transformation: A review and a research agenda. Journal of Strategic Information Systems, 28(2), 118–144. https://doi.org/10.1016/j.jsis.2019.01.003
  • Vidgen, R., Shaw, S., & Grant, D. B. (2017). Management challenges in creating value from business analytics. European Journal of Operational Research, 261(2), 626–639. https://doi.org/10.1016/j.ejor.2017.02.023
  • Vithayathil, J. (2018). Will cloud computing make the Information Technology (IT) department obsolete? Information Systems Journal, 28(4), 635–649. https://doi.org/10.1111/isj.12151
  • Ward, J. S., & Barker, A. (2013). Undefined by data: A survey of big data definitions. arXiv preprint arXiv:1309.5821. https://arxiv.org/abs/1309.5821
  • Watson, H. J. (2014). Tutorial: Big data analytics: Concepts, technologies, and applications. Communications of the Association for Information Systems, 34(1), 1247–1268. http://aisel.aisnet.org/cais/vol34/iss1/65
  • Watson, H. J., & Wixom, B. H. (2007). The current state of business intelligence. Computer, 40(9), 96–99. https://doi.org/10.1109/MC.2007.331
  • Wegener, R., & Sinha, V. (2013). The value of big data: How analytics differentiates winners. Bain & Company. Retrieved November 4, 2019, from https://www.bain.com/insights/the-value-of-big-data
  • Winkler, T. J., & Brown, C. V. (2013). Horizontal allocation of decision rights for on-premise applications and software-as-a-service. Journal of Management Information Systems, 30(3), 13–48. https://doi.org/10.2753/MIS0742-1222300302
  • Wixom, B. H., Yen, B., & Relich, M. (2013). Maximizing value from business analytics. MIS Quarterly Executive, 12(2), 111–123. https://aisel.aisnet.org/misqe/vol12/iss2/6
  • Wu, P.-J., Straub, D. W., & Liang, T.-P. (2015). How information technology governance mechanisms and strategic alignment influence organizational performance: Insights from a matched survey of business and IT managers. MIS Quarterly, 39(2), 497–518. https://doi.org/10.25300/MISQ/2015/39.2.10
  • Yamada, A., & Peran, M. (2018). Governance framework for enterprise analytics and data. In 2017 IEEE International Conference on Big Data (pp. 3623–3631). Boston, MA, USA. https://doi.org/10.1109/BigData.2017.8258356
  • Yin, R. K. (2017). Case study research and applications: Design and methods. SAGE Publications.
  • Zogaj, S., & Bretschneider, U. (2014). Analyzing governance mechanisms for crowdsourcing information systems: A multiple case analysis. In Proceedings European Conference on Information Systems (pp. 1–11). Tel Aviv, Israel.