Research Article

Agile work practices: measurement and mechanisms

Pages 1-22 | Received 29 Sep 2021, Accepted 27 Jun 2022, Published online: 11 Jul 2022

ABSTRACT

Organizations increasingly follow agile management frameworks (e.g., Scrum) to implement practices that aim to enable continuous change. Currently, it is unclear how agile work practices (AWPs) are best conceptualized and measured. The present study draws from the taskwork-teamwork distinction to develop a new theoretical framework and measurement instrument of AWPs. We outline potential mechanisms of AWPs in terms of (a) temporality, (b) managerial control, (c) team processes, and (d) work design. Based on this framework, we validate measures of agile practices with data collected from 269 different teams, including multisource and multiwave data (n = 1664 observations). We first establish the factorial validity, internal consistency, test-retest reliability, and measurement invariance of the instrument. Subsequently, we show that AWPs diverge from centralized bureaucracy and converge with measures of emergent team planning, autonomy, and feedback. The pattern of relationships with variables in the nomological network supports the taskwork-teamwork model. Results of multilevel regression analyses indicate that the use of AWPs is associated with favorable team planning behaviors and enriched work design experiences. By disentangling the AWP concept from software development and popular management frameworks, this study broadens the scope of research on agility.

In an increasingly complex world, organizations need structures to facilitate continuous change (Brown & Eisenhardt, Citation1997; Gilson et al., Citation2005; Grass et al., Citation2020; Ohly et al., Citation2006). Agile management frameworks promise to create such structures by establishing a range of practices that have their origins in software development (Beck et al., Citation2001), but are now increasingly used in other work areas (Annosi et al., Citation2020). Consider, for instance, a team that is tasked with the development of a new marketing campaign. Based on the logic of agile management (Sutherland, Citation2014), this team may split up the advertisement development process into sprints, or iterative work cycles of about four weeks (Van Oorschot et al., Citation2018). By the end of each sprint, the team aims to have developed and tested a new feature of the advertisement through low-cost experiments (e.g., A/B testing; Ghosh, Citation2021). Throughout the agile development process, the team constantly adapts the campaign to changing requirements and reflects on whether their approach still meets the needs of end-users (Grass et al., Citation2020; Liu et al., Citation2019).

This example indicates that agile management tries to enhance the speed of decision-making, foster team reflexivity, and enable co-creation with the end-users of products (Koch & Schermuly, Citation2021). Scholars and practitioners agree that these are important characteristics for work in the 21st century (Petermann & Zacher, Citation2021), not only in software development contexts but more broadly, as work becomes more digitalized, knowledge-intensive, and socially interdependent (Cooper, Citation2021; Hewett & Shantz, Citation2021; Mergel et al., Citation2021; Schippers et al., Citation2015; Vough et al., Citation2017). However, there currently exists some confusion about the concept of agile working. In executive surveys, managers have associated the term “agile” with chaos and a loss of managerial control (Girod & Králik, Citation2021). In contrast, employees have reported heightened peer control and rule pressure due to agile working (Khanagha et al., Citation2022; Stray et al., Citation2020). Finally, researchers in our field likely relate the term “agile” to behavioural constructs such as flexibility (He et al., Citation2014), adaptivity (Maynard et al., Citation2015), or proactivity (Potočnik & Anderson, Citation2016). In other words, current perspectives on agile working are fragmented, disconnected, and perhaps even paradoxical (Prange, Citation2021).

One reason for this fragmented state of research is the lack of theory on the concept of agile working (Niederman et al., Citation2018). Moreover, organizational psychologists have raised concerns about existing measures of agile work practices and called for “a psychometric validation study to develop a valid measurement concept that can be used consistently across different studies” (Rietze & Zacher, Citation2022, p. 19). To develop valid measures, we need to be more specific about the nature of the phenomenon. Agile practices are usually implemented in team-based organizations and represent a set of routines that are collectively practiced by team members (Grass et al., Citation2020; Junker et al., Citation2021). Building on this observation, we examine agile work practices from the perspectives of the organizational routines literature (Feldman & Pentland, Citation2003), the work teams literature (Mathieu et al., Citation2019), and the work design literature (Parker, Citation2014). Rather than investigating the abundance of practices that have been proposed by management consultants, we focus on practices that (a) have been implemented in a broad range of organizations (Tripp & Armstrong, Citation2018), (b) enable continuous change according to theory (Brown & Eisenhardt, Citation1997), and (c) can be examined through the lens of the established taskwork-teamwork continuum (Crawford & LePine, Citation2013; Fisher, Citation2014; Marks et al., Citation2001).

Agile taskwork is characterized by short-term goal cycles (i.e., sprints) and an iterative approach to task accomplishment (Ghosh & Wu, Citation2021; Liu et al., Citation2019). Iterative development has been argued to be the core indicator of agility (So, Citation2010), and refers to the practice of developing work outputs in repeated cycles through continuous improvement and testing of different elements or prototypes of the product. Agile teamwork is characterized by frequent check-in meetings to enable goal monitoring (also known as “stand-up meetings”) and extensive after-action-reviews to foster team reflexivity (also known as “retrospective meetings”; Hummel et al., Citation2015; Stray et al., Citation2020). These goal monitoring and reflection meetings facilitate agile taskwork (Ghosh & Wu, Citation2021). Thus, agile taskwork and agile teamwork are complementary, yet conceptually distinct manifestations of agile working. Owing to their broader applicability (Tripp & Armstrong, Citation2018), we refer to these activities as Agile Work Practices (AWPs), and distinguish them from domain-specific agile software development practices (Venkatesh et al., Citation2020).

Currently, agile working is mostly associated with software development, and the scientific literature on this topic is dominated by researchers in the field of information systems (Dingsøyr et al., Citation2012; Hoda et al., Citation2017). The Agile Work Practices Instrument (AWPI) validated here may inspire management scholars and organizational psychologists to conduct research on agile working. Until now, management scholars have tended to refrain from developing theory on agile working itself and have mostly utilized agile team contexts for investigating related phenomena, such as prioritizing (Kremser & Blagoev, Citation2021), group pressure (Khanagha et al., Citation2022), and peer reactions to proactive behaviour (Twemlow et al., Citation2022). We hope the AWPI provides a basis for more specific research on the concept of agile working.

The AWPI can also be used in practice to monitor the progress of agile transformation programmes (Girod & Králik, Citation2021), and as a feedback instrument for team development (Mathieu & Rapp, Citation2009). To date, a whole industry has emerged around agile management certifications (PMI & AgileAlliance, Citation2017), and a quick search shows that, in the United States alone, more than 30,000 open vacancies for agile coaches or agile product owners are listed on popular job platforms (e.g., see indeed.com, May 2022). The present study aims to give organizational psychologists a voice in the conversation on agile working and may encourage them to contribute to this conversation with their knowledge of team planning and work design (Parker et al., Citation2019). Thus, by validating a new measurement instrument and by proposing a framework that may explain the underlying mechanisms of AWPs, this study also contributes to improving the practice of agile working in organizations.

Theoretical background

Origins of agile working

The use of the term “agile” in a work context dates back to the agile manifesto, which is a document that lists a set of values and principles formulated by a group of software developers at the beginning of the new millennium (Beck et al., Citation2001). The agile manifesto was written as an antidote to the prevailing Waterfall paradigm of software development. Waterfall development emphasized the adherence to detailed and long-term strategic plans, which did not fit well with the dynamics of the volatile software development industry (Maruping et al., Citation2009). The proponents of the agile manifesto favoured a lighter planning approach and called for organizing work around self-managing teams that frequently reflect on whether plans still fit with external demands (e.g., customer preferences). The agile manifesto further contributed to the emergence of commercial project management frameworks and certifications such as Scrum (Sutherland, Citation2014) and Kanban (Anderson, Citation2010).

These management frameworks, which have also been labelled as “agile methods” (So, Citation2010), represent a holistic set of practices that aim to enable teams to work in the spirit of the agile manifesto. Thus, the agile manifesto is seen as the underlying philosophy behind agile methods and practices. In addition, we define agile working as the enactment of this philosophy in terms of actual work behaviour (see also Junker et al., Citation2021). Similar distinctions between management philosophies, practices, and behaviour have been made in scholarly writings on total quality management (TQM) in the 1990s (Dean & Bowen, Citation1994) and more recently in reviews on lean development (Balzer et al., Citation2019; Niepce & Molleman, Citation1998). In contrast to TQM and lean, which are traditionally associated with the manufacturing industry, agile practices are traditionally associated with new product development (NPD) and the software development sector (Grass et al., Citation2020; Khanagha et al., Citation2022; Rietze & Zacher, Citation2022). Next, we argue that AWPs are broader in scope than originally thought.

Conceptualizing agile work practices (AWPs) for organizational research

Here we disentangle agile practices from commercial project management frameworks and propose that these practices can also be used implicitly, without the adherence to the rules of Scrum (Schwaber & Sutherland, Citation2017) or Kanban (Anderson, Citation2010). Moreover, we focus on practices that can be implemented beyond the NPD and software domains, which we refer to as Agile Work Practices (AWPs; Junker et al., Citation2021). We define AWPs as a cluster of team planning routines, which aims to facilitate change-oriented behaviour. By defining AWPs as routines, we imply that these practices are “a repetitive, recognizable pattern of interdependent actions, involving multiple actors” (Feldman & Pentland, Citation2003, p. 96). Here, the routine actors are team members who repeatedly perform deliberate planning activities to structure taskwork and teamwork in an agile way (i.e., facilitating change-oriented action; cf., Doeze Jager-van Vliet et al., Citation2022; Doz, Citation2020; Grass et al., Citation2020; Petermann & Zacher, Citation2021).

By defining AWPs as a routine cluster, we emphasize that there exist interrelations and complementarities between the specific agile practices (see, Kremser & Schreyögg, Citation2016). For instance, iterative development often implies shorter goal cycles and the need for more frequent agile meetings to enable goal monitoring and reflection (Ghosh & Wu, Citation2021). Teams implement these routines by (a) following prescriptions of agile frameworks (i.e., importation), (b) tailoring prescribed routines (i.e., evolution), or by (c) inventing routines that follow a similar organizing principle (i.e., creation; cf., Deken et al., Citation2016; Gersick & Hackman, Citation1990). Instead of determining the what of routines (i.e., ostensive aspects), AWPs shape the how of routines (i.e., performative aspects; Feldman & Pentland, Citation2003). Hence, the AWP concept is not tied to a specific routine content such as the hiring, software coding, or academic writing routine. Instead, we argue that each of these activities can be planned in an agile way.

By conceptualizing AWPs as a cluster of team planning routines, we can account for the aforementioned paradoxical perspectives on agility (Girod & Králik, Citation2021; Khanagha et al., Citation2022; Lewis et al., Citation2014; Prange, Citation2021). As a cluster of routines, AWPs offer teams control and stability (Gersick & Hackman, Citation1990), yet – at the same time – the structure provided by AWPs may facilitate change and flexibility (Brown & Eisenhardt, Citation1997; Deken et al., Citation2016; Dönmez & Grote, Citation2018; Grote et al., Citation2018). Therefore, conceptualizing AWPs as routines offers a broad theoretical basis for the concept that allows for the integration of macro (e.g., Monteiro & Adler, Citation2021), meso (e.g., Mathieu et al., Citation2019), and micro-level streams of organizational research (e.g., Potočnik & Anderson, Citation2016). Next, we introduce various mechanisms that may explain how AWPs facilitate continuous change (see Table 1).

Table 1. A conceptual analysis of agile work practices.

How AWPs facilitate continuous change

In the following, we relate the AWP concept to the seminal work of Brown and Eisenhardt (Citation1997), who investigated how routines may enable continuous change. Through case comparisons of six organizations in the computer industry, they developed the concept of links in time, defined as “explicit organizational practices that address past, present, and future time horizons and the transitions between them” (Brown & Eisenhardt, Citation1997, p. 29). Here, we introduce four AWPs that may facilitate continuous change by creating distinct links in time.

Sprints. Agile teams plan their tasks in relatively short work cycles or intervals, also known as sprints (Van Oorschot et al., Citation2018). The importance of short-term goal setting for enabling continuous change was already noticed by Brown and Eisenhardt (Citation1997). They concluded that successful firms relied on shorter planning cycles to “become entrained to the rhythm of the environment” (p. 25). Entrainment is a biological concept (Ancona et al., Citation2001) that describes how systems tend to synchronize over time (e.g., bodily changes across the day-night cycle). By shortening their goal cycles, agile teams become entrained to their volatile environment. Agile work environments are characterized by frequently changing customer requests and sudden resource constraints, which require dynamic prioritizing (Kremser & Blagoev, Citation2021). Sprints may also promote entrainment within the team, because individuals are better able to coordinate their behaviours when they work towards specific performance goals (Nahrgang et al., Citation2013). Thus, sprints fulfill an important coordinative function for timing work activities, which may facilitate continuous change.

Iterative development. Agile teams tend to approach their goals in a way that is commonly referred to as iterative development (Ghosh & Wu, Citation2021; Tripp et al., Citation2016). For instance, if we had to develop a new mobile application in an agile way, we first would come up with a prototype or minimum viable product (Lee & Geum, Citation2021). Subsequently, we would engage in user-testing to obtain feedback on this preliminary product. Then, we would continue to release new features of the product and try to obtain additional feedback by means of other low-cost experiments (e.g., A/B testing; Ghosh, Citation2021). This iterative approach allows for incorporating changes in user-needs throughout the whole development process. Thus, iterative development may create the ability to continuously “probe into the future with a variety of low-cost experiments” (Brown & Eisenhardt, Citation1997, p. 25). More generally, iterative development builds on the logic of effectuation because this practice creates conditions that allow for affordable loss (e.g., failed prototypes of products; Sarasvathy, Citation2001). This approach is not only applicable in software development. Iterative development can even be practiced in our very own work as scholars (Footnote 1), as described in the book “The lean PhD” (Kirchherr, Citation2019).

Stand-up meetings. To coordinate team members’ activities, agile frameworks (e.g., Scrum) recommend conducting short daily check-in meetings also known as “standups”. Team interactions during stand-up meetings typically address members’ activities for the day and potential impediments that hinder goal progress (Hummel et al., Citation2015). Thus, stand-up meetings can be seen as a goal monitoring practice, because these meetings direct members’ attention towards what needs to be done for goal attainment (Rapp et al., Citation2014). In terms of temporal linkages, stand-up meetings enable teams to manage the present because these meetings potentially create a combination of “clear responsibilities and priorities coupled with extensive communication” (Brown & Eisenhardt, Citation1997, p. 15). Although there is a debate about how daily standups should be conducted (e.g., what should be discussed, see, Stray et al., Citation2020), merely having this formal structure may already help teams to coordinate effectively and to integrate knowledge in a flexible manner (Okhuysen & Eisenhardt, Citation2002).

Retrospective meetings. According to the agile manifesto, teams should reflect on their activities and plans very regularly (Beck et al., Citation2001). This is commonly practiced in formal retrospective meetings, which take place at the end of a sprint period (i.e., approximately every four weeks; Schwaber & Sutherland, Citation2017). During these meetings, team members collectively learn from the past by reflecting on their activities. Thus, retrospective meetings may be seen as a type of after-action review, defined as a “systematic technique that turns a recent event into a learning opportunity through a combination of task feedback, reflection, and discussion” (Keiser & Arthur, Citation2021, p. 1108). This potentially impacts team reflexivity, or the capacity of a team to jointly question its own functioning (Schippers et al., Citation2015). In addition, retrospective meetings can help to establish a psychologically safe work environment (Hennel & Rosenkranz, Citation2021), and thereby may contribute to team learning (Edmondson, Citation1999). Finally, as a transition moment between sprint periods, retrospective meetings may create temporal linkages by nudging teams to be “rhythmically moving from the past to the present and forward into the future” (Brown & Eisenhardt, Citation1997, p. 30).

Commonalities and interrelations. Building on the work of Brown and Eisenhardt (Citation1997), we proposed that the four AWPs function via distinct temporal mechanisms that enable entrainment (i.e., sprints) and create links with the future (i.e., iterative development), the present (i.e., stand-up meetings), and the past (i.e., retrospective meetings). However, these practices also share certain commonalities that can be mapped onto the well-known taskwork-teamwork continuum (Crawford & LePine, Citation2013; Fisher, Citation2014; Marks et al., Citation2001). Similar distinctions have been utilized in conceptualizations of leadership (i.e., task vs. relationship-oriented; Yukl et al., Citation2002), diversity (i.e., functional vs. social; Gerpott & Lehmann-Willenbrock, Citation2016), and conflict (i.e., task vs. interpersonal; Shah et al., Citation2021). Moreover, these dimensions capture the two primary needs of workgroups, namely (1) goal accomplishment (i.e., taskwork), and (2) addressing members’ socio-emotional needs (i.e., teamwork; McGrath et al., Citation2000; Tuckman, Citation1965). Validating the second-order factor structure will facilitate an integration of the AWP concept with the broader literature (Abele & Wojciszke, Citation2014), and with theories of team processes that are based on the taskwork-teamwork dichotomy (Marks et al., Citation2001).

In the team literature, taskwork generally refers to how teams approach their goals and structure work assignments (Crawford & LePine, Citation2013; Fisher, Citation2014). Agile taskwork, in particular, is structured around short goal cycles that allow for developing work outputs in an iterative manner. Hence, sprints and iterative development are AWPs that indicate how agile teams address their need for goal accomplishment (McGrath et al., Citation2000). Therefore, we predict that these AWPs load on a higher-order taskwork factor. Teamwork refers to how interactions, roles, and responsibilities are organized in groups (Crawford & LePine, Citation2013; Fisher, Citation2014). Agile teams rely on frequent goal monitoring and reflection meetings to facilitate group functioning (Ghosh & Wu, Citation2021). Hence, stand-up and retrospective meetings are AWPs that indicate how agile teams communicate and address members’ socio-emotional needs (Bales, Citation1950; Tuckman, Citation1965). Thus, we expect that stand-up and retrospective meetings load on a higher-order teamwork factor. Other scholars have referred to agile teamwork as “social agile practices” (Gupta et al., Citation2019; Hummel et al., Citation2015), supporting this classification.

Agile taskwork and agile teamwork are not completely independent constructs. Some empirical overlap is expected because the frequency of agile meetings depends on the length of sprints (Gersick, Citation1988), or vice versa because these work cycles are structured around meetings (cf., Okhuysen & Waller, Citation2002). Similarly, agile teamwork may provide input for members’ iterative development activities and these activities are discussed within the context of agile meetings (Twemlow et al., Citation2022). If we consider teams as complex adaptive systems (CAS; McGrath et al., Citation2000), interrelations among the four AWPs may be explained by how these practices “interact” with each other to create a social system that adapts to its environment (Ramos-Villagrasa et al., Citation2018). As customers demand more rapid product development, the duration of the sprint cycles may be shortened, teams rely more on short stand-up meetings instead of extensive retrospective meetings, and iterative development may focus on the quick exploitation of existing prototypes rather than the exploration of novel solutions (cf., Ghosh & Wu, Citation2021). Thus, relationships among the four AWPs not only occur because they address similar internal needs of the team, but also because of how the team engages with the external environment (e.g., customers, technology, other teams, and so forth).

Although the taskwork-teamwork distinction is a simplification of organizational reality, this dichotomy may help to develop a more coherent understanding of the mechanisms of AWPs. Specifically, we later hypothesize that agile taskwork and teamwork influence distinct team planning and work design mechanisms. Moreover, it should be noted that the taskwork-teamwork distinction is not in conflict with other possible conceptualizations of AWPs, such as more nuanced team process theories (Ishak & Ballard, Citation2012). Instead, our hypothesized higher-order factor structure provides a reasonable tradeoff between parsimony and complexity, and narrow team processes are distinguishable on a taskwork-teamwork continuum (see, Marks et al., Citation2001, p. 357). For example, from a CAS perspective (Arrow et al., Citation2000), we may expect that over time certain AWPs cluster because they help teams adjust to environmental demands that require similar taskwork and teamwork processes. This reasoning is consistent with our conceptualization of AWPs as clusters of planning routines that can be distinguished on a taskwork-teamwork continuum (Fisher, Citation2014; Kremser & Schreyögg, Citation2016). Therefore, we predict the following factor structure:

Hypothesis 1: The Agile Work Practices Instrument (AWPI) will show a four-factor structure, representing items measuring (1) sprints, (2) iterative development, (3) stand-up meetings, and (4) retrospective meetings.

Hypothesis 2: Agile work practices have two higher-order dimensions, namely: (a) agile taskwork (i.e., sprints and iterative development), and (b) agile teamwork (i.e., stand-up meetings and retrospective meetings).

AWPs imply less centralized bureaucratic control

Agile work practices have been advocated as a new form of planning that potentially reduces organizational bureaucracy (Girod & Králik, Citation2021; Rigby et al., Citation2020). Given the breadth of the bureaucracy construct (Monteiro & Adler, Citation2021), it is necessary to be more specific about which type of bureaucracy is reduced by the implementation of AWPs. When scholars (Mergel et al., Citation2021) and practitioners (Rigby et al., Citation2020) refer to agility as the antidote to bureaucracy, they usually mean centralized bureaucracies characterized by high degrees of workflow formalization and centralized decision-making (Adler & Borys, Citation1996; Langer et al., Citation2019). Instead of relying on formal hierarchies, agile organizations strive for rapid decision-making within teams (Petermann & Zacher, Citation2021). This does not imply chaos, because the use of AWPs formalizes parts of the team’s work processes (i.e., by structuring taskwork and teamwork). Moreover, agile teams replace hierarchy with concertive control, which is a form of peer pressure through which teams ensure that each member complies with social norms (Barker, Citation1993). This shift in managerial control happens because AWPs replace centralized decision-making with locally empowered decision-making by team members (Grass et al., Citation2020). Given that AWPs partly formalize work processes and contribute to the maintenance of managerial control, we do not see the construct as the polar opposite of bureaucracy – but as located on a different bureaucratic continuum (Monteiro & Adler, Citation2021). The bureaucracy construct is relevant to the context of AWPs, yet different in nature because AWPs promote concertive control instead of centralized bureaucracy (see, Khanagha et al., Citation2022). In sum, we propose the following discriminant validity hypothesis (Campbell & Fiske, Citation1959):

Hypothesis 3: Agile work practices correlate weakly and negatively with centralized bureaucratic control.

AWPs shape emergent team planning behaviour

Earlier we distinguished agile taskwork (i.e., sprints and iterative development) from agile teamwork (i.e., stand-up and retrospective meetings). As relatively deliberate planning routines (Feldman & Pentland, Citation2003), agile taskwork and agile teamwork may give rise to more dynamic or fluctuating team processes (Marks et al., Citation2001). Particularly, we expect that AWPs relate to emergent team planning behaviour (Fisher, Citation2014), which refers to relatively spontaneous planning activities that happen in the moment as a result of “a naturally occurring team process” (p. 425). Thus, emergent team planning can be distinguished from AWPs, which may primarily manifest “as part of a predesignated or structured team activity” (Fisher, Citation2014, p. 425). When asked to describe how their team organizes itself, employees commonly refer to certain practices (e.g., AWPs) rather than emergent team planning behaviours. This is because, as a type of routine, AWPs are more consciously available to team members than fluctuating behavioural processes (for similar observations, see, Zellmer-Bruhn, Citation2003, p. 520). As defined and measured by Fisher (Citation2014), emergent team planning focuses on the what, whereas AWPs focus more on the how of planning. Emergent taskwork planning is directed at goals and task-related requirements. Emergent teamwork planning is directed at information-sharing, roles, and relationships. Since the what and the how of team planning are likely interrelated (Mathieu & Rapp, Citation2009; Mathieu & Schulze, Citation2006), we expect that AWPs correlate with emergent planning. Thus, building on Hypothesis 2 and the conceptualization of Fisher (Citation2014), we advance hypotheses on the convergent validity of our new instrument (DeVellis, Citation2016):

Hypothesis 4: Agile taskwork is uniquely and positively associated with emergent taskwork planning (i.e., planning directed at goal and task-requirements).

Hypothesis 5: Agile teamwork is uniquely and positively associated with emergent teamwork planning (i.e., planning directed at interaction and role-requirements).

AWPs have implications for work design

Prior research introduced the AWP concept from a work design perspective (Rietze & Zacher, Citation2022; Tripp et al., Citation2016). Broadly defined, work design refers to “the content and organization of one’s work tasks, activities, relationships, and responsibilities” (Parker, Citation2014, p. 662). Although we are reluctant to equate AWPs with job design as done by Tripp et al. (Citation2016), we recognize that the objective use of these practices has consequences for how employees perceive the design of their job. Agile taskwork may influence work design through vertical loading, that is, granting employees more autonomy for self-regulated action (Slocum & Sims, Citation1980). Indeed, there is evidence that agile taskwork practices, such as iterative development, correlate positively with perceptions of job autonomy (Rietze & Zacher, Citation2022; Tripp et al., Citation2016). Agile teamwork may shape work design by opening feedback channels, that is, broadening employees’ knowledge of their own functioning and performance (Slocum & Sims, Citation1980). In line with this assertion, perceptions of agile teamwork have been related to social job resources, such as feedback (Tripp et al., Citation2016) and peer support (Rietze & Zacher, Citation2022). A limitation of these prior studies on agile work design is their use of individual-level data for inferring relationships between constructs that are theorized at the team- and job-level, respectively (for why this matters, see, Bakker & Demerouti, Citation2017; Bliese et al., Citation2018). Therefore, to increase theory-method fit (Edmondson & McManus, Citation2007), we formulate and test hypotheses on team-level work design mechanisms of AWPs. Specifically, we predict the following: the more a team practices agile taskwork, the more autonomy is collectively experienced by team members; and the more a team practices agile teamwork, the more feedback is collectively experienced by team members. Thus, building on Hypothesis 2 and prior studies on agile work design (Rietze & Zacher, Citation2022; Tripp et al., Citation2016), we propose:

Hypothesis 6: Agile taskwork is uniquely and positively associated with team autonomy.

Hypothesis 7: Agile teamwork is uniquely and positively associated with team feedback.

Exploring the contexts in which AWPs are implemented

Over the past ten years, organizations have been trying to implement AWPs in large-scale transformation processes (Rigby et al., Citation2020). Such transformations have been observed in a wide range of industries, including banking (Barton et al., Citation2018), retail (Bernstein et al., Citation2016), and the technology sector (Laanti et al., Citation2011). Some organizations have used a “big bang approach” and implemented AWPs throughout all departments almost overnight (Denning, Citation2021), whereas other firms used a more incremental approach and slowly implemented AWPs across different work areas (Girod & Králik, Citation2021). Thus, the question arises whether (a) the use and (b) the effects of AWPs differ across the phases or stages of an agile transformation process. Similarly, it is still unclear to what extent AWPs can be implemented in different work areas since most agile practices originated in software development (Annosi et al., Citation2020). With our new instrument, differences in the extent to which AWPs are practiced can be tested by comparing mean scores. Moreover, we can explore whether the potential effects of AWPs hold across contexts, by examining the direction and strength of relationships in the nomological network across different types of teams. We address these empirical issues with the following research questions:

RQ1: How do the mean levels of AWPs vary across (a) different types of teams, and (b) different phases or stages of an agile transformation?

RQ2: How do the relationships between AWPs and variables in the nomological network vary across (a) different types of teams and (b) different phases or stages of an agile transformation?

Methods

Overview of the data sources

The data used in this study were collected as part of a larger research programme on agile working, based on three independent data collection efforts. Table 2 provides an overview of the main demographic characteristics of the different samples used in the present study. Sample A is a heterogeneous convenience sample, based on data collected in early 2019 via the authors’ social networks (e.g., LinkedIn and personal contacts). Samples B, C, and D are based on data collected from teams at the IT division of a large German transport and logistics organization. This organization, referred to as “AgileOrg” throughout this manuscript, initiated an agile transformation process in 2017. The transformation spans all work areas of AgileOrg, including their “delivery teams” (i.e., IT-services, -maintenance, and -consulting) and their “support teams” (i.e., HR, customer relations, finance). Before the transformation, employees worked together in a departmental work structure. This organizational design was successively replaced by a multiteam system composed of self-managing agile teams. The transformation entailed that large departments were split up into teams, following a programme that closely resembled well-known models of group development (i.e., forming-storming-norming-performing). Employees at AgileOrg primarily belong to only one team, which was registered in the organization’s HR system together with a shared team email account, nominal team size, team type, as well as the team’s progress in the agile transformation programme. Thus, the context at AgileOrg allowed for an investigation of AWPs in different types of teams (i.e., delivery vs. support) and different phases of their agile transformation (i.e., beginning vs. end).

Table 2. Overview of sample and participant characteristics.

The data of Sample B were collected in the summer of 2019 and have been partly used in a previous study (Footnote 2; Junker et al., Citation2021). One year later (i.e., summer 2020), we started another round of data collection at AgileOrg, which allowed us to match teams with data from 2019 (i.e., Sample C), and to contact an independent sample of teams from which no data had been collected previously (i.e., Sample D). Therefore, Sample C is nested within Sample B (i.e., the longitudinal follow-up), while the cases of Sample D are unique. We only retained the data of teams for which we had at least two member responses, as done in prior validation studies of team survey measures (e.g., Fortuin et al., Citation2021; Mathieu et al., Citation2020). For part of Sample D, the present study also includes ratings of the team’s agile coaches (sometimes also called Scrum Masters; e.g., Grass et al., Citation2020). The agile coaches support the teams in HR-related matters and may therefore provide a valid external perspective on AWPs. Hence, the present study includes multiwave (Sample C) and multisource data (Sample D). It should be noted that at the time of the data collection (2019–2020), our studies were exempt from ethical review at our institute due to their correlational nature. In the following, we outline how the different data sources were used in the present study.
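
The team-retention rule can be illustrated with a short sketch. The following Python snippet is a hypothetical illustration (the column names are invented for the example and do not correspond to the study’s actual data files); it simply drops teams with fewer than two member responses.

```python
# Hypothetical sketch of the team-retention rule: keep only teams with at
# least two member responses. Column names ("team_id", "awp_score") are
# illustrative assumptions, not the study's actual variable names.
import pandas as pd

def retain_teams(responses: pd.DataFrame, min_members: int = 2) -> pd.DataFrame:
    """Drop teams that have fewer than `min_members` individual responses."""
    counts = responses.groupby("team_id")["team_id"].transform("size")
    return responses[counts >= min_members].copy()

# Example usage with toy data:
df = pd.DataFrame({"team_id": [1, 1, 2, 3, 3, 3], "awp_score": [4, 5, 3, 2, 4, 5]})
print(retain_teams(df))  # team 2 (a single response) is removed
```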

Overview of the validation approach

Following best-practice recommendations (e.g., Cortina et al., Citation2020; DeVellis, Citation2016; Johnson et al., Citation2012), we validate the AWPI in three phases. In Phase 1, we establish the factor structure, internal consistency (Sample A and B), and test-retest reliability of the AWPI (Sample C). In Phase 2, we use the unique AgileOrg cases (Sample B and D) to test the second-order factor structure and measurement invariance of the instrument (Hypothesis 1 and 2). We also explore differences in the mean levels of AWPs across team types and phases of the agile transformation (RQ1). In Phase 3, using the data of Sample D, we test the construct validity of the AWPI according to the proposed nomological network (Hypothesis 3 to 7). We also explore whether relationships in the nomological network vary across contexts (RQ2). Analysis codes and materials are available on the Open Science Framework (https://osf.io/ndtx4).

Results of phase 1: factorial validity and reliability

Rationale and analytic approach

The first phase of any scale validation study is the development of a broad pool of items that can potentially capture variance in the underlying construct (Hinkin, Citation1995). Before constructing the preliminary AWPI item pool, we conducted interviews with five practitioners in Germany who had substantial experience with agile methodologies (two members of an agile team, two HR experts, and one agile coach). In the interviews, we inquired how agile practices are enacted in different team contexts (e.g., software development vs. other contexts). The insights were used to adapt items of existing scales (So, Citation2010; Tripp et al., Citation2016), which are relatively specific to the software development context and have not been extensively validated. Based on the interviews, we created additional items to capture the full breadth of the AWP construct. This resulted in an item pool of 41 items, which was subsequently reviewed by the interviewees and two experts in organizational psychology who have substantial experience in scale development (one associate and one full professor). Based on their feedback, eight items were discarded because they did not match the proposed definitions of the four AWPs. In the following, we report the analyses that we conducted to refine the instrument and arrive at the final item pool (see Table 3).

Table 3. Items and factor loadings of the AWPI in Sample A.

Exploratory factor analyses

Using the data of Sample A, we performed parallel analyses and exploratory factor analyses (EFA) to further reduce the item pool. Although the sample is relatively small (N = 163 employees), the Kaiser-Meyer-Olkin value of .90 indicated that our data are suitable for factor analyses due to sufficient commonalities among the items (see also, Fabrigar et al., Citation1999). Parallel analyses indicated that there are four latent factors present in the data, which accounted for more than half of the variance. Parallel analysis is superior to other methods for factor retention because this procedure identifies the number of eigenvalues that are larger than eigenvalues based on random simulations of the data (Hayton et al., Citation2004). Subsequently, we performed EFAs with oblique factor rotation to examine the loadings of the four-factor solution. Seven items had substantial cross-loadings (> .25) or loaded on a non-hypothesized factor. These items were subsequently discarded until the EFA solution was “clean” (DeVellis, Citation2016). The Root Mean Square Error of Approximation (RMSEA) for the final EFA solution was .064 (90% confidence interval [.053; .076]), indicating appropriate model fit (Marsh et al., Citation2004). The items administered in Sample A are shown together with factor loadings in Table 3 and correspond to the four hypothesized AWP factors, supporting Hypothesis 1. As indicated in Table 2, there were 36 participants (22%) in Sample A who completed the survey in English instead of German. When excluding these participants, a similar factor structure and model fit were obtained (RMSEA = .063).
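
For readers unfamiliar with parallel analysis, the following Python sketch illustrates its logic under simplified assumptions: observed eigenvalues are retained only if they exceed the corresponding eigenvalues obtained from random data of the same dimensions. This is an illustration of the principle, not the exact routine used for the analyses reported here; function and variable names are our own.

```python
# Simplified sketch of Horn's parallel analysis: retain factors whose observed
# eigenvalues exceed the 95th percentile of eigenvalues from random data of
# the same dimensions.
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 1000, quantile: float = 0.95) -> int:
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]  # descending
    rng = np.random.default_rng(42)
    sim_eig = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eig[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.quantile(sim_eig, quantile, axis=0)
    return int(np.sum(obs_eig > threshold))  # number of factors to retain
```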

Correlations with agile methods, team types, and task interdependence

As part of the survey administered in Sample A, we asked participants to indicate whether they regard their team as an agile team on a scale from (1) traditional team to (5) agile team, and which of the four team types according to the typology of Cohen and Bailey (Citation1997) best applied to their team (i.e., service/production team, consulting/advice team, project/time-limited team, or management team). Moreover, we asked participants to report the frequency of using (a) the main agile methods, Scrum and Kanban (So, Citation2010), (b) the agile software development technique Extreme Programming (Hoda et al., Citation2017), (c) related frameworks, Design Thinking and Lean Development (Balzer et al., Citation2019; Liedtka, Citation2015), and (d) the opposing management framework Waterfall Development (Maruping et al., Citation2009; Venkatesh et al., Citation2020). The survey also included measures of task interdependence as introduced by Tims et al. (Citation2013). As shown in Table 4, our new AWPI measures all correlated significantly with participants’ perceptions of team agility (ranging from r = .41 to r = .50), as well as the perceived use of classic agile methods (ranging from r = .21 to r = .52) and related frameworks (ranging from r = .22 to r = .44). Of the four AWPs, only iterative development (r = .24) and stand-up meetings (r = .18) were significantly, albeit weakly, associated with Waterfall Development. Iterative development was positively associated with task interdependence (r = .21) and project team membership (r = .18). Taken together, the relationships with perceived team agility and agile methods provide some initial evidence for the construct validity of our new scales. The moderate correlations also indicate that the use of AWPs is not tied to a single agile framework, team type, or level of task interdependence. The two significant correlations with Waterfall Development imply that iterative development and stand-up meetings are to some extent also used in “non-agile” contexts – despite being known as the most prominent agile practices (Ghosh & Wu, Citation2021).

Table 4. Pearson product-moment correlations of variables measured in Sample A.

Multilevel confirmatory factor analyses

Using the data of Sample B, we attempted to confirm the hypothesized factor structure in multilevel confirmatory factor analyses (ML-CFA), shown in Table 5. We followed the guidelines of Marsh et al. (Citation2004), who propose that model fit is appropriate when the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI) are above .90, while the Root Mean Square Error of Approximation (RMSEA) and the Standardized Root Mean Square Residual (SRMR) should be less than .08 for acceptable fit. By examining the factor loadings of the initial ML-CFA (Model 3 in Table 5) and the between-team CFA (Model 2 in Table 5), we identified items with relatively weak factor loadings at the between-team level, which can quickly worsen approximate model fit indices (e.g., the SRMR; Asparouhov & Muthén, Citation2018). We only retained items with large between-team factor loadings, given that the AWPI theoretically measures a shared team construct. The standardized factor loadings of the retained items at the between-team level were high (mean = .86, range = .56 to .96). The revised models are based on a smaller, yet factorially purer set of items (see bold items in Table 3). This is desirable from psychometric and practical perspectives (DeVellis, Citation2016). In addition, selecting items based on between-team factor loadings can enhance the predictive validity of collective constructs (Bliese et al., Citation2019). The multilevel CFAs support Hypothesis 1 because the four-factor structure of the AWPI holds on both levels of analysis, with a reasonable fit for the initial model (Model 3 in Table 5) and a good fit for the revised model (Model 6 in Table 5). Thereby, we established cross-level isomorphism because the variance in the items of the AWPI is partitioned in a similar manner at the between-team and the within-team level (Tay et al., Citation2014). Thus, individuals’ item responses provide a valid reflection of team-level agile work practices.
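
As a rough illustration of the variance partitioning that underlies such multilevel CFAs, the sketch below splits the item covariance into a within-team part (member deviations from team means) and a between-team part (covariance of team means). This is a conceptual simplification that ignores sample-size corrections and is not the estimator used for the ML-CFAs reported above; column and function names are assumptions.

```python
# Conceptual sketch only: decompose item covariance into a pooled within-team
# matrix (members' deviations from their team mean) and a between-team matrix
# (covariance of the team means).
import pandas as pd

def within_between_cov(items: pd.DataFrame, team_id: pd.Series):
    team_means = items.groupby(team_id).transform("mean")
    within = items - team_means                         # member deviations
    cov_within = within.cov()                           # within-team covariance
    cov_between = items.groupby(team_id).mean().cov()   # covariance of team means
    return cov_within, cov_between
```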

Table 5. Fit of confirmatory factor models in Sample B.

Internal consistency, test-retest reliability, and cross-lagged relationships

As shown in Table 6, both the long version and the shorter revised version of the AWPI scales have adequate internal consistency reliability according to Cronbach’s alpha (assuming essentially tau-equivalent items) and McDonald’s omega hierarchical (assuming congeneric items). The ICC2 values were acceptable, suggesting that sampling more responses per team is unlikely to drastically change the group means (Bliese et al., Citation2018). The group mean reliability was further underscored by the high levels of within-team agreement (rwgj), which shows that the AWPI measures a shared team construct rather than idiosyncratic perceptions of individuals. For a sub-sample of teams, we were able to calculate test-retest reliability over a one-year period (i.e., Sample C). It appears that agile teamwork is temporally less stable than agile taskwork, according to test-retest correlations (see Table 6). Moreover, agile teamwork at T1 had a cross-lagged relationship with agile taskwork at T2 (β = .22, SE = .11, p = .021), but not vice versa (β = .02, SE = .08, p = .857). This suggests that agile teamwork is predictive of changes in agile taskwork (changes in terms of rank-ordering; Henk & Castro-Schilo, Citation2016). In addition, this indicates that agile taskwork and agile teamwork are empirically distinct (Kenny, Citation1975).
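
To make the reported reliability and agreement statistics concrete, the following sketch shows simplified implementations of Cronbach’s alpha and of the rwg(j) within-group agreement index for a single team, assuming a uniform null distribution on a 7-point response scale. The variable names and the choice of null distribution are illustrative assumptions rather than the study’s exact computations.

```python
# Hedged sketch of two statistics reported above: Cronbach's alpha for a set
# of items and rwg(j) within-group agreement for one team (uniform null
# distribution on a 7-point scale).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def rwg_j(team_items: np.ndarray, n_response_options: int = 7) -> float:
    """team_items: team members x items matrix for a single team."""
    j = team_items.shape[1]
    sigma_e2 = (n_response_options ** 2 - 1) / 12.0   # expected variance if responses were random
    ratio = team_items.var(axis=0, ddof=1).mean() / sigma_e2
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)
```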

Table 6. Internal consistency, group mean stability, median within-team agreement, and test-retest reliability.

Results of phase 2: measurement invariance and mean differences

Rationale and analytic approach

If our new instrument measures AWPs in a similar way across different sub-groups, we can be more confident about the validity of group comparisons and the applicability of the different scales. Psychometrically, this refers to measurement invariance (MI) and is usually tested in three steps (Vandenberg & Lance, Citation2000; Van de Schoot et al., Citation2012). First, we fit the “configural model”, which imposes a similar factor structure but leaves factor loadings and intercepts free to vary. Then, we fit the “metric model”, which constrains factor loadings to equivalence (i.e., relationships with latent variables are the same across groups). Finally, we fit the “scalar model”, which also constrains the intercepts to equivalence (i.e., no systematic response differences between groups are modelled). Before conducting the MI tests, we identify the higher-order factor model that best describes the relationships between the four AWP factors and the 16 selected items from Phase 1 (see bold items in Table 3). Specifically, we compare the hypothesized taskwork-teamwork factor model with (1) a model that posits only one second-order factor, and (2) a model that allows no correlations among the four AWPs, assuming they are completely independent constructs.

Subsequently, we examine the stability of the higher-order factor solution via MI tests. We use individual-level data rather than aggregated team-level data to increase statistical power and obtain a better sample-size-per-parameter ratio, which is necessary for these psychometric tests (Marsh et al., Citation2004; Molenaar, Citation2016). This choice is reasonable, since we had established cross-level isomorphism in Phase 1 via multilevel CFAs (see, Tay et al., Citation2014), and because we are primarily interested in the measurement properties across broad sub-groups. Specifically, we compare the measurement properties across Samples B and D, across members of different types of teams (i.e., delivery vs. support), and across different phases of the transformation at AgileOrg (i.e., beginning vs. advanced phases). Thereby, we provide a more stringent test of Hypotheses 1 and 2. After establishing measurement invariance, we conduct tests of mean differences to answer RQ1. We use the maximum likelihood estimator because cut-off values for model fit indices are based on simulation studies using this estimator (Chen, Citation2007; Cheung & Rensvold, Citation2002; Hu & Bentler, Citation1999).
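
The decision rule behind these invariance tests can be sketched as follows: nested models (configural, metric, scalar) are compared, and invariance is treated as tenable when the changes in CFI and RMSEA stay within commonly used cutoffs (e.g., Chen, Citation2007). The function below is an illustrative simplification; the cutoff values, dictionary keys, and toy numbers are assumptions for the sketch, not values taken from our analyses.

```python
# Illustrative measurement-invariance decision rule based on changes in
# approximate fit indices between nested models.
def invariance_tenable(fit_less_constrained: dict, fit_more_constrained: dict,
                       max_delta_cfi: float = 0.01, max_delta_rmsea: float = 0.015) -> bool:
    delta_cfi = fit_less_constrained["cfi"] - fit_more_constrained["cfi"]
    delta_rmsea = fit_more_constrained["rmsea"] - fit_less_constrained["rmsea"]
    return delta_cfi <= max_delta_cfi and delta_rmsea <= max_delta_rmsea

# Example: comparing a metric model against a configural model (toy numbers)
configural = {"cfi": 0.950, "rmsea": 0.050}
metric = {"cfi": 0.945, "rmsea": 0.055}
print(invariance_tenable(configural, metric))  # True -> metric invariance tenable
```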

Model comparisons

As shown in Table 7, the hypothesized model with agile taskwork (i.e., sprints and iterative development) and agile teamwork (i.e., stand-up and retrospective meetings) as second-order factors fit the data well. In terms of fit indices, this model provides a slightly better representation of the observed data than the more parsimonious one-factor model and the uncorrelated factor model. Thus, it appears empirically justified to make the taskwork-teamwork distinction in the AWPI. Importantly, the factor loadings and intercepts of the hypothesized second-order factor model are largely invariant across subpopulations, according to the CFI and RMSEA indices (see Table 7). Although simultaneously constraining all factor loadings (i.e., metric model) and intercepts (i.e., scalar model) to equivalence results in significant chi-square differences, these differences are not large enough to conclude non-invariance according to methodological guidelines (e.g., Chen, Citation2007; Cheung & Rensvold, Citation2002). Thus, we can be reasonably confident that our instrument measured AWPs in similar ways for participants in the two samples, for members of delivery and support teams, and at different phases of the agile transformation (see Table 7). Collectively, these findings support Hypotheses 1 and 2.

Table 7. Higher-order factor models and measurement invariance tests in Sample B and D.

Mean differences

To address RQ1, we tested for differences in the latent means of agile taskwork and agile teamwork between (a) team types and (b) phases of the agile transformation. We released the intercept constraints on the second-order factors, while keeping the constraints on the first-order factors and the items to estimate the latent mean differences in terms of taskwork and teamwork (see Chen et al., Citation2005). Members of delivery (vs. support) teams scored higher on the latent agile taskwork factor (d = .49, p < .001) and the latent agile teamwork factor (d = .33, p = .002). Similarly, both latent agile taskwork (d = .31, p < .001) and agile teamwork (d = .21, p = .026) were significantly higher at the advanced (vs. beginning) phases of the transformation at AgileOrg. We also conducted Welch’s t-tests on the observed mean differences for the four subscales. As shown in Figure 1, the observed differences mirror the latent mean differences. Noteworthy is that the observed mean differences between team types were not significant for iterative development, nor were the observed differences between the agile transformation phases significant for stand-up meetings and retrospective meetings. Taken together, our answer to RQ1 is (a) members of delivery (vs. support) teams score higher on agile taskwork (particularly, sprints) and agile teamwork, and (b) scores at the advanced (vs. beginning) phases of the agile transformation are particularly higher for agile taskwork and somewhat higher for agile teamwork.

Figure 1. Observed group differences on the AWPI scale scores based on Welch’s t-tests.

Note. We report Cohen’s d as a standardized measure of effect size, *** p < .001, ** p < .01, * p < .05, ns = not significant.
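
For transparency, the observed group comparisons reported in Figure 1 follow the general logic sketched below: Welch’s t-test, which does not assume equal variances, combined with Cohen’s d as a standardized effect size. The snippet is a generic illustration with hypothetical input arrays, not the study’s analysis code.

```python
# Sketch of a Welch's t-test plus Cohen's d for two groups of scale scores.
import numpy as np
from scipy.stats import ttest_ind

def welch_with_cohens_d(group_a: np.ndarray, group_b: np.ndarray):
    t_stat, p_value = ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
    # Cohen's d with a pooled standard deviation
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
                        / (n_a + n_b - 2))
    d = (group_a.mean() - group_b.mean()) / pooled_sd
    return t_stat, p_value, d
```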

Results of phase 3: nomological network and observer ratings

Rationale and analytic approach

Another way of showing differences between agile taskwork and agile teamwork is to examine their relationships with variables in the nomological network. Thereby, we can test whether agile taskwork and agile teamwork have different functions (Morgeson & Hofmann, Citation1999), potentially affecting work outcomes via distinct mechanisms (see Table 1). To this end, the aim of Phase 3 is to investigate relationships with related constructs using the data of Sample D. In doing so, we test the convergent and discriminant validity of our new instrument (Hypotheses 3 to 7). We also examine relationships with agile coach ratings of AWPs to provide a multi-rater perspective on the construct (Campbell & Fiske, Citation1959). Finally, we explore whether relationships of AWPs with established team planning and work design measures differ between team types and phases of the agile transformation to address RQ2.

Measures

Whenever scales were not available in German, the items were translated by the first author and back-translated by a research assistant (discrepancies in the translations were resolved). For all scale scores, significant between-team variance was present, as the F-values calculated with a one-way random-effects ANOVA were all statistically significant (p < .001). We further calculated ICC2 and rwgj to examine whether analyses at the team-level were justified. Means, standard deviations, and correlations of the scales based on aggregated team-level data are shown in Table 8. Readers may notice that some of the ICC2 values are below the recommended .60 value for reasonable group-mean reliability. According to Bliese et al. (Citation2018), low ICC2 values do not harm the validity of team-level analyses but make it harder to detect statistically significant relationships at this level of analysis (i.e., increase type II errors). We refrained from excluding teams with low response rates or low within-group agreement because simulation studies indicated that doing so further reduces statistical power (due to lowering the Level-2 sample size; see, Biemann & Heidemeier, Citation2012). Instead, we jointly examine ICC1, ICC2, and rwgj statistics to consider whether analyses at the team-level are appropriate.
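
The ICC(1) and ICC(2) statistics referred to throughout this section can be derived from a one-way random-effects ANOVA. The sketch below illustrates the standard formulas under simplified assumptions (e.g., using the average team size as the group-size term); the column names are hypothetical and not the study’s actual variable names.

```python
# Sketch of ICC(1) and ICC(2) from a one-way random-effects ANOVA.
import pandas as pd

def icc1_icc2(df: pd.DataFrame, score: str, team: str):
    groups = df.groupby(team)[score]
    k = groups.size().mean()                                  # average team size
    grand_mean = df[score].mean()
    # Sums of squares and mean squares from a one-way ANOVA
    ss_between = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
    ss_within = ((df[score] - groups.transform("mean")) ** 2).sum()
    ms_between = ss_between / (groups.ngroups - 1)
    ms_within = ss_within / (len(df) - groups.ngroups)
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2
```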

Table 8. Pearson product-moment correlations of variables measured in Sample D.

Agile taskwork

Following the results of Phase 2, we measured agile taskwork with the four iterative development (α = .83) and the four sprint (α = .94) items. Internal consistency estimates justified summing the eight items to an overall agile taskwork score (α = .88, ωh = .85). Intraclass correlations and within-group agreement statistics supported the validity of analyses at the team-level (ICC1 = .40, ICC2 = .74, median rwgj = .94).

Agile teamwork

Given the results of Phase 2, we measured agile teamwork with the four stand-up meeting (α = .92) and the four retrospective meeting (α = .89) items. Internal consistency estimates supported summing the eight items to an overall agile teamwork score (α = .91, ωh = .88). Intraclass correlations and within-group agreement indices justified the use of team-level analyses (ICC1 = .34, ICC2 = .68, median rwgj = .94).

Bureaucratic control

We used the 3-item centralized bureaucratic environment scale (α = .78) of Langer et al. (Citation2019). An example item of this scale is “Even small matters have to be referred to someone higher up for a final answer” rated from 1 (fully disagree) to 7 (fully agree). Analyses at the team-level were appropriate according to within-group agreement statistics (ICC1 = .17, ICC2 = .46, median rwgj = .89).

Emergent team planning

Following the work of Fisher (Citation2014), we included a 3-item emergent taskwork planning scale (α = .81; e.g., “My team considers alternative courses of action for completing the task”) and a 3-item emergent teamwork planning scale (α = .81; e.g., “My team clarifies the roles and responsibilities of all members”), rated from 1 (fully disagree) to 7 (fully agree). Within-group agreement indices supported a team-level analysis of Fisher’s measures, although the ICC2 statistics were low (for taskwork planning: ICC1 = .13, ICC2 = .39, rwgj = .89; for teamwork planning: ICC1 = .18, ICC2 = .47, median rwgj = .88).

Work design

We adapted the German version of Morgeson and Humphrey’s Work Design Questionnaire (WDQ; Stegmann et al., Citation2010) to measure team work design. Autonomy was measured as a 3-item index (α = .88) comprising the highest-loading items for autonomy regarding work methods, work scheduling, and decision-making. Feedback from others was measured with the original 3-item scale (α = .77). We used the referent-shift composition approach (Chan, Citation1998) to capture team perceptions of work design (e.g., “The team members receive feedback on their performance from different people in the organization”), rated from 1 (fully disagree) to 7 (fully agree). Within-group agreement statistics justified team-level analyses of the work design measures, although the ICC2 statistics were somewhat low (for autonomy: ICC1 = .16, ICC2 = .45, rwgj = .92; for feedback: ICC1 = .16, ICC2 = .45, median rwgj = .86).

Discriminant validity

We fit a CFA including the final AWPI items and established measures of team planning, work design, and bureaucracy on the individual-level data of Sample D. As before, we use individual-level data for the CFAs to achieve a reasonable sample-size-per-parameter ratio (Marsh et al., Citation2004). Results support the distinctiveness of our new measures because the expected nine-factor solution (i.e., the four AWPs, taskwork planning, teamwork planning, autonomy, feedback, and bureaucracy) provided an appropriate fit to the data (χ2 = 1208.61, df = 398, CFI = .92, TLI = .91, RMSEA = .06, SRMR = .05). Merging the indicators of any two latent variables onto a single factor resulted in a significant deterioration in model fit (Δχ2 ≥ 132.14). Model fit deteriorated the least when merging Fisher’s (Citation2014) emergent taskwork and emergent teamwork indicators (Δχ2 = 132.14, Δdf = 8, p < .001), and the most when merging the sprint and stand-up meeting indicators (Δχ2 = 1120.89, Δdf = 8, p < .001). This indicates that the four primary AWP factors are empirically distinguishable from the other constructs, supporting discriminant validity (DeVellis, Citation2016; Voorhees et al., Citation2016).
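The nested model comparisons above rest on the chi-square difference test: the increase in χ2 when two factors are merged is compared against a χ2 distribution with the corresponding difference in degrees of freedom. A minimal, illustrative check using the values reported in the text:

```python
from scipy.stats import chi2

def delta_chi2_test(chi2_constrained, df_constrained, chi2_free, df_free):
    """Chi-square difference test for nested CFA models."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)  # p-value of the deterioration in fit

# Merging the sprint and stand-up indicators: the constrained model's fit is reconstructed
# from the reported nine-factor fit plus the reported differences.
print(delta_chi2_test(1208.61 + 1120.89, 398 + 8, 1208.61, 398))
```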

Another way of testing discriminant validity is by forming a multitrait-multimethod matrix (MTMM) and examining correlations with constructs that are relevant to the context but different in nature (Campbell & Fiske, Citation1959). As shown in Table 8, the monotrait-heteromethod correlations between member ratings of AWPs and agile coach ratings of AWPs were all positive and significant, satisfying convergent validity (see grey cells in Table 8). These correlations were larger than the heterotrait-monomethod correlations between AWPs and centralized bureaucratic control (see dotted cells in Table 8), satisfying discriminant validity. At the team-level, the correlations between AWPs and centralized bureaucratic control ranged from r = −.29 for iterative development to r = −.09 for stand-up meetings. The latent individual-level correlations between centralized bureaucratic control and AWPs were all statistically significant and negative (r ≤ −.11, p ≤ .032). This pattern of empirical relationships is consistent with Hypothesis 3, which stated that AWPs are weakly and negatively related to centralized bureaucratic control, supporting discriminant validity (Campbell & Fiske, Citation1959).
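For readers who want to reproduce this logic with their own data, the following fragment sketches the comparison of monotrait-heteromethod and heterotrait-monomethod correlations at the team level. All column names (member_*, coach_*, bureaucracy) are hypothetical placeholders rather than the variable names used in the study.

```python
import pandas as pd

def mtmm_check(team_df: pd.DataFrame, trait: str):
    """Compare convergent (same trait, different raters) and discriminant correlations."""
    corr = team_df.corr(method="pearson")
    monotrait_heteromethod = corr.loc[f"member_{trait}", f"coach_{trait}"]  # convergent validity
    heterotrait_monomethod = corr.loc[f"member_{trait}", "bureaucracy"]     # discriminant comparison
    return monotrait_heteromethod, heterotrait_monomethod

# for trait in ["sprints", "iterative_dev", "standups", "retrospectives"]:
#     print(trait, mtmm_check(team_data, trait))
# Convergent validity is supported when the first value is positive and exceeds the
# absolute size of the second value for each practice.
```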

Convergent validity

Hypotheses 4 to 7 proposed that agile taskwork and agile teamwork are differentially related to emergent team planning and work design constructs. Moreover, we specified these relationships at the team-level, such that we expected AWPs to influence collective team planning and work design experiences. To test these relationships, we conduct multilevel regression analyses following the approach of Bliese et al. (Citation2018). Specifically, we separate the relationships of AWPs with the outcomes at the individual and the team-level by including both within-team centred predictors (yielding individual-level slopes) and the aggregated versions of the predictors (yielding team-level slopes). Most relevant for testing our hypotheses are the L2 coefficients in Table 9 because they represent the potential team-level effects of AWPs (see, LoPilato & Vandenberg, Citation2015). We also include a formal test of how much the team-level slopes differ from the individual-level slopes (Table 9). Finally, we explore whether relationships vary across team types and phases of the agile transformation by testing for moderated relationships.
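The sketch below illustrates, under hypothetical column names ('team', 'agile_taskwork', 'taskwork_planning'), how the within-team and team-level slopes described above can be separated in a mixed-effects model. It is a simplified illustration of the Bliese et al. (Citation2018) approach rather than the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def add_within_between(df: pd.DataFrame, predictor: str, team_col: str = "team") -> pd.DataFrame:
    """Split a predictor into a team mean (L2 part) and a within-team centred part (L1 part)."""
    team_mean = df.groupby(team_col)[predictor].transform("mean")
    df[f"{predictor}_b"] = team_mean
    df[f"{predictor}_w"] = df[predictor] - team_mean
    return df

# data = add_within_between(data, "agile_taskwork")
# model = smf.mixedlm("taskwork_planning ~ agile_taskwork_w + agile_taskwork_b",
#                     data, groups=data["team"]).fit()
# The coefficient on agile_taskwork_b is the team-level (L2) slope. Re-fitting with the raw
# predictor plus the team mean ("taskwork_planning ~ agile_taskwork + agile_taskwork_b")
# turns the coefficient on agile_taskwork_b into the L2 minus L1 difference, i.e., the
# formal test of whether the slopes differ across levels of analysis.
```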

Table 9. Multilevel regression coefficients.

Supporting Hypothesis 4, agile taskwork is uniquely and positively related to emergent taskwork planning. Importantly, the L2 slope in Table 9 is significant (γ = .24, p = .001), indicating that agile taskwork relates positively to collective emergent taskwork planning. As shown in Table 9, the L2 slope is significantly smaller (γ = −.23, p = .005) than the L1 slope (γ = .47, p < .001). According to Bliese et al. (Citation2018), differences in the strength of statistical relationships across levels of analysis can indicate a shift in the meaning of constructs, a change in reliability, or both. Both explanations are plausible because the emergent taskwork planning measure of Fisher (Citation2014) aims to capture a shared team construct (which may differ in meaning from an individual perception), but at the same time had a low group-mean reliability in our sample (ICC2 = .39). Supporting Hypothesis 5, agile teamwork is uniquely related to emergent teamwork planning (γ = .22, p = .014), and the individual-level slope (γ = .21, p < .001) is not significantly different (γ = .01, p = .901) from the team-level slope. In line with Hypothesis 6, agile taskwork is uniquely related to team autonomy (γ = .18, p = .010), and the individual-level slope (γ = .14, p = .004) is not significantly different (γ = .04, p = .661) from the team-level slope. Finally, supporting Hypothesis 7, agile teamwork related uniquely to feedback at the team-level (γ = .32, p < .001), and the individual-level slope (γ = .33, p < .001) did not differ significantly (γ = −.01, p = .929) from the team-level slope.

Taken together, the results shown in Table 9 support the convergent validity of our new instrument and indicate that AWPs relate in meaningful ways to collective experiences of emergent team planning (Hypotheses 4 and 5) and work design (Hypotheses 6 and 7). Interestingly, the individual-level slopes and team-level slopes of AWPs with the other variables in the nomological network are highly similar (except for emergent taskwork planning). This corroborates our findings from Phase 1, which implied that there exist isomorphic relationships between the individual-level item responses and the latent team-level constructs (Tay et al., Citation2014). Finally, the slopes of agile taskwork and agile teamwork do not vary depending on team types or phases of the transformation at AgileOrg, as indicated by the lack of statistically significant moderation effects (all p > .05). Therefore, in response to RQ2, we conclude that relationships in the nomological network do not depend on team types or phases of the agile transformation. Hence, AWPs may have similar effects on team planning and work design across contexts.

Discussion

The present study addresses calls for more generalizable theory (Niederman et al., Citation2018) and broadly applicable measures of agile work practices (AWPs; Junker et al., Citation2021; Rietze & Zacher, Citation2022). Building on the routine dynamics (Feldman & Pentland, Citation2003; Kremser & Schreyögg, Citation2016), team process (Fisher, Citation2014; Mathieu et al., Citation2020), and work design literature (Bakker & Demerouti, Citation2017; Parker, Citation2014), we defined AWPs as clusters of team planning routines that aim to facilitate change-oriented action (i.e., agility; Doeze Jager-van Vliet et al., Citation2022; Doz, Citation2020; Grass et al., Citation2020; Petermann & Zacher, Citation2021). We introduced four AWPs that (a) are practiced across a broad range of organizations (Tripp & Armstrong, Citation2018), (b) potentially enable continuous change (Brown & Eisenhardt, Citation1997), and (c) are distinguishable on a taskwork-teamwork continuum (Crawford & LePine, Citation2013).

Agile taskwork is characterized by the reliance on short-term goal cycles (i.e., sprints) and an iterative approach to task accomplishment. Agile teamwork happens in meetings that aim to enable goal monitoring (i.e., stand-up meetings) and reflexivity (i.e., retrospective meetings). According to agile management frameworks (Anderson, Citation2010; Rigby et al., Citation2020; Schwaber & Sutherland, Citation2017), these practices enhance the speed of decision-making, allow for rapid experimentation, and foster co-creation with customers. These are important capabilities for work in the 21st century (Girod & Králik, Citation2021; Lifshitz-Assaf et al., Citation2021). Whether AWPs indeed create these dynamic capabilities is a matter of empirical investigation. The present study sets the groundwork for such investigations in terms of theory and measurement. In the following, we discuss the evidence for the proposed theoretical framework and summarize the psychometric properties of the Agile Work Practices Instrument (AWPI).

Validity of the measures

Building on established theoretical dichotomies (Bales, Citation1950; Crawford & LePine, Citation2013; Fisher, Citation2014), we set out to validate the AWPI in terms of taskwork and teamwork. Results of exploratory and confirmatory factor analyses (CFAs) indicated that the four AWPs are empirically distinct, but are grounded in two higher-order factors (Hypotheses 1 and 2). Specifically, sprints and iterative development load on a higher-order taskwork factor, whereas stand-up and retrospective meetings load on a higher-order teamwork factor. Despite being theoretically distinct, the two higher-order factors are empirically not independent, as shown by the moderate correlations of agile taskwork and agile teamwork (team-level correlations of r = .49 in Sample B, r = .47 in Sample C, and r = .60 in Sample D). Hence, teams that practice agile taskwork tend to simultaneously practice agile teamwork. It should also be noted that the fit of the hypothesized taskwork-teamwork model was only slightly better than the fit of the model with one higher-order factor. Nevertheless, we are confident that the taskwork-teamwork model of AWPs will replicate beyond the present study (see Note 3), and we encourage future research to follow this conceptualization because agile taskwork and agile teamwork may function via distinct mechanisms. Similar conceptualizations have been adopted to understand the mechanisms of leadership (Yukl et al., Citation2002), diversity (Gerpott & Lehmann-Willenbrock, Citation2016), and conflict (Shah et al., Citation2021). Thus, an understanding of AWPs in terms of taskwork (goal accomplishment needs) and teamwork (socio-emotional needs) may facilitate an integration of the concept with the broader literature (e.g., Abele & Wojciszke, Citation2014).

The cross-lagged analyses (Phase 1) indicated that agile teamwork was predictive of changes in agile taskwork over a one-year period (β = .22), but not vice versa (β = .02). This provides additional support for the distinctiveness of the two dimensions. If agile taskwork and agile teamwork were the same construct and empirically redundant, the cross-lagged relationship would not be expected because all variance would be “absorbed” by the auto-correlation paths (see, Kenny, Citation1975). From a theoretical standpoint, the finding that only agile teamwork exhibited significant cross-lagged effects aligns with the idea that early interpersonal activities during a team’s history can have long-lasting effects on taskwork activities (Tuckman, Citation1965). Agile teams that dedicate more attention to forming a shared understanding of members’ roles and responsibilities during stand-up and retrospective meetings (i.e., teamwork) may over time be more able to engage in iterative development activities and to work in sprints (i.e., taskwork). Although we did not predict this finding a priori, it aligns with prior research showing that the development of team charters and performance strategies mainly benefits long-term rather than short-term group functioning (Mathieu & Rapp, Citation2009). In agile teams, such activities happen in the context of stand-up or retrospective meetings (Hummel et al., Citation2015). Hence, future research on agile teams in organizational settings may be able to replicate past findings that are based on research with students engaged in business simulations (Mathieu & Rapp, Citation2009) or course projects (Druskat & Kayes, Citation2000).
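As a simplified, manifest-variable illustration of the cross-lagged logic described above (not the model estimated in the study), each Time 2 score can be regressed on both Time 1 scores. The data frame and its column names below are synthetic and purely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical team-level data, measured one year apart.
rng = np.random.default_rng(1)
n = 180
team_data = pd.DataFrame({"taskwork_t1": rng.normal(size=n), "teamwork_t1": rng.normal(size=n)})
team_data["taskwork_t2"] = 0.5 * team_data["taskwork_t1"] + 0.2 * team_data["teamwork_t1"] + rng.normal(scale=0.8, size=n)
team_data["teamwork_t2"] = 0.5 * team_data["teamwork_t1"] + rng.normal(scale=0.8, size=n)

# Does teamwork at T1 predict taskwork at T2, over and above stability?
m_task = smf.ols("taskwork_t2 ~ taskwork_t1 + teamwork_t1", data=team_data).fit()
# Reverse direction: does taskwork at T1 predict teamwork at T2?
m_team = smf.ols("teamwork_t2 ~ teamwork_t1 + taskwork_t1", data=team_data).fit()

# A significant teamwork_t1 coefficient in m_task combined with a non-significant
# taskwork_t1 coefficient in m_team mirrors the asymmetric pattern reported above.
print(m_task.params, m_team.params)
```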

The taskwork-teamwork distinction was also supported in the analysis of the nomological network during Phase 3. Findings of CFAs and multilevel regression analyses indicated that AWPs are related to, but distinct from, bureaucracy, team process, and work design constructs. The four AWPs have in common that they are weakly and negatively associated with centralized bureaucratic control (Hypothesis 3), which may imply that these practices lie on a different bureaucratic continuum (i.e., concertive instead of centralized; Monteiro & Adler, Citation2021). Providing evidence for convergent validity, we found that agile taskwork related positively to shared experiences of autonomy and emergent taskwork planning. In contrast, the more the teams practiced agile teamwork, the more their members collectively experienced adequate feedback and engaged in teamwork planning behaviour. We emphasize the collective nature of these empirical relationships to highlight that our findings go beyond subjective perceptions. This is because establishing statistical relationships among collective constructs requires a high degree of intersubjectivity (i.e., shared agreement among team members, as indicated by rwgj and ICC2 statistics; Bliese et al., Citation2018). Taken together, the findings support Hypotheses 4 to 7, suggesting that agile taskwork and agile teamwork may function via distinct team planning and work design mechanisms (see, ). While agile taskwork relates to emergent taskwork planning behaviour and team autonomy, agile teamwork converges with emergent teamwork planning behaviour and seems to open feedback channels.

Given the complex nature of the AWP construct, it is unsurprising that not all statistical relationships can be explained in terms of the latent taskwork-teamwork factors. More complex relationships among the four AWPs can be explained by examining agile teams as complex adaptive systems (CAS; McGrath et al., Citation2000; Ramos-Villagrasa et al., Citation2018). From this perspective, we may see AWPs as a dynamic system resulting from team members’ actions (taskwork) and their interactions (teamwork). This system is constrained by contextual dynamics that affect the group’s developmental trajectories. For instance, the addition of a new team member may require more frequent agile meetings to re-establish roles and responsibilities. At the same time, additional labour may allow for reducing the duration of work cycles by lowering the total workload per team member (Van Oorschot et al., Citation2018). To validate more complex explanations of the interrelations between AWPs, psychometric network modelling is a potentially suitable approach (Schmittmann et al., Citation2013). Rather than explaining the covariance of the different AWPs with latent variables (e.g., taskwork-teamwork), this approach would assume that the covariance emerges due to causal “interactions” between the specific practices. To enhance the validity of this approach (Borsboom et al., Citation2021), it will be necessary to collect multiple waves of data to provide stronger evidence for causal interactions of agile practices in a system. Since psychometric network and latent variable models can be statistically equivalent (Van Bork et al., Citation2021), researchers may also employ simulation techniques to examine how AWPs function as a system (e.g., Van Oorschot et al., Citation2018). To this end, the results of survey studies may provide useful starting values for building more complex computational models. Predictions from these models can then be validated using data of real teams (McGrath et al., Citation2000). Thus, even with the advent of computational modelling, survey measures remain important for studying agile teams as complex adaptive systems.
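The network idea can be sketched, in a very reduced form, as the matrix of partial correlations among the four practices obtained from the inverse covariance matrix. In practice, regularized estimation and multiple waves of data would be needed, and the column names below are hypothetical.

```python
import numpy as np
import pandas as pd

def partial_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Partial correlations (network 'edges') from the inverse of the covariance matrix."""
    precision = np.linalg.inv(df.cov().values)
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pd.DataFrame(pcor, index=df.columns, columns=df.columns)

# edges = partial_correlations(team_data[["sprints", "iterative_dev", "standups", "retrospectives"]])
# Non-zero off-diagonal entries are read as direct "interactions" between practices,
# rather than as the effect of a common latent taskwork or teamwork factor.
```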

Applicability of the measures

An integration of the AWP concept in organizational research requires a broadly applicable measurement instrument that can be used across work contexts – beyond software development (see Junker et al., Citation2021). The findings of Phase 1 indicated that the use of AWPs is not restricted to a single agile method, team type, or level of task interdependence. Next, in Phase 2, we showed that the measurement properties of the AWPI are similar for different types of teams and in different phases of their agile transformation. While the AWP measures had similar meanings for broad groups of participants, members of delivery teams had higher mean scores on agile taskwork and agile teamwork than members of support teams (RQ1). Hence, AWPs are used more consistently in contexts where individuals are directly in charge of producing value for external customers (here, customers received IT services, maintenance, and consulting). The most pronounced differences between delivery and support teams existed for the dimension of agile taskwork. This may indicate that it was more difficult to plan tasks in short iterative cycles in support teams, which mainly created value within the organization (here, HR, finance, and sales teams). Nevertheless, our findings regarding measurement invariance show that (1) the AWPI can be administered in a broad range of work contexts, and (2) observed score differences can be interpreted as “true” differences in the underlying latent AWP construct (Vandenberg & Lance, Citation2000). Thus, we are confident that the AWPI is a broadly applicable measure, fulfilling validity standards for research in management and organizational psychology (Cortina et al., Citation2020).

We highlight two important observations regarding the applicability of the construct and the generalizability of our findings that emerged during Phase 3 of this study. First, we found that AWPs are related to more established team planning and work design constructs in similar ways for delivery and support teams. Second, the relationships were not otherwise explained by the agile transformation programme. In other words, although certain teams may use AWPs more frequently, these practices appeared to have similar relationships with work design and team planning across different sub-samples. It should be noted that we did not have specific expectations about this and decided to approach this empirical issue with a research question (RQ2). The present study may be somewhat underpowered to detect potential moderated effects of AWPs because all participating teams were from the same organization, which may have reduced heterogeneity in the use of AWPs and their effects. We do not intend to convey that AWPs have causal effects or that these practices can be used in all types of teams to enable continuous change. Yet, we conclude that valid measurement of AWPs beyond classic areas of application is possible (see also Junker et al., Citation2021; Koch & Schermuly, Citation2021), and that our measures have similar psychometric properties in different samples. Moreover, certain AWPs can arguably be implemented in almost any type of team. For instance, retrospective meetings represent a type of after-action review (or debrief), which has been successfully implemented in a broad range of different teams (for a meta-analysis, see, Keiser & Arthur, Citation2021). Thus, the AWP concept, or at least certain practices, seems applicable and measurable in a broader range of work contexts than initially thought.

Limitations and future research

Although our validation study was extensive, some limitations and open questions about the measurement and mechanisms of AWPs remain. First, our theorizing regarding the distinct temporal mechanisms of the four AWPs (see, ) needs to be validated. We based this theorizing on the work of Brown and Eisenhardt (Citation1997) who used an inductive case study approach to investigate practices that enable continuous change by creating “links in time” (p. 29). Hence, the distinct temporal mechanisms of the four AWPs can potentially be tested by administering the AWPI and relating score differences to qualitative observations in terms of temporal patterning (e.g., Kremser & Blagoev, Citation2021). Another way to validate the temporal mechanisms may be to examine how the use of AWPs impacts perceptions of subjective time (Shipp & Jansen, Citation2021) and team reflexivity (Schippers et al., Citation2007). For instance, based on our framework, we may expect that the use of retrospective meetings induces a past focus among team members (Shipp et al., Citation2009).

Second, the present study was limited in terms of providing evidence for the criterion validity of the AWPI. Although good work design is an important criterion in work and organizational psychology (Parker & Jorritsma, Citation2021), future research may consider more classic outcomes in terms of well-being and performance. There is some evidence that the use of AWPs relates positively to work engagement (Rietze & Zacher, Citation2022), psychological empowerment (Koch & Schermuly, Citation2021), as well as team proactivity and performance (Junker et al., Citation2021). What currently makes it hard to summarize empirical findings on AWPs is the diversity of ways in which the construct has been operationalized. By administering the AWPI, future research can establish the criterion validity of this new instrument. A consistent use of similar measures would facilitate the integration of the agility literature and enable more valid meta-analytic research.

Third, some associations between agile coach ratings and team member ratings did not align with our predictions. For instance, agile coach ratings of stand-up meetings correlated negatively with member ratings of emergent teamwork planning (r = −.24) and autonomy (r = −.40), opposite to what was expected. It is possible that an overinvolvement of agile coaches in stand-up meetings undermines spontaneous planning activities by members (Fisher, Citation2014; Ghosh & Wu, Citation2021) and reduces collective experiences of autonomy (Van Mierlo et al., Citation2006). Thus, rather than being a methodological artefact, such unanticipated correlations may indicate how much the leaders and members of the same team can diverge in their evaluation of group functioning (see, Bolinger et al., Citation2020). Hence, future research may investigate why leaders and members diverge in their perceptions of AWPs, beyond the usual methodological arguments that are used to explain a lack of convergence in multisource ratings (e.g., common method variance).

Finally, it is necessary to cross-validate our findings based on new data from multiple organizations. Although we had access to a wide range of different teams at AgileOrg, the context of this organization was rather unique. For instance, there was strong management support for the implementation of AWPs, and the organization transitioned to agile working in a stepwise manner (instead of a big-bang approach; see, Denning, Citation2021). Our findings regarding measurement invariance (Phase 2) increase our confidence that the AWPI will retain its psychometric properties in more diverse samples. Moreover, we constructively replicated the proposed factor structure in an independent sample with other self-report measures of AWPs (see https://osf.io/ndtx4/). Nevertheless, it is possible that relationships with criterion variables differ between organizations or industrial sectors. Thus, a promising avenue for future research will be to investigate which organizational contexts enhance (or undermine) the potential effects of AWPs on well-being, performance, and innovation outcomes.

Practical implications

As reflected in the label of the concept, AWPs originated in practice and refer to tangible routines that are already used by many organizations worldwide (Annosi et al., Citation2020). Paradoxically, our main practical contribution may thus be that we make the concept of agile working more theoretical, following the statement attributed to Kurt Lewin that “nothing is so practical as a good theory” (Lewin, Citation1945). By integrating the AWP concept with our knowledge of work and organizational psychology, we provide a new framework that practitioners can use to guide the implementation of AWPs and understand their mechanisms. The instrument developed here can also be administered to measure the progress of agile transformation programmes. According to our findings, the AWPI is valid in different team contexts and may therefore serve as an empirical benchmark for agile transformations.

The theoretical framework that we developed to validate the AWPI may also be used to improve the practice of agile working in organizations. Rather than focusing on the adherence to the rules of agile frameworks (e.g., Scrum), we recommend that practitioners focus on theoretically derived mechanisms of AWPs in terms of (a) temporality, (b) managerial control, (c) team planning, and (d) work design (see, ). Organizations may be able to reap the benefits of AWPs by implementing them in ways that enable teams to create links in time (Brown & Eisenhardt, Citation1997), reduce centralized bureaucratic control (Monteiro & Adler, Citation2021), promote emergent team planning (Fisher, Citation2014), and provide favourable work design (Parker, Citation2014). Thus, organizations may focus on these evidence-based mechanisms rather than blindly following popular agile management frameworks (Sutherland, Citation2014). With their knowledge of these mechanisms, work and organizational psychologists can be at the forefront of evidence-based agile transformation programmes.

Conclusion

This study introduces theory and measurement frameworks for research on agile working by providing evidence for the factor structure, reliability, and construct validity of the Agile Work Practices Instrument (AWPI). In line with the taskwork-teamwork dichotomy, we find that AWPs are related to, but distinct from, other constructs in terms of bureaucracy, emergent team planning, and work design. Moreover, our findings show that agile taskwork and agile teamwork may function via distinct team planning and work design mechanisms. We also conclude that the AWPI can be used for research on agility in a range of work contexts and that the concept of agile working is more broadly applicable than commonly thought. Whether the use of AWPs is indeed necessary or sufficient for enabling continuous change is a matter of future empirical research. The present study provides the basis for this research in terms of the measurement and mechanisms of agile work practices.

Data availability

Analysis code and materials are available online (https://osf.io/ndtx4). Data is available on request.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/1359432X.2022.2096439

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1. Consider, for instance, the practice of uploading draft versions of manuscripts on pre-print servers or for conference presentations. This can be seen as an iterative development practice because it represents a low-cost experiment through which authors engage with the end-users of their work. This may enable scholars to continuously update their work with advancements in the field they may otherwise be unaware of.

2. Please note that overlap in terms of variables only exists for our measures of agile work practices. The aim of Junker et al. (Citation2021) was to test a theoretical model of agile working in relation to proactive behaviour and performance. The present study uses the data of Junker et al. (Citation2021) only to provide information on the factorial validity and reliability of the AWPI in Sample B.

3. A re-analysis of publicly available data of AWPs collected by other researchers (https://osf.io/xpzbq), constructively replicates our distinction between agile teamwork and agile taskwork in an independent sample with different self-report measures. Details regarding the re-analysis, including information on CFA model fit and the analysis code can be found in the online supplement (https://osf.io/ndtx4/).

References

  • Abele, A. E., & Wojciszke, B. (2014). Communal and agentic content in social cognition: A dual perspective model. In J. M. Olson & M. P. Zanna (Eds.), Advances in experimental social psychology (Vol. 50, pp. 195–255). Elsevier. https://doi.org/10.1016/B978-0-12-800284-1.00004-7
  • Adler, P. S., & Borys, B. (1996). Two types of bureaucracy: Enabling and coercive. Administrative Science Quarterly, 41(1), 61–89. https://doi.org/10.2307/2393986
  • Ancona, D. G., Okhuysen, G. A., & Perlow, L. A. (2001). Taking time to integrate temporal research. Academy of Management Review, 26(4), 512–529. https://doi.org/10.5465/amr.2001.5393887
  • Anderson, D. (2010). Kanban: Successful evolutionary change for your technology business. Blue Hole Press.
  • Annosi, M. C., Foss, N., & Martini, A. (2020). When agile harms learning and innovation: (and what can be done about it). California Management Review, 63(1), 61–80. https://doi.org/10.1177/0008125620948265
  • Arrow, H., McGrath, J. E., & Berdahl, J. L. (2000). Small groups as complex systems: Formation, coordination, development, and adaptation. Sage.
  • Asparouhov, T., & Muthén, B. (2018). SRMR in Mplus. Retrieved 4 July 2022, from http://www.statmodel.com/download/SRMR2.pdf
  • Bakker, A. B., & Demerouti, E. (2017). Job demands-resources theory: Taking stock and looking forward. Journal of Occupational Health Psychology, 22(3), 273–285. https://doi.org/10.1037/ocp0000056
  • Bales, R. F. (1950). Interaction process analysis: A method for the study of small groups. University of Chicago Press.
  • Balzer, W. K., Brodke, M. H., Kluse, C., & Zickar, M. J. (2019). Revolution or 30-year fad? A role for I-O psychology in Lean management. Industrial and Organizational Psychology, 12(3), 215–233. https://doi.org/10.1017/iop.2019.23
  • Barker, J. R. (1993). Tightening the iron cage: Concertive control in teams. Administrative Science Quarterly, 38(3), 408–437. https://doi.org/10.2307/2393374
  • Barton, D., Carey, D., & Charan, R. (2018). One bank’s agile team experiment. Harvard Business Review, 96, 59–62. https://hbr.org/2018/03/one-banks-agile-team-experiment
  • Beck, K., Beedle, M., Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R. C., Mellor, S., Schwaber, K., Sutherland, J., & Thomas, D. (2001). Manifesto for agile software development. The Agile Alliance.
  • Bernstein, E., Bunch, J., Canner, N., & Lee, M. (2016). Beyond the holacracy hype. Harvard Business Review, 94, 38–49. https://hbr.org/2016/07/beyond-the-holacracy-hype
  • Biemann, T., & Heidemeier, H. (2012). Does excluding some groups from research designs improve statistical power? Small Group Research, 43(4), 387–409. https://doi.org/10.1177/1046496412443088
  • Bliese, P. D., Maltarich, M. A., & Hendricks, J. L. (2018). Back to basics with mixed-effects models: Nine take-away points. Journal of Business and Psychology, 33(1), 1–23. https://doi.org/10.1007/s10869-017-9491-z
  • Bliese, P. D., Maltarich, M. A., Hendricks, J. L., Hofmann, D. A., & Adler, A. B. (2019). Improving the measurement of group-level constructs by optimizing between-group differentiation. Journal of Applied Psychology, 104(2), 293–302. https://doi.org/10.1037/apl0000349
  • Bolinger, A. R., Okhuysen, G. A., & Bonner, B. L. (2020). Investigating individuals’ recollections of group experiences. Academy of Management Discoveries, 6(2), 235–265. https://doi.org/10.5465/amd.2017.0066
  • Borsboom, D., Deserno, M. K., Rhemtulla, M., Epskamp, S., Fried, E. I., McNally, R. J., Robinaugh, D. J., Perugini, M., Dalege, J., Costantini, G., Isvoranu, A.-M., Wysocki, A. C., van Borkulo, C. D., van Bork, R., & Waldorp, L. J. (2021). Network analysis of multivariate data in psychological science. Nature Reviews Methods Primers, 1(1). https://doi.org/10.1038/s43586-021-00055-w
  • Brown, S., & Eisenhardt, K. M. (1997). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42(1), 1–34. https://doi.org/10.2307/2393807
  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105. https://doi.org/10.1037/h0046016
  • Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83(2), 234–246. https://doi.org/10.1037/0021-9010.83.2.234
  • Chen, F. F., Sousa, K. H., & West, S. G. (2005). Teacher’s corner: Testing measurement invariance of second-order factor models. Structural Equation Modeling: A Multidisciplinary Journal, 12(3), 471–492. https://doi.org/10.1207/s15328007sem1203_7
  • Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14(3), 464–504. https://doi.org/10.1080/10705510701301834
  • Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9(2), 233–255. https://doi.org/10.1207/S15328007SEM0902_5
  • Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290. https://doi.org/10.1177/014920639702300303
  • Cooper, R. G. (2021). Accelerating innovation: Some lessons from the pandemic. Journal of Product Innovation Management, 38(2), 221–232. https://doi.org/10.1111/jpim.12565
  • Cortina, J., Zitong, S., Keener, S., Keeler, K., Grubb, L., Schmitt, N., Tonidandel, S., Summerville, K., Heggestad, E., & Banks, G. (2020). From alpha to omega and beyond! A look at the past, present, and (possible) future of psychometric soundness in the journal of applied psychology. Journal of Applied Psychology, 105(12), 1351–1381. https://doi.org/10.1037/apl0000815
  • Crawford, E. R., & LePine, J. A. (2013). A configural theory of team processes: Accounting for the structure of taskwork and teamwork. Academy of Management Review, 38(1), 32–48. https://doi.org/10.5465/amr.2011.0206
  • Dean, J. W., & Bowen, D. E. (1994). Management theory and total quality: Improving research and practice through theory development. Academy of Management Review, 19(3), 392–418. https://doi.org/10.5465/amr.1994.9412271803
  • Deken, F., Carlile, P. R., Berends, H., & Lauche, K. (2016). Generating novelty through interdependent routines: A process model of routine work. Organization Science, 27(3), 659–677. https://doi.org/10.1287/orsc.2016.1051
  • Denning, S. (2021). Why “big bang” agile transformations are a bad idea. Forbes Magazine. https://www.forbes.com/sites/stevedenning/2021/10/29/why-big-bang-agile-transformations-are-a-bad-idea/?sh=34aa2585406f
  • DeVellis, R. F. (2016). Scale development theory and applications (4th ed.). Sage Publications.
  • Dingsøyr, T., Nerur, S., Balijepally, V., & Moe, N. B. (2012). A decade of agile methodologies: Towards explaining agile software development. Journal of Systems and Software, 85(6), 1213–1221. https://doi.org/10.1016/j.jss.2012.02.033
  • Doeze Jager-van Vliet, S. B., Born, M. P., & van der Molen, H. T. (2022). The relationship between organizational trust, resistance to change and adaptive and proactive employees’ agility in an unplanned and planned change context. Applied Psychology: An International Review, 71(2), 436–460. https://doi.org/10.1111/apps.12327
  • Dönmez, D., & Grote, G. (2018). Two sides of the same coin – How agile software development teams approach uncertainty as threats and opportunities. Information and Software Technology, 93, 94–111. https://doi.org/10.1016/j.infsof.2017.08.015
  • Doz, Y. (2020). Fostering strategic agility: How individual executives and human resource practices contribute. Human Resource Management Review, 30(1), 100693. https://doi.org/10.1016/j.hrmr.2019.100693
  • Druskat, V. U., & Kayes, D. C. (2000). Learning versus performance in short-term project teams. Small Group Research, 31(3), 328–353. https://doi.org/10.1177/104649640003100304
  • Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
  • Edmondson, A. C., & McManus, S. E. (2007). Methodological fit in management field research. Academy of Management Review, 32(4), 1155–1179. https://doi.org/10.5465/amr.2007.26586086
  • Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989X.4.3.272
  • Feldman, M. S., & Pentland, B. T. (2003). Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly, 48(1), 94–118. https://doi.org/10.2307/3556620
  • Fisher, D. M. (2014). Distinguishing between taskwork and teamwork planning in teams: Relations with coordination and interpersonal processes. Journal of Applied Psychology, 99(3), 423–436. https://doi.org/10.1037/a0034625
  • Fortuin, D. J., van Mierlo, H., Bakker, A. B., Petrou, P., & Demerouti, E. (2021). Team boosting behaviours: Development and validation of a new concept and scale. European Journal of Work and Organizational Psychology, 30(4), 600–618. https://doi.org/10.1080/1359432X.2020.1854226
  • Gerpott, F. H., & Lehmann-Willenbrock, N. (2016). How differences make a difference: The role of team diversity in meeting processes and outcomes. In J. A. Allen, N. Lehmann-Willenbrock, & S. G. Rogelberg (Eds.), The Cambridge handbook of meeting science (pp. 93–97). Cambridge University Press.
  • Gersick, C. J. G. (1988). Time and transition in work teams: Toward a new model of group development. Academy of Management Journal, 31(1), 9–41. https://doi.org/10.5465/256496
  • Gersick, C. J. G., & Hackman, J. R. (1990). Habitual routines in task-performing groups. Organizational Behavior and Human Decision Processes, 47(1), 65–97. https://doi.org/10.1016/0749-5978(90)90047-D
  • Ghosh, S., & Wu, A. (2021). Iterative coordination and innovation: Prioritizing value over novelty. Organization Science. https://doi.org/10.1287/orsc.2021.1499
  • Ghosh, S. (2021). Think before you act: The unintended consequences of inexpensive business experimentation. Academy of Management Proceedings. Academy of Management. https://doi.org/10.5465/AMBPP.2021.97
  • Gilson, L. L., Mathieu, J. E., Shalley, C. E., & Ruddy, T. M. (2005). Creativity and standardization: Complementary or conflicting drivers of team effectiveness? Academy of Management Journal, 48(3), 521–531. https://doi.org/10.5465/AMJ.2005.17407916
  • Girod, S. J., & Králik, M. (2021). Resetting management: Thrive with agility in the age of uncertainty. Kogan Page.
  • Grass, A., Backmann, J., & Hoegl, M. (2020). From empowerment dynamics to team adaptability - exploring and conceptualizing the continuous agile team innovation process. Journal of Product Innovation Management, 37(4), 324–351. https://doi.org/10.1111/jpim.12525
  • Grote, G., Kolbe, M., & Waller, M. J. (2018). The dual nature of adaptive coordination in teams: Balancing demands for flexibility and stability. Organizational Psychology Review, 8(2–3), 125–148. https://doi.org/10.1177/2041386618790112
  • Gupta, M., George, J. F., & Xia, W. (2019). Relationships between IT department culture and agile software development practices: An empirical investigation. International Journal of Information Management, 44, 13–24. https://doi.org/10.1016/j.ijinfomgt.2018.09.006
  • Hayton, J. C., Allen, D. G., & Scarpello, V. (2004). Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods, 7(2), 191–205. https://doi.org/10.1177/1094428104263675
  • He, H., Baruch, Y., & Lin, C. P. (2014). Modeling team knowledge sharing and team flexibility: The role of within-team competition. Human Relations, 67(8), 947–978. https://doi.org/10.1177/0018726713508797
  • Henk, C. M., & Castro-Schilo, L. (2016). Preliminary detection of relations among dynamic processes with two-occasion data. Structural Equation Modeling, 23(2), 180–193. https://doi.org/10.1080/10705511.2015.1030022
  • Hennel, P., & Rosenkranz, C. (2021). Investigating the “Socio” in socio-technical development: The case for psychological safety in agile information systems development. Project Management Journal, 52(1), 11–30. https://doi.org/10.1177/8756972820933057
  • Hewett, R., & Shantz, A. (2021). A theory of HR co-creation. Human Resource Management Review, 31(4), 1–17. https://doi.org/10.1016/j.hrmr.2021.100823
  • Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967–988. https://doi.org/10.1177/014920639502100509
  • Hoda, R., Salleh, N., Grundy, J., & Tee, H. M. (2017). Systematic literature reviews in agile software development: A tertiary study. Information and Software Technology, 85, 60–70. https://doi.org/10.1016/j.infsof.2017.01.007
  • Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
  • Hummel, M., Rosenkranz, C., & Holten, R. (2015). The role of social agile practices for direct and indirect communication in information systems development teams. Communications of the Association for Information Systems, 36, 273–300. https://doi.org/10.17705/1CAIS.03615
  • Ishak, A. W., & Ballard, D. I. (2012). Time to re-group: A typology and nested phase model for action teams. Small Group Research, 43(1), 3–29. https://doi.org/10.1177/1046496411425250
  • Johnson, R. E., Rosen, C. C., Chang, C. H. D., Djurdjevic, E., & Taing, M. U. (2012). Recommendations for improving the construct clarity of higher-order multidimensional constructs. Human Resource Management Review, 22(2), 62–72. https://doi.org/10.1016/j.hrmr.2011.11.006
  • Junker, T. L., Bakker, A. B., Gorgievski, M. J., & Derks, D. (2021). Agile work practices and employee proactivity: A multilevel study. Human Relations. https://doi.org/10.1177/00187267211030101
  • Keiser, N. L., & Arthur, W. (2021). A meta-analysis of the effectiveness of the After-Action Review (or Debrief) and factors that influence its effectiveness. Journal of Applied Psychology, 106(7), 1007–1032. https://doi.org/10.1037/apl0000821
  • Kenny, D. A. (1975). Cross-lagged panel correlation: A test for spuriousness. Psychological Bulletin, 82(6), 887–903. https://doi.org/10.1037/0033-2909.82.6.887
  • Khanagha, S., Volberda, H. W., Alexiou, A., & Annosi, M. C. (2022). Mitigating the dark side of agile teams: Peer pressure, leaders’ control, and the innovative output of agile teams. Journal of Product Innovation Management, 39(3), 334–350. https://doi.org/10.1111/jpim.12589
  • Kirchherr, J. (2019). The lean PhD: Radically improve the efficiency, quality and impact of your research. Red Global Press.
  • Koch, J., & Schermuly, C. C. (2021). Who is attracted and why? How agile project management influences employee’s attraction and commitment. International Journal of Managing Projects in Business, 14(3), 699–720. https://doi.org/10.1108/IJMPB-02-2020-0063
  • Kremser, W., & Schreyögg, G. (2016). The dynamics of interrelated routines: Introducing the cluster level. Organization Science, 27(3), 698–721. https://doi.org/10.1287/orsc.2015.1042
  • Kremser, W., & Blagoev, B. (2021). The dynamics of prioritizing: How actors temporally pattern complex role–routine ecologies. Administrative Science Quarterly, 66(2), 339–379. https://doi.org/10.1177/0001839220948483
  • Laanti, M., Salo, O., & Abrahamsson, P. (2011). Agile methods rapidly replacing traditional methods at Nokia: A survey of opinions on agile transformation. Information and Software Technology, 53(3), 276–290. https://doi.org/10.1016/j.infsof.2010.11.010
  • Langer, J., Feeney, M. K., & Lee, S. E. (2019). Employee fit and job satisfaction in bureaucratic and entrepreneurial work environments. Review of Public Personnel Administration, 39(1), 135–155. https://doi.org/10.1177/0734371X17693056
  • Lee, S., & Geum, Y. (2021). How to determine a minimum viable product in app-based lean start-ups: Kano-based approach. Total Quality Management and Business Excellence, 32(15–16), 1751–1767. https://doi.org/10.1080/14783363.2020.1770588
  • Lewin, K. (1945). The research center for group dynamics at Massachusetts institute of technology. Sociometry, 8(2), 126–135. https://doi.org/10.2307/2785233
  • Lewis, M. W., Andriopoulos, C., & Smith, W. K. (2014). Paradoxical leadership to enable strategic agility. California Management Review, 56(3), 58–77. https://doi.org/10.1525/cmr.2014.56.3.58
  • Liedtka, J. (2015). Perspective: linking design thinking with innovation outcomes through cognitive bias reduction. Journal of Product Innovation Management, 32(6), 925–938. https://doi.org/10.1111/jpim.12163
  • Lifshitz-Assaf, H., Lebovitz, S., & Zalmanson, L. (2021). Minimal and adaptive coordination: How hackathons’ projects accelerate innovation without killing IT. Academy of Management Journal, 64(3), 684–715. https://doi.org/10.5465/AMJ.2017.0712
  • Liu, J. W., Ho, C. Y., Chang, J. Y. T., & Tsai, J. C. A. (2019). The role of Sprint planning and feedback in game development projects: Implications for game quality. Journal of Systems and Software, 154, 79–91. https://doi.org/10.1016/j.jss.2019.04.057
  • LoPilato, A., & Vandenberg, R. J. (2015). The not so direct cross-level direct effect. In C. Lance & R. J. Vandenberg (Eds.), More statistical and methodological myths and urban legends (pp. 292–310). Routledge.
  • Marks, M. A., Mathieu, J., & Zaccaro, S. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26(3), 356–376. https://doi.org/10.5465/amr.2001.4845785
  • Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) finding. Structural Equation Modeling, 11(3), 320–341. https://doi.org/10.1207/s15328007sem1103_2
  • Maruping, L. M., Venkatesh, V., & Agarwal, R. (2009). A control theory perspective on agile methodology use and changing user requirements. Information Systems Research, 20(3), 377–399. https://doi.org/10.1287/isre.1090.0238
  • Mathieu, J., & Schulze, W. (2006). The influence of team knowledge and formal plans on episodic team process-performance relationships. Academy of Management Journal, 49(3), 605–619. https://doi.org/10.5465/AMJ.2006.21794678
  • Mathieu, J., & Rapp, T. (2009). Laying the foundation for successful team performance trajectories: The roles of team charters and performance strategies. Journal of Applied Psychology, 94(1), 90–103. https://doi.org/10.1037/a0013257
  • Mathieu, J., Gallagher, P., Domingo, M., & Klock, E. (2019). Embracing complexity: Reviewing the past decade of team effectiveness research. Annual Review of Organizational Psychology and Organizational Behavior, 6(1), 17–46. https://doi.org/10.1146/annurev-orgpsych-012218-015106
  • Mathieu, J., Luciano, M. M., Innocenzo, L. D., Klock, E. A., & Lepine, J. A. (2020). The development and construct validity of a team processes survey measure. Organizational Research Methods, 23(3), 399–431. https://doi.org/10.1177/1094428119840801
  • Maynard, M. T., Kennedy, D. M., & Sommer, S. A. (2015). Team adaptation: A fifteen-year synthesis (1998–2013) and framework for how this literature needs to “adapt” going forward. European Journal of Work and Organizational Psychology, 24(5), 652–677. https://doi.org/10.1080/1359432X.2014.1001376
  • McGrath, J. E., Arrow, H., & Berdahl, J. L. (2000). The study of groups: Past, present, and future. Personality and Social Psychology Review, 4(1), 95–105. https://doi.org/10.1207/S15327957PSPR0401_8
  • Mergel, I., Ganapati, S., & Whitford, A. B. (2021). Agile: A new way of governing. Public Administration Review, 81(1), 161–165. https://doi.org/10.1111/puar.13202
  • Molenaar, D. (2016). On the distortion of model fit in comparing the bifactor model and the higher-order factor model. Intelligence, 57, 60–63. https://doi.org/10.1016/j.intell.2016.03.007
  • Monteiro, P., & Adler, P. S. (2021). Bureaucracy for the 21st century: Clarifying and expanding our view of bureaucratic organization. Academy of Management Annals. https://doi.org/10.5465/annals.2019.0059
  • Morgeson, F. P., & Hofmann, D. A. (1999). The structure and function of collective constructs: Implications for multilevel research and theory development. Academy of Management Review, 24(2), 249–265. https://doi.org/10.2307/259081
  • Nahrgang, J. D., DeRue, D. S., Hollenbeck, J. R., Spitzmuller, M., Jundt, D. K., & Ilgen, D. R. (2013). Goal setting in teams: The impact of learning and performance goals on process and performance. Organizational Behavior and Human Decision Processes, 122(1), 12–21. https://doi.org/10.1016/j.obhdp.2013.03.008
  • Niederman, F., Lechler, T., & Petit, Y. (2018). A research agenda for extending agile practices in software development and additional task domains. Project Management Journal, 49(6), 3–17. https://doi.org/10.1177/8756972818802713
  • Niepce, W., & Molleman, E. (1998). Work design issues in lean production from a sociotechnical systems perspective: Neo-taylorism or the next step in sociotechnical design? Human Relations, 51(3), 259–287. https://doi.org/10.1177/001872679805100304
  • Ohly, S., Sonnentag, S., & Pluntke, F. (2006). Routinization, work characteristics and their relationships with creative and proactive behaviors. Journal of Organizational Behavior, 27(3), 257–279. https://doi.org/10.1002/job.376
  • Okhuysen, G. A., & Waller, M. J. (2002). Focusing on midpoint transition: An analysis of boundary conditions. Academy of Management Journal, 45(5), 1056–1065. https://doi.org/10.5465/3069330
  • Okhuysen, G. A., & Eisenhardt, K. M. (2002). Integrating knowledge in groups: How formal interventions enable flexibility. Organization Science, 13(4), 370–386. https://doi.org/10.1287/orsc.13.4.370.2947
  • Parker, S. K. (2014). Beyond motivation: Job and work design for development, health, ambidexterity, and more. Annual Review of Psychology, 65(1), 661–691. https://doi.org/10.1146/annurev-psych-010213-115208
  • Parker, S. K., Andrei, D. M., & Van den Broeck, A. (2019). Poor work design begets poor work design: Capacity and willingness antecedents of individual work design behavior. Journal of Applied Psychology, 104(7), 907–928. https://doi.org/10.1037/apl0000383
  • Parker, S. K., & Jorritsma, K. (2021). Good work design for all: Multiple pathways to making a difference. European Journal of Work and Organizational Psychology, 30(3), 456–468. https://doi.org/10.1080/1359432X.2020.1860121
  • Petermann, M. K. H., & Zacher, H. (2021). Development of a behavioral taxonomy of agility in the workplace. International Journal of Managing Projects in Business, 14(6), 1383–1405. https://doi.org/10.1108/IJMPB-02-2021-0051
  • PMI, & AgileAlliance. (2017). Agile practice guide. Project Management Institute.
  • Potočnik, K., & Anderson, N. (2016). A constructively critical review of change and innovation-related concepts: Towards conceptual and operational clarity. European Journal of Work and Organizational Psychology, 25(4), 481–494. https://doi.org/10.1080/1359432X.2016.1176022
  • Prange, C. (2021). Agility as the discovery of slowness. California Management Review, 63(4), 27–51. https://doi.org/10.1177/00081256211028739
  • Ramos-Villagrasa, P. J., Marques-Quinteiro, P., Navarro, J., & Rico, R. (2018). Teams as complex adaptive systems: Reviewing 17 years of research. Small Group Research, 49(2), 135–176. https://doi.org/10.1177/1046496417713849
  • Rapp, T. L., Bachrach, D. G., Rapp, A. A., & Mullins, R. (2014). The role of team goal monitoring in the curvilinear relationship between team efficacy and team performance. Journal of Applied Psychology, 99(5), 976–987. https://doi.org/10.1037/a0036978
  • Rietze, S., & Zacher, H. (2022). Relationships between agile work practices and occupational well-being: The role of job demands and resources. International Journal of Environmental Research and Public Health, 19(3), 1258. https://doi.org/10.3390/ijerph19031258
  • Rigby, D. K., Elk, S., & Berez, S. (2020). Doing agile right: Transformation without chaos. Harvard Business Review Press.
  • Sarasvathy, S. D. (2001). Causation and effectuation: Toward a theoretical shift from economic inevitability to entrepreneurial contingency. Academy of Management Review, 26(2), 243–263. https://doi.org/10.5465/amr.2001.4378020
  • Schippers, M. C., Den Hartog, D. N., & Koopman, P. L. (2007). Reflexivity in teams: A measure and correlates. Applied Psychology, 56(2), 189–211. https://doi.org/10.1111/j.1464-0597.2006.00250.x
  • Schippers, M. C., West, M. A., & Dawson, J. F. (2015). Team reflexivity and innovation: The moderating role of team context. Journal of Management, 41(3), 769–788. https://doi.org/10.1177/0149206312441210
  • Schmittmann, V. D., Cramer, A. O. J., Waldorp, L. J., Epskamp, S., Kievit, R. A., & Borsboom, D. (2013). Deconstructing the construct: A network perspective on psychological phenomena. New Ideas in Psychology, 31(1), 43–53. https://doi.org/10.1016/j.newideapsych.2011.02.007
  • Schwaber, K., & Sutherland, J. (2017). The Scrum Guide. https://scrumguides.org/docs/scrumguide/v2017/2017-Scrum-Guide-US.pdf
  • Shah, P. P., Peterson, R. S., Jones, S. L., & Ferguson, A. J. (2021). Things are not always what they seem: The origins and evolution of intragroup conflict. Administrative Science Quarterly, 66(2), 426–474. https://doi.org/10.1177/0001839220965186
  • Shipp, A. J., Edwards, J. R., & Lambert, L. S. (2009). Conceptualization and measurement of temporal focus: The subjective experience of the past, present, and future. Organizational Behavior and Human Decision Processes, 110(1), 1–22. https://doi.org/10.1016/j.obhdp.2009.05.001
  • Shipp, A. J., & Jansen, K. J. (2021). The “other” time: A review of the subjective experience of time in organizations. Academy of Management Annals, 15(1), 299–334. https://doi.org/10.5465/annals.2018.0142
  • Slocum, J. W., & Sims, H. P. (1980). A typology for integrating technology, organization, and job design. Human Relations, 33(3), 193–212. https://doi.org/10.1177/001872678003300304
  • So, C. (2010). Making software teams effective: How agile practices lead to project success through teamwork mechanisms. Peter Lang.
  • Stegmann, S., Van Dick, R., Ullrich, J., Charalambous, J., Menzel, B., & Egold, N. (2010). Der Work Design Questionnaire Vorstellung und erste Validierung einer deutschen Version [the work design questionnaire – introduction and validation of a German version]. Zeitschrift Fur Arbeits- Und Organisationspsychologie, 54(1), 1–28. https://doi.org/10.1026/0932-4089/a000002
  • Stray, V., Moe, N. B., & Sjoberg, D. I. K. (2020). Daily stand-up meetings: Start breaking the rules. IEEE Software, 37(3), 70–77. https://doi.org/10.1109/MS.2018.2875988
  • Sutherland, J. (2014). Scrum: The art of doing twice the work in half the time. Crown Business.
  • Tay, L., Woo, S. E., & Vermunt, J. K. (2014). A conceptual and methodological framework for psychometric isomorphism: Validation of multilevel construct measures. Organizational Research Methods, 17(1), 77–106. https://doi.org/10.1177/1094428113517008
  • Tims, M., Bakker, A., Derks, D., & Van Rhenen, W. (2013). Job crafting at the team and individual level: Implications for work engagement and performance. Group & Organization Management, 38(4), 427–454. https://doi.org/10.1177/1059601113492421
  • Tripp, J. F., Riemenschneider, C. K., & Thatcher, J. B. (2016). Job satisfaction in agile development teams: Agile development as work redesign. Journal of the Association for Information Systems, 17(4), 267–307. https://doi.org/10.17705/1jais.00426
  • Tripp, J. F., & Armstrong, D. J. (2018). Agile methodologies: Organizational adoption motives, tailoring, and performance. Journal of Computer Information Systems, 58(2), 170–179. https://doi.org/10.1080/08874417.2016.1220240
  • Tuckman, B. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384–399. https://doi.org/10.1037/h0022100
  • Twemlow, M., Tims, M., & Khapova, S. N. (2022). A process model of peer reactions to team member proactivity. Human Relations. https://doi.org/10.1177/00187267221094023
  • van Bork, R., Rhemtulla, M., Waldorp, L. J., Kruis, J., Rezvanifar, S., & Borsboom, D. (2021). Latent variable models and networks: Statistical equivalence and testability. Multivariate Behavioral Research, 56(2), 175–198. https://doi.org/10.1080/00273171.2019.1672515
  • van de Schoot, R., Lugtig, P., & Hox, J. (2012). A checklist for testing measurement invariance. European Journal of Developmental Psychology, 9(4), 486–492. https://doi.org/10.1080/17405629.2012.686740
  • van Mierlo, H., Rutte, C., Vermunt, J., Kompier, M., & Doorewaard, J. (2006). Individual autonomy in work teams: The role of team autonomy, self-efficacy, and social support. European Journal of Work and Organizational Psychology, 15(3), 281–299. https://doi.org/10.1080/13594320500412249
  • van Oorschot, K. E., Sengupta, K., & Van Wassenhove, L. N. (2018). Under pressure: The effects of iteration lengths on agile software development performance. Project Management Journal, 49(6), 78–102. https://doi.org/10.1177/8756972818802714
  • Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–69. https://doi.org/10.1177/109442810031002
  • Venkatesh, V., Thong, J. Y. L., Chan, F. K. Y., Hoehle, H., & Spohrer, K. (2020). How agile software development methods reduce work exhaustion: Insights on role perceptions and organizational skills. Information Systems Journal, 30(4), 733–761. https://doi.org/10.1111/isj.12282
  • Voorhees, C. M., Brady, M. K., Calantone, R., & Ramirez, E. (2016). Discriminant validity testing in marketing: An analysis, causes for concern, and proposed remedies. Journal of the Academy of Marketing Science, 44(1), 119–134. https://doi.org/10.1007/s11747-015-0455-4
  • Vough, H. C., Bindl, U. K., & Parker, S. K. (2017). Proactivity routines: The role of social processes in how employees self-initiate change. Human Relations, 70(10), 1191–1216. https://doi.org/10.1177/0018726716686819
  • Yukl, G., Gordon, A., & Taber, T. (2002). A hierarchical taxonomy of leadership behavior: Integrating a half century of behavior research. Journal of Leadership & Organizational Studies, 9(1), 15–32. https://doi.org/10.1177/107179190200900102
  • Zellmer-Bruhn, M. E. (2003). Interruptive events and team knowledge acquisition. Management Science, 49(4), 514–528. https://doi.org/10.1287/mnsc.49.4.514.14423