Editorial

Clinical research from information systems practice


ABSTRACT

An increasing presence of practitioners with doctoral degrees in Information Systems and related disciplines holds promise to advance Information Systems research. The prospect is to gain more knowledge from the practical experience of developing, using, and managing information systems in context. To scientifically capitalise on this opportunity, this EJIS special issue introduces the research genre of “Information Systems Clinical Research”. The genre presents knowledge generated from practitioner-researcher interventions to achieve desired outcomes in information systems development, use, and management practice contexts. In this editorial, we introduce and conceptualise the genre; we present a research framework that defines its four key elements; and we discuss how to address its key challenges in research projects. As a result, we derive ten criteria for rigorous Information Systems Clinical Research and provide examples of how the articles published in the special issue have addressed these criteria. We conclude with a call to further advance clinical research as an important part of the Information Systems discipline.

1. Introduction

Perhaps the most prominent professional doctorate is the Doctor of Medicine (MD), originating in Scotland in the eighteenth century. In subsequent centuries, a smattering of other professional doctoral degrees appeared, such as the Juris Doctor (JD) and the Doctor of Dental Surgery (DDS). In the early 2000s, it became apparent that new professional, practice-based doctoral programmes were blossoming. In the US alone, at least 650 such doctoral programmes had awarded thousands of degrees by 2015 (Zusman, 2017). Like PhD programmes, practice-based doctoral programmes have a research focus, but they differ by having a stronger interest in practical rather than theoretical questions and by being oriented towards professional rather than academic careers. Halupa (2021) noted degrees such as Doctor of Nursing Practice (DNP), Doctor of Physical Therapy (DPT), Doctor of Pharmacy (PharmD), and Doctor of Audiology (AuD), and observed that practice-based doctorates have also expanded into non-medical fields such as Doctor of Architecture (DArch), Doctor of Information Technology (DIT), and Doctor of Business Administration (DBA).

These new practice-based doctoral programmes, like the DBA, are placing research-qualified practitioners in senior professional and executive positions in organisations, thereby adding to the growing population of practitioner-researchers, i.e., practitioners with research training and interests. These practitioner-researchers are well positioned to use rigorous research methods to diagnose organisational problems and opportunities, and to formulate new organisational practices that address these issues. As such, they bring a renewed interest in clinical inquiry, such as action research, design science research, and clinical fieldwork, along with a concern for evidence-based practice (Halupa, 2021; Rousseau, 2020). There is, however, a distinction between the way academics have traditionally used such clinical inquiry and the way practitioner-researchers expect to use it. It is a distinction well known as a dilemma of interventionist research (Rapoport, 1970).

In an academic setting, clinical inquiry prioritises theoretical outcomes through engagement with practitioners as the primary goal. Academics are there because they need to learn about their fields of interest and advance theoretical knowledge. In a practice setting, clinical inquiry prioritises practical outcomes through practitioner research as the primary goal. Practitioners are there because they need to improve their practices and advance professional knowledge. Research-qualified practitioners acquire an appreciation for evidence-based practice because it relies on scientifically grounded evidence, including evidence gathered rigorously for the purpose of diagnosing an organisational problem or opportunity (Rousseau et al., 2008). Such evidence includes empirical observations in the organisational setting as well as published evidence from the research literature. Especially important is evidence gathered rigorously to demonstrate the extent to which the organisational problem is resolved, or the opportunity exploited.

The distinction in Information Systems between its academic and practitioner communities has been asserted, bemoaned, debated, debunked, and reasserted since the birth of the field (e.g., Lanamäki et al., 2011). The distinction is made difficult because Information Systems is inevitably an applied discipline (Jones & Gregor, 2007). At the same time, we see that the boundaries between academics and practitioners are becoming increasingly blurred: academics perform consulting services and practitioners perform adjunct faculty services, not to mention more permanent transitions with job changes across the line and less permanent transitions in shared conferences, meetings, and associations. No doubt, the rise of practice-based doctoral programmes will further obfuscate the distinction. Nevertheless, the distinction we argue for here is the identity of “Clinical Research from Information Systems Practice”, idealising the strong contributions to knowledge we may learn from research-qualified practitioners who have been employing their research skills in their day-to-day practice in the Information Systems field. Actualising such an ideal is challenging, not only because of the messy distinction between academics and practitioners, but also because good research skills do not depend solely on a particular kind of doctorate, nor are they necessarily possessed by everyone who holds one.

So, with our ideals and their problems in our knapsack, we suggest growing practice-based research by advancing clinical research as a new Information Systems research genre that provides opportunities for practitioner-researchers to offer their experiences and insights as contributions to the body of Information Systems knowledge. Although we draw on well-established traditions for clinical research within medicine, psychology, and education, we seek to advance clinical research specifically for Information Systems practitioner-researchers.

In what follows, we define this new research genre and derive an Information Systems Clinical Research Framework with four key elements. Based on this framework, we discuss key challenges, how to address them in research projects, and conclude with specific criteria for planning, conducting, and assessing Information Systems Clinical Research. Next, we discuss the seven articles of Information Systems Clinical Research published in this EJIS special issue to provide examples of how the articles address the criteria. We conclude with a call to further advance clinical research as an important part of the Information Systems discipline.

2. An information systems clinical research framework

We define Information Systems Clinical Research as a research genre that generates knowledge from, and establishes the effectiveness of, practitioner-researcher interventions in achieving desired outcomes in information systems development, use, and management practice contexts.

2.1. Key elements

Drawing from this definition, we identify four elements that constitute the Information Systems Clinical Research Framework (Figure 1). In brief, Information Systems Clinical Research investigates outcomes of interventions in context based on the formula:

  • Outcome_effective = f(Outcome_desired, Intervention, Context)
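
Because the subscripts in the printed formula are easily lost in plain text, the relation can also be typeset formally. The following LaTeX fragment is a presentational sketch only; the symbols mirror the four elements defined in Sections 2.1.1–2.1.4:

    \documentclass{article}
    \usepackage{amsmath} % provides \text in math mode
    \begin{document}
    % The framework formula (Figure 1): outcome effectiveness is a function
    % of the desired outcome, the intervention, and the situational context.
    \[
      \text{Outcome}_{\text{effective}}
        = f\bigl(\text{Outcome}_{\text{desired}},\ \text{Intervention},\ \text{Context}\bigr)
    \]
    \end{document}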

Figure 1. Information Systems Clinical Research Framework.

2.1.1. Situational context

Clinical research conducts investigations in context, so all findings apply only in relation to this context. A defining characteristic of Information Systems Clinical Research is therefore the presence of qualitative or quantitative empirical evidence that defines the contextual constraints of the investigations. A key challenge is the possible absence of conceptualisations to describe such context (Vom Brocke et al., 2020). In some domains, well-established coding schemas exist, such as in the medical field and, to some degree, in economics. For instance, the International Standard Industrial Classification of All Economic Activities (ISIC) is a UN classification for structuring economic activities and, based on these activities, structuring the economic units in the related economic sectors. Most application areas in Information Systems research, however, do not provide such standards. Clinical researchers will therefore need to refer to existing lexica or create their own, so that readers can comprehend the meaning of what is observed in the given context.

2.1.2. Desired outcome

Information Systems Clinical Research relates to desired outcomes in information systems development, use, and management practices. Achieving desired outcomes is the well-known hobgoblin of developing, using, and managing information systems. In clinical research, such outcomes can be described specifically in relation to the stakeholders who desire them. A defining characteristic of Information Systems Clinical Research is thus the presence of qualitative or quantitative empirical evidence that defines what change or transformation is wanted in practice. This evidence can include published evidence from the research literature about known interventions that lead to such a desired outcome. Beware that in practice, decision-making is rarely the completion of a desired outcome, but rather an intermediate step towards achieving it.

2.1.3. Practitioner-Researcher intervention

An intervention is a socio-technical action taken by a practitioner-researcher, typically in collaboration with other practitioners and researchers, in an information systems context. Hence, a defining characteristic of Information Systems Clinical Research is the presence of qualitative or quantitative empirical evidence that defines the completed intervention by the practitioner-researcher. Beware that design is rarely the completion of an intervention. In practice, it is typically an intermediate step towards the development and implementation of a designed artefact into a socio-technical setting.

2.1.4. Outcome effectiveness

Finally, a defining characteristic of Information Systems Clinical Research is the presence of compelling qualitative or quantitative empirical evidence that defines the effectiveness of the intervention in accomplishing the desired outcome. This result may confirm the efficacy of the intervention. However, it may also be an unexpected outcome that resolved into a different, larger or smaller, desirable or undesirable outcome. As such, the outcome may contradict a commonly used intervention. Evidence for outcome effectiveness can have problems of temporal reliability, as early accomplishment of the desired outcome may gradually change over time, for example, due to the well-known Hawthorne effect (Adair, 1984). As the intervention ages, its effects may dissipate, causing the original problem to resume or new problems to emerge.

3. Key challenges

Like other research genres, Information Systems Clinical Research poses specific inherent challenges that practitioner-researchers and their collaborators need to address.

3.1. Transparency considerations

Transparency in writing clinical research papers may be both more important and more challenging than in other forms of research. For example, methodology sections will not only report aspects like methodology, context, and subjects, but also the organisational role of the author(s), effectively answering the question, “Who is the practitioner-researcher reporting this experience and how is the broader team behind this research configured?” Moreover, a clinical research report sometimes represents the culmination of many years of practice involving trial and error in the face of intractable and wicked problems. The final report of the clinical outcome can capture only the tip of the iceberg, appearing weak by international standards and losing the richness of the entire experience. In the interest of research transparency (Burton-Jones et al., 2021), it is important to provide access to the underlying considerations of such results, even when they go beyond what can be documented in an article. Early Information Systems clinicians will have to be creative in reporting this background, for example, not just applying the usual interview reporting mechanics, but also finding ways to show the richness of their clinical case. In other practice-based research areas, such as Design Science Research, “journaling” has been suggested (Vom Brocke et al., 2021). Such journals entail keeping a logbook of the various activities contributing to the research process and can be made accessible to help readers comprehend the reasoning in Information Systems Clinical Research.

3.2. Confidentiality considerations

Obtaining solid empirical evidence in practice for the four elements above requires alignment with the confidentiality interests of stakeholders in the research context. By emphasising research from practice, the practitioner-researcher plays an important role in assuring access to data and guarantees of confidentiality. The practitioner-researcher is primarily an agent of the organisation(s) hosting the research, creating a stronger bond of trust between the organisation and the research. Moreover, authority for the execution of an intervention may lie within the practice role of the practitioner-researcher. In the spirit of transparency, clinical research benefits from disclosing as much data as possible, including the organisation’s name as well as the roles and demographics of the practitioners involved. Confidentiality might restrict disclosing the name of the organisation, yet the name of the practitioner-researcher might hint at the company’s name. In the interest of the publishers and the clinical research, it is important that the authors include a statement that approval for publishing the article has been obtained from the involved organisation(s).

3.3. Ethical considerations

By emphasising research from practice, practitioner-researchers need to consider that organisations hosting the research may have no human-subjects procedures and may lack other research oversight structures commonly found in academic institutions. In such cases, practitioner-researchers may find themselves left to their own devices in incorporating the ethics procedures learned in their doctoral training. Alternatively, co-researching with academic researchers can bring the enforced ethics policies of an educational institution into play. Doctorally qualified practitioners are particularly well prepared for clinical research because they are trained in research methodology, familiar with the research literature, and taught to think and observe critically and carefully about their practice. These practitioner-researchers typically understand that the evidence for each of the elements in clinical research is critical, and that such evidence must meet the validity criteria of the relevant scientific paradigm. Because research in Information Systems arises from multiple paradigms within the philosophy of science (Hirschheim & Klein, 1989), clinical research from practice will be equally multi-paradigmatic, with validity criteria differing between paradigms. For example, the validity criteria for experiments and quasi-experiments (Shadish et al., 2002) will differ from the validity criteria for qualitative case studies (Miles & Huberman, 1994). Similarly, validity criteria for interpretive research (Kirk & Miller, 1986; Klein & Myers, 1999; Lincoln & Guba, 1985) will differ from validity criteria for positivist research (Carmines & Zeller, 1979). As is the case with Information Systems research in general, clinical research from practice may arise from many paradigms, each with different validity criteria for the provided evidence.

Cases may also involve professional ethics. Ideal treatments for achieving desired outcomes may be defined by the profession and implemented by professionals upon approval of their client (Freidson, 1970). Establishing client approval can be tidy when the professional is an outside consultant (or even an internal consultant). But doctorally qualified professionals are often senior staff endowed with full authority over all decisions regarding the context of the research treatment at hand. Ethical dilemmas are present when the practitioner-researcher is empowered by their organisation as an executive and by their profession as a researcher. As professionals, practitioner-researchers can find guidance from professional organisations, which often adopt ethical guidance such as the UK’s Chartered Management Institute codes of conduct and practice. As researchers, they can draw on the guidelines many countries define for responsible conduct of research involving human subjects, such as the US “Common Rule” (Korenman, n.d.). While such guidelines may be clear with regard to individual human subjects, their application regarding organisational subjects may be less so. This lacuna can be critical in the frequent case where treatments are experimental clinical research.

Likewise, the exact methodology for clinical research may differ, so long as the four defining elements are present. Clinical fieldwork (Schein, 1987), action research methods (Baskerville & Myers, 2004; Baskerville & Wood-Harper, 1998; Mathiassen et al., 2012), and design science research methods (Hevner et al., 2004; Rai, 2017) all emphasise interventions. Such intervention research methods have a natural fit with clinical research, although action research and design science methods tend to assume an iterative structure that may or may not suit the context of clinical research. Similarly, the four-element criteria of clinical research are not necessarily mandated in more exploratory, learning-oriented, and theoretical forms of action research and design science. Many other methodologies can be resources for clinical research, simply because of their emphasis on evidence, practice, and various other forms of engagement with organisations (Mathiassen & Nielsen, 2008; Van de Ven, 2007).

4. Key criteria

The analysis above suggests four criteria of empirical evidence that define the new genre of Information Systems Clinical Research: context evidence, desired outcome evidence, intervention evidence, and outcome effectiveness evidence. The analysis also implies four criteria of research quality for this new genre: evidential validity, methodology rigour, knowledge contribution, and research transparency. Further, we suggest two formal criteria: practitioner-researcher contribution and publication approval. Table 1 summarises these ten criteria.

Table 1. Clinical research criteria and possible ways to meet them.

5. Examples of information systems clinical research

In the following, we provide examples of how the Information Systems Clinical Research articles in this special issue address the criteria above. In this way, we illustrate how we identified and developed the criteria based on feedback from reviewers and our own assessments of a total of 38 submitted articles. After three rounds of review and revisions, we found the seven included articles (an 18% acceptance rate) acceptable for publication, as each of them sufficiently met the stated criteria, although each of them struggled to meet some criteria. As such, we propose the ten criteria in Table 1 as a comprehensive foundation for advancing the clinical research genre within the Information Systems discipline. On the one hand, the criteria can be used by practitioner-researchers to design and conduct clinical research projects. On the other hand, they can help reviewers and editors assess the resulting articles for conference and journal publication.

5.1. Context evidence

The article Unpacking digital options thinking for innovation renewal: a clinical inquiry into car connectivity explicates the context of dire needs for technical innovation that affected car manufacturers in the second decade of this century:

With the introduction of iOS and Android in 2007, we began rethinking connectivity. The ripple effects of Apple and Google’s products spread through our industry, and it became apparent that their combination of remote access, openness, and capacity to change functionality independently from hardware opened up radically new innovation paths for cars. These insights stirred the whole automotive industry: Ford partnered with Microsoft, BMW initiated a collaboration with Apple, and both suppliers and automakers investigated Google’s Android platform. Inspired and to some extent alarmed by these initiatives, our executive management team established a temporary unit – the “Connectivity Hub” – to strategize digital innovation in the area of connected cars. Mastering new digital technologies would be challenging in itself, but early investigations suggested that novel connected car innovations would also require the firm to establish new forms of collaboration and set aside existing innovation practices.

Not only is the corporate creation of the Connectivity Hub evidence of the gravity the corporation attached to the problem, but the article goes on to detail a raft of senior executives appointed to the hub. Further, the authors relate interview data indicating how the hub encountered deadlock over the risks and response times involved in new innovations.

5.2. Desired outcome evidence

The article Adopting and integrating cyber-threat intelligence in a commercial organisation provides evidence of the desired outcome in Appendix A, where the authors detail the critical problem facing the organisation. Based on an analysis of Incident Management Records, Risk and Problem Ticketing Records, Risk Registries, Asset Registries and Network Taxonomy Reporting, the authors determined:

The firm invested disproportionate resources securing top tier assets over less critical assets as it assumed attackers shared the same priorities. As a result, attackers were exploiting vulnerable entry points in less critical assets such as mail servers as they were not actively defended.

Extrapolating this evidence forward, we can see that the organisation wanted better security incident response mitigations that would cut down the number of incidents and increase the incident response resolution rates.

5.3. Intervention evidence

Interventions using design science research necessarily involve creating and using an artefact as an intervention. This can make for very crisp evidence about the intervention. In the article Digital nudging for technical debt management at Credit Suisse, the authors take a naturalistic approach to designing, creating, and using such an artefact to provide intervention evidence:

The goal was to design a TDM [technical debt management] nudge to direct software development teams’ awareness of TD and induce them to make conscious decisions that take TD into account. We built the design elements of the TDM nudge on the psychological effects from the nudging literature … We implemented the TDM nudge in a digital form building on a visualisation component and a data-processing component … We employed an existing data platform at Credit Suisse and included two variables: those for individual IT applications and those that allowed for comparisons between multiple IT applications.

The artefact and its design are described in detail, and the artefact was made available across the organisation’s 3,000-strong suite of applications. In the evaluation stages, comparisons were made of the effects on TD in different applications, offering further evidence of the intervention and its effects.

5.4. Outcome effectiveness evidence

The design science approach in the article Developing a collaboration system for pancreatic cancer research: a clinical design science study included careful evaluations of the developed collaboration system. These evaluations made it possible to establish that an outcome of interest was found in the practice setting after the intervention was undertaken:

In addition to developing the artefact, we observed the DSR process in general to gain insights into the process itself as well as its applicability in this complex research situation, integrating people from various scientific backgrounds. We recorded field notes and conducted short interviews during and after the project with the involved parties. Post-hoc interviews and observations were conducted 6 and 12 months after the implementation of the artefact at the case institution. The field notes and interviews were then discussed among the academic IS researchers involved in the project and were used not only to assess the long-term utility and efficacy of the artefact but also to understand the development process and its applicability for projects involving both IS and medical practitioner-researchers.

5.5. Evidential validity

Following a design science paradigm, the article Patient health locus of control: the design of information systems for patient-provider interactions adopts focus groups to support the design process and evaluate outcomes. As such, it offers a detailed account of focus group evaluations following standard methodology, including:

Consistent with the study protocol, the same research participants from the first focus group were engaged for the second focus group. Potential bias related to the functional focus of the participants was considered (all were nurse care managers with one Ph.D. behavioural scientist; no physicians participated), however, no material conflict was identified in the analysis of results from the first focus group. The focus group protocol consisted of a ten-minute review of the first focus group findings and six questions addressing the who, when, and how of presenting the clinical interventions. Three different vignettes were discussed to study variance of the recommended actions. The vignettes were (a) a hip replacement surgery patient, (b) a myocardial infarction (heart attack) patient, and (c) a thyroid disease patient being treated via an intensive drug regimen.

5.6. Methodology rigour

The article Strategic alignment of enterprise architecture management – how portfolios of control mechanisms track a decade of enterprise transformation at Commerzbank provides a rich account of the applied inquiry that goes beyond the usual case study reporting mechanics. A comprehensive appendix provides details on the authors’ strategy of inquiry:

All interviews were conducted by two researchers who led through the interview with the help of a previously developed interview guideline … We employed a coding scheme based on the different control mechanisms … following the recommendation of Eisenhardt … This allowed us to analyse how the environmental jolts and strategic shifts were interpreted by the involved stakeholders and how these interpretations resulted in changes to the EAM control mechanism portfolio over time for each episode.

In this way, the article also provides an overview of the data collection and analysis process. The overview covers the entire research process over a period of twelve years, and it lists the involved activities such as workshops, conferences, meetings, and publications. This documentation also illustrates methodology rigour by journaling the research process (Vom Brocke et al., 2021).

5.7. Knowledge contribution

The article Strategic alignment of enterprise architecture management – how portfolios of control mechanisms track a decade of enterprise transformation at Commerzbank advances the Enterprise Architecture Management (EAM) body of knowledge in a way that highlights the unique opportunities offered by Information Systems Clinical Research. By investigating interventions in EAM at Commerzbank for more than a decade, the authors were able to conclude:

In combination, the rich data collected during our prolonged engagement with Commerzbank’s EAM department and the external interview data from other stakeholders within the organisation allows us to accurately trace and explain the complex organisational dynamics … that commonly underly the development of EAM in large organisations.

As such, the authors were able to identify the contextual dependencies of EAM governance measures, while at the same time demonstrating changes in the context and the need for dynamic adaptation of EAM governance measures.

5.8. Research transparency

In the context of practitioner-researchers in large organisations, research transparency about causal claims can be important. If proof of causality is a nightmare for laboratory experiments, it can be even worse for practice studies. For example, in the article Unpacking digital options thinking for innovation renewal: a clinical inquiry into car connectivity, the outcome desired was effectively more or better innovation in a huge organisation. Never mind how difficult this outcome may be to measure, attributing any such advance by such a large organisation to a particular intervention is liable to be regarded as a self-aggrandising oversimplification. In their Research Method Appendix, the authors were very careful and very transparent about exactly what claims their data supported about the effects of their intervention on innovation at Volvo Cars:

The follow-up studies differ from the early phases in that they were analysed by the researcher, rather than collaboratively … In phase IV, the data set was first scrutinised in search for the three stages of the option lifecycle, then screened for evidence of capability gap negotiations and reconfigurations of internal as well as external resources. These activities can be linked to Connectivity Hub interventions in that they were spearheaded by Hub members. In phase V it was more difficult to establish such direct links, since the study took a much broader view and was implemented seven years later. Therefore, the overall purpose was to identify patterns of options thinking in contemporary agile processes. To do that we returned to capabilities identified in the previous phase, assessed how they had been renegotiated, and how supporting resources had been reconfigured. In that sense, it evaluated coherence between contemporary agile processes and the early initiatives taken by the Hub members, and it tried to characterise the role of connectivity in Volvo Cars’ innovation renewal.

Thus, the authors provide a meticulous description of not only the evidence for the short-term impact of their interventions, but also the quite different evidence for the long-term effects. The evidence and the causal claims are rather different, conditioned by scale and the passage of time, but, as the authors say, “coherent”.

5.9. Practitioner-Researcher contribution

The article Developing human/AI interactions for chat-based customer services: lessons learned from the Norwegian government describes a long-term relationship between the government organisation (NAV) and university researchers. The article outlines the researcher-practitioner relationship as follows:

NAV proactively sought to strengthen collaboration with the University where the research team for the project reported in this paper is based. To do that, they assigned to one of its employees the role of Research Champion, liaising between research and practice and championing research at NAV. To realise this collaboration, the Research Champion got a designated workstation at the University and took part in day-to-day University activities. The physical proximity increased both the formal and informal interactions creating more opportunities for collaboration.

5.10. Publication approval

The influence of context can create interesting contrasts, as it is not simply the kind of organisational setting that matters, but also the nature of the desired outcome. In Unpacking digital options thinking for innovation renewal: a clinical inquiry into car connectivity, the case organisation (Volvo Cars) is explicitly named, and the desired outcome is becoming more digitally innovative. In that line of business, innovation is admired, and we expect Volvo Cars to be rightfully proud of research reports that indicate the high value placed by the firm on innovation.

6. Advancing information systems clinical research

Information Systems is inevitably an applied discipline (Jones & Gregor, 2007), and through its evolution there have been many initiatives to strengthen the relations between practice and theory, including the field’s extensive development of action research methods (Baskerville & Myers, 2004; Mathiassen et al., 2012; McKay & Marshall, 2001), the foundation of collaborative practice research (Mathiassen, 2002), the origination of design science research (Hevner & Chatterjee, 2010; Vaishnavi & Kuechler, 2015; Winter, 2008), and ground-breaking work in participatory systems development (Bødker et al., 2011; Grønlund & Guohua, 1993). In terms of events, theory and practice merge in the annual CIO Forum at the International Conference on Information Systems; the forum is jointly sponsored with the Society for Information Management. These initiatives have served us well in creating bridges and collaboration between practitioners and researchers to inform the discipline’s evolving research agenda, to communicate research results to practitioners, and to help in ongoing efforts to keep Information Systems education relevant and up to date. Still, compared to neighbouring disciplines like computer science and software engineering, we have never managed to effectively engage practitioners in our key conferences and journals. This gap is unfortunate because Information Systems practices constantly change as a result of fast-paced technological developments and changes in information processing needs, requiring the body of Information Systems knowledge to be continuously challenged and updated. In this regard, Information Systems Clinical Research intends to make a major leap in advancing Information Systems research through practice experience that is rigorously crafted and communicated according to the criteria we describe in this article.

Despite the good intentions behind previous initiatives to have practitioners engage actively in our conferences and journals, they have had limited effect because they view practitioners as problem setters and consumers of Information Systems research. Challenging this assumption, we have worked with many academic and practitioner-researcher colleagues at recent ICIS conferences and beyond to create a clinical research genre for the Information Systems discipline as a means to have practitioners, possibly together with other practitioners and researchers, contribute publications to the discipline’s conferences and journals. In effect, our goal is to move practitioner-researchers within the Information Systems discipline from being commentators on and consumers of Information Systems research to having an active voice in producing research that inspires others within the field.

To advance Information Systems Clinical Research, we therefore encourage conference chairs to include the genre in calls for papers; we encourage journal editors and board members to adopt the genre as a new type of submission; and, not least, we encourage the growing community of doctorally qualified Information Systems practitioners to pursue clinical research and submit their articles to our conferences and journals.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

References

  • Adair, J. G. (1984). The Hawthorne effect: A reconsideration of the methodological artifact. The Journal of Applied Psychology, 69(2), 334.
  • Baskerville, R., & Myers, M. (2004). Special issue on action research in information systems: Making IS research relevant to practice – Foreword. MIS Quarterly, 28(3), 329–335. https://doi.org/10.2307/25148642
  • Baskerville, R., & Wood-Harper, A. T. (1998). Diversity in information systems action research methods. European Journal of Information Systems, 7(2), 90–107. https://doi.org/10.1057/palgrave.ejis.3000298
  • Bødker, K., Kensing, F., & Simonsen, J. (2011). Participatory design in information systems development. In H. Isomäki & S. Pekkola (Eds.), Reframing humans in information systems development (pp. 115–134). Springer London.
  • Burton-Jones, A., Oborn, E., Padmanabhan, B., Boh, W. F., & Kohli, R. (2021). Advancing research transparency – A view from MISQ. AIS Research Exchange, 12.
  • Carmines, E., & Zeller, R. (1979). Reliability and validity assessment. Sage.
  • Freidson, E. (1970). Profession of medicine: A study of the sociology of applied knowledge. Dodd, Mead & Co.
  • Grønlund, A., & Guohua, B. (1993). Participatory information systems - information systems as venues for participation. In D. Avison, J. E. Kendall, & J. I. DeGross (Eds.), Human, organizational and social dimensions of information systems development IFIP transactions A-24 (pp. 193–240). North-Holland.
  • Halupa, C. M. (2021). Action research in practice-based doctoral programs. In R. Throne (Ed.), Practice-Based and practice-led research for dissertation development (pp. 137–164). IGI Global.
  • Hevner, A. R., & Chatterjee, S. (2010). Design science research in information systems. Springer.
  • Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://doi.org/10.2307/25148625
  • Hirschheim, R., & Klein, H. (1989). Four paradigms of information systems development. Communications of the ACM, 32(10), 1199–1216. https://doi.org/10.1145/67933.67937
  • Jones, D., & Gregor, S. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312–335. https://doi.org/10.17705/1jais.00129
  • Kirk, J., & Miller, M. L. (1986). Reliability and validity in qualitative research. Sage.
  • Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23(1), 67–93. https://doi.org/10.2307/249410
  • Korenman, S. G. (n.d.). Teaching the responsible conduct of research in humans. Retrieved from https://ori.hhs.gov/education/products/ucla/default.htm
  • Lanamäki, A., Stendal, K., & Thapa, D. (2011). Mutual informing between IS academia and practice: Insights from KIWISR-5. Communications of the Association for Information Systems, 29, 7. https://doi.org/10.17705/1CAIS.02907
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.
  • Mathiassen, L. (2002). Collaborative practice research. Information Technology & People, 15(4), 321–345. https://doi.org/10.1108/09593840210453115
  • Mathiassen, L., Chiasson, M., & Germonprez, M. (2012). Style composition in action research publication. MIS Quarterly, 36(2), 347–363. https://doi.org/10.2307/41703459
  • Mathiassen, L., & Nielsen, P. (2008). Engaged scholarship in IS research – The Scandinavian case. Scandinavian Journal of Information Systems, 20(2), 3–20.
  • McKay, J., & Marshall, P. (2001). The dual imperatives of action research. Information Technology & People, 14(1), 46–59. https://doi.org/10.1108/09593840110384771
  • Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: A sourcebook of new methods (2nd ed.). Sage.
  • Rai, A. (2017). Editor’s comments: Diversity of design science research. MIS Quarterly, 41(1), iii–xviii.
  • Rapoport, R. (1970). Three dilemmas of action research. Human Relations, 23(6), 499–513. https://doi.org/10.1177/001872677002300601
  • Rousseau, D. M. (2020). The realist rationality of evidence-based management. Academy of Management Learning & Education, 19(3), 415–424. https://doi.org/10.5465/amle.2020.0050
  • Rousseau, D. M., Manning, J., & Denyer, D. (2008). Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals, 2(1), 475–515. https://doi.org/10.5465/19416520802211651
  • Schein, E. (1987). The clinical perspective in fieldwork. Sage.
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  • Vaishnavi, V. K., & Kuechler, W. (2015). Design science research methods and patterns: Innovating information and communication technology (2nd. ed.). CRC Press.
  • Van de Ven, A. H. (2007). Engaged scholarship: A guide for organizational and social research. Oxford University Press.
  • Vom Brocke, J., Gau, M., & Mädche, A. (2021). Journaling the design science research process. Transparency about the making of design knowledge. In K. L. Chandra, S. Seidel, & G. I. Hausvik (Eds.), The next wave of sociotechnical design. DESRIST 2021 (pp. 131–136). Springer.
  • Vom Brocke, J., Winter, R., Hevner, A., & Maedche, A. (2020). Special issue editorial–accumulation and evolution of design knowledge in design science research: A journey through time and space. Journal of the Association for Information Systems, 21(3), 9.
  • Winter, R. (2008). Design science research in Europe. European Journal of Information Systems, 17(5), 470–475. https://doi.org/10.1057/ejis.2008.44
  • Zusman, A. (2017). Changing degrees: Creation and growth of new kinds of professional doctorates. The Journal of Higher Education, 88(1), 33–61. https://doi.org/10.1080/00221546.2016.1243941
