Editorial

Changing the challenge: measure what makes you better and be better at what you measure

Pages 1-3 | Published online: 19 Dec 2017

Six excellent articles are yours in this issue of EJIS

To what degree do we adopt IT because it's just plain ‘fun’? Is this ‘fun’ motive seriously influenced by the opinions of those around us? From three Vienna universities, Dickinger, Arami, and Meyer join those bold researchers who investigate the adoption of IT on the basis of perceived enjoyment and the influence of social norms. Using the case of push-to-talk mobile phone technology and a staged research design involving panels of experts from mobile operators, focus groups of potential users, and a survey-style field study, they show that the importance of social norms and fun for IT adoption may surprise you.

From Kent State and Washington State Universities, Datta and Chatterjee propose and illustrate their theory about trusting intermediaries in electronic markets. Who are these intermediaries? They include eBay, Google, Priceline, PayPal, etc. Datta and Chatterjee launch their arguments from the sources of consumer uncertainty online (like inaccurate or incomplete market information). They theorize that consumer trust in the intermediaries as institutions compensates for uncertainty and enables the markets to function economically. Perhaps with exquisite timing (given the contemporary problems in the mortgage derivatives marketplace), they illustrate the theory using the online mortgage marketplace.

Two articles in this issue deal with mental models. From Missouri State and Drury Universities, Karuppan and Karuppan help us come to grips with effective user training during a complex system implementation in a large hospital. How do we overcome the learn-and-forget trap that ruins user training? The results from the Karuppan and Karuppan field study indicate that well-timed training with well-selected super users is the key. Their work offers insight into these selection indicators, especially mental model accuracy.

The other article dealing with mental models shifts our focus to software development team learning. We all know that individual characteristics like knowledge and skill are important in building effective software teams. Intuitively, diverse and complementary skills are important. But must the team's individuals also be compatible somehow to take advantage of this diversity? From Ewha Women's University and the University of Washington, Yang, Kang, and Mason survey Korean software development companies to test a theory that shared mental models mediate team effectiveness. Because such models can be fostered and developed, this work offers a means by which organizations can manage the collective learning ability within software teams.

We will continue this theme of learning from software projects by shifting the focus to organizational learning. From Georgia State and Aalborg Universities, Kasi, Keil, Mathiassen, and Pedersen explore the slim uptake of post-mortem evaluations among IT professionals. Post-mortem evaluation is known to be of great value in engineering disciplines, especially in learning from failure; why not in IT? They engage a Delphi panel of 23 IT practitioners to explain both the disinterest and the means for developing more interest in this tool.

What can we learn from a project that is neither a spectacular failure nor a spectacular success? Our final article in this issue follows the adoption of an online training system in a major hospital. This adoption process is a venue for us to learn how the hospital learns. From National Chiayi and Georgia State Universities, Chu and Robey explain why, over the years, three different professions in the hospital (nursing, pharmacy, and administration) ultimately used the system in quite different ways. Human agency seems core, but different agencies have differing histories and trajectories. When we understand the effects of time on human agency, we better understand the inconsistent outcomes engendered by many IT projects.

An inaugural editorial: changing the challenge

This issue marks my first editorial as EIC for EJIS. I decided to reverse the usual order of an editor's remarks by introducing the content of the issue first. Perhaps it's a silly demonstration, but it marks a ‘content first’ orientation of Ray Paul's that I hope to continue at EJIS. The research published in EJIS must meet two primary criteria: First, it must be interesting to a large part of our readership. Second, it must be of the highest quality. The first criterion is a primary function of the editors. The second is a primary function of the review panels. If done well, EJIS will continue to be a serious research journal that readers can enjoy in much the same way as an entertaining magazine: one interesting ‘story’ after another.

To underscore the steady continuation of the EJIS leadership, this editorial takes its starting point from Ray Paul's (2007b) recent editorials expressing and expanding his concerns about the manner in which the information systems (IS) community conducts its affairs: ‘Changing the challenge: To challenge makes you larger, and being challenged makes you small’ (Paul, 2007a, p. 193). My own belief is that this conduct reflects its roots in the business and management discipline, a field in which both research and practice suffer from not dissimilar issues.

What gets measured, gets done

This maxim, and its companion, ‘You cannot manage what you cannot measure,’ are variously attributed to Deming, Drucker, Jones, Peters, etc. Quality management historically draws its fundamental strength from this perspective. In its most recent appearance, it moves beyond quality and represents the current appeal of objective, control-model determinants of management decisions like scorecards and dashboards. Perhaps this is more than a maxim; perhaps it is a paradigm.

At this point, it doesn't matter whether this paradigm originated in Britain, Japan, North America, or elsewhere. This paradigm is not only a dominating influence on business and management practice, but it has also become reasonably prevalent in the administration of colleges of business and management. We are practicing what we teach. Since many IS departments are housed there, it is not surprising that this paradigm is becoming our own. Indeed, the paradigm has swept across to the administration of other schools, colleges, and universities. Within this paradigm, quality improvement lies primarily in the management of our metrics.

For research purposes, we can measure things like grant funding, publication quantities, citation counts, impact factors, and journal rankings (through surveys). We have chosen to consider these measures as indicators (or substitute measures) of research quality. Taken to an extreme, the research quality of any particular scholar can be reduced to a single figure (e.g., the h-index or its variants; see Hirsch, 2005), quickly compiled and calculated from Google queries (see ‘Publish or Perish’ at http://www.harzing.com/pop.htm).
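For readers unfamiliar with the arithmetic behind such single figures, the following minimal sketch (in Python) shows how an h-index could be computed from a list of per-paper citation counts; the function name and the example counts are illustrative assumptions, not part of Hirsch's paper or of the Publish or Perish tool.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # A scholar has index h if h of their papers have at least h citations each
    # (Hirsch, 2005). Sort the counts in descending order and find the largest
    # rank h at which the h-th most-cited paper still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three of the papers have at least three citations each.
print(h_index([10, 6, 3, 1, 0]))  # -> 3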

Even given their comic, tragic, and terrifying characteristics, do not think that such objective measures are without intense appeal to university administrators and evaluators faced with tough decisions that may get dissected under public scrutiny. Such measures often reinforce each other reliably. For example, we learned in a previous EJIS number that there is a high degree of correlation between journal rankings by stated preference and by revealed preference, that is, citation factors (Mingers & Harzing, 2007). Once a measure is accepted, the narrow means of improving performance on that measure may be obvious, and thereby become a tempting quality improvement strategy in their own right.

What does not get measured, does not get done

As an example of how our quality measures define what we do, consider the impact of the IS academic discipline on practice. Why should we find it surprising that IS research journals do not seem to be publishing the kinds of research that draw a large following among IS practitioners? It does not affect our citation factors because practitioner-oriented writings rarely cite academic research. Since we lack any other widely accepted metrics for practical impact, we do not seem to be seriously managing it. Therefore, under our prevailing paradigm, we really do not care if this gets done.

In fact, there are indications that the pursuit of practical publications could diminish certain research quality measures (i.e., increasing the number-of-publications divisor without affecting the number-of-citations dividend). This metrics view is not merely a silly statistic; it reflects the serious viewpoint held by many scholars that additional publications in ‘low-quality’ journals diminish an otherwise brilliant publication record in ‘high-quality’ journals. Publishing in low-impact journals diminishes the time available for publishing in high-impact journals.

Given the prevailing metrics paradigm, if we are serious about impacting practice, then we need practice impact factors. What might these be? Potential measures may spring from an unexpected source: the passion developing among university administrators for the discovery and capitalization of intellectual property. Perhaps the best-regarded practical impact factor will be the revenue generated for the university by sales and royalties from patents, copyrights, trademarks, etc.

What cannot be measured can only be managed indirectly

Quality measures for the purpose of managing research are unlikely to decline soon in their appeal to those charged with managing researchers. Arguments about which index is best will change neither the management goal nor the challenge. We might apply h-index arithmetic to journals rather than authors and revise our thinking about the order of journal ranks. Such a re-ordering arises because high journal impact factors are often dependent on a relatively small number of important articles, and because citations in conferences are generally disregarded (cf. Harzing & van der Wal, 2007). But reordering journal rankings or choosing among alternative indexes will not change the challenge.

Changing the challenge means changing the measures. None of our common measures are particular to IS. For example, how have we measured research? Treat it as development, and measure the grant funding it attracts. Treat it as intellectual property, and measure its income stream. Treat it as publication, and measure the ranking of the journal in which it appears. Treat it as the basis of further research, and measure its citation count. One piece of research may be worthless on one measure, and priceless on another.

All of these are common substitute measures, as are enrollments, graduation rates, and student placements. We use these convenient substitutes because the intrinsic value of education and research is difficult to measure. Since we are already operating with substitute measures, changing the challenge entails changing the substitute measures.

Does selecting fundamentally different IS measures sound easy? Hardly! How do we measure, for example, ‘what emerges from the usage and adaptation of the IT and the formal and informal processes by all of its users?’ (Paul, 2007a, p. 195). Do we measure the number of IT systems it takes the average manager to get through their day? Fifty-odd years ago, when LEO was switched on, this number was less than one. It is rather larger today, and growing (door, car, train, phone, etc.).

In an ideal world, the academic communities should recognize that considerable thought and wisdom are necessary to evaluate the outcomes of education and research programs. Finding the best measures that enable administrators to manage and demonstrate this value is equally challenging. Our first challenge, in changing the challenge, is to discover better ways to measure the really important things delivered by IS educational and research programs.
