Abstract
The central role of review articles in information systems is reflected in a recent surge of editorials, research articles, and opinion papers investigating methods and approaches for conducting standalone reviews. In continuity with recent developments in this area, this descriptive review seeks to determine the extent to which various types of review articles published in our field are transparent, i.e., whether they report important methodological elements about their design. To fulfil this objective, we identified, classified, and coded 142 review articles published between 2000 and 2014 in the Association for Information Systems (AIS) senior scholars’ basket of journals. Overall, our findings indicate inadequate reporting of the methods, procedures, and techniques used in a majority of reviews. Our assessment also reveals that theory development and narrative reviews, which are the most frequently published types of reviews in our field, were generally the least explicit about the methods they used. Based on these observations, we recommend that authors of all forms of reviews better document their design decisions so as to increase the trustworthiness of their findings, enable meaningful interpretation of results, and contribute to a cumulative body of knowledge in our discipline. The list of reporting items developed in this study can serve as a framework to assist prospective authors of reviews both within and outside our field.
Acknowledgements
We would like to thank the Editor, Frantz Rowe, who was particularly helpful in guiding this paper through the review process. We are also grateful to Guido Schryen, Gerit Wagner, Anne-Marie Croteau, Ana Ortiz de Guinea, and four anonymous reviewers for their helpful comments and suggestions on earlier versions of this manuscript. Lastly, we are indebted to Haitham Tamim for his assistance during the data coding phase.