Abstract
The simplest and most widely used assessment of academic research and researchers is the journal impact factor (JIF). However, the JIF may be skewed towards journals that publish a high number of non-research items and short-turnover research. Moreover, concerns arise because the JIF is often used to compare journals from different disciplines. In this study, the JIF computation of eight top-ranked journals from four different subject categories was analyzed. The analysis reveals that most of the published items (>65%) in the science disciplines were non-research items, while far fewer such items (<22%) were observed in engineering-based journals. A simple regression analysis confirmed a strong correlation (R² ≥ .99) in the number of published items and citations received over the two-year period used in the JIF calculation among the eight selected journals. A weighted-factor computation is introduced to compensate for smaller journals and journals that publish longer-turnover research. It is hoped that this approach can provide a comprehensive assessment of the quality of a journal regardless of disciplinary field.
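For reference, the standard two-year JIF divides the citations received in a given year by items published in the two preceding years by the number of citable items published in those two years. A minimal sketch of this calculation, using hypothetical figures for illustration only:

```python
def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Two-year JIF: citations in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1200 citations to 400 citable items
print(impact_factor(1200, 400))  # → 3.0
```

Because only "citable" items (typically research articles and reviews) enter the denominator while citations to all items enter the numerator, journals with many non-research items can see their JIF inflated, which motivates the weighted-factor computation proposed in this study.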
Acknowledgments
All bibliographic data used in this study were obtained from the Web of Science and the Journal Citation Reports of Thomson Scientific.