Abstract
This study explores variation in postgraduate students' and supervisors' conceptions of research. Based on earlier work, a Conceptions of Research Inventory (CoRI) was trialled on a mixed sample of postgraduate students and supervisors (n = 251). Exploratory factor analyses of the resultant data yield a five-dimensional empirical model, the composition of which is consistent with earlier work by the present authors. Four of these five dimensions distinguish between (variation in) conceptions of research such as "truth", "problem-solving", "re-search", and "an insightful process". The fifth dimension captures variation in terms of what are interpreted as "misconceptions". The discrete conceptual dimensions suggested by the factor model are further explored via k-means cluster analyses of the dataset, partitioned (as far as sample sizes allow) by postgraduate status and supervisor designation. These analyses provide further insights into variation, across the various clusters in each case, as expressed in the profiles of cluster mean scores. The differences in evidence highlight contrasting patterns of variation between, for example, experienced and novice researchers. There is also evidence of dissonance in some of the cluster solutions, and analysis of variance further demonstrates that dissonant cluster membership is associated with a generally lower level of self-estimated performance. Finally, the implications of these findings are considered in relation to postgraduate training and supervision.
Acknowledgements
The authors thank Dr Mari Murtonen, who translated the Finnish version of the inventory used in the present study (in both directions) and also adapted it to account for differences between that country's tertiary education structure and those in South Africa and Australia. The authors also wish to thank two anonymous referees for their helpful comments.
Notes
1. At the time of writing the present paper, the authors are unaware of any other published work, empirical or otherwise, on students' conceptions of research. Contemporary books that deal with issues of postgraduate supervision and research, such as those by Taylor and Beazley (Citation2005) and Wisker (Citation2005), do not even index "conceptions of research".
2. "Adequacy" in the context of the exploratory factor analysis can be interpreted here in two senses: the strictly technical issue of the case-to-observable ratio, and the conceptual issue of "fitness for purpose". In the abstract, there is no mathematically precise answer to the question of what constitutes an adequate case-to-observable ratio; a ratio of approximately 4:1 is generally considered an adequate lower limit for exploratory purposes. The present study is close to this lower limit, and the pragmatic "fitness for purpose" justification lies essentially in the fact that it has been possible to extract a fully converged maximum likelihood solution with communality estimates, in preference to a more direct Procrustean solution based on principal component analysis.
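The 4:1 rule of thumb amounts to a simple arithmetic check. A minimal sketch follows; the item-pool size here is hypothetical, since the note does not state how many CoRI items were analysed:

```python
# Illustrative case-to-observable check for exploratory factor analysis,
# using the 4:1 lower limit cited in the note.
n_cases = 251   # sample size reported in the study
n_items = 60    # hypothetical item-pool size, for illustration only

ratio = n_cases / n_items
adequate_for_exploration = ratio >= 4.0

print(f"case-to-observable ratio = {ratio:.2f}:1; "
      f"adequate for exploratory purposes: {adequate_for_exploration}")
```

With these illustrative figures the ratio just clears the 4:1 threshold, which mirrors the note's observation that the study sits close to the accepted lower limit.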
3. The question of unstable factor loadings is fully acknowledged, and the sample size might partially explain the difficulty in extracting an unambiguous "conceptually distinct" factor model. There are, however, other considerations. What is being factor analyzed is a set of responses to a pool of items that represent, in varying degrees, sources of conceptual and empirical redundancy. The purpose of the analysis is to explore (not confirm) the dimensionality of what are, in effect, responses to redundant sets of items. Under these circumstances it is quite normal, and indeed expected, that resultant factor solutions will be less than perfectly "conceptually distinct". An appeal to the scree plot under these circumstances is but one of several considerations that contribute to the art, as opposed to the mathematics, of factor analysis.
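The scree inspection appealed to above can be sketched as follows. The data are synthetic, and the item structure (three groups of redundant items echoing three latent dimensions) is an assumption chosen purely to illustrate how redundancy shapes the eigenvalue profile:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 250 cases answering 12 items that form three
# redundant groups of four (each group echoes one latent dimension),
# mimicking the conceptual/empirical redundancy discussed in the note.
n_cases, n_groups, items_per_group = 250, 3, 4
latent = rng.normal(size=(n_cases, n_groups))
items = np.repeat(latent, items_per_group, axis=1)
items = items + 0.6 * rng.normal(size=items.shape)  # item-specific noise

# Eigenvalues of the inter-item correlation matrix, sorted descending
# as they would appear on a scree plot.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

print(np.round(eigvals, 2))
```

The first three eigenvalues dominate and the remainder form the flat "scree", but where exactly to place the elbow remains a judgement call; this is the sense in which the note describes the scree plot as part of the art, rather than the mathematics, of factor analysis.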
4. The question of whether a partitioning of the dataset (according to some categorical variable that defines discrete subgroups) might yield different factor structures is implicitly acknowledged. Exploring these issues falls outside the scope of the present study because of sample-size limitations. However, a complementary avenue of exploration (the cluster analyses) has been pursued, based on the assumption that the factor structure might provide a modest basis for exploring such subgroup differences as the partitioning of the dataset can support.
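The cluster-analytic avenue described above can be sketched in miniature. The following is a minimal k-means implementation applied to synthetic "factor scores" on five dimensions for a single hypothetical subgroup; the subgroup split, group sizes, and score distributions are all assumptions for illustration, not the paper's data:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: returns cluster labels and cluster mean-score profiles."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each case to its nearest centre (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre as its cluster's mean profile,
        # keeping the old centre if a cluster happens to empty out.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

# Synthetic five-dimensional "factor scores" for one subgroup
# (e.g. postgraduate students), built from three latent profiles.
rng = np.random.default_rng(1)
scores = np.vstack([rng.normal(loc=m, scale=0.4, size=(40, 5))
                    for m in (-1.0, 0.0, 1.0)])

labels, profiles = kmeans(scores, k=3)
# `profiles` corresponds to the cluster mean-score profiles through which
# the paper reads variation across clusters.
```

Comparing such mean-score profiles across separately clustered subgroups is one way to probe subgroup differences without re-estimating the factor structure for each subgroup, which is the modest role the note assigns to the cluster analyses.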