
Perceived academic quality and approaches to studying in the health professions

Pages e108-e116 | Published online: 03 Jul 2009

Abstract

Background: Students in higher education may adopt different approaches to studying, depending upon their perceptions of the academic quality of their courses and programmes, and both are likely to depend upon the nature of the curricula to which they are exposed.

Aims: Perceptions of quality and approaches to studying were investigated in students taking pre-registration programmes in a school of health professions. Two of the programmes were 3-year undergraduate programmes with subject-based curricula, and two were 2-year entry-level masters programmes with problem-based curricula.

Method: The Course Experience Questionnaire (CEQ) and the Revised Approaches to Studying Inventory (RASI) were administered to the students within a single survey. Their teachers were also surveyed with regard to their beliefs and intentions about teaching.

Results: The teachers on the two kinds of programme exhibited similar beliefs and intentions about teaching. However, the students on the masters programmes produced higher ratings than did the students on the undergraduate programmes with regard to the appropriateness of their assessment, the acquisition of generic skills and the emphasis on student independence. The students on the masters programmes were also more likely to show a deep approach to studying and less likely to show a surface approach to studying than were the students on the undergraduate programmes.

Conclusions: The CEQ and the RASI provide complementary evidence for use in research, in quality assurance and in quality enhancement. In comparison with subject-based curricula, problem-based curricula seem to enhance students’ perceptions of their programmes and the quality of their learning.

Practice points

  • Evaluations of academic programmes in the health professions need to take into account students’ perceptions of academic quality and the approaches to studying that they adopt.

  • The CEQ and the RASI can be recommended for use as research instruments and together provide complementary evidence for use in quality assurance and quality enhancement.

  • The introduction of problem-based curricula in the health professions can enhance students’ perceptions of the quality of their programmes and the quality of their learning.

Introduction

It is well established that students in higher education may demonstrate different approaches to studying across different programmes (Richardson Citation2000). These variations seem to depend upon the students’ perceptions of their academic context and in particular their perceptions of the quality of their courses (Richardson Citation2005). Accordingly, when evaluating programmes of study, it is desirable to obtain students’ accounts both of the perceived academic quality of those programmes and of the approaches to studying that they adopt on those programmes. The former will complement the latter in illuminating the nature of the student experience.

The Course Experience Questionnaire (CEQ) was developed by Ramsden (Citation1991) as an indicator of the quality of degree programmes in Australia. It consists of 30 statements in five scales that correspond to various aspects of effective instruction. Students indicate their level of agreement with each statement on a scale from 1 to 5. Half of the items are consistent with the meaning of the relevant scale, and the actual response is scored for those items. The other half of the items have a meaning that is opposite to that of the relevant scale, and these items are scored in reverse. The defining items of the five scales are shown in Table 1. Sadlo (Citation1997) found that students studying occupational therapy at institutions of higher education in six different countries differed significantly in their scores on the 30-item CEQ.

Table 1.  Defining items of the original scales in the Course Experience Questionnaire

Since 1993, an adapted version of the CEQ (containing only 17 out of the original 30 items) has been administered annually to all new graduates from Australian universities. This includes a sixth scale concerned with the fostering of generic skills and is supplemented by an item in which students rate their general level of satisfaction with their programmes. Wilson et al. (Citation1997) proposed that the original 30-item version of the CEQ should be augmented with the Generic Skills scale to yield a 36-item questionnaire, and they presented results from Australian students to demonstrate the reliability and validity of this instrument. Richardson et al. (Citation2005) found that this questionnaire was highly robust when used to compare students studying at seven Danish schools of occupational therapy.

The Revised Approaches to Studying Inventory (RASI) was devised by Entwistle et al. (Citation2000). In its present version, it contains 52 statements in 13 subscales that are subsumed under three broad approaches to studying (see Table 2).

  • A deep approach involves a focus on the underlying meaning of the course materials and would generally be regarded as a desirable way of studying in higher education.

    Table 2.  Subscales contained in the Revised Approaches to Studying Inventory

  • A strategic approach involves a focus on achieving the best results, regardless of whether this involves attention to the meaning of the course materials.

  • A surface approach involves a focus on memorizing course materials for the purposes of assessment and would be regarded as an undesirable way of studying in higher education.

Again, students indicate their level of agreement with each statement on a scale from 1 to 5. Reid et al. (Citation2005) found that the RASI was broadly satisfactory when it was used to monitor students’ approaches to learning at a Scottish medical school.

If students are asked about their perceptions of academic quality and approaches to studying within a single survey, then on theoretical grounds one would expect higher ratings of academic quality to be linked to the use of a deep approach to studying and lower ratings of perceived academic quality to be linked to the use of a surface approach to studying. This was confirmed by Lawless & Richardson (Citation2002) and Richardson (Citation2005) in students who were taking courses by distance learning and by Sadlo & Richardson (Citation2003) and Richardson et al. (Citation2005) in students who were taking campus-based programmes in occupational therapy. From both a theoretical perspective and a practical perspective, therefore, it was considered to be appropriate to combine the CEQ and the RASI into a single survey to compare perceptions of academic quality and approaches to studying among students who were taking programmes in the health professions.

Context

This study considered the four pre-registration programmes taught in the same school at an English university. All were full-time programmes involving periods of clinical experience and accredited by the relevant professional bodies. The two undergraduate programmes (in physiotherapy and podiatry) followed the normal English model and lasted 3 academic years. The two masters programmes (in occupational therapy and physiotherapy) followed an accelerated model (of the sort that has been introduced in the UK during the last decade) over 2 calendar years. In the terms of Margetson (Citation1991), the smaller masters programmes adopted explicitly problem-based curricula, whereas the larger undergraduate programmes adopted broadly subject-based curricula, albeit involving considerable interactive learning activity and a similar emphasis on the development of practical clinical skills.

A recent evaluation of the programmes carried out by the Quality Assurance Agency for Higher Education was unequivocally positive, concluding in each case that the reviewers had confidence in the academic and practitioner standards that were achieved by the relevant programmes. To obtain more information about the academic context, all of the teaching staff responsible for these programmes were asked to complete a questionnaire devised by Norton et al. (Citation2005) concerning various aspects of their underlying beliefs and their actual intentions in teaching. The questionnaire contained 34 items concerned with learning facilitation (a student-centred and learning-orientated conception of teaching) or knowledge transmission (a teacher-centred and content-orientated conception of teaching). Once again, the respondents indicated their level of agreement or disagreement with each item along a scale from 1 to 5.

Not counting two of the authors who distributed the survey, completed questionnaires were received from 34 (or 74%) of the 46 teachers; 27 stated that they taught mainly on one of the two undergraduate programmes, and 7 stated that they taught mainly on one of the two masters programmes, although some physiotherapists taught on both the undergraduate and the masters pre-registration programmes. Their mean scores on the questionnaire are shown in Table 3. Both groups showed a very high concern with learning facilitation and less concern with knowledge transmission. In comparison with the 638 teachers who were surveyed by Norton et al. (Citation2005) and who were teaching a range of disciplines at four different institutions of higher education, the teachers in the present study obtained somewhat higher scores on learning facilitation and somewhat lower scores on knowledge transmission: that is, they exhibited a commitment to student-centred rather than subject-centred teaching.

Table 3.  Mean (and SD) scores for beliefs and intentions in two groups of teachers

A multivariate analysis of variance using a doubly multivariate design found that there was a significant overall difference between the teachers’ beliefs and their intentions, F (2, 31) = 27.80, p < 0.01. Univariate tests showed that for knowledge transmission they obtained higher scores on intentions than on beliefs, F (1, 32) = 56.76, p < 0.01, but for learning facilitation there was a marginally significant trend for them to obtain higher scores on beliefs than on intentions, F (1, 32) = 3.41, p = 0.07. In other words, the teachers’ intentions in practice were more orientated towards knowledge transmission and rather less orientated towards learning facilitation than were their underlying pedagogical beliefs. A similar pattern was obtained by Norton et al. (Citation2005), who suggested that the academic and social context of higher education compromised teachers in practising their underlying beliefs about teaching.

There was no significant difference between the scores obtained by the undergraduate teachers and the scores obtained by the masters teachers, F (2, 31) = 1.97, p = 0.16, and no significant interaction with the difference between their beliefs and intentions, F (2, 31) = 0.40, p = 0.67. There was no significant difference between the undergraduate teachers and the masters teachers on knowledge transmission, F (1, 32) = 0.41, p = 0.53. However, there was a marginally significant trend for the masters teachers to obtain higher scores on learning facilitation than did the undergraduate teachers, F (1, 32) = 3.36, p = 0.08. In other words, the teachers who were delivering problem-based curricula to masters students were rather more student-centred than were the teachers who were delivering subject-based curricula to undergraduate students. This is of interest as it is known that students whose teachers adopt a student-centred approach to teaching are more likely to show a deep approach to studying and are less likely to show a surface approach to studying than are students whose teachers adopt a subject-centred approach to teaching (Trigwell et al. Citation1999). (It should, however, be borne in mind that some staff taught on both the undergraduate and the masters physiotherapy programmes.)

Methods

In this quantitative investigation, a questionnaire was administered to students in each of the four programmes, both as a research study and to obtain student feedback in order to enhance the quality of the four programmes. Institutional ethics approval was obtained in advance for the student survey and for the staff survey described above. Publication approval was granted on the basis that results would be reported in terms of level of study, not programme of study.

Study population

Each programme was structured into three stages which for the undergraduate programmes followed the 3 academic years. The target population consisted of all students who were in Stages 1 and 2 of each of the four programmes during the 2004–2005 academic year, a total of 351 students.

Materials

The CEQ was piloted with the previous cohort of students on the programme in occupational therapy since it had been claimed to be inappropriate for evaluating problem-based curricula (Lyon & Hendry Citation2002). As a result, a few items were amended slightly, and the instructions were extended so that a response of 1 was to be used if an item was never true and a response of 5 was to be used if an item was always true. Following the previous studies by Richardson (Citation2005) and Richardson et al. (Citation2005), the CEQ and the RASI were then combined into a single questionnaire and supplemented by questions about the participants’ age and gender. Otherwise, their responses were entirely anonymous. Finally, they were asked whether they had any other comments on their programme in general or about the questionnaire itself.

Data collection

The questionnaire was administered to each of the eight cohorts of students during regular classroom activities by the first author (who described himself as a researcher independent of the institution) and a research fellow. The students were advised both of the purposes of the study and that their participation was entirely voluntary. They provided written consent on a separate form that was retained by the institution. They were assured that their individual responses would be kept wholly confidential by the first author and that only the aggregate data from each cohort would be provided by way of feedback to their institution.

Data analysis

The students responded to each of the 36 items in the CEQ by indicating their agreement or disagreement with a particular statement along a 5-point scale from 5 for ‘definitely agree’ to 1 for ‘definitely disagree’. As noted earlier, 15 items are opposite in meaning to the scale to which they belong, and these are scored in reverse, so that 5 is scored as 1 and vice versa. The students were assigned scores on the six scales in the usual way by calculating the mean score across the constituent items. This yields scores between 1 and 5, where high scores represent favourable perceptions. An overall measure of perceived quality was calculated by taking the mean of the six scale scores. As in previous research, the 37th item (‘Overall, I am satisfied with the quality of this programme’) was used to assess the criterion validity of the CEQ.
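The scoring procedure just described (reverse-scoring negatively worded items and averaging within each scale) can be sketched as follows. This is an illustrative sketch only: the item numbers and the set of reversed items used here are placeholders, not the actual CEQ item assignments.

```python
def reverse_score(response, low=1, high=5):
    """Reverse a Likert response on a low-high scale, so 5 becomes 1 and vice versa."""
    return low + high - response

def scale_score(responses, reversed_items):
    """Mean score for one scale.

    responses: dict mapping item number -> raw response (1-5)
    reversed_items: set of item numbers that are scored in reverse
    """
    scored = [reverse_score(r) if item in reversed_items else r
              for item, r in responses.items()]
    return sum(scored) / len(scored)

# Hypothetical six-item scale in which items 2 and 5 are reverse-scored
responses = {1: 4, 2: 2, 3: 5, 4: 4, 5: 1, 6: 3}
score = scale_score(responses, reversed_items={2, 5})
```

High scores on the resulting 1-to-5 scale then represent favourable perceptions, exactly as in the text above.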

Similarly, the students responded to each of the 52 items in the RASI by indicating their agreement or disagreement with a particular statement along a 5-point scale from 5 for ‘definitely agree’ to 1 for ‘definitely disagree’. All the items are consistent in meaning with the scale to which they belong. The students were assigned scores on the 13 subscales in the usual manner by calculating the total of the scores on the constituent items. This yields scores between 4 and 20 on each subscale, where higher scores indicate that the respondent has a greater disposition to adopt the relevant approach to studying. Scores on the three main scales were obtained by summing the scores across the relevant subscales.

Cronbach's (Citation1951) coefficient alpha was used as a measure of the reliability of the scales in each of the instruments. Their construct validity was assessed by exploratory factor analysis using principal axis factoring. In each case, the number of factors to be extracted was determined by the number of principal components whose eigenvalues were greater than one, by Cattell's (Citation1966) scree test, and by O’Connor's (Citation2000) procedure based upon the parallel analysis of random correlation matrices. Squared multiple correlations were used as the initial estimates of communality. Where appropriate, the extracted factor solution was submitted to an oblique rotation using a quartimin method.
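Coefficient alpha itself is straightforward to compute from an items-by-respondents score matrix. A minimal sketch of the standard formula (using population variances) follows; the data passed in below are invented purely for illustration.

```python
def cronbach_alpha(item_scores):
    """Cronbach's (1951) coefficient alpha.

    item_scores: list of k items, each a list of the same n respondents' scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    def variance(xs):
        # Population variance (denominator n)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(item_scores)
    n = len(item_scores[0])
    # Each respondent's total score across the k items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))
```

For perfectly parallel items (every respondent gives identical answers to all items) the formula returns 1.0, its theoretical maximum; values above roughly 0.7 are often treated as satisfactory on conventional research-based criteria.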

Since the CEQ and the RASI had been administered within a single survey, it was then feasible to evaluate the relationship between the students’ scores on the two instruments by means of a multivariate analysis of variance and by examining the correlation coefficients among the various scale scores. Finally, comparisons between the scores obtained by the undergraduate students and the masters students were carried out using multivariate analyses of variance, and univariate analyses of variance were used to identify the scales and subscales on which there was statistically significant variation.

The probability level of 0.05 was employed as the criterion of statistical significance. Comparisons may be statistically significant and yet of little practical importance, especially when there are large numbers of participants. This can be addressed by deriving a measure of the relevant effect (see Richardson Citation1996). When two different groups are being compared, the most common measure of effect size is derived by standardising the difference between their two means by dividing it by the pooled within-group standard deviation; thus, an effect size of 0.5 means that the two groups differ on average by an amount equal to half of their common standard deviation. Cohen (Citation1988, pp. 24–27) proposed that effect sizes of 0.2, 0.5 and 0.8 should be described as ‘small’, ‘medium’ and ‘large’, respectively.
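The standardized effect size described here, the difference between the two group means divided by the pooled within-group standard deviation, can be sketched as follows (the three-element groups below are invented for illustration):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: the difference between two group means standardized
    by the pooled within-group standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pool the two variances, weighting by degrees of freedom
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d([2, 4, 6], [1, 3, 5])
```

On Cohen's (Citation1988) benchmarks, the resulting value of 0.5 for these illustrative data would be described as a ‘medium’ effect.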

Results

Completed copies of the questionnaire were returned by 269 students, which represents an overall response rate of 77%. Completed copies were returned by 182 (or 77%) of the 236 students on the undergraduate programmes and by 87 (or 76%) of the 115 students on the masters programmes. The difference between the two proportions was not statistically significant, χ2(1) = 0.09, p = 0.76. In most cases, the failure to achieve a 100% response rate was due to the students’ absence from the relevant class session rather than to the students’ non-compliance with the request to participate in the survey.

One respondent failed to indicate their age or gender. Of the remaining respondents, 220 (or 82%) were women and 48 (or 18%) were men. There was no significant difference between the proportion of men on the undergraduate programmes (16%) and the proportion of men on the masters programmes (22%), χ2(1) = 1.35, p = 0.25. The ages of the undergraduate students ranged from 18 to 60 with a mean of 26.1 years, and those of the masters students ranged from 20 to 48 with a mean of 26.3 years. The mean ages of the two groups were not significantly different, F (1, 266) = 0.04, p = 0.84, but the standard deviation of the undergraduate students (9.06 years) was significantly greater than the standard deviation of the masters students (4.26 years), F (1, 266) = 56.27, p < 0.001.

Course Experience Questionnaire

Table 4 shows the overall mean and standard deviation on each of the six scales in the CEQ, together with the values of coefficient alpha. The latter values were generally satisfactory on conventional research-based criteria (Robinson et al. Citation1991). A principal components analysis on the CEQ scores identified two components with eigenvalues greater than one. However, the eigenvalues-one rule tends to overestimate the true number of components because of sampling effects (Cliff Citation1988). Cattell's (Citation1966) scree test and the parallel analysis of 1000 random correlation matrices using O’Connor's (Citation2000) program implied that only one factor should be extracted. Table 4 shows that the scores on this factor defined a single underlying dimension that could plausibly be interpreted as an overall measure of perceived academic quality. According to the factor loadings, this factor was most closely associated with scores on good teaching, appropriate assessment, generic skills and emphasis on independence but less closely related to scores on clear goals and standards and appropriate workload.

Table 4.  Descriptive statistics for the Course Experience Questionnaire

Table 4 also shows the correlation coefficients between the students’ scale scores and their ratings of general satisfaction. The latter were significantly correlated with their scores on all six scales of the CEQ. However, they were more strongly associated with their scores on good teaching, clear goals and standards, appropriate assessment, generic skills and emphasis on independence than with their scores on appropriate workload. This is similar to the pattern obtained from the loadings in the factor analysis mentioned above, and it provides evidence for the criterion validity of the CEQ as a measure of perceived academic quality.

Revised Approaches to Studying Inventory

Table 5 shows the overall means and standard deviations on each of the 13 subscales of the RASI, together with the values of coefficient alpha. Some of the latter are unsatisfactory on conventional research-based criteria (Robinson et al. Citation1991). Nevertheless, the values for the three main RASI scales, calculated as the total scores across the relevant subscales, are more satisfactory. Overall, the students tended to obtain higher scores on the subscales measuring deep approach and strategic approach than on the subscales measuring surface approach. A principal components analysis identified four components with eigenvalues greater than one. Nevertheless, both Cattell's (Citation1966) scree test and the parallel analysis of 1000 random correlation matrices using O’Connor's (Citation2000) program implied that just three factors should be extracted. Table 5 shows that these factors were associated with the three main scales of the RASI. However, the lack of purpose subscale did not show a clear loading on the surface approach scale. As in previous studies, the ‘deep’ and ‘strategic’ factors were positively correlated with each other but were essentially uncorrelated with the ‘surface’ factor.

Table 5.  Descriptive statistics for the Revised Approaches to Studying Inventory

Relationships between CEQ scores and RASI scores

A multivariate analysis of variance showed that the amount of variation shared between the respondents’ scores on the six scales of the CEQ and their scores on the 13 subscales of the RASI was 75.7% (Wilks’ lambda = 0.243), F (78, 1385) = 5.19, p < 0.01. In particular, the overall measure of perceived quality on the CEQ showed a strong negative relationship with their scores on surface approach (r = −0.46) but weaker positive relationships with their scores on deep approach (r = +0.30) and strategic approach (r = +0.29). In short, perceptions of academic quality varied inversely with the adoption of an undesirable approach to studying and were directly if less strongly related to the adoption of more desirable approaches.

Comparisons between the CEQ scores of undergraduate and masters students

Table 6 shows the mean scores on the six scales of the CEQ for the undergraduate students and the masters students. A multivariate analysis of variance showed that the difference between the two groups explained 40.4% of the variation in their scale scores (Wilks’ lambda = 0.596), F (6, 262) = 29.60, p < 0.01. Univariate tests showed that the two groups produced significantly different scores on appropriate assessment, F (1, 267) = 88.92, p < 0.01, clear goals and standards, F (1, 267) = 15.26, p < 0.01, generic skills, F (1, 267) = 15.89, p < 0.01, and emphasis on independence, F (1, 267) = 32.96, p < 0.01. They also produced significantly different overall scores on perceived academic quality, F (1, 267) = 16.77, p < 0.01, but they did not produce significantly different ratings of general satisfaction with their programmes, F (1, 267) = 0.33, p = 0.56.

Table 6.  Mean scores on the Course Experience Questionnaire

Inspection of Table 6 shows that the masters students produced higher scores than did the undergraduate students on appropriate assessment, generic skills, emphasis on independence and perceived academic quality. However, the undergraduate students produced higher scores than did the masters students on clear goals and standards. Table 6 also shows the relevant effect sizes. In Cohen's (Citation1988) terms, all the effects were between medium and large; in other words, they are likely to be of both theoretical and practical importance.

Comparisons between the RASI scores of undergraduate and masters students

Table 7 shows the mean scores on the subscales of the RASI for the undergraduate students and the masters students. A multivariate analysis of variance showed that the difference between the two groups explained 22.4% of the variation in their subscale scores (Wilks’ lambda = 0.776), F (13, 255) = 5.67, p < 0.01. Univariate tests showed that the two groups obtained significantly different scores on seeking meaning, F (1, 267) = 24.55, p < 0.01, relating ideas, F (1, 267) = 20.84, p < 0.01, use of evidence, F (1, 267) = 33.18, p < 0.01, alertness to assessment demands, F (1, 267) = 4.22, p = 0.04, monitoring effectiveness, F (1, 267) = 7.04, p < 0.01, lack of purpose, F (1, 267) = 4.45, p = 0.04, unrelated memorizing, F (1, 267) = 27.73, p < 0.01, and syllabus-boundness, F (1, 267) = 24.23, p < 0.01.

Table 7.  Mean scores on the Revised Approaches to Studying Inventory

A separate multivariate analysis of variance showed that the difference between the undergraduate and the masters students explained 14.1% of the variation in their scores on the three major scales (Wilks’ lambda = 0.859), F (3, 265) = 14.48, p < 0.01. Univariate tests showed that the two groups obtained significantly different scores on deep approach, F (1, 267) = 27.88, p < 0.01, and on surface approach, F (1, 267) = 18.78, p < 0.01, but not on strategic approach, F (1, 267) = 0.97, p = 0.32.

Inspection of Table 7 shows that the masters students obtained higher scores than did the undergraduate students on seeking meaning, relating ideas, use of evidence and deep approach. However, the undergraduate students obtained higher scores than did the masters students on unrelated memorizing, syllabus-boundness and surface approach. In Cohen's (Citation1988) terms, these effects were between medium and large and hence likely to be of both theoretical and practical importance. The undergraduate students also obtained higher scores than the masters students on alertness to assessment demands, monitoring effectiveness and lack of purpose; however, these effects would be regarded only as small.

Discussion

The CEQ once again turned out to be reasonably robust: each of the six scales demonstrated satisfactory internal consistency, and a factor analysis confirmed their intended constituent structure. As in previous research (Lawless & Richardson Citation2002; Sadlo & Richardson Citation2003), students’ perceptions of academic quality depended on various aspects of their programmes, as reflected in the different scales of the CEQ. The students’ ratings of their programmes were broadly positive, but they were highest on generic skills, appropriate assessment and good teaching and lowest on appropriate workload and emphasis on independence. (The heavy workload of programmes in the health professions is often evident from student feedback.) Nevertheless, the students’ ratings of general satisfaction were also positive, with a mean rating of 4.13 and a modal rating of 4 out of a maximum of 5. In other words, collectively the students expressed a high level of satisfaction with the quality of their programmes.

The undergraduate students produced higher scores than did the masters students on the scale concerned with clear goals and standards. This is perhaps unsurprising, because the latter students were responsible for setting their own learning goals for each problem that they encountered. Lyon & Hendry (2000) argued that goals and standards in problem-based curricula could be seen as intrinsically ambiguous if the students’ choice of topic was given priority. Nevertheless, they themselves found no difference in the scores on clear goals and standards produced by students who were taking problem-based and traditional curricula, a finding confirmed by Sadlo & Richardson (Citation2003). Moreover, it should be noted that both the undergraduate students and the masters students in this study obtained mean scores on clear goals and standards that were above the midpoint of the response scale, reflecting broadly positive judgements. The programme teams responsible for the masters students might usefully consider how to make their programme goals and standards even more explicit, but it would clearly be misleading to describe the goals and standards as ‘ambiguous’.

Nevertheless, the masters students produced higher overall ratings of perceived quality than did the undergraduate students. This was associated with higher scores on the scales concerned with appropriate assessment, with generic skills and with emphasis on independence. In principle, these differences might simply reflect the fact that masters students have had more experience of higher education than undergraduate students. In the annual surveys of recent graduates in Australia, masters students do tend to give more positive ratings of their programmes than undergraduate students. However, the differences in question tend to be small in magnitude and achieve statistical significance only because of the very large sample size (Ainley & Long Citation1994, Citation1995; Johnson et al. Citation1996). In the present study, the relevant differences were between medium and large on Cohen's (Citation1988) criteria and thus more likely to be due to the different curricula that the students had experienced.

In fact, Sadlo (Citation1997) (see also Sadlo & Richardson Citation2003) obtained a similar pattern when comparing undergraduate programmes with problem-based and subject-based curricula across six schools of occupational therapy. As Sadlo and Richardson remarked, ‘The results indicate that problem-based curricula are perceived as fostering student autonomy through the use of assessment methods that are consistent with the intended learning outcomes’ (p. 266). It is also worth noting that students taking the accelerated masters programmes did not produce significantly different ratings of either the quality of the teaching that they received or the appropriateness of their workload. This could mean that the problem-based curricula did not involve a heavier overall workload, or that the students calibrated their ratings against their initial expectation that the workload would be heavier on an accelerated programme.

The RASI was rather less satisfactory in this context. Four subscales (relating ideas, organized studying, alertness to assessment demands and unrelated memorizing) did not exhibit satisfactory internal consistency, and one subscale (lack of purpose) failed to show a clear loading on any of the extracted factors. In other respects, however, the factor solution reflected the intended structure of the 13 subscales, and the three main scales did demonstrate satisfactory internal consistency. Moreover, students’ scores on the six scales of the CEQ and the 13 subscales of the RASI shared three-quarters of their respective variation. This confirms previous findings of an intimate relationship between students’ perceptions of the academic quality of their courses and the approaches to studying they adopt on those courses (Lawless & Richardson Citation2002; Sadlo & Richardson Citation2003; Richardson Citation2005; Richardson et al. Citation2005). This is, of course, a purely correlational relationship, and the nature of the underlying causal mechanisms is currently a matter of debate (see Richardson Citation2006).

As has already been noted, the students tended to obtain higher scores on subscales that measured the use of a deep approach or a strategic approach than on the subscales that measured the use of a surface approach. In general, then, all the programmes were fostering desirable approaches to studying rather than undesirable ones. Nevertheless, in contrast to previous studies, the present students tended to obtain relatively high scores on syllabus-boundness (‘relying on staff to define learning tasks’) and fear of failure (‘pessimism and anxiety about academic outcomes’) (Ramsden & Entwistle Citation1981, p. 371). This is perhaps not surprising on professional training programmes where the syllabi are defined by professional bodies and where the students’ future employability depends crucially upon their satisfactory academic performance. A similar pattern is evident in the results that were obtained by Reid et al. (Citation2005) in the case of Scottish medical students.

The students on the masters programmes obtained significantly higher scores on deep approach than did the students on the undergraduate programmes; this was associated specifically with higher scores on the subscales measuring seeking meaning, relating ideas and use of evidence. In contrast, the students on the undergraduate programmes obtained significantly higher scores on surface approach than did the students on the masters programmes; this was mainly associated with higher scores on unrelated memorizing and syllabus-boundness. Again, in principle these differences might simply reflect the fact that masters students have had more experience of higher education and thus have developed appropriate approaches to studying. However, Richardson (Citation1998) found that undergraduate and postgraduate students taking the same courses obtained similar scores on a predecessor to the RASI, and this would imply that there are no intrinsic differences between undergraduate and masters students in their approaches to studying.

Of course, masters students are typically older than undergraduate students, and it is well established that older students are more likely to adopt a deep approach and are less likely to adopt a surface approach than younger students (Richardson Citation1994). Even so, in the present study, the undergraduate students and the masters students had very similar mean ages. Instead, the present results confirm the findings of previous investigations that, in comparison with subject-based curricula, problem-based curricula tend to enhance the use of a deep approach to studying and to discourage the use of a surface approach to studying (Coles Citation1985; Newble & Clarke Citation1986; Sadlo & Richardson Citation2003). Insofar as problem-based curricula are designed around real-life issues, this may help the students to create personal meaning by integrating the various topics that they have studied.

Acknowledgements

The authors are most grateful to Lynne Caladine for permission to carry out this study and to publish the findings, to Mark Cage for his assistance in the administration of the survey and to the staff of the Survey Office of the Open University for their assistance in the design and processing of the survey questionnaires.

Additional information

Notes on contributors

John T. E. Richardson

JOHN T. E. RICHARDSON is Professor of Student Learning and Assessment in the Institute of Educational Technology at the Open University.

Lesley Dawson

LESLEY DAWSON is course leader of the M.Sc. Rehabilitation Science programme in the School of Health Professions at the University of Brighton.

Gaynor Sadlo

GAYNOR SADLO is Head of the Division of Occupational Therapy in the School of Health Professions at the University of Brighton.

Virginia Jenkins

VIRGINIA JENKINS is course leader of the B.Sc. (Hons.) Physiotherapy programme in the School of Health Professions at the University of Brighton.

Janet McInnes

JANET MCINNES is Head of the Division of Podiatry in the School of Health Professions at the University of Brighton.

References

  • Ainley J, Long M. The Course Experience Survey 1992 Graduates. Australian Government Publishing Service, Canberra 1994
  • Ainley J, Long M. The 1994 Course Experience Questionnaire: A Report Prepared for the Graduate Careers Council of Australia. Graduate Careers Council of Australia, Parkville, Victoria 1995
  • Cattell RB. The scree test for the number of factors. Multivar Behav Res 1966; 1: 245–276
  • Cliff N. The eigenvalues-greater-than-one rule and the reliability of components. Psychol Bull 1988; 103: 276–279
  • Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Erlbaum, Hillsdale, NJ 1988
  • Coles CR. Differences between conventional and problem-based curricula in their students’ approaches to studying. Med Educ 1985; 19: 308–309
  • Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika 1951; 16: 297–334
  • Entwistle N, Tait H, McCune V. Patterns of response to an approaches to studying inventory across contrasting groups and contexts. Eur J Psychol Educ 2000; 15: 33–48
  • Johnson T, Ainley J, Long M. The 1995 Course Experience Questionnaire: A Report Prepared for the Graduate Careers Council of Australia. Graduate Careers Council of Australia, Parkville, Victoria 1996
  • Lawless CJ, Richardson JTE. Approaches to studying and perceptions of academic quality in distance education. High Educ 2002; 44: 257–282
  • Lyon PM, Hendry GD. The use of the Course Experience Questionnaire as a monitoring evaluation tool in a problem-based medical programme. Assess Eval High Educ 2002; 27: 339–352
  • Margetson DB. Problem-focused education and the question of theory and practice, with special reference to some university courses. Unpublished doctoral dissertation. University of Tasmania, Hobart, Tasmania 1991
  • Newble DJ, Clarke RM. The approaches to learning of students in a traditional and an innovative problem-based medical school. Med Educ 1986; 20: 267–273
  • Norton L, Richardson JTE, Hartley J, Newstead S, Mayes J. Teachers’ intentions and beliefs concerning teaching in higher education. High Educ 2005; 50: 537–571
  • O’Connor BP. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behav Res Methods Instrum Comput 2000; 32: 396–402
  • Ramsden P. A performance indicator of teaching quality in higher education: the Course Experience Questionnaire. Stud High Educ 1991; 16: 129–150
  • Ramsden P, Entwistle NJ. Effects of academic departments on students’ approaches to studying. Brit J Educ Psychol 1981; 51: 368–383
  • Reid WA, Duvall E, Evans P. Can we influence medical students’ approaches to learning? Med Teach 2005; 27: 401–407
  • Richardson JTE. Mature students in higher education: I. A literature survey on approaches to studying. Stud High Educ 1994; 19: 309–325
  • Richardson JTE. Measures of effect size. Behav Res Methods Instrum Comput 1996; 28: 12–22
  • Richardson JTE. Approaches to studying in undergraduate and postgraduate students. Stud High Educ 1998; 23: 217–220
  • Richardson JTE. Researching Student Learning: Approaches to Studying in Campus-Based and Distance Education. SRHE & Open University Press, Buckingham 2000
  • Richardson JTE. Students’ perceptions of academic quality and approaches to studying in distance education. Brit Educ Res J 2005; 31: 7–27
  • Richardson JTE. Investigating the relationship between variations in students’ perceptions of their academic environment and variations in study behaviour in distance education. Brit J Educ Psychol 2006; 76: 867–893
  • Richardson JTE, Gamborg G, Hammerberg G. Perceived academic quality and approaches to studying at Danish schools of occupational therapy. Scand J Occup Ther 2005; 12: 110–117
  • Robinson JP, Shaver PR, Wrightsman LS. Criteria for scale selection and evaluation. Measures of Personality and Social Psychological Attitudes, JP Robinson, PR Shaver, LS Wrightsman. Academic Press, San Diego, CA 1991; 1–16
  • Sadlo G. Problem-based learning enhances the educational experiences of occupational therapy students. Educ Health 1997; 10: 101–114
  • Sadlo G, Richardson JTE. Approaches to studying and perceptions of the academic environment in students following problem-based and subject-based curricula. High Educ Res Devel 2003; 22: 253–274
  • Trigwell K, Prosser M, Waterhouse F. Relations between teachers’ approaches to teaching and students’ approaches to learning. High Educ 1999; 37: 57–70
  • Wilson KL, Lizzio A, Ramsden P. The development, validation and application of the Course Experience Questionnaire. Stud High Educ 1997; 22: 33–53
