Original Articles

Achievement and affect in OECD nations

Pages 517-545 | Published online: 23 Jan 2007
 

Abstract

A statistical relationship between student affect and student achievement is routinely observed—students who like a particular subject also tend to do well in that subject. Theory suggests that the underlying causality is a mutual influence relationship in which affect influences, and is influenced by, achievement. Published analyses, however, usually assume a unidirectional influence—affect influences achievement. To the extent that the latter assumption is an over‐simplification, as theory suggests it is, then current understandings of the importance of affect for achievement are probably in error to some degree. The analyses reported here take a position consistent with theory to model the underlying causality of the relationship between affect and achievement as bidirectional. To this end, the present analyses formulate a non‐recursive structural equation model which specifies affect and achievement as influences on each other. This model is estimated separately for each of 23 nations, 19 of which are members of the Organisation for Economic Co-operation and Development (OECD). All 23 nations participated in the OECD‐sponsored Programme for International Student Assessment (PISA), a programme whose focus is national achievement levels in populations of 15‐year‐olds. The results of these analyses lend support to the proposition that affect and performance exist in a mutual influence relationship, though the nature of this relationship varies between countries.

Acknowledgements

We wish to acknowledge the helpful comments of the two Oxford Review of Education reviewers.

Notes

1. It may be helpful to think of non‐recursive models in terms similar to those used above to describe the cross‐lagged model, with repeated measures at two points in time. A non‐recursive model has a lag time of zero.

2. Actually there are five ‘plausible value’ estimates for each student. This matter is taken up below in connection with the measurement of reading achievement.
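The standard way to analyse such plausible values, following the multiple-imputation logic of Rubin (1987) cited in note 8, is to run the analysis once per plausible value and pool the results. A minimal sketch in Python, with purely illustrative numbers (the estimates below are not values from this study):

```python
import statistics

def pool_plausible_values(estimates, variances):
    """Combine M plausible-value analyses with Rubin's rules:
    pooled point estimate = mean of the M estimates;
    pooled variance = mean within-analysis variance
                      + (1 + 1/M) * between-analysis variance."""
    m = len(estimates)
    point = statistics.mean(estimates)
    within = statistics.mean(variances)
    between = statistics.variance(estimates)  # sample variance across the M runs
    total_var = within + (1 + 1 / m) * between
    return point, total_var

# Illustrative numbers only: one coefficient estimated five times,
# once per plausible value, each run with its own sampling variance.
est = [0.31, 0.29, 0.33, 0.30, 0.32]
var = [0.004, 0.004, 0.005, 0.004, 0.004]
point, total = pool_plausible_values(est, var)
```

The between-analysis component inflates the pooled variance, which is how the extra measurement error mentioned in note 8 is carried into the final standard errors.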

3. The effect of the latent variable on its indicator is fixed as well (to 1) in order to scale the latent variable to the scale of the indicator.

4. Identifying variables to serve as instrumental variables is generally seen as difficult. An instrumental variable must have a theoretically non‐trivial effect on one side of the reciprocal relationship, and a zero direct effect on the other that is at least plausible (Klein, 1962). The task becomes doubly difficult if one is doing secondary analyses, since these variables have to be found among what is available in the dataset. In the PISA data only grade‐level and family structure seemed defensible in these respects. The case for grade‐level effects on performance is straightforward since, as noted in the text proper, students in different grade levels have had different degrees of opportunity‐to‐learn. We make the seemingly reasonable assumption that grade‐level has no direct influence on affect, only an indirect one through performance. Where family structure is concerned we make the common assumption that differences in the parental configuration of families are associated with parallel differences in psychological support for learning, largely as a function of family disruption and/or increased child‐rearing burden. In this way family structure is assumed to influence performance only through its influence on learning affect, and not directly. While assumptions of this kind can be challenged, in the sense that one can always develop a scenario to support an effect where we have assumed no effect, the fit statistics and fit diagnostics associated with each model provide a check on the tenability of these assumptions. If there are, for example, direct family structure effects on performance, the fit of the model to the data will be degraded, both overall and with respect to this particular relationship.
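Under these assumptions the reciprocal part of the model can be written as two simultaneous equations, one instrument excluded from each (the variable names here are our shorthand for the constructs just described, not the authors' notation):

```latex
\begin{aligned}
\text{Achievement} &= \beta_{1}\,\text{Affect} + \gamma_{1}\,\text{GradeLevel} + \varepsilon_{1},\\
\text{Affect} &= \beta_{2}\,\text{Achievement} + \gamma_{2}\,\text{FamilyStructure} + \varepsilon_{2}.
\end{aligned}
```

Grade level appears only in the achievement equation and family structure only in the affect equation; it is these two exclusion restrictions that identify the reciprocal effects $\beta_{1}$ and $\beta_{2}$.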

5. One is not quite out of the woods even if the model is identified in principle, as it is here. It is possible for such a model to be under‐identified in practice, a condition known as empirical under‐identification. This can occur for a number of reasons, among them the presence of an instrumental variable only weakly related to the variable it affects in the reciprocal relationship. This phenomenon surfaced during the course of the present analyses. The original intention was to estimate a model similar to Figure 1, but with provision for a correlation between the errors of affect and achievement. However, this configuration produced anomalous estimates for most countries. We attribute this to the weakness of family structure as an instrumental variable. The subsequent analyses, reported here, embodied the assumption that this error correlation was zero, as indicated in Figure 1. This assumption serves to identify the model. However, to the extent that the true correlation departs from zero, some of the estimates obtained may be biased.

6. Explanatory analyses based on cross‐sectional data, structural equation models among them, need to assume that the system being modeled is in equilibrium at the time of data collection, or is at least proceeding smoothly toward equilibrium. In general the case for equilibrium can be argued on substantive grounds only (Kaplan et al., 2001). In instances where the system has not stabilised and is experiencing oscillations from cycle to cycle, explained variance statistics can become negative. This is especially the case in non‐recursive models when the coefficients for the reciprocal effects are of opposite sign, as they are for the UK, the Czech Republic, Denmark and the Russian Federation. However, only Denmark and the UK generate negative variance estimates—for the affect equation in the case of Denmark and for both equations in the case of the UK. We are led to assume then that where 15‐year‐olds in the UK and Denmark are concerned this affect–achievement relationship has not reached equilibrium and, hence, that we should not report the model estimates.

7. It is worth noting in passing that one could also explore the role of marks/grades assigned by teachers in this same general context. However, it is difficult to see how one would add such a measure to the model described above, since it would be necessary to allow for additional bidirectional effects and this in turn would lead to identification problems. The (presumably) high correlation between grades and test scores would be problematic as well. However, a related analysis modelling the influence of student affect on teachers’ grading practices, and the reverse, could be well worth considering as a contribution to the understanding of grade inflation.

8. Balanced Incomplete Block (BIB) spiralling is a technique used to increase content‐area coverage without a concomitant increase in the assessment time demanded of students. Each student completes only a subset of the total pool of assessment items, with the resulting data containing missing values for the items in the pool that fall outside that subset. Since the subsets of items are assigned randomly to students, the missing data are missing by design. To ensure the random assignment of items, the total pool of assessment items is divided into item blocks and subsets of the item blocks assembled into a number of test booklets along with a common set of background questionnaire items. Each block is paired with each other block in one and only one booklet. Booklets are spiralled into random sequences and assigned to students. Analysis of these data draws on the general model of Rubin (1987) and the scaling application developed by Mislevy et al. (1992). The trade‐off for increased coverage through BIB spiralling is increased measurement error in the scores available for each student. This is accommodated through the estimation of (five) plausible values for each student rather than a single (unreliable) point estimate. Beaton and Gonzalez (1995) provide a good general discussion. A more technical presentation covering the PISA data can be found in Adams (2002).
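The block-pairing property described here can be sketched concretely. A minimal illustration (our own toy construction, not the actual PISA booklet design) builds one booklet per unordered pair of item blocks, so each block meets each other block in one and only one booklet:

```python
from itertools import combinations

# Hypothetical item blocks; the real PISA design used more blocks and
# longer booklets, but the pairing property illustrated here is the same.
blocks = ["R1", "R2", "R3", "M1", "S1"]

# One booklet per unordered pair of blocks: every block is thereby
# paired with every other block in exactly one booklet.
booklets = [list(pair) for pair in combinations(blocks, 2)]

# With 5 blocks this yields C(5, 2) = 10 booklets; a student assigned
# a booklet sees only the items in its two blocks, and every other
# item in the pool is missing by design for that student.
```

The real design additionally balances the position of each block within booklets, so that fatigue and speededness effects are spread evenly across blocks.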

9. Note that we have avoided direct comparisons of effects based on the standardised coefficients. Since the two coefficients in question were derived from two different equations, they are standardised to different variances. In this case direct comparisons are not legitimate.
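The point can be made explicit: a standardised coefficient rescales the raw coefficient by the ratio of predictor to outcome standard deviations, and the two reciprocal effects have their roles reversed across the two equations (notation ours):

```latex
\beta^{*}_{\text{ach}\to\text{aff}}
  = \beta_{\text{ach}\to\text{aff}}\,\frac{s_{\text{ach}}}{s_{\text{aff}}},
\qquad
\beta^{*}_{\text{aff}\to\text{ach}}
  = \beta_{\text{aff}\to\text{ach}}\,\frac{s_{\text{aff}}}{s_{\text{ach}}}.
```

Since $s_{\text{aff}} \neq s_{\text{ach}}$ in general, the two standardised effects are expressed on different scales, and comparing their magnitudes directly is not legitimate.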

10. In assigning meaning to these patterns of effects it is important to keep in mind that the groupings of countries identified in this way are somewhat artificial since they are produced by dichotomising each of two continuous distributions of effect sizes using statistical significance as the criterion. Other sampling scenarios and/or increased tolerance for error could render some of these effects as statistically significant.
