
Is it worth attending higher education? Lessons from a systematic review on institutional contribution to learning outcomes

Article: 2351241 | Received 30 Aug 2023, Accepted 29 Apr 2024, Published online: 12 Jun 2024

Abstract

There is a growing interest in understanding more precisely the contribution of higher education (HE) to student learning outcomes. We present a systematic review of the literature published between 2000 and 2023 in the Web of Science (WoS) and Scopus databases. We identified 49 research articles that analyse individual and institutional variables related to students' academic and employment outcomes. Most focus on establishing non-causal relationships between students' entry characteristics and outcomes. In parallel, work estimating the institutional contribution has evolved: until 2015, the interest lay in comparing different methods and outcome variables; since then, there has been more interest in using causal techniques to identify factors related to the institutional contribution. Methodologically, value-added (VA) research shows variations in results depending on the type of outcome used and on the inclusion of institutional selectivity in the estimates. In addition, the results show gains in student learning; however, academic gaps related to gender, SES, or ethnicity remain or even increase after HE completion. In parallel, some value-added research has identified institutional variables that may be related to academic outcomes, such as expenditure per student, type of teachers' contract, and teaching practices such as discussions outside the classroom and participation in research projects. However, the predominance of descriptive techniques in the literature does not allow for conclusive results.

Introduction

Higher education (HE) has attained great importance since the mid-20th century, reflected in a sustained increase in enrolment and in public and private spending (OECD, 2021). In OECD countries, enrolment among 20–25 year-olds reached 41% in 2021, an increase of 4 percentage points since 2005. The average public and private expenditure on HE in OECD countries is 17,100 USD per student, equivalent to 37% of GDP per capita (OECD, 2021).

The economic literature has studied the impact of education on various labour outcomes for decades (Ehrenberg, 2004). Findings show that more years of education can increase the probability of finding a job and of accessing higher salaries (Becker, 1994). However, economic research noted from the outset that outcomes were not the same for everybody. Factors such as students' socioeconomic characteristics and prior academic performance (e.g. Chetty et al., 2020), the type and quality of higher education institutions (e.g. Kane & Rouse, 1995), or selection bias (e.g. Card, 2001; Black & Smith, 2004) are associated with graduate outcomes.

Despite progress in economic research, its focus has mainly been on analysing the relationship between education and economic returns (e.g. access to the labour market, wages). As a result, there is limited evidence regarding the effectiveness of the educational process, that is, the actual improvement in students' learning associated with higher education (Koretz, 2019).

School effectiveness research has focused on analysing the effect of educational institutions on student learning outcomes (David et al., 2000). However, the significant progress observed in understanding primary and secondary education has not been matched in higher education (Koretz, 2019).

Characteristics of HE, such as the curricular diversity of study programs or the infrequent availability of large-scale test results (Koretz, 2019), have hindered the advancement of value-added and school effectiveness research at the tertiary level.

Despite these difficulties, this area of research has gained momentum in the last decade due to increased concern about whether students are achieving the expected learning outcomes in areas such as analytical abilities and communication skills (Coates & Zlatkin-Troitschanskaia, 2019; Kinzie, 2019). One example is the increased availability of tests such as SaberPRO in Colombia, the CLA in the United States, or the OECD's AHELO project.

This paper presents the results of a systematic review of the literature that addressed the effect of higher education institutions on the learning outcomes of graduates using quantitative methodology. In our review, we aim to answer two questions: What is the contribution of higher education institutions to student learning outcomes? (Q1) And how is this contribution achieved? (Q2).

We analysed 654 articles in the WoS and Scopus databases published between 2000 and 2023. This article is organised into a theoretical framework section, methodological strategy section, and results section, followed by a discussion and a conclusion section, in which we outline possible lines of future research.

What is the institutional contribution?

The concept of contribution arose in economics, where it referred to the difference between the value of production and the cost of raw materials (Braun et al., 2010). This analytical framework was introduced into education in the early 1970s in the wake of human capital theory and the concept of the production function (Hanushek, 1979).

In education, this framework seeks to better understand which school inputs (teacher salaries, teaching experience, training, number of students per classroom) are crucial for generating learning outcomes and economic returns. However, the transposition from economics to education has been somewhat inexact, as the contribution is less tangible when measuring changes in learning outcomes than when measuring changes in the value resulting from the production process (Braun et al., 2010).

In education, the institutional contribution refers to the relative contribution that teachers, institutions, or specific programs make to various educational goals, considering other factors not controlled by institutions, such as socioeconomic level, race, or gender (Kim & Lalancette, 2013). Typically, this contribution has been estimated by comparing students' academic performance on standardised tests at two or more points in time, using value-added models that account for the influence of these other factors in the gain estimate (OECD, 2008). From an equity perspective, these considerations are relevant, as they make comparisons fairer. However, some counter-arguments suggest that to understand more precisely the contribution of institutions to learning outcomes, it is crucial to understand the factors within HEIs that generate inequitable outcomes (Carter & Reardon, 2014). This requires more than just statistically controlling for factors associated with unequal educational outcomes (e.g. socioeconomic level, race, or gender).
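To fix ideas, a minimal sketch of such a value-added specification (our illustrative notation, not taken from any single reviewed study) regresses the exit score on the entry score and student covariates, and treats the institution-level component as the value added:

```latex
% y_{ij}: exit test score of student i in institution j
% y^{0}_{ij}: entry (admission) test score
% X_{ij}: student covariates outside institutional control (SES, race, gender)
% u_j: institutional effect, read as the value added by institution j
y_{ij} = \beta_0 + \beta_1\, y^{0}_{ij} + \boldsymbol{\beta}_2^{\top} X_{ij} + u_j + \varepsilon_{ij},
\qquad u_j \sim N(0, \tau^{2}), \quad \varepsilon_{ij} \sim N(0, \sigma^{2})
```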

This criticism notwithstanding, studying the contribution of institutions to student outcomes has advanced our understanding of the influence of education on people's lives. Today, we understand that more years of schooling are related to productivity and that specific institutional characteristics add variability to this relationship (Hox et al., 2017); however, this knowledge comes from the school system.

Understanding how much of the difference in students' learning is attributable to the higher education institution and not to other factors is a significant methodological challenge (OECD, 2008). In addition, learning is naturally cumulative (Braun et al., 2010) and has a social component (Engeström, 1987), as it results from previous teachers, classmates, educational administrators' decisions, and students' extra-educational experience (Braun et al., 2010). Thus, researchers disagree on how pertinent and appropriate it is to mathematically model such relationships with the available data and without clear theoretical bases (Hox et al., 2017).

Additionally, there is no consensus on the most appropriate instrument to measure the HE contribution or on how to draw a representative sample of students and their HE experience. Studies in primary and secondary education illustrate attempts to measure a representative sample of students' learning. However, determining which sample in HE represents all programs is more complicated due to curricular diversity (Kinzie, 2019). For example, mathematics in STEM programs differs from mathematics in education or psychology programs. In addition, the learning measured in such exams may not fully reflect the contribution of HE to students: practical and workplace skills, such as teamwork, self-criticism, or oral expression, are generally not included in value-added measures (Koretz, 2019).

Despite these challenges, the increased interest in the effectiveness of educational processes has prompted the study of the contribution of HEIs by testing different methodological alternatives to address the challenges described above.

Methodology

We searched for articles published between 2000 and 2023 in the Web of Science (WoS) and Scopus databases. Within WoS, we searched the Core Collection, specifically the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. In Scopus, we searched the following indices: Social Sciences; Economics, Econometrics, and Finance; Psychology; and Arts and Humanities.

A total of 41 keywords were used, organised into six groups. The keyword groups aimed to capture synonyms within categories used frequently in higher education value-added research. Table 1 shows the keywords in each group.

Table 1. Keywords used.

We searched for articles combining three or four keyword groups at a time, according to the four combinations shown in Table 2. The Boolean operator 'OR' was used within each keyword group, and 'AND' was used to link different keyword groups. Keywords could appear in any of three dataset filter fields: title, topic, or abstract. The keyword group NOT referred to all study types that did not meet the study's inclusion criteria and needed to be excluded. Finally, we combined the code from the four search sets depicted in Table 2 using the Boolean 'OR', which gave a final set of results without duplicated manuscripts for each database (WoS, Scopus). Results from both databases were then combined manually, excluding duplicates.
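For illustration only, the following sketch assembles a query of this form from hypothetical keywords (the actual 41 terms appear in Table 1):

```python
# Hypothetical keyword groups; OR within each group, AND between groups,
# NOT for the exclusion group -- mirroring the strategy described above.
groups = [
    ['"value added"', '"value-added"', '"institutional contribution"'],
    ['"higher education"', '"university"', '"tertiary education"'],
    ['"learning outcomes"', '"graduate outcomes"'],
]
exclude = ['"literature review"', '"qualitative study"']

query = " AND ".join("(" + " OR ".join(g) + ")" for g in groups)
query += " NOT (" + " OR ".join(exclude) + ")"
print(query)  # paste into the WoS or Scopus advanced-search field
```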

Table 2. Combination of Keyword Groups used.

The search strategy yielded 430 articles indexed in WoS and 315 in Scopus. After removing 91 duplicates, the total number of articles was 654.

Publications were selected based on the titles and abstracts of the 654 articles. The inclusion criteria were: (i) quantitative empirical studies; (ii) focused on higher education; (iii) learning outcomes or access to the labour market as a dependent variable (e.g. scores on tests, grades, probability of graduation, salary, time to find a first job); (iv) academic or socio-economic characteristics of students or institutions as independent variables (e.g. previous academic performance, parents' educational level, parents' salary); and (v) outcomes measured at the end of HE. Applying these criteria resulted in a final sample of 49 articles. The 49 selected articles were reviewed in depth with three foci: first, the objectives stated by the authors; second, the methodology used, with emphasis on the study design, the independent and dependent variables, and the statistical technique; third, the results achieved. The review focused on the study results and not on any new hypotheses they may include.

Results

The results are presented in two sections. The first summarises the consensus and disagreements on the methodologies used to estimate the institutional contribution and the associated factors. The second section presents the main findings regarding the contribution of HEIs to the outcomes of their graduates (Q1), as well as how this contribution is achieved (Q2).

On the methods

Before delving into the research questions, we present a brief overview of the statistical techniques used in the 49 articles to better comprehend the scope of their results. We found both causal and descriptive studies, with a predominance of the latter (78%). Among the most used descriptive techniques were ordinary least squares (OLS) regressions with and without fixed effects at the institutional level (N = 18), followed by hierarchical linear modelling (HLM) regressions. Causal techniques were used in only 22% (N = 11) of the studies, all from 2015 onwards. In decreasing order, the causal techniques most frequently found were propensity score matching (PSM), regression discontinuity design (RDD), and instrumental variables (IV). Finally, we found three studies using difference-in-differences (DID), Bayesian networks, and entropy balancing. Table 3 presents the detailed information.

Table 3. Papers description.

Our review revealed a growing interest in seeking more precise estimates of the institutional contribution since the early attempts to transfer school effectiveness studies to the higher education field (Rodgers, 2005; Yunker, 2005). We found discussions on (a) the relevance of using cross-sectional designs; (b) the comparison of different statistical techniques; (c) how best to measure the institutional contribution; and (d) the problems generated by the lack of randomisation in access to HE. Each of these discussions is presented in more detail below.

Regarding the type of design (a), at least two studies concluded that cross-sectional designs, as opposed to longitudinal designs, tend to bias results upward. This is due to difficulties controlling for differences between first-year and final-year students, even when using causal statistical techniques such as matching; students differ in academic, social, and motivational characteristics, among others (Liu et al., 2016; Steedle, 2012). We also note that cross-sectional designs seem to have been used because of limited access to longitudinal databases, despite researchers' awareness of their methodological weaknesses. For example, Liu et al. (2016) combined cross-sectional descriptive analyses (OLS regression) with propensity score methodology.
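As a rough illustration of that combination, the sketch below applies propensity-score weighting to a cross-sectional first-year versus final-year comparison. All variable names and the data file are hypothetical; this is a minimal sketch, not the authors' actual pipeline:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("students.csv")  # hypothetical: one row per student
covars = ["admission_score", "ses_index", "female"]  # hypothetical entry covariates

# 1. Model the probability of being a final-year (vs first-year) student.
logit = sm.Logit(df["final_year"], sm.add_constant(df[covars])).fit(disp=0)
ps = logit.predict(sm.add_constant(df[covars]))

# 2. Inverse-probability weights balance entry characteristics across cohorts.
w = np.where(df["final_year"] == 1, 1 / ps, 1 / (1 - ps))

# 3. A weighted regression of the exit score on cohort gives the adjusted gain.
wls = sm.WLS(df["test_score"], sm.add_constant(df["final_year"]), weights=w).fit()
print(wls.params["final_year"])  # estimated gain, valid only under the PS assumptions
```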

We observed a growing interest during the last decade in comparing different statistical techniques to increase accuracy (b). Results confirm that accounting for the nested data structure using multilevel techniques such as HLM provided more stable and consistent results over time (Steedle, 2012; Shavelson et al., 2016). However, studies such as those by Bogoya et al. (2017) and Horn and Lee (2019) reported that these models are less accurate in the tails of the distribution, so they recommended exploring techniques that reduce this imprecision, such as quantile regression.
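A minimal sketch of this contrast, with hypothetical data and column names, might look as follows: the HLM's random intercepts serve as the value-added estimates, while quantile regression probes the tails where HLM is reported to be least accurate:

```python
# Sketch only: contrasts a two-level HLM with quantile regression, in the
# spirit of the Steedle (2012) / Bogoya et al. (2017) discussion.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("graduates.csv")  # hypothetical: exit_score, entry_score, institution

# HLM: random intercept per institution; the predicted intercepts are the VA estimates.
hlm = smf.mixedlm("exit_score ~ entry_score", df, groups=df["institution"]).fit()
va_by_institution = hlm.random_effects  # dict: institution -> estimated effect

# Quantile regression at the 10th and 90th percentiles, i.e. the tails.
for q in (0.10, 0.90):
    qr = smf.quantreg("exit_score ~ entry_score", df).fit(q=q)
    print(q, qr.params["entry_score"])
```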

In keeping with the search for precision, we observed an interest in methodologically broadening the concept of institutional contribution (c). Several publications analysed changes in the estimates resulting from the use of different types and numbers of dependent variables; the estimates can vary significantly depending on the type of variable used (e.g. Cunha & Miller, 2014; Milla et al., 2016; Rodríguez-Revilla & Vallejo-Molina, 2022; Shavelson et al., 2016). An HEI could, for example, generate a high contribution to reading comprehension but a lower contribution to mathematics, or vice versa. Typically, the literature (e.g. Broatch & Lohr, 2012; Espinel et al., 2019; Owusu-Agyeman & Larbi-Siaw, 2018) analysed the institutional contribution based on only one outcome variable (e.g. score on a test, grade point average, probability of graduation); however, Milla et al. (2016) included five test results (Spanish, English, Mathematics, Social Sciences, and Science) in the dependent variable of their regression model. Their results confirmed that estimates of the institutional contribution change depending on whether one or several outcome variables are used. In that sense, their proposal is consistent with the multidimensional view of contribution discussed in the works of Cunha and Miller (2014) and Shavelson et al. (2016). Rodríguez-Revilla and Vallejo-Molina (2022) also contribute to this discussion. Although they did not include several outcomes in one dependent variable, as Milla et al. (2016) did, they compared VA results for outcomes related to mathematics and reading skills. Their results show that if VA is estimated using mathematics skills, there are statistically significant differences between types of institutions (e.g. official and private), which is not the case for critical reading outcomes. Unfortunately, the authors do not explore this difference further. Finally, the results of Bagues et al. (2008) raise a caveat regarding the use of students' grades as outcomes. Specifically, they note that students who graduate from institutions awarding higher final grades do not perform better in the labour market or in vocational examinations. In this sense, differences in grade point averages may reflect different grading standards among institutions rather than the institutions' contribution to student learning.

The lack of randomness in the distribution of students among HEIs (d) is another issue addressed in different publications over the last decade (e.g. Falch et al., 2022; Gelbgiser, 2018; Cugnata et al., 2016; Shamsuddin, 2016). Most frequently, this is done by analysing institutional selectivity, which has predominantly been incorporated into HLM models as a covariate at the individual and institutional levels (e.g. Cunha & Miller, 2014; Liu, 2011a) to model peers' influence on students' academic outcomes. These studies showed that including this variable reduced the over- or underestimation of institutions' contributions to socially and academically diverse students.

Selectivity was modelled using two types of variables. The first involved school variables measured several months or years before entering HE, such as courses taken in secondary school, remedial programs in which students participated, or the probability of graduating from secondary school (e.g. Cunha & Miller, 2014). University admission exam results were the second and most widely used type of variable (49% of all papers). This variable was measured at the individual level but was aggregated at the institutional level in the analyses. For example, Tian et al. (2019), Kugelmass and Ready (2011), Bratti (2002), Shavelson et al. (2016), and Rodríguez-Revilla and Vallejo-Molina (2022) used the scores of an HE admission exam twice, first at the individual level and then, as an average, at the institutional level. In contrast, papers such as Wai and Tran (2022), Simpfenderfer (2023), Naven and Whalen (2022), and Rodríguez-Revilla and Vallejo-Molina (2022) aggregated this variable only at the institutional level (e.g. average SAT score of the institution, average parental income, institutional average SABER-11).
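A compact sketch of these two specifications, with hypothetical column names and data, helps make the distinction concrete:

```python
# Sketch only: selectivity entered at both levels versus the institutional
# level alone, echoing the two modelling choices described above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("graduates.csv")  # hypothetical: exit_score, adm_score, institution
df["adm_score_inst"] = df.groupby("institution")["adm_score"].transform("mean")

# Individual admission score plus its institutional mean (e.g. Tian et al., 2019).
both_levels = smf.mixedlm("exit_score ~ adm_score + adm_score_inst",
                          df, groups=df["institution"]).fit()

# Institutional mean alone (e.g. Naven & Whalen, 2022).
inst_only = smf.mixedlm("exit_score ~ adm_score_inst",
                        df, groups=df["institution"]).fit()
print(both_levels.params, inst_only.params, sep="\n")
```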

Main findings on HEIs' contribution to learning

This section focuses on the main findings of the research included in the review. Based on our research questions, results are organised into four groups according to differences in research goals, statistical techniques, methodological designs, and the independent variables considered in the estimations. All 49 articles analysed had an academic outcome (e.g. test results at graduation, time to graduation, academic honours) or a labour outcome (e.g. access to the labour market, salary, place of work) as the dependent variable, as this was one of our inclusion criteria. We also distinguished between actions that the institution deliberately implemented to promote student learning outcomes (e.g. curriculum or academic support programmes) and variables that, while relevant for student learning outcomes, are not directly controlled by the institution (e.g. selectivity, gender, race, SES). Considering the methodological designs and statistical techniques employed, the first three groups offer varying levels of depth in addressing our Q1: What is the contribution of higher education institutions to student learning outcomes? The first group comprises research that analyses the relationship between student characteristics (prior academic performance, SES) and student outcomes; the second group includes research on the relationship between institutional factors (e.g. curricular updates, student intervention programmes) and student outcomes; the third group refers to research that estimates the value added by HEIs. The fourth group includes research that estimates value-added but also analyses institutional characteristics that help explain differences in VA. The latter group contributes to answering our Q2: How is this contribution achieved?

Group 1: How are students' entry characteristics related to their outcomes?

Articles in the first group analyse student outcomes without explicitly considering the institution as an element of the estimation. In that sense, this first group of studies contributes to answering our Q1 only at an initial and limited level, as it allows us to examine solely the role of student variables and their relationship to outcomes.

Using descriptive and causal techniques, such as logit models, ordinary least squares regressions, weighted regressions, instrumental variables, Bayesian models, entropy balancing, hierarchical linear modelling, structural equations, and propensity score matching, these studies showed a strong association of SES and prior academic performance with university outcomes.

First, regarding the association with SES, students with a low SES are less likely to obtain a bachelor's degree and obtain lower grades than their higher-SES peers at the same type of institution (Ferrão, 2023; Gelbgiser, 2018; Shamsuddin, 2016). They also have lower levels of employability and wages despite comparable academic performance before university entry (Bagues et al., 2008; Cugnata et al., 2016). Combet and Oesch (2021), using a causal technique (entropy balancing), found a difference of more than 20 percentage points in university completion rates between high- and low-SES students. This relationship between socioeconomic origin and graduate outcomes appears to be mediated by institutional selectivity; however, it is unclear how selectivity operates within institutions to produce differences in outcomes. This issue is further addressed in the results of group 2.

A second area of research examined the influence of prior academic performance on student outcomes. Papers such as Wariyo and Asgedom (2021), Ferrão (2023), and Garcia and Maza (2018) agree that this is the strongest predictor of academic outcomes. However, they also recognise that such models explain approximately one-third of the variance in university success, so an important set of factors remains to be investigated.

Of all the research reviewed, only one study found no significant relationship between prior academic performance and performance at graduation (El-Moussa et al., 2021). However, this article describes specific research following the trajectory of Saudi Arabian students who move to the USA to enrol in HE. Such students may have characteristics that differentiate them from students who do not migrate to study, which may affect the results.

In summary, this first group of studies shows that socioeconomic status and prior academic performance are highly related to students' academic and employment outcomes. However, the designs and statistical techniques neither allow us to interpret the findings as causal nor to understand the mechanisms operating within institutions with greater or lesser selectivity.

Group 2: What institutional characteristics are related to student outcomes?

Studies in the second group do not use value-added models but contribute to answering our Q1 by identifying institutional characteristics associated with student outcomes. We organised the results from these studies along two themes: the mediating influence of institutional selectivity and the influence of changes at the curricular or teaching level.

Regarding selectivity, the studies showed that more selective universities increased students' graduation probabilities or access to the labour market (Gelbgiser, 2018; Shamsuddin, 2016). However, this seems more related to their ability to attract students with higher academic achievement in high school and higher SES than to the quality of the education offered (Bagues et al., 2008; Delahoz-Dominguez et al., 2022). The research reviewed agrees that students with lower SES or prior academic performance were less likely to access more selective universities (Gelbgiser, 2018; Wai & Tran, 2022), either because of economic constraints or because of the influence of family upbringing on their application decision (Cugnata et al., 2016). In this sense, the research does not identify the mechanisms operating within selective institutions that lead to higher outcomes. Papers such as Combet and Oesch (2021) note that students who enter more selective universities develop more ambitious educational goals, which could affect their labour market outcomes; however, the authors do not empirically support this claim. On the other hand, more recent work questions the effectiveness of selective institutions for specific outcomes. Simpfenderfer (2023), for example, shows that less selective universities improved intergenerational mobility even for students with academic difficulties during their university education.

Regarding variables at the curricular or teaching level, we identified a growing interest over the last three years in reporting the effects of various learning experiences on students' academic or work-related outcomes. We present the results differentiated by the statistical techniques used: first those from descriptive studies, then those using quasi-experimental techniques.

Descriptive studies show that actions such as allocating a female advisor to students who performed poorly in secondary school (Kato & Song, 2022), providing academic (workshops, courses) and vocational (attending lectures, research presentations) support to encourage enrolment in STEM programs (Draganov et al., 2023), using community-based training for future doctors (Huang et al., 2023), and implementing a different course sequence in the curricula (Lim et al., 2021) have a positive relationship with academic outcomes. In contrast, Bicak et al. (2023) showed that enrolling in additional mathematics courses is unrelated to better academic outcomes (GPA and graduation). Regardless of the results, the designs and statistical techniques used do not allow claims of a causal relationship between these actions and student outcomes.

Studies that use quasi-experimental designs and causal statistical techniques, on the other hand, come to a common conclusion: changes at the teaching or curricular level that emphasise academic support through student-centred, hands-on activities show positive results across various student outcomes. For example, teaching medical students in settings closer to the community (Latessa et al., 2015), offering instruction that engages students in learning mathematics (Wang et al., 2022), including mathematics reinforcement courses (Meiselman & Schudde, 2022), or promoting enrolment in academic enhancement courses (Turk, 2019) have a significant association with various post-graduation outcomes (e.g. retention, transfer from community college to four-year universities, graduation GPA, test scores at graduation, access to the labour market). Also using causal techniques and quasi-experimental designs, Hahm and Kluve (2019) conclude that the Bologna reform, by placing the student at the centre of teaching and promoting a curriculum that combines theory and practice, led to a significant increase in the probability of graduating within the expected instructional time.
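As a stylised illustration of the kind of estimate behind such conclusions, a difference-in-differences setup in the spirit of Hahm and Kluve (2019) could be sketched as follows; the column names and data file are hypothetical, not the authors' dataset:

```python
# DID sketch: the interaction of reform exposure and the post period carries
# the effect of interest, under the usual parallel-trends assumption.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohorts.csv")  # hypothetical: on_time_grad, bologna, post
did = smf.logit("on_time_grad ~ bologna * post", df).fit(disp=0)
print(did.params["bologna:post"])  # DID estimate on the log-odds scale
```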

Although the quasi-experimental studies find positive and significant relationships between different innovations implemented at the curricular level and student outcomes (final GPA, salary, continuity of studies), their results should be considered with caution. The studies were often conducted with data from a single site (e.g. one university), with Hahm and Kluve (2019) being the exception, and therefore the results are not generalisable to other institutions.

Group 3: What did higher education institutions contribute to the learning outcomes of their students?

We identified a third group of research that focuses on estimating the value institutions add to students' academic outcomes (Q1). These articles used pre- and post-intervention designs and statistical techniques such as OLS regressions with and without institutional fixed effects, multilevel regressions (HLM), quantile regression, PSM, and difference-in-differences.

These papers generally reported progress in student learning during HE (e.g. Liu et al., 2016; Klein et al., 2005; Pedersen & White, 2011). However, (a) the low variation in students' academic performance, (b) the influence of institutional selectivity, and (c) the persistence or deepening of prior academic gaps make it difficult to distinguish whether the increase in learning is due to the HE experience, the maturation of the students, or some combination of these and other factors.

Regarding (a), several studies reported a high correlation between the scores students obtained in exams at the beginning and at the end of their education. This could be interpreted as low value added by the HEIs; in other words, those who performed highly in the school system are likely to continue doing so in HE (e.g. Abramishvili & Tsirekidze, 2019; Bogoya & Bogoya, 2013; Delahoz-Dominguez et al., 2022; Shavelson et al., 2016).

Regarding (b), we observe a high influence of institutional selectivity on student learning outcomes, associated with the low variance in the scores. Studies such as those by Klein et al. (2005) and Liu (2011b) reported that scores on admission tests (e.g. the SAT) explained more than 70% of the variance in university academic performance. Including selectivity in estimates of institutional contributions decreased the variation in HEIs' contributions (Cunha & Miller, 2014; Kugelmass & Ready, 2011; Liu, 2011a; Shavelson et al., 2016; Steedle, 2012). These results suggest that the institutional contribution may be more related to HEIs' selection of students with high academic performance than to the academic contribution associated with their training.

Regarding (c), we found that the knowledge gaps observed before students enrol in HE are maintained or even increased by the end of HE. Using different techniques (e.g. simple linear regressions, difference-in-differences, PSM, HLM) and dependent variables (e.g. standardised mathematics and reading tests), several papers indicated that factors outside individuals' control, such as gender, race, parents' educational level, or place of origin, were more strongly associated with learning outcomes than university education itself (Espinel et al., 2019; Gómez et al., 2020; Kugelmass & Ready, 2011). For example, Gómez et al. (2020) demonstrated that the prior gender achievement gap in mathematics increased after university, to women's disadvantage, while women's initial advantage in reading was reversed after university, in favour of men. Kugelmass and Ready (2011) likewise showed that the initial academic gaps between African-American students and their white peers widened during college.

Group 4: What institutional factors influence the contribution of HEIs to student learning outcomes?

This last research group allowed us to identify more explicitly the institutional characteristics associated with students' learning outcomes (Q2). We identified three types of results. The first group examined whether differences observed in contribution estimates among institutions could be associated with various measures of institutional quality (Bratti, 2002; Falch et al., 2022; Kugelmass & Ready, 2011). To do this, they regressed the institutional effects estimated in the first stage of the analysis on institutional covariates associated with institutional quality or expenditure. Bratti (2002) and Falch et al. (2022) analysed staff quality, research quality, expenditure per student, and the peer effect (average score of A-level students), while Kugelmass and Ready (2011) included institutional selectivity and HEIs' spending on faculty development, teaching and learning centres, and academic support staff.
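A minimal two-stage sketch of this approach, with hypothetical files and column names, might look as follows:

```python
# Sketch only: stage 1 recovers institutional effects from student-level data;
# stage 2 regresses those effects on institution-level quality measures.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("graduates.csv")        # hypothetical student-level data
institutions = pd.read_csv("institutions.csv") # hypothetical: institution,
                                               # expenditure_per_student, selectivity

# Stage 1: random-intercept model; the intercepts are the institutional effects.
m1 = smf.mixedlm("exit_score ~ entry_score", students,
                 groups=students["institution"]).fit()
effects = pd.DataFrame({"institution": list(m1.random_effects),
                        "va": [eff.iloc[0] for eff in m1.random_effects.values()]})

# Stage 2: which institutional covariates are associated with the effects?
stage2 = smf.ols("va ~ expenditure_per_student + selectivity",
                 effects.merge(institutions, on="institution")).fit()
print(stage2.summary())
```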

The studies reported expenditure and selectivity (or the peer effect) as the only institutional quality measures significantly associated with student outcomes. On this latter point, although the authors observed that studying in institutions with higher-academic-performance students contributed to increased learning outcomes, it was unclear whether this increase could be attributed solely to peer effect or whether there were institutional mechanisms not considered in the estimate.

A second factor identified is teaching quality, although the evidence is not decisive. This second set includes studies that descriptively examine teachers' characteristics, while others describe the teaching process in more depth. Tian et al. (2019) found that part-time teachers have a more significant effect on students' academic performance than full-time teachers, whereas Falch et al. (2022) conclude that full-time teachers influence institutional VA outcomes the most. Broatch and Lohr (2012), in turn, did not observe a relationship between teachers' characteristics (type of contract, gender, ethnicity, age, academic degree, area of work) and academic performance. Looking further into the teaching process, Anaya (2001) concludes that in institutions that add more value, there is an improvement in the learning of students who participated in out-of-class discussions with their teachers and of those who worked on their lecturers' research projects. However, it is unclear whether the variables examined by Anaya (2001) can be attributed to the institution or to teachers' own initiatives.

We identified a third set of studies examining the contribution of HEIs to students' academic and employment outcomes. Using HLM and regression discontinuity, Naven and Whalen (2022) found that law schools ranked in the top 14 (T14) in the United States did not significantly affect students' bar exam results; however, they did make a difference in the type of jobs graduates obtained. The lowest VA-ranked institution within the top 14 law schools increased the likelihood of its graduates being employed in important law firms by 30 percentage points. As a complement, using HLM, Rodríguez-Revilla and Vallejo-Molina (2022) analysed the academic outcomes of graduates over five years; their results show that lower-quality institutions (non-accredited or private) added more value.

In summary, research suggests that institutional value added is not necessarily related to being ranked as a higher prestige or quality institution (e.g. Top 14 or accredited).

Discussion

The results showed a consensus on increased student learning upon completion of HE. However, it was not clear whether this increase is due to the institution, the students’ previous characteristics, or a combined effect of both. We identified three factors that could lie behind the uncertain role of institutional contribution.

First, from the methodological point of view, we find a preponderance of descriptive techniques (e.g. OLS and HLM regressions), which do not enable us to establish causal relationships between the institutional effect and graduate outcomes. Although most of the descriptive studies sought to control biases by using designs with pre- and post-measures or by including covariates to control for differences related to prior academic performance, SES, or gender, these were not enough to establish a causal link (e.g. Gerber & Green, 2012). The lack of randomisation in the distribution of students across HEIs, methodological problems in attributing student outcomes to HEIs alone (the exclusion restriction in the causal inference literature), and difficulties in controlling for peer influence on learning (SUTVA) were some of the additional problems identified (Gerber & Green, 2012).

Our review revealed a growing interest in solving these problems. Over the last five years, there have been efforts to reduce biases in the study of value added by combining descriptive techniques such as HLM or OLS regressions with causal techniques such as PSM, difference-in-differences, or Bayesian networks (e.g. Cugnata et al., 2016; Gómez et al., 2020; Liu et al., 2016; Naven & Whalen, 2022). Despite such efforts, the studies still acknowledge pending problems that cannot be remedied by statistical analysis alone (e.g. Liu et al., 2016; Owusu-Agyeman & Larbi-Siaw, 2018).

Second, the studies reviewed to answer the first research question (Q1: What is the contribution of higher education institutions to student learning outcomes?) agree that institutional contribution estimates varied depending on the type and number of outcome variables used and on the inclusion or omission of institutional academic selectivity in the analysis. Regarding the number and type of variables, there seems to be a consensus that using only one outcome variable yields a limited perspective on the institutional contribution. Some studies in the review addressed this by including more than one dependent variable and estimating a separate model for each (Cunha & Miller, 2014; Rodríguez-Revilla & Vallejo-Molina, 2022; Shavelson et al., 2016), or by including several outcome variables in the same estimate of institutional contribution (Milla et al., 2016). Through these alternatives, studies attempted to engage with the discussion about the multidimensional contribution of HEIs, broadening the concept beyond results on a standardised test (Koretz, 2019). However, the limited development of standardised tests measuring graduate learning outcomes and the limited availability of data on other types of contributions of higher education (e.g. teamwork, leadership, innovation) made it difficult to broaden the concept of institutional contribution.

The research showed that institutional selectivity was a relevant variable in estimating the institutional contribution. Students with higher SES and academic performance are concentrated in institutions with a longer tradition or greater prestige. Furthermore, while a body of literature finds that the most selective institutions achieved the best outcomes (Bogoya & Bogoya, 2013; Klein et al., 2005; Liu, 2011b; Owusu-Agyeman & Larbi-Siaw, 2018), it was unclear whether these outcomes were a product of what HEIs did to make students learn or a response to students' prior academic skills and the influence these skills may have on their peers. Along this line, several studies (Bagues et al., 2008; Naven & Whalen, 2022; Simpfenderfer, 2023) suggest that the value added by selective institutions is lower than that of less selective institutions. Not considering the selectivity effect would go against an important strand of educational research that views learning as a social and cumulative phenomenon over time (Braun et al., 2010; Engeström, 1987).

We also observe an academic discussion about the modelling of selectivity. Studies included selectivity as an institutional variable in the estimates, although the variable was usually built from individual-level data (the average score on HE admission exams obtained by enrolled students). In the opinion of authors such as Hox et al. (2017), such a combination is not theoretically supported and is inappropriate for estimating the effect of the institution, as combining the levels fails to consider the contribution of high-achieving students to their peers and to the institution itself.

The analyses conducted to answer our second research question (Q2: How is the institutional contribution achieved?) showed that only 16% (n = 8) of the papers included institutional variables in their estimates. Furthermore, the variables considered in the models referred only to administrative institutional characteristics (e.g. public or private HEIs, type of teaching staff contracts, accreditation) and did not capture teaching and learning characteristics. We observe a disconnect between the literature on teaching and learning and the literature on the institutional contribution of HE. In the teaching and learning literature, there is consensus on the important role that active learning strategies (e.g. Astin, 1993), participation in extracurricular activities (e.g. Pascarella & Terenzini, 2005), and institutional involvement in student learning (e.g. Kuh et al., 2011) play in fostering deeper and higher-quality learning (Biggs & Tang, 2011). However, most of the VA literature included in this review (groups 3 and 4) does not address this discussion. Most of the studies that do address it are in group 2 and do not use a dependent variable measuring the HEIs' contribution. Through descriptive techniques, those studies report positive relationships between interventions at the curricular level and students' academic outcomes. Although these papers do not use value-added models, they provide a path to follow: it would be valuable for the VA literature to consider variables of this type to advance our understanding of the factors related to HEIs' contribution.

Despite the difficulties described, we observe areas of progress in research on the institutional contribution. At the beginning of the 2010s, only a few analyses of institutional contribution were available, according to the literature review by Kim and Lalancette (2013), and cross-sectional designs were frequently used. Our review shows that longitudinal models have become more frequent and that, starting in 2015, there have been important efforts to combine descriptive (OLS and HLM) and causal techniques (e.g. PSM) to estimate the institutional contribution. From our perspective, the latest developments show a growing concern for dealing with the lack of randomness in the distribution of students and for exploring more accurate estimates.

Conclusions

The analysis of the institutional contribution of HE leaves several lessons for future research. First, developing models that more accurately estimate the contribution of institutions to both high- and low-achieving students could facilitate a deeper understanding of institutional effects on a more diverse student body. An interesting line to explore is the use of quantile regression models (e.g. Bogoya et al., 2017; Page et al., 2017).

In addition, it is necessary to include causal techniques in the estimation of the institutional contribution, since these methods provide a more accurate estimate of the institutional effect on student outcomes. However, these efforts require further theoretical and methodological discussion to meet the assumptions recommended by the specialised literature on causal inference (Gerber & Green, 2012).

In parallel, moving towards a multidimensional perspective of institutional contribution is important. The evidence shows that estimates change depending on the type and number of outcome variables.

Finally, the literature on institutional contribution could be even more helpful to the scientific community and decision-makers if, in addition to identifying which institutions generate greater or lesser contributions, it enabled us to determine for which groups of students the contribution occurs and with which institutional factors it is associated. In this regard, future research should include institutional variables closer to the issues studied in the teaching and learning literature.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the Millennium Nucleus ‘Student Experience in Higher Education in Chile: Expectations and Realities’, Doctorate Scholarship Programme of the National Research and Development Agency [2018-21181511], and project Fondecyt 1211883.

Notes on contributors

Gonzalo Cifuentes Gomez

Gonzalo Cifuentes Gomez holds a master’s degree in learning assessment and is a doctoral candidate at the Pontificia Universidad Católica de Chile. His research interests include initial teacher training, learning assessment in higher education, and value-added models in higher education.

Maria Veronica Santelices

Maria Veronica Santelices, PhD, is an associate professor at Pontificia Universidad Católica de Chile, Department of Education. Her research interests include educational measurement and educational policy. In higher education, she has researched admissions to selective institutions both in Chile and in the United States, and college-going decision-making, transition to higher education, persistence and the impact of financial aid on educational outcomes.

References

  • Abramishvili, Z., & Tsirekidze, D. (2019). Value added of universities: Evidence from Georgia. Economics Bulletin, 39(3), 2184–2191.
  • Anaya, G. (2001). Correlates of performance on the MCAT: An examination of the influence of college environments and experiences on student learning. Advances in Health Sciences Education: Theory and Practice, 6(3), 179–191. https://doi.org/10.1023/A:1012691921321
  • Astin, A. (1993). What matters in college. Jossey-Bass Publishers.
  • Bagues, M., Labini, M., & Zinovyeva, N. (2008). Differential grading standards and university funding: Evidence from Italy. CESifo Economic Studies, 54(2), 149–176. https://doi.org/10.1093/cesifo/ifn011
  • Becker, G. (1994). Human capital – A theoretical and empirical analysis, with special reference to education (3rd ed.). National Bureau of Economic Research.
  • Bicak, I., Schudde, L., & Flores, K. (2023). Predictors and consequences of math course repetition: The role of horizontal and vertical repetition in success among community college transfer students. Research in Higher Education, 64(2), 260–299. https://doi.org/10.1007/s11162-022-09706-7
  • Biggs, J., & Tang, C. (2011). Teaching for quality learning at university. McGraw-Hill Education.
  • Black, D., & Smith, J. (2004). How robust is the evidence on the effects of college quality? Evidence from matching. Journal of Econometrics, 121(1-2), 99–124. https://doi.org/10.1016/j.jeconom.2003.10.006
  • Bogoya, J. D., & Bogoya, J. M. (2013). An academic value-added mathematical model for higher education in Colombia. Ingeniería e Investigación, 33(2), 76–81. https://doi.org/10.15446/ing.investig.v33n2.39521
  • Bogoya, J. D., Bogoya, J. M., & Peñuela, A. (2017). Value-added in higher education: ordinary least squares and quantile regression for a Colombian case. Ingeniería e Investigación, 37(3), 30–36. https://doi.org/10.15446/ing.investig.v37n3.61729
  • Bratti, M. (2002). Does the choice of university matter?: A study of the differences across UK universities in life sciences students’ degree performance. Economics of Education Review, 21(5), 431–443. https://doi.org/10.1016/S0272-7757(01)00035-8
  • Braun, H., Chudowsky, N., & Koenig, J. (2010). Getting value out of value-added. The National Academies Press.
  • Broatch, J., & Lohr, S. (2012). Multidimensional assessment of value added by teachers to real-world outcomes. Journal of Educational and Behavioral Statistics, 37(2), 256–277. https://doi.org/10.3102/1076998610396900
  • Card, D. (2001). Estimating the return to schooling: Progress on some persistent econometric problems. Econometrica, 69(5), 1127–1160. https://doi.org/10.1111/1468-0262.00237
  • Carter, P., & Reardon, S. (2014). Inequality matters. William T. Grant Foundation.
  • Chetty, R., Friedman, J., Saez, E., Turner, N., & Yagan, D. (2020). Income segregation and intergenerational mobility across colleges in the United States. The Quarterly Journal of Economics, 135(3), 1567–1633. https://doi.org/10.1093/qje/qjaa005
  • Coates, H., & Zlatkin-Troitschanskaia, O. (2019). The governance, policy and strategy of learning outcomes assessment in higher education. Higher Education Policy, 32(4), 507–512. https://doi.org/10.1057/s41307-019-00161-1
  • Combet, B., & Oesch, D. (2021). The social-origin gap in university graduation by gender and immigrant status: a cohort analysis for Switzerland. Longitudinal and Life Course Studies, 12(2), 119–146. https://doi.org/10.1332/175795920X16034769228656
  • Cugnata, F., Perucca, G., & Salini, S. (2016). Bayesian networks and the assessment of universities’ value added. Journal of Applied Statistics, 44(10), 1785–1806. https://doi.org/10.1080/02664763.2016.1223839
  • Cunha, J. M., & Miller, T. (2014). Measuring value-added in higher education: Possibilities and limitations in the use of administrative data. Economics of Education Review, 42, 64–77. https://doi.org/10.1016/j.econedurev.2014.06.001
  • David, R., Teddlie, C., & Reynolds, D. (2000). The international handbook of school effectiveness research. Psychology Press.
  • Delahoz-Dominguez, E., Zuluaga-Ortiz, R., Camelo-Guarín, A., & Suarez-Sánchez, M. (2022). Performance evaluation of mechanical engineering degrees using partial minimum squares and data envelopment analysis. International Journal of Innovation and Learning, 32(4), 397–413. https://doi.org/10.1504/IJIL.2022.126635
  • Draganov, T., Kim, J., & Yoon, W. (2023). Increasing retention of underrepresented students in STEM fields at California community colleges: A study of the STEM2 program. Journal of College Student Retention, 25, 152102512211496. https://doi.org/10.1177/15210251221149648
  • Ehrenberg, R. (2004). Econometric studies of higher education. Journal of Econometrics, 121(1-2), 19–37. https://doi.org/10.1016/j.jeconom.2003.10.008
  • El-Moussa, O., Alghazo, R., & Pilotti, M. (2021). Data-driven predictions of academic success among college students in Saudi Arabia. Critical Studies in Teaching and Learning, 9(1), 115–134. https://doi.org/10.14426/cristal.v9i1.316
  • Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Orienta-Konsultit Oy.
  • Espinel, J., Arias, A., & Van Gameren, E. (2019). Evolution of the inequality of educational opportunities from secondary education to university. International Journal of Educational Development, 66, 193–202. https://doi.org/10.1016/j.ijedudev.2018.09.006
  • Falch, T., Iversen, J., Nyhus, O., & Strøm, B. (2022). Quality measures in higher education: Norwegian evidence. Economics of Education Review, 87(1), 102235. https://doi.org/10.1016/j.econedurev.2022.102235
  • Ferrão, M. (2023). Differential effect of university entrance scores on graduates’ performance: the case of degree completion on time in Portugal. Assessment & Evaluation in Higher Education, 48(1), 95–106. https://doi.org/10.1080/02602938.2022.2052799
  • Garcia, L., & Maza, V. (2018). Factors influencing the academic performance in standardized tests of computer science/engineering students in Colombia. The International Journal of Engineering Education, 34(3), 1073–1084.
  • Gelbgiser, D. (2018). College for all, degrees for few: for-profit colleges and socioeconomic differences in degree attainment. Social Forces, 96(4), 1785–1824. https://doi.org/10.1093/sf/soy022
  • Gerber, A., & Green, D. (2012). Field experiments: Design, analysis, and interpretation. W. W. Norton.
  • Gómez, S., Alvarado, K., & Bernal, G. (2020). Women in STEM: Does college boost their performance? Higher Education, 79(5), 849–866. https://doi.org/10.1007/s10734-019-00441-0
  • Hahm, S., & Kluve, J. (2019). Better with Bologna? Tertiary education reform and student outcomes. Education Economics, 27(4), 425–449. https://doi.org/10.1080/09645292.2019.1616280
  • Hanushek, E. A. (1979). Conceptual and empirical issues in the estimation of educational production functions. The Journal of Human Resources, 14(3), 351–388. https://doi.org/10.2307/145575
  • Horn, A., & Lee, G. (2019). Evaluating the accuracy of productivity indicators in performance funding models. Educational Policy, 33(5), 702–733. https://doi.org/10.1177/0895904817719521
  • Hox, J., Moerbeek, M., & Van de Schoot, R. (2017). Multilevel analysis: Techniques and applications. Routledge.
  • Huang, S., Ho, C., Chu, Y., Wu, J., & Yang, Y. (2023). The quantified analysis of the correlation between medical humanities curriculums and medical students’ performance. BMC Medical Education, 23(1), 571. https://doi.org/10.1186/s12909-023-04073-y
  • Kane, T., & Rouse, C. (1995). Labor-market returns to two- and four-year colleges. American Economic Review, 85, 600–614.
  • Kato, T., & Song, Y. (2022). Advising, gender, and performance: Evidence from a university with exogenous adviser–student gender match. Economic Inquiry, 60(1), 121–141. https://doi.org/10.1111/ecin.13023
  • Kim, H., & Lalancette, D. (2013). Literature review on the value-added measurement in higher education. OECD.
  • Kinzie, J. (2019). Taking stock of initiatives to improve learning quality in American higher education through assessment. Higher Education Policy, 32(4), 577–595. https://doi.org/10.1057/s41307-019-00148-y
  • Klein, S., Kuh, G., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher education institutions. Research in Higher Education, 46(3), 251–276. https://doi.org/10.1007/s11162-004-1640-3
  • Koretz, D. (2019). Measuring postsecondary achievement: Lessons from large-scale assessments in the K-12 sector. Higher Education Policy, 32(4), 513–536. https://doi.org/10.1057/s41307-019-00142-4
  • Kugelmass, H., & Ready, D. (2011). Racial/ethnic disparities in collegiate cognitive gains: A multilevel analysis of institutional influences on learning and its equitable distribution. Research in Higher Education, 52(4), 323–348. https://doi.org/10.1007/s11162-010-9200-5
  • Kuh, G., Kinzie, J., Schuh, J., & Whitt, E. (2011). Student success in college: Creating conditions that matter. John Wiley & Sons.
  • Latessa, R., Beaty, N., Royal, K., Colvin, G., Pathman, D., & Heck, J. (2015). Academic outcomes of a community-based longitudinal integrated clerkships program. Medical Teacher, 37(9), 862–867. https://doi.org/10.3109/0142159X.2015.1009020
  • Lim, J., Settlage, D., & Wollscheid, J. (2021). Analyzing knowledge decay and gender differences on end of program assessment measures: Case of a Mid-South University in the USA. The International Journal of Management Education, 19(2), 100465. https://doi.org/10.1016/j.ijme.2021.100465
  • Liu, L. (2011a). Measuring value‐added in higher education: conditions and caveats – results from using the Measure of Academic Proficiency and Progress (MAPP™). Assessment & Evaluation in Higher Education, 36(1), 81–94. https://doi.org/10.1080/02602930903197917
  • Liu, L. (2011b). Value-added assessment in higher education: A comparison of two methods. Higher Education, 61(4), 445–461. https://doi.org/10.1007/s10734-010-9340-8
  • Liu, L., Liu, H., Roohr, K., & McCaffrey, D. (2016). Investigating college learning gain: Exploring a propensity score weighting approach. Journal of Educational Measurement, 53(3), 352–367. https://doi.org/10.1111/jedm.12112
  • Meiselman, A., & Schudde, L. (2022). The impact of corequisite math on community college student outcomes: Evidence from Texas. Education Finance and Policy, 17(4), 719–744. https://doi.org/10.1162/edfp_a_00365
  • Milla, J., Martín, E., & Van Bellegem, S. (2016). Higher education value added using multiple outcomes. Journal of Educational Measurement, 53(3), 368–400. https://doi.org/10.1111/jedm.12114
  • Naven, M., & Whalen, D. (2022). The signaling value of university rankings: Evidence from top 14 law schools. Economics of Education Review, 89(1), 102282. https://doi.org/10.1016/j.econedurev.2022.102282
  • OECD. (2008). Measuring improvements in learning outcomes. Best practices to assess the value-added of schools. OECD.
  • OECD. (2021). Education at a glance 2021: OECD indicators. OECD Publishing.
  • Owusu-Agyeman, Y., & Larbi-Siaw, O. (2018). Measuring students’ learning using a value added approach. Africa Education Review, 15(4), 99–117. https://doi.org/10.1080/18146627.2016.1224582
  • Page, G., Martín, E., Orellana, J., & González, J. (2017). Exploring complete school effectiveness via quantile value added. Journal of the Royal Statistical Society Series A, 180(1), 315–340. https://doi.org/10.1111/rssa.12195
  • Pascarella, E., & Terenzini, P. (2005). How college affects students: A third decade of research. Jossey-Bass.
  • Pedersen, D., & White, F. (2011). Using a value-added approach to assess the sociology major. Teaching Sociology, 39(2), 138–149. https://doi.org/10.1177/0092055X11400437
  • Rodgers, T. (2005). Measuring value added in higher education: Do any of the recent experiences in secondary education in the United Kingdom suggest a way forward? Quality Assurance in Education, 13(2), 95–106. https://doi.org/10.1108/09684880510594355
  • Rodríguez-Revilla, R., & Vallejo-Molina, D. (2022). Valor agregado y las competencias genéricas de los estudiantes de educación superior en Colombia [Value added and the generic competencies of higher education students in Colombia]. Revista Iberoamericana de Educación Superior, 13(36), 44–62. https://doi.org/10.22201/iisue.20072872e.2022.36.1183
  • Shamsuddin, S. (2016). Berkeley or bust? Estimating the causal effect of college selectivity on bachelor’s degree completion. Research in Higher Education, 57(7), 795–822. https://doi.org/10.1007/s11162-016-9408-0
  • Shavelson, R., Domingue, B., Mariño, J., Molina, A., Morales, A., & Wiley, E. (2016). On the practices and challenges of measuring higher education value added: the case of Colombia. Assessment & Evaluation in Higher Education, 41(5), 695–720. https://doi.org/10.1080/02602938.2016.1168772
  • Simpfenderfer, A. (2023). The role of higher education in intergenerational mobility: An exploration using multilevel structural equation modeling. Research in Higher Education, 64, 1–38. https://doi.org/10.1007/s11162-023-09753-8
  • Steedle, J. (2012). Selecting value-added models for postsecondary institutional assessment. Assessment & Evaluation in Higher Education, 37(6), 637–652. https://doi.org/10.1080/02602938.2011.560720
  • Tian, Z., Wei, Y., & Li, F. (2019). Who are better teachers? The effects of tenure-track and part-time faculty on student achievement. China Economic Review, 53, 140–151. https://doi.org/10.1016/j.chieco.2018.08.014
  • Turk, J. (2019). Estimating the impact of developmental education on associate degree completion: A dose–response approach. Research in Higher Education, 60(8), 1090–1112. https://doi.org/10.1007/s11162-019-09549-9
  • Wai, J., & Tran, B. (2022). Student characteristics, institutional factors, and outcomes in higher education and beyond: An analysis of standardized test scores and other factors at the institutional level with school rankings and salary. Journal of Intelligence, 10(2), 22. https://doi.org/10.3390/jintelligence10020022
  • Wang, X., Lee, Y., Zhu, X., & Okur Ozdemir, A. (2022). Exploring the relationship between community college students’ exposure to math contextualization and educational outcomes. Research in Higher Education, 63(2), 309–336. https://doi.org/10.1007/s11162-021-09644-w
  • Wariyo, L., & Asgedom, A. (2021). Promoting effects of abilities while enhancing probability of college-success: A moderation role of higher education. Journal on Efficiency and Responsibility in Education and Science, 14(2), 101–117. https://doi.org/10.7160/eriesj.2021.140204
  • Yunker, J. (2005). The dubious utility of the value-added concept in higher education: The case of accounting. Economics of Education Review, 24(3), 355–367. https://doi.org/10.1016/j.econedurev.2004.06.003