
In search of valid non-cognitive student selection criteria


Abstract

Properly selecting students is one of the core responsibilities of higher education institutions, which is done with selection criteria that predict student success. However, the student selection literature suffers from a dearth of research on non-cognitive selection criteria, which can lead to incorrect admission assessments. By contrast, personnel selection studies focus heavily on non-cognitive selection criteria and can therefore offer insights that improve the student selection literature. We carried out a systematic literature review of both literature strands and looked for ways in which the personnel selection literature could inform the student selection literature. We found that non-cognitive selection criteria are better predictors of success in personnel selection than in student selection, implying that non-cognitive skills are more important for job success. We also identified promising selection criteria from the personnel selection literature that could lead to better assessment of student success during the selection phase: personality tests, conscientiousness, person-organization fit, core-self-evaluations and polychronicity.

Introduction

Recently, the number of student applications, especially for prestigious universities, has ballooned (Karabel Citation2005; State and Hudson Citation2019; CBS Citation2020; d’Hombres and Schnepf Citation2021). This growing stream of applicants, combined with a limited teaching capacity, increases universities’ responsibility to find and implement fair and valid selection criteria, preventing students from being wrongly rejected from a university program (Pitman Citation2016; van Ooijen-van der Linden et al. Citation2017).

There is a large body of literature on the predictive validity of student selection criteria. The most commonly studied selection criteria, grades and standardized tests, measure cognitive skills (Kuncel, Credé, and Thomas Citation2005; Steenman Citation2018), often with positive results (Vulperhorst et al. Citation2018; Nagy and Molontay Citation2021). Cognitive skills refer to an applicant’s ability to acquire, process and utilize knowledge (De Visser et al. Citation2018, 189). Less attention has been paid to selection criteria that measure non-cognitive skills (Thomas, Kuncel, and Credé Citation2007; Eva et al. Citation2009; Niessen and Meijer Citation2017). Such skills include teamwork, empathy, curiosity, confidence and communication (Wood et al. Citation1990). This lack of attention can lead to incorrect student selections.

The lack of insight into non-cognitive selection criteria can be partially mitigated by looking at the personnel selection literature. Employees are often hired based on non-cognitive, social skills, measured by criteria also used in student selection, such as interviews, résumés and motivational letters (Tett and Christiansen Citation2007). The personnel selection literature also looks at criteria currently unused in the student selection literature, like personality tests (Behling Citation1998; Wiersma and Kappe Citation2017). Although students and employees are selected based on different skills, and student success and job performance are measured differently, research on personnel selection could provide valuable insights about student selection for several reasons. First, the personnel selection literature has a rich history of studying selection criteria’s ability to predict future job performance based on non-cognitive skills (Schmidt, Ones, and Hunter Citation1992; Schmidt and Hunter Citation1998). Second, one role of educational programs is to prepare students for their future jobs, in which they have to become successful personnel (Brennan Citation1985; Moore and Morton Citation2017; Clarke Citation2018; Suleman Citation2018). Student success is becoming less dependent on intellectual prowess and cognitive abilities alone and more dependent on other skills highly appreciated in the job market, such as teamwork and communication (Te Wierik, Beishuizen, and van Os Citation2015; Jackson and Bridgstock Citation2019). Students also receive more attention to career guidance and post-educational employment during their time at university (Sun and Yuen Citation2012; Hughes, Law, and Meijers Citation2017). Knowledge from the personnel selection literature could thus improve the practice of student selection, especially with regard to non-cognitive selection criteria.
This paper aims to answer the following research question: to what extent can the personnel selection literature complement the literature on student selection?

We first systematically reviewed the student and personnel selection literatures. Based on this, we identified promising avenues where personnel selection literature could complement student selection literature. We aimed to establish links between these two strands of literature and searched for complementary knowledge to improve our understanding of student selection criteria. We used these insights to formulate a research agenda for the student selection literature. This will allow further selection research to dive into these understudied areas and improve the internal validity of the student selection literature. Relevant knowledge from personnel selection literature could be implemented into selection procedures for university students.

Conceptual background

The success of students and the performance of personnel are different concepts, each with its own dimensions; thus, there are, by definition, differences in how they are measured. Moreover, student success and job performance are far from unidimensional concepts in their own right: both can be, and are, conceptualized and measured in a variety of ways.

Student success

Student success is defined as ‘academic achievement, engagement in educationally purposeful activities, satisfaction, acquisition of desired knowledge, skills, and competencies, persistence, and attainment of educational objectives’ (Kuh et al. Citation2007, 7). The most common association with student success is academic success, which is the achievement of desired learning outcomes (Kuh et al. Citation2011). Academic success is commonly measured using a student’s attained grades or grade-point average (GPA; van der Zanden et al. Citation2018), as well as student retention and degree attainment (Kuh et al. Citation2007; Trapmann et al. Citation2007; Crisp and Cruz Citation2009; York et al. Citation2015). Within studies on student success, there is a strong focus on success in the first year because most students who drop out do so then (Credé and Niehorster Citation2012; Fokkens-Bruinsma et al. Citation2020). A limitation many studies have when measuring academic success is that they only look at the single study program the student entered after application (Jones-White et al. Citation2010). Most studies do not take into account the success of students who transfer mid-curriculum. This means that in the representation of student success, early curriculum success is overrepresented.

However, student success does not consist solely of academic success. Multiple scholars have called for a more expansive view of student success because improving less quantifiable skills of students is a core objective of higher education institutes (York et al. Citation2015; Niessen and Meijer Citation2017; van der Zanden et al. Citation2018; Alyahyan and Düştegör Citation2020). In recent years, notions of student success have expanded to accommodate students’ personal backgrounds, situations and goals, and to acknowledge their development more broadly (Kahu Citation2013). More holistic conceptualizations of student success include their critical thinking abilities and social and emotional well-being (van der Zanden et al. Citation2018). Medical students have a noteworthy position in this regard, as medical programs are the most common context of student selection studies. Within medical selection studies, a holistic view of student selection is relatively prominent, considering that medical students’ success is often evaluated based on their clinical performance (performance in the medical ward), where non-cognitive skills are highly necessary. Expanding the conceptualization of student success is important because students who score well on more traditional, cognitive measurements of success do not necessarily score well on non-cognitive measures (van der Zanden et al. Citation2018). We distinguish academic from student success and consider academic success to be one aspect of student success.

Job performance

Job performance is one of the most important outcomes in work contexts (Ohme and Zacher Citation2015). It relates to an employee’s ability to reach a goal or set of goals within their job, role or organization (Campbell Citation1990). Job performance influences the employee’s salary, promotion and training program decisions, which makes studying its predictors of major relevance for both organizational scholars and practitioners (Ohme and Zacher Citation2015). Just like student success, job performance is a multidimensional construct, as job performance can be conceptualized in various ways depending on the goals and objectives of the organization and individual (Jackson and Frame Citation2018). Jackson and Frame (Citation2018) distinguish three dimensions of job performance: task, contextual and adaptive performance. Task performance involves behaviour that converts resources into goods or services provided by the organization. Contextual performance involves furthering the organization’s goals by positively contributing to its climate and culture (Johnson Citation2001). Adaptive performance relates to the employee’s ability to cope with, react to and support changes in such a way that they contribute to the organization’s goals in times of uncertainty (Griffin, Neal, and Parker Citation2007). When comparing the dimensions of job performance to the dimensions of student success, we argue that there are similarities between contextual job performance and the holistic view on student success because both dimensions require the possession of similar non-cognitive skills, like the ability to work in teams (O’Connell et al. Citation2007). The selection criteria covered in our results will, therefore, mostly focus on contextual performance, as we expect articles studying this kind of performance to yield useful insights for non-cognitive student selection.

Given the multidimensional nature of job performance, it comes as no surprise that it is measured in a wide variety of ways, depending on the type of job and the organization. However, the distinction between how the various dimensions of job performance are measured is less clear than in the student selection literature. For most personnel, job performance is measured through a supervisor rating (Viswesvaran, Ones, and Schmidt Citation1996; Morin and Renaud Citation2009). Other measures include salary growth and promotions. For personnel in the medical sector, job performance can be measured by their clinical performance. For academic personnel, job performance is often measured by their ability to win grants or their citation records (Lehmann, Jackson, and Lautrup Citation2008; Costas, van Leeuwen, and Bordons Citation2010).

Methods

We conducted our literature review following the four-step process of Mayring (Citation2000): material collection, descriptive analysis of the material, selection of the main conceptual categories, and evaluation of the material pertaining to these categories.

Material collection

We compiled a database of relevant articles on student and employee selection via queries in Elsevier’s Scopus database. As the aim was to combine different strands of literature, we initially used two separate queries to identify relevant articles on student selection and personnel selection, which are found in Table 1. The queries were designed to return empirical articles within the domains of student selection and personnel selection literature. Both queries consisted of four terms: the type of applicant, such as a student or employee; the organization the applicant applies to; terms on selection or recruitment; and the studied outcome, namely the performance or hiring of the applicant. Multiple iterations of the queries were tested, and each iteration was assessed for face validity, ensuring the queries returned relevant articles to be analysed. Only articles published in academic journals written in English since 1990 were considered, because of an increase in published articles on student selection from that time onwards. This increase was observed during data collection in Scopus.

Table 1. Search queries used in this study.

During the assessment of query 2, the authors found a relatively small number of empirical studies focusing on the prediction of employee performance during the application process. Many studies were not empirical, and many empirical studies focused on the job success of employees already working at their organization. The authors therefore added a third query in an attempt to gather more empirical studies that predict job performance. This gave us a better overview of the state of the art in this field and increased the quality of our review.

Descriptive analysis and classification

The queries returned 4,754 articles in total, with nine overlapping articles between the search results, leading to 4,745 unique articles. Next, based on title and abstract, the authors classified each article as either relevant or irrelevant for this study. Articles were considered relevant if they studied the predictive validity of a criterion for student or employee success. This step brought the number of articles down to 834. Of these articles, 575 (69%) were from the literature strand on student selection; the remaining 259 articles (31%) were on personnel selection. Note that these numbers do not match the per-query numbers in Table 1, as articles on both student selection and personnel selection were found through all three queries.
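The screening arithmetic above can be reproduced in a short script (the counts are those reported in the text; the variable names are our own):

```python
# Reproduce the screening arithmetic reported in the text.
total_hits = 4754       # articles returned by the three queries combined
overlap = 9             # articles returned by more than one query
unique_articles = total_hits - overlap          # unique articles screened

relevant = 834          # articles kept after title/abstract screening
student = 575           # relevant articles on student selection
personnel = relevant - student                  # articles on personnel selection

student_share = round(100 * student / relevant)     # share on student selection
personnel_share = round(100 * personnel / relevant) # share on personnel selection
print(unique_articles, personnel, student_share, personnel_share)  # → 4745 259 69 31
```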

Descriptive results

Figure 1 gives an overview of articles published per year: the total number of articles and the number of articles in the fields of personnel and student selection.

Figure 1. Number of articles in dataset over time.


Figure 1 highlights the dominance of research on student selection, particularly the selection of medical students (see Table 2). This is because medical studies are among the most popular and selective programs. However, this does leave knowledge gaps in the selection of non-medical students: selection criteria that are common and valid for the selection of medical students may not be valid for other students. The overrepresentation of medical students causes bias, making the results less generalizable to the entire student population.

Table 2. Most frequently used journals in the dataset for studies on student selection (2a: top) and personnel selection (2b: bottom).

Selection of main conceptual categories

Finally, all remaining articles were coded. For each article, we coded whether it focused on the selection of students or personnel. The jobs in our sample are quite diverse; examples include personnel at firms, governments, or non-governmental organisations, Ph.D. candidates (classified as personnel because they are paid), medical residents, postdocs, and professors.

For all articles in the final sample, the authors coded the selection criteria studied and the direction of their relationship with performance (if any). In line with other systematic literature reviews (Pittaway and Cope Citation2007; Thoemmes and Kim Citation2011; Connolly et al. Citation2012; Berne et al. Citation2013; Perkmann et al. Citation2013), we categorically coded the effect of each selection criterion as positive, negative or no effect (when the study found no clear, consistent results). When articles studied multiple selection criteria, these were coded separately; therefore, the total number of effects in our analysis differs from the number of articles. Only variables found at least twice in the sample were used in our final analysis, so as to increase the generalizability and internal validity of the results. This cut-off resulted in the 22 selection criteria in Table 3, which explains what these variables entail and how they are commonly measured. For each variable, we looked at the occurrence of the different codes. Some articles have more nuanced outcomes that are not easily captured by categorical coding; for these, we made notes, which we qualitatively assessed and discuss in the results.
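The coding procedure above amounts to a per-criterion tally with a cut-off of two occurrences. A minimal sketch, using hypothetical coded records rather than our actual data:

```python
from collections import Counter, defaultdict

# Hypothetical (criterion, effect) records: one entry per coded effect,
# so an article studying two criteria contributes two records.
records = [
    ("grades", "positive"), ("grades", "positive"), ("grades", "no effect"),
    ("interviews", "positive"), ("interviews", "negative"),
    ("polychronicity", "positive"),  # studied only once -> dropped by the cut-off
]

tallies = defaultdict(Counter)
for criterion, effect in records:
    tallies[criterion][effect] += 1

# Keep only criteria studied at least twice, as in our analysis.
kept = {c: t for c, t in tallies.items() if sum(t.values()) >= 2}
print(sorted(kept))  # → ['grades', 'interviews']
```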

Table 3. Explanations of selection criteria.

Results

We begin by discussing the predictive validity of each of the 22 selection criteria for student success and job performance. These criteria assess candidates based on various skills and characteristics, which are explained in Table 3. We found 714 positive effects, 41 negative effects, and 174 cases with no effect. This might indicate a positivity bias in the data, as there is a widely reported publication bias in favour of studies that report statistically significant and/or positive results, meaning that studies with negative or insignificant results are less often published (Begg and Mazumdar Citation1994; Thornton and Lee Citation2000; Duval and Tweedie Citation2000). We find especially few studies that report negative results. The 22 criteria were divided into criteria that test cognitive skills and criteria that test non-cognitive skills. For each criterion, Table 4 provides four columns of information. The first column lists the total number of studies conducted on a particular criterion for both students and personnel. The second column lists the absolute number of studies that reported a positive effect of the criterion on performance, together with the percentage of positive cases relative to the total number of times the criterion was studied. The same information is provided in the last two columns for studies that found a negative effect or no effect.
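As an illustration of how such percentage columns are derived, the overall effect counts reported above translate into shares as follows (variable names are ours):

```python
# Overall effect counts reported in the text.
effects = {"positive": 714, "negative": 41, "no effect": 174}
total = sum(effects.values())  # total number of coded effects

# Share of each outcome, rounded to one decimal, as in the per-criterion columns.
shares = {k: round(100 * v / total, 1) for k, v in effects.items()}
print(total, shares)  # → 929 {'positive': 76.9, 'negative': 4.4, 'no effect': 18.7}
```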

Table 4. Occurrences of selection criteria on performance.

Predicting performance

Cognitive criteria

Cognitive criteria are studied 503 times in our dataset: 461 of these effects were measured in student selection and 42 in personnel selection. In Table 4, grades and standardized tests, both cognitive criteria, are the most commonly studied selection criteria, with 229 and 228 occurrences, respectively, the majority of which come from the student selection literature. Grades have a positive predictive validity in 87% of studies on student selection, the highest of the cognitive criteria; in personnel selection, this share is lower, at 75%. That grades are more often a better predictor of student performance than standardized tests is further supported by numerous studies on grades and standardized tests, which display the positive effects of both but confirm grades as generally having a higher positive predictive validity (Rhodes, Bullough, and Fulton Citation1994; Hoffman and Lowitzki Citation2005; Cerdeira et al. Citation2018). One explanation for this is that grades reflect students’ self-regulatory competencies better than standardized tests, and these are needed to be a successful student (Galla et al. Citation2019). Another explanation is that while standardized tests form a snapshot of an applicant’s cognitive skills at the moment of testing, grades are acquired over a longer period and provide more consistent insight into the applicant’s cognitive skills.

There are also several important nuances concerning the predictive validity of grades. Grades from previous education lose their predictive validity in later years of study (Sladek et al. Citation2016; Vulperhorst et al. Citation2018). This supports the notion that high school and the first year of the bachelor’s program require mostly lower-order knowledge, whereas the third year of the bachelor’s program requires higher-order knowledge (Steenman Citation2018). This also explains why grades are less accurate in predicting job performance than in predicting student success. When looking at grades at the level of individual courses, we see that grades for mathematics are often found to be a better predictor than average grades for other subjects (Botha, Owen, and McCrindle Citation2003; Anderton, Hine, and Joyce Citation2017; Conn et al. Citation2018).

When looking at the predictive validity of standardized tests, the results show that the predictive validity in student and personnel selection is identical (both 74%). An important note is that standardized tests often comprise various elements, for example, a writing element, a reading element and a logical reasoning element. Several studies have found that the predictive validity differs between these elements (Glaser et al. Citation1992; Goodyear and Lampe Citation2004; McManus et al. Citation2011). The predictive validity of the entire test can therefore differ immensely from the predictive validity of the individual components, with logical reasoning often being the best predictor of performance. Finally, while grades and standardized tests are accurate predictors of student success, a combination of both is often found to be an even more accurate predictor (Cope et al. Citation2001; Madigan Citation2006).

Another predictor of cognitive skills is mastery of a language, almost always English. This is studied far less and has a lower predictive validity for student success than standardized tests and grades, at 71%. In personnel selection, language was only studied twice: one study reported a positive effect and the other a negative effect. The fourth cognitive predictor we found was admission tests. For students, admission tests have a lower predictive validity than regular standardized tests (64% vs. 74%). This is surprising, since admission tests test for study-related knowledge, which should make them a more valid selection criterion because of their clear alignment with the contents of the study program (Steenman Citation2018). For personnel selection, we found only one study on admission tests, which reported a positive effect on job performance. The final cognitive criterion is general mental ability (GMA), with eight occurrences, all of them in personnel selection and all reporting a positive effect on job performance.

Non-cognitive criteria

In total, non-cognitive criteria are studied 426 times in our dataset: 247 of these effects were studied in student selection and 179 in personnel selection. This means that studies on non-cognitive criteria form a larger share of the state-of-the-art knowledge on personnel selection than on student selection. The results also show that the predictive validity of all non-cognitive criteria is higher in personnel selection than in student selection.

Procedural non-cognitive criteria

Interviews were the most common non-cognitive selection criterion, with 80 occurrences. Interviews have a positive effect on performance in 63% of studies on student selection and 79% of studies on personnel selection. Therefore, interviews seem to be a more suitable selection criterion in personnel selection. This could be attributed to non-cognitive and interpersonal skills being more important in a professional environment. While interviews might be a less effective predictor of student success, students do need the skills tested in interviews during their later careers. However, while interviews generally have a positive effect, unstructured one-on-one interviews have a low predictive validity for both students and personnel. The majority of studies reporting a positive effect studied specific types of interviews, such as the multiple mini-interview or the structured situational interview (Campion, Campion, and Hudson Citation1994; Reiter et al. Citation2007; Eva et al. Citation2012; Husbands and Dowell Citation2013).

Special admission procedures are found exclusively in articles on student selection. These programs positively affect performance in 56% of the articles, negatively in 8%, and show no effect in 36%. This means that if higher education institutes and human resource departments want to increase diversity in their organization using a special admission procedure, there is a substantial risk that they end up selecting students with lower performance.

Recommendation letters receive a lot less attention in academic literature, as they are only studied nine times. This is somewhat surprising, given that universities consider them an important selection criterion (Steenman Citation2018). In terms of predictive validity, the results are mixed. Recommendation letters are seen three times in student selection and six times in personnel selection. In both student and personnel selections, the percentage of positive cases was 67%.

Demographic criteria

Gender is the second most studied non-cognitive criterion. Women tend to perform better in both student and personnel selection. However, the percentage of positive studies in student selection is much lower than in personnel selection (66% versus 100%). In the field of student selection, we also found seven studies (15%) that report men as performing better and an additional nine studies (19%) that report no significant effect. A student’s field of study does not have a consistent impact on the success of either female or male students.

Students from underprivileged backgrounds were studied 34 times, whereas personnel from underprivileged backgrounds were not studied. Of these 34 studies, 38% reported a positive effect on performance, 35% reported a negative effect, and 27% saw no effect. This means that there is little consensus about the effect of the background of the applicant on performance.

Finally, with regard to the age of applicants, we see that the older applicants performed much better in both student and personnel selections.

Personality-related criteria

Personality tests were studied a total of 48 times, 40 of which were in the field of personnel selection. This makes it the most commonly studied non-cognitive criterion in the field of personnel selection. Ninety percent of studies in personnel selection report a positive effect on performance, compared to 62.5% in student selection, a sizeable difference. The most commonly used personality test, the Big Five, consists of conscientiousness, extraversion, openness, agreeableness and neuroticism. As with standardized tests, the predictive validity of the entire test can differ greatly from the individual dimensions’ predictive validity. For these tests, conscientiousness was often a very good predictor of student success, and neuroticism was often a negative predictor of student success. Conscientiousness was also studied as a separate criterion in 13 studies, 12 of them in the personnel selection literature. All of these studies report a positive predictive validity. Motivation is studied 4 times in terms of student selection and 10 times in personnel selection; the predictive validity is 50% and 100%, respectively. However, given the small number of cases, more work is necessary. We end with a selection criterion that we find exclusively in personnel selection literature, and where all studies report a positive predictive validity: core-self-evaluation, with five occurrences.

Experience of the applicant

Previous experience through an internship, a previous job or a previous degree is generally a good predictor of performance. This is especially true in personnel selection, where this criterion has a positive predictive validity in 89% of articles; in student selection, this percentage is 76%. This could be attributed to students (especially undergraduates) requiring less specialized knowledge, which makes previous experience less important. Person-organization fit (PO-fit) is also a positive predictor of performance, especially in personnel selection, where it has a 100% positive predictive validity. A notable limitation of the robustness of this variable is that both fit and observed employee performance are often expressed through supervisor ratings, which carry a risk of bias.

Skills and capabilities

Finally, we discuss the non-cognitive selection criteria that measure the applicant’s skills and capabilities. These include psychomotor skills, non-cognitive skills, emotional intelligence (EI), critical thinking and polychronicity.

Psychomotor skills are measured 17 times and are often used as a selection criterion for medical students and personnel. Of all studies on student selection, 55% report a positive effect on performance, compared to 86% in personnel selection, making this yet another non-cognitive criterion with a higher predictive validity in personnel than in student selection. Generic non-cognitive skills were studied 14 times in student selection, where they have a positive effect on performance in 93% of studies. In personnel selection, generic non-cognitive skills were only studied twice, but both studies reported a positive effect on performance. Emotional intelligence was studied four times in student selection, yielding one positive study, one negative study and two studies that reported no effect. In personnel selection, EI was studied 24 times, with 23 studies (96%) reporting a positive effect on performance. The next criterion, critical thinking, has a predictive validity of 100% in personnel selection; however, this is based on a single study. In student selection, we found eight studies, 75% of which reported a positive effect on performance. Polychronicity was studied four times in our dataset; all of these were in personnel selection and reported a positive effect on job performance.

Criteria that can improve student selection

What insights can the personnel selection literature add to the student selection literature? To draw meaningful lessons, a selection criterion must be scarcely studied in the student selection literature, often studied in the personnel selection literature, and have a positive predictive validity. Five criteria fit this description: personality tests, conscientiousness, PO-fit, core-self-evaluation and polychronicity. GMA may also fit, but we expect this criterion to have little added value over other cognitive criteria, as they both measure the applicant’s cognitive abilities, and cognitive criteria have already been widely studied in the student selection literature.

Personality tests could help universities select students who possess the traits needed in the educational program of their choice or in their future careers. For some students, conscientiousness could be a highly necessary personality trait; for others, extraversion might be essential. However, this requires a careful analysis by universities of which personality traits are needed to succeed in the study program; otherwise, the risk of wrongful implementation increases. Furthermore, personality tests can be subject to fraud and faking (Connelly and Ones Citation2010): applicants might know what the desired outcomes of such tests are and answer accordingly. Many personality tests carry this risk because they allow applicants to rate their own personality traits, which can lead to socially desirable answering and a false representation of an applicant’s personality. Several meta-analyses have found that observer ratings of personality traits therefore have higher predictive validity than self-reported scores (Connelly and Ones Citation2010; Oh, Wang, and Mount Citation2011). However, forming a reliable and truthful observer rating of a student requires time and intense personal contact. This strengthens the case for careful implementation of personality tests, should universities decide to use them. We did find strong evidence that conscientiousness, when used as a separate criterion, is an effective predictor of job performance. It may prove useful for universities to expand the possibilities for selecting students partly on their conscientiousness, without submitting students to an entire personality test.

The insights from the personnel selection literature on PO-fit could be a useful addition to the field of student selection because this criterion has not yet been studied among students in our dataset and has a 100% positive score among personnel selection studies. Prior research has found that students flourish in academic environments that match their personality (Rocconi, Liu, and Pike Citation2020). PO-fit for students can be tested by having applicants participate in an 'onboarding' day to ascertain whether they fit in well with the organization and whether the organization matches their preferences. Onboarding trials have already been conducted at universities, often with positive results (Niessen, Meijer, and Tendeiro Citation2016).

Finally, core-self-evaluation remains a potentially fruitful selection criterion for students. For this criterion to work, universities must find ways of allowing students to critically reflect on their self-efficacy, self-esteem and locus of control. This could be done in structured interviews, but written documents such as motivation letters can also be used for this purpose. A noteworthy challenge is that self-esteem and self-efficacy are far from static. Scholars have argued that these concepts can change rapidly, perhaps even over the course of a few minutes (Judge and Bono Citation2001), because external events, such as peer feedback or job rewards, profoundly impact the outcome of a core-self-evaluation. Despite these flaws, core-self-evaluations measured at a given moment are still positive predictors of job performance, making them at least interesting to include in the student selection process. An explanation for the absence of these criteria in the student selection literature could be that the non-cognitive skills they test are considered more critical for personnel. However, reflective skills and multi-tasking are also needed by successful students (Taub et al. Citation2014).

Research agenda

Going beyond these promising non-cognitive criteria, we propose further points for a research agenda on non-cognitive student selection. There is a scarcity of studies on the predictive validity of motivation letters and personal statements in both student and personnel selection. Given that applicants to both jobs and universities are often required to write such a letter, more knowledge on the predictive validity of these letters is needed. Current research on motivation letters often measures letter quality through observer ratings (Salvatori Citation2001), but the inter-rater reliability of these ratings is often low. For future research, we suggest using methodologies from the computational sciences, such as text mining and natural language processing (Blei, Ng, and Jordan Citation2003; Chowdhury Citation2005). Such measures can yield a more robust depiction of the quality of motivation letters, as shown by Pennebaker et al. (Citation2014). Structuring motivation letters could also help: several empirical studies show that structured interviews have higher predictive validity than unstructured ones (Reiter et al. Citation2007), and the same structuring logic may improve the predictive validity of motivation letters.
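To illustrate the kind of computational measures we have in mind, the sketch below shows (a) how inter-rater reliability of letter ratings can be quantified with Cohen's kappa and (b) two simple text-mining features of a motivation letter (word count, first-person pronoun rate, in the spirit of the word-category approach of Pennebaker et al.). This is a minimal sketch: the rating data, letter text and feature choices are hypothetical examples, not taken from the studies cited above.

```python
import re
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement by chance, from each rater's marginal distribution.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

def lexical_features(letter):
    """Very simple text-mining features of a motivation letter."""
    tokens = re.findall(r"[a-z']+", letter.lower())
    first_person = {"i", "me", "my", "mine"}
    n = len(tokens)
    return {
        "word_count": n,
        "type_token_ratio": len(set(tokens)) / n if n else 0.0,
        "first_person_rate": sum(t in first_person for t in tokens) / n if n else 0.0,
    }

# Hypothetical quality ratings (scale 1-3) of six letters by two raters.
kappa = cohen_kappa([3, 2, 3, 1, 2, 3], [3, 2, 2, 1, 2, 3])  # ≈ 0.74

features = lexical_features("I believe my curiosity drives me.")
```

In practice one would feed such features, alongside observer ratings, into a validity study; libraries such as scikit-learn offer readier-made versions of both steps (e.g. `cohen_kappa_score` and TF-IDF vectorizers).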

For some selection criteria, we found only a relatively small number of empirical articles. By making explicit which selection criteria are understudied, we hope to encourage scholars to fill the remaining knowledge gaps. Our results also show that the student selection literature is dominated by research into undergraduate medical students. This is problematic because these students require different knowledge from students in other fields and are selected using different selection criteria. We therefore suggest more research into graduate students and into students in other fields.

Discussion and conclusions

Limitations

One limitation of this study is the uneven distribution of empirical studies across the two literature strands. Studies on student selection are more common in our dataset than studies on personnel selection, and there are several possible explanations for this. One is that universities are under more intense scrutiny to ensure that their selection is evidence-based. Another is that empirical studies on selection are easier to execute for students: data are easily accessible and there are many ready-to-use success measures, whereas this is more complex for personnel.

Assessing promising selection criteria and research agenda

This literature review answers the following research question: to what extent can the personnel selection literature complement the state-of-the-art literature on student selection? We expected the added value of the personnel selection literature to lie in its knowledge on selecting based on non-cognitive criteria, and we find that personality tests, conscientiousness, PO-fit, core-self-evaluation and polychronicity are indeed promising non-cognitive criteria for student selection because of their high predictive validity in personnel selection. Implementing these criteria mitigates the scarcity of well-studied non-cognitive selection criteria in student selection. Including them can lead to a more accurate assessment of students' non-cognitive skills, which are becoming more important in higher education. The overall quality of selection improves when non-cognitive skills are better assessed during selection, leading to higher student success. For this to happen, however, these selection criteria must be implemented carefully, because they are not without risks or drawbacks. An important facet of valid implementation is that higher education institutions carefully assess which skills and knowledge an applicant needs to be successful in the study program, and that these skills are measured accurately by the selection criterion. In other words, there needs to be alignment between the knowledge and skills needed in the study program and the knowledge and skills measured by the criterion (Steenman Citation2018).

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Timon de Boer


References