Educational Research and Evaluation
An International Journal on Theory and Practice
Volume 29, 2024 - Issue 3-4

The influence of English language proficiency test scores on the academic success of ESL undergraduate students

Pages 111-129 | Received 27 Jun 2023, Accepted 31 Jan 2024, Published online: 09 Feb 2024

ABSTRACT

This study considers language proficiency test scores achieved by ESL students who subsequently entered a New Zealand university at the undergraduate level. Scores from IELTS and the university’s in-house English Proficiency Test are analysed to determine the predictive ability of overall scores on these two tests and scores for each of the four language skills on academic success. The results indicated that language proficiency had a significant effect on undergraduate academic achievement. Furthermore, analyses of the predictive ability of scores for listening, speaking, and reading were remarkably similar between the tests, with non-significant results of similar magnitude for each skill. On the other hand, IELTS writing scores demonstrated a non-significant negative effect on academic achievement, while EPT writing scores were a strong and significant predictor of academic achievement. The results suggest that universities may benefit from measuring ESL students’ academic writing proficiency in more authentic university-level writing tasks.

Introduction

A key factor influencing the academic success of ESL international students is their English language proficiency. A number of previous studies have considered appropriate cut-off points for university acceptance (e.g., Ingram & Bayliss, Citation2007; Johnson, Citation2008; Knoch et al., Citation2014), with a range of results. Other studies have considered individual scores for listening, speaking, reading, and writing and how they predict academic success (Dang & Dang, Citation2023; Dooey & Oliver, Citation2002; Humphreys et al., Citation2012; Kerstjens & Nery, Citation2000; Neumann et al., Citation2019; Oliver et al., Citation2012; Woodrow, Citation2006). Different results have been found, which may be due to the range of educational practices employed in different geographical, cultural, and educational contexts. Such differences mean that different tests and cut-off scores will be appropriate in different situations.

This study focusses on international students from non-Anglophone countries, who were required to provide evidence of English language proficiency at the time of enrolment. The study considers scores provided to the university as evidence of sufficient English language proficiency by international ESL students who subsequently entered a New Zealand university at the undergraduate level. International students in this context originate predominantly from Asia. Scores from two tests are analysed: International English Language Testing System (IELTS, Citation2018) and the university’s in-house English Proficiency Test (EPT). In addition to overall test scores, individual scores for reading, writing, listening and speaking are also analysed to determine the predictive ability of each language skill on academic success, measured by cumulative Grade Point Average (GPA) throughout the students’ undergraduate studies.

In a previous study, we investigated the overall academic success of students who entered with different kinds of English language proficiency evidence and found differences between the academic performance of different groups. In this study, we look not at how well each group of students performed, but rather at the relationship between their English language proficiency test scores and their performance after admission. That is, we examine how well their English language proficiency test scores predicted their performance after admission to university. The results are intended to clarify the relationship between specific language skills and academic success. The results are expected to be of interest to all those involved in the preparation of ESL students for study in Anglophone contexts, especially those preparing students for study in New Zealand universities. In addition, the results are likely to be informative for university administrators in Anglophone contexts who are involved in determining admissions tests and criteria for ESL undergraduate students. The research questions which guided this study are:

  1. To what extent are IELTS and the EPT capable of predicting academic success, as measured by GPA throughout students’ undergraduate studies?

  2. Which language skills best predict academic success throughout students’ undergraduate studies?

Literature review

Clearly, there are a range of factors determining the probability of a student being academically successful. The level of language proficiency is just one of these factors, albeit one which is particularly pertinent for students who are studying an additional language. It would be unethical for educational institutions to accept students who have no chance of succeeding in the programme of study into which they are accepted. Thus, it is important for institutions to ascertain the appropriate skills and level of competence required for successful study in that context. As IELTS (Citation2015, p. 13) states, “It is vital that … institutions have a clear understanding of the contribution that IELTS scores can make in determining an applicant’s suitability for entry, including the relative importance of scores in the four skills for particular academic courses.” This demonstrates both the idea that educational offerings in different contexts are likely to have differing language proficiency requirements and the idea that in different contexts, a different set of academic language skills may be required.

Scores required for undergraduate study

Somewhat conflicting evidence is available regarding which tests are appropriate for measuring English language proficiency for university study. Both Hill et al. (Citation1999) and Woodrow (Citation2006), in Australia, found that IELTS scores significantly correlated with academic success after matriculation, whereas TOEFL scores did not. Oliver et al. (Citation2012) found that students who entered through either IELTS or TOEFL significantly outperformed those who entered after taking the in-house English proficiency test at the university. Phakiti (Citation2008), on the other hand, suggests a hybrid approach in which large international English proficiency tests are used for the initial screening of students and incoming students are then given an in-house test for more detailed measurement of the specific language skills required in the context.

When it comes to the selection of appropriate cut-off scores for admission to universities, there are clear guidelines published by IELTS (Citation2018) for different disciplines. They state that an overall score of 7.5 is required for success in more linguistically demanding courses, and 7.0 is required for less linguistically demanding courses. An overall score of 0.5 less than this is “probably acceptable”, whereas students with scores of 6.5 and 6.0 respectively are likely to require further language learning. IELTS (Citation2018) does not specify whether such language learning should be completed prior to enrolment, or in the form of sheltered instruction.

The cut-off scores have also been corroborated through research. Ingram and Bayliss (Citation2007) and Johnson (Citation2008) found that students with overall IELTS scores of less than 7.0 required support to succeed in linguistically demanding studies, while Knoch et al. (Citation2014) made an even stronger recommendation that students with scores up to and including 7.0 could still benefit from sheltered instruction. However, the extent to which institutions follow IELTS (Citation2018) recommendations varies considerably.

Most universities in Australia require an overall IELTS score of 6.5, while most in New Zealand require just 6.0 (Read, Citation2015). Similarly, Dooey and Oliver (Citation2002) report that ESL students need an overall score of 6.0 to enter Curtin University, whereas all other public universities in Western Australia require 6.5. Kerstjens and Nery (Citation2000) state that students entering the Faculty of Business at RMIT University (in Australia) require an overall score of 6.5. The norm at universities in Australasia is for students to be fully admitted into undergraduate study, without a requirement of additional language training after admission. On the other hand, Carnegie Mellon University, in the United States, requires a minimum score of 7.5 in order to enter without undergoing additional language instruction, in line with the recommendations of IELTS (Ginther & Yan, Citation2017).

In the process of selecting such cut-off scores, research is necessary to investigate the relationship between achieving different scores on each test and the subsequent level of academic achievement that can be expected. Scores on IELTS have usually been found to have a positive, but weak relationship with subsequent academic success. Indeed, Davies (Citation2008) claims that it is unlikely for any test to achieve more than a medium effect size when measuring such relationships. Phakiti (Citation2008) found that differences in overall IELTS scores accounted for just 7% of the variance in students’ academic achievement, although the relationship was statistically significant. Feast (Citation2002), Johnson et al. (Citation2017) and Woodrow (Citation2006) found small but significant relationships between IELTS overall scores and GPA, while Dang and Dang (Citation2023), Hill et al. (Citation1999), Schoepp (Citation2018) and Yen and Kuzma (Citation2009) found moderate relationships. On the other hand, Dooey and Oliver (Citation2002) found that most of the students who failed to succeed academically had high IELTS scores compared to those who succeeded. Although these results are somewhat mixed, as suggested by Davies (Citation2008), the overall trend is for a positive, but relatively weak, relationship between the two types of measures.

Overall, this literature suggests that IELTS is a viable language proficiency test to measure academic language proficiency for undergraduate study, although different scores are likely to be sufficient in different contexts. Moreover, although language proficiency plays a role in predicting academic achievement after admission, it is unlikely to have a large effect.

Relationships between specific language skills and achievement

The largest number of studies investigating relationships between specific language skills and subsequent academic achievement have been conducted in Australia. Of these, the majority have found that reading is the most important language skill in predicting academic success. Dooey and Oliver (Citation2002) as well as Oliver et al. (Citation2012) conducted studies investigating the performance of students in multiple disciplines in contexts where the minimum score required for admission was 6.0. Indeed, Oliver et al. (Citation2012) found that not only was reading the only academic language skill to correlate with academic success in subsequent studies, but that even IELTS overall scores did not correlate significantly with subsequent success. Another study in Australia investigated contexts in which a minimum score of 6.5 was required for admission (Kerstjens & Nery, Citation2000), and found that reading was the only language skill to significantly predict academic success.

In addition, one study conducted outside of Australia also found reading to be the most significant language skill for academic achievement. Neumann et al. (Citation2019) correlated IELTS overall scores and individual skill subsection scores with grades achieved in two courses at a university in Canada. They also found reading to be the only language skill that correlated with grades in one course and to have the strongest correlation with grades in the other course (speaking, listening, and overall scores were also significantly correlated, but not writing).

Another study investigated the relationships between IELTS subsection scores and academic achievement during the first year of undergraduate study at an Australian university (Humphreys et al., Citation2012), and found that both receptive language skills were significantly correlated with achievement, whereas the productive skills were not. However, beyond the first year, no significant correlations were found.

Two studies conducted outside of Australia both found writing skills to be a significant indicator of future academic success. Li et al. (Citation2010) found not only that writing skill was a significant predictor of academic performance, but also that it was a stronger predictor than overall language proficiency. Harrington and Roche (Citation2014) found that both writing and vocabulary were significant predictors of academic performance, whereas reading was not. In line with Harrington and Roche’s (Citation2014) findings, in Australia, Woodrow (Citation2006) focussed on just one discipline in which students were required to achieve an overall score of 6.5 for admission and found that speaking, listening, and writing skills were all significantly correlated with academic performance, whereas reading was not.

Generally, the results above suggest that reading proficiency, and to a lesser extent writing proficiency, are important if students are to be academically successful. A tentative conclusion is that, in the Australian context at least, reading and writing are the key language skills students need to succeed in university study.

However, some studies have discussed specific reasons why students’ performance in subsections of English proficiency tests may be more or less predictive of their subsequent academic performance. Moore and Morton (Citation2005) conducted a comparison of IELTS writing subsection tasks and writing tasks required during university study. They found that the writing tasks in IELTS were markedly different from those required for university study, having more in common with public non-academic writing than academic writing. Deakin (Citation1997) and Green (Citation2007) also both found that students who had met institutional requirements in terms of their IELTS writing score often struggled with authentic university writing tasks, which tend to require longer and more complex written texts. It is suggested by Green (Citation2007) and corroborated by Johnson and Tweedie (Citation2021) that English for Academic Purposes (EAP) courses are a better method of preparation for university study than simply studying towards and taking the IELTS test. Specifically, one of the advantages of EAP mentioned by Green is practice in writing longer texts, and texts which use reading materials as sources of information. Thus, IELTS writing subsection scores may fail to predict academic success not because writing is unimportant in educational contexts, but because the writing subsection of IELTS does not measure the kinds of writing skills required in university study.

Similarly, Ginther and Yan (Citation2017) suggest that some students may receive scores on multiple-choice subsections of tests that are not representative of their actual language proficiency because of excessive test preparation. Such test preparation may take priority over activities that would be useful in developing productive language abilities. Since the reading subsection of the IELTS test includes many selected-response questions, this may affect the results of the studies discussed above. Overall, it seems clear that reading and writing are both important in predicting the academic achievement of ESL students. However, the relative importance of these skills remains somewhat opaque.

Duration of language proficiency effects

Many studies have considered readiness for university study by looking at students’ academic achievement in the first semester or first year of study. For example, Dang and Dang (Citation2023), Hill et al. (Citation1999), Kerstjens and Nery (Citation2000) and Phakiti (Citation2008) only considered students’ achievement in their first semester of study. In addition, Dooey and Oliver (Citation2002), Ginther and Yan (Citation2017), Harrington and Roche (Citation2014), Humphreys et al. (Citation2012), Neumann et al. (Citation2019) and Yen and Kuzma (Citation2009) only considered students’ achievement in their first year of study. Other studies have investigated students’ success over longer periods. For example, Schoepp (Citation2018) looked at students’ success in the first three semesters of study and Feast (Citation2002) looked at students’ success over their first five semesters of study.

However, there is evidence that ESL students experience language-related difficulties beyond the first year of study. Kamaşak et al. (Citation2021) presented self-reported challenges faced by ESL students in an English-medium degree programme in Turkey and found that students reported more reading-related challenges at the second-year level and at the fourth-year level than they did at the first-year level. This makes sense since the level of academic content increases as students progress through degree programmes.

While studying at higher undergraduate levels may pose more challenges than at the first-year level, several studies have found that L2 students’ language proficiency does not increase as a result of studying in English. This has been found both in Anglophone contexts and in English-medium instructional contexts in non-Anglophone countries. Knoch et al. (Citation2014) found that students’ language fluency increased after one year of study in an Anglophone university, but their accuracy, complexity and overall language quality did not improve. If academic challenge increases as students progress through their programmes and there is no clear evidence of language proficiency increasing, this suggests that studies considering the appropriate language proficiency level should look beyond the first semester or first year of study. Clearly, the aim of admitting students is for them to complete their studies and achieve a qualification. Johnson and Tweedie’s (Citation2021) study supports this idea. They looked at students’ GPA in their first semester and final semester of study and found that English-language proficiency measures taken before admission had stronger correlations with final semester GPA than with first semester GPA.

Overall, institutions in different contexts employ different cut-off scores for admission into undergraduate degree programmes, and different cut-off scores are likely to be effective in different contexts, due to variations in academic skills requirements and curriculum. The literature suggests that reading and writing are the more important skills to focus on in most undergraduate contexts in attempts to predict the likelihood of academic success. Moreover, measurements of language proficiency appear to be predictive of academic performance throughout studies towards an undergraduate degree, not only in the first year of study. These findings suggest that more research is needed across a range of contexts to identify which language proficiency measures, which language skills, and which score levels effectively predict academic success across the undergraduate curriculum.

Research methods

This study is part of a larger study of international students at this university. Exploratory analysis of the larger dataset revealed that there was no direct correlation between the groups of students who achieved well in their first semester of study and those who achieved well over the longer term. On the contrary, some groups achieved well at first, but that success was not sustained over the long term. Other groups did not perform so well originally but exhibited more robust achievement over the long term. Therefore, in the current study, we decided that looking at students’ achievement throughout their undergraduate studies would be more illuminating than only considering their first semester or first year of study.

The purpose of this study was to determine the relative predictive ability of overall IELTS scores and overall EPT scores on subsequent academic success, measured by cumulative GPA throughout students’ undergraduate studies. In addition, it aimed to further research efforts determining the relative predictive ability of each language skill (listening, speaking, reading, and writing). Finally, the research is intended to compare the performance of these two tests: one of which is a large and widely used English language proficiency test, while the other is an in-house EAP test.

Context

This research was conducted at a large university in New Zealand, with over 20,000 students and 14 undergraduate degree programmes. Approximately 11% of the entire student population was made up of full fee-paying international students at the time of data collection. However, there are relatively fewer international students at the undergraduate level (around 7% at the time of data collection). To gain admission to undergraduate programmes at the university, international students are required to demonstrate evidence of sufficient English language proficiency to study in English medium. A range of methods of demonstrating English language proficiency are accepted by the university. Those accepted at the time this study was conducted can be seen in the Appendix. However, IELTS and the in-house English proficiency test (EPT) will be the focus of this research.

IELTS

IELTS measures levels of language proficiency in the four skills (listening, speaking, reading, and writing) on a scale from non-user (band score 1) to expert (band score 9). The university where this study was conducted requires an IELTS (Academic) overall band score of 6.0 with no sub-score below 5.5 for undergraduate admission.

In the IELTS Academic test, each language skill is assessed separately. The Listening section includes four tasks that aim to evaluate test-takers’ ability to understand the information presented as well as the speakers’ opinions and purpose. The Reading section (40 questions) tests a wide range of reading skills, from reading for different levels of information to understanding logical arguments and recognising writers’ opinions and purpose. The Writing section includes two tasks, one involving describing and explaining a graph or diagram and one involving responding to a point of view, argument, or problem. The Speaking section is a three-part face-to-face interview that assesses test-takers’ use of spoken English on both familiar and abstract topics. The total test time is 2 hours and 45 minutes.

The English Proficiency Test (EPT)

The EPT comprises the end-of-course tests administered by the university’s English Language Institute. Similar to IELTS, the EPT measures proficiency in all four skills (listening, speaking, reading, and writing), but on a scale from low (band score 1) to advanced (band score 6). Undergraduate admission is gained by receiving one rating of 3 and three ratings of 4 on the EPT. A 3 rating indicates moderate proficiency in English, and a 4 rating, good proficiency. The EPT assessment tasks are designed in-house to reflect the use of all four language skills as required for academic discourse, the ability to recognise and use a range of academic vocabulary items and grammatical constructions in written and spoken academic discourse, as well as the development of critical and creative thinking skills in an academic context.

EPTs are performed under timed conditions in the final week of the course, although the speaking grade includes an element of internal assessment gathered throughout the trimester. Each skill is assessed separately. The Listening assessment includes three tests that aim to evaluate the test-takers’ ability to understand and record academic information presented as short academic lectures and talks. The Reading assessment includes three tests that aim to evaluate the test-takers’ ability to understand academic texts. Like the IELTS reading tests, the EPT reading tests cover a wide range of skills, from reading for different levels of information to understanding logical arguments and recognising the writers’ opinions and purpose. The Writing assessment involves two tests. The first is an argument essay, where students argue a point of view on a controversial topic, including a counterargument and refutation. The second is a data comparison essay, where students describe the main patterns or similarities and differences evident in a graph or graphs and provide possible reasons for them. The Speaking assessment includes an academic seminar presentation delivered to an audience and an assessor, and a formal interview, where academic content studied on the course is discussed with an assessor. Evidence is also gained from speaking tasks during the course to contribute to the speaking grade. The total test time is approximately 5 hours.

While there are similarities between IELTS and the EPT, there are some key differences. These differences are most evident in the test times and the writing tasks. The total time required for the EPT is almost twice that of IELTS, which allows for longer EPT writing tests and the potential for longer texts. Whereas the IELTS writing section allows 60 minutes for two tasks, with minimum lengths of 150 words for the first task and 250 words for the second, the EPT allows 90 minutes for two tests with no word limits. EPT writing tests also require more complex responses. For example, the data comparison essay adds to the IELTS requirement of data description to include reasons and possible implications. Similarly, the argument essay provides scope to develop an argument while acknowledging other views.

Students need to have gained the requisite scores on either the IELTS or the EPT within two years of applying for admission to the university. Although the EPT can only be taken by students who are enrolled in the English Proficiency Programme (EPP) at the university, IELTS can be taken by anyone, including students enrolled in EPP. Some students who are enrolled in the English Proficiency Programme nevertheless take an IELTS test and use their IELTS scores as evidence of English language proficiency rather than the EPT. In some cases, students take IELTS before they have completed the English Proficiency Programme as they want confirmation that they will be accepted into undergraduate study in advance of the EPT score release. In other cases, students take the IELTS to hedge their bets and sometimes fail the EPT, but achieve the necessary scores on IELTS, using those scores to enter the university. Data for this study was collected from the university records. Therefore, the test that each student used to gain admission to the university was included in the data. The university does not hold information about other tests taken. Therefore, entering through IELTS does not preclude having been enrolled in the EPP. In this study, the students who entered the university through each test are considered to be a separate group of students who met a particular admission criterion for entry to the university, regardless of whether or not they had been enrolled in the EPP.

Participants

The participants in this study were full fee-paying international students who entered the university over the three-year period between 2014 and 2017 to study at the undergraduate level. Of the students who entered over that three-year period, 747 were identified as non-native speakers of English and required to provide evidence of English language proficiency. Students who cannot demonstrate a suitable level of academic attainment are unable to enter the university regardless of their evidence of English language proficiency. Such students are directed to foundation studies programmes and are not included in this study. In addition, students who had studied in English medium for at least one year prior to application for admission can enter the university without providing evidence of language proficiency. Those students are also not included in this study.

Of the 747 students who were required to provide evidence of English language proficiency, 272 (36.4%) provided IELTS scores as evidence and another 170 (22.8%) provided EPT scores as evidence. These two types of evidence were most commonly provided by ESL international undergraduate students applying for admission to the university. Test scores were available for 399 of the 442 students (90.3%) who provided these two types of evidence. The test scores and GPA data for those 399 students were collected and analysed in this research to determine the ability of the overall test scores and individual skill scores provided at the time of admission to predict academic achievement, as measured by GPA achieved throughout their period of undergraduate enrolment.

The ethical guidelines of the university allowed the researchers to collect anonymous data relating to this large group of students. Unfortunately, the requirement for the data to be anonymous precluded us from identifying demographic information such as age, gender or first language of the participants. In addition, ethical requirements prevented the collection of data regarding the major or degree programme in which each student was enrolled.

Students who enter the English Proficiency Programme are a diverse group with a wide range of educational backgrounds. What they have in common is that by the time they arrive at the university, they have failed to meet the university’s language proficiency entry criteria. After completing the EPP, students enrol in a wide range of majors across the university which are representative of the majors enrolled in by international students as a whole. Students who take the IELTS make up the largest group of international students admitted to the university; more than one-third of international undergraduate students use IELTS scores for admission to the university. Overall, the students included in this study made up more than 59% of all international undergraduate students admitted over the time period. Thus, although data about the majors in which students were enrolled were not available to us, both groups of students were likely to be representative of the majors taken by international undergraduate students.

Data collection and analysis

Data relating to pre-admission qualifications was generated by the admissions office at the university and the management information office added cumulative GPA data for each student and anonymised the data before sharing it with the researchers. The researchers cleaned and coded the pre-admission qualification data for analysis. It was confirmed that the data for both the students who entered with IELTS scores and the students who entered through the EPT met the assumptions of normal distribution and homogeneity of variance and could thus be analysed using parametric statistics. Each dataset was then subjected to two linear regression analyses to determine the extent to which the overall test scores and the individual skill scores were predictive of subsequent academic achievement.
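The assumption checks described above can be sketched as follows. The data here are synthetic stand-ins (the study's actual scores are not public, so the group sizes and the GPA mean and standard deviation are borrowed from the descriptives reported in this article), and the specific procedures used (Shapiro-Wilk for normality, Levene's test for homogeneity of variance) are common choices for this purpose, not tests the article names.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins for cumulative GPA in the two groups,
# loosely matching the sample sizes and descriptives in this study.
ielts_gpa = rng.normal(4.2, 1.8, 240)
ept_gpa = rng.normal(4.2, 1.8, 160)

# Normality: Shapiro-Wilk test on each group's GPA distribution.
_, p_ielts = stats.shapiro(ielts_gpa)
_, p_ept = stats.shapiro(ept_gpa)

# Homogeneity of variance: Levene's test across the two groups.
_, p_levene = stats.levene(ielts_gpa, ept_gpa)

# p-values above .05 indicate no evidence of assumption violations,
# so parametric analyses (here, linear regression) can proceed.
```

A check of this kind is what licenses the parametric regression analyses that follow; had either assumption failed, non-parametric or robust alternatives would have been needed.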

Overall IELTS scores were available for 240 of the 272 students who used IELTS to demonstrate language proficiency (88.2%). A linear regression analysis was employed using the enter method to determine the predictive validity of overall IELTS scores. Although overall IELTS scores were available for 240 students, individual IELTS subsection scores were only available for 49 students. A second linear regression analysis was employed using the enter method to determine the extent to which proficiency in each of the four language skills predicted academic achievement, including data from these 49 students.

Overall EPT scores were available for 160 of the 170 students who entered through the in-house English Proficiency Test (94.1%) and individual subsection scores were available for all 160. Therefore, both regression analyses for the EPT included the same 160 students.

The regression analyses, with academic achievement as the dependent variable and different measures of English language proficiency as independent variables, were intended to determine the extent to which the two tests and the four language skills predict international ESL students' academic achievement in an Anglophone context.
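The two analyses per dataset can be sketched in Python as below. The scores and GPA values are simulated (with writing loosely driving GPA, echoing the pattern reported for the EPT); only the analysis structure, a simple regression on the overall score and a multiple regression entering all four skills together, follows the description above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 160  # mirrors the EPT sample size reported below
# Hypothetical scores: an overall band plus four skill subscores
# (listening, speaking, reading, writing) -- simulated, not study data.
overall = rng.uniform(3.0, 5.0, n)
skills = rng.uniform(2.5, 5.0, (n, 4))
# Simulated GPA loosely driven by writing (last column).
gpa = 1.0 + 0.8 * skills[:, 3] + rng.normal(0.0, 1.5, n)

# Regression 1: overall score as the single predictor of GPA.
res = stats.linregress(overall, gpa)
r_squared = res.rvalue ** 2  # share of GPA variance explained

# Regression 2 ("enter" method): all four skills entered simultaneously.
X = np.column_stack([np.ones(n), skills])  # intercept + 4 predictors
coefs, *_ = np.linalg.lstsq(X, gpa, rcond=None)
intercept, b_listen, b_speak, b_read, b_write = coefs
```

In practice a package such as statsmodels would also report the per-coefficient significance tests the paper relies on; the least-squares fit above recovers only the coefficients themselves.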

Results

GPA

Cumulative GPA data covering each student's period of undergraduate study were used as the dependent variable in all regression analyses. GPA scores at the university in focus range from 0 (E) to 9 (A+). The mean cumulative GPA for the entire group of 399 students was 4.23 (a B-), with a range from 0 to 8.6 (a low A+) and a standard deviation of 1.84. These descriptive statistics show that this group of full fee-paying international students performed slightly better overall than domestic students at the university, whose mean cumulative GPA over the same time period was 4.0.

IELTS

Descriptive statistics for both the overall IELTS scores and IELTS subsection scores can be seen in Table 1.

Table 1. Descriptive statistics for IELTS scores.

The first regression analysis measured the ability of overall IELTS scores to predict academic achievement, measured by cumulative GPA. The results of this analysis can be seen in Table 2.

Table 2. IELTS overall score regression results (N = 240).

The second regression analysis measured the ability of each subsection score to predict academic achievement. The results of this analysis can be seen in Table 3.

Table 3. IELTS subsection score regression results (N = 49).

The EPT

The descriptive statistics for both the overall EPT scores and the EPT subsection scores can be seen in Table 4.

Table 4. Descriptive statistics for EPT scores.

The first regression analysis measured the ability of overall EPT scores to predict academic achievement, measured by cumulative GPA. The results can be seen in Table 5.

Table 5. EPT overall score regression results (N = 160).

The second regression analysis found that writing subsection scores were significantly predictive of academic achievement. The other three subsection scores were found not to significantly predict academic achievement (see Table 6).

Table 6. EPT subsection score regression results (N = 160).

Discussion

Overall, the results indicate that language proficiency had a significant effect on academic achievement after admission, with a medium effect size, corroborating the literature (Dang & Dang, Citation2023; Davies, Citation2008; Feast, Citation2002; Woodrow, Citation2006). Furthermore, the predictive ability of individual subsection scores for listening, speaking, and reading was remarkably similar between the two tests. On the other hand, the predictive ability of writing scores was discrepant.

A previous needs analysis conducted by one of the researchers revealed that 55% of grades across the undergraduate curriculum at the university were obtained through the completion of extended written assignments; in the more linguistically demanding faculties, as much as 71% of grades were obtained in this way. This demonstrates the importance of writing across the undergraduate curriculum in this context and may be one factor explaining why writing scores significantly predicted undergraduate achievement in this context while listening, speaking, and reading scores did not.

However, in contrast to the EPT writing test scores (which had a significant positive relationship with achievement), there was a slight trend for students with lower IELTS writing scores to achieve better academically than those who received higher IELTS writing scores. A similar result was found by Dooey and Oliver (Citation2002). This non-significant result is also consistent with literature reporting clear differences between the type of writing measured in the IELTS writing subsection and the type of writing required for university study (Dang & Dang, Citation2023; Deakin, Citation1997; Green, Citation2007; Moore & Morton, Citation2005). This suggests a significant weakness in the leading commercial test of English for Academic Purposes.

EPT writing scores were the only subsection scores found to be significantly predictive of academic achievement, and the relationship was strongly significant. This suggests that the EPT writing test is effective for measuring students' preparedness for university-level writing assignments. Focussing on the differences between these two tests may help us to understand the reasons for these discrepant results. Differences between the two tests can be seen in Table 7.

Table 7. Comparison of IELTS and EPT writing tests.

The first difference between these two tests is the emphasis on critical thinking skills in the EPT writing test. Specifically, the EPT argument essay task requires students to take a position, argue for it, and consider and respond to alternative positions. In other words, going beyond description and explanation to demonstrate higher levels of critical thinking, such as analysis and evaluation, contributes to successful performance on the task. This differs from the IELTS essay task, in which test takers respond to a topic (which could be a point of view, a problem, or an argument) and are then assessed on four criteria: task response, coherence and cohesion, lexical resource, and grammatical range and accuracy (IELTS, Citation2018). These criteria focus mainly on language use, not critical thinking skills. In addition, IELTS Task 1 asks test takers to describe data or explain the figures or diagrams presented, whereas the EPT task extends this, requiring students to interpret the patterns and determine possible implications of the data. The emphasis on critical reasoning in the EPT writing tests is closely aligned with the requirements of academic writing at university.

Deakin (Citation1997) and Green (Citation2006) both found that authentic university writing tasks tend to require longer and more complex written texts than the IELTS. Respondents in Deakin's (Citation1997) study felt that the construct of academic writing in the IELTS was partial or simplified compared to actual academic writing tasks, while Green (Citation2006) found that EAP courses included a wider range of written genres, deeper research on topics, and greater cognitive demands than IELTS preparation courses. It is not beyond reason that the writing section of a commercial language proficiency test could require longer and more complex writing tasks and thus provide a better measure of readiness for academic study. Indeed, another difference between the IELTS writing section in its current form and the EPT writing test is length. Both EPT writing tasks require longer and more complex writing than the IELTS versions, which may better prepare students for university study, where written assignments are usually significantly longer than the IELTS minimum length of 250 words (for the essay task) or 150 words (for the data writing task). Rather than specifying a word limit, students who sit the EPT writing tests are encouraged to write as much as they can within a 45-minute time frame for each essay, but with an emphasis on quality over quantity.

Conclusion

There are some clear limitations inherent in this study, which need to be taken into account when considering the results and implications and when drawing conclusions. Most importantly, the data were collected from just one university in New Zealand. The discussion in the literature review makes it clear that different levels of language proficiency and different combinations of language skills are likely to be required in different contexts. In addition, the analysis of the relationship between individual IELTS subsection scores and subsequent academic achievement included data from just 49 students; data from a larger number of students may have enabled significant relationships to be found. Moreover, the inability to collect demographic data on the participants means that there may have been differences between the two groups of students that could help to explain the differences between the groups, especially differences in the first language backgrounds of the students. Collecting data about which disciplines students studied would also have improved this research, since different language skills may be more important in different disciplines. Nevertheless, the results of this study contribute to discussions about the predictive ability of overall IELTS scores and IELTS subsection scores.

Most notably, the non-significant relationship found between IELTS writing subsection scores and subsequent academic achievement was a negative one. Additional data would be likely to change the strength, rather than the direction, of such a relationship, suggesting that the writing subsection of IELTS is not useful for determining whether students have sufficient written language proficiency for university-level writing assignments. Phakiti (Citation2008) suggested that students take a standardised language proficiency test such as the IELTS before admission as well as an in-house test upon admission, to gain a more detailed understanding of incoming students' competencies. This suggestion would seem to be particularly effective if the in-house test students take upon admission takes the form of a writing proficiency test that is closely matched to the actual writing skills needed after admission.

Although the in-house proficiency test was as effective as IELTS overall at predicting the academic performance of undergraduate students after admission, some adjustments to the admissions criteria may allow the predictive validity of the test to be increased further. Most significantly, test takers are currently required to obtain a score of four in three of the four subsections of the test and a score of three in the remaining one. Since writing subsection scores were found to be significantly predictive of subsequent academic achievement while other subsection scores were not, requiring students to receive a score of four in the writing subsection may improve the test's predictive ability. Such changes should be considered to maximise the usefulness of test scores.

One disadvantage of the in-house proficiency test was that taking it required approximately five hours' commitment from both test takers and administrators. Since a university in-house test is mainly concerned with the quality of outcomes, this time commitment seems reasonable. However, since the writing subtest scores were strongly predictive of academic achievement, institutions could consider employing a writing test similar to the EPT's while reducing the length and complexity of the other subsections of their tests, to determine whether similar predictive ability of the overall test scores could be achieved with less emphasis on the other language skills.

However, it is not unreasonable to suggest that the IELTS academic writing test should be improved to measure skills that are more closely aligned with university study. There are few university contexts in which the ability to write a 250-word essay will be sufficient to progress successfully through even undergraduate studies. Moreover, IELTS academic test scores are also used to measure language proficiency for entry to postgraduate and even doctoral studies, and in these programmes students who have only taken the IELTS in its current form may experience significant difficulties. Suggested changes to the IELTS academic writing test include extending the time allowed and the required length of both writing tasks, requiring greater complexity in written texts, and requiring demonstration of critical thinking skills in English, all of which are likely to provide more robust evidence of sufficient language proficiency for university study.

As mentioned in the introduction, a wide range of factors affect the academic success of each student who enters university. However, for international students who are native speakers of a language other than English studying in an Anglophone context, language proficiency is likely to be one important factor. This study and previous studies have consistently found a small but significant effect of language proficiency. Moreover, in relation to research question 1, it was found that overall IELTS scores and overall EPT scores were roughly equal in their ability to predict academic achievement after admission, predicting a small but significant amount of variance in students' cumulative GPA.

In response to research question 2, this research corroborated previous research, finding that writing is the most important predictor of university academic achievement (Harrington & Roche, Citation2014; Li et al., Citation2010). However, since the results of previous research have demonstrated discrepant findings, there is certainly a need for further research. In particular, research investigating different test formats and tasks for all four skills that can predict academic achievement at university is still needed. In addition, language test tasks that reflect specific features of the university curriculum in a particular context would be likely to achieve greater predictive ability. Thus, a constructive alignment which goes beyond the individual course and considers connections between admissions requirements, university curriculum and graduate profile would be valuable in further progressing this line of research.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Rachael Ruegg

Rachael Ruegg is a Senior Lecturer in the School of Linguistics and Applied Language Studies at Te Herenga Waka - Victoria University of Wellington.

Ha Hoang

Ha Hoang is a Teacher in the English Language Institute at Te Herenga Waka - Victoria University of Wellington.

Natalia Petersen

Natalia Petersen is a Senior Teacher in the English Language Institute at Te Herenga Waka - Victoria University of Wellington.

References

  • Dang, C. N., & Dang, T. N. Y. (2023). The predictive validity of the IELTS test and contribution of IELTS preparation courses to international students’ subsequent academic study. RELC Journal, 54(1), 84–98.
  • Davies, A. (2008). Assessing academic English. Testing English proficiency 1950-1989: The IELTS solution. Cambridge University Press.
  • Deakin, G. (1997). IELTS in context: Issues in EAP for overseas students. EA Journal, 15(2), 7–28.
  • Dooey, P., & Oliver, R. (2002). An investigation into the predictive validity of the IELTS test as an indicator of future academic success. Prospect, 17(1), 36–54.
  • Feast, V. (2002). The impact of IELTS scores on performance at university. International Education Journal, 3(4), 70–85.
  • Ginther, A., & Yan, X. (2017). Interpreting the relationships between TOEFL iBT scores and GPA: Language proficiency, policy, and profiles. Language Testing, 35(2), 271–295. https://doi.org/10.1177/0265532217704010
  • Green, A. (2006). Watching for washback: Observing the influence of the international English language testing system academic writing test in the classroom. Language Assessment Quarterly, 3(4), 333–368. https://doi.org/10.1080/15434300701333152
  • Green, A. (2007). Washback to learning outcomes: A comparative study of IELTS preparation and university pre-sessional language courses. Assessment in Education: Principles, Policy & Practice, 14(1), 75–97. https://doi.org/10.1080/09695940701272880
  • Harrington, M., & Roche, T. (2014). Identifying academically at-risk students in an English-as-a-lingua-franca university setting. Journal of English for Academic Purposes, 15, 37–47. https://doi.org/10.1016/j.jeap.2014.05.003
  • Hill, K., Storch, N., & Lynch, B. (1999). A comparison of IELTS and TOEFL as predictors of academic success. IELTS Research Reports, 2, 53–63.
  • Humphreys, P., Haugh, M., Fenton-Smith, B., Lobo, A., Michael, R., & Walkinshaw, I. (2012). Tracking international students’ English proficiency over the first semester of undergraduate study. IELTS Research Reports Online Series, 41(1), 1–41.
  • IELTS. (2015). Ensuring quality and fairness in international language testing. British Council.
  • IELTS. (2018). Setting IELTS entry scores. Retrieved 25 February 2019, from https://www.ielts.org/ielts-for-organisations/setting-ielts-entry-scores.
  • Ingram, D., & Bayliss, A. (2007). IELTS as a predictor of academic language performance, Part 1. International English Language Testing System (IELTS) Research Reports 2007: Volume 7, 1.
  • Johnson, E. M. (2008). An investigation into pedagogical challenges facing international tertiary-level students in New Zealand. Higher Education Research and Development, 27(3), 231–243. https://doi.org/10.1080/07294360802183796
  • Johnson, R. C., & Tweedie, M. G. (2017). A comparison of IELTS, TOEFL, and EAP course results as predictors of English language learning success in an undergraduate nursing program. In C. Coombe, P. Davidson, & A. Gebril (Eds.), Language assessment in the Middle East and North Africa: Theory, practice and future trends (pp. 36–53). TESOL Arabia.
  • Johnson, R. C., & Tweedie, M. G. (2021). “IELTS-out/TOEFL-out”: Is the end of general English for academic purposes near? Tertiary student achievement across standardized tests and general EAP. Interchange, 52(1), 101–113. https://doi.org/10.1007/s10780-021-09416-6
  • Kamaşak, R., Sahan, K., & Rose, H. (2021). Academic language-related challenges at an English-medium university. Journal of English for Academic Purposes, 49, 100945. https://doi.org/10.1016/j.jeap.2020.100945
  • Kerstjens, M., & Nery, C. (2000). Predictive validity in the IELTS test: A study of the relationship between IELTS scores and students’ subsequent academic performance. International English Language Testing System (IELTS) Research Reports 2000: Volume 3, 85.
  • Knoch, U., Rouhshad, A., & Storch, N. (2014). Does the writing of undergraduate ESL students develop after one year of study in an English-medium university? Assessing Writing, 21, 1–17. https://doi.org/10.1016/j.asw.2014.01.001
  • Li, G., Chen, W., & Duanmu, J. (2010). Determinants of international students’ academic performance: A comparison between Chinese and other international students. Journal of Studies in International Education, 14(4), 389–405. https://doi.org/10.1177/1028315309331490
  • Moore, T., & Morton, J. (2005). Dimensions of difference: A comparison of university writing and IELTS writing. Journal of English for Academic Purposes, 4(1), 43–66. https://doi.org/10.1016/j.jeap.2004.02.001
  • Neumann, H., Padden, N., & McDonough, K. (2019). Beyond English language proficiency scores: Understanding the academic performance of international undergraduate students during the first year of study. Higher Education Research & Development, 38(2), 324–338. https://doi.org/10.1080/07294360.2018.1522621
  • Oliver, R., Vanderford, S., & Grote, E. (2012). Evidence of English language proficiency and academic achievement of non-English-speaking background students. Higher Education Research & Development, 31(4), 541–555. https://doi.org/10.1080/07294360.2011.653958
  • Phakiti, A. (2008). Predicting NESB international postgraduate students’ academic achievement: A structural equation modeling approach. International Journal of Applied Educational Studies, 3(1), 18–36.
  • Read, J. (2015). Assessing English proficiency for university study. Palgrave Macmillan.
  • Schoepp, K. (2018). Predictive validity of the IELTS in an English as a medium of instruction environment. Higher Education Quarterly, 72(4), 271–285. https://doi.org/10.1111/hequ.12163
  • Woodrow, L. (2006). Academic success of international postgraduate education students and the role of English proficiency. University of Sydney Papers in TESOL, 1(1), 51–70.
  • Yen, D., & Kuzma, J. (2009). Higher IELTS score, higher academic performance? The validity of IELTS in predicting the academic performance of Chinese students. Worcester Journal of Learning and Teaching, 3.

Appendix

Accepted evidence of English language proficiency.