CURRICULUM & TEACHING STUDIES

IELTS Writing Preparation Course Expectations and Outcome: A Comparative Study of Iranian Students and Their Teachers’ Perspectives

Article: 1918853 | Received 14 Jul 2020, Accepted 07 Apr 2021, Published online: 05 May 2021

Abstract

The writing module of the IELTS has proven to be a challenging section for test takers. Part of this difficulty may stem from the divergence between what test takers expect to learn in preparation courses and what is delivered by instructors. To address this issue, a 24-item questionnaire (Student Questionnaire A) on IELTS writing preparation course expectations was distributed among 64 Iranian students attending such classes at course entry. A slightly modified version of the same questionnaire (Student Questionnaire B) was given to the students at course exit to measure their post-course perceptions. Moreover, a third questionnaire (Teacher Questionnaire) was administered at course exit to the seven teachers instructing those same classes, allowing the researchers to compare the students' and the teachers' perceptions of the course content and outcome at course exit. Interviews were also conducted with the seven instructors at the end of the course to further explain any possible divergence between the learners' and teachers' perceptions. The results showed little change between the students' expectations at course entry and their retrospective perceptions at course exit. Furthermore, the teachers' perceptions of the course content and outcome did not differ much from those of the students at course exit. Despite the overall similarities, the instructors enumerated several factors as possible causes of the divergences in perceptions that did emerge.

PUBLIC INTEREST STATEMENT

Large-scale, high-stakes international English language tests such as IELTS normally carry extremely important consequences for their participants. Among the four language skills assessed on the IELTS test, writing has proved to be the most difficult one. The skill is undoubtedly of great importance for university students, since academic assessment relies heavily on writing. Given this importance, IELTS writing preparation courses are abundant in countries with large numbers of candidates wanting to take the test. However, few studies have focused on how students perceive the academic module of IELTS, specifically the writing skill. The present study attempts to address this gap in the literature by exploring the expectations (regarding content and outcome) that students bring to IELTS writing preparation courses and how these expectations compare with the students' own and their teachers' perceptions at course exit. Understanding and resolving the mismatches can contribute to the efficacy of such courses.

1. INTRODUCTION

The International English Language Testing System (IELTS) is said to be one of the most popular high-stakes English language tests; the British Council websiteFootnote1 calls it “the world’s most popular language test for higher education and global migration”. Green (2006, p. 113) describes it as “a high-stakes gate keeping test used by universities to screen applicants for language ability.” How a candidate performs on the test may have serious future consequences for the test taker. Green further explains that “IELTS might be expected to exert a strong influence on teacher and learner behavior” (p. 114). It is no wonder, then, that IELTS-oriented courses are abundant in countries where large numbers of candidates need to take the test.

Among the four language skills assessed in IELTS, writing has proved to be one of the most difficult. As reported on the official IELTS website,Footnote2 writing had the lowest band score among the four language skills for both Academic and General Training test takers. Writing is undoubtedly of great importance for university students, since academic assessment relies heavily on it and is most often carried out through this very skill (Nunan, Citation2015; Ur, Citation2012). It also assumes a key role in scholastic and academic achievement (Lee et al., Citation2016; Uysal, Citation2010). Since it seems to be a skill the majority of test takers have problems with, there are numerous preparation courses which cater to the writing needs of students. However, it appears that, at times, the course priorities that instructors of such courses set may diverge from what learners really need or deem useful. In other words, IELTS candidates may have test needs that should be incorporated into preparation courses. Few studies have focused on how students' perceptions of a course which prepares them for such assessment match those of the instructors, or on how the course content caters to IELTS candidates' needs, specifically those required for the writing skill in the academic module of the IELTS. This study aims to bridge this gap by exploring the matter from the points of view of both the teacher and the learner so as to provide teachers with a clearer idea of the expectations their learners bring to class and how these are (or are not) fulfilled at course exit. To that end, the following questions were posed:

  1. What expectations (regarding content and outcome) do Iranian students bring to IELTS writing preparation courses?

  2. How do these expectations compare with the students’ own and their teachers’ perceptions at course exit?

2. REVIEW OF THE LITERATURE

2.1. Importance and Development of the Writing Skill

Writing is an extremely versatile tool that can be used to accomplish a variety of goals (Bai, Citation2016). People use writing to create imagined worlds, tell stories, share information, explore who they are, combat loneliness, and chronicle their experiences. In addition, good written communication skills are essential for students as well as professionals (Hyland, Citation2019; Russ, Citation2009; Swales & Feak, Citation2012). Krapels et al. (Citation2003) state that written communication skills are greatly valued across different professions and that many employers “specifically identify communications skills as a job requirement”; even when the requirement is not explicitly listed, there is a “strong assumption that the requirement is implicit” (p. 90). Writing is also an indispensable tool for second language (L2) learning; the impact of writing on L2 learning development has been shown in many studies (e.g., Blake, Citation2009; Hirvela et al., Citation2016; Weigle, Citation2002).

Writing, and specifically academic writing, has long proved to be one of the most problematic skills for English language learners to improve (Hyland, Citation2019; Ip & Lee, Citation2015). When composing texts, multiple linguistic, physical, and cognitive operations are employed and synchronized to meet the demands the text imposes, such as audience needs, communicative purposes, and specific writing conventions. These operations include planning, generating text, transcribing, reviewing, and revising, and they are recursive and iterative in nature (Crusan, Citation2010; Dempsey et al., Citation2009; Flower & Hayes, Citation1980). To many English language learners, writing can be a daunting experience and may pose real challenges, regardless of their language proficiency level. Nor does this seem to be a problem restricted to nonnative speakers of English. In 2003, the National Commission on Writing in America’s Schools and Colleges released a groundbreaking report arguing that writing had been shortchanged in the school reform movement of the preceding twenty years and must now receive the attention it deserves. The Commission names writing the Neglected “R” and claims that the writing of students in the United States “is not what it should be” (National Commission on Writing, Citation2003, p. 7).

Two basic approaches have dominated much of the discussion about how writing develops. One viewpoint focuses on how context shapes writing development (Russell, Citation1997), whereas the other concentrates mostly on the role of cognition and motivation in writing (Hayes, Citation1996). The contextual view of writing is captured by Schultz and Fecho (Citation2000), who indicate that writing development: (a) reflects and contributes to the social, historical, political, and institutional contexts in which it occurs; (b) varies across school, home, and work contexts in which it occurs; (c) is shaped by the curriculum and pedagogical decisions made by teachers and schools; (d) is tied to the social and cultural identity of the writer(s), and (e) is greatly influenced by the social interactions surrounding writing. This view of writing development is illustrated through a model developed by Russell (Citation1997). The model shows how macro-level social and political forces influence micro-level writing actions and vice versa.

A basic unit in Russell’s model is the activity system, which examines how actors (e.g., a student, a pair of students, a student and teacher, or a class—perceived in social terms and also taking into account the history of their involvement in the activity system) use concrete tools, such as paper and pencil, to accomplish an action leading to an outcome, such as writing a description of a field trip. The outcome is accomplished in a problem space where the actors use writing tools in an ongoing interaction with others to shape the paper being produced over time in a shared direction. Russell’s (Citation1997) model also employs the concept of genre “as typified ways of purposefully interacting in and among some activity system(s)” (p. 513). When people actually use writing in regularized ways, these typified interaction patterns stabilize, which in turn creates a way of writing that is predictable for the most part. These are conceived as only temporarily stabilized structures, however, because they are subject to change depending on the context. For example, a new student entering a classroom with an established activity system may appropriate some of the routinized tools used by others in the class, such as using more interesting words instead of more common ones when writing. In turn, the new student may change typified ways of writing in the classroom as other students adopt unfamiliar routines applied by their new classmate, such as beginning a paper with an attention grabber. The description of Russell’s (Citation1997) model offered so far mostly focuses on how writing development is shaped by the social and contextual interactions that occur within the classroom, between students and with the teacher.

This underscores the role of the teacher in learners’ writing achievement. With regard to teacher capacity, many teachers report that they are ill-prepared to teach writing. For example, in a survey conducted by Kiuhara et al. (Citation2009), one out of every two high school teachers indicated that they had little to no preparation in how to teach writing. Although many teacher training courses and even teacher’s guides provide teachers with a roadmap, this map is of limited value if teachers do not possess the knowledge, skills, and tools needed to achieve the outlined objectives. These objectives include having a reasonable handle on why writing is important, how writing develops, and how to teach writing (Graham et al., Citation2013). If teachers know why writing is important, as Graham et al. (Citation2013) noted, they are more likely to invest the energy and time needed to achieve the writing standards. It seems that writing instructors need to gain and deepen an understanding of how writing develops in order to reach high levels of writing instruction expertise. Moreover, possessing effective tools for teaching writing may well assist teachers in adjusting writing benchmarks and their instruction so that they become more pertinent to individual students’ needs. Green (2006, p. 131), who conducted research on a restricted sample of courses preparing Chinese learners for academic study at UK universities, observes that “evidence from both the teachers and the students regarding their course expectations and outcomes is indicative of substantive differences between the course types included in the study”.

A heavy burden, thus, may be placed on test instructors. Participants in exam preparation courses may have expectations other than the objectives set for such classes. Another issue could be the effectiveness with which the instructors of these courses teach and prepare their learners. The question of what constitutes effective teaching has always been high on researchers’ and educators’ agendas. Changes in testing strategies, more recent statistical methodologies, and access to databases that provide information on student achievement have made it possible to examine this question more directly. This helps teacher effectiveness to be more clearly understood, which can consequently facilitate more informed decisions as to how to recruit and pay teachers, how to provide them with in-service training, and how to evaluate them (Stronge, Citation2018; Stronge et al., Citation2011). Recent work has also emphasized how significant it can be to connect teacher effectiveness to teacher education and its different aspects (Darling-Hammond & Bransford, Citation2005; Hanushek, Citation2008; National Academy of Education, Citation2008; Odden, Citation2004). Aligning teacher effectiveness with teacher education has significantly helped the provision of quality education and the improvement of school performance. Despite the importance of writing and its instruction, however, little time is devoted to teaching writing or using writing as a tool to support learning in many countries (e.g., Gilbert & Graham, Citation2010; National Commission on Writing, Citation2003; Wyse, Citation2003). This has affected English language learners’ writing skill and even the way writing is regarded and instructed.

2.2. IELTS Writing

The IELTS test is considered an established and widely used international English language proficiency exam. It has two modules that serve two different purposes: General Training and Academic. Four language skills are assessed in the test, equally weighted and averaged to an overall band score ranging from 0 to 9. Research has been done on the validity of the IELTS test modules (Alavi et al., Citation2018; Quaid, Citation2018), washback on teaching practices (Estaji & Ghiasvand, Citation2019; Green, Citation2006a, Citation2007a, Citation2007b; Mickan & Motteram, Citation2008; Saville & Hawkey, Citation2004), learners’ approaches to test preparation (Brown, Citation1998; Elder & O’Loughlin, Citation2003; Green, Citation2007a, Citation2007b; Mickan & Motteram, Citation2009; Read & Hayes, Citation2003), learner perspectives on IELTS preparation course expectations and outcomes (Green, Citation2006a), and score gain (Elder & O’Loughlin, Citation2003; Green, Citation2007a; Humphreys et al., Citation2012; O’Loughlin & Arkoudis, Citation2009).

According to the IELTS Guide for Teachers, published on the official IELTS website,Footnote3 the writing test in each module has two tasks. In the Academic test, the topics are of general interest and are suitable for test takers entering undergraduate or postgraduate studies or seeking professional registration. In Task 1, test takers are given graphs, tables, charts, or diagrams and are asked to describe, summarize, or explain the data, describe the stages of a process or how something works, or describe an object or event. Task 2 requires an essay in response to a point of view, argument, or problem. The General Training test also includes two tasks, with topics of general interest. In Task 1, test takers write a letter requesting information or explaining a given situation; the letter may be personal, semi-formal, or formal in style. In Task 2, test takers write an essay in response to a point of view, argument, or problem.

In Iran, as in other countries where the IELTS test is administered, most test takers obtain lower scores in writing than in the other three language skills, as data from the official IELTS website (https://www.ielts.org/teaching-and-research/test-taker-performance) indicate. Since these IELTS candidates show high levels of proficiency in the other skills, part of the issue seems to stem from a lack of effective writing instruction. This study narrows the focus to one possible cause of this inefficacy, namely the divergence between teachers' and students' opinions regarding IELTS writing preparation course content and outcome.

3. METHOD

3.1. Research Design

A sequential mixed-methods design was employed to collect the required data. In the quantitative phase, two sets of questionnaires were administered to the students participating in IELTS writing preparation courses, at course entry and at course exit, and the teachers of these courses completed a questionnaire at the end of the course. In the qualitative phase, semi-structured interviews were conducted with the teachers about the possible convergence or divergence of opinions. The purpose was to use the qualitative data, collected through in-person interviews, to further explain, clarify, and interpret the data from the quantitative phase, that is, the questionnaires.

3.2. Participants

The participants in this study were 64 students taking IELTS writing preparation courses at a private institute in Tehran, Iran, which specializes in IELTS preparation courses. The participants were both male and female, in the 18–38 age bracket, with an average age of 29.5; the gender variable was not controlled in the present study. The participants were all at the upper-intermediate level of English proficiency, as assessed by the institute’s proficiency test, which is modelled on Cambridge Practice Tests for IELTS and had a reliability index of 0.87. They all spoke Persian as their L1. Convenience sampling was used, with seven intact classes taking part in the study. None of the learners had taken the IELTS test previously, and all were intending to take the test following their courses with the intention of using their scores for either university admission or immigration.

The teachers of these seven classes, who were later interviewed, had between six and 15 years of teaching experience with a focus on IELTS, and all regularly teach IELTS writing preparation courses. All the necessary consent, approval, and permissions were obtained prior to data collection.

3.3. Instruments

3.3.1. Questionnaires

In the current study, three questionnaires were utilized. The first questionnaire (Student Questionnaire A) aimed to measure the learners’ expectations regarding the course content and outcome at course entry. The second questionnaire (Student Questionnaire B) was a slightly modified version of Student Questionnaire A and measured the students’ perceptions of these two variables at the end of the course. Finally, the third questionnaire (Teacher Questionnaire) was completed by the teachers of those same courses at the end of the course; its items focused on the teachers’ perceptions of the focus of their courses (see Appendix C). The overall reliabilities of Student Questionnaires A and B and the Teacher Questionnaire, calculated via Cronbach’s alpha coefficient right before the commencement of the courses, were 0.78, 0.90, and 0.80, respectively, indicating that the questionnaires enjoyed an acceptable level of reliability (a minimal computation sketch is given at the end of this subsection). All designed by Green (2006, p. 117), the items of the questionnaires “comprised a sentence accompanied by a five-point Likert scale attached to descriptors ranging from 1 (I definitely disagree) to 5 (I definitely agree)”. All three questionnaires contained the same 24 items, differing only in tense and wording to fit the perspective of the respondents (see Appendices A–C). On the development of the questionnaires, Green (2006, p. 116) notes:

The shared section of the three questionnaire instruments … was developed in the first instance from teacher and student comments collected in interviews regarding salient differences between IELTS preparation and other forms of EAP instruction … . As a form of qualitative validation, these comments were compared against syllabus documents and publicity for EAP and IELTS preparation courses, IELTS preparation textbooks and surveys of EAP learners … .

Through the interviews mentioned above, Green (2006, pp. 116–117) identified two key areas of difference. The first area, said to be given greater emphasis in EAP courses, included the following eight categories:

- The use of books and journals as extensive input for academic writing (reflected in items C12, C15, and C20).

- Use and integration of sources in academic writing including the nature of academic evidence (reflected in items C7, C13).

- Learning of subject specific content and vocabulary (reflected in items C1, C22).

- Learning about expectations and cultural difference in academic settings (reflected in items C3, C4, C11).

- Effective written communication and organization (reflected in items C8, C9).

- Editing and redrafting text (reflected in item C14).

- Producing extended texts—much longer than the 250 words of IELTS Task 2 (reflected in item C16).

- Concern with formal academic style (reflected in item C23).

The second area, said to be given greater emphasis in IELTS courses, included the following five categories:

- Learning of general rather than technical vocabulary (reflected in item C2).

- A close focus on the test format (reflected in items C19, C24).

- Concern with strategies for improving test scores (reflected in items C5, C6, C21).

- Managing study time (reflected in item C17).

- A focus on grammar (reflected in items C10, C18).

Put together, they form 13 categories reflecting and distinguishing the areas which are emphasized in EAP and IELTS courses respectively.
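For readers who wish to see how the reliability estimates reported above can be obtained, the following is a minimal sketch of computing Cronbach's alpha from a respondents-by-items matrix of Likert responses. The data are randomly generated placeholders, not the study's actual responses, and the function name is illustrative only.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = scores.shape[1]                              # number of items (24 in this study)
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 64 respondents x 24 items, each answered on a 1-5 Likert scale
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(64, 24)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```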

3.3.2. Semi-structured interview

The second measurement instrument utilized in this research was a semi-structured interview. The interviewees, who were the teachers who had filled in the Teacher Questionnaire, were asked to provide their opinions on their students’ responses to the items in Student Questionnaire B. The interview protocol included the 24 items of Student Questionnaire B and the Teacher Questionnaire, and, wherever the two contrasted, the teachers were asked to explain and offer insights as to why their perceptions of the course differed from what their students thought they had learned. The convergence or divergence of opinions was further analyzed. The interviews were conducted in either English or Persian, depending on which language the interviewees opted for.

3.4. Data Collection Procedure

The study was conducted in four phases, as detailed below:

  1. Sixty-four Iranian upper-intermediate IELTS students were given Student Questionnaire A at the beginning of their IELTS preparation courses. The IELTS preparation courses included in this study ranged from 6 to 8 weeks in length.

  2. Student Questionnaire B was handed out to the same students at the end of their IELTS preparation courses.

  3. The instructors of the aforementioned students were asked to fill in the Teacher Questionnaire at the end of the IELTS preparation courses; it was administered at the same time as Student Questionnaire B.

  4. The above-mentioned instructors were then interviewed on their opinions about their students’ views collected through the Student Questionnaire A and the Student Questionnaire B.

3.5. Data Analysis

The Wilcoxon signed-rank test was used to compare the students’ expectations about the IELTS preparation course at course entry with their perceptions at course exit. Moreover, the Mann-Whitney U test was applied to compare the students’ and teachers’ perceptions of the course content and outcome at course exit.
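As an illustration of these two analyses, the sketch below runs a paired Wilcoxon signed-rank test (entry vs. exit for the same students) and an independent-samples Mann-Whitney U test (students vs. teachers at exit) with scipy. The questionnaire totals here are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(1)
entry_totals = rng.integers(70, 121, size=64)    # Student Questionnaire A totals (hypothetical)
exit_totals = rng.integers(55, 101, size=64)     # Student Questionnaire B totals (hypothetical)
teacher_totals = rng.integers(70, 111, size=7)   # Teacher Questionnaire totals (hypothetical)

# Paired comparison: the same 64 students at course entry vs. course exit
w_stat, w_p = wilcoxon(entry_totals, exit_totals)

# Independent comparison: students vs. teachers at course exit
u_stat, u_p = mannwhitneyu(exit_totals, teacher_totals, alternative="two-sided")

print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.4f}")
```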

4. RESULTS

4.1. Students’ Expectations at Course Entry (Student Questionnaire A)

The students’ expectations of the course content and outcome at course entry are reported in Table 1. As seen in the table, the learners’ expectations were high for all course items, though to varying degrees; that is, the students expected the course to address all 13 content categories.

Table 1. A Descriptive Profile of the Learners’ Expectations at Course Entry

More specifically, the four most frequent expectations for the students at course entry were as follows:

1. Item 18: I expect my teacher to correct my grammar mistakes in my written work.

2. Item 5: I expect to learn ways of improving my English Language test scores.

3. Item 2: I expect to learn general vocabulary.

4. Item 6: I expect to learn words and phrases for describing graphs and diagrams.

On the other hand, the least frequent expectations were related to the following items:

1. Item 22: I expect to read books and articles about my specialist subject area.

2. Item 4: I expect to learn about differences between university education in my country and in foreign countries.

Having reported the most and least frequent expectations, we turn to the students’ responses by category. According to Table 2, the highest mean scores relate to test format (a close focus on the test format, emphasized in IELTS courses) and use of books/journals (the use of books and journals as extensive input for academic writing, emphasized in EAP courses). The lowest means relate to time management and producing extended text.

Table 2. Students’ Expectations of Course Categories at Course Entry

4.2. Students’ Perceptions at Course Exit (Student Questionnaire B)

Students’ perceptions of the course content and outcome at course exit were also calculated, and it was found that the students’ perceptions regarding the 13 course categories had changed; the proportions of agreement were not as high as the expectations at course entry (see Table 3).

Table 3. A Descriptive Profile of the Learners’ Perceptions at Course Exit

More specifically, the four most frequent perceptions for the students at course exit were:

  1. Item 2: I learned general vocabulary.

  2. Item 18: My teacher corrected my grammar mistakes in my written work.

  3. Item 19: The activities we did in class were similar to the ones on the IELTS test.

  4. Item 5: I learned ways of improving my English Language test scores.

On the other hand, the least frequent items were:

  1. Item 16: I learned how to write long essays or reports of 1,000 words or more.

  2. Item 4: I learned about differences between university education in my country and in foreign countries.

Moreover, analysis of the students’ responses by category demonstrated that the highest mean scores relate to test format and use of books/journals, while the lowest means relate to time management and producing extended text. Therefore, the pattern of the students’ expectations at course entry and their perceptions at course exit was almost the same, although the proportions differed (see Table 4).

Table 4. Students’ Perceptions of Course Categories at Course Exit

4.3. Teachers’ Perceptions at Course Exit (Teacher Questionnaire)

Teachers’ perceptions of the course content at course exit were calculated (see Table 5).

Table 5. A Descriptive Profile of the Teachers’ Perceptions at Course Exit

As can be seen in Table 5, the four most frequent teachers’ perceptions at course exit were:

  1. Item 2: Students learned general vocabulary.

  2. Item 18: I corrected the students’ grammar mistakes in their written work.

  3. Item 19: The activities we did in class were similar to the ones on the IELTS test.

  4. Item 6: Students learned words and phrases for describing graphs and diagrams.

On the other hand, the lowest frequencies related to:

  1. Item 22: Students read books and articles about their specialist subject area.

  2. Item 16: Students learned how to write long essays or reports of 1,000 words or more.

Furthermore, analysis of the teachers’ responses by category demonstrated that the highest mean scores relate to use of books/journals and test format, while the lowest means relate to producing extended text and editing and redrafting text (see Table 6). Comparing the students’ and the teachers’ perceptions at course exit indicates that the two groups held similar perspectives, although the proportions differed.

Table 6. Teachers’ Perceptions of Course Categories at Course Exit

4.4. Learners’ Expectations at Course Entry and Their Perceptions at Course Exit

The assumption of normality was investigated using Kolmogorov-Smirnov and Shapiro-Wilk tests, which examine whether the data set “meets the assumptions of having a normal distribution” (Loewen & Plonsky, Citation2016, p. 95). The results indicated that the distributions deviated significantly from normality, and thus the assumption of normality was not met. Since the normality assumption was violated, the researchers examined the first research question with a non-parametric test: the Wilcoxon signed-rank test was used to compare learners’ expectations at course entry with their perceptions at course exit. As shown in Table 7, the median score decreased from 96 for the initial expectations to 75 for the final perceptions. The Wilcoxon signed-rank test revealed that the decrease from entry to exit was significant, with an effect size of 0.63, which is a large value. This means that the EFL learners’ expectations of the IELTS course at the beginning and their reported learning at the end were not compatible; there was a statistically significant difference between their expectations at course entry and their perceptions at course exit.
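The paragraph above reports normality screening followed by a non-parametric test with an effect size. The paper does not state which effect-size formula was used; a common choice for the Wilcoxon signed-rank test is r = Z/√N, which the illustrative sketch below computes on randomly generated placeholder totals (not the study's data).

```python
import numpy as np
from scipy.stats import shapiro, kstest, wilcoxon, norm

rng = np.random.default_rng(2)
entry_totals = rng.integers(70, 121, size=64).astype(float)  # hypothetical entry totals
exit_totals = rng.integers(55, 101, size=64).astype(float)   # hypothetical exit totals

# Normality screening of the paired differences (Shapiro-Wilk and Kolmogorov-Smirnov)
diffs = entry_totals - exit_totals
print("Shapiro-Wilk:", shapiro(diffs))
standardized = (diffs - diffs.mean()) / diffs.std(ddof=1)
print("Kolmogorov-Smirnov vs. N(0,1):", kstest(standardized, "norm"))

# Wilcoxon signed-rank test, then an approximate effect size r = |Z| / sqrt(N)
result = wilcoxon(entry_totals, exit_totals)
z = norm.isf(result.pvalue / 2)       # recover |Z| from the two-sided p-value
r = z / np.sqrt(len(diffs))
print(f"p = {result.pvalue:.4f}, effect size r = {r:.2f}")
```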

Table 7. Overall Students’ Responses at Course Entry and Course Exit

In order to shed more light on the results, the difference between the learners’ expectations and their perceptions with regard to the 13 categories, as operationalized by Student Questionnaires A and B, was measured (see Table 8).

Table 8. Students’ Responses at the Entry and Exit Based on the Course Categories

According to Table 8, the students’ responses decreased from course entry to course exit across the given categories. Wilcoxon signed-rank tests revealed that this decrease was statistically significant for the categories.

4.5. Students’ and Teachers’ Perceptions at Course Exit

The assumption of normality was again investigated using Kolmogorov-Smirnov and Shapiro-Wilk tests, which examine whether the data set “meets the assumptions of having a normal distribution” (Loewen & Plonsky, Citation2016, p. 95). The results indicated that the distributions deviated significantly from normality, and thus the assumption of normality was not met. Since the normality assumption was violated, the researchers examined the second research question with a non-parametric test, that is, the Mann-Whitney U test. The results, as shown in Table 9, indicated that although the median score rose from 75 for the students to 89 for the teachers, this increase was not statistically significant. This means that there is no significant difference between the learners’ and the teachers’ perceptions of the course content and outcome.

Table 9. The Mann-Whitney U Test Results for the Learners’ Expectations and Teachers’ Perceptions at Course Exit

4.6. The Interview Analysis

The data collected through the interviews were analyzed using the thematic analysis method devised by Braun and Clarke (Citation2006). In the interviews, each teacher was asked to explain why two scores differed. The two scores involved were:

1. The score each teacher gave to each item, which indicates what the teacher expects to be the course outcome (Teacher Questionnaire)

2. The score the class gave to each item, which indicates what the students of that teacher thought to be the course outcome (Student Questionnaire B)

Table 10 shows how each of the seven teachers scored each item on the Teacher Questionnaire alongside the average score their students gave to the corresponding item on Student Questionnaire B.

Table 10. Teachers’ Score and Students’ Average Score

What follows is a detailed item-by-item account of the differences between the students’ and teachers’ perceptions at course exit (Student Questionnaire B and Teacher Questionnaire).

Item 1: Students learned specialist vocabulary for their university subject.

Most teachers believed the difference between the scores came from their students’ inability to distinguish between different types of vocabulary, more specifically between general vocabulary and topic-specific or major-specific vocabulary. The vocabulary they learned may have helped them at university, but this was not the intention of the course instructors. One of the teachers who scored this item 1 (i.e. completely disagree) remarked:

My class gave this item an average score of 3.62, which means most of my students believed I was also teaching them vocabulary related to their university studies. That, of course, is by no means the purpose of my class. I did teach them academic vocabulary, but I never even mentioned whether this may be useful for their university subject. I believe since some academic vocabulary items are quite commonly used in a variety of university majors, engineering majors in particular, my students automatically assumed I was teaching them vocabulary for their university major, having their field of study in mind.

Item 2: Students learned general vocabulary.

The teachers explained that for their students to have a wide enough range of lexical knowledge, they combined both general and IELTS-related vocabulary. They remarked the difference between the scores may have stemmed from the fact that their students had failed to regard GE (general English) vocabulary as relevant to IELTS. One teacher clarified that “students’ perception of IELTS-related language is normally flawed as they believe it is completely different from GE language.”

Item 3: Students learned about the kinds of writing tasks students do at university.

The difference between the teachers’ perceptions of the course and what the students considered the outcome regarding learning the kinds of writing tasks required at university may have come from the students’ unclear understanding of terms such as “academic”. Since academic vocabulary is also taught in class, the word “academic” may have been misleading for some students. The teachers believed their students may have expected the vocabulary taught to them to be also of use at university, for their specific major.

Item 4: Students learned about differences between university education in their country and in foreign countries.

One teacher found the difference in scores surprising, as she had not talked about the matter in this item in class, yet her students gave a higher score to this item. She reasoned it might have been the casual talks they had at break. She further commented that students may learn other things, even from casual conversations, not just classroom instruction. She then added:

I am actually taken aback by my students’ score. I can only think of two reasons. One, they were being nice and trying to give a high score to respect their teacher. Two, the short chats and casual conversations we had at break may have hinted at the matter.

Item 5: Students learned ways of improving their English Language test scores.

One teacher had misunderstood the question and scored it low. Another teacher, whose students scored this item lower than she did, reasoned they may have found the instruction inadequate.

Item 6: Students learned words and phrases for describing graphs and diagrams.

Such language is essential for Writing Task 1 in IELTS Academic. In the case of one teacher whose students scored far lower, she reasoned they may have found the instruction on this matter inadequate. She then added:

I believe this particular class was particularly interested in being given handouts. I normally present and teach such vocabulary in PowerPoint slides, accompanied by related graphs and diagrams. So, I did teach them sufficient vocabulary to be able to describe such visual data. However, I think the large number of words and phrases in those slides may not have been clear since they were not printed out and easy to count. This may have given them the wrong impression.

Item 7: Students learned how to use evidence to support their written arguments.

In the case of two teachers whose students scored lower, they reasoned the students may have found the instruction on this matter inadequate.

Item 8: Students learned how to organize an essay to help the reader to understand.

One teacher misunderstood the question and scored lower than the students. The others found their students’ lower scores surprising, which made them question whether the students had understood the item correctly.

Item 9: Students learned how to communicate their ideas effectively in writing.

In the case of the students’ lower scores, the teacher reasoned they may have found the instruction inadequate. In the case of the teacher’s lower score, she reasoned that she was not satisfied with this aspect of her students’ written assignments. She said she “could not find satisfactory results in terms of clear written communication despite her thorough and comprehensive instruction”. This is indicative of a discrepancy between teaching and learning.

Item 10: Students learned grammar.

Most of the teachers remarked they taught grammar more as an integrated part of the four language skills, while their students may have expected more detailed, separate grammar instruction. This points to differences in belief regarding instruction.

Item 11: Students learned how to write university essays and reports.

Two of the teachers said they did not include this item in their classes, while their students’ scores show otherwise. They reasoned this may have to do with the nature of the IELTS Writing questions, which could be similar to the essays and reports for some university majors.

Item 12: Students learned how to find information from books to use in writing essays.

The teachers reasoned the students may have found the instruction on this matter inadequate.

Item 13: Students learned how to use quotations and references in academic writing.

One teacher taught using quotations and references, but the students may have found this inadequate. Four teachers, however, did not address the matter in this item at all since it is not part of IELTS Writing and found their students’ higher scores surprising.

Item 14: Students learned how to edit and redraft their written work.

One teacher found her students’ much lower score quite surprising. She reasoned it may have been because of their different expectations of editing their writing, which can indicate potential delivery issues. She added:

Students normally perceive editing and proof-reading as a very long process. What I taught in class, however, was modified so that it fits the needs and time limit of the IELTS Writing test and, therefore, it was not as extensive as editing and redrafting a thesis paper, for instance.

Item 15: Students learned how to use ideas from textbooks or academic journals in their writing.

One teacher did not work on using ideas from textbooks or academic journals in writing much, and her students’ scores prove it. Another, though, related her students’ lower score to the fact that the students may have found the instruction inadequate.

Item 16: Students learned how to write long essays or reports of 1,000 words or more.

None of the teachers taught how to write long essays or reports of 1,000 words or more. However, they did work on essay structure and paragraphing and that may explain the scores.

Item 17: Students learned how to organize their time for studying.

Six teachers emphasized they constantly talked about time management and time organization for studying. One of them emphatically stated “Time is IELTS test takers’ worst enemy and it is a fundamental part of our instruction to teach them time management techniques, especially when writing an essay in Writing Task 2.” The students may not have found those ways and tips useful, or they did not utilize them at all to assess their usefulness.

Item 18: I corrected the students’ grammar mistakes in their written work.

According to Table 10, no significant difference was found between the scores.

Item 19: The activities we did in class were similar to the ones on the IELTS test.

The teacher whose students’ scores were much lower than hers reasoned the difference between the scores may be because she normally worked on reading passages other than the ones in the coursebook, usually printed out as handouts. This may have been the reason why the students scored this item so low. She further explained that:

I tend to use a variety of reading passages from different IELTS coursebooks, not just the coursebook we have. I choose them based on the topic I plan to work on. That is exactly what I did in this class as well. If the article was long, I would normally send my students a PDF file of it prior to the class for them to pre-study. If it was a short article, I would print it out and hand out copies in class. Either way, we definitely worked on IELTS-related reading passages. I think since they were cropped out of textbooks and looked like plain paragraphs with questions, my students just assumed they were not related to IELTS.

Item 20: Students learned quick and efficient ways of reading books in English.

The teachers claimed most homework assignments included parts for which the learners had to look for related language and ideas for the assigned topic in IELTS Reading passages and authentic materials. One teacher expected her students to have learned quick and efficient ways of reading books in English. She reasoned her students may have found this instruction inadequate.

Item 21: Students learned how to write successful test essays.

According to Table 10, no significant difference was found between the scores.

Item 22: Students read books and articles about their specialist subject area.

One teacher reasoned her students’ higher score may be due to the similarity of the IELTS-related texts to their university majors.

Item 23: Students learned how to write in a formal, academic style.

The teachers believed their students’ lower scores were due to their dissatisfaction with their own writing skill and with the quality of their IELTS writing. One of the teachers reasoned that:

Iranian test takers are not usually skilled at formal writing, which presumably stems from lack of proper writing instruction at school. So, when they are required to write in a different language than their L1, this incompetence becomes more problematic. This has clearly shown itself in the scores they have given this item as I personally believe our syllabus and instruction have sufficiently covered everything necessary.

Item 24: Students took practice tests in class.

According to Table 10, no significant difference was found between the scores. The themes emerging from the instructors’ interviews are as follows:

a) Students’ inability to distinguish between different types of vocabulary

b) Inadequate instruction

c) Teachers and students misunderstanding the question

d) Discrepancy between teaching and learning

e) Dissatisfaction with teaching

f) Dissatisfaction with the students’ performance

g) Different expectations

5. DISCUSSION

With internationally recognized language tests gaining in popularity and playing a significant role in admission to educational institutions and professions, test-oriented courses try to assist their learners in gearing up for these tests; such is the case for IELTS preparation courses. According to Green (Citation2007a) and Mickan and Motteram (Citation2008), students in IELTS preparation courses are inclined to focus on tasks and materials related to the test. Also, Mickan and Motteram (Citation2008, p. 8) found that “the dominant activities were test practice, skills-focused activities, and explanations of the format and content of the IELTS modules and test-taking procedures”. Moreover, as Green (Citation2007a) points out, learners’ characteristics and values, such as what they know of and understand about the test, the resources they have to meet the test demands and their acceptance of these demands, and how they perceive the importance and difficulty of the test, may all mediate the impact of testing on teaching practices and learning behaviors.

According to research (Qi, Citation2005; Wall, Citation2005; Watanabe, Citation1996), teachers are indeed inclined to keep to material covered in the test when selecting the content they are to teach and to concentrate on tasks that resemble those used in the test; the methods they adopt, however, appear to be less directly affected. As for students, research on them is not as extensive as on teachers, but the results (Goşa, Citation2004; Green, Citation2007a; Mickan & Motteram, Citation2009; Xie & Andrews, Citation2012) indicate that learners try to decide for themselves what the best way to prepare for the test is.

Moreover, it is of particular interest to language test developers that tests influence education systems and also the settings where they are conducted; specifically, those tests that have far-reaching implications (Green, Citation2006b, Citation2007b). It is widely accepted that assessment and examinations have an impact on the attitudes, behavior, and motivation of teachers, learners, and parents (Bachman, Citation2000, Citation2010; Pearson, Citation1988). Also, according to Alderson and Wall (Citation1993, p. 41), “It is common to claim the existence of washback (the impact of a test on teaching) and to declare that tests can be powerful determiners, both positively and negatively, of what happens in classrooms”.

The findings of this research could have important implications for IELTS writing instructors. Most of the expectations the students had of the class prior to the course start stayed consistent with what they reported to have learned at the end of the course. Their course exit perceptions also matched with those of their teachers to a great extent. This shows no drastic divergence of the two parties’ perceptions of the class and that the students attended such classes with a somewhat clear idea of the course content and outcome. However, what was further clarified in the interviews showed that how these expectations were fulfilled in practice could still benefit from some modification.

As the students’ inability to tell different kinds of vocabulary apart was a point raised in the interviews, it appears that clear explanation could be helpful to assure students that academic and IELTS-related lexical items are being covered in class (see Nushi & Jenabzadeh, Citation2016 for a distinction between GE and academic vocabulary). Also, since inadequate instruction on certain matters seemed to be another reason why the students scored some items lower than their teachers, it could be of use for teachers to ascertain, on a regular basis, whether their students are receiving sufficient instruction on parts they need more help with. This, time permitting, could be incorporated as (pop) quizzes in class or in the form of oral reports when the class is on a tight schedule.

Regarding the marking and assessing of students’ written assignments, it would be beneficial for both teachers and students to work out a marking system with codes for writing mistakes. This could potentially save time when grading an essay and, in turn, allow for more feedback in the form of comments, suggestions, alternative phrases or sentences, and even essay organization tips. An additional point which could possibly help teachers with accommodating their students’ needs and expectations is a second handing in of written assignments with the feedback and corrections applied and incorporated. It could serve as a checkpoint, specifically for instructors, to see to what extent their students work on and include the received feedback and also to ensure the instruction is adequate.

Since receiving detailed, thorough feedback on writing assignments seems to be of high importance for students, and understandably so, ample time should be allotted to this matter when planning the course. This point is mostly directed at institutes holding IELTS preparation courses. Most of the instructors interviewed in this study struggled with not having enough time to allocate to even more precise assessment of their students’ writing assignments. This deserves more attention, as instructors are normally willing to invest time in the detailed assessment of writing assignments; however, in most cases this would require taking work home, and from what was understood from the interviews, instructors would most probably allot time outside of work for this if compensated financially.

Lack of communication about what was being taught was another point of disagreement between teachers and their students. Although course syllabi were available to the participants of this study, it appears both teachers and students could benefit from clearer communication regarding the purpose of each lesson or the aspect of the writing skill being focused on.

Giving timed practice tests of IELTS writing seems to be quite beneficial in not only preparing students further under test conditions, at least in terms of time limitation, but also in fulfilling their expectations from an IELTS writing prep class. They could be used as a tool to assess to what extent the instructed content has been learned and also to consolidate the already learned material.

As with many attitudinal studies, there are some limitations to be taken into account when considering the findings of this research. For instance, due to institutional limitations, the progress the students made in these preparation classes could not be evaluated. Consequently, a pretest and a posttest could be included in further studies to assess students’ IELTS writing before and after the preparation course in order to see whether their forward or backward movement has any effect on their perceptions at course exit. Also, this research could be extended to include other language skills, specifically speaking, to explore the washback effect of IELTS preparation courses on those skills too. The findings in the qualitative phase, namely the interviews, were derived from self-report data, as were the teachers’ responses to the Teacher Questionnaire. The interviewees may not have accurately recalled or reported what they had actually taught and worked on in class, which is particularly important since these responses were compared with their students’. Although the interview data did appear to support those from the questionnaires, which increases the reliability of the findings, classroom observations could be added in future studies to ensure the teachers’ comments match what they do in practice. Another component could also be introduced to the qualitative phase in which the students of preparation courses are interviewed for further explanation and clarification of their responses to the questionnaire items; such interviews could also help resolve possible misunderstandings on the part of the students. Finally, this research was carried out at one institute in Tehran, Iran, with a rather small group of teachers, and the generalizability of the findings is potentially limited by this constraint. Future studies are encouraged to include a larger group of teachers, possibly from different IELTS institutes.

Additional information

Funding

The authors received no direct funding for this research.

Notes

References

Appendix

Appendix A

Student Questionnaire A

Appendix B

Student Questionnaire B

Appendix C

Teacher Questionnaire