Teacher Education & Development

University teachers’ underlying assumptions about assessment in English as a foreign language context in Ethiopia

Article: 2335748 | Received 13 Aug 2023, Accepted 04 Mar 2024, Published online: 05 Apr 2024

Abstract

This study investigated EFL teachers’ underlying assumptions about assessment and the extent to which these assumptions are congruent with current perspectives in assessment. Informed by interpretivist philosophical underpinnings, the study adopted a qualitative research approach. Nine teachers from three universities located in western Ethiopia participated in semi-structured interviews, and the data were analyzed using thematic and content analysis approaches. Overall, the findings revealed complexity and diversity in teachers’ assumptions, which centered on four emergent themes: knowledge/content, valid approach to assessing knowledge, values and judgements, and power relations. Most of the teachers’ assumptions contradict current perspectives and the expectations of recent reforms on classroom assessment. Even the few modernist assumptions teachers held did not always translate into practice, suggesting the influence of contextual and institutional factors that should be studied further. The primary implication of the study is the need to help EFL teachers develop contemporary assumptions about foreign language assessment. The findings can also be considered a necessary step towards developing a new measure of assumptions about assessment, one that can be used to assess the underlying assumptions embedded in EFL teachers’ views and practices of assessment.

1. Introduction

Quality of education is directly related to quality of assessment. Consequently, there has been growing interest in improving the quality of education and, specifically, in meeting current educational goals, such as equipping English language learners with lifelong learning skills, by improving the quality of EL assessment (Fulcher, Citation2014; Latif & Wasim, Citation2022; Nasab, Citation2015). It therefore became desirable to alter the assessment system, as achieving such goals is hardly possible through the traditional approach. In line with this, Irons (Citation2008) argues that changing the assessment system is the quickest way to bring about desired changes in education. Accordingly, theoretical shifts have been made from traditional assessments, which tend to be psychometric, standardized, objective, quantitative and dogmatic in approach, to alternative assessment paradigms, which tend to be social, contextual, qualitative, subjective, and less systematic (Buhagiar, Citation2007; Gipps, Citation1999; McNamara, Citation2001; Nasab, Citation2015; Serafini, Citation2002).

Alternative assessment, which is understood here as any assessment that is different from the traditional paradigm and as a process firmly integrated with instruction and positioned for promoting learning (Tan, Citation2012), is based on a totally different set of philosophical beliefs and theoretical assumptions (Bintz, 1991, cited in Anderson, Citation1998). It arose chiefly in response to the current meaning-making process view of learning which, in contrast to the prior understanding that knowledge is transmitted, depends heavily on learners’ construction of their own experience (Al-Bahlani & Ecke, Citation2023; Buhagiar, Citation2007). As such, it shares theoretical perspectives that are fundamental in constructivism (Buhagiar, Citation2007). Consistent with the constructivist views of multiple realities, knowledge construction, subjectivity and student agency, alternativist or modernist assessment came with new perspectives that define assessment as a social, contextual and interpretive activity.

In the new paradigm, assessments are contingent upon the social context in which they are designed and used, and emphasize questions that call for multiple justifiable perspectives and the application of knowledge rather than the reproduction of facts (Janisch et al., Citation2007; McNamara, Citation2001). Students, along with the teacher, decide on what and how to assess, and seek and interpret evidence to decide where they are in their learning, where they need to go, and how best to get there (Serafini, Citation2002; Wellberg & Evans, Citation2022). The paradigm thus considers students agents of their own learning and empowers them to generate knowledge (Nasab, Citation2015). Janisch et al. (Citation2007, p. 221) also asserted that “the theoretical framework for using alternative assessment in the classroom includes considering learners as constructors of knowledge … and empowering students”. This is the basic departure from traditional assessment, which advances the view of a common core of knowledge that should be transmitted directly from the teacher, as the authority, to students. Moreover, assessment is seen as a value-laden process, and since knowledge is seen as individualized, because people are different and have different perspectives, the individual learner’s subjectivity is the foundation of assessment (Serafini, Citation2001; Siarova et al., Citation2017).

Subsequently, several important developments were introduced within the alternative assessment paradigm. Emphasis has shifted to the formative aspects of assessment rather than the summative, to constructive feedback rather than just marks and grades, to using various methods rather than depending on a single method, and to student-led assessments rather than relying on assessment by teachers alone (Buhagiar, Citation2007; Latif & Wasim, Citation2022; Nasab, Citation2015). Several writers and international agencies have also been promoting these assessment reforms to improve educational quality (Al-Bahlani & Ecke, Citation2023; Price et al., Citation2014; World Bank, Citation2011). This is supported by empirical evidence suggesting weak to strong effects on student achievement (Black & Wiliam, Citation1998; Hattie & Timperley, Citation2007; Wafubwa & Csíkos, Citation2022).

In this context, EFL assessment, the process of collecting information to make decisions related to aspects of a learner’s EL ability (Brown & Abeywickrama, Citation2010), can be a valuable tool, providing teachers with reliable information about EL teaching and offering learners a learning opportunity by giving them the chance to learn from their mistakes (Brown & Abeywickrama, Citation2010; Hill & McNamara, Citation2012). Alderson (Citation2005) argues that when EL instruction is linked to assessment that gives students insights into their own thinking and growth, they gain a new perspective on their potential to learn and use the language. Particularly in contexts where learners study English as a foreign language, as in Ethiopia, where use of the language is restricted to schools, EL assessment has a decisive role in helping learners practice the language by engaging them in different language activities. Generally, the assessment reforms hold considerable promise to equip EL learners with the knowledge and skills needed in current academic and workplace settings.

Wellberg and Evans (Citation2022, p. 3) stress that “the success of any educational policy meant to improve instruction is heavily dependent upon the assumptions that teachers have”. This quote delineates the key role of teachers and their assumptions in policy implementation. Clearly, effective implementation of new assessment developments relies heavily on teachers, as they are important agents of change in reform efforts, given their key role as frontline curriculum implementers charged with preparing learners for lifelong learning. However, teachers hold their own set of firmly held assumptions on which their assessment decisions and actions are usually based, making these assumptions critical in assessment policy implementation.

The terms assumptions and beliefs are usually used interchangeably. The premise of ‘accepting something as true without proof’ characterizes definitions underlying both. However, as some definitions indicate, beliefs are more firmly held than assumptions (Atkin, Citation2017; Borg, Citation2006), pointing to differences in degree of strength and certainty and implying that assumptions are more open to change. Assumptions are therefore defined here as propositions or implicit/explicit statements teachers make about assessment that are taken for granted as a basis of argument or action and are susceptible to change through examination (Atkin, Citation2017; Brookfield, Citation2013; Delandshere, Citation2001). Assumptions are conceptualized not as what teachers say they believe about assessment but as what their justifications for their assessment behavior (what they say and do) reveal about what they believe. In this study, we emphasized teachers’ assumptions about the form and nature of classroom assessment content (how knowledge is defined), how it is assessed and how learning evidence is interpreted (in an objective, standardized manner or in a subjective one), and the nature of the relationships between the teachers and students involved in the process. This also draws on Delandshere’s (Citation2001) and Mitana’s (Citation2018) frameworks of how current assessment addresses epistemological and sociological questions.

In short, teachers’ assumptions, which constitute the basis of the judgements they make as part of the assessment process, are important factors in implementing quality assessments (Delandshere, Citation2001; Stabile, Citation2014). Buhagiar (Citation2007) stresses that unless teachers’ assumptions change, classroom assessment practices cannot change in line with the demands of the new assessment paradigm. Assumptions incompatible with new assessment developments can even become a major stumbling block to assessment change (Buhagiar, Citation2007; Ergün, Citation2013). That is, teachers’ assumptions can either encourage or deter them from engaging in real transformation toward new assessment developments. Thus, a change in assessment is realized if teachers’ assumptions are compatible with those of the model of change; otherwise, it is unlikely to occur (Anderson, Citation1998; Ergün, Citation2013; Fulmer et al., Citation2015). Therefore, for the effective implementation of new assessment developments, teachers’ assumptions are expected to be in line with current ideas of assessment in the new paradigm, shifting away from the traditional one.

Vandeyar and Killen (Citation2007) argued that it is naïve to assume that teachers’ assessment conceptions will change simply because policies and school contexts have changed. By the same principle, assuming that EFL teachers’ assessment assumptions have changed and kept pace with recent developments in assessment and learning would be unsound, as shifts in assessment theories and policies cannot guarantee changes in teachers’ assumptions. Hence, unpacking EFL teachers’ existing assessment assumptions and identifying their compatibility with current and improved trends in assessment becomes a necessity, which is the focus of the present study.

As teachers’ assessment assumptions form the core of their assessment behavior and changes in their behavior often follow changes in their assumptions or beliefs (Latif & Wasim, Citation2022; Stabile, Citation2014), unpacking EFL teachers’ assumptions about assessment as they inform practice has significant implications, particularly for professional development. Once the assumptions underlying current practices are made conscious, those that interfere with reform-oriented assessment practices can be identified and attended to when developing professional learning opportunities. Clarifying teachers’ existing assumptions thus provides opportunities to unveil those interfering with innovative practices and submit them to examination (critical reflection), through which shifts in assumptions can be realized, resulting in a positive culture of classroom assessment.

On the other hand, failure to do this could have unwanted consequences in the classroom, given the power of assumptions in determining what is assessed, how it is assessed, how learners are perceived and treated, and teachers’ positionality. Delandshere (Citation2001, p. 113) also argues that without clarity on the assumptions behind assessment, there will be no theoretical and philosophical debate, which in turn may “result in practices that tend to reproduce themselves in a vacuum, resist change, and are disconnected from relevant issues of knowledge, power and social organization in general”. So, given the role of quality assessments in enhancing students’ EL and the role of assumptions in influencing the success of such assessment implementation, if the desired change in EL assessment, and then in its teaching and learning, is to be reached, EFL teachers’ assessment assumptions should be brought out of the closet and examined.

Accordingly, prior studies have explored teachers’ assessment assumptions (e.g. Delandshere, Citation2001; Mcpherron, Citation2005; Klenowski & Willis, Citation2011; Peterson & Bainbridge, Citation2002; Mitana, Citation2018). These studies generally indicate a glaring disparity between teachers’ assessment assumptions and contemporary perspectives on assessment and learning. For example, while in Delandshere (Citation2001) many assessment contents were found to rest on assumptions of universal and essential knowledge, Mitana (Citation2018) reported participants’ tendency towards essentialist rather than existentialist assessment assumptions in the elementary school context. These findings are in line with the general understanding in the literature that current language assessment practices are substantially guided by assumptions rooted in behaviorist and psychometric theories (Fulcher, Citation2014; Serafini, Citation2002; Wellberg & Evans, Citation2022).

However, little is known about how assumptions are linked to assessment in the English as a foreign language context at higher education institutions, as most studies in this area have targeted teachers of general education and/or lower educational levels. Without a clearer picture of how teachers’ current assumptions do or do not align with assessment reforms, it would be difficult to identify what is needed, at either a policy or a university level, to help EFL teachers develop contemporary assumptions in foreign language assessment. Overall, pointing to a general dearth of studies in the area, let alone studies specific to particular fields, researchers have issued a growing worldwide call for more in-depth studies (Dawson et al., Citation2019; Delandshere, Citation2001; Holzweiss, Bustamante & Fuller, Citation2016; Klenowski & Willis, Citation2011; Mitana, Citation2018).

The situation is even worse in Ethiopia, where, to the best of the researchers’ knowledge, such studies are lacking despite the growing recognition that teacher assumptions greatly influence the success of assessment implementation. Previous local studies, at both higher and lower educational levels (e.g. Nibret, Citation2013; Mekonnen, Citation2014; Zelalem & Emily, Citation2018), have not examined EFL teachers’ assessment assumptions. Hence, the assumptions guiding current English language assessment practices in Ethiopia are not yet clear, and our knowledge of this area is very limited. Therefore, this study explored EFL teachers’ underlying assumptions about assessment and assessed the extent to which these assumptions were in harmony with current and improved trends in assessment at selected universities in Ethiopia.

2. Methods

2.1. Paradigm and methodological approach

This study is based on a qualitative, interpretivist paradigm, since it attempts to interpret phenomena by understanding the meanings individuals attach to their experiences (Cohen et al., Citation2007; Lincoln & Guba, Citation1985). That is, the paradigm best suits this study because the researchers seek to “understand, explain, and demystify” teachers’ assumptions about assessment through the eyes of different participants (Cohen et al., Citation2007, p. 19). The paradigm was also chosen because we set out with the belief that multiple realities exist and that access to them is possible only through social constructions mediated by language, consciousness and shared meanings (Creswell, Citation2014). We believe that each teacher holds their own understanding of assessment based on their own learning experience, and thus would offer multiple, equally valid descriptions and explanations of assessment. The interpretivist view is that there are multiple interpretations of any event, all of which contribute to understanding a phenomenon (Denzin & Lincoln, Citation2005). In addition, knowledge about assessment assumptions is produced from findings created in the interaction between the researchers and the participants (inter-subjective knowledge construction) without manipulating teachers’ actual knowledge of the subject. In short, the interpretive concepts of understanding, implication and engagement (Lincoln & Guba, Citation1985) assisted us in grasping teachers’ assumptions.

As mentioned, we employed a qualitative research approach within the interpretivist paradigm (Denzin & Lincoln, Citation2005). To address the lack of empirical studies in this area, this study aimed to explore and gather in-depth knowledge of teachers’ assessment assumptions in the Ethiopian higher education EFL context. The methodology adopted was thus basically a naturalistic enquiry approach, as it allows exploring university teachers’ assumptions within their real-life context, enabling a deeper understanding (Denzin & Lincoln, Citation2005) of their complexity (Miles & Huberman, Citation1994). Researchers also widely consider this approach appropriate for grasping an in-depth understanding of teachers’ cognitions (Borg, Citation2006; Holzweiss et al., Citation2016). Generally, the purpose of using a qualitative research approach in this study was to elicit teachers’ reasoning behind their assessment decisions and practices in order to develop a detailed understanding and comprehensive description of the assumptions underlying assessment in an English language teaching course.

2.2. Context of the study

The study was conducted at three public universities located in western Ethiopia: Jimma, Metu and Wallagga. The use of English as a medium of instruction in Ethiopian universities has necessitated the development of students’ English language proficiency, which has become an essential part of university education. Accordingly, English language improvement courses are offered to students mainly to help them succeed in other university courses as well as to communicate effectively in professional contexts. This study was conducted within the context of this EL improvement course, conventionally known as ‘Communicative English Language Skills’, a three-credit-hour course taken by all undergraduate first-year students as a key compulsory course across universities in the country.

Almost all the teachers teaching this course are locals (Ethiopians) who were themselves taught the language as a foreign language by local teachers. The teachers hold either an MA or a PhD degree; those with only a first degree are rarely found. Any teacher with a specialization in EL, such as TEFL, ELT, literature, or linguistics, can be assigned to the course. The purpose of assessment in the course is both to evaluate achievement and to enhance instruction. In addition to formative assessment, mid and final exams, conducted in an organized way following the traditional testing method, are employed. Teachers’ individual freedom to decide on assessment in the course is thus limited to the former approach, since the mid and final exams, which are uniform for all students taking the course, are centrally and collectively prepared. Grades are based on results from both summative assessments, covering 60–80%, and formative assessments, covering 20–40%.

2.3. Participants

The participants in this study were EFL teachers teaching communicative English skills courses in the academic year 2022 GC at the three universities. Nine teachers (three from each university) participated in the study. The main goal of sampling in qualitative research is to find individuals who can provide rich and varied insights into a particular phenomenon so as to maximize what we can learn, and this is best achieved through purposive sampling (Denzin & Lincoln, Citation2005; Dornyei, Citation2007). Accordingly, we adopted purposive sampling to select a predetermined number of participants using predefined selection criteria, since we intended to gain deep and rich insights into teachers’ assumptions by focusing on those who could provide relevant and meaningful insights into the research questions rather than to make empirical generalizations (Creswell, Citation2014; Patton, Citation2002). In particular, using a maximum variation sampling technique, we targeted individuals with a diverse range of backgrounds, all relevant to the phenomenon under study, to ensure the diversity of the different groups. The technique was also chosen because we wanted to gain greater insight into teachers’ assumptions by looking at them from all angles and to identify common themes evident across the sample (Dornyei, Citation2007).

Accordingly, with the help of department heads and guided by our predefined criteria, we first identified those who could provide rich insights into the research questions. We then contacted each of the nine EFL teachers, once or twice, and all gave us permission to be interviewed. The final selection took into account mixed sex groups, a range of teaching experience, academic qualifications, varied specialization areas, and so on. As there were few female teachers in the universities, the majority of participants were male (N = 7), with two females (N = 2); teaching experience ranged from 4 to 19 years. TEFL graduates comprised the majority (N = 7), whereas two (N = 2) were non-TEFL graduates. Most were master’s degree holders (N = 6), while three (N = 3) held PhDs. None had received any further training/workshops on assessment (Table 1).

Table 1. Participants’ descriptive characteristics/demographics

2.4. Data collection and analysis

The data presented in the current study were collected as part of a larger cross-sectional study that investigated the practices and assumptions of assessment in the EFL context. The present study specifically focused on exploring the underlying assumptions embedded in EFL teachers’ assessment practices. For this purpose, in-depth semi-structured interviews were considered essential and were adopted; they helped us explore teachers’ assumptions at a deeper level and allowed the flexibility to probe teachers’ complex, embedded and implicit assumptions about assessment (Denscombe, Citation2007; Dornyei, Citation2007). The study’s focus on exploring relatively new issues (teachers’ assumptions about assessment in the EFL context) in depth also necessitated this technique (Denzin & Lincoln, Citation2005), since it offers a more complete picture of the assumptions underlying assessment practices in the target course. Open-ended questions are crucial in this regard. We therefore used open-ended questions, which enabled us to collect full and rich data by allowing the interviewer to develop rapport with the participants and the interviewees to freely express their reasoning behind assessment practices and decisions (Denscombe, Citation2007).

According to Brookfield (Citation1992, Citation2013) and Gabbitas (Citation2009), assumptions underlying practices can be discovered by asking questions such as ‘Why did you do this?’ and ‘Why do you think that?’. Accordingly, the schedule guiding the interviews was organized into two sections. The first part focused on typical assessment practices selected and used by the teachers. By focusing on the activities the teachers utilized in relation to methods, feedback, purpose, and so on, they were asked why they practiced the way they did, to elicit descriptions of the thinking that informed their assessment practices (e.g. What methods do you regularly use to assess your students’ EL communication skills? Why do you follow these methods? Please justify.). This line of questioning was intended to elicit assumptions grounded in practice. The second part of the interview focused on general assessment issues. Teachers’ views of students’ engagement and roles, factors that determine what to assess and what not to assess, good and bad student work and assessment experiences, and universal and sociocultural reflections of assessment were elicited through the general questions (e.g. How do you describe the teacher’s and students’ roles in relation to assessment? Should students take part in the assessment process? Why (not)?).

The interviews were conducted face-to-face, an approach that allows the interviewer to better understand the responses of the interviewees through both verbal and non-verbal communication (Dornyei, Citation2007; Patton, Citation2002). All the interview sessions were conducted in the participants’ offices in English, as these EL teachers could communicate their views in the language, and to avoid the reliability and validity problems that come with translation. The interviews lasted from 43 to 76 minutes, and all were audio recorded with the permission of the interviewees.

The analysis began by carefully transcribing the interview data. First, we applied a thematic analysis approach, which involved data reduction in the form of codes, categories and themes (Braun and Clarke, Citation2006). This suited the study’s purpose, which was to comprehend the patterned meanings of the phenomenon under study through data interpretations based on an in-depth comprehension of the participants’ statements (Saldana, Citation2009). All instances in which the teachers reasoned about their assessment practices were identified and analyzed. The coding process began by carefully reading the data word by word and assigning codes to segments of text, which resulted in many codes (open coding). This was followed by grouping and rearranging codes on similar topics into categories based on the connections between the codes (axial coding). Finally, we searched for connections and meanings between categories to identify emerging themes. Although the themes emerged from the data set, our knowledge of the existing literature, including studies in other fields (e.g. Anderson, Citation1998; Atkin, Citation2017), helped us organize them.

As the coding process is by nature cyclic and iterative rather than a one-off exercise, we continuously reviewed, revised and refined codes, categories and themes through constant back-and-forth movement across the entire data set (Braun and Clarke, Citation2006; Saldana, Citation2009). Additionally, to ensure the consistency of the coding process, each instance of reasoning was coded separately by two of the researchers. To determine inter-rater reliability, we divided the total number of agreements (77) by the sum of agreements (77) and disagreements (11), yielding an agreement rate of approximately 87.5%. The disagreements were reconciled through discussion. For example, two different codes, relating to ‘accuracy’ and ‘dominating’, were assigned to the response ‘… because you don’t need to involve them [students in assessment] not to influence the results’ (Teacher 3); we later agreed to retain both codes.
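For readers who wish to reproduce the percent-agreement calculation described above, a minimal sketch in Python follows. The counts in the usage line are hypothetical, chosen only for illustration; they are not the study’s data.

```python
def percent_agreement(agreements: int, disagreements: int) -> float:
    """Simple percent agreement between two coders:
    agreements / (agreements + disagreements) * 100."""
    total = agreements + disagreements
    if total == 0:
        raise ValueError("no coded instances to compare")
    return 100 * agreements / total

# Hypothetical counts for illustration only
print(percent_agreement(45, 5))  # 90.0
```

Percent agreement is the simplest inter-rater index; it does not correct for chance agreement, for which a statistic such as Cohen’s kappa would be used instead.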

Content analysis was also used to determine the proportion of responses that fell within the themes and dimensions. In particular, using Anderson’s (Citation1998) framework of assessment assumptions, we coded each instance of the teachers’ reasoning into traditionalist and alternativist dimensions. This was also based on the way assumptions are generally described in the mainstream literature (e.g. Donaghue, Citation2003). In doing this, we focused on responses that we could confidently code as either traditionalist or alternativist. To determine the extent to which the teachers’ existing assumptions were in harmony with current perspectives on classroom assessment, all instances in which teachers reasoned about their assessment practices were first separated according to whether they were in line with traditionalist or alternativist views. Frequency counts were then determined for each dimension, and percentages were calculated to determine how far the teachers’ assumptions aligned with current, or alternativist, assessment perspectives. To obtain the percentage for each frequency, the total number of instances of reasoning associated with each dimension was divided by the total number of instances included and multiplied by one hundred. The same procedure was followed at the level of themes.
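The frequency-and-percentage procedure above can be sketched as a small calculation. The dimension labels follow Anderson’s (Citation1998) two dimensions, but the coded list itself is hypothetical, for illustration only:

```python
from collections import Counter

def dimension_percentages(coded_instances):
    """Tally coded instances of reasoning by dimension and
    convert each count to a percentage of all instances."""
    counts = Counter(coded_instances)
    total = sum(counts.values())
    return {dim: 100 * n / total for dim, n in counts.items()}

# Hypothetical coded data for illustration only
codes = ["traditionalist"] * 6 + ["alternativist"] * 4
print(dimension_percentages(codes))  # {'traditionalist': 60.0, 'alternativist': 40.0}
```

The same tally-and-divide step, applied within each theme rather than across the whole data set, yields the theme-level percentages.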

Several strategies were followed to ensure trustworthiness. For instance, we spent sufficient time (4 months) in the study setting and engaged in meaningful interactions with the teachers. This helped us build trust and rapport with them, which in turn enabled more accurate findings (Creswell, Citation2014). Member checks were also used to determine the accuracy of the findings by taking the final analysis, interpretation and conclusions back to the participants and asking whether the teachers felt they were accurate (Dornyei, Citation2007; Lincoln & Guba, Citation1985); no claim of inaccuracy was made. We also described the processes within the study in detail, which may help future researchers use the model in other contexts if deemed appropriate. To achieve confirmability, we did our best to provide accurate, complete and bias-free accounts of the participants’ views and feelings as they were revealed to the researchers and experienced by the participants. We also used an external auditor for an independent and objective assessment of the study (Lincoln & Guba, Citation1985) and received comments on many aspects, such as the accuracy of the analysis and interpretations (Table 2).

Table 2. Coding exemplars

3. Findings

3.1. Teachers’ assumptions

The teachers’ assumptions about assessment centered on four themes: knowledge/content, valid approach to assessing knowledge, values and judgements, and power relations. Most of the assumptions under each theme form dichotomous pairs, some truly bipolar and others not. Hence, although the assumptions in each pair are presented separately, some are not mutually exclusive and can be held simultaneously by the same teacher and acted upon in response to a specific context and situation.

3.1.1. Knowledge/content

3.1.1.1. Focus of assessment on knowledge construction process

The view that assessment should engage students in the process of constructing knowledge rather than repeating existing knowledge was widely reflected by teachers in relation to assessment purpose, method selection, views of good work, and task assignment. For example, teachers were critical of using selective item exams because “they foster memorization; just recall of facts” (Teacher 4) and preferred performance-based assessments because they “lead to greater learning experience” and provide the opportunity to see students’ “knowledge creation ability” (Teacher 2). The view was also reflected in the teachers’ perception that good work lies in students’ ability to reveal unseen perspectives and present them with justification rather than in their ability to duplicate established knowledge. For instance, Teacher 8 expected students to take a critical approach to paragraph writing because students should “not be allowed to copy and simply follow the way presented somewhere else; they have to produce unique piece of work … you have to force them to show something genuine, different views”. The implication is that students are expected to construct knowledge, which should be accounted for in the assessment process.

Teachers further believed that assessment should monitor students’ ability to develop their own understanding by transforming and reorganizing existing knowledge. Describing the main goal of the target course as learners’ academic success, Teacher 5 maintained, “what we do through assessment is ensuring that our learners can take over what they have learned to … based on understandings they developed”. Another teacher recounted that he engaged students in writing tasks describing a particular building on their campus after exposing them to a lesson because “they must be asked to shift their understanding to a new context. This requires them creativity.” (Teacher 7) These teachers recognized that learning is ensured when students can develop their own understanding based on established knowledge. Thus, their assessment focused on knowledge construction processes, suggesting they expect students to be creators of knowledge.

3.1.1.2. Assessing recall of essential and universal contents: knowing is reproducing

The teachers’ rationales behind their assessment decisions and practices also illustrated the assumptions that there are essential and fixed contents that students are expected to accumulate, and that assessment measures students’ correct reproduction and straightforward use of these contents. A response from Teacher 1, “I wanted them [students] to bring the essential points that I taught them during exams …” best illustrates this view. It was also reflected by Teacher 9, whose oral skills assessment usually emphasized functional language and tasks covered in the module or practiced in the classroom. Justifying the practice, he stated, “students have to follow those best expression and procedure examples if they are wanted to be speakers of the language like natives”.

Such an expectation for students to accumulate rather than produce knowledge was also reflected in the answers expected from students. The teachers’ reasoning suggests the existence of verified ready-made answers that students should adhere to while solving the given questions. For example, Teacher 6’s rationale for correcting students’ answers to class activities was to ensure that they have “the right answers that they need to use in the future perhaps for the mid or final exam”. That is, each of the teacher’s questions has one acknowledged correct answer that he equips students with, allowing no room for the emergence of varied perspectives. Furthermore, it was remarked that there are standardized approaches that students need to adopt for a given question, enabling them to arrive at a predefined clear answer. Generally, these teachers equated learning with the correct repetition of established knowledge. They also showed that the soundness of students’ answers extends only as far as they align with the contents (factual and procedural knowledge) as presented in the module, textbook, or lecture: an appeal to authority. This practice of assessment presupposes the existence of universal knowledge and the specified form in which it is demonstrated.

3.1.1.3. Integrated skills assessment

For many teachers, assessment of the English language that integrates different skills was a reasonable practice on the basis that such assessment gives students the opportunity to use the language. They placed greater emphasis on a more holistic approach to language acquisition and assessment, in light of the communicative function of language. For instance, the current communicative approach to language teaching tends to require assessment to focus on students’ ability to use language skills comprehensively. “… communicative competence requires such integration; …. It can’t be captured in separated tests of the skills ….” (Teacher 7) This assertion denotes the teacher’s reliance on integrated skills assessments to effectively capture students’ communicative competence.

Further, the teachers held that effective assessment should mimic real-life situations and thus require students to apply several abilities. Teacher 5’s assessment required students to deal with complex tasks that demanded listening to news, writing a summary, and presenting it in class. Her rationale was that “the task encompassed various skills in one shot because natural setting imposes bringing different skills together to use the language. That’s the way I feel right”. This view was further illustrated by Teacher 4 in his account of ineffective assessment practices:

… commonly we often teach and evaluate each skills distinctly. I don’t know where the practice emerged, but I see it malpractice that goes against the nature of the language itself that naturally works as a complete system. … must be in ways language is used in real life. It needs to work on it, to disconnect from practices that treat skills independently.

Overall, the teachers’ reasoning behind their assessment decisions and practices appears to reflect the assumption that integrated skills assessments, which account for a holistic observation of students’ use of the language, are effective, suggesting a holistic view of knowledge.

3.1.1.4. Teaching and assessing skills discretely

The reasoning of a few teachers also appeared to reflect the assumption that language is a collection of component skills that are taught and assessed independently. These teachers were also certain that students’ communicative competence can be captured in additive assessments of writing, grammar, reading, vocabulary, and other discrete points of the language, and that the development of each skill needs to be ensured before moving to the next.

For instance, independently assigning values to and separately assessing each language skill and sub-skill were common practices among teachers. Although many admitted that the practice hardly accounted for a holistic observation of students’ use of the language and held the university assessment system and students’ ability responsible, a few maintained positive perspectives. Teacher 1’s rationale was that “the skills are taught separately. … Similarly, that [testing the skills separately] should follow, allow me to accurately measure their capability of using English”.

The teachers also believed that they needed to assess each skill discretely, as this enabled them to check for efficient transfer of each skill taught, and as holistic assessment was unrealistic for them. “By mixing these skills in testing, you may lack focus; your understanding of students’ extent of achievement comes shallow; you can’t get deep knowledge of their level of learning on each skill.” (Teacher 6) Another teacher also assessed the skills distinctly by allotting 10 points to each: “I have to do this way to ensure students understanding of one skill before jumping to the other.” (Teacher 3) These reflections further suggest the implicit assumption that complex knowledge should be broken into its components to effectively transmit it and assess its efficient transfer.

3.1.2. Valid approach to assessing knowledge

3.1.2.1. Consistency

Many teachers felt that all students taking the communicative English skills course should sit for and pass the same assessment in terms of content, method, scoring criteria and timing. By uniformly assessing students, they could avoid favoritism and ensure equality, which remained central to the assessment practices of many EFL teachers. They saw this as an essential feature of assessment in the course that enables more reliable data to be gained. Underlying such practices and awareness is the assumption that assessment measures should be consistent across student populations and contexts. For example, Teacher 3 stated,

… because, on same course I think assessment has to be administered uniformly; test items, tasks, types, time of the tests has to be the same for them to secure reliable results. Could it be reliable if, for example, some students take classroom tests and, simultaneously, others take home take assignments? … It also needs to make learners not feel distinguished.

In pursuit of the impartiality of assessment, two teachers also claimed that they did not assess their students differently “because you have to be impartial; reasonable if they all be measured on the same task” (Teacher 8) and “… to avoid privileging some students over the other” (Teacher 9). Further, being detached from the mass and assessing one’s own students separately was a concern for Teacher 6 because such practice is “counter to the regulation. … All conditions must be equal for every student for fair assessment”. Generally, central to the teachers’ assessment practices was ensuring consistency, which they knew was an important aspect of assessment. Thus, ‘one-size-fits-all’ assessment practices were what worked for them.

3.1.3. Values and judgements

3.1.3.1. Objectivity

The assumption that students’ learning can and should be measured objectively was the most widely reflected among the teachers in this study. They tended to hold that the assessment process and final grading are conducted in a value-free manner, maintained through communal practices, structured tools, quantitative methods, and by limiting others’ involvement. Such strategies allowed teachers to overcome possible sociocultural influences on their assessment practices. In this regard, asked why exams are such a collective action, Teacher 6, a department head, responded: “university necessitates objectively measuring learning … crucial for large populations”. Correspondingly, Teacher 9 described this as follows:

In this course we need to ensure that we are able to objectively measure what a student has learned. That is where we reach consensus with students and others. The marks should be free from any influence. … we set some 15 objective questions in mid exam, each having two marks. … leave no room for our subjective judgements.

The teachers’ intention to objectively measure learning was also evidenced in their justifications for relying on paper-based examinations and distrusting open-ended items, that is, their trust in highly structured instruments. Their reliance on examinations relates to the rationale of effectiveness; examinations permit “accurate evaluation” and allow students to gain clearer information on their learning than other methods (Teacher 3). On the other hand, open-ended questions were seen as unfair because they were not appropriate for “objective scoring” and “bear biases” (Teacher 2). Further, the teachers seemed to place greater faith in quantifiable data and numeric descriptions of students’ learning in order to avoid subjective influences and provide accurate information about the grades produced. A comment from Teacher 1, “I have to evaluate them by grades … objective expression of how well they are doing. So, numeric values is a must. … otherwise, you can’t produce sound grades” illustrates the view well. His rationale was to produce unbiased grades that could accurately express students’ English language ability.

The final group of constructs reflecting the assumption of objectivity relates to students’ engagement in assessment. Teachers rarely involved students in assessment processes, as they were concerned about students’ ability to carry out assessment tasks accurately. They justified their concerns by drawing on the view that students’ engagement in assessment precludes the possibility of producing valid learning results, presupposing the assumption that involving students in the assessment process renders the results invalid. “You may not involve them because you don’t need to influence the results.” (Teacher 3) “Students are inaccurate while evaluating themselves. … it makes impossible to get accurate information about the marks I need at the end.” (Teacher 8) Such assertions indicate that students are not given a significant role in assessment because of the teachers’ motive to keep assessment value-free and their distrust of students in this regard, again suggesting the teachers’ assumption that what a student has learned should be measured objectively.

3.1.3.2. Qualitative descriptions and judgmental interpretations

Although represented by only a few comments, the teachers also expressed that qualitative descriptions and judgmental interpretations of students’ learning progress and performance are illuminating. They felt that assessment practices based on verbal data offer deep insights for teachers and their students regarding students’ learning progress. They also distrusted assessments that adopted objective methods and inquiries on the basis that these did not provide a trustworthy interpretation of students’ learning. In this regard, Teacher 7 stated,

It is more important to see what students do through like portfolio which carry thorough description. … if possible, I don’t give grades. That’s senseless. You may face huge challenge if you use your own judgmental approach to determine their learning but more justifiable. … If learning is end goal, we must go beyond counting numbers and quit judging learning by grades.

Similarly, questioning the existing system, another teacher indicated that he did not accept the notion of objectivity: “… you do since you have to. The relation between grade a student holds and his actual English ability tells you much how ineffective procedure we are made to follow … and why I don’t trust number-based judgements”. (Teacher 4) Being suspicious of the soundness of number- and grade-based assessment practices in representing students’ performance, what worked for these teachers were judgmental approaches, as these enabled them to represent students’ performance more credibly. Even when grades are mandatory, they held, students’ efforts and progress during learning should be considered. This was reflected by Teacher 8, who went beyond ordinary practices in grading students’ performance, offering the following justification:

I don’t rely on those results because an hour exam can’t be good indicator of learners’ performance of the course. You may call it personal judgement or that I am biasing results, but I consider not just results but the efforts students make … their progress during learning while giving grades. That I believe more realistically represent their English performance.

These teachers rejected the notion of objectivity, as assessments based on such inquiries are less likely to yield deeper information about students’ learning. Their drive to gain a deeper understanding of students led them to rely on judgmental interpretations of students’ learning progress and performance irrespective of the influence of their own values, which may also suggest the assumption that inquiry in and interpretation of assessment results are subjective but more important.

3.1.4. Power relations

3.1.4.1. Teachers’ dominant power

Many teachers shared the assumption that much of the power in assessment needs to remain with the course teacher. This was reflected in comments that treated the course teacher’s sole authority and role in assessment as taken for granted. For example, “… just teachers know about assessment”. (Teacher 6) “From my experience, teachers assess students; it is the duty of teachers. That’s what I know. … You may involve students, but this does not mean that they are in charge of it.” (Teacher 1) Such authority over assessment was further captured in the teachers’ claim that, of all educational processes, assessment should be under teacher control. “It is assessment, not another thing, and I would like to say it is my duty. Because after all I am responsible for … not students. I can’t see their place here.” (Teacher 3)

Even when hinting at policy documents, the teachers showed their expectation that the course teacher take control of assessment. “… those written in legal documents are for teacher, makes you accountable not students. … I mean, whatever … I have legal reason to control it.” (Teacher 9) This indicates teachers’ interest in sustaining the existing institutionally backed hegemony that privileges them to hold more power in assessment. Through the assertion that “… because there must be some kind of distance between teacher and student in assessment”, Teacher 8 also explicitly indicated her authoritative stance in assessment. Such justifications for not involving students in assessment processes clearly show teachers’ willingness to retain power in assessment. Their dominant power over students was also implicitly conveyed through their overuse of paper-based methods and selective item questions that restrict students from expressing their fuller views.

3.1.4.2. Shared power

A small number of teachers did not accept the notion that much power should remain with them. Relating this to the multidimensional nature of learning, these teachers described students as part of learning, able to make effective instructional decisions. They even condemned the hegemony teachers traditionally hold in assessment and tried to promote democratic assessment in their classrooms by actively engaging students. These teachers’ decisions and practices seem to rest on the assumption that power should be shared with students.

This was evidenced in feedback practices in which students were allowed to comment on each other’s presentations, group discussions, and anonymously written paragraphs. In doing so, teachers recognized that students are not passive recipients but are capable of generating and communicating comments to their peers. Teacher 7’s rationale was: “their [students’] role is immense. They have a lot to share to each other and must be given that chance”. Resonating with this, Teacher 4, who employs peer and self-assessment in his classroom and empowers students by teaching them how to conduct these, remarked: “learners are capable to learn by themselves if they are engaged”. This was supplemented by a comment from Teacher 2 describing his relationship with students: “It is smooth. … I do not need to dominate them. I think the view that since teachers are more knowledgeable, they should control assessment is incorrect. Learners should be part of decisions.” Generally, being aware of students’ essential role in assessment, these teachers placed greater faith in joint assessment processes, indicating their expectation for students to hold real power.

3.2. How compatible teachers’ assumptions are with current and improved trends in assessment

To determine the extent to which teachers’ existing assumptions were in harmony with current perspectives in assessment, the reasons the teachers expressed were categorized according to whether they aligned with traditionalist or modernist/alternativist assessment perspectives. Based on the frequency count of the elicited constructs (the teachers’ instances of reasoning), the assumptions on which current assessment practices in the communicative English skills course predominantly rest were identified.

Considering each theme separately, as can be observed in Table 3, the teachers expressed more reasons (19.1%) congruent with modernist or alternativist assessment perspectives than reasons supporting traditionalist assessment (15.73%) regarding the knowledge theme. However, the findings were different for the other themes, where the majority of teachers’ reasoning did not support alternativist assessment perspectives: the teachers expressed reasons more compatible with traditionalist than alternativist assessment assumptions with regard to the valid approach to assessing knowledge, values and judgements, and power themes. Taken together, of the 89 valid instances of teachers’ reasons, only 30 (33.71%) were in harmony with current and improved trends in assessment, and the remaining 59 (66.29%) were firmly rooted in the traditional assessment culture. In other words, the majority of teachers’ responses conformed to the assumptions underlying traditionalist assessment.

Table 3. Frequency and categorization of the teachers’ instances of reasonings

4. Discussion

Asking EFL teachers to describe their reasoning about their practices and views of assessment revealed some basic assumptions that guide current assessment practice in the communicative English skills course in West Ethiopian universities. The study revealed diversity, multiplicity and complexity in EFL teachers’ assessment assumptions. While teachers’ assumptions on the knowledge theme seem to keep pace with current theoretical approaches and recent developments in assessment, their assumptions on the valid approach to assessing knowledge, values and judgements, and power relations themes appear to resist shifting. This suggests that teachers may develop their assumptions at different times across the different dimensions of assessment. It also indicates incoherence in how teachers express their assumptions, which often results in semantically self-conflicting epistemic assumptions. For example, several teachers believed that assessment should focus on knowledge construction rather than reproduction. However, many of them are still unwilling to empower students in the assessment process, through which students develop their own understanding and build knowledge (Tan, Citation2012; Lau, 2013).

The teachers’ assumptions that assessment is expected to promote the application of learned skills, knowledge production, accuracy, consistency and a hierarchical power structure reveal variation and multiplicity in their interpretations. This may also indicate that teachers hold a combination of traditionalist and alternativist assessment assumptions, coexisting in their belief systems. Some prior studies on teachers’ assessment beliefs, conceptions and theories revealed similar findings (e.g. Remesal, Citation2011; Barnes et al., Citation2014; Latif & Wasim, Citation2022). For instance, in Latif and Wasim (Citation2022), EFL teachers held divergent beliefs that validity, reliability, authenticity, and the assessment of language production comprise the main features of good assessment. Even teachers from similar sociocultural contexts may reflect varied or mixed beliefs about assessment (Remesal, Citation2011). Indeed, the literature indicates that teachers can simultaneously hold divergent and manifold assumptions about assessment (Anderson, Citation1998; Serafini, Citation2001).

Asking EFL teachers to describe their reasoning about assessment also revealed the connection between teachers’ underlying assumptions and contextual and institutional dynamics. Sometimes, the reasons teachers gave for their traditional assessment practices did not validate those practices. Rather, they revealed teachers’ critical stance toward such assessments and their strong knowledge base in current assessment approaches, despite their failure to employ innovative assessments. Thus, some of the assumptions teachers expressed that are rooted in the alternative assessment culture were found to be ideal-world assumptions, as they are not reflected in practice. This indicates how not only teachers’ philosophy but also, even more strongly, contextual and institutional dynamics dictate the practical choice of assessment in the communicative English skills course. Latif and Wasim (Citation2022) also revealed that while tertiary EFL teachers held contemporary theories and beliefs about assessment, these were not reflected in their classroom assessment practices because of contextual variables at micro and macro levels that affected their decision-making. This shows the complex nature of assessment, influenced by distinct institutional, cultural and educational policy dynamics of the social context, which affect teachers’ assessment assumptions, beliefs, knowledge base and practices (Ergün, Citation2013; Fulmer et al., Citation2015; Latif & Wasim, Citation2022; McNamara, Citation2001; Remesal, Citation2011).

Generally, the teachers in this study hold diverse assumptions that could be categorized into four themes. Regarding knowledge or content, what is widely reflected suggests that teachers are unwilling to accept assessments that value common core content promoting the reproduction of facts and knowledge. Through their focus on contents that consider sociocultural contexts and students’ ability to develop their own understanding, most teachers expressed the implicit assumption that knowledge is constructed and situated in context, suggesting that they possess assumptions productive in the contemporary classroom assessment context. Consistent with this, EFL teachers in Latif and Wasim’s (Citation2022) study felt that assessment should emphasize the application of learned skills and language production instead of language memorization. Such findings, which indicate the compatibility of teachers’ assumptions with current perspectives, are interesting, especially considering the studies of Delandshere (Citation2001) and Mitana (Citation2018), in which many assessment contents were predicated on the assumptions of universal and essential knowledge.

Further, regarding the valid approach to assessing knowledge, values and judgements, and power relations themes, outdated assumptions appear to firmly guide current assessment practices. The teachers’ willingness to accept the notion of value-free assessment, in which knowledge and learning ought to be measured objectively and consistently regardless of contextual and individual differences, underlines that teachers’ thinking about approaches to assessing and interpreting evidence of learning is hampered by misconceptions and traditions incompatible with current theories. The validity of such views is questionable in the classroom context, where assessments are primarily used to facilitate learning (Black & Wiliam, Citation1998; Buhagiar, Citation2007). Since teachers opt to rely on technical aspects such as neutrality and accuracy in pursuit of the intended reliability and objectivity, the facilitation role of assessment can be reduced by valuing one-size-fits-all approaches and by barring students from assessment processes (Buhagiar, Citation2007; Serafini, Citation2002; Tan, Citation2012).

Through their focus on providing students with, rather than engaging them in, assessment marking, scoring, feedback, and so on, the teachers expressed their demand for powerful authority over students, which may suggest that assessment is a potent means of exercising power over students in Ethiopian universities. This is also implied, though unnoticed in their accounts, in their overuse of paper-based methods and selective item questions that restrict students from expressing their fuller views. This hidden teacher privilege over students indicates that such approaches are taken for granted and strongly embedded in their established practice-based assumptions. Studies on teacher-student power relations similarly reported teachers’ power control in classroom assessment, resulting in unbalanced power relationships (Tan, Citation2012; Lau, 2013). However, empowering students in assessment has recently been given prominence as a key avenue for increasing their independent learning and instructional decision-making power (Tan, Citation2012; Wellberg & Evans, Citation2022).

Taken together, the findings that most of the assumptions teachers possess are rooted in the traditional assessment culture indicate gaps in EFL teachers’ assessment knowledge base. This seems consistent with the available research findings (Delandshere, Citation2001; Klenowski & Willis, Citation2011; Mitana, Citation2018; Peterson & Bainbridge, Citation2002) and the general understanding in the literature that much language assessment still depends on assumptions embedded in behaviorist and psychometric theories (Fulcher, Citation2014; Serafini, Citation2002; Shepard, Citation2000). There may be ways in which traditionalist assumptions can support certain reasonably effective practices, but overall, they are not as effective as they should be, as they are incompatible with contemporary learning and assessment theories (Anderson, Citation1998).

5. Conclusion and implications

The aim of this study was to explore EFL teachers’ underlying assumptions about assessment in the communicative English skills course in West Ethiopian universities. The findings revealed diversity and complexity in teachers’ assumptions. While the assumptions that students’ knowledge-building process and a holistic view of their English language ability should be accounted for in assessment are widely reflected among teachers, their assumptions of consistency, objectivity, and hierarchical power relationships were also found to strongly drive their assessment decisions and practices. Thus, despite the few modernist assumptions they hold, which do not always translate into practice, traditionalist assessment assumptions fundamentally underlie the practices of assessment in the communicative English skills course. Therefore, it can be concluded that most of the assumptions EFL teachers in West Ethiopian universities hold about assessment are incompatible with current perspectives in foreign language assessment and learning and with the expectations of the recent reforms on classroom assessment.

This study fills a gap in the literature by clarifying the assumptions that guide assessment practices in communicative English skills courses, particularly by unveiling those that interfere with innovative assessment practices. According to Stabile (Citation2014), implementing new developments in assessment can be pointless if teachers remain unaware of their assumptions. Thus, the picture of EFL teachers’ assumptions that we have painted in this study may help teachers raise their awareness, especially regarding outdated assumptions that interfere with innovative assessment practices. This may prompt them to examine these assumptions and begin to question their practices, enabling critical reflection that results in enhanced instruction. Again, with teachers’ professional development in mind, their assumptions need to be continuously examined by concerned bodies. Assessing the underlying assumptions embedded in teachers’ views and practices of assessment is, thus, the first step. However, there is currently no appropriate framework for assessing EFL teachers’ assessment assumptions. The findings of this study can therefore be considered one of the necessary steps towards creating a new measure of assumptions about assessment.

Further, the study suggests that what teachers actually do does not necessarily reflect what they think and judge to be worthwhile, as their assumptions do not always translate into practice, which has important implications for teacher development and the practice context. This provides guidance for policy makers and university management to consider not only teachers’ assumptions but also the practice context when arranging professional learning programs for teachers. The finding also cautions against hastily judging teachers who do not employ reform-oriented assessments as unchanged; instead, it suggests unraveling those barriers and facilitating conditions so that teachers can translate their assumptions into practice.

This study may inspire future research on this topic. It presents an important step towards investigating EFL teachers’ assumptions about assessment in the context of English for academic purposes in West Ethiopian universities and provides a starting point for complementary research. Hence, confirmatory studies with teachers teaching a similar course in universities located in other areas of the country are required, which may allow a deeper comprehension of the issue. In addition, the findings suggest that teachers’ failure to use reform-oriented assessments does not always negate their holding of alternative assessment assumptions; rather, it points to the importance of knowing what affects practice. Thus, future researchers should carefully explore the underlying reasons for any discrepancy between teachers’ assessment assumptions and practices in the EFL context, a topic that is currently underexplored.

6. Recommendations

Given the prevalence of assumptions contradictory to current assessment perspectives among EFL teachers, the main implication is the need to shift teachers’ existing assumptions. With this in mind, we recommend that policy makers and university management offer professional development programs that promote EFL teachers’ awareness of the assumptions underlying contemporary assessment approaches. When designing training opportunities, they may also emphasize assumptions that interfere with innovative assessment practices. Programs that give teachers the opportunity to reflect on their assumptions are encouraged. In addition to attending such programs, EFL teachers should engage in self-directed learning. It is most fruitful when teachers identify their own assumptions and examine their validity in light of current trends in assessment, either accepting them as compatible or rejecting them as unjustified, so that intentional practice takes place.

7. Limitations

Although we believe that stimulated recall interviews following classroom observations and repertory grid techniques are more effective for exploring teacher cognition, we could not use them in this study because of the COVID-19 pandemic during data collection and because of time constraints. Relying solely on interviews to explore teachers’ assumptions is also a limitation, as we could not triangulate the data with other methods. Thus, to enrich the findings, provide more insight, and allow firmer conclusions, further studies may use the repertory grid technique, conduct stimulated recall interviews supported by classroom observation and videotaping, and complement the investigation with quantitative data.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Dagim Endale

Dagim Endale is a PhD student in TEFL at Haramaya University. His research interests include assessment in the ELT context, with an emphasis on EFL teachers’ assessment literacy, and the teaching and learning of language skills.

Adinew Tadesse

Adinew Tadesse (Ph.D.) is an associate professor in ELT at Haramaya University. His research interests include teacher education policy and practice, language assessment, issues in EAP/ESP, practicum, and reflective teaching. He is a research advisor for EFL students and actively engages in thesis and dissertation examinations at different Ethiopian universities.

Abera Admasu

Abera Admasu (Ph.D.) is an assistant professor in Applied Linguistics at Haramaya University. His research areas include the advisor-advisee power relationship in research work, language learning in multicultural settings, and EL teachers’ preparation and practicum.

Alemayehu Getachew

Alemayehu Getachew (Ph.D.) is an assistant professor in TEFL at Haramaya University. His teaching and research areas include academic writing, active language learning strategies, and assessment and evaluation.

References

  • Al-Bahlani, S. M., & Ecke, P. (2023). Assessment competence and practices including digital assessment literacy of postsecondary English language teachers in Oman. Cogent Education, 10(2), 2239535. https://doi.org/10.1080/2331186X.2023.2239535
  • Alderson, J. C. (2005). Diagnosing foreign language proficiency. The interface between learning and assessment. Continuum.
  • Anderson, R. S. (1998). Why talk about different ways to grade? The shift from traditional assessment to alternative assessment. New Directions for Teaching and Learning, 1998(74), 5–16. https://doi.org/10.1002/tl.7401
  • Atkin, H. (2017). Investigating service user and staff assumptions about neurological rehabilitation practice, their influence on inclusion and examining conditions for change [Doctoral dissertation]. Northumbria University.
  • Barnes, N., Fives, H., & Dacey, C. M. (2014). Teachers’ beliefs about assessment. In H. Fives & M. G. Gill (Eds.), International handbook of research on teachers’ beliefs (pp. 296–312). Routledge.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
  • Borg, S. (2006). Teacher cognition and language education: Research and practice. Continuum.
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Brookfield, S. (1992). Uncovering assumptions: The key to reflective practice. Adult Learning, 3(4), 13–18. https://doi.org/10.1177/104515959200300405
  • Brookfield, S. (2013). Teaching for critical thinking. International Journal of Adult Vocational Education and Technology, 4(1), 1–15. https://doi.org/10.4018/javet.2013010101
  • Brown, H. D., & Abeywickrama, P. (2010). Language assessment: Principles and practices. Pearson.
  • Buhagiar, M. A. (2007). Classroom assessment within the alternative assessment paradigm: Revisiting the territory. The Curriculum Journal, 18(1), 39–56. https://doi.org/10.1080/09585170701292174
  • Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education. Routledge.
  • Creswell, J. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). SAGE.
  • Delandshere, G. (2001). Implicit theories, unexamined assumptions & status quo of educational assessment. Assessment in Education: Principles, Policy & Practice, 8(2), 113–133. https://doi.org/10.1080/09695940123828
  • Denscombe, M. (2007). The good research guide for small-scale social research projects (3rd ed.). Open University Press.
  • Denzin, N. K., & Lincoln, Y. S. (2005). The SAGE handbook of qualitative research (3rd ed.). Sage.
  • Donaghue, H. (2003). An instrument to elicit teachers’ beliefs and assumptions. ELT Journal, 57(4), 344–351. https://doi.org/10.1093/elt/57.4.344
  • Dornyei, Z. (2007). Research methods in applied linguistics: Qualitative, quantitative and mixed methodologies. Oxford University Press.
  • Ergün, R. (2013). The significance of assumptions underlying school culture in the process of change. IJERT, 4(2), 43–48. https://soeagra.com/ijert/ijertjune2013/7.pdf
  • Fulcher, G. (2014). Philosophy and language testing. In A. J. Kunnan (Ed.), The Companion to Language Assessment (1st ed.). John Wiley & Sons, Inc.
  • Fulmer, G. W., Lee, I. C. H., & Tan, K. H. K. (2015). Multi-level model of contextual factors and teachers’ assessment practices: An integrative review of research. Assessment in Education: Principles, Policy & Practice, 22(4), 475–494. https://doi.org/10.1080/0969594X.2015.1017445
  • Gabbitas, B. W. (2009). Critical thinking and analyzing assumptions in instructional technology [Master’s thesis]. Brigham Young University.
  • Gipps, C. (1999). Sociocultural aspects of assessment. Review of Research in Education, 24, 355–392.  https://doi.org/10.2307/1167274
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
  • Hill, K., & McNamara, T. (2012). Developing a comprehensive, empirically based research framework for classroom-based assessment. Language Testing, 29(3), 395–420. https://doi.org/10.1177/0265532211428317
  • Holzweiss, P. C., Bustamante, R., & Fuller, M. B. (2016). Institutional cultures of assessment: A qualitative study of administrator perspectives. Journal of Assessment and Institutional Effectiveness, 6(1), 1–27.  https://doi.org/10.5325/jasseinsteffe.6.1.1
  • Irons, A. (2008). Enhancing learning through formative assessment and feedback. Routledge.
  • Janisch, C., Liu, X., & Akrofi, A. (2007). Implementing alternative assessment: Opportunities and obstacles. The Educational Forum, 71(3), 221–230. https://doi.org/10.1080/00131720709335007
  • Klenowski, V., & Willis, J. (2011). Challenging teachers’ assumptions in an era of curriculum and assessment change. The Primary and Middle Years Educator, 9(1), 1–9. https://eprints.qut.edu.au/43662/
  • Latif, M. W., & Wasim, A. (2022). Teacher beliefs, personal theories and conceptions of assessment literacy- a tertiary EFL teachers’ perspective. Language Testing in Asia, 12(1), 11. https://doi.org/10.1186/s40468-022-00158-5
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage Publications.
  • McPherron, P. (2005). Assumptions in assessment: The role of the teacher in evaluating ESL students. The CATESOL Journal, 17(1), 38–54.
  • McNamara, T. (2001). Language assessment as social practice: Challenges for research. Language Testing, 18(4), 333–349. https://doi.org/10.1177/026553220101800402
  • Mekonnen, G. (2014). EFL classroom assessment: Teachers’ practice and teaching techniques adjustment in Ethiopia. Educational Research and Reviews, 9(20), 1071–1089. http://www.academicjournals.org/ERR
  • Miles, M., & Huberman, A. (1994). Qualitative data analysis (2nd ed.). Sage Publications.
  • Mitana, J. M. (2018). Philosophical assumptions of educational assessment in primary schools in Uganda: Case of Kampala and Kabale districts [Doctoral dissertation]. Makerere University.
  • Nasab, F. G. (2015). Alternative versus traditional assessment. Journal of Applied Linguistics and Language Research, 2(6), 165–178. http://www.jallr.com/index.php/JALLR/article/download/136/pdf136
  • Nibret, A. (2013). The effect of modifying EFL teachers’ assessment on students’ integrated approach to learning English [Doctoral dissertation]. AAU.
  • Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Sage Publications.
  • Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: staff and student perspectives. Assessment & Evaluation in Higher Education, 44(1), 25–36. https://doi.org/10.1080/02602938.2018.1467877
  • Peterson, S., & Bainbridge, J. (2012). Questioning the underlying assumptions of practices in literacy education. Language and Literacy, 4(2), 1–9. https://doi.org/10.20360/G2601Q
  • Price, J. K., Light, D., & Pierson, E. (2014). Classroom assessment: A key component to support education transformation. In R. Huang, Kinshuk, & J. K. Price (Eds.), ICT in education in global context: Emerging trends report 2013–2014. Springer.
  • Remesal, A. (2011). Primary and secondary teachers’ conceptions of assessment: A qualitative study. Teaching and Teacher Education, 27(2), 472–482. https://doi.org/10.1016/j.tate.2010.09.017
  • Saldana, J. (2009). The coding manual for qualitative researchers. SAGE Publications.
  • Serafini, F. (2001). Three paradigms of assessment: Measurement, procedure, and inquiry. The Reading Teacher, 54(4), 384–393. http://www.jstor.org/stable/20204924
  • Serafini, F. (2002). Dismantling the factory model of assessment. Reading & Writing Quarterly, 18(1), 67–85. https://doi.org/10.1080/105735602753386342
  • Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. https://doi.org/10.3102/0013189X029007004
  • Siarova, H., Sternadel, D., & Mašidlauskaitė, R. (2017). Assessment practices for 21st century learning: Review of evidence, NESET II report. Publications Office of the European Union. https://doi.org/10.2766/71491
  • Stabile, C. (2014). A culture of teaching and learning excellence starts with an examination of assumptions: Influences of general semantics on faculty development. ETC: A Review of General Semantics, 71(3), 220–226. https://www.jstor.org/stable/24761902
  • Tan, K. H. K. (2012). How teachers understand and use power in alternative assessment. Education Research International, 2012, 1–11. https://doi.org/10.1155/2012/382465
  • Vandeyar, S., & Killen, R. (2007). Educators’ conceptions and practices of classroom assessment in post-apartheid South Africa. South African Journal of Education, 27(1), 101–115. https://files.eric.ed.gov/fulltext/EJ1150092
  • Wafubwa, R. N., & Csíkos, C. (2022). Impact of formative assessment instructional approach on students’ mathematics achievement and their metacognitive awareness. International Journal of Instruction, 15(2), 119–138. https://doi.org/10.29333/iji.2022.1527a
  • Wellberg, S., & Evans, C. (2022). Assumptions underlying performance assessment reforms intended to improve instructional practices: A research-based framework. Practical Assessment, Research & Evaluation, 27, 23. https://scholarworks.umass.edu/pare/vol27/iss1/23
  • World Bank. (2011). Learning for all: Investing in people’s knowledge and skills to promote development. International Bank for Reconstruction and Development/World Bank.
  • Zelalem, B., & Emily, J. (2018). EFL instructors’ beliefs and practices of formative assessment in teaching writing. Journal of Language Teaching and Research, 9(1), 42–50. https://doi.org/10.17507/jltr.0901.06