Research Article

Generalizability of CEFR Criterial Grammatical Features in a Korean EFL Corpus across A1, A2, B1, and B2 Levels


ABSTRACT

Recent research conducted as part of the English Profile identified grammatical criterial features that are characteristic of each Common European Framework of Reference (CEFR) proficiency level. The extent to which these criterial features are attested in the English of learners from various first language backgrounds calls for empirical examination. In this study, I investigated the use of these criterial features drawing on a Korean EFL corpus. The data consisted of 6,486 essays produced by 3,243 Korean college students at CEFR proficiency levels A1 to B2. I examined how 35 criterial features pertaining to levels A2, B1, and B2 were attested in this corpus. Overall, although the features were used more frequently and more widely as proficiency level advanced, they occurred at low frequencies. I also found some misalignments between the criterial level of the features and learner proficiency. I discuss the potential influence of first language, test and task type, and rating on these findings, and I raise the need to reconsider the criteriality of the identified features when adopting and applying them in local assessment contexts.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1 Hawkins and Filipović (2012) did not explicitly state that their list of criterial features describes written English, but the corpora used in their research consisted only of written texts.

2 The Yonsei English Learner Corpus is available from the English Language Informatics Laboratory at Yonsei University, Seoul, South Korea (https://web.yonsei.ac.kr/yonseicorpuslab/).

3 One of the topics for part 2, “corporal punishment,” appeared more frequently (27.5%) in the corpus than the other topics (12.1–15.8%). The seven topics for narrative essays were evenly distributed.

4 I removed 8 defective text files from the original corpus.

5 The accuracy of the Stanford Part-of-Speech Tagger is approximately 97.3% on native English data (Manning, 2011). Geertzen, Alexopoulou, and Korhonen (2014) reported a 21.6% POS-tagging error rate for the Stanford Parser on learner English.
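For illustration only: the study used the Java-based Stanford Part-of-Speech Tagger, but a comparable tagging pass can be sketched with stanza, the Stanford NLP group's Python library. The learner sentence and pipeline settings below are assumptions made for the example, not part of the original procedure.

```python
# Sketch of POS tagging a learner sentence with stanza (illustrative only;
# the study itself used the Java Stanford POS Tagger).
import stanza

# Download the English models once; comment out after the first run.
stanza.download("en")

nlp = stanza.Pipeline(lang="en", processors="tokenize,pos")

essay = "He go to school every day and study very hard."  # invented learner sentence
doc = nlp(essay)

# Print each word with its Penn Treebank tag. Tagging errors of the kind
# reported by Geertzen, Alexopoulou, and Korhonen (2014) are more likely
# on non-native text like this.
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.xpos)
```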

6 WordSmith Tools (Scott, 2016) was used to obtain the number of tokens and types and the mean sentence length.
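As a minimal sketch, not the WordSmith Tools procedure itself, the three descriptive statistics mentioned here could be computed for a single essay file roughly as follows; the file name and the tokenization and sentence-splitting rules are illustrative assumptions.

```python
# Compute token count, type count, and mean sentence length for one text file.
import re

def essay_statistics(path):
    """Return (tokens, types, mean sentence length) for a plain-text essay."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Naive sentence split on ., !, ? followed by whitespace (an assumption;
    # WordSmith applies its own sentence-boundary rules).
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Word tokens: alphabetic strings, lowercased for type counting.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    types = set(tokens)
    mean_sent_len = len(tokens) / len(sentences) if sentences else 0.0
    return len(tokens), len(types), mean_sent_len

# Example use on a hypothetical essay file from the corpus:
# n_tokens, n_types, msl = essay_statistics("essay_0001.txt")
```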
