Rater Bias in Assessing Iranian EFL Learners’ Writing Performance
Mahnaz Saeidi, Mandana Yousefi, Purya Baghayei
Abstract:
Evidence suggests that variability in the ratings of students' essays results not only from differences in their writing ability but also from certain extraneous sources. In other words, the outcome of essay rating can be biased by factors related to the rater, the task, and the situation, or by an interaction of any of these factors, which makes the inferences and decisions made about students' writing ability undependable. The purpose of this study, therefore, was to examine variability in rater judgments as a source of measurement error in the assessment of EFL learners' essay writing. Thirty-two Iranian sophomore students majoring in English language participated in the study. The learners' narrative essays were rated by six different raters, and the results were analyzed using many-facet Rasch measurement as implemented in the computer program FACETS. The findings suggest that there are significant differences among raters in their harshness, as well as several cases of bias due to rater-examinee interaction. This study provides a valuable understanding of how effective and reliable rating can be achieved and how the fairness and accuracy of subjective performance assessment can be evaluated.
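The study itself ran its analysis in the FACETS program, which this page does not reproduce. As a rough illustration of the underlying idea, the sketch below fits a dichotomous many-facet Rasch model with a rater-severity facet by joint maximum likelihood: each rater's harshness is estimated jointly with each examinee's ability, so that differences in rater severity can be separated from differences in writing ability. The rating matrix and all names here are hypothetical, not data or code from the study.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_mfrm(ratings, iters=2000, lr=0.05):
    """Joint maximum-likelihood fit of a dichotomous Rasch model with a
    rater facet: P(score 1) = sigmoid(ability_n - severity_j).
    ratings[n][j] is rater j's 0/1 score for examinee n.
    Severities are centered at zero to identify the scale."""
    N, J = len(ratings), len(ratings[0])
    ability = [0.0] * N
    severity = [0.0] * J
    for _ in range(iters):
        # gradient-ascent step on the log-likelihood for abilities
        for n in range(N):
            g = sum(ratings[n][j] - sigmoid(ability[n] - severity[j])
                    for j in range(J))
            ability[n] += lr * g
        # and for severities (a harsher rater gets a larger severity)
        for j in range(J):
            g = sum(sigmoid(ability[n] - severity[j]) - ratings[n][j]
                    for n in range(N))
            severity[j] += lr * g
        # center severities; shift abilities by the same amount so that
        # ability - severity (and hence the likelihood) is unchanged
        m = sum(severity) / J
        severity = [s - m for s in severity]
        ability = [a - m for a in ability]
    return ability, severity

# Hypothetical 5 examinees x 3 raters, 1 = "pass"
ratings = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
]
ability, severity = fit_mfrm(ratings)
```

In this toy matrix, rater 2 awards the fewest passes and so receives the largest severity estimate, while rater 0, the most lenient, receives the smallest. A rater-examinee bias analysis of the kind reported in the study would go further, examining residuals for particular rater-examinee pairs rather than overall severity alone.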
Keywords: Rater bias, Writing ability, Many-facet Rasch measurement, Inter-rater reliability
Full-Text [PDF 260 kb]
Type of Study: Research
Published: 2013/03/15