Zahra Mohammadi Salari, Volume 27, Issue 1 (4-2024)
Abstract
The current study explored the status of rating scales among Iranian EFL raters. EFL/ESL assessment environments appear to be significantly influenced by the perceived authority of native-speaker assessment groups; examining the realities of rating practices in EFL/ESL settings can therefore offer a more accurate picture of how assessment is viewed and implemented. To this end, the present study conducted a comprehensive survey within the Iranian EFL writing assessment context. A carefully designed eight-item interview guide was created to investigate various aspects of the rating task, including the rating scale. The guide was administered to ten raters from various universities and institutions in Iran, all of whom held either a Master's or a Doctorate degree in TEFL. The raters participated in 40-minute interview sessions, and the audio-recorded interviews were transcribed by the researcher for qualitative analysis. A thorough content analysis of the interview data revealed several general patterns. The interviews with Iranian EFL composition raters showed that a rating scale in its conventional sense did not exist; instead, raters relied on their own internalized criteria, developed over long years of practice. These findings challenge native-speaker legitimacy in the design and development of rating scales for the EFL context and emphasize local agency in scale design and development.