Showing 2 results for Writing Assessment
Parviz Maftoon, Kourosh Akef, Volume 12, Issue 2 (9-2009)
Abstract
The purpose of the present study was to develop appropriate scoring scales for each of the defined stages of the writing process and to determine to what extent these scales can reliably and validly assess the performance of EFL learners on an academic writing task. Two hundred and two students' writing samples were collected after step-by-step, process-oriented essay-writing instruction. Four stages of the writing process, namely generating ideas (brainstorming), outlining (structuring), drafting, and editing, were operationally defined. Each collected writing sample included the scripts a student writer produced at each stage of the process. Through a detailed analysis of the samples by three raters, the features that marked strong or weak points in the student writers' work were identified, and the scripts were categorized into four levels of performance. Descriptive statements were then written for each identified feature to represent the corresponding level of performance. These descriptive statements, or descriptors, formed a rating scale for each stage of the writing process. Finally, four rating sub-scales, namely brainstorming, outlining, drafting, and editing, were designed for the corresponding stages. The three raters then used the scales to rate the 202 collected writing samples, and the resulting scores were analyzed statistically. The high inter-rater reliability estimate (0.895) indicated that the rating scales could produce consistent results, and an analysis of variance (ANOVA) indicated no significant difference among the three raters' ratings. Factor analysis suggested that at least three constructs (language knowledge, planning ability, and idea-creation ability) may underlie the variables measured by the rating scales.
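To make the statistical steps concrete, the following minimal sketch (in Python, on synthetic data) shows one common way such checks are computed: Cronbach's alpha across the three raters' scores as a reliability estimate, and a one-way ANOVA comparing the raters' mean ratings. This is not the authors' actual analysis; the simulated data, the 0-100 score scale, and the choice of alpha as the reliability coefficient are all assumptions for illustration.

import numpy as np
from scipy import stats

# Hypothetical data: 202 writing samples, each scored by 3 raters.
rng = np.random.default_rng(0)
true_quality = rng.normal(70, 10, size=202)
scores = np.column_stack(
    [true_quality + rng.normal(0, 4, size=202) for _ in range(3)]
)

# Inter-rater reliability via Cronbach's alpha (the abstract does not
# say which estimate was used; alpha is one common choice).
k = scores.shape[1]
alpha = (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                         / scores.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.3f}")

# One-way ANOVA: do mean ratings differ across the three raters?
f_stat, p_val = stats.f_oneway(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")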
Zahra Mohammadi Salari, Volume 27, Issue 1 (4-2024)
Abstract
The current study explored the status of rating scales among Iranian EFL raters. EFL/ESL assessment environments appear to be significantly influenced by the perceived authority of native-speaker assessment traditions, so examining the realities of rating practice in EFL/ESL settings can offer a more accurate picture of how assessment is viewed and implemented. To this end, the present study conducted a survey within the Iranian EFL writing assessment context. An eight-item interview guide was designed to investigate various aspects of the rating task, including the rating scale, and was administered to ten raters from universities and institutions across Iran, all of whom held either a master's or a doctoral degree in TEFL. The raters took part in 40-minute interview sessions, and the audio-recorded interviews were transcribed by the researcher for qualitative analysis. A content analysis of the interview data revealed some general patterns: a rating scale in the conventional sense did not exist, and raters instead relied on their own internalized criteria, developed over long years of practice. These findings challenge the legitimacy of native-speaker authority in designing and developing scales for the EFL context and emphasize local agency in rating-scale design and development.