Showing 3 results for Esfandiari
Volume 18, Issue 2 (9-2015)
Abstract
In this study, the researcher used the many-facet Rasch measurement model (MFRM) to detect two pervasive rater errors among peer-assessors rating EFL essays. The researcher also compared the ratings of peer-assessors with those of teacher assessors to gain a clearer understanding of peer-assessors' ratings. To that end, the researcher used a fully crossed design in which all peer-assessors rated all the essays written by MA students enrolled in two Advanced Writing classes at two private universities in Iran. The peer-assessors used a 6-point analytic rating scale to evaluate the essays on 15 assessment criteria. The results of Facets analyses showed that, as a group, the peer-assessors exhibited neither a central tendency effect nor a halo effect; however, individual peer-assessors showed varying degrees of both effects. Further, the ratings of the peer-assessors did not differ statistically significantly from those of the teacher assessors.
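The two rater errors in this abstract can be illustrated with simple descriptive indices computed from a ratings matrix. This is only a rough sketch over invented scores, not the MFRM analysis the study ran in Facets: a rater whose scores cluster at the scale midpoint suggests a central tendency effect, and a rater whose scores barely vary across an essay's criteria suggests a halo effect. All names and data below are hypothetical.

```python
from statistics import mean, pstdev

# Hypothetical ratings on a 6-point scale: rater -> {essay: [scores per criterion]}.
ratings = {
    "R1": {"e1": [3, 4, 3, 4], "e2": [2, 3, 3, 2]},  # differentiated profiles
    "R2": {"e1": [4, 4, 4, 4], "e2": [3, 3, 3, 3]},  # flat profiles -> halo-like
}

SCALE_MID = 3.5  # midpoint of a 6-point scale

def central_tendency_index(rater):
    """Mean absolute distance of a rater's scores from the scale midpoint.
    Small values mean scores cluster at the middle (central tendency)."""
    scores = [s for essay in ratings[rater].values() for s in essay]
    return mean(abs(s - SCALE_MID) for s in scores)

def halo_index(rater):
    """Mean within-essay standard deviation across criteria.
    Near-zero values mean undifferentiated (halo-like) score profiles."""
    return mean(pstdev(essay) for essay in ratings[rater].values())
```

These indices are crude proxies; the study itself estimated rater severity and fit statistics with the MFRM, which these descriptive measures do not replicate.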
Rajab Esfandiari, Razieh Nouri, Volume 19, Issue 2 (9-2016)
Abstract
Professionalism requires that language teachers be assessment literate so that they can assess students' performance more effectively. However, assessment literacy (AL) has remained a relatively unexplored area. Given the centrality of AL in educational settings, in the present study we identified the factors constituting AL among university instructors and examined the ways English Language Instructors (ELIs) and Content Instructors (CIs) differed in AL. A researcher-made, 50-item questionnaire was administered to both groups: ELIs (N = 155) and CIs (N = 155). A follow-up interview was conducted to validate the findings. IBM SPSS (version 21) was used to analyse the data quantitatively. Results of exploratory factor analysis showed that AL comprised three factors: the theoretical dimension of testing, test construction and analysis, and statistical knowledge. Further, the results revealed statistically significant differences between ELIs and CIs in AL. Qualitative results showed that the differences were primarily related to the amount of training in assessment, methods of evaluation, purposes of assessment, and familiarity with the psychometric properties of tests. Building on these findings, we discuss implications for teachers' professional development.
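Once an exploratory factor analysis has settled on a factor solution like the three factors named in this abstract, questionnaire items are typically scored by averaging responses within each factor. The sketch below shows only that scoring step, with an invented item-to-factor mapping and invented Likert-style responses; the factor extraction itself was done in SPSS and is not reproduced here.

```python
from statistics import mean

# Hypothetical mapping of the 50 questionnaire items to the study's three
# factors (the item numbers are illustrative, not from the paper).
FACTORS = {
    "theoretical_dimension": range(0, 20),
    "test_construction_analysis": range(20, 40),
    "statistical_knowledge": range(40, 50),
}

def factor_scores(responses):
    """Average a respondent's 50 item responses within each factor."""
    return {name: mean(responses[i] for i in items)
            for name, items in FACTORS.items()}

# Two illustrative respondents (1-5 Likert responses).
eli = [4] * 50                  # uniformly high self-reported AL
ci = [4] * 40 + [2] * 10        # weaker on the statistical-knowledge items
```

Comparing such subscale means between groups is the descriptive counterpart of the inferential group comparisons reported in the study.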
Shohreh Esfandiari, Kobra Tavassoli, Volume 22, Issue 2 (9-2019)
Abstract
This study investigated the comparative effect of self-assessment versus peer-assessment on young EFL learners' performance on selective and productive reading tasks. To do so, 56 young learners were selected from among 70 students in four intact classes based on their performance on the A1 Movers Test. The participants were then randomly divided into two groups, self-assessment and peer-assessment. The reading section of a second A1 Movers Test was adapted into a reading test containing 20 selective and 20 productive items, which served as the pretest and posttest. This adapted test was piloted and its psychometric characteristics were checked. In the self-assessment group, the learners assessed their own performance after each reading task, while in the peer-assessment group, the participants checked their peers' performance in pairs. The data were analyzed through repeated-measures two-way ANOVA and MANOVA. The findings indicated that both self-assessment and peer-assessment are effective in improving young EFL learners' performance on selective and productive reading tasks; further, neither assessment method outperformed the other on either task type. These findings have practical implications for EFL teachers and materials developers, who can use both assessment methods to help learners perform better on reading tasks.
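The pretest-posttest design in this abstract rests on gain scores: each learner's posttest score minus their pretest score, averaged per group. The sketch below computes such gains over invented scores for the 20-item subtests; it is a descriptive illustration only, not the repeated-measures ANOVA/MANOVA the study actually ran.

```python
from statistics import mean

def gains(pre, post):
    """Per-learner gain scores (posttest minus pretest)."""
    return [b - a for a, b in zip(pre, post)]

# Hypothetical scores on a 20-item selective reading subtest.
self_pre, self_post = [10, 12, 11, 9], [15, 16, 14, 13]
peer_pre, peer_post = [11, 10, 12, 9], [15, 14, 16, 13]

self_gain = mean(gains(self_pre, self_post))  # self-assessment group improves
peer_gain = mean(gains(peer_pre, peer_post))  # peer-assessment group improves
```

With these invented numbers the two mean gains come out equal, mirroring the study's finding that neither method outperformed the other; the actual inferential test of that equality requires the repeated-measures analysis.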