:: Search published articles ::
Showing 3 results for Bias

Mahnaz Saeidi, Mandana Yousefi, Purya Baghayei,
Volume 16, Issue 1 (3-2013)
Abstract

Evidence suggests that variability in the ratings of students’ essays results not only from differences in their writing ability, but also from certain extraneous sources. In other words, the outcome of essay rating can be biased by factors relating to the rater, task, and situation, or an interaction of any of these factors, which makes the inferences and decisions made about students’ writing ability undependable. The purpose of this study, therefore, was to examine variability in rater judgments as a source of measurement error in the assessment of EFL learners’ essay writing. Thirty-two Iranian sophomore students majoring in English language participated in this study. The learners’ narrative essays were rated by six different raters, and the results were analyzed using many-facet Rasch measurement as implemented in the computer program FACETS. The findings suggest that there are significant differences among raters in their harshness, as well as several cases of bias due to rater-examinee interaction. This study provides a valuable understanding of how effective and reliable rating can be realized, and how the fairness and accuracy of subjective performance assessment can be evaluated.
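The abstract does not reproduce the measurement model itself. For reference, the standard many-facet Rasch formulation underlying FACETS (as described by Linacre) expresses the log-odds of examinee n receiving rating-scale category k rather than k−1 from rater j as:

\[
\log\frac{P_{njk}}{P_{nj(k-1)}} = B_n - C_j - F_k
\]

where \(B_n\) is the examinee’s ability, \(C_j\) the rater’s severity (harshness), and \(F_k\) the difficulty of the step up to category k. Rater–examinee bias of the kind the study reports is flagged when the residuals show a systematic interaction between a particular \(B_n\) and \(C_j\) beyond what the main-effects model predicts.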
Although the use of verbal protocols is growing in oral assessment, research on raters’ verbal protocols is rather rare, and the few existing studies did not use a mixed-methods design. Therefore, this study investigated the possible impacts of rater training on novice and experienced raters’ application of a specified set of standards in rating. To meet this objective, the study made use of verbal protocols produced by 20 raters who scored 300 test takers’ oral performances, and analyzed the data both qualitatively and quantitatively. The outcomes demonstrated that, through the training program, the raters were able to concentrate more on linguistic, discourse, and phonological features; as a result, their agreement increased, especially among the inexperienced raters. The analysis of verbal protocols also revealed that training in how to apply a well-defined rating scale can help raters use it both validly and reliably. Different groups of raters approach the task of rating in different ways, which cannot be explored through purely statistical analysis; think-aloud verbal protocols can therefore shed light on the less transparent aspects of the issue and add to the validity of oral language assessment. Moreover, since the results of this study showed that inexperienced raters can produce protocols of higher quality and quantity in their use of macro- and micro-strategies to evaluate test takers’ performances, there is no evidence on the basis of which decision makers should exclude inexperienced raters solely because of their lack of experience.
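The abstract reports that rater agreement increased after training but does not name the agreement statistic used. As an illustrative sketch only (not the study’s actual method), chance-corrected agreement between two raters is commonly quantified with Cohen’s kappa:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who assigned categorical ratings to the same set of performances."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each rater's marginal category use
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)
```

A value of 1 indicates perfect agreement, 0 agreement no better than chance; training would be expected to move kappa upward, as the study found for its inexperienced raters.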

Seyyed Ali Ostovar-Namaghi, Shiva Nakhaee,
Volume 22, Issue 2 (9-2019)
Abstract

Content and Language Integrated Learning (CLIL) has recently been the focus of numerous studies in language education, since it aims to overcome the pitfalls of form-focused and meaning-focused instruction by systematically integrating content and language. This meta-analysis synthesizes the findings of 22 primary studies that tested the effect of CLIL on language skills and components. Three questions guide the analysis: What is the overall combined effect of CLIL on language skills and components? How do moderators condition the effect of CLIL? To what extent is the overall combined effect conditioned by publication bias? The overall effect size was found to be g = 0.81, which represents a medium effect size with respect to Plonsky and Oswald’s (2014) scale. The results of moderator analysis show that CLIL has the highest effect on students’ grammar and listening proficiency and at lower levels of education, especially in elementary schools. It also has the highest effect when combined with hotel management as the subject matter. A fail-safe N test of publication bias shows that the significant positive outcome of CLIL cannot be accounted for by publication bias. The findings have clear implications for practitioners, researchers, and curriculum developers.
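The abstract reports a combined Hedges’ g of 0.81 and a fail-safe N test. As a hedged sketch of the standard formulas involved (Hedges’ small-sample-corrected standardized mean difference and Rosenthal’s fail-safe N), with illustrative inputs rather than the study’s data:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: standardized mean difference between a treatment (CLIL)
    and a control group, with a small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Small-sample correction factor J = 1 - 3 / (4*df - 1)
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return j * d

def rosenthal_fail_safe_n(z_values, alpha_z=1.645):
    """Rosenthal's fail-safe N: how many unpublished null studies would be
    needed to drop the combined result below one-tailed significance."""
    k = len(z_values)
    z_sum = sum(z_values)
    return (z_sum**2 / alpha_z**2) - k
```

If the fail-safe N is large relative to the number of included studies (here, 22), the combined effect is considered robust to publication bias, which is the conclusion the abstract reports.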
 



Iranian Journal of Applied Linguistics