:: Search published articles ::
Showing 2 results for Item Response Theory

Sara Jalali, Gholam Reza Kiany,
Volume 12, Issue 1 (3-2009)
Abstract

Classical test theory (CTT) and item response theory (IRT) are widely perceived as representing two very different measurement frameworks, yet few studies have empirically examined the similarities and differences in the parameters estimated under them. The purpose of this study was to examine how item statistics (i.e., item difficulty and item discrimination) and person statistics (i.e., ability estimates) behave under the two frameworks. The researchers compared the two models from both theoretical and practical perspectives: first, a theoretical comparison of the two models was carried out; then, a sample of 3000 testees taking part in the English language university entrance exam was used to compare the two models in practice. The findings showed that person statistics from CTT were comparable with those from IRT for all three IRT models. Item difficulty indices from CTT were comparable with those from all IRT models, especially the one-parameter logistic (1PL) model. Item discrimination indices from CTT were somewhat less comparable with those from IRT.
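The CTT item and person statistics the study compares can be computed directly from a 0/1 response matrix. A minimal sketch in Python, using simulated 1PL responses as a stand-in for the entrance-exam data (the sample size matches the abstract; all other names and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 0/1 response matrix: 3000 examinees x 10 items,
# generated from a 1PL (Rasch-type) model as a synthetic stand-in
n_persons, n_items = 3000, 10
ability = rng.normal(size=(n_persons, 1))
difficulty = np.linspace(-2, 2, n_items)
prob = 1 / (1 + np.exp(-(ability - difficulty)))      # 1PL response probability
responses = (rng.random((n_persons, n_items)) < prob).astype(int)

# CTT item difficulty: proportion of correct answers per item (p-value)
p_values = responses.mean(axis=0)

# CTT item discrimination: corrected point-biserial correlation between
# each item and the total score of the remaining items
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(n_items)
])

# CTT person statistic: raw total score (under the 1PL model this is a
# sufficient statistic for the IRT ability estimate, hence the
# comparability the study reports)
print(p_values.round(2))
print(discrimination.round(2))
```

Under these simulated conditions, easier items (lower 1PL difficulty) show higher p-values, mirroring the strong CTT/1PL correspondence the abstract describes.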
Volume 16, Issue 2 (9-2013)
Abstract

In response to the increasing interest in and need for a practical brief measure in language testing, this study explored the properties of an offline short-form test (OSF) versus a conventional lengthy test. From a total of 98 vocabulary items pooled from the Iranian National University Entrance Exams, 60 items were selected for the conventional test (CT). To build the OSF, we created an item bank by examining the item response theory (IRT) parameter estimates. Data for the IRT calibration included the responses of 774,258 examinees. Based on the results of the item calibration, 43 items with the highest discrimination power and minimal guessing values from different levels of ability were selected for the item bank. Then, using the responses of 253 EFL learners, we compared the measurement properties of the OSF scores with those of the CT scores in terms of score precision, score comparability, and consistency of classification decisions. The results revealed that although the OSF generally did not achieve the same level of measurement precision as the CT, it still achieved a desired level of precision while lessening the negative effects of a lengthy test. The results also signified an excellent degree of correspondence between OSF and CT scores and classification results. In all, the findings suggest that the OSF can stand as a reasonable alternative to a longer test, especially when conditions dictate that a very short test be used.
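The bank-building step described above, keeping only highly discriminating, low-guessing items while covering different levels of ability, can be sketched as a filter-and-stratify pass over 3PL parameter estimates. The parameter values, cut-offs, and stratum counts below are illustrative assumptions, not the study's actual calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3PL estimates for a 98-item pool (a: discrimination,
# b: difficulty, c: pseudo-guessing) -- illustrative values only
n_pool = 98
a = rng.uniform(0.3, 2.5, n_pool)
b = rng.uniform(-3.0, 3.0, n_pool)
c = rng.uniform(0.0, 0.35, n_pool)

# Screen out weakly discriminating or guess-prone items
# (cut-offs are assumptions for the sketch)
keep = (a >= 1.0) & (c <= 0.25)

# Spread the bank across the ability range: within each difficulty
# stratum, retain the most discriminating surviving items
strata = np.digitize(b, np.linspace(-3.0, 3.0, 7))
item_bank = []
for stratum in np.unique(strata):
    idx = np.where(keep & (strata == stratum))[0]
    best = idx[np.argsort(a[idx])[::-1][:8]]   # up to 8 items per stratum
    item_bank.extend(best.tolist())

print(len(item_bank), "items selected for the bank")
```

Stratifying by difficulty before ranking on discrimination keeps the short form informative across the ability scale, rather than concentrating all items near one ability level.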



Iranian Journal of Applied Linguistics