Fateme Nikmard, Kobra Tavassoli, Volume 24, Issue 2 (9-2021)
Abstract
To explore the characteristics of the items of the Teaching English as a Foreign Language (TEFL) MA Admission Test (henceforth TMAAT), a high-stakes test in Iran, the current research calibrated the test items with a three-parameter logistic (3PL) Item Response Theory (IRT) model. The 3PL model is the most comprehensive of the three IRT models, as it simultaneously accounts for all three effective item parameters: difficulty, discrimination, and guessing. The data were a random sample of 1000 TMAAT candidates who took the test in 2020, collected from Iran's National Organization of Educational Testing (NOET). The data were analyzed with jMetrik (Version 4.1.1), the most recent release at the time of the study. The results indicated that the TMAAT discriminated well between higher- and lower-ability candidates and discouraged guessing, but it was less acceptable with regard to item difficulty, as the items were far too difficult for the test-takers. The most important beneficiaries of the present investigation are test developers, testing experts, and policy-makers in Iran, since they are responsible for improving the quality of the items in such a high-stakes test.
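For readers unfamiliar with the model mentioned above, the standard three-parameter logistic (3PL) IRT formula can be sketched as follows. This is a minimal, generic illustration of the 3PL response function, not the calibration procedure the authors ran in jMetrik; the function name and parameter values are illustrative assumptions.

```python
import math

def three_pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model.

    theta: examinee ability
    a:     item discrimination
    b:     item difficulty
    c:     pseudo-guessing parameter (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty (theta == b), the probability
# lies halfway between the guessing floor c and 1.
print(three_pl(0.0, 1.0, 0.0, 0.25))  # 0.625
```

The guessing parameter `c` sets the floor of the curve (a candidate with very low ability still answers correctly with probability about `c`), while `a` controls how steeply probability rises around the difficulty point `b`; an item that is "far too difficult" for a group has `b` well above the group's typical `theta`, so most candidates respond near the guessing floor.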