Item parameter calibration is important for tests based on item response theory, because scoring, equating, and bias analysis of the test, as well as item selection in adaptive tests, all depend on the item parameters. As a test is continuously administered, new items must be calibrated at intervals to replace overexposed, obsolete, or flawed items in the item bank. Although it is possible to recruit examinees for the sole purpose of pretesting new items, a more cost-effective and commonly employed approach is to embed the new items in operational tests. When this approach is used in computerized adaptive tests (CAT), it is called "online calibration." Analogous to the tailored testing feature of CAT, where an optimal set of operational items is selected for each examinee to estimate that examinee's ability level more efficiently, online calibration makes it possible to select an optimal sample of examinees for each pretest item to calibrate its parameters more efficiently. During operational testing, different pretest items can be selected for each examinee. The parameter estimates of the pretest items are continually updated, and the sampling scheme is dynamically adjusted based on these updates. A few pretest item selection methods have been proposed, but such development is still in its infancy. This thesis proposes a new framework for pretest item selection in online calibration. A simulation study was conducted to compare the proposed methods with existing methods, and also to compare different estimation methods and pretest item seeding locations. Results show that the proposed methods significantly outperform existing methods under the 1PL and 2PL models. Middle and late seeding locations lead to more accurate calibration results. Among the six estimation methods compared, the Bayesian MEM method is recommended.
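To make the calibration step concrete, the following is a minimal sketch of the core idea underlying online calibration of a single pretest item under the 2PL model: the ability estimates of the examinees who saw the item are treated as known, and the item's discrimination and difficulty parameters are estimated by conditional maximum likelihood. All function names, parameter values, and sample sizes here are illustrative assumptions; this is not the thesis's proposed selection framework or the Bayesian MEM estimator.

```python
import numpy as np
from scipy.optimize import minimize

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(params, theta, responses):
    """Negative log-likelihood of one pretest item's responses,
    treating the sampled examinees' theta estimates as fixed and known."""
    a, b = params
    p = p_2pl(theta, a, b)
    eps = 1e-9  # guard against log(0)
    return -np.sum(responses * np.log(p + eps)
                   + (1 - responses) * np.log(1 - p + eps))

# Hypothetical data: simulate responses from examinees routed to this item.
rng = np.random.default_rng(0)
true_a, true_b = 1.2, -0.3              # illustrative true parameters
theta = rng.normal(size=500)            # ability estimates of sampled examinees
responses = rng.binomial(1, p_2pl(theta, true_a, true_b))

# Calibrate the pretest item by maximizing the likelihood over (a, b)
# with theta held fixed; bounds keep the estimates in a plausible range.
result = minimize(neg_log_likelihood, x0=[1.0, 0.0],
                  args=(theta, responses),
                  bounds=[(0.2, 3.0), (-4.0, 4.0)])
print("estimated (a, b):", result.x)
```

In an online calibration loop, this update would be rerun as more examinees respond to the item, and the refreshed estimates would feed back into the scheme that decides which examinees see the item next.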