Dissertation Details
Some theoretical and applied developments to support cognitive learning and adaptive testing
Wang, Shiyu
Keywords: Cognitive Diagnosis; Model Misspecification; Maximum Likelihood; Robust Estimation; Large Sample Theory; Computerized Adaptive Testing; Item Response Theory; Martingale Limit Theory; Nominal Response Model; Response Revision; Sequential Design
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/90484/WANG-DISSERTATION-2016.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】

Cognitive Diagnostic Modeling (CDM) and Computerized Adaptive Testing (CAT) are useful tools for measuring subjects' latent abilities from two different perspectives. CDM plays an important role in fine-grained assessment, where the primary purpose is to classify subjects accurately according to the skills or attributes they possess, while CAT is a useful tool for coarse-grained assessment, providing a single number that indicates a student's overall ability. This thesis discusses and solves several theoretical and applied issues related to these two areas.

The first problem we investigate concerns a nonparametric classifier in cognitive diagnosis. Latent class models for cognitive diagnosis have been developed to classify examinees into one of the 2^K attribute profiles arising from a K-dimensional vector of binary skill indicators. These models recognize that response patterns tend to deviate from the ideal responses that would arise if skills and items generated item responses through a purely deterministic conjunctive process. An alternative to employing these latent class models is to minimize the distance between observed item response patterns and ideal response patterns, in a nonparametric fashion that uses no stochastic terms for these deviations. Theorems are presented that show the consistency of this approach when the true model is one of several common latent class models for cognitive diagnosis. Consistency of classification is independent of sample size, because no model parameters need to be estimated. Simultaneous consistency for a large group of subjects can also be shown under conditions on how sample size and test length grow with one another.

The second issue we consider is still within the CDM framework, but the focus is on model misspecification. The maximum likelihood classification rule is a standard method for classifying examinee attribute profiles in cognitive diagnosis models. Its asymptotic behavior is well understood when the model is assumed to be correct, but it has not been explored for misspecified latent class models. We investigate the consequences of using a simple model when the true model is different. In general, when a CDM is misspecified as a conjunctive model, the MLE of the attribute profile is not necessarily consistent. A sufficient condition is found for the MLE to be consistent under a misspecified DINA model, where the true model can be any conjunctive model or even a compensatory model. Two examples illustrate the consistency and inconsistency of the MLE under a misspecified DINA model. A Robust DINA MLE technique is proposed to overcome the inconsistency, and theorems show that it is a consistent estimator of the attribute profile as long as the true model is conjunctive. Simulation results indicate that when the true model is conjunctive, the Robust DINA MLE and the DINA MLE based on the simulated item parameters can yield relatively good classification results even when the test is short. These findings demonstrate that, in some cases, simple models can be fitted without severely affecting classification accuracy.
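To make the nonparametric classifier from the first problem concrete, the following is a minimal sketch, assuming a conjunctive (DINA-style) ideal-response rule and Hamming distance as the discrepancy measure; the function names and the toy Q-matrix are illustrative, not taken from the thesis.

```python
import itertools
import numpy as np

def nonparametric_classify(y, Q):
    """Return the attribute profile whose ideal (conjunctive) response
    pattern is closest in Hamming distance to the observed responses y.

    y: length-J binary vector of observed item responses
    Q: (J, K) binary Q-matrix; Q[j, k] = 1 if item j requires attribute k
    """
    J, K = Q.shape
    best_alpha, best_dist = None, J + 1
    for bits in itertools.product((0, 1), repeat=K):
        alpha = np.array(bits)
        # Conjunctive ideal response: item j is answered correctly iff
        # alpha possesses every attribute that item j requires.
        eta = np.all(alpha >= Q, axis=1).astype(int)
        dist = int(np.sum(y != eta))  # Hamming distance to the observations
        if dist < best_dist:
            best_alpha, best_dist = alpha, dist
    return best_alpha, best_dist

# Toy usage: 3 items, 2 attributes.
Q = np.array([[1, 0], [0, 1], [1, 1]])
y = np.array([1, 0, 0])              # consistent with mastering attribute 1 only
print(nonparametric_classify(y, Q))  # -> (array([1, 0]), 0)
```

Because no slipping or guessing parameters are estimated, this classifier can be applied to a single examinee, which is the source of the sample-size-free consistency noted above; the brute-force search over all 2^K profiles is only practical for modest K.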
The last problem discussed and solved is a long-debated, controversial issue in CAT: because items are selected in real time and adapted to the test-taker's current ability estimate, conventional adaptive tests do not allow test-takers to review and revise their responses. The last chapter of this thesis presents a CAT design that preserves the efficiency of a conventional CAT but allows test-takers to revise their previous answers at any time during the test; the only restriction imposed is on the number of revisions to the same item. The proposed method relies on a polytomous item response theory model that describes the first response to each item as well as any subsequent revisions of it. The test-taker's ability is updated online with the maximizer of a partial likelihood function. The strong consistency and asymptotic normality of the final ability estimator are established under minimal conditions on the test-taker's revision behavior. Simulation results also indicate that the proposed design reduces measurement error and is robust against several well-known test-taking strategies.
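As a rough illustration of the online scoring step, here is a minimal sketch assuming a nominal response model (one of the thesis keywords) and a simple grid maximizer over the currently recorded response categories. The thesis itself maximizes a partial likelihood that also accounts for revision events, which this sketch does not reproduce; all parameter values and function names are made up for illustration.

```python
import numpy as np

def nrm_log_prob(theta, a, c, m):
    """Log P(category m | theta) under a nominal response model:
    P(m | theta) = exp(a[m]*theta + c[m]) / sum_l exp(a[l]*theta + c[l])."""
    z = a * theta + c
    z = z - z.max()                   # subtract the max for numerical stability
    return z[m] - np.log(np.exp(z).sum())

def interim_theta(responses, grid=np.linspace(-4.0, 4.0, 801)):
    """Grid maximizer of the log-likelihood accumulated over the response
    categories recorded so far; each entry of `responses` is a tuple
    (a, c, observed_category) for one administered item."""
    ll = np.zeros_like(grid)
    for a, c, m in responses:
        ll += np.array([nrm_log_prob(t, a, c, m) for t in grid])
    return grid[int(np.argmax(ll))]

# Toy usage: two 3-category items with made-up parameters; a revision would
# simply change an item's observed_category before the next update.
item1 = (np.array([-1.0, 0.0, 1.0]), np.array([0.5, 0.0, -0.5]), 2)
item2 = (np.array([-1.2, 0.1, 1.1]), np.array([0.3, 0.0, -0.3]), 1)
print(interim_theta([item1, item2]))
```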

【 Preview 】
Attachment List
Files | Size | Format | View
Some theoretical and applied developments to support cognitive learning and adaptive testing | 5340KB | PDF | download
Document Metrics
Downloads: 7    Views: 11