Date of Award
6-2017
Degree Name
Doctor of Philosophy
Department
Evaluation
First Advisor
Dr. E. Brooks Applegate
Second Advisor
Dr. Fernando H. Andrade Adaniya
Third Advisor
Dr. Rajib Paul
Keywords
Student misconceptions, item response theory, nested logit, Bayesian
Abstract
Student misconceptions have been studied for decades from both a curricular/instructional perspective and an assessment/test-level perspective. Numerous misconception assessment tools have been developed to measure students’ misconceptions relative to the correct content. These tools, which encompass both qualitative and quantitative assessment methods, are often used to make a variety of educational decisions, including judgments about students’ achievement levels, the effectiveness of instructional methods, and curriculum-related achievement progress.
The quantitative analysis of misconceptions has mostly relied on classical test theory methods of test construction, based on total raw scores, percentages of correct responses, and/or percentages of misconception responses. More recently, researchers have begun to apply modern test theory methods, including item response theory and cognitive diagnostic models, to the assessment of misconceptions. To date, however, no test construction model has scaled a student’s ability estimate and a student’s misconception level onto a continuous metric. A sketch of what such a model might look like follows.
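For context, a minimal sketch of a nested logit item response model of the kind named in the title, extended so that distractor choice loads on a continuous misconception factor in addition to ability. The notation here is illustrative and not taken from the dissertation: a correct response to item j follows a two-parameter logistic model, and, conditional on an incorrect response, distractor selection follows a multinomial logit:

\[
P(U_{ij}=1 \mid \theta_i) = \frac{\exp\!\left[a_j(\theta_i - b_j)\right]}{1 + \exp\!\left[a_j(\theta_i - b_j)\right]},
\]
\[
P(V_{ij}=k \mid U_{ij}=0,\ \theta_i,\ \eta_i) = \frac{\exp\!\left(\zeta_{jk} + \lambda_{jk}\theta_i + \gamma_{jk}\eta_i\right)}{\sum_{v=1}^{K_j} \exp\!\left(\zeta_{jv} + \lambda_{jv}\theta_i + \gamma_{jv}\eta_i\right)},
\]

where \(U_{ij}\) indicates a correct response by examinee i to item j, \(V_{ij}\) is the distractor selected when the response is incorrect, \(\theta_i\) is ability, and \(\eta_i\) is a continuous misconception factor. The loading \(\gamma_{jk}\) (a hypothetical parameter) links distractor k to the misconception; setting all \(\gamma_{jk}=0\) recovers a standard nested logit item response model.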
The purpose of this study was to investigate whether misconceptions, which in the latent variable framework have previously been measured only with a latent class approach, can be modeled as single- or multiple-factor continuous latent variables alongside the latent variable of interest, and whether modeling misconceptions provides additional test information. Bayesian Markov chain Monte Carlo (MCMC) methods were used to estimate model parameters. The study investigated whether test length, the number of misconceptions, and the prior distribution specification affected model convergence, parameter estimation precision, and the value added by modeling student misconceptions.
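As an illustration of the estimation approach, and not the dissertation’s actual code, the following is a minimal random-walk Metropolis sketch in Python/NumPy. It draws the two person parameters (theta, eta) for a single examinee under the model sketched above, with item parameters treated as known and standard-normal priors on both latent variables. All function and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_logit_loglik(theta, eta, resp, choice, a, b, zeta, lam, gam):
    """Log-likelihood of one examinee under the sketched nested logit model.

    resp[j]   : 1 if item j was answered correctly, else 0
    choice[j] : index of the distractor selected when resp[j] == 0
    a, b      : 2PL discrimination / difficulty, shape (J,)
    zeta, lam, gam : distractor intercepts, ability loadings, and
                     (hypothetical) misconception loadings, shape (J, K)
    """
    z = a * (theta - b)
    p_correct = 1.0 / (1.0 + np.exp(-z))
    ll = np.sum(resp * np.log(p_correct) + (1 - resp) * np.log(1.0 - p_correct))
    # Multinomial logit over distractors, conditional on an incorrect response.
    util = zeta + lam * theta + gam * eta                # shape (J, K)
    logden = np.log(np.exp(util).sum(axis=1))            # shape (J,)
    wrong = resp == 0
    ll += np.sum(util[wrong, choice[wrong]] - logden[wrong])
    return ll

def sample_person(resp, choice, items, n_iter=2000, step=0.3):
    """Random-walk Metropolis draws of (theta, eta) with N(0, 1) priors."""
    a, b, zeta, lam, gam = items
    theta, eta = 0.0, 0.0
    cur = nested_logit_loglik(theta, eta, resp, choice, a, b, zeta, lam, gam)
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        prop_t = theta + step * rng.standard_normal()
        prop_e = eta + step * rng.standard_normal()
        new = nested_logit_loglik(prop_t, prop_e, resp, choice, a, b, zeta, lam, gam)
        # Acceptance ratio includes the standard-normal log-priors.
        log_ratio = (new - 0.5 * (prop_t**2 + prop_e**2)) - (cur - 0.5 * (theta**2 + eta**2))
        if np.log(rng.uniform()) < log_ratio:
            theta, eta, cur = prop_t, prop_e, new
        draws[t] = theta, eta
    return draws

# Hypothetical usage: 5 items with 3 distractors each, item parameters fixed.
J, K = 5, 3
items = (np.ones(J), np.zeros(J),             # a_j, b_j
         np.zeros((J, K)), np.ones((J, K)),   # zeta_jk, lambda_jk
         np.full((J, K), 0.5))                # gamma_jk (hypothetical)
resp = np.array([1, 0, 1, 0, 1])
choice = np.array([0, 2, 0, 1, 0])            # used only where resp == 0
draws = sample_person(resp, choice, items)
```

A full analysis of the kind described in the abstract would sample item parameters and all examinees jointly (e.g., Metropolis-within-Gibbs) and monitor convergence, which is precisely what the study’s design conditions, test length, number of misconception factors, and prior specification, would stress.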
The findings indicated that overall estimation precision was satisfactory for both item and person parameters when a single misconception factor was modeled; however, increasing the number of misconceptions reduced estimation precision. Increasing the number of distractors measuring a misconception increased the test information related to that misconception.
Future research might consider test lengths other than the 25 and 50 items used here, as well as sample sizes other than those used in this study. The framework provided by this study could inform and guide the development of misconception instruments.
Access Setting
Dissertation-Open Access
Recommended Citation
Yildiz, Mustafa, "Modelling Student Misconceptions Using Nested Logit Item Response Models" (2017). Dissertations. 3123.
https://scholarworks.wmich.edu/dissertations/3123