Investigate Performance of Expected Maximization on the Knowledge Tracing Model

Junjie Gu, Hang Cai, Joseph E. Beck

The Knowledge Tracing model is widely used in intelligent tutoring systems. Because it estimates the student's knowledge, obtaining an accurate estimate is important. The most common approach for fitting the model is Expectation Maximization (EM), which normally stops iterating when there is minimal model improvement as measured by log-likelihood. Even when the model's predictive accuracy has converged, EM may not have arrived at the correct parameters when it stops, because convergence of the log-likelihood does not necessarily imply convergence of the parameters. In this work, we examine the model-fitting process in more depth and answer the research question: when should EM stop for the Knowledge Tracing model? While EM typically runs for approximately 7 iterations, in this work we forced it to run for 50 iterations on a simulated dataset and a real dataset. By recording the parameter values and convergence state at each iteration, we found that stopping EM early is problematic, as the parameter estimates continue to change noticeably after the log-likelihood scores have converged.
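The kind of experiment described above can be sketched as follows. This is an illustrative Baum-Welch (EM) implementation for the standard four-parameter Knowledge Tracing HMM (prior L0, learn rate T, guess G, slip S), not the authors' code; the simulation settings and function names are hypothetical.

```python
import math
import random

def simulate(n_students, n_items, L0, T, G, S, seed=0):
    """Generate response sequences from a Knowledge Tracing model with
    prior L0, learn rate T, guess G, slip S (values here are hypothetical)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_students):
        known = rng.random() < L0
        seq = []
        for _ in range(n_items):
            p_correct = (1 - S) if known else G
            seq.append(1 if rng.random() < p_correct else 0)
            if not known and rng.random() < T:
                known = True
        data.append(seq)
    return data

def em_step(data, L0, T, G, S):
    """One Baum-Welch (EM) iteration for the two-state KT HMM.
    Returns updated (L0, T, G, S) and the log-likelihood at the input parameters."""
    g1_known = 0.0                   # sum over students of gamma_1(known)
    trans_num = trans_den = 0.0      # expected learn transitions / opportunities
    guess_num = guess_den = 0.0      # expected corrects while in the unknown state
    slip_num = slip_den = 0.0        # expected errors while in the known state
    ll = 0.0
    for seq in data:
        n = len(seq)
        emit = lambda o: (G if o else 1 - G, 1 - S if o else S)
        # scaled forward pass: alphas[t] sums to 1, scales[t] = P(o_t | o_<t)
        e = emit(seq[0])
        a = [(1 - L0) * e[0], L0 * e[1]]
        c = a[0] + a[1]
        alphas, scales = [[a[0] / c, a[1] / c]], [c]
        for t in range(1, n):
            e = emit(seq[t])
            p = alphas[-1]
            a = [p[0] * (1 - T) * e[0], (p[0] * T + p[1]) * e[1]]
            c = a[0] + a[1]
            alphas.append([a[0] / c, a[1] / c])
            scales.append(c)
        ll += sum(math.log(c) for c in scales)
        # matching scaled backward pass (no forgetting: known stays known)
        betas = [None] * n
        betas[n - 1] = [1.0, 1.0]
        for t in range(n - 2, -1, -1):
            e = emit(seq[t + 1])
            b = betas[t + 1]
            betas[t] = [((1 - T) * e[0] * b[0] + T * e[1] * b[1]) / scales[t + 1],
                        e[1] * b[1] / scales[t + 1]]
        # accumulate expected counts (each gamma_t sums to 1 by construction)
        for t in range(n):
            g0 = alphas[t][0] * betas[t][0]
            g1 = alphas[t][1] * betas[t][1]
            if t == 0:
                g1_known += g1
            o = seq[t]
            guess_den += g0; guess_num += g0 * o
            slip_den += g1; slip_num += g1 * (1 - o)
            if t < n - 1:
                e = emit(seq[t + 1])
                trans_num += alphas[t][0] * T * e[1] * betas[t + 1][1] / scales[t + 1]
                trans_den += g0
    m = len(data)
    return (g1_known / m, trans_num / trans_den,
            guess_num / guess_den, slip_num / slip_den, ll)
```

Calling `em_step` in a loop for 50 iterations while logging both the log-likelihood and the four parameter values lets one check the behavior the abstract describes on simulated data: the log-likelihood typically flattens within the first several iterations while the parameter estimates can keep drifting.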

The final publication is available at Springer via https://doi.org/10.1007/978-3-319-07221-0_19.