Knowledge Tracing, which estimates how students’ knowledge evolves during interactions with educational content, is a cornerstone of Intelligent Tutoring Systems. While deep learning models achieve superior predictive performance on this task, they lack interpretability, a limitation that is particularly critical in educational contexts. We introduce gTransformer, a grounded Transformer model that bridges deep learning performance and intrinsic interpretability through representational grounding. It augments input interaction sequences with theory-based parameters and uses attention mechanisms to transform them into latent representations, which are then projected into enriched parameters that incorporate historical learning context while preserving their theoretical semantics. Validation demonstrates: (1) structural encoding around theoretical concepts (probing selectivity ΔR² > 0.5); (2) semantic alignment; and (3) functional alignment with quantified confidence. Results show that gTransformer achieves predictive performance competitive with state-of-the-art architectures while offering intrinsically interpretable predictions. The trade-off is characterised by a significant Area Under the Curve (AUC) gain over traditional theory-based models (+19.9%) at a minimal cost (3.9%) relative to non-interpretable configurations. Critically, gTransformer enables context-aware personalisation by differentiating students based on longitudinal learning trajectories rather than immediate responses, capturing patterns that traditional models cannot represent. This offers a practical path toward adaptive instruction driven by artificial intelligence that is both accurate and interpretable.
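The architecture described above, appending theory-based parameters to each interaction embedding, contextualising them with self-attention, and projecting the latent states back into the parameter space, can be sketched roughly as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the class name, dimensions, and the choice of two generic theory parameters per interaction are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class GroundedTransformerSketch(nn.Module):
    """Hypothetical sketch of representational grounding: per-interaction
    theory-based parameters (e.g. an ability/difficulty pair) are
    concatenated with the interaction embedding, contextualised by
    self-attention over the student's history, then projected back into
    the same low-dimensional parameter space so the enriched parameters
    retain their theoretical semantics."""

    def __init__(self, n_items, d_model=32, n_params=2, n_heads=4):
        super().__init__()
        # reserve n_params dims so the concatenated input has width d_model
        self.item_emb = nn.Embedding(n_items, d_model - n_params)
        self.encoder = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        # project latent states back onto the theory-parameter space
        self.to_params = nn.Linear(d_model, n_params)
        # predict response correctness from the enriched parameters only,
        # keeping the prediction pathway interpretable
        self.predict = nn.Linear(n_params, 1)

    def forward(self, items, theory_params):
        # items: (B, T) item ids; theory_params: (B, T, n_params)
        x = torch.cat([self.item_emb(items), theory_params], dim=-1)
        h = self.encoder(x)                 # (B, T, d_model) latent states
        enriched = self.to_params(h)        # (B, T, n_params) enriched params
        return torch.sigmoid(self.predict(enriched)).squeeze(-1)

model = GroundedTransformerSketch(n_items=50)
items = torch.randint(0, 50, (2, 10))          # two students, ten steps
params = torch.randn(2, 10, 2)                  # toy theory parameters
probs = model(items, params)                    # (2, 10) P(correct)
```

The key design point the sketch illustrates is that the final prediction head reads only from the low-dimensional enriched parameters, so each prediction remains attributable to quantities with a theoretical interpretation, while the attention layer supplies the historical learning context.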