The task of knowledge tracing involves tracking users’ cognitive states by modeling their exercise-answering sequences, predicting their performance over time, and thereby achieving an intelligent assessment of the users’ knowledge. Existing works mainly model the skills associated with the exercises while ignoring the rich information contained in the contexts of the exercises. Moreover, current deep learning-based methods are black boxes, which undermines the explainability of the model. In this paper, we propose an interpretable deep knowledge tracing (IDKT) framework. First, we alleviate the data sparsity problem by using the contextual information of exercises and skills to obtain more representative exercise and skill embeddings. Then the hidden knowledge states are fused with these embeddings to learn personalized attention weights, which are subsequently used to aggregate neighbor embeddings in the exercise-skill graph. Finally, given a prediction result, an inference path is selected as the explanation based on the personalized attention weights. Compared with typical existing methods, IDKT exhibits its superiority by not only achieving the best prediction performance but also providing an explanation, at the inference-path level, for the prediction results.
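The core mechanism described above, fusing a hidden knowledge state with a node embedding to score and aggregate neighbors in the exercise-skill graph, can be sketched as follows. This is a minimal illustration only, not the authors' implementation; all names (`personalized_aggregate`, `W`, the concatenation-based fusion) are assumptions for exposition.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def personalized_aggregate(h, center, neighbors, W):
    """Aggregate neighbor embeddings with attention conditioned on the user state.

    h         : (d,)   hidden knowledge state of the user
    center    : (d,)   embedding of the center exercise/skill node
    neighbors : (n, d) embeddings of its neighbors in the exercise-skill graph
    W         : (2d, d) learned projection (illustrative; the paper's exact
                fusion may differ)
    """
    query = np.concatenate([h, center]) @ W   # fuse state with node embedding
    scores = neighbors @ query                # one relevance score per neighbor
    alpha = softmax(scores)                   # personalized attention weights
    agg = alpha @ neighbors                   # weighted sum of neighbors
    return agg, alpha

# Toy usage with random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
d, n = 8, 4
h = rng.normal(size=d)
center = rng.normal(size=d)
neighbors = rng.normal(size=(n, d))
W = rng.normal(size=(2 * d, d))
agg, alpha = personalized_aggregate(h, center, neighbors, W)
```

Because the attention weights `alpha` depend on the user's hidden state, the same graph neighborhood is aggregated differently for different users; the largest weight along each hop can then be read off to form an inference path as the explanation.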