Song, Xiangyu (School of IT, Faculty of Science, Engineering and Built Environment, Deakin University)
Li, Jianxin (School of IT, Faculty of Science, Engineering and Built Environment, Deakin University)
Lei, Qi (School of Information Engineering, Chang'an University)
Zhao, Wei (School of Computer Science and Technology, Xidian University)
Chen, Yunliang (School of Computer Science, China University of Geosciences)
Mian, Ajmal (Department of Computer Science and Software Engineering, The University of Western Australia)
Abstract The goal of Knowledge Tracing (KT) is to estimate how well students have mastered a concept based on their historical learning of related exercises. The benefit of knowledge tracing is that students' learning plans can be better organised and adjusted, and interventions can be made when necessary. With the recent rise of deep learning, Deep Knowledge Tracing (DKT) has utilised Recurrent Neural Networks (RNNs) to accomplish this task with some success. Other works have attempted to introduce Graph Neural Networks (GNNs) and redefine the task accordingly to achieve significant improvements. However, these efforts suffer from at least one of the following drawbacks: (1) they pay too much attention to details of the nodes rather than to high-level semantic information; (2) they struggle to effectively establish spatial associations and complex structures of the nodes; and (3) they represent either concepts or exercises only, without integrating them. Inspired by recent advances in self-supervised learning, we propose Bi-Graph Contrastive Learning based Knowledge Tracing (Bi-CLKT) to address these limitations. Specifically, we design a two-layer contrastive learning scheme based on an "exercise-to-exercise" (E2E) relational subgraph. It involves node-level contrastive learning on subgraphs to obtain discriminative representations of exercises, and graph-level contrastive learning to obtain discriminative representations of concepts. Moreover, we design a joint contrastive loss to obtain better representations and hence better prediction performance. We also explore two variants that use an RNN and a memory-augmented neural network, respectively, as the prediction layer, to obtain better representations of exercises and concepts. Extensive experiments on four real-world datasets show that the proposed Bi-CLKT and its variants outperform the baseline models.
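The abstract does not spell out the joint contrastive loss, but two-level objectives of the kind it describes are commonly built from InfoNCE terms, one over node (exercise) embeddings and one over graph (concept) embeddings. The sketch below is a minimal NumPy illustration under that assumption; the function names, the temperature value, and the weighting factor `alpha` are hypothetical and not taken from Bi-CLKT.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of the same N items.

    z1, z2: (N, d) embedding matrices. Row i of z1 and row i of z2 form a
    positive pair; all other rows in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature           # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

def joint_contrastive_loss(ex_v1, ex_v2, cp_v1, cp_v2, alpha=0.5):
    """Weighted sum of a node-level (exercise) and a graph-level (concept) term."""
    return alpha * info_nce(ex_v1, ex_v2) + (1 - alpha) * info_nce(cp_v1, cp_v2)
```

Under this formulation, pulling the two augmented views of each exercise (and each concept subgraph) together while pushing apart the rest of the batch yields the discriminative representations the abstract refers to; `alpha` would trade off the node-level and graph-level terms.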