ExpanRL: Hierarchical Reinforcement Learning for Course Concept Expansion in MOOCs

Jifan Yu1, Chenyu Wang2, Gan Luo1, Lei Hou1, Juanzi Li1, Jie Tang1, Minlie Huang1, Zhiyuan Liu1
1Tsinghua University, 2Beihang University


With the prosperity of Massive Open Online Courses (MOOCs), education applications that automatically provide extracurricular knowledge to MOOC users have become a rising research topic. However, the diversity and rapid updates of MOOC courses make it challenging to find suitable new knowledge for students.

In this paper, we present ExpanRL, an end-to-end hierarchical reinforcement learning (HRL) model for concept expansion in MOOCs. Employing a two-level HRL mechanism of seed selection and concept expansion, ExpanRL can adjust its expansion strategy to find new concepts based on students' feedback on the expansion results.
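The two-level mechanism described above can be sketched as a toy tabular setup: a high-level policy picks a seed concept, a low-level policy picks one expansion candidate for that seed, and simulated student feedback rewards both levels. This is a minimal illustrative sketch, not the paper's actual model; all data, function names, and the incremental-average update rule are assumptions for illustration.

```python
import random

random.seed(0)

# Toy candidate pools and simulated student feedback (illustrative only,
# not from the paper's MOOC datasets).
RELATED = {
    "neural network": ["backpropagation", "perceptron", "activation function"],
    "graph": ["tree", "shortest path", "adjacency matrix"],
}
FEEDBACK = {"backpropagation": 1, "perceptron": 1, "activation function": 0,
            "tree": 1, "shortest path": 0, "adjacency matrix": 0}


def run_episode(seed_values, exp_values, epsilon=0.1):
    """One episode of the two-level loop (epsilon-greedy at both levels)."""
    # High-level action: select a seed concept.
    seeds = list(RELATED)
    if random.random() < epsilon:
        seed = random.choice(seeds)
    else:
        seed = max(seeds, key=lambda s: seed_values[s])
    # Low-level action: select one expansion candidate for that seed.
    cands = RELATED[seed]
    if random.random() < epsilon:
        cand = random.choice(cands)
    else:
        cand = max(cands, key=lambda c: exp_values[c])
    reward = FEEDBACK[cand]  # student feedback drives both levels
    # Incremental-average value update shared by both levels.
    for table, key in ((seed_values, seed), (exp_values, cand)):
        table[key] += 0.2 * (reward - table[key])
    return seed, cand, reward


seed_values = {s: 0.0 for s in RELATED}
exp_values = {c: 0.0 for c in FEEDBACK}
total = sum(run_episode(seed_values, exp_values)[2] for _ in range(200))
```

After a few hundred episodes, candidates that receive positive feedback accumulate higher values, so both levels gradually favor seeds and expansions that students judge relevant.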

Our experiments on nine novel datasets from real MOOCs show that ExpanRL achieves significant improvements over existing methods and maintains competitive performance under different settings.