Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness

Zheng Zhang1, Lizi Liao2, Xiaoyan Zhu1, Tat-Seng Chua2, Zitao Liu3, Yan Huang4, Minlie Huang1
1Tsinghua University, 2National University of Singapore, 3TAL Education Group, 4TAL AI Lab


Abstract

Most existing approaches to goal-oriented dialogue policy learning use reinforcement learning, which focuses on the target agent's policy and simply treats the opposite agent's policy as part of the environment. In real-world scenarios, however, the behavior of an opposite agent often exhibits certain patterns or is governed by hidden policies, which the target agent can infer and exploit to facilitate its own decision making. This strategy is common in human mental simulation: one first imagines a specific action and its probable consequences before actually taking it. We therefore propose an opposite-behavior-aware framework for policy learning in goal-oriented dialogue. We estimate the opposite agent's policy from its observed behavior and incorporate this estimate into the target agent's decision making by treating it as part of the target policy. We evaluate our model on both cooperative and competitive dialogue tasks, showing superior performance over state-of-the-art baselines.