The Journal of Engineering
Deep imitation reinforcement learning with expert demonstration data
Menglong Yi [1], Yujun Zeng [2], Xin Xu [3]
[1] College of Intelligence Science and Technology, National University of Defense Technology, Changsha, People's Republic of China
[2] Laboratory of Science and Technology on Integrated Logistics Support, National University of Defense Technology, Changsha, People's Republic of China
Keywords: learning agent; existing DRL algorithms; good action policy; expert demonstration data; task requirements; Mario racing game; DIRL algorithm; expert guidance; expert data; deep imitation reinforcement learning algorithm
DOI: 10.1049/joe.2018.8314
Subject classification: Engineering and Technology (General)
Source: IET
【 Abstract 】
In recent years, deep reinforcement learning (DRL) has made impressive achievements in many fields. However, existing DRL algorithms usually require a large amount of exploration to obtain a good action policy. In addition, in many complex situations, the reward function cannot be designed well enough to meet the task requirements. These two problems make it difficult for DRL to learn a good action policy within a relatively short period. The use of expert data can provide effective guidance and avoid unnecessary exploration. This study proposes a deep imitation reinforcement learning (DIRL) algorithm that uses a certain amount of expert demonstration data to speed up the training of DRL. In the proposed method, the learning agent first imitates the expert's action policy by learning from the demonstration data. After imitation learning, DRL is used to optimise the action policy in a self-learning way. Experimental comparisons on a Mario racing video game show that the proposed DIRL algorithm with expert demonstration data achieves much better performance than previous DRL algorithms without expert guidance.
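The abstract describes a two-phase pipeline: supervised imitation of expert demonstrations, followed by reinforcement-learning self-improvement. The paper's exact network architecture and DRL algorithm are not given on this page, so the following is only a minimal PyTorch sketch of that idea, assuming a discrete-action policy, a synthetic demonstration set, behaviour cloning for the imitation phase, and a simple REINFORCE update with a hypothetical `env.reset()` / `env.step()` interface standing in for the paper's DRL component.

```python
# Minimal sketch of "imitation first, reinforcement learning second".
# Dimensions, data and the environment interface are placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 8, 4              # hypothetical sizes

policy = nn.Sequential(                  # small policy network (stand-in for a deep CNN)
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# ---- Phase 1: imitation learning (behaviour cloning on expert demonstrations) ----
# demo_states / demo_actions stand in for recorded expert (state, action) pairs.
demo_states = torch.randn(256, STATE_DIM)
demo_actions = torch.randint(0, N_ACTIONS, (256,))
for _ in range(50):
    loss = F.cross_entropy(policy(demo_states), demo_actions)  # imitate expert choices
    opt.zero_grad(); loss.backward(); opt.step()

# ---- Phase 2: self-learning with a policy gradient (REINFORCE as a simple example) ----
def run_episode(env, max_steps=200):
    """Collect one episode of (log-prob, reward) pairs with the current policy."""
    log_probs, rewards = [], []
    state = env.reset()                                  # assumed env interface
    for _ in range(max_steps):
        logits = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        state, reward, done = env.step(action.item())    # assumed env interface
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        if done:
            break
    return log_probs, rewards

def reinforce_update(log_probs, rewards, gamma=0.99):
    """One policy-gradient step on normalised discounted returns of an episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the pretraining phase is that the reinforcement-learning phase starts from a policy already close to the expert's behaviour, so far less random exploration is needed before useful rewards are encountered.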
【 License 】
CC BY
【 Preview 】
Files | Size | Format | View
---|---|---|---
RO201910258831234ZK.pdf | 3411 KB | PDF | download