Journal Article Details
Energies
Reinforcement Learning–Based Energy Management Strategy for a Hybrid Electric Tracked Vehicle
Teng Liu [2], Yuan Zou [1], Dexing Liu [2], Fengchun Sun [2], Joeri Van Mierlo [2], Ming Cheng [2], Omar Hegazy [2]
[1] Collaborative Innovation Center of Electric Vehicles in Beijing, School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China;
Keywords: reinforcement learning (RL); hybrid electric tracked vehicle (HETV); Q-learning algorithm; Dyna algorithm; dynamic programming (DP); stochastic dynamic programming (SDP)
DOI: 10.3390/en8077243
Source: MDPI
【 Abstract 】

This paper presents a reinforcement learning (RL)–based energy management strategy for a hybrid electric tracked vehicle. A control-oriented model of the powertrain and vehicle dynamics is first established. Based on sampled data from the experimental driving schedule, the statistical characteristics of the power request at various velocities are captured by extracting its transition probability matrix. Two RL algorithms, Q-learning and Dyna, are then applied to generate optimal control solutions. Both algorithms are simulated on the same driving schedule, and the results are compared to clarify their respective merits and drawbacks. Although the Q-learning algorithm is faster (3 h of computation) than the Dyna algorithm (7 h), its fuel consumption is 1.7% higher. The Dyna algorithm, moreover, achieves approximately the same fuel consumption as the dynamic programming–based global optimum, while its computational cost is substantially lower than that of stochastic dynamic programming.
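For illustration only (this sketch is not taken from the paper and is not the authors' implementation), the Python snippet below shows, under assumed discretizations, how a power-request transition probability matrix might be estimated from sampled driving data and what a single tabular, cost-minimizing Q-learning update looks like. All function names, the state/action grid sizes, and the fuel-cost signal are hypothetical placeholders.

    import numpy as np

    def estimate_tpm(power_states, n_states):
        """Count transitions between discretized power-request states and
        normalize each row into a probability distribution (assumed discretization)."""
        counts = np.zeros((n_states, n_states))
        for s, s_next in zip(power_states[:-1], power_states[1:]):
            counts[s, s_next] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0  # avoid division by zero for unvisited states
        return counts / row_sums

    def q_learning_step(Q, state, action, cost, next_state, alpha=0.1, gamma=0.95):
        """One tabular Q-learning update in cost-minimizing form
        (cost stands in for instantaneous fuel consumption)."""
        td_target = cost + gamma * Q[next_state].min()
        Q[state, action] += alpha * (td_target - Q[state, action])
        return Q

    # Example usage with random placeholder data (not real driving-cycle data).
    rng = np.random.default_rng(0)
    power_trace = rng.integers(0, 10, size=1000)   # discretized power-request trace
    tpm = estimate_tpm(power_trace, n_states=10)   # 10 x 10 transition probability matrix
    Q = np.zeros((10, 5))                          # 10 power states x 5 candidate control actions
    Q = q_learning_step(Q, state=3, action=2, cost=0.8, next_state=4)

In a Dyna-style variant, the estimated transition matrix would additionally be used as a learned model to generate simulated transitions for extra planning updates between real samples, which is the distinction between the two algorithms compared in the paper.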

【 License 】

CC BY   
© 2015 by the authors; licensee MDPI, Basel, Switzerland.

【 Preview 】
Attachment list
File                      Size    Format
RO202003190009248ZK.pdf   783 KB  PDF