Journal Article Details
Energies
Comparison of Deep Reinforcement Learning and PID Controllers for Automatic Cold Shutdown Operation
Daeil Lee [1], Jonghyun Kim [1], Inseok Jang [2], Seoryong Koo [2]
[1] Department of Nuclear Engineering, Chosun University, Dong-gu, Gwangju 61452, Korea; [2] Korea Atomic Energy Research Institute, Yuseong-gu, Daejeon 34057, Korea
Keywords: nuclear power plant; autonomous operation; artificial intelligence; deep reinforcement learning; soft actor-critic algorithm
DOI: 10.3390/en15082834
Source: DOAJ
【 Abstract 】

Many industries apply traditional controllers to automate manual control. In recent years, artificial intelligence controllers based on deep-learning techniques have been suggested as advanced controllers that can achieve operational goals in many industrial domains, much as human operators do. Deep reinforcement learning (DRL) is a powerful method by which these controllers learn how to achieve their specific operational goals. Because DRL controllers learn by sampling from a target system, they can overcome the limitations of traditional controllers, such as proportional-integral-derivative (PID) controllers. In nuclear power plants (NPPs), automatic systems can manage components during full-power operation. In contrast, startup and shutdown operations are less automated and are typically performed by operators. This study suggests DRL-based and PID-based controllers for the cold shutdown operation, which is part of the startup operation. By comparing the suggested controllers, this study aims to verify that learning-based controllers can overcome the limitations of traditional controllers and achieve operational goals with minimal manipulation. First, to identify the required components, operational goals, and inputs/outputs of the operation, this study analyzed the general operating procedures for the cold shutdown operation. Then, PID- and DRL-based controllers were designed. The PID-based controller consists of PID controllers tuned using the Ziegler–Nichols rule. The DRL-based controller, built with long short-term memory (LSTM) networks, was trained with a soft actor-critic algorithm whose training time is reduced by distributed prioritized experience replay and distributed learning. The LSTM can process plant time-series data to generate control signals. Subsequently, the suggested controllers were validated on an NPP simulator during the cold shutdown operation. Finally, this study discusses the operational performance by comparing the PID- and DRL-based controllers.
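For readers unfamiliar with the baseline method, the abstract's PID tuning step can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic illustration of the classic Ziegler–Nichols rule, which maps an experimentally found ultimate gain and oscillation period to PID gains, followed by a simple discrete PID loop. All names (`ziegler_nichols_pid`, `PID`) and the example numbers are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class PIDGains:
    kp: float  # proportional gain
    ki: float  # integral gain
    kd: float  # derivative gain


def ziegler_nichols_pid(ku: float, tu: float) -> PIDGains:
    """Classic Ziegler-Nichols tuning for a full PID controller.

    ku: ultimate gain at which the closed loop oscillates steadily.
    tu: period of that sustained oscillation (seconds).
    """
    kp = 0.6 * ku
    ti = 0.5 * tu       # integral time constant
    td = 0.125 * tu     # derivative time constant
    return PIDGains(kp=kp, ki=kp / ti, kd=kp * td)


class PID:
    """Minimal discrete PID loop driven by the tuned gains."""

    def __init__(self, gains: PIDGains, dt: float):
        self.g = gains
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        """Return the control signal for one sampling interval."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.g.kp * error
                + self.g.ki * self.integral
                + self.g.kd * derivative)
```

For example, a loop that oscillates at ultimate gain 4.0 with a 2.0 s period would be tuned to kp = 2.4, ki = 2.4, kd = 0.6 under this rule. The abstract's point is that each such controller must be tuned per control loop, whereas the DRL-based controller learns its policy directly from interaction with the plant simulator.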

【 License 】

Unknown   
