Dissertation Details
A reinforcement learning framework for the automation of engineering decisions in complex systems
Ramamurthy, Arun ; Mavris, Dimitri N. ; Schrage, Daniel P. ; Kennedy, Graeme J. ; Song, Le ; Briceno, Simon I. ; Villeneuve, Frederic
University:Georgia Institute of Technology
Department:Aerospace Engineering
Keywords: Reinforcement learning; Artificial intelligence; Engineering design; Imitation learning; CAD2Vec
Others: https://smartech.gatech.edu/bitstream/1853/62626/1/RAMAMURTHY-DISSERTATION-2019.pdf
United States | English
Source: SMARTech Repository
PDF
【 Abstract 】

The process of engineering design is characterized by a series of decisions that determine the performance of the final product. Throughout the design process, engineers face choices concerning the type of model to use, appropriate parameter settings, system architecture, and so on, and these decisions are made with a desired goal in mind. The decisions themselves manifest as a planned set of actions informed by observed behavior and domain expertise. A mathematical formalization of such a design process resembles a sequential decision process. It is natural to ask whether the logic underlying these decisions can be abstracted into a computer program so that, when faced with a similar situation, an intelligent system can aid the ensuing design process. To satisfy this need, expert systems capable of incorporating design expertise and domain knowledge have been designed. The state of the art views such systems as static entities configured to operate on a predefined problem using some variant of rule-based or case-based decision-making. Their lack of adaptability in the face of evolving design environments and processes necessitates frequent updates and redesign, making their use infrequent in a typical engineering environment. Recent developments in reinforcement learning have demonstrated significant success when applied to sequential decision-making problems. The reinforcement learning setup, in which an agent learns from interactions with its environment, makes this class of methods a natural alternative to static expert systems with predefined rules. In such scenarios, expert interactions can serve as demonstrations that help train the agent.
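The idea of treating expert interactions as training demonstrations can be illustrated with a minimal imitation-learning sketch. The state and action names below are hypothetical placeholders, not the dissertation's actual encodings, and the majority-vote policy is only the simplest possible stand-in for a learned decision model:

```python
from collections import Counter, defaultdict

class DemonstrationPolicy:
    """Minimal imitation-learning sketch: recommend the action an expert
    most often took in a given design state. State and action labels are
    illustrative placeholders, not the dissertation's representation."""

    def __init__(self):
        # For each state, count how often each expert action was observed.
        self.counts = defaultdict(Counter)

    def observe(self, state, action):
        # Record one expert demonstration step (state -> action).
        self.counts[state][action] += 1

    def recommend(self, state):
        # Return the most frequently demonstrated action, or None if
        # this state was never seen in the demonstrations.
        if state not in self.counts:
            return None
        return self.counts[state].most_common(1)[0][0]

# Hypothetical demonstration log: (design state, expert decision)
demos = [
    ("initial_sizing", "select_low_fidelity_model"),
    ("initial_sizing", "select_low_fidelity_model"),
    ("initial_sizing", "select_high_fidelity_model"),
    ("detailed_design", "refine_parameters"),
]
policy = DemonstrationPolicy()
for s, a in demos:
    policy.observe(s, a)

print(policy.recommend("initial_sizing"))  # prints "select_low_fidelity_model"
```

A trained agent would replace the frequency table with a learned policy, but the interface — observe demonstrations, then recommend an action for a state — stays the same.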
Further, the exploratory nature of the learning algorithm raises the possibility that the training agent will identify decision paths that outperform the ones demonstrated by an expert, thereby enabling the system to self-learn and improve the resulting design or process. The present research implements a reinforcement learning framework that relies on the principles of life-long learning in order to assist engineering design processes. Assistance takes the form of recommendations of design decisions to the design engineer in the course of using the design environment on a given problem. The framework implements machine learning techniques such as imitation learning from human demonstrations in order to train intelligent agents. The life-long learning aspect of the framework enables the trained agents to adapt to new and incoming data, so that both newly explored portions of the design space and new demonstrations from design engineers are incorporated into the decision-making model. The exploratory nature of the reinforcement learning algorithm makes it possible to identify decision paths that are better than those demonstrated by design engineers, hence enabling the system to self-learn with the goal of improving the resulting design. An adaptive knowledge graph, representing the interactions and effects of human actions, is used to encode the sequence of states experienced by the design system, with each state representing a unique configuration of a design. The creation of the knowledge graph is automated through automation of the knowledge extraction and representation processes. The knowledge is then utilized through an imitation learning process that generates recommendations of actions to design engineers. The implemented framework is analyzed on three fronts.
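The knowledge-graph representation described above can be sketched as a directed graph whose nodes are design configurations and whose edges are engineer actions. The node and action names below are invented for illustration and the structure is a bare-bones assumption, not the dissertation's actual adaptive graph:

```python
class DesignKnowledgeGraph:
    """Sketch of a knowledge graph of design states: nodes are design
    configurations and directed edges are the actions that transform one
    configuration into another. All labels here are hypothetical."""

    def __init__(self):
        # Maps each state to {action: resulting state}.
        self.edges = {}

    def add_transition(self, state, action, next_state):
        # Record that taking `action` in `state` produced `next_state`.
        self.edges.setdefault(state, {})[action] = next_state

    def successors(self, state):
        # Actions available from `state` and the configurations they reach.
        return self.edges.get(state, {})

g = DesignKnowledgeGraph()
g.add_transition("baseline_config", "increase_wing_span", "config_A")
g.add_transition("baseline_config", "swap_propulsion", "config_B")

print(sorted(g.successors("baseline_config")))
# prints "['increase_wing_span', 'swap_propulsion']"
```

Each new demonstration or exploratory step simply adds transitions, which is what lets such a graph grow with incoming data rather than being fixed at design time.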
First, it is shown that an agent trained on the problem of UAS design is capable of replicating human-like decisions in the presence of demonstrations. Further, it is shown that if a better decision path is available, the exploratory nature of the algorithm enables the identification of designs that are better than the best demonstrated one. Finally, the robustness of the agent to changes in the set of requirements is analyzed in order to estimate the flexibility of the framework and its ability to generalize across different but similar problems. A rigorous analysis of the impact of training time, the amount of data, and the size of the problem is performed in conjunction with the first problem setup. Second, an approach to automating the extraction, representation, and utilization of knowledge from multiple sources of information is demonstrated on the problem of automating engineering systems. Finally, it is shown that the implemented framework outperforms existing state-of-the-art systems that rely on rule-based inference and case-based reasoning. The agents trained by the implemented framework are shown to be more adaptive to the problem at hand and to require less configuration than the state-of-the-art systems.

【 Preview 】
Attachment List
Files Size Format View
A reinforcement learning framework for the automation of engineering decisions in complex systems 9919KB PDF download
Document Metrics
Downloads: 44    Views: 17