Dissertation details
Manipulating state space distributions for sample-efficient imitation-learning
Author: Schroecker, Yannick Karl Daniel
Advisor: Isbell, Charles L
Committee: Chernova, Sonia; Boots, Byron; Essa, Irfan; de Freitas, Nando
University: Georgia Institute of Technology
Department: Interactive Computing
Keywords: Imitation learning; Reinforcement learning; Deep learning; Machine learning; Artificial intelligence
Full text: https://smartech.gatech.edu/bitstream/1853/62755/1/SCHROECKER-DISSERTATION-2020.pdf
United States | English
Source: SMARTech Repository
【 Abstract 】

Imitation learning has emerged as one of the most effective approaches to train agents to act intelligently in unstructured and unknown domains. On its own or in combination with reinforcement learning, it enables agents to copy the expert's behavior and to solve complex, long-term decision-making problems. However, to utilize demonstrations effectively and learn from a finite amount of data, the agent needs to develop an understanding of the environment. This thesis investigates estimators of the state-distribution gradient as a means to influence which states the agent will see and thereby guide it to imitate the expert's behavior. Furthermore, this thesis shows that approaches that reason over future states in this way can learn from sparse signals and thus provide a way to effectively program agents. Specifically, this dissertation aims to validate the following thesis statement: Exploiting inherent structure in Markov chain stationary distributions allows learning agents to reason about likely future observations, and enables robust and efficient imitation learning, providing an effective and interactive way to teach agents from minimal demonstrations.
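The structure the thesis statement refers to can be made concrete with a minimal numerical sketch, which is not the dissertation's estimator: for a small, fully known MDP, the Markov chain induced by a fixed policy has a stationary state distribution that can be computed in closed form. The 3-state, 2-action transition tensor P and policy pi below are hypothetical, chosen only for illustration.

# Illustrative sketch (assumed toy MDP, not the dissertation's method):
# compute the stationary state distribution of the Markov chain induced
# by a fixed stochastic policy when the dynamics are known.
import numpy as np

# Hypothetical dynamics: P[a, s, t] = Pr(next state t | state s, action a).
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],  # action 1
])
# Hypothetical policy: pi[s, a] = Pr(action a | state s).
pi = np.array([[0.7, 0.3], [0.5, 0.5], [0.2, 0.8]])

# State-to-state transition matrix under the policy:
# P_pi[s, t] = sum_a pi[s, a] * P[a, s, t].
P_pi = np.einsum('sa,ast->st', pi, P)

# The stationary distribution d satisfies d = d @ P_pi and sums to 1;
# take the left eigenvector of P_pi associated with eigenvalue 1.
evals, evecs = np.linalg.eig(P_pi.T)
d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
d = d / d.sum()
print("stationary state distribution:", d)

The dissertation is concerned with estimating how this distribution changes with the policy (the state-distribution gradient) from samples, without access to the true dynamics; the closed-form computation above only serves to make the stationary-distribution structure tangible.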

【 Preview 】
File list
Files | Size | Format | View
Manipulating state space distributions for sample-efficient imitation-learning | 4022KB | PDF | download
Document metrics
Downloads: 37  Views: 31