Technical Report Details
Toward Justifiable Trust in Autonomous Systems: Incorporating Human Knowledge in Autonomous Systems through Machine Learning
Oza, Nikunj C.; Das, Kamalika; Iverson, David; Janakiraman, Vijay
Keywords: AUTONOMY; MACHINE LEARNING; ARTIFICIAL INTELLIGENCE; HUMAN-COMPUTER INTERFACE; ANOMALIES; DETECTION; FLIGHT MANAGEMENT SYSTEMS; FLIGHT PATHS
RP-ID: ARC-E-DAA-TN60106
United States | English
Source: NASA Technical Reports Server
Abstract

Trust in autonomous systems is largely about humans trusting the decisions those systems make. This trust can be increased through learning from domain experts. In particular, autonomous systems can learn offline from past mission operations before conducting any operations of their own. Additionally, autonomous systems can learn online by obtaining human feedback during operations. We will discuss several classes of machine learning methods and our application of them to autonomous systems. The first class of methods is anomaly detection, which uses operations data to identify examples of anomalous operations. The second class is inverse reinforcement learning, also known as apprenticeship learning, which takes past operations data as input and yields a controller able to duplicate the operations described by the data. The third class is active learning, which identifies the examples on which the model is most uncertain and requests domain expert feedback on them.
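The abstract describes these method classes only at a high level. As a generic illustration of the third class (uncertainty-based active learning), not the authors' implementation, the following Python sketch fits a classifier on a small labeled set, scores an unlabeled pool, and selects the most uncertain examples for expert labeling. The data, classifier choice, and batch size are all hypothetical.

```python
# Minimal uncertainty-sampling sketch (illustrative only; not the report's method).
# Assumes scikit-learn and NumPy; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical operations data: a small expert-labeled set and a larger unlabeled pool.
X_labeled = rng.normal(size=(50, 4))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(1000, 4))

model = LogisticRegression()
for round_idx in range(5):
    model.fit(X_labeled, y_labeled)

    # Uncertainty = how close the predicted probability is to 0.5.
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = 1.0 - np.abs(proba - 0.5) * 2.0

    # Pick the most uncertain examples to route to a domain expert.
    query_idx = np.argsort(-uncertainty)[:10]

    # Stand-in for expert feedback; in practice a human would supply these labels.
    y_new = (X_pool[query_idx, 0] > 0).astype(int)

    X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
    y_labeled = np.concatenate([y_labeled, y_new])
    X_pool = np.delete(X_pool, query_idx, axis=0)
```

Each round retrains on the enlarged labeled set, so expert effort is concentrated on the cases the model finds hardest rather than spread uniformly over the pool.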

Preview
Attachment List
File: 20180007131.pdf | Size: 4115 KB | Format: PDF