Dissertation Details
Explaining model decisions and fixing them via human feedback
Author: Ramasamy Selvaraju, Ramprasaath. Committee: Parikh, Devi; Batra, Dhruv; Hoffman, Judy; Lee, Stefan; Kim, Been
University:Georgia Institute of Technology
Department:Computer Science
Keywords: Visual explanations; Interpretability; Computer vision; Vision and language; Deep learning; Grad-CAM
Others: https://smartech.gatech.edu/bitstream/1853/62867/1/RAMASAMYSELVARAJU-DISSERTATION-2020.pdf
United States | English
Source: SMARTech Repository
PDF
【 Abstract 】

Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models achieve superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly, giving no warning or explanation. Towards the goal of making deep networks interpretable, trustworthy, and unbiased, this dissertation presents my work on building algorithms that provide explanations for decisions emanating from deep networks in order to: (1) understand/interpret why the model did what it did, (2) enable knowledge transfer between humans and AI, (3) correct unwanted biases learned by AI models, and (4) encourage human-like reasoning in AI.
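As a rough illustration of the kind of explanation algorithm the abstract refers to, the Grad-CAM technique named in the keywords can be sketched in a few lines: gradients of a class score with respect to a convolutional layer's feature maps are global-average-pooled into importance weights, the feature maps are combined with those weights, and a ReLU keeps only positively contributing regions. This is a minimal NumPy sketch with synthetic inputs, not code from the dissertation; the function name and toy shapes are illustrative assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM localization map from one conv layer.

    activations: (K, H, W) forward feature maps A^k of the chosen layer.
    gradients:   (K, H, W) gradients dY_c/dA^k of the class-c score
                 with respect to those feature maps (from backprop).
    Returns a nonnegative (H, W) heatmap.
    """
    # alpha_k: global-average-pool each gradient map into a scalar weight
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted combination of the forward activation maps
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the class
    return np.maximum(cam, 0.0)

# Toy example with synthetic maps (in real use, these come from hooks
# on a CNN's last convolutional layer)
acts = np.random.rand(8, 7, 7)
grads = np.random.randn(8, 7, 7)
heatmap = grad_cam(acts, grads)
```

In practice the heatmap is upsampled to the input image's resolution and overlaid on it to visualize which regions drove the prediction.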

【 Preview 】
Attachments
File | Size | Format | View
Explaining model decisions and fixing them via human feedback | 62312KB | PDF | download
Document metrics
Downloads: 34; Views: 12