Frontiers in Psychology
Navigational Behavior of Humans and Deep Reinforcement Learning Agents
Rachel W. Kallen [1], Michael J. Richardson [1], Gaurav Patil [1], Hamish F. Stening [2], Lillian M. Rigoli [2]
[1] Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia; [2] School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
Keywords: task-dynamical model; dynamical perceptual-motor primitives; deep reinforcement learning; navigational behavior; obstacle avoidance; route selection
DOI: 10.3389/fpsyg.2021.725932
Source: DOAJ
Abstract
Rapid advances in the field of Deep Reinforcement Learning (DRL) over the past several years have led to artificial agents (AAs) capable of producing behavior that meets or exceeds human-level performance in a wide variety of tasks. However, research on DRL frequently lacks adequate discussion of the low-level dynamics of the behavior itself, focusing instead on meta-level or global-level performance metrics. As a result, the current literature lacks perspective on the qualitative nature of AA behavior, leaving questions about the spatiotemporal patterning of that behavior largely unanswered. The current study explored the degree to which the navigation and route-selection trajectories of DRL agents (i.e., AAs trained using DRL) through simple obstacle-ridden virtual environments were equivalent to, or differed from, those produced by human agents. A second, related aim was to determine whether a task-dynamical model of human route navigation could not only capture both human and DRL navigational behavior, but also help identify whether any observed differences in the navigational trajectories of humans and DRL agents were a function of differences in the underlying dynamical environmental couplings.
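The task-dynamical model referenced here belongs to the family of behavioral-dynamics steering models in the tradition of Fajen and Warren, in which an agent's heading is governed by attractor dynamics toward the goal and repeller dynamics away from obstacles. The sketch below illustrates that general form only; the function names, parameter values, and the single-obstacle scenario are illustrative assumptions for exposition and are not taken from the article itself.

```python
import numpy as np

# Minimal sketch of a behavioral-dynamics (Fajen-Warren-style) steering model,
# a common basis for task-dynamical accounts of route selection and obstacle
# avoidance. Parameter names and values are illustrative assumptions only.

def wrap(a):
    """Wrap an angle to the interval [-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def steering_accel(phi, phi_dot, psi_g, d_g, psi_o, d_o,
                   b=3.25, k_g=7.5, c1=0.4, c2=0.4,
                   k_o=198.0, c3=6.5, c4=0.8):
    """Angular acceleration of heading phi, given a goal at bearing psi_g
    (distance d_g) and one obstacle at bearing psi_o (distance d_o)."""
    damping = -b * phi_dot
    goal_attraction = -k_g * wrap(phi - psi_g) * (np.exp(-c1 * d_g) + c2)
    obstacle_repulsion = (k_o * wrap(phi - psi_o)
                          * np.exp(-c3 * abs(wrap(phi - psi_o)))
                          * np.exp(-c4 * d_o))
    return damping + goal_attraction + obstacle_repulsion

def simulate(start, goal, obstacle, speed=1.0, dt=0.01, steps=2000):
    """Euler-integrate a single constant-speed trajectory through the scene."""
    pos = np.asarray(start, dtype=float).copy()
    goal = np.asarray(goal, dtype=float)
    obstacle = np.asarray(obstacle, dtype=float)
    phi, phi_dot = 0.0, 0.0          # heading (rad) and its rate of change
    path = [pos.copy()]
    for _ in range(steps):
        to_goal, to_obs = goal - pos, obstacle - pos
        psi_g = np.arctan2(to_goal[1], to_goal[0])
        psi_o = np.arctan2(to_obs[1], to_obs[0])
        phi_ddot = steering_accel(phi, phi_dot,
                                  psi_g, np.linalg.norm(to_goal),
                                  psi_o, np.linalg.norm(to_obs))
        phi_dot += phi_ddot * dt
        phi += phi_dot * dt
        pos = pos + speed * dt * np.array([np.cos(phi), np.sin(phi)])
        path.append(pos.copy())
        if np.linalg.norm(goal - pos) < 0.1:   # stop once the goal is reached
            break
    return np.array(path)

# Example: walk toward a goal 10 m ahead with one obstacle slightly off-path.
trajectory = simulate(start=(0.0, 0.0), goal=(10.0, 0.0), obstacle=(5.0, 0.3))
```

In models of this kind, route selection emerges from the competition between the goal and obstacle terms: small changes in obstacle bearing or distance can flip which side of the obstacle a simulated trajectory passes, which is the kind of low-level spatiotemporal patterning the study compares across human and DRL agents.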
License
Unknown