The objective of this research is to describe both human-robot and inter-robot interactions and to analyze the behavior of the resulting multi-agent systems, drawing comparisons to psychological studies of human team behavior. In particular, we examine the effects of trust, energy, and manipulability on these interactions. We first address the problem of modeling trust evolution and describing how it affects the states of agents in a system, whether human or robot. We introduce two types of trust models, self-centered and team-oriented, and show, through simulations and theoretical analyses, under what initial trust conditions these systems achieve their objectives. We show our models to be psychologically consistent in that they exhibit group polarization, belief polarization, and a positive trust-performance correlation. In the second part of this work, we examine the effect of energy on inter-robot interactions by solving an energy-constrained coordination problem in which robots with differing initial battery levels must determine where and when to meet so that they do so in the least amount of time. This is formulated as a constrained optimization problem whose constraints arise from solving for a single agent's optimal control input. Lastly, we address the effect of manipulability on human-robot interactions through a haptic human-swarm interaction user study. Manipulability, a notion describing how effectively a leader robot controls the follower robots, is rendered as force feedback on a haptic joystick that a human operator uses to steer a swarm of robots. Ten subjects complete an experiment in which they move the group of robots through a series of waypoints, and we investigate different mappings between manipulability and the haptic feedback force.
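To make the notion of trust evolution concrete, the following is a minimal sketch of a generic discrete-time trust update of the kind such models often use, in which an agent moves its trust in a teammate toward that teammate's observed performance. The function name, the gain `rate`, and the update rule itself are illustrative assumptions, not the thesis's actual self-centered or team-oriented models.

```python
# Illustrative sketch only: a generic first-order trust update, where an
# agent's trust in a teammate drifts toward the teammate's observed
# task performance. All names and parameters here are assumptions.

def update_trust(trust, observed_performance, rate=0.1):
    """One discrete-time trust update step.

    trust and observed_performance lie in [0, 1]; rate is a
    hypothetical adaptation gain controlling how fast trust changes.
    """
    trust += rate * (observed_performance - trust)
    return min(max(trust, 0.0), 1.0)  # keep trust in [0, 1]

# Repeated good performance drives trust upward from a neutral start:
t = 0.5
for _ in range(20):
    t = update_trust(t, observed_performance=0.9)
```

Under this rule, trust converges geometrically to the observed performance level; initial trust conditions like the starting value `0.5` are exactly the kind of quantity whose effect on team objectives the first part of the thesis studies.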
Psychologically consistent coordinated control of multi-agent teams