Multiagent learning (MAL) is the study of agents learning in the presence of other agents who are also learning. As a field, MAL is built upon work done in both artificial intelligence and game theory. Game theory has mostly focused on proving that certain theoretical properties hold for a wide class of learning situations while ignoring computational issues, whereas artificial intelligence has mainly focused on designing practical multiagent learning algorithms for small classes of games. This thesis is concerned with finding a balance between the game-theory and artificial-intelligence approaches. We introduce a new learning algorithm, FRAME, which provably converges to the set of Nash equilibria in self-play, while consulting experts that can greatly improve the rate of convergence to that set. Even if the experts are poorly suited to the learning problem, or are outright hostile, FRAME still provably converges. Our second contribution takes this idea further by allowing agents to consult multiple experts and to adapt dynamically so that the expert best suited to the given game is consulted. The result is a flexible algorithm capable of dealing with new and unknown games. Experimental results validate our approach.
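The abstract does not specify how FRAME weighs its experts, so the following is only an illustrative sketch of one standard way an agent might adaptively favor the best expert for a game: a multiplicative-weights (Hedge-style) update. The function names, reward values, and learning rate below are all assumptions for illustration, not details taken from the thesis.

```python
import math

def update_weights(weights, rewards, eta=0.5):
    """Multiplicative-weights update (illustrative, not FRAME's actual rule):
    each expert's weight is scaled by exp(eta * reward), then the weights
    are renormalized to sum to 1."""
    scaled = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    total = sum(scaled)
    return [w / total for w in scaled]

def best_expert(weights):
    """Index of the currently most-trusted expert."""
    return max(range(len(weights)), key=lambda i: weights[i])

# Three hypothetical experts; in this toy game, expert 1 consistently
# earns the highest reward, so its weight comes to dominate.
weights = [1.0, 1.0, 1.0]
for _ in range(20):
    weights = update_weights(weights, rewards=[0.2, 0.9, 0.4])

print(best_expert(weights))  # the consistently best expert wins out
```

Under this kind of scheme, an expert that is hostile or ill-suited to the game simply sees its weight decay toward zero, which is one intuition for why consulting bad experts need not prevent convergence.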
Improving Convergence Rates in Multiagent Learning Through Experts and Adaptive Consultation