Journal Article Details
Frontiers in Computational Neuroscience
Learning Modular Policies for Robotics
Andras Kupcsik1  Alexandros Paraschos2  Jan Peters2  Christian Daniel2  Gerhard Neumann2
[1] National University of Singapore; [2] TU Darmstadt
Keywords: Robotics; modularity; motor control; movement primitives; Hierarchical Reinforcement Learning; Policy Search
DOI: 10.3389/fncom.2014.00062
Source: DOAJ
【 Abstract 】

A promising idea for scaling robot learning to more complex tasks is to use elemental behaviors as building
blocks to compose more complex behavior. Ideally, such building blocks are used in combination with a learning algorithm
that is able to learn to select, adapt, sequence, and co-activate them. While there has been a lot of work on approaches that support one of these
requirements, no learning algorithm exists that unifies all of these properties in one framework.
In this paper we present our work on a unified approach for learning such a modular control architecture. We introduce new policy search algorithms
that are based on information-theoretic principles and are able to learn to select, adapt, and sequence the building blocks. Furthermore, we develop
a new representation for the individual building blocks that supports co-activation and provides principled ways of adapting the movement. Finally, we summarize
our experiments on learning modular control architectures in simulation and with real robots.
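
As a purely illustrative sketch (not the authors' actual algorithm), an information-theoretic policy search update of the kind referenced in the abstract can be approximated by an episodic, REPS-style weighted maximum-likelihood step: sampled primitive parameters are re-weighted by an exponential transformation of their returns, where in the full method the temperature is chosen from a bound on the relative entropy between successive policies. The function names and the fixed temperature `eta` below are assumptions made for this sketch.

```python
import numpy as np

def reps_weights(returns, eta=1.0):
    # Exponential re-weighting of sampled episodes (REPS-style).
    # In the full algorithm eta would be optimized from a bound on the
    # relative entropy (KL divergence) between successive policies;
    # here it is a fixed constant for illustration.
    advantages = returns - np.max(returns)   # shift for numerical stability
    w = np.exp(advantages / eta)
    return w / np.sum(w)

def weighted_policy_update(thetas, returns, eta=1.0):
    # Fit a Gaussian "upper-level" policy over primitive parameters theta
    # by weighted maximum likelihood.
    w = reps_weights(returns, eta)
    mean = w @ thetas
    centered = thetas - mean
    cov = (centered * w[:, None]).T @ centered
    return mean, cov

# Toy usage: sample parameter vectors, score them, update the search distribution.
rng = np.random.default_rng(0)
thetas = rng.normal(size=(50, 5))               # 50 sampled 5-D parameter vectors
returns = -np.sum((thetas - 0.5) ** 2, axis=1)  # hypothetical reward landscape
mean, cov = weighted_policy_update(thetas, returns, eta=0.5)
```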
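Co-activation of building blocks can likewise be illustrated under the assumption that each primitive defines a Gaussian distribution over trajectory parameters (as in probabilistic movement primitives); combining two active primitives then amounts to taking the product of their Gaussians, i.e. a precision-weighted blend. This is a minimal sketch under that assumption, not the paper's exact formulation.

```python
import numpy as np

def coactivate(mu_a, cov_a, mu_b, cov_b):
    # Product of two Gaussian primitives: precision-weighted combination
    # of their means and covariances (up to normalization).
    prec_a = np.linalg.inv(cov_a)
    prec_b = np.linalg.inv(cov_b)
    cov_c = np.linalg.inv(prec_a + prec_b)
    mu_c = cov_c @ (prec_a @ mu_a + prec_b @ mu_b)
    return mu_c, cov_c

# Toy usage with two 2-D primitives.
mu_c, cov_c = coactivate(np.array([0.0, 1.0]), np.eye(2),
                         np.array([1.0, 0.0]), 0.5 * np.eye(2))
```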

【 License 】

Unknown   

  Document Metrics
  Downloads: 0   Views: 0